Re: [vdsm] [Users] vdsm unresponsive with python exception
by Dan Kenigsberg
On Thu, Apr 11, 2013 at 03:51:07PM -0500, Tony Feldmann wrote:
> That was the issue. Found out yesterday that vdsm.log was somehow changed
> to root:root. Just now got a chance to put it back on the mailing list.
> How did the ownership of that file get changed? When the issue occurred I
> am certain there was no one on the system.
http://gerrit.ovirt.org/#/c/12940/ (Separating supervdsm log to
supervdsm.log file) solves the issue, unfortunately only on the master
branch of vdsm.
I think that this is a nasty issue that has to be backported to the
ovirt-3.2 branch as well, and merits inclusion in ovirt-3.2.2.
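Until the backport lands, a possible interim workaround (an untested
sketch) is to restore the expected ownership by hand and restart vdsm:

  ls -l /var/log/vdsm/vdsm.log           # should show vdsm:kvm
  chown vdsm:kvm /var/log/vdsm/vdsm.log  # as root, if it shows root:root
  service vdsmd restart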
Regards,
Dan.
>
>
> On Thu, Apr 11, 2013 at 2:15 PM, Joop <jvdwege(a)xs4all.nl> wrote:
>
> > Dan Kenigsberg wrote:
> >
> >> On Wed, Apr 10, 2013 at 08:59:01AM -0500, Tony Feldmann wrote:
> >>
> >>
> >>> I am having a strange issue in my ovirt cluster. I have 2 hosts, 1
> >>> running
> >>> engine and added as a host and one other system added as a host. Both
> >>> systems are running gluster across local disks for shared storage.
> >>> Everything was working fine until last night, when my system that is
> >>> also running the engine went unresponsive in the admin page. All vms
> >>> that were on the host were still running. I shut down those vms
> >>> from within the guest os as I was not able to do anything to the vm with
> >>> the host in unresponsive state. After getting the vms off and rebooting
> >>> the host, the vdsmd service says that it is running, but it continually
> >>> restarts the vdsm process and dumps out these messages: detected
> >>> unhandled Python exception in '/usr/share/vdsm/vdsm'. All services
> >>> say they are up and running but the host stays in unresponsive state
> >>> and the vdsm process keeps respawning. There is also no data in the
> >>> vdsm.log. Can anyone shed any light on this for me?
> >>>
> >>>
> >>
> >> vdsm-devel(a)fedorahosted.org may be a better place to ask vdsm-specific
> >> questions.
> >>
> >> Could you log into the non-operational host as root, and stop the vdsm
> >> service.
> >>
> >> Then become the vdsm user with
> >>
> >> su -s /bin/bash - vdsm
> >>
> >> and run /usr/share/vdsm/vdsm manually. Do you see anything in
> >> particular?
> >>
> >>
> >>
> > Please have a look at the permissions/owner of /var/log/vdsm/vdsm.log.
> > Should be vdsm:kvm and not root:root
> >
> > Joop
> >
> >
getVdsStats - no network sessions/statistics
by derez@redhat.com
Hi,
I've recently updated to the latest VDSM code (commit c506391442bc996031aa871a4dfd61368b5a30db).
Ever since, whenever I run a VM, my host moves to Non-Operational state in oVirt.
* getVdsStats output [1]: doesn't include any network sessions.
* Error from oVirt engine log: - "Host '120' moved to Non-Operational state because interface/s 'em1,'
are down which needed by network/s 'ovirtmgmt, ' in the current cluster"
* Reverting the latest commit (http://gerrit.ovirt.org/#/c/13838)
seems to solve the issue.
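For anyone trying to reproduce this, a quick way to confirm it (a
sketch, assuming vdsClient is usable on the host as in [1] below) is to
check whether a network section appears in the stats at all:

  # prints nothing on an affected build, since the network section is
  # missing from getVdsStats output
  vdsClient 0 getVdsStats | grep -i network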
Thanks,
Daniel
[1]
vdsClient 0 getVdsStats:
cpuIdle = '100.00'
cpuSys = '0.00'
cpuSysVdsmd = '0.00'
cpuUser = '0.00'
cpuUserVdsmd = '0.00'
dateTime = '2013-04-14T08:27:01 GMT'
elapsedTime = '1465'
generationID = 'c6c1a126-48d6-44a1-8bd7-80648436a615'
ksmCpu = 0
ksmPages = 100
ksmState = False
memAvailable = 11064
memCommitted = 577
memShared = 0
momStatus = 'active'
netConfigDirty = 'False'
rxRate = '0.00'
statsAge = '1463.61'
storageDomains = {'064a6bef-39f4-4bb4-a811-b8206fa7dd1c': {'code': 0,
'delay': '0.0126550197601',
'lastCheck': '3.9',
'valid': True},
'14a320f9-06c8-41b9-bb53-4fc3dc5717b0': {'code': 0,
'delay': '0.0133948326111',
'lastCheck': '3.9',
'valid': True},
'd2066c79-a6f9-41e8-8ac8-642f84e415c0': {'code': 0,
'delay': '0.0102300643921',
'lastCheck': '7.2',
'valid': True}}
swapFree = 7952
swapTotal = 7983
txRate = '0.00'
vmActive = 1
vmCount = 1
vmMigrating = 0
Fwd: oVirt storage is down and doesn't come up
by Limor Gavish
Hi,
For some reason, without doing anything, all the storage domains went
down, and restarting VDSM or the entire machine does not bring them up.
I am not using lvm.
The following errors appear several times in vdsm.log (full logs are
attached):
Thread-22::WARNING::2013-04-12
19:00:08,597::lvm::378::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] ['
Volume group "1083422e-a5db-41b6-b667-b9ef1ef244f0" not found']
Thread-22::DEBUG::2013-04-12
19:00:08,598::lvm::402::OperationMutex::(_reloadvgs) Operation 'lvm reload
operation' released the operation mutex
Thread-22::DEBUG::2013-04-12
19:00:08,681::resourceManager::615::ResourceManager::(releaseResource)
Trying to release resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3'
Thread-22::DEBUG::2013-04-12
19:00:08,681::resourceManager::634::ResourceManager::(releaseResource)
Released resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' (0 active
users)
Thread-22::DEBUG::2013-04-12
19:00:08,681::resourceManager::640::ResourceManager::(releaseResource)
Resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' is free, finding
out if anyone is waiting for it.
Thread-22::DEBUG::2013-04-12
19:00:08,682::resourceManager::648::ResourceManager::(releaseResource) No
one is waiting for resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3',
Clearing records.
Thread-22::ERROR::2013-04-12
19:00:08,682::task::850::TaskManager.Task::(_setError)
Task=`e35a22ac-771a-4916-851f-2fe9d60a0ae6`::Unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 857, in _run
return fn(*args, **kargs)
File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/storage/hsm.py", line 939, in connectStoragePool
masterVersion, options)
File "/usr/share/vdsm/storage/hsm.py", line 986, in _connectStoragePool
res = pool.connect(hostID, scsiKey, msdUUID, masterVersion)
File "/usr/share/vdsm/storage/sp.py", line 695, in connect
self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
File "/usr/share/vdsm/storage/sp.py", line 1232, in __rebuild
masterVersion=masterVersion)
File "/usr/share/vdsm/storage/sp.py", line 1576, in getMasterDomain
raise se.StoragePoolMasterNotFound(self.spUUID, msdUUID)
StoragePoolMasterNotFound: Cannot find master domain:
'spUUID=5849b030-626e-47cb-ad90-3ce782d831b3,
msdUUID=1083422e-a5db-41b6-b667-b9ef1ef244f0'
Thread-22::DEBUG::2013-04-12
19:00:08,685::task::869::TaskManager.Task::(_run)
Task=`e35a22ac-771a-4916-851f-2fe9d60a0ae6`::Task._run:
e35a22ac-771a-4916-851f-2fe9d60a0ae6
('5849b030-626e-47cb-ad90-3ce782d831b3', 1,
'5849b030-626e-47cb-ad90-3ce782d831b3',
'1083422e-a5db-41b6-b667-b9ef1ef244f0', 3942) {} failed - stopping task
Thread-22::DEBUG::2013-04-12
19:00:08,685::task::1194::TaskManager.Task::(stop)
Task=`e35a22ac-771a-4916-851f-2fe9d60a0ae6`::stopping in state preparing
(force False)
Thread-22::DEBUG::2013-04-12
19:00:08,685::task::974::TaskManager.Task::(_decref)
Task=`e35a22ac-771a-4916-851f-2fe9d60a0ae6`::ref 1 aborting True
Thread-22::INFO::2013-04-12
19:00:08,686::task::1151::TaskManager.Task::(prepare)
Task=`e35a22ac-771a-4916-851f-2fe9d60a0ae6`::aborting: Task is aborted:
'Cannot find master domain' - code 304
[wil@bufferoverflow ~]$ sudo vgs --noheadings --units b --nosuffix
--separator \| -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free
No volume groups found
[wil@bufferoverflow ~]$ mount
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs
(rw,nosuid,size=8131256k,nr_inodes=2032814,mode=755)
securityfs on /sys/kernel/security type securityfs
(rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts
(rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup
(rw,nosuid,nodev,noexec,relatime,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
cgroup on /sys/fs/cgroup/cpuset type cgroup
(rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup
(rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
cgroup on /sys/fs/cgroup/memory type cgroup
(rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup
(rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup
(rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls type cgroup
(rw,nosuid,nodev,noexec,relatime,net_cls)
cgroup on /sys/fs/cgroup/blkio type cgroup
(rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup
(rw,nosuid,nodev,noexec,relatime,perf_event)
/dev/sda3 on / type ext4 (rw,relatime,data=ordered)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
sunrpc on /proc/fs/nfsd type nfsd (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs
(rw,relatime,fd=34,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
mqueue on /dev/mqueue type mqueue (rw,relatime)
tmpfs on /tmp type tmpfs (rw)
configfs on /sys/kernel/config type configfs (rw,relatime)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
/dev/sda5 on /home type ext4 (rw,relatime,data=ordered)
/dev/sda1 on /boot type ext4 (rw,relatime,data=ordered)
kernelpanic.home:/home/KP_Data_Domain on
/rhev/data-center/mnt/kernelpanic.home:_home_KP__Data__Domain type nfs
(rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=10.100.101.100,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=10.100.101.100)
bufferoverflow.home:/home/BO_ISO_Domain on
/rhev/data-center/mnt/bufferoverflow.home:_home_BO__ISO__Domain type nfs
(rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=10.100.101.108,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=10.100.101.108)
[wil@bufferoverflow ~]$ sudo find / -name 5849b030-626e-47cb-ad90-3ce782d831b3
/run/vdsm/pools/5849b030-626e-47cb-ad90-3ce782d831b3
[wil@bufferoverflow ~]$ sudo find / -name 1083422e-a5db-41b6-b667-b9ef1ef244f0
/home/BO_Ovirt_Storage/1083422e-a5db-41b6-b667-b9ef1ef244f0
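A couple of further checks that might narrow this down (a sketch only;
the paths are taken from the output above):

  # is the master domain visible where vdsm expects mounted domains?
  ls /rhev/data-center/mnt/*/1083422e-a5db-41b6-b667-b9ef1ef244f0
  # the domain metadata found under /home, if readable
  cat /home/BO_Ovirt_Storage/1083422e-a5db-41b6-b667-b9ef1ef244f0/dom_md/metadata

Since the mount output above shows the KP_Data_Domain and BO_ISO_Domain
exports but nothing for /home/BO_Ovirt_Storage, it may simply be that
the master domain is never mounted at pool connect time.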
I would really appreciate any help,
Limor Gavish
AUTO: Yih-Herng Chuang/Seattle/IBM is out of the office, (returning 04/15/2013)
by Yih-Herng Chuang
I am out of the office until 04/15/2013.
For FSM security tech issues, please contact team lead Ronald Long. For
others, please contact my manager John Aguiar. I will respond to your
email when I am back at work.
Note: This is an automated response to your message "vdsm-devel Digest,
Vol 23, Issue 10" sent on 04/12/2013 6:00:04.
This is the only notification you will receive while this person is away.
Re: [vdsm] [Engine-devel] ovirt-host-deploy and multiple bridges
by sabose@redhat.com
[Adding vdsm-devel]
On 04/09/2013 03:40 PM, Sahina Bose wrote:
> Hi all,
>
> I'm testing the bootstrapping of a host without reboot on Fedora 18.
> After the host's bootstrap, ifconfig output shows this:
>
> ovirtmgmt: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
> inet 10.70.37.219 netmask 255.255.254.0 broadcast 10.70.37.255
> <snipped>
>
> virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
> inet 192.168.122.1 netmask 255.255.255.0 broadcast
> 192.168.122.255
> <snipped>
>
> Running the glusterHostsList vdsm verb returns the ip address
> 192.168.122.1, whereas my host has been added with ip address 10.70.37.219.
>
> If I reboot the host, the virbr0 bridge is removed, and there's no issue.
>
> The vdsm verb glusterHostsList returns the ipAddress of the host plus
> the output of gluster peer probe. This is needed because a periodic
> sync job needs to make sure that the hosts added in the engine are in
> sync with the gluster cli (hosts could also be added/removed from the
> gluster cli).
>
> How can we make sure glusterHostsList picks the correct ipAddress?
> Reading the inetinfo based on bridge has been vetoed as we are doing
> away with bridges.
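A hedged idea (a sketch only, not an existing vdsm API): let the
routing table pick the source address used to reach a known peer on
the management network, instead of taking the first bridge address:

  # 10.70.37.1 stands in for any peer/gateway on the management
  # network (a hypothetical address); 'src' is the address this host
  # would use to reach it
  ip route get 10.70.37.1 | sed -n 's/.*src \([0-9.]*\).*/\1/p'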
>
> It would also work if virbr0 were updated in the vds_interfaces table.
> Since this is not happening either, we have an issue.
>
> thanks
> sahina
>
>
>
Re: [vdsm] [Users] vdsm unresponsive with python exception
by Dan Kenigsberg
On Wed, Apr 10, 2013 at 08:59:01AM -0500, Tony Feldmann wrote:
> I am having a strange issue in my ovirt cluster. I have 2 hosts, 1 running
> engine and added as a host and one other system added as a host. Both
> systems are running gluster across local disks for shared storage.
> Everything was working fine until last night, when my system that is also
> running the engine went unresponsive in the admin page. All vms that were
> on the host were still running. I shut down those vms
> from within the guest os as I was not able to do anything to the vm with
> the host in unresponsive state. After getting the vms off and rebooting
> the host, the vdsmd service says that it is running, but it continually
> restarts the vdsm process and dumps out these messages: detected unhandled
> Python exception in '/usr/share/vdsm/vdsm'. All services say they are up
> and running but the host stays in unresponsive state and the vdsm process
> keeps respawning. There is also no data in the vdsm.log. Can anyone shed
> any light on this for me?
vdsm-devel(a)fedorahosted.org may be a better place to ask vdsm-specific
questions.
Could you log into the non-operational host as root, and stop the vdsm
service.
Then become the vdsm user with
su -s /bin/bash - vdsm
and run /usr/share/vdsm/vdsm manually. Do you see anything in
particular?
Dan.
RFC: is it possible to configure hosts in cluster to be NTP peers
by David Jaša
Hi,
ovirt still doesn't configure NTP on host installation and relies on the
administrator not forgetting to set it up correctly, mainly because it
is quite hard to configure correctly in an automated way.
There is one thing that IMO could be configured automatically and that
could alleviate the situation somewhat: make the hosts in a cluster NTP
peers, so that when clocks go wrong in the cluster for any reason, the
error is the same on all hosts.
The files could be stored in /etc/{ntp,chrony}/vdsm.conf for instance,
and referenced with an "includefile" or "include" directive in
/etc/ntp.conf or /etc/chrony.conf respectively.
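For illustration, a generated fragment might look like this (a sketch
only; the host names are made up):

  # /etc/ntp/vdsm.conf -- maintained by the engine, one "peer" line per
  # Up host in the cluster
  peer host1.example.com
  peer host2.example.com

and a single extra line in /etc/ntp.conf:

  includefile /etc/ntp/vdsm.conf

(chrony would use "include /etc/chrony/vdsm.conf" instead.)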
What seems tricky though is that non-Up hosts should be excluded from
the peer list, because there is a higher chance that their clocks are
not configured properly, so the engine (or some host?) would have to
trigger changes to the NTP configuration pretty frequently.
What do you think about these issues? I don't want to report bugs/RFEs
on this topic before I see your reply.
David
--
David Jaša, RHCE
SPICE QE based in Brno
GPG Key: 22C33E24
Fingerprint: 513A 060B D1B4 2A72 7F0D 0278 B125 CD00 22C3 3E24
How to map the oVirt engine version to VDSM version by git tags?
by shuming@linux.vnet.ibm.com
Hi,
I am looking for a way to map an oVirt version to the corresponding
VDSM version and engine version by git tags. I can run "git tag -l"
under the engine git workspace and the vdsm git workspace. Here is the
output of these two "git tag -l" commands.
Under oVirt engine workspace:
-bash-4.1$ git tag -l
ovirt-engine-3.0.0_0001
ovirt-engine-3.1.0
ovirt-engine-3.2.0
ovirt-engine-3.2.1
Under vdsm workspace:
-bash-4.1$ git tag -l
v4.10.0
v4.10.1
v4.10.2
v4.10.3
v4.9.0
v4.9.1
v4.9.2
v4.9.3
v4.9.3.1
v4.9.3.2
v4.9.3.3
v4.9.4
v4.9.5
v4.9.6
I can check out the oVirt 3.2.1 snapshot in the engine workspace with
"git checkout ovirt-engine-3.2.1". But how can I get the corresponding
VDSM snapshot from the tags in the VDSM workspace? How can I know which
change-set is for oVirt 3.2.1 in the VDSM workspace?
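A couple of git queries that may help (a sketch; <commit-id> stands for
whatever vdsm change-set you are interested in):

  # check out a vdsm release snapshot the same way as an engine one
  git checkout v4.10.3
  # list the release tags whose history already contains a change-set
  git tag --contains <commit-id>
  # vdsm also carries stable branches, e.g. the ovirt-3.2 branch
  # mentioned elsewhere on this list
  git log --oneline origin/ovirt-3.2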
--
---
舒明 Shu Ming
Open Virtualization Engineering; CSTL, IBM Corp.
Tel: 86-10-82451626 Tieline: 9051626 E-mail: shuming(a)cn.ibm.com or shuming(a)linux.vnet.ibm.com
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC