On Thu, Oct 11, 2012 at 03:44:25PM +0800, Shu Ming wrote:
> Hi,
> I found some "dd" operations being launched continuously in my vdsm.log. Is this harmful? What causes these operations?
That's storage.storage_mailbox.SPM_MailMonitor, polling for lvextend requests. dd is used because, in the old days, vdsm did not have storage.fileUtils.DirectFile.
The behavior is expected, but I cannot say that it is harmless. The mailbox should be high on http://wiki.ovirt.org/wiki/Vdsm_TODO#refactoring, since forking so much is a waste, as is using strings instead of bytearrays. Making the module a separate, testable entity is important, too.
> From vdsm.log:
> Dummy-51000::DEBUG::2012-10-11 15:38:57,243::__init__::1249::Storage.Misc.excCmd::(_log) 'dd if=/rhev/data-center/6f6d4801-7447-48ea-b516-627d83e7801e/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None)
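For illustration only, here is a minimal sketch of the DirectFile-style alternative: reading the inbox in-process with O_DIRECT instead of forking dd. It is modern Python with assumed names, not vdsm's actual storage.fileUtils.DirectFile interface.

    import mmap
    import os

    # bs=1024000 comes from the dd line above; it is a multiple of 4096,
    # so the O_DIRECT size constraint is satisfied.
    MBX_READ_SIZE = 1024000

    def read_inbox_direct(path, size=MBX_READ_SIZE):
        # O_DIRECT needs an aligned buffer; an anonymous mmap is page-aligned.
        buf = mmap.mmap(-1, size)
        fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
        try:
            # One direct read, no fork/exec. Unlike dd's iflag=fullblock,
            # this does not retry short reads.
            n = os.readv(fd, [buf])
        finally:
            os.close(fd)
        data = buf[:n]
        buf.close()
        return data

This is the kind of change the TODO entry points at: one in-process read per polling interval, rather than a fork+exec of dd each time.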
2012-10-11 18:54, Dan Kenigsberg:
> That's storage.storage_mailbox.SPM_MailMonitor, polling for lvextend requests. [...]
After reading the code, every mailbox should be 4096 bytes, and the total mailbox size is hosts * 4096. Only one host is here, so the total mailbox size should be 4096. Why does the 'dd' operation read 1024000 bytes (1000 KB), which is much larger than 4096?
On Thu, Oct 11, 2012 at 11:38:19PM +0800, Shu Ming wrote:
> [...] Why does the 'dd' operation read 1024000 bytes, which is much larger than 4096?
The controlling parameter is MAX_HOST_ID=250, not the number of current cluster members.
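That is the whole arithmetic behind the dd line: the inbox is sized for the maximum possible host id, not for the hosts actually present. A worked sketch (the constant names are assumed from this thread, not checked against the exact source):

    MAILBOX_SIZE = 4096   # bytes reserved per host mailbox (assumed name)
    MAX_HOST_ID = 250     # the controlling parameter mentioned above

    # Even a one-host setup reads the whole reserved area:
    print(MAX_HOST_ID * MAILBOX_SIZE)  # 1024000 -- the bs= value in the dd line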
On 2012-10-14 5:15, Dan Kenigsberg:
> The controlling parameter is MAX_HOST_ID=250, not the number of current cluster members.
I am wondering if we can do some optimization here, like making the block size we read and write linear in the number of current cluster members.
On Sun, Oct 14, 2012 at 09:44:39PM +0800, Shu Ming wrote:
> I am wondering if we can do some optimization here, like making the block size we read and write linear in the number of current cluster members.
There is plenty of room for optimization (and for testability), as I've mentioned in a previous post.
We do not have a cluster membership algorithm; only Engine knows how many hosts are currently in the cluster. This knowledge could be propagated to the SPM. I do not see an imminent race in this, but I guess a couple of problems lurk there.
Dan.
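For concreteness, a hypothetical sketch of Shu Ming's suggestion, assuming Engine could propagate the set of member host ids to the SPM; every name below is invented for illustration:

    MAILBOX_SIZE = 4096  # one per-host mailbox; also a safe O_DIRECT alignment

    def inbox_read_size(member_host_ids, max_host_id=250):
        """Size the poll read by the highest host id Engine reports as a
        current cluster member, instead of always reading max_host_id
        mailboxes. Fall back to the full area when membership is unknown."""
        if not member_host_ids:
            return max_host_id * MAILBOX_SIZE
        highest = min(max(member_host_ids), max_host_id)
        return highest * MAILBOX_SIZE

    print(inbox_read_size({1}))    # 4096 -- one host, one mailbox
    print(inbox_read_size(set()))  # 1024000 -- unknown membership, read it all

The problems Dan hints at would presumably live in the hand-off: a newly added host must not write to a mailbox outside the window the SPM is currently reading.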