I have begun work on changing how API clients can control storage connections when interacting with VDSM.
Currently there are 2 API calls:
- connectStorageServer() - Will connect to the storage target if the host is not already connected to it.
- disconnectStorageServer() - Will disconnect from the storage target if the host is connected to it.
This API is very simple but is inappropriate when multiple clients and flows try to access the same storage.
This is currently solved by trying to synchronize things inside rhevm. This is hard and convoluted, and it also creates issues for other clients using the VDSM API.
Another problem is error recovery. Currently ovirt-engine (OE) has no way of monitoring the connections on all the hosts, and if a connection disappears it's OE's responsibility to reconnect.
I suggest a different concept where VDSM 'manages' the connections. VDSM receives a manage request with the connection information, and from that point forward VDSM will try to keep this connection alive. If the connection fails, VDSM will automatically try to recover.
Every manage request will also have a connection ID (CID). This CID will be used when the same client asks to unmanage the connection. When multiple manage requests are received for the same connection, each must have its own unique CID. By internally mapping CIDs to actual connections, VDSM can properly disconnect when no CID is addressing the connection. This allows each client, and even each flow, to have its own CID, effectively eliminating connect/disconnect races.
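To make the bookkeeping concrete, here is a minimal sketch (illustrative only, not actual VDSM code) of mapping CIDs to shared connections:
---------------------------------------------------------
class ConnectionRegistry(object):
    # Hypothetical internal structure; names are not VDSM's.
    def __init__(self):
        self._cidToConn = {}  # CID -> connection key
        self._connRefs = {}   # connection key -> set of CIDs

    def manage(self, cid, connKey):
        if cid in self._cidToConn:
            raise KeyError("CID already in use: %s" % cid)
        self._cidToConn[cid] = connKey
        self._connRefs.setdefault(connKey, set()).add(cid)

    def unmanage(self, cid):
        connKey = self._cidToConn.pop(cid)  # raises KeyError for unknown CIDs
        refs = self._connRefs[connKey]
        refs.discard(cid)
        if not refs:
            del self._connRefs[connKey]
            return True   # last reference gone: actually disconnect
        return False
---------------------------------------------------------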
The change from (dis)connect to (un)manage also changes the semantics of the calls significantly. Whereas connectStorageServer would have returned when the storage either connected or failed to connect, manageStorageServer will return once VDSM has registered the CID. This means that the connection might not be active immediately while VDSM tries to connect. The connection might remain down for a long time if the storage target is down or is having issues.
This allows VDSM to accept the manage request even if the storage is having issues, and to recover as soon as it's operational, without user intervention.
In order for the client to query the current state of the connections I propose getStorageConnectionList(). This will return a mapping of CID to connection status. The status contains the connection info (excluding credentials), whether the connection is active, whether the connection is managed (unmanaged connections are returned with transient IDs), and, if the connection is down, the last error information.
The same actual connection can be returned multiple times, once for each CID.
For cases where an operation requires a connection to be active, a user can poll the status of the CID. The user can then choose to poll for a certain amount of time, or until an error appears in the error field of the status. This gives either a timeout or a "try once" semantic, depending on the flow's needs.
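Both semantics can be built from the same polling loop; a sketch (status fields as described above):
---------------------------------------------------------
import time

def waitForConnection(host, cid, timeout=None, giveUpOnError=False):
    # timeout=None waits indefinitely; giveUpOnError=True gives the
    # "try once" semantic of stopping at the first reported error.
    deadline = None if timeout is None else time.time() + timeout
    while True:
        status = host.getStorageConnectionList()[cid]
        if status['connected']:
            return True
        if giveUpOnError and status['lastError'] != 0:
            return False
        if deadline is not None and time.time() >= deadline:
            return False
        time.sleep(1)
---------------------------------------------------------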
All connections that have been managed persist across VDSM restarts and will remain managed until a corresponding unmanage command has been issued.
There is no concept of temporary connections, as "temporary" is flow dependent and VDSM can't accommodate all interpretations of "temporary". An ad-hoc mechanism can be built using the CID field. For instance, a client can manage a connection with "ENGINE_FLOW101_CON1". If the flow gets interrupted, the client can clean up all CIDs carrying that flow's ID.
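For instance (a sketch, assuming the naming convention just described):
---------------------------------------------------------
def cleanFlowConnections(host, flowPrefix):
    # Unmanage every CID a given flow created, e.g. flowPrefix="ENGINE_FLOW101_".
    for cid in host.getStorageConnectionList():
        if cid.startswith(flowPrefix):
            host.unmanageStorageServer(cid)
---------------------------------------------------------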
I think this API gives safety, robustness, and implementation freedom.
Nitty Gritty:
manageStorageServer
===================
Synopsis: manageStorageServer(uri, connectionID)

Parameters: uri - a URI pointing to a storage target (e.g. nfs://server:export, iscsi://host/iqn;portal=1); connectionID - a string with any char except "/".

Description: Tells VDSM to start managing the connection. From this moment on VDSM will try to have the connection available when needed. VDSM will monitor the connection and will automatically reconnect on failure.

Returns: Success code if VDSM was able to manage the connection. It usually just verifies that the arguments are sane and that the CID is not already in use. This doesn't mean the host is connected.

----

unmanageStorageServer
=====================
Synopsis: unmanageStorageServer(connectionID)

Parameters: connectionID - a string with any char except "/".

Description: Tells VDSM to stop managing the connection. VDSM will try to disconnect from the storage target if this is the last CID referencing the storage connection.

Returns: Success code if VDSM was able to unmanage the connection. It will return an error if the CID is not registered with VDSM. Disconnect failures are not reported; active unmanaged connections can be tracked with getStorageServerList().

----

getStorageServerList
====================
Synopsis: getStorageServerList()

Description: Will return a list of all managed and unmanaged connections. Unmanaged connections have temporary IDs and are not guaranteed to be consistent across calls.
Results: A mapping between CIDs and their statuses. Example return value (actual key names may differ):
{'conA': {'connected': True,
          'managed': True,
          'lastError': 0,
          'connectionInfo': {'remotePath': 'server:/export',
                             'retrans': 3,
                             'version': 4}},
 'iscsi_session_34': {'connected': False,
                      'managed': False,
                      'lastError': 339,
                      'connectionInfo': {'hostname': 'dandylopn',
                                         'portal': 1}}}
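A client-side sketch of consuming this mapping (illustrative only; verb and key names follow the examples above):
---------------------------------------------------------
for cid, status in host.getStorageServerList().items():
    if not status['managed']:
        continue  # transient ID of an unmanaged connection
    if not status['connected']:
        print("connection %s is down, last error %d"
              % (cid, status['lastError']))
---------------------------------------------------------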
On 23/01/12 23:54, Saggi Mizrahi wrote:
<SNIP original proposal quoted in full>
Hi Saggi,
I see the added value in the above functionality and I think it is a needed functionality in VDSM.
Your suggestion includes 2 concepts:
- Persist connection - auto-reconnect on failures
- Reference counting (with CID granularity)
Here are some comments:
* Assuming you meant that the new API will be a replacement for the current API (based on previous chats we had on this topic), I think you are missing needed functionality to support non-persisted connections. Creating a storage domain is an example where it can be useful. The flow includes connecting the host to the storage server, creating the storage domain, and disconnecting from the storage server. Let's assume VDSM hangs while creating the storage domain: any unmanageStorageServer will fail, the engine rolls back and tries to create the storage domain on another host, and there is no reason for the host to reconnect to this storage server. In the above flow I would use a non-persisted connection if I had one.
* In the suggested solution the connect will not initiate an immediate connect to the storage server; instead it will register the connection as a handled connection and will actually generate the connect as part of the managed connection mechanism. I argue that this modeling is implementation driven, which is wrong from the user perspective. As a user I expect connect to actually initiate a connect action and that the return value should indicate whether the connect succeeded; the way you modeled it, the API will return true if you succeeded in 'registering' the connect. You modeled the API to be asynchronous with no handle (task id) to monitor the results of the action, which requires polling in the create storage domain flow, which I really don't like. In addition, you introduced a verb for monitoring the status of the connections alone; I would like to be able to monitor it as part of the general host status and not have to poll on a new verb in addition to the current one.
As part of solving the connection management flows in OE I am missing:
- A way to clear all managed connections. Use case: we move a host from one data center to another and we want the host to clear all the managed connections. We can ask for the list of managed connections and clear them, but having clearAll is much easier.
- Handling a list of Ids in each API verb
- A verb which handles create storage domain and encapsulates the connect, create and disconnect.
Thanks, Livnat
----- Original Message -----
From: "Livnat Peer" lpeer@redhat.com
To: "Saggi Mizrahi" smizrahi@redhat.com
Cc: vdsm-devel@lists.fedorahosted.org, engine-devel@ovirt.org
Sent: Tuesday, January 24, 2012 12:43:39 PM
Subject: Re: [Engine-devel] [RFC] New Connection Management API
On 23/01/12 23:54, Saggi Mizrahi wrote:
<SNIP original proposal quoted in full>
Hi Saggi,
I see the added value in the above functionality and I think it is a needed functionality in VDSM.
Your suggestion includes 2 concepts:
- Persist connection - auto-reconnect on failures
- Reference counting (with CID granularity)
It's not reference counting and the API user should not assume it is a reference count. Each CID can only be registered once. Subsequent requests to register will fail if the CID is already registered. There shouldn't be any assumptions about the relation between a manage call and how many physical connections are actually created. Optimizations like internal multiplexing are an implementation detail and might change.
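For example (illustrative):
---------------------------------------------------------
host.manageStorageServer(uri, "ENGINE_DOMAIN_X_CON1")  # OK, CID registered
host.manageStorageServer(uri, "ENGINE_DOMAIN_X_CON1")  # error, CID already in use
host.manageStorageServer(uri, "ENGINE_DOMAIN_X_CON2")  # OK; whether it shares a
                                                       # physical connection is an
                                                       # implementation detail
---------------------------------------------------------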
Here are some comments:
- Assuming you meant that the new API will be a replacement to the
current API (based on previous chats we had on this topic)
It is
I think you are missing a needed functionality to support non-persisted connection.
The problem is with the term non-persisted. Everything is transient depending on the scale of time you consider "temporary". I leave the decision on what is temporary to the API user and give him the freedom to implement any connection lifecycle mechanism he chooses. I assume that all components, including VDSM, can crash in the middle of a flow and might want to either recover and continue or roll back. I leave it to the user to decide how to handle this, as he is the one managing the flow, not VDSM.
Creating a storage domain is an example where it can be useful. The flow includes connecting the host to the storage server, creating the storage domain, and disconnecting from the storage server.
I think you are confusing the create storage domain flow in general with how the create storage domain flow works today in the RHEV GUI. A flow can have multiple strategies for waiting for connection availability:
* If it's a non-interactive process I might not care if the actual connect takes 3 hours.
* On the other hand, if it's an interactive connect I might only want to wait for 1 minute, even if an actual connect request takes a lot longer because of some problem.
* If I am testing the connection arguments I might want to wait until I see the connection succeed or lastError get a value, no matter how long it takes.
* I might want to try as long as the error is not credential related (or another non-transient issue).
* I might want to try until I see the connection active for X amount of time (to test for intermittent disconnects).
All of these can be accommodated by the suggested API.
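Two of those strategies expressed as stop conditions for a polling loop (a sketch; helper names are hypothetical):
---------------------------------------------------------
import time

def interactiveDeadline(seconds=60):
    # Interactive flow: give up after `seconds` no matter what.
    deadline = time.time() + seconds
    return lambda: time.time() >= deadline

def connectedOrErrored(host, cid):
    # Testing connection arguments: stop once connected or lastError is set.
    def stop():
        status = host.getStorageConnectionList()[cid]
        return status['connected'] or status['lastError'] != 0
    return stop
---------------------------------------------------------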
Let's assume VDSM hangs while creating the storage domain: any unmanageStorageServer will fail, the engine rolls back and tries to create the storage domain on another host, and there is no reason for the host to reconnect to this storage server.
That is true, but there is no way for VDSM to know whether the connection is deprecated or not. For all I know rhevm might be having issues and will continue on with the flow in a few minutes.
In the above flow I would use a non-persisted connection if I had one.
Again, what does non-persist mean?
- In the suggested solution the connect will not initiate an
immediate connect to the storage server instead it will register the connection as handled connection and will actually generate the connect as part of the managed connection mechanism.
The mechanism guarantees maximum availability, so it will immediately connect. The command might return before an actual connection has succeeded, as the manage part is done.
I argue that this modeling is implementation driven which is wrong from the user perspective.
VDSM is pretty low on the stack and has to accommodate many API users. I think it's wrong to model an API without considering how things actually behave, gluing stuff on to appease a GUI flow. GUI flows change all the time; APIs don't. Having a flexible API that supports multiple use patterns and does not enforce arbitrary limitations is better than one that is tightly coupled to 1 user flow.
As a user I expect connect to actually initiate a connect action and that the return value should indicate whether the connect succeeded; the way you modeled it, the API will return true if you succeeded in 'registering' the connect. You modeled the API to be asynchronous with no handle (task id) to monitor the results of the action, which requires polling
The API is not asynchronous; it is perfectly synchronous. When manageStorageConnection() returns, the connection is managed. You will have maximum connection uptime. You will have to poll and check the liveness of the connection before using it, as some problems may occur preventing VDSM from supplying the connection at the moment.
in the create storage domain flow, which I really don't like. In addition, you introduced a verb for monitoring the status of the connections alone; I would like to be able to monitor it as part of the general host status and not have to poll on a new verb in addition to the current one.
As part of solving the connection management flows in OE I am missing:
- A way to clear all managed connections.
Use case: we move a host from one data center to another and we want the host to clear all the managed connections. We can ask for the list of managed connections and clear them, but having clearAll is much easier.
Nope. You should get all active connections, cherry pick the ones you own using some ID scheme (RHEVM_FLOWID_CON?), and only clear your own connections. There might be other clients using VDSM that you would forcibly disconnect.
- Handling a list of Ids in each API verb
Only getDeviceList will have a list of IDs handed to it. It makes no sense in other verbs.
- A verb which handles create storage domain and encapsulates the
connect create and disconnect.
This is a hackish ad-hoc solution. Why not have one for the entire pool? Why not have one for a VM?
Thanks, Livnat
I will try to sum the points up here: manageConnection is not connectStorageServer. They are different. The latter means connect to the storage server; the former means manage it. They are both synchronous.
Non-persistence makes no sense; auto-unmanage does. If anyone suggests a valid mechanism to auto-clean CIDs that is correct and accommodates interactive and non-interactive flows, I will be willing to accept it. Timeouts are never correct, as no flow is really time capped, and they will create more issues than they solve.
Polling the CID to track connection availability is the correct way to go, as what you really want is not to connect to the storage but rather to have it available. Polling is just waiting until the connection is available or a condition has been triggered. This gives the flow manager freedom over what the condition is (see above).
Cleaning up connections, like closing FDs, freeing memory, and other resource management, is a pain. I understand, and having a transaction-like mechanism to lock resources to a flow would be great, but this is outside the scope of this change.
VDSM being a tiny cog in the cluster can never have enough information to know when a flow started or finished. This is why I leave it to the management to manage these resources. I just prevent collisions (with the CIDs) and handle resource availability.
How to implement stuff: I suggest this CID scheme:
For connections that persist across engine restarts.
OENGINE_<resource type>_<resource id>_CON<connection id> EX: OENGINE_DOMAIN_2131-321dsa-dsadsa-232_CON1
For connections that are managed for flows and might not persist engine restarts.
OENGINE_<engine instance id>_FLOW_<flow id>_CON<connection id> EX: OENGINE_4324-23423dfd-fsdfsd-21312_FLOW_1023_CON1
Note: the instance id is a uuid generated on each instance run, to differentiate between running instances simply.
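A sketch of the helpers this scheme implies (they back the isPersistentConnection/parseCID calls used in the pseudo code below; purely illustrative):
---------------------------------------------------------
PREFIX = "OENGINE_"
RESOURCE_TYPES = ("DOMAIN",)  # extend with other resource types as needed

def isPersistentConnection(cid):
    # Persistent CIDs: OENGINE_<resource type>_<resource id>_CON<n>
    # Flow CIDs:       OENGINE_<instance uuid>_FLOW_<flow id>_CON<n>
    return cid[len(PREFIX):].split("_")[0] in RESOURCE_TYPES

def parseCID(cid):
    # Split a flow CID into (instanceId, flowId, conId).
    instanceId, marker, flowId, conId = cid[len(PREFIX):].split("_")
    assert marker == "FLOW"
    return instanceId, flowId, conId[len("CON"):]
---------------------------------------------------------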
How to poll for connections (in pythonic pseudo code):
---------------------------------------------------------
def pollConnections(host, cidList, stopCondition, interval):
    clist = cidList.copy()
    while (not stopCondition()) and (len(clist) > 0):
        statuses = host.getStorageConnectionsStatuses()
        for id in statuses:
            if not id.startswith("OENGINE"):
                # This is not an engine connection, ignore
                continue

            # Check the scheme and see if it has an instance ID after the prefix or not
            if isPersistentConnection(id):
                continue

            instanceId, flowId, conId = parseCID(id)

            # Clean connections from past instances
            if instanceId != global_instance_id:
                # Ignore errors here as some other thread may be clearing this ID
                # as well; in any case VDSM is taking care of thread safety.
                host.unmanageStorageConnection(id)

            if id in cidList:
                if statuses[id].connected:
                    clist.remove(id)

        sleep(interval)
-------------------------------------------------
It's easy to see how you can modify this template to support multiple modes of tracking:
* Pass a flow id instead of a CID list to track a flow
* Exit when at least X connections succeeded
* Call getDeviceList after every successful connect and check if the LUN you are looking for is available; if it is, continue and let the other connections complete at their own pace for multipathing
* Connect to multiple hosts and return once 1 host has connected successfully
* You can also add an install id or a cluster id if you want to have multiple engines managing the same VDSM and not have them step on each other's toes
and much, much more.
Implementing this will give you everything you want with maximum correctness and flexibility. This will also make the transition to event driven communication with VDSM simpler.
Hi Saggi,
I see the added value in the above functionality and I think it is a needed functionality in VDSM.
Your suggestion includes 2 concepts:
- Persist connection - auto-reconnect on failures
- Reference counting (with CID granularity)
It's not reference counting and the API user should not assume it is a reference count. Each CID can only be registered once.
By reference counting with CID granularity I meant that as long as you have more than one CID registered on a connection, the connection will be managed by the host.
Subsequent requests to register will fail if the CID is already registered. There shouldn't be any assumptions about the relation between a manage call and how many physical connections are actually created. Optimizations like internal multiplexing are an implementation detail and might change.
Here are some comments:
- Assuming you meant that the new API will be a replacement to the
current API (based on previous chats we had on this topic)
It is
I think you are missing a needed functionality to support non-persisted connection.
The problem is with the term non-persisted. Everything is transient depending on the scale of time you consider "temporary". I leave the decision on what is temporary to the API user and give him the freedom to implement any connection lifecycle mechanism he chooses. I assume that all components, including VDSM, can crash in the middle of a flow and might want to either recover and continue or roll back. I leave it to the user to decide how to handle this, as he is the one managing the flow, not VDSM.
I would call connections that don't need to reconnect upon failure non-persistent connections; it is not a function of time.
There are operations that upon failure can be done on another host and there is no reason to reconnect to the storage target as it is not interesting for the user any more.
Creating a storage domain is an example where it can be useful. The flow includes connecting the host to the storage server, creating the storage domain, and disconnecting from the storage server.
I think you are confusing the create storage domain flow in general with how the create storage domain flow works today in the RHEV GUI. A flow can have multiple strategies for waiting for connection availability:
- If it's a non interactive process I might not care if the actual connect takes 3 hours.
- On the other hand if it's an interactive connect I might only want to wait for 1 minute even if an actual connect request takes a lot more because of some problem.
- If I am testing the connection arguments I might want to wait until I see the connection succeed or lastError get a value no matter how long it takes.
- I might want to try as long as the error is not credential related (or other non transient issue).
- I might want to try until I see the connection active for X amount of time (To test for intermittent disconnects).
All of these can be accommodated by the suggested API.
That is great, but I am not sure how it is related to non-persisted connections.
Let's assume VDSM hangs while creating the storage domain: any unmanageStorageServer will fail, the engine rolls back and tries to create the storage domain on another host, and there is no reason for the host to reconnect to this storage server.
That is true but there is no way for VDSM to know if the connection is deprecated or not. For all I know rhevm might be having issues and will continue on with the flow in a few minutes.
If there is an error, VDSM doesn't need to reconnect the non-persistent connection, and it should be up to the VDSM user to ask for a persistent or non-persistent connection.
In the above flow I would use a non-persisted connection if I had one.
Again, what does non-persist mean?
- In the suggested solution the connect will not initiate an
immediate connect to the storage server instead it will register the connection as handled connection and will actually generate the connect as part of the managed connection mechanism.
The mechanism guarantees maximum availability, so it will immediately connect. The command might return before an actual connection has succeeded, as the manage part is done.
I argue that this modeling is implementation driven which is wrong from the user perspective.
VDSM is pretty low on the stack and has to accommodate many API users. I think it's wrong to model an API without considering how things actually behave, gluing stuff on to appease a GUI flow. GUI flows change all the time; APIs don't. Having a flexible API that supports multiple use patterns and does not enforce arbitrary limitations is better than one that is tightly coupled to 1 user flow.
I am not sure why you think I am looking at the GUI flow, as I was actually referring to the engine as the user of VDSM. The engine has to support different clients; the UI is only one of them.
As a user I expect connect to actually initiate a connect action and that the return value should indicate whether the connect succeeded; the way you modeled it, the API will return true if you succeeded in 'registering' the connect. You modeled the API to be asynchronous with no handle (task id) to monitor the results of the action, which requires polling
The API is not asynchronous it is perfectly synchronous. When manageStorageConnection() returns the connection is managed. You will have maximum connection uptime. You will have to poll and check for the liveness of the connection before using it as some problems may occur preventing VDSM from supplying the connection at the moment.
in the create storage domain flow, which I really don't like. In addition, you introduced a verb for monitoring the status of the connections alone; I would like to be able to monitor it as part of the general host status and not have to poll on a new verb in addition to the current one.
As part of solving the connection management flows in OE I am missing:
- A way to clear all managed connections.
Use case: we move a host from one data center to another and we want the host to clear all the managed connections. We can ask for the list of managed connections and clear them, but having clearAll is much easier.
Nope. You should get all active connections, cherry pick the ones you own using some ID scheme (RHEVM_FLOWID_CON?), and only clear your own connections. There might be other clients using VDSM that you would forcibly disconnect.
I hope that VDSM is going to serve many types of clients, but hybrid client mode is the less interesting use case IMO. How often will you have more than one virtualization manager managing the same host? I think it is not a common use case, and if it is not the common use case I expect the API to be more friendly to the single-manager use case.
Moving a host from one data center to another is a clear use case where a clearAll API would be useful, and I am sure other clients will find this API useful as well.
- Handling a list of Ids in each API verb
Only getDeviceList will have a list of IDs handed to it. It makes no sense in other verbs.
I disagree. If I need to connect a host to a storage domain I need to execute a number of API calls which is linear in the number of storage servers I use for the storage domain; again, not a friendly API.
- A verb which handles create storage domain and encapsulates the
connect create and disconnect.
This is a hackish ad-hoc solution. Why not have one for the entire pool? Why not have one for a VM?
I think we are going to remove pools in 4.0 so probably not, and for a VM, well, that's an interesting idea :)
Thanks, Livnat
I will try to sum the points up here: manageConnection is not connectStorageServer. They are different. The latter means connect to the storage server; the former means manage it. They are both synchronous.
Non-persistence makes no sense; auto-unmanage does. If anyone suggests a valid mechanism to auto-clean CIDs that is correct and accommodates interactive and non-interactive flows, I will be willing to accept it. Timeouts are never correct, as no flow is really time capped, and they will create more issues than they solve.
I am not sure what the problem is with the non-persistent mechanism I suggested earlier.
<SNIP>
How to poll for connections: (in pythonic pseudo code)
def pollConnections(host, cidList, stopCondition, interval):
    clist = cidList.copy()
    while (not stopCondition()) and (len(clist) > 0):
        statuses = host.getStorageConnectionsStatuses()
        for id in statuses:
            if not id.startswith("OENGINE"):
                # This is not an engine connection, ignore
                continue
            # Check the scheme and see if it has an instance ID after the prefix or not
            if isPersistentConnection(id):
                continue
            instanceId, flowId, conId = parseCID(id)
            # Clean connections from past instances
            if instanceId != global_instance_id:
                # Ignore errors here as some other thread may be clearing this ID
                # as well; in any case VDSM is taking care of thread safety.
                host.unmanageStorageConnection(id)
            if id in cidList:
                if statuses[id].connected:
                    clist.remove(id)
        sleep(interval)
I would not use sleep; I would use scheduler-based monitoring and release the thread between cycles.
<SNIP> This mail was getting way too long.
About the clear all verb: no. Just loop, find the connections YOU OWN and clean them. Even if you don't want to support multiple clients of the VDSM API, that doesn't mean the engine shouldn't behave like a proper citizen. It's the same reason VDSM tries not to mess with system resources it didn't initiate.
------------------------
As I see it the only point of conflict is the so-called non-persisted connections. I will call them transient connections from now on.
There are 2 use cases being discussed:
1. Wait until a connection is made; if it fails, don't retry and automatically unmanage.
2. The caller of the API forgets or fails to unmanage a connection.
Your suggestion as I understand it: transient connections are connections that VDSM will only try to connect to once and will not reconnect to in case of disconnect.
My problem with this definition is that it does not specify the "end of life" of the connection, meaning it solves only use case 1. If all is well, and it usually is, VDSM will not invoke a disconnect. So the caller would have to call unmanage if the connection succeeded at the end of the flow. Now, if you are already calling unmanage when the connection succeeded, you can just call it anyway.
instead of doing: (with your suggestion)
----------------
manage
wait until succeeds or lastError has value
try:
    do stuff
finally:
    unmanage
do: (with the canonical flow)
---
manage
try:
    wait until succeeds or lastError has value
    do stuff
finally:
    unmanage
This is simpler to do than having another connection type.
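In pythonic pseudo code the canonical flow would be something like (a sketch; waitUntilConnected is a polling helper along the lines discussed above):
---------------------------------------------------------
host.manageStorageServer(uri, cid)
try:
    waitUntilConnected(host, cid)  # poll until connected or lastError has a value
    doStuff()
finally:
    host.unmanageStorageServer(cid)
---------------------------------------------------------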
Now that we got that out of the way, let's talk about the 2nd use case: an API client died in the middle of the operation and unmanage was never called.
Your suggested definition means that unless there was a problem with the connection, VDSM will still have this connection active. The engine will have to clean it anyway.
The problem is, VDSM has no way of knowing that a client died, forgot or is thinking really hard and will continue on in about 2 minutes.
Connections that live until they die are a lifecycle that is hard to define and work with. Solving this problem is theoretically simple.
Have clients hold some sort of session token and force the client to update it at a specified interval. You could bind resources (like domains, VMs, connections) to that session token so that when it expires VDSM auto-cleans the resources.
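Such a lease could look roughly like this (purely illustrative; the next paragraph explains why it is out of scope here):
---------------------------------------------------------
import time

class SessionLease(object):
    # Resources bound to the token are auto-cleaned unless the client
    # renews the lease within `ttl` seconds.
    def __init__(self, token, ttl):
        self.token = token
        self.ttl = ttl
        self.expires = time.time() + ttl
        self.resources = set()  # e.g. CIDs bound to this session

    def renew(self):
        self.expires = time.time() + self.ttl

    def expired(self):
        return time.time() >= self.expires
---------------------------------------------------------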
This kind of mechanism is out of the scope of this API change. Furthermore, I think this mechanism should sit in the engine, since the session might actually contain resources from multiple hosts and resources that are not managed by VDSM.
In GUI flows specifically, the user might do actions that don't even touch the engine, and forcing it to refresh the engine token is simpler than having it refresh the VDSM token.
I understand that the engine currently has no way of tracking a user session. This, as I said, is also true in the case of VDSM. We could start arguing about which project should implement the session semantics, but as I see it, that's not relevant to the connection management API.
On 25/01/12 23:35, Saggi Mizrahi wrote:
<SNIP> This mail was getting way too long.
About the clear all verb: no. Just loop, find the connections YOU OWN and clean them. Even if you don't want to support multiple clients of the VDSM API, that doesn't mean the engine shouldn't behave like a proper citizen. It's the same reason VDSM tries not to mess with system resources it didn't initiate.
There is a big difference: VDSM living in hybrid mode with other workloads on the host is a valid use case; having more than one concurrent manager for VDSM is not. Generating a disconnect request for each connection does not seem like the right API to me. Again, think of the simple flow of moving a host from one data center to another: the engine needs to disconnect all storage domains (each domain can have a couple of connections associated with it).
I am giving examples from the engine use cases as it is the main user of VDSM ATM, but I am sure they will be relevant to any other user of VDSM.
As I see it the only point of conflict is the so-called non-persisted connections. I will call them transient connections from now on.
There are 2 use cases being discussed:
- Wait until a connection is made, if it fails don't retry and automatically unmanage.
- The caller of the API forgets or fails to unmanage a connection.
Actually I was not discussing #2 at all.
Your suggestion as I understand it: transient connections are connections that VDSM will only try to connect to once and will not reconnect to in case of disconnect.
yes
My problem with this definition is that it does not specify the "end of life" of the connection, meaning it solves only use case 1.
Since this is the only use case I had in mind, it is what I was looking for.
If all is well, and it usually is, VDSM will not invoke a disconnect. So the caller would have to call unmanage if the connection succeeded at the end of the flow.
agree.
Now, if you are already calling unmanage when the connection succeeded, you can just call it anyway.
Not exactly. An example I gave earlier in the thread was that VDSM hangs or has some other error and the engine cannot initiate unmanage; instead, let's assume the host is fenced (self-fence or external fence does not matter). In this scenario the engine will not issue unmanage.
instead of doing: (with your suggestion)
manage
wait until succeeds or lastError has value
try:
    do stuff
finally:
    unmanage
do: (with the canonical flow)
manage
try:
    wait until succeeds or lastError has value
    do stuff
finally:
    unmanage
This is simpler to do than having another connection type.
You are assuming the engine can communicate with VDSM, and there are scenarios where that is not feasible.
Now that we got that out of the way, let's talk about the 2nd use case.
Since I did not ask VDSM to clean up after the (engine) user and you don't want to do it, I am not sure we need to discuss this.
If you insist we can start the discussion on who should implement the cleanup mechanism, but I'm afraid I have no strong arguments for VDSM to do it, so I'd rather not go there ;)
You dropped from the discussion my request for supporting a list of connections for the manage and unmanage verbs.
On Thu, Jan 26, 2012 at 12:22:42PM +0200, Livnat Peer wrote:
On 25/01/12 23:35, Saggi Mizrahi wrote:
<SNIP> This mail was getting way too long.
About the clear all verb: no. Just loop, find the connections YOU OWN and clean them. Even if you don't want to support multiple clients of the VDSM API, that doesn't mean the engine shouldn't behave like a proper citizen. It's the same reason VDSM tries not to mess with system resources it didn't initiate.
There is a big difference: VDSM living in hybrid mode with other workloads on the host is a valid use case; having more than one concurrent manager for VDSM is not. Generating a disconnect request for each connection does not seem like the right API to me. Again, think of the simple flow of moving a host from one data center to another: the engine needs to disconnect all storage domains (each domain can have a couple of connections associated with it).
I am giving examples from the engine use cases as it is the main user of VDSM ATM, but I am sure they will be relevant to any other user of VDSM.
I will speak up to represent other potential users of the VDSM API. My vote is with Saggi here to keep the API simple and have an unmanage call that operates on a single connection only. Every programming language has looping constructs that make it easy to implement unmanageAll. Why clog up vdsm's API with an extra function just to avoid writing a 'for' loop?
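For instance (a sketch; only the looping is new, the verbs are the ones proposed above):
---------------------------------------------------------
def unmanageAll(host, prefix="OENGINE_"):
    # Clean up only the connections this client owns, per the CID scheme.
    for cid in host.getStorageServerList():
        if cid.startswith(prefix):
            host.unmanageStorageServer(cid)
---------------------------------------------------------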
On 01/26/2012 03:01 PM, Adam Litke wrote:
<SNIP>
I will speak up to represent other potential users of the VDSM API. My vote is with Saggi here to keep the API simple and have an unmanage call that operates on a single connection only. Every programming language has looping constructs that make it easy to implement unmanageAll. Why clog up vdsm's API with an extra function just to avoid writing a 'for' loop?
A very good reason to have that kind of bulk operations (in general, maybe not in this case) is to reduce the number of network round trips and thus improve performance. The loop is very easy to write, and very expensive to execute.
On 01/26/2012 04:21 PM, Juan Hernandez wrote:
On 01/26/2012 03:01 PM, Adam Litke wrote:
On Thu, Jan 26, 2012 at 12:22:42PM +0200, Livnat Peer wrote:
On 25/01/12 23:35, Saggi Mizrahi wrote:
About the clear all verb. No. Just loop, find the connections YOU OWN and clean them. Even though you don't want to support multiple clients to VDSM API doesn't mean the engine shouldn't behave like a proper citizen. It's the same reason why VDSM tries and not mess system resources it didn't initiate.
There is a big difference, VDSM living in hybrid mode with other workload on the host is a valid use case, having more than one concurrent manager for VDSM is not. Generating a disconnect request for each connection does not seem like the right API to me, again think on the simple flow of moving host from one data center to another, the engine needs to disconnect tall storage domains (each domain can have couple of connections associated with it).
I am giving example from the engine use cases as it is the main user of VDSM ATM but I am sure it will be relevant to any other user of VDSM.
I will speak up to represent other potential users of the VDSM API. My vote is with Saggi here to keep the API simple and have an unmanage call that operates on a single connection only. Every programming language has looping constructs that make it easy to implement unmanageAll. Why clog up vdsm's API with an extra function just to avoid writing a 'for' loop?
A very good reason to have that kind of bulk operations (in general, maybe not in this case) is to reduce the number of network round trips and thus improve performance. The loop is very easy to write, and very expensive to execute.
It's expensive mainly because we do not have a persistent connection between VDSM and Engine, and because it's not compressed (with TLS compression or internally). Y.
<snip> Again trying to sum up and address all comments
Clear all:
==========
My opinion is still not to implement it. Even though it might generate a bit more traffic, premature optimization is bad, and there are other ways we can improve VDSM command overhead without doing this.
In any case this argument is redundant because my intention (as Litke pointed out) is to have a lean API. An API call is something you have to support across versions; this call implemented in the engine is something that no one has to support and can change\evolve easily.
As a rule, if an API call C can be implemented by doing A + B then C is redundant.

List of connections as args:
============================
Sorry, I forgot to respond about that. I'm not as strongly opposed to this idea as to the other things you suggested. It'll just make implementing the persistence logic in VDSM significantly more complicated, as I will have to commit multiple connections' information to disk in an all-or-nothing mode. I can create a small sqlite db to do that, or do some directory tricks and exploit FS rename atomicity, but I'd rather not.
The demands are not without basis. I would like to keep the code simple under the hood at the price of a few more calls. You would like to make fewer calls and keep the code simpler on your side. There isn't a real way to settle this. If anyone on the list has pros and cons for either way, I'd be happy to hear them. If no compelling arguments arise, I will let Ayal call this one.

Transient connections:
======================
The problem you are describing, as I understand it, is that VDSM did not respond, not that the API client did not respond. Again, this can happen for a number of reasons, most of which VDSM might not even be aware are actually a problem (network issues).
This relates to the EOL policy. I agree we have to find a good way to define an automatic EOL for resources. I have made my suggestion. Out of the scope of the API.
In the meantime, cleaning stale connections is trivial, and I made it clear in a previous email how to go about it in a simple, non-intrusive way: clean the host on host connect, and on every poll clean any connections that you don't recognize. This should keep things squeaky clean.
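Something along these lines on the engine side (a sketch only; the verbs are the RFC's, while the 'managed' field and the expected-CID bookkeeping are assumptions):

    def sweep_stale_connections(client, expected_cids, owner_prefix="ENGINE_"):
        # Run on host connect and on every poll: unmanage anything we own
        # that we no longer expect to be there.
        for cid, status in client.getStorageConnectionList().items():
            if not status.get("managed"):
                continue  # unmanaged connections get transient IDs; not ours
            if cid.startswith(owner_prefix) and cid not in expected_cids:
                client.unmanageStorageServer(cid)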
----- Original Message -----
From: "Livnat Peer" lpeer@redhat.com To: "Saggi Mizrahi" smizrahi@redhat.com Cc: vdsm-devel@lists.fedorahosted.org, engine-devel@ovirt.org Sent: Thursday, January 26, 2012 5:22:42 AM Subject: Re: [Engine-devel] [RFC] New Connection Management API
On 25/01/12 23:35, Saggi Mizrahi wrote:
<SNIP> This mail was getting way too long.
<snip>
As I see it the only point of conflict is the so-called non-persisted connections. I will call them transient connections from now on.
There are 2 use cases being discussed:
1. Wait until a connection is made; if it fails, don't retry, and automatically unmanage.
2. The caller of the API forgets or fails to unmanage a connection.
Actually I was not discussing #2 at all.
Your suggestion as I understand it: Transient connections are:
- Connections that VDSM will only try to connect to once and will not reconnect to in case of disconnect.
yes
My problem with this definition is that it does not specify the "end of life" of the connection. Meaning it solves only use case 1.
Since this is the only use case I had in mind, it is what I was looking for.
If all is well, and it usually is, VDSM will not invoke a disconnect. So the caller would have to call unmanage if the connection succeeded at the end of the flow.
agree.
Now, if you are already calling unmanage when the connection succeeds, you can just call it anyway.
Not exactly. An example I gave earlier in the thread was that VDSM hangs or has some other error and the engine cannot initiate unmanage; instead, let's assume the host is fenced (self-fence or external fence does not matter). In this scenario the engine will not issue unmanage.
instead of doing (with your suggestion):

    manage
    wait until succeeds or lastError has value
    try:
        do stuff
    finally:
        unmanage

do (with the canonical flow):

    manage
    try:
        wait until succeeds or lastError has value
        do stuff
    finally:
        unmanage
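In real code the canonical flow would look roughly like this (the verbs are the RFC's; 'connected' and 'lastError' are my guesses at the status fields, and the timeout follows the polling semantics described at the top of the thread):

    import time

    def with_connection(client, uri, cid, do_stuff, timeout=60, poll=2):
        # Canonical flow: manage first, poll inside the try, always unmanage.
        client.manageStorageServer(uri, cid)
        try:
            deadline = time.time() + timeout
            while True:
                status = client.getStorageConnectionList()[cid]
                if status.get("connected"):
                    break
                if status.get("lastError"):
                    raise RuntimeError(str(status["lastError"]))
                if time.time() > deadline:
                    raise RuntimeError("connection %s not up in time" % cid)
                time.sleep(poll)
            return do_stuff()
        finally:
            client.unmanageStorageServer(cid)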
<snip>
On Thu, Jan 26, 2012 at 10:00:57AM -0500, Saggi Mizrahi wrote:
<snip>
I would be strongly opposed to introducing a sqlite database into vdsm just to enable a "convenience mode" for this API. Does the operation really need to be atomic? Why not just perform each connection sequentially and return a list of statuses? Is the only motivation for allowing a list of parameters to reduce the number of API calls between engine and vdsm? If so, the same argument Saggi makes above applies here.
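That is, something like this on the VDSM side (a sketch; manage_one stands in for whatever persists and registers a single connection, and the status codes are made up):

    def manage_many(manage_one, connections):
        # Process sequentially, no transaction; report per-connection results.
        results = []
        for uri, cid in connections:
            try:
                manage_one(uri, cid)
                results.append({"id": cid, "status": 0})
            except Exception as exc:
                results.append({"id": cid, "status": 100,
                                "message": str(exc)})
        return results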
<snip>
----- Original Message -----
From: "Adam Litke" agl@us.ibm.com To: "Saggi Mizrahi" smizrahi@redhat.com Cc: "Livnat Peer" lpeer@redhat.com, engine-devel@ovirt.org, vdsm-devel@lists.fedorahosted.org Sent: Thursday, January 26, 2012 1:58:40 PM Subject: Re: [vdsm] [Engine-devel] [RFC] New Connection Management API
On Thu, Jan 26, 2012 at 10:00:57AM -0500, Saggi Mizrahi wrote:
<snip>
I try to have VDSM expose APIs that are simple to predict: a command can either succeed or fail. The problem is not actually validating the connections. The problem is that once I've concluded that they are all OK, I need to persist to disk the information that will allow me to reconnect if VDSM happens to crash. If I naively save them one by one, I could get into a state where only some of the connections persisted before the operation failed. So I have to somehow put all this in a transaction.
I don't have to use sqlite. I could also put all the persistence information in a new dir for every call, named <UUID>.tmp. Once I've written everything down, I rename the directory to just <UUID> and fsync it. This rename is guaranteed by POSIX to be atomic. For unmanage, I move all the persistence information from the directories it sits in to a new dir named <UUID>, rename it to <UUID>.tmp, fsync it, and then remove it.
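Roughly like this for the manage side (illustrative path and layout, not actual VDSM code):

    import json
    import os
    import uuid

    STORE = "/var/run/vdsm/connections"   # illustrative path

    def persist_batch(connections):
        # Write the whole batch into <UUID>.tmp, then rename to <UUID>;
        # recovery code ignores *.tmp dirs, so it only ever sees complete
        # batches because rename(2) is atomic.
        batch = str(uuid.uuid4())
        tmp = os.path.join(STORE, batch + ".tmp")
        os.makedirs(tmp)
        for cid, info in connections.items():
            with open(os.path.join(tmp, cid), "w") as f:
                json.dump(info, f)
                f.flush()
                os.fsync(f.fileno())
        os.rename(tmp, os.path.join(STORE, batch))
        dirfd = os.open(STORE, os.O_RDONLY)  # make the rename itself durable
        try:
            os.fsync(dirfd)
        finally:
            os.close(dirfd)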
This all just looks like more trouble than it's worth to me.
<snip>
On 26/01/12 21:21, Saggi Mizrahi wrote:
<snip>
I agree with Adam; I don't think the operation should be atomic. Having only some of the connections persisted is a perfectly valid outcome if the API returns a list of statuses.
<snip>
----- Original Message -----
From: "Livnat Peer" lpeer@redhat.com To: "Saggi Mizrahi" smizrahi@redhat.com Cc: "Adam Litke" agl@us.ibm.com, engine-devel@ovirt.org, vdsm-devel@lists.fedorahosted.org Sent: Thursday, January 26, 2012 3:16:32 PM Subject: Re: [vdsm] [Engine-devel] [RFC] New Connection Management API
<snip>
What if it doesn't return at all? The only reason a manage call will fail is if the URI is broken, so I assume 99% of issued manage commands will succeed. My problem is with VDSM crashing mid-operation. The operation will appear to have failed, but when VDSM comes back some of the connections were persisted, so it will reconnect them. Because the client's manage call failed, it doesn't expect those CIDs to be in the list. This will cause ambiguity when finding an already-registered CID at runtime.
<snip>
On 26/01/12 23:42, Saggi Mizrahi wrote:
<snip>
I think that if VDSM did not return at all, it is a reasonable expectation to use the status verb to find the connection's status (managed or not).
General comment: it would help if the manage verb returned a dedicated error code indicating that the CID is already managed.
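With such an error code the client-side recovery becomes straightforward, e.g. (the code value and the ApiError wrapper are made up; only the verbs come from the RFC, and 'connectionInfo' is an assumed status field):

    class ApiError(Exception):
        def __init__(self, code, message=""):
            Exception.__init__(self, message)
            self.code = code

    ALREADY_MANAGED = 455   # hypothetical dedicated error code

    def manage_idempotent(client, uri, cid):
        # Treat "CID already managed" as success if the existing
        # registration points at the same target we asked for.
        try:
            client.manageStorageServer(uri, cid)
        except ApiError as e:
            if e.code != ALREADY_MANAGED:
                raise
            status = client.getStorageConnectionList()[cid]
            if status.get("connectionInfo") != uri:
                raise  # same CID, different target: a real conflict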
<snip>
On 26/01/12 17:00, Saggi Mizrahi wrote:
<snip>
As a rule, if an API call C can be implemented by doing A + B then C is redundant.
I disagree with the above statement; exposing a bulk of operations in a single API call is very common and not considered redundant.
<snip>
The demands are not without base. I would like to keep the code simple under the hood in the price of a few more calls. You would like to make less calls and keep the code simpler on your side. There isn't a real way to settle this.
It is not about keeping the code simple (writing a loop is simple as well), it is about redundant round trips.
If anyone on the list as pros and cons for either way I'd be happy to hear them. If no compelling arguments arise I will let Ayal call this one.
Transient connections:
The problem you are describing as I understand it is that VDSM did not respond and not that the API client did not respond. Again, this can happen for a number of reason, most of which VDSM might not be aware that there is actually a problem (network issues).
This relates to the EOL policy. I agree we have to find a good way to define an automatic EOL for resources. I have made my suggestion. Out of the scope of the API.
In the meantime cleaning stale connections is trivial and I have made it clear a previous email about how to go about it in a simple non intrusive way. Clean hosts on host connect, and on every poll if you find connections that you don't like. This should keep things squeaky clean.
I have no additional input on this.
I truly appreciate your effort for modeling clean and simple API, but at the end of the day if the users of the API don't think it is clean and simple you missed your goal.
----- Original Message -----
From: "Livnat Peer" lpeer@redhat.com To: "Saggi Mizrahi" smizrahi@redhat.com Cc: vdsm-devel@lists.fedorahosted.org, engine-devel@ovirt.org Sent: Thursday, January 26, 2012 5:22:42 AM Subject: Re: [Engine-devel] [RFC] New Connection Management API
On 25/01/12 23:35, Saggi Mizrahi wrote:
<SNIP> This is mail was getting way too long.
About the clear all verb. No. Just loop, find the connections YOU OWN and clean them. Even though you don't want to support multiple clients to VDSM API doesn't mean the engine shouldn't behave like a proper citizen. It's the same reason why VDSM tries and not mess system resources it didn't initiate.
There is a big difference, VDSM living in hybrid mode with other workload on the host is a valid use case, having more than one concurrent manager for VDSM is not. Generating a disconnect request for each connection does not seem like the right API to me, again think on the simple flow of moving host from one data center to another, the engine needs to disconnect tall storage domains (each domain can have couple of connections associated with it).
I am giving example from the engine use cases as it is the main user of VDSM ATM but I am sure it will be relevant to any other user of VDSM.
As I see it the only point of conflict is the so called non-peristed connections. I will call them transient connections from now on.
There are 2 user cases being discussed
- Wait until a connection is made, if it fails don't retry and
automatically unmanage. 2. If the called of the API forgets or fails to unmanage a connection.
Actually I was not discussing #2 at all.
Your suggestion as I understand it: Transient connections are: - Connection that VDSM will only try to connect to once and will not reconnect to in case of disconnect.
yes
My problem with this definition that it does not specify the "end of life" of the connection. Meaning it solves only use case 1.
since this is the only use case i had in mind, it is what i was looking for.
If all is well, and it usually is, VDSM will not invoke a disconnect. So the caller would have to call unmanage if the connection succeeded at the end of the flow.
agree.
Now, if you are already calling unmanage if connection succeeded you can just call it anyway.
not exactly, an example I gave earlier on the thread was that VSDM hangs or have other error and the engine can not initiate unmanaged, instead let's assume the host is fenced (self-fence or external fence does not matter), in this scenario the engine will not issue unmanage.
instead of doing: (with your suggestion)
manage wait until succeeds or lastError has value try: do stuff finally: unmanage
do: (with the canonical flow)
manage try: wait until succeeds or lastError has value do stuff finally: unmanage
This is simpler to do than having another connection type.
You are assuming the engine can communicate with VDSM and there are scenarios where it is not feasible.
Now that we got that out of the way lets talk about the 2nd use case.
Since I did not ask VDSM to clean after the (engine) user and you don't want to do it I am not sure we need to discuss this.
If you insist we can start the discussion on who should implement the cleanup mechanism but I'm afraid I have no strong arguments for VDSM to do it, so I rather not go there ;)
You dropped from the discussion my request for supporting list of connections for manage and unmanage verbs.
API client died in the middle of the operation and unmanage was never called.
Your suggested definition means that unless there was a problem with the connection VDSM will still have this connection active. The engine will have to clean it anyway.
The problem is, VDSM has no way of knowing that a client died, forgot or is thinking really hard and will continue on in about 2 minutes.
Connections that live until they die is a hard to define and work with lifecycle. Solving this problem is theoretically simple.
Have clients hold some sort of session token and force the client to update it at a specified interval. You could bind resources (like domains, VMs, connections) to that session token so when it expires VDSM auto cleans the resources.
This kind of mechanism is out of the scope of this API change. Further more I think that this mechanism should sit in the engine since the session might actually contain resources from multiple hosts and resources that are not managed by VDSM.
In GUI flows specifically the user might do actions that don't even touch the engine and forcing it to refresh the engine token is simpler then having it refresh the VDSM token.
I understand that engine currently has no way of tracking a user session. This, as I said, is also true in the case of VDSM. We can start and argue about which project should implement the session semantics. But as I see it it's not relevant to the connection management API.
----- Original Message -----
From: "Livnat Peer" lpeer@redhat.com To: "Saggi Mizrahi" smizrahi@redhat.com Cc: vdsm-devel@lists.fedorahosted.org, engine-devel@ovirt.org Sent: Thursday, January 26, 2012 3:03:39 PM Subject: Re: [Engine-devel] [RFC] New Connection Management API
On 26/01/12 17:00, Saggi Mizrahi wrote:
<snip> Again, trying to sum up and address all comments:
Clear all:
My opinion is still not to implement it. Even though it might generate a bit more traffic, premature optimization is bad, and there are other ways we can improve VDSM command overhead without doing this.
In any case this argument is moot because my intention (as Litke pointed out) is to have a lean API. An API call is something you have to support across versions; the same call implemented in the engine is something that no one has to support and can change\evolve easily.
As a rule, if an API call C can be implemented by doing A + B, then C is redundant.
I disagree with the above statement; exposing a bulk of operations in a single API call is very common and not considered redundant.
I agree that APIs with those kinds of calls exist, but that doesn't mean they are not redundant.
re·dun·dant: adj. (of words or data) Able to be omitted without loss of meaning or function
This call can be omitted without loss of function. API calls are a commitment for generations; wrapping this in the clients isn't. To quote myself: "API call is something you have to support across versions, this call implemented in the engine is something that no one has to support and can change\evolve easily." ~ Saggi Mizrahi, a few lines above
This API set will one day be considered stupid, obsolete and annoying. That's just how life is. We'll find better ways of solving these problems. When that moment comes I want to have as little functionality as possible that I have to keep maintaining. I doubt there is any way you can convince me otherwise.
Put yourself in my position and think if you would have made this sacrifice just to save someone a loop.
To sum up, I will not add any API calls I don't absolutely have to.
As to the number of calls, that is not relevant to the clear-all verb; it is addressed by the point right below this sentence.
List of connections as args:
Sorry, I forgot to respond about that. I'm not as strongly opposed to this idea as to the other things you suggested. It'll just make implementing the persistence logic in VDSM significantly more complicated, as I will have to commit multiple connections' information to disk in an all-or-nothing mode. I could create a small SQLite DB to do that, or do some directory tricks and exploit FS rename atomicity, but I'd rather not.
The demands are not without base. I would like to keep the code simple under the hood at the price of a few more calls. You would like to make fewer calls and keep the code simpler on your side. There isn't a real way to settle this.
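[For reference, the "directory tricks" Saggi alludes to are a well-known pattern: write the whole connection table to a temp file and rename it over the old one. A minimal sketch, assuming a JSON-serializable dict of connection info; the path and function name are illustrative, not actual VDSM code:]

import json
import os
import tempfile

def persist_connections(connections, path="/var/lib/vdsm/connections.json"):
    """Commit all connection info to disk in an all-or-nothing fashion.

    The table is written to a temp file on the same filesystem and then
    rename()d over the old file. POSIX rename is atomic, so a crash at
    any point leaves either the old table or the new one, never a mix.
    """
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path))
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(connections, f)
            f.flush()
            os.fsync(f.fileno())      # make sure the data hits the disk
        os.rename(tmp_path, path)     # the atomic all-or-nothing commit
    except Exception:
        os.unlink(tmp_path)
        raise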
It is not about keeping the code simple (writing a loop is simple as well), it is about redundant round trips.
As I said, I agree there is merit there.
I think that round trips are a general issue, not specific to this call. My opinion is that communication with VDSM should just use HTTP pipelining (http://en.wikipedia.org/wiki/HTTP_pipelining). This would solve the problem globally instead of tacking it onto the API interface.
I generally prefer simplicity of the API and the implementation, and correctness over performance.
I laid out what the change entails, multiple ways of solving this, and my personal perspective. Unless someone on the list objects to either solution, Ayal will have final say on this matter. He is more of a pragmatist than I (and doing what he says usually correlates with me getting my paycheck).
If anyone on the list has pros and cons for either way I'd be happy to hear them. If no compelling arguments arise I will let Ayal call this one.
Transient connections:
The problem you are describing, as I understand it, is that VDSM did not respond, not that the API client did not respond. Again, this can happen for a number of reasons, and in most of them VDSM might not even be aware that there is actually a problem (network issues).
This relates to the EOL policy. I agree we have to find a good way to define an automatic EOL for resources. I have made my suggestion. This is out of the scope of the API.
In the meantime, cleaning stale connections is trivial, and I made it clear in a previous email how to go about it in a simple, non-intrusive way: clean hosts on host connect, and on every poll if you find connections that you don't like. This should keep things squeaky clean.
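[Concretely, that cleanup loop might look like the sketch below, assuming the flow-encoding CID convention from the RFC (ENGINE_FLOW101_CON1) and a hypothetical vdsm client proxy exposing the proposed verbs:]

def clean_stale_connections(vdsm, active_cids, prefix="ENGINE_"):
    """Unmanage every connection this client owns (by CID prefix) that
    no currently running flow claims. Run on host connect and on poll."""
    for cid, status in vdsm.getStorageConnectionList().items():
        if status["managed"] and cid.startswith(prefix) and cid not in active_cids:
            vdsm.unmanageStorageServer(cid)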
I have no additional input on this.
The only real legitimate reservation you still have with the API is transient connections. As I said, if you can find a way to define an End Of Life condition I will implement it. Promise, David star my heart and hope to die.
Just some pointers:
* It cannot be time, as no flow is really restricted by time.
* It cannot be tied to specific calls as an argument (like createStorageDomain, connectStorageDomain, etc.), as this is arbitrary and solves problems only in very specific flows.
* It cannot be tied to VDSM restart or host restart. This just makes no sense, as there is never a connection between these and a flow.
* It cannot be tied to the number of connection attempts. Flows should be able to decide whether they want to keep on trying or bail out.
The EOL mechanism should not come at the expense of legitimate flows, and it must still cover the case where VDSM\the client can't clean up because of some unrecoverable error.
I proposed my solution to the EOL issue. The only problem I see you having with it is that it's not actually VDSM that has to implement it. You can either accept it or give a better one.
I know that garbage collection and other fun tricks have made everyone spoiled rotten about managing resources. But to create an automatic resource-freeing mechanism (GC) you have to have a clear EOL condition.
Examples:
- Cleaning FDs is easy, as processes have a very clear EOL.
- With memory it is a bit more complicated, but this is why you have ref-counting, hard-refs and weak-refs, loop detection, scoped pointers, and a million other strategies.
All I need to create a collection mechanism is a clear condition where I am guaranteed that the resource is not in use. Until I get that you will just have to manage the resources yourself. This is not an API issue, this is a general problem when making software.
I don't go ask the LVM guys to automatically clear VGs if I fail while creating a block domain. I understand and accept the fact that they are just too low in the stack to solve this for me.
I truly appreciate your effort for modeling clean and simple API, but at the end of the day if the users of the API don't think it is clean and simple you missed your goal.
Simple is an amorphous concept that everyone has a personal definition for. I don't care if this doesn't fit into what you consider simple. You are free to suggest improvements. Hell, just write your dream API; if it's better than this we'll use it.
----- Original Message -----
From: "Livnat Peer" lpeer@redhat.com To: "Saggi Mizrahi" smizrahi@redhat.com Cc: vdsm-devel@lists.fedorahosted.org, engine-devel@ovirt.org Sent: Thursday, January 26, 2012 5:22:42 AM Subject: Re: [Engine-devel] [RFC] New Connection Management API
On 25/01/12 23:35, Saggi Mizrahi wrote:
<SNIP> This mail was getting way too long.
About the clear-all verb: no. Just loop, find the connections YOU OWN, and clean them. Even though you don't want to support multiple clients of the VDSM API, that doesn't mean the engine shouldn't behave like a proper citizen. It's the same reason why VDSM tries not to mess with system resources it didn't initiate.
There is a big difference: VDSM living in hybrid mode with other workload on the host is a valid use case; having more than one concurrent manager for VDSM is not. Generating a disconnect request for each connection does not seem like the right API to me. Again, think of the simple flow of moving a host from one data center to another: the engine needs to disconnect all storage domains (and each domain can have a couple of connections associated with it).
I am giving examples from the engine use cases, as it is the main user of VDSM ATM, but I am sure they will be relevant to any other user of VDSM.
As I see it the only point of conflict is the so-called non-persisted connections. I will call them transient connections from now on.
There are 2 use cases being discussed:
1. Wait until a connection is made; if it fails, don't retry, and automatically unmanage.
2. The caller of the API forgets or fails to unmanage a connection.
Actually I was not discussing #2 at all.
Your suggestion as I understand it: transient connections are connections that VDSM will only try to connect to once, and will not reconnect to in case of disconnect.
yes
My problem with this definition is that it does not specify the "end of life" of the connection. Meaning it solves only use case 1.
Since this is the only use case I had in mind, it is what I was looking for.
If all is well, and it usually is, VDSM will not invoke a disconnect. So the caller would have to call unmanage if the connection succeeded at the end of the flow.
agree.
Now, if you are already calling unmanage when the connection succeeded, you can just call it anyway.
Not exactly. An example I gave earlier on the thread was that VDSM hangs or has some other error and the engine cannot initiate unmanage; instead, let's assume the host is fenced (self-fence or external fence does not matter). In this scenario the engine will not issue unmanage.
instead of doing: (with your suggestion)
manage
wait until succeeds or lastError has value
try:
    do stuff
finally:
    unmanage
do: (with the canonical flow)
manage
try:
    wait until succeeds or lastError has value
    do stuff
finally:
    unmanage
This is simpler to do than having another connection type.
You are assuming the engine can communicate with VDSM and there are scenarios where it is not feasible.
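[To make the canonical flow concrete, here is a rough sketch against the proposed verbs. The vdsm proxy, the helper name, and the exact status keys (taken from the example return value later in the thread) are assumptions, not settled API:]

import time

class StorageConnectionError(Exception):
    pass

def run_with_connection(vdsm, uri, cid, action, timeout=60.0, interval=2.0):
    """Canonical flow: manage, poll until the connection is up, do the
    work, and always unmanage, even if we never got connected."""
    vdsm.manageStorageServer(uri, cid)   # returns once the CID is registered
    try:
        deadline = time.time() + timeout
        while True:
            status = vdsm.getStorageConnectionList()[cid]
            if status["connected"]:
                break                    # connection is up, proceed
            if status["lastError"]:      # nonzero error code: "try once" semantics
                raise StorageConnectionError(status["lastError"])
            if time.time() >= deadline:  # "timeout" semantics
                raise StorageConnectionError("timed out waiting for %s" % cid)
            time.sleep(interval)
        action()                         # the actual "do stuff" step
    finally:
        vdsm.unmanageStorageServer(cid)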
Now that we got that out of the way, let's talk about the 2nd use case.
Since I did not ask VDSM to clean after the (engine) user and you don't want to do it I am not sure we need to discuss this.
If you insist we can start the discussion on who should implement the cleanup mechanism, but I'm afraid I have no strong arguments for VDSM to do it, so I'd rather not go there ;)
You dropped from the discussion my request for supporting a list of connections for the manage and unmanage verbs.
API client died in the middle of the operation and unmanage was never called.
Your suggested definition means that unless there was a problem with the connection VDSM will still have this connection active. The engine will have to clean it anyway.
The problem is, VDSM has no way of knowing that a client died, forgot or is thinking really hard and will continue on in about 2 minutes.
Connections that live until they die have a lifecycle that is hard to define and work with. Solving this problem is theoretically simple.
Have clients hold some sort of session token and force the client to update it at a specified interval. You could bind resources (like domains, VMs, connections) to that session token so when it expires VDSM auto cleans the resources.
This kind of mechanism is out of the scope of this API change. Furthermore, I think that this mechanism should sit in the engine, since the session might actually contain resources from multiple hosts, and resources that are not managed by VDSM.
In GUI flows specifically, the user might do actions that don't even touch the engine, and forcing it to refresh the engine token is simpler than having it refresh the VDSM token.
I understand that the engine currently has no way of tracking a user session. This, as I said, is also true in the case of VDSM. We can argue about which project should implement the session semantics, but as I see it, that's not relevant to the connection management API.
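[The session-token idea floated above can be sketched as a small lease table. This is purely illustrative, since neither project implements such a mechanism; all names here are invented:]

import threading
import time

class SessionRegistry:
    """Toy lease table: resources bound to a token are released
    automatically once the client stops renewing the token."""

    def __init__(self, ttl=30.0):
        self.ttl = ttl
        self._lock = threading.Lock()
        self._sessions = {}  # token -> [expiry, set of resource ids]

    def renew(self, token):
        with self._lock:
            entry = self._sessions.setdefault(token, [0, set()])
            entry[0] = time.time() + self.ttl

    def bind(self, token, resource_id):
        with self._lock:
            self._sessions[token][1].add(resource_id)  # KeyError if never renewed

    def reap(self, release):
        """Release every resource whose owning token has expired."""
        now = time.time()
        with self._lock:
            for token in list(self._sessions):
                expiry, resources = self._sessions[token]
                if expiry < now:
                    for rid in resources:
                        release(rid)   # e.g. unmanage the connection
                    del self._sessions[token]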
On 25/01/12 16:20, Adam Litke wrote:
On Mon, Jan 23, 2012 at 04:54:10PM -0500, Saggi Mizrahi wrote:
Nitty Gritty:
This seems like a good API but I have some suggestions with respect to API naming:
manageStorageServer
Could we name this manageStorageConnection or manageStorageServerConnection? "Manage storage server" is confusing because it implies you are managing the server itself (i.e. server configuration, NFS exports, reboot, etc.).
+1
Synopsis: manageStorageServer(uri, connectionID):
Parameters:
uri - a URI pointing to a storage target (e.g. nfs://server:export, iscsi://host/iqn;portal=1)
connectionID - string with any char except "/".
Description: Tells VDSM to start managing the connection. From this moment on VDSM will try to have the connection available when needed. VDSM will monitor the connection and will automatically reconnect on failure.
Returns: Success code if VDSM was able to manage the connection. It usually just verifies that the arguments are sane and that the CID is not already in use. This doesn't mean the host is connected.
unmanageStorageServer
To match above: unmanageStorageConnection or unmanageStorageServerConnection
+1
Synopsis: unmanageStorageServer(connectionID):
Parameters: connectionID - string with any char except "/".
Description: Tells VDSM to stop managing the connection. VDSM will try to disconnect from the storage target if this is the last CID referencing the storage connection.
Returns: Success code if VDSM was able to unmanage the connection. It will return an error if the CID is not registered with VDSM. Disconnect failures are not reported. Active unmanaged connections can be tracked with getStorageServerList()
getStorageServerList
getStorageConnectionList or getStorageServerConnectionList
+1
Synopsis: getStorageServerList()
Description: Will return a list of all managed and unmanaged connections. Unmanaged connections have temporary IDs and are not guaranteed to be consistent across calls.
Results: A mapping between CIDs and their status. Example return value (actual key names may differ):
{'conA': {'connected': True,
          'managed': True,
          'lastError': 0,
          'connectionInfo': {'remotePath': 'server:/export',
                             'retrans': 3,
                             'version': 4}},
 'iscsi_session_34': {'connected': False,
                      'managed': False,
                      'lastError': 339,
                      'connectionInfo': {'hostname': 'dandylopn',
                                         'portal': 1}}}
Top posting since there was a long thread on this anyway. Some questions/comments:
1. About the CIDs: it sounds like the engine needs to persist this info so it can resume normally in case of a failure/restart (this is different from today, when the persisted info is the connection details rather than some generated identifier)?
2. Sounds like the engine needs to block in certain cases after a manageConnection to make sure the connection is there and alive before doing an operation. This means the engine now has to check that a host has all relevant connections online before choosing it as a target for live migration, even for a regular VM (all disks on a storage domain). Worse/uglier (well, imho), in the case of a disk based on a direct LUN, the engine needs to actively connect the target host, poll till it's up, and only then live migrate (it would be much nicer if the vdsm migration protocol took care of this manageConnection call, preserving the CID?).
3. In unmanageStorageServer(connectionID) below you finish with: "Returns: Success code if VDSM was able to unmanage the connection. It will return an error if the CID is not registered with VDSM. Disconnect failures are not reported. Active unmanaged connections can be tracked with getStorageServerList()"
It is not clear whether vdsm will retry to disconnect, and how races between those retries and new manage-connection requests will be handled. If the connection only becomes unmanaged, there is no way to track and clean it up (the engine is not supposed to touch the unmanaged connections).
4. I don't think we handle this today, but while we are planning for the future: what if the host needs one of the connections to exist regardless of the engine, for another need (say it boots from network from the same iscsi target; this is an unmanaged connection which you will disconnect based on the CID refcount concept)? I.e., what happens if the host has an unmanaged connection which becomes a managed one? Solving this probably means that when adding a connection, we need to add an unmanaged_existed_before CID for the refcount?
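[For readers following the refcount discussion, the CID-to-connection mapping from the RFC reduces to something like the toy below; connect/disconnect stand in for the real storage operations, and none of this is actual VDSM code:]

def connect(uri):
    print("connecting to", uri)       # placeholder for the real connect

def disconnect(uri):
    print("disconnecting from", uri)  # placeholder for the real disconnect

class ConnectionManager:
    """Maps CIDs to connections; the physical connect/disconnect happens
    only on the first and last reference to a given target."""

    def __init__(self):
        self._cids = {}   # cid -> connection key (e.g. normalized uri)
        self._refs = {}   # connection key -> set of cids

    def manage(self, uri, cid):
        if cid in self._cids:
            raise ValueError("CID already in use: %s" % cid)
        self._cids[cid] = uri
        holders = self._refs.setdefault(uri, set())
        if not holders:
            connect(uri)              # first reference: actually connect
        holders.add(cid)

    def unmanage(self, cid):
        uri = self._cids.pop(cid)     # raises KeyError for an unknown CID
        holders = self._refs[uri]
        holders.discard(cid)
        if not holders:               # last reference: actually disconnect
            del self._refs[uri]
            disconnect(uri)

[Itamar's boot-from-SAN case is exactly a reference that never went through manage(), which is why he suggests seeding the refcount with a synthetic unmanaged_existed_before CID.]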
On 01/23/2012 11:54 PM, Saggi Mizrahi wrote:
<snip>
On 28/01/12 04:50, Itamar Heim wrote:
Top posting since there was a long thread on this anyway. Some questions/comments:
- About the CIDs: it sounds like the engine needs to persist this info, so it can resume normally in case of a failure/restart (this is different from today, when the persisted info is the connection details, rather than some generated identifier)?
This info should be persisted in the engine, in addition to the connection details.
- Sounds like the engine needs to block in certain cases after a manageConnection to make sure the connection is there and alive before doing an operation. This means the engine now has to check that a host has all relevant connections online before choosing it as a target for live migration, even for a regular VM (all disks on a storage domain).
With the current flow it is not needed for a 'regular VM'. The engine currently does not monitor the storage domain's connections on a periodic basis, because the storage domain status represents the availability of the domain.
<snip>
On 01/23/2012 11:54 PM, Saggi Mizrahi wrote:
<snip>
----- Original Message -----
Top posting since there was a long thread on this anyway. Some questions/comments:
- About the CIDs: it sounds like the engine needs to persist this info, so it can resume normally in case of a failure/restart (this is different from today, when the persisted info is the connection details, rather than some generated identifier)?
It doesn't have to. The engine can have 2 types of CIDs:
1. For engine-internal flows: these would mostly be around a storage domain, so the CID can start with the SD name, and the engine can derive it whenever it wants to disconnect the connections of an SD. Several use cases for this:
a. when moving a host between DCs (and don't tell me the engine can't run operations on a host in 'maintenance' mode; either solve this silliness or allow moving a host directly between DCs without it being in maintenance)
b. when removing a connection from the storage domain definition
c. when removing the storage domain from the db
Anyway, in the above cases you have 2 options:
i. just call getStorageConnectionList and disconnect anything that starts with this domain name
ii. if the name is constant (e.g. SD_NAME-CONN-IQN) then you can always 'build it'
2. For UI-generated flows: I see no reason to persist anything here; we just need a connection cleanup flow to get rid of irrelevant connections (you would need this anyway).
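[Option ii above, sketched with invented helper names; the SD_NAME-CONN-IQN format comes straight from the suggestion, and the vdsm proxy is assumed:]

def sd_cid(sd_name, iqn):
    """Build a deterministic CID for one of a storage domain's connections."""
    return "%s-CONN-%s" % (sd_name, iqn)

def drop_domain_connections(vdsm, sd_name):
    """Unmanage every connection whose CID carries the domain prefix,
    e.g. when moving the host to another data center."""
    prefix = sd_name + "-CONN-"
    for cid, status in vdsm.getStorageConnectionList().items():
        if status["managed"] and cid.startswith(prefix):
            vdsm.unmanageStorageServer(cid)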
- Sounds like the engine needs to block in certain cases after a manageConnection to make sure the connection is there and alive before doing an operation.
I don't see how this is different from today.
This means the engine now has to check that a host has all relevant connections online before choosing it as a target for live migration, even for a regular VM (all disks on a storage domain). Worse/uglier (well, imho), in the case of a disk based on a direct LUN, the engine needs to actively connect the target host, poll till it's up, and only then live migrate (it would be much nicer if the vdsm migration protocol took care of this manageConnection call, preserving the CID?).
Today all hosts are connected to all the storage connections (that are needed for storage domains) beforehand and when a VM is migrated the connection is assumed to be on the target host. What is the difference? You preconnect to make sure the host is a valid target for migration. If you keep a set of hosts always connected you can better ensure SLAs.
- in unmanageStorageServer(connectionID) below you finish with
"Returns: Success code if VDSM was able to unmanage the connection. It will return an error if the CID is not registered with VDSM. Disconnect failures are not reported. Active unmanaged connections can be tracked with getStorageServerList()"
It is not clear whether vdsm will retry to disconnect, and how races between those retries and new manage-connection requests will be handled. If the connection only becomes unmanaged, there is no way to track and clean it up (the engine is not supposed to touch the unmanaged connections).
Basically, disconnect should never fail; if it does then there is a bug somewhere, as it does not require access to a remote host. If and when this happens, QE will open a bug. Unless we have a ton of manage ops and then unmanage ops which fail, this will not be an issue at all.
- I don't think we handle this today, but while we are planning for the future: what if the host needs one of the connections to exist regardless of the engine, for another need (say it boots from network from the same iscsi target; this is an unmanaged connection which you will disconnect based on the CID refcount concept). I.e., what happens if the host has an unmanaged connection, which becomes a managed one? Solving this probably means that when adding a connection, we need to add an unmanaged_existed_before CID for the refcount?
So what happens if the order is reversed? I.e., manage is called by oVirt and then the other application calls connect underneath? VDSM would have no idea this happened and would disconnect when unmanage arrives. The solution is simple: either separate the connections (hybrid mode doesn't mean we share connections) or the other application has to be aware it is running in hybrid mode and then go through VDSM for connecting (no app is hybrid-aware today).
On 01/23/2012 11:54 PM, Saggi Mizrahi wrote:
<snip>