On 05/19/2011 04:19 PM, Jeff Darcy wrote:
> On 05/19/2011 03:03 PM, Kaleb KEITHLEY wrote:
>> No attempt is made to avoid collisions between server-side uids and
>> gids among multiple tenants. To avoid collisions, the per-tenant xlator
>> configuration may specify ranges that cannot collide. (In fact that
>> seems like a good enhancement for the cloudfs python script that creates
>> the gluster volume config file.)
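(As a rough illustration of that enhancement: the volfile generators are Python, so here is a minimal Python sketch of carving fixed-size, non-overlapping per-tenant ranges at volfile-generation time. The constants and names are made up for illustration, not anything in the current scripts.)

    # Hypothetical sketch: assign each tenant a fixed-size, non-overlapping
    # server-side uid/gid range when the volfile is generated, so collisions
    # are impossible by construction.  RANGE_SIZE and BASE_ID are invented.
    RANGE_SIZE = 100000
    BASE_ID = 1000000

    def tenant_id_range(tenant_index):
        """Return (low, high) server-side uid/gid bounds for the Nth tenant."""
        low = BASE_ID + tenant_index * RANGE_SIZE
        return low, low + RANGE_SIZE - 1

    # The generator would then emit each per-tenant uidmap stanza with its
    # own bounds, e.g.:
    for i, tenant in enumerate(["red", "blue"]):
        print(tenant, tenant_id_range(i))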
> We definitely do need to avoid collisions, and I don't think trying to
> push it into the management stack is sufficient. IDs would have to be
> allocated in chunks, with the chunk assignments stored separately from
> the UID assignments but reconciled with them during load. What happens
> when a tenant uses up their chunk and needs a new one? We'd need to
> coordinate carefully between the management code and the running
> glusterfsd, perhaps reconfigure/reload the latter, allow specification
> of multiple ranges because adjacent ones are likely to be taken already
> (or implement re-mapping of a tenant's already-allocated IDs), and so
> on. It seems to me that allocating from a single table within the
> translator would be a lot simpler. Is there a reason that's not the case?
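(As an aside, a minimal sketch of the single-table idea, in Python for brevity even though the translator itself is C; everything here is illustrative, not the actual uidmap code.)

    # One shared map and one allocator for all tenants, so two tenants can
    # never be handed the same server-side uid.
    class UidMap(object):
        def __init__(self, first_free=1000000):
            self.table = {}          # (tenant, client_uid) -> server_uid
            self.next_free = first_free

        def map_uid(self, tenant, client_uid):
            key = (tenant, client_uid)
            if key not in self.table:
                self.table[key] = self.next_free   # allocate from the single pool
                self.next_free += 1
            return self.table[key]

    m = UidMap()
    assert m.map_uid("red", 500) != m.map_uid("blue", 500)  # same client uid, no collision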
If each tenant runs in a separate server instance, then each new mapping
requires a read-modify-write of the single, global persisted map file, so
that each instance's in-memory table stays reconciled with what the other
servers have done. I started to go down that route but didn't like the
performance implications. If that's acceptable from a performance
standpoint, though, I'll do it that way. Adding new mappings should be
infrequent, so in retrospect I guess I'm okay with it.
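(A rough sketch of that read-modify-write, assuming one shared map file and flock() for mutual exclusion between server instances; the file name, format, and field names are invented for illustration.)

    import fcntl, json, os

    MAP_FILE = "/var/lib/cloudfs/uidmap.json"   # hypothetical path and format

    def add_mapping(tenant, client_uid):
        """Allocate a server-side uid for (tenant, client_uid), first picking
        up whatever mappings other server instances have persisted."""
        fd = os.open(MAP_FILE, os.O_RDWR | os.O_CREAT, 0o600)
        try:
            fcntl.flock(fd, fcntl.LOCK_EX)              # serialize all updaters
            data = os.read(fd, 1 << 20)                 # read
            state = json.loads(data) if data else {"next": 1000000, "map": {}}
            key = "%s:%d" % (tenant, client_uid)
            if key not in state["map"]:                 # modify
                state["map"][key] = state["next"]
                state["next"] += 1
                os.lseek(fd, 0, os.SEEK_SET)            # write back
                os.ftruncate(fd, 0)
                os.write(fd, json.dumps(state).encode())
                os.fsync(fd)
            return state["map"][key]
        finally:
            os.close(fd)                                # closing releases the lock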
We should talk about how we want to handle running out of uids in the
configured range. Reconfigure/reload seems like it might be okay.
> P.S. The cloudfs script is no longer used. Volfiles are now generated
> from cfs_start_volume.py (server) and cfs_mount.py (client).
cfs_start_volume.py isn't in the uidmap branch yet. Which way do you
want to go: a) merge cloudfsd -> uidmap or b) wait until we merge uidmap
-> cloudfsd and fix cfs_start_volume.py then?