On 05/19/2011 03:03 PM, Kaleb KEITHLEY wrote:
> No attempt is made to avoid collisions between server-side uids and
> gids among multiple tenants. To avoid collisions the per-tenant xlator
> configuration may specify ranges that cannot collide. (In fact that
> seems like a good enhancement for the cloudfs python script that
> creates the gluster volume config file.)
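For concreteness, per-tenant ranges might be expressed as options in the
generated volfile. This is only a sketch: the translator type and the
uid-range/gid-range option names below are hypothetical, not actual
CloudFS options.

    # hypothetical per-tenant range options, one uidmap volume per tenant
    volume tenant-red-uidmap
        type features/uidmap
        option uid-range 10000-19999
        option gid-range 10000-19999
        subvolumes tenant-red
    end-volume

    volume tenant-blue-uidmap
        type features/uidmap
        option uid-range 20000-29999
        option gid-range 20000-29999
        subvolumes tenant-blue
    end-volume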
We definitely do need to avoid collisions, and I don't think trying to
push it into the management stack is sufficient. IDs would have to be
allocated in chunks, with the chunk assignments stored separately from
the UID assignments but reconciled with them during load. What happens
when a tenant uses up their chunk and needs a new one? We'd need to
coordinate carefully between the management code and the running
glusterfsd: perhaps reconfigure or reload the latter, allow
specification of multiple ranges (since the adjacent ones are likely to
be taken already), implement re-mapping of a tenant's already-allocated
IDs, and so on. It seems
to me that allocating from a single table within the translator would be
a lot simpler. Is there a reason that's not the case?
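To make the single-table alternative concrete, here's a minimal sketch
in plain C. None of this is actual translator code; id_table_t, map_uid,
and the tenant names are all made up. The point is just that every
(tenant, client-side uid) pair draws its server-side uid from one shared
counter, so collisions are impossible by construction and no per-tenant
range bookkeeping is needed:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct id_map {
    char            tenant[64];   /* tenant identity, e.g. from auth */
    uint32_t        client_uid;   /* uid as presented by the client */
    uint32_t        server_uid;   /* uid actually stored on disk */
    struct id_map  *next;
} id_map_t;

typedef struct {
    id_map_t *head;
    uint32_t  next_uid;           /* one counter shared by all tenants */
} id_table_t;

/*
 * Return the server-side uid for (tenant, client_uid), allocating the
 * next free one on first sight.  A real translator would persist this
 * table and reconcile it at load time.
 */
static uint32_t
map_uid(id_table_t *tbl, const char *tenant, uint32_t client_uid)
{
    id_map_t *m;

    for (m = tbl->head; m; m = m->next)
        if (m->client_uid == client_uid && !strcmp(m->tenant, tenant))
            return m->server_uid;

    m = calloc(1, sizeof(*m));
    if (!m)
        abort();
    snprintf(m->tenant, sizeof(m->tenant), "%s", tenant);
    m->client_uid = client_uid;
    m->server_uid = tbl->next_uid++;  /* unique by construction */
    m->next = tbl->head;
    tbl->head = m;
    return m->server_uid;
}

int
main(void)
{
    id_table_t tbl = { NULL, 10000 };

    /* Two tenants both present uid 500; each gets a distinct
     * server-side uid with no pre-assigned ranges. */
    printf("red/500  -> %u\n", map_uid(&tbl, "red", 500));
    printf("blue/500 -> %u\n", map_uid(&tbl, "blue", 500));
    printf("red/500  -> %u (stable on lookup)\n", map_uid(&tbl, "red", 500));
    return 0;
}

The table would still have to be persisted and reconciled at load, as
described above for chunk assignments, but there would be nothing to
coordinate with the management stack.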
P.S. The cloudfs script is no longer used. Volfiles are now generated
from cfs_start_volume.py (server) and cfs_mount.py (client).