https://bugzilla.redhat.com/show_bug.cgi?id=518544
Resolves: bug 518544
Bug Description: large entries cause server SASL responses to fail
Reviewed by: ???
Files: see diff
Branch: HEAD and 1.2
Fix Description: The SASL server code was broken when we switched over to using NSPR I/O for the SASL I/O layer. If the entire encrypted buffer could not be sent to the client, the server simply failed. Instead, the server must keep track of how many encrypted bytes have been sent. If all of the encrypted bytes could not be sent, we must return the appropriate error to the caller to let it know the operation would block. The caller in this case is the write_function(), which does a poll() to see if the socket is available for writing again, then attempts the send again (a sketch of this pattern appears below). I also cleaned up usage of the various Debug macros. Finally, I discovered that the sasl init code was calling config_get_localhost() before that value could be set. In most cases this is OK, because the code falls back to the default hostname from the system. However, if for some reason you want to use a different localhost value, it will fail. The call is now made in the bootstrap config code.
Platforms tested: RHEL5 x86_64
Flag Day: no
Doc impact: no
https://bugzilla.redhat.com/attachment.cgi?id=358289&action=diff
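
The following is a minimal sketch of the partial-write handling described in the Fix Description, not the actual patch (see the attachment diff for that). The structure and names here (sasl_io_private, encrypted_buffer, encrypted_buffer_offset, sasl_io_send) are hypothetical; only the NSPR calls (PR_Send, PR_GetError, PR_WOULD_BLOCK_ERROR) are real API. The point is that the layer remembers how far into the encrypted buffer it got, and reports "would block" instead of failing when the lower socket cannot take the rest.

/*
 * Hypothetical sketch: track how many encrypted bytes have been sent and
 * surface PR_WOULD_BLOCK_ERROR to the caller (the connection's write
 * function, which polls for writability and retries) instead of failing.
 */
#include <nspr.h>

typedef struct sasl_io_private {
    char     *encrypted_buffer;        /* output of sasl_encode()          */
    PRUint32  encrypted_buffer_count;  /* total encrypted bytes to send    */
    PRUint32  encrypted_buffer_offset; /* encrypted bytes already sent     */
} sasl_io_private;

/* Try to push the remaining encrypted bytes down to the real socket layer. */
static PRInt32
sasl_io_send(PRFileDesc *fd, sasl_io_private *sp,
             PRIntn flags, PRIntervalTime timeout)
{
    while (sp->encrypted_buffer_offset < sp->encrypted_buffer_count) {
        PRInt32 ret = PR_Send(fd->lower,
                              sp->encrypted_buffer + sp->encrypted_buffer_offset,
                              sp->encrypted_buffer_count - sp->encrypted_buffer_offset,
                              flags, timeout);
        if (ret > 0) {
            /* Partial send is fine - just remember how far we got. */
            sp->encrypted_buffer_offset += ret;
        } else if ((ret < 0) && (PR_GetError() == PR_WOULD_BLOCK_ERROR)) {
            /* Could not send everything now.  Leave the offset alone and
             * return -1 with PR_WOULD_BLOCK_ERROR still set, so the caller
             * knows to poll() and call us again rather than abort. */
            return -1;
        } else {
            /* Genuine error (or peer closed the connection). */
            return -1;
        }
    }
    /* Everything went out; reset for the next encrypted buffer. */
    sp->encrypted_buffer_offset = 0;
    sp->encrypted_buffer_count = 0;
    return 0;
}

On the retry path the caller would check PR_GetError() for PR_WOULD_BLOCK_ERROR, wait for PR_POLL_WRITE on the socket, and then call the send routine again; the preserved offset guarantees no encrypted bytes are re-sent or dropped.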