Hi Thierry,

Now that I have reworked my epoll change for the master branch, I have submitted a draft PR: https://github.com/lslile/389-ds-base/pull/1

We are hitting a C10K-ish issue where I work, following the removal of nunc-stans.  In our environment there are both large inrushes of clients and latency requirements.  Fortunately we never had any stability issues with nunc-stans; I have been running Directory Server for a long time, so I tend to update only to address specific issues.  We are currently migrating to 1.4.x because Red Hat support for 1.3.x has ended.


I hope what I have done can be merged directly with the multiple polling thread work.  Having both together seems like the best architecture.


I'm having some issues with "lost connections" under load, and I think I may have uncovered something interesting.  Inconveniently, turning up the debug log level hides the issue, so I have resorted to intercepting calls with macros to keep logging to a minimum.
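
The interception looks roughly like this (a sketch only; LOGGED_CALL and the call-site shown are stand-ins for what I actually wrapped, but slapi_log_err and the conn fields match what appears in the log below):

    #include <inttypes.h>

    /* Wrap a call site so a single targeted log line is emitted without
     * raising the global debug level.  c_connid and c_fdi mirror the
     * fields shown in the log excerpt below. */
    #define LOGGED_CALL(fn, c, ...)                                     \
        do {                                                            \
            fn(__VA_ARGS__);                                            \
            slapi_log_err(SLAPI_LOG_ERR, (char *)__func__,              \
                          #fn "(...) conn=%" PRIu64 " c_fdi=%d\n",      \
                          (c)->c_connid, (c)->c_fdi);                   \
        } while (0)

    /* e.g. at the call site in handle_new_connection:
     *   LOGGED_CALL(connection_table_move_connection_on_to_active_list,
     *               conn, ct, conn);
     */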

The log from a "lost connection" looks like this:

[07/Feb/2022:17:45:11.573235179 -0500] conn=732 fd=795 slot=795 connection from 10.232.87.67 to 10.60.17.156
[07/Feb/2022:17:45:11.581206879 -0500] - ERR - handle_new_connection:2453 - connection_table_move_connection_on_to_active_list(0x7fd260959e40, 0x7fd25784df60) conn=732 c_fdi=-1

[ at this point the server was shut down to capture logs ]

[07/Feb/2022:19:05:28.228100129 -0500] - ERR - connection_table_disconnect_all:240 - disconnect_server_nomutex(0x7fd25784df60, 732, -1, Connection aborted - A1, Operation canceled) conn=732 c_fdi=0
[07/Feb/2022:19:06:34.466489954 -0500] - ERR - connection_done:146 - connection_cleanup(0x7fd25784df60) conn=732 c_fdi=0

The connection never triggered handle_epoll_pr_read_ready and was never processed out in epoll_pr_idle_pds.  I'm guessing it either didn't get added to the connection table, never got flagged as active, or was somehow locked, preventing epoll_pr_idle_pds from processing it.  It did get handled by the shutdown cleanup, which I believe rules out its never reaching the connection table linked list.  I will try to dump status information during shutdown to see whether the cause can be determined postmortem.
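
The dump would be something along these lines (a sketch; the field names are modelled on the existing structures and the fields in the log, so treat them as assumptions):

    /* Walk the connection table at shutdown and log the state of every
     * in-use slot, so a lost connection can be examined postmortem.
     * Field names (size, c, c_sd, c_connid, c_fdi, c_flags, c_refcnt)
     * are assumptions modelled on the existing structures. */
    static void
    connection_table_dump_status(Connection_Table *ct)
    {
        for (int i = 0; i < ct->size; i++) {
            Connection *c = &(ct->c[i]);
            if (c->c_sd == SLAPD_INVALID_SOCKET) {
                continue; /* unused slot */
            }
            slapi_log_err(SLAPI_LOG_ERR, "connection_table_dump_status",
                          "slot=%d conn=%" PRIu64 " c_fdi=%d flags=0x%x refcnt=%d\n",
                          i, c->c_connid, c->c_fdi, c->c_flags, c->c_refcnt);
        }
    }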


This happened to 2 of 760 connections.  I'm trying to keep the test to a manageable size: large enough to elicit the problem, but not so large that I can't find the lost connections.  The gross statistics for the run look promising, but the stats from the client that held conn=732 and conn=442 were lost and are not included.  This was a 2-minute slamd replay, with 1 minute of stats gathering, of a live log file from my environment.

Count     Avg/Second     Avg/Interval     Std Dev     Corr Coeff
60053     500.442     30026.500     204.483     -0.878

Operation Result Codes:
0 (Success) 54728 (99.974%)
87 (filter error) 13 (0.024%)
32 (no such object) 1 (0.002%)
4 (size limit exceeded) 0 (0.000%)
85 (timeout) 0 (0.000%)

Operation Ratios:      
Search 54742 (100.000%)


I look forward to any insights and feedback from you and the other 389 developers.

Thanks.

--Larry

On Wednesday, February 2, 2022, 03:25:01 AM EST, Thierry Bordaz <tbordaz@redhat.com> wrote:


Hi Larry,

That is excellent news that you are resurrecting the epoll effort. As Mark mentioned, we were unable to stabilize our first attempt (nunc-stans). It worked very well, but debugging the remaining bugs was a nightmare. The bug(s) happened after long periods of running, without a clear pattern (sometimes under high connection pressure, sometimes under low); only a few symptoms were reproducible, and the debug log hid the symptoms.

nunc-stans (epoll) was designed to address the C10K problem; is it for the same reason that you are working on epoll at the moment? What is the issue you are trying to solve (response time, high CPU, ...)?

At the moment we are looking at another option (several polling threads) that should hopefully address the response time. It would be great if we could easily support the two options.

Regards,
Thierry

On 2/2/22 12:17 AM, Larry Lile wrote:
Hi Mark,

I take no negativity from your comments, nor am I surprised or dissuaded.  :-)  I will push the patch forward to the master branch.

I do have an agenda for getting this patch working with 1.4: I am in the middle of a DS10 (389-ds-base-1.3.6.1-26.el7_4) to DS11 (389-ds-base-1.4.3.27-2...) migration, and performance has fallen off badly in my environment with the loss of nunc-stans.  That, however, can be "my problem for later" if I can help deliver epoll in 2.x.

Any feedback on the code, method, etc. as it stands will be integrated into my work against the master branch.

Thanks.

--Larry


On Tuesday, February 1, 2022, 05:50:22 PM EST, Mark Reynolds <mareynol@redhat.com> wrote:


Hi Larry,

I have not reviewed any of your work yet, but I have some comments about this effort.  Many years ago we all spent a lot of time trying to use epoll through something we called nunc-stans.  It was a mess, and we were never able to resolve all the issues we found (FD leaks, lost connections, instability, high CPU, loops, etc.).  You can see this code in the 1.4.1 branch, I believe (it was stripped out in later releases).

Now, the connection code is very fragile, and any changes will need extensive testing.  That being said, the 1.4.4 branch is essentially dead (especially in regard to major RFEs).  Any work you are doing should be done on the master branch (which will be 2.1.0), because any major changes to the connection code will not be backported to anything earlier (sorry).

Anyway, I'm not trying to be negative.  This is exciting work, and it's something we had wanted to go back and revisit.  I'm sure William and Thierry will have some comments about the challenges we faced and some testing scenarios to try.  But you will need to port this to the master branch, so I suggest getting off of 1.4.4 ASAP.

Thanks again for working on this, and we will help you as much as we can!

Cheers,
Mark

On 2/1/22 5:20 PM, Larry Lile wrote:
Hi,

I have been working on converting slapd from using NSPR PR_Poll to using epoll(7), forked from release 1.4.4.

The fork can be found here: https://github.com/lslile/389-ds-base/tree/epoll.  The patch is also attached.

I would appreciate any feedback from the community on my progress so far, and any assistance with bringing this change to completion.  I also hope that it might be integrated with James Chapman's Connection Table splitting proposal and his further proposal regarding listener threading.

I believe my code still contains an error that occasionally causes it to lose track of a connection under heavy load, but I have so far been unable to find it.  It doesn't seem to happen when I have logging at SLAPI_LOG_CONNS, so it is possible I have caused or encountered a race condition.

I tried not to deviate too far from the existing code; the major changes at this point are:
  • Listeners moved to a listen_table (setup_epoll_listen_pds)
    • listen_table is a list of Connections so they can be handled in the same way as a client Connection
    • Differentiated by Connection->conn_state = CONN_STATE_LISTEN
    • Connection_Table->listen_count is no longer maintained
    • Eliminates listener pd handling from setup_pr_read_pds
  • epoll_arm_listen_pds
    • Adds or removes all listener pds from epoll
    • Triggered from main event loop based on connection count limits
  • Connection_Table->fd is currently only maintained for listeners
    • This could likely be eliminated entirely, but I'm not sure what to do with signalpipe to accomplish that
  • Connection_Table->epollfd has been added to hold the epoll fd set
  • handle_new_connection
    • Adds descriptors to epoll immediately (sketched after this list)
    • Eliminates the need for setup in setup_pr_read_pds
  • epoll_pr_idle_pds ( timeout related section of setup_pr_read_pds )
    • Should only handle client timeouts or special cases for re-adding a descriptor to epoll
    • Eliminates the need for the remainder of setup_pr_read_pds
  • setup_pr_read_pds is not used with epoll
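
The heart of the handle_new_connection change is just an epoll_ctl(2) ADD at accept time, roughly like this (error handling trimmed, and the helper name is illustrative):

    #include <sys/epoll.h>
    #include <errno.h>
    #include <string.h>

    /* Register a freshly accepted descriptor with the epoll set held in
     * Connection_Table->epollfd, so setup_pr_read_pds no longer has to
     * pick it up on the next pass through the event loop. */
    static int
    epoll_add_connection(int epollfd, Connection *conn, int client_fd)
    {
        struct epoll_event ev = {0};

        ev.events = EPOLLIN;  /* wake when the client has data to read */
        ev.data.ptr = conn;   /* hand the Connection back on wakeup */

        if (epoll_ctl(epollfd, EPOLL_CTL_ADD, client_fd, &ev) < 0) {
            slapi_log_err(SLAPI_LOG_ERR, "epoll_add_connection",
                          "epoll_ctl(ADD) failed for fd %d: %s\n",
                          client_fd, strerror(errno));
            return -1;
        }
        return 0;
    }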

Is epoll(7) available on all platforms supported by 389-ds?  Because I don't know, I have hesitated to remove any NSPR-related code at this point.


In my testing I have found that epoll provides a measurable boost in client servicing; however, my current testing methodology is not regimented enough to provide statistically sound measurements.



I believe conversion from PR_Poll to epoll(7) fits well with the "389 ds connection management proposal" that James Chapman had raised.

When epoll is accepting a large number of concurrent connections, there are obvious stalls that indicate the need for one or more listener threads to separate client connection processing from connected-client servicing.
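
A dedicated listener would reduce the accept path to something like this (a sketch; listener_ctx and the round-robin dispatch are illustrative, not anything in the patch):

    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <errno.h>
    #include <stddef.h>

    typedef struct {
        int listen_fd;         /* listening socket */
        int *worker_epollfds;  /* one epoll fd per servicing thread */
        int n_workers;
    } listener_ctx;

    /* Accept in a tight loop and immediately register each new fd with a
     * worker epoll set, so servicing connected clients never blocks the
     * accept path. */
    static void *
    listener_thread(void *arg)
    {
        listener_ctx *ctx = arg;
        int next = 0;

        for (;;) {
            int fd = accept(ctx->listen_fd, NULL, NULL);
            if (fd < 0) {
                if (errno == EINTR || errno == EAGAIN)
                    continue;
                break; /* listener shutting down or fatal error */
            }
            struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
            /* round-robin across the worker epoll sets */
            epoll_ctl(ctx->worker_epollfds[next], EPOLL_CTL_ADD, fd, &ev);
            next = (next + 1) % ctx->n_workers;
        }
        return NULL;
    }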

I also think that James' idea of creating multiple Connection Tables could be simplified with epoll.

  • Connection_Table->epollfd could be converted to an array of epoll fd sets
    • one thread and epoll fd for each listener
    • one thread and epoll fd for each "Connection Table" processors
  • Re-balancing connections between "Connection Table" processors could then be accomplished by deleting and re-adding the fd in the appropriate "Connection Table" epoll fd sets
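
With per-processor epoll fds, moving a client is just a DEL from one set and an ADD to another (sketch; epollfds[] is the hypothetical per-processor array from the list above).  The one window to think about is between the DEL and the ADD, while the old processor thread may still be draining an event for that fd.

    #include <sys/epoll.h>

    /* Move a client between "Connection Table" processors by removing
     * its fd from one epoll set and adding it to another. */
    static int
    epoll_move_connection(int epollfds[], int from, int to,
                          Connection *conn, int client_fd)
    {
        struct epoll_event ev = {0};

        ev.events = EPOLLIN;
        ev.data.ptr = conn;

        if (epoll_ctl(epollfds[from], EPOLL_CTL_DEL, client_fd, NULL) < 0) {
            return -1;
        }
        /* NOTE: a window exists here where the old processor thread may
         * still be handling a previously returned event for this fd. */
        return epoll_ctl(epollfds[to], EPOLL_CTL_ADD, client_fd, &ev);
    }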

Thanks in advance for all input or assistance.

--Larry



-- 
Directory Server Development Team

_______________________________________________
389-devel mailing list -- 389-devel@lists.fedoraproject.org
To unsubscribe send an email to 389-devel-leave@lists.fedoraproject.org
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/389-devel@lists.fedoraproject.org
Do not reply to spam on the list, report it: https://pagure.io/fedora-infrastructure