Hello.
I recently encountered a problem where the number of concurrent connections decreases on my FreeIPA servers.
[Architecture - replication topology] My replication topology is circular (ring-shaped) and consists of 13 FreeIPA servers. These 13 servers are grouped into 3 clusters of 5, 4, and 4 members, respectively. NLBs (network load balancers) that distribute client requests for IPA login, Kerberos authentication, and LDAP connections are assigned to each cluster. The 3 NLBs therefore have 5, 4, and 4 FreeIPA servers in their backend pools, respectively.
This architecture worked successfully for 2 years, but recently I encountered a problem: 867 host_add operations per hour against one cluster result in a drop in the number of concurrent connections on all clusters. The command to get the number of concurrent connections is:

dsconf -D "cn=Directory Manager" ldap://server.example.com monitor server | grep currentconnections

About 2K connections are observed on each server with this command.
I also found that the symptom does not happen on servers that the replication data is not transferred to, even though they are in the same replication topology ring. Hence, I guess the drop in concurrent connections is related to replication.
I tried to tune parameters such as dtablesize = 65535, repl-release-timeout = 120, nsslapd-threadnumber (automatic thread tuning), and DB/entry cache auto-sizing (nsslapd-cache-autosize = 80), without success.
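For reference, the changes were applied roughly like this (a sketch only; I'm assuming dtablesize maps to nsslapd-conndtablesize, and dc=example,dc=com stands in for the real suffix):

ldapmodify -x -H ldap://server.example.com -D "cn=Directory Manager" -W <<EOF
dn: cn=config
changetype: modify
replace: nsslapd-conndtablesize
nsslapd-conndtablesize: 65535
-
replace: nsslapd-threadnumber
nsslapd-threadnumber: -1

dn: cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
changetype: modify
replace: nsds5ReplicaReleaseTimeout
nsds5ReplicaReleaseTimeout: 120

dn: cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-cache-autosize
nsslapd-cache-autosize: 80
EOF

(nsslapd-threadnumber: -1 enables automatic thread tuning; the cache-autosize change needs a dirsrv restart to take effect.)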
I would appreciate any help with this symptom, if possible.
Thank you. JHK
Hi Jaehwan,
Why is the number of established connections (to the server) a concern?
The vast majority of the connections are client connections. Replication connections, especially in a ring topology, would account for a small fraction of them. The added hosts generate replication traffic over the replication connections and would put some CPU load on the destination server. At the moment I do not see how that would impact the destination server's capacity to accept new connections. The response time of the destination server may increase (because of the replicated updates); could that impact clients opening new connections?
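If it helps, cn=monitor lists each open connection, so you can check whether the dropped ones are client or replication connections. Something like this (simple bind shown as an example):

ldapsearch -xLLL -H ldap://server.example.com -D "cn=Directory Manager" -W \
    -b cn=monitor currentconnections connection

The bind DN in each 'connection' line can help tell replication traffic apart from client binds.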
By the way, what version of 389ds are you running?
best regards thierry
On 1/5/24 04:38, Jaehwan Kim via FreeIPA-users wrote:
[...]
Hello Thierry, thank you for your reply. I couldn't see how the replication traffic caused the decrease in the number of current connections.
My 389ds version, from the 'yum list | grep 389-ds' command, is as below:

389-ds-base.x86_64                 1.4.3.11-1.fc32  @updates
389-ds-base-libs.x86_64            1.4.3.11-1.fc32  @updates
389-ds-base.x86_64                 1.4.3.23-1.fc32  updates
389-ds-base-devel.x86_64           1.4.3.23-1.fc32  updates
389-ds-base-legacy-tools.x86_64    1.4.3.23-1.fc32  updates
389-ds-base-libs.x86_64            1.4.3.23-1.fc32  updates
389-ds-base-snmp.x86_64            1.4.3.23-1.fc32  updates
cockpit-389-ds.noarch              1.4.3.23-1.fc32  updates
Thank you. JHK
Jaehwan Kim via FreeIPA-users wrote:
[...]
Fedora 32 became end of life on 2021-05-25. I'd strongly encourage you to update to a more recent release.
rob
Hello Rob,
I appreciate your suggestion.
I'm trying to move the freeipa-server docker (container) to the latest version (fedora-39-4.11.0) and to see whether host_add still makes the number of currentconnections decrease (in other words, whether a large number of host_add operations disconnects existing LDAP connections). I'll give feedback once I have test results.
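The test server is started more or less as in the freeipa-container README (hostname, volume path, and password here are placeholders):

docker run -d --name freeipa-server -h ipa.example.test \
    -v /var/lib/ipa-data:/data \
    -e PASSWORD=Secret123 \
    freeipa/freeipa-server:fedora-39-4.11.0 \
    ipa-server-install -U -r EXAMPLE.TEST --no-ntp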
Thank you. JHK
Hello Rob,
I successfully installed a single FreeIPA server from the fedora-39-4.11.0 docker (container) image and tested performance with a high host_add rate (14 host_add per minute) with about 1K clients.
The test procedure was as follows: first, I added 500 hosts successfully and waited for about 10 minutes. Then I tried to add 500 more hosts, and I could see the LDAP disconnection problem.
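Each add phase is essentially a loop like this (hostnames are made up; --force skips the DNS resolution check):

for i in $(seq 1 500); do
    ipa host-add "test-${i}.example.test" --force
done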
To analyze the problem, I looked into the logs and found many entries like this:

... "event": "TCP_ERROR", "client_ip": "3.39.196.155", "server_ip": "34.146.187.171", "ldap_version": 3, "conn_id": 3043, "msg": "Bad Ber Tag or uncleanly closed connection - B1" }
The command I used to find the error entries is:

cat /var/log/dirsrv/slapd-SAMSUNGSRE-COM/security | grep TCP_ERROR
Can you please give me a piece of advice?
JHK
Jaehwan Kim via FreeIPA-users wrote:
[...]
I'd correlate the connection id in the security log to the access log to see what it failed on and if any additional reason was given. I'd guess it is timeout related.
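For example, using the conn_id from the security log entry you posted:

grep 'conn=3043' /var/log/dirsrv/slapd-SAMSUNGSRE-COM/access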
A host is generally a pretty standalone object not requiring much processing in LDAP other than the write.
Do you have any automember hostgroups defined? That could definitely have an impact.
rob
Hello Rob, thank you for the reply. I got the logs, as you commented.
=========
access log

[18/Jan/2024:23:34:13.087718471 +0000] conn=788 fd=258 slot=258 connection from 52.78.30.18 to 34.84.136.11
[18/Jan/2024:23:34:13.088018506 +0000] conn=788 op=0 EXT oid="1.3.6.1.4.1.1466.20037" name="start_tls_plugin"
[18/Jan/2024:23:34:13.088053934 +0000] conn=788 op=0 RESULT err=0 tag=120 nentries=0 wtime=0.000228592 optime=0.000040018 etime=0.000268106
[18/Jan/2024:23:34:13.158931686 +0000] conn=788 TLS1.3 128-bit AES-GCM
[18/Jan/2024:23:34:13.159223459 +0000] conn=788 op=-1 fd=258 Disconnect - Bad Ber Tag or uncleanly closed connection - B1

security log

{ "date": "[18/Jan/2024:23:34:13.159227408 +0000] ", "utc_time": "1705620853.159227408", "event": "TCP_ERROR", "client_ip": "52.78.30.18", "server_ip": "34.84.136.11", "ldap_version": 3, "conn_id": 788, "msg": "Bad Ber Tag or uncleanly closed connection - B1" }
=========
I'm using automember to automatically join new hosts to a specific hostgroup. Are 0.5K ~ 1K hosts too many to join one hostgroup?
JHK
Hello.
I verified that this disconnection happens because new hosts are continuously added into a SINGLE BIG hostgroup by automembership, which results in slow LDAP search responses. I also verified that the disconnection doesn't happen if ldap_search_timeout is changed from 6 sec to 60 sec on the client side. In my simplified setup with ldap_search_timeout = 6 sec, the disconnection happens when the hostgroup holds more than 4K hosts. Can you tell me whether there is a recommended hostgroup size and sssd ldap_search_timeout value?
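For reference, the client-side change is just this in /etc/sssd/sssd.conf (the domain name is a placeholder), followed by an sssd restart:

[domain/example.com]
ldap_search_timeout = 60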
Thank you.
Jaehwan Kim
Jaehwan Kim via FreeIPA-users wrote:
[...]
It is not recommended that any group exceed 3K members. Beyond this, the memberof calculation required noticeably slows down, as you are experiencing.
What is the reason to have a single hostgroup with hosts as members? Perhaps there is another way to achieve what you want.
rob
Hello Rob,
Two automembership rules add a host to two hostgroups by checking keywords in the host description: 'servicename=...' for the 1st hostgroup and 'groupname=...' for the 2nd hostgroup.
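A sketch of how such rules are defined (the hostgroup name and regex here are only illustrative):

ipa hostgroup-add servicegroup1
ipa automember-add --type=hostgroup servicegroup1
ipa automember-add-condition servicegroup1 --type=hostgroup \
    --key=description --inclusive-regex='servicename=myservice'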
10K hosts are managed by IPA, and 2.9K hosts were in the 1st and 2nd hostgroups when the disconnection happened. Over about a week, 2.9K hosts with a specific servicename had been continuously added to the 1st and 2nd hostgroups.
There is an automated function that checks host validity, but it deletes invalid hosts from the hostgroups within a week.
Can you advise me if possible?
Is there a link I can refer to about the recommended group size (< 3K members)?
JHK
Jaehwan Kim via FreeIPA-users wrote:
[...]
It's all group membership, not just hosts. This is based on the metric of operations taking < 2s. It can certainly support more, but you'll see an ever-increasing amount of time to calculate memberof.
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/htm...
rob
Hello Rob, as you said, if any group exceeds 3K members then you can experience a slowdown in server response. But in large operational environments, membership (especially the number of hosts) exceeding 3K is not that uncommon. So I wonder if there is any way you recommend to manage this case, such as splitting into several groups internally or disabling some configurations. If there is no reference or guide from IPA about that, do I have no option but to face the slowdown performance issue?
seojeong kim via FreeIPA-users wrote:
[...]
We consider an API call to be "slow" if it takes > 2s. At 3k members, adding a new member to a group tends to exceed that. Not by a lot, but the more members, the slower it gets. I didn't test member removal from a group > 3k but it's likely to be similar.
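You can see the effect for yourself by just timing an add against a large group, e.g. (names hypothetical):

time ipa hostgroup-add-member big-hostgroup --hosts=newhost.example.test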
I also didn't test absolutely massive groups. I was only looking to find out where adding a new member exceeded 3s. So I have no graphs on the rate at which adding a new member slows.
Splitting groups using nesting results in the same problem. The underlying issue is the memberof plugin in 389-ds which calculates the membership. There is no getting around it.
Work is happening to address the known performance issues but I'm not doing the work and have no insight into their progress. All I know is that it's a hard nut to crack.
Currently there are no known mitigations.
rob
Hello Rob, I have an extra question on this thread. On the client side, an ldap_search request is triggered periodically, and with a large host group (such as one exceeding 3K members), LDAP latency occurred. In our client configuration, ldap_search_timeout is 6 sec by default. Under this latency, the LDAP search failed by timeout (6 sec) on the client side, and that caused the LDAP disconnection.
Q. What/when triggers the LDAP search on the client side? Our IPA client has krb5_lifetime / ldap_connection_expire_timeout set to 24h, so I thought the LDAP search would be triggered every 24h, but it was triggered continuously. Is there another configuration that controls the LDAP search?
seojeong kim via FreeIPA-users wrote:
[...]
I don't know what ldapsearch you're referencing or what is executing the search.
I did not see any client-side performance issues with large groups. The issue is that adding one more member to the group takes an increasingly long time.
rob
Could you provide me with any IPA concept related to what keeps triggering LDAP search requests in sssd? IPA SSH login was not working. And on our client, ldap_connection_expire_timeout is 24h, so I thought LDAP search requests would start every 24 hours, but actually LDAP search requests happened almost every minute.
This is part of the sssd log on the IPA client:

sssd_example.com.log:(Fri Feb 16 08:28:31 2024) [be[example.com]] [dp_get_options] (0x0400): Option ldap_search_timeout has value 6
sssd_example.com.log:(Fri Feb 16 08:28:31 2024) [be[example.com]] [dp_get_options] (0x0400): Option ldap_network_timeout has value 6
sssd_example.com.log:(Fri Feb 16 08:28:31 2024) [be[example.com]] [dp_get_options] (0x0400): Option ldap_opt_timeout has value 8
sssd_example.com.log:(Fri Feb 16 08:28:31 2024) [be[example.com]] [dp_get_options] (0x0400): Option ldap_sasl_minssf has value 56
sssd_example.com.log:(Fri Feb 16 08:28:31 2024) [be[example.com]] [dp_get_options] (0x0400): Option ldap_sasl_maxssf has value -1
sssd_example.com.log:(Fri Feb 16 08:28:31 2024) [be[example.com]] [dp_get_options] (0x0400): Option ldap_offline_timeout has value 60
sssd_example.com.log:(Fri Feb 16 08:28:31 2024) [be[example.com]] [dp_get_options] (0x0400): Option ldap_enumeration_search_timeout has value 60
sssd_example.com.log:(Fri Feb 16 08:28:31 2024) [be[example.com]] [dp_get_options] (0x0400): Option ldap_connection_expire_timeout has value 89763
root@myinstnace:/var/log/sssd# grep "Searching xx.xx.xx.xx:389" *
sssd_example.com.log:(Fri Feb 16 09:50:36 2024) [be[example.com]] [sdap_print_server] (0x2000): Searching xx.xx.xx.xx:389
sssd_example.com.log:(Fri Feb 16 09:51:57 2024) [be[example.com]] [sdap_print_server] (0x2000): Searching xx.xx.xx.xx:389
sssd_example.com.log:(Fri Feb 16 09:52:05 2024) [be[example.com]] [sdap_print_server] (0x2000): Searching xx.xx.xx.xx:389
sssd_example.com.log:(Fri Feb 16 09:52:05 2024) [be[example.com]] [sdap_print_server] (0x2000): Searching xx.xx.xx.xx:389
sssd_example.com.log:(Fri Feb 16 09:52:05 2024) [be[example.com]] [sdap_print_server] (0x2000): Searching xx.xx.xx.xx:389