So, we are moving more and more things over to our ocp4 cluster (which is great!). However, I noticed this weekend, it's going to mean some of our applications that are reachable via ipv6 will no longer be. ;(
The ocp3 cluster is on our vpn and can be reached by all of our proxy network. Many of our proxies have ipv6 connectivity.
The ocp4 cluster is not on our vpn and can only be reached by the 2 iad2 proxies. iad2 currently has no ipv6 support.
I'm asking networking folks about ipv6 support in iad2, but last I heard it was waiting for some hardware upgrades, so I don't know that we can count on it anytime soon.
So, we can:
1. Just not care, and move everything to ocp4 and people will need to use ipv4 to reach those services.
2. Try and get the ocp4 compute nodes on our vpn. I looked around and could not find any handy openvpn reference for openshift4. I'm guessing this needs a machine-config of some kind to establish the vpn and possibly some kind of ingress policy to allow incoming connections there.
3. Another layer of proxy. ie, proxies -> vpn -> secondproxyiniad2 -> ocp4.
4. Some other clever plan?
IMHO, I'd like to do 2... but I have no idea if it's possible/easy. Can some of you more savvy openshift folks weigh in? I think if we do 1 there will be complaints, 3 could get super complex fast and also is going to be slow with another hop in the middle there. 4 might be good if anyone can think of some plan I missed. ;)
Thoughts?
kevin
On 2022-06-06 19:45, Kevin Fenzi wrote:
- Try and get the ocp4 compute nodes on our vpn. I looked around and
could not find any handy openvpn reference for openshift4. I'm guessing this needs a machine-config of some kind to establish the vpn and possibly some kind of ingress policy to allow incoming connections there.
That can be done, but I'm not sure doing it with machine-config is the right way. Instead, I would run a deployment (or daemonset) on all workers that run a router pod, with at least hostnetwork capability (this part needs to be checked). This pod will run the openvpn process, and since the openshift router listens on all interfaces by default, it should be available through the vpn automagically.
darknao
On Tue, Jun 07, 2022 at 01:15:48PM +0200, darknao wrote:
On 2022-06-06 19:45, Kevin Fenzi wrote:
- Try and get the ocp4 compute nodes on our vpn. I looked around and
could not find any handy openvpn reference for openshift4. I'm guessing this needs a machine-config of some kind to establish the vpn and possibly some kind of ingress policy to allow incoming connections there.
That can be done, but I'm not sure doing it with machine-config is the right way. Instead, I would run a deployment (or daemonset) on all workers that run a router pod, with at least hostnetwork capability (this part needs to be checked).
This pod will run the openvpn process, and since the openshift router listens on all interfaces by default, it should be available through the vpn automagically.
Hum... that sounds reasonable, but I am not sure what the details would look like. ;( Would that be in openshift-ingress?
The vpn part itself is pretty simple, just needs the openvpn service, a small config file and a pub/private/ca cert triplet.
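For illustration, a minimal client config along those lines might look something like this; the endpoint, port and file paths are placeholders rather than our real setup:

```
# hypothetical minimal OpenVPN client config; all values are placeholders
client
dev tun
proto udp
# placeholder VPN endpoint and port
remote vpn.example.fedoraproject.org 1194
nobind
persist-key
persist-tun
# the pub/private/ca cert triplet for this client
ca /etc/openvpn/ca.crt
cert /etc/openvpn/client.crt
key /etc/openvpn/client.key
verb 3
```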
kevin
On 2022-06-07 21:24, Kevin Fenzi wrote:
Hum... that sounds reasonable, but I am not sure what the details would look like. ;( Would that be in openshift-ingress?
Not necessarily. Ideally it would be in its own namespace. I've taken a closer look and I think you will need the following:
- pod running as root: OpenVPN will need that to run correctly (create & manage the tun device).
- hostNetwork: Needed to create the tun device on the host.
- access to the host's /dev/net/tun: Also needed to create the tun device.
- NET_ADMIN capability: Needed to configure the newly created tun device.
All that will require a dedicated ServiceAccount with a new SCC unless we run the pod in privileged mode, but I would advise against this. Something like: https://paste.centos.org/view/bc095501
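For reference, since that paste later expired (see below), an SCC along those lines might look roughly like the sketch here; the SCC name, namespace and ServiceAccount are made up, and the exact fields would need checking against the cluster:

```
# hypothetical SCC for the openvpn pods; names and values are illustrative only
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: openvpn
allowHostNetwork: true           # pod shares the node's network namespace
allowHostPorts: true             # needed alongside hostNetwork
allowHostDirVolumePlugin: true   # allows the hostPath mount of /dev/net/tun
allowPrivilegedContainer: false  # avoid full privileged mode, as noted above
allowedCapabilities:
  - NET_ADMIN                    # needed to configure the tun device
runAsUser:
  type: RunAsAny                 # openvpn needs to run as root
seLinuxContext:
  type: RunAsAny
fsGroup:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
users:
  - system:serviceaccount:openvpn:openvpn   # hypothetical ServiceAccount
```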
The vpn part itself is pretty simple, just needs the openvpn service, a small config file and a pub/private/ca cert tripplet.
Right. The deployment should look like https://paste.centos.org/view/73abc392 That is just an example (but a working one). It would need some extra affinity rules to make it run only on the router nodes and so on, but that should give you an idea.
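Since that paste also later expired, here is a rough sketch of what such a deployment could look like, based on the requirements listed above; the image, namespace, secret name and ServiceAccount are placeholders, and the node affinity mentioned here is omitted:

```
# rough sketch of an openvpn client deployment; names, image and paths are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openvpn-client
  namespace: openvpn
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openvpn-client
  template:
    metadata:
      labels:
        app: openvpn-client
    spec:
      serviceAccountName: openvpn   # bound to the custom SCC above
      hostNetwork: true             # the tun device is created on the host
      containers:
        - name: openvpn
          image: quay.io/example/openvpn:latest   # placeholder image
          command: ["openvpn", "--config", "/etc/openvpn/client.conf"]
          securityContext:
            runAsUser: 0            # openvpn runs as root to manage the tun device
            capabilities:
              add: ["NET_ADMIN"]    # needed to configure the tun device
          volumeMounts:
            - name: dev-net-tun
              mountPath: /dev/net/tun   # the host's tun device node
            - name: openvpn-config
              mountPath: /etc/openvpn
      volumes:
        - name: dev-net-tun
          hostPath:
            path: /dev/net/tun
            type: CharDevice
        - name: openvpn-config
          secret:
            secretName: openvpn-client   # config file plus cert/key/CA, placeholder name
```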
On Tue, Jun 07, 2022 at 11:02:22PM +0200, darknao wrote:
On 2022-06-07 21:24, Kevin Fenzi wrote:
Hum... that sounds reasonable, but I am not sure what the details would look like. ;( Would that be in openshift-ingress?
Not necessarily. Ideally it would be in its own namespace. I've taken a closer look and I think you will need the following:
- pod running as root: OpenVPN will need that to run correctly (create & manage the tun device).
- hostNetwork: Needed to create the tun device on the host.
- access to the host's /dev/net/tun: Also needed to create the tun device.
- NET_ADMIN capability: Needed to configure the newly created tun device.
All that will require a dedicated ServiceAccount with a new SCC unless we run the pod in privileged mode, but I would advise against this. Something like: https://paste.centos.org/view/bc095501
Alas, I took too long to get back to this and the paste is gone. ;(
The vpn part itself is pretty simple, just needs the openvpn service, a small config file and a pub/private/ca cert triplet.
Right. The deployment should look like https://paste.centos.org/view/73abc392 That is just an example (but a working one). It would need some extra affinity rules to make it run only on the router nodes and so on, but that should give you an idea.
Another issue I thought of: with openvpn each client has its own set of certs, so each pod needs just the ones for that node...
Would you be willing to work up a PR? I'm kinda out of my depth with this one...
Or if not, perhaps davidk would be able to move it forward...
kevin
On 2022-06-09 00:58, Kevin Fenzi wrote:
Another issue I thought of: with openvpn each client has its own set of certs, so each pod needs just the ones for that node...
I thought of that too. You can either use one deployment+configmap+secret combo for each node or, my favorite, use a single deployment with one secret that contains all certs, keys and the CA. And to avoid exposing everything to all openvpn pods, you can use an init container that will extract the right cert/key for each node and expose it via an emptyDir to the openvpn container.
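A minimal sketch of that pattern, showing just the relevant part of the pod template; the secret name, image and file layout are assumptions, and the node name comes from the downward API:

```
# sketch of the init-container approach; names and paths are placeholders
spec:
  hostNetwork: true
  initContainers:
    - name: pick-node-certs
      image: quay.io/example/openvpn:latest   # any image with a shell works here
      env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName        # which worker this pod landed on
      command:
        - /bin/sh
        - -c
        # copy only this node's cert/key (plus the shared CA) into the emptyDir
        - cp /all-certs/ca.crt "/all-certs/${NODE_NAME}.crt" "/all-certs/${NODE_NAME}.key" /certs/
      volumeMounts:
        - name: all-certs
          mountPath: /all-certs
        - name: node-certs
          mountPath: /certs
  containers:
    - name: openvpn
      image: quay.io/example/openvpn:latest
      volumeMounts:
        - name: node-certs
          mountPath: /etc/openvpn/certs       # openvpn only ever sees its own cert/key
  volumes:
    - name: all-certs
      secret:
        secretName: openvpn-all-certs          # single secret holding every node's certs + CA
    - name: node-certs
      emptyDir: {}
```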
Would you be willing to work up a PR? I'm kinda out of my depth with this one...
Sure, can do that :)
So, to follow up...
Thanks to darknao, we now have the prod cluster workers on the vpn. ;)
So, we don't need to worry about this. We can move any apps pointed to just iad2/ocp cluster back to wildcard and adjust the proxies playbook so they proxy for them again. I'll look at doing that tomorrow.
Three cheers for darknao! :)
kevin
Hey Kevin,
Not particularly venturing an opinion here; personally I would like it to stay, but then I've had native v6 at home for 10+ years.
So, we are moving more and more things over to our ocp4 cluster (which is great!). However, I noticed this weekend, it's going to mean some of our applications that are reachable via ipv6 will no longer be. ;(
The ocp3 cluster is on our vpn and can be reached by all of our proxy network. Many of our proxies have ipv6 connectivity.
The ocp4 cluster is not on our vpn and can only be reached by the 2 iad2 proxies. iad2 currently has no ipv6 support.
Do we have any data from the proxies as to how much of the traffic is over IPv6 vs IPv4? What services that are on the cluster will be affected by the move?
I'm asking networking folks about ipv6 support in iad2, but last I heard it was waiting for some hardware upgrades, so I don't know that we can count on it anytime soon.
That was the excuse they used to give in PHX, from memory: "we will deploy IPv6 in the new DC, and the equipment doesn't support it". Support for v6 in equipment has been a requirement to sell to the US govt since the late 2000s, so by now all their equipment should support it.
So, we can:
- Just not care, and move everything to ocp4 and people will need to
use ipv4 to reach those services.
What are the affected services? What is the v6 vs v4 traffic in the current setup?
- Try and get the ocp4 compute nodes on our vpn. I looked around and
could not find any handy openvpn reference for openshift4. I'm guessing this needs a machine-config of some kind to establish the vpn and possibly some kind of ingress policy to allow incoming connections there.
- Another layer of proxy. ie, proxies -> vpn -> secondproxyiniad2 ->
ocp4.
- Some other clever plan?
IMHO, I'd like to do 2... but I have no idea if it's possible/easy. Can some of you more savvy openshift folks weigh in? I think if we do 1 there will be complaints, 3 could get super complex fast and also is going to be slow with another hop in the middle there. 4 might be good if anyone can think of some plan I missed. ;)
I'd prefer 2, and "we need new HW" seems to have been an excuse from IT for as long as I've been involved in releng/infra :-(
Overall I'm sure we'll survive, and I'd like to see the option with the least amount of work. Some data about the affected services and the level of v6 vs v4 traffic would be useful, I feel, to actually gauge the impact.
On Tue, 7 Jun 2022 at 07:36, Peter Robinson pbrobinson@gmail.com wrote:
Hey Kevin,
Not particularly venturing an opinion here; personally I would like it to stay, but then I've had native v6 at home for 10+ years.
So, we are moving more and more things over to our ocp4 cluster (which is great!). However, I noticed this weekend, it's going to mean some of our applications that are reachable via ipv6 will no longer be. ;(
The ocp3 cluster is on our vpn and can be reached by all of our proxy network. Many of our proxies have ipv6 connectivity.
The ocp4 cluster is not on our vpn and can only be reached by the 2 iad2 proxies. iad2 currently has no ipv6 support.
Do we have any data from the proxies as to how much of the traffic is over IPv6 vs IPv4? What services that are on the cluster will be affected by the move?
Doing a bone stupid
```
awk 'BEGIN{ip4=0; ip6=0} $1!~/[g-z]/{if ($1~/./){ip4=ip4+1}; if ($1~/:/){ip6=ip6+1};} END{total=ip4+ip6; print "ip4:",ip4,((ip4*100)/total)"%","ip6:",ip6, ((ip6*100)/total)"%"}'
```
on a bunch of the httpd logs that are external services for 2022-05-22 gave me:
ip4: 49528865 88.1798%  ip6: 6639163 11.8202%
Going over other days gives me around 85% ipv4 and 15% ipv6. That is large enough that I think it would be good to get IAD2 onto ipv6.
I'm asking networking folks about ipv6 support in iad2, but last I heard
it was waiting for some hardware upgrades, so I don't know that we can count on it anytime soon.
That was the excuse they used to give in PHX, from memory: "we will deploy IPv6 in the new DC, and the equipment doesn't support it". Support for v6 in equipment has been a requirement to sell to the US govt since the late 2000s, so by now all their equipment should support it.
There are different levels of support. The various vendors have been selling an ipv6 stack for years, but various parts don't work under load unless you just want port 80 and port 443 and nothing too fancy on them. (NFS seems to be one protocol that overloads various stacks a lot.) Most of the problems tend to be the amount of memory needed to map the stateful firewall, and it blowing up regularly.
We had ipv6 twice in the IAD2 location but found that various Fedora utilities broke horribly during that time. We didn't have time to deal with those and get the move complete so we asked ipv6 to be turned off. The current problem is we were already in the red for ipv4 traffic on the firewall which was purchased. This one was much bigger than the one we had in PHX2 and was thought to be enough for our needs. However, we basically have filled that pipe. We could turn on the ipv6 stack but would probably see a degradation of services overall. The firewall to replace it has been purchased but is on a long wait queue to be put into replacement as other needs have been deemed higher urgency than that.
At this point, it is a matter of getting on the Change Control schedule for Red Hat IT to have our networks turn the process on. It would also help to have a general plan of action of how this would be done, like:
1. Set up internal ipv6 for IAD2 networks.
2. Set up and test ipv6 firewalls and dual stacks on systems in IAD2.
3. Set up limited firewall traffic of ipv6 to the public network.
4. Roll out new firewall hardware to IAD2.
5. Test infrastructure and add ipv6 advertisements to the public network.
6. ...
7. Profit.
On Tue, Jun 07, 2022 at 08:44:03AM -0400, Stephen Smoogen wrote:
We had ipv6 twice in the IAD2 location but found that various Fedora utilities broke horribly during that time. We didn't have time to deal with those and get the move complete so we asked ipv6 to be turned off. The current problem is we were already in the red for ipv4 traffic on the firewall which was purchased. This one was much bigger than the one we had in PHX2 and was thought to be enough for our needs. However, we basically have filled that pipe. We could turn on the ipv6 stack but would probably see a degradation of services overall. The firewall to replace it has been purchased but is on a long wait queue to be put into replacement as other needs have been deemed higher urgency than that.
I've asked for current status on all this... it's not clear to me if that's still the case (but it could well be! :)
At this point, it is a matter of getting on the Change Control schedule for Red Hat IT to have our networks turn the process on. It would also help to have a general plan of action of how this would be done like:
- Set up internal ipv6 for IAD2 networks.
- Set up and test ipv6 firewalls and dualstacks on systems in IAD2.
- Set up limited firewall traffic of ipv6 to public network
- Roll out new firewall hardware to IAD2
- Test infrastructure and add ipv6 advertisements to public network
- ...
- profit
Well, personally I'd like to only enable ipv6 on a small subset of machines via static ips. The majority of our instances have 0 need for ipv6. But of course that still depends on it being available...
kevin