As of 14 June 2023, PROXY protocol is supported for Ingress Controllers in Red Hat OpenShift on IBM Cloud clusters hosted on VPC infrastructure.
Introduction
Modern software architectures often include multiple layers of proxies and load balancers. Preserving the IP address of the original client through these layers is challenging, but may be required for your use cases. A possible solution to the problem is to use PROXY protocol.
Starting with Red Hat OpenShift on IBM Cloud version 4.13, PROXY protocol is now supported for Ingress Controllers in clusters hosted on VPC infrastructure.
If you are interested in using PROXY protocol for Ingress Controllers on IBM Cloud Kubernetes Service clusters, you can find more information in our earlier blog post.
Setting up PROXY protocol for OpenShift Ingress Controllers
When using PROXY protocol for source address preservation, all proxies that terminate TCP connections in the chain must be configured to send and receive PROXY protocol headers after initiating L4 connections. In the case of Red Hat OpenShift on IBM Cloud clusters running on VPC infrastructure, we have two proxies: the VPC Application Load Balancer (ALB) and the Ingress Controller.
On OpenShift clusters, the Ingress Operator is responsible for managing the Ingress Controller instances and the load balancers used to expose the Ingress Controllers. The operator watches IngressController resources on the cluster and makes adjustments to match the desired state.
Thanks to the Ingress Operator, we can enable PROXY protocol for both of our proxies at once. All we need to do is change the endpointPublishingStrategy configuration on our IngressController resource:
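To illustrate what "sending a PROXY protocol header" means: with PROXY protocol v1, the load balancer prepends a single human-readable line, terminated by CRLF, to the TCP stream before any application data. A header for an IPv4 connection looks like the following (the addresses and ports here are illustrative, not from this cluster):

```
PROXY TCP4 192.0.2.42 10.240.128.45 56324 443
```

The fields are the protocol family, the original client address and the destination address, followed by the source and destination ports. The receiving proxy strips this line and uses the client address from it instead of the TCP peer address.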
endpointPublishingStrategy:
  type: LoadBalancerService
  loadBalancer:
    scope: External
    providerParameters:
      type: IBM
      ibm:
        protocol: PROXY
When you apply the previous configuration, the operator switches the Ingress Controller into PROXY protocol mode and adds the service.kubernetes.io/ibm-load-balancer-cloud-provider-enable-features: "proxy-protocol" annotation to the corresponding LoadBalancer-type Service resource, enabling PROXY protocol for the VPC ALB.
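After the operator reconciles the change, the annotation shows up on the router's LoadBalancer Service. The relevant part of the resource looks roughly like this (a sketch; the Service name router-default and namespace openshift-ingress are the OpenShift defaults, and the real resource contains many more fields):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: router-default
  namespace: openshift-ingress
  annotations:
    service.kubernetes.io/ibm-load-balancer-cloud-provider-enable-features: "proxy-protocol"
spec:
  type: LoadBalancer
```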
Example
In this example, we deployed a test application in a single-zone Red Hat OpenShift on IBM Cloud 4.13 cluster that uses VPC generation 2 compute. The application accepts HTTP connections and returns information about the received requests, such as the client address. The application is exposed by the default router created by the OpenShift Ingress Operator on the echo.example.com domain.
Client information without using PROXY protocol
By default, PROXY protocol is not enabled. Let's test accessing the application:
$ curl https://echo.example.com
Hostname: test-application-cd7cd98f7-9xbvm
Pod Information:
-no pod information available-
Server values:
server_version=nginx: 1.13.3 - lua: 10008
Request Information:
client_address=172.24.84.165
method=GET
real path=/
query=
request_version=1.1
request_scheme=http
request_uri=http://echo.example.com:8080/
Request Headers:
accept=*/*
forwarded=for=10.240.128.45;host=echo.example.com;proto=https
host=echo.example.com
user-agent=curl/7.87.0
x-forwarded-for=10.240.128.45
x-forwarded-host=echo.example.com
x-forwarded-port=443
x-forwarded-proto=https
Request Body:
-no body in request-
As you can see, the address in the x-forwarded-for header, 10.240.128.45, doesn't match your address. It is the worker node's address that received the request from the VPC load balancer. That means we cannot recover the original address of the client:
$ kubectl get nodes
NAME            STATUS   ROLES           AGE     VERSION
10.240.128.45   Ready    master,worker   5h33m   v1.26.3+b404935
10.240.128.46   Ready    master,worker   5h32m   v1.26.3+b404935
Enabling PROXY protocol on the default Ingress Controller
First, edit the Ingress Controller resource:
oc -n openshift-ingress-operator edit ingresscontroller/default
In the Ingress Controller resource, find the spec.endpointPublishingStrategy.loadBalancer section and define the following providerParameters values:
endpointPublishingStrategy:
  loadBalancer:
    providerParameters:
      type: IBM
      ibm:
        protocol: PROXY
    scope: External
  type: LoadBalancerService
Then, save and apply the resource.
Client information using PROXY protocol
Wait until the default router pods are recycled and test access to the application again:
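If you prefer a declarative workflow, the same configuration can also be kept as a manifest and applied with oc apply. A minimal sketch of the full resource (this assumes you are targeting the default IngressController; any other fields you have customized in spec should be included as well):

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: External
      providerParameters:
        type: IBM
        ibm:
          protocol: PROXY
```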
$ curl https://echo.example.com
Hostname: test-application-cd7cd98f7-9xbvm
Pod Information:
-no pod information available-
Server values:
server_version=nginx: 1.13.3 - lua: 10008
Request Information:
client_address=172.24.84.184
method=GET
real path=/
query=
request_version=1.1
request_scheme=http
request_uri=http://echo.example.com:8080/
Request Headers:
accept=*/*
forwarded=for=192.0.2.42;host=echo.example.com;proto=https
host=echo.example.com
user-agent=curl/7.87.0
x-forwarded-for=192.0.2.42
x-forwarded-host=echo.example.com
x-forwarded-port=443
x-forwarded-proto=https
Request Body:
-no body in request-
This time, you can find the real client address, 192.0.2.42, in the request headers, which is the actual public IP address of the original client.
Limitations
The PROXY protocol feature on Red Hat OpenShift on IBM Cloud is supported only for VPC generation 2 clusters that run OpenShift version 4.13 or later.
More information
For more information, check out our official documentation about exposing apps with load balancers, enabling PROXY protocol for Ingress Controllers, or the Red Hat OpenShift documentation.