

After MetalLB is installed and configured, to expose a service externally, simply create it with spec.type set to LoadBalancer.

MetalLB attaches informational events to the services that it's controlling. If your LoadBalancer is misbehaving, run kubectl describe service and check the event log.

MetalLB respects the spec.loadBalancerIP parameter, so if you want your service to be set up with a specific address, you can request it by setting that parameter. If MetalLB does not own the requested address, or if the address is already in use by another service, assignment will fail and MetalLB will log a warning event visible in kubectl describe service.

MetalLB supports both spec.loadBalancerIP and a custom loadBalancerIPs annotation. The annotation also supports a comma-separated list of IPs, for use with dual-stack services. Please note that spec.loadBalancerIP is planned to be deprecated in the Kubernetes API.

MetalLB also supports requesting a specific address pool, if you want a certain kind of address but don't care which one exactly. Add the address-pool annotation to your service, with the name of the address pool as the annotation value.
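As a sketch, a Service requesting an address from a specific pool might look like the following. The pool name and app labels are placeholders, and the metallb.universe.tf/ annotation prefix is the conventional one — check the annotation keys against the MetalLB version you are running:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    # Ask MetalLB for an address from a named pool
    # (pool name "production-public" is illustrative).
    metallb.universe.tf/address-pool: production-public
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 8080
```

If MetalLB cannot satisfy the request, the assignment failure shows up as a warning event on the Service, visible via kubectl describe service.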

MetalLB understands and respects the service's externalTrafficPolicy option, and implements different announcement modes depending on the policy and the announcement protocol in use.

When announcing in layer2 mode, one node in your cluster will attract traffic for the service IP. From there, the behavior depends on the selected traffic policy.

With the default Cluster traffic policy, kube-proxy on the node that received the traffic does load balancing, and distributes the traffic to all the pods in your service. This policy results in uniform traffic distribution across all pods in your service. However, kube-proxy will obscure the source IP address of the connection when it does load balancing, so your pod logs will show that external traffic appears to be coming from the service's leader node.

With the Local traffic policy, kube-proxy on the node that received the traffic sends it only to the service's pod(s) that are on the same node. There is no "horizontal" traffic flow between nodes. Because kube-proxy doesn't need to send traffic between cluster nodes, your pods can see the real source IP address of incoming connections. The downside of this policy is that incoming traffic only goes to some pods in the service. Pods that aren't on the current leader node receive no traffic; they are just there as replicas in case a failover is needed.
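Opting into the Local policy is a standard Kubernetes Service setting; a minimal sketch (service name and selector are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  # Only pods on the node currently announcing the service IP
  # receive traffic, and they see the client's real source address.
  externalTrafficPolicy: Local
  selector:
    app: nginx
  ports:
    - port: 80
```

The default is externalTrafficPolicy: Cluster, which trades source-IP preservation for even distribution across all pods.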
When announcing over BGP, MetalLB respects the service's externalTrafficPolicy option, and implements two different announcement modes depending on what policy you select. If you're familiar with Google Cloud's Kubernetes load balancers, you can probably skip this section: MetalLB's behaviors and tradeoffs are identical.

With the default Cluster traffic policy, every node in your cluster will attract traffic for the service IP. On each node, the traffic is subjected to a second layer of load balancing (provided by kube-proxy), which directs the traffic to individual pods.

This policy results in uniform traffic distribution across all nodes in your cluster, and across all pods in your service. However, it results in two layers of load balancing (one at the BGP router, one at kube-proxy on the nodes), which can cause inefficient traffic flows. For example, a particular user's connection might be sent to node A by the BGP router, but then node A decides to send that connection on to a pod running on a different node.

The other downside of the "Cluster" policy is that kube-proxy will obscure the source IP address of the connection when it does its load balancing, so your pod logs will show that external traffic appears to be coming from your cluster's nodes.

With the Local traffic policy, nodes will only attract traffic if they are running one or more of the service's pods locally. The BGP routers will load balance incoming traffic only across those nodes, and on each node the traffic is forwarded only to local pods by kube-proxy; there is no "horizontal" traffic flow between nodes.

This policy provides the most efficient flow of traffic to your service. Furthermore, because kube-proxy doesn't need to send traffic between cluster nodes, your pods can see the real source IP address of incoming connections.

The downside of this policy is that it treats each cluster node as one "unit" of load balancing, regardless of how many of the service's pods are running on that node. This may result in traffic imbalances to your pods.

For example, if your service has 2 pods running on node A and one pod running on node B, the Local traffic policy will send 50% of the traffic to each node. Node A splits the traffic it receives evenly between its two pods, so the final per-pod load distribution is 25% for each of node A's pods, and 50% for node B's pod.
