Traffic splitting is a feature that divides traffic among multiple backend services. Solutions such as API gateways (e.g., Apache APISIX and Traefik) and service meshes (e.g., Istio and linkerd2-proxy) are capable of traffic splitting and use it to implement patterns like canary release and blue-green deployment.
Traffic splitting is also a key feature of an ingress controller. As the ingress layer of a Kubernetes cluster, the ingress controller naturally needs to reduce the risk of releasing a new version of an application by setting up traffic split rules so that only a manageable amount of traffic is routed to the newly released instances. In this article, we introduce traffic splitting (also called canary release) in Ingress Nginx and Kong Ingress Controller, and finally explain how traffic splitting works in Apache APISIX Ingress Controller.
(PS: For conciseness, we use the term “canary app” for the backend service that receives traffic when the canary rules match, and the term “stable app” for the backend service that receives traffic when the canary rules miss. For instance, the canary and stable apps are “foo-canary” and “foo” respectively in the following diagram.)
Ingress Nginx supports canary release, enabled by the annotation “nginx.ingress.kubernetes.io/canary”. It also supports several additional annotations to customize this feature.
The destination is decided by the value of the header indicated by nginx.ingress.kubernetes.io/canary-by-header. The canary app is routed to if the header’s value is “always”, and the stable app if the value is “never”; any other value causes this header to be ignored.
The nginx.ingress.kubernetes.io/canary-by-header-value annotation extends nginx.ingress.kubernetes.io/canary-by-header: with it, the header value that triggers the canary no longer needs to be “always” or “never”, but can be the custom value set in this annotation.
The nginx.ingress.kubernetes.io/canary-by-header-pattern annotation is similar to nginx.ingress.kubernetes.io/canary-by-header-value, but the value is a PCRE-compatible regular expression.
The nginx.ingress.kubernetes.io/canary-by-cookie annotation uses the value of the specified cookie in the Cookie header to decide the backend service.
The nginx.ingress.kubernetes.io/canary-weight annotation assigns a weight between 0 and 100, and traffic is split according to this weight: a weight of 0 means no traffic is routed to the canary app, while a weight of 100 routes all traffic to it.
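As a sketch of the weight-based scheme (the Ingress name here is illustrative, while the services follow the earlier example), the annotations below would route roughly 20% of the traffic to the canary app:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    # Roughly 20% of requests go to the canary backend below;
    # the remaining 80% fall through to the stable Ingress.
    nginx.ingress.kubernetes.io/canary-weight: "20"
  name: ingress-canary-weight
spec:
  rules:
  - host: foo.org
    http:
      paths:
      - path: /get
        pathType: Prefix
        backend:
          serviceName: foo-canary
          servicePort: 80
```

Raising the weight step by step shifts more traffic to the canary app until the release is complete.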
The following YAML snippet proxies requests whose URI path starts with “/get” and whose User-Agent header matches the “.*Mozilla.*” pattern to the canary app “foo-canary”.
```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "User-Agent"
    nginx.ingress.kubernetes.io/canary-by-header-pattern: ".*Mozilla.*"
  name: ingress-v1beta1
spec:
  rules:
  - host: foo.org
    http:
      paths:
      - path: /get
        pathType: Prefix
        backend:
          serviceName: foo-canary
          servicePort: 80
```
The Kong Gateway has a canary release plugin and exposes it to its ingress controller through the KongPlugin resource. Administrators/users create a KongPlugin object containing the canary release rule, then add the annotation “konghq.com/plugins” to the target Kubernetes Service. Alternatively, a KongClusterPlugin object can be created to make the canary rule effective across the whole cluster.
```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: foo-canary
config:
  percentage: 30
  upstream_host: foo.com
  upstream_fallback: false
  upstream_port: 80
plugin: canary
---
apiVersion: v1
kind: Service
metadata:
  name: foo-canary
  labels:
    app: foo
  annotations:
    konghq.com/plugins: foo-canary
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: foo
    canary: "true"
```
The above manifests mark the Service “foo-canary” as the canary and create a canary release rule that proxies 30% of the traffic to it.
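For the cluster-wide variant mentioned above, a KongClusterPlugin carries the same plugin configuration but is cluster-scoped. A sketch, assuming Kong’s convention of labeling the plugin global: "true" to apply it to all requests (check your Kong Ingress Controller version’s documentation for the exact mechanism):

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: foo-canary-global
  labels:
    # Marks the plugin as applying to every request handled by Kong,
    # instead of only Services annotated with konghq.com/plugins.
    global: "true"
config:
  percentage: 30
  upstream_host: foo.com
  upstream_fallback: false
  upstream_port: 80
plugin: canary
```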
Apache APISIX splits traffic with custom rules through its traffic-split plugin, and Apache APISIX Ingress Controller exposes this capability in ApisixRoute as first-class configuration (without relying on annotations), combining the plugin with ApisixRoute’s flexible route matching abilities.
By configuring multiple Kubernetes Services, the weight-based canary rule can be applied like:
```yaml
apiVersion: apisix.apache.org/v2alpha1
kind: ApisixRoute
metadata:
  name: foo-route
spec:
  http:
  - name: rule1
    match:
      hosts:
      - foo.org
      paths:
      - /get*
    backends:
    - serviceName: foo-canary
      servicePort: 80
      weight: 10
    - serviceName: foo
      servicePort: 80
      weight: 5
```
The above case sends ⅔ of requests (weight 10 out of a total of 15) whose Host is “foo.org” and whose URI path starts with “/get” to the “foo-canary” service, and the rest to “foo”.
The weight of the canary service can start tiny for small-scale verification, then be enlarged by modifying the ApisixRoute until all traffic is routed to the canary service, completing the release.
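To finish the release under this scheme, the same ApisixRoute can be updated so the canary backend takes all the traffic; for example (the exact weights are illustrative):

```yaml
apiVersion: apisix.apache.org/v2alpha1
kind: ApisixRoute
metadata:
  name: foo-route
spec:
  http:
  - name: rule1
    match:
      hosts:
      - foo.org
      paths:
      - /get*
    backends:
    # All traffic now goes to the canary service; the stable
    # backend is kept at weight 0 until it can be removed entirely.
    - serviceName: foo-canary
      servicePort: 80
      weight: 100
    - serviceName: foo
      servicePort: 80
      weight: 0
```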
The Exprs field in ApisixRoute allows users to configure custom route match rules. Moreover, multiple route rules can be grouped into a single ApisixRoute object, so rule-based traffic splitting can be implemented seamlessly.
```yaml
apiVersion: apisix.apache.org/v2alpha1
kind: ApisixRoute
metadata:
  name: foo-route
spec:
  http:
  - name: rule1
    priority: 1
    match:
      hosts:
      - foo.org
      paths:
      - /get*
    backends:
    - serviceName: foo
      servicePort: 80
  - name: rule2
    priority: 2
    match:
      hosts:
      - foo.org
      paths:
      - /get*
      exprs:
      - subject:
          scope: Query
          name: id
        op: In
        set:
        - "3"
        - "13"
        - "23"
        - "33"
    backends:
    - serviceName: foo-canary
      servicePort: 80
```
Requests whose Host is “foo.org” and whose URI path starts with “/get” are separated into two parts:
- Requests whose id query parameter is 3, 13, 23, or 33 hit rule2 and are forwarded to foo-canary;
- All other requests hit rule1 and are routed to foo.
Traffic splitting (canary release) in Ingress Nginx supports both weight-based and header/cookie-rule-based schemes, but it relies on annotations and the semantics are weak. Kong only supports configuring canary release by weight, so the applicable scenarios are somewhat narrow, and configuration is complicated (several resources must be created). In contrast, traffic splitting in Apache APISIX Ingress Controller is flexible and easy to configure, and it works well for both weight-based and rule-based schemes.
About the Author
Zhang Chao (GitHub ID: tokers), systems engineer at Shenzhen Zhiliu Technology, Apache APISIX PMC member, OpenResty contributor, and open source enthusiast, currently focusing on cloud native, service mesh, and related fields.
Shenzhen Zhiliu Technology is an open source infrastructure software company providing products and solutions for API processing and analytics. It is also one of the founding members of the TARS Foundation and is committed to contributing to the TARS microservice ecosystem.