Capability
Automatic Request Routing and Canary Deployment with Traffic Splitting
5 artifacts provide this capability.
Top Matches
Kubernetes ML inference — serverless autoscaling, canary rollouts, multi-framework support, Kubeflow integration.
Unique: Implements traffic splitting through Kubernetes Ingress annotations and Knative Serving integration, enabling canary deployments without an external service mesh. Traffic percentages are declared in the InferenceService CRD and reconciled into Ingress resources by the controller.
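A minimal sketch of what the declarative side of this might look like, assuming a KServe-style `v1beta1` InferenceService schema with a `canaryTrafficPercent` field (the model name, storage URI, and exact field placement here are illustrative, not taken from the artifact itself):

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: my-model            # hypothetical service name
spec:
  predictor:
    # Assumed field: route 10% of requests to the latest revision;
    # the controller reconciles this into Ingress/Knative routing.
    canaryTrafficPercent: 10
    model:
      modelFormat:
        name: sklearn
      storageUri: gs://example-bucket/models/my-model  # placeholder URI
```

On this reading, rolling the canary forward is just a matter of patching `canaryTrafficPercent` toward 100 and letting the controller update the underlying routes.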
vs others: Simpler than Istio-based canary deployments (no VirtualService/DestinationRule CRDs required); more integrated than manual kubectl service patching; supports both Knative and native Ingress backends.
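For contrast, the Istio-based approach this avoids typically requires a separate weighted-routing resource per service. A representative VirtualService (service hostnames here are illustrative):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-model            # hypothetical
spec:
  hosts:
  - my-model.example.com    # placeholder host
  http:
  - route:
    - destination:
        host: my-model-stable   # existing revision
      weight: 90
    - destination:
        host: my-model-canary   # new revision
      weight: 10
```

Keeping the split inside the InferenceService spec means one object to edit per rollout instead of coordinating the workload and a mesh routing rule.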