Kubernetes

Kubernetes: horizontal autoscaling based on metrics from another namespace

  • December 17, 2019

I want to set up horizontal autoscaling for a deployment based on the metrics of an ingress controller that is deployed in another namespace.

I have a deployment (petclinic) running in one namespace (petclinic).

I have an ingress controller (nginx-ingress) deployed in another namespace (nginx-ingress).

The ingress controller was deployed with Helm and Tiller, so I have the following ServiceMonitor entity:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
 annotations:
   kubectl.kubernetes.io/last-applied-configuration: |
     {"apiVersion":"monitoring.coreos.com/v1","kind":"ServiceMonitor","metadata":{"annotations":{},"creationTimestamp":"2019-08-19T10:48:00Z","generation":5,"labels":{"app":"nginx-ingress","chart":"nginx-ingress-1.12.1","component":"controller","heritage":"Tiller","release":"nginx-ingress"},"name":"nginx-ingress-controller","namespace":"nginx-ingress","resourceVersion":"7391237","selfLink":"/apis/monitoring.coreos.com/v1/namespaces/nginx-ingress/servicemonitors/nginx-ingress-controller","uid":"0217c466-5b78-4e38-885a-9ee65deb2dcd"},"spec":{"endpoints":[{"interval":"30s","port":"metrics"}],"namespaceSelector":{"matchNames":["nginx-ingress"]},"selector":{"matchLabels":{"app":"nginx-ingress","component":"controller","release":"nginx-ingress"}}}}
 creationTimestamp: "2019-08-21T13:12:00Z"
 generation: 1
 labels:
   app: nginx-ingress
   chart: nginx-ingress-1.12.1
   component: controller
   heritage: Tiller
   release: nginx-ingress
 name: nginx-ingress-controller
 namespace: nginx-ingress
 resourceVersion: "7663160"
 selfLink: /apis/monitoring.coreos.com/v1/namespaces/nginx-ingress/servicemonitors/nginx-ingress-controller
 uid: 33421be7-108b-4b81-9673-05db140364ce
spec:
 endpoints:
 - interval: 30s
   port: metrics
 namespaceSelector:
   matchNames:
   - nginx-ingress
 selector:
   matchLabels:
     app: nginx-ingress
     component: controller
     release: nginx-ingress

I also have a Prometheus Operator instance, which discovered this entity and updated Prometheus's configuration with this stanza:

- job_name: nginx-ingress/nginx-ingress-controller/0
 honor_labels: false
 kubernetes_sd_configs:
 - role: endpoints
   namespaces:
     names:
     - nginx-ingress
 scrape_interval: 30s
 relabel_configs:
 - action: keep
   source_labels:
   - __meta_kubernetes_service_label_app
   regex: nginx-ingress
 - action: keep
   source_labels:
   - __meta_kubernetes_service_label_component
   regex: controller
 - action: keep
   source_labels:
   - __meta_kubernetes_service_label_release
   regex: nginx-ingress
 - action: keep
   source_labels:
   - __meta_kubernetes_endpoint_port_name
   regex: metrics
 - source_labels:
   - __meta_kubernetes_endpoint_address_target_kind
   - __meta_kubernetes_endpoint_address_target_name
   separator: ;
   regex: Node;(.*)
   replacement: ${1}
   target_label: node
 - source_labels:
   - __meta_kubernetes_endpoint_address_target_kind
   - __meta_kubernetes_endpoint_address_target_name
   separator: ;
   regex: Pod;(.*)
   replacement: ${1}
   target_label: pod
 - source_labels:
   - __meta_kubernetes_namespace
   target_label: namespace
 - source_labels:
   - __meta_kubernetes_service_name
   target_label: service
 - source_labels:
   - __meta_kubernetes_pod_name
   target_label: pod
 - source_labels:
   - __meta_kubernetes_service_name
   target_label: job
   replacement: ${1}
 - target_label: endpoint
   replacement: metrics

I also have a Prometheus-Adapter instance, so I have the custom.metrics.k8s.io API in the list of available APIs.
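
A quick way to double-check that registration (assuming jq is installed, as in the query below) is to look at the APIService and list the resources the adapter exposes:

$ kubectl get apiservice v1beta1.custom.metrics.k8s.io
$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq '.resources[].name'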

The metrics are being collected and exposed, so running the following command:

$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/nginx-ingress/ingresses/petclinic/nginx_ingress_controller_requests" | jq .

gives the following result:

{
 "kind": "MetricValueList",
 "apiVersion": "custom.metrics.k8s.io/v1beta1",
 "metadata": {
   "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/nginx-ingress/ingresses/petclinic/nginx_ingress_controller_requests"
 },
 "items": [
   {
     "describedObject": {
       "kind": "Ingress",
       "namespace": "nginx-ingress",
       "name": "petclinic",
       "apiVersion": "extensions/v1beta1"
     },
     "metricName": "nginx_ingress_controller_requests",
     "timestamp": "2019-08-20T12:56:50Z",
     "value": "11"
   }
 ]
}

So far, so good, right?

Now I need to set up an HPA entity for my deployment, something like this:

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
 name: petclinic
 namespace: petclinic
spec:
 scaleTargetRef:
   apiVersion: apps/v1
   kind: Deployment
   name: petclinic
 minReplicas: 1
 maxReplicas: 10
 metrics:
 - type: Object
   object:
     metricName: nginx_ingress_controller_requests
     target:
       apiVersion: extensions/v1beta1
       kind: Ingress
       name: petclinic
     targetValue: 10k

Of course, this is incorrect, since nginx_ingress_controller_requests is associated with the nginx-ingress namespace, so it doesn't work (well, as expected):

   annotations:
     autoscaling.alpha.kubernetes.io/conditions: '[{"type":"AbleToScale","status":"True","lastTransitionTime":"2019-08-19T18:43:42Z","reason":"SucceededGetScale","message":"the
       HPA controller was able to get the target''s current scale"},{"type":"ScalingActive","status":"False","lastTransitionTime":"2019-08-19T18:55:26Z","reason":"FailedGetObjectMetric","message":"the
       HPA was unable to compute the replica count: unable to get metric nginx_ingress_controller_requests:
       Ingress on petclinic petclinic/unable to fetch metrics
       from custom metrics API: the server could not find the metric nginx_ingress_controller_requests
       for ingresses.extensions petclinic"},{"type":"ScalingLimited","status":"False","lastTransitionTime":"2019-08-19T18:43:42Z","reason":"DesiredWithinRange","message":"the
       desired count is within the acceptable range"}]'
     autoscaling.alpha.kubernetes.io/current-metrics: '[{"type":""},{"type":"Resource","resource":{"name":"cpu","currentAverageUtilization":1,"currentAverageValue":"10m"}}]'
     autoscaling.alpha.kubernetes.io/metrics: '[{"type":"Object","object":{"target":{"kind":"Ingress","name":"petclinic","apiVersion":"extensions/v1beta1"},"metricName":"nginx_ingress_controller_requests","targetValue":"10k"}}]'
     kubectl.kubernetes.io/last-applied-configuration: |
       {"apiVersion":"autoscaling/v2beta1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"petclinic","namespace":"petclinic"},"spec":{"maxReplicas":10,"metrics":[{"object":{"metricName":"nginx_ingress_controller_requests","target":{"apiVersion":"extensions/v1beta1","kind":"Ingress","name":"petclinic"},"targetValue":"10k"},"type":"Object"}],"minReplicas":1,"scaleTargetRef":{"apiVersion":"apps/v1","kind":"Deployment","name":"petclinic"}}}

And here is what I see in the Prometheus-Adapter's log file:

I0820 15:42:13.467236       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1/namespaces/petclinic/ingresses.extensions/petclinic/nginx_ingress_controller_requests: (6.124398ms) 404 [[kube-controller-manager/v1.15.1 (linux/amd64) kubernetes/4485c6f/system:serviceaccount:kube-system:horizontal-pod-autoscaler] 10.103.98.0:37940]

The HPA looks for this metric in the deployment's namespace, but I need it to fetch it from the nginx-ingress namespace instead, like this:

I0820 15:44:40.044797       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1/namespaces/nginx-ingress/ingresses/petclinic/nginx_ingress_controller_requests: (2.210282ms) 200 [[kubectl/v1.15.2 (linux/amd64) kubernetes/f627830] 10.103.97.0:35142]

Alas, the autoscaling/v2beta1 API has no spec.metrics.object.target.namespace field, so I can't "ask" it to take the value from another namespace. :-(

Can anyone help me solve this puzzle? Is there any way to set up autoscaling based on a custom metric that belongs to another namespace?

Or maybe there is a way to make this metric available in the same namespace that this ingress.extension belongs to?

Thanks in advance for any clues and hints.

Ah, I figured it out. Here is the part of the prometheus-adapter configuration I needed:

   rules:
   - seriesQuery: '{__name__=~"^nginx_ingress_.*",namespace!=""}'
     seriesFilters: []
     resources:
       template: <<.Resource>>
       overrides:
         exported_namespace:
           resource: "namespace"
     name:
       matches: ""
       as: ""
     metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)

Ta-da! :-)
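
The key part is the exported_namespace override: since the generated scrape job has honor_labels: false, the controller's own namespace label (which carries the namespace of the Ingress being served, i.e. petclinic) is renamed to exported_namespace by Prometheus, and the override maps it back onto the Kubernetes namespace resource. With this rule loaded, the metric should resolve under the petclinic namespace as well, so the HPA manifest above can stay as it is. A quick check:

$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/petclinic/ingresses/petclinic/nginx_ingress_controller_requests" | jq .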

My choice was to export an external metric from Prometheus instead, since external metrics do not depend on a namespace.
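
A minimal sketch of what that could look like, assuming the Prometheus-Adapter also has an externalRules entry that exposes nginx_ingress_controller_requests as an external metric and that the series keeps its ingress label to filter on (both of these are assumptions, not part of the configuration shown above):

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: petclinic
  namespace: petclinic
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: petclinic
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      # assumed name under which the adapter exposes the external metric
      metricName: nginx_ingress_controller_requests
      # assumed label selector; external metrics are not bound to the HPA's namespace
      metricSelector:
        matchLabels:
          ingress: petclinic
      targetValue: 10k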

@Volodymyr Melnyk You need the prometheus-adapter to export the custom metric into the petclinic namespace, and I don't see that addressed in your configuration; perhaps you did some additional configuration that you forgot to mention?

Source: https://serverfault.com/questions/979985