Using the kube-router network plugin with Kubernetes and monitoring traffic


Overview

kube-router is a newer Kubernetes network plugin. It uses LVS (IPVS) for service proxying and load balancing, and iptables for network isolation (network policy). Deployment is simple: a single DaemonSet on every node. It is high-performance and easy to maintain, and it provides pod-to-pod networking as well as service proxying.

Installation

# This walkthrough uses a freshly created cluster; reusing the cluster that had
# previously been used to test other network plugins did not work, likely due to
# leftover configuration, so watch out for environment interference.
# Create a kube-router directory and download the manifests
mkdir kube-router && cd kube-router
wget https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml
wget https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter-all-features.yaml

# Choose one of the two deployment options below.
# Option 1: enable only pod networking and network-policy enforcement
kubectl apply -f kubeadm-kuberouter.yaml

# Option 2: enable all features: pod networking, network policy, and service proxy
# Then delete kube-proxy and clean up the proxy rules it configured
kubectl apply -f kubeadm-kuberouter-all-features.yaml
kubectl -n kube-system delete ds kube-proxy

# Run on every node to clean up the rules left behind by kube-proxy
docker run --privileged --net=host registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.10.2 kube-proxy --cleanup

# Check the results
kubectl get pods --namespace kube-system
kubectl get svc --namespace kube-system
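
kube-router programs IPVS and iptables directly on each node, so the quickest way to confirm the service proxy and firewall are active is to inspect that state. The commands below are a sketch and assume ipvsadm is installed on the node; the exact chain names can vary between kube-router versions.

# On any node: list the IPVS virtual servers created for Services
ipvsadm -Ln

# kube-router's network-policy rules live in chains whose names contain KUBE-ROUTER
iptables -L -n | grep -i kube-router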

Testing

# Create a deployment and services for testing
kubectl run nginx --replicas=2 --image=nginx:alpine --port=80
kubectl expose deployment nginx --type=NodePort --name=example-service-nodeport
kubectl expose deployment nginx --name=example-service
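
Note: the --replicas flag of kubectl run has been removed in newer kubectl releases. If the commands above are rejected, a rough equivalent is shown below; since kubectl create deployment declares no container port, the expose commands then need an explicit --port=80.

# Alternative for newer kubectl (sketch)
kubectl create deployment nginx --image=nginx:alpine
kubectl scale deployment nginx --replicas=2
kubectl expose deployment nginx --port=80 --type=NodePort --name=example-service-nodeport
kubectl expose deployment nginx --port=80 --name=example-service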

# DNS and connectivity tests (the following commands run inside the curl pod)
kubectl run curl --image=radial/busyboxplus:curl -i --tty
nslookup kubernetes
nslookup example-service
curl example-service
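
The firewall (network policy) feature enabled by --run-firewall can be exercised here as well. The snippet below is a minimal sketch that is not part of the original walkthrough; it assumes the run=nginx label applied by kubectl run and that everything lives in the default namespace.

# Deny all ingress to the nginx pods, re-run curl from the test pod (it should
# now time out), then delete the policy to restore access
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-nginx-ingress
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: nginx
  policyTypes:
  - Ingress
EOF
curl --max-time 5 example-service        # inside the curl pod; expected to fail now
kubectl delete networkpolicy deny-nginx-ingress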

# Clean up
kubectl delete svc example-service example-service-nodeport
kubectl delete deploy nginx curl

Collecting and visualizing metrics

Redeploy kube-router with metrics enabled

# Edit the manifest (keep a backup of the original)
cp kubeadm-kuberouter-all-features.yaml kubeadm-kuberouter-all-features.yaml.ori
vim kubeadm-kuberouter-all-features.yaml
...
spec:
  template:
    metadata:
      labels:
        k8s-app: kube-router
        tier: node
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        # Add the following annotations so Prometheus scrapes these pods
        prometheus.io/scrape: "true"
        prometheus.io/path: "/metrics"
        prometheus.io/port: "8080"
    spec:
      serviceAccountName: kube-router
      serviceAccount: kube-router
      containers:
      - name: kube-router
        image: cloudnativelabs/kube-router
        imagePullPolicy: Always
        args:
        # Add the following args to enable the metrics endpoint
        - --metrics-path=/metrics
        - --metrics-port=8080
        - --run-router=true
        - --run-firewall=true
        - --run-service-proxy=true
        - --kubeconfig=/var/lib/kube-router/kubeconfig
...

# Redeploy
kubectl delete ds kube-router -n kube-system
kubectl apply -f kubeadm-kuberouter-all-features.yaml

# Fetch the metrics endpoint on a node to verify it works
curl http://127.0.0.1:8080/metrics
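
To confirm the redeploy picked up the changes, check the pod annotations and filter the metrics output. This is a sketch that relies on the k8s-app=kube-router label from the manifest above; kube-router's own metrics are prefixed with kube_router_.

# Verify the prometheus.io/* annotations are present on the new pods
kubectl -n kube-system get pods -l k8s-app=kube-router \
  -o jsonpath='{.items[0].metadata.annotations}'

# Peek at kube-router's own metric names
curl -s http://127.0.0.1:8080/metrics | grep '^kube_router' | head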

Deploy Prometheus

Save the following content to a file named prometheus.yml:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus
  namespace: kube-system
data:
  prometheus.yml: |-
    global:
      scrape_interval: 15s
    scrape_configs:

    # scrape config for API servers
    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https

    # scrape config for nodes (kubelet)
    - job_name: 'kubernetes-nodes'
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics

    # Scrape config for Kubelet cAdvisor.
    #
    # This is required for Kubernetes 1.7.3 and later, where cAdvisor metrics
    # (those whose names begin with 'container_') have been removed from the
    # Kubelet metrics endpoint. This job scrapes the cAdvisor endpoint to
    # retrieve those metrics.
    #
    # In Kubernetes 1.7.0-1.7.2, these metrics are only exposed on the cAdvisor
    # HTTP endpoint; use "replacement: /api/v1/nodes/${1}:4194/proxy/metrics"
    # in that case (and ensure cAdvisor's HTTP server hasn't been disabled with
    # the --cadvisor-port=0 Kubelet flag).
    #
    # This job is not necessary and should be removed in Kubernetes 1.6 and
    # earlier versions, or it will cause the metrics to be scraped twice.
    - job_name: 'kubernetes-cadvisor'
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor

    # scrape config for service endpoints.
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name

    # Example scrape config for pods
    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod_name
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  labels:
    name: prometheus
  name: prometheus
  namespace: kube-system
spec:
  selector:
    app: prometheus
  type: NodePort
  ports:
  - name: prometheus
    protocol: TCP
    port: 9090
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: prometheus
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      name: prometheus
      labels:
        app: prometheus
      annotations:
        sidecar.istio.io/inject: "false"
    spec:
      serviceAccountName: prometheus
      containers:
      - name: prometheus
        image: docker.io/prom/prometheus:v2.2.1
        imagePullPolicy: IfNotPresent
        args:
        - '--storage.tsdb.retention=6h'
        - '--config.file=/etc/prometheus/prometheus.yml'
        ports:
        - name: web
          containerPort: 9090
        volumeMounts:
        - name: config-volume
          mountPath: /etc/prometheus
      volumes:
      - name: config-volume
        configMap:
          name: prometheus
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - services
  - endpoints
  - pods
  - nodes/proxy
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
  - configmaps
  verbs: ["get"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: kube-system
---

Deploy and verify

# Deploy
kubectl apply -f prometheus.yml

# Check
kubectl get pods --namespace kube-system
kubectl get svc --namespace kube-system
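
Optionally, the scrape configuration embedded in the ConfigMap can be validated once it has been applied. This sketch assumes promtool (shipped with Prometheus) is available on the machine where you run kubectl.

# Extract the embedded prometheus.yml and validate it
kubectl -n kube-system get configmap prometheus \
  -o jsonpath='{.data.prometheus\.yml}' > /tmp/prometheus-check.yml
promtool check config /tmp/prometheus-check.yml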

# Access Prometheus via its NodePort; in the expression box, type the keyword
# kube_router and check whether metric-name suggestions appear
prometheusNodePort=$(kubectl get svc -n kube-system | grep prometheus | awk '{print $5}' | cut -d '/' -f 1 | cut -d ':' -f 2)
nodeName=$(kubectl get no | grep '<none>' | head -1 | awk '{print $1}')
nodeIP=$(ping -c 1 $nodeName | grep PING | awk '{print $3}' | tr -d '()')
echo "http://$nodeIP:"$prometheusNodePort 复制代码

Deploy Grafana

Save the following content to a file named grafana.yml:

---
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 3000
    protocol: TCP
    name: http
  selector:
    app: grafana
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: grafana
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: grafana
    spec:
      serviceAccountName: grafana
      containers:
      - name: grafana
        image: grafana/grafana
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
        volumeMounts:
        - mountPath: /var/lib/grafana
          name: grafana-data
      volumes:
      - name: grafana-data
        emptyDir: {}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: grafana
  namespace: kube-system
---

Deploy and verify

# Deploy
kubectl apply -f grafana.yml

# Check
kubectl get pods --namespace kube-system
kubectl get svc --namespace kube-system


# Access Grafana via its NodePort
grafanaNodePort=$(kubectl get svc -n kube-system | grep grafana | awk '{print $5}' | cut -d '/' -f 1 | cut -d ':' -f 2)
nodeName=$(kubectl get no | grep '<none>' | head -1 | awk '{print $1}')
nodeIP=$(ping -c 1 $nodeName | grep PING | awk '{print $3}' | tr -d '()')
echo "http://$nodeIP:"$grafanaNodePort # 默认用户密码
admin/admin

Import and view the dashboard

# Download the official dashboard JSON
wget https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/dashboard/kube-router.json

In Grafana, create a data source named Prometheus, of type Prometheus, with the connection URL http://prometheus:9090/.
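
If you prefer to script this step, the same data source can be created through Grafana's HTTP API. This is a minimal sketch, assuming the default admin/admin credentials and the $nodeIP/$grafanaNodePort variables from above; note that the Deployment uses an emptyDir volume, so this (and the dashboard import) must be repeated if the Grafana pod is recreated.

# Create the Prometheus data source via the Grafana API
curl -s -u admin:admin -H 'Content-Type: application/json' \
  -X POST "http://$nodeIP:$grafanaNodePort/api/datasources" \
  -d '{"name":"Prometheus","type":"prometheus","url":"http://prometheus:9090","access":"proxy"}'

Access is set to proxy so the Grafana backend, which runs inside the cluster, resolves the in-cluster prometheus service name itself.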

Import the dashboard by selecting the JSON file you just downloaded.
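
The import can also be scripted against Grafana's dashboard import API. This is only a sketch: the datasource input name (DS_PROMETHEUS below) is an assumption and must match whatever appears in the __inputs section of the downloaded kube-router.json.

# Import the downloaded dashboard via the Grafana API (input name is a placeholder)
curl -s -u admin:admin -H 'Content-Type: application/json' \
  -X POST "http://$nodeIP:$grafanaNodePort/api/dashboards/import" \
  -d "{\"dashboard\": $(cat kube-router.json), \"overwrite\": true,
       \"inputs\": [{\"name\": \"DS_PROMETHEUS\", \"type\": \"datasource\",
                     \"pluginId\": \"prometheus\", \"value\": \"Prometheus\"}]}"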

View the dashboard.

This article is reposted from Juejin: "Using the kube-router network plugin with Kubernetes and monitoring traffic".