K8s uninstall and cleanup

K8s node removal

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
kubeadm reset

To rejoin the cluster, run kubeadm init or kubeadm join again.
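For several workers at once, the drain/delete sequence can be looped. A dry-run sketch (worker-1 and worker-2 are hypothetical node names; drop the echo prefix to actually run the commands against a live cluster):

```shell
# Dry-run sketch of the removal order above: drain first, then delete.
NODES="worker-1 worker-2"   # hypothetical node names
for node in $NODES; do
  echo kubectl drain "$node" --delete-local-data --force --ignore-daemonsets
  echo kubectl delete node "$node"
done
# kubeadm reset is then run on the removed node itself, not on the master.
```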

Clean up all nodes

kubectl delete node --all
rm -r /var/etcd/backups/*   # remove backups

Uninstall and clean up K8s

kubeadm reset -f
modprobe -r ipip
lsmod
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /etc/systemd/system/kubelet.service.d
rm -rf /etc/systemd/system/kubelet.service
rm -rf /usr/bin/kube*
rm -rf /etc/cni
rm -rf /opt/cni
rm -rf /var/lib/etcd
rm -rf /var/etcd

K8s web UI: Dashboard

Make sure the Kubernetes environment is healthy.
Official docs
Get the dashboard manifest from the GitHub repo: https://github.com/kubernetes/dashboard
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml

Or download it manually:
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
Edit the manifest:
vim recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort       # add this field
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30443  # add this field
  selector:
    k8s-app: kubernetes-dashboard

# Many browsers reject the auto-generated certificate, so we create our own and comment out the kubernetes-dashboard-certs Secret declaration:
#apiVersion: v1
#kind: Secret
#metadata:
#  labels:
#    k8s-app: kubernetes-dashboard
#  name: kubernetes-dashboard-certs
#  namespace: kubernetes-dashboard
#type: Opaque
===================================
Create the certificate
mkdir dashboard-certs
cd dashboard-certs/
# create the namespace
kubectl create namespace kubernetes-dashboard    # the yaml creates it automatically, so this step is optional
# generate the key file
openssl genrsa -out dashboard.key 2048
# certificate signing request
openssl req -days 36000 -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'
# self-sign the certificate
openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
# create the kubernetes-dashboard-certs Secret
kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard
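A self-contained sanity check (a sketch, run in a temp dir) that repeats the openssl steps above and then verifies the certificate really matches the key by comparing RSA moduli; a mismatched pair would make the Secret useless:

```shell
# Repeat the key/CSR/self-signed-cert steps in a scratch dir, then verify
# that dashboard.crt was indeed signed with dashboard.key.
set -e
dir=$(mktemp -d)
cd "$dir"
openssl genrsa -out dashboard.key 2048
openssl req -days 36000 -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'
openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
# the moduli of key and certificate must be identical
key_mod=$(openssl rsa  -noout -modulus -in dashboard.key)
crt_mod=$(openssl x509 -noout -modulus -in dashboard.crt)
[ "$key_mod" = "$crt_mod" ] && echo "cert and key match"
```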

Upload the required images to the master (they can be pre-downloaded):
kubernetesui/dashboard:v2.0.0-beta8
kubernetesui/metrics-scraper:v1.0.2
Install:
kubectl apply -f recommended.yaml
# check the results
kubectl get pods -A -o wide
kubectl get service -n kubernetes-dashboard -o wide

Create the dashboard admin user

cat > dashboard-admin.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard
EOF

kubectl create -f dashboard-admin.yaml

Grant the user permissions:

cat > dashboard-admin-bind-cluster-role.yaml <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-bind-cluster-role
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
EOF

kubectl create -f dashboard-admin-bind-cluster-role.yaml

Access URL: https://NodeIP:30443 (the dashboard serves TLS on the NodePort)

Create a service account and bind it to the built-in cluster-admin role:

$ kubectl create serviceaccount dashboard-admin -n kube-system
$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
$ kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Log in to the Dashboard with the token from the output.

Name:         dashboard-admin-token-nlhcc
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 392bdc7a-4032-4ef1-b5b7-f8d8a816b3b2

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Im9IXzlYU1d0Ukhsc194aWk5R29TV2h3WGRkaF9LMEVOVWxpRW1IYXBUNUEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tbmxoY2MiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzkyYmRjN2EtNDAzMi00ZWYxLWI1YjctZjhkOGE4MTZiM2IyIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.Jd8vxPSJWvA8vwxgMZ-uSMGPHh7lY2U91Ui5mAZXH25ThSbbbHJotK1A5h6vu4XreGESBiGMKrI1sZViI4ZhaSFt2e25KrwhliYRxEZaJ5hRsBFdxc8sU16UJX9ctHMQ9RbnZyhY8gL7s2Fmz18Keowa5e-bJL7dAyeqH9WtUi_liDZHIKLtf1EtnmOE-NFxGJ7NwZYS6ZsMUXu0e0XkuhkQRE8gVof1QxuJGxtVCw0V8dNCIgzBbbpkSEWXqzHVM5Cceaf888GXqjryvIHJ-UGvKoVc2m_MRpIqLRqjmsHCFGDTFdrWk0XQDT1NcS5jAK6YJ6WW6lhrj5c65puSDQ

Install metrics-server

Download the image on each node:

docker pull bluersw/metrics-server-amd64:v0.3.6
docker tag bluersw/metrics-server-amd64:v0.3.6 k8s.gcr.io/metrics-server-amd64:v0.3.6

Run the installation on the master:

git clone https://github.com/kubernetes-incubator/metrics-server.git
cd metrics-server/deploy/1.8+/
Edit metrics-server-deployment.yaml:
image: k8s.gcr.io/metrics-server-amd64  # add the following under the image field
        command:
        - /metrics-server
        - --metric-resolution=30s
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP

Find runAsNonRoot: true and change it to runAsNonRoot: false.

kubectl create -f .

If the image cannot be pulled, change it to image: mirrorgooglecontainers/metrics-server-amd64:v0.3.6

Note: the three kinds of Service ports
port: the port the Service exposes on the cluster IP; the entry point for clients inside the cluster
nodePort: the entry point k8s provides for clients outside the cluster
targetPort: the port of the container instance inside the pod; traffic arriving on port or nodePort is routed by kube-proxy to the backend pod's targetPort
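The three fields side by side in a concrete Service manifest, extracted locally with grep (a sketch; no cluster needed):

```shell
# write a minimal Service spec and list its three port fields
cat > svc.yaml <<'EOF'
spec:
  type: NodePort
  ports:
  - port: 443          # cluster-internal entry point (cluster IP)
    targetPort: 8443   # the container's own listening socket in the pod
    nodePort: 30443    # entry point exposed on every node for external clients
EOF
grep -oE '[a-zA-Z]*[pP]ort: [0-9]+' svc.yaml
# prints:
# port: 443
# targetPort: 8443
# nodePort: 30443
```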

vim /etc/kubernetes/dashboard-deployment.yaml
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
    # Keep the name in sync with image version and
    # gce/coreos/kube-manifests/addons/dashboard counterparts
      name: kubernetes-dashboard-latest
      namespace: kube-system
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            k8s-app: kubernetes-dashboard
            version: latest
            kubernetes.io/cluster-service: "true"
        spec:
          containers:
          - name: kubernetes-dashboard
            image: docker.io/bestwu/kubernetes-dashboard-amd64:v1.6.3
            imagePullPolicy: IfNotPresent
            resources:
              # keep request = limit to keep this container in guaranteed class
              limits:
                cpu: 100m
                memory: 50Mi
              requests:
                cpu: 100m
                memory: 50Mi
            ports:
            - containerPort: 9090
            args:
            - --apiserver-host=http://10.3.20.100:8080
            livenessProbe:
              httpGet:
                path: /
                port: 9090
              initialDelaySeconds: 30
              timeoutSeconds: 30

[root@master-ldy ~]# vim /etc/kubernetes/dashboard-service.yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: kubernetes-dashboard
      namespace: kube-system
      labels:
        k8s-app: kubernetes-dashboard
        kubernetes.io/cluster-service: "true"
    spec:
      selector:
        k8s-app: kubernetes-dashboard
      ports:
      - port: 80
        targetPort: 9090


K8s management platform: Rancher

rancher
rancher CN

Which Docker versions are compatible with Rancher and Kubernetes?
See: http://rancher.com/docs/rancher/v1.6/zh/hosts/#docker

https://rancher.com/ (new version, v2.x)

swapoff -a

docker run  --name rancher --privileged -d --restart=unless-stopped -p 8080:80 -p 8443:443 -v ~/rancher/data:/var/lib/rancher/ rancher/rancher:v2.5.7

docker run -d --name rancher \
-v ~/rancher/data:/var/lib/rancher/ \
--restart=unless-stopped \
--privileged \
-p 8080:80 -p 8443:443 \
-e CATTLE_SYSTEM_CATALOG=bundled \
rancher/rancher:stable

Optional extra flags (add them before the image name). To point Rancher at a directory of CA certificates:

-v ~/rancher/certs:/container/certs \
-e SSL_CERT_DIR="/container/certs" \
--no-cacerts

Or mount your own certificate, key, and CA:

-v ~/rancher/certs/sercert.pem:/etc/rancher/ssl/cert.pem \
-v ~/rancher/certs/serprivkey.pem:/etc/rancher/ssl/key.pem \
-v ~/rancher/certs/cacert.pem:/etc/rancher/ssl/cacerts.pem \
--no-cacerts

scp -r D:/Desktop/temp/ ubuntu@119.29.57.229:~/rancher/certs/

Expired certificates
docker exec -ti 5b4b6e274b31 mv /var/lib/rancher/management-state/certs/bundle.json /var/lib/rancher/management-state/certs/bundle.json-bak

Alternatively, delete the expired certificates (.crt and .key, roughly 14 files) under /var/lib/rancher/k3s/server/tls/, or generate new ones, to resolve the expiry problem.

Before redeploying or upgrading:
sudo rm -rf /var/lib/etcd/member/
After installation, the Rancher management UI is reachable at http://ip:8080

K8s China-region mirror template configuration
Open the environment management page and click Add Environment Template. Name the template, open its configuration page, and set the Alibaba mirror registry:
Private registry address: registry.cn-shenzhen.aliyuncs.com
Add-ons component namespace: rancher_cn
kubernetes-helm namespace: rancher_cn
Pod Infra Container Image: rancher_cn/pause-amd64:3.0
Save the template, then create a Kubernetes environment and add hosts.

Before or during deployment, clean every trace of the old environment with the following commands:
docker rm -f `docker ps -a -q`
docker system prune -f
docker volume rm $(docker volume ls -q)

for mount in $(mount | grep tmpfs | grep '/var/lib/kubelet' | awk '{ print $3 }') /var/lib/kubelet /var/lib/rancher; do umount $mount; done

sudo rm -rf /etc/ceph \
       /etc/cni/* \
       /opt/cni/* \
       /opt/rke \
       /etc/kubernetes \
       /run/secrets/kubernetes.io \
       /run/calico/* \
       /run/flannel/* \
       /var/lib/calico/* \
       /var/lib/cni/* \
       /var/lib/kubelet/* \
       /var/lib/rancher/rke/log \
       /var/log/containers/* \
       /var/log/pods/* \
       /var/run/calico/* \
       /var/lib/rancher/* \
       /var/lib/docker/* \
       /var/lib/etcd/* \
       ~/rancher/*

ip link del flannel.1
ip link del cni0

sudo rm -f /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db
sudo systemctl restart containerd
sudo systemctl restart docker

iptables -F && iptables -t nat -F

rke remove
Reboot

K8s cluster management platform: wayne

wayne
https://github.com/Qihoo360/wayne.git
Architecture

Documentation

Development dependencies:
Golang 1.12+ (installation manual)
Docker 17.05+ (installation manual)
Bee (installation manual) (use the linked version, not the official beego version; it carries some customizations)
Node.js 8+ and npm 5+ (installation with nvm)
MySQL 5.6+ (most Wayne data lives in MySQL)
RabbitMQ (optional; needed only for audit extensions such as operation auditing and webhooks)

Quick start
Clone the repository:
$ go get github.com/Qihoo360/wayne
Start the services
From Wayne's root directory, bring the services up with docker-compose:
$ docker-compose -f ./hack/docker-compose/docker-compose.yaml up
You can then reach the local Wayne at http://127.0.0.1:4200; the default admin account is admin:admin.
Note: after startup you still need to configure clusters, namespaces, and so on before normal use. See the cluster configuration docs.

The front end and back end communicate via JWT tokens, so in production you must regenerate the RSA files to stay secure. Generate the RSA key pair with:
$ ssh-keygen -t rsa -b 2048 -m PEM -f jwtRS256.key
$ # Don't add a passphrase
$ openssl rsa -in jwtRS256.key -pubout -outform PEM -out jwtRS256.key.pub
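If ssh-keygen writes the key in OpenSSH format (the default in recent versions), openssl rsa cannot read it. The same pair can be generated with openssl alone; a sketch, including a check that the two halves actually belong together:

```shell
# Generate the JWT signing pair with openssl only, then verify consistency.
set -e
dir=$(mktemp -d); cd "$dir"
openssl genrsa -out jwtRS256.key 2048
openssl rsa -in jwtRS256.key -pubout -outform PEM -out jwtRS256.key.pub
# the private and public moduli must match or signatures will never verify
[ "$(openssl rsa -noout -modulus -in jwtRS256.key)" = \
  "$(openssl rsa -pubin -noout -modulus -in jwtRS256.key.pub)" ] && echo "OK"
```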

K8s troubleshooting

When running kubeadm init, cluster initialization may warn:

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".

vim /etc/docker/daemon.json

Add the following:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
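The braces and quotes must be plain ASCII; curly quotes pasted from a web page make the file invalid JSON and dockerd will refuse to start. A quick local validity check (a sketch; assumes python3 is available):

```shell
# Write the daemon.json fragment and confirm it parses as JSON.
set -e
dir=$(mktemp -d); cd "$dir"
cat > daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
python3 -m json.tool daemon.json >/dev/null && echo "valid JSON"
```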


WARNING FileExisting-socat

socat is a networking tool that k8s uses for pod data exchange; to fix this warning, just install socat:

apt-get install socat

Pull images manually

The flannel image can be pulled with the command below. If another image fails to pull, a quick search will turn up domestic mirror addresses. Remember to change the trailing version tag to your own; the exact version is shown by the kubectl describe command mentioned above:

docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
Once the pull finishes, retag the image to the name k8s failed to pull:
docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64

Worker node fails to join

kubeadm join on the worker returns a timeout error.

On the master, run kubeadm token create --print-join-command to regenerate the join command, then run the newly printed command on the worker.

Kubernetes K8s

Service information

kubectl get service --all-namespaces

Deployment information

kubectl create -f test.yml

kubectl delete -f test.yml

kubectl get deployment --all-namespaces

K8s docs

Install k8s using the Alibaba mirrors

swapoff -a disables swap for the current boot; to turn swap off permanently, edit /etc/fstab and comment out the swap line.
apt update
apt install -y apt-transport-https ca-certificates curl software-properties-common
# Docker repo
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"

# kubeadm repo
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

# install docker, kubeadm, and the k8s components that are not deployed as docker containers
apt update
apt install -y docker-ce kubeadm kubelet kubectl

Initialize k8s

kubeadm init --image-repository=registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16
On success, follow the prompts and save the join command information.

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the k8s status

kubectl --namespace kube-system get pod

(sysctl net.bridge.bridge-nf-call-iptables=1)
For k8s, installing a plugin just means applying its manifest, and kubectl apply is the command that applies a manifest; it also accepts HTTP URLs. The flannel manifest URL comes straight from the official flannel repo on GitHub: https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

(

curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Under tolerations:, add:

- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoSchedule

Then apply:
kubectl apply -f kube-flannel.yml

)

Install the plugin only on the master. Wait a few minutes after installation and check the component status again; everything should be Running.

The join command takes three arguments: the master's IP and port, the token, and the certificate hash.
The token expires 24 hours after creation; the script below regenerates the join command:


#!/bin/bash

if [ $EUID -ne 0 ];then
    echo "You must be root (or sudo) to run this script"
    exit 1
fi

if [ $# != 1 ] ; then
    echo "Usage: $0 [master-hostname | master-ip-address]"
    echo " e.g.: $0 api.k8s.hiko.im"
    exit 1;
fi

token=`kubeadm token create`
cert_hash=`openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'`

echo "Refer the following command to join kubernetes cluster:"
echo "kubeadm join $1:6443 --token ${token} --discovery-token-ca-cert-hash sha256:${cert_hash}"
Copy the code into a new file on the master host, name it join.sh, and run it: ./join.sh <ip-or-domain>
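The cert_hash pipeline in the script can be exercised without a cluster. The sketch below creates a throwaway self-signed CA (a stand-in for /etc/kubernetes/pki/ca.crt) and computes the same sha256-over-DER-public-key digest that --discovery-token-ca-cert-hash expects:

```shell
# Build a scratch CA and derive its discovery hash the way the script does.
set -e
dir=$(mktemp -d); cd "$dir"
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
        -days 1 -subj '/CN=kubernetes'
hash=$(openssl x509 -pubkey -in ca.crt \
       | openssl rsa -pubin -outform der 2>/dev/null \
       | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:${hash}"   # 64 hex characters, as kubeadm join expects
```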

Kubernetes: how to auto-generate the command to join the master

kubeadm token

Run the join command on the worker machine

List the nodes

kubectl get nodes

kubectl get po --all-namespaces / kubectl get po --all-namespaces -o wide
FYI: installation from Google sources

apt-get update && apt-get install -y apt-transport-https curl

Set a proxy if you need one to reach Google's servers:

export http_proxy=10.10.10.99:1087 && export https_proxy=10.10.10.99:1087

echo $http_proxy

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

apt-get update && apt-get install -y kubelet kubeadm kubectl

or: apt-get install -c apt-proxy-config -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl   # prevent upgrades
Manually download two dependency images. The images k8s needs are not on Docker Hub but on Google's servers. Fortunately, kubeadm lets you point at a different image repository, and Docker Hub hosts a mirror of the k8s component images called mirrorgooglecontainers. Two dependency images are missing from that mirror: coredns/coredns and coreos/flannel. Download them first:

docker pull coredns/coredns:1.2.6
docker tag coredns/coredns:1.2.6 mirrorgooglecontainers/coredns:1.2.6

wget https://github.com/coreos/flannel/releases/download/v0.11.0/flanneld-v0.11.0-amd64.docker
docker load < flanneld-v0.11.0-amd64.docker

The first image provides k8s's internal DNS and service discovery; the second is the flannel network plugin. The first is retagged to match the mirrorgooglecontainers naming because k8s needs it right at install time; the second needs no retag and is used later.
Start deploying k8s with:

kubeadm init --image-repository=mirrorgooglecontainers --pod-network-cidr=10.244.0.0/16

kubectl get pods --namespace=kube-system

Check whether all the plugin pods are in the Running state. If any is not, inspect the pod description or the pod logs to track down the error; the two commands are:

View the pod description; focus on the events at the bottom:

kubectl -n kube-system describe pod xxxxxxx  # pod name

View the pod logs

kubectl -n kube-system logs xxxxxxx         # pod name
# delete a pod:
kubectl -n kube-system delete pod xxxxxxx   # pod name

Ignore errors and continue
Add --ignore-preflight-errors=<error name> to the init command.

Before re-running init, first reset:

kubeadm reset

systemctl daemon-reload
systemctl restart docker

Restart the related services:

$systemctl restart kube-apiserver
$systemctl restart kube-controller-manager
$systemctl restart kube-scheduler

Node     Services
master   etcd, kube-apiserver, kube-controller-manager, kube-scheduler
node     flannel, kubelet, kube-proxy

# reload the configuration
systemctl daemon-reload

# start the services
systemctl start etcd kube-apiserver.service kube-controller-manager kube-scheduler

# restart the services
systemctl restart etcd kube-apiserver.service kube-controller-manager kube-scheduler

# enable them at boot
systemctl enable etcd kube-apiserver.service kube-controller-manager kube-scheduler

# verify service startup status with systemctl status

# view the logs
cat /var/log/messages | grep kube

# start the node services
systemctl start kubelet kube-proxy

# enable them at boot
systemctl enable kubelet kube-proxy

===================================================

Install kubeadm

Install kubectl

kubectl is a command-line tool for managing Kubernetes.

Install kubectl on Linux (use one of the following):

Install from the domestic Alibaba Cloud mirror

apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl

Install from the Google source (slow inside China)
~ curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl

Install kubectl on macOS (use one of the following):

Via brew
~ brew install kubectl

Via the domestic Alibaba Cloud mirror
~ curl -LO http://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/`curl -s http://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/stable.txt`/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl

Via the Google source (slow inside China)
~ curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl

After installation, check the version:
~ kubectl version

Images that frequently fail to pull

docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
docker rmi quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.1 gcr.io/kubernetes-helm/tiller:v2.14.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.1

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.25.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.25.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.25.1

docker pull googlecontainer/defaultbackend-amd64:1.5
docker tag googlecontainer/defaultbackend-amd64:1.5 k8s.gcr.io/defaultbackend-amd64:1.5
docker rmi googlecontainer/defaultbackend-amd64:1.5

docker pull sacred02/kubernetes-dashboard-amd64:v1.10.1
docker tag sacred02/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
docker rmi sacred02/kubernetes-dashboard-amd64:v1.10.1

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.2 gcr.io/google_containers/metrics-server-amd64:v0.3.2
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.2

docker pull registry.cn-hangzhou.aliyuncs.com/jaxzhai/k8szk:v3
docker tag registry.cn-hangzhou.aliyuncs.com/jaxzhai/k8szk:v3 gcr.io/google_samples/k8szk:v3
docker rmi registry.cn-hangzhou.aliyuncs.com/jaxzhai/k8szk:v3

k8s:

MY_REGISTRY=gcr.azk8s.cn/google-containers

Pull the images

docker pull ${MY_REGISTRY}/kube-apiserver:v1.15.1
docker pull ${MY_REGISTRY}/kube-controller-manager:v1.15.1
docker pull ${MY_REGISTRY}/kube-scheduler:v1.15.1
docker pull ${MY_REGISTRY}/kube-proxy:v1.15.1
docker pull ${MY_REGISTRY}/pause:3.1
docker pull ${MY_REGISTRY}/etcd:3.3.10
docker pull ${MY_REGISTRY}/coredns:1.3.1

Add tags

docker tag ${MY_REGISTRY}/kube-apiserver:v1.15.1 k8s.gcr.io/kube-apiserver:v1.15.1
docker tag ${MY_REGISTRY}/kube-controller-manager:v1.15.1 k8s.gcr.io/kube-controller-manager:v1.15.1
docker tag ${MY_REGISTRY}/kube-scheduler:v1.15.1 k8s.gcr.io/kube-scheduler:v1.15.1
docker tag ${MY_REGISTRY}/kube-proxy:v1.15.1 k8s.gcr.io/kube-proxy:v1.15.1
docker tag ${MY_REGISTRY}/pause:3.1 k8s.gcr.io/pause:3.1
docker tag ${MY_REGISTRY}/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag ${MY_REGISTRY}/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1

# remove the now-unneeded mirror images
docker images | grep ${MY_REGISTRY} | awk '{print "docker rmi " $1":"$2}' | sh -x
echo "end"
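A dry run of that cleanup pipeline against canned docker images output, with the final sh -x left off so nothing is actually removed:

```shell
# Feed sample `docker images` lines through the same grep/awk pipeline.
MY_REGISTRY=gcr.azk8s.cn/google-containers
cat <<EOF | grep "${MY_REGISTRY}" | awk '{print "docker rmi " $1":"$2}'
${MY_REGISTRY}/kube-proxy   v1.15.1   abc123   4 weeks ago   82MB
${MY_REGISTRY}/pause        3.1       def456   4 weeks ago   742kB
EOF
# prints:
# docker rmi gcr.azk8s.cn/google-containers/kube-proxy:v1.15.1
# docker rmi gcr.azk8s.cn/google-containers/pause:3.1
```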

Node removal

Remove a node with the kubeadm/kubectl commands:

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
kubeadm reset

Enable IPv4 forwarding
Add the following parameters to /etc/sysctl.conf:
net.ipv4.ip_forward = 1
net.ipv4.ip_forward_use_pmtu = 0

Apply:
sysctl -p
Check:
sysctl -a | grep "ip_forward"
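A small sketch showing how the setting can be checked by parsing a sysctl.conf-style file (on a live host you would read /proc/sys/net/ipv4/ip_forward or run sysctl instead):

```shell
# Write the two parameters to a scratch file and read one back with awk.
set -e
dir=$(mktemp -d); cd "$dir"
cat > sysctl.conf <<'EOF'
net.ipv4.ip_forward = 1
net.ipv4.ip_forward_use_pmtu = 0
EOF
# split each line on " = " and print the value for the exact key
awk -F' *= *' '$1 == "net.ipv4.ip_forward" {print $2}' sysctl.conf
# prints: 1
```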