Archived December 27, 2019

Installing NFS on Ubuntu for Network File Sharing

NFS (Network File System) is a distributed file system protocol, originally developed by Sun Microsystems, that lets computers share resources over a TCP/IP network. With NFS, a local client can transparently read and write files that live on a remote NFS server, just as if they were local files.

In embedded development, the host and the development board are connected with an Ethernet cable. Once the NFS service is running on the host, files can be transferred to the board much like with FTP. One might ask why not just use FTP, but NFS can also serve the board's root filesystem: the board can boot directly from the filesystem on the PC.

The configuration below sets up an NFS server on the Linux PC, shares one folder, and uses the exports file to allow other devices on the network to mount it.

  1. Install: sudo apt-get install nfs-kernel-server

  2. Start: sudo /etc/init.d/nfs-kernel-server restart or sudo service nfs-kernel-server start

  3. Check status: sudo service nfs-kernel-server status

Status/start/stop/restart: sudo service nfs-kernel-server status|start|stop|restart

  4. Create the shared network folder: sudo mkdir /home/nfs

  5. Set the directory and permissions:

Open /etc/exports and append:

/home/nfs *(rw,sync,no_root_squash)

/home/nfs is the NFS shared directory; it can be mounted over NFS as the development board's root filesystem
* means any client may mount this directory
rw means clients mounting this directory get read and write access
sync means writes are committed to disk before the server replies
no_root_squash means a client mounting this directory keeps root privileges on the host
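
After editing /etc/exports you can apply and inspect the export table without a full service restart; a minimal sketch using the standard exportfs and showmount utilities:

sudo exportfs -ra         # re-read /etc/exports and apply it
sudo exportfs -v          # list active exports with their options
showmount -e localhost    # confirm the share is visible to clients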

  6. Restart NFS: sudo service nfs-kernel-server restart

  7. Test NFS by mounting the shared folder onto a local directory /home/test (create it first with sudo mkdir /home/test):

sudo mount -t nfs -o nolock localhost:/home/nfs /home/test

If the contents of /home/test now match the nfs directory, the mount succeeded.
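
A quick way to verify the mount end to end is to create a file on one side and look for it on the other; a minimal sketch:

sudo touch /home/nfs/hello.txt   # create a file in the shared directory
ls /home/test                    # hello.txt should appear under the mount point
sudo umount /home/test           # clean up the test mount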

  8. Mount the host folder from the ARM development board

Log in to the ARM board over the serial console and mount: mount -t nfs -o nolock 192.168.15.124:/home/nfs /mnt

192.168.15.124 is the host's IP address; the ARM board's IP address is 192.168.15.95. Check under /mnt: the host folder is mounted successfully.

If it fails with: mount: wrong fs type, bad option, bad superblock on 125.64.41.244

the reason is that mount.nfs is not installed; install nfs-common (which provides mount.nfs) and the wrong fs type, bad option, bad superblock error goes away:

apt-get update

apt-get install nfs-common

  9. Unmounting with umount

When the mount is no longer needed, unmount it with, for example, sudo umount /mnt. The unmount sometimes fails because a process is still using the mount ("umount: /mnt: device is busy"). In that case, sudo umount -l /mnt performs a lazy unmount once the mount becomes idle, or you can locate the offending processes and kill them before unmounting with fuser -km /mnt, as sketched below. See the fuser documentation for details; it is frequently used to find which processes hold a file or socket open and to kill them.
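
A minimal sketch of the fuser workflow before a forced unmount (flags as in the standard psmisc fuser):

fuser -vm /mnt     # list the processes using the mount, verbosely
fuser -km /mnt     # send SIGKILL to every process using the mount
sudo umount /mnt   # now the unmount should succeed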

  10. Note: the ARM board can mount its root filesystem over NFS and boot from it. This is very convenient during development and debugging, since no SD card or other boot medium is required, and the board's effective storage becomes much larger because it lives on a higher-capacity device such as the PC. This feature has not actually been run here; it will be tried later.
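
Since NFS root was not actually tried here, the following is only a hypothetical U-Boot sketch of the usual approach; the console device, network layout, and exact ip= syntax are board- and kernel-specific (the IP addresses reuse the ones above):

setenv bootargs 'console=ttySAC0,115200 root=/dev/nfs rw nfsroot=192.168.15.124:/home/nfs ip=192.168.15.95:192.168.15.124::255.255.255.0::eth0:off'
saveenv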

k8s Web Management: Dashboard

Make sure the Kubernetes environment is healthy.
See the official documentation.
The dashboard manifests are published on GitHub: https://github.com/kubernetes/dashboard
For the older v1.10.1 dashboard: wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml

Or download it manually:
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
Edit the configuration file:
vim recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort        # add this field
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30443   # add this field
  selector:
    k8s-app: kubernetes-dashboard

# Many browsers cannot use the auto-generated certificate, so we create our own; comment out the kubernetes-dashboard-certs Secret declaration:
#apiVersion: v1
#kind: Secret
#metadata:
#  labels:
#    k8s-app: kubernetes-dashboard
#  name: kubernetes-dashboard-certs
#  namespace: kubernetes-dashboard
#type: Opaque
Create the certificate:
mkdir dashboard-certs
cd dashboard-certs/
# create the namespace (the yaml creates it automatically, so this step is optional)
kubectl create namespace kubernetes-dashboard
# generate the private key
openssl genrsa -out dashboard.key 2048
# create a certificate signing request
openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'
# self-sign the certificate (-days takes effect on this step, not on the req step)
openssl x509 -req -days 36000 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
# create the kubernetes-dashboard-certs Secret
kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard
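
To confirm the Secret was created correctly, it can be inspected before installing the dashboard:

kubectl get secret kubernetes-dashboard-certs -n kubernetes-dashboard
kubectl describe secret kubernetes-dashboard-certs -n kubernetes-dashboard   # should list dashboard.crt and dashboard.key under Data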

Upload the required images to the master; they can be pre-pulled, as sketched below:
kubernetesui/dashboard:v2.0.0-beta8
kubernetesui/metrics-scraper:v1.0.2
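
A minimal pre-pull sketch, assuming Docker is the container runtime on each node:

docker pull kubernetesui/dashboard:v2.0.0-beta8
docker pull kubernetesui/metrics-scraper:v1.0.2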
Install:
kubectl apply -f recommended.yaml
# check the results
kubectl get pods -A -o wide
kubectl get service -n kubernetes-dashboard -o wide

Create a dashboard administrator:

cat > dashboard-admin.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard
EOF

kubectl create -f dashboard-admin.yaml

Grant the user permissions:

cat > dashboard-admin-bind-cluster-role.yaml <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-bind-cluster-role
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
EOF

kubectl create -f dashboard-admin-bind-cluster-role.yaml

Access URL: https://NodeIP:30443 (the dashboard serves HTTPS on the 8443 target port, so use https rather than http)

Create a service account and bind it to the default cluster-admin cluster role:

$ kubectl create serviceaccount dashboard-admin -n kube-system
$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
$ kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Log in to the Dashboard using the token printed in the output:

Name:         dashboard-admin-token-nlhcc
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 392bdc7a-4032-4ef1-b5b7-f8d8a816b3b2

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Im9IXzlYU1d0Ukhsc194aWk5R29TV2h3WGRkaF9LMEVOVWxpRW1IYXBUNUEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tbmxoY2MiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzkyYmRjN2EtNDAzMi00ZWYxLWI1YjctZjhkOGE4MTZiM2IyIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.Jd8vxPSJWvA8vwxgMZ-uSMGPHh7lY2U91Ui5mAZXH25ThSbbbHJotK1A5h6vu4XreGESBiGMKrI1sZViI4ZhaSFt2e25KrwhliYRxEZaJ5hRsBFdxc8sU16UJX9ctHMQ9RbnZyhY8gL7s2Fmz18Keowa5e-bJL7dAyeqH9WtUi_liDZHIKLtf1EtnmOE-NFxGJ7NwZYS6ZsMUXu0e0XkuhkQRE8gVof1QxuJGxtVCw0V8dNCIgzBbbpkSEWXqzHVM5Cceaf888GXqjryvIHJ-UGvKoVc2m_MRpIqLRqjmsHCFGDTFdrWk0XQDT1NcS5jAK6YJ6WW6lhrj5c65puSDQ

Install metrics-server

Pull the image on each Node:

docker pull bluersw/metrics-server-amd64:v0.3.6
docker tag bluersw/metrics-server-amd64:v0.3.6 k8s.gcr.io/metrics-server-amd64:v0.3.6

Run the installation on the Master:

git clone https://github.com/kubernetes-incubator/metrics-server.git
cd metrics-server/deploy/1.8+/
Edit metrics-server-deployment.yaml:

        image: k8s.gcr.io/metrics-server-amd64   # add the following lines under the image field
        command:
        - /metrics-server
        - --metric-resolution=30s
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP

Find runAsNonRoot: true and change it to runAsNonRoot: false.

kubectl create -f .

If the image cannot be pulled, change it to image: mirrorgooglecontainers/metrics-server-amd64:v0.3.6
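
Once the metrics-server pod is running, resource metrics should become queryable after a minute or two; a quick sanity check:

kubectl top nodes     # per-node CPU/memory once metrics are being collected
kubectl top pods -A   # per-pod metrics across all namespaces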

Note: the three kinds of Service ports (illustrated below)
port: the port the Service exposes on the cluster IP; the entry point for clients inside the cluster
nodePort: the mechanism k8s provides for clients outside the cluster to reach the Service
targetPort: the port on the container inside the pod; traffic arriving on port and nodePort is ultimately routed by kube-proxy to the targetPort of a backend pod
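
A minimal Service sketch showing how the three ports relate (the name and selector are illustrative only; the numbers mirror the dashboard example above):

apiVersion: v1
kind: Service
metadata:
  name: example-svc    # hypothetical name
spec:
  type: NodePort
  selector:
    app: example
  ports:
  - port: 443          # cluster-internal entry: <cluster-ip>:443
    targetPort: 8443   # container port the traffic finally reaches
    nodePort: 30443    # external entry: <node-ip>:30443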

vim /etc/kubernetes/dashboard-deployment.yaml
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
    # Keep the name in sync with image version and
    # gce/coreos/kube-manifests/addons/dashboard counterparts
      name: kubernetes-dashboard-latest
      namespace: kube-system
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            k8s-app: kubernetes-dashboard
            version: latest
            kubernetes.io/cluster-service: "true"
        spec:
          containers:
          - name: kubernetes-dashboard
            image: docker.io/bestwu/kubernetes-dashboard-amd64:v1.6.3
            imagePullPolicy: IfNotPresent
            resources:
              # keep request = limit to keep this container in guaranteed class
              limits:
                cpu: 100m
                memory: 50Mi
              requests:
                cpu: 100m
                memory: 50Mi
            ports:
            - containerPort: 9090
            args:
            - --apiserver-host=http://10.3.20.100:8080
            livenessProbe:
              httpGet:
                path: /
                port: 9090
              initialDelaySeconds: 30
              timeoutSeconds: 30

[root@master-ldy ~]# vim /etc/kubernetes/dashboard-service.yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: kubernetes-dashboard
      namespace: kube-system
      labels:
        k8s-app: kubernetes-dashboard
        kubernetes.io/cluster-service: "true"
    spec:
      selector:
        k8s-app: kubernetes-dashboard
      ports:
      - port: 80
        targetPort: 9090


k8s Management Platform: Rancher


Which Docker versions work with Rancher and Kubernetes?
See: http://rancher.com/docs/rancher/v1.6/zh/hosts/#docker

New version v2.x: https://rancher.com/

swapoff -a   # disable swap first (required by Kubernetes)

docker run --name rancher --privileged -d --restart=unless-stopped -p 8080:80 -p 8443:443 -v ~/rancher/data:/var/lib/rancher/ rancher/rancher:v2.5.7

docker run -d --name rancher \
  -v ~/rancher/data:/var/lib/rancher/ \
  --restart=unless-stopped \
  --privileged \
  -p 8080:80 -p 8443:443 \
  -e CATTLE_SYSTEM_CATALOG=bundled \
  rancher/rancher:stable

To use your own certificate directory, add these options before the image name and append --no-cacerts after the image:

-v ~/rancher/certs:/container/certs \
-e SSL_CERT_DIR="/container/certs" \
--no-cacerts

Or mount the certificate, private key, and CA files individually, again with --no-cacerts after the image:

-v ~/rancher/certs/sercert.pem:/etc/rancher/ssl/cert.pem \
-v ~/rancher/certs/serprivkey.pem:/etc/rancher/ssl/key.pem \
-v ~/rancher/certs/cacert.pem:/etc/rancher/ssl/cacerts.pem \
--no-cacerts
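
Assembled, the variant with individually mounted certificate files would look roughly like this (a sketch built from the fragments above; whether --no-cacerts is appropriate depends on whether your certificates come from a recognized CA):

docker run -d --name rancher \
  -v ~/rancher/data:/var/lib/rancher/ \
  -v ~/rancher/certs/sercert.pem:/etc/rancher/ssl/cert.pem \
  -v ~/rancher/certs/serprivkey.pem:/etc/rancher/ssl/key.pem \
  -v ~/rancher/certs/cacert.pem:/etc/rancher/ssl/cacerts.pem \
  --restart=unless-stopped \
  --privileged \
  -p 8080:80 -p 8443:443 \
  rancher/rancher:stable --no-cacerts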

scp -r D:/Desktop/temp/ ubuntu@119.29.57.229:~/rancher/certs/

Expired certificates
docker exec -ti 5b4b6e274b31 mv /var/lib/rancher/management-state/certs/bundle.json /var/lib/rancher/management-state/certs/bundle.json-bak

Alternatively, delete the expired certificates (the .crt and .key files, roughly 14 of them) under /var/lib/rancher/k3s/server/tls/ inside the container; new certificates are then generated, which resolves the expiry problem.
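
A cautious sketch of that cleanup, assuming the Rancher container ID from above and that Rancher regenerates the certificates on restart; moving the files aside rather than deleting them keeps a backup:

docker exec -ti 5b4b6e274b31 sh -c \
  'cd /var/lib/rancher/k3s/server/tls && mkdir -p bak && mv *.crt *.key bak/'
docker restart 5b4b6e274b31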

Before redeploying or upgrading, clean up:
sudo rm -rf /var/lib/etcd/member/
After installation, the Rancher management UI can be reached at http://ip:8080

K8S China-region image template configuration
Open the environment management page and click Add Environment Template. Name the template, enter its configuration page, and point it at the Alibaba mirror registry:
Private registry address: registry.cn-shenzhen.aliyuncs.com
Add-ons component namespace: rancher_cn
kubernetes-helm namespace: rancher_cn
Pod Infra Container Image: rancher_cn/pause-amd64:3.0
Save the template, then create a Kubernetes environment and add hosts.

Before or during (re)deployment, clean all traces of the old environment with the following commands:
docker rm -f `docker ps -a -q`
docker system prune -f
docker volume rm $(docker volume ls -q)

for mount in $(mount | grep tmpfs | grep '/var/lib/kubelet' | awk '{ print $3 }') /var/lib/kubelet /var/lib/rancher; do umount $mount; done

sudo rm -rf /etc/ceph \
       /etc/cni/* \
       /opt/cni/* \
       /opt/rke \
       /etc/kubernetes \
       /run/secrets/kubernetes.io \
       /run/calico/* \
       /run/flannel/* \
       /var/lib/calico/* \
       /var/lib/cni/* \
       /var/lib/kubelet/* \
       /var/lib/rancher/rke/log \
       /var/log/containers/* \
       /var/log/pods/* \
       /var/run/calico/* \
       /var/lib/rancher/* \
       /var/lib/docker/* \
       /var/lib/etcd/* \
       ~/rancher/*

ip link del flannel.1
ip link del cni0

sudo rm -f /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db
sudo systemctl restart containerd
sudo systemctl restart docker

iptables -F && iptables -t nat -F

rke remove
Reboot.