Kubernetes
Kubeadm is an important tool for managing the cluster lifecycle, from creation through configuration to upgrades. It bootstraps production clusters on existing hardware and configures the core Kubernetes components following best practices, giving new nodes a secure and simple join flow and supporting easy upgrades. With the release of Kubernetes 1.13, kubeadm has officially reached GA.
Docker Setup
Remove any old versions that may be present:

```shell
sudo apt-get remove docker docker-engine docker-ce docker.io
```
Update the apt package index:

```shell
sudo apt-get update
```
Install the following packages so that apt can use a repository over HTTPS:

```shell
sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
```
Add Docker's official GPG key:

```shell
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
```
Set up the stable repository ($(lsb_release -cs) expands to the codename of the running Ubuntu release, bionic on 18.04):

```shell
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
```
Update the apt package index again:

```shell
sudo apt-get update
```
Kubernetes 1.13 has been validated against Docker versions 1.11.1, 1.12.1, 1.13.1, 17.03, 17.06, 17.09, and 18.06. The minimum supported Docker version is 1.11.1 and the highest is 18.06, while the latest Docker release is already 18.09, so we pin the install to 18.06.1-ce:

```shell
sudo apt install docker-ce=18.06.1~ce~3-0~ubuntu
```
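The supported range above can also be checked mechanically. A minimal sketch using sort -V for version comparison (the supported() helper is hypothetical, not part of kubeadm; 18.06.99 is an artificial upper bound so that every 18.06.x patch release passes):

```shell
# Return success if a Docker version is inside the range kubeadm 1.13
# has validated (1.11.1 up to and including the 18.06.x series).
supported() {
  v="$1"
  lowest="1.11.1"
  highest="18.06.99"   # artificial cap covering all 18.06.x releases
  # sort -V orders version strings numerically; the candidate is in range
  # if it sorts at or after $lowest and at or before $highest.
  [ "$(printf '%s\n' "$lowest" "$v" | sort -V | head -n1)" = "$lowest" ] &&
    [ "$(printf '%s\n' "$v" "$highest" | sort -V | head -n1)" = "$v" ]
}

supported 18.06.1 && echo "18.06.1: supported"
supported 18.09 || echo "18.09: unsupported"
```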
Environment Setup
First, prepare two virtual machines (at least 2 CPU cores each); here we use two Alibaba Cloud Ubuntu 18.04 servers:
- 127.0.0.0 k8smaster
- 127.0.0.1 k8s-node1
Disable Swap
Since version 1.8, Kubernetes requires swap to be disabled; with the default configuration, kubelet will not start if swap is on.
Edit the /etc/fstab file:

```shell
sudo vim /etc/fstab
```

```
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/vda1 during installation
UUID=a377b828-db5d-4fdd-90e5-5d092e7310dc /  ext4  errors=remount-ro  0  1
#/swapfile  none  swap  sw  0  0
/dev/fd0  /media/floppy0  auto  rw,user,noauto,exec,utf8  0  0
```

As shown above, comment out the line containing /swapfile, then run:

```shell
sudo swapoff -a
```
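The manual edit can also be scripted. A sketch with sed, shown here against a sample file rather than the real /etc/fstab (on a node you would run the same expression with sed -i on /etc/fstab, after backing it up):

```shell
# Build a small sample fstab to work on.
cat > /tmp/fstab.sample <<'EOF'
UUID=a377b828-db5d-4fdd-90e5-5d092e7310dc / ext4 errors=remount-ro 0 1
/swapfile none swap sw 0 0
EOF

# Prefix any uncommented entry whose type column is "swap" with '#'.
sed -E '/^[^#].*[[:space:]]swap[[:space:]]/ s/^/#/' /tmp/fstab.sample
```

Only the /swapfile line gains a leading #; the root filesystem entry is untouched.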
DNS Configuration
On Ubuntu 18.04 and later, DNS is fully managed by systemd: the stub resolver listens on 127.0.0.53:53 and is configured through /etc/systemd/resolved.conf. This setup occasionally causes name-resolution failures, which can be fixed as follows. Edit /etc/systemd/resolved.conf:

```
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See resolved.conf(5) for details

[Resolve]
DNS=1.1.1.1 1.0.0.1
FallbackDNS=
Domains=
LLMNR=no
MulticastDNS=no
DNSSEC=no
Cache=yes
DNSStubListener=yes
```

DNS= sets the IP addresses of the name servers, here 1.1.1.1 and 1.0.0.1. LLMNR=no disables LLMNR (Link-Local Multicast Name Resolution), which systemd-resolved would otherwise serve on port 5355. Restart the service with sudo systemctl restart systemd-resolved to apply the changes.

Installing kubeadm, kubelet and kubectl
kubeadm: the command-line tool that bootstraps the cluster.
kubelet: the core component that runs on every node in the cluster, performing operations such as starting pods and containers.
kubectl: the command-line tool for operating the cluster.
First add the apt key:

```shell
sudo apt update && sudo apt install -y apt-transport-https curl
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
```

Add the Kubernetes source:
```shell
sudo vim /etc/apt/sources.list.d/kubernetes.list
```

```
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
```

Install:
```shell
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```
Cluster Setup
Initializing the Master Node
The default registry for Kubernetes is k8s.gcr.io, and gcr.io is plainly unreachable from mainland China, so installation was quite painful before kubeadm v1.13. Version 1.13 finally removed that pain point by adding an --image-repository flag. Its default value is k8s.gcr.io; we point it at the domestic mirror registry.aliyuncs.com/google_containers, after which everything else follows the official documentation. We also need the --kubernetes-version flag: its default value, stable-1, triggers a download of the latest version number from https://dl.k8s.io/release/stable-1.txt, so pinning it to a fixed version (latest: v1.13.1) skips that network request.
Hands-on:
```shell
# Use the Calico network: --pod-network-cidr=192.168.0.0/16
sudo kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.13.1 --pod-network-cidr=192.168.0.0/16
```
Output:

```
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.17.20.210]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [172.17.20.210 127.0.0.1 ::1]
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [172.17.20.210 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 42.003645 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 6pkrlg.8glf2fqpuf3i489m
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 127.0.0.0:6443 --token 6pkrlg.8glf2fqpuf3i489m --discovery-token-ca-cert-hash sha256:eebfe256113bee397b218ba832f412273ae734bd4686241fb910885d26efd222
```

Operating kubectl as a Non-root User
```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
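Alternatively, kubeadm's documentation notes that the root user can simply point KUBECONFIG at the admin kubeconfig instead of copying it (a transient, per-shell setting):

```shell
# Point kubectl at the admin kubeconfig for this shell session only.
# Unlike the copy above, this does not persist across logins.
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "$KUBECONFIG"
```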
Installing a Network Plugin
For pods to communicate with one another, a network plugin must be installed, and it must be installed before deploying any applications; CoreDNS will not start until a network plugin is in place. For the complete list of plugins, see Networking and Network Policy.
Before installing one, check the current state of the pods:

```shell
kubectl get pods --all-namespaces
```

Output:

```
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-78d4cf999f-6pgfr         0/1     Pending   0          87s
kube-system   coredns-78d4cf999f-m9kgs         0/1     Pending   0          87s
kube-system   etcd-master                      1/1     Running   0          47s
kube-system   kube-apiserver-master            1/1     Running   0          38s
kube-system   kube-controller-manager-master   1/1     Running   0          55s
kube-system   kube-proxy-mkg24                 1/1     Running   0          87s
kube-system   kube-scheduler-master            1/1     Running   0          41s
```

As shown above, the CoreDNS pods are Pending because no network plugin has been installed yet. Install the Calico plugin with the following commands:

```shell
kubectl apply -f http://mirror.faasx.com/k8s/calico/v3.3.2/rbac-kdd.yaml
kubectl apply -f http://mirror.faasx.com/k8s/calico/v3.3.2/calico.yaml
```

Wait a moment, then run kubectl get pods --all-namespaces again to check how the plugin installation is going:

```shell
kubectl get pods --all-namespaces
```

Output:

```
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   calico-node-x96gn                2/2     Running   0          47s
kube-system   coredns-78d4cf999f-6pgfr         1/1     Running   0          54m
kube-system   coredns-78d4cf999f-m9kgs         1/1     Running   0          54m
kube-system   etcd-master                      1/1     Running   3          53m
kube-system   kube-apiserver-master            1/1     Running   3          53m
kube-system   kube-controller-manager-master   1/1     Running   3          53m
kube-system   kube-proxy-mkg24                 1/1     Running   2          54m
kube-system   kube-scheduler-master            1/1     Running   3          53m
```

As shown above, every STATUS is now Running, which means the installation succeeded; we can now join other nodes and deploy applications.

Master Isolation
By default, the cluster does not schedule ordinary pods on the master node. If you want the master to participate in scheduling (for example, in a single-machine cluster), remove its taint:

```shell
kubectl taint nodes --all node-role.kubernetes.io/master-
# Output
node/master untainted
```
Joining Nodes
On each worker node, run the following command as root; it is the command printed by kubeadm init on the master:
```shell
kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
```

In this case:

```shell
kubeadm join 127.0.0.0:6443 --token 6pkrlg.8glf2fqpuf3i489m --discovery-token-ca-cert-hash sha256:eebfe256113bee397b218ba832f412273ae734bd4686241fb910885d26efd222
```

If we have forgotten the master's join token, we can look it up with:
```shell
kubeadm token list
```

Output:

```
TOKEN                     TTL   EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
6pkrlg.8glf2fqpuf3i489m   22h   2018-12-07T13:46:33Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
```

By default a token is valid for 24 hours; if it has expired, generate a new one with:
```shell
kubeadm token create
```

Output:

```
u2mt59.tyqpo0v5wf05lx2q
```

If we are also missing the value of --discovery-token-ca-cert-hash, it can be regenerated with:
```shell
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
```

Output:

```
eebfe256113bee397b218ba832f412273ae734bd4686241fb910885d26efd222
```
On the master node, use the kubectl get nodes command to check the node status:

```shell
kubectl get nodes
```

Output:

```
NAME        STATUS   ROLES    AGE   VERSION
k8smaster   Ready    master   17m   v1.13.1
k8s-node1   Ready    <none>   15m   v1.13.1
```

As shown above, both nodes are Ready. All done!
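The --discovery-token-ca-cert-hash value is just the SHA-256 digest of the DER-encoded public key of the cluster CA. The openssl pipeline above can be tried end to end with a throwaway self-signed certificate standing in for /etc/kubernetes/pki/ca.crt:

```shell
# Generate a throwaway RSA key and self-signed certificate to act as a
# stand-in for the cluster CA certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -days 1 -subj "/CN=demo-ca" 2>/dev/null

# Same pipeline as in the section above:
# extract public key -> DER-encode -> SHA-256 -> strip the "(stdin)= " prefix.
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')

echo "$hash"   # a 64-character hex digest, like the one passed to kubeadm join
```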
Adding a New User on Linux
Add a new user:

```shell
sudo adduser [username]
```
Grant the user elevated privileges:

```shell
sudo vim /etc/sudoers
```

(Safer: use sudo visudo, which syntax-checks the file before saving.)

```
# User privilege specification
root        ALL=(ALL:ALL) ALL
[username]  ALL=(ALL:ALL) ALL   # add this line to give the new user root privileges
```
Configuring mosh
On Linux:

```shell
apt-get update
apt-get install mosh
```

On Mac:

```shell
brew install mosh
```
Check mosh status:

```shell
mosh-server
```

Output:

```
MOSH CONNECT 60001 v2nYjhYUv7/3SzNEFZX8ug

mosh-server (mosh 1.3.2) [build mosh-1.3.2-61-g60859e9-dirty]
Copyright 2012 Keith Winstein <mosh-devel@mit.edu>
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

[mosh-server detached, pid = 88215]
```

Here we can see that mosh-server is listening on UDP port 60001.
Connect to the server:

```shell
mosh <username>@IPaddress
```

You can add the -p flag to specify the port (by default mosh-server picks a UDP port in the 60000-61000 range).

A Big Gotcha
Be sure to open the corresponding UDP ports in the Alibaba Cloud console security group; note that it is UDP, not TCP.