Deploying a highly available Kubernetes cluster with kubespray

Source: reprinted

1. Environment

Prepare the system environment; this setup uses CentOS 7.3.


node01 192.168.1.1


node02 192.168.1.2


node03 192.168.1.3


Set the hostnames


hostnamectl --static set-hostname node01
hostnamectl --static set-hostname node02
hostnamectl --static set-hostname node03

Disable the firewall and SELinux


systemctl disable firewalld
systemctl stop firewalld
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0   # apply now; the config change above only takes effect after a reboot

Update the package repositories


rpm -qa | grep epel-release || rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

Install Docker


cd /etc/yum.repos.d
vim docker.repo
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
yum install -y docker-engine-1.13.1-1.el7.centos.x86_64

A simple Docker tweak: Docker stores its data under /var/lib/docker by default, which can fill up the system disk, so the path can be relocated.
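If Docker has already run and populated /var/lib/docker, a bare symlink is not enough: the existing data has to be moved first. A minimal sketch of the relocation (the migrate_docker_root helper name is my own; run as root with the docker daemon stopped):

```shell
#!/bin/sh
# Illustrative helper (not from the original article): move Docker's data
# directory onto a bigger disk and leave a symlink at the old location.
migrate_docker_root() {
    # $1: current docker root (e.g. /var/lib/docker)
    # $2: target directory on the data disk (e.g. /data/docker)
    mkdir -p "$2"
    if [ -d "$1" ] && [ ! -L "$1" ]; then
        # preserve any image layers already written
        cp -a "$1"/. "$2"/
        rm -rf "$1"
    fi
    ln -sfn "$2" "$1"
}

# Typical invocation (commented out because it is destructive):
# systemctl stop docker
# migrate_docker_root /var/lib/docker /data/docker
# systemctl start docker
```

With a freshly installed Docker that has never started, the plain symlink below is sufficient.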


mkdir -p /data/docker
ln -s /data/docker /var/lib/docker

2. Install ansible

Install ansible on the control machine, ansible-client.


# install python and epel
yum install -y epel-release python-pip python34 python34-pip
# install ansible (the epel repository must be installed first)
yum install -y ansible

Ansible does not support Windows as a control machine; installation instructions for other systems can be found on the official Ansible website.


The authoritative Ansible guide (Chinese translation): http://ansible-tran.readthedocs.io/en/latest/


3. Set up passwordless login

On ansible-client, run ssh-keygen -t rsa to generate a key pair


ssh-keygen -t rsa -P ''

Copy ~/.ssh/id_rsa.pub to every other node, so that ansible-client can log in to all nodes without a password


IP=(192.168.1.1 192.168.1.2 192.168.1.3)
for x in ${IP[*]}; do ssh-copy-id -i ~/.ssh/id_rsa.pub $x; done

4. Download the kubespray source

This document uses version 2.1.2.


wget https://github.com/kubernetes-incubator/kubespray/archive/v2.1.2.tar.gz

After unpacking kubespray, the image references in the source need to be replaced with jicki's Docker Hub mirrors: change gcr.io/google_containers to jicki.


List of files to modify


./kubespray-2.1.2/extra_playbooks/roles/dnsmasq/templates/dnsmasq-autoscaler.yml
./kubespray-2.1.2/extra_playbooks/roles/download/defaults/main.yml
./kubespray-2.1.2/extra_playbooks/roles/kubernetes-apps/ansible/defaults/main.yml
./kubespray-2.1.2/roles/download/defaults/main.yml
./kubespray-2.1.2/roles/dnsmasq/templates/dnsmasq-autoscaler.yml
./kubespray-2.1.2/roles/kubernetes-apps/ansible/defaults/main.yml
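Rather than editing each file by hand, the registry substitution can be scripted. A minimal sketch (the replace_registry helper is my naming; the file list assumes the v2.1.2 layout shown above):

```shell
#!/bin/sh
# Sketch: rewrite the gcr.io/google_containers prefix to the jicki mirror,
# in place, across the kubespray files that reference those images.
replace_registry() {
    # $1: file whose image references should point at the jicki mirror
    sed -i 's#gcr\.io/google_containers#jicki#g' "$1"
}

for f in \
    kubespray-2.1.2/roles/download/defaults/main.yml \
    kubespray-2.1.2/roles/dnsmasq/templates/dnsmasq-autoscaler.yml \
    kubespray-2.1.2/roles/kubernetes-apps/ansible/defaults/main.yml \
    kubespray-2.1.2/extra_playbooks/roles/download/defaults/main.yml \
    kubespray-2.1.2/extra_playbooks/roles/dnsmasq/templates/dnsmasq-autoscaler.yml \
    kubespray-2.1.2/extra_playbooks/roles/kubernetes-apps/ansible/defaults/main.yml
do
    if [ -f "$f" ]; then
        replace_registry "$f"
    fi
done
```

Note that the images whose tag differs on jicki (fluentd-elasticsearch, kibana, elasticsearch, tiller, defaultbackend, per the list below) still need their version numbers adjusted by hand.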

The images involved; where a replacement version is noted, that is the tag available under jicki


quay.io/coreos/hyperkube:v1.6.12_coreos.0
quay.io/coreos/etcd:v3.2.4
quay.io/calico/ctl:v1.4.0
quay.io/calico/node:v2.4.1
quay.io/calico/cni:v1.10.0
quay.io/kube-policy-controller:v0.7.0
quay.io/calico/routereflector:v0.3.0
quay.io/coreos/flannel:v0.8.0
quay.io/coreos/flannel-cni:v0.2.0
quay.io/l23network/k8s-netchecker-agent:v1.0
quay.io/l23network/k8s-netchecker-server:v1.0
weaveworks/weave-kube:2.0.1
weaveworks/weave-npc:2.0.1
gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3
gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.1.1
gcr.io/google_containers/fluentd-elasticsearch:1.22 (replaced with 1.24)
gcr.io/google_containers/kibana:v4.6.1 (replaced with 6)
gcr.io/google_containers/elasticsearch:v2.4.1 (replaced with 2)
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.2
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.2
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.2
gcr.io/google_containers/pause-amd64:3.0
gcr.io/kubernetes-helm/tiller:v2.2.2 (replaced with 2.7.0)
gcr.io/google_containers/heapster-grafana-amd64:v4.4.1
gcr.io/google_containers/heapster-amd64:v1.4.0
gcr.io/google_containers/heapster-influxdb-amd64:v1.1.1
gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.11
gcr.io/google_containers/defaultbackend:1.3 (replaced with 1.4)

Later it turned out that only the master node could download the images; the other nodes could not.


So the images have to be pulled and re-tagged one by one:


docker pull jicki/pause-amd64:3.0
docker tag jicki/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
docker pull jicki/k8s-dns-kube-dns-amd64:1.14.2
docker tag jicki/k8s-dns-kube-dns-amd64:1.14.2 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.2
docker pull jicki/kubernetes-dashboard-amd64:v1.6.3
docker tag jicki/kubernetes-dashboard-amd64:v1.6.3 gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3
docker pull jicki/cluster-proportional-autoscaler-amd64:1.1.1
docker tag jicki/cluster-proportional-autoscaler-amd64:1.1.1 gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.1.1
docker pull jicki/fluentd-elasticsearch:1.24
docker tag jicki/fluentd-elasticsearch:1.24 gcr.io/google_containers/fluentd-elasticsearch:1.24
docker pull jicki/kibana:6
docker tag jicki/kibana:6 gcr.io/google_containers/kibana:6
docker pull jicki/elasticsearch:2
docker tag jicki/elasticsearch:2 gcr.io/google_containers/elasticsearch:2
docker pull jicki/k8s-dns-sidecar-amd64:1.14.2
docker tag jicki/k8s-dns-sidecar-amd64:1.14.2 gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.2
docker pull jicki/k8s-dns-kube-dns-amd64:1.14.2
docker tag jicki/k8s-dns-kube-dns-amd64:1.14.2 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.2
docker pull jicki/k8s-dns-dnsmasq-nanny-amd64:1.14.2
docker tag jicki/k8s-dns-dnsmasq-nanny-amd64:1.14.2 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.2
docker pull jicki/tiller:2.7.0
docker tag jicki/tiller:2.7.0 gcr.io/kubernetes-helm/tiller:2.7.0
docker pull jicki/heapster-grafana-amd64:v4.4.1
docker tag jicki/heapster-grafana-amd64:v4.4.1 gcr.io/google_containers/heapster-grafana-amd64:v4.4.1
docker pull jicki/heapster-amd64:v1.4.0
docker tag jicki/heapster-amd64:v1.4.0 gcr.io/google_containers/heapster-amd64:v1.4.0
docker pull jicki/heapster-influxdb-amd64:v1.1.1
docker tag jicki/heapster-influxdb-amd64:v1.1.1 gcr.io/google_containers/heapster-influxdb-amd64:v1.1.1
docker pull jicki/nginx-ingress-controller:0.9.0-beta.11
docker tag jicki/nginx-ingress-controller:0.9.0-beta.11 gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.11
docker pull jicki/defaultbackend:1.4
docker tag jicki/defaultbackend:1.4 gcr.io/google_containers/defaultbackend:1.4
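The repetitive pull-and-tag commands above can be driven from a list. In this sketch the mirror helper and the DOCKER dry-run fallback are my own additions, not part of the original article:

```shell
#!/bin/sh
# Sketch: pull each mirror from jicki and re-tag it under the gcr.io name
# the manifests expect. When the docker daemon is unusable, DOCKER falls
# back to echo, turning the script into a dry run.
DOCKER="${DOCKER:-docker}"
"$DOCKER" info >/dev/null 2>&1 || DOCKER=echo

mirror() {
    # $1: image name with tag, e.g. pause-amd64:3.0
    "$DOCKER" pull "jicki/$1"
    "$DOCKER" tag "jicki/$1" "gcr.io/google_containers/$1"
}

# A few of the images from the list above; extend as needed.
for img in \
    pause-amd64:3.0 \
    k8s-dns-kube-dns-amd64:1.14.2 \
    k8s-dns-sidecar-amd64:1.14.2 \
    k8s-dns-dnsmasq-nanny-amd64:1.14.2 \
    kubernetes-dashboard-amd64:v1.6.3
do
    mirror "$img"
done
```

Tiller is the one exception: it lives under gcr.io/kubernetes-helm rather than gcr.io/google_containers, so it needs its own tag command as shown earlier.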
5. Generate the configuration
CONFIG_FILE=inventory/inventory.cfg python3 contrib/inventory_builder/inventory.py 192.168.1.1 192.168.1.2 192.168.1.3
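For three nodes, inventory_builder writes something like the following to inventory/inventory.cfg. This is an illustrative sketch based on kubespray 2.1.x group names; the exact hostnames and master assignment in your generated file may differ:

```ini
[all]
node1    ansible_host=192.168.1.1 ip=192.168.1.1
node2    ansible_host=192.168.1.2 ip=192.168.1.2
node3    ansible_host=192.168.1.3 ip=192.168.1.3

[kube-master]
node1
node2

[kube-node]
node1
node2
node3

[etcd]
node1
node2
node3

[k8s-cluster:children]
kube-node
kube-master
```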
6. Start the deployment
ansible-playbook -i inventory/inventory.cfg cluster.yml -b -v --private-key=~/.ssh/id_rsa

Common problems
fatal: [node1]: FAILED! => {"failed": true, "msg": "The conditional check '{%- set certs = {'sync': False} -%}
{% if gen_node_certs[inventory_hostname] or
(not etcdcert_node.results[0].stat.exists|default(False)) or
(not etcdcert_node.results[1].stat.exists|default(False)) or
(etcdcert_node.results[1].stat.checksum|default('') != etcdcert_master.files|selectattr(\"path\", \"equalto\", etcdcert_node.results[1].stat.path)|map(attribute=\"checksum\")|first|default('')) -%}
  {%- set _ = certs.update({'sync': True}) -%}
{% endif %}
{{ certs.sync }}' failed. The error was: no test named 'equalto'

The error appears to have been in '/root/kubespray-2.1.2/roles/etcd/tasks/check_certs.yml': line 57, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

- name: \"Check_certs | Set 'sync_certs' to true\"
  ^ here"}

Upgrade Jinja2 (the equalto test used in check_certs.yml was only added in Jinja2 2.8)


pip install --upgrade Jinja2
Cleaning up after a failed run
rm -rf /etc/kubernetes/
rm -rf /var/lib/kubelet
rm -rf /var/lib/etcd
rm -rf /usr/local/bin/kubectl
rm -rf /etc/systemd/system/calico-node.service
rm -rf /etc/systemd/system/kubelet.service
systemctl stop etcd.service
systemctl disable etcd.service
systemctl stop calico-node.service
systemctl disable calico-node.service
docker stop $(docker ps -q)
docker rm $(docker ps -a -q)
service docker restart

References

Deploying a highly available Kubernetes cluster with kubespray: http://blog.csdn.net/zhuchuangang/article/details/77712614


kargo kubernetes 1.6.4 https://jicki.me/2017/06/06/kargo-k8s-1.6.4/

