zhangguanzhang's Blog

A highly available SSL consul cluster in practice

2019/09/27

A colleague had set up service auto-discovery with dnsmasq + consul, but it didn't use TLS, the consul server was a single point of failure, and the process was started with nohup instead of systemd. I did some research and worked through a proper setup here.

The deployment process has been written up as Ansible playbooks: https://github.com/zhangguanzhang/consul-tls-ansible

I spun up five cloud hosts, planned as below. If you have the resources, separate the roles, e.g. put the client on its own machine. If you don't separate them, you will have to configure the client's ports to differ from the server's; explore that yourself.

IP          Hostname  CPU  Memory  role    nodeName
172.19.0.3  consul1   4    8G      server  172.19.0.3
172.19.0.4  consul2   4    8G      server  172.19.0.4
172.19.0.5  consul3   4    8G      server  172.19.0.5
172.19.0.8  consul4   2    4G      client  172.19.0.8
172.19.0.9  consul5   2    4G      client  172.19.0.9

To implement service registration and discovery with Consul, you need to build a Consul cluster. In Consul's design, every node that provides services runs a Consul agent, and the set of all nodes running agents forms the Consul cluster. A Consul agent runs in one of two modes: Server or Client. This Server/Client distinction exists only at the Consul cluster level and has nothing to do with the application services built on top of the cluster. Agents running in Server mode maintain the cluster state; the official recommendation is that every Consul cluster have at least 3 agents running in Server mode, with no limit on the number of Client nodes.

Relevant ports

8300 – TCP: used by server agents to handle requests from other agents
8301 – TCP & UDP: used by every agent for gossip within the LAN
8302 – TCP & UDP: used by server agents for gossip with other servers across the WAN
8400 – deprecated since 0.8; used to be the RPC client port
8500 – TCP: the HTTP API / UI port, also used by the cli; since we use HTTPS here, we switch to 8501
8600 – TCP & UDP: DNS resolution
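If a host firewall sits between the nodes, these ports need to be opened. A sketch for firewalld (an assumption about your environment; the commands are echoed as a dry run so they are safe to inspect first, drop the echo to apply them):

```shell
# Ports from the table above; 8500 is replaced by 8501 since we use HTTPS.
PORTS="8300/tcp 8301/tcp 8301/udp 8302/tcp 8302/udp 8501/tcp 8600/tcp 8600/udp"
for p in $PORTS; do
  echo firewall-cmd --permanent --add-port=$p
done
echo firewall-cmd --reload
```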

Most of the steps in this article are executed on 172.19.0.3; copying files to the other machines and adapting the configuration on each target host is left to you.

Download the latest release and unpack it

The latest version at the time of writing is 1.6.1; downloads page: https://www.consul.io/downloads.html

wget https://releases.hashicorp.com/consul/1.6.1/consul_1.6.1_linux_amd64.zip
unzip consul_1.6.1_linux_amd64.zip
mv consul /usr/local/bin/
yum install -y epel-release
yum install -y bash-completion

tls

step 1: create the CA

For simplicity, I use Consul's built-in TLS helper here to create a basic CA. You only need one CA per datacenter, and you should generate all certificates on the same server that was used to create the CA.
The CA is valid for five years by default and the other certificates for one year, so pass -days= to set a longer validity:

consul tls ca create -days=36500
==> Saved consul-agent-ca.pem
==> Saved consul-agent-ca-key.pem

  • The CA certificate, consul-agent-ca.pem, contains the public key needed to verify Consul certificates and must therefore be distributed to every node that runs a consul agent.
  • The CA key, consul-agent-ca-key.pem, is used to sign certificates for Consul nodes and must be kept private. With this key, anyone can run Consul as a trusted server and access all Consul data, including ACL tokens.
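Distributing the CA certificate (and only the certificate, never the key) can be scripted. A hedged sketch; the host list and target path come from the plan above, and echo makes it a dry run, remove it to actually copy:

```shell
# Push consul-agent-ca.pem to every other node; the key stays on this machine.
HOSTS="172.19.0.4 172.19.0.5 172.19.0.8 172.19.0.9"
for h in $HOSTS; do
  echo scp consul-agent-ca.pem root@$h:/etc/consul.d/
done
```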

step 2: create certificates for the server role

The datacenter name defaults to dc1; set the other options as needed. Repeat this process on the same server where you created the CA until every server has its own certificate. The command can be called repeatedly and automatically increments the certificate and key number; you then distribute the certificates to the servers.
Since I have three consul servers, I run it three times:

$ consul tls cert create -server -dc=dc1 -days=36500
==> WARNING: Server Certificates grants authority to become a
server and access all state in the cluster including root keys
and all ACL tokens. Do not distribute them to production hosts
that are not server nodes. Store them as securely as CA keys.
==> Using consul-agent-ca.pem and consul-agent-ca-key.pem
==> Saved dc1-server-consul-0.pem
==> Saved dc1-server-consul-0-key.pem
$ consul tls cert create -server -dc=dc1 -days=36500
==> WARNING: Server Certificates grants authority to become a
server and access all state in the cluster including root keys
and all ACL tokens. Do not distribute them to production hosts
that are not server nodes. Store them as securely as CA keys.
==> Using consul-agent-ca.pem and consul-agent-ca-key.pem
==> Saved dc1-server-consul-1.pem
==> Saved dc1-server-consul-1-key.pem
$ consul tls cert create -server -dc=dc1 -days=36500
==> WARNING: Server Certificates grants authority to become a
server and access all state in the cluster including root keys
and all ACL tokens. Do not distribute them to production hosts
that are not server nodes. Store them as securely as CA keys.
==> Using consul-agent-ca.pem and consul-agent-ca-key.pem
==> Saved dc1-server-consul-2.pem
==> Saved dc1-server-consul-2-key.pem

  • To authenticate Consul servers, a server presents a special certificate that contains server.dc1.consul in its Subject Alternative Name. If verify_server_hostname is enabled, only agents presenting such a certificate are allowed to boot as servers. Without verify_server_hostname = true, an attacker could compromise a Consul client agent and restart it as a server in order to access all data in your datacenter. This is why server certificates are special, and only servers should be configured with them.
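You can inspect that SAN with openssl. Since the real dc1-server-consul-0.pem may not be at hand, this sketch generates a throwaway self-signed certificate carrying the same SAN and then inspects it; in practice, point the last command at your real server certificate instead (requires OpenSSL 1.1.1+ for -addext):

```shell
# Create a demo cert whose SAN mimics a Consul server certificate.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-server-key.pem -out /tmp/demo-server.pem \
  -subj "/CN=server.dc1.consul" \
  -addext "subjectAltName=DNS:server.dc1.consul,DNS:localhost,IP:127.0.0.1"
# Show the SAN block; a real server cert must contain server.dc1.consul here.
openssl x509 -noout -text -in /tmp/demo-server.pem | grep -A1 'Subject Alternative Name'
```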

step 3: create certificates for the client role

Since Consul 1.5.2 there is an alternative process that distributes certificates to clients automatically. To enable this newer feature, set auto_encrypt.

You can still generate certificates with consul tls cert create -client and distribute them manually; for datacenters with strict security requirements the existing workflow is still needed.

If you are running Consul 1.5.1 or earlier, you need to create a separate certificate for each client with consul tls cert create -client. Client certificates are also signed by your CA, but they do not carry the special Subject Alternative Name, which means that with verify_server_hostname enabled they cannot boot as servers.

I am above 1.5.2, so in theory I don't need a certificate per client: a client only needs consul-agent-ca.pem, fetches its certificate from the server automatically, keeps it in memory, and never persists it. In my tests, however, that did not work, so I generated client certificates anyway:

$ consul tls cert create -client -dc=dc1 -days=36500
==> Using consul-agent-ca.pem and consul-agent-ca-key.pem
==> Saved dc1-client-consul-0.pem
==> Saved dc1-client-consul-0-key.pem
$ consul tls cert create -client -dc=dc1 -days=36500
==> Using consul-agent-ca.pem and consul-agent-ca-key.pem
==> Saved dc1-client-consul-1.pem
==> Saved dc1-client-consul-1-key.pem

step 4: create the cli certificate

$ consul tls cert create -cli -dc=dc1 -days=36500
==> Using consul-agent-ca.pem and consul-agent-ca-key.pem
==> Saved dc1-cli-consul-0.pem
==> Saved dc1-cli-consul-0-key.pem

File list

$ ll
total 39088
-rw-r--r-- 1 root root 39965581 Sep 13 04:30 consul_1.6.1_linux_amd64.zip
-rw-r--r-- 1 root root 227 Oct 11 10:36 consul-agent-ca-key.pem
-rw-r--r-- 1 root root 1249 Oct 11 10:36 consul-agent-ca.pem
-rw-r--r-- 1 root root 227 Oct 11 11:47 dc1-cli-consul-0-key.pem
-rw-r--r-- 1 root root 1082 Oct 11 11:47 dc1-cli-consul-0.pem
-rw-r--r-- 1 root root 227 Oct 11 14:13 dc1-client-consul-0-key.pem
-rw-r--r-- 1 root root 1139 Oct 11 14:13 dc1-client-consul-0.pem
-rw-r--r-- 1 root root 227 Oct 11 17:35 dc1-client-consul-1-key.pem
-rw-r--r-- 1 root root 1143 Oct 11 17:35 dc1-client-consul-1.pem
-rw-r--r-- 1 root root 227 Oct 11 10:42 dc1-server-consul-0-key.pem
-rw-r--r-- 1 root root 1139 Oct 11 10:42 dc1-server-consul-0.pem
-rw-r--r-- 1 root root 227 Oct 11 10:43 dc1-server-consul-1-key.pem
-rw-r--r-- 1 root root 1139 Oct 11 10:43 dc1-server-consul-1.pem
-rw-r--r-- 1 root root 227 Oct 11 10:43 dc1-server-consul-2-key.pem
-rw-r--r-- 1 root root 1139 Oct 11 10:43 dc1-server-consul-2.pem

Configuration for server, client, and cli

Configuration is evaluated in the following order of precedence:

  • command-line arguments
  • environment variables
  • configuration files

When loading configuration, Consul reads files and directories in lexical order: for example, basic_config.json is processed before extra_config.json. Configuration can be written in HCL or JSON; HCL support is available in Consul 1.0 and later, and every configuration file now needs a .hcl or .json extension to indicate its format.
Consul reads configuration from /consul/config by default. To keep things tidy, I put the configuration files under /etc/consul.d/ and the data directory under /var/lib/consul/.
All flags are explained at https://www.cnblogs.com/sunsky303/p/9209024.html
Consul options can be given on the command line or in JSON files; to keep things tidy, pass as few command-line arguments as possible and put the bulk of the configuration in JSON files. The overall directory layout is:
$ tree /etc/consul.d/
/etc/consul.d/
├── cli
│   ├── path.sh    // environment variables needed by the cli
│   └── ssl
│       ├── dc1-cli-consul-0-key.pem
│       └── dc1-cli-consul-0.pem
├── consul-agent-ca.pem
└── server
    ├── conf.json
    └── ssl
        ├── dc1-server-consul-0-key.pem    // cert index 0 on the first server, 1 on the second
        └── dc1-server-consul-0.pem        // same as above; the names referenced in the config file below must match
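Creating that layout can be scripted. A sketch, written under /tmp here to stay side-effect free (use /etc/consul.d on a real host); the cp lines are commented out because the certificate files live wherever you generated them:

```shell
# Build the /etc/consul.d directory skeleton for a server node.
BASE=/tmp/consul.d
mkdir -p "$BASE/cli/ssl" "$BASE/server/ssl"
# cp consul-agent-ca.pem "$BASE/"
# cp dc1-cli-consul-0.pem dc1-cli-consul-0-key.pem "$BASE/cli/ssl/"
# cp dc1-server-consul-0.pem dc1-server-consul-0-key.pem "$BASE/server/ssl/"
find "$BASE" -type d | sort
```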

We run server and client under systemd rather than nohup, so create a user that cannot log in:

useradd --system --home /etc/consul.d --shell /bin/false consul
mkdir -p /var/lib/consul
chown -R consul:consul /var/lib/consul

The consul command supports auto-completion of its subcommands and flags; enable it:

consul -autocomplete-install
complete -C /usr/local/bin/consul consul

server

Note: set node_name according to the plan at the top of the article; it is different on every node.

The content of /etc/consul.d/server/conf.json is below. client_addr defaults to 127.0.0.1; if the machine is dedicated to consul, change it to bind 0.0.0.0, or to the specific IP on a multi-homed host. bootstrap_expect is the minimum number of servers expected before the cluster bootstraps.
For an explanation of bootstrapping, see the docs: https://www.consul.io/docs/install/bootstrapping.html

{
  "data_dir": "/var/lib/consul/server",
  "node_name": "172.19.0.3",
  "bootstrap_expect": 3,
  "bind_addr": "172.19.0.3",
  "client_addr": "0.0.0.0",
  "datacenter": "dc1",
  "domain": "consul",
  "leave_on_terminate": true,
  "log_level": "INFO",
  "start_join": [
    "172.19.0.3",
    "172.19.0.4",
    "172.19.0.5"
  ],
  "retry_interval": "2s",
  "verify_incoming": true,
  "verify_outgoing": true,
  "verify_server_hostname": true,
  "ca_file": "/etc/consul.d/consul-agent-ca.pem",
  "cert_file": "/etc/consul.d/server/ssl/dc1-server-consul-0.pem",
  "key_file": "/etc/consul.d/server/ssl/dc1-server-consul-0-key.pem",
  "ports": {
    "http": -1,
    "dns": 8600,
    "https": 8501
  },
  "server": true,
  "ui": false
}
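Only node_name, bind_addr, and the certificate index differ between the three servers, so the per-node files can be stamped out from a template. A hedged sketch (it emits only the varying fields, under /tmp for the demo; merge the output with the full config above):

```shell
# Generate the per-server variations of conf.json from a minimal template.
i=0
for ip in 172.19.0.3 172.19.0.4 172.19.0.5; do
  mkdir -p "/tmp/consul-conf/$ip"
  sed -e "s/__IP__/$ip/g" -e "s/__N__/$i/g" <<'EOF' > "/tmp/consul-conf/$ip/conf.json"
{
  "node_name": "__IP__",
  "bind_addr": "__IP__",
  "cert_file": "/etc/consul.d/server/ssl/dc1-server-consul-__N__.pem",
  "key_file": "/etc/consul.d/server/ssl/dc1-server-consul-__N__-key.pem"
}
EOF
  i=$((i+1))
done
cat /tmp/consul-conf/172.19.0.5/conf.json
```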

client

Note: set node_name according to the plan at the top of the article; it is different on every node.

The client configuration, /etc/consul.d/client/conf.json:

{
  "data_dir": "/var/lib/consul/client",
  "node_name": "172.19.0.8",
  "datacenter": "dc1",
  "bind_addr": "172.19.0.8",
  "client_addr": "0.0.0.0",
  "retry_join": [
    "172.19.0.3",
    "172.19.0.4",
    "172.19.0.5"
  ],
  "retry_interval": "3s",
  "rejoin_after_leave": true,
  "enable_script_checks": true,
  "verify_incoming": true,
  "ca_file": "/etc/consul.d/consul-agent-ca.pem",
  "cert_file": "/etc/consul.d/client/ssl/dc1-client-consul-0.pem",
  "key_file": "/etc/consul.d/client/ssl/dc1-client-consul-0-key.pem",
  "auto_encrypt": {
    "tls": true
  },
  "ports": {
    "http": -1,
    "dns": 8600,
    "https": 8501
  }
}

cli

When consul is used as a cli it must also go through TLS, otherwise you get the following error:

$ consul members
Error retrieving members: Get http://127.0.0.1:8500/v1/agent/members?segment=_all: dial tcp 127.0.0.1:8500: connect: connection refused

Adjust the paths to your setup; the certificates are best given as absolute paths. The cli operates on the consul running on localhost by default; for example, consul leave makes the local consul leave the cluster. We put the environment variables in a profile fragment, /etc/profile.d/consul-cli.sh:

export CONSUL_HTTP_ADDR=https://localhost:8501
export CONSUL_CACERT=/etc/consul.d/consul-agent-ca.pem
export CONSUL_CLIENT_CERT=/etc/consul.d/cli/ssl/dc1-cli-consul-0.pem
export CONSUL_CLIENT_KEY=/etc/consul.d/cli/ssl/dc1-cli-consul-0-key.pem

  • CONSUL_HTTP_ADDR is the URL of the Consul agent and sets the default for -http-addr.
  • CONSUL_CACERT is the location of the CA certificate and sets the default for -ca-file.
  • CONSUL_CLIENT_CERT is the location of the CLI certificate and sets the default for -client-cert.
  • CONSUL_CLIENT_KEY is the location of the CLI key and sets the default for -client-key.

Startup

Here we use systemd rather than something crude like nohup.

The server's systemd unit, /usr/lib/systemd/system/consul.service:

[Unit]
Description="Consul Startup process for server"
Documentation=https://www.consul.io/
Requires=network-online.target
After=network-online.target
ConditionDirectoryNotEmpty=/etc/consul.d/server
ConditionDirectoryNotEmpty=/etc/consul.d/server/ssl

[Service]
User=consul
Group=consul
EnvironmentFile=-/etc/sysconfig/consul
PIDFile=/var/run/consul/consul.pid
PermissionsStartOnly=true
ExecStartPre=/usr/local/bin/consul validate /etc/consul.d/server
ExecStart=/usr/local/bin/consul agent -config-dir=/etc/consul.d/server
ExecReload=/usr/local/bin/consul reload
KillMode=process
KillSignal=SIGTERM
RestartSec=15s
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

RestartSec must be set, otherwise you will hit: start request repeated too quickly for consul.service

bootstrap_expect is set to 3, so the servers only hold an election once 3 of them are up.
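With the unit file in place, activation is the usual systemd routine on every node (shown for reference, not executed here):

```shell
systemctl daemon-reload
systemctl enable consul
systemctl start consul
journalctl -u consul -f    # watch for the leader election once 3 servers are up
```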

The client's unit file is:

[Unit]
Description="Consul Startup process for client"
Documentation=https://www.consul.io/
Requires=network-online.target
After=network-online.target
ConditionDirectoryNotEmpty=/etc/consul.d/client
ConditionDirectoryNotEmpty=/etc/consul.d/client/ssl

[Service]
User=consul
Group=consul
EnvironmentFile=-/etc/sysconfig/consul
PIDFile=/var/run/consul/consul.pid
PermissionsStartOnly=true
ExecStartPre=/usr/local/bin/consul validate /etc/consul.d/client
ExecStart=/usr/local/bin/consul agent -config-dir=/etc/consul.d/client
ExecReload=/usr/local/bin/consul reload
KillMode=process
KillSignal=SIGTERM
RestartSec=15s
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
$ consul members
Node Address Status Type Build Protocol DC Segment
172.19.0.3 172.19.0.3:8301 alive server 1.6.1 2 dc1 <all>
172.19.0.4 172.19.0.4:8301 alive server 1.6.1 2 dc1 <all>
172.19.0.5 172.19.0.5:8301 alive server 1.6.1 2 dc1 <all>
172.19.0.9 172.19.0.9:8301 alive client 1.6.1 2 dc1 <default>
172.19.0.8 172.19.0.8:8301 alive client 1.6.1 2 dc1 <default>
$ consul operator raft list-peers
Node ID Address State Voter RaftProtocol
172.19.0.5 47f0ed67-7e66-6bd1-bbf3-4e9abeb09dc0 172.19.0.5:8300 leader true 3
172.19.0.3 664e8ada-f69b-2681-8b1c-d9fbf3d1ccb1 172.19.0.3:8300 follower true 3
172.19.0.4 d8ee2b50-9350-14e7-da5f-0b200acf80d0 172.19.0.4:8300 follower true 3

Service discovery

Outside the cluster

The client + service-discovery configuration can go straight into JSON files under /etc/consul.d/client, e.g. mysql.json. Note that the echo in args is only a placeholder; in practice it should invoke a real health-check script.

{
  "services": [
    {
      "name": "r-3306-mysql",
      "tags": [
        "slave-3306"
      ],
      "address": "172.16.0.2",
      "port": 3306,
      "checks": [
        {
          "args": ["echo"],
          "interval": "10s"
        }
      ]
    }
  ]
}
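A real check script in place of that echo might simply test TCP reachability of the MySQL port, following Consul's exit-code convention (0 passing, 1 warning, anything else critical). A hypothetical sketch, written to /tmp here; reference it from the args field:

```shell
# Write the hypothetical check script; host/port default to the service above.
cat > /tmp/check-mysql.sh <<'EOF'
#!/bin/bash
# exit 0 = passing, 1 = warning, anything else = critical (Consul's convention)
HOST=${1:-172.16.0.2}
PORT=${2:-3306}
if timeout 3 bash -c "exec 3<>/dev/tcp/$HOST/$PORT" 2>/dev/null; then
  echo "mysql $HOST:$PORT reachable"; exit 0
fi
echo "mysql $HOST:$PORT unreachable"; exit 2
EOF
chmod +x /tmp/check-mysql.sh
```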

The relevant consul client logs:

2019/10/11 16:28:20 [INFO] agent: Synced service "r-3306-mysql"
2019/10/11 16:28:21 [INFO] agent: Synced node info
2019/10/11 16:28:25 [INFO] agent: Synced check "service:r-3306-mysql"

Test DNS resolution:

$ dig @172.19.0.3 -p 8600 r-3306-mysql.service.consul +short
172.16.0.2

Other DNS servers can be configured to forward to consul; see https://learn.hashicorp.com/consul/security-networking/forwarding
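As a concrete example of that forwarding, here is a dnsmasq fragment. The file name and the choice of upstream agents are assumptions; it is written to /tmp for the demo, the real location would be /etc/dnsmasq.d/:

```shell
# Forward only the consul domain to Consul's DNS port (8600) on the servers.
cat > /tmp/10-consul.conf <<'EOF'
server=/consul/172.19.0.3#8600
server=/consul/172.19.0.4#8600
server=/consul/172.19.0.5#8600
EOF
grep -c '^server=/consul/' /tmp/10-consul.conf
```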

Forwarding in-cluster CoreDNS (k8s) to consul


Every member can resolve these names; the CoreDNS configuration block is:

[root@k8s-m1 CoreAddons]# kubectl -n kube-system get cm coredns -o yaml --export
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        ready
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        reload
        loadbalance
    }
    service.consul:53 {
        errors
        cache 30
        forward . 172.19.0.3:8600 172.19.0.4:8600 172.19.0.5:8600 172.19.0.8:8600 172.19.0.9:8600
    }
kind: ConfigMap
metadata:
  annotations:
    ...
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: coredns
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns

Run a utility pod, at a pinned version known to resolve DNS properly:

$ cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

Test resolution; the cluster's own internal resolution is unaffected:

$ kubectl exec -ti busybox -- nslookup r-3306-mysql.service.consul
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name: r-3306-mysql.service.consul
Address 1: 172.16.0.2
$ kubectl exec -ti busybox -- nslookup kubernetes
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name: kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

References:

TLS steps: https://learn.hashicorp.com/consul/security-networking/certificates
Official deployment guide: https://learn.hashicorp.com/consul/datacenter-deploy/deployment-guide
Official TLS docs (rather thin, skippable): https://www.consul.io/docs/commands/tls/cert.html
Cluster setup & basic feature tests & failure recovery: https://blog.csdn.net/chenchong08/article/details/77885989
token + openssl TLS: https://www.digitalocean.com/community/tutorials/how-to-secure-consul-with-tls-encryption-on-ubuntu-14-04
HA consul (recommended reading): https://learn.hashicorp.com/vault/operations/ops-vault-ha-consul
systemd fields: https://blog.csdn.net/biyubang6725/article/details/100961677
consul k8s dns: https://www.consul.io/docs/platform/k8s/dns.html
