k8s-centos8u2 Cluster: An Enterprise Kubernetes Container-Cloud Automated Operations Platform


Overview

About IaaS, PaaS, and SaaS

K8S is not a PaaS platform in the traditional sense. What most internet companies actually need is a PaaS platform, not bare K8S; it is K8S together with its surrounding ecosystem (logstash, Prometheus, and so on) that forms the PaaS platform enterprises need.


Prerequisites for PaaS capability:

  • A unified application runtime (docker)
  • IaaS capability (K8S)
  • Reliable middleware clusters and database clusters (the DBA's main job)
  • A distributed storage cluster (the storage engineer's main job)
  • Suitable monitoring and logging systems (Prometheus, ELK)
  • A complete CI/CD system (Jenkins, Spinnaker)

Vendors such as Alibaba Cloud and Tencent Cloud offer K8S-based services: buy a cluster and K8S comes preconfigured. But we should not depend entirely on a vendor and let ourselves be locked in; we also need to keep learning so we can understand and use the platform better. The larger the company, the more it pays to build this yourself rather than rely on a vendor.

Spinnaker: implements repeatable automated deployments through flexible, configurable pipelines; provides a global view across all environments, so you can see an application's status anywhere in its deployment pipeline at any time; is easy to configure, maintain, and extend; and more.

A Closed Loop Built on the Kubernetes Ecosystem

The goal of the Kubernetes cluster is to build a PaaS platform:

  • Code commit: developers push code to the Git repository
  • Continuous integration: a pipeline clones the committed code, compiles it, builds an image, and pushes it to the docker registry
  • Continuous deployment: a pipeline configures Kubernetes Pod controllers, services, ingresses, and so on, and deploys the docker image to the test environment
  • Production release: a pipeline configures Kubernetes Pod controllers, services, ingresses, and so on, and deploys the tested docker image to the production environment (this whole loop is sketched below)
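As a rough, hedged sketch only (the repository URL, build tool, app name, and tag scheme are all placeholders, not part of this install), the loop amounts to:

#!/bin/bash
# Hypothetical CI/CD loop: clone, build, push, deploy to the test namespace.
set -e
APP=demo-app
TAG=$(date +%Y%m%d_%H%M)                                   # placeholder tag scheme
git clone git@git.example.com:app/${APP}.git && cd ${APP}  # code commit
mvn -q package                                             # continuous integration: build (assumes a Java app)
docker build -t harbor.op.com/app/${APP}:${TAG} .          # build the image
docker push harbor.op.com/app/${APP}:${TAG}                # push to the registry
# continuous deployment: render the image tag into the manifest and apply it
sed "s#__TAG__#${TAG}#" k8s/deployment.yaml | kubectl -n test apply -f -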

The functional components involved:

  • Continuous integration: Jenkins
  • Continuous deployment: Spinnaker
  • Service configuration center: Apollo
  • Monitoring: Prometheus + Grafana
  • Log collection: ELK
  • Data persistence via externally mounted storage; a StorageClass working with PV and PVC can even automate volume allocation and mounting (see the sketch after this list)
  • Databases are stateful services and are generally not run inside the Kubernetes cluster
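A minimal sketch of the StorageClass/PVC mechanism mentioned above, assuming a provisioner-backed class named nfs-client already exists (an assumption; this install does not create one):

# Hypothetical: claim a 5Gi volume; the provisioner creates and binds the PV.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
  namespace: test
spec:
  storageClassName: nfs-client   # assumption: this class exists
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
EOF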

Spinnaker

Cluster Management / Application Management

  • Cluster management is mainly about managing cloud resources. The "cloud" Spinnaker speaks of can be understood as AWS-style IaaS resources, e.g. OpenStack, Google Cloud, Azure; container and Kubernetes support was added later, but is still designed in the infrastructure-management mould.
  • Spinnaker's application management features let you view and manage your cloud resources; commonly supported clouds include Azure, AWS, and Kubernetes. Domestic clouds such as Alibaba Cloud and Tencent Cloud are not supported.
  • Applications, clusters, and server groups are the key concepts Spinnaker uses to describe services. Load balancers and firewalls describe how your services are exposed to users.

Deployment Management / Application Deployment

  • Deployment management is Spinnaker's core function. It uses minio as its persistence layer, picks up the images produced by the Jenkins pipeline, and deploys them into the Kubernetes cluster so the services actually run.
  • Application deployment has two core features: pipelines and deployment strategies. A pipeline chains the CI and CD processes together; each project gets one pipeline, which passes the changing parameters (service name, version number, image tag, etc.) to the Jenkins continuous-integration pipeline to run the build, then publishes the built image to the target environment via pre-prepared deployment objects (Kubernetes deployment/service/ingress). A deployment strategy is the upgrade policy used in production after the image has passed testing; common strategies are blue-green, canary, and rolling releases.


Spinnaker's Components

Components

  • Deck is the browser-based UI.
  • Gate is the API gateway.
    The Spinnaker UI and all API callers communicate with Spinnaker through Gate.
  • Clouddriver is the driver that manipulates cloud resources; it manages the cloud platforms and indexes/caches all deployed resources.
  • Front50 manages data persistence; it stores the metadata for applications, pipelines, projects, and notifications in a bucket. This install uses Minio (S3-compatible) as that store.
  • Igor triggers pipelines from continuous-integration jobs in systems such as Jenkins and Travis CI, and allows Jenkins/Travis stages to be used inside pipelines.
  • Orca is the orchestration engine. It handles all ad-hoc operations and pipelines.
  • Rosco manages VM image baking; it produces VM images or image templates for cloud vendors. Not involved in a Kubernetes cluster.
  • Kayenta provides automated canary analysis for Spinnaker. Not covered in this install.
  • Fiat is Spinnaker's authentication/authorization service. It provides user auth; not covered in this install, worth considering later.
  • Echo is the messaging/notification service.
    It supports sending notifications (e.g. Slack, email, SMS) and handles incoming webhooks from services such as GitHub.
  • Halyard: provides Spinnaker cluster deployment, upgrade, and configuration. Not used in this install.

Architecture

Spinnaker is itself a set of microservices; the overall logical architecture is shown below:


Deployment Options

  • Spinnaker official site: https://www.spinnaker.io/
    Spinnaker has many components and is relatively complex to deploy, so the project provides the scaffolding tool halyard; however, some of the image registries it pulls from are hard to reach.
  • Armory distribution: https://www.armory.io/
    On top of Spinnaker, many companies have built third-party distributions to simplify deployment, such as the Armory distribution we use here.
    Armory also ships its own scaffolding tool; although it is simpler than halyard, its registries are likewise hard to reach.

We therefore deploy the Armory distribution of Spinnaker by hand.

Deployment order: Minio > Redis > Clouddriver > Front50 > Orca > Echo > Igor > Gate > Deck > Nginx (needed because Deck is static pages)


Each component's service port (the health-sweep sketch after the table uses these):

Service Port
Clouddriver 7002
Deck 9000
Echo 8089
Fiat 7003
Front50 8080
Gate 8084
Halyard 8064
Igor 8088
Kayenta 8090
Orca 8083
Rosco 8087
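Once all the components below are deployed, these ports allow a quick health sweep from inside the cluster. A hedged sketch (service names follow the armory-<component> convention used later; Deck serves / rather than /health):

# Run from any pod in the armory namespace that has curl (e.g. the minio pod).
for svc in armory-clouddriver:7002 armory-front50:8080 armory-orca:8083 \
           armory-echo:8089 armory-igor:8088 armory-gate:8084; do
  printf '%s -> ' "${svc}"
  curl -s --max-time 3 "http://${svc}/health" || printf 'unreachable'
  echo
done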

Create the private registry project harbor.op.com/armory


  • On any compute node, create the namespace (and the registry secret)
[root@vms21 ~]# kubectl create ns armory
namespace/armory created
[root@vms21 ~]# kubectl create secret docker-registry harbor --docker-server=harbor.op.com --docker-username=admin --docker-password=Harbor12543 -n armory
secret/harbor created

Because the armory repository is private, the secret must be created; otherwise images cannot be pulled.
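To double-check what went into the secret, the standard decode works (a verification step, not from the original steps):

# Print the registry credentials stored in the docker-registry secret.
kubectl -n armory get secret harbor \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d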

Deploy the Object Store: minio

On the ops host vms200:

Prepare the docker image

Image download: https://hub.docker.com/r/minio/minio

[root@vms200 ~]# docker pull minio/minio:latest
latest: Pulling from minio/minio
df20fa9351a1: Already exists
ebc4e9e74d67: Pull complete
Digest: sha256:a6c895f2037fb39c2a3151fcc675bd03882a807a00d3b53026076839f32472d2
Status: Downloaded newer image for minio/minio:latest
docker.io/minio/minio:latest
[root@vms200 ~]# docker tag minio/minio:latest harbor.op.com/armory/minio:latest
[root@vms200 ~]# docker push harbor.op.com/armory/minio:latest
The push refers to repository [harbor.op.com/armory/minio]
d2ea7b1fe80e: Pushed
50644c29ef5a: Mounted from infra/grafana
latest: digest: sha256:a6c895f2037fb39c2a3151fcc675bd03882a807a00d3b53026076839f32472d2 size: 740

Prepare the resource manifests

[root@vms200 ~]# mkdir -p /data/k8s-yaml/armory/minio && cd /data/k8s-yaml/armory/minio/
  • Deployment
[root@vms200 minio]# vi deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: minio
  name: minio
  namespace: armory
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      name: minio
  template:
    metadata:
      labels:
        app: minio
        name: minio
    spec:
      containers:
      - name: minio
        image: harbor.op.com/armory/minio:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9000
          protocol: TCP
        args:
        - server
        - /data
        env:
        - name: MINIO_ACCESS_KEY
          value: admin
        - name: MINIO_SECRET_KEY
          value: admin123
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /minio/health/ready
            port: 9000
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        volumeMounts:
        - mountPath: /data
          name: data
      imagePullSecrets:
      - name: harbor
      volumes:
      - name: data
        nfs:
          server: vms200
          path: /data/nfs-volume/minio

Create the backing storage

[root@vms200 minio]# mkdir /data/nfs-volume/minio
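This assumes vms200 already exports /data/nfs-volume over NFS (as set up earlier in this series; an assumption here). If in doubt, re-export and verify:

[root@vms200 minio]# exportfs -r        # reload /etc/exports
[root@vms200 minio]# showmount -e vms200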
  • Service
[root@vms200 minio]# vi svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: minio
  namespace: armory
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9000
  selector:
    app: minio
  • Ingress
[root@vms200 minio]# vi ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: minio
  namespace: armory
spec:
  rules:
  - host: minio.op.com
    http:
      paths:
      - path: /
        backend:
          serviceName: minio
          servicePort: 80

Configure DNS resolution

On vms11:

[root@vms11 ~]# vi /var/named/op.com.zone
...
minio              A    192.168.26.10

Note: roll the zone's serial number forward.

[root@vms11 ~]# systemctl restart named
[root@vms11 ~]# dig -t A minio.op.com +short
192.168.26.10
[root@vms11 ~]# host minio.op.com
minio.op.com has address 192.168.26.10

Apply the resource manifests

On any compute node:

[root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/armory/minio/deployment.yaml
deployment.apps/minio created
[root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/armory/minio/svc.yaml
service/minio created
[root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/armory/minio/ingress.yaml
ingress.extensions/minio created

Browser access

http://minio.op.com (account: admin, password: admin123, i.e. the plaintext credentials set in the Deployment)

Deploy Redis

Redis in Spinnaker serves only as a cache; it is not critical to Spinnaker, an outage and restart are tolerable, and concurrency is low. Given the limited resources at hand, we deploy redis as a single, non-persistent replica. If persistence is needed, specify command and args when starting the container, e.g. /usr/local/bin/redis-server /etc/myredis.conf. A hedged patch sketch follows.
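A sketch of that persistence change, assuming a ConfigMap named myredis carrying myredis.conf (both names hypothetical) and the redis Deployment created below:

# Create the config, then merge a command override and mount into the Deployment.
kubectl -n armory create configmap myredis \
  --from-literal=myredis.conf=$'appendonly yes\ndir /data'
kubectl -n armory patch deployment redis --patch '
spec:
  template:
    spec:
      containers:
      - name: redis
        command: ["redis-server", "/etc/myredis/myredis.conf"]
        volumeMounts:
        - name: conf
          mountPath: /etc/myredis
      volumes:
      - name: conf
        configMap:
          name: myredis
'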

Prepare the docker image

On the ops host vms200:
Image download: https://hub.docker.com/search?q=redis&type=image

[root@vms200 ~]# docker pull redis:6.0.8
6.0.8: Pulling from library/redis
d121f8d1c412: Pull complete
2f9874741855: Pull complete
d92da09ebfd4: Pull complete
bdfa64b72752: Pull complete
e748e6f663b9: Pull complete
eb1c8b66e2a1: Pull complete
Digest: sha256:1cfb205a988a9dae5f025c57b92e9643ec0e7ccff6e66bc639d8a5f95bba928c
Status: Downloaded newer image for redis:6.0.8
docker.io/library/redis:6.0.8
[root@vms200 ~]# docker images|grep redis
redis   6.0.8   84c5f6e03bf0   2 weeks ago   104MB
[root@vms200 ~]# docker tag redis:6.0.8 harbor.op.com/armory/redis:v6.0.8
[root@vms200 ~]# docker push harbor.op.com/armory/redis:v6.0.8
The push refers to repository [harbor.op.com/armory/redis]
2e9c060aef92: Pushed
ea96cbf71ac4: Pushed
47d8fadc6714: Pushed
7fb1fa4d4022: Pushed
45b5e221b672: Pushed
07cab4339852: Pushed
v6.0.8: digest: sha256:02d2467210e76794c98ae14c642b88ee047911c7e2ab4aa444b0bfe019a41892 size: 1572

Prepare the resource manifests

[root@vms200 ~]# mkdir /data/k8s-yaml/armory/redis && cd /data/k8s-yaml/armory/redis
  • Deployment
[root@vms200 redis]# vi deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: redis
  name: redis
  namespace: armory
spec:
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      name: redis
  template:
    metadata:
      labels:
        app: redis
        name: redis
    spec:
      containers:
      - name: redis
        image: harbor.op.com/armory/redis:v6.0.8
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 6379
          protocol: TCP
      imagePullSecrets:
      - name: harbor
  • Service
[root@vms200 redis]# vi svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: armory
spec:
  ports:
  - port: 6379
    protocol: TCP
    targetPort: 6379
  selector:
    app: redis

Apply the resource manifests

On any compute node:

[root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/armory/redis/deployment.yaml
deployment.apps/redis created
[root@vms21 ~]# kubectl apply -f http://k8s-yaml.op.com/armory/redis/svc.yaml
service/redis created
[root@vms21 ~]# kubectl get pod -n armory -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP            NODE            NOMINATED NODE   READINESS GATES
minio-5d6b989d46-wxh4c   1/1     Running   0          69m     172.26.21.3   vms21.cos.com   <none>           <none>
redis-5979d767cd-dp2sb   1/1     Running   0          2m12s   172.26.21.4   vms21.cos.com   <none>           <none>
[root@vms21 ~]# telnet 172.26.21.4 6379
Trying 172.26.21.4...
Connected to 172.26.21.4.
Escape character is '^]'.
^]
telnet> quit
Connection closed.
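Equivalently, and without needing the pod IP, exec redis-cli inside the pod (pod name as listed above); PONG confirms the server answers:

[root@vms21 ~]# kubectl -n armory exec redis-5979d767cd-dp2sb -- redis-cli ping
PONG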

Deploy CloudDriver

CloudDriver is the hardest part of the whole Spinnaker deployment.

On the ops host vms200:

Prepare the image and directories

Image download:

[root@vms200 ~]# docker pull armory/spinnaker-clouddriver-slim:release-1.11.x-bee52673a
release-1.11.x-bee52673a: Pulling from armory/spinnaker-clouddriver-slim
6c40cc604d8e: Pull complete
e78b80385239: Pull complete
47317d99e629: Pull complete
d81f37aa0a02: Pull complete
6d7a23031ae9: Pull complete
f18a770afc14: Pull complete
6bea2c559832: Pull complete
68654bc5bd90: Pull complete
5f28719fb892: Pull complete
Digest: sha256:1267bdc872c741ce28021d44d7c69f6eb04e7441fb1e3e475d584772de829df7
Status: Downloaded newer image for armory/spinnaker-clouddriver-slim:release-1.11.x-bee52673a
docker.io/armory/spinnaker-clouddriver-slim:release-1.11.x-bee52673a
[root@vms200 ~]# docker images | grep spinnaker-clouddriver-slim
armory/spinnaker-clouddriver-slim   release-1.11.x-bee52673a   f1d52d01e28d   19 months ago   1.05GB
[root@vms200 ~]# docker tag f1d52d01e28d harbor.op.com/armory/clouddriver:v1.11.x
[root@vms200 ~]# docker push harbor.op.com/armory/clouddriver:v1.11.x
The push refers to repository [harbor.op.com/armory/clouddriver]
be305dda3fe4: Pushed
acceb5d68f45: Pushed
e405e67c8e60: Pushed
0f59f260abd3: Pushed
43f1d24bca51: Pushed
820b438c3358: Pushed
4c6899b75fdb: Pushed
744b4cd8cf79: Pushed
503e53e365f3: Pushed
v1.11.x: digest: sha256:1267bdc872c741ce28021d44d7c69f6eb04e7441fb1e3e475d584772de829df7 size: 2216
[root@vms200 ~]# mkdir /data/k8s-yaml/armory/clouddriver
[root@vms200 ~]# cd /data/k8s-yaml/armory/clouddriver

Prepare the minio secret

[root@vms200 clouddriver]# vi credentials
[default]
aws_access_key_id=admin
aws_secret_access_key=admin123

Create the secret on any compute node

[root@vms21 ~]# wget http://k8s-yaml.op.com/armory/clouddriver/credentials
...
[root@vms21 ~]# kubectl create secret generic credentials --from-file=./credentials -n armory
secret/credentials created

It can also be created directly on the command line (the key must still be named credentials and carry the INI content):

kubectl create secret generic credentials \
  --from-literal=credentials=$'[default]\naws_access_key_id=admin\naws_secret_access_key=admin123' \
  -n armory

Issue the certificate and private key

On the ops host vms200:

[root@vms200 ~]# cd /opt/certs/
[root@vms200 certs]# cp client-csr.json admin-csr.json
[root@vms200 certs]# vi admin-csr.json

修改"CN": "k8s-node""CN": "cluster-admin"

{
  "CN": "cluster-admin",
  "hosts": [
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "beijing",
      "L": "beijing",
      "O": "op",
      "OU": "ops"
    }
  ]
}
[root@vms200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client admin-csr.json | cfssl-json -bare admin
2020/09/29 17:37:54 [INFO] generate received request
2020/09/29 17:37:54 [INFO] received CSR
2020/09/29 17:37:54 [INFO] generating key: rsa-2048
2020/09/29 17:37:54 [INFO] encoded CSR
2020/09/29 17:37:54 [INFO] signed certificate with serial number 296902468778836911333325817747181254099240321779
2020/09/29 17:37:54 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@vms200 certs]# ls -l admin*
-rw-r--r-- 1 root root 1001 Sep 29 17:37 admin.csr
-rw-r--r-- 1 root root  285 Sep 29 17:37 admin-csr.json
-rw------- 1 root root 1675 Sep 29 17:37 admin-key.pem
-rw-r--r-- 1 root root 1367 Sep 29 17:37 admin.pem
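A quick sanity check on the issued certificate (assuming openssl is installed on vms200):

# Confirm the subject CN and validity window of the new client cert.
[root@vms200 certs]# openssl x509 -in admin.pem -noout -subject -dates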

Prepare the cluster-admin user config

On vms21:

  • Distribute the certificates
[root@vms21 ~]# mkdir /opt/certs
[root@vms21 ~]# cd /opt/certs
[root@vms21 certs]# scp vms200:/opt/certs/ca.pem .
... 100% 1338 292.8KB/s 00:00
[root@vms21 certs]# scp vms200:/opt/certs/admin.pem .
... 100% 1367 583.9KB/s 00:00
[root@vms21 certs]# scp vms200:/opt/certs/admin-key.pem .
... 100% 1675 718.7KB/s 00:00
[root@vms21 certs]# ll
total 12
-rw------- 1 root root 1675 Sep 29 18:09 admin-key.pem
-rw-r--r-- 1 root root 1367 Sep 29 18:09 admin.pem
-rw-r--r-- 1 root root 1338 Sep 29 18:08 ca.pem
  • Create the user and the cluster role binding (the four-step method)
[root@vms21 certs]# kubectl config set-cluster myk8s --certificate-authority=./ca.pem --embed-certs=true --server=https://192.168.26.10:8443 --kubeconfig=config
Cluster "myk8s" set.
[root@vms21 certs]# kubectl config set-credentials cluster-admin --client-certificate=./admin.pem --client-key=./admin-key.pem --embed-certs=true --kubeconfig=config
User "cluster-admin" set.
[root@vms21 certs]# kubectl config set-context myk8s-context --cluster=myk8s --user=cluster-admin --kubeconfig=config
Context "myk8s-context" created.
[root@vms21 certs]# kubectl config use-context myk8s-context --kubeconfig=config
Switched to context "myk8s-context".
[root@vms21 certs]# kubectl create clusterrolebinding myk8s-admin --clusterrole=cluster-admin --user=cluster-admin
clusterrolebinding.rbac.authorization.k8s.io/myk8s-admin created
[root@vms21 certs]# ls -l config
-rw------- 1 root root 6201 Sep 29 18:17 config

At this point kubectl config view still shows nothing, because the file is not yet in the default location:

[root@vms21 certs]# ls -l /root/.kube/
total 8
drwxr-x--- 3 root root   23 Jul 17 16:03 cache
drwxr-x--- 3 root root 4096 Sep 29 18:40 http-cache
[root@vms21 certs]# cp config /root/.kube/
[root@vms21 certs]# ls -l /root/.kube/
total 16
drwxr-x--- 3 root root   23 Jul 17 16:03 cache
-rw------- 1 root root 6201 Sep 29 18:41 config
drwxr-x--- 3 root root 4096 Sep 29 18:40 http-cache
[root@vms21 certs]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.26.10:8443
  name: myk8s
contexts:
- context:
    cluster: myk8s
    user: cluster-admin
  name: myk8s-context
current-context: myk8s-context
kind: Config
preferences: {}
users:
- name: cluster-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
  • On vms200, verify that the kubeconfig works with kubectl
[root@vms200 ~]# mkdir /root/.kube
[root@vms200 ~]# cd /root/.kube
[root@vms200 .kube]# scp vms21:/opt/certs/config .
...
[root@vms200 .kube]# scp vms21:/opt/kubernetes/server/bin/kubectl .
...
[root@vms200 .kube]# mv kubectl /usr/bin/
[root@vms200 .kube]# kubectl config view
...
[root@vms200 .kube]# kubectl get pod -n armory
NAME                     READY   STATUS    RESTARTS   AGE
minio-5d6b989d46-wxh4c   1/1     Running   0          3h9m
redis-5979d767cd-dp2sb   1/1     Running   0          122m

If the config file is not in the default directory /root/.kube, specify it on the command line with --kubeconfig:

kubectl get pod -n armory --kubeconfig=/xxx/config
  • Create a ConfigMap from the config

On vms21:

[root@vms21 certs]# cd /root/.kube/
[root@vms21 .kube]# mv config default-kubeconfig
[root@vms21 .kube]# kubectl create configmap default-kubeconfig --from-file=./default-kubeconfig -n armory
configmap/default-kubeconfig created

Create and apply the resource manifests

Spinnaker's configuration is fairly tedious; one ConfigMap in particular, default-config.yaml, is very complex but generally needs no changes.

On the ops host vms200, in /data/k8s-yaml/armory/clouddriver:

  • Create the environment-variable config: init-env.yaml (redis address, the externally exposed API domain, etc.)
kind: ConfigMap
apiVersion: v1
metadata:
  name: init-env
  namespace: armory
data:
  API_HOST: http://spinnaker.op.com/api
  ARMORY_ID: c02f0781-92f5-4e80-86db-0ba8fe7b8544
  ARMORYSPINNAKER_CONF_STORE_BUCKET: armory-platform
  ARMORYSPINNAKER_CONF_STORE_PREFIX: front50
  ARMORYSPINNAKER_GCS_ENABLED: "false"
  ARMORYSPINNAKER_S3_ENABLED: "true"
  AUTH_ENABLED: "false"
  AWS_REGION: us-east-1
  BASE_IP: 127.0.0.1
  CLOUDDRIVER_OPTS: -Dspring.profiles.active=armory,configurator,local
  CONFIGURATOR_ENABLED: "false"
  DECK_HOST: http://spinnaker.op.com
  ECHO_OPTS: -Dspring.profiles.active=armory,configurator,local
  GATE_OPTS: -Dspring.profiles.active=armory,configurator,local
  IGOR_OPTS: -Dspring.profiles.active=armory,configurator,local
  PLATFORM_ARCHITECTURE: k8s
  REDIS_HOST: redis://redis:6379
  SERVER_ADDRESS: 0.0.0.0
  SPINNAKER_AWS_DEFAULT_REGION: us-east-1
  SPINNAKER_AWS_ENABLED: "false"
  SPINNAKER_CONFIG_DIR: /home/spinnaker/config
  SPINNAKER_GOOGLE_PROJECT_CREDENTIALS_PATH: ""
  SPINNAKER_HOME: /home/spinnaker
  SPRING_PROFILES_ACTIVE: armory,configurator,local
  • Create the component config file: custom-config.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: custom-config
  namespace: armory
data:
  clouddriver-local.yml: |
    kubernetes:
      enabled: true
      accounts:
      - name: cluster-admin
        serviceAccount: false
        dockerRegistries:
        - accountName: harbor
          namespace: []
        namespaces:
        - test
        - prod
        kubeconfigFile: /opt/spinnaker/credentials/custom/default-kubeconfig
      primaryAccount: cluster-admin
    dockerRegistry:
      enabled: true
      accounts:
      - name: harbor
        requiredGroupMembership: []
        providerVersion: V1
        insecureRegistry: true
        address: http://harbor.op.com
        username: admin
        password: Harbor12543
      primaryAccount: harbor
    artifacts:
      s3:
        enabled: true
        accounts:
        - name: armory-config-s3-account
          apiEndpoint: http://minio
          apiRegion: us-east-1
      gcs:
        enabled: false
        accounts:
        - name: armory-config-gcs-account
  custom-config.json: ""
  echo-configurator.yml: |
    diagnostics:
      enabled: true
  front50-local.yml: |
    spinnaker:
      s3:
        endpoint: http://minio
  igor-local.yml: |
    jenkins:
      enabled: true
      masters:
      - name: jenkins-admin
        address: http://jenkins.op.com
        username: admin
        password: admin123
      primaryAccount: jenkins-admin
  nginx.conf: |
    gzip on;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon;
    server {
      listen 80;
      location / {
        proxy_pass http://armory-deck/;
      }
      location /api/ {
        proxy_pass http://armory-gate:8084/;
      }
      rewrite ^/login(.*)$ /api/login$1 last;
      rewrite ^/auth(.*)$ /api/auth$1 last;
    }
  spinnaker-local.yml: |
    services:
      igor:
        enabled: true

This config file defines how to reach k8s, harbor, minio, and Jenkins. Some of the addresses can use short service names, depending on whether the caller runs inside k8s and in the same namespace. An illustration follows.
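A small illustration of the short-name rule, using the minio pod (it has curl) and a service in this namespace; from another namespace you would need minio.armory, or the full minio.armory.svc.cluster.local:

# Same-namespace call: the bare service name resolves; expect HTTP 200.
kubectl -n armory exec minio-5d6b989d46-wxh4c -- \
  curl -s -o /dev/null -w '%{http_code}\n' http://minio/minio/health/ready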

  • Create the default config file: default-config.yaml

This file is extremely long (it is placed at the end of the full document). It is what the armory deployment tool produces after a successful install and basically never needs changes. It is also referenced by the other components.

  • Deployment:dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: armory-clouddriver
  name: armory-clouddriver
  namespace: armory
spec:
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      app: armory-clouddriver
  template:
    metadata:
      annotations:
        artifact.spinnaker.io/location: '"armory"'
        artifact.spinnaker.io/name: '"armory-clouddriver"'
        artifact.spinnaker.io/type: '"kubernetes/deployment"'
        moniker.spinnaker.io/application: '"armory"'
        moniker.spinnaker.io/cluster: '"clouddriver"'
      labels:
        app: armory-clouddriver
    spec:
      containers:
      - name: armory-clouddriver
        image: harbor.op.com/armory/clouddriver:v1.11.x
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - -c
        args:
        - bash /opt/spinnaker/config/default/fetch.sh && cd /home/spinnaker/config
          && /opt/clouddriver/bin/clouddriver
        ports:
        - containerPort: 7002
          protocol: TCP
        env:
        - name: JAVA_OPTS
          value: -Xmx512M
        envFrom:
        - configMapRef:
            name: init-env
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /health
            port: 7002
            scheme: HTTP
          initialDelaySeconds: 600
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 5
          httpGet:
            path: /health
            port: 7002
            scheme: HTTP
          initialDelaySeconds: 180
          periodSeconds: 3
          successThreshold: 5
          timeoutSeconds: 1
        securityContext:
          runAsUser: 0
        volumeMounts:
        - mountPath: /etc/podinfo
          name: podinfo
        - mountPath: /home/spinnaker/.aws
          name: credentials
        - mountPath: /opt/spinnaker/credentials/custom
          name: default-kubeconfig
        - mountPath: /opt/spinnaker/config/default
          name: default-config
        - mountPath: /opt/spinnaker/config/custom
          name: custom-config
      imagePullSecrets:
      - name: harbor
      volumes:
      - configMap:
          defaultMode: 420
          name: default-kubeconfig
        name: default-kubeconfig
      - configMap:
          defaultMode: 420
          name: custom-config
        name: custom-config
      - configMap:
          defaultMode: 420
          name: default-config
        name: default-config
      - name: credentials
        secret:
          defaultMode: 420
          secretName: credentials
      - downwardAPI:
          defaultMode: 420
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.annotations
            path: annotations
        name: podinfo
  • Service:svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: armory-clouddriver
  namespace: armory
spec:
  ports:
  - port: 7002
    protocol: TCP
    targetPort: 7002
  selector:
    app: armory-clouddriver
  • Apply the manifests: on any compute node, or on vms200 (kubectl now works there)
    vms200:/data/k8s-yaml/armory/clouddriver
[root@vms200 clouddriver]# kubectl apply -f ./init-env.yaml
configmap/init-env created
[root@vms200 clouddriver]# kubectl apply -f ./default-config.yaml
configmap/default-config created
[root@vms200 clouddriver]# kubectl apply -f ./custom-config.yaml
configmap/custom-config created
[root@vms200 clouddriver]# kubectl apply -f ./dp.yaml
deployment.apps/armory-clouddriver created
[root@vms200 clouddriver]# kubectl apply -f ./svc.yaml
service/armory-clouddriver created

Check

  • View the pods from vms200
[root@vms200 clouddriver]# kubectl get pod -n armory -o wide
NAME                                  READY   STATUS    RESTARTS   AGE     IP            NODE            NOMINATED NODE   READINESS GATES
armory-clouddriver-5fdb54ffbd-gwtzh   0/1     Running   0          3m34s   172.26.21.5   vms21.cos.com   <none>           <none>
minio-5d6b989d46-wxh4c                1/1     Running   0          4h10m   172.26.21.3   vms21.cos.com   <none>           <none>
redis-5979d767cd-dp2sb                1/1     Running   0          3h3m    172.26.21.4   vms21.cos.com   <none>           <none>
[root@vms200 clouddriver]# kubectl exec -n armory minio-5d6b989d46-wxh4c -- curl -s armory-clouddriver:7002/health
{"status":"UP","kubernetes":{"status":"UP"},"dockerRegistry":{"status":"UP"},"redisHealth":{"status":"UP","maxIdle":100,"minIdle":25,"numActive":0,"numIdle":3,"numWaiters":0},"diskSpace":{"status":"UP","total":101954621440,"free":83474780160,"threshold":10485760}}
  • As shown above, minio runs on vms21; enter the container to verify
[root@vms21 .kube]# docker ps -a|grep minio
748c44a5333c   harbor.op.com/armory/minio          "/usr/bin/docker-ent…"   4 hours ago   Up 4 hours   k8s_minio_minio-5d6b989d46-wxh4c_armory_d482fdee-351b-44d0-87ae-c40a70978633_0
22088e51e681   harbor.op.com/public/pause:latest   "/pause"                 4 hours ago   Up 4 hours   k8s_POD_minio-5d6b989d46-wxh4c_armory_d482fdee-351b-44d0-87ae-c40a70978633_0
[root@vms21 .kube]# docker exec -it 748c44a5333c /bin/sh
/ # curl armory-clouddriver:7002/health

It returns the following (formatted):

{
  "status": "UP",
  "kubernetes": {
    "status": "UP"
  },
  "dockerRegistry": {
    "status": "UP"
  },
  "redisHealth": {
    "status": "UP",
    "maxIdle": 100,
    "minIdle": 25,
    "numActive": 0,
    "numIdle": 3,
    "numWaiters": 0
  },
  "diskSpace": {
    "status": "UP",
    "total": 101954621440,
    "free": 83473608704,
    "threshold": 10485760
  }
}
/ # curl armory-clouddriver:7002
{
  "_links" : {
    "profile" : {
      "href" : "http://armory-clouddriver:7002/profile"
    }
  }
}
  • Verify via the pod/container IP
[root@vms22 ~]# curl 172.26.21.5:7002/health
{"status":"UP","kubernetes":{"status":"UP"},"dockerRegistry":{"status":"UP"},"redisHealth":{"status":"UP","maxIdle":100,"minIdle":25,"numActive":0,"numIdle":3,"numWaiters":0},"diskSpace":{"status":"UP","total":101954621440,"free":83475595264,"threshold":10485760}}
[root@vms22 ~]# curl 172.26.21.5:7002
{
  "_links" : {
    "profile" : {
      "href" : "http://172.26.21.5:7002/profile"
    }
  }
}

"status":"UP"表示正常

Deploy Front50

Deploy the data-persistence component Front50.

On the ops host vms200:

[root@vms200 ~]# mkdir /data/k8s-yaml/armory/front50
[root@vms200 ~]# cd /data/k8s-yaml/armory/front50

Prepare the docker image

Image download:

[root@vms200 front50]# docker pull armory/spinnaker-front50-slim:release-1.10.x-98b4ab9
release-1.10.x-98b4ab9: Pulling from armory/spinnaker-front50-slim
6c40cc604d8e: Already exists
e78b80385239: Already exists
f41fe1b6eee3: Pull complete
43986b0e233e: Pull complete
bbee873f8a25: Pull complete
f9e1630d99d3: Pull complete
Digest: sha256:893442d2ccc55ad9ea1e48651b19a89228516012de8e7618260b2e35cbeab77f
Status: Downloaded newer image for armory/spinnaker-front50-slim:release-1.10.x-98b4ab9
docker.io/armory/spinnaker-front50-slim:release-1.10.x-98b4ab9
[root@vms200 front50]# docker images|grep front50
armory/spinnaker-front50-slim   release-1.10.x-98b4ab9   97d161022d93   20 months ago   276MB
[root@vms200 front50]# docker tag 97d161022d93 harbor.op.com/armory/front50:v1.10.x
[root@vms200 front50]# docker push harbor.op.com/armory/front50:v1.10.x
...
[root@vms200 front50]# docker pull armory/spinnaker-front50-slim:release-1.8.x-93febf2
...
[root@vms200 front50]# docker tag armory/spinnaker-front50-slim:release-1.8.x-93febf2 harbor.op.com/armory/front50:v1.8.x
[root@vms200 front50]# docker push harbor.op.com/armory/front50:v1.8.x
...

Prepare the resource manifests

  • Deployment:dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: armory-front50
  name: armory-front50
  namespace: armory
spec:
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      app: armory-front50
  template:
    metadata:
      annotations:
        artifact.spinnaker.io/location: '"armory"'
        artifact.spinnaker.io/name: '"armory-front50"'
        artifact.spinnaker.io/type: '"kubernetes/deployment"'
        moniker.spinnaker.io/application: '"armory"'
        moniker.spinnaker.io/cluster: '"front50"'
      labels:
        app: armory-front50
    spec:
      containers:
      - name: armory-front50
        image: harbor.op.com/armory/front50:v1.10.x
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - -c
        args:
        - bash /opt/spinnaker/config/default/fetch.sh && cd /home/spinnaker/config
          && /opt/front50/bin/front50
        ports:
        - containerPort: 8080
          protocol: TCP
        # env:
        # - name: JAVA_OPTS
        #   value: -javaagent:/opt/front50/lib/jamm-0.2.5.jar -Xmx1000M
        envFrom:
        - configMapRef:
            name: init-env
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 600
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 180
          periodSeconds: 5
          successThreshold: 8
          timeoutSeconds: 1
        volumeMounts:
        - mountPath: /etc/podinfo
          name: podinfo
        - mountPath: /home/spinnaker/.aws
          name: credentials
        - mountPath: /opt/spinnaker/config/default
          name: default-config
        - mountPath: /opt/spinnaker/config/custom
          name: custom-config
      imagePullSecrets:
      - name: harbor
      volumes:
      - configMap:
          defaultMode: 420
          name: custom-config
        name: custom-config
      - configMap:
          defaultMode: 420
          name: default-config
        name: default-config
      - name: credentials
        secret:
          defaultMode: 420
          secretName: credentials
      - downwardAPI:
          defaultMode: 420
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.annotations
            path: annotations
        name: podinfo

With the 1.10.x image, keep those three env lines commented out. This Deployment relies on the default-config.yaml ConfigMap; fetch.sh, for example, comes from it.

  • Service:svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: armory-front50
  namespace: armory
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: armory-front50

Apply the resource manifests

On any compute node:

[root@vms22 ~]# kubectl apply -f http://k8s-yaml.op.com/armory/front50/dp.yaml
deployment.apps/armory-front50 created
[root@vms22 ~]# kubectl apply -f http://k8s-yaml.op.com/armory/front50/svc.yaml
service/armory-front50 created

Check:

[root@vms22 ~]# kubectl get po -n armory -o wide
NAME                                  READY   STATUS    RESTARTS   AGE     IP            NODE            NOMINATED NODE   READINESS GATES
armory-clouddriver-5fdb54ffbd-gwtzh   1/1     Running   0          95m     172.26.21.5   vms21.cos.com   <none>           <none>
armory-front50-c74947d8b-dtv5f        0/1     Running   0          5s      172.26.22.6   vms22.cos.com   <none>           <none>
minio-5d6b989d46-wxh4c                1/1     Running   0          5h42m   172.26.21.3   vms21.cos.com   <none>           <none>
redis-5979d767cd-dp2sb                1/1     Running   0          4h35m   172.26.21.4   vms21.cos.com   <none>           <none>
[root@vms22 ~]# kubectl exec -n armory minio-5d6b989d46-wxh4c -- curl -s http://armory-front50:8080/health
{"status":"UP"}
[root@vms21 ~]# docker ps -a|grep minio
748c44a5333c        harbor.op.com/armory/minio          "/usr/bin/docker-ent…"   6 hours ago         Up 6 hours                                                             k8s_minio_minio-5d6b989d46-wxh4c_armory_d482fdee-351b-44d0-87ae-c40a70978633_0
22088e51e681        harbor.op.com/public/pause:latest   "/pause"                 6 hours ago         Up 6 hours                                                             k8s_POD_minio-5d6b989d46-wxh4c_armory_d482fdee-351b-44d0-87ae-c40a70978633_0
[root@vms21 ~]# docker exec -it 748c44a5333c sh
/ #  curl armory-front50:8080/health
{"status":"UP"}

Browser access

Visit http://minio.op.com, log in, and check that the storage (the armory-platform bucket) has been created.

Before shutting the hosts down, stop the pods; this speeds up both shutdown and startup (a one-liner for this follows the listing below).

[root@vms22 ~]# kubectl -n armory get deployments.apps
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
armory-clouddriver   1/1     1            1           111m
armory-front50       1/1     1            1           15m
minio                1/1     1            1           5h58m
redis                1/1     1            1           4h51m
[root@vms22 ~]# kubectl -n armory scale deployment armory-clouddriver --replicas=0
[root@vms22 ~]# kubectl -n armory scale deployment armory-front50 --replicas=0
[root@vms22 ~]# kubectl -n armory scale deployment redis --replicas=0
[root@vms22 ~]# kubectl -n armory scale deployment minio --replicas=0
[root@vms22 ~]# kubectl -n armory get pod
No resources found in armory namespace.
[root@vms22 ~]# kubectl -n armory get deployments.apps
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
armory-clouddriver   0/0     0            0           114m
armory-front50       0/0     0            0           18m
minio                0/0     0            0           6h1m
redis                0/0     0            0           4h54m
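A convenience loop for the same scale-down (scale back up later with --replicas=1):

# Scale every Deployment in the armory namespace to zero replicas.
for d in $(kubectl -n armory get deploy -o name); do
  kubectl -n armory scale "$d" --replicas=0
done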

Deploy Orca

On the ops host vms200:

Prepare the docker image

Image download:

[root@vms200 ~]# docker pull docker.io/armory/spinnaker-orca-slim:release-1.8.x-de4ab55
release-1.8.x-de4ab55: Pulling from armory/spinnaker-orca-slim
...
docker.io/armory/spinnaker-orca-slim:release-1.8.x-de4ab55
[root@vms200 ~]# docker images|grep orca
armory/spinnaker-orca-slim   release-1.8.x-de4ab55      5103b1f73e04        2 years ago         141MB
[root@vms200 ~]# docker tag 5103b1f73e04 harbor.op.com/armory/orca:v1.8.x
[root@vms200 ~]# docker push harbor.op.com/armory/orca:v1.8.x
[root@vms200 ~]# docker pull armory/spinnaker-orca-slim:release-1.10.x-769f4e5
release-1.10.x-769f4e5: Pulling from armory/spinnaker-orca-slim
...
Status: Downloaded newer image for armory/spinnaker-orca-slim:release-1.10.x-769f4e5
docker.io/armory/spinnaker-orca-slim:release-1.10.x-769f4e5
[root@vms200 ~]# docker tag armory/spinnaker-orca-slim:release-1.10.x-769f4e5 harbor.op.com/armory/orca:v1.10.x
[root@vms200 ~]# docker push harbor.op.com/armory/orca:v1.10.x

Prepare the resource manifests

[root@vms200 ~]# mkdir /data/k8s-yaml/armory/orca
[root@vms200 ~]# cd /data/k8s-yaml/armory/orca
  • Deployment:dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: armory-orca
  name: armory-orca
  namespace: armory
spec:
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      app: armory-orca
  template:
    metadata:
      annotations:
        artifact.spinnaker.io/location: '"armory"'
        artifact.spinnaker.io/name: '"armory-orca"'
        artifact.spinnaker.io/type: '"kubernetes/deployment"'
        moniker.spinnaker.io/application: '"armory"'
        moniker.spinnaker.io/cluster: '"orca"'
      labels:
        app: armory-orca
    spec:
      containers:
      - name: armory-orca
        image: harbor.op.com/armory/orca:v1.10.x
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - -c
        args:
        - bash /opt/spinnaker/config/default/fetch.sh && cd /home/spinnaker/config
          && /opt/orca/bin/orca
        ports:
        - containerPort: 8083
          protocol: TCP
        env:
        - name: JAVA_OPTS
          value: -Xmx512M
        envFrom:
        - configMapRef:
            name: init-env
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /health
            port: 8083
            scheme: HTTP
          initialDelaySeconds: 600
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 8083
            scheme: HTTP
          initialDelaySeconds: 180
          periodSeconds: 3
          successThreshold: 5
          timeoutSeconds: 1
        volumeMounts:
        - mountPath: /etc/podinfo
          name: podinfo
        - mountPath: /opt/spinnaker/config/default
          name: default-config
        - mountPath: /opt/spinnaker/config/custom
          name: custom-config
      imagePullSecrets:
      - name: harbor
      volumes:
      - configMap:
          defaultMode: 420
          name: custom-config
        name: custom-config
      - configMap:
          defaultMode: 420
          name: default-config
        name: default-config
      - downwardAPI:
          defaultMode: 420
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.annotations
            path: annotations
        name: podinfo
  • Service:svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: armory-orca
  namespace: armory
spec:
  ports:
  - port: 8083
    protocol: TCP
    targetPort: 8083
  selector:
    app: armory-orca

Apply the resource manifests

On any compute node, or on vms200:

  • vms200:/data/k8s-yaml/armory/orca
[root@vms200 orca]# kubectl apply -f ./dp.yaml
deployment.apps/armory-orca created
[root@vms200 orca]# kubectl apply -f ./svc.yaml
service/armory-orca created
  • vms21
[root@vms21 ~]# kubectl -n armory get po -o wide
NAME                                  READY   STATUS    RESTARTS   AGE     IP            NODE            NOMINATED NODE   READINESS GATES
armory-clouddriver-5fdb54ffbd-mbmpv   1/1     Running   0          6m22s   172.26.21.6   vms21.cos.com   <none>           <none>
armory-front50-c74947d8b-8vpxt        1/1     Running   0          3m50s   172.26.22.6   vms22.cos.com   <none>           <none>
armory-orca-59cf846bf9-hrtjc          1/1     Running   4          11m     172.26.21.3   vms21.cos.com   <none>           <none>
minio-5d6b989d46-t24js                1/1     Running   0          7m39s   172.26.21.4   vms21.cos.com   <none>           <none>
redis-5979d767cd-jxkrt                1/1     Running   0          7m12s   172.26.21.5   vms21.cos.com   <none>           <none>
[root@vms21 ~]# kubectl -n armory exec minio-5d6b989d46-t24js -- curl -s 'http://armory-orca:8083/health'
{"status":"UP"}

You can also enter the minio pod/container and run curl armory-orca:8083/health.

Deploy Echo

On the ops host vms200:

Prepare the docker image

Image download:

[root@vms200 ~]# docker pull docker.io/armory/echo-armory:c36d576-release-1.8.x-617c567
c36d576-release-1.8.x-617c567: Pulling from armory/echo-armory
...
Digest: sha256:33f6d25aa536d245bc1181a9d6f42eceb8ce59c9daa954fa9e4a64095acf8356
Status: Downloaded newer image for armory/echo-armory:c36d576-release-1.8.x-617c567
docker.io/armory/echo-armory:c36d576-release-1.8.x-617c567
[root@vms200 ~]# docker images |grep echo
armory/echo-armory   c36d576-release-1.8.x-617c567   415efd46f474        2 years ago         287MB
[root@vms200 ~]# docker tag 415efd46f474 harbor.op.com/armory/echo:v1.8.x
[root@vms200 ~]# docker push harbor.op.com/armory/echo:v1.8.x
[root@vms200 ~]# docker pull armory/echo-armory:5891816-release-1.10.x-a568cf9
5891816-release-1.10.x-a568cf9: Pulling from armory/echo-armory
...
Digest: sha256:cd60f8af39079a3e943ddae03e468d76ef6964b3becc71c607554c356453574e
Status: Downloaded newer image for armory/echo-armory:5891816-release-1.10.x-a568cf9
docker.io/armory/echo-armory:5891816-release-1.10.x-a568cf9
[root@vms200 ~]# docker tag armory/echo-armory:5891816-release-1.10.x-a568cf9 harbor.op.com/armory/echo:v1.10.x
[root@vms200 ~]# docker push harbor.op.com/armory/echo:v1.10.x

Prepare the resource manifests

[root@vms200 ~]# mkdir /data/k8s-yaml/armory/echo
[root@vms200 ~]# cd /data/k8s-yaml/armory/echo
  • Deployment:dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: armory-echo
  name: armory-echo
  namespace: armory
spec:
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      app: armory-echo
  template:
    metadata:
      annotations:
        artifact.spinnaker.io/location: '"armory"'
        artifact.spinnaker.io/name: '"armory-echo"'
        artifact.spinnaker.io/type: '"kubernetes/deployment"'
        moniker.spinnaker.io/application: '"armory"'
        moniker.spinnaker.io/cluster: '"echo"'
      labels:
        app: armory-echo
    spec:
      containers:
      - name: armory-echo
        image: harbor.op.com/armory/echo:v1.8.x
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - -c
        args:
        - bash /opt/spinnaker/config/default/fetch.sh && cd /home/spinnaker/config
          && /opt/echo/bin/echo
        ports:
        - containerPort: 8089
          protocol: TCP
        env:
        - name: JAVA_OPTS
          value: -javaagent:/opt/echo/lib/jamm-0.2.5.jar -Xmx512M
        envFrom:
        - configMapRef:
            name: init-env
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 8089
            scheme: HTTP
          initialDelaySeconds: 600
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 8089
            scheme: HTTP
          initialDelaySeconds: 180
          periodSeconds: 3
          successThreshold: 5
          timeoutSeconds: 1
        volumeMounts:
        - mountPath: /etc/podinfo
          name: podinfo
        - mountPath: /opt/spinnaker/config/default
          name: default-config
        - mountPath: /opt/spinnaker/config/custom
          name: custom-config
      imagePullSecrets:
      - name: harbor
      volumes:
      - configMap:
          defaultMode: 420
          name: custom-config
        name: custom-config
      - configMap:
          defaultMode: 420
          name: default-config
        name: default-config
      - downwardAPI:
          defaultMode: 420
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.annotations
            path: annotations
        name: podinfo
  • Service:svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: armory-echo
  namespace: armory
spec:
  ports:
  - port: 8089
    protocol: TCP
    targetPort: 8089
  selector:
    app: armory-echo

Apply the resource manifests

On any compute node, or on vms200:

  • vms200:/data/k8s-yaml/armory/echo
[root@vms200 echo]# kubectl apply -f ./dp.yaml
deployment.apps/armory-echo created
[root@vms200 echo]# kubectl apply -f ./svc.yaml
service/armory-echo created
  • vms21
[root@vms21 ~]# kubectl -n armory get po -o wide
NAME                                  READY   STATUS    RESTARTS   AGE     IP            NODE            NOMINATED NODE   READINESS GATES
armory-clouddriver-5fdb54ffbd-mbmpv   1/1     Running   0          37m     172.26.21.6   vms21.cos.com   <none>           <none>
armory-echo-774875cc4-sxtcs           1/1     Running   0          5m22s   172.26.22.7   vms22.cos.com   <none>           <none>
armory-front50-c74947d8b-8vpxt        1/1     Running   0          34m     172.26.22.6   vms22.cos.com   <none>           <none>
armory-orca-59cf846bf9-hrtjc          1/1     Running   4          41m     172.26.21.3   vms21.cos.com   <none>           <none>
minio-5d6b989d46-t24js                1/1     Running   0          38m     172.26.21.4   vms21.cos.com   <none>           <none>
redis-5979d767cd-jxkrt                1/1     Running   0          37m     172.26.21.5   vms21.cos.com   <none>           <none>
[root@vms21 ~]# kubectl -n armory exec minio-5d6b989d46-t24js -- curl -s 'http://armory-echo:8089/health'
{"status":"UP"}

Deploy Igor

On the ops host vms200:

Prepare the docker image

Image download:

[root@vms200 ~]# docker pull docker.io/armory/spinnaker-igor-slim:release-1.8-x-new-install-healthy-ae2b329
release-1.8-x-new-install-healthy-ae2b329: Pulling from armory/spinnaker-igor-slim
...
Digest: sha256:2a487385908647f24ffa6cd11071ad571bec717008b7f16bc470ba754a7ad258
Status: Downloaded newer image for armory/spinnaker-igor-slim:release-1.8-x-new-install-healthy-ae2b329
docker.io/armory/spinnaker-igor-slim:release-1.8-x-new-install-healthy-ae2b329
[root@vms200 ~]# docker images | grep igor
armory/spinnaker-igor-slim    release-1.8-x-new-install-healthy-ae2b329   23984f5b43f6        2 years ago         135MB
[root@vms200 ~]# docker tag 23984f5b43f6 harbor.op.com/armory/igor:v1.8.x
[root@vms200 ~]# docker push harbor.op.com/armory/igor:v1.8.x
[root@vms200 ~]# docker pull armory/spinnaker-igor-slim:release-1.10.x-a4fd897
release-1.10.x-a4fd897: Pulling from armory/spinnaker-igor-slim
...
Digest: sha256:99cb6d52d8585bf736b00ec52d4d2c460f6cdac181bc32a08ecd40c1241f977a
Status: Downloaded newer image for armory/spinnaker-igor-slim:release-1.10.x-a4fd897
docker.io/armory/spinnaker-igor-slim:release-1.10.x-a4fd897
[root@vms200 ~]# docker tag armory/spinnaker-igor-slim:release-1.10.x-a4fd897 harbor.op.com/armory/igor:v1.10.x
[root@vms200 ~]# docker push harbor.op.com/armory/igor:v1.10.x

Prepare the resource manifests

[root@vms200 ~]# mkdir /data/k8s-yaml/armory/igor
[root@vms200 ~]# cd /data/k8s-yaml/armory/igor
  • Deployment:dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: armory-igor
  name: armory-igor
  namespace: armory
spec:
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      app: armory-igor
  template:
    metadata:
      annotations:
        artifact.spinnaker.io/location: '"armory"'
        artifact.spinnaker.io/name: '"armory-igor"'
        artifact.spinnaker.io/type: '"kubernetes/deployment"'
        moniker.spinnaker.io/application: '"armory"'
        moniker.spinnaker.io/cluster: '"igor"'
      labels:
        app: armory-igor
    spec:
      containers:
      - name: armory-igor
        image: harbor.op.com/armory/igor:v1.8.x
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - -c
        args:
        - bash /opt/spinnaker/config/default/fetch.sh && cd /home/spinnaker/config
          && /opt/igor/bin/igor
        ports:
        - containerPort: 8088
          protocol: TCP
        env:
        - name: IGOR_PORT_MAPPING
          value: -8088:8088
        - name: JAVA_OPTS
          value: -Xmx512M
        envFrom:
        - configMapRef:
            name: init-env
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 8088
            scheme: HTTP
          initialDelaySeconds: 600
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 8088
            scheme: HTTP
          initialDelaySeconds: 180
          periodSeconds: 5
          successThreshold: 5
          timeoutSeconds: 1
        volumeMounts:
        - mountPath: /etc/podinfo
          name: podinfo
        - mountPath: /opt/spinnaker/config/default
          name: default-config
        - mountPath: /opt/spinnaker/config/custom
          name: custom-config
      imagePullSecrets:
      - name: harbor
      securityContext:
        runAsUser: 0
      volumes:
      - configMap:
          defaultMode: 420
          name: custom-config
        name: custom-config
      - configMap:
          defaultMode: 420
          name: default-config
        name: default-config
      - downwardAPI:
          defaultMode: 420
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.annotations
            path: annotations
        name: podinfo
  • Service:svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: armory-igor
  namespace: armory
spec:
  ports:
  - port: 8088
    protocol: TCP
    targetPort: 8088
  selector:
    app: armory-igor

Apply the resource manifests

On any compute node, or on vms200:

  • vms200:/data/k8s-yaml/armory/igor
[root@vms200 igor]# kubectl apply -f ./dp.yaml
deployment.apps/armory-igor created
[root@vms200 igor]# kubectl apply -f ./svc.yaml
service/armory-igor created
  • vms21
[root@vms21 ~]# kubectl -n armory get po -o wide
NAME                                  READY   STATUS    RESTARTS   AGE     IP            NODE            NOMINATED NODE   READINESS GATES
armory-clouddriver-5fdb54ffbd-mbmpv   1/1     Running   0          58m     172.26.21.6   vms21.cos.com   <none>           <none>
armory-echo-774875cc4-sxtcs           1/1     Running   0          26m     172.26.22.7   vms22.cos.com   <none>           <none>
armory-front50-c74947d8b-8vpxt        1/1     Running   0          56m     172.26.22.6   vms22.cos.com   <none>           <none>
armory-igor-7689f5cc96-v6g9v          1/1     Running   0          8m41s   172.26.21.7   vms21.cos.com   <none>           <none>
armory-orca-59cf846bf9-hrtjc          1/1     Running   4          63m     172.26.21.3   vms21.cos.com   <none>           <none>
minio-5d6b989d46-t24js                1/1     Running   0          59m     172.26.21.4   vms21.cos.com   <none>           <none>
redis-5979d767cd-jxkrt                1/1     Running   0          59m     172.26.21.5   vms21.cos.com   <none>           <none>
[root@vms21 ~]# kubectl -n armory exec minio-5d6b989d46-t24js -- curl -s 'http://armory-igor:8088/health'
{"status":"UP"}

Deploy Gate

On the ops host vms200:

Prepare the docker image

Image download:

[root@vms200 ~]# docker pull docker.io/armory/gate-armory:dfafe73-release-1.8.x-5d505ca
dfafe73-release-1.8.x-5d505ca: Pulling from armory/gate-armory
...
Digest: sha256:e3ea88c29023bce211a1b0772cc6cb631f3db45c81a4c0394c4fc9999a417c1f
Status: Downloaded newer image for armory/gate-armory:dfafe73-release-1.8.x-5d505ca
docker.io/armory/gate-armory:dfafe73-release-1.8.x-5d505ca
[root@vms200 ~]# docker images | grep gate
armory/gate-armory     dfafe73-release-1.8.x-5d505ca     b092d4665301        2 years ago         179MB
[root@vms200 ~]# docker tag b092d4665301 harbor.op.com/armory/gate:v1.8.x
[root@vms200 ~]# docker push harbor.op.com/armory/gate:v1.8.x
[root@vms200 ~]# docker pull armory/gate-armory:0d6729c-release-1.10.x-a8bb998
0d6729c-release-1.10.x-a8bb998: Pulling from armory/gate-armory
...
Digest: sha256:6d06a597a9d8c98362230c6d4aa1c13944673702f433f504fd9eadf90b91d0e0
Status: Downloaded newer image for armory/gate-armory:0d6729c-release-1.10.x-a8bb998
docker.io/armory/gate-armory:0d6729c-release-1.10.x-a8bb998
[root@vms200 ~]# docker tag armory/gate-armory:0d6729c-release-1.10.x-a8bb998 harbor.op.com/armory/gate:v1.10.x
[root@vms200 ~]# docker push harbor.op.com/armory/gate:v1.10.x

Prepare the resource manifests

[root@vms200 ~]# mkdir /data/k8s-yaml/armory/gate
[root@vms200 ~]# cd /data/k8s-yaml/armory/gate
  • Deployment:dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: armory-gate
  name: armory-gate
  namespace: armory
spec:
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      app: armory-gate
  template:
    metadata:
      annotations:
        artifact.spinnaker.io/location: '"armory"'
        artifact.spinnaker.io/name: '"armory-gate"'
        artifact.spinnaker.io/type: '"kubernetes/deployment"'
        moniker.spinnaker.io/application: '"armory"'
        moniker.spinnaker.io/cluster: '"gate"'
      labels:
        app: armory-gate
    spec:
      containers:
      - name: armory-gate
        image: harbor.op.com/armory/gate:v1.10.x
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - -c
        args:
        - bash /opt/spinnaker/config/default/fetch.sh gate && cd /home/spinnaker/config
          && /opt/gate/bin/gate
        ports:
        - containerPort: 8084
          name: gate-port
          protocol: TCP
        - containerPort: 8085
          name: gate-api-port
          protocol: TCP
        env:
        - name: GATE_PORT_MAPPING
          value: -8084:8084
        - name: GATE_API_PORT_MAPPING
          value: -8085:8085
        - name: JAVA_OPTS
          value: -Xmx512M
        envFrom:
        - configMapRef:
            name: init-env
        livenessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - wget -O - http://localhost:8084/health || wget -O - https://localhost:8084/health
          failureThreshold: 5
          initialDelaySeconds: 600
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - wget -O - 'http://localhost:8084/health?checkDownstreamServices=true&downstreamServices=true'
              || wget -O - 'https://localhost:8084/health?checkDownstreamServices=true&downstreamServices=true'
          failureThreshold: 3
          initialDelaySeconds: 180
          periodSeconds: 5
          successThreshold: 10
          timeoutSeconds: 1
        volumeMounts:
        - mountPath: /etc/podinfo
          name: podinfo
        - mountPath: /opt/spinnaker/config/default
          name: default-config
        - mountPath: /opt/spinnaker/config/custom
          name: custom-config
      imagePullSecrets:
      - name: harbor
      securityContext:
        runAsUser: 0
      volumes:
      - configMap:
          defaultMode: 420
          name: custom-config
        name: custom-config
      - configMap:
          defaultMode: 420
          name: default-config
        name: default-config
      - downwardAPI:
          defaultMode: 420
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.annotations
            path: annotations
        name: podinfo
  • Service:svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: armory-gate
  namespace: armory
spec:
  ports:
  - name: gate-port
    port: 8084
    protocol: TCP
    targetPort: 8084
  - name: gate-api-port
    port: 8085
    protocol: TCP
    targetPort: 8085
  selector:
    app: armory-gate

Apply the resource manifests

On any compute node, or on vms200:

  • vms200:/data/k8s-yaml/armory/gate
[root@vms200 gate]# kubectl apply -f ./dp.yaml
deployment.apps/armory-gate created
[root@vms200 gate]# kubectl apply -f ./svc.yaml
service/armory-gate created
  • vms21
[root@vms21 ~]# kubectl -n armory get po -o wide
NAME                                  READY   STATUS    RESTARTS   AGE     IP            NODE            NOMINATED NODE   READINESS GATES
armory-clouddriver-5fdb54ffbd-mbmpv   1/1     Running   0          74m     172.26.21.6   vms21.cos.com   <none>           <none>
armory-echo-774875cc4-sxtcs           1/1     Running   0          42m     172.26.22.7   vms22.cos.com   <none>           <none>
armory-front50-c74947d8b-8vpxt        1/1     Running   0          71m     172.26.22.6   vms22.cos.com   <none>           <none>
armory-gate-68bbb98cb9-gblp6          1/1     Running   0          4m10s   172.26.22.8   vms22.cos.com   <none>           <none>
armory-igor-7689f5cc96-v6g9v          1/1     Running   0          24m     172.26.21.7   vms21.cos.com   <none>           <none>
armory-orca-59cf846bf9-hrtjc          1/1     Running   4          78m     172.26.21.3   vms21.cos.com   <none>           <none>
minio-5d6b989d46-t24js                1/1     Running   0          75m     172.26.21.4   vms21.cos.com   <none>           <none>
redis-5979d767cd-jxkrt                1/1     Running   0          75m     172.26.21.5   vms21.cos.com   <none>           <none>
[root@vms21 ~]# kubectl -n armory exec minio-5d6b989d46-t24js -- curl -s 'http://armory-gate:8084/health'
{"status":"UP"}

Deploy Deck

On the ops host vms200:

Prepare the docker image

Image download:

[root@vms200 ~]# docker pull docker.io/armory/deck-armory:d4bf0cf-release-1.8.x-0a33f94
d4bf0cf-release-1.8.x-0a33f94: Pulling from armory/deck-armory
...
Digest: sha256:ad85eb8e1ada327ab0b98471d10ed2a4e5eada3c154a2f17b6b23a089c74839f
Status: Downloaded newer image for armory/deck-armory:d4bf0cf-release-1.8.x-0a33f94
docker.io/armory/deck-armory:d4bf0cf-release-1.8.x-0a33f94
[root@vms200 ~]# docker images | grep deck
armory/deck-armory   d4bf0cf-release-1.8.x-0a33f94   9a87ba3b319f        2 years ago         518MB
[root@vms200 ~]# docker tag 9a87ba3b319f harbor.op.com/armory/deck:v1.8.x
[root@vms200 ~]# docker push harbor.op.com/armory/deck:v1.8.x
[root@vms200 ~]# docker pull armory/deck-armory:12927b8-release-1.10.x-c9abb38e5
12927b8-release-1.10.x-c9abb38e5: Pulling from armory/deck-armory
...
Digest: sha256:f6421f5a3bae09f0ea78fb915fe2269ce5ec60fb4ed1737d0e534ddf8149c69d
Status: Downloaded newer image for armory/deck-armory:12927b8-release-1.10.x-c9abb38e5
docker.io/armory/deck-armory:12927b8-release-1.10.x-c9abb38e5
[root@vms200 ~]# docker tag armory/deck-armory:12927b8-release-1.10.x-c9abb38e5 harbor.op.com/armory/deck:v1.10.x
[root@vms200 ~]# docker push harbor.op.com/armory/deck:v1.10.x

Prepare the resource manifests

[root@vms200 ~]# mkdir /data/k8s-yaml/armory/deck
[root@vms200 ~]# cd /data/k8s-yaml/armory/deck
  • Deployment:dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: armory-deck
  name: armory-deck
  namespace: armory
spec:
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      app: armory-deck
  template:
    metadata:
      annotations:
        artifact.spinnaker.io/location: '"armory"'
        artifact.spinnaker.io/name: '"armory-deck"'
        artifact.spinnaker.io/type: '"kubernetes/deployment"'
        moniker.spinnaker.io/application: '"armory"'
        moniker.spinnaker.io/cluster: '"deck"'
      labels:
        app: armory-deck
    spec:
      containers:
      - name: armory-deck
        image: harbor.op.com/armory/deck:v1.8.x
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - -c
        args:
        - bash /opt/spinnaker/config/default/fetch.sh && /entrypoint.sh
        ports:
        - containerPort: 9000
          protocol: TCP
        envFrom:
        - configMapRef:
            name: init-env
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 9000
            scheme: HTTP
          initialDelaySeconds: 180
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 5
          httpGet:
            path: /
            port: 9000
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 3
          successThreshold: 5
          timeoutSeconds: 1
        volumeMounts:
        - mountPath: /etc/podinfo
          name: podinfo
        - mountPath: /opt/spinnaker/config/default
          name: default-config
        - mountPath: /opt/spinnaker/config/custom
          name: custom-config
      imagePullSecrets:
      - name: harbor
      volumes:
      - configMap:
          defaultMode: 420
          name: custom-config
        name: custom-config
      - configMap:
          defaultMode: 420
          name: default-config
        name: default-config
      - downwardAPI:
          defaultMode: 420
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.annotations
            path: annotations
        name: podinfo
  • Service:svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: armory-deck
  namespace: armory
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9000
  selector:
    app: armory-deck

Apply the resource manifests

On any compute node, or on vms200:

  • vms200:/data/k8s-yaml/armory/deck
[root@vms200 deck]# kubectl apply -f ./dp.yaml
deployment.apps/armory-deck created
[root@vms200 deck]# kubectl apply -f ./svc.yaml
service/armory-deck created
  • vms21
[root@vms21 ~]# kubectl -n armory get po -o wide
NAME                                  READY   STATUS    RESTARTS   AGE     IP            NODE            NOMINATED NODE   READINESS GATES
armory-clouddriver-5fdb54ffbd-mbmpv   1/1     Running   0          89m     172.26.21.6   vms21.cos.com   <none>           <none>
armory-deck-8688b588d5-mjbgm          1/1     Running   0          3m22s   172.26.21.8   vms21.cos.com   <none>           <none>
armory-echo-774875cc4-sxtcs           1/1     Running   0          57m     172.26.22.7   vms22.cos.com   <none>           <none>
armory-front50-c74947d8b-8vpxt        1/1     Running   0          86m     172.26.22.6   vms22.cos.com   <none>           <none>
armory-gate-68bbb98cb9-gblp6          1/1     Running   0          19m     172.26.22.8   vms22.cos.com   <none>           <none>
armory-igor-7689f5cc96-v6g9v          1/1     Running   0          39m     172.26.21.7   vms21.cos.com   <none>           <none>
armory-orca-59cf846bf9-hrtjc          1/1     Running   4          93m     172.26.21.3   vms21.cos.com   <none>           <none>
minio-5d6b989d46-t24js                1/1     Running   0          90m     172.26.21.4   vms21.cos.com   <none>           <none>
redis-5979d767cd-jxkrt                1/1     Running   0          90m     172.26.21.5   vms21.cos.com   <none>           <none>
[root@vms21 ~]# kubectl -n armory exec minio-5d6b989d46-t24js -- curl -Is 'http://armory-deck'
HTTP/1.1 200 OK
Server: nginx/1.4.6 (Ubuntu)
Date: Thu, 01 Oct 2020 01:21:16 GMT
Content-Type: text/html
Content-Length: 22031
Last-Modified: Tue, 17 Jul 2018 17:42:20 GMT
Connection: keep-alive
ETag: "5b4e2a7c-560f"
Accept-Ranges: bytes

Deploy the Nginx front-end proxy

Prepare the docker image

Pull the image:

[root@vms200 ~]# docker pull nginx:1.12.2
1.12.2: Pulling from library/nginx
...
Digest: sha256:72daaf46f11cc753c4eab981cbf869919bd1fee3d2170a2adeac12400f494728
Status: Downloaded newer image for nginx:1.12.2
docker.io/library/nginx:1.12.2
[root@vms200 ~]# docker tag nginx:1.12.2 harbor.op.com/armory/nginx:v1.12.2
[root@vms200 ~]# docker push harbor.op.com/armory/nginx:v1.12.2

Prepare the resource manifests

[root@vms200 ~]# mkdir /data/k8s-yaml/armory/nginx
[root@vms200 ~]# cd /data/k8s-yaml/armory/nginx
  • Deployment:dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: armory-nginx
  name: armory-nginx
  namespace: armory
spec:
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      app: armory-nginx
  template:
    metadata:
      annotations:
        artifact.spinnaker.io/location: '"armory"'
        artifact.spinnaker.io/name: '"armory-nginx"'
        artifact.spinnaker.io/type: '"kubernetes/deployment"'
        moniker.spinnaker.io/application: '"armory"'
        moniker.spinnaker.io/cluster: '"nginx"'
      labels:
        app: armory-nginx
    spec:
      containers:
      - name: armory-nginx
        image: harbor.op.com/armory/nginx:v1.12.2
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - -c
        args:
        - bash /opt/spinnaker/config/default/fetch.sh nginx && nginx -g 'daemon off;'
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        - containerPort: 8085
          name: api
          protocol: TCP
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 80
            scheme: HTTP
          initialDelaySeconds: 180
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 80
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 3
          successThreshold: 5
          timeoutSeconds: 1
        volumeMounts:
        - mountPath: /opt/spinnaker/config/default
          name: default-config
        - mountPath: /etc/nginx/conf.d
          name: custom-config
      imagePullSecrets:
      - name: harbor
      volumes:
      - configMap:
          defaultMode: 420
          name: custom-config
        name: custom-config
      - configMap:
          defaultMode: 420
          name: default-config
        name: default-config
  • Service:svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: armory-nginx
  namespace: armory
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  - name: api
    port: 8085
    protocol: TCP
    targetPort: 8085
  selector:
    app: armory-nginx
  • ingress.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  labels:
    app: spinnaker
    web: spinnaker.op.com
  name: spinnaker-route
  namespace: armory
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`spinnaker.op.com`)
      kind: Rule
      services:
        - name: armory-nginx
          port: 80

To look up the API version for IngressRoute:

[root@vms22 ~]# kubectl explain ingressroute
KIND:     IngressRoute
VERSION:  traefik.containo.us/v1alpha1

Alternatively, use a classic Ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: spinnaker
    web: spinnaker.op.com
  name: armory-nginx
  namespace: armory
spec:
  rules:
  - host: spinnaker.op.com
    http:
      paths:
      - backend:
          serviceName: armory-nginx
          servicePort: 80

Apply the resource manifests

On any compute node, or on vms200:

  • vms200:/data/k8s-yaml/armory/nginx
[root@vms200 nginx]# kubectl apply -f dp.yaml
deployment.apps/armory-nginx created
[root@vms200 nginx]# kubectl apply -f svc.yaml
service/armory-nginx created
[root@vms200 nginx]# kubectl apply -f ./ingress.yaml
ingressroute.traefik.containo.us/spinnaker-route created
  • vms21
[root@vms21 ~]# kubectl -n armory get po -o wide
NAME                                  READY   STATUS    RESTARTS   AGE     IP            NODE            NOMINATED NODE   READINESS GATES
armory-clouddriver-5fdb54ffbd-mbmpv   1/1     Running   0          120m    172.26.21.6   vms21.cos.com   <none>           <none>
armory-deck-8688b588d5-mjbgm          1/1     Running   0          34m     172.26.21.8   vms21.cos.com   <none>           <none>
armory-echo-774875cc4-sxtcs           1/1     Running   0          88m     172.26.22.7   vms22.cos.com   <none>           <none>
armory-front50-c74947d8b-8vpxt        1/1     Running   0          117m    172.26.22.6   vms22.cos.com   <none>           <none>
armory-gate-68bbb98cb9-gblp6          1/1     Running   0          50m     172.26.22.8   vms22.cos.com   <none>           <none>
armory-igor-7689f5cc96-v6g9v          1/1     Running   0          70m     172.26.21.7   vms21.cos.com   <none>           <none>
armory-nginx-64f88b4bc8-542nl         1/1     Running   0          4m24s   172.26.22.9   vms22.cos.com   <none>           <none>
armory-orca-59cf846bf9-hrtjc          1/1     Running   4          124m    172.26.21.3   vms21.cos.com   <none>           <none>
minio-5d6b989d46-t24js                1/1     Running   0          121m    172.26.21.4   vms21.cos.com   <none>           <none>
redis-5979d767cd-jxkrt                1/1     Running   0          120m    172.26.21.5   vms21.cos.com   <none>           <none>
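
The route object itself can be checked too (assuming the IngressRoute variant was applied):

[root@vms21 ~]# kubectl -n armory get ingressroute spinnaker-route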

Add the DNS record

On vms11:

[root@vms11 ~]# vi /var/named/op.com.zone
...
spinnaker          A    192.168.26.10

Remember to bump the zone file's serial number.

[root@vms11 ~]# systemctl restart named
[root@vms11 ~]# dig -t A spinnaker.op.com +short
192.168.26.10
[root@vms11 ~]# host spinnaker.op.com
spinnaker.op.com has address 192.168.26.10

Open in a browser: http://spinnaker.op.com

image.png

At this point, **Spinnaker** is fully deployed!

Starting and stopping Spinnaker

[root@vms22 ~]# kubectl -n armory get deploy
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
armory-clouddriver   1/1     1            0           44h
armory-deck          1/1     1            1           7h2m
armory-echo          1/1     1            0           8h
armory-front50       1/1     1            0           42h
armory-gate          1/1     1            1           7h15m
armory-igor          1/1     1            1           7h38m
armory-nginx         1/1     1            1           6h29m
armory-orca          1/1     1            1           8h
minio                1/1     1            1           2d
redis                1/1     1            1           47h
[root@vms22 ~]# kubectl -n armory scale deployment armory-clouddriver --replicas=0
[root@vms22 ~]# kubectl -n armory scale deployment armory-deck --replicas=0
[root@vms22 ~]# kubectl -n armory scale deployment armory-echo --replicas=0
[root@vms22 ~]# kubectl -n armory scale deployment armory-front50 --replicas=0
[root@vms22 ~]# kubectl -n armory scale deployment armory-gate --replicas=0
[root@vms22 ~]# kubectl -n armory scale deployment armory-igor --replicas=0
[root@vms22 ~]# kubectl -n armory scale deployment armory-nginx --replicas=0
[root@vms22 ~]# kubectl -n armory scale deployment armory-orca --replicas=0
[root@vms22 ~]# kubectl -n armory scale deployment minio --replicas=0
[root@vms22 ~]# kubectl -n armory scale deployment redis --replicas=0
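
The ten scale commands above can be wrapped in a small helper; a minimal sketch (spinnaker-ctl.sh is a hypothetical name, and it simply scales every Deployment in the armory namespace):

#!/bin/bash
# spinnaker-ctl.sh -- start/stop all Spinnaker components at once (hypothetical helper).
# Usage: ./spinnaker-ctl.sh start | ./spinnaker-ctl.sh stop
case "$1" in
  start) REPLICAS=1 ;;
  stop)  REPLICAS=0 ;;
  *) echo "Usage: $0 start|stop" >&2; exit 1 ;;
esac
# Enumerate every Deployment in the namespace and scale it to the target count.
for deploy in $(kubectl -n armory get deploy -o name); do
  kubectl -n armory scale "$deploy" --replicas="$REPLICAS"
done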

Using Spinnaker

The entire workflow is now GUI-driven and automated. Everything from the earlier exercises can be done on this one centralized management platform:

  • Build images with Spinnaker driving Jenkins
  • Use Spinnaker to configure the dubbo service provider and publish it to K8S
  • Use Spinnaker to configure the dubbo service consumer on K8S
  • Use Spinnaker for releases and production configuration

Create an application

Applications > Actions : Create Application

image.png

New Application : fat0cicd
image.png

Keep the application name short. Naming convention: environment name + 0 + application/project/department.

Create : (if the page jitters, adjust the browser zoom level)
image.png

Create the service

  • fat0cicd :LOAD BALANCERS

image.png

  • Create Load Balancer
    • Account : cluster-admin
    • Namespace : test
    • Detail : demo-web
    • Target Port : 8080
  • Create : then refresh

image.png

Create the ingress

  • FIREWALLS : Create Firewall

image.png

  • Create New Firewall
    • Account : cluster-admin
    • Namespace : test
    • Detail : demo-web
    • Rules : Add New Rule
      • Host : demo-fat.op.com
      • Add New Path
        • Load Balancer : fat0cicd-demo-web
          Path : /
          Port : 80
  • Create : then refresh

image.png
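
Behind these two forms Spinnaker creates an ordinary Service and Ingress in the test namespace; they can be verified from any node (object names assumed to follow the fat0cicd-demo-web pattern shown above):

[root@vms22 ~]# kubectl -n test get svc,ingress | grep demo-web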

Create the Pipeline

  • PIPELINES

image.png

  • Configure a new pipeline
    • Type : Pipeline
      Pipeline Name : dubbo-demo-service
  • Create

image.png

  • Parameters : Add Parameters, adding four parameters so the pipeline build is driven by parameters

Add the following four parameters. This run builds and releases dubbo-demo-service, so the default project name and image name are essentially fixed:

  1. name: app_name
    required: true
    default: dubbo-demo-service
    description: the project's repository name in Git
  2. name: git_ver
    required: true
    description: the project version, commit ID, or branch
  3. name: image_name
    required: true
    default: app/dubbo-demo-service
    description: image name, in the form repository/image
  4. name: add_tag
    required: true
    description: tag suffix appended after git_ver, in YYYYmmdd_HHMM format

image.png

Save: the saved parameters are written into Jenkins.
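
Inside the Jenkins job these parameters typically end up as the image coordinates; a minimal sketch of the build/push step, assuming the harbor.op.com registry used throughout this document and the git_ver plus add_tag naming described above (an illustration, not the literal pipeline script):

# Hypothetical excerpt of the Jenkins build step; tag = <git_ver>_<add_tag>
docker build -t harbor.op.com/${image_name}:${git_ver}_${add_tag} .
docker push harbor.op.com/${image_name}:${git_ver}_${add_tag}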

Create the Jenkins build stage

In the test environment Jenkins is usually part of the pipeline; in production this Jenkins stage is usually skipped.

  • Add stage : Jenkins

image.png

  • Jenkins Configuration
    • Specify the Jenkins login user; Jenkins has been loaded, and the parameterized-build options below are loaded as well
    • Link the pipeline defined in Jenkins
    • Pipelines are created per service, so each service's build only needs a few parameter values changed:

Option       Value
add_tag      ${ parameters.add_tag }
app_name     ${ parameters.app_name }
base_image   base/jre8:8u112
git_repo     https://gitee.com/cloudlove2007/dubbo-demo-service.git
git_ver      ${ parameters.git_ver }
image_name   ${ parameters.image_name }
maven        3.6.3-8u261
mvn_cmd      check "Use default"
mvn_dir      ./
target_dir   ./dubbo-server/target

image.png

Save Changes

  • Click PIPELINES

image.png

Run the pipeline
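
Besides clicking Start Manual Execution in the UI, a pipeline can be triggered through the Gate API that Nginx proxies under /api/; a sketch, assuming the standard Gate endpoint POST /pipelines/{application}/{pipelineName} and the parameters defined above (not verified on this cluster):

[root@vms200 ~]# curl -X POST 'http://spinnaker.op.com/api/pipelines/fat0cicd/dubbo-demo-service' \
      -H 'Content-Type: application/json' \
      -d '{"type": "manual", "parameters": {"app_name": "dubbo-demo-service", "git_ver": "master", "image_name": "app/dubbo-demo-service", "add_tag": "20201001_1200"}}'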

Open questions

  • How is Spinnaker's account authentication implemented?
  • How are gray, canary, and blue-green releases implemented here?

default-config.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: default-config
  namespace: armory
data:
  barometer.yml: |
    server:
      port: 9092

    spinnaker:
      redis:
        host: ${services.redis.host}
        port: ${services.redis.port}
  clouddriver-armory.yml: |
    aws:
      defaultAssumeRole: role/${SPINNAKER_AWS_DEFAULT_ASSUME_ROLE:SpinnakerManagedProfile}
      accounts:
        - name: default-aws-account
          accountId: ${SPINNAKER_AWS_DEFAULT_ACCOUNT_ID:none}

      client:
        maxErrorRetry: 20

    serviceLimits:
      cloudProviderOverrides:
        aws:
          rateLimit: 15.0

      implementationLimits:
        AmazonAutoScaling:
          defaults:
            rateLimit: 3.0
        AmazonElasticLoadBalancing:
          defaults:
            rateLimit: 5.0

    security.basic.enabled: false
    management.security.enabled: false
  clouddriver-dev.yml: |

    serviceLimits:
      defaults:
        rateLimit: 2
  clouddriver.yml: |
    server:
      port: ${services.clouddriver.port:7002}
      address: ${services.clouddriver.host:localhost}

    redis:
      connection: ${REDIS_HOST:redis://localhost:6379}

    udf:
      enabled: ${services.clouddriver.aws.udf.enabled:true}
      udfRoot: /opt/spinnaker/config/udf
      defaultLegacyUdf: false

    default:
      account:
        env: ${providers.aws.primaryCredentials.name}

    aws:
      enabled: ${providers.aws.enabled:false}
      defaults:
        iamRole: ${providers.aws.defaultIAMRole:BaseIAMRole}
      defaultRegions:
        - name: ${providers.aws.defaultRegion:us-east-1}
      defaultFront50Template: ${services.front50.baseUrl}
      defaultKeyPairTemplate: ${providers.aws.defaultKeyPairTemplate}

    azure:
      enabled: ${providers.azure.enabled:false}

      accounts:
        - name: ${providers.azure.primaryCredentials.name}
          clientId: ${providers.azure.primaryCredentials.clientId}
          appKey: ${providers.azure.primaryCredentials.appKey}
          tenantId: ${providers.azure.primaryCredentials.tenantId}
          subscriptionId: ${providers.azure.primaryCredentials.subscriptionId}

    google:
      enabled: ${providers.google.enabled:false}

      accounts:
        - name: ${providers.google.primaryCredentials.name}
          project: ${providers.google.primaryCredentials.project}
          jsonPath: ${providers.google.primaryCredentials.jsonPath}
          consul:
            enabled: ${providers.google.primaryCredentials.consul.enabled:false}

    cf:
      enabled: ${providers.cf.enabled:false}

      accounts:
        - name: ${providers.cf.primaryCredentials.name}
          api: ${providers.cf.primaryCredentials.api}
          console: ${providers.cf.primaryCredentials.console}
          org: ${providers.cf.defaultOrg}
          space: ${providers.cf.defaultSpace}
          username: ${providers.cf.account.name:}
          password: ${providers.cf.account.password:}

    kubernetes:
      enabled: ${providers.kubernetes.enabled:false}
      accounts:
        - name: ${providers.kubernetes.primaryCredentials.name}
          dockerRegistries:
            - accountName: ${providers.kubernetes.primaryCredentials.dockerRegistryAccount}

    openstack:
      enabled: ${providers.openstack.enabled:false}
      accounts:
        - name: ${providers.openstack.primaryCredentials.name}
          authUrl: ${providers.openstack.primaryCredentials.authUrl}
          username: ${providers.openstack.primaryCredentials.username}
          password: ${providers.openstack.primaryCredentials.password}
          projectName: ${providers.openstack.primaryCredentials.projectName}
          domainName: ${providers.openstack.primaryCredentials.domainName:Default}
          regions: ${providers.openstack.primaryCredentials.regions}
          insecure: ${providers.openstack.primaryCredentials.insecure:false}
          userDataFile: ${providers.openstack.primaryCredentials.userDataFile:}

          lbaas:
            pollTimeout: 60
            pollInterval: 5

    dockerRegistry:
      enabled: ${providers.dockerRegistry.enabled:false}
      accounts:
        - name: ${providers.dockerRegistry.primaryCredentials.name}
          address: ${providers.dockerRegistry.primaryCredentials.address}
          username: ${providers.dockerRegistry.primaryCredentials.username:}
          passwordFile: ${providers.dockerRegistry.primaryCredentials.passwordFile}

    credentials:
      primaryAccountTypes: ${providers.aws.primaryCredentials.name}, ${providers.google.primaryCredentials.name}, ${providers.cf.primaryCredentials.name}, ${providers.azure.primaryCredentials.name}
      challengeDestructiveActionsEnvironments: ${providers.aws.primaryCredentials.name}, ${providers.google.primaryCredentials.name}, ${providers.cf.primaryCredentials.name}, ${providers.azure.primaryCredentials.name}

    spectator:
      applicationName: ${spring.application.name}
      webEndpoint:
        enabled: ${services.spectator.webEndpoint.enabled:false}
        prototypeFilter:
          path: ${services.spectator.webEndpoint.prototypeFilter.path:}

      stackdriver:
        enabled: ${services.stackdriver.enabled}
        projectName: ${services.stackdriver.projectName}
        credentialsPath: ${services.stackdriver.credentialsPath}

    stackdriver:
      hints:
        - name: controller.invocations
          labels:
          - account
          - region
  dinghy.yml: ""
  echo-armory.yml: |
    diagnostics:
      enabled: true
      id: ${ARMORY_ID:unknown}

    armorywebhooks:
      enabled: false
      forwarding:
        baseUrl: http://armory-dinghy:8081
        endpoint: v1/webhooks
  echo-noncron.yml: |
    scheduler:
      enabled: false
  echo.yml: |
    server:
      port: ${services.echo.port:8089}
      address: ${services.echo.host:localhost}

    cassandra:
      enabled: ${services.echo.cassandra.enabled:false}
      embedded: ${services.cassandra.embedded:false}
      host: ${services.cassandra.host:localhost}

    spinnaker:
      baseUrl: ${services.deck.baseUrl}
      cassandra:
         enabled: ${services.echo.cassandra.enabled:false}
      inMemory:
         enabled: ${services.echo.inMemory.enabled:true}

    front50:
      baseUrl: ${services.front50.baseUrl:http://localhost:8080 }

    orca:
      baseUrl: ${services.orca.baseUrl:http://localhost:8083 }

    endpoints.health.sensitive: false

    slack:
      enabled: ${services.echo.notifications.slack.enabled:false}
      token: ${services.echo.notifications.slack.token}

    spring:
      mail:
        host: ${mail.host}

    mail:
      enabled: ${services.echo.notifications.mail.enabled:false}
      host: ${services.echo.notifications.mail.host}
      from: ${services.echo.notifications.mail.fromAddress}

    hipchat:
      enabled: ${services.echo.notifications.hipchat.enabled:false}
      baseUrl: ${services.echo.notifications.hipchat.url}
      token: ${services.echo.notifications.hipchat.token}

    twilio:
      enabled: ${services.echo.notifications.sms.enabled:false}
      baseUrl: ${services.echo.notifications.sms.url:https://api.twilio.com/ }
      account: ${services.echo.notifications.sms.account}
      token: ${services.echo.notifications.sms.token}
      from: ${services.echo.notifications.sms.from}

    scheduler:
      enabled: ${services.echo.cron.enabled:true}
      threadPoolSize: 20
      triggeringEnabled: true
      pipelineConfigsPoller:
        enabled: true
        pollingIntervalMs: 30000
      cron:
        timezone: ${services.echo.cron.timezone}

    spectator:
      applicationName: ${spring.application.name}
      webEndpoint:
        enabled: ${services.spectator.webEndpoint.enabled:false}
        prototypeFilter:
          path: ${services.spectator.webEndpoint.prototypeFilter.path:}

      stackdriver:
        enabled: ${services.stackdriver.enabled}
        projectName: ${services.stackdriver.projectName}
        credentialsPath: ${services.stackdriver.credentialsPath}

    webhooks:
      artifacts:
        enabled: true
  fetch.sh: |+

    CONFIG_LOCATION=${SPINNAKER_HOME:-"/opt/spinnaker"}/config
    CONTAINER=$1

    rm -f /opt/spinnaker/config/*.yml

    mkdir -p ${CONFIG_LOCATION}

    for filename in /opt/spinnaker/config/default/*.yml; do
        cp $filename ${CONFIG_LOCATION}
    done

    if [ -d /opt/spinnaker/config/custom ]; then
        for filename in /opt/spinnaker/config/custom/*; do
            cp $filename ${CONFIG_LOCATION}
        done
    fi

    add_ca_certs() {
      ca_cert_path="$1"
      jks_path="$2"
      alias="$3"

      if [[ "$(whoami)" != "root" ]]; then
        echo "INFO: I do not have proper permisions to add CA roots"
        return
      fi

      if [[ ! -f ${ca_cert_path} ]]; then
        echo "INFO: No CA cert found at ${ca_cert_path}"
        return
      fi
      keytool -importcert \
          -file ${ca_cert_path} \
          -keystore ${jks_path} \
          -alias ${alias} \
          -storepass changeit \
          -noprompt
    }

    if [ `which keytool` ]; then
      echo "INFO: Keytool found adding certs where appropriate"
      add_ca_certs "${CONFIG_LOCATION}/ca.crt" "/etc/ssl/certs/java/cacerts" "custom-ca"
    else
      echo "INFO: Keytool not found, not adding any certs/private keys"
    fi

    saml_pem_path="/opt/spinnaker/config/custom/saml.pem"
    saml_pkcs12_path="/tmp/saml.pkcs12"
    saml_jks_path="${CONFIG_LOCATION}/saml.jks"

    x509_ca_cert_path="/opt/spinnaker/config/custom/x509ca.crt"
    x509_client_cert_path="/opt/spinnaker/config/custom/x509client.crt"
    x509_jks_path="${CONFIG_LOCATION}/x509.jks"
    x509_nginx_cert_path="/opt/nginx/certs/ssl.crt"

    if [ "${CONTAINER}" == "gate" ]; then
        if [ -f ${saml_pem_path} ]; then
            echo "Loading ${saml_pem_path} into ${saml_jks_path}"
            openssl pkcs12 -export -out ${saml_pkcs12_path} -in ${saml_pem_path} -password pass:changeit -name saml
            keytool -genkey -v -keystore ${saml_jks_path} -alias saml \
                    -keyalg RSA -keysize 2048 -validity 10000 \
                    -storepass changeit -keypass changeit -dname "CN=armory"
            keytool -importkeystore \
                    -srckeystore ${saml_pkcs12_path} \
                    -srcstoretype PKCS12 \
                    -srcstorepass changeit \
                    -destkeystore ${saml_jks_path} \
                    -deststoretype JKS \
                    -storepass changeit \
                    -alias saml \
                    -destalias saml \
                    -noprompt
        else
            echo "No SAML IDP pemfile found at ${saml_pem_path}"
        fi
        if [ -f ${x509_ca_cert_path} ]; then
            echo "Loading ${x509_ca_cert_path} into ${x509_jks_path}"
            add_ca_certs ${x509_ca_cert_path} ${x509_jks_path} "ca"
        else
            echo "No x509 CA cert found at ${x509_ca_cert_path}"
        fi
        if [ -f ${x509_client_cert_path} ]; then
            echo "Loading ${x509_client_cert_path} into ${x509_jks_path}"
            add_ca_certs ${x509_client_cert_path} ${x509_jks_path} "client"
        else
            echo "No x509 Client cert found at ${x509_client_cert_path}"
        fi

        if [ -f ${x509_nginx_cert_path} ]; then
            echo "Creating a self-signed CA (EXPIRES IN 360 DAYS) with java keystore: ${x509_jks_path}"
            echo -e "\n\n\n\n\n\ny\n" | keytool -genkey -keyalg RSA -alias server -keystore keystore.jks -storepass changeit -validity 360 -keysize 2048
            keytool -importkeystore \
                    -srckeystore keystore.jks \
                    -srcstorepass changeit \
                    -destkeystore "${x509_jks_path}" \
                    -storepass changeit \
                    -srcalias server \
                    -destalias server \
                    -noprompt
        else
            echo "No x509 nginx cert found at ${x509_nginx_cert_path}"
        fi
    fi

    if [ "${CONTAINER}" == "nginx" ]; then
        nginx_conf_path="/opt/spinnaker/config/default/nginx.conf"
        if [ -f ${nginx_conf_path} ]; then
            cp ${nginx_conf_path} /etc/nginx/nginx.conf
        fi
    fi

  fiat.yml: |-
    server:
      port: ${services.fiat.port:7003}
      address: ${services.fiat.host:localhost}

    redis:
      connection: ${services.redis.connection:redis://localhost:6379}

    spectator:
      applicationName: ${spring.application.name}
      webEndpoint:
        enabled: ${services.spectator.webEndpoint.enabled:false}
        prototypeFilter:
          path: ${services.spectator.webEndpoint.prototypeFilter.path:}
      stackdriver:
        enabled: ${services.stackdriver.enabled}
        projectName: ${services.stackdriver.projectName}
        credentialsPath: ${services.stackdriver.credentialsPath}

    hystrix:
     command:
       default.execution.isolation.thread.timeoutInMilliseconds: 20000

    logging:
      level:
        com.netflix.spinnaker.fiat: DEBUG
  front50-armory.yml: |
    spinnaker:
      redis:
        enabled: true
        host: redis
  front50.yml: |
    server:
      port: ${services.front50.port:8080}
      address: ${services.front50.host:localhost}

    hystrix:
      command:
        default.execution.isolation.thread.timeoutInMilliseconds: 15000

    cassandra:
      enabled: ${services.front50.cassandra.enabled:false}
      embedded: ${services.cassandra.embedded:false}
      host: ${services.cassandra.host:localhost}

    aws:
      simpleDBEnabled: ${providers.aws.simpleDBEnabled:false}
      defaultSimpleDBDomain: ${providers.aws.defaultSimpleDBDomain}

    spinnaker:
      cassandra:
        enabled: ${services.front50.cassandra.enabled:false}
        host: ${services.cassandra.host:localhost}
        port: ${services.cassandra.port:9042}
        cluster: ${services.cassandra.cluster:CASS_SPINNAKER}
        keyspace: front50
        name: global

      redis:
        enabled: ${services.front50.redis.enabled:false}

      gcs:
        enabled: ${services.front50.gcs.enabled:false}
        bucket: ${services.front50.storage_bucket:}
        bucketLocation: ${services.front50.bucket_location:}
        rootFolder: ${services.front50.rootFolder:front50}
        project: ${providers.google.primaryCredentials.project}
        jsonPath: ${providers.google.primaryCredentials.jsonPath}

      s3:
        enabled: ${services.front50.s3.enabled:false}
        bucket: ${services.front50.storage_bucket:}
        rootFolder: ${services.front50.rootFolder:front50}

    spectator:
      applicationName: ${spring.application.name}
      webEndpoint:
        enabled: ${services.spectator.webEndpoint.enabled:false}
        prototypeFilter:
          path: ${services.spectator.webEndpoint.prototypeFilter.path:}

      stackdriver:
        enabled: ${services.stackdriver.enabled}
        projectName: ${services.stackdriver.projectName}
        credentialsPath: ${services.stackdriver.credentialsPath}

    stackdriver:
      hints:
        - name: controller.invocations
          labels:
          - application
          - cause
        - name: aws.request.httpRequestTime
          labels:
          - status
          - exception
          - AWSErrorCode
        - name: aws.request.requestSigningTime
          labels:
          - exception
  gate-armory.yml: |+
    lighthouse:
        baseUrl: http://${DEFAULT_DNS_NAME:lighthouse}:5000

  gate.yml: |
    server:
      port: ${services.gate.port:8084}
      address: ${services.gate.host:localhost}

    redis:
      connection: ${REDIS_HOST:redis://localhost:6379}
      configuration:
        secure: true

    spectator:
      applicationName: ${spring.application.name}
      webEndpoint:
        enabled: ${services.spectator.webEndpoint.enabled:false}
        prototypeFilter:
          path: ${services.spectator.webEndpoint.prototypeFilter.path:}

      stackdriver:
        enabled: ${services.stackdriver.enabled}
        projectName: ${services.stackdriver.projectName}
        credentialsPath: ${services.stackdriver.credentialsPath}

    stackdriver:
      hints:
        - name: EurekaOkClient_Request
          labels:
          - cause
          - reason
          - status
  igor-nonpolling.yml: |
    jenkins:
      polling:
        enabled: false
  igor.yml: |
    server:
      port: ${services.igor.port:8088}
      address: ${services.igor.host:localhost}

    jenkins:
      enabled: ${services.jenkins.enabled:false}
      masters:
        - name: ${services.jenkins.defaultMaster.name}
          address: ${services.jenkins.defaultMaster.baseUrl}
          username: ${services.jenkins.defaultMaster.username}
          password: ${services.jenkins.defaultMaster.password}
          csrf: ${services.jenkins.defaultMaster.csrf:false}

    travis:
      enabled: ${services.travis.enabled:false}
      masters:
        - name: ${services.travis.defaultMaster.name}
          baseUrl: ${services.travis.defaultMaster.baseUrl}
          address: ${services.travis.defaultMaster.address}
          githubToken: ${services.travis.defaultMaster.githubToken}


    dockerRegistry:
      enabled: ${providers.dockerRegistry.enabled:false}


    redis:
      connection: ${REDIS_HOST:redis://localhost:6379}

    spectator:
      applicationName: ${spring.application.name}
      webEndpoint:
        enabled: ${services.spectator.webEndpoint.enabled:false}
        prototypeFilter:
          path: ${services.spectator.webEndpoint.prototypeFilter.path:}
      stackdriver:
        enabled: ${services.stackdriver.enabled}
        projectName: ${services.stackdriver.projectName}
        credentialsPath: ${services.stackdriver.credentialsPath}

    stackdriver:
      hints:
        - name: controller.invocations
          labels:
          - master
  kayenta-armory.yml: |
    kayenta:
      aws:
        enabled: ${ARMORYSPINNAKER_S3_ENABLED:false}
        accounts:
          - name: aws-s3-storage
            bucket: ${ARMORYSPINNAKER_CONF_STORE_BUCKET}
            rootFolder: kayenta
            supportedTypes:
              - OBJECT_STORE
              - CONFIGURATION_STORE

      s3:
        enabled: ${ARMORYSPINNAKER_S3_ENABLED:false}

      google:
        enabled: ${ARMORYSPINNAKER_GCS_ENABLED:false}
        accounts:
          - name: cloud-armory
            bucket: ${ARMORYSPINNAKER_CONF_STORE_BUCKET}
            rootFolder: kayenta-prod
            supportedTypes:
              - METRICS_STORE
              - OBJECT_STORE
              - CONFIGURATION_STORE

      gcs:
        enabled: ${ARMORYSPINNAKER_GCS_ENABLED:false}
  kayenta.yml: |2

    server:
      port: 8090

    kayenta:
      atlas:
        enabled: false

      google:
        enabled: false

      aws:
        enabled: false

      datadog:
        enabled: false

      prometheus:
        enabled: false

      gcs:
        enabled: false

      s3:
        enabled: false

      stackdriver:
        enabled: false

      memory:
        enabled: false

      configbin:
        enabled: false

    keiko:
      queue:
        redis:
          queueName: kayenta.keiko.queue
          deadLetterQueueName: kayenta.keiko.queue.deadLetters

    redis:
      connection: ${REDIS_HOST:redis://localhost:6379}

    spectator:
      applicationName: ${spring.application.name}
      webEndpoint:
        enabled: true

    swagger:
      enabled: true
      title: Kayenta API
      description:
      contact:
      patterns:
        - /admin.*
        - /canary.*
        - /canaryConfig.*
        - /canaryJudgeResult.*
        - /credentials.*
        - /fetch.*
        - /health
        - /judges.*
        - /metadata.*
        - /metricSetList.*
        - /metricSetPairList.*
        - /pipeline.*

    security.basic.enabled: false
    management.security.enabled: false
  nginx.conf: |
    user  nginx;
    worker_processes  1;

    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;

    events {
        worker_connections  1024;
    }

    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
        access_log  /var/log/nginx/access.log  main;

        sendfile        on;
        keepalive_timeout  65;
        include /etc/nginx/conf.d/*.conf;
    }

    stream {
        upstream gate_api {
            server armory-gate:8085;
        }

        server {
            listen 8085;
            proxy_pass gate_api;
        }
    }
  nginx.http.conf: |
    gzip on;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon;

    server {
           listen 80;
           listen [::]:80;

           location / {
                proxy_pass http://armory-deck/;
           }

           location /api/ {
                proxy_pass http://armory-gate:8084/;
           }

           location /slack/ {
               proxy_pass http://armory-platform:10000/;
           }

           rewrite ^/login(.*)$ /api/login$1 last;
           rewrite ^/auth(.*)$ /api/auth$1 last;
    }
  nginx.https.conf: |
    gzip on;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon;

    server {
        listen 80;
        listen [::]:80;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        ssl on;
        ssl_certificate /opt/nginx/certs/ssl.crt;
        ssl_certificate_key /opt/nginx/certs/ssl.key;

        location / {
            proxy_pass http://armory-deck/;
        }

        location /api/ {
            proxy_pass http://armory-gate:8084/;
            proxy_set_header Host            $host;
            proxy_set_header X-Real-IP       $proxy_protocol_addr;
            proxy_set_header X-Forwarded-For $proxy_protocol_addr;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        location /slack/ {
            proxy_pass http://armory-platform:10000/;
        }
        rewrite ^/login(.*)$ /api/login$1 last;
        rewrite ^/auth(.*)$ /api/auth$1 last;
    }
  orca-armory.yml: |
    mine:
      baseUrl: http://${services.barometer.host}:${services.barometer.port}

    pipelineTemplate:
      enabled: ${features.pipelineTemplates.enabled:false}
      jinja:
        enabled: true

    kayenta:
      enabled: ${services.kayenta.enabled:false}
      baseUrl: ${services.kayenta.baseUrl}

    jira:
      enabled: ${features.jira.enabled:false}
      basicAuth:  "Basic ${features.jira.basicAuthToken}"
      url: ${features.jira.createIssueUrl}

    webhook:
      preconfigured:
        - label: Enforce Pipeline Policy
          description: Checks pipeline configuration against policy requirements
          type: enforcePipelinePolicy
          enabled: ${features.certifiedPipelines.enabled:false}
          url: "http://lighthouse:5000/v1/pipelines/${execution.application}/${execution.pipelineConfigId}?check_policy=yes"
          headers:
            Accept:
              - application/json
          method: GET
          waitForCompletion: true
          statusUrlResolution: getMethod
          statusJsonPath: $.status
          successStatuses: pass
          canceledStatuses:
          terminalStatuses: TERMINAL

        - label: "Jira: Create Issue"
          description:  Enter a Jira ticket when this pipeline runs
          type: createJiraIssue
          enabled: ${jira.enabled}
          url:  ${jira.url}
          customHeaders:
            "Content-Type": application/json
            Authorization: ${jira.basicAuth}
          method: POST
          parameters:
            - name: summary
              label: Issue Summary
              description: A short summary of your issue.
            - name: description
              label: Issue Description
              description: A longer description of your issue.
            - name: projectKey
              label: Project key
              description: The key of your JIRA project.
            - name: type
              label: Issue Type
              description: The type of your issue, e.g. "Task", "Story", etc.
          payload: |
            {
              "fields" : {
                "description": "${parameterValues['description']}",
                "issuetype": {
                   "name": "${parameterValues['type']}"
                },
                "project": {
                   "key": "${parameterValues['projectKey']}"
                },
                "summary":  "${parameterValues['summary']}"
              }
            }
          waitForCompletion: false

        - label: "Jira: Update Issue"
          description:  Update a previously created Jira Issue
          type: updateJiraIssue
          enabled: ${jira.enabled}
          url: "${execution.stages.?[type == 'createJiraIssue'][0]['context']['buildInfo']['self']}"
          customHeaders:
            "Content-Type": application/json
            Authorization: ${jira.basicAuth}
          method: PUT
          parameters:
            - name: summary
              label: Issue Summary
              description: A short summary of your issue.
            - name: description
              label: Issue Description
              description: A longer description of your issue.
          payload: |
            {
              "fields" : {
                "description": "${parameterValues['description']}",
                "summary": "${parameterValues['summary']}"
              }
            }
          waitForCompletion: false

        - label: "Jira: Transition Issue"
          description:  Change state of existing Jira Issue
          type: transitionJiraIssue
          enabled: ${jira.enabled}
          url: "${execution.stages.?[type == 'createJiraIssue'][0]['context']['buildInfo']['self']}/transitions"
          customHeaders:
            "Content-Type": application/json
            Authorization: ${jira.basicAuth}
          method: POST
          parameters:
            - name: newStateID
              label: New State ID
              description: The ID of the state you want to transition the issue to.
          payload: |
            {
              "transition" : {
                "id" : "${parameterValues['newStateID']}"
              }
            }
          waitForCompletion: false
        - label: "Jira: Add Comment"
          description:  Add a comment to an existing Jira Issue
          type: commentJiraIssue
          enabled: ${jira.enabled}
          url: "${execution.stages.?[type == 'createJiraIssue'][0]['context']['buildInfo']['self']}/comment"
          customHeaders:
            "Content-Type": application/json
            Authorization: ${jira.basicAuth}
          method: POST
          parameters:
            - name: body
              label: Comment body
              description: The text body of the component.
          payload: |
            {
              "body" : "${parameterValues['body']}"
            }
          waitForCompletion: false

  orca.yml: |
    server:
        port: ${services.orca.port:8083}
        address: ${services.orca.host:localhost}
    oort:
        baseUrl: ${services.oort.baseUrl:localhost:7002}
    front50:
        baseUrl: ${services.front50.baseUrl:localhost:8080}
    mort:
        baseUrl: ${services.mort.baseUrl:localhost:7002}
    kato:
        baseUrl: ${services.kato.baseUrl:localhost:7002}
    bakery:
        baseUrl: ${services.bakery.baseUrl:localhost:8087}
        extractBuildDetails: ${services.bakery.extractBuildDetails:true}
        allowMissingPackageInstallation: ${services.bakery.allowMissingPackageInstallation:true}
    echo:
        enabled: ${services.echo.enabled:false}
        baseUrl: ${services.echo.baseUrl:8089}
    igor:
        baseUrl: ${services.igor.baseUrl:8088}
    flex:
      baseUrl: http://not-a-host
    default:
      bake:
        account: ${providers.aws.primaryCredentials.name}
      securityGroups:
      vpc:
        securityGroups:
    redis:
      connection: ${REDIS_HOST:redis://localhost:6379}
    tasks:
      executionWindow:
        timezone: ${services.orca.timezone}
    spectator:
      applicationName: ${spring.application.name}
      webEndpoint:
        enabled: ${services.spectator.webEndpoint.enabled:false}
        prototypeFilter:
          path: ${services.spectator.webEndpoint.prototypeFilter.path:}        
      stackdriver:
        enabled: ${services.stackdriver.enabled}
        projectName: ${services.stackdriver.projectName}
        credentialsPath: ${services.stackdriver.credentialsPath}
    stackdriver:
      hints:
        - name: controller.invocations
          labels:
          - application
  rosco-armory.yml: |
    redis:
      timeout: 50000

    rosco:
      jobs:
        local:
          timeoutMinutes: 60
  rosco.yml: |
    server:
      port: ${services.rosco.port:8087}
      address: ${services.rosco.host:localhost}

    redis:
      connection: ${REDIS_HOST:redis://localhost:6379}

    aws:
      enabled: ${providers.aws.enabled:false}

    docker:
      enabled: ${services.docker.enabled:false}
      bakeryDefaults:
        targetRepository: ${services.docker.targetRepository}

    google:
      enabled: ${providers.google.enabled:false}
      accounts:
        - name: ${providers.google.primaryCredentials.name}
          project: ${providers.google.primaryCredentials.project}
          jsonPath: ${providers.google.primaryCredentials.jsonPath}
      gce:
        bakeryDefaults:
          zone: ${providers.google.defaultZone}

    rosco:
      configDir: ${services.rosco.configDir}
      jobs:
        local:
          timeoutMinutes: 30

    spectator:
      applicationName: ${spring.application.name}
      webEndpoint:
        enabled: ${services.spectator.webEndpoint.enabled:false}
        prototypeFilter:
          path: ${services.spectator.webEndpoint.prototypeFilter.path:}
      stackdriver:
        enabled: ${services.stackdriver.enabled}
        projectName: ${services.stackdriver.projectName}
        credentialsPath: ${services.stackdriver.credentialsPath}

    stackdriver:
      hints:
        - name: bakes
          labels:
          - success
  spinnaker-armory.yml: |
    armory:
      architecture: 'k8s'

    features:
      artifacts:
        enabled: true
      pipelineTemplates:
        enabled: ${PIPELINE_TEMPLATES_ENABLED:false}
      infrastructureStages:
        enabled: ${INFRA_ENABLED:false}
      certifiedPipelines:
        enabled: ${CERTIFIED_PIPELINES_ENABLED:false}
      configuratorEnabled:
        enabled: true
      configuratorWizard:
        enabled: true
      configuratorCerts:
        enabled: true
      loadtestStage:
        enabled: ${LOADTEST_ENABLED:false}
      jira:
        enabled: ${JIRA_ENABLED:false}
        basicAuthToken: ${JIRA_BASIC_AUTH}
        url: ${JIRA_URL}
        login: ${JIRA_LOGIN}
        password: ${JIRA_PASSWORD}

      slaEnabled:
        enabled: ${SLA_ENABLED:false}
      chaosMonkey:
        enabled: ${CHAOS_ENABLED:false}

      armoryPlatform:
        enabled: ${PLATFORM_ENABLED:false}
        uiEnabled: ${PLATFORM_UI_ENABLED:false}

    services:
      default:
        host: ${DEFAULT_DNS_NAME:localhost}

      clouddriver:
        host: ${DEFAULT_DNS_NAME:armory-clouddriver}
        entityTags:
          enabled: false

      configurator:
        baseUrl: http://${CONFIGURATOR_HOST:armory-configurator}:8069

      echo:
        host: ${DEFAULT_DNS_NAME:armory-echo}

      deck:
        gateUrl: ${API_HOST:service.default.host}
        baseUrl: ${DECK_HOST:armory-deck}

      dinghy:
        enabled: ${DINGHY_ENABLED:false}
        host: ${DEFAULT_DNS_NAME:armory-dinghy}
        baseUrl: ${services.default.protocol}://${services.dinghy.host}:${services.dinghy.port}
        port: 8081

      front50:
        host: ${DEFAULT_DNS_NAME:armory-front50}
        cassandra:
          enabled: false
        redis:
          enabled: true
        gcs:
          enabled: ${ARMORYSPINNAKER_GCS_ENABLED:false}
        s3:
          enabled: ${ARMORYSPINNAKER_S3_ENABLED:false}
        storage_bucket: ${ARMORYSPINNAKER_CONF_STORE_BUCKET}
        rootFolder: ${ARMORYSPINNAKER_CONF_STORE_PREFIX:front50}

      gate:
        host: ${DEFAULT_DNS_NAME:armory-gate}

      igor:
        host: ${DEFAULT_DNS_NAME:armory-igor}


      kayenta:
        enabled: true
        host: ${DEFAULT_DNS_NAME:armory-kayenta}
        canaryConfigStore: true
        port: 8090
        baseUrl: ${services.default.protocol}://${services.kayenta.host}:${services.kayenta.port}
        metricsStore: ${METRICS_STORE:stackdriver}
        metricsAccountName: ${METRICS_ACCOUNT_NAME}
        storageAccountName: ${STORAGE_ACCOUNT_NAME}
        atlasWebComponentsUrl: ${ATLAS_COMPONENTS_URL:}

      lighthouse:
        host: ${DEFAULT_DNS_NAME:armory-lighthouse}
        port: 5000
        baseUrl: ${services.default.protocol}://${services.lighthouse.host}:${services.lighthouse.port}

      orca:
        host: ${DEFAULT_DNS_NAME:armory-orca}

      platform:
        enabled: ${PLATFORM_ENABLED:false}
        host: ${DEFAULT_DNS_NAME:armory-platform}
        baseUrl: ${services.default.protocol}://${services.platform.host}:${services.platform.port}
        port: 5001

      rosco:
        host: ${DEFAULT_DNS_NAME:armory-rosco}
        enabled: true
        configDir: /opt/spinnaker/config/packer

      bakery:
        allowMissingPackageInstallation: true

      barometer:
        enabled: ${BAROMETER_ENABLED:false}
        host: ${DEFAULT_DNS_NAME:armory-barometer}
        baseUrl: ${services.default.protocol}://${services.barometer.host}:${services.barometer.port}
        port: 9092
        newRelicEnabled: ${NEW_RELIC_ENABLED:false}

      redis:
        host: redis
        port: 6379
        connection: ${REDIS_HOST:redis://localhost:6379}

      fiat:
        enabled: ${FIAT_ENABLED:false}
        host: ${DEFAULT_DNS_NAME:armory-fiat}
        port: 7003
        baseUrl: ${services.default.protocol}://${services.fiat.host}:${services.fiat.port}

    providers:
      aws:
        enabled: ${SPINNAKER_AWS_ENABLED:true}
        defaultRegion: ${SPINNAKER_AWS_DEFAULT_REGION:us-west-2}
        defaultIAMRole: ${SPINNAKER_AWS_DEFAULT_IAM_ROLE:SpinnakerInstanceProfile}
        defaultAssumeRole: ${SPINNAKER_AWS_DEFAULT_ASSUME_ROLE:SpinnakerManagedProfile}
        primaryCredentials:
          name: ${SPINNAKER_AWS_DEFAULT_ACCOUNT:default-aws-account}

      kubernetes:
        proxy: localhost:8001
        apiPrefix: api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/#
  spinnaker.yml: |2
    global:
      spinnaker:
        timezone: 'America/Los_Angeles'
        architecture: ${PLATFORM_ARCHITECTURE}

    services:
      default:
        host: localhost
        protocol: http
      clouddriver:
        host: ${services.default.host}
        port: 7002
        baseUrl: ${services.default.protocol}://${services.clouddriver.host}:${services.clouddriver.port}
        aws:
          udf:
            enabled: true

      echo:
        enabled: true
        host: ${services.default.host}
        port: 8089
        baseUrl: ${services.default.protocol}://${services.echo.host}:${services.echo.port}
        cassandra:
          enabled: false
        inMemory:
          enabled: true

        cron:
          enabled: true
          timezone: ${global.spinnaker.timezone}

        notifications:
          mail:
            enabled: false
            host: # the smtp host
            fromAddress: # the address for which emails are sent from
          hipchat:
            enabled: false
            url: # the hipchat server to connect to
            token: # the hipchat auth token
            botName: # the username of the bot
          sms:
            enabled: false
            account: # twilio account id
            token: # twilio auth token
            from: # phone number by which sms messages are sent
          slack:
            enabled: false
            token: # the API token for the bot
            botName: # the username of the bot

      deck:
        host: ${services.default.host}
        port: 9000
        baseUrl: ${services.default.protocol}://${services.deck.host}:${services.deck.port}
        gateUrl: ${API_HOST:services.gate.baseUrl}
        bakeryUrl: ${services.bakery.baseUrl}
        timezone: ${global.spinnaker.timezone}
        auth:
          enabled: ${AUTH_ENABLED:false}


      fiat:
        enabled: false
        host: ${services.default.host}
        port: 7003
        baseUrl: ${services.default.protocol}://${services.fiat.host}:${services.fiat.port}

      front50:
        host: ${services.default.host}
        port: 8080
        baseUrl: ${services.default.protocol}://${services.front50.host}:${services.front50.port}
        storage_bucket: ${SPINNAKER_DEFAULT_STORAGE_BUCKET:}
        bucket_location:
        bucket_root: front50
        cassandra:
          enabled: false
        redis:
          enabled: false
        gcs:
          enabled: false
        s3:
          enabled: false

      gate:
        host: ${services.default.host}
        port: 8084
        baseUrl: ${services.default.protocol}://${services.gate.host}:${services.gate.port}

      igor:
        enabled: false
        host: ${services.default.host}
        port: 8088
        baseUrl: ${services.default.protocol}://${services.igor.host}:${services.igor.port}

      kato:
        host: ${services.clouddriver.host}
        port: ${services.clouddriver.port}
        baseUrl: ${services.clouddriver.baseUrl}

      mort:
        host: ${services.clouddriver.host}
        port: ${services.clouddriver.port}
        baseUrl: ${services.clouddriver.baseUrl}

      orca:
        host: ${services.default.host}
        port: 8083
        baseUrl: ${services.default.protocol}://${services.orca.host}:${services.orca.port}
        timezone: ${global.spinnaker.timezone}
        enabled: true

      oort:
        host: ${services.clouddriver.host}
        port: ${services.clouddriver.port}
        baseUrl: ${services.clouddriver.baseUrl}

      rosco:
        host: ${services.default.host}
        port: 8087
        baseUrl: ${services.default.protocol}://${services.rosco.host}:${services.rosco.port}
        configDir: /opt/rosco/config/packer

      bakery:
        host: ${services.rosco.host}
        port: ${services.rosco.port}
        baseUrl: ${services.rosco.baseUrl}
        extractBuildDetails: true
        allowMissingPackageInstallation: false

      docker:
        targetRepository: # Optional, but expected in spinnaker-local.yml if specified.

      jenkins:
        enabled: ${services.igor.enabled:false}
        defaultMaster:
          name: Jenkins
          baseUrl:   # Expected in spinnaker-local.yml
          username:  # Expected in spinnaker-local.yml
          password:  # Expected in spinnaker-local.yml

      redis:
        host: redis
        port: 6379
        connection: ${REDIS_HOST:redis://localhost:6379}

      cassandra:
        host: ${services.default.host}
        port: 9042
        embedded: false
        cluster: CASS_SPINNAKER

      travis:
        enabled: false
        defaultMaster:
          name: ci # The display name for this server. Gets prefixed with "travis-"
          baseUrl: https://travis-ci.com
          address: https://api.travis-ci.org
          githubToken: # GitHub scopes currently required by Travis is required.

      spectator:
        webEndpoint:
          enabled: false

      stackdriver:
        enabled: ${SPINNAKER_STACKDRIVER_ENABLED:false}
        projectName: ${SPINNAKER_STACKDRIVER_PROJECT_NAME:${providers.google.primaryCredentials.project}}
        credentialsPath: ${SPINNAKER_STACKDRIVER_CREDENTIALS_PATH:${providers.google.primaryCredentials.jsonPath}}


    providers:
      aws:
        enabled: ${SPINNAKER_AWS_ENABLED:false}
        simpleDBEnabled: false
        defaultRegion: ${SPINNAKER_AWS_DEFAULT_REGION:us-west-2}
        defaultIAMRole: BaseIAMRole
        defaultSimpleDBDomain: CLOUD_APPLICATIONS
        primaryCredentials:
          name: default
        defaultKeyPairTemplate: "{{name}}-keypair"


      google:
        enabled: ${SPINNAKER_GOOGLE_ENABLED:false}
        defaultRegion: ${SPINNAKER_GOOGLE_DEFAULT_REGION:us-central1}
        defaultZone: ${SPINNAKER_GOOGLE_DEFAULT_ZONE:us-central1-f}


        primaryCredentials:
          name: my-account-name
          project: ${SPINNAKER_GOOGLE_PROJECT_ID:}
          jsonPath: ${SPINNAKER_GOOGLE_PROJECT_CREDENTIALS_PATH:}
          consul:
            enabled: ${SPINNAKER_GOOGLE_CONSUL_ENABLED:false}


      cf:
        enabled: false
        defaultOrg: spinnaker-cf-org
        defaultSpace: spinnaker-cf-space
        primaryCredentials:
          name: my-cf-account
          api: my-cf-api-uri
          console: my-cf-console-base-url

      azure:
        enabled: ${SPINNAKER_AZURE_ENABLED:false}
        defaultRegion: ${SPINNAKER_AZURE_DEFAULT_REGION:westus}
        primaryCredentials:
          name: my-azure-account

          clientId:
          appKey:
          tenantId:
          subscriptionId:

      titan:
        enabled: false
        defaultRegion: us-east-1
        primaryCredentials:
          name: my-titan-account

      kubernetes:

        enabled: ${SPINNAKER_KUBERNETES_ENABLED:false}
        primaryCredentials:
          name: my-kubernetes-account
          namespace: default
          dockerRegistryAccount: ${providers.dockerRegistry.primaryCredentials.name}

      dockerRegistry:
        enabled: ${SPINNAKER_KUBERNETES_ENABLED:false}

        primaryCredentials:
          name: my-docker-registry-account
          address: ${SPINNAKER_DOCKER_REGISTRY:https://index.docker.io/ }
          repository: ${SPINNAKER_DOCKER_REPOSITORY:}
          username: ${SPINNAKER_DOCKER_USERNAME:}
          passwordFile: ${SPINNAKER_DOCKER_PASSWORD_FILE:}

      openstack:
        enabled: false
        defaultRegion: ${SPINNAKER_OPENSTACK_DEFAULT_REGION:RegionOne}
        primaryCredentials:
          name: my-openstack-account
          authUrl: ${OS_AUTH_URL}
          username: ${OS_USERNAME}
          password: ${OS_PASSWORD}
          projectName: ${OS_PROJECT_NAME}
          domainName: ${OS_USER_DOMAIN_NAME:Default}
          regions: ${OS_REGION_NAME:RegionOne}
          insecure: false
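
This ConfigMap backs the default-config volume mounted by every component above; apply it like the other manifests (path assumed under /data/k8s-yaml/armory):

[root@vms200 armory]# kubectl apply -f default-config.yaml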