K8s Integration in Practice: Automated Deployment with Spinnaker

1 Spinnaker overview and selection

1.1 Overview

1.1.1 Main features

Spinnaker is an open-source multi-cloud continuous delivery platform that ships software changes quickly, reliably, and safely. Its features fall into two groups: cluster management and deployment management.

1.1.2 Cluster management

Cluster management is about managing cloud resources. The "cloud" Spinnaker refers to is IaaS-level infrastructure in the AWS sense, such as OpenStack, Google Cloud, and Azure. Support for containers and Kubernetes was added later, but they are still managed in the same infrastructure-oriented style.

1.1.3 Deployment management

Managing the deployment workflow is Spinnaker's core feature. In this setup it uses minio as its persistence layer, picks up the images built by the Jenkins pipeline, and deploys them to the Kubernetes cluster so the services actually run.

1.1.4 Logical architecture

Spinnaker is itself built as a set of microservices. The overall logical architecture is shown below:
(Figure 1: Spinnaker logical architecture diagram)

  • Deck is the browser-based UI.
  • Gate is the API gateway.
    The Spinnaker UI and all API callers communicate with Spinnaker through Gate.
  • Clouddriver manages the cloud platforms and indexes/caches all deployed resources.
  • Front50 handles data persistence, storing the metadata of applications, pipelines, projects, and notifications.
  • Igor triggers pipelines from continuous-integration jobs in systems such as Jenkins and Travis CI, and lets Jenkins/Travis stages be used inside pipelines.
  • Orca is the orchestration engine. It handles all ad-hoc operations and pipelines.
  • Rosco is the bakery; it produces machine (VM) images.
  • Kayenta provides automated canary analysis for Spinnaker.
  • Fiat is Spinnaker's authorization service.
  • Echo is the eventing and notification service.
    It supports sending notifications (e.g. Slack, email, SMS) and handles incoming webhooks from services such as GitHub.

1.2 Deployment choices

Spinnaker official site
Spinnaker has many components and is fairly complex to deploy, so the project provides the halyard scaffolding tool; unfortunately, some of the image sources it relies on are blocked in mainland China.
Armory distribution
Based on Spinnaker, a number of companies have built third-party distributions that simplify the deployment work, for example the Armory distribution we use here.
Armory also has its own scaffolding tool. It is simpler than halyard, but parts of it are blocked as well.
We therefore deploy the Armory distribution of Spinnaker by hand.

2 Deploying Spinnaker, part 1

2.1 Deploying minio for Spinnaker

2.1.1 Prepare the minio image

```
docker pull minio/minio:latest
# 533fee13ab07 is the image ID of minio/minio:latest at the time of writing
docker tag 533fee13ab07 harbor.zq.com/armory/minio:latest
docker push harbor.zq.com/armory/minio:latest
```

Prepare the directories:

```
mkdir -p /data/nfs-volume/minio
mkdir -p /data/k8s-yaml/armory/minio
cd /data/k8s-yaml/armory/minio
```

2.1.2 Prepare the Deployment manifest

```
cat >dp.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: minio
  name: minio
  namespace: armory
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      name: minio
  template:
    metadata:
      labels:
        app: minio
        name: minio
    spec:
      containers:
      - name: minio
        image: harbor.zq.com/armory/minio:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9000
          protocol: TCP
        args:
        - server
        - /data
        env:
        - name: MINIO_ACCESS_KEY
          value: admin
        - name: MINIO_SECRET_KEY
          value: admin123
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /minio/health/ready
            port: 9000
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        volumeMounts:
        - mountPath: /data
          name: data
      imagePullSecrets:
      - name: harbor
      volumes:
      - nfs:
          server: ops-200.host.com
          path: /data/nfs-volume/minio
        name: data
EOF
```
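The data volume above is served over NFS from ops-200.host.com, and the tutorial assumes the export already exists. If it does not, an export roughly like the following is needed on the NFS server; the client network range and mount options here are assumptions, adjust them to your environment:

```shell
# On ops-200.host.com: export the directory the minio pod mounts.
# The 192.168.0.0/16 range and the options are assumptions.
echo '/data/nfs-volume 192.168.0.0/16(rw,no_root_squash)' >>/etc/exports
exportfs -rv    # re-export and list the active exports
```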

2.1.3 Prepare the Service manifest

```
cat >svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: minio
  namespace: armory
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9000
  selector:
    app: minio
EOF
```

2.1.4 Prepare the Ingress manifest

```
cat >ingress.yaml <<'EOF'
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: minio
  namespace: armory
spec:
  rules:
  - host: minio.zq.com
    http:
      paths:
      - path: /
        backend:
          serviceName: minio
          servicePort: 80
EOF
```
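For this Ingress to work, minio.zq.com must resolve to the node where the ingress controller listens. In a lab without a DNS zone, a hosts entry is enough for testing; the IP below is an assumption, substitute your own ingress node's address:

```
# /etc/hosts (test-only entry; the IP is an assumption)
192.168.1.10  minio.zq.com
```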

2.1.5 Apply the manifests

On any node, create the namespace and the registry secret:

```
kubectl create namespace armory
kubectl create secret docker-registry harbor \
    --docker-server=harbor.zq.com \
    --docker-username=admin \
    --docker-password=Harbor12345 \
    -n armory
```

Apply the manifests:

```
kubectl apply -f http://k8s-yaml.zq.com/armory/minio/dp.yaml
kubectl apply -f http://k8s-yaml.zq.com/armory/minio/svc.yaml
kubectl apply -f http://k8s-yaml.zq.com/armory/minio/ingress.yaml
```
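The docker-registry secret stores the registry login as a `.dockerconfigjson`, whose `auth` field is simply the base64 of `user:password`. A quick way to preview the value kubectl will store for harbor.zq.com:

```shell
# base64("admin:Harbor12345") is what ends up in the secret's auth field
printf '%s' 'admin:Harbor12345' | base64
# YWRtaW46SGFyYm9yMTIzNDU=
```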

2.1.6 Verify

Open http://minio.zq.com and log in with admin/admin123.
If the page loads and the login succeeds, minio is deployed correctly.

2.2 Deploying redis for Spinnaker

2.2.1 Prepare the image and directory

```
docker pull redis:4.0.14
# 6e221e67453d is the image ID of redis:4.0.14 at the time of writing
docker tag 6e221e67453d harbor.zq.com/armory/redis:v4.0.14
docker push harbor.zq.com/armory/redis:v4.0.14
```

Prepare the directory:

```
mkdir -p /data/k8s-yaml/armory/redis
cd /data/k8s-yaml/armory/redis
```

2.2.2 Prepare the Deployment manifest

```
cat >dp.yaml <<'EOF'
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    name: redis
  name: redis
  namespace: armory
spec:
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      name: redis
  template:
    metadata:
      labels:
        app: redis
        name: redis
    spec:
      containers:
      - name: redis
        image: harbor.zq.com/armory/redis:v4.0.14
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 6379
          protocol: TCP
      imagePullSecrets:
      - name: harbor
EOF
```

2.2.3 Prepare the Service manifest

```
cat >svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: armory
spec:
  ports:
  - port: 6379
    protocol: TCP
    targetPort: 6379
  selector:
    app: redis
EOF
```

2.2.4 Apply the manifests

```
kubectl apply -f http://k8s-yaml.zq.com/armory/redis/dp.yaml
kubectl apply -f http://k8s-yaml.zq.com/armory/redis/svc.yaml
```

3 Deploying Spinnaker: CloudDriver

CloudDriver is the hardest part of the whole Spinnaker deployment, so it gets a chapter of its own.

3.1 Preparation

3.1.1 Prepare the image and directory

```
docker pull armory/spinnaker-clouddriver-slim:release-1.11.x-bee52673a
# f1d52d01e28d is the pulled image's ID at the time of writing
docker tag f1d52d01e28d harbor.zq.com/armory/clouddriver:v1.11.x
docker push harbor.zq.com/armory/clouddriver:v1.11.x
```

Prepare the directory:

```
mkdir -p /data/k8s-yaml/armory/clouddriver
cd /data/k8s-yaml/armory/clouddriver
```

3.1.2 Prepare the minio secret

Prepare the credentials file:

```
cat >credentials <<'EOF'
[default]
aws_access_key_id=admin
aws_secret_access_key=admin123
EOF
```

Create the secret on a node:

```
wget http://k8s-yaml.zq.com/armory/clouddriver/credentials
kubectl create secret generic credentials \
    --from-file=./credentials \
    -n armory

# Alternatively, skip the file and create the same "credentials" key
# (which will be mounted as a file) straight from the command line:
kubectl create secret generic credentials \
    --from-literal=credentials="$(printf '[default]\naws_access_key_id=admin\naws_secret_access_key=admin123\n')" \
    -n armory
```
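CloudDriver talks to minio through the S3 API, so the file uses the AWS shared-credentials format, and the two keys must match the MINIO_ACCESS_KEY/MINIO_SECRET_KEY set in the minio Deployment. A self-contained sanity check of the format (temp path is illustrative):

```shell
# Recreate the credentials file in a temp location and confirm both
# S3-style keys are present, as an S3 client (and clouddriver) expects.
tmp=$(mktemp)
cat >"$tmp" <<'EOF'
[default]
aws_access_key_id=admin
aws_secret_access_key=admin123
EOF
grep -c '^aws_' "$tmp"
# 2
```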

3.1.3 Sign the certificate and key

```
cd /opt/certs
cp client-csr.json admin-csr.json
# set the CN in admin-csr.json to cluster-admin
# (fill in the search pattern below with the CN from your client-csr.json)
sed -i 's##cluster-admin#g' admin-csr.json
cfssl gencert \
    -ca=ca.pem \
    -ca-key=ca-key.pem \
    -config=ca-config.json \
    -profile=client \
    admin-csr.json | cfssl-json -bare admin
ls admin*
```
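To double-check the CN before the CA signs it, or if cfssl is not at hand, the same client CSR can be sketched with openssl. This is an illustration rather than the tutorial's toolchain; the file names simply mirror the cfssl step:

```shell
# Generate a throwaway key + CSR with CN=cluster-admin, then inspect the subject.
openssl req -new -newkey rsa:2048 -nodes \
    -keyout admin-key.pem -out admin.csr \
    -subj '/CN=cluster-admin' 2>/dev/null
openssl req -in admin.csr -noout -subject
```

The printed subject should contain CN=cluster-admin; that CN is exactly the user name Kubernetes sees when this certificate is used.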

3.1.4 Distribute the certificates

On any node:

```
cd /opt/certs
scp hdss7-200:/opt/certs/ca.pem .
scp hdss7-200:/opt/certs/admin.pem .
scp hdss7-200:/opt/certs/admin-key.pem .
```

3.1.5 Create the user

Create the user in four steps:

```
kubectl config set-cluster myk8s \
    --certificate-authority=./ca.pem \
    --embed-certs=true \
    --server=https://192.168.1.10:7443 \
    --kubeconfig=config

kubectl config set-credentials cluster-admin \
    --client-certificate=./admin.pem \
    --client-key=./admin-key.pem \
    --embed-certs=true \
    --kubeconfig=config

kubectl config set-context myk8s-context \
    --cluster=myk8s \
    --user=cluster-admin \
    --kubeconfig=config

kubectl config use-context myk8s-context \
    --kubeconfig=config
```

Bind the cluster role:

```
kubectl create clusterrolebinding myk8s-admin \
    --clusterrole=cluster-admin \
    --user=cluster-admin
```

3.1.6 Create the cm resource from the config

```
cp config default-kubeconfig
kubectl create cm default-kubeconfig --from-file=default-kubeconfig -n armory
```
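The clouddriver configuration later references this kubeconfig at /opt/spinnaker/credentials/custom/default-kubeconfig, so the clouddriver Deployment (covered in the next part) is expected to mount the ConfigMap at that directory. A sketch of the relevant pod-spec fragment, with names assumed:

```
# Hypothetical fragment of the clouddriver pod spec (not from this tutorial):
volumes:
- name: default-kubeconfig
  configMap:
    name: default-kubeconfig
containers:
- name: clouddriver
  volumeMounts:
  - name: default-kubeconfig
    mountPath: /opt/spinnaker/credentials/custom
```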

3.2 Create and apply the manifests

Back on the 7.200 admin host:

```
cd /data/k8s-yaml/armory/clouddriver
```

3.2.1 Create the environment ConfigMap

```
cat >init-env.yaml <<'EOF'
kind: ConfigMap
apiVersion: v1
metadata:
  name: init-env
  namespace: armory
data:
  API_HOST: http://spinnaker.zq.com/api
  ARMORY_ID: c02f0781-92f5-4e80-86db-0ba8fe7b8544
  ARMORYSPINNAKER_CONF_STORE_BUCKET: armory-platform
  ARMORYSPINNAKER_CONF_STORE_PREFIX: front50
  ARMORYSPINNAKER_GCS_ENABLED: "false"
  ARMORYSPINNAKER_S3_ENABLED: "true"
  AUTH_ENABLED: "false"
  AWS_REGION: us-east-1
  BASE_IP: 127.0.0.1
  CLOUDDRIVER_OPTS: -Dspring.profiles.active=armory,configurator,local
  CONFIGURATOR_ENABLED: "false"
  DECK_HOST: http://spinnaker.zq.com
  ECHO_OPTS: -Dspring.profiles.active=armory,configurator,local
  GATE_OPTS: -Dspring.profiles.active=armory,configurator,local
  IGOR_OPTS: -Dspring.profiles.active=armory,configurator,local
  PLATFORM_ARCHITECTURE: k8s
  REDIS_HOST: redis://redis:6379
  SERVER_ADDRESS: 0.0.0.0
  SPINNAKER_AWS_DEFAULT_REGION: us-east-1
  SPINNAKER_AWS_ENABLED: "false"
  SPINNAKER_CONFIG_DIR: /home/spinnaker/config
  SPINNAKER_GOOGLE_PROJECT_CREDENTIALS_PATH: ""
  SPINNAKER_HOME: /home/spinnaker
  SPRING_PROFILES_ACTIVE: armory,configurator,local
EOF
```

3.2.2 Create the component ConfigMap

```
cat >custom-config.yaml <<'EOF'
kind: ConfigMap
apiVersion: v1
metadata:
  name: custom-config
  namespace: armory
data:
  clouddriver-local.yml: |
    kubernetes:
      enabled: true
      accounts:
      - name: cluster-admin
        serviceAccount: false
        dockerRegistries:
        - accountName: harbor
          namespaces: []
        namespaces:
        - test
        - prod
        kubeconfigFile: /opt/spinnaker/credentials/custom/default-kubeconfig
      primaryAccount: cluster-admin
    dockerRegistry:
      enabled: true
      accounts:
      - name: harbor
        requiredGroupMembership: []
        providerVersion: V1
        insecureRegistry: true
        address: http://harbor.zq.com
        username: admin
        password: Harbor12345
      primaryAccount: harbor
    artifacts:
      s3:
        enabled: true
        accounts:
        - name: armory-config-s3-account
          apiEndpoint: http://minio
          apiRegion: us-east-1
      gcs:
        enabled: false
        accounts:
        - name: armory-config-gcs-account
  custom-config.json: ""
  echo-configurator.yml: |
    diagnostics:
      enabled: true
  front50-local.yml: |
    spinnaker:
      s3:
        endpoint: http://minio
  igor-local.yml: |
    jenkins:
      enabled: true
      masters:
      - name: jenkins-admin
        address: http://jenkins.zq.com
        username: admin
        password: admin123
      primaryAccount: jenkins-admin
  nginx.conf: |
    gzip on;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon;
    server {
      listen 80;
      location / {
        proxy_pass http://armory-deck/;
      }
      location /api/ {
        proxy_pass http://armory-gate:8084/;
      }
      rewrite ^/login(.*)$ /api/login$1 last;
      rewrite ^/auth(.*)$ /api/auth$1 last;
    }
  spinnaker-local.yml: |
    services:
      igor:
        enabled: true
EOF
```

3.2.3 Create the default ConfigMap

> Note:
> This config file is very long. It is what an armory-tool deployment produces, and it basically never needs to be changed.

```
cat >default-config.yaml <<'EOF'
kind: ConfigMap
apiVersion: v1
metadata:
  name: default-config
  namespace: armory
data:
  barometer.yml: |
    server:
      port: 9092
    spinnaker:
      redis:
        host: ${services.redis.host}
        port: ${services.redis.port}
  clouddriver-armory.yml: |
    aws:
      defaultAssumeRole: role/${SPINNAKER_AWS_DEFAULT_ASSUME_ROLE:SpinnakerManagedProfile}
      accounts:
      - name: default-aws-account
        accountId: ${SPINNAKER_AWS_DEFAULT_ACCOUNT_ID:none}
      client:
        maxErrorRetry: 20
    serviceLimits:
      cloudProviderOverrides:
        aws:
          rateLimit: 15.0
      implementationLimits:
        AmazonAutoScaling:
          defaults:
            rateLimit: 3.0
        AmazonElasticLoadBalancing:
          defaults:
            rateLimit: 5.0
    security.basic.enabled: false
    management.security.enabled: false
  clouddriver-dev.yml: |
    serviceLimits:
      defaults:
        rateLimit: 2
  clouddriver.yml: |
    server:
      port: ${services.clouddriver.port:7002}
      address: ${services.clouddriver.host:localhost}
    redis:
      connection: ${REDIS_HOST:redis://localhost:6379}
    udf:
      enabled: ${services.clouddriver.aws.udf.enabled:true}
      udfRoot: /opt/spinnaker/config/udf
      defaultLegacyUdf: false
    default:
      account:
        env: ${providers.aws.primaryCredentials.name}
    aws:
      enabled: ${providers.aws.enabled:false}
      defaults:
        iamRole: ${providers.aws.defaultIAMRole:BaseIAMRole}
      defaultRegions:
      - name: ${providers.aws.defaultRegion:us-east-1}
      defaultFront50Template: ${services.front50.baseUrl}
      defaultKeyPairTemplate: ${providers.aws.defaultKeyPairTemplate}
    azure:
      enabled: ${providers.azure.enabled:false}
      accounts:
      - name: ${providers.azure.primaryCredentials.name}
        clientId: ${providers.azure.primaryCredentials.clientId}
        appKey: ${providers.azure.primaryCredentials.appKey}
        tenantId: ${providers.azure.primaryCredentials.tenantId}
        subscriptionId: ${providers.azure.primaryCredentials.subscriptionId}
    google:
      enabled: ${providers.google.enabled:false}
      accounts:
      - name: ${providers.google.primaryCredentials.name}
        project: ${providers.google.primaryCredentials.project}
        jsonPath: ${providers.google.primaryCredentials.jsonPath}
        consul:
          enabled: ${providers.google.primaryCredentials.consul.enabled:false}
    cf:
      enabled: ${providers.cf.enabled:false}
      accounts:
      - name: ${providers.cf.primaryCredentials.name}
        api: ${providers.cf.primaryCredentials.api}
        console: ${providers.cf.primaryCredentials.console}
        org: ${providers.cf.defaultOrg}
        space: ${providers.cf.defaultSpace}
        username: ${providers.cf.account.name:}
        password: ${providers.cf.account.password:}
    kubernetes:
      enabled: ${providers.kubernetes.enabled:false}
      accounts:
      - name: ${providers.kubernetes.primaryCredentials.name}
        dockerRegistries:
        - accountName: ${providers.kubernetes.primaryCredentials.dockerRegistryAccount}
    openstack:
      enabled: ${providers.openstack.enabled:false}
      accounts:
      - name: ${providers.openstack.primaryCredentials.name}
        authUrl: ${providers.openstack.primaryCredentials.authUrl}
        username: ${providers.openstack.primaryCredentials.username}
        password: ${providers.openstack.primaryCredentials.password}
        projectName: ${providers.openstack.primaryCredentials.projectName}
        domainName: ${providers.openstack.primaryCredentials.domainName:Default}
        regions: ${providers.openstack.primaryCredentials.regions}
        insecure: ${providers.openstack.primaryCredentials.insecure:false}
        userDataFile: ${providers.openstack.primaryCredentials.userDataFile:}
      lbaas:
        pollTimeout: 60
        pollInterval: 5
    dockerRegistry:
      enabled: ${providers.dockerRegistry.enabled:false}
      accounts:
      - name: ${providers.dockerRegistry.primaryCredentials.name}
        address: ${providers.dockerRegistry.primaryCredentials.address}
        username: ${providers.dockerRegistry.primaryCredentials.username:}
        passwordFile: ${providers.dockerRegistry.primaryCredentials.passwordFile}
    credentials:
      primaryAccountTypes: ${providers.aws.primaryCredentials.name}, ${providers.google.primaryCredentials.name}, ${providers.cf.primaryCredentials.name}, ${providers.azure.primaryCredentials.name}
      challengeDestructiveActionsEnvironments: ${providers.aws.primaryCredentials.name}, ${providers.google.primaryCredentials.name}, ${providers.cf.primaryCredentials.name}, ${providers.azure.primaryCredentials.name}
    spectator:
      applicationName: ${spring.application.name}
      webEndpoint:
        enabled: ${services.spectator.webEndpoint.enabled:false}
        prototypeFilter:
          path: ${services.spectator.webEndpoint.prototypeFilter.path:}
      stackdriver:
        enabled: ${services.stackdriver.enabled}
        projectName: ${services.stackdriver.projectName}
        credentialsPath: ${services.stackdriver.credentialsPath}
    stackdriver:
      hints:
      - name: controller.invocations
        labels:
        - account
        - region
  dinghy.yml: ""
  echo-armory.yml: |
    diagnostics:
      enabled: true
      id: ${ARMORY_ID:unknown}
    armorywebhooks:
      enabled: false
      forwarding:
        baseUrl: http://armory-dinghy:8081
        endpoint: v1/webhooks
  echo-noncron.yml: |
    scheduler:
      enabled: false
  echo.yml: |
    server:
      port: ${services.echo.port:8089}
      address: ${services.echo.host:localhost}
    cassandra:
      enabled: ${services.echo.cassandra.enabled:false}
      embedded: ${services.cassandra.embedded:false}
      host: ${services.cassandra.host:localhost}
    spinnaker:
      baseUrl: ${services.deck.baseUrl}
      cassandra:
        enabled: ${services.echo.cassandra.enabled:false}
      inMemory:
        enabled: ${services.echo.inMemory.enabled:true}
    front50:
      baseUrl: ${services.front50.baseUrl:http://localhost:8080 }
    orca:
      baseUrl: ${services.orca.baseUrl:http://localhost:8083 }
    endpoints.health.sensitive: false
    slack:
      enabled: ${services.echo.notifications.slack.enabled:false}
      token: ${services.echo.notifications.slack.token}
    spring:
      mail:
        host: ${mail.host}
    mail:
      enabled: ${services.echo.notifications.mail.enabled:false}
      host: ${services.echo.notifications.mail.host}
      from: ${services.echo.notifications.mail.fromAddress}
    hipchat:
      enabled: ${services.echo.notifications.hipchat.enabled:false}
      baseUrl: ${services.echo.notifications.hipchat.url}
      token: ${services.echo.notifications.hipchat.token}
    twilio:
      enabled: ${services.echo.notifications.sms.enabled:false}
      baseUrl: ${services.echo.notifications.sms.url:https://api.twilio.com/ }
      account: ${services.echo.notifications.sms.account}
      token: ${services.echo.notifications.sms.token}
      from: ${services.echo.notifications.sms.from}
    scheduler:
      enabled: ${services.echo.cron.enabled:true}
      threadPoolSize: 20
      triggeringEnabled: true
      pipelineConfigsPoller:
        enabled: true
        pollingIntervalMs: 30000
      cron:
        timezone: ${services.echo.cron.timezone}
    spectator:
      applicationName: ${spring.application.name}
      webEndpoint:
        enabled: ${services.spectator.webEndpoint.enabled:false}
        prototypeFilter:
          path: ${services.spectator.webEndpoint.prototypeFilter.path:}
      stackdriver:
        enabled: ${services.stackdriver.enabled}
        projectName: ${services.stackdriver.projectName}
        credentialsPath: ${services.stackdriver.credentialsPath}
    webhooks:
      artifacts:
        enabled: true
  fetch.sh: |+
    CONFIG_LOCATION=${SPINNAKER_HOME:-"/opt/spinnaker"}/config
    CONTAINER=$1
    rm -f /opt/spinnaker/config/*.yml
    mkdir -p ${CONFIG_LOCATION}
    for filename in /opt/spinnaker/config/default/*.yml; do
      cp $filename ${CONFIG_LOCATION}
    done
    if [ -d /opt/spinnaker/config/custom ]; then
      for filename in /opt/spinnaker/config/custom/*; do
        cp $filename ${CONFIG_LOCATION}
      done
    fi
    add_ca_certs() {
      ca_cert_path="$1"
      jks_path="$2"
      alias="$3"
      if [[ "$(whoami)" != "root" ]]; then
        echo "INFO: I do not have proper permissions to add CA roots"
        return
      fi
      if [[ ! -f ${ca_cert_path} ]]; then
        echo "INFO: No CA cert found at ${ca_cert_path}"
        return
      fi
      keytool -importcert \
        -file ${ca_cert_path} \
        -keystore ${jks_path} \
        -alias ${alias} \
        -storepass changeit \
        -noprompt
    }
    if [ `which keytool` ]; then
      echo "INFO: Keytool found adding certs where appropriate"
      add_ca_certs "${CONFIG_LOCATION}/ca.crt" "/etc/ssl/certs/java/cacerts" "custom-ca"
    else
      echo "INFO: Keytool not found, not adding any certs/private keys"
    fi
    saml_pem_path="/opt/spinnaker/config/custom/saml.pem"
    saml_pkcs12_path="/tmp/saml.pkcs12"
    saml_jks_path="${CONFIG_LOCATION}/saml.jks"
    x509_ca_cert_path="/opt/spinnaker/config/custom/x509ca.crt"
    x509_client_cert_path="/opt/spinnaker/config/custom/x509client.crt"
    x509_jks_path="${CONFIG_LOCATION}/x509.jks"
    x509_nginx_cert_path="/opt/nginx/certs/ssl.crt"
    if [ "${CONTAINER}" == "gate" ]; then
      if [ -f ${saml_pem_path} ]; then
        echo "Loading ${saml_pem_path} into ${saml_jks_path}"
        openssl pkcs12 -export -out ${saml_pkcs12_path} -in ${saml_pem_path} -password pass:changeit -name saml
        keytool -genkey -v -keystore ${saml_jks_path} -alias saml \
          -keyalg RSA -keysize 2048 -validity 10000 \
          -storepass changeit -keypass changeit -dname "CN=armory"
        keytool -importkeystore \
          -srckeystore ${saml_pkcs12_path} \
          -srcstoretype PKCS12 \
          -srcstorepass changeit \
          -destkeystore ${saml_jks_path} \
          -deststoretype JKS \
          -storepass changeit \
          -alias saml \
          -destalias saml \
          -noprompt
      else
        echo "No SAML IDP pemfile found at ${saml_pem_path}"
      fi
      if [ -f ${x509_ca_cert_path} ]; then
        echo "Loading ${x509_ca_cert_path} into ${x509_jks_path}"
        add_ca_certs ${x509_ca_cert_path} ${x509_jks_path} "ca"
      else
        echo "No x509 CA cert found at ${x509_ca_cert_path}"
      fi
      if [ -f ${x509_client_cert_path} ]; then
        echo "Loading ${x509_client_cert_path} into ${x509_jks_path}"
        add_ca_certs ${x509_client_cert_path} ${x509_jks_path} "client"
      else
        echo "No x509 Client cert found at ${x509_client_cert_path}"
      fi
      if [ -f ${x509_nginx_cert_path} ]; then
        echo "Creating a self-signed CA (EXPIRES IN 360 DAYS) with java keystore: ${x509_jks_path}"
        echo -e "\n\n\n\n\n\ny\n" | keytool -genkey -keyalg RSA -alias server -keystore keystore.jks -storepass changeit -validity 360 -keysize 2048
        keytool -importkeystore \
          -srckeystore keystore.jks \
          -srcstorepass changeit \
          -destkeystore "${x509_jks_path}" \
          -storepass changeit \
          -srcalias server \
          -destalias server \
          -noprompt
      else
        echo "No x509 nginx cert found at ${x509_nginx_cert_path}"
      fi
    fi
    if [ "${CONTAINER}" == "nginx" ]; then
      nginx_conf_path="/opt/spinnaker/config/default/nginx.conf"
      if [ -f ${nginx_conf_path} ]; then
        cp ${nginx_conf_path} /etc/nginx/nginx.conf
      fi
    fi

  fiat.yml: |-
    server:
      port: ${services.fiat.port:7003}
      address: ${services.fiat.host:localhost}
    redis:
      connection: ${services.redis.connection:redis://localhost:6379}
    spectator:
      applicationName: ${spring.application.name}
      webEndpoint:
        enabled: ${services.spectator.webEndpoint.enabled:false}
        prototypeFilter:
          path: ${services.spectator.webEndpoint.prototypeFilter.path:}
      stackdriver:
        enabled: ${services.stackdriver.enabled}
        projectName: ${services.stackdriver.projectName}
        credentialsPath: ${services.stackdriver.credentialsPath}
    hystrix:
      command:
        default.execution.isolation.thread.timeoutInMilliseconds: 20000
    logging:
      level:
        com.netflix.spinnaker.fiat: DEBUG
  front50-armory.yml: |
    spinnaker:
      redis:
        enabled: true
        host: redis
  front50.yml: |
    server:
      port: ${services.front50.port:8080}
      address: ${services.front50.host:localhost}
    hystrix:
      command:
        default.execution.isolation.thread.timeoutInMilliseconds: 15000
    cassandra:
      enabled: ${services.front50.cassandra.enabled:false}
      embedded: ${services.cassandra.embedded:false}
      host: ${services.cassandra.host:localhost}
    aws:
      simpleDBEnabled: ${providers.aws.simpleDBEnabled:false}
      defaultSimpleDBDomain: ${providers.aws.defaultSimpleDBDomain}
    spinnaker:
      cassandra:
        enabled: ${services.front50.cassandra.enabled:false}
        host: ${services.cassandra.host:localhost}
        port: ${services.cassandra.port:9042}
        cluster: ${services.cassandra.cluster:CASS_SPINNAKER}
        keyspace: front50
        name: global
      redis:
        enabled: ${services.front50.redis.enabled:false}
      gcs:
        enabled: ${services.front50.gcs.enabled:false}
        bucket: ${services.front50.storage_bucket:}
        bucketLocation: ${services.front50.bucket_location:}
        rootFolder: ${services.front50.rootFolder:front50}
        project: ${providers.google.primaryCredentials.project}
        jsonPath: ${providers.google.primaryCredentials.jsonPath}
      s3:
        enabled: ${services.front50.s3.enabled:false}
        bucket: ${services.front50.storage_bucket:}
        rootFolder: ${services.front50.rootFolder:front50}
    spectator:
      applicationName: ${spring.application.name}
      webEndpoint:
        enabled: ${services.spectator.webEndpoint.enabled:false}
        prototypeFilter:
          path: ${services.spectator.webEndpoint.prototypeFilter.path:}
      stackdriver:
        enabled: ${services.stackdriver.enabled}
        projectName: ${services.stackdriver.projectName}
        credentialsPath: ${services.stackdriver.credentialsPath}
    stackdriver:
      hints:
      - name: controller.invocations
        labels:
        - application
        - cause
      - name: aws.request.httpRequestTime
        labels:
        - status
        - exception
        - AWSErrorCode
      - name: aws.request.requestSigningTime
        labels:
        - exception
  gate-armory.yml: |+
    lighthouse:
      baseUrl: http://${DEFAULT_DNS_NAME:lighthouse}:5000

  gate.yml: |
    server:
      port: ${services.gate.port:8084}
      address: ${services.gate.host:localhost}
    redis:
      connection: ${REDIS_HOST:redis://localhost:6379}
      configuration:
        secure: true
    spectator:
      applicationName: ${spring.application.name}
      webEndpoint:
        enabled: ${services.spectator.webEndpoint.enabled:false}
        prototypeFilter:
          path: ${services.spectator.webEndpoint.prototypeFilter.path:}
      stackdriver:
        enabled: ${services.stackdriver.enabled}
        projectName: ${services.stackdriver.projectName}
        credentialsPath: ${services.stackdriver.credentialsPath}
    stackdriver:
      hints:
      - name: EurekaOkClient_Request
        labels:
        - cause
        - reason
        - status
  igor-nonpolling.yml: |
    jenkins:
      polling:
        enabled: false
  igor.yml: |
    server:
      port: ${services.igor.port:8088}
      address: ${services.igor.host:localhost}
    jenkins:
      enabled: ${services.jenkins.enabled:false}
      masters:
      - name: ${services.jenkins.defaultMaster.name}
        address: ${services.jenkins.defaultMaster.baseUrl}
        username: ${services.jenkins.defaultMaster.username}
        password: ${services.jenkins.defaultMaster.password}
        csrf: ${services.jenkins.defaultMaster.csrf:false}
    travis:
      enabled: ${services.travis.enabled:false}
      masters:
      - name: ${services.travis.defaultMaster.name}
        baseUrl: ${services.travis.defaultMaster.baseUrl}
        address: ${services.travis.defaultMaster.address}
        githubToken: ${services.travis.defaultMaster.githubToken}
    dockerRegistry:
      enabled: ${providers.dockerRegistry.enabled:false}
    redis:
      connection: ${REDIS_HOST:redis://localhost:6379}
    spectator:
      applicationName: ${spring.application.name}
      webEndpoint:
        enabled: ${services.spectator.webEndpoint.enabled:false}
        prototypeFilter:
          path: ${services.spectator.webEndpoint.prototypeFilter.path:}
      stackdriver:
        enabled: ${services.stackdriver.enabled}
        projectName: ${services.stackdriver.projectName}
        credentialsPath: ${services.stackdriver.credentialsPath}
    stackdriver:
      hints:
      - name: controller.invocations
        labels:
        - master
  kayenta-armory.yml: |
    kayenta:
      aws:
        enabled: ${ARMORYSPINNAKER_S3_ENABLED:false}
        accounts:
        - name: aws-s3-storage
          bucket: ${ARMORYSPINNAKER_CONF_STORE_BUCKET}
          rootFolder: kayenta
          supportedTypes:
          - OBJECT_STORE
          - CONFIGURATION_STORE
      s3:
        enabled: ${ARMORYSPINNAKER_S3_ENABLED:false}
      google:
        enabled: ${ARMORYSPINNAKER_GCS_ENABLED:false}
        accounts:
        - name: cloud-armory
          bucket: ${ARMORYSPINNAKER_CONF_STORE_BUCKET}
          rootFolder: kayenta-prod
          supportedTypes:
          - METRICS_STORE
          - OBJECT_STORE
          - CONFIGURATION_STORE
      gcs:
        enabled: ${ARMORYSPINNAKER_GCS_ENABLED:false}
  kayenta.yml: |2
    server:
      port: 8090
    kayenta:
      atlas:
        enabled: false
      google:
        enabled: false
      aws:
        enabled: false
      datadog:
        enabled: false
      prometheus:
        enabled: false
      gcs:
        enabled: false
      s3:
        enabled: false
      stackdriver:
        enabled: false
      memory:
        enabled: false
      configbin:
        enabled: false
    keiko:
      queue:
        redis:
          queueName: kayenta.keiko.queue
          deadLetterQueueName: kayenta.keiko.queue.deadLetters
    redis:
      connection: ${REDIS_HOST:redis://localhost:6379}
    spectator:
      applicationName: ${spring.application.name}
      webEndpoint:
        enabled: true
    swagger:
      enabled: true
      title: Kayenta API
      description:
      contact:
      patterns:
      - /admin.*
      - /canary.*
      - /canaryConfig.*
      - /canaryJudgeResult.*
      - /credentials.*
      - /fetch.*
      - /health
      - /judges.*
      - /metadata.*
      - /metricSetList.*
      - /metricSetPairList.*
      - /pipeline.*
    security.basic.enabled: false
    management.security.enabled: false
  nginx.conf: |
    user nginx;
    worker_processes 1;
    error_log /var/log/nginx/error.log warn;
    pid /var/run/nginx.pid;
    events {
      worker_connections 1024;
    }
    http {
      include /etc/nginx/mime.types;
      default_type application/octet-stream;
      log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
      access_log /var/log/nginx/access.log main;
      sendfile on;
      keepalive_timeout 65;
      include /etc/nginx/conf.d/*.conf;
    }
    stream {
      upstream gate_api {
        server armory-gate:8085;
      }
      server {
        listen 8085;
        proxy_pass gate_api;
      }
    }
  nginx.http.conf: |
    gzip on;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon;
    server {
      listen 80;
      listen [::]:80;
      location / {
        proxy_pass http://armory-deck/;
      }
      location /api/ {
        proxy_pass http://armory-gate:8084/;
      }
      location /slack/ {
        proxy_pass http://armory-platform:10000/;
      }
      rewrite ^/login(.*)$ /api/login$1 last;
      rewrite ^/auth(.*)$ /api/auth$1 last;
    }
  nginx.https.conf: |
    gzip on;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon;
    server {
      listen 80;
      listen [::]:80;
      return 301 https://$host$request_uri;
    }
    server {
      listen 443 ssl;
      listen [::]:443 ssl;
      ssl on;
      ssl_certificate /opt/nginx/certs/ssl.crt;
      ssl_certificate_key /opt/nginx/certs/ssl.key;
      location / {
        proxy_pass http://armory-deck/;
      }
      location /api/ {
        proxy_pass http://armory-gate:8084/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $proxy_protocol_addr;
        proxy_set_header X-Forwarded-For $proxy_protocol_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
      }
      location /slack/ {
        proxy_pass http://armory-platform:10000/;
      }
      rewrite ^/login(.*)$ /api/login$1 last;
      rewrite ^/auth(.*)$ /api/auth$1 last;
    }
  orca-armory.yml: |
    mine:
      baseUrl: http://${services.barometer.host}:${services.barometer.port}
    pipelineTemplate:
      enabled: ${features.pipelineTemplates.enabled:false}
      jinja:
        enabled: true
    kayenta:
      enabled: ${services.kayenta.enabled:false}
      baseUrl: ${services.kayenta.baseUrl}
    jira:
      enabled: ${features.jira.enabled:false}
      basicAuth: "Basic ${features.jira.basicAuthToken}"
      url: ${features.jira.createIssueUrl}
    webhook:
      preconfigured:
      - label: Enforce Pipeline Policy
        description: Checks pipeline configuration against policy requirements
        type: enforcePipelinePolicy
        enabled: ${features.certifiedPipelines.enabled:false}
        url: "http://lighthouse:5000/v1/pipelines/${execution.application}/${execution.pipelineConfigId}?check_policy=yes"
        headers:
          Accept:
          - application/json
        method: GET
        waitForCompletion: true
        statusUrlResolution: getMethod
        statusJsonPath: $.status
        successStatuses: pass
        canceledStatuses:
        terminalStatuses: TERMINAL
      - label: "Jira: Create Issue"
        description: Enter a Jira ticket when this pipeline runs
        type: createJiraIssue
        enabled: ${jira.enabled}
        url: ${jira.url}
        customHeaders:
          "Content-Type": application/json
          Authorization: ${jira.basicAuth}
        method: POST
        parameters:
        - name: summary
          label: Issue Summary
          description: A short summary of your issue.
        - name: description
          label: Issue Description
          description: A longer description of your issue.
        - name: projectKey
          label: Project key
          description: The key of your JIRA project.
        - name: type
          label: Issue Type
          description: The type of your issue, e.g. "Task", "Story", etc.
        payload: |
          {
            "fields" : {
              "description": "${parameterValues['description']}",
              "issuetype": {
                "name": "${parameterValues['type']}"
              },
              "project": {
                "key": "${parameterValues['projectKey']}"
              },
              "summary": "${parameterValues['summary']}"
            }
          }
        waitForCompletion: false
      - label: "Jira: Update Issue"
        description: Update a previously created Jira Issue
        type: updateJiraIssue
        enabled: ${jira.enabled}
        url: "${execution.stages.?[type == 'createJiraIssue'][0]['context']['buildInfo']['self']}"
        customHeaders:
          "Content-Type": application/json
          Authorization: ${jira.basicAuth}
        method: PUT
        parameters:
        - name: summary
          label: Issue Summary
          description: A short summary of your issue.
        - name: description
          label: Issue Description
          description: A longer description of your issue.
        payload: |
          {
            "fields" : {
              "description": "${parameterValues['description']}",
              "summary": "${parameterValues['summary']}"
            }
          }
        waitForCompletion: false
      - label: "Jira: Transition Issue"
        description: Change state of existing Jira Issue
        type: transitionJiraIssue
        enabled: ${jira.enabled}
        url: "${execution.stages.?[type == 'createJiraIssue'][0]['context']['buildInfo']['self']}/transitions"
        customHeaders:
          "Content-Type": application/json
          Authorization: ${jira.basicAuth}
        method: POST
        parameters:
        - name: newStateID
          label: New State ID
          description: The ID of the state you want to transition the issue to.
        payload: |
          {
            "transition" : {
              "id" : "${parameterValues['newStateID']}"
            }
          }
        waitForCompletion: false
```
  96. - label: "Jira: Add Comment"
  97. description: Add a comment to an existing Jira Issue
  98. type: commentJiraIssue
  99. enabled: ${jira.enabled}
  100. url: "${execution.stages.?[type == 'createJiraIssue'][0]['context']['buildInfo']['self']}/comment"
  101. customHeaders:
  102. "Content-Type": application/json
  103. Authorization: ${jira.basicAuth}
  104. method: POST
  105. parameters:
  106. - name: body
  107. label: Comment body
  108. description: The text body of the component.
  109. payload: |
  110. {
  111. "body" : "${parameterValues['body']}"
  112. }
  113. waitForCompletion: false

orca.yml: | server: port: ${services.orca.port:8083} address: ${services.orca.host:localhost} oort: baseUrl: ${services.oort.baseUrl:localhost:7002} front50: baseUrl: ${services.front50.baseUrl:localhost:8080} mort: baseUrl: ${services.mort.baseUrl:localhost:7002} kato: baseUrl: ${services.kato.baseUrl:localhost:7002} bakery: baseUrl: ${services.bakery.baseUrl:localhost:8087} extractBuildDetails: ${services.bakery.extractBuildDetails:true} allowMissingPackageInstallation: ${services.bakery.allowMissingPackageInstallation:true} echo: enabled: ${services.echo.enabled:false} baseUrl: ${services.echo.baseUrl:8089} igor: baseUrl: ${services.igor.baseUrl:8088} flex: baseUrl: http://not-a-host default: bake: account: ${providers.aws.primaryCredentials.name} securityGroups: vpc: securityGroups: redis: connection: ${REDIS_HOST:redis://localhost:6379} tasks: executionWindow: timezone: ${services.orca.timezone} spectator: applicationName: ${spring.application.name} webEndpoint: enabled: ${services.spectator.webEndpoint.enabled:false} prototypeFilter: path: ${services.spectator.webEndpoint.prototypeFilter.path:}
stackdriver: enabled: ${services.stackdriver.enabled} projectName: ${services.stackdriver.projectName} credentialsPath: ${services.stackdriver.credentialsPath} stackdriver: hints:

  1. - name: controller.invocations
  2. labels:
  3. - application

rosco-armory.yml: | redis: timeout: 50000 rosco: jobs: local: timeoutMinutes: 60 rosco.yml: | server: port: ${services.rosco.port:8087} address: ${services.rosco.host:localhost} redis: connection: ${REDIS_HOST:redis://localhost:6379} aws: enabled: ${providers.aws.enabled:false} docker: enabled: ${services.docker.enabled:false} bakeryDefaults: targetRepository: ${services.docker.targetRepository} google: enabled: ${providers.google.enabled:false} accounts:

  1. - name: ${providers.google.primaryCredentials.name}
  2. project: ${providers.google.primaryCredentials.project}
  3. jsonPath: ${providers.google.primaryCredentials.jsonPath}
  4. gce:
  5. bakeryDefaults:
  6. zone: ${providers.google.defaultZone}
  7. rosco:
  8. configDir: ${services.rosco.configDir}
  9. jobs:
  10. local:
  11. timeoutMinutes: 30
  12. spectator:
  13. applicationName: ${spring.application.name}
  14. webEndpoint:
  15. enabled: ${services.spectator.webEndpoint.enabled:false}
  16. prototypeFilter:
  17. path: ${services.spectator.webEndpoint.prototypeFilter.path:}
  18. stackdriver:
  19. enabled: ${services.stackdriver.enabled}
  20. projectName: ${services.stackdriver.projectName}
  21. credentialsPath: ${services.stackdriver.credentialsPath}
  22. stackdriver:
  23. hints:
  24. - name: bakes
  25. labels:
  26. - success

spinnaker-armory.yml: | armory: architecture: ‘k8s’

  1. features:
  2. artifacts:
  3. enabled: true
  4. pipelineTemplates:
  5. enabled: ${PIPELINE_TEMPLATES_ENABLED:false}
  6. infrastructureStages:
  7. enabled: ${INFRA_ENABLED:false}
  8. certifiedPipelines:
  9. enabled: ${CERTIFIED_PIPELINES_ENABLED:false}
  10. configuratorEnabled:
  11. enabled: true
  12. configuratorWizard:
  13. enabled: true
  14. configuratorCerts:
  15. enabled: true
  16. loadtestStage:
  17. enabled: ${LOADTEST_ENABLED:false}
  18. jira:
  19. enabled: ${JIRA_ENABLED:false}
  20. basicAuthToken: ${JIRA_BASIC_AUTH}
  21. url: ${JIRA_URL}
  22. login: ${JIRA_LOGIN}
  23. password: ${JIRA_PASSWORD}
  24. slaEnabled:
  25. enabled: ${SLA_ENABLED:false}
  26. chaosMonkey:
  27. enabled: ${CHAOS_ENABLED:false}
  28. armoryPlatform:
  29. enabled: ${PLATFORM_ENABLED:false}
  30. uiEnabled: ${PLATFORM_UI_ENABLED:false}
  31. services:
  32. default:
  33. host: ${DEFAULT_DNS_NAME:localhost}
  34. clouddriver:
  35. host: ${DEFAULT_DNS_NAME:armory-clouddriver}
  36. entityTags:
  37. enabled: false
  38. configurator:
  39. baseUrl: http://${CONFIGURATOR_HOST:armory-configurator}:8069
  40. echo:
  41. host: ${DEFAULT_DNS_NAME:armory-echo}
  42. deck:
  43. gateUrl: ${API_HOST:service.default.host}
  44. baseUrl: ${DECK_HOST:armory-deck}
  45. dinghy:
  46. enabled: ${DINGHY_ENABLED:false}
  47. host: ${DEFAULT_DNS_NAME:armory-dinghy}
  48. baseUrl: ${services.default.protocol}://${services.dinghy.host}:${services.dinghy.port}
  49. port: 8081
  50. front50:
  51. host: ${DEFAULT_DNS_NAME:armory-front50}
  52. cassandra:
  53. enabled: false
  54. redis:
  55. enabled: true
  56. gcs:
  57. enabled: ${ARMORYSPINNAKER_GCS_ENABLED:false}
  58. s3:
  59. enabled: ${ARMORYSPINNAKER_S3_ENABLED:false}
  60. storage_bucket: ${ARMORYSPINNAKER_CONF_STORE_BUCKET}
  61. rootFolder: ${ARMORYSPINNAKER_CONF_STORE_PREFIX:front50}
  62. gate:
  63. host: ${DEFAULT_DNS_NAME:armory-gate}
  64. igor:
  65. host: ${DEFAULT_DNS_NAME:armory-igor}
  66. kayenta:
  67. enabled: true
  68. host: ${DEFAULT_DNS_NAME:armory-kayenta}
  69. canaryConfigStore: true
  70. port: 8090
  71. baseUrl: ${services.default.protocol}://${services.kayenta.host}:${services.kayenta.port}
  72. metricsStore: ${METRICS_STORE:stackdriver}
  73. metricsAccountName: ${METRICS_ACCOUNT_NAME}
  74. storageAccountName: ${STORAGE_ACCOUNT_NAME}
  75. atlasWebComponentsUrl: ${ATLAS_COMPONENTS_URL:}
  76. lighthouse:
  77. host: ${DEFAULT_DNS_NAME:armory-lighthouse}
  78. port: 5000
  79. baseUrl: ${services.default.protocol}://${services.lighthouse.host}:${services.lighthouse.port}
  80. orca:
  81. host: ${DEFAULT_DNS_NAME:armory-orca}
  82. platform:
  83. enabled: ${PLATFORM_ENABLED:false}
  84. host: ${DEFAULT_DNS_NAME:armory-platform}
  85. baseUrl: ${services.default.protocol}://${services.platform.host}:${services.platform.port}
  86. port: 5001
  87. rosco:
  88. host: ${DEFAULT_DNS_NAME:armory-rosco}
  89. enabled: true
  90. configDir: /opt/spinnaker/config/packer
  91. bakery:
  92. allowMissingPackageInstallation: true
  93. barometer:
  94. enabled: ${BAROMETER_ENABLED:false}
  95. host: ${DEFAULT_DNS_NAME:armory-barometer}
  96. baseUrl: ${services.default.protocol}://${services.barometer.host}:${services.barometer.port}
  97. port: 9092
  98. newRelicEnabled: ${NEW_RELIC_ENABLED:false}
  99. redis:
  100. host: redis
  101. port: 6379
  102. connection: ${REDIS_HOST:redis://localhost:6379}
  103. fiat:
  104. enabled: ${FIAT_ENABLED:false}
  105. host: ${DEFAULT_DNS_NAME:armory-fiat}
  106. port: 7003
  107. baseUrl: ${services.default.protocol}://${services.fiat.host}:${services.fiat.port}
  108. providers:
  109. aws:
  110. enabled: ${SPINNAKER_AWS_ENABLED:true}
  111. defaultRegion: ${SPINNAKER_AWS_DEFAULT_REGION:us-west-2}
  112. defaultIAMRole: ${SPINNAKER_AWS_DEFAULT_IAM_ROLE:SpinnakerInstanceProfile}
  113. defaultAssumeRole: ${SPINNAKER_AWS_DEFAULT_ASSUME_ROLE:SpinnakerManagedProfile}
  114. primaryCredentials:
  115. name: ${SPINNAKER_AWS_DEFAULT_ACCOUNT:default-aws-account}
  116. kubernetes:
  117. proxy: localhost:8001
  118. apiPrefix: api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/#

spinnaker.yml: |2 global: spinnaker: timezone: ‘America/Los_Angeles’ architecture: ${PLATFORM_ARCHITECTURE} services: default: host: localhost protocol: http clouddriver: host: ${services.default.host} port: 7002 baseUrl: ${services.default.protocol}://${services.clouddriver.host}:${services.clouddriver.port} aws: udf: enabled: true echo: enabled: true host: ${services.default.host} port: 8089 baseUrl: ${services.default.protocol}://${services.echo.host}:${services.echo.port} cassandra: enabled: false inMemory: enabled: true cron: enabled: true timezone: ${global.spinnaker.timezone} notifications: mail: enabled: false host: # the smtp host fromAddress: # the address for which emails are sent from hipchat: enabled: false url: # the hipchat server to connect to token: # the hipchat auth token botName: # the username of the bot sms: enabled: false account: # twilio account id token: # twilio auth token from: # phone number by which sms messages are sent slack: enabled: false token: # the API token for the bot botName: # the username of the bot deck: host: ${services.default.host} port: 9000 baseUrl: ${services.default.protocol}://${services.deck.host}:${services.deck.port} gateUrl: ${API_HOST:services.gate.baseUrl} bakeryUrl: ${services.bakery.baseUrl} timezone: ${global.spinnaker.timezone} auth: enabled: ${AUTH_ENABLED:false} fiat: enabled: false host: ${services.default.host} port: 7003 baseUrl: ${services.default.protocol}://${services.fiat.host}:${services.fiat.port} front50: host: ${services.default.host} port: 8080 baseUrl: ${services.default.protocol}://${services.front50.host}:${services.front50.port} storage_bucket: ${SPINNAKER_DEFAULT_STORAGE_BUCKET:} bucket_location: bucket_root: front50 cassandra: enabled: false redis: enabled: false gcs: enabled: false s3: enabled: false gate: host: ${services.default.host} port: 8084 baseUrl: ${services.default.protocol}://${services.gate.host}:${services.gate.port} igor: enabled: false host: ${services.default.host} 
port: 8088 baseUrl: ${services.default.protocol}://${services.igor.host}:${services.igor.port} kato: host: ${services.clouddriver.host} port: ${services.clouddriver.port} baseUrl: ${services.clouddriver.baseUrl} mort: host: ${services.clouddriver.host} port: ${services.clouddriver.port} baseUrl: ${services.clouddriver.baseUrl} orca: host: ${services.default.host} port: 8083 baseUrl: ${services.default.protocol}://${services.orca.host}:${services.orca.port} timezone: ${global.spinnaker.timezone} enabled: true oort: host: ${services.clouddriver.host} port: ${services.clouddriver.port} baseUrl: ${services.clouddriver.baseUrl} rosco: host: ${services.default.host} port: 8087 baseUrl: ${services.default.protocol}://${services.rosco.host}:${services.rosco.port} configDir: /opt/rosco/config/packer bakery: host: ${services.rosco.host} port: ${services.rosco.port} baseUrl: ${services.rosco.baseUrl} extractBuildDetails: true allowMissingPackageInstallation: false docker: targetRepository: # Optional, but expected in spinnaker-local.yml if specified. jenkins: enabled: ${services.igor.enabled:false} defaultMaster: name: Jenkins baseUrl: # Expected in spinnaker-local.yml username: # Expected in spinnaker-local.yml password: # Expected in spinnaker-local.yml redis: host: redis port: 6379 connection: ${REDIS_HOST:redis://localhost:6379} cassandra: host: ${services.default.host} port: 9042 embedded: false cluster: CASS_SPINNAKER travis: enabled: false defaultMaster: name: ci # The display name for this server. Gets prefixed with “travis-“ baseUrl: https://travis-ci.com address: https://api.travis-ci.org githubToken: # GitHub scopes currently required by Travis is required. 
spectator: webEndpoint: enabled: false stackdriver: enabled: ${SPINNAKER_STACKDRIVER_ENABLED:false} projectName: ${SPINNAKER_STACKDRIVER_PROJECT_NAME:${providers.google.primaryCredentials.project}} credentialsPath: ${SPINNAKER_STACKDRIVER_CREDENTIALS_PATH:${providers.google.primaryCredentials.jsonPath}} providers: aws: enabled: ${SPINNAKER_AWS_ENABLED:false} simpleDBEnabled: false defaultRegion: ${SPINNAKER_AWS_DEFAULT_REGION:us-west-2} defaultIAMRole: BaseIAMRole defaultSimpleDBDomain: CLOUD_APPLICATIONS primaryCredentials: name: default defaultKeyPairTemplate: “{{name}}-keypair” google: enabled: ${SPINNAKER_GOOGLE_ENABLED:false} defaultRegion: ${SPINNAKER_GOOGLE_DEFAULT_REGION:us-central1} defaultZone: ${SPINNAKER_GOOGLE_DEFAULT_ZONE:us-central1-f} primaryCredentials: name: my-account-name project: ${SPINNAKER_GOOGLE_PROJECT_ID:} jsonPath: ${SPINNAKER_GOOGLE_PROJECT_CREDENTIALS_PATH:} consul: enabled: ${SPINNAKER_GOOGLE_CONSUL_ENABLED:false} cf: enabled: false defaultOrg: spinnaker-cf-org defaultSpace: spinnaker-cf-space primaryCredentials: name: my-cf-account api: my-cf-api-uri console: my-cf-console-base-url azure: enabled: ${SPINNAKER_AZURE_ENABLED:false} defaultRegion: ${SPINNAKER_AZURE_DEFAULT_REGION:westus} primaryCredentials: name: my-azure-account clientId: appKey: tenantId: subscriptionId: titan: enabled: false defaultRegion: us-east-1 primaryCredentials: name: my-titan-account kubernetes: enabled: ${SPINNAKER_KUBERNETES_ENABLED:false} primaryCredentials: name: my-kubernetes-account namespace: default dockerRegistryAccount: ${providers.dockerRegistry.primaryCredentials.name} dockerRegistry: enabled: ${SPINNAKER_KUBERNETES_ENABLED:false} primaryCredentials: name: my-docker-registry-account address: ${SPINNAKER_DOCKER_REGISTRY:https://index.docker.io/ } repository: ${SPINNAKER_DOCKER_REPOSITORY:} username: ${SPINNAKER_DOCKER_USERNAME:} passwordFile: ${SPINNAKER_DOCKER_PASSWORD_FILE:}

  1. openstack:
  2. enabled: false
  3. defaultRegion: ${SPINNAKER_OPENSTACK_DEFAULT_REGION:RegionOne}
  4. primaryCredentials:
  5. name: my-openstack-account
  6. authUrl: ${OS_AUTH_URL}
  7. username: ${OS_USERNAME}
  8. password: ${OS_PASSWORD}
  9. projectName: ${OS_PROJECT_NAME}
  10. domainName: ${OS_USER_DOMAIN_NAME:Default}
  11. regions: ${OS_REGION_NAME:RegionOne}
  12. insecure: false

EOF

#### 3.2.5 Create the dp resource manifest
```shell
cat >dp.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: armory-clouddriver
  name: armory-clouddriver
  namespace: armory
spec:
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      app: armory-clouddriver
  template:
    metadata:
      annotations:
        artifact.spinnaker.io/location: '"armory"'
        artifact.spinnaker.io/name: '"armory-clouddriver"'
        artifact.spinnaker.io/type: '"kubernetes/deployment"'
        moniker.spinnaker.io/application: '"armory"'
        moniker.spinnaker.io/cluster: '"clouddriver"'
      labels:
        app: armory-clouddriver
    spec:
      containers:
      - name: armory-clouddriver
        image: harbor.zq.com/armory/clouddriver:v1.11.x
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - -c
        args:
        - bash /opt/spinnaker/config/default/fetch.sh && cd /home/spinnaker/config && /opt/clouddriver/bin/clouddriver
        ports:
        - containerPort: 7002
          protocol: TCP
        env:
        - name: JAVA_OPTS
          value: -Xmx2048M
        envFrom:
        - configMapRef:
            name: init-env
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /health
            port: 7002
            scheme: HTTP
          initialDelaySeconds: 600
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 5
          httpGet:
            path: /health
            port: 7002
            scheme: HTTP
          initialDelaySeconds: 180
          periodSeconds: 3
          successThreshold: 5
          timeoutSeconds: 1
        securityContext:
          runAsUser: 0
        volumeMounts:
        - mountPath: /etc/podinfo
          name: podinfo
        - mountPath: /home/spinnaker/.aws
          name: credentials
        - mountPath: /opt/spinnaker/credentials/custom
          name: default-kubeconfig
        - mountPath: /opt/spinnaker/config/default
          name: default-config
        - mountPath: /opt/spinnaker/config/custom
          name: custom-config
      imagePullSecrets:
      - name: harbor
      volumes:
      - configMap:
          defaultMode: 420
          name: default-kubeconfig
        name: default-kubeconfig
      - configMap:
          defaultMode: 420
          name: custom-config
        name: custom-config
      - configMap:
          defaultMode: 420
          name: default-config
        name: default-config
      - name: credentials
        secret:
          defaultMode: 420
          secretName: credentials
      - downwardAPI:
          defaultMode: 420
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.annotations
            path: annotations
        name: podinfo
EOF
```

#### 3.2.6 Create the svc resource manifest
```shell
cat >svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: armory-clouddriver
  namespace: armory
spec:
  ports:
  - port: 7002
    protocol: TCP
    targetPort: 7002
  selector:
    app: armory-clouddriver
EOF
```
#### 3.2.7 Apply the resource manifests

Run on any node:

```shell
kubectl apply -f http://k8s-yaml.zq.com/armory/clouddriver/init-env.yaml
kubectl apply -f http://k8s-yaml.zq.com/armory/clouddriver/default-config.yaml
kubectl apply -f http://k8s-yaml.zq.com/armory/clouddriver/custom-config.yaml
kubectl apply -f http://k8s-yaml.zq.com/armory/clouddriver/dp.yaml
kubectl apply -f http://k8s-yaml.zq.com/armory/clouddriver/svc.yaml
```
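Order matters here: the ConfigMaps (`init-env`, `default-config`, `custom-config`) must exist before the Deployment's pods can mount them. A small loop (a sketch; same base URL and filenames as the commands above) keeps that order explicit and is easy to re-run:

```shell
# Apply the clouddriver manifests in dependency order:
# ConfigMaps first, then the Deployment, then the Service.
base=http://k8s-yaml.zq.com/armory/clouddriver
for f in init-env.yaml default-config.yaml custom-config.yaml dp.yaml svc.yaml; do
  echo "applying ${base}/${f}"
  # kubectl apply -f "${base}/${f}"   # uncomment on a node with kubectl access
done
```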
#### 3.2.8 Check

```shell
~]# docker ps -a | grep minio
~]# docker exec -it b71a5af3c57e sh
/ # curl armory-clouddriver:7002/health
{
  "status": "UP",
  "kubernetes": {
    "status": "UP"
  },
  "dockerRegistry": {
    "status": "UP"
  },
  "redisHealth": {
    "status": "UP",
    "maxIdle": 100,
    "minIdle": 25,
    "numActive": 0,
    "numIdle": 5,
    "numWaiters": 0
  },
  "diskSpace": {
    "status": "UP",
    "total": 21250441216,
    "free": 15657390080,
    "threshold": 10485760
  }
}
```
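The `/health` endpoint reports an `UP`/`DOWN` status per sub-component, so the check can be scripted instead of read by eye. A rough sketch that greps a captured response (shown here with an inlined sample; in the cluster you would fill `health` from `curl -s armory-clouddriver:7002/health`):

```shell
# Fail loudly if any component in the health JSON reports DOWN.
health='{"status":"UP","kubernetes":{"status":"UP"},"dockerRegistry":{"status":"UP"}}'
if echo "$health" | grep -q '"DOWN"'; then
  echo "clouddriver unhealthy"
else
  echo "clouddriver healthy"
fi
```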
## 4 Deploying Spinnaker, Part 3

### 4.1 Spinnaker: deploying front50

```shell
mkdir /data/k8s-yaml/armory/front50
cd /data/k8s-yaml/armory/front50
```

#### 4.1.1 Prepare the image

```shell
docker pull armory/spinnaker-front50-slim:release-1.8.x-93febf2
docker tag 0d353788f4f2 harbor.zq.com/armory/front50:v1.8.x
docker push harbor.zq.com/armory/front50:v1.8.x
```
#### 4.1.2 Prepare the dp resource manifest

```shell
cat >dp.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: armory-front50
  name: armory-front50
  namespace: armory
spec:
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      app: armory-front50
  template:
    metadata:
      annotations:
        artifact.spinnaker.io/location: '"armory"'
        artifact.spinnaker.io/name: '"armory-front50"'
        artifact.spinnaker.io/type: '"kubernetes/deployment"'
        moniker.spinnaker.io/application: '"armory"'
        moniker.spinnaker.io/cluster: '"front50"'
      labels:
        app: armory-front50
    spec:
      containers:
      - name: armory-front50
        image: harbor.zq.com/armory/front50:v1.8.x
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - -c
        args:
        - bash /opt/spinnaker/config/default/fetch.sh && cd /home/spinnaker/config && /opt/front50/bin/front50
        ports:
        - containerPort: 8080
          protocol: TCP
        env:
        - name: JAVA_OPTS
          value: -javaagent:/opt/front50/lib/jamm-0.2.5.jar -Xmx1000M
        envFrom:
        - configMapRef:
            name: init-env
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 600
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 180
          periodSeconds: 5
          successThreshold: 8
          timeoutSeconds: 1
        volumeMounts:
        - mountPath: /etc/podinfo
          name: podinfo
        - mountPath: /home/spinnaker/.aws
          name: credentials
        - mountPath: /opt/spinnaker/config/default
          name: default-config
        - mountPath: /opt/spinnaker/config/custom
          name: custom-config
      imagePullSecrets:
      - name: harbor
      volumes:
      - configMap:
          defaultMode: 420
          name: custom-config
        name: custom-config
      - configMap:
          defaultMode: 420
          name: default-config
        name: default-config
      - name: credentials
        secret:
          defaultMode: 420
          secretName: credentials
      - downwardAPI:
          defaultMode: 420
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.annotations
            path: annotations
        name: podinfo
EOF
```
#### 4.1.3 Create the svc resource manifest

```shell
cat >svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: armory-front50
  namespace: armory
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: armory-front50
EOF
```
#### 4.1.4 Apply the resource manifests

```shell
kubectl apply -f http://k8s-yaml.zq.com/armory/front50/dp.yaml
kubectl apply -f http://k8s-yaml.zq.com/armory/front50/svc.yaml
```

Verify:

```shell
~]# docker ps -qa | grep minio
b71a5af3c57e
~]# docker exec -it b71a5af3c57e sh
/ # curl armory-front50:8080/health
{"status":"UP"}
```
### 4.2 Spinnaker: deploying orca

```shell
mkdir /data/k8s-yaml/armory/orca
cd /data/k8s-yaml/armory/orca
```

#### 4.2.1 Prepare the Docker image

```shell
docker pull docker.io/armory/spinnaker-orca-slim:release-1.8.x-de4ab55
docker tag 5103b1f73e04 harbor.zq.com/armory/orca:v1.8.x
docker push harbor.zq.com/armory/orca:v1.8.x
```
#### 4.2.2 Prepare the dp resource manifest

```shell
cat >dp.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: armory-orca
  name: armory-orca
  namespace: armory
spec:
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      app: armory-orca
  template:
    metadata:
      annotations:
        artifact.spinnaker.io/location: '"armory"'
        artifact.spinnaker.io/name: '"armory-orca"'
        artifact.spinnaker.io/type: '"kubernetes/deployment"'
        moniker.spinnaker.io/application: '"armory"'
        moniker.spinnaker.io/cluster: '"orca"'
      labels:
        app: armory-orca
    spec:
      containers:
      - name: armory-orca
        image: harbor.zq.com/armory/orca:v1.8.x
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - -c
        args:
        - bash /opt/spinnaker/config/default/fetch.sh && cd /home/spinnaker/config && /opt/orca/bin/orca
        ports:
        - containerPort: 8083
          protocol: TCP
        env:
        - name: JAVA_OPTS
          value: -Xmx1000M
        envFrom:
        - configMapRef:
            name: init-env
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /health
            port: 8083
            scheme: HTTP
          initialDelaySeconds: 600
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 8083
            scheme: HTTP
          initialDelaySeconds: 180
          periodSeconds: 3
          successThreshold: 5
          timeoutSeconds: 1
        volumeMounts:
        - mountPath: /etc/podinfo
          name: podinfo
        - mountPath: /opt/spinnaker/config/default
          name: default-config
        - mountPath: /opt/spinnaker/config/custom
          name: custom-config
      imagePullSecrets:
      - name: harbor
      volumes:
      - configMap:
          defaultMode: 420
          name: custom-config
        name: custom-config
      - configMap:
          defaultMode: 420
          name: default-config
        name: default-config
      - downwardAPI:
          defaultMode: 420
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.annotations
            path: annotations
        name: podinfo
EOF
```
#### 4.2.3 Prepare the svc resource manifest

```shell
cat >svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: armory-orca
  namespace: armory
spec:
  ports:
  - port: 8083
    protocol: TCP
    targetPort: 8083
  selector:
    app: armory-orca
EOF
```
#### 4.2.4 Apply the resource manifests

```shell
kubectl apply -f http://k8s-yaml.zq.com/armory/orca/dp.yaml
kubectl apply -f http://k8s-yaml.zq.com/armory/orca/svc.yaml
```

Check:

```shell
~]# docker exec -it b71a5af3c57e sh
/ # curl armory-orca:8083/health
{"status":"UP"}
```
### 4.3 Spinnaker: deploying echo

```shell
mkdir /data/k8s-yaml/armory/echo
cd /data/k8s-yaml/armory/echo
```

#### 4.3.1 Prepare the Docker image

```shell
docker pull docker.io/armory/echo-armory:c36d576-release-1.8.x-617c567
docker tag 415efd46f474 harbor.zq.com/armory/echo:v1.8.x
docker push harbor.zq.com/armory/echo:v1.8.x
```
#### 4.3.2 Prepare the dp resource manifest

```shell
cat >dp.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: armory-echo
  name: armory-echo
  namespace: armory
spec:
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      app: armory-echo
  template:
    metadata:
      annotations:
        artifact.spinnaker.io/location: '"armory"'
        artifact.spinnaker.io/name: '"armory-echo"'
        artifact.spinnaker.io/type: '"kubernetes/deployment"'
        moniker.spinnaker.io/application: '"armory"'
        moniker.spinnaker.io/cluster: '"echo"'
      labels:
        app: armory-echo
    spec:
      containers:
      - name: armory-echo
        image: harbor.zq.com/armory/echo:v1.8.x
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - -c
        args:
        - bash /opt/spinnaker/config/default/fetch.sh && cd /home/spinnaker/config && /opt/echo/bin/echo
        ports:
        - containerPort: 8089
          protocol: TCP
        env:
        - name: JAVA_OPTS
          value: -javaagent:/opt/echo/lib/jamm-0.2.5.jar -Xmx1000M
        envFrom:
        - configMapRef:
            name: init-env
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 8089
            scheme: HTTP
          initialDelaySeconds: 600
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 8089
            scheme: HTTP
          initialDelaySeconds: 180
          periodSeconds: 3
          successThreshold: 5
          timeoutSeconds: 1
        volumeMounts:
        - mountPath: /etc/podinfo
          name: podinfo
        - mountPath: /opt/spinnaker/config/default
          name: default-config
        - mountPath: /opt/spinnaker/config/custom
          name: custom-config
      imagePullSecrets:
      - name: harbor
      volumes:
      - configMap:
          defaultMode: 420
          name: custom-config
        name: custom-config
      - configMap:
          defaultMode: 420
          name: default-config
        name: default-config
      - downwardAPI:
          defaultMode: 420
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.annotations
            path: annotations
        name: podinfo
EOF
```
#### 4.3.3 Prepare the svc resource manifest

```shell
cat >svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: armory-echo
  namespace: armory
spec:
  ports:
  - port: 8089
    protocol: TCP
    targetPort: 8089
  selector:
    app: armory-echo
EOF
```
#### 4.3.4 Apply the resource manifests

```shell
kubectl apply -f http://k8s-yaml.zq.com/armory/echo/dp.yaml
kubectl apply -f http://k8s-yaml.zq.com/armory/echo/svc.yaml
```

Check:

```shell
~]# docker exec -it b71a5af3c57e sh
/ # curl armory-echo:8089/health
{"status":"UP"}
```
### 4.4 Spinnaker: deploying igor

```shell
mkdir /data/k8s-yaml/armory/igor
cd /data/k8s-yaml/armory/igor
```

#### 4.4.1 Prepare the Docker image

```shell
docker pull docker.io/armory/spinnaker-igor-slim:release-1.8-x-new-install-healthy-ae2b329
docker tag 23984f5b43f6 harbor.zq.com/armory/igor:v1.8.x
docker push harbor.zq.com/armory/igor:v1.8.x
```
#### 4.4.2 Prepare the dp resource manifest

```shell
cat >dp.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: armory-igor
  name: armory-igor
  namespace: armory
spec:
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      app: armory-igor
  template:
    metadata:
      annotations:
        artifact.spinnaker.io/location: '"armory"'
        artifact.spinnaker.io/name: '"armory-igor"'
        artifact.spinnaker.io/type: '"kubernetes/deployment"'
        moniker.spinnaker.io/application: '"armory"'
        moniker.spinnaker.io/cluster: '"igor"'
      labels:
        app: armory-igor
    spec:
      containers:
      - name: armory-igor
        image: harbor.zq.com/armory/igor:v1.8.x
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - -c
        args:
        - bash /opt/spinnaker/config/default/fetch.sh && cd /home/spinnaker/config && /opt/igor/bin/igor
        ports:
        - containerPort: 8088
          protocol: TCP
        env:
        - name: IGOR_PORT_MAPPING
          value: -8088:8088
        - name: JAVA_OPTS
          value: -Xmx1000M
        envFrom:
        - configMapRef:
            name: init-env
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 8088
            scheme: HTTP
          initialDelaySeconds: 600
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 8088
            scheme: HTTP
          initialDelaySeconds: 180
          periodSeconds: 5
          successThreshold: 5
          timeoutSeconds: 1
        volumeMounts:
        - mountPath: /etc/podinfo
          name: podinfo
        - mountPath: /opt/spinnaker/config/default
          name: default-config
        - mountPath: /opt/spinnaker/config/custom
          name: custom-config
      imagePullSecrets:
      - name: harbor
      securityContext:
        runAsUser: 0
      volumes:
      - configMap:
          defaultMode: 420
          name: custom-config
        name: custom-config
      - configMap:
          defaultMode: 420
          name: default-config
        name: default-config
      - downwardAPI:
          defaultMode: 420
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.annotations
            path: annotations
        name: podinfo
EOF
```
#### 4.4.3 Prepare the svc resource manifest

```shell
cat >svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: armory-igor
  namespace: armory
spec:
  ports:
  - port: 8088
    protocol: TCP
    targetPort: 8088
  selector:
    app: armory-igor
EOF
```
#### 4.4.4 Apply the resource manifests

```shell
kubectl apply -f http://k8s-yaml.zq.com/armory/igor/dp.yaml
kubectl apply -f http://k8s-yaml.zq.com/armory/igor/svc.yaml
```

Check:

```shell
~]# docker exec -it b71a5af3c57e sh
/ # curl armory-igor:8088/health
{"status":"UP"}
```
### 4.5 Spinnaker: deploying gate

```shell
mkdir /data/k8s-yaml/armory/gate
cd /data/k8s-yaml/armory/gate
```

#### 4.5.1 Prepare the Docker image

```shell
docker pull docker.io/armory/gate-armory:dfafe73-release-1.8.x-5d505ca
docker tag b092d4665301 harbor.zq.com/armory/gate:v1.8.x
docker push harbor.zq.com/armory/gate:v1.8.x
```
    1. <a name="15e96126"></a>
    2. #### 4.5.2 准备dp资源清单
```bash
cat >dp.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: armory-gate
  name: armory-gate
  namespace: armory
spec:
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      app: armory-gate
  template:
    metadata:
      annotations:
        artifact.spinnaker.io/location: '"armory"'
        artifact.spinnaker.io/name: '"armory-gate"'
        artifact.spinnaker.io/type: '"kubernetes/deployment"'
        moniker.spinnaker.io/application: '"armory"'
        moniker.spinnaker.io/cluster: '"gate"'
      labels:
        app: armory-gate
    spec:
      containers:
      - name: armory-gate
        image: harbor.od.com/armory/gate:v1.8.x
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - -c
        args:
        - bash /opt/spinnaker/config/default/fetch.sh gate && cd /home/spinnaker/config && /opt/gate/bin/gate
        ports:
        - containerPort: 8084
          name: gate-port
          protocol: TCP
        - containerPort: 8085
          name: gate-api-port
          protocol: TCP
        env:
        - name: GATE_PORT_MAPPING
          value: -8084:8084
        - name: GATE_API_PORT_MAPPING
          value: -8085:8085
        - name: JAVA_OPTS
          value: -Xmx1000M
        envFrom:
        - configMapRef:
            name: init-env
        livenessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            # 注: 探针的具体命令在原文排版中缺失,以下按gate的health接口补全示意
            - wget -O - http://localhost:8084/health
        volumeMounts:
        - mountPath: /etc/podinfo
          name: podinfo
        - mountPath: /opt/spinnaker/config/default
          name: default-config
        - mountPath: /opt/spinnaker/config/custom
          name: custom-config
      imagePullSecrets:
      - name: harbor
      securityContext:
        runAsUser: 0
      volumes:
      - configMap:
          defaultMode: 420
          name: custom-config
        name: custom-config
      - configMap:
          defaultMode: 420
          name: default-config
        name: default-config
      - downwardAPI:
          defaultMode: 420
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.annotations
            path: annotations
        name: podinfo
EOF
```
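清单里`GATE_PORT_MAPPING`、`GATE_API_PORT_MAPPING`的取值形如`-8084:8084`,即"-外部端口:容器内端口"的映射串;`fetch.sh`如何消费它原文未展示,下面仅用shell参数展开演示这类取值的拆解方式(示意):

```shell
#!/bin/sh
# 拆解形如 -8084:8084 的端口映射串(示意)
mapping="-8084:8084"
pair="${mapping#-}"    # 去掉前导的 '-' -> 8084:8084
left="${pair%%:*}"     # 冒号前半段  -> 8084
right="${pair##*:}"    # 冒号后半段  -> 8084
echo "$left -> $right"   # 输出: 8084 -> 8084
```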
#### 4.5.3 准备svc资源清单
```bash
cat >svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: armory-gate
  namespace: armory
spec:
  ports:
  - name: gate-port
    port: 8084
    protocol: TCP
    targetPort: 8084
  - name: gate-api-port
    port: 8085
    protocol: TCP
    targetPort: 8085
  selector:
    app: armory-gate
EOF
```
#### 4.5.4 应用资源配置清单
```bash
kubectl apply -f http://k8s-yaml.zq.com/armory/gate/dp.yaml
kubectl apply -f http://k8s-yaml.zq.com/armory/gate/svc.yaml
```
检查
```bash
~]# docker exec -it b71a5af3c57e sh
/ # curl armory-gate:8084/health
{"status":"UP"}
```
### 4.6 spinnaker之deck部署
```bash
mkdir /data/k8s-yaml/armory/deck
cd /data/k8s-yaml/armory/deck
```
#### 4.6.1 准备docker镜像
```bash
docker pull docker.io/armory/deck-armory:d4bf0cf-release-1.8.x-0a33f94
docker tag 9a87ba3b319f harbor.od.com/armory/deck:v1.8.x
docker push harbor.od.com/armory/deck:v1.8.x
```
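各组件镜像都是"pull、tag、push到私有harbor"三步,可以封装成一个小函数减少重复敲写(示意:此处用echo打印命令做演练,确认无误后去掉echo即可真正执行;harbor.od.com为本文环境的仓库地址):

```shell
#!/bin/sh
# 把公网镜像转存到私有harbor的辅助函数(示意: echo打印命令,去掉echo即真正执行)
retag_push() {
  src="$1"   # 源镜像,如 docker.io/armory/deck-armory:d4bf0cf-release-1.8.x-0a33f94
  dst="$2"   # 目标镜像,如 harbor.od.com/armory/deck:v1.8.x
  echo docker pull "$src"
  echo docker tag "$src" "$dst"
  echo docker push "$dst"
}

retag_push docker.io/armory/deck-armory:d4bf0cf-release-1.8.x-0a33f94 \
           harbor.od.com/armory/deck:v1.8.x
```

注意原文的`docker tag`用的是镜像ID(如`9a87ba3b319f`),这里为了函数通用直接用镜像名,二者效果相同。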
#### 4.6.2 准备dp资源清单
```bash
cat >dp.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: armory-deck
  name: armory-deck
  namespace: armory
spec:
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      app: armory-deck
  template:
    metadata:
      annotations:
        artifact.spinnaker.io/location: '"armory"'
        artifact.spinnaker.io/name: '"armory-deck"'
        artifact.spinnaker.io/type: '"kubernetes/deployment"'
        moniker.spinnaker.io/application: '"armory"'
        moniker.spinnaker.io/cluster: '"deck"'
      labels:
        app: armory-deck
    spec:
      containers:
      - name: armory-deck
        image: harbor.od.com/armory/deck:v1.8.x
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - -c
        args:
        - bash /opt/spinnaker/config/default/fetch.sh && /entrypoint.sh
        ports:
        - containerPort: 9000
          protocol: TCP
        envFrom:
        - configMapRef:
            name: init-env
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 9000
            scheme: HTTP
          initialDelaySeconds: 180
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 5
          httpGet:
            path: /
            port: 9000
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 3
          successThreshold: 5
          timeoutSeconds: 1
        volumeMounts:
        - mountPath: /etc/podinfo
          name: podinfo
        - mountPath: /opt/spinnaker/config/default
          name: default-config
        - mountPath: /opt/spinnaker/config/custom
          name: custom-config
      imagePullSecrets:
      - name: harbor
      volumes:
      - configMap:
          defaultMode: 420
          name: custom-config
        name: custom-config
      - configMap:
          defaultMode: 420
          name: default-config
        name: default-config
      - downwardAPI:
          defaultMode: 420
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.annotations
            path: annotations
        name: podinfo
EOF
```
#### 4.6.3 准备svc资源清单
```bash
cat >svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: armory-deck
  namespace: armory
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9000
  selector:
    app: armory-deck
EOF
```
#### 4.6.4 应用资源配置清单
```bash
kubectl apply -f http://k8s-yaml.zq.com/armory/deck/dp.yaml
kubectl apply -f http://k8s-yaml.zq.com/armory/deck/svc.yaml
```
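igor、gate、deck(以及后面的nginx)的清单都放在`http://k8s-yaml.zq.com/armory/<组件>/`下,应用步骤完全同构,可以用循环批量执行(示意:先echo打印命令做演练,确认URL无误后去掉echo真正apply):

```shell
#!/bin/sh
# 批量应用各组件的dp/svc资源清单(示意: echo打印命令)
for comp in igor gate deck; do
  for y in dp svc; do
    echo kubectl apply -f "http://k8s-yaml.zq.com/armory/$comp/$y.yaml"
  done
done
```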
检查
```bash
~]# docker exec -it b71a5af3c57e sh
/ # curl armory-deck
```
deck的svc是80端口(转发到容器的9000),能返回Deck页面的HTML即说明部署正常。
### 4.7 spinnaker之nginx部署
```bash
mkdir /data/k8s-yaml/armory/nginx
cd /data/k8s-yaml/armory/nginx
```
#### 4.7.1 准备docker镜像
```bash
docker pull nginx:1.12.2
docker tag 4037a5562b03 harbor.od.com/armory/nginx:v1.12.2
docker push harbor.od.com/armory/nginx:v1.12.2
```
#### 4.7.2 准备dp资源清单
```bash
cat >dp.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: armory-nginx
  name: armory-nginx
  namespace: armory
spec:
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      app: armory-nginx
  template:
    metadata:
      annotations:
        artifact.spinnaker.io/location: '"armory"'
        artifact.spinnaker.io/name: '"armory-nginx"'
        artifact.spinnaker.io/type: '"kubernetes/deployment"'
        moniker.spinnaker.io/application: '"armory"'
        moniker.spinnaker.io/cluster: '"nginx"'
      labels:
        app: armory-nginx
    spec:
      containers:
      - name: armory-nginx
        image: harbor.od.com/armory/nginx:v1.12.2
        imagePullPolicy: Always
        command:
        - bash
        - -c
        args:
        - bash /opt/spinnaker/config/default/fetch.sh nginx && nginx -g 'daemon off;'
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        - containerPort: 8085
          name: api
          protocol: TCP
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 80
            scheme: HTTP
          initialDelaySeconds: 180
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 80
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 3
          successThreshold: 5
          timeoutSeconds: 1
        volumeMounts:
        - mountPath: /opt/spinnaker/config/default
          name: default-config
        - mountPath: /etc/nginx/conf.d
          name: custom-config
      imagePullSecrets:
      - name: harbor
      volumes:
      - configMap:
          defaultMode: 420
          name: custom-config
        name: custom-config
      - configMap:
          defaultMode: 420
          name: default-config
        name: default-config
EOF
```
#### 4.7.3 准备svc资源清单
```bash
cat >svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: armory-nginx
  namespace: armory
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  - name: api
    port: 8085
    protocol: TCP
    targetPort: 8085
  selector:
    app: armory-nginx
EOF
```
#### 4.7.4 准备ingress资源清单
```bash
cat >ingress.yaml <<'EOF'
apiVersion: traefik.containo.us/v1alpha1   # 假设使用Traefik v2: IngressRoute是其CRD,原文缺少apiVersion行
kind: IngressRoute
metadata:
  labels:
    app: spinnaker
    web: spinnaker.od.com
  name: spinnaker-route
  namespace: armory
spec:
  entryPoints:
```