Running conductor v3 with docker-compose
Running v3 is much the same as v2; the main change is the updated docker images.
Docker image build
There is no official image for v3 yet, but building one yourself is easy: change the image name in the compose file and run docker-compose build.
The build directory is docker (inside the conductor source tree); the steps are sketched below.
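A minimal sketch of the build step, assuming the Netflix/conductor source has been cloned and the image names in docker/docker-compose.yaml changed to your own (the dalongrong/conductor:v3-* names are simply the ones used in this post):

# clone the conductor source and enter its docker build directory
git clone https://github.com/Netflix/conductor.git
cd conductor/docker
# change the server/ui image names in docker-compose.yaml to your own,
# e.g. dalongrong/conductor:v3-server and dalongrong/conductor:v3-ui
docker-compose build
# optionally push the images to your registry
docker-compose push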
Reference docker-compose.yaml
version: '2.3'
services:
  conductor-server:
    environment:
      - CONFIG_PROP=config.properties
    image: dalongrong/conductor:v3-server
    volumes:
      - "./config.properties:/app/config/config.properties"
    networks:
      - internal
    ports:
      - 8080:8080
    healthcheck:
      test: ["CMD", "curl", "-I", "-XGET", "http://localhost:8080/health"]
      interval: 60s
      timeout: 30s
      retries: 12
    links:
      - elasticsearch:es
      - dynomite:dyno1
    depends_on:
      elasticsearch:
        condition: service_healthy
      dynomite:
        condition: service_healthy
    logging:
      driver: "json-file"
      options:
        max-size: "1k"
        max-file: "3"
  dynomite:
    image: v1r3n/dynomite
    networks:
      - internal
    ports:
      - 8102:8102
    healthcheck:
      test: timeout 5 bash -c 'cat < /dev/null > /dev/tcp/localhost/8102'
      interval: 5s
      timeout: 5s
      retries: 12
    logging:
      driver: "json-file"
      options:
        max-size: "1k"
        max-file: "3"
  conductor-ui:
    environment:
      - WF_SERVER=http://conductor-server:8080/api/
    image: dalongrong/conductor:v3-ui
    networks:
      - internal
    ports:
      - 5000:5000
    links:
      - conductor-server
  elasticsearch:
    image: elasticsearch:6.8.15
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx1024m"
      - transport.host=0.0.0.0
      - discovery.type=single-node
      - xpack.security.enabled=false
    networks:
      - internal
    ports:
      - 9200:9200
      - 9300:9300
    healthcheck:
      test: timeout 5 bash -c 'cat < /dev/null > /dev/tcp/localhost/9300'
      interval: 5s
      timeout: 5s
      retries: 12
    logging:
      driver: "json-file"
      options:
        max-size: "1k"
        max-file: "3"
networks:
  internal:
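With this compose file and config.properties in the same directory, starting the stack is a single command; the depends_on conditions and healthchecks take care of the start-up order:

docker-compose up -d
# wait until conductor-server reports healthy
docker-compose ps
docker-compose logs -f conductor-server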
Result
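Once conductor-server is healthy, a quick check (the ports and the /health endpoint are the ones defined in the compose file above):

# server health endpoint (also used by the compose healthcheck)
curl -i http://localhost:8080/health
# the conductor UI is exposed on port 5000: open http://localhost:5000 in a browser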
Notes
The default configuration shipped with the image does not work for this setup, so a custom config.properties is used and mounted into the container as a volume; see the compose file above for the mount.
# Servers.
conductor.grpc-server.enabled=false
# Database persistence type.
conductor.db.type=dynomite
# Dynomite Cluster details.
# format is host:port:rack separated by semicolon
conductor.redis.hosts=dyno1:8102:us-east-1c
# Dynomite cluster name
conductor.redis.clusterName=dyno1
# Namespace for the keys stored in Dynomite/Redis
conductor.redis.workflowNamespacePrefix=conductor
# Namespace prefix for the dyno queues
conductor.redis.queueNamespacePrefix=conductor_queues
# No. of threads allocated to dyno-queues (optional)
queues.dynomite.threads=10
# By default with dynomite, we want the repairservice enabled
conductor.app.workflowRepairServiceEnabled=true
# Non-quorum port used to connect to local redis. Used by dyno-queues.
# When using redis directly, set this to the same port as redis server
# For Dynomite, this is 22122 by default or the local redis-server port used by Dynomite.
conductor.redis.queuesNonQuorumPort=22122
# Elastic search instance indexing is enabled.
conductor.indexing.enabled=true
# Transport address to elasticsearch (this is the key setting for this setup)
conductor.elasticsearch.url=http://elasticsearch:9200
# Name of the elasticsearch index
conductor.elasticsearch.indexName=conductor
# Additional modules for metrics collection exposed via logger (optional)
# conductor.metrics-logger.enabled=true
# conductor.metrics-logger.reportPeriodSeconds=15
# Additional modules for metrics collection exposed to Prometheus (optional)
# conductor.metrics-prometheus.enabled=true
# management.endpoints.web.exposure.include=prometheus
# To enable Workflow/Task Summary Input/Output JSON Serialization, use the following:
# conductor.app.summary-input-output-json-serialization.enabled=true
# Load sample kitchen sink workflow
loadSample=true
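Since loadSample=true loads the bundled kitchen sink workflow, querying the metadata API is a simple way to confirm that the server picked up this configuration and can reach Dynomite and Elasticsearch; a sketch using the standard conductor REST API:

# list registered workflow definitions; the sample kitchensink workflow should be present
curl -s http://localhost:8080/api/metadata/workflow
# start an instance of the sample workflow
curl -s -X POST http://localhost:8080/api/workflow/kitchensink \
  -H "Content-Type: application/json" \
  -d '{"task2Name": "task_5"}'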
References
https://github.com/Netflix/conductor
https://netflix.github.io/conductor/