Channel: Active questions tagged config - Stack Overflow

Automated Helm chart deployment to kubernetes


Before I ask the main question, I'd like to introduce the problem with an example: analyzing data from sensors of the same type.

Let's assume a microservice architecture is used for some data analytics. The incoming data is produced by multiple sensors. In order to train models and run those trained models against the newest data, one analyzing microservice is deployed per sensor. All analyzing services share one database. The image of the analyzing service can be reused for each sensor, so the service itself is fully configurable and is deployed manually via docker-compose. The only real difference between the analyzing microservices is the id of the sensor. Kafka is used as the ingestion layer for incoming sensor data (but this could be anything else).

So the docker-compose.yml could look like this:

```yaml
version: "3.4"
services:
    postgres_analysing:
        image: postgres_analyser
        container_name: postgres_analyser
        build:
            context: .
            dockerfile: Dockerfile_analyser_postgres
        ports:
            - "5666:5432"
        environment:
            - POSTGRES_USER=${ANALYSER_DB_User}
            - POSTGRES_PASSWORD=${ANALYSER_DB_PW}
            - POSTGRES_DB=${ANALYSER_DB}
        networks:
            MS_Streaming:
    analyser_m1:
        image: ${DOCKER_REGISTRY-}analyserpz_comp
        build:
            context: ${PATH_TO_ANALYSER_CONTEXT}
            dockerfile: ${PATH_TO_ANALYSER_DOCKERFILE}
        ports:
            - "6000:80"
        environment:
            - AnalyserConfig__Id=${M1_ID}
            - KafkaConfig__KafkaBootstrapServers=${KAFKA_BOOTSTRAPSERVER}
            - KafkaConfig__KafkaBootstrapServerPort=${KAFKA_SERVERPORT}
            - KafkaConfig__KafkaGroupId=${ANALYSER_GROUP_ID}
            - KafkaConfig__KafkaConsumerTopic=${M1_ID}.duration
            - ConnectionStrings__DbContext=Username=${ANALYSER_DB_User};Password=${ANALYSER_DB_PW};Server=${ANALYSER_DB_SERVER};Port=${ANALYSER_DB_PORT};Database=${ANALYSER_DB}
        networks:
            MS_Streaming:
    analyser_m2:
        image: ${DOCKER_REGISTRY-}analyserpz_comp
        build:
            context: ${PATH_TO_ANALYSER_CONTEXT}
            dockerfile: ${PATH_TO_ANALYSER_DOCKERFILE}
        ports:
            - "6001:80"
        environment:
            - AnalyserConfig__CylinderId=${M2_ID}
            - KafkaConfig__KafkaBootstrapServers=${KAFKA_BOOTSTRAPSERVER}
            - KafkaConfig__KafkaBootstrapServerPort=${KAFKA_SERVERPORT}
            - KafkaConfig__KafkaGroupId=${ANALYSER_GROUP_ID}
            - KafkaConfig__KafkaConsumerTopic=${M2_ID}.duration
            - ConnectionStrings__DbContext=Username=${ANALYSER_DB_User};Password=${ANALYSER_DB_PW};Server=${ANALYSER_DB_SERVER};Port=${ANALYSER_DB_PORT};Database=${ANALYSER_DB}
        networks:
            MS_Streaming:
```

The underlying .env file looks like this:

```
KAFKA_BOOTSTRAPSERVER=my-kafkacluster.de
KAFKA_SERVERPORT=9092
ANALYSER_GROUP_ID=sensor-analyser-group
M1_PZ1_ID=9c1719d0-eabd-4106-8812-9a1b8e0b4d52
M1_PZ2_ID=a7cb43d2-bc6e-4836-80ca-7373be95963f
ANALYSER_DB_SERVER=172.20.0.1
ANALYSER_DB_PW=some_PW
ANALYSER_DB_User=some_User
ANALYSER_DB=analyser_data
ANALYSER_DB_PORT=5432
PATH_TO_ANALYSER_CONTEXT=Path_To_Directory
PATH_TO_ANALYSER_DOCKERFILE=Dockerfile
```

Thinking about what must happen when a new sensor joins the pipeline, the steps are, in short:

  • set up the new configuration
  • deploy the analyser service with this config

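The two steps above could in principle be collapsed into code: a small helper that turns a sensor id into a complete Kubernetes Deployment manifest, ready to be submitted to the cluster. This is only an illustrative sketch — the image name, env keys, and topic naming follow the compose file above, while the function name, namespace-free metadata, and labels are invented for the example:

```python
# Hypothetical sketch: derive a per-sensor Deployment manifest from a sensor id.
# Env variable names and the "<id>.duration" topic convention mirror the
# docker-compose.yml above; labels and naming scheme are assumptions.

def build_analyser_deployment(sensor_id: str, db_conn: str) -> dict:
    """Return a Kubernetes Deployment manifest for one analyser instance."""
    name = f"analyser-{sensor_id[:8]}"  # short, unique name per sensor
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": "analyser", "sensor": sensor_id}},
        "spec": {
            "replicas": 1,
            "selector": {"matchLabels": {"sensor": sensor_id}},
            "template": {
                "metadata": {"labels": {"app": "analyser", "sensor": sensor_id}},
                "spec": {
                    "containers": [{
                        "name": "analyser",
                        "image": "analyserpz_comp",
                        "env": [
                            {"name": "AnalyserConfig__Id", "value": sensor_id},
                            {"name": "KafkaConfig__KafkaConsumerTopic",
                             "value": f"{sensor_id}.duration"},
                            {"name": "ConnectionStrings__DbContext", "value": db_conn},
                        ],
                    }],
                },
            },
        },
    }

manifest = build_analyser_deployment(
    "9c1719d0-eabd-4106-8812-9a1b8e0b4d52", "Server=db;Database=analyser_data")
print(manifest["metadata"]["name"])  # analyser-9c1719d0
```

A real management service would then hand this dict to the Kubernetes API (e.g. via the official client library) instead of printing it.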
Both steps are normally manual tasks, whether that means extending the docker-compose.yml or a Helm chart, or deploying via "native" kubectl methods.

But in fact, the sensor id is already present in the ingestion layer of the pipeline. So it should be possible to retrieve this id in a (let's call it) management service and have this management service automate the steps above.
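The core of such a management service is just a loop: watch the ingestion layer for sensor ids and trigger one deployment per previously unseen id. A minimal sketch of that control logic, with the actual deployment step injected as a callback (in a real service it would call the Kubernetes API or run `helm install`; the function and parameter names here are invented for illustration):

```python
# Hypothetical management-service loop: deploy one analyser per distinct
# sensor id seen on the ingestion layer. The `deploy` callback is injected
# so the dedup logic stands alone and can be tested without a cluster.

def manage(sensor_id_stream, deploy, seen=None):
    """Deploy exactly once per distinct sensor id; return the set of known ids."""
    seen = set(seen or ())
    for sensor_id in sensor_id_stream:
        if sensor_id not in seen:
            deploy(sensor_id)  # e.g. create a Deployment configured with this id
            seen.add(sensor_id)
    return seen

deployed = []
manage(["m1", "m2", "m1", "m3"], deployed.append)
print(deployed)  # ['m1', 'm2', 'm3'] - each sensor deployed once
```

In practice `sensor_id_stream` would be a Kafka consumer, and `seen` would be recovered on restart by listing the Deployments that already carry a sensor label, so the service stays idempotent.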

So the goal is that the growth of the structure is driven by the incoming data - almost exactly as if a service were auto-scaled, except that each instance has its own config while sharing the same container image.

So far I haven't found anything that meets these requirements. In addition, it seems somewhat odd that a deployment in Kubernetes (the management service, in this case) can deploy other services to the cluster.
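For what it's worth, a workload inside the cluster creating other workloads is a well-established pattern (it is essentially what operators/controllers do). The management service would only need a ServiceAccount bound to a Role that permits managing Deployments - a minimal sketch, with placeholder names:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sensor-manager          # placeholder name
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-creator
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["create", "get", "list", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sensor-manager-can-deploy
subjects:
  - kind: ServiceAccount
    name: sensor-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deployment-creator
```

With the management service's pod running under this ServiceAccount, the in-cluster API credentials are mounted automatically and the service can create Deployments in its own namespace.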

Can anybody follow this approach, and what could a solution look like? Or, generally speaking: what would be a recommended approach to achieve automated scaling with per-instance configuration?

Thanks and best regards

