Scaling monorepo microservices

Greetings community!

We have a Codefresh pipeline that builds images for all microservices and deploys them to EKS using a Lambda function
Building the images looks something like this:

    build.docker.images:
        title: 'build.docker.images'
        type: 'build'
        stage: 'image'
        working_directory: '${{clone}}'
        image_name: 'image-name-prefix'
        registry: ${{AWS_DOCKER_REGISTRY}}
        dockerfile: 'ci/ci.Dockerfile'
        build_arguments:
            - DEPLOY_ENV=${{DEPLOY_ENV}}
        scale:
            service.A:
                working_directory: '${{clone}}/path/to/serviceA'
                title: 'service.A'
                tag: 'ms-service-a-${{CF_REVISION}}'
            service.B:
                working_directory: '${{clone}}/path/to/serviceB'
                title: 'service.B'
                tag: 'ms-service-b-${{CF_REVISION}}'

We currently have 20 such services
New services are being created on a weekly basis (the system is growing)

We want to add a “delta” capability to the pipeline - build and deploy only what changed
I already know how to programmatically list the services that have changed since the last deployment (dependencies and all)

How do we run only the (scaled) build-image steps of the changed services?
Can we avoid having to add each service to the pipeline?

Using a condition on each scale-step… I don’t love it but it’s a solution
What I really want is to scale steps dynamically (according to an env var exported from a freestyle step, maybe) - is this a sane thought?


I have the ArgoCD docs open in the next tab, I’ll read about it


This is similar to this question Need to build multiple docker images

Basically you need to use our monorepo support and have a single pipeline (with a single service) that automatically gets triggered according to what was changed. See more here How To Use Codefresh with Mono-Repos

Alternatively if you can programmatically find what was changed you can simply call your own pipeline with codefresh-run Codefresh | codefresh-run step
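
A codefresh-run step for that could look roughly like this (a sketch only; the child pipeline name, the earlier freestyle step, and the `CHANGED_SERVICES` variable are assumptions, not part of the original setup):

```yaml
# Sketch: trigger a child build pipeline once per changed service.
# Assumes a previous freestyle step exported CHANGED_SERVICES.
run_changed_builds:
  title: 'Build only changed services'
  type: codefresh-run
  arguments:
    PIPELINE_ID: 'my-project/build-one-service'   # hypothetical child pipeline
    VARIABLE:
      - SERVICE_NAME=${{CHANGED_SERVICES}}
      - CF_REVISION=${{CF_REVISION}}
```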

Finally I see that you are passing the environment to the build. This is an anti-pattern (having docker images that know their environment)

See anti-pattern 2 here Kubernetes Deployment Antipatterns - part 1 - or anti-pattern 5 here Docker anti-patterns - for more details.

Hey Kostis
read many of your articles

  • Single pipeline per service:
  1. Deploying is a transaction: if one of the services fails to build (TypeScript, Dockerfile), none should be deployed
  2. Managing 50-plus pipelines seems daunting, even if the process of creating new pipelines is automatic
  3. We don’t always want to provide backward compatibility - that is one of the perks of using a monorepo
  • codefresh-run
    I used it in the past and gained major performance improvements by going back to a single pipeline
    Also easier to understand and maintain

  • env vars in the image
    Yes, we are aware
    Currently, services are reading configuration from code (.env files)
    We will be addressing this in the near future by either:

  1. A central configuration service
  2. Setting env vars in the kube YAML (dev/staging/prod have separate, per-service files)
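
For the kube-YAML option, a minimal Deployment sketch might look like this (file path, names, and image are hypothetical); the point is that the image itself stays environment-agnostic:

```yaml
# deploy/staging/service-a.yaml (hypothetical path; one file per env/service)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-a
spec:
  selector:
    matchLabels:
      app: service-a
  template:
    metadata:
      labels:
        app: service-a
    spec:
      containers:
        - name: service-a
          image: registry.example.com/service-a:abc123  # same image in every env
          env:
            - name: DEPLOY_ENV   # environment injected at deploy time, not baked in
              value: staging
```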

Let’s try to get past the “lifestyle” decisions we took and think about a more dynamic way to handle a multi-service pipeline

Tal Faitlov


If you can list the services yourself you can simply create a dynamic step that builds images on the fly. See an example of dynamic git cloning here Steps · Codefresh | Docs

You can also parse the Github event yourself for more details Git Triggers · Codefresh | Docs (Some people also use git diff commands in a freestyle step)

In general however I think it is much easier to simply build all services and let the caching mechanisms do the heavy lifting. Sometimes runtime dependencies and build dependencies are completely different.

The list of changed services that need to be deployed is based on git diff - via pnpm’s “changed since” filter
We are aiming for a system where all runtime APIs are described by code
Everything has an SDK, nothing is being “guessed”
So runtime dependencies are statically discoverable at build time in our fully-typed monorepo
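
For reference, a freestyle step computing that list could be sketched like this (the filter ref, image, and variable name are assumptions; pnpm’s `"...[<ref>]"` filter selects packages changed since `<ref>` plus their dependents):

```yaml
# Sketch: export the changed-services list for later steps via cf_export
list_changed:
  title: 'List changed services'
  image: node:18
  working_directory: '${{clone}}'
  commands:
    # "...[origin/main]" = packages changed since origin/main, plus dependents
    - CHANGED=$(pnpm ls -r --depth -1 --filter "...[origin/main]" --json | node -e "let d='';process.stdin.on('data',c=>d+=c).on('end',()=>console.log(JSON.parse(d).map(p=>p.name).join(',')))")
    - cf_export MICRO_SERVICES_TO_DEPLOY=$CHANGED
```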

Caching alone won’t cut a 30-minute build & deploy for 50 services down to less than 2 minutes for a single-service commit, no sir, no way

We have an automatically triggered Test-Only pipeline validating PRs
Support for running according to changed packages (and their dependents) was added three weeks ago
The improvement is very noticeable, this is a developer-experience must-have

How do I write a dynamic step to run parallel build-image steps (like “scale:” does)?
Because the below times 50 in my pipeline is just not cutting it:

                my.service:
                    working_directory: '${{clone}}/service/is/here/'
                    title: 'my service'
                    tag: 'my-service-name-${{CF_REVISION}}'
                    when:
                        condition:
                            all:
                                shouldDeploy: 'includes("${{MICRO_SERVICES_TO_DEPLOY}}", "@my/service-name") == true'

PS: a shorthand ‘when’ version would be great, as would being able to base conditions on filesystem queries


How do I write a dynamic step to run parallel build-image steps (like “scale:” does)?

You need to create your own Custom step and use a “range” template. See example here step-examples/multi-clone-step.yml at master · kostis-codefresh/step-examples · GitHub

Let me know if you need more clarifications.

Yes, please:
A. How can the arguments be dynamic?
(according to output from a script called from the pipeline on the cloned repo)
B. How can the steps be executed in parallel?
C. How can the dynamic, parallel steps be displayed within the pipeline as separate steps?


First of all you need to read the guide about creating a plugin.

Then focus on the stepsTemplate section that allows you to use the Gomplate library for templating.

The example in the docs shows how to dynamically create clone steps according to a list of repositories. You need to do something similar for creating build steps according to a list or variable as you want.
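
Adapted to builds, the stepsTemplate could be sketched roughly like this (an assumption-heavy sketch modeled on the multi-clone example, not verified syntax; the `[[ ]]` template delimiters, argument name, and paths may differ in the actual example):

```yaml
# Sketch of a custom step-type generating one build step per service
kind: step-type
version: '1.0'
metadata:
  name: myorg/parallel-builds   # hypothetical name
spec:
  stepsTemplate: |-
    [[ range $svc := .Arguments.SERVICES ]]
    build_[[ $svc ]]:
      type: build
      title: 'Build [[ $svc ]]'
      working_directory: '${{clone}}/path/to/[[ $svc ]]'
      image_name: 'image-name-prefix'
      tag: 'ms-[[ $svc ]]-${{CF_REVISION}}'
    [[ end ]]
```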

I suggest you open a request ticket. Either another customer has already requested this, or our support team will have more pointers.

To answer your questions:

A. See the gomplate syntax. It can work with dynamic arguments such as lists/arrays/objects
B. Just use the “parallel” syntax in your template as you do now
C. I am not sure if this is supported or not. Our support team will know

We wanted to do exactly the same thing.

We ended up writing a script that creates a pipeline steps file, then in another step we run something like:

codefresh run myDynamicChildPipeline --yaml=dynamicallyCreatedStepsFile.yaml

We previously created myDynamicChildPipeline with empty steps like this:

# This is created at runtime via the Parent pipeline.

version: 1.0
steps: {}
stages: []
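
The parent side of that approach might be sketched like this (the generator script and its location are hypothetical; `codefresh run --yaml` is the same call shown above):

```yaml
# Sketch: generate a steps file per changed service, then run the child
generate_steps:
  title: 'Generate build steps'
  image: node:18
  working_directory: '${{clone}}'
  commands:
    # generate-steps.js (hypothetical) emits one build step per changed service
    - node ci/generate-steps.js > dynamicallyCreatedStepsFile.yaml

run_child:
  title: 'Run dynamic child pipeline'
  image: codefresh/cli
  working_directory: '${{clone}}'
  commands:
    - codefresh run myDynamicChildPipeline --yaml=dynamicallyCreatedStepsFile.yaml --branch=${{CF_BRANCH}}
```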

We tried this approach and we ran into some problems.

One of them related to accessing the credentials for the Google Container Registry, which was a bug in Codefresh that was fixed later. I can’t remember what other problems we ran into.