
Pipelines

Pipelines are a core feature of Microtica and the most crucial part of every software delivery automation. With pipelines you define the process your source code goes through on its way from your local machine to production.

A pipeline is a composition of multiple steps executed in sequence and/or in parallel (see the sketch after this list). Each step performs a specific action, such as:

  • Compile and package the code
  • Run unit and integration tests
  • Run code quality checks
  • Build and push Docker images
  • Deploy services on Kubernetes
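
For example, a minimal two-step pipeline that compiles the code and then runs the tests could be sketched like this (step names, images, and commands are illustrative; see Pipeline Syntax for the exact schema):

```yaml
# Illustrative microtica.yaml sketch; see Pipeline Syntax for the exact schema
steps:
  Build:
    image: node:18        # runtime image for this step
    commands:
      - npm ci            # install dependencies
      - npm run build     # compile and package the code
  Test:
    image: node:18
    commands:
      - npm test          # run unit and integration tests
```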

How Microtica pipelines work

Unlike traditional solutions, where you need to spin up and manage dedicated VMs to execute pipeline actions, Microtica takes a cloud-native approach to software delivery automation, using Docker as the runtime for pipeline step execution.

With Microtica pipelines you don’t have to maintain complex infrastructure for your automation. Because each step runs in Docker, a single pipeline can flexibly combine different frameworks and tools.

In one step you can use the node image as the runtime environment to compile your Node.js application, and in the next step you can use the hashicorp/terraform image, which comes with the Terraform CLI pre-installed, to perform Terraform operations.
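
A mixed-runtime pipeline of that kind could be sketched as follows (images, step names, and commands are illustrative):

```yaml
# Illustrative sketch: each step declares its own runtime image
steps:
  BuildApp:
    image: node:18                  # Node.js toolchain to compile the application
    commands:
      - npm ci && npm run build
  PlanInfrastructure:
    image: hashicorp/terraform:1.5  # ships with the Terraform CLI pre-installed
    commands:
      - terraform init
      - terraform plan
```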

(Diagram: How Microtica pipelines work)


Each step spins up a new Docker container using the image specified for that step.

The Docker container lives until all actions within the step are completed; after that, the container is killed and deleted, and anything stored inside it is no longer available.

Microtica provides a Pipeline shared state that preserves state between steps throughout the pipeline execution.

Pipeline shared state

To preserve state between steps, Microtica uses Docker volumes.

Sharing state between steps is useful when you clone the source code from Git in one step and compile and test the code in another step.

The shared state can be accessed from any step within a pipeline at the /microtica/shared path. Anything that any step writes to this folder is available to all other steps in that pipeline.
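
For instance, one step can clone the repository into the shared path and a later step can build from it (an illustrative sketch; the repository URL is hypothetical):

```yaml
# Illustrative sketch: files under /microtica/shared survive across steps
steps:
  Clone:
    image: alpine/git     # small image that ships with the git CLI
    commands:
      - git clone https://github.com/example/app.git /microtica/shared/app  # hypothetical repo
  Build:
    image: node:18
    commands:
      - cd /microtica/shared/app && npm ci && npm run build   # source cloned by the previous step
```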

(Diagram: Pipeline shared state)


The diagram shows that if the first step stores a file named index.js in the shared state, the same file will be available to the second step at the /microtica/shared/index.js path.

Pipeline artifacts

Artifacts persist a step’s output even after the step has completed. Pipeline artifacts are typically used to store deployment packages.

(Diagram: Pipeline artifacts)


Each step can define one or multiple artifact packages within microtica.yaml.

For example, an artifact can be declared in microtica.yaml like this (an illustrative sketch; the exact keys are documented in Pipeline Artifacts):
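```yaml
# Illustrative sketch of an artifact definition; exact keys are in Pipeline Artifacts
steps:
  Build:
    image: node:18
    commands:
      - npm run build      # assume the build output lands in ./dst
    artifacts:
      primary:             # artifact name
        paths:
          - ./dst          # package everything within the /dst folder
```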


With this spec we are telling Microtica to package everything within the /dst folder and store it as an artifact named primary. The stored artifact can then be used for deployment or downloaded from the Microtica GUI.

Learn more about artifacts and the different output configurations in Pipeline Artifacts.

Updated 03 Mar 2023