Build, Test, Migrate: MySQL Service Containers and Atlas in GitHub CI
Learn how to streamline your CI pipeline with GitHub Actions by spinning up MySQL service containers and running seamless schema migrations using Atlas. This guide walks you through a reproducible setup that eliminates external dependencies and lets your team manage database changes with confidence.

Modern CI workflows thrive on reliability and speed—and that includes your database migrations. In this guide, you'll learn how to spin up a MySQL container in GitHub Actions and pair it with Atlas to automate and validate schema changes across multiple environments.
By containerizing your database and embedding schema validation directly into your CI pipeline, you'll eliminate manual steps, boost collaboration, and ensure your schemas evolve safely as your codebase grows. No external DB setup, no guesswork—just clean, reproducible migrations done right.
Introduction
This guide provides an example workflow that uses the Docker Hub `mysql` image to configure a service container. The workflow connects to the MySQL service, creates a schema, assigns grants to a user, and runs Atlas migration tools against two schemas.

Note: Service containers require a Linux runner. On GitHub-hosted runners, choose an Ubuntu runner; on self-hosted runners, the machine must run Linux and have Docker installed.
Setting up the service container
To use MySQL as a service container within your workflow, define it in your workflow file using the services
key. Here's a basic example:
```yaml
jobs:
  test-migrations:
    runs-on: ubuntu-latest
    services:
      mysql:
        image: mysql:8
        env:
          MYSQL_DATABASE: 'mydb'
          MYSQL_USER: ${{ env.DB_USER }}
          MYSQL_PASSWORD: ${{ env.DB_PASSWORD }}
          MYSQL_ROOT_PASSWORD: ${{ env.ROOT_PASSWORD }}
        options: >-
          --health-cmd="mysqladmin ping"
          --health-interval=10s
          --health-timeout=5s
          --health-retries=3
        ports:
          - 3307:3306
```
This configuration pulls the official MySQL image, sets environment variables for the initial credentials, and maps container port 3306 to host port 3307, so your job steps can connect to the database at `127.0.0.1:3307`.
Note: We’ve included health check options to ensure the container isn't prematurely marked as available. Without them, the service would appear ready as soon as the container starts—even if initialization isn’t complete. By defining a health check, the job will wait until the container reaches a healthy state before executing any steps.
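If you want an explicit sanity check on top of the Docker health check, a step like the following can confirm the server accepts connections before anything else runs. This is a sketch, not part of the workflow above; the port and password values assume the `3307:3306` mapping and the `ROOT_PASSWORD` variable defined later in this guide:

```yaml
- name: Wait for MySQL to accept connections
  run: |
    # Retry for up to ~30s; mysqladmin ping exits non-zero until the server is up.
    for i in $(seq 1 30); do
      mysqladmin ping -h 127.0.0.1 -P 3307 -u root -p"${{ env.ROOT_PASSWORD }}" && exit 0
      sleep 1
    done
    echo "MySQL never became ready" >&2
    exit 1
```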
Incorporating the service container within our workflow
Once the container is defined, your job can interact with it using standard MySQL client tools or SDKs. You can then run scripts to prepare schemas and grant permissions:
```yaml
steps:
  [...]
  - name: Initialize databases
    run: |
      mysql -h 127.0.0.1 -P${{ env.DB_PORT }} -u root -p${{ env.ROOT_PASSWORD }} \
        -e "CREATE DATABASE mydb2; GRANT ALL ON mydb2.* TO testuser;"
      # Load baseline schemas
      mysql -h 127.0.0.1 -P${{ env.DB_PORT }} -u root -p${{ env.ROOT_PASSWORD }} \
        mydb < migrations/mydb/20250101_baseline.sql
      mysql -h 127.0.0.1 -P${{ env.DB_PORT }} -u root -p${{ env.ROOT_PASSWORD }} \
        mydb2 < migrations/mydb2/20250101_baseline.sql
```
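For context, a baseline file such as `migrations/mydb/20250101_baseline.sql` captures the schema state your migrations start from. The contents below are purely illustrative; the table and columns are hypothetical stand-ins for whatever your project's real baseline contains:

```sql
-- Hypothetical baseline: tables assumed to already exist in production.
CREATE TABLE users (
  id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  email VARCHAR(255) NOT NULL UNIQUE,
  created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);
```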
Configuring Atlas using environment variables
Atlas can be configured via environment variables within your job. This simplifies credential management and ensures your workflow remains portable. Here's how you might set it up:
```hcl
locals {
  db_user = getenv("DB_USER")
  db_url  = getenv("DB_URL")
  db_pass = getenv("DB_PASSWORD")
  db_name = getenv("DB_NAME")
}

env "local" {
  url = "mysql://${local.db_user}:${local.db_pass}@${local.db_url}/${local.db_name}"

  format {
    migrate {
      diff = "{{ sql . \" \" }}"
    }
    schema {
      inspect = "{{ sql . \" \" }}"
    }
  }

  migration {
    // URL where the migration directory resides.
    dir      = "file://migrations/${local.db_name}/"
    baseline = "20250101"
  }
}
```
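Because both the connection URL and the migration directory are derived from `DB_NAME`, switching schemas is just a matter of changing one variable. The shell snippet below mirrors the interpolation Atlas performs in `atlas.hcl`, using the same example values the workflow's `env` block defines, purely as an illustration of how the pieces combine:

```shell
# Illustration only: mirrors the URL interpolation in atlas.hcl.
# Values match the workflow's env block; only DB_NAME changes per step.
DB_USER=testuser
DB_PASSWORD=userpassword
DB_URL=localhost:3307
DB_NAME=mydb
echo "mysql://${DB_USER}:${DB_PASSWORD}@${DB_URL}/${DB_NAME}"
# → mysql://testuser:userpassword@localhost:3307/mydb
```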
Incorporating Atlas into the workflow
With everything in place, you're ready to use Atlas. Below are sample steps that apply the migrations for each schema:
```yaml
steps:
  [...]
  - name: Install Atlas migration tool
    run: curl -sSf https://atlasgo.sh | sh
  - name: Apply mydb migrations
    run: |
      DB_NAME=mydb atlas migrate apply --env local -c file://atlas.hcl
  - name: Apply mydb2 migrations
    run: |
      DB_NAME=mydb2 atlas migrate apply --env local -c file://atlas.hcl
```
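If you also want the pipeline to report whether each database is fully in sync with its migration directory, Atlas's `migrate status` command can be run as a follow-up. This is a sketch of an optional extra step, reusing the same env-driven configuration:

```yaml
- name: Report migration status
  run: |
    DB_NAME=mydb atlas migrate status --env local -c file://atlas.hcl
    DB_NAME=mydb2 atlas migrate status --env local -c file://atlas.hcl
```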
Summary
By combining MySQL service containers with Atlas in GitHub Actions, you've created a lightweight, reproducible environment for schema migrations. This setup enables robust CI workflows that stay consistent across teams and environments—without relying on external databases or manually managed scripts.
👉 For reference, the complete GitHub Actions workflow is available below to help you implement this setup seamlessly.
```yaml
name: Database Migration Tests
on:
  pull_request:
env:
  DB_USER: 'testuser'
  DB_PASSWORD: 'userpassword'
  DB_PORT: '3307'
  DB_URL: 'localhost:3307'
  ROOT_PASSWORD: 'rootpassword'
jobs:
  test-migrations:
    runs-on: ubuntu-latest
    services:
      mysql:
        image: mysql:8
        env:
          MYSQL_DATABASE: 'mydb'
          MYSQL_USER: ${{ env.DB_USER }}
          MYSQL_PASSWORD: ${{ env.DB_PASSWORD }}
          MYSQL_ROOT_PASSWORD: ${{ env.ROOT_PASSWORD }}
        options: >-
          --health-cmd="mysqladmin ping"
          --health-interval=10s
          --health-timeout=5s
          --health-retries=3
        ports:
          - 3307:3306
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Install Atlas migration tool
        run: curl -sSf https://atlasgo.sh | sh
      - name: Initialize databases
        run: |
          mysql -h 127.0.0.1 -P${{ env.DB_PORT }} -u root -p${{ env.ROOT_PASSWORD }} \
            -e "CREATE DATABASE mydb2; GRANT ALL ON mydb2.* TO testuser;"
          # Load baseline schemas
          mysql -h 127.0.0.1 -P${{ env.DB_PORT }} -u root -p${{ env.ROOT_PASSWORD }} \
            mydb < migrations/mydb/20250101_baseline.sql
          mysql -h 127.0.0.1 -P${{ env.DB_PORT }} -u root -p${{ env.ROOT_PASSWORD }} \
            mydb2 < migrations/mydb2/20250101_baseline.sql
      - name: Apply mydb migrations
        run: |
          DB_NAME=mydb atlas migrate apply --env local -c file://atlas.hcl
      - name: Apply mydb2 migrations
        run: |
          DB_NAME=mydb2 atlas migrate apply --env local -c file://atlas.hcl
```