Fast feedback loops are essential for healthy engineering teams. Slow CI pipelines reduce productivity, delay releases, and make small changes feel expensive. Fortunately, GitHub Actions workflows for Node.js projects can be optimized significantly with a few practical techniques.
This guide walks step by step through improving a basic pipeline, starting with no optimization and progressively adding:
- Dependency caching with `~/.npm`
- Conditional execution with `dorny/paths-filter`
- Parallel job execution
Each step builds on the previous one so you can adopt them incrementally.
The Node.js application itself is irrelevant. The focus is strictly on CI performance.
## The initial pipeline
Let’s begin with a minimal workflow that installs dependencies and runs tests on every push and pull request.
```yaml
name: CI

on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm test
```
### Problems
This pipeline works, but it is inefficient:
- Dependencies are downloaded every run
- Tests run even when irrelevant files change
- Everything runs sequentially
On medium or large projects, this often costs several minutes per run.
Now let’s improve it step by step.
## Step 1: Cache dependencies using `~/.npm`
Installing dependencies is usually the slowest part of a Node.js workflow. Even with `npm ci`, packages must be downloaded every time.
GitHub Actions caching can reuse the local npm cache between runs.
### Why cache `~/.npm` instead of `node_modules`
Caching `node_modules` is unreliable due to:
- OS differences
- native modules
- lockfile mismatches
Caching `~/.npm` is safer because it stores downloaded tarballs, not built artifacts. `npm ci` remains deterministic while downloads become much faster.
### Updated workflow
```yaml
name: CI

on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Cache npm
        uses: actions/cache@v4
        with:
          path: ~/.npm
          key: npm-${{ runner.os }}-${{ hashFiles('package-lock.json') }}
          restore-keys: |
            npm-${{ runner.os }}-
      - run: npm ci
      - run: npm test
```
### Result
After the first run, you can expect dependency installation to drop from minutes to seconds. Thanks to the `restore-keys` fallback, even a changed `package-lock.json` starts from the most recent cache for that OS rather than an empty one, so `npm ci` only downloads what changed.
This is usually the highest-impact optimization and the one to implement first.
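As an aside, `actions/setup-node@v4` can manage this cache for you: its `cache` input stores the npm cache directory keyed on your lockfile, replacing the explicit `actions/cache` step:

```yaml
- uses: actions/setup-node@v4
  with:
    node-version: 20
    # Built-in equivalent of the explicit cache step above:
    # caches ~/.npm keyed on the lockfile hash.
    cache: npm
```

The explicit `actions/cache` step gives you finer control over keys and fallbacks; the built-in input is less configuration. Either way, the effect is the same.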
## Step 2: Run the pipeline only when needed
Even a fast pipeline wastes time when triggered unnecessarily.
For example:
- README edits
- documentation updates
These changes rarely require tests. `dorny/paths-filter` lets you run jobs only when relevant files change.
### Add the filter
First, add a step that detects which parts of the repository changed.
```yaml
- uses: dorny/paths-filter@v3
  id: filter
  with:
    filters: |
      app:
        - 'src/**'
        - 'package.json'
        - 'package-lock.json'
      docs:
        - 'docs/**'
        - '**/*.md'
```
### Conditionally run the job
This step lives in its own `changes` job, which re-exports the step outputs as job outputs (see the full example below). The `test` job then declares that dependency with `needs` and gates itself on the output:

```yaml
jobs:
  test:
    needs: changes
    if: ${{ needs.changes.outputs.app == 'true' }}
```
### Full example with filtering
```yaml
name: CI

on:
  push:
    branches: [main]
  pull_request:

jobs:
  changes:
    runs-on: ubuntu-latest
    outputs:
      app: ${{ steps.filter.outputs.app }}
    steps:
      - uses: actions/checkout@v4
      - uses: dorny/paths-filter@v3
        id: filter
        with:
          filters: |
            app:
              - 'src/**'
              - 'package.json'
              - 'package-lock.json'

  test:
    needs: changes
    if: ${{ needs.changes.outputs.app == 'true' }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - uses: actions/cache@v4
        with:
          path: ~/.npm
          key: npm-${{ runner.os }}-${{ hashFiles('package-lock.json') }}
      - run: npm ci
      - run: npm test
```
### Result
The pipeline simply does not run when only documentation or unrelated files change. This saves compute time and gives contributors faster feedback.
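The `docs` filter defined earlier shows how the pattern extends: expose it as a second output and gate a docs-only job on it. A minimal sketch, where the `markdownlint-cli` check stands in for whatever docs tooling you actually use:

```yaml
jobs:
  changes:
    runs-on: ubuntu-latest
    outputs:
      docs: ${{ steps.filter.outputs.docs }}
    steps:
      - uses: actions/checkout@v4
      - uses: dorny/paths-filter@v3
        id: filter
        with:
          filters: |
            docs:
              - 'docs/**'
              - '**/*.md'

  docs:
    needs: changes
    if: ${{ needs.changes.outputs.docs == 'true' }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Illustrative check; substitute your actual docs tooling.
      - run: npx markdownlint-cli '**/*.md'
```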
## Step 3: Run jobs in parallel
Many pipelines still run tasks sequentially:
- lint
- test
- build
Running these one after another increases total time even though they are independent.
GitHub Actions supports parallel jobs by default. Splitting tasks across jobs often cuts runtime by half or more.
### Before
```yaml
- run: npm run lint
- run: npm test
- run: npm run build
```
All three steps run sequentially in one job.
### After: split into parallel jobs
```yaml
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - uses: actions/cache@v4
        with:
          path: ~/.npm
          key: npm-${{ runner.os }}-${{ hashFiles('package-lock.json') }}
      - run: npm ci
      - run: npm run lint

  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - uses: actions/cache@v4
        with:
          path: ~/.npm
          key: npm-${{ runner.os }}-${{ hashFiles('package-lock.json') }}
      - run: npm ci
      - run: npm test

  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - uses: actions/cache@v4
        with:
          path: ~/.npm
          key: npm-${{ runner.os }}-${{ hashFiles('package-lock.json') }}
      - run: npm ci
      - run: npm run build
```
### Result
Instead of:
- 2 min lint
- 4 min test
- 2 min build
- total: 8 min
You get:
- all in parallel
- total: about 4 min
The longest job determines the total time.
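Parallelism also doesn’t have to be all-or-nothing. If, say, the build should only run after lint and tests pass, `needs` serializes exactly that edge while lint and test still run side by side. A sketch of the `build` job from above with that change:

```yaml
  build:
    # Waits for lint and test; those two still run in parallel.
    needs: [lint, test]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run build
```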
## Final optimized structure
By combining:
- npm caching
- path filtering
- parallel jobs
You achieve:
- fewer runs
- faster dependency installs
- shorter wall clock time
This leads to a much more responsive developer experience.
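Putting the three techniques together, the end state looks roughly like the workflow below. Treat it as a sketch rather than a drop-in file: the filter paths and the `lint`, `test`, and `build` scripts are the ones assumed throughout this guide, and it uses the `setup-node` `cache` input from the aside in Step 1 to avoid repeating the cache step three times.

```yaml
name: CI

on:
  push:
    branches: [main]
  pull_request:

jobs:
  # Detect whether application code changed at all.
  changes:
    runs-on: ubuntu-latest
    outputs:
      app: ${{ steps.filter.outputs.app }}
    steps:
      - uses: actions/checkout@v4
      - uses: dorny/paths-filter@v3
        id: filter
        with:
          filters: |
            app:
              - 'src/**'
              - 'package.json'
              - 'package-lock.json'

  # lint, test, and build all gate on the filter and run in parallel.
  lint:
    needs: changes
    if: ${{ needs.changes.outputs.app == 'true' }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - run: npm ci
      - run: npm run lint

  test:
    needs: changes
    if: ${{ needs.changes.outputs.app == 'true' }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - run: npm ci
      - run: npm test

  build:
    needs: changes
    if: ${{ needs.changes.outputs.app == 'true' }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - run: npm ci
      - run: npm run build
```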