pnpm workspaces: the CI cache that survived the fix and cost me 40 minutes per build
I finished my previous post convinced the monorepo was solid. Tests green, deploy successful, pnpm workspaces configured exactly as the docs say. Went to bed happy.
Next morning I checked the third CI run and saw this in the logs:
Cache not found for input keys: node-modules-cache-abc123
Run pnpm install --frozen-lockfile
...
Progress: resolved 847, reused 0, downloaded 847, added 847
reused 0. Eight hundred and forty-seven packages downloaded from scratch. Forty minutes of build time where it should've been eight.
My thesis, before I get into the details: pnpm's cache in GitHub Actions does not work out-of-the-box with monorepos. Not because pnpm is broken — pnpm is excellent, I'll say that without ambiguity — but because the store-dir in CI behaves differently than it does locally, and most people never configure it explicitly. That invisible difference destroys any cache strategy that doesn't account for it.
The real problem: pnpm store-dir in CI isn't where you think it is
When you run pnpm install on your machine, the global store lives at ~/.local/share/pnpm/store (Linux) or ~/Library/pnpm/store (macOS). Every project on your system shares that store — if a package already exists, pnpm links it with hard links. Instantaneous.
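A quick way to see this on your own machine (the exact path varies by OS and pnpm version, so treat the output as illustrative):
# Where pnpm keeps its content-addressable store on this machine
pnpm store path
# e.g. /home/you/.local/share/pnpm/store/v3 (illustrative), shared by every project on the box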
In GitHub Actions, the runner starts clean on every execution. There's no previous store. So pnpm has two possible behaviors:
- Without explicit configuration: pnpm picks a dynamic path for the store — sometimes inside the workspace, sometimes in a temp dir on the runner. The path changes between runners and between runs.
- With an explicit --store-dir: pnpm always uses exactly that path. You can cache that path with actions/cache and restore it on the next run.
The problem with the first case is that actions/cache needs a fixed path to work. If the store path varies, the restore never matches even when the key is identical. The cache exists in GitHub's cache storage, but it never gets restored because pnpm is looking in a different directory.
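If you want to see this for yourself before fixing anything, a throwaway diagnostic step is enough (not part of the fix, just a way to watch the path move between runs):
# Diagnostic only: print the store path this particular runner ended up with,
# then compare it across two consecutive runs
- name: Show pnpm store path
  run: pnpm store path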
This is exactly what pnpm's official CI documentation covers — but it's buried in the advanced configuration section, not in the quickstart that everyone copies.
The YAML before the fix: what everyone was copying
This was the workflow I had, assembled from a handful of tutorials:
# workflow BEFORE — broken cache in monorepo
name: CI
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4
        with:
          version: 9
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          # ⚠️ cache: 'pnpm' here looks like it does something, but it doesn't configure store-dir
          cache: 'pnpm'
      - name: Install dependencies
        run: pnpm install --frozen-lockfile
      - name: Build
        run: pnpm run build
The cache: 'pnpm' in setup-node caches node_modules at the root project level. In a monorepo with workspaces, that's not enough: each package has its own node_modules with symlinks pointing back to the global store. If the store doesn't restore correctly, those symlinks point to nothing and pnpm reinstalls everything.
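You can check those links in any pnpm workspace; the exact layout below is illustrative and depends on the pnpm version and hoisting settings:
# Illustrative: a workspace dependency is a symlink into the repo's virtual store...
ls -l packages/ui/node_modules/react
# -> ../../../node_modules/.pnpm/react@19.0.0/node_modules/react
# ...and the virtual store itself is hard-linked from the global store:
pnpm store path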
The cache miss in the logs looked like this:
##[group]Cache not found
Key: node-modules-pnpm-store-Linux-abc1234def5678
Restore keys attempted:
node-modules-pnpm-store-Linux-
node-modules-pnpm-store-
Cache Size: ~0 B
##[endgroup]
Cache restored: zero bytes. Every run started from scratch.
The YAML after: explicit store-dir and workspace lockfile hashing
The fix requires three concrete changes:
# workflow AFTER — cache that actually works in a monorepo
name: CI
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    env:
      # Fixed store path — critical so actions/cache always finds the same thing
      PNPM_STORE_PATH: ~/.pnpm-store
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4
        with:
          version: 9
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          # No cache: 'pnpm' here — we manage it manually below
      - name: Get pnpm store path
        id: pnpm-cache
        run: |
          # Force the explicit store-dir so the path is predictable
          pnpm config set store-dir $PNPM_STORE_PATH
          echo "store-path=$PNPM_STORE_PATH" >> $GITHUB_OUTPUT
      - name: Restore pnpm store cache
        uses: actions/cache@v4
        with:
          path: ${{ steps.pnpm-cache.outputs.store-path }}
          # Key includes lockfile hash — invalidates when dependencies change
          key: pnpm-store-${{ runner.os }}-${{ hashFiles('**/pnpm-lock.yaml') }}
          # Broader restore key in case the lockfile changed partially
          restore-keys: |
            pnpm-store-${{ runner.os }}-
      - name: Install dependencies
        run: pnpm install --frozen-lockfile
      - name: Build workspaces
        run: pnpm run -r build
      - name: Tests
        run: pnpm run -r test
The critical changes are in three places:
1. PNPM_STORE_PATH as a fixed environment variable. Without this, every runner picks its own path. With this, the store always lives at ~/.pnpm-store and actions/cache knows exactly what to restore.
2. pnpm config set store-dir before install. Defining the environment variable isn't enough — you have to explicitly tell pnpm to use that path. This is the line missing from 90% of the examples I found.
3. hashFiles('**/pnpm-lock.yaml'). The ** matters. In a monorepo you can have lockfiles per workspace in addition to the root one. With **/pnpm-lock.yaml, the cache key changes if any lockfile in the repo changes. With just pnpm-lock.yaml, you miss changes in nested workspaces.
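If you'd rather not hard-code the path at all, an alternative sketch (not what I ended up running) is to ask pnpm where its store is and cache whatever it answers:
# Sketch: derive the store path from pnpm instead of forcing PNPM_STORE_PATH
- name: Get pnpm store path
  id: pnpm-cache
  shell: bash
  run: echo "store-path=$(pnpm store path --silent)" >> "$GITHUB_OUTPUT"
It saves one hard-coded value, but you give up the guarantee that the path is identical across runner images, which is the whole point of pinning it.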
The gotchas nobody documents
A broad restore-keys can do more harm than good
With restore-keys: pnpm-store-${{ runner.os }}- you're telling GitHub Actions "if you can't find the exact key, use the most recent cache that matches this prefix." Sounds reasonable. The problem is a partially-restored store (from a different lockfile) can cause subtle conflicts where pnpm thinks a package is installed but it's missing a transitive dependency.
My solution: use the broad restore-key only to reduce initial download time, but always run pnpm install --frozen-lockfile afterwards. The --frozen-lockfile guarantees consistency even if the store is partially stale.
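One extra guard worth considering, as an optional sketch on top of the workflow above: prune the store after install so a prefix-matched cache doesn't keep accumulating packages that no lockfile references anymore.
# Optional: drop packages nothing in the repo references, so a store restored
# via the broad restore-key doesn't grow run after run
- name: Prune pnpm store
  run: pnpm store prune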
pnpm run -r build doesn't respect dependency order between workspaces by default
If apps/web depends on packages/ui, you need packages/ui to build first. pnpm run -r build runs in parallel by default. The fix:
# Respect the workspace dependency graph order
- name: Build in topological order
  run: pnpm run --filter="..." --workspace-concurrency=1 build
# Or better yet, using the --sort flag:
# pnpm run -r --sort build
The --sort flag makes pnpm respect the workspace dependency graph. Without it, in a monorepo with shared packages you'll see import errors because the package you depend on hasn't been compiled yet. One caveat: --sort can only order what's declared, so if apps/web doesn't list packages/ui in its package.json, no flag will fix the order.
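Related, and easy to confuse with the bare --filter above: pnpm's filter syntax can also scope the build to one app plus everything it depends on. The package name here is hypothetical:
# Hypothetical package name: build "web" and all of its workspace dependencies,
# dependencies first
pnpm --filter "web..." run build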
The cache is saved at the end of the job, not the beginning
This is actions/cache behavior that burns a lot of people: the cache is persisted when the job finishes successfully. If the job fails on the build step (after installing dependencies), the new store cache doesn't get saved. The next run downloads everything again.
To mitigate this, you can split install into its own job:
jobs:
  install:
    runs-on: ubuntu-latest
    steps:
      # Only installs and caches — always finishes successfully if deps are fine
      ...
  build:
    needs: install
    runs-on: ubuntu-latest
    steps:
      # Restores the cache from the previous job and builds
      ...
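The piece that makes the split worthwhile is the restore-only variant of actions/cache in the second job. A minimal sketch, assuming the install job saved the store under the same key and path as the workflow above:
# Build job: restore the store without ever writing it back,
# so a failed build can't clobber a good cache entry
- run: pnpm config set store-dir ~/.pnpm-store
- uses: actions/cache/restore@v4
  with:
    path: ~/.pnpm-store
    key: pnpm-store-${{ runner.os }}-${{ hashFiles('**/pnpm-lock.yaml') }}
- run: pnpm install --frozen-lockfile
The install job uses the symmetric actions/cache/save (or plain actions/cache) with the same key, so a successful install always publishes a fresh store even if a later job fails.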
The actual numbers
In a reproducible scenario with a three-workspace monorepo (apps/web, packages/ui, packages/config) and ~850 total dependencies:
| Configuration | Install time | Total CI time |
|---|---|---|
| No cache (downloads everything) | ~22 min | ~40 min |
| cache: 'pnpm' in setup-node (broken cache) | ~20 min | ~38 min |
| Explicit store-dir + lockfile hash | ~1.5 min | ~8 min |
The "broken cache" in the second row is the most treacherous case: the workflow shows the cache step exists, the log says "Cache found" on some runs, but the restore is partial. The time drops by barely 2 minutes because something is restored — just not enough to avoid most of the downloads.
The difference between 38 and 8 minutes is exactly the kind of overhead that accumulates silently. A team of four people, each opening ten PRs a week, is 40 runs at 30 wasted minutes each: 1,200 minutes of lost build time per week.
FAQ: pnpm workspaces cache GitHub Actions CI
Why doesn't cache: 'pnpm' in actions/setup-node work well with monorepos?
Because it caches the node_modules in the root directory but not pnpm's global store. In a monorepo with workspaces, each package has its own node_modules with symlinks pointing to the store. If the store doesn't restore correctly, pnpm detects the broken symlinks and reinstalls everything from scratch. The fix is to cache the store directly with actions/cache and an explicit path.
What path does the pnpm store use in GitHub Actions runners?
Without explicit configuration, it varies. On Ubuntu runners it might be at /home/runner/.local/share/pnpm/store or in a temp path inside the workspace. That's exactly why the first rule is to define store-dir explicitly with pnpm config set store-dir before running pnpm install.
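A quick sanity check you can drop into any job, assuming the pinned path from the workflow above:
pnpm config set store-dir ~/.pnpm-store
pnpm store path   # should now print the pinned path, not a runner-specific default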
What's the right cache key strategy for pnpm in a monorepo?
Use hashFiles('**/pnpm-lock.yaml') with the double-asterisk glob. This includes the root lockfile and any lockfiles in subdirectories. Combine it with runner.os to separate caches between Linux and macOS if you run on both. The broad restore-key without the hash works as a fallback but never as the primary key.
Do I need to change anything in pnpm-workspace.yaml for better cache behavior?
Not directly. pnpm-workspace.yaml defines the workspace structure, not store behavior. What does matter is that all packages have their dependencies properly declared in their respective package.json files. If a package uses a dependency that's only in the root without declaring it, pnpm might resolve it locally but fail in CI when the store is partially restored.
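If you do spot one of those undeclared dependencies, the fix is a one-liner per package (package names hypothetical):
# Declare packages/ui as a real dependency of apps/web instead of relying on hoisting
pnpm --filter web add ui --workspace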
Is it worth separating the install job from the build job?
Depends on the size of the monorepo. For repos with more than 500 dependencies and builds that fail frequently (tests, linting) — yes, it's worth it: it guarantees the cache gets persisted even when the build fails. For small repos where install is fast, it's unnecessary overhead.
Does this work the same with pnpm 9 and Node.js 22?
Yes. The store-dir configuration has been stable since pnpm 8. With pnpm/action-setup@v4 and actions/setup-node@v4 the setup is identical regardless of Node version. What changes between pnpm versions are some command flags — --workspace-concurrency was renamed at some point — but the cache logic is the same.
The uncomfortable thing nobody says about pnpm and CI
pnpm is the best option for monorepos — I said it when I compared pnpm vs npm vs yarn with real benchmarks and I stand by it. But it has a CI configuration curve that's genuinely frustrating because the errors are silent. The workflow "works" — CI doesn't explode, tests pass — but the cache is broken and nobody notices until someone actually pays attention to the timings.
The previous post about pnpm workspaces in a monorepo with Next.js 16 ended with CI green. This post is what was left unresolved: the cache that survived the initial fix and kept silently costing time on every run. The lesson isn't that pnpm is poorly documented — the official CI docs are clear if you read them completely. The lesson is that "CI working" and "CI working efficiently" are two completely different states, and the second one requires you to watch the numbers, not just the green checkmark.
If you're starting a new monorepo today, copy the fixed YAML directly. Don't use cache: 'pnpm' from setup-node as your only strategy. Configure store-dir before install. Use the **/pnpm-lock.yaml glob for the hash. That's ten extra lines that save thirty minutes per run.
For architectures where CI time matters at scale — and if you're designing distributed systems, it does — these infrastructure details are part of the job. The same rigor I apply to digital signature system design or to analyzing Jakarta EE vs Spring Boot tradeoffs applies here: reasonable defaults are rarely the correct defaults for real-world cases.