There's a specific kind of financial pain that hits differently than others. It's not the big invoice you see coming — it's the small one that arrives quietly, every month, getting slightly larger each time, while you're just trying to ship software. GitHub Actions billing is that pain. Eight-tenths of a cent per minute sounds like nothing until your team is running tests on every push, every PR, every merge, and you do the math at 11pm on a Tuesday. Suddenly you're paying $150 a month to run code on someone else's computer when you have perfectly good computers sitting right there.
The solution exists. GitHub has supported self-hosted runners for years. The documentation is thorough, technically accurate, and genuinely useful — if you already know what systemd is, why personal access tokens need careful scoping, and how to harden a Linux service against the specific failure modes of long-running background processes. For everyone else, it's a wall of technical prerequisites dressed up as a tutorial.
That's the gap this project fills. Not with a new technology, not with a clever framework — with a shell script that actually respects your time and assumes you'd rather be building your product than becoming a DevOps engineer by accident.
The Real Problem Was Never the Runner — It Was the Setup
When I sat down to solve the GitHub Actions cost problem, the temptation was to write a quick bash wrapper around the official runner installation steps. Twenty minutes, done, move on. But I kept running into the same thing: the failure modes weren't technical, they were human. People would get halfway through, hit an ambiguous error, and give up. Or they'd complete setup but misconfigure the token scope. Or they'd get it working once, not understand why, and be completely lost when something needed updating six months later.
There are plenty of bash wrappers for GitHub runner setup out there. Most of them are 50-line scripts that automate the commands from the docs and call it done. That's useful, but it misses the actual friction point. The problem isn't that the commands are hard to type — it's that developers don't know which commands to run, in what order, with what values, on their specific OS, with their specific setup. A script that assumes you already know all that doesn't save much cognitive load. It's a trap with a friendly filename.
So I built an interactive setup wizard. Progress indicators, validation at each step, clear error messages that tell you what went wrong and what to do about it. The kind of experience you'd expect from consumer software, applied to infrastructure tooling. The v2.4.0 UI overhaul was essentially a full pass at this philosophy — taking working automation and making it actually pleasant to use. It sounds obvious when you say it out loud. It's surprisingly rare in practice.
One Script, Five Operating Systems, Three Installation Modes
The technical challenge that ended up being genuinely interesting was the platform abstraction layer. A single shell script that detects whether it's running on Ubuntu, Debian, CentOS, Rocky Linux, or macOS — and then conditionally routes through apt, yum, or Homebrew without the user having to know or care which one applies to them. Docker environments get their own detection path since they often behave differently from bare-metal Linux despite running the same distribution. The script handles the branching so the human doesn't have to.
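A minimal sketch of what that detection layer might look like — the function names and distribution list here are illustrative, not the script's actual internals:

```shell
#!/usr/bin/env bash
# Hypothetical detection sketch: map the host to a package manager.
detect_pkg_manager() {
  if [ "$(uname -s)" = "Darwin" ]; then
    echo "brew"
    return
  fi
  if [ -r /etc/os-release ]; then
    # /etc/os-release sets ID on modern Linux distributions
    . /etc/os-release
    case "$ID" in
      ubuntu|debian)     echo "apt" ;;
      centos|rocky|rhel) echo "yum" ;;
      *)                 echo "unknown" ;;
    esac
  else
    echo "unknown"
  fi
}

# Docker containers typically carry a /.dockerenv marker file
in_docker() { [ -f /.dockerenv ]; }
```

Everything downstream keys off that one answer, so the user never has to think about apt versus yum versus brew.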
Three installation modes cover the main use cases without forcing everyone through the same flow. Interactive mode walks you through everything with prompts and validation — good for first-time setup. Direct mode accepts parameters and runs without interaction — good for scripted or CI-driven installation where you're passing everything as arguments. Service mode installs as a persistent systemd service — good for production environments where you want the runner to survive reboots and recover from failures automatically. This is where most of the Linux-specific complexity lives, and it's where most simpler scripts quietly fall apart.
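The mode selection can be sketched as a small dispatch over command-line flags — the flag names below are hypothetical, not the installer's real interface:

```shell
# Hypothetical mode dispatch -- flag names are illustrative.
choose_mode() {
  mode="interactive"   # default: walk the user through prompts
  for arg in "$@"; do
    case "$arg" in
      --direct)  mode="direct"  ;;  # non-interactive, args supply everything
      --service) mode="service" ;;  # install as a persistent systemd service
    esac
  done
  echo "$mode"
}
```

From there the script branches: interactive mode prompts and validates, direct mode reads its parameters, and service mode writes the systemd unit.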
Credential Security Without External Dependencies
The token handling is where I spent more time than I expected. Storing GitHub authentication tokens on disk is necessary for persistent runners, and doing it carelessly is the kind of thing that ends careers. The implementation uses AES-256-CBC encryption for credential storage, with an XOR fallback for environments where OpenSSL isn't available. The fallback isn't as strong, but it's meaningfully better than plaintext and doesn't require installing anything extra.
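In shell, the OpenSSL path might look like the sketch below. The helper names are illustrative, and the fallback branch is a deliberately labeled placeholder (base64), not the project's actual XOR scheme:

```shell
# Illustrative helpers; the fallback is a placeholder, NOT the real XOR path.
encrypt_token() {
  token="$1"; out="$2"; pass="$3"
  if command -v openssl >/dev/null 2>&1; then
    # AES-256-CBC with a PBKDF2-derived key; the salt is embedded in the output
    printf '%s' "$token" | openssl enc -aes-256-cbc -pbkdf2 -salt \
      -pass "pass:$pass" -out "$out"
  else
    # Placeholder only: base64 is encoding, not encryption
    printf '%s' "$token" | base64 > "$out"
  fi
  chmod 600 "$out"   # restrict the credential file to its owner
}

decrypt_token() {
  # Assumes the OpenSSL path was used for encryption
  openssl enc -d -aes-256-cbc -pbkdf2 -pass "pass:$2" -in "$1"
}
```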
The goal was a security baseline that works everywhere, not a perfect solution that fails on half the target environments. It's not a security product — it's a shell script — but it's a shell script that takes the problem seriously rather than writing tokens to a plaintext file and hoping for the best.
Health Monitoring Because Runners Fail and Nobody Wants a 3am Alert
Self-hosted runners fail. Not often, but they do — process crashes, network interruptions, token expiration, the occasional inscrutable state where the runner is technically running but not accepting jobs. The health check system monitors runner process state on a configurable interval and automatically restarts failed instances. For teams running CI/CD at any meaningful frequency, the difference between a runner that recovers itself in 30 seconds and one that pages someone at 2am is significant. The monitoring is intentionally lightweight — it checks what matters and acts on it, without adding complexity that would itself become a failure point.
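A stripped-down version of such a monitor might look like this — the interval, service name, and process pattern are assumptions, not the project's actual values:

```shell
# Minimal monitor sketch -- interval, service name, and process pattern
# are assumptions about a typical setup.
INTERVAL="${CHECK_INTERVAL:-60}"
SERVICE="github-runner"

check_once() {
  # The official runner spawns a process called Runner.Listener
  if pgrep -f "Runner.Listener" >/dev/null 2>&1; then
    echo "healthy"
  else
    echo "runner down, restarting"
    systemctl restart "$SERVICE" || true
  fi
}

run_monitor() {
  while true; do
    check_once
    sleep "$INTERVAL"
  done
}
```

run_monitor is the loop you would put behind its own systemd unit, so the watcher itself also survives reboots.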
Multi-runner orchestration handles the case where you want to run several isolated runner instances on a single machine — useful for parallelizing jobs without provisioning additional hardware. The implementation uses the github-runner@.service systemd template pattern, where each runner becomes a separate systemd unit with independent lifecycle control.
- Independent lifecycle control: Start, stop, or restart individual runners without affecting others on the same host
- Systemd template units: The @.service pattern means adding a new runner is a single command, not a new config file
- Fair resource allocation: Runners share hardware without one instance monopolizing CPU or memory
- Isolated environments: Each runner instance operates independently, reducing the blast radius of a failed job
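A template unit along these lines makes the pattern concrete. The paths, user, and unit contents below are assumptions about a typical runner layout, not the project's actual file:

```shell
# Illustrative generator for a systemd template unit. systemd expands %i
# to the instance name (github-runner@1, github-runner@2, ...).
write_template_unit() {
  cat > "$1" <<'EOF'
[Unit]
Description=GitHub Actions runner %i
After=network.target

[Service]
User=runner
WorkingDirectory=/opt/actions-runner/%i
ExecStart=/opt/actions-runner/%i/run.sh
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
}
```

Installed as /etc/systemd/system/github-runner@.service, one template serves every instance: systemctl enable --now github-runner@1 github-runner@2 brings up two independent runners, and adding a third is one more command.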
The Feature That Surprised Me by Mattering Most
The workflow migration engine wasn't in the original plan. It's a YAML parser that reads your existing GitHub Actions workflow files, identifies every runs-on: ubuntu-latest declaration, replaces it with runs-on: self-hosted, and backs up the original before touching anything. It sounds small. In practice, it's the difference between someone fully adopting self-hosted runners and someone setting one up, realizing they have to manually update forty workflow files, and quietly going back to paying GitHub.
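Stripped to its essentials, that back-up-then-replace step could look like the sed-based sketch below — the real engine parses the YAML, so this is an approximation with an illustrative function name:

```shell
# Approximate sketch of the migration step: back up each workflow file,
# then rewrite runs-on: ubuntu-latest to runs-on: self-hosted.
migrate_workflows() {
  dir="${1:-.github/workflows}"
  for f in "$dir"/*.yml "$dir"/*.yaml; do
    [ -f "$f" ] || continue
    cp "$f" "$f.bak"   # backup before touching anything
    sed -i.tmp 's/runs-on: ubuntu-latest/runs-on: self-hosted/' "$f"
    rm -f "$f.tmp"
  done
}
```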
Product decisions like this don't come from technical requirements — they come from watching where people actually stop. The runner works. The setup works. And then someone opens their workflows folder and sees forty YAML files and does a quick mental calculation about whether the cost savings are worth the migration effort. The migration engine makes that calculation trivially easy instead of genuinely tedious.
What This Actually Saves
The concrete case for self-hosted runners is straightforward. A team running 500 minutes of CI per day on GitHub-hosted runners spends roughly $120 a month in overages. The same workload on a $20/month VPS costs $20 a month. The installer doesn't change that math, but it removes the setup barrier that keeps teams on the expensive path longer than they need to be. For indie developers and small teams especially, the combination of real cost savings and a setup process that doesn't require a DevOps background is the whole value proposition.
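Spelled out, the arithmetic behind that estimate is simple (assuming roughly 30 billable days a month):

```shell
# Back-of-envelope math from the paragraph above:
# 500 minutes/day * 30 days * $0.008/minute
monthly=$(awk 'BEGIN { printf "%.0f", 500 * 30 * 0.008 }')
echo "\$$monthly per month on hosted runners"   # $120 per month
```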
What a Shell Script Can Teach You About Product Design
I'll be direct about what this project is: it's a shell script. It's not a platform, not a SaaS, not a framework with a landing page and a Discord community. It's 471KB of shell code that solves a specific, real problem for a specific audience — developers who are paying too much for CI/CD and want a better option without a DevOps detour.
What I find interesting about it, looking back, is how much product thinking ended up in something that's technically just automation. The decision to build an interactive wizard instead of a silent script. The choice to handle five operating systems instead of just Ubuntu. The migration engine that nobody asked for but everyone who tried it immediately understood. These aren't engineering decisions — they're product decisions that happened to be implemented in bash.
The GitHub Actions billing clock is still ticking at $0.008 per minute. The difference is, now there's an easy way off it — one that doesn't require becoming a different kind of engineer to use.
