The Bottom Line. I'm a solo builder running Booplex. SEO and AI automation are the home base, but I also build mobile apps, custom CMS systems for the niche sites I run, skills for AI coding agents, boilerplates, and the occasional weirder side project. I'm always testing new tools, writing up what I learn, and open to conversations about whatever you're stuck on.
I never really stop messing with new things. It's why Booplex exists, really.
The Brain Dumps on this site are the stuff that made it out of the notebook — the war stories clean enough to write up. But between posts, there's always something on the workbench. Something half-built. Something I'm three days into and still haven't decided if it's genius or a waste of a weekend.
So here's what's actually filling the hours when this blog goes quiet.
What's on the workbench right now

I keep a running list of what I'm chewing on between Brain Dumps. Some of it lives on Booplex itself — free SEO tools, AI systems for content and ops, the Playbook, the monthly "break my own site on purpose" experiment. Some lives outside Booplex — mobile apps, CMS systems, AI coding-agent skills, and a handful of weirder side projects. Here's where each thread stands.
What tools am I building right now?
The Canonical URL Checker is the first tool that lives on Booplex. It's not the first tool I've ever built though — I've been hacking together utilities for years, mostly tucked inside WordPress installs, inside other CMS platforms, inside client stacks where they'd quietly do their job and nobody else would ever see them.
This one was born from a lesson, not a necessity. Canonical checks are literally on the standard SEO audit checklist I run when I'm reviewing someone else's site. I just... didn't run it on my own. Classic cobbler's-kids situation.
Got burned for weeks longer than I'd like to admit by a canonical tag pointing to localhost. So after the dust settled, I built the thing I should have had all along — and put it out in the open so the next person doesn't have to learn it the way I did.
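For the curious, the core of a canonical check is small enough to sketch. This isn't the actual Booplex tool — just a minimal, stdlib-only Python version of the check I should have been running, with hypothetical function names:

```python
from html.parser import HTMLParser

class CanonicalParser(HTMLParser):
    """Collects the href of every <link rel="canonical"> tag on a page."""
    def __init__(self):
        super().__init__()
        self.canonicals = []

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        attrs = dict(attrs)
        if attrs.get("rel", "").lower() == "canonical" and attrs.get("href"):
            self.canonicals.append(attrs["href"])

def check_canonical(html, expected_url):
    """Return (ok, message) for a page's canonical tag."""
    parser = CanonicalParser()
    parser.feed(html)
    tags = parser.canonicals
    if not tags:
        return False, "no canonical tag found"
    if len(tags) > 1:
        return False, f"multiple canonical tags: {tags}"
    href = tags[0]
    # The exact failure mode that burned me: a canonical pointing at a dev host.
    if "localhost" in href or href.startswith("http://127."):
        return False, f"canonical points at a dev host: {href}"
    if href.rstrip("/") != expected_url.rstrip("/"):
        return False, f"canonical mismatch: {href} != {expected_url}"
    return True, "ok"
```

The real tool fetches pages, follows redirects, and checks a lot more than this, but the localhost guard alone would have saved me those weeks.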
There are three more in the queue:
- Schema Validator — because the existing ones either miss AI-specific stuff (speakable, hasPart) or cost money to use at any reasonable volume
- LLM Citation Finder — paste your site, it tells you which AI models are actually citing you and how
- Sitemap Drift Checker — catches the slow bleed of stale URLs, orphan pages, and broken canonicals that nobody notices until Google does
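To make the sitemap one concrete: the heart of a drift check is just parsing plus two set differences. A rough stdlib-only sketch under my own assumptions — function names are mine, not the tool's:

```python
import xml.etree.ElementTree as ET

# Standard sitemap namespace from the sitemaps.org protocol.
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def parse_sitemap(xml_text):
    """Pull every <loc> URL out of a sitemap XML document."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.findall(".//sm:loc", NS)]

def sitemap_drift(sitemap_urls, live_urls):
    """Compare the sitemap against the URLs actually live on the site.

    Returns:
      stale   - in the sitemap but no longer live (the slow bleed)
      orphans - live but missing from the sitemap
    """
    sitemap = set(sitemap_urls)
    live = set(live_urls)
    return {
        "stale": sorted(sitemap - live),
        "orphans": sorted(live - sitemap),
    }
```

The hard part in practice isn't the diff, it's getting an honest `live_urls` list (crawl, logs, or CMS export) — but the diff is what surfaces the problem before Google does.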
Each one starts the same way. I hit a problem. I can't find a tool that fixes it the way I want it fixed. I spend a weekend building it.
Then I write up what I learned while building it. Rinse and repeat.
Taming AI into systems that actually work
Most "AI for business" content stops at the prompt. I'm past that — I build the whole workflow.
Here's what I mean. I used to spend a day and a half every month pulling together SEO reports. Data from Search Console, format it, add commentary, export, send. Not hard — just time I could've spent on the work that actually changes rankings.
So I built a system that does the whole loop: pulls the data through the GSC API, runs it through an AI agent that spots what actually changed (not just "traffic went up"), drafts commentary in my voice, cross-checks the numbers against the source before writing a single line, and lands a review-ready report in my inbox. I still approve it. But the five hours of pulling-and-formatting? Gone.
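The cross-check step is the part worth stealing. Here's a toy version — nothing like the full system, and the function name is made up — but it's the same idea: pull every figure out of the drafted commentary and flag anything that doesn't exist in the source data:

```python
import re

def numbers_check(commentary, source_metrics, tolerance=0.0):
    """Flag any figure in AI-drafted commentary that isn't in the source data.

    source_metrics: numbers pulled straight from the API, e.g.
    {"clicks": 1840, "impressions": 52100, "ctr": 3.5}.
    Returns the list of suspicious numbers; an empty list means it passes.
    """
    found = [float(n) for n in re.findall(r"\d+(?:\.\d+)?", commentary)]
    allowed = {float(v) for v in source_metrics.values()}
    # A number is suspicious if it matches nothing in the source (within tolerance).
    return [n for n in found if all(abs(n - a) > tolerance for a in allowed)]
```

It's blunt — no units, no percent handling — but even this blunt version catches the classic failure where the model "improves" a number while drafting.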
Another one. I run a content pipeline on this very site. An idea drops into a Notion-style inbox, an agent pulls live SERP data for it, drafts a brief, hands it to the writer agent (which runs on my persona profile so the voice stays mine), runs fact-checking against the cited sources, validates the SEO structure, and lands a draft in the admin panel. I step in at two points — once to approve the brief, once to review the draft.
Everything else runs while I'm doing something more interesting.
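If you squint, the whole pipeline is just an ordered list of stages with two human gates in it. A deliberately simplified sketch — stage names are illustrative, not my actual agents:

```python
# (stage, is_human_gate) in execution order. Hypothetical names.
PIPELINE = [
    ("serp_research", False),
    ("draft_brief", False),
    ("approve_brief", True),   # human gate 1
    ("write_draft", False),
    ("fact_check", False),
    ("seo_validate", False),
    ("review_draft", True),    # human gate 2
    ("publish", False),
]

def run_pipeline(item, handlers, approved):
    """Run an idea through the stages, pausing at any human gate not yet signed off.

    handlers: {stage_name: fn(item) -> item} for the automated steps.
    approved: set of human-gate names already approved.
    Returns (item, stopped_at) where stopped_at is None on full completion.
    """
    for stage, is_gate in PIPELINE:
        if is_gate:
            if stage not in approved:
                return item, stage  # park here until a human clicks approve
            continue
        item = handlers.get(stage, lambda x: x)(item)
    return item, None
```

The design choice that matters is the gates: the automation never routes around them, it just parks and waits. That's what makes it trustable.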
That's the shape of what I do now. For myself, and for anyone who comes to me with a similar problem:
- Content operations: idea → research → brief → draft → fact-check → SEO validation → published. Built on Claude Code, custom agents, and a publish webhook that lets content flow in from anywhere.
- Business workflows: onboarding sequences that actually finish themselves, report generation that happens overnight, competitor monitoring that surfaces only the stuff that matters, content calendars that update when the data shifts
- The Monday-morning admin: lead routing, CRM cleanup, invoice prep — the stuff that eats your best hours before you've done any real thinking
- Focused AI agents that do one job well. Not "an AI that does everything" — that's where most automations break. Instead: one agent that monitors schema for drift. One that checks canonicals weekly. One that flags content decay before Google does. One that reviews outgoing emails for tone. Small, focused, reliable — and easier to debug when something inevitably goes sideways.
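Here's what "one agent, one job" looks like at its smallest. A toy schema-drift monitor — hypothetical, and stripped of all the real plumbing (fetching, scheduling, alerting) — that just diffs today's JSON-LD against a known-good snapshot:

```python
import json

def schema_drift(baseline, current):
    """One agent, one job: diff current JSON-LD against a known-good snapshot.

    Both arguments are parsed JSON-LD dicts. Returns a list of human-readable
    findings; an empty list means no drift.
    """
    findings = []
    for key, expected in baseline.items():
        if key not in current:
            findings.append(f"missing property: {key}")
        elif current[key] != expected:
            findings.append(
                f"changed {key}: {json.dumps(expected)} -> {json.dumps(current[key])}"
            )
    for key in current:
        if key not in baseline:
            findings.append(f"new property: {key}")
    return findings
```

Twenty lines, one responsibility, trivial to debug. That's the whole argument for small agents over one that does everything.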
And yes, things go sideways. I've had AI agents hallucinate statistics in drafts and quietly rewrite my voice until it sounded, in one memorable case, like a LinkedIn motivational post. That's why every system gets validators and human-review gates that catch the menace before it ships.
Not prompt libraries. Not "10 ChatGPT prompts for marketers." Working systems that multiply what one person — or a small team — can actually get done, built so a non-technical teammate can run them without being an AI whisperer.
Which AI tools am I actually using?
Most AI tools I test don't make it into a workflow. Most don't even make it past the second day. But I'm always running at least two or three in parallel to see which one actually earns its spot.
The daily drivers — the ones that earned their spot (as of right now):
- Claude Code — my default passenger seat for anything that touches a codebase. The kind of tool that quietly disappears into the workflow, which is the highest compliment I can pay any piece of software. The latest Anthropic models are good, but even the daily driver has to keep earning its seat.
- Codex — The second pair of eyes I didn't know I needed. Different brain, different instincts. Some days it catches the thing Claude missed. Some days it's the other way around. Having both means I stop arguing with one model about whether it's right.
What's currently on the test bench:
- Hermes Agent — Kicking the tires on this for long-running automations where I want full control of the runtime and zero surprise bills at the end of the month. The "own your whole agent stack" angle is what pulled me in.
- Cursor — Because I still need an IDE.
- Plus a steady stream of new models, agents, and tools I'll spin up for an afternoon to see if they actually earn their spot. Most don't. The few that do end up here later. (Recently dropped: OpenClaw. That's the rhythm.)
No loyalty. Each tool earns its spot by solving a specific problem better than the others. The day one gets overtaken, I swap it out without ceremony. If you're reading this six months after publish, the names probably changed.
The pattern won't.
Writing the thing I wished I had when I started
The SEO Playbook on this site is the long-form version of "what I wish someone had told me in year one." It's a living document. I update it when I learn something new or Google decides to move the goalposts again — which, in 2026, is roughly every Tuesday.
Why do I break my own site every month?
Every month I take something on Booplex — my own site — and intentionally break it to see what happens in Google Search Console, Bing Webmaster Tools, or the AI citation logs. Bad canonical, broken schema, missing hreflang, accidentally-noindex'd page. Then I document the recovery.
It's the closest thing I have to a lab. And honestly, it's how I've kept sharp for a decade. Nothing teaches you SEO faster than breaking your own site and living with the consequences.
What else am I building outside Booplex?
Booplex isn't the only thing I'm pouring hours into. There's a steady rotation of side projects living elsewhere — mobile apps (one of them, JollyTrack, is already live on the App Store), custom CMS systems for the niche sites I run, skills and tooling for AI coding agents, boilerplates I keep refining for my own work, and the kind of stranger side projects I'm not quite ready to talk about yet.
They all share the same DNA. I hit a problem. I can't find the right thing. I build it.
Then I see how far I can push it before something else catches my eye.
The pattern
If there's a through-line to all of this, it's the same loop I've run for years:
- Find a thing I don't understand, or a tool I wish existed
- Try to build it or break it
- Learn what I didn't know by watching it fail
- Write up what I learned so the next person doesn't have to
I've run this loop on black-hat SEO tactics, on early Google algorithm shifts, on WordPress theme hacking, on Next.js quirks, and now on AI-assisted workflows. The technology changes. The loop doesn't.
What I'm curious about next
A few things I haven't quite cracked yet but want to:
- How AI models actually decide what to cite when multiple sites have similar content
- Whether a small, well-structured site can outperform a giant authority site in AI answers (my bet: yes, and I'm running the experiment right now)
- What the next "llms.txt" is — the next small convention that quietly becomes a standard
- How to automate the boring 80% of an agency retainer so the human can focus on the 20% where experience actually pays off
- Where the real ceiling is for AI-run ops — at what point does a 3-person team stop being able to operate like a 20-person one? I haven't found it yet, and I'm actively looking
- How to build automation that a non-technical team member can trust enough to actually use — because the classic pattern of "the founder built it, nobody else touches it" is the most common failure mode I see
None of these are idle wonderings. Each one is already a half-finished experiment somewhere.
If any of this overlaps with something you're stuck on
I don't have a services page. I'm not going to list packages. I'm not great at selling myself and I'm not going to start now.
But here's the thing — I'm genuinely curious about what other people are doing in this space. So the contact form's open for any of the following:
- You need help with something. Cool. Tell me what's broken or what you're trying to build, and we'll figure it out.
- You've got a project idea you want to bring to life. Also cool. Doesn't matter if it's half-formed or if you've been sitting on it for two years — I like the "napkin sketch to working thing" part of the job more than almost anything else. Send me the idea, even if it's messy.
- You're stuck on something and want a second pair of eyes. Also cool. Doesn't have to be SEO — it could be an automation you're trying to ship, an AI workflow that keeps breaking, a weird business ops problem. Send it over. Honestly, I learn more from other people's strange setups than from most paid audits I've done.
- You're running an experiment and want to swap notes. Very cool. This is actually my favorite kind of conversation. Half the stuff I know came from someone on the other side of the world DMing me about something weird they found.
The worst-case scenario is we both learn something. That's kind of the whole point.
Frequently Asked Questions
What does Gabi at Booplex actually do?
I build a lot of different things. Free SEO tools, AI automation systems for SEO and business ops, mobile apps (one is live on the App Store), custom CMS systems for the niche sites I run, skills for AI coding agents, boilerplates I reuse across my own projects, and the occasional weirder experiment. SEO and AI automation are the pillars, but the work spreads wherever the puzzle is.
Is Booplex an agency?
Nope, just me. I've spent 10+ years in SEO and digital marketing, and Booplex is where I publish what I'm building, breaking, and figuring out. There's no team, no service tier list, no "book a strategy call" funnel. If you want to work with me, you message me directly and we go from there.
Which AI tools is Gabi actually using right now?
Daily drivers are Claude Code (codebase work) and Codex (the second pair of eyes I bounce problems against). I'm actively testing Hermes Agent for long-running automations, Cursor for IDE-integrated work, and a steady rotation of new models and agents that come and go. The stack changes every couple of weeks — whatever earns its spot stays.
Can I hire Gabi, or is this just a blog?
Both. The blog is real — Brain Dumps about what I'm building and breaking. But I also take paid work, project ideas, and "second pair of eyes" requests through the contact form. No packages, no rate sheet, no minimum engagement — just send me what you're stuck on and we figure out the right shape together.
What is the SEO Playbook on Booplex?
A long-form, living document that's basically "what I wish someone had told me in year one of SEO." I update it whenever I learn something new or the rules change underneath us — which, in 2026, is basically every Tuesday. Not a course, not gated, no email signup wall. Just a public reference for anyone who wants it.
