How I Built a SaaS With an AI Co-Pilot
There's a particular kind of madness that comes with being a solo developer. You're the architect, the coder, the designer, the DevOps person, the customer support — all rolled into one sleep-deprived human. Six months ago, I started building Loppisjakten, a platform that helps people in Sweden find local flea markets (loppis). What started as a weekend side project turned into a full SaaS with scrapers, social media automation, and thousands of users.
The twist? I didn't build it alone. I built it with an AI co-pilot running 24/7.

The Setup
I'm a web developer based in Sweden, running a small studio called Another Journey. Loppisjakten was supposed to be simple — aggregate flea market listings from various sources, display them on a map, done. But scope creep is a law of nature, and soon I needed scrapers that could handle wildly inconsistent data, an automated social media presence, a subscription system, and an admin dashboard.
That's when I started using Claude Code through OpenClaw. If you haven't encountered it, OpenClaw is essentially an always-on AI agent platform — think of it as giving Claude a persistent workspace where it can read files, run commands, browse the web, and respond to messages across channels. It's not a chat window you open and close. It's a team member that never sleeps.
My setup: Claude Code running on a small server, connected to my repos, with access to deployment pipelines and a Telegram channel where I can just... talk to it. Like texting a colleague. "Hey, the Facebook scraper is broken again" at 2 AM, and it's already looking at the logs before I finish the sentence.
What the AI Actually Does

Let me be specific, because the vague "AI helps me code" narrative is useless. Here's what Claude actually does in my workflow:
Scraping infrastructure. Loppisjakten pulls flea market listings from Facebook groups, websites, and community boards. The Facebook scraper alone has been rewritten maybe four times. Facebook changes their DOM structure like they're running from the law. When something breaks, Claude diagnoses the issue from error logs, proposes a fix, tests it, and deploys — often before I've had my morning coffee. It handles the tedious pattern-matching work of parsing inconsistent HTML that would drive me insane.
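The pattern that keeps a scraper like that alive is worth sketching. Everything below is illustrative: the selectors, markup, and function names are invented for the example, not Loppisjakten's actual code. The idea is an ordered chain of extractors, so that when Facebook changes the markup you add a new candidate instead of rewriting the parser:

```typescript
// Hypothetical sketch: try an ordered list of extractors until one succeeds.
type Extractor<T> = (html: string) => T | null;

function firstMatch<T>(html: string, extractors: Extractor<T>[]): T | null {
  for (const extract of extractors) {
    try {
      const value = extract(html);
      if (value !== null) return value;
    } catch {
      // A broken selector should fall through to the next candidate,
      // not crash the whole scrape run.
    }
  }
  return null;
}

// Two regex-based title extractors: this month's markup and last month's.
const titleExtractors: Extractor<string>[] = [
  (html) => html.match(/<h2 class="post-title">([^<]+)<\/h2>/)?.[1] ?? null,
  (html) => html.match(/<span data-testid="title">([^<]+)<\/span>/)?.[1] ?? null,
];

const title = firstMatch('<h2 class="post-title">Loppis i Solna</h2>', titleExtractors);
```

When a DOM change lands, the fix is usually one new entry at the top of the list, which is exactly the kind of diff that's quick to review.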
Social media automation. The platform auto-generates carousel posts for Instagram and Facebook — pretty images with flea market details, locations, dates. Claude built the entire carousel generation pipeline: taking listing data, generating layouts, handling Swedish characters properly (you'd be surprised how many image libraries choke on å, ä, ö), and scheduling posts. It even writes the captions with appropriate hashtags.
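One concrete example of the Swedish-character problem is hashtags: å, ä, and ö belong in the caption text, but Instagram treats "#västerås" and "#vasteras" as different tags, so they need folding in the tag itself. A minimal sketch of a caption builder, with invented field names:

```typescript
// Illustrative listing shape; the real data model is not shown in the post.
interface Listing {
  title: string;
  town: string;
  date: string; // ISO date, e.g. "2024-06-01"
}

function buildCaption(listing: Listing): string {
  // Keep å/ä/ö intact in the human-readable caption, but fold them to
  // ASCII in the hashtag so the tag is consistent and searchable.
  const tagTown = listing.town
    .toLowerCase()
    .replace(/[åä]/g, "a")
    .replace(/ö/g, "o")
    .replace(/[^a-z0-9]/g, "");
  return [
    `${listing.title}, ${listing.town}, ${listing.date}`,
    `#loppis #loppisjakten #${tagTown}`,
  ].join("\n");
}
```

The same folding trick applies anywhere a library or platform mishandles non-ASCII: normalize at the edge, keep the original text everywhere users read it.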
TypeScript cleanup and refactoring. When I was migrating parts of the codebase to stricter TypeScript, Claude handled the mechanical work — adding types, replacing loose "any" annotations with real ones, updating interfaces — while I focused on architecture decisions. It turned a week-long slog into an afternoon review session.
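To make "mechanical work" concrete: a typical change in this kind of migration is replacing an "any"-typed value with a discriminated union, which moves edge-case checking from runtime to the compiler. A hypothetical before-and-after sketch, not taken from the actual codebase:

```typescript
// Before: function describe(result: any): any { ... }

// After: a discriminated union makes the two possible shapes explicit.
type ScrapeResult =
  | { kind: "listing"; title: string; url: string }
  | { kind: "error"; message: string };

function describe(result: ScrapeResult): string {
  // The compiler now knows which fields exist in each branch,
  // and flags any case the switch forgets to handle.
  switch (result.kind) {
    case "listing":
      return `Found: ${result.title}`;
    case "error":
      return `Scrape failed: ${result.message}`;
  }
}
```

This is exactly the kind of change an AI can grind through across dozens of files while a human reviews the resulting diff.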
Bug triage. I pipe error alerts to a channel that Claude monitors. It reads stack traces, checks recent commits for relevant changes, and either fixes the issue or gives me a clear analysis of what's happening. Not every fix is right on the first try, but the triage alone saves me hours.
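Useful triage starts with making the stack trace digestible before it reaches the channel. A hedged sketch of one small piece of such a pipeline, assuming V8-style frame formatting and an invented /app/src application root:

```typescript
// Pull the top application frame out of a Node stack trace, skipping
// frames from node_modules, so the alert leads with the useful line.
function topAppFrame(stack: string, appRoot = "/app/src"): string | null {
  for (const line of stack.split("\n")) {
    // V8's default frame format: "    at fn (/path/to/file.ts:42:13)"
    const match = line.match(/at .*\((.*):(\d+):(\d+)\)/);
    if (match && match[1].startsWith(appRoot)) {
      return `${match[1]}:${match[2]}`;
    }
  }
  return null;
}
```

An alert that opens with "src/scrapers/run.ts:42" plus the error message is something an agent (or a sleepy human) can act on immediately; a raw forty-line trace is not.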
A Real Example: The Facebook Scraper Saga
Let me tell you about a particularly fun week. Facebook decided to change how group posts render for non-authenticated users. Our scraper started returning empty results at 3 AM on a Tuesday. By the time I woke up, Claude had already:
- Detected the failure from monitoring logs
- Analyzed the new DOM structure by comparing cached responses
- Updated the selectors and parsing logic
- Run the scraper against test groups to verify
- Committed the fix with a clear message explaining what changed
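The first step in that list, detecting the failure, can be as simple as comparing a run's listing count against its recent history. A minimal, hypothetical sketch of that check:

```typescript
// If a scrape run that normally yields listings suddenly returns none,
// raise an alert instead of silently writing an empty batch.
function checkScrapeRun(current: number, recentCounts: number[]): string | null {
  const avg =
    recentCounts.reduce((a, b) => a + b, 0) / Math.max(recentCounts.length, 1);
  if (current === 0 && avg > 0) {
    return `Scraper returned 0 listings (recent average: ${avg.toFixed(1)})`;
  }
  return null; // run looks healthy
}
```

A zero-count run against a nonzero baseline is a much stronger signal than an exception, because DOM changes usually fail silently: the selectors simply stop matching anything.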
Was it perfect? Almost. It missed an edge case with events that have multiple dates, which I caught during review. But the core fix was solid, and instead of spending my morning reverse-engineering Facebook's HTML, I spent 15 minutes reviewing a diff.
That's the pattern. The AI handles the 80% that's tedious but tractable, and I handle the 20% that requires judgment, context, or creativity.
What Works and What Doesn't
I'll be honest because there's too much hype and not enough reality in AI discourse.
What works brilliantly:
- Mechanical refactoring and migrations
- Parsing and data transformation (scraping, API response mapping)
- Writing tests for existing code
- Debugging from error logs and stack traces
- Generating boilerplate (API routes, database schemas, component scaffolds)
- Documentation (it's genuinely better at writing README files than I am)
What needs human oversight:
- Architecture decisions — Claude will happily build whatever you ask for, even if it's the wrong approach
- UX and design judgment — it can implement designs, but aesthetic taste requires human eyes
- Business logic edge cases — it doesn't know your users, your market, or why that particular edge case matters
- Performance optimization — it tends toward correct-but-naive solutions unless you're specific about constraints
What straight-up doesn't work:
- Asking it to "make it better" without specifics — you'll get generic improvements that miss the point
- Complex multi-system debugging where the issue spans services it can't see
- Anything requiring real-world context it doesn't have (user behavior patterns, business priorities)
The key insight: AI is an incredible force multiplier, not a replacement. I'm still making all the decisions. I'm still the one who knows that Swedish users expect Swish payment integration, or that the map should default to their region, or that the scraper should prioritize fresh listings over completeness. The AI executes faster than I ever could, but it executes my vision.
Why This Is the Future of Indie Dev

Here's what excites me: the economics of software development are changing. Things that previously required a team — monitoring, automation, cross-platform deployment, content generation — can now be handled by one developer with an AI agent.
I'm not talking about replacing developers. I'm talking about enabling one developer to have the output of a small team. Loppisjakten has features that, two years ago, would have required at least 3-4 people working part-time: scraping infrastructure, social media automation, a responsive web app, automated content generation, monitoring and alerting.
Instead, it's me, a Claude instance, and a lot of coffee.
The tools are getting better fast. Six months ago, I spent a lot of time babysitting the AI — reviewing every change, catching more mistakes, providing more context. Now, with better models and a well-configured workspace, I trust it with increasingly complex tasks. Not blindly — I always review — but the review-to-fix ratio keeps improving.
If you're an indie developer considering this approach, my advice is simple: start small, be specific, and always review. Don't hand over your entire codebase and say "make it better." Instead, say "the scraper at src/scrapers/facebook.ts is returning empty arrays — check the logs at X, compare with the last working version, and fix the selector logic." The more specific you are, the better the results.
The future of indie development isn't AI replacing developers. It's developers who know how to work with AI outperforming teams that don't. And honestly? It's a lot more fun than doing everything yourself.
Building something with AI? I'd love to hear about your experience. Reach out through the contact form — especially if you're working on something weird and interesting.