Real Problem, Real Fix
From vibe-coded prototype to a platform you can trust.
How a solo founder used AI tools to ship a working exam prep product in days — and what it took to turn that fragile prototype into a production system that could support real students.
Stack
Next.js · FastAPI · Supabase
AI
Speech-to-text · Bedrock · LLM grading
Cloud
AWS · ECS Fargate · S3 + CloudFront
The context
The rise of AI-assisted development — and its hidden cost.
In the last two years, the way software gets built has changed dramatically. Founders are no longer starting from empty files. With tools like Claude, Cursor, and GitHub Copilot, a single developer can describe a feature in natural language and watch working code appear in seconds.
That's exactly how the first version of MyMeritsGuide came to life. In a few days, the founder had stitched together:
- A Next.js frontend
- A FastAPI backend
- A Supabase database
- Speech-to-text and text-to-speech
- LLM integrations for question generation and grading
For a solo founder using AI-assisted tools, the result was impressive. Students could speak answers, get transcriptions, and receive automated feedback. Early testers were excited. The product clearly worked — at least in the demo.
What we walked into
The hidden problems behind a working demo.
When we reviewed the MyMeritsGuide codebase, nothing was “wrong” in the sense that the app didn't run. The real issue was structural: the system had never been designed to run in production.
As new students started joining, the limits of that first version became clear:
Operational friction
- Frontend and backend in a single, tangled repository.
- No clear deployment pipeline or environment separation.
- Local dev required a fragile sequence of manual steps.
AI without guardrails
- AI integrations occasionally failed without clear logging.
- No retry logic or fallbacks around external services.
- No monitoring or alerts when grading or question generation broke.
Config and secrets
- Secrets and API keys scattered across configuration files.
- Environment differences (local vs “prod”) handled manually.
Founder experience
- Every deploy felt like a small gamble.
- More time spent babysitting infrastructure than improving the product.
The fixes
What we actually changed.
We didn't replace the product the founder had built. We wrapped it in a platform that could support real use.
01 · Containerized development
We reorganized the project into clearer service boundaries and packaged the frontend–backend stack into a Docker-based dev environment. Instead of following a fragile sequence of manual setup steps, the founder could run the whole system locally with a single command:
docker compose up
02 · Configuration & secrets moved to AWS
Sensitive credentials were moved into AWS Secrets Manager, and application configuration into AWS Systems Manager Parameter Store. That removed config drift between environments and made it possible to rotate keys without touching code.
03 · A boring, reliable production architecture
The frontend became a static Next.js application hosted on Amazon S3 and distributed via CloudFront. The backend runs as a containerized FastAPI service on Amazon ECS Fargate inside a VPC, with container images stored in Amazon ECR. Blue/green deployments allow us to roll out new versions without downtime.
04 · AI workflows with guardrails
Speech-to-text, text-to-speech, question generation via Amazon Bedrock, and rubric-based grading with LLMs stayed in place — but wrapped with retry logic, fallbacks, and logging. We used CloudWatch for logs and metrics so that AI failures stopped being silent.
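The guardrail pattern itself is simple: retry with backoff, log every failure, and fall back to a safe response instead of crashing. A minimal sketch — function names and parameters are illustrative, not the project's actual code:

```python
import logging
import random
import time
from typing import Callable, TypeVar

logger = logging.getLogger("ai_guardrails")
T = TypeVar("T")

def with_retries(
    call: Callable[[], T],
    fallback: Callable[[], T],
    attempts: int = 3,
    base_delay: float = 0.5,
) -> T:
    """Run an external AI call with retries, then a fallback.

    Every failure is logged with a stack trace, so a broken grading or
    question-generation call shows up in CloudWatch instead of failing
    silently.
    """
    for attempt in range(1, attempts + 1):
        try:
            return call()
        except Exception:
            logger.exception("AI call failed (attempt %d/%d)", attempt, attempts)
            if attempt < attempts:
                # Exponential backoff with a little jitter before retrying.
                time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1))
    logger.warning("All %d attempts failed; using fallback response", attempts)
    return fallback()
```

A hypothetical usage: wrap the Bedrock grading call in `with_retries`, with a fallback that queues the answer for re-grading and tells the student their feedback is delayed — a degraded experience, but a visible and recoverable one.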
Have a prototype that's a bit too "vibe coded"?
If this story feels uncomfortably familiar, you're not alone. AI tools make it easy to get to a great demo — our work starts where that demo ends.
