The four-MCQ input design was the biggest UX decision. Early versions took just company + role + JD. The output was useful but generic. Adding round type (initial screen vs. final round), company familiarity, time available, and biggest skill gap changed the output substantially.
Same company, same role, same JD — a brief for "first-round, 2 hours of prep, weakest on system design" looks completely different from "final round, 3 days, nervous about culture fit." The MCQs force the model to personalize instead of summarize.
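To make that concrete, here's a rough sketch of how four MCQ answers could be folded into a generation prompt as explicit constraints. The names and shapes are hypothetical, not Prepfile's actual code; the point is just that each answer becomes an instruction the model can't ignore:

```typescript
// Hypothetical sketch of MCQ-driven prompt assembly -- not Prepfile's actual code.
type McqAnswers = {
  roundType: "initial screen" | "final round";
  companyFamiliarity: "low" | "medium" | "high";
  prepTime: string;   // e.g. "2 hours", "3 days"
  biggestGap: string; // e.g. "system design", "culture fit"
};

function buildPrompt(jd: string, mcq: McqAnswers): string {
  // Each MCQ answer becomes an explicit constraint in the prompt,
  // so the model must tailor the brief rather than summarize the JD.
  return [
    `Job description:\n${jd}`,
    `Interview stage: ${mcq.roundType}. Weight sections accordingly.`,
    `Candidate familiarity with the company: ${mcq.companyFamiliarity}.`,
    `Time available to prepare: ${mcq.prepTime}. Scope the plan to fit.`,
    `Self-reported weakest area: ${mcq.biggestGap}. Prioritize drills here.`,
  ].join("\n\n");
}

// Same JD, different answers -> materially different prompts.
const jd = "Senior backend engineer...";
const screen = buildPrompt(jd, {
  roundType: "initial screen", companyFamiliarity: "low",
  prepTime: "2 hours", biggestGap: "system design",
});
const onsite = buildPrompt(jd, {
  roundType: "final round", companyFamiliarity: "high",
  prepTime: "3 days", biggestGap: "culture fit",
});
```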
On the Gemini choice: mostly cost-driven early on, but Flash's latency worked out well for the use case. Sub-10-second generation matters here — people are often prepping the day before and anything that feels slow breaks the flow.
On the 50+ company pages: originally built for SEO, but they've turned out to be useful as a research layer. They're not wired into the brief generator yet — it runs entirely off the JD + MCQ path — but pulling company-specific context into the generation prompt is the next thing I want to ship.
What I'm still not happy with: the blind spots section is the most variable in quality. It works well when someone flags a specific gap ("I haven't used Kubernetes at scale"). It works less well when the gap is vague ("I'm nervous about the interview"). Working on better MCQ design to force more actionable inputs.
Happy to answer questions about the approach, the framework, or anything else.
Most interview prep is generic. "Tell me about yourself." "Why this company?" Recycled questions you could find on Glassdoor in 30 seconds.
Prepfile runs your job description and a short MCQ through a framework built on Porter's Five Forces and Deming analysis. It reasons about the company's competitive pressures, organizational priorities, and what the role likely exists to solve — then generates a brief mapped to your situation, not a recycled question bank.
How it works: Paste a JD, answer a few multiple-choice questions about your background, get a brief. No signup required for the free tier (3 briefs/week). Pro is $9.99/month for unlimited briefs, history, and resume match — where Gemini reads your resume against the brief and flags gaps before you walk in.
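The free-tier cap is simple enough to sketch. Assuming briefs are timestamped per anonymous session (an assumption on my part, not necessarily how it's stored), a weekly quota check is just a rolling-window count:

```typescript
// Illustrative weekly-quota check for the free tier -- assumed logic, not Prepfile's code.
const FREE_BRIEFS_PER_WEEK = 3;
const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

// Given timestamps (ms) of a session's past briefs, how many remain this week?
function briefsRemaining(briefTimestamps: number[], now: number = Date.now()): number {
  const usedThisWeek = briefTimestamps.filter((t) => now - t < WEEK_MS).length;
  return Math.max(0, FREE_BRIEFS_PER_WEEK - usedThisWeek);
}
```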
What's under the hood: React + Vite frontend, Express + SQLite backend, Gemini 1.5 Flash for generation. I also built 50+ company-specific prep pages (manually researched, not generated) covering how interview loops actually run at each company. Those aren't wired into the brief generator yet — that's next.
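For the curious, the generation call itself is thin. A minimal sketch against the Gemini REST API might look like this — the request/response shape is the public `generateContent` endpoint; the function names and control flow are illustrative, not Prepfile's actual server code:

```typescript
// Minimal sketch of a Gemini 1.5 Flash call -- illustrative, not Prepfile's actual server code.
const GEMINI_URL =
  "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent";

// Build the request body in the shape the generateContent endpoint expects.
function buildRequest(prompt: string) {
  return { contents: [{ parts: [{ text: prompt }] }] };
}

async function generateBrief(prompt: string, apiKey: string): Promise<string> {
  const res = await fetch(`${GEMINI_URL}?key=${apiKey}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildRequest(prompt)),
  });
  if (!res.ok) throw new Error(`Gemini error: ${res.status}`);
  const data = await res.json();
  // First candidate's text is the generated brief.
  return data.candidates[0].content.parts[0].text;
}

// Only hit the network when a key is actually configured.
if (process.env.GEMINI_API_KEY) {
  generateBrief("Generate an interview brief for...", process.env.GEMINI_API_KEY)
    .then((brief) => console.log(brief));
}
```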
Honest limitations:
Output quality scales directly with JD quality. A detailed posting produces a meaningfully better brief than a 2-line listing.
Smaller or private companies get thinner company-level context — the JD ends up doing more of the heavy lifting.
Stripe billing is integrated but currently blocked on account review, so nobody's being charged yet.
Early stages. Feedback on what's actually useful matters more than kind words — especially from anyone who's been through a recent search. The round expectations section in particular: does it earn its place, or is it noise?