Full-stack / Defense · Industry partner: Leidos
Capstone Project - BAA/RFP Proposal Automation
Full-stack web application that uses generative AI to automate the breakdown of complex government BAAs and RFPs—parsing PDFs, surfacing SHALL/MUST/REQUIRED requirements, and drafting structured proposal frameworks.
Built with Leidos in collaboration with DARPA's Information Innovation Office (I2O), with a six-person team shipping parsing, RAG-backed generation, and role-based workflows for real proposal cycles.
- Role
- Frontend, ML prompts, writing
- Timeline
- Sept '25 – Mar '26
- Stack
- PDF/OCR, RAG, Claude API, RBAC
- Context
- DARPA BAA HR001126S0001 (I2O)
Tech stack
Architecture
Hosting. Next.js on Vercel: static shell, serverless PDF + model work, Postgres via Supabase.
- Next.js 16 — Workspaces and admin shells; `app/api` for uploads, parsing, streaming Claude output.
- React 19 — PDF previews, diff UIs, optimistic saves; Suspense isolates slow RAG from chrome.
- TypeScript — Entities and RBAC from DB through responses—no viewer/editor field drift.
- Tailwind 4 — Dense tables, upload wizards, status chips under time pressure; v4 PostCSS wiring.
- react-dropzone — Client pick/drag with type/size gates before streaming to `/api/upload`.
- react-markdown — GFM with constrained components—no raw model HTML (XSS-safe rendering).
- Supabase + SSR — Cookie sessions via middleware; RLS encodes viewer/editor/admin at the database layer.
- PostgreSQL — Solicitations, extracted rows, embeddings, audit log—versioned SQL migrations.
- OpenAI SDK — Server-only model calls: outlines, rewrites, confidence summaries → markdown/JSON to the client.
- pdf2json — Node parsing for SHALL/MUST spans; bounded timeouts and recoverable failures.
- Resend — Invite mail with signed deep links scoped to the right solicitation workspace.
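The requirement-mining step behind the pdf2json stage can be illustrated with a minimal modal-verb pass. This is a simplified sketch, not the project's actual extractor: `mineRequirements`, its naive sentence splitting, and the context-window logic are all assumptions.

```typescript
// Illustrative sketch of modal-verb requirement mining. Names and the
// sentence-splitting heuristic are assumptions, not the real pipeline.
type MinedRequirement = {
  verb: string;      // SHALL | MUST | REQUIRED
  sentence: string;  // the prescriptive sentence itself
  context: string;   // surrounding window, kept for reviewer validation
};

const MODAL = /\b(shall|must|required)\b/i;

function mineRequirements(text: string, window = 1): MinedRequirement[] {
  // Naive split on sentence-ending punctuation; real solicitation text
  // needs smarter segmentation (section numbers, abbreviations, tables).
  const sentences = text.split(/(?<=[.;])\s+/);
  const hits: MinedRequirement[] = [];
  sentences.forEach((sentence, i) => {
    const m = sentence.match(MODAL);
    if (!m) return;
    // Keep neighboring sentences so reviewers see the obligation in context.
    const context = sentences
      .slice(Math.max(0, i - window), i + window + 1)
      .join(" ");
    hits.push({ verb: m[1].toUpperCase(), sentence, context });
  });
  return hits;
}
```

The context window matters in practice: a bare "shall" span without its surrounding clause is exactly the kind of fragment that gets misread during capture.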
Problem
Broad Agency Announcements and RFPs are dense, interleaved, and version-heavy. Proposal teams spend weeks manually parsing documents to extract obligations, often under fixed schedules where a missed “shall” can disqualify a bid or force a costly rework.
The cost is not only time: inconsistent interpretation across contributors, error-heavy handoffs between capture and technical volume leads, and deadlines that slip because requirement matrices are still maintained in spreadsheets and email threads.
Solution overview
An AI-assisted platform ingests solicitation PDFs, mines prescriptive language, enriches generation with organizational and library context, and produces structured proposal artifacts—while exposing confidence scores so teams know what to validate first.
- →Upload and normalize BAA/RFP PDFs with OCR fallback for poor scans
- →Flag and cluster requirements based on modal verbs and compliance phrasing
- →Inject org-specific context via a RAG document library
- →Generate proposal sections through the Anthropic Claude API with schema-bound outputs
- →Dashboard KPIs: proposals, awarded, in-review, active, at-risk, avg. confidence (tracked at 73%)
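Schema-bound output can be enforced with a guard along these lines. The `ProposalSection` shape and `parseSection` helper here are hypothetical stand-ins for the project's real validation, but they show the idea: malformed model JSON is rejected before it reaches an editor.

```typescript
// Hypothetical shape for a generated proposal section; the actual schema
// is project-specific. The guard refuses to render unvalidated model output.
type ProposalSection = {
  heading: string;
  body: string;        // markdown, rendered through constrained components
  confidence: number;  // 0..1, surfaced to reviewers
};

function parseSection(raw: string): ProposalSection | null {
  let data: unknown;
  try {
    data = JSON.parse(raw);
  } catch {
    return null; // model returned non-JSON; caller retries or flags the block
  }
  const s = data as Record<string, unknown>;
  const ok =
    typeof s?.heading === "string" &&
    typeof s?.body === "string" &&
    typeof s?.confidence === "number" &&
    s.confidence >= 0 &&
    s.confidence <= 1;
  return ok ? (s as ProposalSection) : null;
}
```

Returning `null` rather than throwing keeps the generation flow recoverable: a failed block can be regenerated without tearing down the whole draft.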
My role
Brady Ransom — front-end development, model training and prompt engineering, and writing for stakeholder-facing materials. I helped ship the Grammarly-style requirement-highlighting experience, tuned extraction prompts against synthetic BAAs, and wired the generation flow so editors could go from parsed requirements to draft sections without leaving the workspace.
Technical architecture
PDF upload → text + structure extraction (OCR fallback)
→ requirement mining (shall / must / required + context windows)
→ org context injection (RAG library + policy snippets)
→ Claude API (structured prompts, schema-validated JSON where applicable)
→ proposal draft assembly + confidence scoring per block
→ dashboard + RBAC (Viewer / Editor / Admin)
Key features · 8 use cases
Use case 01
Upload BAA / RFP
Ingest PDFs, recover text from scans, version artifacts.
Use case 02
Inject org context
Attach corporate assets and past performance into RAG slots.
Use case 03
Review & validate
Human-in-the-loop edits on mined requirements and drafts.
Use case 04
Execution plan
Post-award milestone scaffolding tied to program phases.
Use case 05
Multi-org collaboration
Shared workspaces with permission-aware visibility.
Use case 06
Capital management
Admin workflows for allocation views and thresholds.
Use case 07
Timeline view
Cross-program schedule overlays and status lanes.
Use case 08
Confidence analytics
Aggregate scoring; spotlight blocks under review threshold.
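The confidence analytics in use case 08 reduce to a simple rollup. This is an illustrative sketch: `confidenceSummary`, the 0.6 threshold, and the field names are assumptions, not the dashboard's actual implementation.

```typescript
// Illustrative rollup behind the confidence KPIs; names are assumptions.
type DraftBlock = { id: string; confidence: number }; // confidence in 0..1

function confidenceSummary(blocks: DraftBlock[], threshold = 0.6) {
  const avg =
    blocks.reduce((sum, b) => sum + b.confidence, 0) / (blocks.length || 1);
  // Blocks under the review threshold get spotlighted for human validation.
  const underReview = blocks.filter((b) => b.confidence < threshold);
  return {
    avgConfidence: Math.round(avg * 100), // percentage shown on the dashboard
    atRisk: underReview.map((b) => b.id),
  };
}
```

Surfacing the at-risk list next to the average is the point: a healthy mean can hide a handful of low-confidence blocks that carry real hallucination risk.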
Design process
We started from a role matrix—who can upload, who can spend, who can only read—and designed the dashboard and editor flows so permissions failed closed. Low-fidelity wireframes focused on the capture → generate → validate loop before visual polish.
Role / permissions matrix
| Capability | Viewer | Editor | Admin |
|---|---|---|---|
| View proposals & dashboards | View | View | View |
| Upload BAAs / RFPs | — | Edit | Edit |
| Run AI generation | — | Edit | Edit |
| Edit org RAG library | — | Edit | Edit |
| Capital allocation & user admin | — | — | Admin |
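The matrix above maps to a fail-closed check roughly like this UI-side sketch; `can` and the capability keys are hypothetical, and RLS remains the server-side source of truth.

```typescript
// Fail-closed permission sketch mirroring the role matrix. Role names follow
// the document; capability keys are illustrative. Server-side RLS still
// enforces the real boundary; this only gates the UI.
type Role = "viewer" | "editor" | "admin";
type Capability = "view" | "upload" | "generate" | "editLibrary" | "admin";

const GRANTS: Record<Role, ReadonlySet<Capability>> = {
  viewer: new Set(["view"]),
  editor: new Set(["view", "upload", "generate", "editLibrary"]),
  admin: new Set(["view", "upload", "generate", "editLibrary", "admin"]),
};

function can(role: Role | undefined, capability: Capability): boolean {
  // Unknown or missing role fails closed: no grant means no access.
  return role !== undefined && GRANTS[role].has(capability);
}
```

Enumerating grants positively (rather than listing denials) is what makes the design fail closed: anything not explicitly granted is off.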
Product demo
Screen recording in the same Mac-style browser chrome used on the homepage feed.
Team
Six-person capstone team
- Sammi Li — PM / UI
- Athena Bui — Frontend / research
- Abdul Bdaiwi — UI/UX
- Brady Ransom — Frontend / ML
- Matthew Monahan — Backend
- Madison Min — Frontend / design
Challenges & learnings
- Jira noise: tickets often lacked the systems context needed to sequence PDF/OCR edge cases; we compensated with shared decision logs tied to each sprint demo.
- Stakeholder shifts: mid-project additions to post-award views forced us to freeze an MVP slice for Leidos review while stubbing capital dashboards.
- No live BAA corpus: we combined synthetic solicitations with a curated RAG library and a fine-tuned Anthropic workflow, explicitly labeling confidence so reviewers knew where hallucination risk clustered.
- Escalation gaps: clearer “blocker SLAs” between our team and Leidos PMs would have shaved integration risk earlier.
Outcomes / impact
- →Functional dashboard prototype with KPIs (totals, awarded, in-review, active, at-risk, avg. confidence 73%)
- →End-to-end AI pipeline from upload to scored draft blocks
- →Role-based access control aligned to Viewer / Editor / Admin personas
- →Presentation-ready demo to Leidos stakeholders
Next steps
- Promote the PM-facing UI from static mockups into a clickable staging prototype aligned with engineering components.
- Expand the RAG library with cleared exemplars as they become available under partner guidance.
- Plan for full C3PAO CMMC compliance integration alongside enterprise identity and audit trails.