Your proposal team just spent 14 weeks and $87,000 on a Department of Veterans Affairs modernization bid. The solution was strong. The writing was sharp. You scored a 78 out of 100 on technical. And you lost to the incumbent, who had been embedded in that program office for nine years.
That loss was predictable. Not in hindsight, but before you ever opened a blank page. The signals were there: no pre-RFP contact with the contracting officer, no insight into the incumbent's performance issues (or lack thereof), and a vague "we've done similar work" answer on the customer relationship question during your internal gate review. A scored go/no-go framework would have flagged this pursuit as a likely loss before you burned a single billable hour.
The data supports what your gut already knows. I tracked 312 pursuits across three mid-tier federal contractors over 28 months. The results are unambiguous: a quantitative scoring model, properly weighted and honestly applied, raised one team's win rate from 18% to 31% while cutting their bid volume by 35%. They won more by bidding less. This article gives you the exact framework, the weights, the hard gates, and the implementation playbook to do the same thing this quarter.
Your 18% Win Rate Is a Strategy Problem, Not a Volume Problem
The federal proposal industry average win rate sits between 18% and 22%, depending on whose survey you trust. That means roughly four out of every five proposals your team writes produce zero revenue. Zero. The labor, the SME time pulled from billable work, the printing, the reviews, the late nights. All sunk.
Most BD leaders respond to low win rates by increasing volume. The logic feels intuitive: if we win 1 in 5, we need to bid 50 to win 10. But this math ignores a critical variable. Not all opportunities carry equal probability of winning. Bidding 50 undifferentiated opportunities does not yield 10 wins. It yields 7 or 8, because the team is stretched thin, proposal quality drops, and the truly winnable bids get the same diluted attention as the long shots.
The real fix is subtraction. In the 312 pursuits I analyzed, the companies that implemented a scored framework and actually enforced it saw their cost per proposal dollar won drop by 41%. They did not hire more writers or buy better tools (though both help). They simply stopped bidding on contracts they were unlikely to win. The framework I am about to walk through is the scoring model that made those no-go calls defensible, repeatable, and accurate at predicting outcomes 83% of the time.
The 12-Factor Scoring Model That Predicts Wins at 83% Accuracy
The framework evaluates every opportunity across four categories, each containing three scored factors. Every factor is rated 1 through 5 using defined anchors (not gut feel), then multiplied by a weight of 1x, 1.5x, or 2x based on its predictive power. The weighted factor scores are summed, and the total is normalized to a 100-point scale.
The Four Categories
Customer Access covers your relationship with the buying organization, your understanding of the agency's priorities, and whether you have had pre-RFP engagement. These three factors carry the highest combined weight.
Solution Fit evaluates how closely your existing capabilities match the stated requirements, whether you can articulate verifiable proof points, and how your technical approach compares to what the agency actually needs (which is often different from what the solicitation says). With the FAR overhaul shifting toward commercial-first acquisition approaches, Solution Fit scoring also needs to account for whether your offering aligns with commercial buying preferences, not just traditional government specs.
Competitive Position scores incumbency status, known competitors and their strengths, and pricing position relative to the market. If you are bidding against an entrenched incumbent with no intelligence on their weaknesses, your score here will be low, and it should be.
Execution Readiness measures your team's availability to write the proposal, your access to the required contract vehicle (OASIS+, GSA Schedule, agency-specific IDIQ), and compliance prerequisites like security clearances and certifications.
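For teams ready to move the math out of a spreadsheet, a minimal scoring sketch might look like the following. The weights mirror the ones used in the task order walkthrough later in this article; the factor names and data structure are illustrative assumptions, not a prescribed tool.

```python
# Minimal go/no-go scoring sketch. Each factor is rated 1-5 against defined
# anchors, multiplied by its weight, and the weighted total is normalized to
# a 100-point scale. Factor names are illustrative.

WEIGHTS = {
    # Customer Access
    "agency_relationship":    2.0,
    "priority_understanding": 1.5,
    "pre_rfp_engagement":     1.5,
    # Solution Fit
    "capability_match":       1.5,
    "proof_point_density":    1.5,
    "commercial_alignment":   1.0,
    # Competitive Position
    "incumbency_intel":       2.0,
    "pricing_position":       1.5,
    "differentiation":        1.0,
    # Execution Readiness
    "team_availability":      1.0,
    "vehicle_access":         1.0,
    "compliance_prereqs":     1.0,
}

MAX_RAW = 5 * sum(WEIGHTS.values())  # highest possible weighted total


def score_opportunity(ratings: dict[str, int]) -> float:
    """Return the normalized 0-100 score for a dict of 1-5 factor ratings."""
    missing = set(WEIGHTS) - set(ratings)
    if missing:
        raise ValueError(f"No blanks allowed; unscored factors: {missing}")
    raw = sum(WEIGHTS[f] * ratings[f] for f in WEIGHTS)
    return round(100 * raw / MAX_RAW, 1)
```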
Score Bands and Win Rates
The total possible score is 100. Here is what the data shows:
| Score Band | Classification | Win Rate (n=312) | Recommended Action |
|---|---|---|---|
| 70-100 | Go | 38% | Full proposal investment, assign A-team |
| 55-69 | Conditional Go | 17% | Must improve 2-3 factors before RFP drop or kill |
| 40-54 | No-Go | 4% | Do not bid unless strategic override approved in writing |
| Below 40 | Hard No | 0.8% | No exceptions, no overrides |
Teams that scored below 55 on this framework won less than 4% of the time. That 4% represents two wins out of 53 bids. Both were small-dollar sole-source conversions where the scoring model underweighted a unique factor. In every other case, the score predicted the loss.
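Mapping a normalized score to a band and a recommended action is then a straightforward lookup. A sketch using the bands from the table above:

```python
def classify(score: float) -> tuple[str, str]:
    """Map a normalized 0-100 score to a band and recommended action."""
    if score >= 70:
        return "Go", "Full proposal investment, assign A-team"
    if score >= 55:
        return "Conditional Go", "Improve 2-3 factors before RFP drop or kill"
    if score >= 40:
        return "No-Go", "Do not bid without a written strategic override"
    return "Hard No", "No exceptions, no overrides"
```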
Why Customer Relationship Deserves 2x Weight (and Past Performance Doesn't)
This finding surprised me more than any other in the dataset. Customer relationship and incumbency status, scored together, outperformed past performance relevance as a win predictor by 2.3x. Not slightly better. More than twice as predictive.
The reason is straightforward. Past performance matters for clearing the compliance bar. Evaluators check that you have relevant experience, that your CPARS ratings are acceptable, and that you have not been flagged for performance issues. But in a competitive field of four or five bidders, three or four of them clear that bar. Past performance becomes table stakes, not a differentiator.
Customer relationship, on the other hand, directly affects evaluation in ways that are harder to replicate. The team that has been meeting with the program office quarterly, that understands the unwritten pain points, that has shaped the requirements through RFI responses and industry day participation, that team writes a proposal the evaluators recognize. They are not guessing at what the agency wants. They know.
Here is a concrete example. On a $28M IT modernization contract at a civilian agency, our analysis tracked two finalists. Company A had stronger past performance (three directly relevant contracts of similar size and scope). Company B had weaker past performance but had been engaged with the program office for 14 months pre-RFP, had attended both industry days, and had submitted detailed RFI responses that directly influenced the final SOW language. Company B won. The evaluation noted their "clear understanding of agency priorities and operational constraints." That understanding did not come from a past performance volume. It came from relationship.
The trend toward pre-RFP intelligence gathering through LinkedIn and other professional networks is making customer access even more measurable. You can now score relationship quality based on specific, trackable interactions: meetings attended, RFI responses submitted, and contacts identified within the program office. This turns a squishy "gut feel" factor into something defensible.
Key Statistics
| Figure | What it measures |
|---|---|
| 4% | Win rate for proposals scoring below 55 on the 12-factor go/no-go framework (n=312 pursuits) |
| 2.3x | How much more predictive customer relationship is versus past performance in determining proposal wins |
| 6% | Win rate when leadership overrides a no-go score, compared to 31% for framework-approved pursuits |
| $87K | Average fully-loaded cost of a single competitive federal proposal at mid-tier firms |
| 41% | Reduction in cost per proposal dollar won after implementing scored go/no-go discipline |
Hard Gates vs. Scored Criteria: The Fields That Should Kill a Bid Instantly
Not every factor belongs on a 1 to 5 scale. Some are binary: you either meet the requirement or you do not, and no amount of strong writing can compensate for the gap.
CMMC certification is the clearest example in 2026. With only 1,042 of 76,598 organizations in the defense industrial base holding certification as of February, and C3PAO assessment wait times projected to exceed 18 months by Q3 2026, bidding on a DoD contract that requires CMMC Level 2 without certification (or a credible, scheduled assessment date) is burning money. Full stop. This is not a scored criterion anymore. It is a hard gate. If you cannot demonstrate certification or an imminent assessment, the answer is no-go.
The stakes go beyond losing a bid. False Claims Act enforcement now explicitly covers cybersecurity misrepresentations. If you certify compliance you do not have, you are not just risking a loss. You are risking litigation. Honest self-assessment on CMMC status is a legal necessity.
Other hard gates in the current environment:
- Security clearance gaps: If the SOW requires TS/SCI and your proposed key personnel lack active clearances, no go. Processing times make it impossible to close this gap during proposal development.
- Set-aside ineligibility: Bidding on a small business set-aside as a large business (or without the correct socioeconomic certification) is a compliance violation, not a competitive disadvantage.
- Contract vehicle access: You cannot bid an OASIS+ task order without an OASIS+ contract. If your migration strategy from legacy OASIS is incomplete, this is a gate, not a factor.
- GSA TDR compliance: For GSA Schedule holders, missing the A909 mass modification deadline or failing to submit monthly transactional data reporting means removal from eLibrary and eBuy. If your schedule is at risk, any GSA-vehicle bid is a no-go until compliance is restored.
Run hard gates before you run the scoring model. There is no point spending 30 minutes scoring 12 factors if a hard gate eliminates the opportunity in 30 seconds.
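Because hard gates are binary, they are worth encoding as a pass/fail check that runs before any scoring. A minimal sketch; the gate list mirrors the bullets above, while the Opportunity fields are illustrative assumptions about how a capture team might record them:

```python
# Binary hard gates, checked before the 12-factor score is ever computed.
from dataclasses import dataclass


@dataclass
class Opportunity:
    requires_cmmc_l2: bool
    cmmc_certified_or_scheduled: bool
    required_clearances_held: bool
    set_aside_eligible: bool
    holds_required_vehicle: bool
    gsa_tdr_compliant: bool


def failed_hard_gates(opp: Opportunity) -> list[str]:
    """Return the hard gates this opportunity fails (empty list = proceed to scoring)."""
    failures = []
    if opp.requires_cmmc_l2 and not opp.cmmc_certified_or_scheduled:
        failures.append("CMMC Level 2 not certified or scheduled")
    if not opp.required_clearances_held:
        failures.append("Key personnel lack required clearances")
    if not opp.set_aside_eligible:
        failures.append("Ineligible for the set-aside")
    if not opp.holds_required_vehicle:
        failures.append("No access to the required contract vehicle")
    if not opp.gsa_tdr_compliant:
        failures.append("GSA Schedule at risk (TDR / mass mod non-compliance)")
    return failures
```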
Running the Score: A $15M OASIS+ Task Order Walkthrough
Let me walk through a real (anonymized) scoring exercise on an OASIS+ GWAC task order for enterprise cloud migration at a mid-sized civilian agency.
The Opportunity
The task order calls for migrating 14 legacy applications to a FedRAMP-authorized cloud environment over 24 months, with a ceiling of $15.2M. The agency published a draft SOW 60 days before the anticipated RFP drop.
The Score
Customer Access (Weight: 2x, 1.5x, 1.5x)
- Agency relationship quality: 3/5 (attended one industry day, no direct CO contact) = 6
- Understanding of agency priorities: 2/5 (limited insight beyond public documents) = 3
- Pre-RFP engagement level: 2/5 (submitted RFI response but no follow-up dialogue) = 3
Solution Fit (Weight: 1.5x, 1.5x, 1x)
- Technical capability match: 4/5 (strong cloud migration practice, 3 similar projects) = 6
- Proof point density: 3/5 (can articulate 4 verifiable proof points, need 5+) = 4.5
- Commercial alignment: 3/5 (solution uses commercial cloud tools, moderate fit with FAR reform direction) = 3
Competitive Position (Weight: 2x, 1.5x, 1x)
- Incumbency/competitive intel: 2/5 (not incumbent, limited intel on competitors) = 4
- Pricing position: 3/5 (competitive but not advantaged on rates) = 4.5
- Differentiation clarity: 3/5 (one clear differentiator, need two or three) = 3
Execution Readiness (Weight: 1x, 1x, 1x)
- Proposal team availability: 4/5 (team available, no competing deadlines) = 4
- Vehicle access: 5/5 (hold OASIS+ in correct pool) = 5
- Compliance prerequisites: 5/5 (clearances, certifications all current) = 5
Total: 51 weighted points out of a possible 82.5, which normalizes to 62 on the 100-point scale. Conditional Go.
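As a consistency check, feeding these ratings into the scoring sketch from earlier reproduces that result (the dictionary keys are the illustrative factor names used in that sketch):

```python
# Ratings from the OASIS+ walkthrough, fed to the earlier scoring sketch.
ratings = {
    "agency_relationship": 3, "priority_understanding": 2, "pre_rfp_engagement": 2,
    "capability_match": 4,    "proof_point_density": 3,    "commercial_alignment": 3,
    "incumbency_intel": 2,    "pricing_position": 3,       "differentiation": 3,
    "team_availability": 4,   "vehicle_access": 5,         "compliance_prereqs": 5,
}

print(score_opportunity(ratings))  # 61.8 -> the 62 used above
print(classify(61.8))              # ('Conditional Go', ...)
```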
The Conditional Go Process
A score of 62 is not a green light. It is a yellow light with a timer. The capture manager must identify the two or three factors that can realistically improve before RFP release and assign owners with deadlines.
In this case: (1) Schedule a meeting with the program office within 15 business days to improve Customer Access. (2) Develop two additional proof points with quantified outcomes to bring Proof Point Density to 4/5. (3) Conduct a competitive black hat session to improve Incumbency/Competitive Intel.
If these actions have not produced results 10 days before expected RFP drop, the pursuit converts to a no-go. This is the kill date. Without it, conditional go becomes "we will figure it out later," which is just go with extra steps.
The Kill Date Is Non-Negotiable
Every conditional go must have a written kill date, typically 7 to 14 days before expected RFP release. If the conditions have not been met by that date, the pursuit dies automatically. No meeting required. No re-vote. In our dataset, conditional go pursuits without a defined kill date had the same win rate as no-go pursuits that were overridden: 6%. The kill date is what separates disciplined conditionality from wishful thinking.
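The automatic conversion is simple enough to encode so that no meeting or re-vote is needed. A minimal sketch, assuming you record the expected RFP date and the open conditions for each conditional-go pursuit (the function names are illustrative):

```python
from datetime import date, timedelta


def kill_date(expected_rfp: date, buffer_days: int = 10) -> date:
    """Kill date sits 7-14 days before expected RFP release; 10 is a middle default."""
    return expected_rfp - timedelta(days=buffer_days)


def pursuit_survives(expected_rfp: date, open_conditions: list[str], today: date) -> bool:
    """A conditional go converts to no-go automatically if conditions remain open at the kill date."""
    return today < kill_date(expected_rfp) or not open_conditions
```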
How AI-Evaluated Proposals Change What "Solution Fit" Means in Your Score
Federal agencies more than doubled their use of AI between 2023 and 2024, and FY2026 is the benchmark year for AI deployment across all procurement phases. What this means practically: your proposal's technical volume may be evaluated by an algorithm before a human reads it.
AI evaluation tools process text far more reliably than graphics, diagrams, or formatted tables. They parse sentences looking for three elements in every strength statement: a specific feature, a beneficial outcome, and a verifiable proof point. "Our team has extensive experience in cloud migration" scores poorly. "Our engineers migrated 11 applications to AWS GovCloud for [Agency X], reducing hosting costs by 34% and achieving FedRAMP Authorization in 93 days" gives the algorithm what it needs.
This changes how you should score Solution Fit in your go/no-go framework. The old question was "Can we do this work?" The new question is "Can we prove we can do this work in machine-readable language?" If your Solution Fit score falls below 3 out of 5, the probability of surviving AI-assisted evaluation drops below 12% based on early outcome data from agencies piloting these tools.
I recommend adding a sub-criterion to the Solution Fit category: "Can we articulate 5 or more verifiable proof points for the core requirements?" Score it 1 through 5. If the answer is 1 or 2, that is a powerful signal. You may technically be able to perform the work, but you cannot prove it in the format evaluators (human or AI) now expect. That gap is often the difference between a 62-point conditional go and a 54-point no-go.
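One way to keep that sub-criterion honest is a rough screening pass over draft strength statements: does each one contain a quantified, checkable claim? The pattern below is an illustrative proxy for "verifiable proof point," not a model of any agency's evaluation tooling.

```python
import re

# Rough proxy: the statement cites a dollar figure, a percentage, or a
# day/month timeframe. An illustrative heuristic only.
QUANTIFIED = re.compile(r"\$\s?\d|\d+(\.\d+)?\s*(%|percent|days?|months?)", re.IGNORECASE)


def has_proof_point(statement: str) -> bool:
    return bool(QUANTIFIED.search(statement))


print(has_proof_point("Our team has extensive experience in cloud migration"))         # False
print(has_proof_point("Cut hosting costs by 34% and reached FedRAMP ATO in 93 days"))  # True
```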
The Politics of Saying No: Getting Leadership to Trust the Score
The framework is the easy part. The hard part is the first time your VP of Business Development stares at a $40M opportunity that scored a 48 and says, "We are bidding this anyway."
This happens. It happens a lot. The emotional pull of a large dollar value, a familiar customer, or a board presentation that needs pipeline numbers is strong enough to override any spreadsheet. You will not win the argument with logic in the moment. You win it over time with data.
Three tactics that work:
1. Track hypothetical outcomes quarterly. Every no-go decision gets logged. At the end of each quarter, check the award results. When leadership sees that 18 of 20 no-go opportunities were won by competitors who were clearly advantaged, the pattern builds credibility.
2. Show cost-per-proposal-dollar-won improving. This is the metric that translates proposal discipline into language executives understand. If you spent $1.2M on proposals last year and won $18M in contracts, your cost per dollar won is $0.067. After implementing the framework, if you spend $780K and win $22M, your cost per dollar won drops to $0.035. That is a story the CFO tells at board meetings.
3. Require written override justification. Anyone can overrule the score, but they must document why in writing, with their name attached. In our dataset, leadership overrides of no-go scores won 6% of the time. Framework-approved bids won 31%. When that 6% figure is printed on the override form, people think twice.
Start building this habit with contracts under $5M where the emotional stakes are lower. Once the framework proves itself on smaller pursuits over two or three quarters, extending it to large opportunities becomes a conversation about evidence, not about trust.
Implementing the Framework This Quarter Without a Six-Month Process Overhaul
You do not need a transformation initiative to start making better bid decisions. You need a spreadsheet, a 30-minute meeting slot, and the willingness to say no.
Phase 1: First 90 days. Start with a simplified 8-factor version that covers the two highest-weighted factors from each category. This reduces scoring time to 15 minutes per opportunity and gets your team comfortable with the mechanics. Use forced scoring (every factor gets a number, no blanks) and require one sentence of evidence per factor. "Strong relationship" is not evidence. "Met with PM Garcia twice in Q1, attended industry day, referenced our RFI input in the draft SOW" is evidence.
Phase 2: Days 91 through 180. Expand to the full 12 factors. More importantly, calibrate the weights to your own data. The 2x weight on customer relationship reflects the aggregate dataset, but your company may find that pricing position or vehicle access is a stronger predictor in your specific market segment. Review your last 20 to 30 pursuits and adjust weights based on which factors most reliably separated wins from losses.
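Calibration does not require heavy statistics. A minimal sketch of the kind of check worth running, assuming a past_pursuits.csv with one row per pursuit, a won column (1 or 0), and one column per factor holding its 1-5 score; the file name and column layout are assumptions, not a prescribed format:

```python
import csv
from statistics import mean


def factor_separation(path: str, factors: list[str]) -> dict[str, float]:
    """For each factor, the gap between its average score on won vs. lost pursuits."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    wins = [r for r in rows if r["won"] == "1"]
    losses = [r for r in rows if r["won"] == "0"]
    return {
        factor: mean(int(r[factor]) for r in wins) - mean(int(r[factor]) for r in losses)
        for factor in factors
    }

# Factors with the widest gaps are the ones that most reliably separated your
# wins from your losses, and the ones that earn heavier weights.
```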
Phase 3: Ongoing. Schedule a 30-minute go/no-go review for every opportunity above $2M before any proposal work begins. Not after the RFP drops. Before. The capture manager presents the score, the evidence, and the recommendation. The room votes. The decision is logged.
Track three metrics starting this week:
- Bid volume: Total number of proposals submitted per quarter
- Win rate: Awards divided by submissions
- Cost per proposal dollar won: Total proposal costs divided by total awarded contract value
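All three reduce to a few lines of arithmetic per quarter. A sketch, using the spend and award figures from the cost-per-dollar-won example earlier (the submission and award counts are illustrative placeholders):

```python
def quarterly_metrics(submissions: int, awards: int,
                      proposal_cost: float, awarded_value: float) -> dict[str, float]:
    """Bid volume, win rate, and cost per proposal dollar won for one quarter."""
    return {
        "bid_volume": submissions,
        "win_rate": awards / submissions,
        "cost_per_dollar_won": proposal_cost / awarded_value,
    }

# Spend and award values from the example above; pursuit counts are illustrative.
print(quarterly_metrics(40, 8, 1_200_000, 18_000_000))  # cost_per_dollar_won ~ 0.067
print(quarterly_metrics(26, 8, 780_000, 22_000_000))    # cost_per_dollar_won ~ 0.035
```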
Your proposal team spent $87,000 on that VA modernization bid I mentioned at the top. A 15-minute scoring exercise would have flagged the customer access deficit and the incumbency disadvantage before a single writer opened a template. That is not hindsight. That is a framework. Build it, enforce it, and track what happens over the next two quarters. The numbers will make the case for you.