Outcomes-based Contracting: Choosing the Right Metrics

Metrics are the backbone of successful Outcomes-Based Contracting (OBC). Without reliable and relevant measures, schools and vendors risk missing key indicators of success. Poorly chosen metrics can also erode your relationship with a vendor, compromising the collaboration that underpins both their performance and your satisfaction. Selecting the right metrics is critical to ensuring contracts drive real, measurable progress and foster positive relations and collaboration between schools and vendors.

 Why Metrics Matter in OBC

Traditional approaches to measuring success often rely on standardized test scores, but these are rarely sensitive enough to capture short-term growth or nuanced improvements. In OBC, success depends on a comprehensive approach to metrics that balances academic and non-academic outcomes. OBC metrics need to be:

  • Specific and Measurable: They need to clearly define what success looks like and how it will be measured.

  • Sensitive to Growth: They must detect meaningful changes during the contract period.

  • Related to the Goal: They must directly contribute to the overarching goal of the contract.

  • Within the Vendor’s Influence: They should measure outcomes that the vendor's program or service can directly impact.

 Defining Success Beyond Test Scores

Moving beyond end-of-year standardized tests requires incorporating metrics that reflect the broader goals of student growth and program effectiveness.

 Academic Metrics:

  • Formative Assessments: Frequent, low-stakes checks for understanding that relate directly to the program or intervention.

  • Interim Assessments: Periodic evaluations that track progress toward longer-term goals.

  • Student Work: Evidence of student growth or improvement doesn’t have to come in the form of a test; samples of student work can be used to gauge program impact over time.

  • Observational Data: Students can demonstrate academic performance in a variety of ways. Sometimes teacher observations are the best way to capture nuanced changes, particularly with our youngest students.

 Non-Academic Metrics:

  • Attendance and/or Engagement: These can be early indicators of program effectiveness. After all, if the student is not participating—or even present—how can we expect them to benefit from the program or service?

  • Social-Emotional Learning (SEL) Measures: Surveys measuring confidence, resilience, and interpersonal skills, administered before and after programs that target those qualities, surface forms of growth beyond academics.

  • Behavioral Observations and Patterns: Tracking the frequency of specific behaviors, referrals, and other patterns is best suited for programs aiming to shift classroom management or student behavior.

The most robust set of metrics is one that approaches the program goal from different angles while staying within the vendor’s control. For example, a district implementing high-impact tutoring should track student attendance, formative assessment results, and SEL survey feedback to evaluate vendor performance.

 Setting Up Effective Data Collection and Tracking for OBC

Metrics are only as valuable as the data being collected. Effective tracking systems should use existing, routinized data collections as their base. These include commonly gathered data points such as attendance, office referrals, and regular academic assessments. After all, if these existing collections hadn’t highlighted an issue, it’s worth questioning the rationale for contracting a vendor’s program or service.

These existing measures are only a starting point, however, because they will likely not be sensitive enough to capture progress toward the specific goals of a program or service, particularly within the timeframe in which it is delivered. When introducing new or specialized metrics that go beyond already collected data, here are some key considerations to ensure the data collection process is effective and sustainable:

  • Consistency: Ensure that data is collected at regular intervals and under similar conditions. Inconsistent data collection can lead to unreliable results, making it difficult to evaluate progress or identify trends. Define clear protocols for how, when, and by whom the data will be gathered.

  • Fidelity: Verify that the data is collected accurately and in a manner that aligns with the intended purpose of the metric. This ensures that the data reflects the true implementation and impact of the program or service. Regular audits can help, but you need to plan for those too!

  • Level of Effort: Evaluate how much time and effort is required to collect the data. If the process is overly burdensome for staff or faculty, it will lead to incomplete or inconsistent data.

  • Training: Provide training to staff involved in data collection to ensure they understand the importance of the metrics, how to gather data correctly, and how it will be used. Clear communication about the purpose and value of the data builds buy-in and helps avoid common pitfalls.

  • Pilot: Test new metrics on a smaller scale before rolling them out system-wide. Piloting allows you to identify potential challenges, refine the process, and assess whether the data provides meaningful insights. For example, you might start with one grade level or a single campus before expanding.

  • Baseline: Establish a clear starting point by collecting baseline data before the program or service begins. This helps quantify the existing conditions and provides a benchmark against which progress can be measured.

Adding new metrics can provide richer insights into the effectiveness of a program or service, but only when these considerations are thoughtfully addressed. It is also worth checking existing data collections against these same considerations. By focusing on consistency, fidelity, and staff readiness, and by doing a little experimentation to establish a baseline, schools can create a robust data collection system that supports meaningful and actionable OBC evaluations.
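For readers who track this kind of data in a spreadsheet or script, the baseline comparison described above can be sketched in a few lines. This is only an illustration: the metric names, values, and growth thresholds below are all hypothetical, and a real contract would define its own measures and targets.

```python
# A minimal sketch of comparing post-program metrics against a
# pre-program baseline. All names, values, and thresholds are hypothetical.

def percent_change(baseline, current):
    """Growth relative to the baseline value, as a percentage."""
    return (current - baseline) / baseline * 100

# Baseline values collected before the program begins (hypothetical data).
baseline = {"attendance_rate": 88.0, "reading_score": 42.0}

# Values collected at the end of the contract period (hypothetical data).
current = {"attendance_rate": 93.0, "reading_score": 51.0}

# Success thresholds agreed on in the contract (hypothetical): each
# metric must grow by at least this percentage over the baseline.
thresholds = {"attendance_rate": 3.0, "reading_score": 15.0}

for metric, target in thresholds.items():
    growth = percent_change(baseline[metric], current[metric])
    status = "met" if growth >= target else "not met"
    print(f"{metric}: {growth:.1f}% growth (target {target}%) -> {status}")
```

Even a simple calculation like this makes the conversation with a vendor concrete: both parties can see exactly which targets were agreed on and whether the data cleared them.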

 Recommendations for Action

If you are planning on exploring OBC in a future contract, it’s critical to set the stage for success during the pre-RFP (Request for Proposal) phase. By carefully planning metrics and data collection strategies early, you can align all stakeholders, ensure clarity in expectations, and avoid common pitfalls down the road. Here are actionable steps to guide your preparation:

 1. Clearly Define the Goals of the Contract
Before drafting an RFP, articulate the specific outcomes you want the vendor to achieve. These goals should be aligned with your school or district’s strategic priorities and address the challenges identified through existing data.

Example Action: Conduct a goal-setting session with leadership, lead teachers, and relevant staff to clarify priorities, such as improving early literacy rates or reducing chronic absenteeism.

 2. Identify Metrics That Align With Your Goals
Choose 3–5 metrics that are directly connected to the outcomes you want to achieve. Ensure these metrics are specific and measurable, can be implemented with consistency and fidelity, and account for staff readiness. Explore both academic and non-academic metrics, and define what threshold would be considered success.

 Example Action: For a literacy program, select metrics like growth in reading proficiency as measured by formative assessments, attendance rates for targeted students, and structured observations of student progress on specific literacy skills during instruction. 

3. Gather Baseline Data Before Writing the RFP
Collect and analyze existing data to establish a baseline for the selected metrics. This provides a starting point to measure vendor impact and helps you articulate the current state of performance in your RFP.

 Example Action: Use attendance records, interim assessment data, and teacher feedback surveys to establish benchmarks for current student performance.

 4. Engage Stakeholders Early
Collaborate with internal teams and experts in the area to develop realistic and meaningful metrics. Early stakeholder input ensures buy-in and reduces the risk of misalignment later in the process.

 Example Action: Host a stakeholder meeting to discuss potential metrics and gather feedback from staff and faculty on their feasibility and relevance.

 5. Plan for Data Collection and Reporting
Design a data collection process that is sustainable and aligns with the vendor’s implementation timeline. Consider how frequently data will be collected, who will collect it, and how it will be reported.

 Example Action: Include in your RFP a requirement for vendors to describe how they will support data collection, such as providing tools or training for staff.

 6. Include a Pilot Phase in Your Plan
Incorporate a pilot period in your contract timeline to test metrics and refine data collection before scaling the program. This allows you to identify potential challenges and make adjustments.

Example Action: Specify in your RFP that vendors must implement their program with one grade level or a select group of students during the first quarter for evaluation purposes.

 7. Build Flexibility Into the RFP
Allow for adjustments to metrics and goals during the contract period to account for unforeseen changes, such as policy shifts or external challenges. Flexibility ensures metrics remain relevant and actionable.

 Example Action: Include language in your RFP that allows for post-pilot metric refinement, as well as a process for mid-contract revisions to metrics that incorporates ongoing data reviews and stakeholder feedback.

 Final Thoughts

Choosing the right metrics transforms Outcomes-Based Contracting (OBC) from a transactional model into a collaborative, growth-oriented process. Metrics are not just tools for evaluation—they are the foundation of strong school/vendor relationships and the key to driving meaningful change. By prioritizing actionable, growth-sensitive measures that are sound, specific, and directly connected to the contract’s goals, schools can ensure their contracts foster real progress and lasting improvements for students.

 When metrics are thoughtfully chosen and effectively implemented, they do more than track outcomes—they inspire collaboration, empower educators, and motivate students. By establishing clear goals, leveraging baseline data, piloting new measures, and communicating transparently, schools can create OBC systems that work for everyone involved.

 In short, metrics are the bridge between vision and action, turning aspirations into achievable outcomes and creating the conditions for long-term success. With the right measures in place, schools can build partnerships that truly make a difference.
