Working in healthcare means juggling tight schedules, rising costs, complex regulations, and constant pressure to improve patient outcomes. It’s easy for well-intentioned improvement efforts to stall: vague goals, messy data, and no accountable owner turn good ideas into long meetings with no impact.
This post gives you a practical, five-step playbook for performance improvement that’s built to deliver measurable results, not just action plans. No theory-heavy frameworks — just clear steps you can use with the teams and systems you already have. You’ll get a straightforward path from a sharp aim to reliable measurement, plus tips on running fast tests, locking in gains, and where modern tools like AI can actually help.
- Step 1 — Aim: Define a tight, measurable goal that everyone understands.
- Step 2 — Baseline: Use real-world EHR, claims, and operational data to find the signal and set your starting point.
- Step 3 — Test: Run short PDSA sprints—small changes, quick cycles, documented learning.
- Step 4 — Lock: Standardize what works with checklists, standard work, and control charts.
- Step 5 — Measure & Prove ROI: Track the right outcomes and financial levers so you can show impact and scale what’s effective.
Along the way we’ll call out common blockers — fuzzy problem statements, noisy metrics, lack of ownership — and share practical fixes. We’ll also point out the high-ROI, low-regret places to use automation and AI so you don’t add tech for tech’s sake.
Read on if you want a no-nonsense, repeatable approach to improvement that your clinicians, operators, and leaders can actually use — and that proves results.
What the performance improvement process in healthcare is—and why it stalls
The performance improvement process in healthcare is a structured, iterative approach to changing care delivery so outcomes, safety, experience, and cost all move in the desired direction. At its core it combines a simple improvement logic (a clear aim, measurable evidence that change is occurring, and specific change ideas to test) with rapid learning cycles so teams can test, learn, and scale what works. This is the practical engine that turns strategy into measurable operational results (see Institute for Healthcare Improvement guidance: https://www.ihi.org/resources/Pages/HowtoImprove/default.aspx).
Use the Model for Improvement: clear aim, measures, and change ideas
Start with three questions: What are we trying to accomplish? How will we know a change is an improvement? What changes can we make that will result in improvement? Those answers produce a concise aim statement, a small set of outcome/process/balancing measures, and a short list of change ideas to run through quick PDSA (Plan‑Do‑Study‑Act) cycles. The discipline of writing a one- or two-sentence aim, and linking it to specific, time‑bound measures, prevents vague projects and keeps teams focused on signal rather than noise (practical guidance: https://www.ihi.org/resources/Pages/HowtoImprove/default.aspx).
Aim for the six domains of quality: safe, effective, patient-centered, timely, efficient, equitable
Good aims align to the six established domains of quality: safety, effectiveness, patient‑centeredness, timeliness, efficiency, and equity. Framing improvement efforts against one or more of these domains keeps tradeoffs visible (for example, faster throughput should not degrade safety) and ensures the team is solving for real value. These domains are the organizing goals many health systems and regulators use to judge improvement impact (see the Institute of Medicine/National Academies overview: https://www.ncbi.nlm.nih.gov/books/NBK222274/ and AHRQ summary: https://www.ahrq.gov/talkingquality/measures/six-domains.html).
Typical blockers: fuzzy problem statements, noisy data, no accountable owner
Even well‑intentioned projects stall for predictable reasons:
– Vague aims: “Improve throughput” without a target, timeframe, or measure leads to drifting effort. A crisp aim (who, by how much, by when) is essential.
– Noisy or missing data: teams spend weeks arguing about numbers rather than testing change. Without reliable, timely measures you can’t tell whether a PDSA succeeded.
– No single accountable owner: when responsibility is shared across multiple groups with no clear lead, momentum stalls and decisions are delayed.
– Lack of frontline engagement: changes designed without clinicians’ and staff’s input are hard to adopt and sustain.
– Poor linkage to governance: projects without executive sponsorship or a clear escalation path lose resources when other priorities arise.
These are common, solvable barriers—teams that define a sharp problem statement, secure a small set of trusted measures, name an accountable owner, and engage frontline users move far faster. Practical reviews of improvement programs also highlight capability gaps and data issues as leading causes of failure, underscoring the need to design improvement work with measurement and ownership baked in (common barriers and practical advice: https://www.health.org.uk/publications/quality-improvement-made-simple).
With that foundation—an explicit improvement logic, alignment to quality domains, and an awareness of the usual pitfalls—you’re ready to translate intent into action by setting a sharp, measurable aim and locking a reliable baseline from real operational data so every test of change has a clear signal to follow.
Steps 1–2: Set a sharp aim and baseline using real-world data
Before running tests of change you need two things: a sharp, time‑bound aim that everyone understands, and a trusted baseline that shows where you start. These first steps convert a broad desire to “improve” into a specific, measurable project that can produce reliable learning.
Find the signal: mine EHR, claims, and queue data to spot variation and waste
Look for sources that capture work and outcomes where the problem lives. Electronic health records, scheduling and queue logs, claims and billing flows, and operational systems each reveal different patterns of variation and delay. Map the process end‑to‑end, then extract the smallest number of measures that show where waste, delays, or rework occur. Focus on repeatable events (e.g., appointment flow, test turnaround, authorization cycles) so you can detect changes quickly. Visualize performance over time with simple run charts or control charts to separate common cause variation from real signals worth testing.
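Separating common cause variation from real signals is mostly arithmetic. As a minimal sketch, the snippet below computes XmR (individuals and moving range) control limits for a short series; the turnaround times are hypothetical, and the 2.66 constant is the standard XmR chart factor.

```python
# Sketch: compute XmR (individuals) control limits for a repeatable
# operational measure, e.g. weekly average test turnaround in hours.
# The sample data is illustrative, not from a real EHR.

def xmr_limits(values):
    """Return centerline and control limits for an individuals (XmR) chart."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    # 2.66 is the standard XmR constant (3 / d2, with d2 = 1.128 for n = 2)
    ucl = mean + 2.66 * mr_bar
    lcl = mean - 2.66 * mr_bar
    return mean, lcl, ucl

turnaround_hours = [26, 31, 28, 25, 30, 27, 29, 33, 26, 28]  # hypothetical baseline
mean, lcl, ucl = xmr_limits(turnaround_hours)
signals = [x for x in turnaround_hours if x > ucl or x < lcl]
print(f"centerline={mean:.1f}h, limits=({lcl:.1f}, {ucl:.1f}), special-cause points={signals}")
```

Points outside the limits are worth investigating; points inside them are ordinary noise, and chasing them wastes improvement effort.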
Prioritize with impact × effort and align to value-based metrics
Not every opportunity is equally worth pursuing. Use a lightweight impact × effort matrix to rank ideas: estimate expected benefit to patients, staff, or revenue on one axis and the implementation complexity on the other. Prioritize initiatives that are high‑impact and low‑effort, and make sure the chosen aim ties to your organization’s strategic or value‑based metrics so leadership care and resources follow. Ensure frontline teams see the value: improvements that reduce clinician burden or patient wait time are easier to sustain than changes perceived as purely administrative.
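The impact × effort ranking can be as simple as a sorted list. This sketch uses hypothetical initiative names and 1-to-5 scores that a team would estimate together; only the ranking logic is the point.

```python
# Sketch: rank candidate initiatives on a lightweight impact x effort matrix.
# Scores (1-5) are illustrative team estimates, not measured values.

initiatives = [
    {"name": "Automated appointment reminders", "impact": 4, "effort": 2},
    {"name": "New surgical scheduling system", "impact": 5, "effort": 5},
    {"name": "Standardize discharge checklist", "impact": 3, "effort": 1},
    {"name": "Rework prior-auth workflow", "impact": 4, "effort": 4},
]

# Higher impact and lower effort float to the top; ties go to lower effort.
ranked = sorted(initiatives, key=lambda i: (i["effort"] - i["impact"], i["effort"]))
for idea in ranked:
    label = "quick win" if idea["impact"] >= 4 and idea["effort"] <= 2 else "evaluate"
    print(f'{idea["name"]}: impact={idea["impact"]}, effort={idea["effort"]} ({label})')
```

The scoring scheme is deliberately crude: its job is to force a conversation and a decision, not to be precise.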
Lock the baseline: outcome, process, and balancing measures
Define three kinds of measures and capture a stable baseline period for each. Outcome measures show the end result you care about; process measures show whether the new steps are being done; balancing measures watch for unintended harm or workload shifts. Make the baseline real and reliable: agree on definitions, sampling rules, and a frequency for measurement that produces timely feedback. If data are noisy, simplify the measure or increase sample size rather than delaying testing. Finally, name an owner for the baseline data who is accountable for keeping charts current and accurate.
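Writing the three measure types down in one shared artifact prevents later arguments about definitions. A minimal sketch for a hypothetical no-show project; every name, target, and frequency here is a placeholder to replace with your team's agreed definitions.

```python
# Sketch: an explicit, shared measure definition set for one project
# (reducing clinic no-shows). All values are hypothetical placeholders.

measures = {
    "outcome": {
        "name": "No-show rate",
        "definition": "no-shows / booked appointments, per week",
        "baseline_weeks": 8,
        "target": "< 8% by Q3",
    },
    "process": {
        "name": "Reminder completion",
        "definition": "% of appointments with a reminder sent 48h prior",
        "baseline_weeks": 8,
        "target": "> 95%",
    },
    "balancing": {
        "name": "Front-desk workload",
        "definition": "minutes/day spent on manual outreach",
        "baseline_weeks": 8,
        "target": "no increase vs baseline",
    },
}

for kind, m in measures.items():
    print(f'{kind}: {m["name"]} -- {m["definition"]} (target: {m["target"]})')
```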
With a clear aim tied to prioritized opportunities and a trusted baseline in place, the team can move from planning into short, disciplined tests of change that generate real learning and measurable gains—then embed what works so improvements stick.
Steps 3–4: Run PDSA sprints with the right tools, then lock in the gains
Once you have a sharp aim and a trusted baseline, move quickly into small, disciplined tests of change. The objective of PDSA sprints is to learn fast with minimal disruption: plan a narrowly scoped change, run it at the smallest feasible scale, study measured results, and act on what you learned. Repeat short cycles until you see consistent improvement, then scale with safeguards in place.
PDSA done right: small tests, fast cycles, documented learning
Keep each PDSA focused: one change, one population, one clear measure. Limit duration (days to a few weeks), pre-specify success criteria, and document the plan, observations, and decisions in a simple log. Use run charts to display the measure over the cycle and capture qualitative learning from staff and patients. If a test fails, capture why and convert the learning into the next, smaller hypothesis—failure is data, not a setback.
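A simple, consistent log is what turns failed tests into usable data. This sketch shows one way to structure a PDSA entry; the field names are a suggested convention, not a standard schema, and the example content is invented.

```python
# Sketch: a minimal PDSA log entry so every cycle is documented the same way.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PDSACycle:
    change: str              # the one change being tested
    population: str          # the one unit/population it applies to
    measure: str             # the one measure that decides success
    success_criterion: str   # pre-specified, before the test starts
    start: date
    end: date
    observations: list = field(default_factory=list)
    decision: str = ""       # adopt / adapt / abandon

cycle = PDSACycle(
    change="Pharmacist review before discharge",
    population="Ward B, weekday discharges",
    measure="Discharge delay (minutes)",
    success_criterion="Median delay < 45 min over 2 weeks",
    start=date(2024, 3, 4),
    end=date(2024, 3, 15),
)
cycle.observations.append("Day 3: delays dropped, but pharmacist coverage gap at lunch")
cycle.decision = "adapt"  # the learning feeds the next, smaller hypothesis
print(cycle.decision, len(cycle.observations))
```

Even a shared spreadsheet with these columns works; the point is that plan, observations, and decision are captured in one place before the next cycle starts.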
Lean and DMAIC-lite: remove waste, standardize, and fix root causes
Use Lean thinking to strip non‑value steps (hand-offs, duplicate documentation, waiting) and DMAIC‑style root cause work to address process variability. Start with a quick value‑stream map, identify the biggest bottleneck, run targeted countermeasures, and iterate. When a change reduces waste or variation, document the new sequence and measure the impact on both process and outcome metrics before expanding the scope.
Make it stick: standard work, checklists, and SPC run/control charts
Transition winning tests into daily practice by creating clear standard work and simple job aids (checklists, templates, decision trees). Protect gains with statistical process control: switch from ad hoc snapshots to control charts that show whether the process is stable and in control as you scale. Pair checklists with short audits and rapid feedback loops so deviations are corrected quickly and learning is reinforced.
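One common SPC run rule, eight consecutive points on one side of the centerline, is exactly the evidence you want that a change has produced a sustained shift rather than a lucky week. A minimal sketch, using hypothetical wait-time data:

```python
# Sketch: detect a sustained process shift -- eight consecutive points on
# one side of the centerline -- in a series of weekly measurements.

def shift_detected(values, centerline, run_length=8):
    """True if `run_length` consecutive points fall on one side of the centerline."""
    run = 0
    last_side = 0
    for v in values:
        side = (v > centerline) - (v < centerline)  # +1 above, -1 below, 0 on line
        if side != 0 and side == last_side:
            run += 1
        else:
            run = 1 if side != 0 else 0
        last_side = side
        if run >= run_length:
            return True
    return False

# Hypothetical weekly wait times before/after standard work was rolled out
waits = [42, 45, 40, 44, 33, 34, 32, 35, 31, 33, 34, 32]
print(shift_detected(waits, centerline=38))  # sustained drop below the old centerline
```

Once a shift is confirmed, recalculate the centerline and limits from the new stable period so the chart keeps guarding the improved process rather than the old one.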
Team and governance: clinical lead + ops lead + data lead
Use a small, cross‑functional improvement team with defined roles: a clinical lead who owns clinical acceptability, an operations lead who manages workflows and resources, and a data lead who owns measure definitions and charts. Give the team a single accountable sponsor in governance who can unblock resources and remove barriers. Set a clear meeting cadence: short daily standups during sprints, a weekly review of measures, and a monthly governance update to approve scale‑up decisions.
When PDSA cycles are frequent, focused, and governed by clear ownership, improvements accumulate into measurable operational change. With standard work and control charts in place, teams can reliably scale and sustain gains—and then explore how automation and new tools might amplify what’s already working.
Where AI belongs in the process (high-ROI, low-regret moves)
AI is most valuable when it amplifies improvements you already know how to measure and manage. Rather than being a silver bullet, AI should be treated as a tool in your improvement toolkit—deployed against the highest‑value choke points, validated in short PDSA cycles, and governed with clear guardrails so gains are real, measurable, and sustainable.
Ambient clinical documentation: ~20% less EHR time and ~30% less after-hours work
Start with ambient documentation and digital scribing: these systems reduce the repetitive burden of note entry and let clinicians spend more time with patients. Reported results include roughly a 20% decrease in clinician time spent in the EHR and a 30% decrease in after-hours work (News Medical Life Sciences, cited in D-LAB research, Healthcare Industry Challenges & AI-Powered Solutions).
Practical approach: pilot the scribe on a single clinic or service line, measure clinician EHR minutes and after‑hours work, collect qualitative feedback on accuracy and workflow fit, then iterate. Common vendor examples include digital scribe and copilot tools that integrate with major EHRs—select integrations that minimize clicks and fit local documentation norms.
AI admin assistants: cut no-shows, speed authorizations, 97% fewer coding errors
Administrative AI delivers quick financial and capacity wins. Task automation for appointment reminders, intelligent routing, pre‑authorizations, and coding suggestions reduces no‑shows and denials and improves billing accuracy. In practice, many organizations see large reductions in coding errors and large time savings for administrative staff when automation is focused on well‑defined, rules‑based processes.
Run a short pilot for one use case (e.g., automated outreach to reduce no‑shows) and track leading measures (contact rate, confirmed appointments) and lagging financial measures (revenue recovered, denial reductions) to prove ROI before scaling.
Target choke points: scheduling, denials, documentation, triage
Layer AI where process friction already exists: scheduling engines to optimize capacity, natural‑language triage to route patients appropriately, authorization accelerators to flag required documents, and documentation assistants to reduce rework. Use your baseline charts to pick the choke point with the biggest gap between demand and capacity, then design a narrow PDSA that replaces or augments one step in the flow. Always measure both the downstream outcome (throughput, revenue, wait time) and immediate process signals so you can see benefit early.
Adopt safely: privacy, security, clinician workflow fit, and change management
Safe adoption is non‑negotiable. Establish data governance (who can access PHI and model outputs), validate clinical accuracy with clinician review, and monitor for bias or drift. Keep clinicians in the loop—AI should reduce cognitive load, not add steps—and pair each technical pilot with a concise change‑management plan: training, simple job aids, and a channel for rapid feedback. Finally, instrument performance and safety metrics into your dashboards so you can detect unintended consequences as you scale.
Centered on measurable choke points, these high‑ROI, low‑regret AI moves work best when run as small tests inside your existing improvement cycle: pilot, measure, iterate, then standardize. Once the technical and workflow risks are addressed and benefits are proven, you can move from pilot to scale while keeping a tight focus on the metrics that matter.
Step 5: Measure what matters and prove ROI
Measurement is the bridge between improvement activity and sustained value. Teams that rigorously track both operational and financial impact—not just anecdotes—can prove ROI, secure funding to scale, and make smarter choices about where to invest next. Focus on measures that tie directly to patient outcomes, staff capacity, and hard dollars.
Leading vs. lagging: throughput, wait time, readmissions, denials, patient experience, staff burnout
Use a balanced measurement set. Leading measures (throughput, appointment confirmations, test turnaround time) give early signals that a change is working; lagging measures (readmissions, denied claims, revenue) confirm the downstream impact. Include patient experience and staff‑wellbeing measures—reduced clinician time on documentation or lower burnout scores are meaningful signals that operational gains are sustainable. Track measures on run charts or control charts so you can see trend and stability rather than relying on one‑off snapshots.
Financials that stand up: minutes saved, cases added, denial reduction, cost-to-serve
The stakes are large: no-show appointments cost the industry an estimated $150B every year, and human errors during billing cost another $36B (Healthcare Industry Challenges & AI-Powered Solutions, D-LAB research).
Translate operational improvements into financial terms using simple, auditable calculations:
– Minutes saved × clinician or admin cost per minute = labor cost reduction. Capture both gross minutes saved and net clinical capacity gained (minutes that convert to extra patient-facing time).
– Additional cases or visits secured × average contribution margin = incremental revenue. Use conservative assumptions for conversion and payer mix.
– Denial reduction and improved coding accuracy = increased collections. Measure pre/post denial rates, average denial value, and days to resolution.
– Cost-to-serve changes: quantify reductions in non‑value work (authorizations, rework) and the associated overhead. Where possible, reconcile estimated savings against finance records (payroll, collections) to build an auditable ROI story.
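The calculations above fit in a few lines. As a sketch, the snippet below strings the levers together with deliberately conservative, entirely hypothetical inputs; replace every number with figures reconciled against your own payroll and collections records.

```python
# Sketch: auditable back-of-envelope annual ROI from the levers above.
# Every input is a hypothetical, conservative assumption.

minutes_saved_per_day = 90          # documentation minutes saved per clinician
cost_per_clinician_minute = 2.00    # fully loaded $/minute (assumption)
clinicians = 10
workdays_per_year = 220
labor_savings = (minutes_saved_per_day * cost_per_clinician_minute
                 * clinicians * workdays_per_year)

extra_visits_per_week = 6           # capacity converted to patient-facing time
contribution_margin = 120.0         # $/visit, conservative payer-mix estimate
incremental_revenue = extra_visits_per_week * contribution_margin * 52

denials_avoided_per_month = 25
avg_denial_value = 180.0
recovered_collections = denials_avoided_per_month * avg_denial_value * 12

annual_value = labor_savings + incremental_revenue + recovered_collections
print(f"labor=${labor_savings:,.0f}, revenue=${incremental_revenue:,.0f}, "
      f"collections=${recovered_collections:,.0f}, total=${annual_value:,.0f}")
```

Keeping each lever as a separate, named line item is what makes the result auditable: finance can challenge any single assumption without the whole estimate collapsing.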
Spread and sustain: change packages, coaching, transparent dashboards, and quarterly audits
Proving ROI is only the start—sustainment requires repeatable methods. Create a change package (why the change works, step‑by‑step standard work, training materials, data definitions) so other teams can reproduce results. Deploy coaches or improvement leads to mentor adopters, and publish transparent dashboards showing outcome/process/balancing metrics for stakeholders. Finally, schedule quarterly audits to validate fidelity, recalibrate measures, and surface drift or new failure modes.
When measurement is disciplined—leading signals for fast learning, robust financial calculations for ROI, and a playbook for spread—improvements survive leadership changes and competing priorities. With that proof in hand, teams can confidently target higher‑value automation and advanced tools to amplify what already works.