
How do ACOs present performance data to clinicians?

Effective ACO performance reporting to clinicians is patient-specific, not aggregate. A unit-level dashboard showing "diabetic A1C control rate: 64%" has 12% clinician engagement. A per-clinician work list showing "your 23 attributed diabetic patients with A1C ≥9 who haven't been seen in 90 days" has 78%+ engagement. The principle: clinicians act on patients, not percentages. Aggregate metrics belong in executive reporting; clinician-facing reporting needs to surface the work, with attribution-aware filtering and dollar-quantified care gap rosters.
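To make "attribution-aware filtering and dollar-quantified care gap rosters" concrete, here is a minimal sketch in Python with pandas. The column names, thresholds, and gap dollar values are illustrative assumptions, not a real ACO schema or Vizier's implementation.

```python
import pandas as pd

# Illustrative patient-level extract; column names and values are assumptions.
patients = pd.DataFrame({
    "patient":          ["P001", "P002", "P003"],
    "attributed_npi":   ["1111", "1111", "2222"],  # attributing clinician
    "has_diabetes":     [True, True, True],
    "last_a1c":         [9.4, 8.1, 11.2],
    "days_since_visit": [120, 30, 95],
    "gap_value_usd":    [310, 0, 310],             # quality dollars for closing the gap
})

def care_gap_roster(df: pd.DataFrame, npi: str) -> pd.DataFrame:
    """Per-clinician work list: attributed diabetic patients with A1C >= 9
    and no visit in 90+ days, highest-value gaps first."""
    mask = (
        (df["attributed_npi"] == npi)
        & df["has_diabetes"]
        & (df["last_a1c"] >= 9)
        & (df["days_since_visit"] >= 90)
    )
    return df[mask].sort_values("gap_value_usd", ascending=False)

print(care_gap_roster(patients, "1111"))  # returns P001, the one actionable patient
```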

What this looks like in Vizier

[Stylized dashboard visualization; data values obscured.]

Why This Happens

Clinicians spend their day making patient-level decisions. A primary care physician seeing 24 patients in a day makes 24 distinct clinical assessments, each requiring an action specific to that patient. When ACO performance data is presented in aggregate — a dashboard showing the practice's HEDIS measure rates — it doesn't tell the clinician what to do during the next visit. The mental work of translating "our diabetes A1C poor control rate is 18%" into "I should check my next patient's A1C" doesn't happen reliably during a busy day. Patient-specific worklists eliminate that translation step. The clinician sees "Mrs. Johnson is here at 2pm, her last A1C was 11.2 from 4 months ago, she's overdue for retinal exam, here's the order set" — and acts on it.
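A rough sketch of that translation step, rendering one schedule slot as a single actionable sentence; the record fields and the order-set label are hypothetical, not an EHR API.

```python
# Hypothetical pre-visit record; field names and the order-set label are invented.
appt = {
    "name": "Mrs. Johnson",
    "time": "2:00 pm",
    "last_a1c": 11.2,
    "a1c_age_months": 4,
    "overdue": ["retinal exam"],
    "order_set": "DM poor-control bundle",
}

def action_line(a: dict) -> str:
    """Render one schedule slot as a single actionable sentence, so the
    clinician never has to translate an aggregate rate into a to-do."""
    gaps = ", ".join(a["overdue"]) or "none"
    return (f"{a['name']} at {a['time']}: last A1C {a['last_a1c']} "
            f"({a['a1c_age_months']} mo ago); overdue: {gaps}; "
            f"order set: {a['order_set']}")

print(action_line(appt))
# Mrs. Johnson at 2:00 pm: last A1C 11.2 (4 mo ago); overdue: retinal exam; order set: DM poor-control bundle
```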

What the Data Usually Hides

The other failure mode is naming-and-shaming peer comparison done badly. A scorecard ranking 35 clinicians by a quality metric, sorted with the worst performer at the top, generates resentment without improvement. The same data presented as "your performance distribution against peers, with the specific patients pulling each measure down" produces the opposite effect. The principle is constructive transparency — clinicians see where they stand and what to act on, without being publicly compared in a way that feels punitive. Effective ACO reporting also accounts for panel mix: a clinician with a high-acuity panel performing slightly below the peer mean is doing better than a clinician with a low-acuity panel performing at the peer mean. Risk-adjusted peer comparison is the gold standard, but it's rarely done because most analytics tools don't have HCC risk adjustment built in.
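One common way to implement the risk adjustment described above is an observed-to-expected (O/E) ratio: divide each clinician's actual gap closures by the closures their panel's acuity predicts. Below is a minimal sketch that assumes each patient already carries an HCC risk score and substitutes a crude inverse-acuity formula for the expected-rate model; a real implementation would fit that model on the ACO's own data.

```python
import pandas as pd

# Illustrative panel data; in practice hcc_score comes from a real HCC risk model.
panel = pd.DataFrame({
    "clinician":  ["A", "A", "A", "B", "B", "B"],
    "hcc_score":  [2.1, 1.8, 2.4, 0.7, 0.9, 0.8],  # higher = sicker patient
    "gap_closed": [1, 0, 1, 1, 1, 0],               # 1 = measure numerator met
})

# Crude stand-in for an expected-closure model: sicker patients are assumed
# less likely to have the gap closed (an assumption, not an endorsed method).
overall_rate = panel["gap_closed"].mean()
panel["expected"] = (overall_rate * panel["hcc_score"].mean()
                     / panel["hcc_score"]).clip(upper=1.0)

oe = panel.groupby("clinician").agg(observed=("gap_closed", "sum"),
                                    expected=("expected", "sum"))
oe["oe_ratio"] = oe["observed"] / oe["expected"]
print(oe)
```

In this toy panel both clinicians close 2 of 3 gaps, but clinician A's O/E lands above 1 and clinician B's below it, exactly the panel-mix correction the raw rate misses.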

How to Fix It

Three formats work. First, a patient-level action list delivered to the clinician's inbox or EHR before each clinic day — "here are the 6 patients in today's schedule with open care gaps, ranked by gap value." Second, EHR-embedded gap alerts during the visit itself, surfacing in the clinician's normal workflow (Epic Care Gaps, SMART on FHIR launches). Third, a monthly clinician-facing performance summary that mixes "your distribution vs. peers" with "the specific patients driving the variance" — never a ranking sorted worst-to-best. Tools that do this well include Aledade's clinician portal, Arcadia's provider engagement module, and Vizier's SMART-on-FHIR-embedded gap roster.
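A minimal sketch of the first format, joining a hypothetical schedule extract to an open-gap registry so the day's list leads with the highest-value gaps; patient IDs, gap names, and dollar values are invented for illustration.

```python
import pandas as pd

# Hypothetical extracts; a real feed joins the EHR schedule to a gap registry.
schedule = pd.DataFrame({"patient": ["P001", "P004", "P007"],
                         "appt_time": ["08:20", "10:40", "14:00"]})
gaps = pd.DataFrame({
    "patient":   ["P001", "P003", "P007", "P007"],
    "gap":       ["A1C recheck", "retinal exam", "retinal exam", "nephropathy screen"],
    "value_usd": [310, 145, 145, 90],
})

# Format 1: only today's scheduled patients with open gaps, highest value first.
daily_list = (schedule.merge(gaps, on="patient")  # inner join drops gap-free patients
                      .sort_values("value_usd", ascending=False))
print(daily_list)
```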
