Overview

While automated evaluators catch many issues, healthcare AI requires human oversight for edge cases, quality assurance, and regulatory compliance. Rubric provides purpose-built interfaces for clinician review.
Clinician Review Interface

How It Works

1. Automated Evaluation: AI outputs are scored by clinical evaluators (triage accuracy, red flags, etc.)
2. Flagging: Cases meeting review criteria are added to the review queue
3. Assignment: Cases are assigned to qualified reviewers based on specialty and availability
4. Review: Clinicians grade cases using streamlined, keyboard-driven interfaces
5. Feedback Loop: Review data feeds back into model improvement and evaluator calibration

Review Triggers

Configure when cases should be flagged for human review:
client.projects.update(
    project_id="proj_abc123",
    review_triggers={
        # Score-based triggers
        "low_confidence": {
            "enabled": True,
            "threshold": 0.70  # AI confidence < 70%
        },
        "low_evaluation_score": {
            "enabled": True,
            "threshold": 80  # Evaluation score < 80%
        },
        
        # Specific failure triggers
        "red_flag_detected": {
            "enabled": True,
            "require_review": True  # Always review
        },
        "under_triage": {
            "enabled": True,
            "require_review": True
        },
        
        # Sampling for QA
        "random_sample": {
            "enabled": True,
            "percentage": 5  # 5% random sample
        }
    }
)
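To make the trigger semantics concrete, here is a minimal local sketch of how a case might be evaluated against the thresholds above. The `case` fields (`confidence`, `evaluation_score`, `red_flag`, `under_triage`) are illustrative assumptions that mirror the trigger names; the random-sample trigger is omitted because it is nondeterministic.

```python
def should_flag(case: dict, triggers: dict) -> bool:
    """Return True if a case should enter the human review queue."""
    # Hard triggers: always review when detected
    if triggers.get("red_flag_detected", {}).get("enabled") and case.get("red_flag"):
        return True
    if triggers.get("under_triage", {}).get("enabled") and case.get("under_triage"):
        return True
    # Score-based triggers
    lc = triggers.get("low_confidence", {})
    if lc.get("enabled") and case.get("confidence", 1.0) < lc["threshold"]:
        return True
    les = triggers.get("low_evaluation_score", {})
    if les.get("enabled") and case.get("evaluation_score", 100) < les["threshold"]:
        return True
    return False

triggers = {
    "low_confidence": {"enabled": True, "threshold": 0.70},
    "low_evaluation_score": {"enabled": True, "threshold": 80},
    "red_flag_detected": {"enabled": True, "require_review": True},
    "under_triage": {"enabled": True, "require_review": True},
}

print(should_flag({"confidence": 0.65, "evaluation_score": 90}, triggers))  # True
print(should_flag({"confidence": 0.95, "evaluation_score": 92}, triggers))  # False
```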

Reviewer Management

Adding Reviewers

# Invite a clinician reviewer
reviewer = client.reviewers.create(
    email="[email protected]",
    name="Dr. Sarah Smith",
    credentials={
        "npi": "1234567890",
        "specialty": "internal_medicine",
        "license_state": "CA"
    },
    permissions={
        "projects": ["proj_abc123", "proj_def456"],
        "can_approve": True,
        "can_escalate": True
    }
)

Reviewer Qualifications

Configure which reviewers can assess which case types:
Reviewer Type | Qualifications             | Case Types
Physician     | MD/DO + specialty          | Complex triage, imaging
Nurse         | RN + triage certification  | Standard triage calls
NP/PA         | Advanced practice          | Mid-level complexity
Specialist    | Board certification        | Specialty-specific cases
client.projects.set_reviewer_requirements(
    project_id="proj_abc123",
    requirements={
        "default": {
            "min_credential": "RN",
            "specialties": ["internal_medicine", "emergency_medicine"]
        },
        "high_acuity": {
            "min_credential": "MD",
            "specialties": ["emergency_medicine", "cardiology"]
        }
    }
)

Review Interface

Call Review

For voice triage review, clinicians see:
  • Audio player with waveform visualization
  • Transcript with speaker labels and timestamps
  • AI evaluation scores and extracted symptoms
  • Grading form with keyboard shortcuts
Keyboard Shortcuts:
1/2/3 - Triage accuracy (correct/under/over)
Q/W/E - Safety assessment
Space  - Play/pause audio
→/←   - Skip forward/back 5 seconds
Enter  - Submit review

Note Review

For clinical documentation review:
  • Source material (transcript, prior notes)
  • Generated note with side-by-side comparison
  • Accuracy highlights showing supported vs. unsupported claims
  • Completeness checklist of required elements
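The supported-vs-unsupported highlighting can be illustrated with a toy word-overlap check. This is purely a sketch of the idea, not the product's actual algorithm, and the `threshold` parameter is an assumption:

```python
def highlight_claims(note_sentences, source_text, threshold=0.5):
    """Label each note sentence supported/unsupported by naive word overlap
    with the source material (a stand-in for real claim verification)."""
    source_words = set(source_text.lower().split())
    labels = {}
    for sent in note_sentences:
        words = set(sent.lower().split())
        overlap = len(words & source_words) / len(words)
        labels[sent] = "supported" if overlap >= threshold else "unsupported"
    return labels

source = "patient reports chest pain for two days denies fever"
note = ["Patient reports chest pain", "Patient has diabetes"]
print(highlight_claims(note, source))
```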

Imaging Review

For radiology/pathology AI:
  • DICOM viewer with zoom, pan, window/level
  • AI findings overlay with confidence scores
  • Annotation tools for corrections
  • Structured grading for findings accuracy

Assignment Strategies

Round Robin

Distribute cases evenly across available reviewers:
client.projects.set_assignment_strategy(
    project_id="proj_abc123",
    strategy="round_robin",
    config={
        "max_queue_per_reviewer": 20,
        "respect_specialty_match": True
    }
)
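In effect, round robin with `respect_specialty_match` cycles through reviewers in turn, skipping anyone who cannot take the case. A minimal sketch (the reviewer and case shapes are assumptions, and queue caps are ignored for brevity):

```python
def assign_round_robin(cases, reviewers, respect_specialty_match=True):
    """Distribute cases across reviewers in turn, skipping specialty mismatches."""
    assignments = {}
    i = 0  # rotating cursor into the reviewer list
    for case in cases:
        for _ in range(len(reviewers)):
            reviewer = reviewers[i % len(reviewers)]
            i += 1
            if not respect_specialty_match or case["specialty"] in reviewer["specialties"]:
                assignments[case["id"]] = reviewer["id"]
                break
    return assignments

reviewers = [
    {"id": "rev_a", "specialties": ["internal_medicine"]},
    {"id": "rev_b", "specialties": ["cardiology", "internal_medicine"]},
]
cases = [
    {"id": "case_1", "specialty": "internal_medicine"},
    {"id": "case_2", "specialty": "cardiology"},
    {"id": "case_3", "specialty": "internal_medicine"},
]
print(assign_round_robin(cases, reviewers))
# {'case_1': 'rev_a', 'case_2': 'rev_b', 'case_3': 'rev_a'}
```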

Load Balanced

Account for reviewer availability and workload:
client.projects.set_assignment_strategy(
    project_id="proj_abc123",
    strategy="load_balanced",
    config={
        "consider_availability": True,
        "max_hours_per_week": 10,
        "priority_weight": 0.7  # Prioritize urgent cases
    }
)
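A load-balanced picker might choose the available reviewer with the most remaining weekly capacity. This is a simplified sketch; the reviewer fields are assumptions, and the real strategy also weights case priority via `priority_weight`:

```python
def pick_reviewer(reviewers):
    """Choose the available reviewer with the most remaining weekly capacity."""
    available = [r for r in reviewers if r["available"]]
    return max(available, key=lambda r: r["max_hours_per_week"] - r["hours_this_week"])

reviewers = [
    {"id": "rev_a", "available": True, "max_hours_per_week": 10, "hours_this_week": 8},
    {"id": "rev_b", "available": True, "max_hours_per_week": 10, "hours_this_week": 3},
    {"id": "rev_c", "available": False, "max_hours_per_week": 40, "hours_this_week": 0},
]
print(pick_reviewer(reviewers)["id"])  # rev_b
```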

Manual Assignment

Assign specific cases to specific reviewers:
client.reviews.assign(
    case_ids=["case_123", "case_456"],
    reviewer_id="rev_smith",
    priority="high",
    due_date="2024-03-20"
)

Review Workflow

Submitting Reviews

# Via SDK (for integrations)
client.reviews.submit(
    case_id="case_abc123",
    reviewer_id="rev_smith",
    grades={
        "triage_accuracy": "correct",
        "safety_assessment": "all_addressed",
        "guideline_compliance": 85,
        "overall_approval": True
    },
    notes="Good call handling. Minor: could have asked about medication history.",
    time_spent_seconds=180
)

Review States

State       | Description
pending     | Awaiting assignment
assigned    | Assigned to reviewer
in_progress | Reviewer has started
completed   | Review submitted
escalated   | Needs additional review
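These states form a simple lifecycle. A sketch of guarding transitions locally; the allowed edges are inferred from the state descriptions, not specified by the API:

```python
# Assumed lifecycle edges; the SDK enforces its own rules server-side.
ALLOWED_TRANSITIONS = {
    "pending": {"assigned"},
    "assigned": {"in_progress", "pending"},    # unassignment assumed possible
    "in_progress": {"completed", "escalated"},
    "escalated": {"assigned"},                 # re-assignment after escalation (assumption)
    "completed": set(),                        # terminal
}

def transition(current: str, nxt: str) -> str:
    """Return the next state, or raise if the edge is not allowed."""
    if nxt not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {nxt}")
    return nxt

state = "pending"
for step in ("assigned", "in_progress", "completed"):
    state = transition(state, step)
print(state)  # completed
```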

Escalation

For complex cases requiring additional expertise:
client.reviews.escalate(
    case_id="case_abc123",
    reason="Unusual presentation - needs cardiology input",
    escalate_to="specialist",
    specialty="cardiology"
)

Metrics & Analytics

Track review program performance:
# Get review metrics
metrics = client.reviews.get_metrics(
    project_id="proj_abc123",
    date_range={"start": "2024-01-01", "end": "2024-03-31"}
)

print(f"Total reviews: {metrics.total_reviews}")
print(f"Average review time: {metrics.avg_review_time_seconds}s")
print(f"Inter-rater agreement: {metrics.inter_rater_agreement}%")
print(f"AI accuracy (reviewer-validated): {metrics.validated_ai_accuracy}%")

Key Metrics

Metric        | Description                              | Target
Review TAT    | Time from flagging to review completion  | < 24 hours
Throughput    | Reviews per reviewer per hour            | 10-15
Agreement     | Inter-rater reliability (Cohen’s κ)      | > 0.8
Override Rate | % of AI decisions overturned             | < 10%
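The agreement metric is Cohen’s κ, which corrects raw agreement for agreement expected by chance. A minimal sketch of computing it for two reviewers grading the same cases:

```python
from collections import Counter

def cohens_kappa(grades_a, grades_b):
    """Cohen's kappa for two raters over the same cases."""
    assert len(grades_a) == len(grades_b)
    n = len(grades_a)
    # Observed agreement: fraction of cases where both raters match
    observed = sum(a == b for a, b in zip(grades_a, grades_b)) / n
    # Chance agreement: from each rater's marginal category frequencies
    freq_a, freq_b = Counter(grades_a), Counter(grades_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["correct", "correct", "under", "correct", "over", "correct"]
b = ["correct", "correct", "under", "under", "over", "correct"]
print(round(cohens_kappa(a, b), 3))  # 0.714
```

scikit-learn’s `cohen_kappa_score` computes the same quantity in production code.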

Compliance & Audit

All review activities are logged for compliance:
# Get audit log
audit_log = client.audit.list(
    project_id="proj_abc123",
    event_types=["review.submitted", "review.escalated"],
    date_range={"start": "2024-01-01"}
)

for event in audit_log:
    print(f"{event.timestamp}: {event.actor} - {event.action}")

Next Steps