SHOKRI FRANCIS
RAOOF
Technical Product Owner / Software Developer

ClearSight
Improving Follow-Up Adherence After Diabetic Retinopathy Screening
ClearSight is an AI-assisted adherence platform designed to support public diabetic retinopathy screening programs across the EU. It focuses on what happens after screening: helping patients understand their results and complete the appropriate next step, and reducing avoidable drop-off, without adding workload for clinicians or expanding regulatory scope. ClearSight does not diagnose disease, interpret retinal images, or replace clinical systems. It operates as a lightweight layer between existing screening infrastructure and patients, with the explicit goal of improving follow-up completion in a safe, auditable, and privacy-respecting way.
Context & Problem
Diabetic retinopathy screening programs across Europe achieve high participation rates and generate large volumes of screening data. However, screening alone does not prevent vision loss. The critical failure point often occurs after the screening result is delivered. In practice, a significant proportion of patients do not complete recommended follow-up actions after screening, particularly when results are borderline, non-urgent, or require self-initiated booking. These missed follow-ups can lead to delayed treatment, avoidable vision loss, and increased long-term healthcare costs. The core issue is not diagnostic accuracy or image interpretation. It is a systems-level gap between:
- Screening outcomes
- Patient understanding
- Completed follow-up actions
In many programs, abnormal results are communicated via letters or portals, after which responsibility for follow-up shifts implicitly to the patient. At this point, screening systems often lose visibility into whether action was taken. ClearSight was conceived to address this gap — not by changing how screening is performed, but by improving how results are translated into clear, completed next steps.
Discovery & Stakeholder Evidence
ClearSight's problem framing and constraints are grounded in qualitative discovery across the diabetic retinopathy screening pathway. Interviews were conducted with:
- A screening ophthalmologist involved in clinical governance
- A diabetes clinic consultant responsible for downstream care
- A screening program nurse / coordinator managing operational follow-up
Across these conversations, consistent patterns emerged:
- Follow-up failures occur after screening results are issued, during the handover to patient action.
- Responsibility for follow-up is diffuse and poorly observable: clinicians lack closed-loop confirmation and often discover missed follow-up retrospectively.
- Patients frequently misunderstand both the urgency of results and who is responsible for initiating follow-up.
- Stakeholders explicitly resisted new clinician-facing tools: dashboards, inboxes, and alerting systems were rejected due to workload and liability concerns.
These findings directly informed ClearSight's scope boundaries, responsibility model, and decision to prioritize clarity, reinforcement, and population-level insight over escalation or automation.
Interviews were used to identify unacceptable system behaviors and responsibility risks, rather than to validate feature desirability.
Why Existing Approaches Fall Short
Most screening programs already communicate results to patients, typically via letters, PDFs, or basic patient portals. While these approaches technically "close the loop," they often fail to support action in practice. Common limitations include:
Ambiguous communication
Results may be clinically correct but difficult to interpret for patients, especially when risk is neither clearly normal nor clearly urgent.
Unclear responsibility
Patients are often unsure whether follow-up is automatic, optional, or their responsibility to initiate.
Static reminders
Reminder systems, where they exist, tend to be one-size-fits-all and policy-blind, and do not adapt to drop-off risk.
Lack of population visibility
Screening programs may know how many people were screened, but have limited insight into where and why follow-up breaks down.
ClearSight is intended to complement existing screening infrastructure by addressing these specific weaknesses — not by replacing it.
These gaps are rarely due to negligence. They emerge from fragmented systems, unclear ownership, and tools that were not designed to support adherence as a first-class outcome.
Product Intent & Boundaries
ClearSight is deliberately scoped to remain a non-diagnostic, non-clinical system.
What ClearSight Is
- An adherence-focused layer that sits downstream of screening
- A patient-facing system that explains results in plain language
- A tool that presents a single, clear recommended next action
- An analytics surface for screening programs to understand follow-up completion at a population level
- An AI-assisted system that prioritizes explainability, auditability, and safe failure modes
What ClearSight Is Not
- Not a diagnostic device
- Not an image analysis or computer vision system
- Not a clinical decision-making tool
- Not an EHR or patient record system
- Not a clinician task manager or inbox
- Not a replacement for existing screening workflows
ClearSight's success is not measured by how much it automates, but by whether more patients complete the follow-up actions that screening programs already recommend.
These boundaries are intentional. They reduce regulatory risk, avoid shifting clinical responsibility, and ensure the product remains compatible with real-world public screening programs.
Users & Responsibility Model
ClearSight is designed around a deliberately narrow set of users and responsibilities. In a public screening context, ambiguity around "who is responsible for what" is a common source of failure. This section defines those boundaries explicitly.
Primary User: Screening Program Operators
Screening program operators are the primary users of ClearSight, configuring workflows and using aggregated insights to improve follow-up adherence at scale. Their responsibilities include: defining follow-up workflows for different screening outcomes, configuring reminder rules and escalation logic, monitoring follow-up adherence at a population level, identifying where drop-off occurs across cohorts or regions, and exporting aggregated data for care coordination or reporting. Importantly, operators interact with aggregated data only. ClearSight does not create patient-level task lists, inboxes, or case management queues for program staff.
Secondary User: Patients
Patients are the only individual-level users of ClearSight. Their needs are intentionally simple: receive their screening outcome in clear, non-alarming language, understand what the result means at a high level, see one recommended next action, receive reminders until that action is completed or they opt out, and confirm follow-up completion themselves. Patients do not "use" ClearSight as an application. They receive a single, program-defined digital result page and reminders designed to support follow-up action. ClearSight is designed to reduce uncertainty rather than provide medical guidance.
Explicit Non-User: Clinicians
Clinicians are intentionally not operational users of ClearSight. They do not log into the system daily, do not manage patients within ClearSight, do not receive tasks, alerts, or reminders, and are not responsible for follow-up execution via the platform. This is a deliberate design decision. In many screening programs, clinicians are already operating at capacity. Introducing additional systems, inboxes, or follow-up responsibilities risks increasing burnout and creating parallel workflows that are difficult to sustain. ClearSight is designed to support clinicians indirectly by improving follow-up completion rates — without requiring their ongoing interaction.
Responsibility Boundaries
ClearSight does not change clinical responsibility. It makes responsibility visible. Clinical decisions remain with clinicians and existing healthcare systems. Follow-up ownership remains with patients, as defined by the screening program. System configuration and oversight remain with program operators. The platform itself does not assume clinical judgment or care coordination roles. By explicitly separating these responsibilities, ClearSight avoids silently shifting accountability while still addressing a real, systemic gap in screening outcomes.
Patients (primary users)
Do:
- Receive screening results
- Understand the recommended next action
- Initiate follow-up
- Confirm follow-up completion
- Opt out of reminders at any time
Do not:
- Receive medical advice
- Manage care pathways
- Interact with clinicians via ClearSight

Screening Program Operators (configuration & oversight)
Do:
- Define follow-up workflows
- Configure reminder rules
- Monitor aggregate adherence
- Review cohort-level trends
Do not:
- Manage individual patients
- Send manual reminders
- Own clinical decisions
- Receive patient-level task lists

Clinicians (explicit non-users)
Do not:
- Log into ClearSight
- Receive alerts or inbox items
- Track follow-up completion
- Manage patients within the system
Clinical responsibility remains unchanged.
ClearSight enforces clear responsibility boundaries to avoid shifting clinical accountability or creating hidden operational workload.
Solution Overview
ClearSight is designed as a lightweight, downstream layer that translates screening outputs into completed follow-up actions. It does not change how screening is performed or how clinical decisions are made. Instead, it focuses on the narrow but critical gap between results delivery and patient action. At a high level, the system operates in five steps.
Structured Screening Inputs
ClearSight ingests structured outputs from existing screening systems. These inputs are intentionally minimal and standardized to avoid scope creep and regulatory risk. Typical inputs include: screening result category (e.g. no retinopathy, mild findings, referral recommended), screening date, and optional contextual bands (e.g. age band, diabetes duration band, prior screening history). ClearSight does not ingest raw retinal images, free-text clinical notes, diagnoses, or treatment plans.
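The minimal input contract described above can be sketched as a small schema. The field and category names below are illustrative assumptions, not a confirmed ClearSight data model; the point is that the allow-listed fields are the entire ingest surface.

```python
from dataclasses import dataclass
from typing import Optional

# Allowed result categories; a real program would define these centrally.
RESULT_CATEGORIES = {"no_retinopathy", "mild_findings", "referral_recommended"}


@dataclass(frozen=True)
class ScreeningInput:
    """Minimal structured input ClearSight ingests -- nothing clinical beyond this."""
    patient_ref: str                              # pseudonymized identifier, never a real identity
    result_category: str                          # one of RESULT_CATEGORIES
    screening_date: str                           # ISO date, e.g. "2024-05-01"
    age_band: Optional[str] = None                # e.g. "50-59"
    diabetes_duration_band: Optional[str] = None  # e.g. "10+ years"

    def __post_init__(self):
        # Unknown categories fail at the boundary instead of entering the system.
        if self.result_category not in RESULT_CATEGORIES:
            raise ValueError(f"Unknown result category: {self.result_category}")
```

Keeping the schema this narrow is what makes the "no images, no free text, no diagnoses" boundary enforceable in code rather than in policy alone.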
Risk Tier Assignment
Based on the screening output and limited contextual data, ClearSight assigns the patient to a coarse risk tier (e.g. low, medium, high). These tiers are not clinical judgments. They are used solely to determine follow-up urgency, tailor reminder timing and frequency, and support population-level analysis. Risk tiering is designed to be explainable and auditable.
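A tier assignment like this can stay entirely rule-based, which also serves as the AI fallback path described later. The specific rules below are an illustrative sketch, not a clinical protocol; each program would define its own mapping.

```python
def assign_risk_tier(result_category: str, prior_missed_followup: bool = False) -> str:
    """Rule-based tier assignment: the default (and AI-fallback) path.

    Tiers drive reminder urgency only -- they are not clinical judgments.
    """
    if result_category == "referral_recommended":
        return "high"
    if result_category == "mild_findings":
        # Prior non-adherence nudges the tier up because drop-off risk is higher.
        return "high" if prior_missed_followup else "medium"
    return "low"  # no_retinopathy and other routine outcomes
```

Because every branch is explicit, the tier assigned to any patient can be explained and audited by pointing at a single rule.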
Single Recommended Next Action
For each patient, ClearSight presents one primary recommended next action. Examples include: "Book an eye clinic appointment within X weeks," "Repeat screening at the next scheduled interval," or "Contact your GP or screening provider." The platform deliberately avoids presenting multiple competing options. Reducing choice overload is treated as a core adherence strategy. ClearSight does not provide medical advice or alternative care pathways.
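The "one action per result category" rule amounts to a program-defined lookup with no branching. The wording below is a placeholder, and an unknown category fails loudly rather than falling back to generated text.

```python
# Program-defined mapping from result category to the one recommended action.
# The phrasing and follow-up windows are placeholders supplied by each program.
NEXT_ACTION = {
    "no_retinopathy": "Repeat screening at the next scheduled interval.",
    "mild_findings": "Repeat screening at the next scheduled interval.",
    "referral_recommended": "Book an eye clinic appointment within the program-defined window.",
}


def recommended_action(result_category: str) -> str:
    """Return exactly one next action; unknown categories raise KeyError so
    no patient ever sees system-generated advice."""
    return NEXT_ACTION[result_category]
```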
Reminder & Follow-Up Support
ClearSight supports patients through reminders that are time-bound, risk-aware, and adaptive to likelihood of drop-off. Reminder logic is designed to optimize timing and frequency rather than message content. Messages remain standardized, neutral, and non-alarming. Patients can opt out at any time. If AI-assisted optimization is unavailable or disabled, ClearSight falls back to static, rule-based reminder schedules without loss of core functionality.
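The static fallback schedule mentioned above can be sketched as a per-tier list of day offsets. The intervals are illustrative assumptions; real values come from program policy. Confirmation or opt-out always empties the plan.

```python
from datetime import date, timedelta

# Static fallback schedule: days after result delivery at which reminders fire.
FALLBACK_SCHEDULE = {
    "high":   [3, 7, 14],
    "medium": [7, 21],
    "low":    [30],
}


def reminder_dates(delivered: date, risk_tier: str,
                   confirmed: bool = False, opted_out: bool = False) -> list[date]:
    """Rule-based reminder plan used when AI optimization is off or unavailable.

    Confirmation or opt-out always yields an empty plan -- reminders never
    continue past either event.
    """
    if confirmed or opted_out:
        return []
    return [delivered + timedelta(days=d) for d in FALLBACK_SCHEDULE[risk_tier]]
```

An AI-optimized path would only adjust the offsets in this structure, never the message content, which keeps the optimization layer cleanly separable.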
Confirmation Loop & Outcome Visibility
Patients can confirm when they have completed the recommended follow-up action. This confirmation closes the loop for the patient, stops further reminders, and feeds aggregated adherence data back to the screening program. Program operators gain visibility into follow-up completion rates, drop-off points, and adherence trends across cohorts. ClearSight does not verify clinical outcomes or treatment details. Its role ends at confirming that the follow-up action occurred.
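The confirmation loop is a small state change plus an aggregation step. The in-memory records below are a sketch under the assumption of a simple status field; a real system would persist these transitions.

```python
from collections import Counter


def confirm_followup(record: dict) -> dict:
    """Patient-initiated confirmation: closes the loop and stops reminders."""
    return dict(record, status="completed", reminders_active=False)


def adherence_summary(records: list[dict]) -> dict:
    """The aggregate view operators see -- counts only, never patient-level queues."""
    return dict(Counter(r["status"] for r in records))
```

Note that operators only ever receive the output of `adherence_summary`; the per-record transition stays between the patient and the system.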
Screening Output (result category, screening date, contextual bands)
→ Risk Tier Assignment (low / medium / high)
→ Single Recommended Next Action (program-defined workflow)
→ Follow-Up Orchestration (reminder timing logic, drop-off sensitivity)
→ Program Messaging System (SMS / letters / portal notifications)
→ Follow-Up Confirmation (patient confirms action; reminders stop)
→ Aggregated Adherence Insights (completion rates, drop-off points, cohort trends)
ClearSight translates screening outputs into completed follow-up actions by orchestrating programme-defined workflows and closing the loop with confirmation and aggregated insight.
Interface Design
The following screens are illustrative design concepts developed in collaboration with a product designer. They demonstrate how ClearSight's scope, responsibility model, and analytics focus are expressed visually. Metrics shown are representative placeholders used to explore clarity and interpretability, not live production data.
Patient-facing experience
This screen reflects the decision to present screening results in plain, non-alarming language with a single recommended next action. The interface avoids diagnostic interpretation and makes responsibility explicit: the patient is responsible for arranging follow-up. This mirrors the design intent of a digitally readable letter rather than a dashboard.
Follow-up overview
The follow-up overview provides a high-level view of completion rates, median follow-up time, and distribution by screening outcome. This supports programme oversight without creating patient-level operational responsibility.
Risk & drop-off analysis
This view explores where follow-up failure occurs and how risk is distributed across cohorts. These insights are used to inform reminder policy and programme configuration, not to intervene on individual patients.
Policy change and impact review
Policy impact views allow operators to assess the effect of configuration changes (e.g. reminder timing) on adherence outcomes over time. This supports evidence-based programme decisions without expanding the system into case management.
Operator interfaces are intentionally limited to population-level visibility. ClearSight does not provide patient-level task lists, alerts, or case management tools.
Metrics & Impact Model (Pre-Pilot)
ClearSight is designed to improve follow-up adherence rather than screening participation. At this stage, impact is framed using explicit assumptions rather than observed outcomes.
Key Metrics
- North Star: % of patients completing follow-up within the target window
- Leading Indicators: reminder open rate, reminder response latency, follow-up confirmation rate, opt-out rate
- Risk & Quality Metrics: reminder fatigue indicators, false urgency rate, demographic bias in risk tier assignment
Baseline Assumptions
Based on public screening program reports and stakeholder interviews, follow-up completion after abnormal diabetic retinopathy screening is typically estimated at ~65–75%, with significant variation by risk category and region.
Target Outcome
ClearSight is designed to improve follow-up completion within the recommended window by approximately 10–20 percentage points, primarily by reducing missed or delayed follow-up among medium- and high-risk cohorts.
Adherence Funnel
ClearSight explicitly operates between: Screened → Result delivered → Result understood → Action initiated → Follow-up completed. The platform intervenes only between "result delivered" and "action initiated."
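The funnel above suggests a straightforward evaluation metric: step-to-step conversion rates between adjacent stages. A minimal sketch, assuming per-stage patient counts are available:

```python
def funnel_rates(counts: dict) -> dict:
    """Step-to-step conversion through the adherence funnel.

    `counts` holds how many patients reached each stage; stage names follow
    the funnel above. Rates are rounded for reporting.
    """
    stages = ["screened", "result_delivered", "result_understood",
              "action_initiated", "followup_completed"]
    rates = {}
    for prev, cur in zip(stages, stages[1:]):
        rates[f"{prev}->{cur}"] = round(counts[cur] / counts[prev], 3)
    return rates
```

Expressing the funnel this way makes ClearSight's scope testable: only the transitions between "result_delivered" and "action_initiated" are expected to move under intervention.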
These metrics are designed to support program-level evaluation rather than individual clinical decision-making.
AI Scope & Explainability
ClearSight uses AI as a constrained, supporting capability — not as the core product and not as a source of clinical authority. From the outset, AI was treated as a dependency with clearly defined inputs, outputs, and failure modes, rather than as an autonomous system.
What AI Is Used For
- Risk Stratification: AI assigns patients to coarse risk tiers based on structured screening outputs and minimal contextual data — not to make or infer diagnoses
- Drop-Off Prediction: AI estimates the likelihood that a patient will miss or delay follow-up, used to adjust reminder timing and frequency
- Reminder Optimization: AI optimizes when reminders are sent, not what they say. Message content remains standardized, pre-approved, and non-alarming
What AI Is Explicitly Not Used For
- Diagnosing disease
- Interpreting retinal images
- Inferring treatment pathways
- Generating free-text medical advice
- Personalizing message content
- Replacing program-defined workflows
Model Choices & Explainability
ClearSight prioritizes interpretable, auditable models over opaque or highly complex approaches. Model selection favors logistic regression or gradient boosting, clearly defined feature sets, and stable, bounded outputs (e.g. tiers, probabilities). For every AI-influenced decision, the system can surface which input factors were considered, how they influenced the outcome at a high level, and what the AI output was used for. This explainability is designed for two audiences: patients, who need reassurance and clarity, and program operators, who need confidence in system behavior and oversight capability.
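With a logistic model, the per-feature explainability described above falls out of the model structure itself. The coefficients below are hand-written illustrations, not fitted values; in practice they would come from a trained logistic regression.

```python
import math

# Illustrative, fixed coefficients for a drop-off model (assumed feature names).
COEFFICIENTS = {"prior_missed_followup": 1.2,
                "self_booking_required": 0.8,
                "first_screening": 0.4}
INTERCEPT = -1.5


def dropoff_probability(features: dict) -> tuple[float, dict]:
    """Score drop-off risk and return per-feature contributions.

    The contributions dict is what lets the system surface *which* inputs
    influenced an output and by how much -- the explainability requirement above.
    """
    contributions = {name: COEFFICIENTS[name] * float(value)
                     for name, value in features.items()}
    logit = INTERCEPT + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-logit))
    return round(probability, 3), contributions
```

Because each contribution is a single multiplication, the same numbers that produce the score also serve as its explanation, with no separate attribution method required.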
AI as an Optional Dependency
AI is not a hard requirement for ClearSight to function. If AI services are unavailable, disabled, or deliberately excluded: risk tiering falls back to rule-based logic defined by the screening program, reminders follow static, predefined schedules, and core patient workflows remain fully usable. This design ensures that AI failure does not block patient communication or follow-up support. It also allows programs to adopt ClearSight incrementally, without committing to AI from day one.
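The "AI as optional dependency" pattern reduces to a guarded call with a rule-based fallback. `ai_service` below is a hypothetical callable, assumed to return reminder offsets in days; any failure degrades silently to the static schedule.

```python
def plan_reminders(patient: dict, ai_service=None) -> list[int]:
    """Use AI-optimized timing when available; otherwise fall back to rules.

    AI failure must be non-blocking: a missing, disabled, or crashing service
    never prevents reminders from being scheduled.
    """
    static_schedule = {"high": [3, 7, 14], "medium": [7, 21], "low": [30]}
    if ai_service is not None:
        try:
            return ai_service(patient)
        except Exception:
            pass  # fall through: degrade to the rule-based plan
    return static_schedule[patient["risk_tier"]]
```

This shape also supports incremental adoption: a program can run with `ai_service=None` from day one and wire in the optimizer later without changing the core flow.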
Failure Modes & Safety Considerations
AI systems are probabilistic and can fail in unpredictable ways. ClearSight is designed so that AI failures are visible, contained, and non-blocking. Key safety measures include: bounded outputs (no open-ended generation), explicit confidence thresholds, human-defined workflows that AI cannot override, and monitoring of AI behavior at an aggregate level rather than per-patient intervention. The guiding principle is that AI should improve efficiency and targeting without introducing silent failure modes or shifting responsibility.
While ClearSight can operate safely without AI, its ability to generate population-level insight and inform program optimization depends on AI-assisted analysis.
Known Failure Scenarios & Escalation Boundaries
ClearSight deliberately avoids automated clinical escalation in early versions. This is an intentional boundary informed by stakeholder concerns around implied responsibility and resourcing. In cases where high-risk patients repeatedly fail to complete follow-up:
- ClearSight treats this as a population-level signal rather than an individual clinical trigger
- Patterns of repeated non-adherence are surfaced to program operators for policy review
- Any escalation beyond reminder reinforcement must be explicitly defined, resourced, and owned by the screening program
This boundary is intended to preserve trust with clinicians, patients, and program operators by avoiding false assurances or silent handoffs.
ClearSight is designed to highlight where escalation may be required, not to silently assume clinical responsibility.
Data, Privacy & GDPR Posture
ClearSight is designed for use within EU public screening programs, where data protection, patient trust, and regulatory clarity are non-negotiable. From the outset, data handling was treated as a product design problem, not a compliance afterthought. Rather than attempting to store or centralize medical records, ClearSight adopts a data-minimization–first posture aligned with GDPR principles and public-sector expectations.
Data Minimization by Design
ClearSight ingests and stores only the minimum data required to support follow-up adherence: a pseudonymized patient identifier, screening result category, screening date, and optional contextual bands (e.g. age range, diabetes duration band, prior screening history). ClearSight explicitly does not ingest raw retinal images, diagnoses or treatment plans, free-text clinical notes, or detailed demographic or identity data. By constraining data inputs early, the platform reduces both regulatory exposure and the blast radius of potential failures. This posture aligns with GDPR principles of data minimization, purpose limitation, and storage limitation.
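Data minimization is most effective when enforced at the ingest boundary. A minimal sketch, with an assumed allow-list of field names: any payload carrying anything else is rejected before storage, so disallowed data never needs protecting.

```python
# Allow-list of fields ClearSight will accept; everything else is rejected
# at the boundary rather than stored and filtered later.
ALLOWED_FIELDS = {"patient_ref", "result_category", "screening_date",
                  "age_band", "diabetes_duration_band", "prior_screening"}


def validate_ingest(payload: dict) -> dict:
    """Reject any payload carrying fields outside the minimal schema.

    Failing loudly here is the point: images, notes, and diagnoses can never
    leak into the system through an over-broad integration.
    """
    extra = set(payload) - ALLOWED_FIELDS
    if extra:
        raise ValueError(f"Disallowed fields rejected at ingest: {sorted(extra)}")
    return payload
```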
Pseudonymization & Identity Separation
All patient-facing workflows operate on pseudonymized identifiers. ClearSight does not act as a system of record for patient identity. Where identity resolution is required (e.g. message delivery), it is handled via controlled interfaces with existing program infrastructure, rather than replicated inside the platform. This separation limits the amount of identifiable data stored, reduces accidental exposure through logs or analytics, and simplifies data retention and deletion.
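One common way to derive stable pseudonymous references is a keyed hash, where the key stays with the screening program rather than with ClearSight. This is a sketch of the idea, not a confirmed ClearSight scheme; a production design would add key rotation and a documented salting policy.

```python
import hashlib
import hmac


def pseudonymize(real_identifier: str, program_secret: bytes) -> str:
    """Derive a stable pseudonymous reference from a real identifier.

    HMAC keeps the mapping one-way for anyone without the secret, which is
    held by the screening program -- ClearSight only ever sees the output.
    """
    digest = hmac.new(program_secret, real_identifier.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]  # short, stable, non-reversible reference
```

Because the same identifier and key always produce the same reference, the program can route messages to the right patient while ClearSight's stored data remains pseudonymous.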
Consent, Transparency & Opt-Out
ClearSight assumes explicit patient consent as a prerequisite for participation. Patients are informed why they are receiving messages, can opt out at any time, and can complete follow-up without continued platform engagement. Opting out stops reminders immediately, does not block care, and does not require clinician intervention. This ensures that adherence support remains voluntary and proportionate.
Controller / Processor Clarity
ClearSight is designed to operate as a data processor, not a data controller. Screening programs retain ownership of patient data, program operators configure workflows and retention policies, and ClearSight processes data strictly within those defined boundaries. This separation is important not only for GDPR alignment, but also for maintaining trust with public-sector operators and avoiding silent scope expansion.
Retention & Deletion Principles
Data retention is limited by default and driven by purpose. Typical principles include: retaining patient-level data only as long as needed to support follow-up, aggregating and anonymizing data for longer-term reporting, and supporting deletion requests in line with program policies. ClearSight avoids indefinite storage of patient-linked data and does not repurpose data beyond adherence support.
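Purpose-driven retention can be expressed as a periodic purge over patient-linked rows. The 180-day default below is a placeholder for program-defined policy; aggregated, anonymized counts would be computed before purging.

```python
from datetime import date, timedelta


def purge_expired(records: list[dict], today: date,
                  retention_days: int = 180) -> list[dict]:
    """Drop patient-linked rows once they can no longer support follow-up.

    Retention is bounded by default; nothing patient-linked is kept
    indefinitely or repurposed beyond adherence support.
    """
    cutoff = today - timedelta(days=retention_days)
    return [r for r in records if r["screening_date"] > cutoff]
```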
Privacy as a Product Constraint
Rather than treating privacy as a compliance checklist, ClearSight treats it as a design constraint that shapes system architecture. A formal Data Protection Impact Assessment (DPIA) would be required prior to pilot deployment. This posture influenced decisions such as avoiding free-text inputs, limiting personalization, preferring coarse risk tiers over granular scores, and prioritizing aggregate insights over individual tracking. These constraints reduce complexity, improve explainability, and make the system easier to operate responsibly at scale.
Key Product Decisions
ClearSight is intentionally shaped by a small number of high-impact product decisions made before full implementation. These decisions were driven less by feature ambition and more by risk management, system clarity, and long-term operability in a public healthcare context. Rather than deferring difficult trade-offs, ClearSight treats early constraint-setting as a core product responsibility.
Do Not Create Clinician Task Lists or Inboxes
ClearSight does not generate patient-level task lists, alerts, or inboxes for clinicians.
In many screening programs, follow-up breakdowns are attributed to "communication gaps," but introducing new clinician-facing tools often worsens the problem by adding parallel workflows. This risks:
- Increasing clinician workload
- Fragmenting responsibility
- Creating implicit expectations that clinicians will manage follow-up within yet another system
Trade-offs accepted:
- Reduced ability to intervene on individual cases
- Less perceived control at the clinician level
Benefits:
- No additional daily burden on clinicians
- Clear responsibility boundaries
- Higher likelihood of adoption within existing screening programs
One Recommended Next Action per Patient
ClearSight presents a single primary next action for each patient, rather than multiple options or branching pathways.
Patients receiving screening results are often uncertain, anxious, or unfamiliar with the healthcare system. Presenting multiple actions can increase cognitive load and decision paralysis. ClearSight prioritizes clarity over flexibility, adherence over optionality. The recommended action reflects the screening program's predefined workflow, not system-generated medical advice.
Trade-offs accepted:
- Reduced personalization
- Fewer alternative pathways surfaced in-app
Benefits:
- Improved follow-up completion rates
- Lower patient confusion
- Simpler, more auditable patient flows
Ingest Structured Outputs Only
ClearSight ingests only structured screening outputs and minimal contextual data.
Allowing raw images, free-text notes, or diagnoses into the system would:
- Significantly expand regulatory scope
- Increase data protection risk
- Blur the boundary between adherence support and clinical decision-making
Trade-offs accepted:
- Loss of potentially rich clinical detail
- Reduced ability to fine-tune AI models using unstructured data
Benefits:
- Reduced regulatory exposure
- Easier integration with existing systems
- Clear, defensible system boundaries
Population Metrics Over Individual Case Management
ClearSight prioritizes aggregated adherence metrics rather than individual patient tracking for program operators.
Screening programs operate at scale. Their ability to improve outcomes depends on understanding where drop-off occurs, which cohorts are at higher risk, and which interventions are effective overall. Individual case management introduces operational complexity and responsibility shifts that ClearSight explicitly avoids.
Trade-offs accepted:
- Less granularity for one-off interventions
- Reduced visibility into single-patient journeys
Benefits:
- Actionable insights at the program level
- Better support for policy and workflow adjustments
- Lower operational burden on staff
Treat AI as Optional Infrastructure
ClearSight is designed to function fully even when AI components are unavailable or disabled.
AI services introduce operational risk, dependency complexity, and procurement barriers in public-sector environments. Making AI optional reduces single points of failure, allows incremental adoption, and avoids positioning AI as a prerequisite for value. Fallback behavior is rule-based and program-defined.
Trade-offs accepted:
- Slower optimization without AI
- Less dynamic reminder behavior in fallback mode
Benefits:
- Greater system resilience
- Easier pilot and rollout
- Clear separation between core functionality and optimization layers
Avoid Early Optimization and Automation
ClearSight deliberately avoids early optimization of workflows, reminders, or AI models.
Before optimizing, the system must be understandable, explainable, and behave predictably. Premature optimization risks locking in incorrect assumptions and increasing rollback costs in a regulated environment.
Trade-offs accepted:
- Fewer "smart" behaviors early on
- Slower perceived innovation
Benefits:
- Safer iteration
- Easier auditing and adjustment
- Stronger foundation for future complexity
Across all decisions, the guiding question was: "Does this reduce ambiguity and risk, or does it quietly shift responsibility?" When the answer was unclear, decisions consistently defaulted toward restraint.
Current Status & Progress
ClearSight is an early-stage, design-led product initiative. At this stage, the focus has been on problem framing, scope definition, and risk reduction rather than feature completeness or rapid implementation. This work was approached deliberately as a foundation-setting phase, recognizing that in a regulated healthcare context, early decisions around boundaries and responsibility have a disproportionate impact on long-term viability.
What Is Defined Today
The following elements are intentionally locked before deeper implementation: problem scope and non-goals (ClearSight is explicitly positioned as a non-diagnostic, non-clinical, downstream adherence platform), user roles and responsibility boundaries (patients, screening program operators, and clinicians are clearly separated), data boundaries and privacy posture (structured inputs only, pseudonymization by default, explicit consent, and controller/processor clarity), AI scope and failure behavior (AI is constrained to risk stratification and optimization, with rule-based fallbacks and explainability built in), and high-level system flow (from screening output to patient action to aggregated outcome visibility). These decisions are treated as prerequisites for implementation, not artifacts to be retrofitted later.
What Exists in Practice
At the time of writing, ClearSight consists of: a defined product scope and responsibility model, documented product decisions and constraints, drafted end-to-end system flows, initial technical architecture planning, and a structured case study capturing assumptions, trade-offs, and rationale. While user interfaces and production code are not yet complete, the core product intent and system behavior are sufficiently defined to support informed implementation and iteration.
What Is Intentionally Deferred
Certain elements are deliberately postponed until the foundation proves sound: detailed UI design and interaction refinement, reminder optimization tuning and model training, performance metrics and outcome benchmarking, and integrations with live screening program infrastructure. Deferring these elements avoids premature optimization and reduces the cost of revisiting early assumptions.
How Progress Is Evaluated at This Stage
Success at this stage is measured qualitatively rather than through delivery metrics. Key signals include: clarity of responsibility and system boundaries, internal consistency of decisions across product, data, and AI usage, ability to explain and defend trade-offs, and readiness for safe pilot implementation. This framing reflects the reality of early-stage work in regulated domains, where correctness and trust are prerequisites for scale.
At this stage, ClearSight is positioned to support a limited pilot focused on validating assumptions rather than demonstrating scale. This staging reflects feedback from stakeholders who emphasized the cost of revisiting responsibility and data decisions once systems are deployed.
What's Next
Future work on ClearSight is intentionally framed around earned complexity, not feature expansion for its own sake. The next phase focuses on validating assumptions, testing system behavior in realistic conditions, and preparing for a safe pilot, rather than scaling functionality prematurely.
Near-Term Focus: Making the Core Real
The immediate priority is to make the defined system concrete while preserving the boundaries established so far. This includes: prototyping the patient-facing flow from screening result to follow-up confirmation, validating that "one recommended next action" is understandable and actionable across different result categories, implementing the reminder system with rule-based logic before introducing optimization, and ensuring opt-out, fallback, and failure scenarios are fully supported from day one. The goal of this phase is not polish, but confidence: confirming that the core flow works without relying on AI or complex integrations.
Preparing for a Pilot Context
ClearSight is designed to be piloted within an existing public screening program rather than launched as a standalone product. Preparation for such a pilot would include: defining minimal integration points with screening systems (structured outputs only), aligning reminder workflows with program-defined follow-up policies, validating consent, opt-out, and retention behavior with real operational constraints, and ensuring that aggregated adherence metrics answer questions program operators actually care about. This work prioritizes operational fit and trust over scale.
Introducing AI Incrementally
AI-assisted capabilities are treated as optional enhancements, not prerequisites. Only after the core system proves stable would the next steps include: training and validating interpretable risk stratification models, introducing drop-off prediction to refine reminder timing, and monitoring AI behavior at an aggregate level to detect drift or unintended effects. AI would be introduced gradually, with clear rollback paths and rule-based fallbacks always available.
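The "rule-based fallbacks always available" principle can be illustrated with a small guardrail wrapper. This is a hedged sketch: the `RiskModel` interface, the bounds, and the baseline cadence are all assumptions for illustration, not a real model or API.

```typescript
// Sketch of AI-assisted reminder timing with a rule-based fallback.
// The RiskModel interface and all bounds below are illustrative assumptions.

interface RiskModel {
  // Returns a suggested reminder offset in days, or null if unavailable.
  suggestReminderDays(features: Record<string, number>): number | null;
}

const RULE_BASED_DEFAULT_DAYS = 7; // program-defined baseline cadence (placeholder)

// Guardrails: model suggestions outside program-approved bounds are discarded.
const MIN_DAYS = 2;
const MAX_DAYS = 21;

function reminderDays(model: RiskModel | null, features: Record<string, number>): number {
  if (model === null) return RULE_BASED_DEFAULT_DAYS; // AI is optional: hard fallback
  try {
    const suggested = model.suggestReminderDays(features);
    if (suggested === null || !Number.isFinite(suggested)) return RULE_BASED_DEFAULT_DAYS;
    if (suggested < MIN_DAYS || suggested > MAX_DAYS) return RULE_BASED_DEFAULT_DAYS;
    return Math.round(suggested);
  } catch {
    return RULE_BASED_DEFAULT_DAYS; // model failure degrades gracefully
  }
}
```

Because the fallback path is the default for every failure mode (absent model, invalid output, out-of-bounds suggestion, thrown error), rolling back the AI component is equivalent to passing `null`.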
What Is Intentionally Out of Scope (For Now)
Several areas are deliberately excluded from near-term plans:
- Clinician-facing dashboards or task management
- Patient-level case management by program staff
- Free-text personalization or generative messaging
- Automated clinical escalation pathways
- Deep integration with EHRs or imaging systems
These exclusions are treated as active decisions, not missing features. Each would materially change the responsibility and regulatory profile of the system.
Long-Term Direction (If Earned)
If ClearSight proves valuable and operable in a pilot context, longer-term evolution could include: deeper population-level analytics to support program planning, cross-program benchmarking using fully anonymized data, and adaptation to adjacent screening programs with similar adherence challenges. Any such expansion would be driven by demonstrated need and institutional trust, not by product ambition alone.
ClearSight is intentionally built as a conservative system that earns complexity over time. Progress is measured not by how much the system does, but by how clearly responsibility is defined and how safely it operates in a real healthcare context. This approach reflects how high-impact systems are built in public and regulated environments — incrementally, transparently, and with restraint.
Technical Context
This section provides high-level technical context for ClearSight. It is included to illustrate architectural intent and delivery considerations, not as an implementation specification. Technology choices were guided by reliability, explainability, and compatibility with EU public-sector environments, rather than novelty or optimization.
System Architecture Overview
ClearSight is designed as a modular, service-oriented system with clear separation between: patient-facing experience, program configuration and analytics, AI-assisted optimization, and external screening system integrations. This separation supports independent evolution of components, clear responsibility boundaries, and safer iteration in a regulated context.
Integration & Data Flow Considerations
ClearSight is designed to integrate with national or regional screening systems via event-based ingestion of structured screening results. Key design considerations include:
- Asynchronous ingestion of screening events rather than polling
- Idempotent processing to avoid duplicate reminders
- Upstream identity resolution, with ClearSight operating on pseudonymized keys
- Explicit rejection of unstructured or free-text payloads
The system is compatible with HL7 / FHIR-style payloads without requiring deep EHR integration.
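The idempotency and structured-payload constraints can be sketched together in a small ingestion handler. The event field names here are assumptions for illustration (real payloads would follow program schemas), and the in-memory set stands in for a durable deduplication store.

```typescript
// Sketch of idempotent ingestion of structured screening events.
// Field names are illustrative; the Set stands in for a durable dedup store.

interface ScreeningEvent {
  eventId: string;        // upstream-assigned unique id, basis for idempotency
  pseudonymKey: string;   // pseudonymized patient key, resolved upstream
  resultCategory: string; // structured category only; free text is rejected
}

class EventIngestor {
  private seen = new Set<string>();
  readonly accepted: ScreeningEvent[] = [];

  // Duplicates are no-ops, so redelivery can never trigger a second reminder.
  ingest(raw: unknown): "accepted" | "duplicate" | "rejected" {
    if (typeof raw !== "object" || raw === null) return "rejected";
    const e = raw as Partial<ScreeningEvent> & { freeText?: string };
    // Explicitly reject unstructured payloads and malformed events.
    if (typeof e.eventId !== "string" || typeof e.pseudonymKey !== "string"
        || typeof e.resultCategory !== "string" || "freeText" in e) {
      return "rejected";
    }
    if (this.seen.has(e.eventId)) return "duplicate"; // idempotent replay
    this.seen.add(e.eventId);
    this.accepted.push(e as ScreeningEvent);
    return "accepted";
  }
}
```

Treating duplicate delivery as a routine outcome rather than an error is what makes asynchronous, at-least-once event delivery safe for patient-facing communication.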
Frontend
React + TypeScript with clean, restrained UI design focused on clarity and accessibility over visual density. The frontend is intentionally thin: no complex client-side logic, no medical decision-making, and minimal state persistence. This reduces risk and simplifies auditing and testing.
Backend
Node.js + TypeScript, exposing REST APIs with explicit versioning and structured logging for operational visibility. Backend services are responsible for enforcing workflow rules, applying configuration defined by screening programs, coordinating reminder logic and confirmation events, and upholding data minimization and access boundaries.
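"Applying configuration defined by screening programs" can be sketched as a merge of program overrides onto conservative defaults, with system-wide clamps so misconfiguration cannot exceed approved bounds. The keys, defaults, and limits below are illustrative assumptions.

```typescript
// Sketch of program-defined configuration applied over safe defaults.
// Keys, default values, and clamping bounds are illustrative assumptions.

interface ProgramConfig {
  reminderChannel: "sms" | "letter";
  maxReminders: number;
  retentionDays: number;
}

const DEFAULTS: ProgramConfig = {
  reminderChannel: "letter", // most conservative default channel
  maxReminders: 2,
  retentionDays: 90,
};

// Merge a partial program override into defaults, clamping values so a
// misconfigured program cannot exceed system-wide bounds.
function resolveConfig(override: Partial<ProgramConfig>): ProgramConfig {
  const merged = { ...DEFAULTS, ...override };
  merged.maxReminders = Math.min(Math.max(merged.maxReminders, 0), 5);
  merged.retentionDays = Math.min(Math.max(merged.retentionDays, 1), 365);
  return merged;
}
```

Keeping the clamps in code rather than in configuration means data-minimization and contact-frequency limits hold regardless of how any individual program is set up.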
Reliability & Failure Handling
ClearSight is designed to tolerate partial failures without blocking patient communication. Examples include: retryable message delivery failures, delayed or missing confirmation events, and temporary downstream outages. Reminder workflows are idempotent and resumable, and failure states are observable at an aggregate level for operational review.
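Retryable delivery failure can be sketched as a generic retry wrapper with exponential backoff. The attempt counts and delays are illustrative assumptions; a production version would add jitter and record exhausted attempts for the aggregate failure views mentioned above.

```typescript
// Sketch of retryable message delivery with exponential backoff.
// Attempt counts and delays are illustrative assumptions.

async function withRetry<T>(
  attempt: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 100
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < maxAttempts; i++) {
    try {
      return await attempt(); // success: stop retrying
    } catch (err) {
      lastError = err;
      // Exponential backoff between attempts; a real system would add jitter.
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  throw lastError; // exhausted: surface the failure for operational review
}
```

Because the underlying reminder workflows are idempotent, retrying a delivery that actually succeeded upstream cannot produce a duplicate patient contact.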
Data Layer
PostgreSQL with relational schema and strong data integrity constraints. The data model favors explicit relationships, minimal patient-linked data, and easy deletion and aggregation. ClearSight is intentionally not a document store or medical record repository.
AI / ML Service
Separate Python service using FastAPI and scikit-learn with interpretable models only. AI services are loosely coupled, optional, and replaceable without affecting core workflows. This allows ClearSight to degrade gracefully and avoids hard dependencies on AI availability.
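Aggregate-level monitoring of the optional AI service can be illustrated with a simple drift check: comparing the mean of recent model scores against a baseline window. This is a deliberately naive sketch (a real system might use a statistical test); the tolerance and window handling are assumptions.

```typescript
// Sketch of aggregate-level drift detection for an optional AI service.
// The tolerance value and mean-comparison approach are illustrative; real
// monitoring might use a statistical test over larger windows.

function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

// Flags drift when the recent mean score departs from the baseline mean by
// more than `tolerance`. Operates on aggregates only, never individual patients.
function driftDetected(baseline: number[], recent: number[], tolerance = 0.1): boolean {
  if (baseline.length === 0 || recent.length === 0) return false;
  return Math.abs(mean(recent) - mean(baseline)) > tolerance;
}
```

A drift flag would trigger operational review and, if needed, the rule-based fallback path, keeping core workflows unaffected by model behavior.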
Infrastructure & Deployment
Containerized using Docker with EU-based managed hosting, HTTPS everywhere, and environment separation (dev / test / prod). The system is designed to be deployable by public screening bodies, public–private partnerships, or managed service providers. No reliance on proprietary or opaque infrastructure components.
The stack was chosen to minimize operational risk, favor long-term maintainability, support auditability and explainability, and avoid lock-in and unnecessary complexity. The technology supports the product's constraints — not the other way around.