A clinical documentation workspace for solo clinicians, designed to support drafting, reviewing, approving, and exporting notes with strict human control and auditability.
DocNotes is intentionally not an EHR and does not replace existing clinical systems. It operates alongside them via explicit copy-paste workflows, prioritizing clinician control and accountability over automation.
Problem & Context
Clinical documentation has become an increasing burden for clinicians. In practice, many clinicians now turn to general-purpose AI tools to rewrite or structure notes outside their primary systems.
This creates serious workflow and safety gaps:
No clear separation between drafts and final records
No audit trail for AI usage or document changes
No immutable, approved version of a note
No versioning or amendment history
No clear accountability boundaries
No GDPR-aligned handling of sensitive data
The issue is not simply that AI is being used, but that it is being used outside any workflow designed for clinical accountability. DocNotes was created to address this workflow gap, not to replace clinical systems or automate medical decision-making.
What This Product Is / Is Not
What DocNotes Is
A browser-based clinical documentation workspace
A tool for drafting, reviewing, approving, and exporting clinical notes
An assistive system where the clinician remains fully in control
Designed to operate alongside existing EHRs
Built conservatively to validate safety and usability before broader release
What DocNotes Is Not
Not an EHR or clinical record system
Not a diagnostic or decision-making tool
Not autonomous or self-submitting
Not a patient-facing platform
Not an enterprise or multi-practice system (at this stage)
These boundaries are intentional and actively enforced.
Core Design Principles
DocNotes is shaped by a small set of non-negotiable design principles. Each principle directly constrains what the product can and cannot do.
1
Human-in-the-loop by default
AI is used only to rewrite or structure clinician-provided text. Every output must be reviewed and explicitly approved by the clinician before it becomes part of the record.
2
Draft ≠ Approved
Drafts are editable and transient. Approved documents are immutable. Any change requires an amendment, creating a new version while preserving prior records.
3
Auditability over convenience
Sensitive actions — including AI generation, approvals, exports, and authentication events — are logged. This prioritizes traceability and accountability, even when it introduces friction.
4
Minimal data by design
Only essential patient identifiers are stored. There is no background syncing and no patient portal, and an optional incognito mode supports session-only use.
5
Conservative scope
Features that increase clinical, regulatory, or liability risk are intentionally excluded. The product favors clear boundaries over feature completeness.
Primary User Workflow
DocNotes is designed around a single, explicit clinical documentation workflow. The goal is to support clinicians from first draft to final record without collapsing responsibility or control.
End-to-End Flow
Drafting encourages speed and flexibility
AI assists without assuming clinical responsibility
Approval establishes a clear point of accountability
Immutability preserves record integrity
Amendments preserve history without rewriting the past
1
Create a draft
A clinician starts a new draft within a patient context or session. Drafts are explicitly marked as non-final and are treated as working material.
2
Capture rough notes
The clinician enters unstructured or semi-structured notes freely. No assumptions are made about format or completeness at this stage.
3
Optional AI-assisted rewrite
The clinician can request an AI rewrite (e.g. SOAP-style structuring). AI output is always based solely on clinician-provided text and never runs autonomously.
4
Review and edit
The rewritten content is reviewed, edited, or discarded by the clinician. The clinician remains fully responsible for the final content.
5
Explicit approval
Once the clinician is satisfied, the draft is explicitly approved. This is a deliberate action, not an automatic transition.
6
Immutable approved document
After approval, the document becomes immutable. The approved version is locked and preserved as a stable record.
7
Amendments (if required)
Any later change creates an amendment rather than modifying the original. Prior versions remain accessible for traceability.
8
Export and handoff
Approved documents are copied into the clinician's primary system (e.g. EHR). DocNotes does not submit or sync data automatically.
The workflow intentionally mirrors how clinicians already work — while adding structure, traceability, and clear state transitions.
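The draft → approval → amendment transitions above can be sketched as a minimal state model. This is a hypothetical illustration, not the actual DocNotes implementation; the class, field, and method names are assumptions chosen for clarity.

```python
from dataclasses import dataclass

@dataclass
class NoteVersion:
    text: str
    state: str = "DRAFT"                      # "DRAFT" or "APPROVED"
    predecessor: "NoteVersion | None" = None  # link back to the amended version

    def edit(self, new_text: str) -> None:
        # Drafts are editable and transient; approved documents are not.
        if self.state != "DRAFT":
            raise PermissionError("approved documents are immutable; create an amendment")
        self.text = new_text

    def approve(self, confirmed_by_clinician: bool) -> None:
        # Approval is a deliberate, explicit action, never automatic.
        if self.state != "DRAFT":
            raise PermissionError("only drafts can be approved")
        if not confirmed_by_clinician:
            raise ValueError("approval requires explicit clinician confirmation")
        self.state = "APPROVED"

    def amend(self) -> "NoteVersion":
        # Amendments create a new version; the prior record stays intact.
        if self.state != "APPROVED":
            raise PermissionError("only approved documents can be amended")
        return NoteVersion(text=self.text, predecessor=self)

# Usage: draft -> explicit approval -> amendment preserving history.
v1 = NoteVersion("Patient reports mild headache.")
v1.approve(confirmed_by_clinician=True)
v2 = v1.amend()
v2.edit("Patient reports mild headache; resolved after rest.")
assert v1.state == "APPROVED" and v2.predecessor is v1
```

The key property the sketch encodes is that there is no path from draft to record without the explicit `approve` call, and no path that mutates an approved version in place.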
Key Product Decisions
DocNotes is the result of a small number of deliberate, high-impact product decisions. Each decision was made to balance usability, safety, and scope in a healthcare-adjacent context.
1
Separate Drafts from Approved Documents
Decision
Treat drafts and approved documents as fundamentally different states.
Why
In practice, clinicians think in drafts, but systems often blur the line between "working notes" and "the record." This creates ambiguity around responsibility, especially when AI is involved.
Trade-off
Introduced an extra explicit step (approval)
Slowed down the moment where a document becomes final
Outcome
Clear accountability boundary
Immutable approved records
Safer integration of AI assistance without collapsing responsibility
2
AI as Rewrite Assistance Only (No Autonomy)
Decision
Restrict AI usage to rewriting and structuring clinician-provided text.
Why
Allowing AI to infer, suggest, or auto-complete clinical content would:
introduce false authority
blur accountability
increase clinical and regulatory risk
Trade-off
Less automation
Reduced "wow factor" compared to fully generative tools
Outcome
AI supports clinicians without replacing judgment
Every word in the final document remains clinician-owned
AI failures are visible and containable
3
Explicit Approval as a Required Action
Decision
Require a deliberate approval action before a document becomes immutable.
Why
Passive transitions (e.g. autosave = final) are common sources of silent failure. Approval creates a clear, auditable moment of responsibility.
Trade-off
Added friction to the workflow
Required clinicians to consciously "sign off"
Outcome
Clear accountability point
Reliable audit trail
Reduced risk of accidental finalization
4
Auditability as a First-Class Concern
Decision
Log sensitive actions such as AI generation, approvals, exports, and authentication events.
Why
Once AI enters clinical documentation workflows, traceability becomes critical — not for surveillance, but for accountability and learning.
Trade-off
Additional implementation complexity
Some loss of convenience compared to silent systems
Outcome
Clear visibility into how documents are created
Safer operational debugging
Foundation for responsible future expansion
5
Application-Level Protection of Patient Identifiers
Decision
Protect patient identifiers at the application layer rather than relying solely on infrastructure controls.
Why
Patient names and identifiers appear across multiple workflows (lists, drafts, logs). Application-level safeguards reduce the risk of accidental exposure through logs, debugging, or operational tooling.
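One way such application-level protection could work is keyed pseudonymization before persistence or logging. This is a minimal sketch under assumed requirements; the key name, token format, and function are illustrative, not DocNotes' actual mechanism, and a real deployment would keep the key in a secrets manager.

```python
import hashlib
import hmac

# Hypothetical secret; in practice this would come from a secrets manager.
SECRET_KEY = b"replace-with-managed-secret"

def pseudonymize(patient_identifier: str) -> str:
    """Derive a stable, non-reversible token from a patient identifier.

    The same input always yields the same token, so lists, drafts, and
    logs stay linkable without ever containing the plaintext name.
    """
    digest = hmac.new(SECRET_KEY, patient_identifier.encode("utf-8"),
                      hashlib.sha256).hexdigest()
    return f"pt_{digest[:16]}"

# Determinism keeps records linkable; the plaintext never leaves the function.
assert pseudonymize("Jane Doe") == pseudonymize("Jane Doe")
assert "Jane" not in pseudonymize("Jane Doe")
```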
6
Explicit Copy-Paste Export (No Automatic Syncing)
Decision
Design DocNotes to export content via explicit copy-paste rather than automated syncing.
Why
Automatic integration with clinical systems would:
expand regulatory scope
obscure responsibility boundaries
introduce failure modes outside clinician control
Trade-off
Less convenience
Manual handoff step
Outcome
Clear separation of responsibility
Reduced liability surface
Easier adoption alongside existing workflows
Across all decisions, the guiding question was: "Does this increase clarity of responsibility, or does it quietly shift it?" When the answer was unclear, the decision defaulted toward restraint.
Rather than optimizing for feature breadth, I focused on reducing risk while preserving real workflow value.
Constraints & Risk Management
From the outset, DocNotes was treated as a healthcare-adjacent system operating under heightened risk, even as an early-stage MVP. Rather than treating risk as a future concern, constraints were made explicit design inputs that shaped scope, workflows, and technical decisions.
1
Regulatory & Scope Constraints
DocNotes is intentionally positioned outside regulated medical record systems. The product does not store official patient records, does not submit, sync, or modify data in EHRs, does not generate diagnoses, recommendations, or clinical decisions, and all exports require explicit clinician action. These constraints reduce regulatory exposure while keeping the product usable within real clinical workflows.
2
Human Accountability Boundaries
AI assistance introduces ambiguity around authorship and responsibility if not carefully constrained. AI is restricted to rewriting clinician-provided text; all AI output requires explicit review and approval; no document becomes final without a deliberate approval action; and amendments preserve history rather than overwriting prior records. This ensures that clinical responsibility never silently shifts to the system.
3
Data Protection & Privacy
Patient-identifiable information was treated as a primary risk surface. Key measures included application-level protection of sensitive patient identifiers, minimization of stored data to essential fields only, no background syncing or secondary data use, and optional incognito workflows for session-only use. These decisions prioritize data minimization and exposure reduction over convenience.
4
Security & Operational Logging
Rather than implementing broad activity surveillance, DocNotes focuses on security and operational traceability. Logged events include authentication events, access attempts, AI generation requests, and document approvals and exports. Importantly, audit logs never contain plaintext patient names and are designed for operational safety and misuse detection, not monitoring clinicians.
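A minimal sketch of what such an audit event could look like follows. The event names, field names, and identifier formats are illustrative assumptions, not DocNotes' actual schema; the point is that entries carry opaque account and document identifiers, never plaintext patient names.

```python
import json
import time
from typing import Optional

# Hypothetical whitelist of security/operational events worth auditing.
AUDITED_EVENTS = {"auth.login", "auth.failed", "ai.generate",
                  "doc.approve", "doc.export"}

def audit_event(event: str, actor_id: str,
                document_id: Optional[str] = None) -> str:
    """Serialize one audit entry for an append-only log.

    Entries reference opaque ids only; patient names never appear here.
    """
    if event not in AUDITED_EVENTS:
        raise ValueError(f"unaudited event type: {event}")
    entry = {
        "ts": time.time(),
        "event": event,
        "actor": actor_id,        # clinician account id, not a name
        "document": document_id,  # opaque document id, may be None
    }
    return json.dumps(entry)

line = audit_event("doc.approve", actor_id="u_123", document_id="d_456")
assert "doc.approve" in line and "u_123" in line
```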
5
Failure Modes & Safe Degradation
AI systems are inherently probabilistic and can fail unpredictably. DocNotes is designed so that AI failures are visible rather than silent; failed generations do not block manual workflows; clinicians can always proceed without AI assistance; and the system remains usable even when AI is unavailable. This prevents AI reliability from becoming a single point of failure.
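The degradation behavior can be sketched as a thin wrapper around the model call. This is an illustrative assumption about shape, not the actual implementation: any failure (timeout, error, empty output) falls back to the clinician's own text, and the returned flag keeps the failure visible instead of silently substituting content.

```python
def rewrite_with_ai(clinician_text: str, ai_call):
    """Attempt an AI rewrite of clinician-provided text.

    Returns (text, ai_was_used). On any failure the clinician's own
    text passes through unchanged, so the manual workflow is never
    blocked; the False flag lets the UI surface the failure.
    """
    try:
        result = ai_call(clinician_text)
        if not result or not result.strip():
            raise ValueError("empty AI output")
        return result, True
    except Exception:
        # Degrade to the manual path; the caller shows the failure state.
        return clinician_text, False

def failing_ai(text: str) -> str:
    # Stand-in for an unavailable model.
    raise TimeoutError("model timed out")

# AI unavailable: the clinician's text survives untouched.
text, used = rewrite_with_ai("pt c/o cough x3d", failing_ai)
assert text == "pt c/o cough x3d" and used is False
```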
Across all constraints, the guiding principle was: Reduce harm and ambiguity before optimizing for speed or convenience. In a healthcare-adjacent context, clarity, restraint, and explicit boundaries are more valuable than feature completeness.
Delivery & Execution
DocNotes was delivered through incremental, risk-aware execution, with an emphasis on sequencing decisions rather than maximizing feature count. As a solo Product Owner, I treated delivery artifacts as tools for scope control and learning, not ceremony. The goals were to:
Maintain momentum without accumulating hidden risk
Make trade-offs visible early
Iterate safely in a healthcare-adjacent context
Treat "how we ship" as a product decision, not just an engineering one
1
Roadmap & Sequencing
The roadmap focused on reducing uncertainty early, rather than delivering a broad feature set. Initial milestones prioritized establishing a safe core workflow (draft → review → approval), defining non-negotiable boundaries around AI usage, validating that the product remained usable without automation, and introducing auditability and data protection before expansion. This sequencing ensured that later features were built on a stable and defensible foundation, rather than retrofitted onto unsafe assumptions.
2
Backlog & Prioritization
Work was managed through a lightweight backlog that emphasized clear problem statements, explicit acceptance criteria, and scope boundaries and non-goals. Backlog items were ordered to isolate higher-risk changes (AI, authentication, data handling), keep iterations small and reversible, and avoid coupling unrelated concerns. This allowed steady progress while maintaining confidence in system behavior.
3
Acceptance-Criteria–Driven Delivery
To keep implementation aligned with product intent, individual backlog items were written with clear acceptance criteria. A typical ticket included the user problem being addressed, preconditions and constraints, explicit success criteria, and out-of-scope clarifications. This helped prevent scope creep and ensured that "done" meant meeting product intent, not just shipping code.
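A hypothetical ticket following that shape, expressed as data for concreteness (field names are illustrative, not DocNotes' actual template):

```python
# Hypothetical acceptance-criteria-driven ticket.
ticket = {
    "problem": "Clinicians cannot tell whether a note is final.",
    "preconditions": ["User is authenticated", "A draft exists"],
    "acceptance_criteria": [
        "Approving a draft requires an explicit confirmation action",
        "An approved document can no longer be edited in place",
        "The approval event appears in the audit log",
    ],
    "out_of_scope": ["Amendment UI", "Export to EHR"],
}

def is_done(ticket: dict, verified: set) -> bool:
    # "Done" means every acceptance criterion is verified,
    # not just that code was shipped.
    return all(c in verified for c in ticket["acceptance_criteria"])

assert is_done(ticket, set(ticket["acceptance_criteria"]))
assert not is_done(ticket, set())
```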
4
Managing Risk While Building Solo
Certain areas of the product carried disproportionate risk: patient-identifiable data, AI-assisted content generation, and approval and audit flows. Changes in these areas were handled in small, isolated increments and validated against acceptance criteria before being merged into the main system. The goal was not process fidelity, but reducing the cost of mistakes.
This delivery approach made it possible to ship steadily and safely while working solo on a healthcare-adjacent product.
Outcomes & Learnings
Because DocNotes is a private, invite-only MVP, outcomes are primarily qualitative and focused on workflow behavior rather than growth metrics. The most meaningful learnings came from observing how clinicians interact with structure, friction, and AI assistance in practice.
1
Workflow Clarity Matters More Than Automation
Observation
Clinicians responded positively to the clear separation between drafts and approved documents.
Why
Even though this introduced additional steps, the explicit state transitions provided meaningful benefits:
Reduced ambiguity about what "counts" as the record
Increased confidence when using AI-assisted rewriting
Made later amendments feel safer and more intentional
Outcome
In clinical contexts, clarity of responsibility outweighs speed
2
AI Is Most Useful When It Is Constrained
Observation
AI-assisted rewriting was most valuable when applied to clinician-written text, scoped to formatting and structure, and easy to discard without penalty.
Why
Attempts to make AI more proactive or "helpful" quickly reduced trust.
Outcome
Validated the decision to treat AI as a supporting tool, not a source of authority
3
Explicit Friction Can Increase Trust
Observation
Features like manual approval, immutable records, and visible audit events introduced friction.
Why
That friction was interpreted as intentional and reassuring, not burdensome. This was especially true once clinicians understood why those steps existed.
Outcome
Deliberate friction can build confidence in healthcare-adjacent contexts
4
Failure Handling Is as Important as Success
Observation
AI failures (e.g. unusable rewrites, interruptions, or timeouts) surfaced early in testing.
Why
Designing the system so that failures were visible, manual workflows always remained available, and AI was never required to proceed prevented frustration and preserved trust.
Outcome
Reinforced the importance of designing around failure modes, not just ideal paths
5
Scope Discipline Prevents Downstream Risk
Decision
Several feature ideas were deliberately deferred or excluded.
Why
Ideas like automatic syncing with clinical systems, background patient data ingestion, and AI-generated content beyond rewriting were resisted early. Doing so:
Avoided unclear accountability
Avoided expanded regulatory exposure
Avoided complex rollback scenarios
Outcome
Validated the value of explicit non-goals as a delivery tool
6
Product Responsibility Increases After Launch
Observation
Once real users began interacting with the system, the nature of the work shifted.
Why
Prioritization became more conservative, reliability mattered more than new features, and "edge cases" stopped being theoretical.
Outcome
Shipping is not the finish line, especially in healthcare-adjacent products
These outcomes and learnings shaped not just DocNotes, but my understanding of what responsible product development looks like in high-stakes contexts.
What I'd Do Differently
Building DocNotes clarified several areas where earlier decisions could have reduced friction or surfaced risk sooner. None of these are regrets — they reflect how product responsibility evolves once real constraints and usage patterns become visible.
1
Validate Workflow Language Earlier
While the core workflow proved sound, some terminology around drafts, approvals, and amendments required explanation. If starting again, I would test naming and state labels earlier with clinicians, validate whether concepts like "approval" and "amendment" map cleanly to different specialties, and iterate on language before locking workflow states. This would have reduced onboarding friction without changing the underlying structure.
2
Introduce Explicit Boundaries Even Sooner
Many scope decisions were correct, but some boundaries could have been enforced earlier to avoid revisiting them later. In hindsight, I would formalize non-goals sooner and reference them more often during delivery, explicitly document why certain features were excluded (not just deferred), and use constraints as a more visible prioritization tool. This would have reduced second-guessing and made trade-offs clearer earlier.
3
Test Failure Scenarios More Aggressively
AI failure modes became more obvious once workflows were exercised end-to-end. Next time, I would simulate degraded AI behavior earlier (timeouts, unusable output), test "AI unavailable" scenarios as first-class cases, and design fallback states before optimizing happy paths. This would have accelerated confidence in system reliability.
4
Separate "Build" and "Operate" Earlier
As the product moved closer to real usage, the nature of the work shifted from building features to operating a system. If starting again, I would introduce operational checklists earlier, define post-launch responsibilities before launch (not after), and treat reliability and observability as part of the MVP definition. This would have smoothed the transition from development to ownership.
All of these adjustments point to the same learning: In healthcare-adjacent products, clarity and restraint compound faster than feature velocity. The earlier those constraints are made explicit, the easier it becomes to ship responsibly.
What's Next
DocNotes is currently in a private beta phase, with the primary goal of validating workflows, boundaries, and reliability in real clinical use. Future development is intentionally framed around earned complexity, not expansion for its own sake.
1
Validate the Core Workflow Across Repeated Use
Before adding functionality, the next priority is to observe how the draft → approval → amendment workflow holds up over time, identify where friction reinforces accountability versus where it becomes noise, and validate that the mental model works across different documentation styles. Only once this workflow proves consistently understandable does further abstraction make sense.
2
Improve Onboarding and Shared Mental Models
Early usage shows that clarity of intent matters as much as interface usability. Next steps focus on making workflow states (draft, approved, amended) self-explanatory, embedding product boundaries and non-goals directly into onboarding, and reducing the need for external explanation without simplifying responsibility. This is treated as a product communication problem, not a feature gap.
3
Expand AI Assistance Carefully — If It Earns It
AI support may evolve, but only under strict conditions: clinician authorship must remain explicit, failures must stay visible and non-blocking, and added assistance must reduce cognitive load without shifting responsibility. Potential expansion is limited to formatting and structuring support — not inference, interpretation, or automation of clinical judgment.
4
Prepare for a German-Language Version (DACH), If Validation Holds
If the core workflow proves stable and valuable, a natural next step would be to explore a German-language version for the DACH market. This would be driven by clinician demand rather than geographic ambition, the need for precise, domain-appropriate language, and alignment with existing EU data protection expectations. Localization would be treated as a product and safety concern, not a simple translation exercise.
5
Strengthen Operational Readiness as Usage Grows
As real usage increases, operational needs will expand. Potential next steps include clearer separation between operational monitoring and user-visible audit trails, better tooling around failure analysis and recovery, and preparing for compliance discussions only if institutional use becomes a real requirement. These steps would be driven by observed needs, not assumptions.
At this stage, the most valuable work is not scaling features, but reinforcing boundaries, validating responsibility models, and ensuring the product earns its next level of complexity. Growth is treated as a consequence of trust, not a goal in itself.
Technical Context
This section provides high-level technical context for readers interested in how the product was implemented. It is included for completeness and is not required to understand the product decisions above.
Application Characteristics
Web-based application with authenticated, user-scoped workspaces
Explicit separation of draft, approved, and amended document states
AI-assisted rewriting using third-party language models, with human review required for all outputs
Application-level protection of sensitive patient identifiers before persistence, designed to reduce accidental exposure through logs or operational tooling
Security and operational logging for authentication events, access attempts, AI generation requests, approvals, and exports