How DriftLess Works

Frame your project, open the Build Room, run a cycle. One path.

Our promise to you:

We will never let your original vision get lost in the build. We remember why you started, we watch for drift before you feel it, and we give you back control — so you ship what you actually meant to build, not what the iterations accidentally became.

Every feature exists to fulfill this one line: Frame fast. Build smart. Kill drift before it kills your project.

We are the co-founder who never forgets why you started.

The Problem We Solve

70–80% of side projects die from lost intent — not technical failure.

Silent drift, scope creep, and that sinking feeling when your code no longer matches day one.

We catch it early: live drift alerts at 45% and reframe nudges keep your project on track.

What DriftLess Is

A proof system for change acceptance.

One question: Why should we accept this change? Every change links to concrete evidence — problem statement, spec, feedback summary, budget — and we never let execution continue past overspend, scope change, or go-live without your explicit approval.

Discrete build cycles, not a fire hose. Each run has clear boundaries: inputs, five required artifacts, optional diff.

Humans decide; AI proposes. Artifacts are append-only so the audit trail stays intact.
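
To make the evidence link concrete, here is a minimal sketch of a change record. Every name in it (EvidenceKind, ChangeRequest, isJustified) is a hypothetical illustration, not DriftLess's actual data model:

    // Hypothetical sketch: a change is acceptable only when it links to evidence.
    // All names are illustrative assumptions, not the real DriftLess schema.
    type EvidenceKind =
      | "problem_statement"
      | "dev_spec"
      | "feedback_summary"
      | "budget_ledger";

    interface EvidenceLink {
      kind: EvidenceKind;
      artifactId: string; // points at an append-only artifact
    }

    interface ChangeRequest {
      id: string;
      summary: string;
      evidence: EvidenceLink[];
    }

    // "No change without justification": reject anything with no linked evidence.
    const isJustified = (change: ChangeRequest): boolean => change.evidence.length > 0;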

What We Enforce

  • No change without justification — every change ties to evidence
  • You decide; we propose — we generate drafts, you approve gates
  • Discrete runs — clear before/after comparison
  • Immutable evidence — new runs or versions only; we protect the trail

One Build Cycle (One Run)

You talk only to DriftLess. We delegate to a Writer (implementation proposal) and a Reviewer (approve/reject), then synthesize and produce the final artifacts. If we hit an approval gate, we pause and nudge you. No auto-approve.

You → goal, context, constraints
DriftLess → problem statement + dev spec
Writer → implementation proposal
Reviewer → APPROVE / REJECT (with reasoning)
DriftLess → feedback summary + budget ledger
You → review, approve gates if needed, hand off to your coding tool or iterate
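
As a rough sketch of how one cycle hangs together (every function and type below is a hypothetical stand-in, not the real implementation):

    // Illustrative sketch of one build cycle; all names are hypothetical.
    type Verdict = { decision: "APPROVE" | "REJECT"; reasoning: string };
    type Gate = "overspend" | "scope_change" | "go_live" | null;

    // Stubs standing in for the real LLM-backed steps:
    const frame = async (goal: string) => `problem statement + dev spec for: ${goal}`;
    const write = async (spec: string) => `implementation proposal for: ${spec}`;
    const review = async (_proposal: string): Promise<Verdict> =>
      ({ decision: "APPROVE", reasoning: "fits the spec" });
    const checkGates = (_artifacts: string[]): Gate => null; // overspend / scope / go-live

    async function runCycle(goal: string) {
      const spec = await frame(goal);          // DriftLess: framing
      const proposal = await write(spec);      // Writer
      const verdict = await review(proposal);  // Reviewer
      const artifacts = [spec, proposal, `${verdict.decision}: ${verdict.reasoning}`];
      const gate = checkGates(artifacts);
      if (gate) return { artifacts, pausedAt: gate }; // pause and nudge; no auto-approve
      return { artifacts, pausedAt: null };
    }

The shape matters more than the stubs: a linear pipeline that can pause at a gate but never approves itself past one.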

What Every Run Produces

Five deliverables — the proof that the change was framed, specified, reviewed, and costed.

  • User / discovery notes — what problem we're solving, for whom
  • Problem statement + success criteria — scope and "done"
  • Dev spec — technical steps for implementation
  • Feedback summary + decision — what was approved/rejected and why
  • Budget ledger — LLM usage (tokens, cost) + optional effort

Append-only. Export any run as an evidence bundle (ZIP) with artifacts and, when applicable, a standard unified diff for your repo.
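
For illustration, an exported bundle might carry a manifest shaped roughly like this; the field names are assumptions, not the actual export format:

    // Hypothetical manifest for an exported evidence bundle (ZIP).
    interface EvidenceBundle {
      runId: string;
      createdAt: string;               // ISO timestamp
      artifacts: {
        discoveryNotes: string;        // user / discovery notes
        problemStatement: string;      // + success criteria
        devSpec: string;
        feedbackSummary: string;       // + decision
        budgetLedger: { tokens: number; costUsd: number };
      };
      diff?: string;                   // standard unified diff, when applicable
    }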

When We Pause for You

Three conditions. We stop and ask. No auto-approve.

  • Overspend — run cost hits your ceiling
  • Scope change — proposal materially changes intent
  • Go-live — final checkpoint before "done"

Who approved what and when: logged in the Control Room and in the evidence bundle.
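
A gate check of this kind could look roughly like the sketch below; the state shape and field names are assumptions for illustration:

    // Illustrative gate check; names and shapes are assumptions.
    interface GateState {
      costUsd: number;
      costCeilingUsd: number;   // your configured ceiling
      scopeChanged: boolean;    // proposal materially changes intent
      goLive: boolean;          // final checkpoint before "done"
    }

    function firstPendingGate(s: GateState): string | null {
      if (s.costUsd >= s.costCeilingUsd) return "overspend";
      if (s.scopeChanged) return "scope_change";
      if (s.goLive) return "go_live";
      return null; // no gate pending; the run may continue
    }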

Drift Radar: Catch It Before You Feel It

Drift = how far the project has moved from your original framing. We compute a drift score by comparing artifacts across runs — spec and proposal vs baseline.

Live in the UI. At 45% (configurable) we show an amber banner: Time to reframe? This is usually where the vibe dies. Your call — keep pushing or reset now.
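
The formula itself isn't specified here, so treat the sketch below as one plausible stand-in rather than the real metric: drift as one minus the Jaccard similarity between the word sets of the baseline framing and the current spec plus proposal.

    // Plausible stand-in for a drift metric (an assumption, not the real formula):
    // 1 minus the Jaccard similarity of word sets, baseline vs. current artifacts.
    function driftScore(baseline: string, current: string): number {
      const words = (t: string) => new Set(t.toLowerCase().match(/\w+/g) ?? []);
      const a = words(baseline);
      const b = words(current);
      let shared = 0;
      for (const w of a) if (b.has(w)) shared++;
      const union = a.size + b.size - shared;
      return union === 0 ? 0 : 1 - shared / union; // 0 = no drift, 1 = unrelated
    }

    // Amber banner at the (configurable) 45% threshold:
    const shouldNudgeReframe = (score: number, threshold = 0.45) => score >= threshold;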

Evidence & Hand-Off

Run metadata, artifacts, approvals, and (when applicable) the code diff — stored for audit. We never overwrite; new runs or versions only.

Export any run as an evidence bundle: a ZIP with the five artifacts, metadata, and unified diff.

One-click copy for your coding tool: framing + spec + proposal in one prompt.

Control Room: timeline of runs, drift, approvals. What changed, why, who signed off. One place.

Your Keys, Your Control

  • API keys — you add your own (Anthropic, OpenAI, etc.). Encrypted at rest (see the sketch after this list). We never log or expose them. No proxy; you pay the provider directly.
  • Auth — session-based, signed, time-limited. Optional sign-in with Google or GitHub; minimal data stored.
  • Storage — metadata and artifacts scoped by user and project. No cross-tenant access.
  • Rate limiting — auth and expensive endpoints protected.
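
To picture "encrypted at rest": a minimal AES-256-GCM sketch using Node's built-in crypto module. Illustrative only, not our actual code; it assumes masterKey is a 32-byte secret held outside the database.

    // Minimal at-rest encryption sketch with AES-256-GCM (Node built-in crypto).
    // Illustrative only; masterKey is assumed to be a 32-byte secret.
    import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

    function encryptApiKey(plaintext: string, masterKey: Buffer): string {
      const iv = randomBytes(12); // 96-bit nonce, standard for GCM
      const cipher = createCipheriv("aes-256-gcm", masterKey, iv);
      const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
      const tag = cipher.getAuthTag(); // integrity check applied on decrypt
      return Buffer.concat([iv, tag, ct]).toString("base64");
    }

    function decryptApiKey(stored: string, masterKey: Buffer): string {
      const raw = Buffer.from(stored, "base64");
      const iv = raw.subarray(0, 12);
      const tag = raw.subarray(12, 28);
      const ct = raw.subarray(28);
      const decipher = createDecipheriv("aes-256-gcm", masterKey, iv);
      decipher.setAuthTag(tag); // decryption fails if the ciphertext was tampered with
      return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
    }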

How We're Different

Downstream tools generate and edit code. We sit upstream: we frame vision first, quantify drift, and bundle context so your prompts land better. We complement them; we don't compete.

General note-taking tools require discipline; we're proactive: we warn at 45% drift and nudge you to reframe.

Traditional PM tools are built for teams; we're solo-first, lightweight, and focused on vision drift, not tickets.

No other tool in 2026 offers this combo: quantified drift scoring, proactive reframe nudges, structured solo cycles, and one-click prompt bundling for your coding tool.

Common Questions

  • $19/mo — worth it? One drifted project can cost weeks of rework or an abandoned repo. We catch drift early so you ship what you meant to build. No LLM markup — you pay providers directly. Try 5 free runs; cancel anytime. The cost of not having it is usually higher.
  • Too much to learn? Solo Preset: zero questions, instant workflow. Not another bloated tool — lightweight co-founder, chat or one-click cycles. Fits your flow.
  • Privacy? Just another wrapper? Your keys: encrypted (AES-256-GCM), per-user, never logged. Built by a solo founder — no VC gloss, no fake urgency. Try 5 free runs; see if we protect your vision. No lock-in.

Onboarding Questions Explained

When you frame your project, each choice affects how the system enforces governance. Below is what each question and option means in practice. Titles and order match the framing wizard.

Governance in General

These settings affect how the system enforces governance: who can approve scope, resolve incidents, and when execution pauses for your input. Choosing stricter options increases safety and traceability; more automated options increase velocity but require trust in the system.

Q0 — Project type

What kind of project the system is allowed to manage. Proof-of-concept (greenfield only): full freedom to design, no existing code constraints; speed and learning prioritized over long-term optimization.

Q1 — Final decision authority

Who has final authority for shipping to production, increasing scope or complexity, accepting unresolved risks, and closing incidents. Human only = maximum safety; AI Brain = faster but requires trust.

Q1b — Scope changes authority

Who has final authority to approve scope or complexity increases. Prevents silent scope creep without approval.

Q1c — Incident resolution authority

Who has final authority to resolve incidents. Critical for production safety.

Q2 — Role separation strictness

Strict = roles cannot override each other. Guided = roles may propose but must escalate. Flexible = human may override at any time.

Q4 — Internal review loop limit

Maximum number of internal review iterations before escalation is required. Prevents endless back-and-forth and forces escalation when progress stalls.

Q5 — External quality gate loop limit

Maximum number of external quality gate attempts before escalation is required.

Q6 — Escalation action when limit reached

What happens when an iteration limit is reached: pause and require a human decision, escalate to the AI Brain and then to a human, or auto-reduce scope and continue.
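
Taken together, Q4–Q6 amount to a small state machine. A sketch, where all names and shapes are assumptions:

    // Illustrative loop-limit enforcement for Q4/Q5, with Q6's action on limit.
    type EscalationAction = "pause_for_human" | "ai_brain_then_human" | "auto_reduce_scope";

    interface LoopState { iterations: number; limit: number } // a Q4 or Q5 limit

    // Continue iterating until the limit is hit, then apply the Q6 action.
    function nextStep(loop: LoopState, onLimit: EscalationAction): "continue" | EscalationAction {
      return loop.iterations < loop.limit ? "continue" : onLimit;
    }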

Q7 — Forced reflection threshold

The number of consecutive escalations after which forced reflection is required. Forced reflection triggers learning outputs and a reassessment.

Drift reframe threshold

The percentage of change from your original frame at which we suggest a reframe (a fresh kickoff). Matches the 45% drift nudge in the Control Room.

Token budget

Optional session token cap. We use it to show usage and warn when you're near the cap. Range 10,000–2,000,000; default 100,000.
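
A sketch of how those bounds might be applied; only the documented numbers come from this page, while the clamping and the 90% warning threshold are assumptions:

    // Illustrative budget handling using the documented bounds.
    const MIN_BUDGET = 10_000;
    const MAX_BUDGET = 2_000_000;
    const DEFAULT_BUDGET = 100_000;

    // Fall back to the default, otherwise clamp into the allowed range.
    function resolveBudget(requested?: number): number {
      if (requested === undefined) return DEFAULT_BUDGET;
      return Math.min(MAX_BUDGET, Math.max(MIN_BUDGET, requested));
    }

    // Warn when usage approaches the cap (threshold is an assumption):
    const nearCap = (used: number, budget: number) => used >= 0.9 * budget;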

Q8 — Mandatory quality gates

Which quality gates (e.g. automated tests, lint, typecheck, AI Reviewer) must be satisfied before a change can be considered shippable. Selected gates cannot be skipped; repeated failure triggers escalation.

Q9 — Scope expansion policy

Policy for expanding scope during execution: never mid-sprint, only with tradeoff, or anytime with escalation. Escalation pauses execution until you approve.

Q10 — Non-negotiable constraints

Which constraints are absolutely non-negotiable (e.g. no architecture changes mid-sprint, no new dependencies without review). Each is enforced mechanically; if violated, the system stops or escalates.

Q14 — Mandatory artifacts

Which artifacts must always exist (e.g. workflow definition, ticket, specification, decision log). Every decision leaves a trace; no invisible work.

Q15 — Traceability requirement

Level of traceability between artifacts: none, partial (links recommended), or strict (every artifact must reference its parent). Strict prevents orphan decisions and enables audits.
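
Strict traceability is easy to picture as an orphan check: every artifact except the root must reference an existing parent. A sketch with assumed names:

    // Illustrative strict-traceability check; names are assumptions.
    interface Artifact { id: string; parentId: string | null } // null = root

    // An orphan references a parent that doesn't exist in the set.
    function findOrphans(artifacts: Artifact[]): Artifact[] {
      const ids = new Set(artifacts.map((a) => a.id));
      return artifacts.filter((a) => a.parentId !== null && !ids.has(a.parentId));
    }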

Q16 — State visibility

Who must be able to see the state of each artifact: human, AI Brain, specialized AI roles, external quality gates. Affects what the system exposes and to whom.

Q17 — Artifact immutability

Once an artifact reaches a terminal state (e.g. Approved, Closed), may it be modified? Never (immutable), only with explicit escalation, or freely editable. Immutability protects the audit trail.

Q18 — Mandatory learning outputs

What must be produced before a task, ticket, or incident can be closed: e.g. new or updated test, decision log entry, documentation update. Ensures learning over time.

Q19 — Repeated failure handling

If similar failures or incidents repeat, what the system must do: continue as normal, escalate to AI Brain, require human review, or freeze related work until resolved.

Q20 — Accountability enforcement

Who is responsible for confirming that learning requirements are met: human, AI Brain, Reviewer AI, or Auditor AI.

Section G — Pilot/MVP documentation (optional)

Optional questions (Q21–Q26) cover dependencies & environment, configuration approach, data pipeline needs, testing & validation, monitoring & maintenance, and security & compliance. They help pin down pilot scope; you can skip them for a minimal frame.

Technical Posture

API-first. The UI uses the same APIs as scripts or integrations.

Consistent response envelope (success/error, data/details).

Before any execute-phase action (creating work, resolving, closing), a plan lock is required: you lock the plan, then the run can mutate state.

Draft and execution stay separate and auditable.
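
Purely as illustration, and emphatically not the actual API schema (which this page deliberately omits), an envelope plus plan-lock guard could look like:

    // Hypothetical envelope and plan-lock guard; not the real schema.
    type Envelope<T> =
      | { success: true; data: T }
      | { success: false; error: string; details?: unknown };

    // Execute-phase actions refuse to run until the plan is locked.
    function requirePlanLock<T>(planLocked: boolean, run: () => T): Envelope<T> {
      if (!planLocked) {
        return {
          success: false,
          error: "plan_not_locked",
          details: "lock the plan before execute-phase actions",
        };
      }
      return { success: true, data: run() };
    }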

We omit implementation details here — no API paths, schemas, or prompts. Enough for a technical reader to trust the design: what we enforce, how evidence is produced, how we stay secure. Not a replication blueprint.