Principle 02
Humans always decide.
We build assistants, not autopilots. The line between the two is the most important design decision in any AI product, and most companies are quietly drawing it in the wrong place.
An assistant drafts. An autopilot ships. An assistant retrieves the relevant documents, surfaces the inconsistencies, flags the open questions — and shows its work so the human can act on what it finds. An autopilot decides which answer is right and acts on the decision. Both can be useful. Only one is appropriate for work where the consequences of being wrong land on a person who has to defend the output.
Where the line is.
Our line is precise. AI does the work that's mechanical, retrievable, and reversible. Humans do the work that's judgment-based, contextual, and consequential. That sounds like a slogan, but it cashes out as concrete product decisions.
VTTD will draft an answer to a security questionnaire. It will not submit the questionnaire. It will pull the relevant section of your SOC 2 report. It will not decide which sections of your SOC 2 are appropriate to share with this particular buyer. It will flag a gap where your documentation falls short. It will not make up content to fill the gap. It will not silently skip the question.
The pattern repeats across every product we'll build. AI is the fast layer. Humans are the deciding layer. The interface between them is where we spend most of our design effort, because that interface is the entire product.
What we won't do.
We won't ship features that automate judgment work. "AI will decide which questions are most important to answer first" is judgment. "AI will auto-submit if the confidence score is high enough" is judgment. Both of those would be popular features. Neither will exist in our products.
We won't build agentic workflows that take consequential actions on a person's behalf without that person seeing and approving each action. The compounding-error problem with multi-step agents is real, but the deeper issue is accountability: when an AI agent does five things in sequence and the third one was wrong, who answers for it? In our products the answer is always a person, because a person approved each step. That approval is what makes trust the product rather than a side effect.
We won't measure success in "tasks automated." We measure success in tasks accelerated. The number of decisions still made by a person should not go down — the time those decisions take should.
How VTTD reflects this.
VTTD never auto-submits. Every answer it drafts is shown to a human reviewer, who edits, approves, or rejects before anything ships to the buyer. The product takes hours of mechanical drafting work and turns it into minutes of judgment work. The judgment is still done by the person whose name is on the questionnaire — that's not a constraint we're working around; it's the product. Read more about VTTD →
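The review gate described above can be sketched in a few lines. The names here (Draft, Verdict, review, human_decide) are illustrative assumptions, not VTTD's actual API: a minimal sketch in which the only exit paths are approve, edit, and reject, and nothing ships without an explicit human verdict.

```python
from dataclasses import dataclass, field
from enum import Enum

class Verdict(Enum):
    APPROVE = "approve"  # ship the draft (or the human's edited text)
    EDIT = "edit"        # ship the human's rewrite
    REJECT = "reject"    # ship nothing

@dataclass
class Draft:
    question: str
    answer: str                               # AI-drafted answer
    gaps: list = field(default_factory=list)  # flagged gaps, never silently filled

def review(draft: Draft, human_decide):
    """Gate every draft behind a human verdict; there is no auto-submit path."""
    verdict, text = human_decide(draft)
    if verdict is Verdict.APPROVE:
        return text if text is not None else draft.answer
    if verdict is Verdict.EDIT:
        return text
    return None  # rejected: the question goes back to a person, not to the buyer
```

The design property this sketch encodes is that every code path from draft to shipped output runs through human_decide; "accelerated, not automated" falls out of the structure rather than a policy.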
The seat that matters stays human. We build the chair.