How we prompt our AI to write like you, not like a bot
The hardest part of AI draft replies is not generating text — it is generating text that sounds like the person sending it.
The most common feedback we received in the first two weeks of xNord's beta was not about urgency classification or archiving accuracy. It was about voice. "The drafts are good but they do not sound like me." That feedback shaped the next three months of development.
Why voice matching is hard
Large language models have a default voice. It is polite, slightly formal, comprehensive, and safe. It uses phrases like "I hope this finds you well," signs off with "Best regards," and tends to address every point in a message even when a short reply would be more appropriate.
That is not how most founders write. Founders tend to write short emails. They use first names. They get to the point in the first sentence. They end with "Best," not "Best regards." They sometimes send a two-word reply to a long email because that is the right response to that email.
Getting a model to write like a specific person requires more than a system prompt that says "write like a founder." It requires actual examples of how that person writes.
The approach we took
xNord uses a multi-signal approach to voice matching. When you connect your Gmail account, the agent has access to your sent email history. We do not store this history, but we use it at triage time to extract patterns.
Specifically, we look at:
- Average email length (word count of sent replies)
- Greeting style (do you use "Hi," "Hey," or nothing?)
- Sign-off style ("Best," "Thanks," "Cheers," initials?)
- Punctuation habits (do you use Oxford commas? Em-dashes?)
- Typical response latency (do you reply same-day or next-day? This affects tone)
- Formality level by sender type (are you more formal with investors than with team members?)
These signals are distilled into a short style description that is injected into the system prompt at draft generation time. Not "write like a founder" but "write like someone who sends 3-sentence replies, uses first names, signs off with Best, and does not use exclamation marks."
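To make that concrete, here is a minimal sketch of what the extraction step might look like, assuming the input is a list of plain-text sent replies. The heuristics, thresholds, and names below are illustrative, not xNord's actual pipeline.

```python
from statistics import median

# Illustrative heuristics only; not xNord's actual pipeline.
GREETINGS = ("hi", "hey", "hello", "dear")
SIGN_OFFS = ("best regards", "best", "thanks", "cheers")  # longest first

def extract_style(sent_replies: list[str]) -> dict:
    word_counts, greetings, sign_offs = [], [], []
    for body in sent_replies:
        lines = [l.strip() for l in body.splitlines() if l.strip()]
        if not lines:
            continue
        word_counts.append(len(body.split()))
        first = lines[0].lower()
        greetings.append(next((g for g in GREETINGS if first.startswith(g)), "none"))
        last = lines[-1].lower().rstrip(" ,.!")
        sign_offs.append(next((s for s in SIGN_OFFS if last.startswith(s)), "none"))

    most_common = lambda xs: max(set(xs), key=xs.count) if xs else "none"
    return {
        "median_words": median(word_counts) if word_counts else 0,
        "greeting": most_common(greetings),
        "sign_off": most_common(sign_offs),
        "uses_exclamations": any("!" in b for b in sent_replies),
    }

def style_description(sig: dict) -> str:
    # Distil the signals into one line for the system prompt.
    excl = "sometimes uses" if sig["uses_exclamations"] else "does not use"
    return (
        f"Write like someone whose replies run about {int(sig['median_words'])} words, "
        f"opens with '{sig['greeting']}', signs off with '{sig['sign_off']}', "
        f"and {excl} exclamation marks."
    )
```

The point of the distillation step is that the model never sees your raw sent mail at draft time, only the one-line description derived from it.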
The prompt structure
We cannot share the full prompt (it is our core IP), but the structure is roughly:
- System role definition: who the agent is and what it is trying to do
- Contextual constraints: urgency of the email, relationship with the sender, thread history
- Style constraints: derived from sent email analysis
- Content constraints: what needs to be addressed, what should not be assumed, what tone is appropriate
- Output format: plain text, no subject line, no markdown, specific sign-off
The style constraints section is the one that changes per user. Everything else is relatively stable.
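As a rough illustration of that layering, here is a hypothetical assembly function. The section text is placeholder copy, not our actual prompt; only the ordering and the per-user slot reflect the structure described above.

```python
# Hypothetical assembly of the layered prompt described above;
# the section contents are placeholders, not xNord's actual prompt.
def build_system_prompt(style_desc: str, urgency: str,
                        relationship: str, thread_summary: str) -> str:
    sections = [
        # System role definition (stable across users)
        "You draft email replies on behalf of the account owner.",
        # Contextual constraints (vary per email)
        f"Urgency: {urgency}. Relationship with sender: {relationship}.",
        f"Thread so far: {thread_summary}",
        # Style constraints: the only per-user section
        style_desc,
        # Content constraints
        "Address only what the sender actually raised; do not invent commitments.",
        # Output format
        "Output plain text only: no subject line, no markdown, "
        "end with the user's usual sign-off.",
    ]
    return "\n\n".join(sections)
```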
What still does not work
Voice matching is not a solved problem for us. There are categories of email where the drafts still feel generic:
Long negotiation threads where the history spans months. The model loses context about what has already been agreed, what is still being discussed, and what the current state of the relationship is.
Emails to people the user has never emailed before. Without prior correspondence to reference, the model defaults to a more formal register that may not match the user's style with new contacts.
Highly technical emails. If you are a technical founder who uses precise terminology, the model sometimes softens or generalises language that should be specific.
Where we are going
The next step is explicit style feedback. When you edit a draft, we will treat that edit as a signal. If you consistently change "I wanted to reach out" to "Wanted to follow up," we will learn that pattern and apply it to future drafts. This feedback loop is the most reliable path to genuine voice matching — not better prompts, but more data about what you actually send.
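As a sketch of how that signal could be mined, assuming we retain (draft, sent) pairs, the loop might look like the following; difflib here stands in for whatever diffing we actually ship, and all names are hypothetical.

```python
import difflib
from collections import Counter

# Hypothetical sketch of the planned feedback loop: diff each draft
# against what the user actually sent and count recurring phrase swaps.
def phrase_edits(draft: str, sent: str) -> list[tuple[str, str]]:
    m = difflib.SequenceMatcher(a=draft.split(), b=sent.split())
    return [
        (" ".join(m.a[a1:a2]), " ".join(m.b[b1:b2]))
        for op, a1, a2, b1, b2 in m.get_opcodes()
        if op == "replace"
    ]

def learned_substitutions(pairs: list[tuple[str, str]],
                          min_count: int = 3) -> dict[str, str]:
    counts = Counter(e for draft, sent in pairs for e in phrase_edits(draft, sent))
    # Promote an edit to a rule only once it repeats consistently,
    # e.g. "I wanted to reach out" -> "Wanted to follow up".
    return {old: new for (old, new), n in counts.items() if n >= min_count}
```

Substitutions that clear the repetition threshold would then be folded into the per-user style constraints section of the prompt.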