How DMP Works
- The mental model
- Where bytes actually go
- Who operates what
- Does every user need their own node?
- Can the node live on a user’s laptop?
- The three deployment paths
- What a new user actually does
- Trust model — three auth modes
- When NOT to use DMP
- Next steps
This page is for anyone deciding whether DMP fits their problem — engineering leads, platform operators, founders picking infrastructure. It answers the questions you actually ask before reading the protocol spec: what runs where, who operates what, and what’s my deployment story?
If you already know you want to use it, skip to Getting Started.
The mental model
DMP has two kinds of things: nodes and clients.
A node is infrastructure. It’s a small server (one Docker container) that stores signed TXT records and serves them over DNS. That’s it. It has no user accounts, no login flow, no database of users. Think authoritative DNS server, not SaaS backend. Operators run nodes.
A client is a user. Every person messaging over DMP has a small piece of software on their laptop or phone — a CLI, a library, or eventually an app — that holds their private keys and talks to a node. Clients encrypt and sign locally, then hand the resulting ciphertext to a node to publish as TXT records.
```
USER A's DEVICE              DMP-NODE (VPS)               USER B's DEVICE
────────────────             ──────────────               ────────────────
~/.dmp/config.yaml           serves signed TXT records:   ~/.dmp/config.yaml
├─ private Ed25519 key       ├─ identity records          ├─ pinned contacts
├─ username                  ├─ mailbox manifests         ├─ private keys
└─ pinned contacts           └─ encrypted chunks          └─ received cache

dnsmesh init                 [HTTP publish API]           dnsmesh identity fetch
dnsmesh send bob ── POST ──► stores ciphertext ◄── DNS ── dnsmesh recv (decrypts locally)
```
Everything on the wire is either signed (the node cannot forge it) or end-to-end encrypted (the node cannot read it). Node operators see ciphertext and public keys. They do not see message bodies. They do not hold private keys.
The useful consequence: the trust you place in a node operator is much smaller than the trust you place in a messaging company. You still trust them to stay online and not drop your records, but not to keep your secrets.
Where bytes actually go
A common confusion: when Alice sends a message to Bob, does her client talk to Bob’s home node directly? No. The CLI POSTs records to its own configured endpoint — Alice’s home node — over HTTPS. Bob later polls his own mailbox via DNS.
Concretely:
- Alice’s send writes records like `slot-N.mb-{hash(bob)}.<alice's mesh_domain>` to Alice’s home node.
- Bob’s receive queries `slot-N.mb-{hash(bob)}.<bob's mesh_domain>` via DNS.
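The name derivation can be sketched in a few lines. The hash construction here is an illustrative stand-in (truncated SHA-256), not the spec's actual derivation:

```python
import hashlib

def mailbox_name(slot: int, recipient: str, mesh_domain: str) -> str:
    """Sketch of the slot-N.mb-{hash}.{domain} naming scheme.

    The real hash construction is defined by the protocol spec; a
    truncated SHA-256 over the recipient address stands in for it here.
    """
    digest = hashlib.sha256(recipient.encode()).hexdigest()[:16]
    return f"slot-{slot}.mb-{digest}.{mesh_domain}"

# Both sides derive the same name only when they share mesh_domain:
alice_writes = mailbox_name(0, "bob@example.com", "mesh.gnu.cl")
bob_reads    = mailbox_name(0, "bob@example.com", "mesh.gnu.cl")
assert alice_writes == bob_reads
```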
For these names to match, both clients must share the same `mesh_domain`. This is the load-bearing fact about current end-to-end messaging:
| Setup | End-to-end works? |
|---|---|
| Alice + Bob both register at the same node, share domain `mesh.gnu.cl` | ✅ |
| Alice + Bob in a federated 3-node cluster (anti-entropy syncs records between cluster members over HTTPS) | ✅ |
| Alice on dnsmesh.io, Bob on dnsmesh.pro with different `mesh_domain`s, no cluster | ❌ |
The third row is a real gap in the current implementation. A future
“DNS-only federation” model — where Bob’s recv walks his pinned
contacts’ domains and polls each via the recursive DNS chain — would
let Alice and Bob each only need HTTPS to their own home, with
“node-to-node” being whatever path the resolver takes (typically
Bob's CLI → public resolver → roots → Alice's domain's authoritative
node → record). That’s not in the code today.
What HTTP and DNS each do
- HTTPS, client-to-own-node: every write goes here. Publishing identity, sending messages, refreshing prekeys, registering for a token. Each user only ever needs HTTPS to one node — their own.
- DNS, anywhere-to-any-node: every read goes here. Fetching identities, polling mailboxes, looking up cluster manifests. No auth. Survives port 53 blocks via DNS-over-HTTPS at public resolvers like 1.1.1.1.
- HTTPS, node-to-node (cluster only): anti-entropy at `/v1/sync/digest` + `/v1/sync/pull`. Required only when the recipient’s mesh is hosted on a cluster.
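The DNS read path can be exercised with nothing but a public DoH resolver. A minimal sketch using Cloudflare's DNS-JSON API; the record name in the usage comment is hypothetical, standing in for a real mailbox name:

```python
import json
import urllib.request

DOH_ENDPOINT = "https://cloudflare-dns.com/dns-query"  # any DNS-JSON resolver works

def doh_txt_request(name: str) -> urllib.request.Request:
    """Build a DNS-over-HTTPS TXT query against a DNS-JSON endpoint."""
    url = f"{DOH_ENDPOINT}?name={name}&type=TXT"
    return urllib.request.Request(url, headers={"Accept": "application/dns-json"})

def fetch_txt(name: str) -> list[str]:
    """Resolve TXT records for `name`; each answer's `data` field holds one record."""
    with urllib.request.urlopen(doh_txt_request(name)) as resp:
        answers = json.load(resp).get("Answer", [])
    return [a["data"].strip('"') for a in answers]

# e.g. fetch_txt("slot-0.mb-abc123.mesh.example.com")  # hypothetical name
```

Because the read side is plain DNS, this works from anywhere, with no token, even where UDP 53 is blocked.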
Visual walkthrough in the How resolution works slide deck.
Who operates what
| Role | What they run | Where |
|---|---|---|
| User | `dnsmesh` CLI / `dmp` Python library (eventually an app) | Their laptop or phone |
| Node operator | `dnsmesh-node` Docker container | A VPS, a Raspberry Pi, a Droplet — anywhere reachable on UDP 53 |
A user and a node operator can be the same person — they don’t have to be.
Does every user need their own node?
No. One node serves many users, the same way 1.1.1.1 serves
millions of DNS clients at once. Records are addressed by user-hash
and signed by the user, so users on a shared node cannot spoof each
other even though they share infrastructure.
How many users per node is a capacity question (disk + rate limits), not a protocol question. A single $5/month VPS can comfortably host thousands of low-volume users.
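A quick back-of-envelope check of that claim, where every number is an illustrative assumption rather than a measured figure:

```python
# Capacity sketch. All constants are illustrative assumptions,
# not measured figures from the DMP implementation.
IDENTITY_RECORD_B = 1_024        # signed identity + prekeys, ballpark
AVG_MSG_CIPHERTEXT_B = 2_048     # short text message after encryption + encoding
MSGS_RETAINED_PER_USER = 200     # mailbox slots kept before expiry

def disk_per_user_bytes() -> int:
    return IDENTITY_RECORD_B + MSGS_RETAINED_PER_USER * AVG_MSG_CIPHERTEXT_B

users = 5_000
total_gb = users * disk_per_user_bytes() / 1e9
print(f"{total_gb:.1f} GB for {users} low-volume users")  # ≈ 2.1 GB
```

Even with generous per-user numbers, 5,000 users fit in a couple of gigabytes, well inside the smallest VPS tier; rate limits, not disk, are the practical ceiling.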
Can the node live on a user’s laptop?
Technically yes, practically no for production identities. The node must be reachable on UDP 53 by anyone who wants to resolve the records it serves. A laptop behind NAT can’t do that. Deployment paths, in order of increasing seriousness:
| Scenario | Node placement | Good for |
|---|---|---|
| Local dev / self-tests | laptop | running examples/docker_e2e_demo.py, offline testing |
| Friends & family | one VPS someone in the group operates | trust-circle deployments, personal infra |
| Publicly reachable identity | one VPS with a public A/AAAA record | being findable at alice@example.com from anywhere on the internet |
| Federated (no single-operator outage) | 3+ VPS nodes stitched via `docker-compose.cluster.yml` + anti-entropy | resilient / public infra |
For most users, the answer is: use someone else’s node — a friend’s or a community one. For most teams, the answer is: run one VPS for the team. For anyone who wants real sovereignty: run your own.
The three deployment paths
Path 1 — Join an existing node
For users who don’t want to operate infrastructure.
```
pip install dnsmeshprotocol   # (once published; today: pip install -e .)
dnsmesh init alice --domain dmp.yournode.com
dnsmesh identity publish
```
The dnsmesh CLI reaches out to dmp.yournode.com over the HTTPS publish
API, hands it a signed IdentityRecord, and the node stores it as a
TXT record. The user shares their address (alice@dmp.yournode.com)
with contacts. Done.
The operator hands users a bearer token if the node has publish auth enabled (recommended). That’s the only secret they need.
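Under the hood, the publish call is just an authenticated POST. A hedged sketch: the `/v1/records` path is taken from the auth-mode section below, but the JSON payload shape is a placeholder, not the real wire format (the `dnsmesh` CLI handles that for you):

```python
import json
import urllib.request

def publish_request(node: str, token: str, record: dict) -> urllib.request.Request:
    """Sketch of an authenticated publish POST to a node's HTTPS API.

    The payload shape here is a placeholder; the CLI builds the real
    signed record before POSTing.
    """
    body = json.dumps(record).encode()
    return urllib.request.Request(
        f"https://{node}/v1/records",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

req = publish_request("dmp.yournode.com", "s3cret-token", {"type": "identity"})
```

The bearer token is the only credential in play; there is no session, cookie, or account to manage.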
Path 2 — Run your own single node
For operators who want a node for themselves, a team, or a community.
```
docker run -d --name dnsmesh-node \
  -p 53:5353/udp \
  -p 8053:8053/tcp \
  -e DMP_OPERATOR_TOKEN=$(openssl rand -hex 32) \
  -v dnsmesh-data:/var/lib/dmp \
  ovalenzuela/dnsmesh-node:latest
```
Point a DNS A record at the VPS’s public IP. Front the HTTP port with TLS (Caddy, nginx, Cloudflare) — see the production compose recipe that does this automatically.
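The TLS fronting can be as small as a two-line Caddyfile. A sketch, assuming the node’s HTTP API listens on 8053 as in the `docker run` above and `dmp.yourdomain.com` is your chosen hostname:

```
dmp.yourdomain.com {
    reverse_proxy localhost:8053
}
```

Caddy obtains and renews the certificate automatically; nginx or Cloudflare work the same way with more configuration.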
Now anyone the operator hands dmp.yourdomain.com + a bearer token
can onboard via Path 1.
Path 3 — Federated cluster
For higher availability: three nodes that sync records to each other via anti-entropy, so losing one doesn’t lose anyone’s messages.
```
docker compose -f docker-compose.cluster.yml up -d
```
The end-to-end walkthrough lives in `examples/cluster_e2e_demo.py`, and the operator guide is at Cluster deployment.
Same client experience — users don’t know or care how many nodes sit
behind dmp.example.com.
What a new user actually does
Concrete flow, from scratch:
- A node exists somewhere. Either the user’s, a friend’s, or a community’s. Let’s say `dmp.example.com`.
- Install the CLI.

  ```
  git clone https://github.com/oscarvalenzuelab/DNSMeshProtocol.git
  cd DNSMeshProtocol && pip install -e .
  ```

- Create a local identity. This generates an Ed25519 signing keypair and an X25519 encryption keypair from a passphrase and stores the private halves in `~/.dmp/config.yaml`. The private keys never leave the laptop.

  ```
  dnsmesh init alice --domain example.com
  ```

- Publish the public keys. The CLI signs an `IdentityRecord` locally and POSTs it to the node. The node stores it as a TXT record that any DNS client can now resolve.

  ```
  dnsmesh identity publish
  ```

- Share the address. Tell friends: I’m `alice@example.com`.
- Friends fetch and pin. Their client resolves the TXT record, verifies the Ed25519 signature, and stores the public keys as a contact.

  ```
  dnsmesh identity fetch alice@example.com --add
  ```

- Exchange messages. `dnsmesh send bob "hello"` on Alice’s side → encrypt locally, chunk, publish chunks as TXT records keyed by a shared hash both sides can derive. `dnsmesh recv` on Bob’s side → resolve the chunks, verify, decrypt locally.
No account creation on the node side at any point. Users self-publish signed records; the node is a dumb filestore.
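The chunking step in the flow above has to respect a DNS limit: a single TXT character-string tops out at 255 bytes (RFC 1035). A simplified sketch of the idea; the real record format, with headers, sequence numbers, and signatures, is defined by the protocol spec:

```python
import base64

TXT_STRING_MAX = 255  # RFC 1035 limit per TXT character-string

def chunk_ciphertext(ciphertext: bytes) -> list[str]:
    """Illustrative chunking: base64 the ciphertext, split into TXT-sized
    pieces. Real DMP records carry additional framing this sketch omits."""
    encoded = base64.b64encode(ciphertext).decode()
    return [encoded[i:i + TXT_STRING_MAX]
            for i in range(0, len(encoded), TXT_STRING_MAX)]

def reassemble(chunks: list[str]) -> bytes:
    """Inverse of chunk_cipertext: rejoin and decode."""
    return base64.b64decode("".join(chunks))

blob = bytes(range(256)) * 4          # stand-in for real ciphertext
assert reassemble(chunk_ciphertext(blob)) == blob
```

The node never sees anything but these opaque strings; reassembly and decryption happen on the recipient’s device.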
Trust model — three auth modes
The node’s `/v1/records/*` publish API runs in one of three modes, picked via `DMP_AUTH_MODE`:

- `open` (default when no token is configured). No auth, no TokenStore, no registration endpoints. Dev / trusted-LAN only. `dnsmesh identity publish` works unauthenticated. Suitable for running a single-user node on your own laptop.
- `legacy` (implicit when `DMP_OPERATOR_TOKEN` / `DMP_HTTP_TOKEN` is set and `DMP_AUTH_MODE` isn’t). Single shared bearer token. Everyone onboarded to the node holds the same secret. Fine for a team or small community where key holders trust each other; the token leaks the moment you hand it to a stranger. This is the pre-M5.5 behavior, still supported.
- `multi-tenant` (opt-in via `DMP_AUTH_MODE=multi-tenant`). Per-user publish tokens. Each user has their own bearer, minted either by the operator (`dnsmesh-node-admin token issue`) or self-service (the user runs `dnsmesh register`, proves key control with a signed challenge, and the node mints a token bound to their subject). Publish requests are scope-checked:
  - Alice’s token can POST `dmp.alice.example.com` (her identity record) but not `dmp.bob.example.com`.
  - Any user’s token can POST chunks + mailbox deliveries (the “deliver to anyone’s inbox” SMTP analogy), rate-limited per-token.
  - Neither can POST into the operator namespace (cluster manifests, bootstrap records); that still requires `DMP_OPERATOR_TOKEN`.

Registration and the admin CLI are covered in the User Guide and the Deployment — Multi-tenant node guide.
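The multi-tenant scope check can be pictured as a small predicate. The name patterns below are simplified stand-ins for the real record taxonomy, chosen only to illustrate the three scopes:

```python
def can_publish(token_subject: str, record_name: str, base_domain: str,
                operator: bool = False) -> bool:
    """Sketch of the per-token scope check. Prefixes are illustrative,
    not the actual record naming used by the implementation."""
    # Operator namespace: cluster manifests, bootstrap records.
    if record_name.startswith(("cluster.", "bootstrap.")):
        return operator
    # Shared pool: chunks + mailbox deliveries, open to any valid token
    # (rate-limited per-token in the real node).
    if record_name.startswith(("chunk-", "slot-")):
        return True
    # Identity records: only the token bound to the matching subject.
    return record_name == f"dmp.{token_subject}.{base_domain}"

assert can_publish("alice", "dmp.alice.example.com", "example.com")
assert not can_publish("alice", "dmp.bob.example.com", "example.com")
assert can_publish("alice", "slot-0.mb-abc123.example.com", "example.com")
assert not can_publish("alice", "cluster.example.com", "example.com")
```

The asymmetry is deliberate: writes to your own identity are exclusive, deliveries to anyone’s inbox are open (like SMTP), and operator records are off-limits to user tokens.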
Scaling guidance:
- Self-host / one-user node → `open`.
- Team / friends / one-trust-zone community → `legacy`.
- Public community node, or you want per-user audit / rate-limit / revocation → `multi-tenant`.
Anonymity property of multi-tenant: the shared-pool writes
(chunks + mailbox deliveries) do not log subject or token hash in
the durable audit. An operator compelled to hand over their
database cannot, from it alone, reconstruct which user delivered to
whom. Full sender anonymity against a powerful observer still needs
Tor / a mixnet — that’s M6 territory.
When NOT to use DMP
- You need guaranteed sub-second delivery. DNS caching and anti-entropy gossip give propagation delays in the single-digit-seconds range — fine for messaging, wrong for trading signals.
- You need metadata privacy against the node operator. Message bodies are encrypted, but who talks to whom (traffic analysis) leaks to the node. Mix networks and onion routing exist for a reason; DMP is not one.
- You’re not comfortable with a protocol that has not yet been independently audited. DMP is pre-audit. The crypto primitives are standard (X25519, Ed25519, ChaCha20-Poly1305, Argon2id), but their composition has not been reviewed by a third party. See SECURITY.md.
- You need push notifications, read receipts, typing indicators. DMP is a transport. Those are application features someone has to build on top.
Next steps
- Try it hands-on: Getting Started
- Deploy a node: Deployment
- Day-to-day CLI + library use: User Guide
- Protocol internals: Protocol