How DMP works

The visual version. No prior knowledge assumed.

← / → arrow keys to navigate · F for fullscreen · O for grid view

The problem

You want to send an end-to-end encrypted message to someone. Today the typical answer is "use a messaging app" — Signal, WhatsApp, iMessage.

But every one of those:

  • Has a company in the middle that can be subpoenaed, hacked, or shut down.
  • Requires a phone number or account on their service.
  • Goes dark if the company decides to.

DMP solves the same problem without any of those drawbacks.

The big idea: use DNS as a mailbox

DNS is everywhere. Every laptop, every phone, every server already speaks it. It's how example.com turns into an IP address.

DMP repurposes DNS TXT records as a place to store signed, encrypted messages.

  • The server holding the records can't read them (encrypted).
  • The server can't forge them (signed).
  • Any DNS resolver in the world can read them (open by design).

Result: the "server" is just a DNS host. You don't need to trust it the way you trust a messaging company.
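A practical wrinkle behind "store messages in TXT records": a DNS TXT record carries its data as character-strings of at most 255 bytes each, so an encrypted payload has to be split before publishing and rejoined on read. A minimal Python sketch of that chunking (illustrative only; this doc doesn't specify DMP's actual on-the-wire encoding):

```python
import base64

def to_txt_strings(ciphertext: bytes, chunk: int = 255) -> list[str]:
    """Split an encrypted payload into TXT-sized character-strings.

    A DNS TXT record holds one or more character-strings of at most
    255 bytes each, so anything bigger must be chunked. Base64 keeps
    the payload safe to serve as text.
    """
    encoded = base64.b64encode(ciphertext).decode("ascii")
    return [encoded[i:i + chunk] for i in range(0, len(encoded), chunk)]

def from_txt_strings(strings: list[str]) -> bytes:
    """Reassemble the payload on the reading side."""
    return base64.b64decode("".join(strings))

blob = b"\x00" * 600                # stand-in for a 600-byte ciphertext
chunks = to_txt_strings(blob)
assert all(len(c) <= 255 for c in chunks)
assert from_txt_strings(chunks) == blob
```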

Two channels: HTTPS for writes, DNS for reads

flowchart LR
    A["Alice's CLI"] -- "writes (HTTPS,<br/>token required)" --> Node["a DMP node"]
    B["any reader"] -- "reads (DNS,<br/>no auth)" --> Node
  • Writes = publishing your identity, sending a message, refreshing prekeys. Uses HTTPS. Token-authenticated.
  • Reads = fetching someone's identity, receiving messages, looking up cluster manifests. Uses DNS. No auth.
Important nuance coming up: when Alice sends a message to Bob, her CLI doesn't POST to Bob's home node. It writes Bob's mailbox name to Alice's own home node, and Bob picks the message up over DNS. Next slide.

Where messages actually go (the truth)

sequenceDiagram
    participant A as Alice CLI
    participant ANode as Alice's home node<br/>(serves mesh.gnu.cl)
    participant DNS as DNS (any resolver)
    participant B as Bob CLI
    A->>ANode: HTTPS POST<br/>slot-N.mb-{hash(bob)}.mesh.gnu.cl
    Note over ANode: stored in sqlite<br/>served on UDP 53
    B->>DNS: TXT? slot-N.mb-{hash(bob)}.mesh.gnu.cl
    DNS->>ANode: forwards
    ANode-->>DNS: ciphertext
    DNS-->>B: ciphertext
    B->>B: decrypt + verify Alice's signature

Both parties share the mesh domain. Alice writes to her own home node only; Bob reads via DNS from the same zone. Each user only ever needs HTTPS to one node — their own.
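The mailbox name both sides derive independently can be sketched in a few lines. Assumptions, stated loudly: the slide's hash(bob) is rendered here as a truncated SHA-256, and the slot-N / mb- label shapes are copied from the diagram; the real derivation may differ.

```python
import hashlib

MESH_DOMAIN = "mesh.gnu.cl"   # both parties are configured with this

def mailbox_name(recipient: str, slot: int) -> str:
    """Derive the DNS name for one mailbox slot.

    ASSUMPTION: hash(bob) is shown as a truncated SHA-256 of the
    recipient id; the protocol's actual hash isn't specified here.
    The point is that sender and recipient compute the same name
    independently, so no lookup service is needed.
    """
    digest = hashlib.sha256(recipient.encode()).hexdigest()[:16]
    label = f"mb-{digest}"
    assert len(label) <= 63          # DNS label length limit
    return f"slot-{slot}.{label}.{MESH_DOMAIN}"

print(mailbox_name("bob@dnsmesh.io", 0))
```

Alice POSTs a record under this name to her own node; Bob asks any resolver for the same name.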

The HTTP restriction model

Each user only needs HTTPS to their own home node. Nothing else.

  • Alice's CLI talks HTTPS to Alice's home only. To send anyone a message, she POSTs to her own node.
  • Bob's CLI talks HTTPS to Bob's home only (for publishing his identity, sending messages of his own).
  • To receive, Bob just queries DNS — works from any network with port 53 / DoH.
"Node-to-node" is the recursive DNS resolver chain. When Bob's CLI queries a name in Alice's zone, the resolver walks roots → registrar → Alice's authoritative node and pulls the record. No HTTPS handshake between nodes is required for this path.

Limitation: same mesh today

The current implementation expects Alice and Bob to share mesh_domain. Both their CLIs are configured with the same domain: mesh.gnu.cl, so the mailbox name they each derive matches.
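Concretely, the two configs just have to agree on the mesh domain. A sketch of the relevant part of ~/.dmp/config.yaml; only mesh_domain and the resolver list are named in this doc, so treat the exact key names as illustrative:

```yaml
# ~/.dmp/config.yaml -- Alice and Bob alike (key names illustrative)
mesh_domain: mesh.gnu.cl      # must match on both sides
endpoint: https://dnsmesh.io  # your own home node, used for HTTPS writes
dns_resolvers:
  - 1.1.1.1
  - 8.8.8.8
```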

Three current paths to messaging:

Setup                                                                              Works today?
Both users register at the same node, share the same mesh domain                   ✓
Both users on a federated 3-node cluster (HTTPS anti-entropy syncs between nodes)  ✓
Different home nodes, different mesh domains, no cluster                           ✗

A future "DNS-only federation" — where Bob's recv walks his pinned contacts' domains and polls each via DNS — would lift the third case without requiring inter-node HTTPS.

"Does the CLI need HTTP?"

It needs both HTTP and DNS, for different things:

You're doing...                                Channel    Network requirements
dnsmesh init (setting up a config)             none yet   nothing — local only
dnsmesh register (getting a token at a node)   HTTPS      outbound 443 to the node
dnsmesh identity publish / dnsmesh send        HTTPS      outbound 443 to your home node
dnsmesh identity fetch / dnsmesh recv          DNS        UDP 53 (or 1.1.1.1 / 8.8.8.8 via DoH — almost always reachable)

What if I'm on a restricted network?

Network                                                   Read messages?                      Send messages?
Home / coffee shop wifi                                   ✓                                   ✓
Hotel captive portal (after login)                        ✓                                   ✓
Corporate firewall, HTTPS allowed, DNS blocked outbound   ✓ (via DoH)                         ✓
Network blocking port 53 entirely                         ✓ (use DNS-over-HTTPS at 1.1.1.1)   ✓
Air-gapped (no internet at all)                           ✗                                   ✗
Reading is the easy half. Even when DNS port 53 is blocked, public resolvers like Cloudflare offer DNS-over-HTTPS at 1.1.1.1. Sending requires reaching your home node's HTTPS port — same as accessing any normal website.
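The DoH fallback is easy to exercise by hand. Cloudflare exposes a JSON lookup endpoint over HTTPS; here is a sketch using only the Python standard library (the resolver URL and Accept header are Cloudflare's public API; the function name is ours):

```python
import urllib.request

def doh_txt_request(name: str) -> urllib.request.Request:
    """Build a DNS-over-HTTPS TXT lookup for Cloudflare's JSON API.

    The query travels over ordinary HTTPS on port 443, so it works
    where raw UDP/53 is filtered. (Google's resolver serves the same
    JSON shape at https://dns.google/resolve.)
    """
    url = f"https://1.1.1.1/dns-query?name={name}&type=TXT"
    return urllib.request.Request(url, headers={"Accept": "application/dns-json"})

req = doh_txt_request("id-xxx.mesh.gnu.cl")
# urllib.request.urlopen(req) would return JSON with the TXT data
# under "Answer" -- a live network call, so it isn't made here.
```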

The pieces

flowchart LR
    CLI["dnsmesh CLI<br/>your laptop / phone<br/>(holds private keys)"]
    HTTP["HTTP API<br/>port 443"]
    DNS["DNS server<br/>UDP port 53"]
    Store[("sqlite<br/>store")]
    World["any DNS resolver<br/>anywhere"]
    CLI -- POST /v1/records --> HTTP
    HTTP --> Store
    DNS --> Store
    World -- TXT query --> DNS

A DMP node is one process. It speaks HTTPS for writes, DNS for reads, and writes/reads the same sqlite file. No separate database, no auth server, no message broker.

End-to-end: Alice publishes, Bob fetches

sequenceDiagram
    participant A as Alice
    participant ANode as Alice's home node
    participant DNS as DNS (anywhere)
    participant B as Bob
    A->>ANode: POST /v1/records (HTTPS)<br/>signed identity record
    Note right of ANode: stored in sqlite<br/>served on UDP 53
    B->>DNS: TXT? alice@her-domain
    DNS->>ANode: forwards query
    ANode->>DNS: signed record
    DNS->>B: signed record
    B->>B: verify Ed25519 signature<br/>pin Alice's pubkey

What "DNS query" actually does

sequenceDiagram
    participant Bob as Bob
    participant R as 1.1.1.1<br/>(public resolver)
    participant DO as DigitalOcean<br/>(owns gnu.cl)
    participant Node as dnsmesh.io<br/>(authoritative for mesh.gnu.cl)
    Bob->>R: TXT? id-xxx.mesh.gnu.cl
    alt cached at resolver
        R-->>Bob: cached value
    else cache miss
        R->>DO: TXT? id-xxx.mesh.gnu.cl
        DO-->>R: try dnsmesh.io for this
        R->>Node: TXT? id-xxx.mesh.gnu.cl
        Node-->>R: signed record
        R-->>Bob: signed record
    end

Most queries hit a cache. Cold queries walk the chain. Either way, Bob's CLI verifies the signature locally — no trust in resolvers or paths required.
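What "verifies the signature locally" means in practice, sketched with the pyca/cryptography package (an assumption on our part; the doc names only the Ed25519 algorithm, not the client's crypto library):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-ins: Alice's keypair and a record as it would come back from DNS.
alice_priv = Ed25519PrivateKey.generate()
alice_pub = alice_priv.public_key()          # Bob has this key pinned
record = b"alice's identity record"
signature = alice_priv.sign(record)

# Bob's side: verify against the pinned key. A resolver (or anyone
# on the path) that tampers with the record fails this check.
alice_pub.verify(signature, record)          # passes silently
try:
    alice_pub.verify(signature, record + b" tampered")
except InvalidSignature:
    print("tampered record rejected")
```

This is why the resolver chain needs no trust: caches can only delay or drop records, not alter them undetected.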

Why dnsmesh init uses 1.1.1.1 by default

dns_resolvers:
  - 1.1.1.1     # Cloudflare
  - 8.8.8.8     # Google (failover)
  • Skip stale negative cache. Your ISP might still cache "name doesn't exist" from before a delegation was set up.
  • Failover. If one resolver stops responding, the other catches it.
  • It works on networks blocking port 53. 1.1.1.1 also speaks DNS-over-HTTPS on port 443.

Privacy-first deploys: dnsmesh init --no-default-resolvers, then point at your own resolver in ~/.dmp/config.yaml.

Two ways to query

Through a public resolver

dig @1.1.1.1 \
  alice@her-domain TXT

What everyone else sees. Goes through caching.

Direct to the node

dig @her-node.example \
  alice@her-domain TXT

Source of truth. No caching. Use this when something looks stale.

Triage tip: if @node returns the record but @resolver doesn't, you're seeing a stale cache somewhere upstream. Wait, or query a different resolver.

What if the node goes offline?

Reads (DNS): graceful degradation

sequenceDiagram
    participant Bob as Bob
    participant R as Resolver
    participant Node as Node (down)
    Bob->>R: TXT? alice's identity
    alt within cache TTL
      R-->>Bob: still resolves
    else cache expired
      R->>Node: TXT? (timeout)
      R-->>Bob: no answer
    end

Cached reads survive for one TTL window (default 60s). Cold reads start failing as caches expire. Visible outage: roughly the downtime, extended by a few minutes of negative-cache lag after the node returns, minus the TTL window of warm-cache grace at the start.
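A back-of-envelope model of that window (the 60s TTL is the doc's default; the negative-cache figure is a stand-in for "a few minutes"):

```python
RECORD_TTL = 60          # default record TTL, per this doc
NEG_CACHE = 5 * 60       # "a few minutes" of negative caching (illustrative)

def visible_read_outage(downtime: int) -> int:
    """Seconds during which reads can fail around a node outage.

    Warm caches keep answering for up to RECORD_TTL after the node
    dies, and resolvers that cached a failure can keep returning it
    for up to NEG_CACHE after the node comes back.
    """
    return max(downtime + NEG_CACHE - RECORD_TTL, 0)

print(visible_read_outage(10 * 60))   # a 10-minute node outage
```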

Node offline — writes fail immediately

sequenceDiagram
    participant Alice
    participant Node as Node (down)
    Alice->>Node: POST /v1/records
    Node--xAlice: timeout
    Note right of Alice: send fails<br/>no built-in retry<br/>operator handles it

No retry queue. Pending sends do not survive. Records already published are persisted on disk and come back when the node restarts.

Node-down survival cheat sheet

State                                        Survives?
Records already published (sqlite on disk)   ✓
Resolver-cached records (within TTL)         ✓
Pending sends from clients                   ✗
Live HTTP requests in flight                 ✗
Heartbeat / discovery liveness               ✗ (until restart)

Federated cluster: surviving a node failure

flowchart LR
    Client["sender's CLI"]
    Client -- "HTTPS POST<br/>fan to majority" --> A
    Client -- "HTTPS POST<br/>fan to majority" --> C
    subgraph Cluster
        A["node-a"]
        B["node-b<br/>(down)"]
        C["node-c"]
        A <-- "HTTPS<br/>anti-entropy" --> C
    end

Three nodes anti-entropy-sync the same record set every ~30s, over HTTPS (/v1/sync/digest + /v1/sync/pull) — never over DNS. Writes from senders fan to a majority of cluster nodes for durability. Reads from the world union across all nodes. When the dead node returns, it pulls everything it missed since its last sync watermark.
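The digest-then-pull loop can be sketched as follows. The endpoint semantics are inferred from the paragraph above; record shape, digest format, and watermark handling are all illustrative:

```python
import hashlib

# Each node holds records as {name: (written_at, txt)}.
def digest(records: dict[str, tuple[int, str]]) -> str:
    """Cheap fingerprint of a node's record set -- roughly what
    /v1/sync/digest might return (illustrative, not the wire format)."""
    items = "".join(f"{k}:{v[0]}" for k, v in sorted(records.items()))
    return hashlib.sha256(items.encode()).hexdigest()

def pull_since(source: dict, watermark: int) -> dict:
    """Records written after the caller's last sync watermark --
    roughly what /v1/sync/pull might serve (illustrative)."""
    return {k: v for k, v in source.items() if v[0] > watermark}

node_a = {"r1": (100, "old"), "r2": (250, "new")}
node_b = {"r1": (100, "old")}                     # was down, missed r2

if digest(node_a) != digest(node_b):              # digests differ -> sync
    node_b.update(pull_since(node_a, watermark=100))
assert digest(node_a) == digest(node_b)
```

The digest makes the common case (already in sync) a single cheap comparison; the watermark keeps the catch-up pull proportional to what was missed, not to the whole record set.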

Things people actually ask

  • "dig @node works but my CLI doesn't" — your CLI is using a different resolver. Check dns_resolvers in ~/.dmp/config.yaml.
  • "How long until everyone sees my publish?" — typically 1–3 minutes. Worst case 15 min for slow caches.
  • "Can I read messages on a network without HTTP?" — yes, DNS-only is enough for reads. Sends require HTTPS to your home node.
  • "Should I run my own resolver?" — privacy-conscious users yes. dns_resolvers: ['127.0.0.1'] after running unbound or similar locally.

Ready to try?

pip install dnsmesh
dnsmesh init alice --endpoint https://dnsmesh.io
dnsmesh register --node dnsmesh.io
dnsmesh identity publish

Getting started · How DMP works (text) · Protocol spec · GitHub

← back to docs