DocuSign is the default answer for sending a contract, and at $25 per user per month for the Standard plan on annual billing ($45 month-to-month), it’s a default that gets expensive quickly for anyone who only needs a handful of envelopes a week. The good news is that the underlying flow (PDF, fillable fields, signature, audit trail, sealed copy) is well-trodden ground in open source. DocuSeal and Documenso both ship as Docker images and run happily on a small VPS. For ordinary U.S. business contracts, ESIGN and UETA say records and signatures cannot be denied legal effect solely because they are electronic, and a well-run self-hosted instance can support that as long as intent, consent, signature-to-record association, retention, and any required disclosures are handled. None of this is legal advice; jurisdictions differ, consumer-disclosure rules add their own requirements, and some document types (wills, certain real estate instruments, court orders) are excluded outright. Talk to a lawyer if any of that is in play.
This guide walks the full path from zero. A domain, a $7/month box, free Cloudflare tier, and roughly an evening’s work gets you a signing service at sign.yourdomain.com with TLS, no public ports, and encrypted offsite backups. We picked DocuSeal here for the cleaner template builder and lower memory footprint, but Documenso is a strong alternative and most of the steps below are identical between the two.
Pick the Stack and Confirm It Fits
Before buying anything, decide whether self-hosting actually meets your requirement. The threshold question isn’t whether e-signature is generally enforceable (in the U.S. it is, under ESIGN/UETA, with the exclusions noted in the intro) but whether you need a specific compliance attestation on the platform itself. For standard business contracts, NDAs, statements of work, and engagement letters between sophisticated parties, an audit trail that captures intent, consent, signature, IP, timestamp, and the sealed final document is generally enough. For consumer-facing flows, healthcare data, financial-advisory clients, EU personal data, or anything with a regulator in the loop, you’ll likely need SOC 2, HIPAA, or GDPR attestation on the platform. Both DocuSeal and Documenso offer those on paid tiers; the self-hosted OSS image does not carry them by default.
The other early decision is the trust display. Out of the box, signed PDFs are valid but show as “self-signed” in Adobe Reader. If you need the green “signature is valid” banner that recognized Adobe Approved Trust List (AATL) certificates produce, that’s a separate purchase later (around $200/year from a recognized CA). Most counterparties never look, and the cryptographic audit trail is identical either way, but worth knowing it exists.
Domain and Cloudflare Account
Buy a domain at the registrar of your choice. Most TLDs land in the $10–15/year range, and .app at $14/year through Cloudflare Registrar is a clean default since .app is on the HSTS preload list and forces HTTPS at the browser level. .com is around $10/year; .io runs higher at $30+. Buying through Cloudflare puts the domain on Cloudflare DNS automatically, which makes the tunnel step later trivial; if you buy elsewhere, point the nameservers at Cloudflare after registration.
Create a Cloudflare account if you don’t have one, turn on 2FA, and add the domain. The free tier covers everything in this guide: DNS, Universal SSL, the Cloudflare Tunnel connector itself (which exposes the app without inbound ports), the Zero Trust free plan (which covers small teams under 50 users if you later put Access policies in front of admin paths), and R2 storage up to 10 GB. The only money you’ll spend on Cloudflare is the domain itself unless you exceed the free tier limits, which a signing service handling under a thousand envelopes a month won’t.
Tip: Use a dedicated email address for the registrar account that isn’t your personal one. WHOIS privacy at most registrars (including Cloudflare) is on by default now, but the email on file is also the recovery channel for the domain itself, and you don’t want it tied to an account that might churn.
The VPS
The published specs from each project are the place to start. DocuSeal’s on-premises requirements page walks through four scenarios by document size and signer count; for the common case (single-use templates, one signer, average document size 500 KB to 5 MB), the floor is 1 vCPU and 1 to 2 GB of RAM, scaling to 2 vCPU and 4 GB once you’re routinely processing 100 MB documents. Disk scales with how many signed documents you keep on the box: roughly 12 GB for 10,000 small signed documents, 125 GB for 5 MB documents, and well into the terabytes for 100 MB documents at scale. Documenso’s requirements are similar at the application layer (1 GB RAM minimum, 2+ GB recommended), but it adds a hard dependency on PostgreSQL 14+ and benefits from S3-compatible object storage for documents in production. That changes the sizing math; plan for the database and the app, not just the app.
The default recommendation is Hetzner CPX11 in a US region (Ashburn or Hillsboro) at $6.99/month: 2 vCPU, 2 GB RAM, 40 GB NVMe, 20 TB included egress. AMD-based, fast disk, and Hetzner’s network is well-peered. That sizes comfortably into DocuSeal’s typical-document scenario with headroom for cloudflared, an Uptime Kuma sidecar, or a small backup script. EU regions are cheaper still (the comparable CX22 is closer to $5/month) and work fine if your signers are international, but US-side latency matters when most counterparties are in North America.
Equivalent alternatives:
- DigitalOcean Basic droplet, 2 GB RAM, $12/month
- Vultr Cloud Compute, 2 GB RAM, $10/month
- Linode/Akamai Nanode 2 GB, $12/month
- OVH VPS Starter, around $7/month
Pick Debian 13 or Ubuntu 24.04 LTS; both are well-supported, and Docker installs cleanly on either.
Tip: Size up if you’re running more than the signing service on the same box. A 4 GB box (Hetzner CPX21 at ~$10/month) gives you room for Documenso’s Postgres dependency, larger document processing, an Uptime Kuma container for monitoring, or other Docker workloads. The marginal cost of one tier up is small; the marginal cost of having to migrate later is annoying. If you know up front that you’ll handle 100 MB+ documents (heavy contracts with embedded media), jump straight to 4 vCPU and 8 GB.
Other deployment targets
A VPS isn’t the only option. The same Docker compose stack runs anywhere Docker runs, and the Cloudflare Tunnel approach removes the public-IP and port-forwarding hurdles that historically made non-VPS deployments painful.
Home, homelab, or NAS. This is the path closest to most readers, and probably the most underrated. Cloudflare Tunnel means you don’t need a static IP and you don’t open any ports on your home router, which sidesteps most of the historical pain. It does not necessarily get you out of your residential ISP’s TOS, though: some clauses prohibit running services regardless of traffic direction, not just inbound port forwarding. Read the agreement, and if this is going to be counterparty-facing for real work, factor that in. Hardware that works:
- Synology or QNAP NAS with Container Manager / Container Station. DocuSeal idles comfortably within the Docker resource budgets these boxes already publish. The NAS can serve as the primary host or as the local-mirror target for VPS-side backups, but treat it as one location regardless: if the app and the backup tarballs live on the same NAS, a disk or controller failure takes both out. Encrypted offsite (R2, B2, Wasabi) is still mandatory.
- Mini-PC (Beelink, Minisforum, Intel NUC, or a refurb ThinkCentre Tiny). $150–$400 buys silent, low-power hardware (10–15 W idle) with 16–32 GB of RAM and an NVMe slot. This is the sweet spot if you want the option to run Postgres, Documenso, Immich, or other homelab services on the same box.
- Raspberry Pi 4 or 5. Works for DocuSeal on SQLite at low volume. Tight on RAM for Documenso (Postgres + Next.js wants more headroom than a Pi gracefully gives). Fine as a “test it before you buy a VPS” rig.
- Old laptop or desktop running Debian or Proxmox. Free if you have one in a closet. Power draw is the only real cost.
Pros: zero monthly hosting bill, full data residency on hardware you physically control, same Tailscale-and-Cloudflare-Tunnel setup works unchanged, more compute headroom than a $7 VPS for the same money over six months.
Cons: home internet uptime is whatever your ISP delivers (usually 99%+ but with occasional multi-hour outages); home power outages take you offline unless you’ve added a UPS; if the box dies you’re the one replacing it, not Hetzner. For a small private signing service, those tradeoffs are usually fine. For anything counterparty-facing where downtime damages a relationship, the VPS is more predictable.
Cloud (AWS, Azure, GCP). Worth a nod for completeness. The same architecture can be translated to a major cloud, but it isn’t a one-line lift-and-shift. AWS ECS on Fargate maps reasonably well: long-running tasks, multiple containers per task, persistent volumes via EFS, and a long-lived cloudflared sidecar all fit the model. Google Cloud Run and Azure Container Apps are designed around request-driven, scale-to-zero workloads, which is a different operational model than a sticky cloudflared connector and a stateful signing service want; you’d typically rework that into Cloud Run jobs plus a separately-hosted connector, or move to GCE/AKS for a more compose-like shape. Managed Postgres (RDS, Cloud SQL, Azure Database for Postgres) plugs into either DocuSeal or Documenso cleanly and is a real win on the database operations side. The downside: per-service pricing across compute, networking, storage, and egress adds up fast for a service that runs near-idle, IAM and VPC setup is an afternoon, and the operational complexity is wildly disproportionate to the workload. The case for cloud is when you’re already there for other reasons (existing org account, integration with other services in that account, regulatory mandate to use a specific cloud) or when a regulated client specifically requires AWS GovCloud, Azure Government, or GCP Assured Workloads. Otherwise, a VPS or home box is the right answer.
Harden the VPS Before Anything Else
This is the step most people skip and regret. A fresh VPS with port 22 open to the world starts seeing brute-force SSH attempts within minutes of boot. The fix is not to leave SSH on a custom port and hope (it doesn’t work). The fix is to remove SSH from the public internet entirely.
The setup that works:
- Install Tailscale on the VPS (free tier, up to 100 devices). This puts the box on a private mesh network reachable only from your other Tailscale devices.
- Configure SSH to listen only on the Tailscale interface, or use UFW to block port 22 on the public interface (ufw deny 22) while allowing it from the Tailscale subnet.
- Disable password authentication (PasswordAuthentication no in /etc/ssh/sshd_config). Keys only.
- Disable root login (PermitRootLogin no).
- Default-deny inbound on UFW. Nothing needs to reach the box from the public internet; the cloudflared container initiates its connection outbound, and outbound traffic is allowed by default.
- Add fail2ban as belt-and-suspenders for any service that does end up exposed.
- Turn on unattended security upgrades.
The result is a box where the public internet sees no open ports at all. SSH is reachable only from your Tailnet, the signing service is reachable only through the Cloudflare tunnel. Port scans return nothing. Brute-force tooling has nothing to brute-force.
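A minimal sketch of those steps on Debian or Ubuntu, to be read alongside the lockout warning below before anything is applied; 100.64.0.0/10 is Tailscale’s default address range, and the rest is stock tooling:

# Join the tailnet (prints a login URL on first run)
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up

# Firewall: default-deny inbound, SSH only from the Tailscale range
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow from 100.64.0.0/10 to any port 22 proto tcp
sudo ufw enable

# SSH: keys only, no root login
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sudo systemctl restart ssh

# fail2ban and unattended security upgrades
sudo apt-get install -y fail2ban unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

# Verify: listening ports and effective SSH auth settings
sudo ss -tlnp
sudo sshd -T | grep -E 'passwordauthentication|permitrootlogin'

Run the verification from a second, already-connected Tailscale session; the point of the last two commands is to see, in one screen, that only expected ports listen and password auth is actually off.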
How AI can help
Hardening is one of the highest-leverage uses of an AI assistant because the steps are well-documented but tedious to chain together correctly. Ask Claude Code or equivalent to write the full hardening script for your distro (Debian 13 or Ubuntu 24.04 are the easy paths), including UFW rules, sshd_config edits, fail2ban defaults, and a verification command at the end that prints the listening ports and confirms password auth is off. Have it run lynis or a similar audit afterward and explain each warning. Cross-check the script against a second model before running it; one bad UFW rule applied while connected over SSH locks you out.
Important: Test the Tailscale-only SSH path from a second device before closing public SSH. The classic foot-gun is enabling UFW deny on port 22, getting disconnected mid-session, and discovering you never confirmed Tailscale was actually working. Know your provider’s out-of-band recovery options before you need them. Hetzner has a Rescue System (boots into a minimal Linux you can use to fix /etc/ssh/sshd_config or UFW rules), a VNC console for live screen access to the running OS, and ISO mounting if you need to boot a custom recovery image. DigitalOcean’s “Recovery Console”, Vultr’s “View Console”, and Linode’s “Lish” cover the same ground. Bookmark the path for whichever provider you picked. Locking yourself out of a freshly hardened box at 1 AM is a story you only tell once.
Cloudflare Tunnel
The tunnel is what makes the no-public-ports approach work for the signing service itself. Instead of opening port 443 on the VPS and pointing DNS at it, a small daemon (cloudflared) on the VPS opens an outbound connection to Cloudflare’s edge. Cloudflare receives the public traffic, validates it, and forwards it back through that outbound connection. The VPS exposes nothing.
In the Cloudflare Zero Trust dashboard, create a tunnel, name it, and copy the tunnel token (a long ey... string). Then add a public hostname routing sign.yourdomain.com to the internal address http://docuseal:3000. Cloudflare auto-creates the DNS record and Universal SSL provisions a certificate within a minute. The tunnel will show “Inactive” until cloudflared connects in the next step.
The cloudflared connector itself can run two ways: as a systemd service installed directly on the host (cloudflared service install), or as a Docker container alongside DocuSeal in the same compose file. We’re going with the Docker route in Step 6 because it keeps the entire stack defined in one docker-compose.yml, restarts cleanly with the rest of the services, and isolates the connector from the host. The systemd path is fine if you’d rather not run cloudflared in Docker, but the compose-file approach is what most production setups land on.
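Once the compose stack from two steps down is running, two quick checks confirm the path end to end; sign.yourdomain.com is this guide’s example hostname:

# The edge should answer with a Cloudflare certificate and a real HTTP status
curl -sI https://sign.yourdomain.com | head -n 5

# The connector should have registered connections to several edge locations
docker compose logs cloudflared | grep -i "registered tunnel connection"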
Tip: While you’re in the Cloudflare dashboard, raise the minimum TLS version on the zone to 1.2 or 1.3. Under your domain, go to SSL/TLS → Edge Certificates → Minimum TLS Version and set it to TLS 1.2 (the broadly compatible floor) or TLS 1.3 (stricter, modern browsers only). Older clients negotiating TLS 1.0 or 1.1 are usually bots or scanners; legitimate signers in 2026 are all on 1.2+ regardless of platform. While you’re there, also enable Always Use HTTPS if it isn’t on, and turn on HTTP Strict Transport Security (HSTS) with a moderate max-age once you’re confident the cert is healthy. None of these are required for the tunnel to work; they tighten the public-edge profile so the only thing reaching DocuSeal is modern, encrypted traffic.
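If you’d rather codify those dashboard clicks, the same settings are exposed through Cloudflare’s zone-settings API; ZONE_ID and API_TOKEN are placeholders, and the token needs Zone Settings edit permission:

# Minimum TLS version (same effect as the dashboard toggle)
curl -X PATCH "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/settings/min_tls_version" \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"value":"1.2"}'

# Always Use HTTPS
curl -X PATCH "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/settings/always_use_https" \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"value":"on"}'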
Docker Compose: DocuSeal + Cloudflared + SMTP
The stack is two containers on one Docker network: DocuSeal itself, and the cloudflared sidecar that handles the public tunnel. The compose file is short:
services:
  docuseal:
    image: docuseal/docuseal:2.5.1
    restart: unless-stopped
    environment:
      HOST: sign.yourdomain.com
      FORCE_SSL: "true"
      SECRET_KEY_BASE: ${SECRET_KEY_BASE}
      SMTP_ADDRESS: smtp.resend.com
      SMTP_PORT: "587"
      SMTP_USERNAME: resend
      SMTP_PASSWORD: ${SMTP_API_KEY}
      SMTP_AUTHENTICATION: plain
      SMTP_FROM: "Your Org <noreply@yourdomain.com>"
    volumes:
      - ./docuseal:/data/docuseal
    networks: [stack]

  cloudflared:
    image: cloudflare/cloudflared:2026.3.0
    restart: unless-stopped
    command: tunnel --no-autoupdate run
    environment:
      TUNNEL_TOKEN: ${CF_TUNNEL_TOKEN}
    networks: [stack]
    depends_on: [docuseal]

networks:
  stack:
Pin both images to specific versions (check the DocuSeal releases and cloudflared releases pages); :latest will eventually pull a breaking change at the worst possible time. There’s a real argument for letting cloudflared float since its protocol with Cloudflare’s edge does occasionally change and the connector ships with auto-update logic, but pinning plus a scheduled review every couple of months is the more conservative path and matches how DocuSeal is being handled. Generate SECRET_KEY_BASE with openssl rand -hex 64 and put all three secrets in a .env file with chmod 600.
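Creating that file looks like this; the two pasted values are placeholders for the API key from the Resend dashboard and the token from the Zero Trust tunnel page:

# Generate the Rails secret and stage the other two secrets (placeholder values shown)
cat > .env <<EOF
SECRET_KEY_BASE=$(openssl rand -hex 64)
SMTP_API_KEY=re_xxxxxxxxxxxx
CF_TUNNEL_TOKEN=eyJxxxxxxxxxxxx
EOF
chmod 600 .env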
For SMTP, Resend is the cleanest default: free tier is 3,000 emails/month and 100/day, plenty for any small operation, and the API key flow is simpler than wrestling with Gmail’s app passwords. MailerSend and Brevo are similar. If you already have a Google Workspace plan at Business Standard tier or above, the SMTP relay service is included, and you can authenticate with an app password against smtp.gmail.com:587. Set up SPF, DKIM, and DMARC DNS records for whichever provider you pick before you send the first envelope; signatures from a domain without DKIM land in spam at most counterparty providers.
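The record values themselves come from the provider’s dashboard (the DKIM selector below is a placeholder, not any provider’s actual one), but checking what’s live in DNS is the same everywhere:

# SPF: a TXT record on the sending domain
dig TXT yourdomain.com +short

# DKIM: the selector name comes from your provider's dashboard
dig TXT selector._domainkey.yourdomain.com +short

# DMARC: start with p=none and tighten once the reports look clean
dig TXT _dmarc.yourdomain.com +short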
Run docker compose up -d, watch the cloudflared logs for “Registered tunnel connection”, and the tunnel goes Healthy in the dashboard.
How AI can help
SMTP is the part of this step where AI saves the most time. The DKIM/SPF/DMARC trio is well-documented but easy to misconfigure (one missing dot in a TXT record and signed envelopes land in spam). Hand the AI your domain and chosen provider, and have it generate the exact DNS records, then walk you through reading the headers on a test send to confirm dkim=pass and spf=pass. It can also draft the compose with healthchecks, resource limits, and log rotation if you want belt-and-suspenders.
Tip: The first DocuSeal startup takes 30 to 60 seconds while it initializes the SQLite database and runs migrations. If you hit the URL before that finishes, you’ll see a 502 from the tunnel. Watch the logs, not the browser.
Backups
This is where most self-hosted setups quietly fail. The signing service is a database of legally significant documents; losing it because the VPS disk died is the kind of mistake that ends a business relationship. The minimum viable backup is three places: local on the VPS, offsite to object storage, and a mirror somewhere you control physically.
A note on the database before backups. DocuSeal ships with embedded SQLite by default, which is fine for small private use (one operator, modest envelope volume, no API/embedding workload) and is what this guide assumes. DocuSeal’s own documentation recommends PostgreSQL for heavier production workloads, embedded use, and high-throughput API integrations, and Documenso requires Postgres outright. If your use case is closer to “real production” than “small private,” read the Postgres note below before writing the SQLite backup script; the full Postgres deployment path is its own guide and out of scope here.
For SQLite, the snapshot is simple. SQLite has a built-in atomic backup command (.backup) that produces a consistent file without stopping the container. Tar the snapshot together with the attachments directory, encrypt the tarball before it leaves the VPS, push it to Cloudflare R2 over its S3-compatible API using rclone, and prune locally. R2’s free tier is 10 GB storage and 1 million Class A operations per month, which for daily tarballs of a small signing service runs free indefinitely; beyond the free tier it’s $0.015/GB/month with no egress fees, which is cheap.
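Wiring rclone to R2 is one config command; the keys and account ID come from the R2 dashboard (placeholders below), and r2 is simply the remote name the scripts that follow assume:

# One-time remote setup
rclone config create r2 s3 \
  provider=Cloudflare \
  access_key_id=YOUR_R2_ACCESS_KEY \
  secret_access_key=YOUR_R2_SECRET_KEY \
  endpoint=https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com \
  acl=private

# Sanity check: list buckets
rclone lsd r2: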
A resilient small setup looks like this:
- Cron on the VPS at 03:00: snapshot SQLite, tar with attachments, encrypt, push to the R2 bucket via rclone (S3 API), prune local copies older than 14 days.
- Cron on a local Linux box at 03:30, pulled over Tailscale with rsync and a dedicated SSH key: mirror the VPS’s backup directory, prune local copies older than 90 days.
- Lifecycle rule on the R2 bucket: expire objects past 90 days so offsite storage doesn’t grow unbounded.
Three places, three different failure modes. The VPS dies and R2 has yesterday’s tarball; R2 has a billing problem and the local mirror is current; the local box dies and both VPS and R2 are intact.
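Concretely, the two cron entries; the script path, user, and Tailscale hostname are placeholders, and the script itself is sketched after the encryption note below:

# On the VPS (crontab -e), 03:00 nightly
0 3 * * * /opt/backups/backup.sh >> /var/log/docuseal-backup.log 2>&1

# On the local mirror box, 03:30: pull over Tailscale, keep 90 days locally
30 3 * * * rsync -a -e "ssh -i /home/me/.ssh/backup_pull" deploy@vps-tailnet-name:/opt/backups/archive/ /mnt/backups/docuseal/ && find /mnt/backups/docuseal -name '*.age' -mtime +90 -delete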
Encrypt the offsite copy. Signed contracts are sensitive by nature, and an unencrypted tarball sitting in object storage is one credential leak away from a problem. Three reasonable approaches: restic (deduplicating, encrypted, talks to S3-compatible backends directly, snapshot-aware), rclone crypt (transparent encryption layer on top of any rclone remote), or age (small standalone tool, encrypt the tarball before rclone uploads). All three are sound; restic is the most full-featured for backup specifically. Whichever you pick, the encryption key lives off the VPS. If the key is on the box that gets compromised, encryption bought you nothing. Store the key in a password manager or a secondary device, and rehearse the restore-with-key flow before you trust it. A backup you can’t decrypt is the same as no backup.
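A minimal sketch of that nightly script, using age for the encryption step (swap in restic or rclone crypt per the choice above); the data directory matches the compose bind mount, and the app path, database filename, and recipient key are all placeholders to adjust:

#!/usr/bin/env bash
set -euo pipefail

APP_DIR=/opt/docuseal            # where docker-compose.yml and the ./docuseal mount live (placeholder)
ARCHIVE=/opt/backups/archive
AGE_RECIPIENT="age1..."          # age public key; the private half lives OFF this box
STAMP=$(date +%F)
mkdir -p "$ARCHIVE"

# 1. Atomic snapshot with the host's sqlite3 against the bind-mounted database
#    (.backup produces a consistent copy without stopping the container;
#    adjust the filename to whatever actually sits in the data directory)
sqlite3 "$APP_DIR/docuseal/docuseal.sqlite3" ".backup '$APP_DIR/docuseal/snapshot.sqlite3'"

# 2. Tar the data directory (consistent snapshot plus attachments), encrypt in one pass
tar -C "$APP_DIR" -czf - docuseal | age -r "$AGE_RECIPIENT" -o "$ARCHIVE/docuseal-$STAMP.tar.gz.age"
rm "$APP_DIR/docuseal/snapshot.sqlite3"

# 3. Push to the rclone remote configured earlier, prune local copies past 14 days
rclone copy "$ARCHIVE" r2:docuseal-backups
find "$ARCHIVE" -name '*.age' -mtime +14 -delete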
Postgres note for more serious setups. Switch DocuSeal (or use Documenso, which requires this anyway) to PostgreSQL via the standard DATABASE_URL environment variable, and replace the SQLite snapshot step with pg_dump or, for point-in-time recovery, a WAL archiving setup with pgBackRest or WAL-G. Postgres also opens the door to managed alternatives: Neon and Supabase both offer free tiers with built-in backups, and at that point your application data is the only thing the VPS-side script needs to handle. Other offsite destinations worth considering: Backblaze B2 (S3-compatible, $0.006/GB/month), Wasabi (flat $6.99/TB/month), or a second Cloudflare R2 bucket in a different region for redundancy without leaving the same vendor.
The discipline that matters more than the script: actually run a restore test once, end to end including the decryption step. Pull a recent tarball from R2, decrypt it, and untar it into a scratch directory. Then verify the database:
- SQLite: sqlite3 path/to/db.sqlite3 ".tables". You should see DocuSeal’s tables (users, templates, submissions).
- Postgres: restore the dump into a throwaway database (createdb docuseal_restore_test && pg_restore -d docuseal_restore_test path/to/dump.pgdump, or psql -d docuseal_restore_test -f path/to/dump.sql), then connect (psql docuseal_restore_test) and run \dt to list tables. Drop the throwaway database when done.
If the tables come back, the backup works. If not, fix it now, not the day you need it.
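The same dance as a script you can rerun quarterly; the remote name, bucket, and key location carry over from the sketches above and are equally placeholder:

#!/usr/bin/env bash
set -euo pipefail

SCRATCH=$(mktemp -d)
trap 'rm -rf "$SCRATCH"' EXIT

# Pull the newest tarball from R2 and decrypt with the key stored off-box
LATEST=$(rclone lsf r2:docuseal-backups | sort | tail -n 1)
rclone copy "r2:docuseal-backups/$LATEST" "$SCRATCH"
age -d -i "$HOME/keys/backup.agekey" -o "$SCRATCH/restore.tar.gz" "$SCRATCH/$LATEST"
tar -xzf "$SCRATCH/restore.tar.gz" -C "$SCRATCH"

# Verify the snapshot opens and contains tables
TABLES=$(sqlite3 "$SCRATCH/docuseal/snapshot.sqlite3" ".tables")
if [ -n "$TABLES" ]; then
  echo "PASS: backup restores cleanly. Tables: $TABLES"
else
  echo "FAIL: archive restored but no tables found" >&2
  exit 1
fi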
How AI can help
The AI can write the full backup script, the cron line, the rclone config, and the lifecycle rule for R2 in one pass; what it's especially good at is the restore script and the verification harness. Ask it to write a restore-test.sh that does the full untar-and-verify dance and prints a clear pass/fail at the end. Have it write a separate script that lists the most recent backup in each location (local, R2, mirror) with timestamps, so you can run one command and confirm all three are current. Schedule that as a weekly check. AI also handles the bind-mount permission gotcha well; DocuSeal often runs as a different user inside the container than your VPS user, and the script needs setfacl or equivalent to read the database. Let it diagnose and fix that the first time it fails rather than guessing.
Important: A backup script that hasn’t been restored from is not a backup. It’s an aspirational tarball.
First Run, Templates, and Self-Test
Hit https://sign.yourdomain.com in a browser. The first-run wizard creates the admin account, sets the org name, and drops you into the dashboard. Turn on 2FA on the admin account immediately. Send a test email through the Settings panel and confirm it arrives with dkim=pass and spf=pass in the headers (Gmail’s “Show original” view shows this).
Build your first template. Upload a PDF (an NDA, an engagement letter, a generic contract for testing), drop in fields for signer name, signer email, signature box, and date. Save. From the template view, send for signature to your own email. Open it on your phone for realism, sign, complete. The completed PDF should appear in the Submissions list with the audit trail attached.
That’s the full loop. From here, you build templates as actual gigs require them, not in advance. Don’t pre-build twenty templates you’ll never use.
Tip: When you’re debugging a failure during the self-test, retry with a brand new document each time rather than editing the one that just failed. Templates and submissions carry state (cached PDF renders, partially completed submission records, in-flight email threads) that can stay broken even after the underlying pipeline is fixed. A fresh document is the cleanest signal that the system actually works end to end. Once the new-document path is green, then circle back to the failing artifact and verify it works there too.
Optional Upgrades and What You’re Skipping
What you have at this point is functional and, for ordinary U.S. business contracts handled correctly, supports the enforceability framework that ESIGN and UETA establish: PDFs are sealed with cryptographic signatures, the audit trail records IP, timestamp, user agent, and signature events, and email delivery is authenticated. That’s the destination for most everyday business use; it’s not a substitute for legal review on regulated or high-stakes transactions.
The optional upgrades, in rough order of value:
- AATL signing certificate (around $200/year). The cryptographic content of the signature is identical, but Adobe Reader displays “Signature is valid” in green instead of the self-signed warning. If counterparties complain about the warning, this is the fix. If nobody’s complained, skip it.
- DocuSeal Pro tier ($20/user/month at the entry level, hosted by DocuSeal). Unlocks bulk send, conditional fields, in-document Stripe payment fields, and SAML SSO. The OSS image already covers most workflows, so this is for specific feature gaps.
- Compliance attestations (SOC 2, HIPAA, GDPR DPA). Both DocuSeal and Documenso offer these on paid plans. If a regulated client requires a specific attestation on the platform itself, this is unavoidable; the self-hosted OSS image cannot inherit it.
- Uptime monitoring. UptimeRobot free tier (50 monitors, 5-minute interval) is the zero-effort option. Uptime Kuma is the self-hosted alternative: open source, runs as a small Docker container alongside DocuSeal, and gives you status pages, multi-channel notifications (Slack, Discord, Telegram, email, webhooks), and 20-second check intervals if you want them. If you’re already running one VPS, the marginal cost of adding Kuma is negligible and you keep monitoring data on infrastructure you control. Either way, get an HTTPS check on the signing URL so you hear about a tunnel failure before a counterparty does.
- Stripe payment links (free, transaction fees 2.9% + $0.30). If you need payments tied to signing but the OSS image doesn’t unlock the inline field, send a Stripe payment link in the same email as the signing request. Two clicks instead of one, but no platform upgrade required.
Automating the Whole Deployment with Ansible
Everything above can be captured in an Ansible playbook, and that’s worth doing once you’ve gone through it manually. The first time, type each command yourself; you learn what each piece does and why. The second time, codify it, because you’ll either build a second box (staging, second region, migration) or have to rebuild the first one after something goes sideways. The manual path is for understanding; the automated path is for repeating.
A reasonable shape for the playbook:
- Pre-task role: timezone, swap file, unattended security upgrades, apt update / apt upgrade baseline.
- Hardening role: UFW rules, sshd_config edits, fail2ban defaults, Tailscale install and join, the verification command from Step 4.
- Docker role: install Docker CE from the official repo, add the deploy user to the docker group.
- App role: clone or template docker-compose.yml and .env, run docker compose up -d, wait for healthcheck.
- Backup role: drop in backup.sh and restore-test.sh, install rclone with the R2 remote, configure the encryption tool you picked in Step 7, install the cron entries.
Variables for hostname, domain, secrets pulled from Ansible Vault, email aliases, and tunnel token live in group_vars/ so the playbook is reusable across machines without leaking credentials into the repo. Run with ansible-playbook -i inventory site.yml --check first as a sanity pass, then drop --check to apply. Treat check mode as a lint-and-shape check, not a guarantee: first-run tasks like adding the Docker apt repo, joining Tailscale, and bringing the compose stack up don’t fully model under --check because they depend on state that hasn’t been created yet. Re-running on a successfully provisioned box is idempotent; nothing changes unless the playbook says it should.
The same approach works against a fresh Hetzner box, a wiped homelab mini-PC, or a NAS that exposes SSH. Cloud setups are typically Terraform for the infrastructure (VPC, instance, security group) plus Ansible for the in-instance setup; on a single VPS, Ansible alone is plenty.
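The run loop from the paragraph above, sketched with Multipass as the throwaway target; the VM name and inventory.test path are placeholders, and the check-mode caveats just described still apply:

# Throwaway VM matching the VPS distro ("24.04" is the Ubuntu image alias)
multipass launch 24.04 --name docuseal-test
multipass info docuseal-test | grep IPv4     # put this address in a test inventory

# Lint, dry-run, apply, then apply again to confirm idempotence
ansible-lint site.yml
ansible-playbook -i inventory.test site.yml --check
ansible-playbook -i inventory.test site.yml
ansible-playbook -i inventory.test site.yml  # second run should report changed=0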
How AI can help
This is where AI is genuinely transformative for a one-person operation. Hand the AI your existing manual setup notes (this guide, the actual commands you ran, the compose file, the backup script) and ask it to produce a complete Ansible playbook with proper roles, idempotent tasks, and Vault-encrypted secrets. It will get the YAML right, pick the right modules over hand-rolled command: calls (the relevant ones for this stack are community.general.ufw for firewall rules, ansible.builtin.apt for package installs, community.docker.docker_compose_v2 for bringing the compose stack up, and the Tailscale install via the official apt repo), and structure the variables sensibly. For backup encryption, the playbook installs your chosen tool (restic, rclone with crypt, or age) and the encryption is handled inside the backup script itself rather than as an Ansible step. Run ansible-lint on the result and feed the warnings back to the AI for a clean second pass. For the test loop, ask it to write a Vagrant or Multipass cloud-init config that spins up a throwaway VM matching your VPS distro so you can run the playbook end to end without touching production. The combination of a documented manual path plus an AI-generated playbook plus a throwaway VM target is what makes "I'll automate this someday" actually happen.
What You Spent
Recurring (monthly billing, prorated annually where useful):
- Domain: $10–15/year ($14 for .app at Cloudflare Registrar as the example)
- VPS: $6.99/month for the recommended Hetzner CPX11 ($84/year)
- Cloudflare (DNS, Tunnel, R2): $0 within free tier
- SMTP: $0 (Resend free tier) or included with existing Workspace Business Standard+
Total floor: roughly $90–100/year, all-in, for a private signing service that handles hundreds of envelopes a month without touching a paid plan anywhere.
Optional:
- AATL signing cert: ~$200/year
- DocuSeal Pro (hosted): $20+/user/month
- Uptime monitoring: $0 (UptimeRobot free tier or self-hosted Uptime Kuma)
- Stripe transaction fees: 2.9% + $0.30 per transaction (only if you charge through it)
For comparison, the hosted-platform market as of 2026 (all prices below are the annual-billing rate per user; month-to-month is typically 50–80% higher and worth confirming on the vendor’s page):
- DocuSign: Personal $10/month, Standard $25/user/month, Business Pro $40/user/month. Envelope caps apply (e.g., 100/user/year on Standard).
- PandaDoc: Starter $19/user/month, Business $49/user/month, Enterprise on quote. Free e-sign tier exists for basic use.
- Dropbox Sign (formerly HelloSign): Essentials $15/user/month, Standard $25/user/month.
- Adobe Acrobat Standard / Pro: Acrobat Standard $14.99/user/month for individuals, Acrobat Pro $19.99/user/month, with team and business tiers higher. E-signature is bundled into the Acrobat product family rather than sold as a standalone “Acrobat Sign” plan in most regions.
- SignNow: Business $20/user/month, Business Premium $30/user/month.
- DocuSeal Pro (hosted): Pro $20/user/month, Pro Plus higher tiers, Enterprise on quote.
- Documenso (hosted): free individual tier, Teams $40/month billed yearly.
The math on self-hosting starts working at one user and widens fast. Two users on Standard DocuSign annual billing is $600/year; the same volume on a self-hosted box is the same $90 to $100, with no envelope caps. The hosted plans pay for the SLA, the legal team’s compliance attestations, and the support contract. If those aren’t load-bearing for what you’re doing, the open-source path is the better deal.
Toolkit Reference
The user-facing tools that appear across this guide, and the concrete spots where an AI assistant earns its keep.
Tools and Services
- DocuSeal / Documenso: the two open-source e-signature platforms. DocuSeal runs on SQLite or Postgres; Documenso requires Postgres.
- Cloudflare: domain registration, DNS, Universal SSL, free Tunnel connector, free Zero Trust plan, R2 object storage with 10 GB free tier.
- Tailscale: private mesh network for SSH and backup pull, free tier up to 100 devices.
- Hetzner Cloud: $6.99/month CPX11 VPS recommendation. Equivalents at DigitalOcean, Vultr, Linode, OVH.
- Resend (or Workspace SMTP relay): SMTP for outbound signing emails. 3,000/month free tier.
- rclone + restic (or age): S3-compatible upload and at-rest encryption for offsite backups.
- Uptime Kuma or UptimeRobot: self-hosted or hosted uptime monitoring on the signing URL.
- Ansible: optional infrastructure-as-code layer. Captures the full hardening + Docker + compose + backup setup in a replayable playbook.
Where AI Earns Its Keep
- Architecture review: sanity-check the stack pick (DocuSeal vs Documenso, SQLite vs Postgres, region, VPS sizing) against your actual document volume and compliance needs before deploying.
- Hardening script review: generate the UFW + sshd_config + fail2ban + unattended-upgrades script for your distro, then cross-check against a second model before running. One bad UFW rule applied over SSH locks you out.
- DNS record verification: generate the exact SPF, DKIM, and DMARC TXT records for your domain and chosen SMTP provider, then walk through reading the headers on a test send to confirm dkim=pass and spf=pass.
- Backup and restore harness: write the snapshot/encrypt/upload script, the restore-test script (untar, decrypt, verify tables), and a weekly health check that confirms all three backup locations are current.
- Browser self-test: drive a headless browser through the full sign-and-submit flow on mobile and desktop after every redeploy or version bump, and report the audit trail fields captured for each submission.
- Ansible playbook generation: convert your working manual setup into a complete idempotent playbook with roles, Vault-encrypted secrets, and a Vagrant or Multipass test target. Re-runs cleanly across rebuilds, second boxes, or migrations.