Server-Side Tailscale: ShowShark Remote and the Tailnet Bridge
Part 2 of 3: Private Networking for a Personal Media Server
Part 1 covered the architecture of ShowShark's remote connectivity and the Cloudflare Tunnel incompatibility that shaped our deployment strategy. This post digs into the server side: the ShowShark Remote app that manages Headscale, the TailscaleKit integration that joins the server to a tailnet, and the TailnetBridge that forwards remote connections to the local WebSocket server.
ShowShark Remote: A SwiftUI Wrapper for Headscale
Headscale is a single Go binary. It reads a YAML configuration file, opens a SQLite database, and listens for Tailscale client connections. Running it is as simple as ./headscale serve. Managing it over time is less simple: you need to generate API keys, monitor connected nodes, watch logs for problems, and restart the process after configuration changes.
ShowShark Remote wraps all of this in a native macOS app. The Headscale binary ships inside the app bundle, and the app manages its lifecycle as a child process.
Process Lifecycle
The core of ShowShark Remote is HeadscaleProcessManager, an @Observable class that owns a Foundation.Process. Starting Headscale means configuring the process with the bundled binary path, attaching pipes for stdout and stderr, and calling run():
func start() throws {
    // Bundled binary; a missing resource is a packaging error, so the
    // force-unwrap is acceptable here
    let binary = Bundle.main.url(forResource: "headscale", withExtension: nil)!
    // Ensure the binary is executable (code signing can strip permissions)
    try FileManager.default.setAttributes(
        [.posixPermissions: 0o755],
        ofItemAtPath: binary.path
    )
    let proc = Process()
    proc.executableURL = binary
    proc.arguments = ["serve", "--config", configPath]
    proc.standardOutput = stdoutPipe
    proc.standardError = stderrPipe
    try proc.run()
    self.process = proc // retain the handle for shutdown
    writePIDFile(proc.processIdentifier)
}
Two details matter here. First, setAttributes with POSIX permissions: code signing and Gatekeeper can strip the executable bit from bundled binaries, so we restore it before every launch. Second, the PID file: we write the process ID to disk so that on the next app launch, we can detect and kill orphaned Headscale processes left behind by a crash or force quit.
func killOrphanedHeadscale() {
    guard let pidString = try? String(contentsOfFile: pidFilePath, encoding: .utf8),
          let pid = pid_t(pidString.trimmingCharacters(in: .whitespacesAndNewlines))
    else { return }
    // Signal 0 probes whether the process is still alive without sending anything
    if kill(pid, 0) == 0 {
        kill(pid, SIGTERM)
        // Grace period, then force kill if needed
        DispatchQueue.global().asyncAfter(deadline: .now() + 5.0) {
            if kill(pid, 0) == 0 {
                kill(pid, SIGKILL)
            }
        }
    }
    try? FileManager.default.removeItem(atPath: pidFilePath)
}
Shutdown is equally careful. The app registers for NSApplication.willTerminateNotification in its initializer and calls a synchronous stop method that sends SIGTERM, waits up to five seconds for graceful exit, then escalates to SIGKILL. Process.waitUntilExit() blocks the calling thread to ensure the child is fully terminated before the parent exits.
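That stop path can be sketched as follows. This is a minimal, self-contained illustration of the SIGTERM-wait-SIGKILL sequence described above; the type and property names are hypothetical, not ShowShark's actual API:

```swift
import Foundation

// Sketch of a synchronous stop: SIGTERM, up to five seconds of grace,
// then SIGKILL, then waitUntilExit() to guarantee the child is gone.
final class ProcessStopper {
    var process: Process?

    func stop() {
        guard let proc = process, proc.isRunning else { return }
        proc.terminate()                            // sends SIGTERM
        let deadline = Date().addingTimeInterval(5) // grace period
        while proc.isRunning && Date() < deadline {
            Thread.sleep(forTimeInterval: 0.1)      // poll for graceful exit
        }
        if proc.isRunning {
            kill(proc.processIdentifier, SIGKILL)   // escalate
        }
        proc.waitUntilExit()                        // block until fully terminated
    }
}
```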
Log Streaming
Headscale writes structured logs to stdout and error messages to stderr. ShowShark Remote captures both through Pipe objects and streams them into a SwiftUI view:
stdoutPipe.fileHandleForReading.readabilityHandler = { [weak self] handle in
    let data = handle.availableData
    guard !data.isEmpty,
          let chunk = String(data: data, encoding: .utf8) else { return }
    DispatchQueue.main.async {
        guard let self else { return }
        self.logOutput.append(contentsOf: chunk.split(separator: "\n").map(String.init))
        // Cap the buffer at 1,000 lines to bound memory use
        if self.logOutput.count > 1000 {
            self.logOutput.removeFirst(self.logOutput.count - 1000)
        }
    }
}
The capped buffer (at most 1,000 lines) prevents unbounded memory growth. In the UI, a ScrollViewReader with auto-scroll keeps the latest log entries visible, and the entire log buffer can be copied to the clipboard for debugging.
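The auto-scrolling view can be sketched like this; the type and property names are illustrative, not ShowShark's:

```swift
import SwiftUI

// Illustrative log console: monospaced rows, pinned to the newest entry
struct LogConsoleView: View {
    let lines: [String]

    var body: some View {
        ScrollViewReader { proxy in
            ScrollView {
                LazyVStack(alignment: .leading, spacing: 2) {
                    ForEach(Array(lines.enumerated()), id: \.offset) { index, line in
                        Text(line)
                            .font(.system(.caption, design: .monospaced))
                            .id(index)
                    }
                }
            }
            .onChange(of: lines.count) {
                // Keep the newest entry visible as logs stream in
                proxy.scrollTo(lines.count - 1, anchor: .bottom)
            }
        }
    }
}
```

Using the line index as the scroll anchor is enough here because the buffer only ever appends or trims from the front.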
Configuration Generation
Headscale needs a YAML config file specifying the database path, listen address, DERP relay map, and other settings. Rather than shipping a static config and asking the user to edit it, ShowShark Remote generates one programmatically:
enum HeadscaleConfigGenerator {
    static func generateIfNeeded(
        serverURL: String,
        listenAddr: String,
        supportDir: String
    ) throws -> Bool {
        let configPath = supportDir + "/config.yaml"
        guard !FileManager.default.fileExists(atPath: configPath) else {
            return false // Already exists; no regeneration
        }
        let config = """
            server_url: \(serverURL)
            listen_addr: \(listenAddr)
            database:
              type: sqlite
              sqlite:
                path: \(supportDir)/headscale.db
            noise:
              private_key_path: \(supportDir)/noise_private.key
            derp:
              paths:
                - \(supportDir)/derp.yaml
            dns:
              magic_dns: false
              base_domain: showshark.local
            log:
              level: info
            """
        try config.write(toFile: configPath, atomically: true, encoding: .utf8)
        return true
    }
}
The DERP map is a static file containing Tailscale's public relay servers. These relays provide a fallback path when direct peer-to-peer connections fail (both devices behind symmetric NATs, for example). We ship a hardcoded DERP configuration to avoid an external dependency on Tailscale's DERP discovery endpoint at startup.
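For reference, a Headscale DERP map file has roughly this shape. The region number, hostname, and ports below are placeholders, not the Tailscale relays ShowShark actually ships; consult Headscale's documentation for the authoritative schema:

```yaml
# Illustrative derp.yaml -- values are placeholders
regions:
  900:
    regionid: 900
    regioncode: custom
    regionname: Example Region
    nodes:
      - name: 900a
        regionid: 900
        hostname: derp.example.com
        stunport: 3478
        stunonly: false
        derpport: 443
```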
All state lives in ~/Library/Application Support/ShowShark Remote/: the SQLite database, Noise encryption keys, and configuration files. The Unix socket path uses NSTemporaryDirectory() for sandbox compliance.
The Dashboard
The UI is a standard NavigationSplitView with four tabs: Dashboard, Nodes, Logs, and Settings.
┌──────────────────────────────────────────────────────────┐
│ ShowShark Remote │
├──────────┬───────────────────────────────────────────────┤
│ │ │
│ Dashboard│ Headscale Server │
│ Nodes │ ● Running URL: headscale.example.com│
│ Logs │ │
│ Settings │ Controls │
│ │ [Stop] Auto-start: [✓] │
│ │ │
│ │ Summary │
│ │ Nodes: 4 Online: 3 │
│ │ │
└──────────┴───────────────────────────────────────────────┘
The dashboard polls the Headscale REST API every 30 seconds for node counts and online status. A three-second delay after process startup gives Headscale time to bind its listen address before the first API call. The Nodes tab shows each connected device with its tailnet IP, namespace, and last-seen timestamp. Settings handles API key generation (via the Headscale CLI) and config file management.
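The polling loop might look like this sketch; `refreshNodeCounts()` is a hypothetical helper that would wrap the Headscale REST call, and the intervals come from the description above:

```swift
import Foundation

// Illustrative dashboard poller: 3-second startup delay, then a
// 30-second refresh cadence until cancelled.
final class DashboardPoller {
    private var pollTask: Task<Void, Never>?

    func start() {
        pollTask = Task {
            // Give Headscale time to bind its listen address
            try? await Task.sleep(for: .seconds(3))
            while !Task.isCancelled {
                await refreshNodeCounts()
                try? await Task.sleep(for: .seconds(30))
            }
        }
    }

    func stop() { pollTask?.cancel() }

    private func refreshNodeCounts() async {
        // Hypothetical: fetch the node list from the Headscale REST API
        // and update published counts for the dashboard
    }
}
```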
TailscaleKit: Building the xcframework
TailscaleKit is an xcframework built from Tailscale's libtailscale, which is itself a Go library compiled with cgo for Apple platforms. The build script produces three slices: iOS arm64, iOS Simulator (arm64 + x86_64), and macOS arm64.
TailscaleKit.xcframework/
├── ios-arm64/
│ └── TailscaleKit.framework/
├── ios-arm64_x86_64-simulator/
│ └── TailscaleKit.framework/
└── macos-arm64/
└── TailscaleKit.framework/
The framework exposes a Swift-friendly API over the underlying C functions. The important types are Configuration (hostname, state directory, auth key, control URL), TailscaleNode (the running node instance), and LogSink (a protocol for receiving log output from the Go runtime).
Starting the Server's Tailnet Node
When ShowShark Server launches, RemoteConnectivityManager orchestrates the tailnet connection. The startup flow is deliberately failure-tolerant; remote connectivity is a bonus feature, never a prerequisite for local operation.
func start(coordinationURL: String?, preAuthKey: String?, serverPort: UInt16) async {
    guard let preAuthKey, !preAuthKey.isEmpty else {
        logger.info("[tailnet] No pre-auth key available; skipping remote setup")
        return
    }
    do {
        // Resolve coordination URL (stored value or fetch from Workers API)
        let resolvedURL: String
        if let url = coordinationURL, !url.isEmpty {
            resolvedURL = url
        } else {
            resolvedURL = try await fetchCoordinationURL()
        }
        // Start the tailscale node
        try await node.start(coordinationURL: resolvedURL, preAuthKey: preAuthKey)
        // Read the assigned tailnet IP
        let ip = await node.ipAddress // e.g., "100.64.0.1"
        logger.info("[tailnet] Joined tailnet with IP \(ip)")
        // Start the bridge
        if let handle = await node.tailscaleHandle {
            let bridge = TailnetBridge(tailscaleHandle: handle, localPort: serverPort)
            try await bridge.start()
            await state.setBridge(bridge)
        }
    } catch {
        // Failure-tolerant by design: log and keep serving locally
        logger.error("[tailnet] Remote setup failed: \(error)")
    }
}
The LiveServerTailscaleNode wraps TailscaleKit with hostname sanitization (the macOS device name, lowercased and stripped of non-alphanumeric characters) and a log bridge that routes Go-level debug output through AppLogger with a [tailnet] prefix:
let config = Configuration(
hostName: sanitizedHostname, // e.g., "curtiss-mac-studio"
path: stateDirectory, // ~/Library/Application Support/ShowShark Server/tailscale/
authKey: preAuthKey,
controlURL: coordinationURL,
ephemeral: false // Persistent node; survives restarts
)
let logSink = TailscaleLogSink()
let tsNode = try TailscaleKit.TailscaleNode(configuration: config, logSink: logSink)
try await tsNode.up()
let addrs = try await tsNode.addrs() // Returns IPv4 + IPv6
The call to up() blocks until the node has successfully connected to the coordination server and joined the tailnet. After that, addrs() returns the assigned tailnet IP addresses. State is persisted to the tailscale/ directory, so on subsequent launches the node reconnects without re-provisioning.
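The hostname sanitization is simple string hygiene. A plausible sketch, where the exact rules (spaces become hyphens, everything outside [a-z0-9-] is dropped) are an assumption beyond the post's "lowercased and stripped of non-alphanumeric characters":

```swift
import Foundation

// Hypothetical sketch: lowercase, spaces -> hyphens, then drop any
// character that is not a lowercase letter, digit, or hyphen.
func sanitizeHostname(_ deviceName: String) -> String {
    let lowered = deviceName.lowercased().replacingOccurrences(of: " ", with: "-")
    let allowed = Set("abcdefghijklmnopqrstuvwxyz0123456789-")
    return String(lowered.filter { allowed.contains($0) })
}
```

With these rules, a device named "Curtis's Mac Studio" becomes "curtiss-mac-studio", matching the example above.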
TailnetBridge: Forwarding Remote Connections
Here is the central problem the server side must solve: the WebSocket server listens on localhost, but remote clients connect to the tailnet IP. Something needs to bridge the two.
Remote client WebSocket server
(100.64.0.2) (localhost)
│ │
│ connect to 100.64.0.1 │
▼ │
┌──────────────────────────────────────────────────────┐│
│ TailnetBridge ││
│ ││
│ tailscale_listen() on tailnet interface ││
│ │ ││
│ ▼ ││
│ tailscale_accept() ──► new connection ││
│ │ ││
│ ▼ ▼│
│ socket(AF_INET) ──► connect(localhost) ◄── WebSocket│
│ │ Server │
│ ▼ │
│ TaskGroup: │
│ Task 1: read(tailnet) → write(local) │
│ Task 2: read(local) → write(tailnet) │
│ │
│ When either direction hits EOF: │
│ shutdown(both, SHUT_RDWR) → unblock other task │
└───────────────────────────────────────────────────────┘
TailnetBridge uses TailscaleKit's C API directly because the Swift IncomingConnection type lacks write support. The bridge creates a listener on the tailnet interface, accepts incoming connections, and for each one opens a local socket to the WebSocket server and forwards data bidirectionally.
The Accept Loop
The accept loop runs as a detached Task, but the actual tailscale_accept() call is dispatched to a GCD queue. This is important: tailscale_accept is a blocking C function, and calling it directly from a Swift async context would pin one of the cooperative thread pool's limited threads. GCD, by contrast, will spin up additional threads for blocking work (its pool is large, if not truly unbounded), making it the right place for blocking I/O:
func acceptLoop() async {
while !Task.isCancelled {
let connFD: Int32 = await withCheckedContinuation { continuation in
DispatchQueue.global().async {
var fd: Int32 = -1
let result = tailscale_accept(self.listenerFD, &fd)
continuation.resume(returning: result == 0 ? fd : -1)
}
}
guard connFD >= 0 else {
// Accept error; back off briefly before retrying
try? await Task.sleep(for: .seconds(1))
continue
}
Task { await bridgeConnection(tailnetFD: connFD) }
}
}
Bidirectional Forwarding
Each accepted connection spawns a bridge task that opens a local socket and starts two forwarding loops in a TaskGroup:
func bridgeConnection(tailnetFD: Int32) async {
    // Connect to the localhost WebSocket server
    let localFD = socket(AF_INET, SOCK_STREAM, 0)
    var addr = sockaddr_in()
    addr.sin_family = sa_family_t(AF_INET)
    addr.sin_port = localPort.bigEndian // network byte order
    addr.sin_addr.s_addr = inet_addr("127.0.0.1")
    let connectResult = withUnsafePointer(to: &addr) {
        $0.withMemoryRebound(to: sockaddr.self, capacity: 1) {
            connect(localFD, $0, socklen_t(MemoryLayout<sockaddr_in>.size))
        }
    }
    guard connectResult == 0 else {
        close(tailnetFD)
        close(localFD)
        return
    }
    await withTaskGroup(of: String.self) { group in
        group.addTask {
            await self.forwardData(from: tailnetFD, to: localFD)
            return "tailnet→local"
        }
        group.addTask {
            await self.forwardData(from: localFD, to: tailnetFD)
            return "local→tailnet"
        }
        // When the first direction finishes, unblock the other
        if await group.next() != nil {
            shutdown(tailnetFD, SHUT_RDWR)
            shutdown(localFD, SHUT_RDWR)
        }
        // Wait for the second to finish
        for await _ in group {}
    }
    close(tailnetFD)
    close(localFD)
}

The forwardData function is a tight loop: read up to 64 KB from the source file descriptor, write all bytes to the destination (handling partial writes), repeat until EOF or error. The whole blocking loop is dispatched to a GCD queue and bridged back to the task group with withCheckedContinuation, so it never ties up the cooperative thread pool:

func forwardData(from sourceFD: Int32, to destFD: Int32) async {
    // Run the blocking read/write loop on GCD; resume when the stream ends
    await withCheckedContinuation { continuation in
        DispatchQueue.global().async {
            let buffer = UnsafeMutablePointer<UInt8>.allocate(capacity: 65536)
            defer { buffer.deallocate() }
            outer: while true {
                let bytesRead = read(sourceFD, buffer, 65536)
                if bytesRead <= 0 { break } // EOF or error
                // write() may accept fewer bytes than asked; loop until done
                var totalWritten = 0
                while totalWritten < bytesRead {
                    let written = write(destFD, buffer + totalWritten, bytesRead - totalWritten)
                    if written <= 0 { break outer } // write error
                    totalWritten += written
                }
            }
            continuation.resume()
        }
    }
}
The shutdown(SHUT_RDWR) call in the task group is the key to clean teardown. When one direction hits EOF (the remote client disconnected, or the local server closed the connection), shutdown on both file descriptors causes the blocking read in the other direction to return immediately with an error. Both tasks complete, the task group finishes, and the file descriptors are closed.
Why Not Just Listen on All Interfaces?
A reasonable question: why not configure the WebSocket server to listen on 0.0.0.0 and let remote clients connect directly? Two reasons.
First, the tailnet interface is a userspace network stack inside TailscaleKit. It does not appear as a system network interface. NWListener and bind() cannot listen on it; the only way to accept connections on a tailnet IP is through TailscaleKit's own tailscale_listen / tailscale_accept API.
Second, even if it were possible, the bridge architecture provides clean separation. The WebSocket server does not need to know whether a connection is local or remote. It accepts connections on localhost exactly as before; the bridge handles everything else. This meant zero changes to the existing networking code.
Credential Flow: From Apple ID to Tailnet
Before the server can join a tailnet, it needs credentials: a coordination URL and a pre-auth key. These come from the control plane API (Cloudflare Workers), which acts as the intermediary between ShowShark and the Headscale coordination server.
┌─────────────────────┐ ┌───────────────────────┐ ┌──────────────────┐
│ ShowShark Server │ │ Workers API │ │ Headscale │
│ │ │ (Cloudflare) │ │ (Coordination) │
│ 1. Sign in with │ │ │ │ │
│ Apple │ │ │ │ │
│ │ │ │ │ │ │
│ 2. Provision ──────────► │ 3. Create namespace ─────► (ss-<hash>) │
│ tailnet │ │ (per-account, │ │ │
│ │ │ deterministic) │ │ │
│ │ │ │ │ │
│ │ │ 4. Create pre-auth ───────► reusable, │
│ │ │ key for server │ │ non-ephemeral │
│ │ │ │ │ │
│ 5. Receive ◄────────── │ 5. Return URL + key │ │ │
│ credentials │ │ │ │ │
│ │ │ └───────────────────────┘ └──────────────────┘
│ 6. Start tailscale │
│ node with key │
│ │ │
│ 7. Store credentials│
│ in UserDefaults │
│ for next launch │
└─────────────────────┘
The namespace is deterministic: a SHA-256 hash of the Apple account ID, prefixed with ss-. This means provisioning is idempotent; calling it twice for the same account returns the same namespace and reuses the existing pre-auth key. The key is reusable and non-ephemeral with a long expiration, so the server node persists across restarts without re-provisioning.
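The derivation can be sketched with CryptoKit. Only the "SHA-256 of the account ID, prefixed with ss-" part comes from the design above; truncating to 16 hex characters is an illustrative choice, not necessarily ShowShark's:

```swift
import CryptoKit
import Foundation

// Deterministic namespace: SHA-256 of the Apple account ID, "ss-" prefix.
// The 16-character truncation is an assumption for illustration.
func namespace(forAccountID accountID: String) -> String {
    let digest = SHA256.hash(data: Data(accountID.utf8))
    let hex = digest.map { String(format: "%02x", $0) }.joined()
    return "ss-" + String(hex.prefix(16))
}
```

Because the hash is a pure function of the account ID, calling this twice for the same account always yields the same namespace, which is what makes provisioning idempotent.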
On subsequent launches, RemoteConnectivityManager reads the stored coordination URL and pre-auth key from UserDefaults and starts the tailscale node directly, skipping the Workers API entirely.
Minting Client Credentials
When a client logs in to the server (locally or remotely), the login handler mints a fresh pre-auth key for that specific client device:
func handleLoginRequest(_ request: LoginRequest, /* ... */) async {
// Validate password, register device...
// Gather remote connectivity details
let remoteHostname = await coordinator?.getRemoteHostname()
let remoteIP = await coordinator?.getRemoteIP()
// Mint a pre-auth key for this client
let clientCreds = await mintClientTailnetCredentials(
deviceIdentifier: request.deviceIdentifier,
deviceName: request.deviceName
)
// Build the login response
var response = LoginResponse()
response.success = true
response.serverName = serverName
if let hostname = remoteHostname { response.remoteHostname = hostname }
if let ip = remoteIP { response.remoteIp = ip }
if let creds = clientCreds {
response.coordinationUrl = creds.coordinationURL
response.tailnetPreauthKey = creds.preAuthKey
}
// Send response to client
sendResponse(response, correlationID: correlationID)
}
The Workers API deduplicates key creation per device: if the same device identifier requests a key twice, it receives the same key. This prevents key proliferation from repeated logins. Keys are scoped to the user's Headscale namespace, so a client's key can only join its owner's tailnet.
Fire-and-Forget by Design
Every remote connectivity operation on the server side is wrapped in error handling that logs failures but never throws. If the coordination server is unreachable, the server continues operating locally. If the tailscale node fails to start, the WebSocket server still listens on localhost. If key minting fails for a client, the login response is sent without tailnet credentials, and the client connects normally over the local network.
This is a deliberate architectural choice. Remote connectivity is valuable but optional; it must never degrade the core experience of local streaming.
What's Next
Part 3 covers the client side: how it receives and stores tailnet credentials, the SOCKS5 proxy trick that routes NWConnection through the tailnet transparently, and the platform abstraction layer across iOS, macOS, tvOS, and visionOS.