Client-Side Tailscale: SOCKS5 Proxy Routing and Platform Abstraction
Part 3 of 3: Private Networking for a Personal Media Server
Part 1 covered the architecture and the Cloudflare Tunnel problem. Part 2 covered the server side: ShowShark Remote, the TailscaleKit node, and the TailnetBridge. This final post covers the client: how it receives tailnet credentials, routes connections through the tailnet transparently, and abstracts the platform differences across iOS, macOS, tvOS, and visionOS.
The Client's Problem
The client has an NWConnection that speaks WebSocket to the server. On the local network, this connection goes directly to the server's IP address. For remote access, it needs to go through the tailnet instead. But NWConnection has no concept of tailnets; it speaks TCP and UDP to IP addresses.
TailscaleKit solves this by exposing a local SOCKS5 proxy. When the tailscale node is running, it listens on a loopback address and proxies any connection through the tailnet's WireGuard tunnel. The client just needs to tell NWConnection to use this proxy, and the rest happens transparently.
┌──────────────────────────────────────────────┐
│               ShowShark Client               │
│                                              │
│   NWConnection                               │
│   (WebSocket to 100.64.0.1)                  │
│        │                                     │
│        │ proxy settings                      │
│        ▼                                     │
│   SOCKS5 proxy (loopback)                    │
│        │                                     │
│        │ tunneled through                    │
│        ▼                                     │
│   TailscaleKit                               │
│   (userspace WireGuard)                      │
│        │                                     │
└────────┼─────────────────────────────────────┘
         │
         │ encrypted WireGuard
         │ over the internet
         │
┌────────┼─────────────────────────────────────┐
│        ▼         ShowShark Server            │
│                                              │
│   TailnetBridge (tailscale_accept)           │
│        │                                     │
│        ▼                                     │
│   localhost (WebSocket server)               │
└──────────────────────────────────────────────┘
Receiving Credentials
The server sends tailnet credentials to the client as part of the normal login response. Four fields were added to the LoginResponse protobuf message:
message LoginResponse {
    bool success = 1;
    string server_name = 2;
    // ... existing fields ...
    string remote_hostname = 9;
    string remote_ip = 10;
    string coordination_url = 11;
    string tailnet_preauth_key = 12;
}
The client persists these credentials in its SavedServer model alongside the hostname, password, and other connection metadata:
struct SavedServer: Codable, Identifiable {
    let id: UUID
    var hostname: String
    var password: String

    // Remote connectivity (populated from LoginResponse)
    var remoteHostname: String?
    var remoteIP: String?
    var coordinationURL: String?
    var tailnetPreAuthKey: String?

    var hasRemoteDetails: Bool {
        remoteHostname != nil || remoteIP != nil
    }
}
When the client receives a LoginResponse with non-empty tailnet fields, it stores them and triggers the tailnet join in the background:
if !loginResponse.coordinationURL.isEmpty,
   !loginResponse.tailnetPreauthKey.isEmpty {
    await onTailnetCredentialsReceived?(
        loginResponse.coordinationURL,
        loginResponse.tailnetPreauthKey
    )
}
The callback is wired up in ConnectionViewModel, which passes the credentials to TailnetClient:
await connectionManager.setOnTailnetCredentialsReceived { coordinationURL, preAuthKey in
    guard !tailnetClient.isConnected else { return }
    try? await tailnetClient.joinTailnet(
        coordinationURL: coordinationURL,
        preAuthKey: preAuthKey
    )
}
The guard prevents redundant joins. Once the client is on the tailnet, it stays on; subsequent logins to the same server skip enrollment entirely.
TailnetClient: Lazy Initialization
TailnetClient is an @Observable class that manages the TailscaleKit node lifecycle. It uses lazy initialization to avoid startup overhead; the tailscale node is only created when actually needed:
@Observable @MainActor
class TailnetClient {
    private var node: (any TailscaleNodeProtocol)?
    private let nodeFactory: () -> any TailscaleNodeProtocol
    var isConnected = false

    init(nodeFactory: @escaping () -> any TailscaleNodeProtocol) {
        self.nodeFactory = nodeFactory
    }

    private func ensureNodeCreated() -> any TailscaleNodeProtocol {
        if let existing = node { return existing }
        let newNode = nodeFactory()
        node = newNode
        return newNode
    }

    func joinTailnet(coordinationURL: String, preAuthKey: String) async throws {
        let tsNode = ensureNodeCreated()
        try await Task.detached { [tsNode] in
            try await tsNode.joinTailnet(
                coordinationURL: coordinationURL,
                preAuthKey: preAuthKey
            )
        }.value
        isConnected = true

        // Persist auth state for reconnection without re-enrollment
        UserDefaults.standard.set(true, forKey: tailnetAuthStateKey)
        UserDefaults.standard.set(coordinationURL, forKey: tailnetCoordinationURLKey)
    }
}
The Task.detached call is important. TailnetClient is @MainActor (for SwiftUI observation), but joinTailnet on the underlying protocol is nonisolated and calls into Go code that blocks. Detaching ensures the main actor is not blocked during node startup.
After a successful join, the auth state is persisted to UserDefaults. On subsequent app launches, ensureConnected() can reconnect to the tailnet using stored credentials without needing the server to provide them again.
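ensureConnected() itself is not shown in this post. A minimal sketch of its reconnect decision, assuming hypothetical key names (the real constants are not shown above), could look like this:

```swift
import Foundation

// Hypothetical key names; the actual constants are not shown in the post.
let tailnetAuthStateKey = "tailnet.authState"
let tailnetCoordinationURLKey = "tailnet.coordinationURL"

/// Decide whether a silent reconnect is possible from persisted state.
/// Returns the coordination URL to reuse, or nil if the client must wait
/// for fresh credentials from a LoginResponse.
func persistedCoordinationURL(in defaults: UserDefaults) -> String? {
    guard defaults.bool(forKey: tailnetAuthStateKey) else { return nil }
    return defaults.string(forKey: tailnetCoordinationURLKey)
}
```

With a URL in hand, ensureConnected() can call joinTailnet with an empty pre-auth key; LiveTailscaleNode maps an empty key to authKey: nil, which tells TailscaleKit to reuse the identity in its state directory instead of re-enrolling.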
The SOCKS5 Proxy Connection
When the client decides to connect remotely, it asks TailnetClient for the SOCKS5 proxy details and passes them to ClientConnectionManager:
func connectRemotely(to server: SavedServer) async throws {
    // Ensure the tailnet node is running
    if let coordinationURL = server.coordinationURL,
       let preAuthKey = server.tailnetPreAuthKey {
        try? await tailnetClient.joinTailnet(
            coordinationURL: coordinationURL,
            preAuthKey: preAuthKey
        )
    } else {
        try? await tailnetClient.ensureConnected()
    }

    // Get the loopback SOCKS5 proxy details
    let proxy = try await tailnetClient.getLoopbackProxy()
    // Returns: (address: "127.0.0.1", port: Int, credential: String)

    // Connect through the proxy
    try await connectionManager.connectViaTailnet(
        to: server,
        proxyHost: proxy.address,
        proxyPort: proxy.port,
        proxyCredential: proxy.credential
    )
}
Inside ClientConnectionManager, the SOCKS5 proxy is configured on the NWConnection's parameters:
func connectViaTailnet(
    to server: SavedServer,
    proxyHost: String,
    proxyPort: Int,
    proxyCredential: String
) async throws {
    let config = SOCKSProxyConfig(
        host: proxyHost,
        port: UInt16(proxyPort),
        credential: proxyCredential
    )
    let client = WebSocketClient(
        host: server.remoteIP ?? server.remoteHostname ?? "",
        socksProxy: config
    )
    try await client.connect()
}
NWConnection handles the SOCKS5 handshake internally. From this point forward, all WebSocket traffic flows through the proxy, through TailscaleKit's WireGuard tunnel, across the internet, and into the server's TailnetBridge. The WebSocket layer does not know or care that it is being proxied; the connection looks and behaves identically to a local one.
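To make "handles the handshake internally" concrete, here is a from-scratch sketch of the client's half of that exchange as RFC 1928 and RFC 1929 define it. This is not ShowShark code; it only illustrates the bytes NWConnection produces on the wire:

```swift
// Not ShowShark code: a sketch of the client messages in a SOCKS5
// handshake (RFC 1928/1929), which NWConnection performs internally.

/// Greeting: advertise username/password authentication (method 0x02).
func socks5Greeting() -> [UInt8] {
    [0x05,  // SOCKS version 5
     0x01,  // one auth method offered
     0x02]  // method 0x02 = username/password (RFC 1929)
}

/// Username/password sub-negotiation message (RFC 1929).
func socks5Auth(username: String, password: String) -> [UInt8] {
    var msg: [UInt8] = [0x01, UInt8(username.utf8.count)]
    msg += Array(username.utf8)
    msg.append(UInt8(password.utf8.count))
    msg += Array(password.utf8)
    return msg
}

/// CONNECT request for an IPv4 target such as the server's tailnet IP.
func socks5Connect(ipv4 octets: [UInt8], port: UInt16) -> [UInt8] {
    [0x05, 0x01, 0x00, 0x01]                      // version, CONNECT, reserved, ATYP=IPv4
        + octets                                  // 4-byte address, e.g. 100.64.0.1
        + [UInt8(port >> 8), UInt8(port & 0xFF)]  // port, big-endian
}
```

For ShowShark's remote connection, the CONNECT target is the server's tailnet address (100.64.0.1 in the diagram above) and the credential is the per-node value TailscaleKit returns from getLoopbackProxy().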
Retry with Backoff
There is a timing gap between when joinTailnet returns and when the SOCKS5 proxy is ready to accept connections. The tailnet node may report itself as "up" before the WireGuard routing table is fully established. The client handles this with a retry loop whose delay grows with each attempt:
for attempt in 1...5 {
    do {
        let proxy = try await tailnetClient.getLoopbackProxy()
        try await connectionManager.connectViaTailnet(
            to: server,
            proxyHost: proxy.address,
            proxyPort: proxy.port,
            proxyCredential: proxy.credential
        )
        break  // Connected successfully
    } catch {
        if attempt < 5 {
            try? await Task.sleep(for: .seconds(attempt * 3))
        } else {
            throw error
        }
    }
}
In practice, the first or second attempt usually succeeds. The backoff is a safety net for slow network conditions or the first connection after a cold start.
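The sleep after attempt n is n * 3 seconds (a linear ramp), and no sleep follows the final attempt. A tiny helper, illustrative only and not part of the app, makes the worst-case wait explicit:

```swift
/// Delays (in seconds) slept between attempts of the retry loop above:
/// a sleep follows each failed attempt except the last.
func backoffSchedule(maxAttempts: Int = 5, stepSeconds: Int = 3) -> [Int] {
    (1..<maxAttempts).map { $0 * stepSeconds }
}
```

backoffSchedule() returns [3, 6, 9, 12], so a total failure surfaces after 30 seconds of waiting plus the time spent in the five connection attempts themselves.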
Connection Strategy
The client uses a simple decision tree to choose between local and remote connections:
User taps "Connect" on a server
               │
               ▼
Does the server have a local hostname?
               │
        ┌──────┴──────┐
       yes            no (remote-only server)
        │             │
        ▼             ▼
   Try local      connectRemotely()
   connection
        │
        ▼
     Success?
        │
     ┌──┴──┐
    yes    no
     │      │
     ▼      ▼
  Done    Does the server have remote details?
                           │
                    ┌──────┴──────┐
                   yes            no
                    │             │
                    ▼             ▼
              Show "Connect   Show generic
              Remotely"       error alert
              button
Remote-only servers (those created through credential sharing with no local hostname) skip the local attempt entirely and go straight to the tailnet. Servers that were originally discovered locally but have remote credentials fall back to remote when the local connection fails; the user sees a "Connect Remotely" button in the error alert rather than a dead end.
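The tree above reduces to two small pure decisions. The following is an illustrative reconstruction — the enum and function names are hypothetical, not taken from the ShowShark codebase — but it makes the strategy easy to unit-test:

```swift
// Illustrative reconstruction of the connection strategy; the names here
// are hypothetical, not taken from the ShowShark codebase.
enum ConnectionPlan: Equatable {
    case tryLocalFirst  // attempt the LAN connection, recover per localFailure(...)
    case remoteOnly     // no local hostname: go straight to the tailnet
}

enum LocalFailureRecovery: Equatable {
    case offerRemoteButton  // "Connect Remotely" in the error alert
    case genericError       // dead end: no remote credentials stored
}

func plan(hasLocalHostname: Bool) -> ConnectionPlan {
    hasLocalHostname ? .tryLocalFirst : .remoteOnly
}

func localFailure(hasRemoteDetails: Bool) -> LocalFailureRecovery {
    hasRemoteDetails ? .offerRemoteButton : .genericError
}
```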
Platform Abstraction
TailscaleKit is imported conditionally. Rather than assuming the framework is present in every build, the client hides the implementation behind a protocol and gates the live version with canImport(TailscaleKit):
protocol TailscaleNodeProtocol: Sendable {
    nonisolated func joinTailnet(
        coordinationURL: String,
        preAuthKey: String
    ) async throws

    nonisolated var isConnected: Bool { get async }

    nonisolated func getLoopbackProxy() async throws -> (
        address: String,
        port: Int,
        credential: String
    )
}
Wherever the framework is available, LiveTailscaleNode wraps TailscaleKit.TailscaleNode directly:
#if canImport(TailscaleKit)
import TailscaleKit

final class LiveTailscaleNode: TailscaleNodeProtocol, @unchecked Sendable {
    private let nodeRef = NodeRef()

    private actor NodeRef {
        var node: TailscaleKit.TailscaleNode?
        var connected = false

        func setNode(_ n: TailscaleKit.TailscaleNode) {
            node = n
            connected = true
        }

        // Read access for the other protocol methods
        func getNode() -> TailscaleKit.TailscaleNode? { node }
    }

    nonisolated func joinTailnet(coordinationURL: String, preAuthKey: String) async throws {
        let stateDir = /* ~/Library/Application Support/ShowShark/tailscale/ */
        let hostname = sanitizeHostname(deviceName)
        let config = Configuration(
            hostName: hostname,
            path: stateDir,
            authKey: preAuthKey.isEmpty ? nil : preAuthKey,
            controlURL: coordinationURL,
            ephemeral: false
        )
        let logSink = TailscaleLogSink()
        let tsNode = try TailscaleKit.TailscaleNode(configuration: config, logSink: logSink)
        try await tsNode.up()
        await nodeRef.setNode(tsNode)
    }
}
#endif
On tvOS and visionOS, the TailscaleKit xcframework includes slices for these platforms as well, built from the same libtailscale source. The build script produces arm64 slices for each platform, and the same LiveTailscaleNode implementation is used across all four. The #if canImport guard is there as a safety net in case a platform build ever fails to include the framework, but in normal operation all platforms use the live implementation.
Hostname Sanitization
Each device's tailnet hostname is derived from its local device name. Headscale hostnames must be lowercase alphanumeric with hyphens, so the client sanitizes the name:
func sanitizeHostname(_ name: String) -> String {
    name.lowercased()
        .replacing(/[^a-z0-9-]/, with: "-")
        .replacing(/--+/, with: "-")
        .trimmingCharacters(in: CharacterSet(charactersIn: "-"))
}
"Curtis's iPad Pro" becomes curtis-s-ipad-pro. This keeps node names readable in the ShowShark Remote dashboard without requiring the user to configure anything.
The Actor-Wrapped Node
Thread safety for the underlying TailscaleKit node is handled by an inner actor, NodeRef. The TailscaleNodeProtocol methods are nonisolated (they do not require the calling actor's isolation), but they access the node through await nodeRef.getNode(). This pattern lets the protocol be called from any isolation domain while keeping the mutable node reference protected:
 @MainActor                nonisolated               actor
TailnetClient           LiveTailscaleNode          NodeRef
      │                         │                      │
 joinTailnet()                  │                      │
      │                         │                      │
      ├── Task.detached ───────►│                      │
      │                    joinTailnet()               │
      │                         │                      │
      │                         ├── TailscaleKit.up()  │
      │                         │   (blocking Go call) │
      │                         │                      │
      │                         ├── await nodeRef ────►│
      │                         │   .setNode(tsNode)   setNode()
      │                         │                      │
      │◄──── .value ────────────┤                      │
      │                         │                      │
 isConnected = true             │                      │
The Full Picture
Putting it all together, here is what happens when a user on an iPhone opens ShowShark while away from home and taps on a server they previously connected to locally:
1. Client checks: server has no local hostname reachable
   → falls back to remote connection
2. Client reads stored credentials from SavedServer:
   coordinationURL, tailnetPreAuthKey
3. TailnetClient.ensureConnected()
   → node already has persisted state from last join
   → TailscaleKit reconnects to coordination server
   → WireGuard tunnel established to server's node
4. TailnetClient.getLoopbackProxy()
   → returns (127.0.0.1, port, credential)
5. NWConnection configured with SOCKS5 proxy
   → connects to server's tailnet IP
   → proxy routes through WireGuard tunnel
6. TailnetBridge (server) accepts connection
   → bridges to localhost WebSocket server
7. WebSocket handshake completes
   → normal ShowShark session begins
   → video starts streaming
The user sees the same loading screen they see on a local connection, followed by their media library. The tailnet negotiation, SOCKS5 proxying, and bridge forwarding are invisible. If the peer-to-peer WireGuard connection succeeds (which it does in most network configurations), latency is comparable to a direct connection; there is no relay server in the path.
If direct connectivity fails (both devices behind symmetric NATs, for example), Tailscale falls back to DERP relays. This adds latency, and for video streaming the difference is noticeable. But it is a graceful degradation rather than a failure; the user still gets their content, just with slightly higher buffering times. The adaptive bitrate controller (covered in a previous post) adjusts the encoding bitrate downward to match the available throughput, keeping playback smooth.
Security Model
A few notes on the security properties of this architecture.
All traffic between client and server is encrypted with WireGuard (Curve25519 key exchange, ChaCha20-Poly1305). The coordination server never sees the content; it only facilitates key exchange and peer discovery. Pre-auth keys are scoped to a single Headscale namespace (one per Apple account), so devices belonging to different users cannot join each other's tailnets. The Workers API deduplicates keys per device, preventing key proliferation from repeated logins.
The coordination server itself is authenticated via the Noise IK protocol, which provides mutual authentication between the tailscale nodes and the coordination server. The server's identity is verified by its static public key, which is established during the initial provisioning step.
Device revocation is handled through the Workers API: expiring the pre-auth key and deleting the Headscale node. Once revoked, the device's WireGuard keys are no longer accepted by the coordination server, and it cannot rejoin the tailnet without a new pre-auth key.
Wrapping Up the Series
Over these three posts, we have covered the full stack of ShowShark's remote connectivity:
- Part 1: The architecture, the Cloudflare Tunnel incompatibility (POST-based WebSocket upgrades, non-standard Upgrade headers), and the enrollment flow
- Part 2: ShowShark Remote (Headscale process management), the server's TailscaleKit node, and the TailnetBridge (bidirectional socket forwarding from tailnet to localhost)
- Part 3: Client-side credential storage, SOCKS5 proxy routing through NWConnection, platform abstraction, and the connection strategy
The result is a system where remote access is a property of the app itself, not of the network it runs on. No VPN apps to install, no ports to forward, no configuration screens to navigate. The server joins a private network on startup; the client joins when it first logs in; from then on, they can always find each other.
The Cloudflare Tunnel detour was a useful lesson in understanding Tailscale's protocol design. The TS2021 protocol's POST-based HTTP upgrade is a genuine optimization (saving one RTT on the Noise handshake), but it means the coordination server cannot hide behind any proxy that enforces strict RFC 6455 compliance. Knowing why that constraint exists made the Nginx alternative feel less like a compromise and more like the right tool for the job.