Connector Failures
How to troubleshoot common Connector issues including offline status, Resource connectivity, and performance
Connectors are the critical gateways to your private Resources. If a Connector is offline, misconfigured, or unhealthy, access to all Resources within its designated Remote Network will fail unless another working Connector is available on that Remote Network.
This page covers three common failure scenarios:
- The Connector is offline or flapping between online and offline status.
- The Connector is online but cannot reach one or more Resources.
- The Connector is online but performance is poor or connections are unreliable.
Connector is Offline or Flapping
A Connector that appears offline or repeatedly cycles between online and offline states is typically unable to communicate with the Twingate Controller or Relay infrastructure. All Resources on the affected Remote Network will be impacted.
Common Symptoms
- An entire set of Resources in a specific Remote Network (e.g., “AWS US-East-1 VPC”) becomes unreachable for all users or disappears from the Client.
- The Admin Console shows a Connector’s status as Offline or flapping between Online and Offline.
- Connector logs contain repeating errors such as `Invalid token`, `failed to get an access token`, or `Gone, code 410`.
- Connector logs show `Failed to preconnect a relay listener` with a `Connection timed out` error.
Check the Admin Console
The Connector details page is the best starting point for diagnosing health issues. Navigate to the Remote Network, click on the suspect Connector, and review its details.
- Status - If the Connector is Offline, the host is likely down or has lost internet connectivity.
- Time Offset - Twingate’s authentication protocol is sensitive to clock skew. If the Time Offset value is greater than 5 seconds in either direction, the Connector’s system clock is out of sync with global time. Authentication tokens will be rejected by the Twingate Controller, resulting in `Invalid token` errors and a flapping status.
Clock drift is a common cause of flapping Connectors
If the Time Offset value is out of range, confirm the Connector’s host machine is running a time synchronization service. chronyd is generally recommended over the older ntpd. This is a host-level configuration, not a Twingate configuration.
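To confirm whether a time synchronization service is running, you can check the common service names on the host. This is a minimal sketch; the service names below are typical defaults, and your distribution may use a different one.

```shell
# Check whether a time-sync service is active on the Connector host.
# chronyd, systemd-timesyncd, and ntpd are the common defaults.
for svc in chronyd systemd-timesyncd ntpd; do
  state=$(systemctl is-active "$svc" 2>/dev/null || true)
  echo "$svc: ${state:-not found}"
done
```

If chronyd is active, `chronyc tracking` will additionally report the measured offset from the time source.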
Check Connector Host Configuration
- Confirm the Connector was installed with the correct tokens. If tokens were regenerated in the Admin Console, reconfigure the Connector with the new tokens.
- Verify that only one instance of the Connector is running with a given set of tokens. Running multiple Connectors with the same tokens will cause conflicts and connection failures.
- Make sure the Connector software is up to date. Significantly outdated versions may be incompatible or blocked.
- Verify that the host machine meets the hardware and OS requirements for running a Connector.
Check Connector Logs
The logs provide the ground truth for what a Connector is experiencing. Enable detailed logging by setting TWINGATE_LOG_LEVEL=7.
For systemd deployments, view logs with:
`journalctl -u twingate-connector -f`

For Docker deployments, use:

`docker logs <CONTAINER_NAME> -f`

Replace `<CONTAINER_NAME>` with the name of the Connector container.
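For systemd deployments, one way to apply the `TWINGATE_LOG_LEVEL=7` setting is a drop-in override. This is a sketch assuming the unit is named `twingate-connector` and the drop-in filename is arbitrary; adjust both for your installation.

```shell
# Set verbose logging via a systemd drop-in, then restart the service.
sudo mkdir -p /etc/systemd/system/twingate-connector.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/twingate-connector.service.d/logging.conf
[Service]
Environment=TWINGATE_LOG_LEVEL=7
EOF
sudo systemctl daemon-reload
sudo systemctl restart twingate-connector
```

For Docker deployments, the equivalent is passing `-e TWINGATE_LOG_LEVEL=7` when starting the container.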
Look for these error patterns:
| Error | Likely Cause |
|---|---|
| `Invalid token` | Clock drift on the Connector host. Check the Time Offset in the Admin Console. |
| `too many open files` | The host system’s file descriptor limit (ulimit) is too low and needs to be increased. |
| `Failed to preconnect a relay listener` | An outbound connectivity problem from the Connector host. It may be unable to reach Twingate’s Relay infrastructure due to a firewall rule or a lack of a public IPv4 address. |
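For the `too many open files` case, you can inspect the current file descriptor limit directly. The 65535 threshold below is a rule-of-thumb assumption, not a Twingate requirement.

```shell
# Check the open-file limit; the Connector inherits a similar limit
# from its service manager (systemd or Docker).
limit=$(ulimit -n)
echo "open file limit: $limit"
if [ "$limit" != "unlimited" ] && [ "$limit" -lt 65535 ]; then
  echo "consider raising LimitNOFILE= in the systemd unit or --ulimit nofile for Docker"
fi
```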
Verify Outbound Connectivity
The host machine running the Connector must have outbound internet access to the Twingate infrastructure. If outbound access is restricted, confirm the following are permitted:
- Outbound TCP port 443 (communication with the Twingate Controller and Relay infrastructure)
- Outbound TCP ports 30000-31000 (connections with Twingate Relay infrastructure when peer-to-peer is unavailable)
- Outbound UDP, used for QUIC (HTTP/3) connections
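You can spot-check this egress from the Connector host. The hostnames below are placeholders; substitute the Controller and Relay addresses for your tenant, which appear in the Connector logs and your network's Admin Console URL.

```shell
# Test outbound TCP 443 to the Controller (placeholder hostname):
nc -zvw5 <YOUR_NETWORK>.twingate.com 443

# Test a Relay port in the 30000-31000 fallback range (placeholder host):
nc -zvw5 <RELAY_HOST> 30000
```

A `Connection timed out` result here matches the `Failed to preconnect a relay listener` log pattern and points to a firewall or egress restriction.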
Connector Cannot Reach Resources
A Connector may appear online and healthy in the Admin Console but still fail to forward traffic to one or more Resources. Users will see connection errors in the Client, and Connector logs will contain entries such as `failed to connect` or `could not be reached` for the affected Resource addresses.
Common Symptoms
- The Connector shows as Online in the Admin Console, but users cannot access specific Resources.
- The Client displays a connection error when attempting to reach a Resource.
- Connector logs contain `failed to connect` or `could not be reached` errors referencing the Resource’s IP address or FQDN.
- Some Resources on the same Remote Network work while others do not.
Verify Network Reachability from the Connector Host
The Connector can only forward traffic to Resources that are routable from its host machine. SSH into the Connector host (or use docker exec for container deployments) and test connectivity to the Resource’s address and port directly.
Test TCP connectivity to a Resource:
`nc -zv <RESOURCE_ADDRESS> <PORT>`

Test DNS resolution of a Resource FQDN:

`nslookup <RESOURCE_FQDN>`

Replace `<RESOURCE_ADDRESS>`, `<PORT>`, and `<RESOURCE_FQDN>` with the values from the Resource configuration in the Admin Console.
If these tests fail from the Connector host itself, the issue is in the network path between the Connector and the Resource, not in Twingate.
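When several Resources are affected, the same test can be run in a loop. The address:port pairs below are illustrative examples; substitute the Resources defined on this Remote Network.

```shell
# Batch-check Resource reachability from the Connector host.
# db.internal:5432 and app.internal:443 are example targets.
for target in db.internal:5432 app.internal:443; do
  host=${target%%:*}
  port=${target##*:}
  if nc -zvw5 "$host" "$port" >/dev/null 2>&1; then
    echo "$target reachable"
  else
    echo "$target UNREACHABLE"
  fi
done
```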
Check Network Segmentation
A common misconfiguration is deploying a Connector in a VPC, VNet, or subnet that has no route to the subnet where the Resource resides. Verify the following:
- The Connector host and the Resource are in the same VPC or VNet, or that peering, transit gateways, or other routing is configured between them.
- Route tables on the Connector’s subnet include a route to the Resource’s subnet.
- For on-premises deployments, confirm the Connector host can reach the Resource’s IP address range from its local network.
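On AWS, for example, the route table associated with the Connector's subnet can be inspected with the AWS CLI. The subnet ID below is a placeholder; this assumes the CLI is configured with read access to EC2.

```shell
# List routes for the Connector's subnet (placeholder subnet ID):
aws ec2 describe-route-tables \
  --filters Name=association.subnet-id,Values=subnet-0123456789abcdef0 \
  --query 'RouteTables[].Routes'
```

Confirm a route covering the Resource's CIDR appears in the output.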
Resource addresses are resolved from the Connector
The configured address of a Resource is resolved from the Connector’s perspective. If the Resource uses an FQDN, the Connector’s host must be able to resolve that name. If it uses an IP address, that IP address must be routable from the Connector’s network.
Check Firewalls and Security Groups
Even when the Connector and Resource are on the same network, a firewall or cloud security group may block traffic between them. Verify the rules for your environment:
- AWS - Confirm the security group attached to the Resource’s instance allows inbound traffic from the Connector’s private IP address (or its security group) on the required ports.
- Azure - Confirm the network security group (NSG) on the Resource’s subnet or NIC permits traffic from the Connector.
- GCP - Confirm VPC firewall rules allow ingress from the Connector’s IP address on the required ports.
- On-premises - Confirm any firewalls between the Connector and the Resource permit the necessary traffic on the required ports.
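As an AWS example, the inbound rules on the Resource's security group can be listed with the AWS CLI. The group ID is a placeholder; check that the Connector's private IP or security group appears as an allowed source.

```shell
# Show inbound rules for the Resource's security group (placeholder ID):
aws ec2 describe-security-groups \
  --group-ids sg-0123456789abcdef0 \
  --query 'SecurityGroups[0].IpPermissions'
```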
Check Application-Level IP Filtering
Some services restrict connections to a specific set of allowed IP addresses. If the Resource runs a service with IP-based access controls, the Connector’s private IP address must be included in the allowlist. Common services to check:
- SSH - Review `AllowUsers`, `AllowGroups`, or host firewall rules (e.g. `iptables`, `ufw`) on the target machine.
- PostgreSQL - Check `pg_hba.conf` for host-based authentication rules that restrict connections by source IP address.
- RDP - Verify Windows Firewall rules and any Network Level Authentication (NLA) restrictions.
- Web applications - Check application-level IP filtering, reverse proxy ACLs, or WAF rules.
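For the PostgreSQL case, a `pg_hba.conf` entry permitting the Connector's subnet might look like the following. The subnet, database, user, and authentication method are illustrative assumptions; adjust them for your environment.

```
# pg_hba.conf - allow connections from the Connector's subnet
# (10.0.1.0/24 is an example range)
# TYPE  DATABASE  USER  ADDRESS        METHOD
host    all       all   10.0.1.0/24    scram-sha-256
```

Reload PostgreSQL after editing for the change to take effect.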
To find the Connector’s private IP address, run the following on the Connector host:
`hostname -I`

Verify the Resource Configuration
Confirm that the address configured for the Resource in the Admin Console matches what is reachable from the Connector. If the Resource was defined using an FQDN, verify that the Connector’s host can resolve that name. If it was defined using an IP address, verify the address is correct and routable.
Also confirm that any port restrictions configured on the Resource match the ports the target service is listening on. By default, Twingate forwards traffic on all TCP and UDP ports, but if port restrictions have been applied to the Resource, only the specified ports will be forwarded.
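On the Resource host itself, you can confirm the target service is actually listening on the expected port. Port 5432 below is an example; substitute the port configured on the Resource.

```shell
# List listening TCP sockets and check for the expected port (5432 is
# an example value):
ss -tln
ss -tln | grep -q ':5432 ' && echo "listening on 5432" || echo "nothing on 5432"
```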
Connector Is Online but Performance Is Poor
If the Connector is online and healthy but connections are slow or unreliable, the issue is likely in the network path between the Client and the Connector rather than in the Connector itself.
Check peer-to-peer connectivity
Twingate routes traffic directly between the Client and the Connector whenever possible using peer-to-peer connections. If peer-to-peer cannot be established, traffic is relayed through Twingate’s Relay infrastructure, which may increase latency. Verify that both the Client’s network and the Connector’s network support the UDP connectivity required for peer-to-peer.
Common causes of poor performance include:
- Peer-to-peer connections are not being established, forcing traffic through a Relay. See the peer-to-peer troubleshooting guide for diagnostic steps.
- The Connector is geographically far from the Resources it serves. Deploy Connectors as close as possible to the target Resources, ideally in the same region, VPC, or subnet.
- The Connector host is resource-constrained (CPU, memory, or network bandwidth). Review the hardware recommendations and consider scaling up the host or deploying additional Connectors on the same Remote Network for load balancing.
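To check for host resource constraints, a quick snapshot of load, memory, and CPU count on the Connector host is often enough to rule this in or out (Linux commands shown).

```shell
# Snapshot of Connector host load, free memory, and CPU count:
uptime        # load averages relative to CPU count
free -m       # memory and swap usage in MB
nproc         # number of available CPUs
```

Sustained load averages well above the CPU count, or heavy swap usage, suggest scaling up the host or adding a second Connector to the Remote Network.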
ICMP (ping) troubleshooting
If ping to Resources behind a Connector fails while other connections (SSH, HTTP, etc.) succeed, verify that the Connector’s host operating system allows outbound ICMP traffic. This is a host-level setting and is not controlled by Twingate.
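A quick way to confirm the failure is ICMP-specific is to compare a ping with a TCP probe to the same Resource from the Client. `<RESOURCE_ADDRESS>` is a placeholder, and port 22 is an example.

```shell
# If the TCP probe succeeds while ping fails, ICMP is being filtered:
ping -c 3 <RESOURCE_ADDRESS>
nc -zvw5 <RESOURCE_ADDRESS> 22
```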