Service Reliability

How Reliable is Twingate’s Infrastructure and Service?

Service reliability is a core aspect of information security. This document describes how Twingate ensures service reliability in terms of availability, performance, and scalability. Twingate provides superior reliability compared to traditional solutions because of how we have architected and deployed our infrastructure and software.

Availability

Service availability is crucial when our service is required to access mission-critical network resources. We ensure a very high degree of service availability by:

  • Using a world-class infrastructure provider - we use Google Cloud Platform (GCP) to host our core components. The technology that powers GCP is the same technology Google uses to run its own applications, which serve billions of people. Read more about GCP.

  • Implementing a fault-tolerant, redundant infrastructure - our service is provided from multiple data centers that mirror each other’s capabilities. If an availability issue arises in one data center, the others automatically pick up the load.

  • Using multiple geographically separated data centers - this mitigates the risk of environmental and other location-specific disasters.

  • Providing transparency into service status - customers can monitor service status at status.twingate.com (a simple status-polling sketch follows this list).

  • Providing resilience against DDoS attacks - we implement mitigations to reduce the risk and impact of distributed denial-of-service (DDoS) attacks.

  • 24/7 monitoring - we use a variety of automated tools to monitor our services around the clock and alert us to any service availability issues.
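
As an illustration of the status transparency point above, the sketch below polls a status page and reports changes. It assumes the page exposes a JSON summary endpoint at /api/v2/status.json (a common Statuspage convention); the exact endpoint and response shape for status.twingate.com are assumptions, not confirmed details.

```python
import json
import time
import urllib.request

STATUS_URL = "https://status.twingate.com/api/v2/status.json"  # assumed endpoint

def fetch_status() -> str:
    """Return the overall status indicator, e.g. 'none' meaning operational."""
    with urllib.request.urlopen(STATUS_URL, timeout=10) as resp:
        payload = json.load(resp)
    return payload["status"]["indicator"]

if __name__ == "__main__":
    last = None
    while True:
        try:
            current = fetch_status()
        except OSError as exc:  # network errors, HTTP errors, timeouts
            current = f"unreachable ({exc})"
        if current != last:
            print(f"status changed: {last} -> {current}")
            last = current
        time.sleep(60)  # poll once per minute
```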

Performance & Scalability

Twingate aims to make life easier for administrators and users, which means that Twingate’s performance should not hinder productivity. Related to this is scalability: as usage of Twingate increases, performance should scale with it rather than deteriorate.

We ensure that Twingate performs well, and continues to do so even as usage by individual customers or our whole customer base grows, by:

  • Removing the problem of backhauling - instead of routing all traffic through a central gateway that may be geographically distant from both the origin and destination endpoints, traffic routed via Twingate takes a more direct route, resulting in lower latency for users and reduced bandwidth use for organizations. Twingate clients automatically and intelligently connect to the Twingate controllers and relays that offer the best performance, based on where the user is physically located at the time and which resource they need to reach (a simplified relay-selection sketch appears after this list).

  • Supporting split tunneling - any user traffic that an organization chooses not to route through Twingate bypasses Twingate altogether and is handled by the user’s device independently. This avoids sending traffic through unnecessary extra hops (see the split-tunnel routing sketch after this list).

  • Load balancing - Twingate handles load balancing at several levels. Twingate’s controllers and relays are deployed across a variety of locations and geographic regions. As part of infrastructure planning, we distribute them strategically to reduce latency and provide load balancing in regions where higher traffic is expected; for example, in higher-traffic regions we may add additional controllers and relays and balance load among them. Latency is further reduced by hosting controllers and relays with the same IaaS providers that customers use (e.g., within AWS, Azure, and GCP). On the customer side, customers can install multiple connectors within the same network, and Twingate automatically balances access requests into that network across those connectors (a connector-balancing sketch appears after this list).

  • Handling scaling for customers - traditional network access security models require organizations to deploy and maintain their own security infrastructure, such as VPN gateways. Scaling that infrastructure adds disproportionate overhead and ties up resources that could be deployed toward other initiatives. Twingate frees IT departments from needing to worry about scaling.

  • Distributing authorization processing - instead of being centralized in one location that acts as a bottleneck, authorization processing workloads are distributed - for example, to the Twingate client - which helps overall performance (see the client-side authorization sketch after this list).
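
To illustrate the direct-routing point above, here is a minimal sketch of latency-based endpoint selection: a client probes candidate relays by TCP connect time and picks the fastest. The relay hostnames are hypothetical and the probe method is an assumption; Twingate’s actual selection logic is internal to its client.

```python
import socket
import time

CANDIDATE_RELAYS = [  # hypothetical relay endpoints, not real Twingate hosts
    ("relay-us-east.example.com", 443),
    ("relay-eu-west.example.com", 443),
    ("relay-ap-south.example.com", 443),
]

def connect_time(host: str, port: int, timeout: float = 2.0) -> float:
    """Return the seconds taken to open a TCP connection, or inf on failure."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")

def pick_fastest(relays):
    """Probe every candidate and return the one with the lowest connect time."""
    return min(relays, key=lambda relay: connect_time(*relay))

if __name__ == "__main__":
    host, port = pick_fastest(CANDIDATE_RELAYS)
    print(f"selected relay: {host}:{port}")
```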
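
The split-tunneling behavior can be sketched as a per-destination routing decision: only destinations inside the protected address ranges go through the tunnel, and everything else is handled directly by the device. The address ranges below are illustrative, not a Twingate default.

```python
import ipaddress

PROTECTED_NETWORKS = [  # illustrative ranges an admin might protect
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("192.168.100.0/24"),
]

def route_for(destination: str) -> str:
    """Return 'tunnel' for protected destinations and 'direct' for the rest."""
    addr = ipaddress.ip_address(destination)
    return "tunnel" if any(addr in net for net in PROTECTED_NETWORKS) else "direct"

assert route_for("10.1.2.3") == "tunnel"       # internal resource, via Twingate
assert route_for("142.250.80.46") == "direct"  # public internet, bypasses Twingate
```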
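
Balancing access requests across multiple connectors in one network might look like the sketch below, which uses a least-outstanding-requests policy. The connector names and the policy choice are illustrative; this is not Twingate’s published algorithm.

```python
class ConnectorBalancer:
    """Track in-flight requests per connector and always pick the least busy."""

    def __init__(self, connectors):
        self.active = {connector: 0 for connector in connectors}

    def acquire(self) -> str:
        """Pick the connector with the fewest in-flight requests."""
        connector = min(self.active, key=self.active.get)
        self.active[connector] += 1
        return connector

    def release(self, connector: str) -> None:
        """Mark a request as finished so the connector becomes less busy."""
        self.active[connector] -= 1

balancer = ConnectorBalancer(["connector-a", "connector-b"])
first = balancer.acquire()   # both idle; first in insertion order wins
second = balancer.acquire()  # the other connector, since one is now busy
assert first != second
```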
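
Finally, distributing authorization to the client can be pictured as evaluating a locally cached policy instead of calling a central service on every connection attempt. The policy format and matching rules below are illustrative assumptions, not Twingate’s actual design.

```python
import fnmatch

# Policy delivered to the client ahead of time (e.g., at sign-in) and
# evaluated locally, so no central round trip is needed per request.
CACHED_POLICY = {
    "allowed_resources": ["*.internal.example.com"],  # illustrative pattern
}

def authorized(resource: str) -> bool:
    """Evaluate the cached policy locally for a requested resource name."""
    return any(fnmatch.fnmatch(resource, pattern)
               for pattern in CACHED_POLICY["allowed_resources"])

assert authorized("git.internal.example.com")
assert not authorized("public.example.org")
```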
