Deploying Twingate to GKE
Keith Hubner
•
Jun 12, 2022
Please note: this guide creates resources that will incur additional costs in your GCP project.
This guide assumes you have already deployed a private GKE cluster. For more information on setting this up please visit the official Google Cloud Documentation.
Setting up the Twingate subnet
The following command will create the network for the Twingate connector container. For the purpose of this guide I have deployed the connector in the same VPC network as the GKE cluster but within a management subnet.
Note: you will need to replace the project, range, network, and region values with those relevant to your environment. For the purpose of this guide I have named the new subnet "management".
gcloud compute networks subnets create management --project=twingate-347715 --range=10.0.0.0/24 --network=gke-private-demo --region=europe-north1 --enable-private-ip-google-access
After a few moments the management subnet should have been created:
NAME        REGION         NETWORK           RANGE        STACK_TYPE  IPV6_ACCESS_TYPE  IPV6_CIDR_RANGE  EXTERNAL_IPV6_CIDR_RANGE
management  europe-north1  gke-private-demo  10.0.0.0/24  IPV4_ONLY
To create the container and allow it to communicate with Twingate services, your management subnet will need access to the internet. How you do this may vary, but for the purpose of this guide I have deployed a Cloud NAT gateway for the management subnet to use.
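As an example, a Cloud NAT gateway can be set up by creating a Cloud Router in the same region and attaching a NAT configuration scoped to the management subnet. The router and NAT names below (management-router, management-nat) are illustrative; the project, network, and region match the earlier subnet command.

```shell
# Create a Cloud Router in the same VPC and region as the management subnet
gcloud compute routers create management-router \
  --project=twingate-347715 \
  --network=gke-private-demo \
  --region=europe-north1

# Attach a Cloud NAT gateway so instances without external IPs can reach the internet
gcloud compute routers nats create management-nat \
  --project=twingate-347715 \
  --router=management-router \
  --region=europe-north1 \
  --nat-custom-subnet-ip-ranges=management \
  --auto-allocate-nat-external-ips
```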
Once the networking is in place, we can deploy the connector into this new management subnet.
Deploying the connector
Back on the Twingate admin portal, within the new network, click "deploy connector" on an existing connector:
You can then click the generate tokens button, and copy the two values given:
Make a note of these values as we will need them to create the container instance.
Creating the container instance
We will be deploying our container using Google Cloud Compute. This can be done either by following the Twingate guide or adapting the gcloud command below.
Remember to replace the values below with your own, most notably the TENANT_URL, ACCESS_TOKEN, and REFRESH_TOKEN. It is recommended to name the container instance the same as the connector name in Twingate.
gcloud compute instances create-with-container black-wallaby --zone=europe-north1-a --machine-type=e2-small --network-interface=subnet=management,no-address --image=projects/cos-cloud/global/images/cos-stable-97-16919-29-16 --boot-disk-size=10GB --boot-disk-type=pd-balanced --boot-disk-device-name=tactful-lobster --container-image=twingate/connector:1 --container-restart-policy=always --container-env=TENANT_URL=https://mynet.twingate.com,ACCESS_TOKEN=123456ABCB,REFRESH_TOKEN=1239876YGTH
Once the container is running you should see your connection status updated:
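If you prefer to check from the command line, you can confirm the instance hosting the connector is up before looking at the admin portal. The instance name and zone below match the example create-with-container command above.

```shell
# Show the instance status; it should report RUNNING once the container host is up
gcloud compute instances describe black-wallaby \
  --zone=europe-north1-a \
  --format="value(status)"
```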
Setting up the Twingate resource
Now that the connector is established and communicating with Twingate, we can use it to connect to the Kubernetes private endpoint.
You can view the IP address of the private endpoint via the cluster information page in the GCP web console:
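Alternatively, the private endpoint IP can be retrieved from the command line. Replace CLUSTER_NAME and the zone with the values for your own cluster.

```shell
# Print the private control-plane endpoint IP of the cluster
gcloud container clusters describe CLUSTER_NAME \
  --zone=europe-north1-a \
  --format="value(privateClusterConfig.privateEndpoint)"
```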
Once we have this endpoint we can add this as a resource to Twingate:
Once this has been added you should see the resource turn green, showing the connector can communicate with it:
Testing your connection
First let’s check we can’t get to the cluster at the moment. Ensure your Twingate client is closed and any other VPN or private route you may be using is disabled. If you don’t have the cluster config already, you can run the following command to add the cluster config to your local KUBECONFIG file:
Again these values are based on the ones used in this guide and may vary depending on your setup.
gcloud container clusters get-credentials CLUSTER_NAME
Now test the connection:
kubectl get nodes
Unable to connect to the server: dial tcp 172.16.0.2:443: connect: network is unreachable
Now open your Twingate client. You may see that additional authentication is required; more information on this can be found in the Twingate documentation.
Follow through the authentication steps then run the same command to test the connection:
kubectl get nodes
You should now get a response from the Kubernetes API:
NAME                                    STATUS  ROLES   AGE   VERSION
gke-gketest-default-pool-2bd94f93-7hl1  Ready   <none>  118m  v1.21.10-gke.2000
If you have trouble connecting, make sure the management subnet that contains your Twingate connector is allowed access to the cluster control plane:
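If your cluster uses master authorized networks, this can be done by adding the management subnet's CIDR range to the allowed list. The range below matches the subnet created earlier in this guide; replace CLUSTER_NAME and the zone with your own values.

```shell
# Allow the management subnet (10.0.0.0/24) to reach the GKE control plane
gcloud container clusters update CLUSTER_NAME \
  --zone=europe-north1-a \
  --enable-master-authorized-networks \
  --master-authorized-networks=10.0.0.0/24
```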
You now have secure access to your private Kubernetes API.