I’ve recently worked with a customer who had an Azure Virtual Network Gateway in place to connect their remote workers (using native P2S functionality, with client address pool 192.168.17.0/24) to both Azure and on-premises networks. Pretty usual setup, to be fair. Let’s say:
- Virtual Network Gateway
  - Name: vpngwA
  - Virtual Network: vnetA (172.16.0.0/16)
- Local Network Gateway
  - Name: lngwA
  - Remote network: 192.168.0.0/24 (connected via a S2S connection named “s2sA”)
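For reference, the scenario above could be provisioned with the Azure CLI roughly like this. This is a sketch, not the customer’s actual deployment: the resource group, subnet prefix, SKU, public IP names, and the `<...>` placeholders are all assumptions of mine.

```shell
# Assumed resource group and gateway subnet; adjust to your environment.
az network vnet create --resource-group rgA --name vnetA \
  --address-prefixes 172.16.0.0/16 \
  --subnet-name GatewaySubnet --subnet-prefixes 172.16.255.0/27

az network public-ip create --resource-group rgA --name vpngwA-pip

# P2S-enabled gateway: client pool 192.168.17.0/24, SSTP protocol (as in the post).
az network vnet-gateway create --resource-group rgA --name vpngwA \
  --vnet vnetA --public-ip-address vpngwA-pip \
  --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1 \
  --address-prefixes 192.168.17.0/24 --client-protocol SSTP

# Local network gateway describing the on-premises side.
az network local-gateway create --resource-group rgA --name lngwA \
  --gateway-ip-address <on-prem-public-ip> \
  --local-address-prefixes 192.168.0.0/24

# The existing S2S connection "s2sA".
az network vpn-connection create --resource-group rgA --name s2sA \
  --vnet-gateway1 vpngwA --local-gateway2 lngwA --shared-key <shared-key>
```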
This worked well for them until they exhausted the 128-connection limit of their protocol type (SSTP, by the way), as the COVID-19 pandemic made remote work almost universal.
With this background in mind, I started looking for workarounds that would avoid downtime (changing the protocol of a P2S VPN forces you to redeploy the VPN profile to all your users, again). My first attempt was simply to create a new Virtual Network (let’s call it vnetB, with CIDR 10.150.21.0/24) and peer the two Azure VNets.
The thing is, peering doesn’t extend the routing configuration to Point-to-Site clients, as I was expecting it to. Even if you force it (by using routes.txt or modifying the OpenVPN client configuration file), it will not work. The same applies to a VNet-to-VNet connection.
So the solution turned out to be: instead of peering the Azure VNets, create a S2S VPN between them, just like you usually do with your on-premises infrastructure (see the scenario above). This way, your new VNet’s P2S clients (under network 10.101.50.0/24, for example) can reach both vnetA and its on-premises networks through your existing S2S connection.
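A rough Azure CLI sketch of that solution follows. It assumes vnetB already has its own gateway (here called vpngwB, in a resource group rgB) with P2S pool 10.101.50.0/24; the names, prefixes, and `<...>` placeholders are mine, and each side’s local network gateway must list every remote prefix the P2S clients need to reach.

```shell
# On the vnetB side: describe vnetA's gateway as if it were an on-prem device.
# Prefixes: vnetA itself, the on-prem network, and vnetA's P2S client pool.
az network local-gateway create --resource-group rgB --name lngw-to-vnetA \
  --gateway-ip-address <vpngwA-public-ip> \
  --local-address-prefixes 172.16.0.0/16 192.168.0.0/24 192.168.17.0/24

az network vpn-connection create --resource-group rgB --name s2s-b-to-a \
  --vnet-gateway1 vpngwB --local-gateway2 lngw-to-vnetA --shared-key <shared-key>

# Mirror on the vnetA side: vnetB plus its P2S client pool.
az network local-gateway create --resource-group rgA --name lngw-to-vnetB \
  --gateway-ip-address <vpngwB-public-ip> \
  --local-address-prefixes 10.150.21.0/24 10.101.50.0/24

az network vpn-connection create --resource-group rgA --name s2s-a-to-b \
  --vnet-gateway1 vpngwA --local-gateway2 lngw-to-vnetB --shared-key <shared-key>
```

The shared key must match on both connections; because this is a plain S2S connection rather than the VNet-to-VNet connection type, the advertised prefixes come from the local network gateways, which is why the P2S routes propagate here but not with peering.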
Hope this post saves you the time I wasted!