On my latest project, the client had deployed several ISEs (Integration Service Environments) but was unable to successfully implement NSGs or Azure Firewall to secure traffic. As a result, these ISEs currently use neither.
We’ll be reviewing an internal ISE. Here, the primary concern is outbound traffic, though inbound traffic is of course still important. Since an internal ISE doesn’t have an outside listener, the external surface area is minimal. While the documentation leads one to believe that a fairly complex NSG with a mix of service tags and static addresses could lock down both inbound and outbound traffic, successfully implementing it is tricky.
For outbound traffic, the challenge I encountered had to do with certificate verification. While the checks are green in the screenshot, that was prior to enabling an NSG.

We want them all green. Anything that isn’t green in the Network Health blade is not a good thing. An unhealthy ISE (or worse, an unhealthy backend ASE) can lead to a total meltdown. Sadly, the specifics behind that are not published, which means you need to be extra vigilant here and ensure that your health checks are passing at all times.

So, this doesn’t seem overly difficult… right? Just punch holes in your NSG for the required traffic. Unfortunately, the docs never come right out and say that service tags don’t exist for things such as the SSL certificate verification checks, but let’s capture what we can from the docs and take a stab at what our NSG rules should look like. Again, we’ve chosen to secure an ISE that is not listening on an Internet-accessible IP address.
The documentation at ‘Enable access for ISE‘ is a good place to begin piecing together what network traffic you need to allow in order to secure your internally accessible ISE; specifically, the Network ports used by your ISE section. There are two tables of things you need to add, one for inbound and another for outbound. Things are greatly simplified by using service tags. Awesome! Wait… where are the requirements for SSL verification? Hmm, maybe they’re part of another service tag… AppServiceManagement, perhaps…
Immediately following the tables that identify the required traffic flows, it is noted that you also need to address ASE traffic dependencies, and you’re then pointed to the ‘Introduction to the App Service Environments’ page. What you need to know about ASE traffic dependencies can be found at Networking Considerations for an App Service Environment and (possibly, depending on your use case) Configure ASE for Forced Tunneling. Following that are two bullets, both of which describe what to do to allow the required ASE traffic if you’re running Azure Firewall or a firewall other than Azure Firewall, but nothing specific to the use of NSGs.
Okay… looks like we’re going to try the NSG configured as outlined in the documentation. Per the Enable access for ISE documentation:
When you set up NSG security rules, you need to use both the TCP and UDP protocols, or you can select Any instead so you don’t have to create separate rules for each protocol. NSG security rules describe the ports that you must open for the IP addresses that need access to those ports. Make sure that any firewalls, routers, or other items that exist between these endpoints also keep those ports accessible to those IP addresses.
Official Microsoft documentation
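To make that concrete, here is a minimal sketch of creating one of the outbound allowances with the Python SDK, assuming you’re scripting the NSG rather than clicking through the portal. The resource names, subnet prefix, priority, and the choice of the AppServiceManagement tag on 443 are placeholders for illustration only; take the real tag and port combinations from the inbound and outbound tables in the docs.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Illustrative only: resource names, the ISE subnet CIDR, the priority, and
# the tag/port pairing are placeholders -- pull the real values from the
# "Enable access for ISE" tables.
credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, subscription_id="<subscription-id>")

network_client.security_rules.begin_create_or_update(
    resource_group_name="ise-rg",
    network_security_group_name="ise-nsg",
    security_rule_name="Allow-AppServiceManagement-Outbound",
    security_rule_parameters={
        "protocol": "*",                          # docs: TCP and UDP, or Any
        "source_address_prefix": "10.2.1.0/27",   # the ISE subnet (placeholder)
        "source_port_range": "*",
        "destination_address_prefix": "AppServiceManagement",  # service tag
        "destination_port_range": "443",
        "access": "Allow",
        "direction": "Outbound",
        "priority": 200,
    },
).result()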
Mostly, yes, these allowances will work; however, a couple of the SSL certificate verifications (part of the ISE health checks) will take a dive into the red zone. The ones I had issues with were crl.microsoft.com and www.microsoft.com. Dang!
So, why are those SSL health checks failing? Because there are no service tags that correctly identify this traffic. The one that would have a chance at covering it would be AppServiceManagement, but no, that doesn’t do it.
How do we know which traffic is failing? By enabling NSG flow logging.
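Flow logs are a Network Watcher feature; below is a hedged sketch of turning them on for the ISE’s NSG with the Python SDK. The watcher name, region, resource IDs, and storage account are all placeholders, and the portal works just as well.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# A hedged sketch of enabling NSG flow logs via Network Watcher. Every
# resource ID and name below is a placeholder for your environment.
credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, subscription_id="<subscription-id>")

network_client.flow_logs.begin_create_or_update(
    resource_group_name="NetworkWatcherRG",        # default watcher resource group
    network_watcher_name="NetworkWatcher_eastus",  # watcher for the ISE's region
    flow_log_name="ise-nsg-flowlog",
    parameters={
        "location": "eastus",
        "target_resource_id": "/subscriptions/<sub>/resourceGroups/ise-rg/providers/Microsoft.Network/networkSecurityGroups/ise-nsg",
        "storage_id": "/subscriptions/<sub>/resourceGroups/ise-rg/providers/Microsoft.Storage/storageAccounts/isediag",
        "enabled": True,
    },
).result()

Once the logs start landing in the storage account, denied outbound traffic from the ISE subnet shows up in records like this: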
{
  "rule": "UserRule_DenyAllOutbound",
  "flows": [
    {
      "mac": "0003FF56D399",
      "flowTuples": [
        "1612898847,10.2.1.5,52.226.74.110,23289,443,T,O,D",
        "1612898857,10.2.1.5,40.86.102.100,23292,443,T,O,D",
        "1612898866,10.2.1.5,23.55.221.138,23297,80,T,O,D",
        "1612898866,10.2.1.5,23.55.221.138,23298,443,T,O,D",
        "1612898866,10.2.1.5,23.39.37.199,23309,80,T,O,D",
        "1612898866,10.2.1.5,23.39.37.199,23310,443,T,O,D",
        "1612898879,10.2.1.5,40.86.102.100,23333,443,T,O,D",
        "1612898900,10.2.1.5,40.86.102.100,23336,443,T,O,D"
      ]
    }
  ]
}
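Each flowTuples entry is a comma-separated record: Unix timestamp, source IP, destination IP, source port, destination port, protocol (T = TCP, U = UDP), direction (I = inbound, O = outbound), and decision (A = allowed, D = denied). Here’s a quick sketch for summarizing the denied outbound destinations from an entry like the one above; the file path is a placeholder.

import json
from collections import Counter

def denied_outbound(rule_entry):
    """Count denied outbound destination IP/port pairs in one rule entry."""
    hits = Counter()
    for flow in rule_entry.get("flows", []):
        for tup in flow.get("flowTuples", []):
            # timestamp, src IP, dst IP, src port, dst port, protocol, direction, decision
            _, _, dst_ip, _, dst_port, _, direction, decision = tup.split(",")
            if direction == "O" and decision == "D":
                hits[(dst_ip, dst_port)] += 1
    return hits

with open("flowlog_rule_entry.json") as f:  # placeholder: the JSON shown above
    entry = json.load(f)

for (ip, port), count in denied_outbound(entry).most_common():
    print(f"{ip}:{port} denied {count} time(s)")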
The 23.55.221.138 and 23.39.37.199 addresses are both Akamai CDN endpoints tied to SSL certificate validation. Simply dig crl.microsoft.com to see some examples.
Ultimately, at the time of this writing, I’m not aware of a workaround if you want to restrict outbound traffic originating from an internal ISE using NSG rules. It just doesn’t seem feasible. Even attempting to script name resolution and populate an NSG with the resulting IPs failed because of the dynamic nature of the SSL verification host names (a sketch of that idea follows). Opening outbound to everything on 443 will work, though.
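For what it’s worth, the scripted approach looked roughly like the sketch below: resolve the failing hosts, collect the answers, and feed them into NSG rules. The host names come from the failing health checks; everything else is illustrative. The problem is that the CDN answers rotate, so whatever list you capture goes stale.

import socket

# The hosts behind the failing SSL verification checks; resolve them and
# collect the current answers. Any IPs pinned into an NSG from this output
# go stale as the CDN rotates addresses, which is why the approach failed.
HOSTS = ["crl.microsoft.com", "www.microsoft.com"]

def resolve(host):
    # getaddrinfo returns (family, type, proto, canonname, sockaddr) tuples;
    # sockaddr[0] is the IP address.
    return sorted({info[4][0] for info in socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)})

for host in HOSTS:
    print(host, "->", ", ".join(resolve(host)))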
With an Azure Firewall in the mix, we ran into similar issues with the ISE failing health checks. I was able to work around it by looking up AppServiceManagement.<Region> and LogicApps.<Region> in the Azure Datacenter IP ranges JSON dump and adding those prefixes to a UDR attached to the ISE subnets. I didn’t have to specifically whitelist any of the SSL verification domains. Maybe those are included in the underlying infrastructure firewall rules, but I wasn’t able to confirm that one way or the other. Another possibility is that the service tags available to the NSGs were not accurate at the time I tested.
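If you want to do the same lookup, here is a hedged sketch of pulling the regional prefixes out of the downloadable Azure IP Ranges and Service Tags JSON file. The file name and region are placeholders, and the output is simply the candidate route list for the UDR.

import json

REGION = "EastUS"  # placeholder; use the ISE's region
TAGS = {f"AppServiceManagement.{REGION}", f"LogicApps.{REGION}"}

# The "Azure IP Ranges and Service Tags" file downloaded from Microsoft;
# the actual file name varies by publication date.
with open("ServiceTags_Public.json") as f:
    service_tags = json.load(f)

for value in service_tags["values"]:
    if value["name"] in TAGS:
        print(value["name"])
        for prefix in value["properties"]["addressPrefixes"]:
            print("  ", prefix)  # candidate routes for the UDR on the ISE subnets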