Hello Friends,
Welcome back to Part 4. In the first three parts of this series, we built a complete pipeline: Part 1 explained the security problem and why tenant visibility matters, Part 2 showed the PowerShell technique for resolving storage FQDNs to tenant IDs using the WWW-Authenticate header, and Part 3 connected Azure Firewall logs through KQL queries to feed that script automatically.
The pipeline we have so far produces a report with an FQDN, a tenant ID, and an IsOwnTenant flag. That is useful, but it leaves two practical gaps. First, tenant IDs are GUIDs: machine-readable, but not something a security team can act on. When a report tells you that eight storage accounts belong to an unknown tenant, you want a name, not a GUID. Second, the report is a manual one-time snapshot. In practice, you want something that flags new unknown tenants as they appear in your firewall traffic without anyone having to remember to run a script.
This part closes both gaps. I will show how to use the Microsoft Graph API to resolve any tenant ID to an organization name, how to build a simple allowlist to classify each tenant as your own, approved, or unknown, and how to set up an Azure Monitor alert that fires automatically when new unclassified traffic appears.
Resolving Tenant Names with the Microsoft Graph API
The Microsoft Graph API exposes a function called findTenantInformationByTenantId that returns the organization display name and default domain for any valid Entra ID tenant. Importantly, this works for any tenant, not just ones where you have an existing relationship or guest access. As long as you have a valid Graph token, you can resolve a tenant ID to a name.
The endpoint follows this pattern:
GET https://graph.microsoft.com/v1.0/tenantRelationships/findTenantInformationByTenantId(tenantId='<guid>')
The response includes four fields. The two you care about are displayName (the organization name registered in Entra ID) and defaultDomainName (the .onmicrosoft.com domain, which is permanent and cannot be changed by the tenant admin).
{
    "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "displayName": "Contoso Ltd",
    "defaultDomainName": "contoso.onmicrosoft.com",
    "federationBrandName": null
}
To call this endpoint you need a bearer token with the CrossTenantInformation.ReadBasic.All permission. This is a low-privilege delegated permission: it does not require admin consent, and it does not grant access to the other tenant's data, directory, or users. It only allows you to look up the basic identity information that the tenant has chosen to make visible.
Since the series already uses the Az module, the cleanest way to get a Graph token is with Get-AzAccessToken. Make sure you have run Connect-AzAccount first.
$graphToken = (Get-AzAccessToken -ResourceUrl "https://graph.microsoft.com/" -AsPlainText).Token
The -AsPlainText parameter was added in Az.Accounts 2.17.0. In earlier versions, Token was already a plain string and you can omit it. If you get a parameter error, check your module version with Get-Module Az.Accounts -ListAvailable and update if needed.
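If the script needs to run on machines with different Az.Accounts versions, a small version check papers over the difference. This is a sketch based on the version boundary described above; verify the exact behavior against your installed module:

```powershell
# Pick the newest installed Az.Accounts and branch on the 2.17.0 boundary
# (assumption: -AsPlainText exists from 2.17.0 on; earlier versions return
# the token as a plain string already)
$azAccounts = Get-Module Az.Accounts -ListAvailable |
    Sort-Object Version -Descending | Select-Object -First 1

if ($azAccounts.Version -ge [version]"2.17.0") {
    $graphToken = (Get-AzAccessToken -ResourceUrl "https://graph.microsoft.com/" -AsPlainText).Token
}
else {
    $graphToken = (Get-AzAccessToken -ResourceUrl "https://graph.microsoft.com/").Token
}
```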
With the token in hand, resolving a single tenant ID looks like this:
$tenantId = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
$uri = "https://graph.microsoft.com/v1.0/tenantRelationships/findTenantInformationByTenantId(tenantId='$tenantId')"
$tenantInfo = Invoke-RestMethod -Uri $uri `
    -Headers @{ Authorization = "Bearer $graphToken" } `
    -UseBasicParsing
$tenantInfo | Select-Object displayName, defaultDomainName
Output:
displayName defaultDomainName
----------- -----------------
Contoso Ltd contoso.onmicrosoft.com
One thing worth noting: displayName reflects whatever the organization has set in their Entra ID profile. It can be changed by their admin at any time. If you need a stable identifier for a long-term allowlist, use the tenantId itself or the defaultDomainName, which is permanent.
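For repeated use it is convenient to wrap the lookup in a small helper that returns both the display name and the stable default domain, and swallows lookup failures. The function name and shape are my own, not part of the earlier scripts:

```powershell
function Resolve-TenantName {
    # Hypothetical helper: resolves a tenant ID via the Graph endpoint above,
    # returning $null when the ID is invalid or the call fails
    param(
        [Parameter(Mandatory)] [string] $TenantId,
        [Parameter(Mandatory)] [string] $GraphToken
    )
    $uri = "https://graph.microsoft.com/v1.0/tenantRelationships/findTenantInformationByTenantId(tenantId='$TenantId')"
    try {
        $info = Invoke-RestMethod -Uri $uri -Headers @{ Authorization = "Bearer $GraphToken" }
        [pscustomobject]@{
            TenantId          = $TenantId
            DisplayName       = $info.displayName
            DefaultDomainName = $info.defaultDomainName
        }
    }
    catch {
        Write-Verbose "Could not resolve ${TenantId}: $($_.Exception.Message)"
        $null
    }
}
```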
Building an Approved Tenant Allowlist
With tenant names resolvable, the next step is classification. Not every external tenant in your report is a problem: you may have legitimate storage dependencies on partner tenants, Microsoft-managed tenants for platform services, or backup providers. The allowlist is how you encode that knowledge so the report can tell you what is expected and what is not.
I keep the allowlist as a simple JSON file, with one entry per approved tenant and a few metadata fields:
[
    {
        "TenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
        "DisplayName": "Contoso Ltd",
        "Notes": "Primary data partner, approved by InfoSec 2024-03"
    },
    {
        "TenantId": "yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy",
        "DisplayName": "Fabrikam Inc",
        "Notes": "Backup storage provider"
    }
]
Your own tenant ID does not need to be in this file. The classification logic handles it separately. The allowlist is only for external tenants you have explicitly reviewed and approved.
In PowerShell, loading the allowlist and building a hashtable for fast lookup:
$allowlistPath = ".\approved-tenants.json"
$allowlist = Get-Content -Path $allowlistPath -Raw | ConvertFrom-Json
$allowlistIndex = @{}
foreach ($entry in $allowlist) {
    $allowlistIndex[$entry.TenantId] = $entry
}
Classification is then a straightforward three-way check: if the tenant ID matches your own, it is Own; if it is in the allowlist index, it is Approved; if neither, it is Unknown. Unknown is the classification you act on.
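That check can be captured in a small function so the same logic is reusable in the report script and in any automation that comes later. The function name and parameters are mine; it also handles the case where tenant resolution failed entirely:

```powershell
function Get-TenantClassification {
    # Hypothetical helper implementing the three-way check described above,
    # plus "Unresolved" for FQDNs where no tenant ID could be extracted
    param(
        [string] $TenantId,                                  # may be empty/null
        [Parameter(Mandatory)] [string] $OwnTenantId,
        [Parameter(Mandatory)] [hashtable] $AllowlistIndex
    )
    if (-not $TenantId)                            { return "Unresolved" }
    if ($TenantId -eq $OwnTenantId)                { return "Own" }
    if ($AllowlistIndex.ContainsKey($TenantId))    { return "Approved" }
    "Unknown"
}
```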
Enriching the Full Pipeline
With the Graph resolver and the allowlist in place, the updated end-to-end script pulls everything together. The structure follows the same flow as Part 3, with two additions: a Graph name resolution pass over the unique tenant IDs before building the report, and a Classification column in the output.
# Inputs and outputs
$fqdnListPath  = ".\firewall-storage-fqdns.txt"
$allowlistPath = ".\approved-tenants.json"
$reportPath    = ".\tenant-visibility-report.csv"

# Resolve each FQDN to a tenant ID with the Part 2 function
$fqdns = Get-Content -Path $fqdnListPath
$results = Get-StorageAccountTenantId -storageFqdns $fqdns -Verbose

$myTenantId = (Get-AzContext).Tenant.Id
$graphToken = (Get-AzAccessToken -ResourceUrl "https://graph.microsoft.com/" -AsPlainText).Token

# Resolve each unique tenant ID to a display name once, caching the result
$displayNameCache = @{}
$uniqueTenantIds = $results | Where-Object { $_.TenantId } |
    Select-Object -ExpandProperty TenantId -Unique
foreach ($id in $uniqueTenantIds) {
    try {
        $uri = "https://graph.microsoft.com/v1.0/tenantRelationships/findTenantInformationByTenantId(tenantId='$id')"
        $info = Invoke-RestMethod -Uri $uri -Headers @{ Authorization = "Bearer $graphToken" } -UseBasicParsing
        $displayNameCache[$id] = $info.displayName
    }
    catch {
        $displayNameCache[$id] = $null
    }
}

# Load the allowlist and index it by tenant ID for fast lookups
$allowlist = Get-Content -Path $allowlistPath -Raw | ConvertFrom-Json
$allowlistIndex = @{}
foreach ($entry in $allowlist) { $allowlistIndex[$entry.TenantId] = $entry }

# Build the enriched report: DisplayName and Classification columns
$report = $results | Select-Object FQDN, TenantId, Status,
    @{ Name = "DisplayName"; Expression = { $displayNameCache[$_.TenantId] } },
    @{ Name = "Classification"; Expression = {
        if (-not $_.TenantId) { "Unresolved" }
        elseif ($_.TenantId -eq $myTenantId) { "Own" }
        elseif ($allowlistIndex.ContainsKey($_.TenantId)) { "Approved" }
        else { "Unknown" }
    }}

$report | Export-Csv -Path $reportPath -NoTypeInformation
$report | Group-Object Classification | Select-Object Name, Count | Sort-Object Name
A typical summary output looks like this:
Name Count
---- -----
Approved 12
Own 47
Unknown 8
Unresolved 2
The Unknown rows are the ones that need attention. Eight storage accounts whose tenant IDs do not appear in your allowlist and are not your own is a concrete, actionable finding. From there, you can look up each display name in the report, decide whether to approve it, and update the allowlist file for the next run.
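Approving a tenant is then a small edit to the JSON file. A sketch of doing that in PowerShell rather than by hand, using the same field names as the allowlist format above (the tenant ID and organization name below are placeholders):

```powershell
$allowlistPath = ".\approved-tenants.json"

# @() forces an array even when the file currently holds a single entry
$allowlist = @(Get-Content -Path $allowlistPath -Raw | ConvertFrom-Json)

$allowlist += [pscustomobject]@{
    TenantId    = "zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz"   # from the report
    DisplayName = "Northwind Traders"                       # from the DisplayName column
    Notes       = "Approved by InfoSec $(Get-Date -Format yyyy-MM)"
}

$allowlist | ConvertTo-Json | Set-Content -Path $allowlistPath
```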
Setting Up an Alert for Unknown Tenants
The report is now enriched and useful, but running it manually on a schedule is not a sustainable process. The goal is to get alerted automatically when new unknown storage traffic appears in your firewall logs so you can investigate immediately rather than discovering it the next time someone runs the script.
The simplest approach uses an Azure Monitor Scheduled Query Alert. You configure a KQL query that looks for new storage FQDNs that were not seen in your baseline window, set the alert to run on a schedule, and define a threshold of more than zero results as the trigger condition.
The KQL is an adapted version of the new-FQDNs query from Part 3, tightened to a daily cadence:
let knownFqdns =
    AZFWApplicationRule
    | where TimeGenerated between (ago(30d) .. ago(1d))
    | where Fqdn has "blob.core.windows.net"
        or Fqdn has "dfs.core.windows.net"
    | summarize by Fqdn;
AZFWApplicationRule
| where TimeGenerated >= ago(1d)
| where Fqdn has "blob.core.windows.net"
    or Fqdn has "dfs.core.windows.net"
| summarize by Fqdn
| where Fqdn !in (knownFqdns)
| project Fqdn
In the Azure portal, create this alert under your Log Analytics workspace with the following settings:
- Alert logic: Number of results greater than 0
- Evaluation frequency: every 1 hour
- Lookback period: last 24 hours
- Severity: 2 (Warning) or 1 (Error) depending on your environment's risk tolerance
- Action group: email, Teams webhook, or PagerDuty, whichever your team already uses
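The same alert can be created from PowerShell, which is handy if you manage monitoring as code. The sketch below follows the current Az.Monitor New-AzScheduledQueryRule cmdlet; its parameter set changed across major module versions, so check Get-Help New-AzScheduledQueryRule against your installed version before relying on it. All resource names and IDs are placeholders:

```powershell
# Assumes a recent Az.Monitor module; parameter names may differ in older versions
$query = Get-Content -Path ".\new-storage-fqdns.kql" -Raw   # the KQL shown above

$condition = New-AzScheduledQueryRuleCondition `
    -Query $query `
    -TimeAggregation "Count" `
    -Operator "GreaterThan" `
    -Threshold 0

New-AzScheduledQueryRule `
    -Name "new-unknown-storage-fqdns" `
    -ResourceGroupName "rg-monitoring" `
    -Location "westeurope" `
    -Scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>" `
    -Severity 2 `
    -WindowSize ([System.TimeSpan]::FromHours(24)) `
    -EvaluationFrequency ([System.TimeSpan]::FromHours(1)) `
    -CriterionAllOf $condition `
    -ActionGroupResourceId "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/microsoft.insights/actionGroups/<action-group>"
```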
When the alert fires, the notification payload contains the list of new FQDNs. At that point you have two paths: manually take the FQDN list, run the enrichment script from the previous section, and review the Classification column; or automate that step entirely with an Azure Automation runbook.
If you wire an Azure Automation runbook to the alert's action group via a webhook, the full enrichment pipeline can run automatically on every alert. The runbook receives the triggered alert payload, extracts the new FQDNs, calls Get-StorageAccountTenantId and the Graph API, checks the allowlist, and posts a Teams or email notification with the classified results already highlighted. You get a report in your inbox with Unknown tenants flagged, with no manual steps between the alert firing and the finding landing in front of the right person.
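A skeleton of such a runbook might look like the following. The parsing assumes the action group sends the common alert schema, where log alerts carry a linkToSearchResultsAPI URL pointing at the fired query results; property paths can vary by schema version, so treat this as a starting point rather than a drop-in script:

```powershell
param(
    [object] $WebhookData
)

# Azure Automation delivers the alert payload in the webhook request body
$payload = $WebhookData.RequestBody | ConvertFrom-Json

# Common alert schema: the fired search results are reachable via a link
# in the alert context (assumption -- verify against your payload)
$searchLink = $payload.data.alertContext.condition.allOf[0].linkToSearchResultsAPI

# Authenticate with the Automation account's managed identity and get a
# token for the Log Analytics query API
Connect-AzAccount -Identity | Out-Null
$laToken = (Get-AzAccessToken -ResourceUrl "https://api.loganalytics.io/" -AsPlainText).Token

$queryResult = Invoke-RestMethod -Uri $searchLink -Headers @{ Authorization = "Bearer $laToken" }

# The query projects a single Fqdn column, so each row is a one-element array
$fqdns = $queryResult.tables[0].rows | ForEach-Object { $_[0] }

# From here, reuse the Part 2 function and the enrichment logic above,
# then post the classified results to Teams or email
$results = Get-StorageAccountTenantId -storageFqdns $fqdns
```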
For environments where Azure Automation is not yet available or the setup overhead is not justified, the manual flow is still a significant improvement over no alerting at all: you know within an hour that new storage traffic has appeared, and the enrichment script takes under a minute to run against a short list of new FQDNs.
Wrapping Up
This is the final part of the series, so let me take a step back and look at what we built across all four articles.
In Part 1, we identified the core problem: outbound firewall rules that allow access to any Azure Storage account in any tenant are a data exfiltration risk, and Azure Firewall logs alone give you FQDNs but no tenant context.
In Part 2, we solved the discovery problem. By sending a request with two specific headers to any storage account endpoint, we force a 401 response that carries the WWW-Authenticate header with the tenant ID embedded inside it. A simple regex extracts it, and a public OIDC endpoint confirms it. No credentials, no special permissions.
In Part 3, we connected the script to real data. We walked through both Azure Firewall log formats, wrote KQL queries to extract storage FQDNs from each, and built a supporting query that surfaces only new FQDNs so you are not re-processing traffic you have already reviewed.
In Part 4, we made the output actionable. The Microsoft Graph API resolves tenant IDs to human-readable organization names. A JSON allowlist classifies each tenant as Own, Approved, or Unknown. And a Scheduled Query Alert in Azure Monitor means you hear about new unknown tenants within an hour of them appearing in your logs, without anyone having to remember to run a script.
The result is a lightweight but complete visibility capability: it runs against your existing firewall logs, requires no agents or additional data collection, and produces a report that a security team can read and act on. For most environments, the Unknown tenant list at the end of that report is the first time anyone has had a clear answer to the question of where outbound storage traffic is actually going.
I hope the series has been useful. If you have questions or want to share how you adapted it for your environment, feel free to reach out.