DMARC aggregate reports arrive as XML attachments from every major mail provider. This guide shows you exactly how to interpret them and what to look for.
Every time an email is sent from your domain, the receiving mail servers generate data. If you have a DMARC policy in place, that data gets packaged into an XML report and sent to the address you specified in your rua tag. These are called aggregate reports, and most developers who've set up DMARC never actually read them.
That's a mistake. The reports contain everything you need to understand who is sending email on behalf of your domain, whether your SPF and DKIM are passing, and whether anyone is trying to spoof you.
This guide walks through what the reports contain, how to read the XML directly, and what to look for.
The XML structure is defined by the DMARC RFC, so every report from every provider follows the same schema. The top-level elements are:
- report_metadata — who sent the report, the date range it covers, and the reporting organisation
- policy_published — your DMARC policy as the reporter sees it
- record — one record per source IP, per SPF/DKIM result combination

Each record has a row containing the source IP and message count, and an auth_results block containing the DKIM and SPF evaluation results.
Here's a minimal example of what a record looks like:
<record>
  <row>
    <source_ip>209.85.220.41</source_ip>
    <count>47</count>
    <policy_evaluated>
      <disposition>none</disposition>
      <dkim>pass</dkim>
      <spf>pass</spf>
    </policy_evaluated>
  </row>
  <identifiers>
    <header_from>yourdomain.com</header_from>
  </identifiers>
  <auth_results>
    <dkim>
      <domain>yourdomain.com</domain>
      <result>pass</result>
    </dkim>
    <spf>
      <domain>yourdomain.com</domain>
      <result>pass</result>
    </spf>
  </auth_results>
</record>
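Because every provider follows the same schema, the records parse with any standard XML library. Here's a minimal sketch using Python's ElementTree; the SAMPLE string simply repeats the record above, and in practice you'd first unzip or gunzip the attachment:

```python
# Sketch: extract the useful fields from one <record> of a DMARC
# aggregate report using Python's standard-library ElementTree.
import xml.etree.ElementTree as ET

SAMPLE = """
<record>
  <row>
    <source_ip>209.85.220.41</source_ip>
    <count>47</count>
    <policy_evaluated>
      <disposition>none</disposition>
      <dkim>pass</dkim>
      <spf>pass</spf>
    </policy_evaluated>
  </row>
  <identifiers>
    <header_from>yourdomain.com</header_from>
  </identifiers>
  <auth_results>
    <dkim>
      <domain>yourdomain.com</domain>
      <result>pass</result>
    </dkim>
    <spf>
      <domain>yourdomain.com</domain>
      <result>pass</result>
    </spf>
  </auth_results>
</record>
"""

def parse_record(xml_text: str) -> dict:
    rec = ET.fromstring(xml_text)
    return {
        "source_ip": rec.findtext("row/source_ip"),
        "count": int(rec.findtext("row/count")),
        "disposition": rec.findtext("row/policy_evaluated/disposition"),
        "dkim": rec.findtext("row/policy_evaluated/dkim"),
        "spf": rec.findtext("row/policy_evaluated/spf"),
        "header_from": rec.findtext("identifiers/header_from"),
    }
```

In a full report the records sit under a feedback root element, so you'd iterate over root.iter("record") and feed each one through the same extraction.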
When you read through aggregate reports, you're looking for three categories of result:
Legitimate sends passing correctly. Your email service provider's IP range, with dkim: pass and spf: pass. The disposition should be none because no action was taken — the mail was delivered normally. This is what you want to see for every authorised sender.
Forwarding failures. Email forwarding breaks SPF. When a recipient forwards your email to another address, the forwarding server relays it from its own IP, which isn't in your SPF record. This causes SPF to fail. DKIM usually still passes because the signature survives forwarding. This shows up as spf: fail, dkim: pass — DMARC passes on the DKIM result, so the mail still gets delivered. This is expected behaviour and not a problem unless your DMARC alignment is set to strict.
Spoofing attempts. An IP you don't recognise, sending mail that claims to be from your domain, with both SPF and DKIM failing. If your policy is p=reject, these should show disposition: reject. If it's p=none, they'll show disposition: none — meaning the mail was delivered anyway. This is why sitting at p=none indefinitely is a bad idea.
When you open an aggregate report, do this in order:
Check the source IPs. Do you recognise them? Cross-reference them against your email providers' published IP ranges: Google and Microsoft both document theirs, and your transactional email provider (Mailgun, Postmark, SES) will publish an IP list. Any IP you can't account for deserves investigation.
Check for SPF-only failures with DKIM pass. These are almost always forwarding. Note the volume — if you have a high proportion of forwarded mail, this tells you something about your audience.
Check for both failing. These are either spoofing attempts or misconfigured internal senders (a marketing tool that nobody told IT about, a third-party CRM sending on your behalf without proper authentication). If the count is high, something is wrong.
Check your policy against what you intended. The policy_published section shows your p=, sp=, and pct= values as the reporter sees them. If they don't match what you think you've configured, your DNS changes may not have propagated or you have a typo.
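The triage steps above reduce to a small decision table. A sketch, where known_ips is a hypothetical set you'd build from your providers' published ranges:

```python
# Sketch of the triage logic: classify a record by its SPF/DKIM outcome
# and whether the source IP is one you can account for.
def classify(spf: str, dkim: str, source_ip: str, known_ips: set) -> str:
    if spf == "pass" and dkim == "pass":
        # Legitimate if the IP is recognised; otherwise worth a look,
        # since something unknown is authenticating as you.
        return "legitimate" if source_ip in known_ips else "unknown-but-passing"
    if spf == "fail" and dkim == "pass":
        # SPF breaks on forwarding; DKIM usually survives it.
        return "likely-forwarding"
    if spf == "fail" and dkim == "fail":
        # Spoofing attempt or an unauthenticated internal sender.
        return "spoofing-or-misconfigured"
    return "partial-failure"
```

For example, classify("fail", "pass", "1.2.3.4", set()) returns "likely-forwarding". The volume per category matters more than any single record: a handful of both-fail records is background noise, a large count is a problem.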
Reading raw XML is fine for debugging specific issues, but for ongoing monitoring you want a tool that parses the reports and shows you a dashboard.
Several free and paid options exist. Postmark's DMARC monitoring is free for your own domains. Google's Postmaster Tools give you deliverability and domain reputation data for Gmail specifically. DMARC Analyser, Valimail, and others offer paid tiers with more detail.
The important thing is that you're looking at the data. Most organisations set up DMARC to p=none, look at the reports for a week, then forget about it. The point is to graduate from none to quarantine to reject once you're confident your legitimate senders are all properly authenticated.
DMARC alignment is separate from SPF and DKIM pass/fail. For SPF alignment, the domain in the Mail From envelope must match the From header domain. For DKIM alignment, the d= tag in the DKIM signature must match the From header domain.
Relaxed alignment (the default) allows subdomains to match. Strict alignment requires an exact match.
This catches people out when they use a subdomain for their ESP. If your marketing emails send from campaigns.yourdomain.com but the From header shows yourdomain.com, strict SPF alignment will fail even though SPF itself passes; relaxed alignment still matches, because both share the organisational domain. An ESP that puts its own domain in the Mail From fails SPF alignment in either mode.
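The relaxed/strict distinction can be sketched in a few lines. Note the caveat: real implementations derive the organisational domain from the Public Suffix List; the naive last-two-labels version here is illustration only and misfires on suffixes like .co.uk:

```python
# Sketch of DMARC alignment checking. org_domain is deliberately naive:
# production code should use the Public Suffix List instead.
def org_domain(domain: str) -> str:
    # Take the last two labels, e.g. campaigns.yourdomain.com -> yourdomain.com
    return ".".join(domain.lower().split(".")[-2:])

def aligned(auth_domain: str, header_from: str, mode: str = "relaxed") -> bool:
    if mode == "strict":
        return auth_domain.lower() == header_from.lower()
    # Relaxed: the organisational domains must match.
    return org_domain(auth_domain) == org_domain(header_from)

aligned("campaigns.yourdomain.com", "yourdomain.com")            # relaxed: True
aligned("campaigns.yourdomain.com", "yourdomain.com", "strict")  # False
```

The same function covers both identifiers: pass the Mail From domain for SPF alignment, or the DKIM d= domain for DKIM alignment.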
If you've set up DMARC and you're not sure whether your legitimate senders are all passing correctly, or you want a second opinion on your aggregate report data, we offer a free email deliverability audit. Get in touch →
Founded by Jon Morby, whose team has been running UK servers since 1992. Hosting built by engineers who care about deliverability and uptime.