Attribution data discrepancies can make even smart teams question their reporting. When I open Google Ads and see one total, GA4 shows another, and the CRM lands somewhere in the middle, it feels like the data is arguing with itself. Same campaign. Same month. Different numbers. For a B2B service company with a long sales cycle, that gap usually widens once form fills turn into booked meetings, SQLs, and revenue.
It is frustrating, but it is not random. The short version is simple: the tools are not counting the same thing. They use different rules, different clocks, and sometimes different identities for the same person. Once I can see where the counts split, the mess starts to look less like chaos and more like a reporting system that needs clear rules.
Attribution data discrepancies: why my marketing numbers don't match
When I compare marketing totals across systems, I almost never expect them to match line for line. Different systems count different users, events, windows, and timestamps. An ad platform may assign a conversion back to the date of the ad click. GA4 may record a key event on the day it happened in the session. A CRM may count only deduped contacts, or only contacts that reach a specific lifecycle stage. All three numbers can be valid at the same time.
In B2B service companies, this gets harder because the journey is longer. A click can happen today, a form fill next week, a booked meeting two weeks later, and an opportunity a month after that. If I mix those moments in one report without clear rules, the story breaks fast.
In practice, the usual causes are attribution model differences, duplicate events, missing tags, broken pixels, CRM stage changes, offline imports, reporting lag, ad blockers, consent settings, different counting methods, and time zone mismatches. Before I blame the ad platform, the CRM, or the team, I start with a simple triage view.
| What I see | Likely cause | First place I check |
|---|---|---|
| Google Ads is higher than GA4 | Click-based reporting, modeled conversions, longer lookback window | Conversion settings in Google Ads and GA4 attribution reports |
| GA4 is higher than the CRM | Duplicate form events, spam leads, CRM dedupe, lifecycle filters | Form setup, hidden fields, CRM dedupe rules |
| CRM suddenly jumps after being flat | Offline import, workflow change, sales stage edit | CRM workflows, import logs, lifecycle property history |
| LinkedIn shows leads but the CRM shows few | View-through credit, long buying cycle, weak UTM capture | LinkedIn attribution settings, landing page UTMs, CRM source fields |
| All tools shift after a site release | Broken tags, double firing, consent banner change | Tag manager, browser tests, server event dedupe |
A simple example makes this real. I might see Google Ads report 42 conversions for a search campaign, GA4 show 34 lead events, and HubSpot show 29 new contacts from paid search. The gap usually comes from stacked causes, not one bad number. Three people may have submitted the form twice, so GA4 counted more than the CRM. Two leads may have been blocked by CRM dedupe rules. Four conversions may still be credited in Google Ads because those people clicked an ad earlier and converted within the click window, even though the last visit looked direct in GA4. Another set of leads may not appear in the CRM until later because a workflow held them for enrichment. Same campaign, different logic.
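When the causes stack like that, I sometimes write the reconciliation down as named adjustments so nothing hides inside one vague gap. A minimal sketch in Python, using the hypothetical numbers from this example:

```python
# A hypothetical reconciliation of the example above: each adjustment is a
# named, countable reason rather than a vague "tracking is off".
google_ads_conversions = 42
ga4_lead_events = 34
crm_new_contacts = 29

# GA4 -> CRM: fully explained in this example.
ga4_to_crm = {
    "duplicate form submits, counted twice in GA4": -3,
    "leads blocked by CRM dedupe rules": -2,
}
assert ga4_lead_events + sum(ga4_to_crm.values()) == crm_new_contacts  # 34 - 5 = 29

# Google Ads -> GA4: partially explained; the remainder stays on a list to chase.
ads_to_ga4_known = {
    "clicked earlier, converted in window, last visit looked direct in GA4": -4,
}
unexplained = (google_ads_conversions + sum(ads_to_ga4_known.values())) - ga4_lead_events
print(f"Still unexplained between Google Ads and GA4: {unexplained}")  # 4
```

The point is not the arithmetic. It is that every adjustment gets a name, so the next person who opens the report does not start from zero.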
The anatomy of a data mismatch
When I trace a mismatch, I usually find it in layers, not in one bug. That framing makes the problem easier to fix.
click -> landing page visit -> form fill -> booked meeting -> SQL -> opportunity
Counts can split at every step. First, there is the tracking setup. I want to know whether the tag loaded on every page, whether the thank-you event fired once or twice, and whether the server-side event deduped correctly against the browser event. If that part is weak, I usually review Server-Side Tagging for B2B: When It’s Worth It and When It’s Not before I touch reporting logic.
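When I want to sanity-check that dedupe, comparing a browser event export against a server event export on their shared ID usually answers it. A minimal sketch; the event_id field name and the sample events are assumptions, since the exact dedupe key varies by platform:

```python
# Minimal sketch: compare exported browser and server event logs on their
# shared ID to see what deduped and what did not.
browser_events = [
    {"event_id": "a1", "name": "generate_lead"},
    {"event_id": "a2", "name": "generate_lead"},
]
server_events = [
    {"event_id": "a1", "name": "generate_lead"},  # matches a browser event: counted once
    {"event_id": "b9", "name": "generate_lead"},  # no browser twin: consent block, or a bug?
]

browser_ids = {e["event_id"] for e in browser_events}
server_ids = {e["event_id"] for e in server_events}

print("Deduped pairs:", sorted(browser_ids & server_ids))   # ['a1']
print("Server-only:", sorted(server_ids - browser_ids))     # ['b9'] expected under consent loss
print("Browser-only:", sorted(browser_ids - server_ids))    # ['a2'] server drop, worth checking
```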
Next comes the conversion definition. One tool may count a form submit. Another may count only a contact created. The CRM may count only leads with a business email. If the definitions are different, the totals will never line up. This is why I like to document the rules up front with How to define meaningful conversion events for B2B.
Then there are the attribution rules. Google Ads may credit the ad click. GA4 may use a data-driven model across sessions. LinkedIn may include view-through influence. The CRM may rely on first touch, last touch, or a custom source field that gets overwritten later. Same buyer journey, different credit rules.
After that comes identity resolution. One person can visit on mobile, return on desktop, and book through a calendar link from email. Some systems stitch that path together. Some do not. If consent is denied, that stitching gets even weaker. This is one reason cross-device attribution tracking matters so much in longer B2B journeys.
I also check time zone and reporting latency. One tool may close the day at midnight Eastern, another in Pacific time, while the CRM sync updates hourly. A conversion can look missing in the morning and appear later in the day.
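A concrete timestamp shows how the same conversion can land on two different report dates. A minimal sketch using Python's standard zoneinfo, with a made-up timestamp:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# One conversion stored in UTC, reported by tools that close the day in
# different time zones.
conversion_utc = datetime(2024, 5, 7, 5, 0, tzinfo=timezone.utc)

for label, tz in [("Eastern", "America/New_York"), ("Pacific", "America/Los_Angeles")]:
    local = conversion_utc.astimezone(ZoneInfo(tz))
    print(f"{label} report date: {local.date()}")

# Eastern report date: 2024-05-07  (1:00 am)
# Pacific report date: 2024-05-06  (10:00 pm the "previous" day)
```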
Last, I look at CRM updates. A sales rep merges contacts, a workflow changes lead status, or an offline conversion import lands two days late. That can move last week’s totals after the fact. It feels wrong when it happens, but it is a normal part of many reporting setups.
Why ad platforms and analytics tools rarely agree
When I look at these systems side by side, I remind myself that they were built for different jobs. Google Ads and LinkedIn Campaign Manager are built to help optimize media spend. GA4 is built to show site behavior and conversion paths. The CRM is built to track people, pipeline, and revenue. Expecting them to match exactly sounds reasonable, but it is rarely realistic. A lot of the confusion makes more sense once you understand Google Analytics attribution limitations.
The biggest gap often appears when an ad platform counts more conversions than analytics. In my experience, that usually comes from click-based attribution, longer lookback windows, modeled conversions, or view-through credit. Analytics may also miss visits when consent is denied or cookies are blocked. The platform is not always wrong. It is often just using different rules.
Click-based versus session-based reporting
Google Ads often ties a conversion back to the ad click. GA4 ties behavior to sessions and users as best it can. If someone clicks a paid ad on Monday and returns direct on Thursday to fill out a form, Google Ads may still claim the conversion while GA4 may not.
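Stripped to its core, the split is two different credit functions applied to one journey. A minimal sketch; both functions are deliberate simplifications, not either platform's real model (GA4's default attribution is data-driven, for instance):

```python
# One hypothetical journey: paid click on Monday, direct return on Thursday.
journey = [
    {"day": "Mon", "source": "google_ads_click", "converted": False},
    {"day": "Thu", "source": "direct", "converted": True},
]

def click_based_credit(journey):
    """Ad-platform style, simplified: any prior ad click in window gets credit."""
    clicked = any(t["source"] == "google_ads_click" for t in journey)
    return "paid search" if clicked else "unattributed"

def converting_session_credit(journey):
    """Analytics style, simplified: credit the source of the converting visit."""
    return next(t["source"] for t in journey if t["converted"])

print(click_based_credit(journey))         # paid search
print(converting_session_credit(journey))  # direct
```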
View-through conversions
LinkedIn is a common example in B2B. A buyer sees an ad, never clicks, then searches the brand later and fills out a form. LinkedIn may count that influence through a view window. GA4 will not, and the CRM usually will not unless attribution has been customized to capture it.
Modeled conversions
Privacy changes did not eliminate attribution, but they did make it less exact. Consent mode, browser limits, and app tracking rules mean some platforms estimate missing conversions. Those modeled numbers can help bidding, but they do not map neatly to a contact record in HubSpot or Salesforce. If you want the technical backdrop, this guide on losing attribution data from privacy updates is a useful reference.
Cross-device behavior
A buyer may first click an ad on mobile, revisit on a laptop, then book from an email link. One tool connects that path. Another sees separate interactions. Longer B2B research cycles make this more common.
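The cookie-only view versus the stitched view is easy to see with a toy example. A minimal sketch with made-up identifiers:

```python
# Three touchpoints from one buyer. A cookie-only view sees two "users";
# a system with an email join key (from the form fill) sees one person.
touchpoints = [
    {"device_id": "mobile-123", "email": "ana@example.com", "source": "google_ads"},
    {"device_id": "laptop-456", "email": "ana@example.com", "source": "direct"},
    {"device_id": "laptop-456", "email": None,              "source": "email_link"},
]

cookie_view = {t["device_id"] for t in touchpoints}
stitched_view = {t["email"] for t in touchpoints if t["email"]}

print(len(cookie_view), "users in the cookie-only view")      # 2
print(len(stitched_view), "person after stitching on email")  # 1
```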
Cookie loss and consent settings
If a consent banner blocks analytics until consent is given, GA4 may miss early visits. If the ad platform still receives server-side conversion data later, its totals can be higher. I see this especially often in setups with stricter privacy controls. When that is the issue, I usually audit Consent Mode v2 for Lead Gen: Getting Measurable Signals Without Guesswork before I touch channel budgets.
Lookback windows and default settings
A 30-day click window will usually count more than a 7-day click window. Add view-through reporting and the total rises again. High-consideration B2B buying tends to widen that gap.
None of that makes ad platform numbers useless. I still find them valuable for campaign tuning, bidding signals, audience feedback, and creative testing. The mistake is treating them as the only revenue report.
The hidden costs of ignoring discrepancies
It is easy to shrug and call the gap "close enough." I have learned to be careful with that phrase because it can get expensive.
When attribution discrepancies go unchecked, budget moves to the wrong place. A channel that reports strongly inside the platform can keep winning spend even if the CRM shows weak opportunity creation. Another channel can look quiet in the platform and still drive strong assisted revenue later in the sales cycle.
The gap also distorts ROAS and CAC. If the platform counts 100 leads and the CRM shows only 80 valid leads, the cost per lead is not what it first appeared to be.
Here is the simple math. If I spend $20,000 on paid search, Google Ads reports 100 qualified leads, and the CRM shows 80 real qualified leads after dedupe and sales review, the cost per qualified lead changes from $200 to $250. If 25 percent of qualified leads become opportunities, the platform view suggests 25 opportunities while the CRM supports only 20. At an average opportunity value of $15,000, that is a projected pipeline gap of $75,000.
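The same math as a few lines of Python, so the inputs can be swapped for any account:

```python
spend = 20_000
platform_leads = 100        # what the ad platform reports
crm_leads = 80              # after dedupe and sales review
opp_rate = 0.25             # qualified lead -> opportunity
avg_opp_value = 15_000

cpl_platform = spend / platform_leads   # 200.0
cpl_crm = spend / crm_leads             # 250.0

pipeline_gap = (platform_leads - crm_leads) * opp_rate * avg_opp_value
print(f"Cost per qualified lead: ${cpl_platform:.0f} vs ${cpl_crm:.0f}")
print(f"Projected pipeline gap: ${pipeline_gap:,.0f}")  # $75,000
```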
There is a softer cost too: trust. Marketing says paid search is winning. Sales says the leads are weak. Leadership sees three dashboards and none agree. The meeting stops being about what to do next and turns into an argument about whose spreadsheet is right.
Building a single source of truth
When I talk about a single source of truth, I do not mean every tool should show the exact same total. That usually leads to fake precision. What I want instead is a reporting hierarchy: each system has a job, each metric has a home, and everyone knows which number to use for which decision.
For most B2B service companies, I assign ad platforms to media spend, clicks, and conversion signals used for bidding. I use analytics for on-site behavior, landing page trends, and path analysis. I use the CRM for leads, qualified leads, opportunities, pipeline, and revenue.
That only works if I also set operating rules. I keep conversion names consistent, keep UTM naming simple and fixed, lock source fields in the CRM once created when possible, and document who owns forms, tags, offline imports, and reporting. When a number moves, someone should know why. I also like having a stable naming standard in GA4, which is why Event naming in GA4 for B2B: a convention that scales is worth documenting before the account gets messy.
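A naming convention only holds if something checks it. A minimal sketch of a UTM validator, assuming a hypothetical convention of lowercase names, underscores, and short allowlists:

```python
import re

ALLOWED_SOURCES = {"google", "linkedin", "bing", "newsletter"}  # hypothetical allowlist
ALLOWED_MEDIUMS = {"cpc", "paid_social", "email", "organic"}
CAMPAIGN_PATTERN = re.compile(r"^[a-z0-9_]+$")  # lowercase, digits, underscores only

def utm_problems(params: dict) -> list[str]:
    """Return naming-convention violations for one landing URL's UTM parameters."""
    problems = []
    if params.get("utm_source") not in ALLOWED_SOURCES:
        problems.append(f"unknown utm_source: {params.get('utm_source')!r}")
    if params.get("utm_medium") not in ALLOWED_MEDIUMS:
        problems.append(f"unknown utm_medium: {params.get('utm_medium')!r}")
    if not CAMPAIGN_PATTERN.match(params.get("utm_campaign", "")):
        problems.append(f"utm_campaign breaks convention: {params.get('utm_campaign')!r}")
    return problems

# A single capital letter quietly splits reporting into two "channels".
print(utm_problems({"utm_source": "Google", "utm_medium": "cpc", "utm_campaign": "brand_search"}))
# ["unknown utm_source: 'Google'"]
```

A check like this can run as a scheduled audit over recent landing URLs, which is cheaper than untangling a split channel three months later.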
| Metric | Primary source | Secondary source | Why |
|---|---|---|---|
| Sessions and landing page engagement | GA4 | Ad platforms | Analytics is built for site behavior |
| Raw lead events | GA4 or form tool | CRM | Useful for spotting form volume changes |
| New contacts | CRM | GA4 | CRM handles dedupe and lifecycle fields |
| MQLs or SQLs | CRM | Marketing automation | Sales stage logic lives here |
| Opportunities | CRM | Sales reports | Pipeline should come from one place |
| Closed-won revenue | CRM or finance system | Ad platform import | Booked revenue matters more than platform estimates |
| Conversion signal for bidding | Ad platform using CRM import when possible | GA4 | Ad platforms need fast feedback |
Attribution windows that quietly change the story
Before I do anything else, I audit the attribution windows. A surprising amount of apparent data drama is just a window problem.
Take a service business selling a high-ticket consulting package. If I report paid search leads on a 7-day click window, I may see 18 conversions. Move to a 30-day click window and that may rise to 27. Add a 7-day view window on a platform like LinkedIn and I may see 33 influenced conversions. Those totals are not fake. They are based on different rules.
When I can, I align the click windows. When I cannot, I document the difference and keep it visible in the report. A simple note explaining that Google Ads uses a 30-day click window while GA4 uses a 7-day key event window prevents a lot of wasted debate.
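Those totals fall straight out of click-to-conversion lag. A minimal sketch with made-up lags that reproduce the numbers above:

```python
# Hypothetical click-to-conversion lags in days for one campaign.
lags_days = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 6, 7, 7, 7, 7, 7,  # 18 within 7 days
             9, 11, 13, 16, 19, 22, 24, 27, 29]                      # 9 more within 30 days

def conversions_in_window(lags, window_days):
    return sum(1 for lag in lags if lag <= window_days)

print(conversions_in_window(lags_days, 7))    # 18
print(conversions_in_window(lags_days, 30))   # 27
# A 7-day view window adding 6 view-through conversions would read as 33.
```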
Picking a primary conversion source without the drama
The primary conversion source depends on the decision in front of me. This is where a lot of teams get stuck because they want one number for everything. That sounds tidy, but it is rarely useful.
| Decision | Number I trust most |
|---|---|
| Which campaign gets more spend this week? | Ad platform conversions, ideally fed by CRM imports |
| How many net new leads did marketing create? | CRM new contacts after dedupe |
| How many qualified leads reached sales? | CRM lifecycle stages |
| How many opportunities came from paid search or organic search? | CRM with locked source fields |
| What is CAC, pipeline, or revenue by channel? | CRM and finance data |
So yes, I may use Google Ads conversions for bidding signals if they are tied to real business outcomes. But when the question is how much revenue paid search drove, I go to the CRM and booked revenue. In most mature accounts, that means using Offline Conversion Imports: The Only Signals Google Ads Should Optimize For instead of relying on form fills alone.
Practical steps to reconcile my data today
When a conversion tracking discrepancy spikes suddenly, I start with recent changes. In most cases, something changed in tracking, forms, consent settings, or CRM logic before the numbers moved. I work through the checks below from fastest impact to deeper cleanup.
- Check what changed recently. I review tag manager edits, form updates, consent banner changes, server-side dedupe changes, and CRM workflow history. Sudden spikes usually come from a recent change, not a mystery trend.
- Verify tags and pixels are firing where they should. I test the landing page, form submit, and thank-you step. One missing or broken tag can create a false channel problem.
- Compare event definitions side by side. I write out exactly what each system counts. If Google Ads counts form submissions while the CRM counts contacts created, I already know why the totals differ.
- Check server-side and browser event dedupe. If both fire without a valid event ID, double counting is likely.
- Look for duplicate firing. One event can fire from page load, button click, and form success if the setup is messy. A single live form test often catches this quickly.
- Match time zones where possible. Same-day comparisons can look broken when the ad account, analytics property, and CRM close the day at different times.
- Review offline imports, routing, and UTM capture. Offline uploads can change past totals, while bad redirects, blank hidden fields, or weak routing can create a wide gap between analytics and the CRM.
- Backfill broken periods and annotate the report. If tracking was down for five days, I mark it clearly. If I can reprocess records or import missed conversions, I do. If I cannot, I make the limitation visible.
When the variance gets large, I use a repeatable investigation process rather than guessing. This guide on how to fix attribution discrepancies in data is a good technical companion to that workflow.
Data quality checks when a conversion tracking discrepancy shows up
Good reporting is not something I set up once and forget. It needs maintenance.
Each week, I run a few quick checks:
- Compare leads by channel across ad platforms, GA4, and the CRM.
- Spot-check a handful of recent records for source and UTM accuracy.
- Submit a test form.
- Review any website or tracking changes from the previous seven days.

Each month, I go deeper:
- Audit conversion settings in each ad platform.
- Confirm consent behavior.
- Review CRM workflow edits and field mappings.
- Check offline conversion imports.
- Verify that source fields are not being overwritten later in the funnel.
If spam is inflating raw lead counts before they ever reach sales, I also revisit Reducing Spam Leads in B2B PPC Without Killing Volume. That one issue alone can make GA4 look healthy while the CRM looks weak.
I also set a tolerance threshold. Perfect parity is not the goal. For many B2B service companies, I treat a 5 to 15 percent variance between Google Ads and GA4 as normal, especially when windows and consent rules differ. Once the gap moves past 20 percent, or if the pattern changes suddenly, I investigate it.
Consistency matters more than chasing exact agreement. If I accept a 10 percent gap one month and panic at 8 percent the next, the problem is the process, not the report.
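That tolerance rule is easy to encode so it gets applied the same way every month. A minimal sketch using the 20 percent threshold from above:

```python
def variance_pct(platform_count: int, analytics_count: int) -> float:
    """Relative gap between two totals, as a percentage of the platform count."""
    return abs(platform_count - analytics_count) / platform_count * 100

def triage(platform_count: int, analytics_count: int, threshold_pct: float = 20.0) -> str:
    gap = variance_pct(platform_count, analytics_count)
    if gap <= threshold_pct:
        return f"{gap:.1f}% gap: within tolerance, document and move on"
    return f"{gap:.1f}% gap: over threshold, investigate tracking first"

print(triage(100, 88))  # 12.0% gap: within tolerance, document and move on
print(triage(100, 70))  # 30.0% gap: over threshold, investigate tracking first
```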
Moving forward with confidence
The healthiest way I know to handle attribution discrepancies is not to pretend they should disappear. It is to manage them with clear rules.
I anchor each decision to the right primary source. I use the ad platform for media signals, analytics for on-site behavior, and the CRM for pipeline and revenue. I interpret variance the same way every month, document known exceptions, and share the gaps openly with marketing, sales, and leadership so the reporting stays honest.
A simple operating rhythm is usually enough: weekly review of platform and analytics gaps, monthly review of CRM logic and imports, and quarterly alignment between sales, marketing, and leadership on source rules. I also want one clear owner, often in marketing ops or RevOps, plus an escalation path. If variance crosses the threshold, I check tracking first, then CRM workflows, then media settings.
I also find it useful to add a short discrepancy note to each monthly report that records the reporting period, the tools compared, the variance range, the known reasons for the gap, the source used for budget decisions, the source used for revenue reporting, the owner, and the next review date.
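To keep those fields from drifting month to month, the note can live as a fixed template. A minimal sketch as a Python dataclass; the field names and sample values are my own illustration, not a standard:

```python
from dataclasses import dataclass

@dataclass
class DiscrepancyNote:
    """One per monthly report. Field names are illustrative, not a standard."""
    period: str                    # e.g. "2024-05"
    tools_compared: list[str]
    variance_range_pct: str        # e.g. "8-14%"
    known_reasons: list[str]
    budget_decision_source: str    # the number that drives spend moves
    revenue_reporting_source: str  # the number that goes to leadership
    owner: str
    next_review_date: str

note = DiscrepancyNote(
    period="2024-05",
    tools_compared=["Google Ads", "GA4", "HubSpot"],
    variance_range_pct="8-14%",
    known_reasons=["30-day vs 7-day click windows", "consent-denied sessions"],
    budget_decision_source="Google Ads, fed by CRM conversion imports",
    revenue_reporting_source="CRM closed-won",
    owner="RevOps",
    next_review_date="2024-06-07",
)
```

That small addition does more than most teams expect. It reduces panic, adds context, and keeps the conversation focused on action instead of noise.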