
Your Ecommerce Data Is Lying: Spot Real Signals

16 min read · Feb 26, 2026

Most ecommerce teams do not suffer from a lack of data. They suffer from a lack of clear signals. Dashboards keep growing, yet decisions still feel slow and messy. To get signals in ecommerce, I have to move past raw numbers and start treating data like a conversation my store is having with me. Some messages are loud and clear. Most are background noise.

UNDERSTANDING ECOMMERCE SIGNALS: HOW TO GET SIGNALS IN ECOMMERCE

I start with two simple ideas: metrics and signals.

A metric is any number I track - sessions, conversion rate, cart adds, AOV, ROAS, and so on. A signal is a repeatable pattern in those metrics that ties directly to revenue or another outcome I care about.

A signal is a pattern with a segment, a timeframe, and a measurable business impact.

A jump in traffic is just a metric. A signal looks more like a connected story: mobile traffic from paid search rises and stays elevated for a week; within that same segment checkout errors climb; revenue from that segment falls even though spend is unchanged. Now I have a pattern, a segment, and a business impact. That is a signal.

I frame it this way because signals change how I work. They speed up decisions by pointing toward likely cause-and-effect instead of isolated numbers. They reduce false alarms by filtering out one-day blips. And they help me prioritise because I can rank them by impact rather than by how dramatic a chart looks.

Most teams drown in isolated metrics. The teams that move fastest build a habit of asking, week after week: is this just a number, or is this a signal that should change what I do next?
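
To make that habit concrete, a signal can be captured as a small structured record rather than a loose observation. A minimal Python sketch (the field names and the three-day threshold are my own illustrative choices, not from any particular analytics tool):

```python
from dataclasses import dataclass

@dataclass
class Signal:
    """A repeatable pattern tied to a segment, a timeframe, and an outcome."""
    pattern: str            # what moved, e.g. "checkout errors up"
    segment: str            # who it affects, e.g. "mobile / paid search"
    days_observed: int      # how long the pattern has persisted
    revenue_impact: float   # estimated effect in currency units

    def is_actionable(self, min_days: int = 3) -> bool:
        # One-day blips are noise; persistent, costly patterns are signals.
        return self.days_observed >= min_days and self.revenue_impact != 0

sig = Signal("checkout errors up", "mobile / paid search",
             days_observed=7, revenue_impact=-4200.0)
print(sig.is_actionable())  # a week-long, costly pattern qualifies
```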

SIGNAL VS. NOISE: SEPARATING WHAT MATTERS

Every ecommerce store generates noise. A creator shares a link on social without warning. A developer ships a tracking or template change. A holiday weekend pulls people away from screens. Charts move even when nothing meaningful has changed.

When I’m deciding whether something is signal or noise, I use a lightweight scorecard before anyone treats it like an emergency. A pattern starts to look like a real signal if most of these are true:

  1. There’s enough volume. For site-level metrics, I prefer at least a few hundred sessions or orders in the period I’m analysing. For narrow segments, I watch absolute counts as closely as percentages.
  2. It lasts long enough. One bad day can be noise. Three to seven days of the same pattern, in the same segment, usually deserves attention.
  3. The change is big enough to matter. Tiny moves often mean nothing. As a rough rule of thumb, shifts bigger than ~15-20% on a key metric earn a closer look (with the caveat that low-volume segments can exaggerate percentages).
  4. It shows up in more than one related metric. Cart abandonment up alongside checkout time up is more convincing than either alone. AOV down alongside discount usage up is more meaningful than AOV down by itself.
  5. It passes a context check. I look for tracking changes, major promotions, and known seasonality that could explain the movement before I assume customer behaviour has changed.
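
The five checks above can be folded into a tiny scorecard function. This is a sketch of my own making with illustrative thresholds, not a standard formula; four or more passing checks usually earns a closer look:

```python
def signal_score(volume_ok, persists_days, pct_change,
                 corroborating_metrics, context_clean):
    """Count how many of the five scorecard checks a movement passes."""
    checks = [
        volume_ok,                   # 1. enough sessions/orders behind it
        persists_days >= 3,          # 2. lasts three or more days
        abs(pct_change) >= 0.15,     # 3. ~15%+ shift on a key metric
        corroborating_metrics >= 2,  # 4. shows up in related metrics too
        context_clean,               # 5. no promo/tracking explanation
    ]
    return sum(checks)

# A week-long 22% drop, corroborated by two related metrics, with no
# promo or tracking change to explain it: passes all five checks.
print(signal_score(volume_ok=True, persists_days=5, pct_change=-0.22,
                   corroborating_metrics=2, context_clean=True))
```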

Most false alarms I see come from three familiar traps.

Seasonality traps. Black Friday, payday cycles, back-to-school, and other seasonal effects can make week-over-week comparisons misleading. When seasonality is strong, I try to compare to the same period in the prior year (or at least the closest comparable period I have), not just last week.

Tracking changes. A tag manager tweak, a new consent prompt, or a change in attribution rules can move numbers without moving customer intent. I keep a simple analytics change log - what changed, when it changed, and who owns it - so I can rule this out quickly before I “fix” the wrong thing.
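
The change log can live in something as plain as a list of dicts, as long as it can answer "what changed near this anomaly?". A minimal sketch with hypothetical entries:

```python
from datetime import date

# A minimal analytics change log: what changed, when, and who owns it.
change_log = [
    {"date": date(2025, 3, 3), "what": "new consent prompt", "owner": "dev"},
    {"date": date(2025, 3, 10), "what": "GTM tag update", "owner": "marketing"},
]

def changes_near(anomaly_date, window_days=3):
    """Return logged changes within a few days of an anomaly,
    so tracking effects can be ruled out before anything is 'fixed'."""
    return [c for c in change_log
            if abs((anomaly_date - c["date"]).days) <= window_days]

print(changes_near(date(2025, 3, 11)))  # the GTM update is a suspect
```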

Promotion effects. Big promos distort almost every metric: traffic spikes, conversion often lifts, and AOV often drops. That isn’t necessarily a conversion signal - it may simply be a discount signal. I separate promo weeks from non-promo weeks when I’m interpreting performance, so I’m not mixing two different customer contexts.

Once I use this lens consistently, something shifts: I spend less time debating data and more time making decisions, because I chase fewer ghosts.

STATISTICAL SIGNIFICANCE IN ECOMMERCE DATA

Sooner or later someone asks, “Is this statistically significant?” I try to keep the conversation grounded in two ideas.

Statistical significance asks whether a difference is unlikely to be random noise. Practical significance asks whether the difference is big enough to matter for the business. In ecommerce, I want both when I can get them - but I don’t need lab-grade proof for every decision.

When changes are high-risk or hard to reverse (for example, major checkout changes, pricing across a category, or critical page templates), I’m more careful and I look for stronger evidence. When the change is small, localised, or easily reversible (like tuning bids on one campaign or reacting to an obvious bug), I often act on directional trends first and validate after.

For smaller stores, sample size is the real constraint. If I only have a few dozen orders per day, waiting for perfect certainty can mean waiting too long. These rough minimums help me judge when a pattern is more likely to be trustworthy - without pretending they are strict rules:

Minimum data for a solid read (per variant or segment):

  - Site conversion rate: 500 to 1,000 sessions
  - Add-to-cart rate: 300 to 500 product detail page views
  - Average order value: 100 to 200 orders
  - Email click-through rate: 500 to 1,000 delivered emails
  - Bounce rate on a key landing page: 300 to 500 sessions

These are guardrails, not laws. If conversion drops sharply right after a deployment, I don’t wait for perfect testing - I act to stop the bleeding, then I validate what happened.
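
When a decision is worth the rigour, a standard two-proportion z-test answers the "is this just random noise?" question for a conversion-rate comparison using nothing but the standard library. This is a generic statistical sketch with made-up numbers, not a prescription from the article:

```python
from math import sqrt, erf

def conversion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for conversion rates.

    conv_* are converted counts, n_* are sessions per variant.
    Returns (z, two_sided_p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: 3.0% vs 5.5% conversion on 1,000 sessions each.
z, p = conversion_z_test(30, 1000, 55, 1000)
print(round(z, 2), round(p, 4))
```

With roughly a thousand sessions per side, a gap that size comes out well below the usual 0.05 threshold; with a few dozen sessions it would not, which is the sample-size constraint in practice.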

DIFFERENT TYPES OF SIGNALS IN ECOMMERCE

Once I can separate signal from noise, the next step is knowing which signals actually help run the business. In practice, most useful ecommerce signals fall into a handful of buckets.

Engagement signals tell me whether visitors are paying attention and whether pages meet expectations. Bounce rate becomes meaningful when I read it in context: a high bounce on an informational article can be fine, but a spike in bounce on a high-intent paid landing page often suggests a mismatch between the ad promise and the page experience. I also watch how far people scroll and how long they stay. If visitors rarely move past the hero area on product pages, the details that build confidence may be buried. If they scroll deeply but don’t click “add to cart,” pricing, shipping, or product clarity may be the blocker.

Conversion funnel signals are the rhythm of revenue: product view → add to cart → checkout start → completed order. I break funnel stages down by device, traffic source, and new vs returning visitors because “conversion is down” is not a diagnosis. A sudden drop only on mobile at the shipping step is a very different problem from a gentle decline across all stages on desktop. (If you like diagnostic thinking for stage drop-off, the framing in Pipeline Analytics: Reading Stage Drop-Off Like a Diagnostic translates well to ecommerce funnels.)

Retention and repeat purchase signals matter because revenue is not only made on first orders. If the repeat purchase interval drifts - from customers buying every 45 days to buying every 60 - that can be an early warning even while total revenue still looks stable. I also watch whether my best cohorts keep coming back to browse; fewer visits from high-value groups can signal competitive pressure, a weaker post-purchase experience, or simply that my assortment is losing relevance for them. Cohort thinking is often more revealing than averages (see B2B LTV models: when cohorts beat averages and why it matters for a useful mental model).
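
Repeat-interval drift is straightforward to compute once order dates are available. A minimal sketch with hypothetical dates; the 45-to-60-day drift described above is exactly the kind of shift it would surface when run per cohort over time:

```python
from datetime import date
from statistics import median

def repeat_interval_days(order_dates):
    """Median days between consecutive orders for one customer."""
    ds = sorted(order_dates)
    gaps = [(later - earlier).days for earlier, later in zip(ds, ds[1:])]
    return median(gaps) if gaps else None

# Hypothetical customer with four orders over five months.
orders = [date(2025, 1, 1), date(2025, 2, 15),
          date(2025, 4, 1), date(2025, 6, 1)]
print(repeat_interval_days(orders))  # → 45
```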

Group signals into buckets so you can triage faster and avoid chasing isolated metrics.

Pricing and promotion signals show me where price sensitivity is hiding. I pay attention to promo lift - how much volume changes at different discount levels - because it helps me see where I’m buying demand versus unlocking demand. If a smaller discount generates nearly the same lift as a larger one, that’s a valuable signal about where margin might be leaking. I also treat price sensitivity as a practical concept, not an academic one: if price rises and unit sales fall meaningfully, demand is sensitive, and I need to be careful with broad increases. When teams model this, I anchor on price elasticity as the simplest shared language, then use controlled analysis to avoid guessing - even if the model is not perfect.
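
Price elasticity as a shared language can be as simple as the arc (midpoint) formula: percentage change in quantity divided by percentage change in price. A sketch with hypothetical numbers; anything beyond |1| means demand is sensitive enough to warrant caution with broad increases:

```python
def arc_elasticity(q0, q1, p0, p1):
    """Arc (midpoint) price elasticity of demand: %ΔQ / %ΔP."""
    dq = (q1 - q0) / ((q0 + q1) / 2)
    dp = (p1 - p0) / ((p0 + p1) / 2)
    return dq / dp

# Hypothetical: price moves 40 -> 44 (+10%), weekly units 500 -> 430.
e = arc_elasticity(500, 430, 40.0, 44.0)
print(round(e, 2))  # |e| > 1: demand here is price-sensitive
```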

Operational and technical signals are unglamorous but real. A jump in page load time on key templates, spikes in payment errors, stockouts on top products, or delivery estimates creeping up often show up in funnel behaviour before they show up as complaints. A slow mobile experience, for example, becomes a conversion signal long before it looks like a server problem.

SEARCH AND NAVIGATION SIGNALS THAT REVEAL INTENT

On-site search is one of the most honest data sources I have: visitors tell me in plain language what they came for. Navigation patterns then show me how hard I make them work to find it.

I start by reviewing top internal search terms and looking for two kinds of gaps. If people repeatedly search for products I don’t carry, that’s a product strategy signal. If they search for items I do carry but still don’t convert, that often points to discoverability issues - merchandising, naming, category structure, or search relevance.

Zero-result searches are especially valuable because each one is a moment where intent hits a dead end. The fix is sometimes simple: add synonyms and spelling variations; route common queries to the right category; or rename products and categories to match the words customers use rather than internal terminology.

I also watch refinements: when “blue dress” turns into “blue summer dress under 50,” it signals that price and season are part of the intent. Over time, those patterns shape how I design filters and how I structure categories. Filter usage itself is a signal too. Heavy filtering followed by quick exits can mean the category tree doesn’t match how people think - or that an important filter (like material or size) is buried and effectively unusable.

When visitors bounce between related categories and then leave (“pogo sticking”), I treat it as a label and structure problem until proven otherwise. Clearer labels, fewer overlapping categories, and better “next best” paths can remove friction without changing the product range.

To make this actionable, I mentally bucket queries by intent. Broad category queries should land on a focused listing page with the right filters visible quickly. Specific product queries should take people to a relevant product page (or a very tight list). Problem or job-based queries belong with educational content that also connects to curated products. This simple model turns raw search logs into clear signals about what pages I need, how I should label them, and where I’m creating dead ends.
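
That bucketing can start as a crude rule-based classifier over the search log. The keyword rules below are illustrative placeholders of my own, not any real search engine's logic, but even rules this blunt make zero-result and mismatch patterns visible per bucket:

```python
def bucket_query(query: str) -> str:
    """Crude intent bucketing for on-site search logs (illustrative rules)."""
    q = query.lower()
    if any(w in q for w in ("how to", "fix", "repair", "remove")):
        return "problem"            # job-based -> educational content
    if any(ch.isdigit() for ch in q) or len(q.split()) >= 3:
        return "specific product"   # detailed query -> tight product list
    return "broad category"         # short query -> focused listing page

for q in ["dresses", "blue summer dress under 50", "how to clean suede"]:
    print(q, "->", bucket_query(q))
```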

SYSTEMS TO CAPTURE ECOMMERCE SIGNALS

I don’t need an exotic setup to start acting on signals, but I do need reliable data capture and a way to see behaviour - not just totals.

At a minimum, I make sure key commerce events are consistently recorded (product views, add to cart, checkout steps, purchases, and important form submissions) along with the dimensions that make segmentation possible (device, source/medium, and basic customer status like new vs returning). If those fundamentals are shaky, everything else becomes debate.

Numbers alone also leave me guessing about “why,” so I pair quantitative tracking with behavioural observation. Seeing where people hesitate, rage-click, abandon forms, or get stuck in the checkout can turn a vague conversion drop into a concrete issue I can reproduce and fix.

Finally, I keep reporting simple enough that everyone looks at the same truth each week. One shared view of the north star metric, the core funnel, and the key segments is usually more useful than a dozen dashboards that nobody trusts.

As the business scales, what changes isn’t the philosophy - it’s the ability to connect signals across systems (storefront, marketing, support, inventory) and across time (cohorts, repeat behaviour, lifetime value). The point is to reduce the time from “something moved” to “I know where it moved, who it affected, and what likely caused it.”

ACTING ON SIGNALS: FROM INSIGHT TO CHANGE

A signal without a response process is just trivia. The value shows up when I treat signals as prompts for action with clear ownership and a consistent cadence.

Response tiers keep teams calm: contain first, then diagnose, then validate.

I like response tiers because they reduce drama and clarify expectations. When something threatens immediate revenue (for example, checkout completion collapsing or widespread payment failures), I treat it as urgent and focus on fast containment - rollback, fix, or workaround - then do root-cause analysis after stability returns. When it’s a material issue or opportunity that persists over days (a meaningful conversion shift in a major channel, or a sudden change in returns for a core product family), I triage within a day and aim for a decision within a few days. When it’s a smaller optimisation signal (scroll depth soft on a new layout, or add-to-cart slightly down for a new category), I log it and handle it through scheduled experimentation rather than constant firefighting.
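
The three tiers can be encoded as a small triage function so the routing is consistent rather than mood-driven. The thresholds here are illustrative assumptions, not rules from the article:

```python
def triage(revenue_at_risk_pct: float, days_persisting: int) -> str:
    """Map a confirmed signal to a response tier (illustrative thresholds)."""
    if revenue_at_risk_pct >= 0.20:
        return "urgent: contain now, root-cause after stability returns"
    if revenue_at_risk_pct >= 0.05 and days_persisting >= 3:
        return "material: triage within a day, decide within days"
    return "optimization: log it, handle via scheduled experiments"

print(triage(0.30, 1))   # checkout collapsing -> urgent
print(triage(0.08, 4))   # persistent channel shift -> material
print(triage(0.02, 10))  # soft scroll depth -> optimization
```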

Ownership matters more than process diagrams. Before signals hit, I decide who monitors the core metrics, who can call a decision, and who can implement changes. Clarity here prevents both paralysis (“who’s on it?”) and chaos (“everyone’s on it”).

When a signal suggests a change, I resist shipping the first idea to everyone. I write a simple hypothesis, choose one primary metric and a couple of guardrails, and run a controlled rollout when possible - either an A/B test or a phased release. If the change harms key metrics beyond a pre-set threshold, I roll it back. No ego. The signal told me something was wrong; the test tells me whether my fix is right.

To reduce false positives, I rely on two small habits: I annotate reporting with promotions, releases, and tracking changes, and I do a quick tracking QA after releases to confirm critical events still fire. Many “bad weeks” disappear once I notice a key event stopped recording two days earlier.

ADVANCED SIGNAL STRATEGIES FOR GROWTH

Once I have clean events, stable reporting, and a habit of acting on basic signals, I can move toward signals that feel more predictive than descriptive.

One area is anticipating churn and next purchase. I can score customers using leading signals like days since last visit, changes in visit frequency, drops in email engagement, or increased browsing of help and returns content. I don’t need sophisticated modelling to start; even a simple point-based approach can highlight who is drifting so I can respond appropriately.
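
A point-based approach really can be this simple. The weights and cut-offs below are assumptions for illustration, not a validated model; the value is in surfacing who is drifting, not in the exact score:

```python
def churn_risk_points(days_since_visit, visit_freq_drop_pct,
                      email_engaged, viewed_returns_page):
    """Simple point-based churn score from leading signals
    (illustrative weights; 4+ is worth a win-back touch)."""
    points = 0
    if days_since_visit > 30:
        points += 2                  # gone quiet
    if visit_freq_drop_pct > 0.5:
        points += 2                  # visiting half as often as before
    if not email_engaged:
        points += 1                  # email engagement has dropped
    if viewed_returns_page:
        points += 1                  # browsing help/returns content
    return points

print(churn_risk_points(45, 0.6, email_engaged=False,
                        viewed_returns_page=True))  # → 6
```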

Another area is forecasting demand shifts. Historical sales patterns matter, but I get an edge by watching leading indicators that move before revenue does. Interest showing up in early campaign engagement, increases in “back in stock” intent, or rising attention to specific product themes can help me adjust expectations before lagging indicators - like shipped orders or return rates - force the adjustment.

Elasticity-style thinking helps separate “discount-driven demand” from “true demand.”

Journey analysis across devices and channels is another lever. Customers rarely follow a neat line from first click to purchase. When I can understand cross-device and cross-channel behaviour (even at a directional level), I can reinterpret which signals represent early intent. For example, a rise in wish list additions or repeated mobile browsing might be a stronger early signal for follow-up than same-day conversion.

Finally, attribution works better when it respects signals, not just last clicks. Last-click views often hide the contribution of early touches that create demand or reduce uncertainty. The model I choose should match the sales cycle: longer-consideration products may deserve more weight on first touch and mid-funnel education, while impulse purchases may lean more on late-stage reminders. If you want a pragmatic framework for this kind of thinking, Attribution for Long B2B Cycles: A Practical Model for Reality is a helpful reference point, even if your funnel is shorter.

SO, WHAT'S NEXT FOR YOUR ECOMMERCE SIGNALS?

I don’t need months to start running on signals instead of raw metrics. In a week or two, I can move from “interesting dashboards” to a working signal system by narrowing focus and tightening execution.

I begin by choosing a single north star metric I truly care about, then three supporting metrics that feed it. Next, I define a small set of candidate signals as specific patterns - tied to a segment, a timeframe, and an outcome - so I’m not reacting to vague movement. Then I sanity-check instrumentation so the core events and dimensions I rely on are actually captured. After that, I create one shared view that connects the north star to the funnel and the key segments, with clear context notes for promotions and releases. Finally, I decide how I’ll respond when a signal appears - what thresholds matter, who owns the first triage, and how I’ll validate changes.

When this rhythm settles in, “how to get signals in ecommerce” stops being abstract. My data stops shouting random numbers at me. Instead, it starts delivering specific, usable messages about what to fix, what to grow, and what to leave alone.

Andrii Daniv
Andrii Daniv is the founder and owner of Etavrian, a performance-driven agency specializing in PPC and SEO services for B2B and e‑commerce businesses.