I have optimized 14,000+ hero images and reviewed 50,000+ Amazon listings, and the single most common reason brands declare a hero image test a "winner" when it isn't — or a loser when it actually won — is that they are dividing clicks by the wrong number.
Amazon gives you at least three different denominators you can use to measure hero image CTR: Glance Views, Sessions, and impressions from Search Query Performance. They are not interchangeable. They do not move together. And in 2026, with Rufus, Sponsored TV halos, and search-page reshuffles inflating one number but not the others, the gap between them is wider than most operators realize.
If you are running hero image A/B tests using the wrong denominator, you are not measuring CTR. You are measuring noise wearing a CTR costume.
This is the protocol I use across every client A/B test at Aspi — what each denominator actually counts, when each one lies, and how to pick the right one for the test you are running.
What Amazon means by each metric
You cannot pick the right denominator until you know what is in each bucket. The definitions Amazon publishes are vague on purpose — the actual mechanics matter more.
Glance Views (Brand Analytics, Catalog Performance): Every time the detail page is rendered for a unique customer in a 24-hour window, counted once. Includes ad clicks, organic clicks, Rufus referrals, deep links from off-Amazon, and Sponsored Display retargeting clicks that opened the page. Does not include search impressions where the listing was shown but not clicked.
Sessions (Business Reports): A unique customer-device pairing visiting any page that surfaces the ASIN — search pages where the ASIN appeared, browse pages, the detail page itself — within a 24-hour window. Sessions is broader than Glance Views by definition, because a session counts even if the customer never clicked into the PDP.
Impressions (Search Query Performance / Brand Analytics): Number of times the ASIN appeared in a search result page for a given query. Closest to a true "impression" denominator for organic search CTR. Does not include browse paths, Rufus, or off-Amazon referrals.
So: Glance Views = page renders. Sessions = ASIN exposures across surfaces. SQP Impressions = search-result exposures for a specific query.
Different numerators, different denominators, different stories.
Why the gap matters more in 2026 than ever before
Three things changed in the last 12 months that widened the spread between these metrics:
- Rufus deflects search-page clicks into PDP renders without an impression counted in SQP. When a customer asks Rufus a question and Rufus surfaces your ASIN in its answer card, the resulting PDP visit counts in Glance Views and Sessions but not in SQP impressions for any query. We see Rufus-referred Glance Views running 8–18% of total Glance Views on Rufus-friendly listings, with zero counterpart in SQP.
- Sponsored TV and off-Amazon retargeting added Glance View volume without adding ad-attributed clicks. When a Sponsored TV viewer searches your brand later that day, the resulting session shows up in Sessions and Glance Views — but its ad attribution, if any, lands in Sponsored Brands or Sponsored Display attribution windows, often not in your search ad reports.
- Search results are paginated differently for mobile in 2026. The "first screen" on mobile now shows roughly 2.4 organic + ad results before the fold, down from 3.1 a year ago. That compresses CTR on the top results — but only in SQP, not in Glance Views or Sessions.
If your A/B test compares "before" data from January and "after" data from May using the same denominator definition, you might be measuring the algorithm shift more than the hero image change.
The denominator-mismatch failure modes
Here are the four most common ways I see CTR measurement break, in order of frequency.
Failure mode 1: Sessions as the CTR denominator for a hero image test
Sessions counts ASIN exposures across surfaces — including the search page where the customer scrolled past your listing without clicking. If you use Sessions as the denominator and the numerator is "PDP visits," you are measuring something closer to click-through rate from impression to PDP, which is what you want for hero image tests.
But if your numerator is "buy box visits" or "purchases" and your denominator is Sessions, you are measuring a hybrid metric that confounds hero image CTR with PDP conversion. A hero image change that increases CTR by 12% but loses 4% on PDP CVR (because the new image overpromises) will look like a +7.5% lift in your Sessions-based read, since 1.12 × 0.96 ≈ 1.075. That is not a win. That is two opposing forces partially canceling.
Fix: For hero image tests, separate the two:
- CTR: PDP visits ÷ Sessions (exposure to page view)
- CVR: purchases ÷ PDP visits (page view to order)
Track them as two separate metrics on the same test window.
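A minimal sketch of that separation in Python, using invented counts that roughly reproduce the +12% CTR / -4% CVR example above. The field names are placeholders, not Amazon export columns:

```python
# Sketch: track CTR and CVR as separate metrics on the same test window.
# Counts are invented; the field names are placeholders, not Amazon's schema.

def funnel_metrics(sessions: int, pdp_visits: int, purchases: int) -> dict:
    """Split the funnel so a CTR gain cannot mask a CVR loss (or vice versa)."""
    ctr = pdp_visits / sessions if sessions else 0.0     # exposure -> page view
    cvr = purchases / pdp_visits if pdp_visits else 0.0  # page view -> order
    blended = purchases / sessions if sessions else 0.0  # the hybrid to avoid
    return {"ctr": ctr, "cvr": cvr, "blended": blended}

before = funnel_metrics(sessions=40_000, pdp_visits=2_960, purchases=355)
after = funnel_metrics(sessions=40_000, pdp_visits=3_315, purchases=382)

for metric in ("ctr", "cvr", "blended"):
    lift = after[metric] / before[metric] - 1
    print(f"{metric:8s} {before[metric]:.2%} -> {after[metric]:.2%} ({lift:+.1%})")
# CTR rises ~12%, CVR falls ~4%, and the blended number shows ~+7.6%:
# two opposing forces partially canceling, exactly the failure described above.
```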
Failure mode 2: Glance Views as the denominator for a CTR test
Glance Views is page renders. It is downstream of the click. Using Glance Views as the denominator for a CTR calculation gets the funnel backwards — Glance Views is the result of the CTR, not the population from which CTR is measured.
I see this in 1 of every 5 brand audits. They compute "PDP visits / Glance Views" and call it CTR. It is not. It is roughly 100% by definition, minus some edge-case deduplication noise. They are measuring nothing.
Fix: Glance Views is a CVR denominator (purchases / Glance Views = page-render-to-purchase rate), not a CTR denominator.
Failure mode 3: SQP impressions for a hero image test on a high-Rufus-traffic listing
SQP impressions exclude Rufus referrals. On a listing where Rufus is sending 12–18% of visits, your SQP-based CTR read is missing those visitors entirely. If your hero image change moves the needle disproportionately on Rufus-referred visitors (because Rufus surfaces a thumbnail that is more or less clickable), SQP will mismeasure the change.
Fix: For listings where Rufus traffic is meaningful (you can estimate it as Glance Views minus the sum of your calculated SQP clicks and ad-attributed clicks), use a blended denominator: SQP impressions for the queries the listing ranks for, plus an estimate of Rufus-driven Glance Views, treated as separate cohorts.
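A sketch of that estimate with invented numbers. The inputs come from three different reports, and none of these variable names are Amazon API fields:

```python
# Rufus-referred Glance Views estimated as the residual left after search
# and ad clicks are subtracted. Variable names are illustrative only.

def estimate_rufus_glance_views(glance_views: int, sqp_clicks: int,
                                ad_clicks: int, other_referrals: int = 0) -> int:
    """PDP renders not explained by search or ads ~ Rufus plus untracked paths."""
    residual = glance_views - (sqp_clicks + ad_clicks + other_referrals)
    return max(residual, 0)  # deduplication noise can push the raw residual negative

glance_views = 12_400
rufus_est = estimate_rufus_glance_views(glance_views, sqp_clicks=8_900, ad_clicks=1_900)
print(f"estimated Rufus share of Glance Views: {rufus_est / glance_views:.1%}")
# ~12.9% here: squarely in the range where an SQP-only denominator mismeasures
```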
Failure mode 4: Comparing pre/post windows on different denominators
The typical version: the brand pulled the "before" data from Business Reports (Sessions-based) and the "after" data from SQP (impression-based) because that is what was open in the browser. The numbers are not comparable.
Fix: Lock the denominator at test setup. Document which report you are pulling from. Use the same denominator for the entire test window, including the pre-period baseline.
The protocol I use at Aspi for hero image CTR measurement
Every Aspi hero image A/B test runs the same five-window structure I covered in the CTR/CVR isolation protocol, but inside each window, the denominator is locked. Here is the version specific to denominator choice:
Step 1: Pick the question first, then the denominator
There are three questions a hero image test can answer:
- Question A: "Did more people who saw my listing in search click on it?" → Denominator: SQP impressions (organic + ad, segmented). Use this when the listing's traffic is dominated by branded and non-branded search and Rufus referrals are <8%.
- Question B: "Did more people who reached my PDP convert?" → Denominator: Glance Views. Use this for PDP-level CVR, not for hero CTR.
- Question C: "Did more total visits and purchases happen during the test window?" → Denominator: Sessions (with PDP visit and purchase numerators tracked separately). Use this when you want a full-funnel read and you can hold ad spend constant.
Pick one. Write it down. Pre-register.
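What "write it down" looks like in practice for us is a small config checked in next to the test data. A sketch, with keys that are our own convention rather than any Amazon schema:

```python
# Pre-registration stub, committed before the test goes live. The keys and
# the test_id format are our convention, not an Amazon schema.

TEST_PLAN = {
    "test_id": "hero-2026-04-example",       # hypothetical identifier
    "question": "A",                         # A = search CTR, B = PDP CVR, C = full funnel
    "denominator": "sqp_impressions",        # locked here; never changed mid-test
    "numerator": "sqp_clicks",
    "source_report": "Search Query Performance weekly export",
    "baseline_window_days": 28,
    "cohorts": ["branded", "nonbranded_category", "longtail",
                "rufus_estimated", "ad_attributed"],
}
```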
Step 2: Pull the baseline from the same report you will pull the result from
If the test is going to be evaluated on SQP, pull the 28-day pre-period baseline from SQP. Not from Business Reports. Not from Brand Analytics summary dashboards. Same report. Same export. Same definitions.
Step 3: Segment traffic sources before computing CTR
For Question A (the most common hero image test), I split traffic into:
- Branded search (queries containing the brand name)
- Non-branded category search (high-volume category terms)
- Long-tail non-branded
- Rufus-referred (estimated as Glance Views minus calculated organic + ad clicks)
- Ad-attributed (from SP/SB/SD reports)
The hero image affects each of these cohorts differently. Branded search CTR is usually unaffected by hero image changes (the customer already decided). Non-branded category search is where hero image changes show up first. Mixing them dilutes the signal.
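A sketch of the query-to-cohort split for the first three buckets. The brand tokens and the volume cutoff are illustrative assumptions to tune per catalog:

```python
# Classify SQP queries into the search cohorts above. BRAND_TOKENS and the
# volume cutoff are illustrative; set them per brand and category.

BRAND_TOKENS = {"acme", "acmepet"}     # hypothetical brand name variants
HIGH_VOLUME_CUTOFF = 5_000             # assumed monthly-impression threshold

def search_cohort(query: str, monthly_impressions: int) -> str:
    if set(query.lower().split()) & BRAND_TOKENS:
        return "branded"
    if monthly_impressions >= HIGH_VOLUME_CUTOFF:
        return "nonbranded_category"
    return "longtail"

print(search_cohort("acme dog probiotic", 12_000))                  # branded
print(search_cohort("dog probiotic chews", 22_000))                 # nonbranded_category
print(search_cohort("probiotic powder for senior dachshund", 310))  # longtail
```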
Step 4: Compute CTR per cohort, not on the blended number
A 7.4% blended CTR before and 8.1% after looks like a +9.5% lift. Inside the cohorts, you might find:
- Branded search CTR: 31% → 32% (noise)
- Non-branded category CTR: 4.2% → 5.1% (+21%, real)
- Long-tail CTR: 2.1% → 2.0% (flat)
- Rufus-referred (estimated): 6% → 8% (likely real but high uncertainty)
The blended number understates the real lift on the cohort that matters. Non-branded category search is the cohort that funds growth. Measure it directly.
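To make the dilution concrete, here is the same arithmetic with invented per-cohort counts chosen to match the rates above. The Rufus cohort is left out because it has no SQP impressions:

```python
# Per-cohort CTR lift vs. the blended read. Counts are invented to match the
# rates in Step 4; only search cohorts appear, since Rufus has no SQP rows.

cohorts = {
    # name: (impressions_before, clicks_before, impressions_after, clicks_after)
    "branded":             (10_000, 3_100, 10_000, 3_200),   # 31% -> 32%
    "nonbranded_category": (50_000, 2_100, 50_000, 2_550),   # 4.2% -> 5.1%
    "longtail":            (20_000,   420, 20_000,   400),   # 2.1% -> 2.0%
}

for name, (ib, cb, ia, ca) in cohorts.items():
    print(f"{name:22s} {cb/ib:.1%} -> {ca/ia:.1%} ({(ca/ia)/(cb/ib) - 1:+.1%})")

ib = sum(v[0] for v in cohorts.values()); cb = sum(v[1] for v in cohorts.values())
ia = sum(v[2] for v in cohorts.values()); ca = sum(v[3] for v in cohorts.values())
print(f"{'blended':22s} {cb/ib:.1%} -> {ca/ia:.1%} ({(ca/ia)/(cb/ib) - 1:+.1%})")
# The blend moves ~+9% while the cohort that funds growth moved +21%.
```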
Step 5: Validate against at least one control you did not change
If the hero image test runs during a week where Amazon happened to refresh its search ranking model, all your CTR numbers will move regardless of your change. The simplest control: track CTR on two control SKUs in the same category that you did not change. If the control SKUs moved by 8% in the same direction, your "+9% lift" is mostly market drift.
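The simplest version of that control check is a crude difference-in-differences. A sketch, under the assumption that the controls share the test SKU's seasonality (which is exactly what you should sanity-check):

```python
# Subtract category drift (measured on unchanged control SKUs) from the raw
# lift before crediting the hero image. Numbers are invented to match the
# scenario above; assumes controls share the test SKU's seasonality.

def drift_adjusted_lift(test_before: float, test_after: float,
                        ctrl_before: float, ctrl_after: float) -> float:
    raw = test_after / test_before - 1       # what the naive read reports
    drift = ctrl_after / ctrl_before - 1     # market-wide movement
    return raw - drift                       # lift the hero image can claim

adj = drift_adjusted_lift(test_before=0.042, test_after=0.0458,
                          ctrl_before=0.039, ctrl_after=0.0421)
print(f"drift-adjusted lift: {adj:+.1%}")  # a "+9%" raw lift shrinks to ~+1%
```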
Real example: the test that flipped from "winner" to "loser"
Client: pet supplement brand, $410K/month, 6 SKUs.
The team ran a hero image A/B test on their hero SKU — added a "vet-recommended" badge to the bottom right. After 18 days they pulled Business Reports, computed (Sessions → Order Item Count) at the SKU level, and called it a +6.8% CVR lift. They were ready to roll it out catalog-wide.
I ran the protocol above on the same data:
- The Sessions-based read was inflated by roughly 8% because the baseline and test windows were not comparable: the 2025 baseline they pulled included Prime Day, while the current 18-day window did not.
- On SQP, non-branded category CTR went from 3.9% to 3.4% — a 13% drop. The "vet-recommended" badge was too small at mobile thumbnail size and disrupted the silhouette.
- On Glance Views → purchase (true PDP CVR), the rate improved by 3.1%. The badge worked once the customer reached the PDP — but cost more clicks than it earned.
- Net: rolling this out catalog-wide would have cost the brand an estimated $11–14K/month from the CTR drop; the PDP CVR gain clawed back only part of that, leaving a clear net negative.
We reverted, shrank the badge and moved it to a corner of the image in slot 2, and the next test went +9% on non-branded CTR with no CVR loss.
The brand's original conclusion was not wrong from bad intent. It was wrong because they divided by Sessions when they should have divided by SQP impressions, and they did not segment traffic.
FAQ
Should I use Search Query Performance for every hero image test?
If your traffic is 70%+ from search and Rufus is <10% of Glance Views, yes — SQP is the right denominator. If you are heavy on Sponsored Display retargeting, off-Amazon traffic, or Rufus, use a blended approach with Glance Views as the cross-check.
Why does Sessions disagree with Glance Views even on a stable listing?
Sessions counts ASIN exposures on search and browse pages without a click. Glance Views only counts PDP renders. A listing that ranks well but does not get clicked will show high Sessions and low Glance Views — that gap is the CTR signal in a different form.
How long should a hero image test run to get a reliable CTR read?
Minimum 14 days post-settle (so 14 days after the algorithm has stabilized around the new image), at 250+ Sessions/day on the SKU. Lower volume needs longer windows or alternative testing methods like PickFu pre-tests. I cover the volume thresholds in the 250 daily sessions rule.
Does Rufus referral traffic behave like search traffic for CTR purposes?
No. Rufus-referred customers have already been pre-qualified by the assistant — they arrive at the PDP with higher purchase intent. Rufus-cohort CVR runs 1.6–2.4x higher than non-branded category search CVR in our audits, but volume is smaller. Treat it as a separate cohort.
What if I cannot get SQP data (small brand or no Brand Registry)?
Use Sessions as the denominator with the PDP visit numerator split out. The cohort segmentation is harder without SQP, but you can still segment ad-attributed (from Campaign Manager) versus organic (Sessions minus ad clicks) and get a usable signal.
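A sketch of that fallback split with invented numbers. The "organic" bucket is a residual, not a tracked channel, so treat it with appropriate suspicion:

```python
# No-SQP fallback: split Sessions into ad-attributed vs. residual "organic".
# Numbers are invented; ad_clicks would come from your ad console exports.

sessions, ad_clicks, pdp_visits = 9_200, 2_100, 1_480
organic_sessions = max(sessions - ad_clicks, 0)  # residual, not a tracked channel

print(f"ad-attributed share: {ad_clicks / sessions:.1%}")
print(f"organic (residual) share: {organic_sessions / sessions:.1%}")
print(f"overall CTR proxy (PDP visits / Sessions): {pdp_visits / sessions:.1%}")
```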
If you are measuring hero image performance and you cannot answer "which denominator and which cohort" in one sentence, you are not measuring. You are guessing with extra steps.
The fix is not more sophisticated stats. It is locking the denominator at test setup, pulling baseline and result from the same report, and segmenting traffic by cohort before computing CTR. Five minutes of setup, weeks of test integrity recovered.
If you want a second set of eyes on a hero image test before you roll a change catalog-wide, that is exactly what we do at Aspi.