Published math

Brand health scores with the formulas on the page.

Competitor tools sell composite scores and keep the math private. When your Legal team asks where a number comes from, you can't answer. We publish ours on a single methodology page — every composite metric has its inputs, weightings, value range, and interpretation written out.

The four composite scores

Brand Control. MAP Discipline. Fragmentation. Volatility.

Brand Control Score

Per-brand 0-100 composite summarizing the visibility and integrity of a brand's presence on Amazon. Higher = healthier brand presence. Inputs: authorized-seller density, MAP discipline, fragmentation, and operator stability, weighted and normalized nightly. Surfaced on the Brand Dossier and in monthly prospect reports.
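The exact inputs and weights are published on the methodology page; as a rough sketch of what "weighted and normalized" means, here is a hypothetical composite with placeholder weights (the names and numbers below are illustrative, not the published formula):

```python
from typing import Dict

# Illustrative placeholder weights; the real weightings are on the
# methodology page. Each input is assumed pre-normalized to 0-1.
WEIGHTS = {
    "authorized_seller_density": 0.35,
    "map_discipline": 0.30,
    "fragmentation": 0.20,
    "operator_stability": 0.15,
}

def brand_control_score(inputs: Dict[str, float]) -> float:
    """Weighted sum of normalized inputs, scaled to 0-100."""
    raw = sum(WEIGHTS[name] * inputs[name] for name in WEIGHTS)
    return round(raw * 100, 1)

print(brand_control_score({
    "authorized_seller_density": 0.9,
    "map_discipline": 0.8,
    "fragmentation": 0.6,
    "operator_stability": 0.7,
}))  # → 78.0
```

Because every input is normalized before weighting, the score stays comparable across brands of very different sizes.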

MAP Discipline Score

How tightly a brand's pricing has held against its MAP floors historically. Computed from below-MAP events, authorized-vs-unauthorized ratios, and the standard deviation of price-drop patterns over the available window. High = stable enforcement; low = leaky discipline, signaling an entry opportunity for an authorized partner.

Fragmentation

One minus the combined buy-box share of the top five operators on a brand or within a category. Higher = more room to enter (no dominant gatekeeper). Lower = consolidated around a few operators. Interpretation differs by audience: brand-protection uses it to prioritize enforcement focus; sourcing teams use it to spot entry-ready brands.
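The formula above is simple enough to sketch directly. A minimal illustration (operator shares here are made-up example data):

```python
def fragmentation(buybox_shares: list, top_n: int = 5) -> float:
    """One minus the combined buy-box share of the top-N operators.

    buybox_shares: each operator's fraction of buy-box wins,
    summing to at most 1.0. Illustrative sketch of the published
    formula, with top_n defaulting to the top-5 cutoff.
    """
    top = sum(sorted(buybox_shares, reverse=True)[:top_n])
    return round(1.0 - top, 3)

# Consolidated brand: five operators hold 90% of the buy box,
# leaving little room for a new entrant.
print(fragmentation([0.40, 0.20, 0.15, 0.10, 0.05, 0.05, 0.05]))  # → 0.1
```

A score near 1 means buy-box wins are spread across many small operators; near 0 means a handful of gatekeepers control the brand.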

Category Volatility

Per-category 0-100 composite of seller turnover, brand turnover, price volatility, and BSR-rank volatility. Answers the sourcing question “is this category stable enough to build distribution relationships in, or is it churning beneath our feet?”
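As a sketch of how the four inputs might combine (the actual normalization and weights are on the methodology page; the coefficient-of-variation choice and equal weights below are assumptions for illustration only):

```python
import statistics

def price_volatility(prices: list) -> float:
    """Coefficient of variation of prices, one candidate way to
    normalize price movement to a unitless volatility input.
    Assumption: the published formula may normalize differently."""
    return statistics.stdev(prices) / statistics.mean(prices)

def category_volatility(seller_turnover: float, brand_turnover: float,
                        price_vol: float, bsr_vol: float) -> float:
    """Equal-weight mean of four 0-1 inputs, scaled to 0-100.
    Equal weights are illustrative, not the published weighting."""
    mean = (seller_turnover + brand_turnover + price_vol + bsr_vol) / 4
    return round(100 * mean, 1)

print(category_volatility(0.2, 0.1, 0.3, 0.4))  # → 25.0
```

A low composite suggests a category stable enough to build distribution relationships in; a high one suggests churn across sellers, brands, prices, and rank.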

Why the math matters

Defensible, not defensive.

  • Legal review passes faster. When Counsel asks “what does Brand Control Score measure,” we hand them a URL. Methodology lives on the public methodology page, and every HelpTip tooltip in the app anchors back to the same section.
  • Auditable reporting. Quarterly reviews, board updates, and insurance submissions cite the score and the formula, not a black-box number.
  • Cross-workspace comparability. Same formula applied to every brand in our universe means scores are comparable across your portfolio and across prospects — not calibrated per-account.

Full per-metric formulas live on the methodology page.

Scoped to a brand you care about

See the scores on your own portfolio.