Data model

Screens

How Sankofa uses the active screen / route as a cross-product correlation key — shared by Heatmap, Catch, Replay, Pulse, and Plan — and what your SDK needs to do for it to work.

A screen in Sankofa is the active route, view, or page the user is on at the moment an event happens. It's the single most useful cross-product correlation key the platform has — when every product agrees on what "Identify" means, the dashboard can answer questions like:

  • Which screens have the most crashes this week?
  • Show me session replays of users who saw an error on Checkout.
  • Filter the NPS responses to only respondents who submitted from the onboarding flow.
  • When a Plan ticket is auto-created from a crash, what screen was the user on?

Without screen tagging, every product is an island — you can see crashes, replays, surveys, and tickets in isolation, but you can't pivot between them around a shared "the user was here" axis.

Why this matters more than it sounds

Screen-tagging is also the prerequisite for Heatmap to work at all. A heatmap is, by definition, the aggregation of clicks/scrolls/rage-taps for a specific screen — if the SDK doesn't tag the screen, the heatmap has no key to bucket the events under and the screen list stays empty. The same value powers:

Product          What screen-tagging enables
Heatmap          Per-screen aggregation. Without a tag, no heatmap can render.
Catch (errors)   Filter issues by screen. Top-screens widget on issue detail.
Replay           Filter sessions by screen. "Show me replays where this happened."
Pulse (surveys)  Target surveys by screen. Filter responses by screen.
Plan (tickets)   Tickets auto-created from Catch / Pulse inherit the source screen.
Analytics        Every event auto-stamped with $screen_name for slice-and-dice.

A screen is renamed once in the Lexicon — "Identify" → "Identity Verification" — and the change propagates to every surface that reads through it. No per-product config drift.

Setting the active screen

How the SDK learns the active screen depends on the platform. The resolution order is the same everywhere: explicit > auto-detected > unknown.
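The resolution order can be sketched as a simple fallback chain. This is an illustrative sketch, not the real SDK internals — the `resolveScreen` name and the `ScreenSources` shape are assumptions for the example:

```typescript
// Hypothetical sketch of the resolution order: an explicitly set screen wins,
// then platform auto-detection (router / Activity / UIViewController), then
// the literal "unknown" fallback.

interface ScreenSources {
  explicit?: string;      // set via a manual call in app code
  autoDetected?: string;  // inferred by the SDK's platform integration
}

function resolveScreen(sources: ScreenSources): string {
  return sources.explicit ?? sources.autoDetected ?? "unknown";
}

// resolveScreen({ explicit: "Checkout", autoDetected: "CheckoutActivity" }) → "Checkout"
// resolveScreen({ autoDetected: "CheckoutActivity" }) → "CheckoutActivity"
// resolveScreen({}) → "unknown"
```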

How the value flows

Once the SDK knows the screen, it ships on every cross-product surface:

  1. Catch error events

    Top-level screen field on CatchEventV1. ClickHouse-indexed so the issues list can filter and the per-issue Top Screens widget can aggregate without scanning breadcrumbs.

  2. Pulse response submissions

    Top-level screen field on the submit payload, stored as a first-class column in both Postgres and the ClickHouse mirror.

  3. Analytics events (web + RN auto)

    Auto-attached as $screen_name on every track() call, so every analytics query can pivot on screen.

  4. Heatmap

    Reads screen_name directly — the entire heatmap is keyed on it.

  5. Replay

    Tagged on each session segment, so the replay list supports ?screen= filtering and cross-product back-links from Catch / Heatmap land here pre-filtered.

  6. Plan tickets (auto-created from Catch / Pulse)

    Inherits screen from the source issue / response so triage knows where the user was without leaving the ticket.
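The fan-out above boils down to one value stamped into several payload shapes. A minimal sketch, assuming illustrative payload types (the real `CatchEventV1` schema is not reproduced here):

```typescript
// One resolved screen value, stamped onto two of the surfaces listed above:
// a top-level `screen` field on error payloads (indexed, not buried in
// breadcrumbs) and a `$screen_name` property on analytics events.

type AnalyticsEvent = { name: string; props: Record<string, unknown> };

function stampAnalytics(event: AnalyticsEvent, screen: string): AnalyticsEvent {
  return { ...event, props: { ...event.props, $screen_name: screen } };
}

function stampErrorEvent(error: { message: string }, screen: string) {
  return { ...error, screen }; // first-class field, filterable in the issues list
}

const e = stampAnalytics({ name: "add_to_cart", props: { sku: "A1" } }, "Checkout");
// e.props.$screen_name === "Checkout"
```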

The Lexicon

Every screen the SDK reports is auto-discovered into the Lexicon at Settings → Lexicon → Screens. From there you can:

  • Rename internal screen names to human-friendly labels (signup_step_2 → Sign Up — Verify Email). The rename flows through every product without per-surface config.
  • Hide internal / debug screens you don't want polluting dashboards.
  • Annotate with descriptions so anyone landing on the heatmap or issue list knows what the screen is.

The Lexicon is project-scoped and environment-aware, so test-mode discovery doesn't leak into live dashboards.
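The read-through behavior can be sketched as a pure function: rename and hide rules live in one place, and every surface resolves display names through them. The data shapes below are assumptions for illustration, not Sankofa's actual schema:

```typescript
// Hypothetical Lexicon read-through: hidden screens are dropped, renamed
// screens get their label, everything else passes through unchanged.

interface LexiconEntry { label?: string; hidden?: boolean }

function displayScreens(
  raw: string[],
  lexicon: Record<string, LexiconEntry>,
): string[] {
  return raw
    .filter((name) => !lexicon[name]?.hidden)
    .map((name) => lexicon[name]?.label ?? name);
}

const lexicon: Record<string, LexiconEntry> = {
  signup_step_2: { label: "Sign Up — Verify Email" },
  debug_menu: { hidden: true },
};
const shown = displayScreens(["signup_step_2", "debug_menu", "Checkout"], lexicon);
// → ["Sign Up — Verify Email", "Checkout"]
```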

Time on screen

The dashboard automatically computes dwell time — how long users actually spend on each screen — from the $screen_view events the SDK already sends. No SDK changes, no extra config; it's a derived metric off the analytics stream.

What you get:

  • Avg / p50 / p95 dwell per screen (last 30d), shown inline on the Lexicon Screens tab and on the Catch issue detail's Top Screens widget.
  • Total dwell across all users — the "where is your audience's time going" view.
  • Top exit screens — for each screen, where users go next (or drop off).
  • Daily trend — does avg dwell on Checkout drop after a release? The trend chart on the screen detail surfaces it.

How dwell is computed:

    dwell(screen_view_i) = ts(next screen_view in same session) - ts(screen_view_i)

The last screen of a session has no successor, so we cap it at 30 minutes — matching the SDK's default session-timeout — to keep idle backgrounded sessions from inflating the average. Sessions naturally truncate most tails; the cap is a defensive ceiling.
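The formula plus the tail cap can be sketched as follows. The input shape and the choice to measure the last view against the session's final activity timestamp are assumptions for illustration:

```typescript
// Dwell per $screen_view: gap to the next view in the same session; the
// session's last view has no successor, so its tail is capped at the
// 30-minute session-timeout ceiling.

interface ScreenView { screen: string; ts: number } // ms, one session, sorted by ts

const CAP_MS = 30 * 60 * 1000; // 30-minute defensive ceiling

function dwellTimes(
  views: ScreenView[],
  sessionEndTs: number, // timestamp of the session's last activity (assumed input)
): { screen: string; dwellMs: number }[] {
  return views.map((v, i) => {
    const next = views[i + 1];
    const dwellMs = next
      ? next.ts - v.ts
      : Math.min(sessionEndTs - v.ts, CAP_MS);
    return { screen: v.screen, dwellMs };
  });
}
```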

The /api/v1/screens/dwell endpoint exposes the rollup directly if you want to feed dwell stats into your own dashboards, weekly digest, or alerting.
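A consuming sketch, for feeding the rollup into a weekly digest. The endpoint path is from the docs above, but the response shape and the bearer-token auth scheme are assumptions — check your project's API reference:

```typescript
// Pull the dwell rollup and format it for a digest. Response shape assumed.

interface DwellRow { screen: string; avgSec: number; p95Sec: number }

function formatDigest(rows: DwellRow[]): string[] {
  return rows.map((r) => `${r.screen}: avg ${r.avgSec}s, p95 ${r.p95Sec}s`);
}

async function fetchDwell(baseUrl: string, apiKey: string): Promise<DwellRow[]> {
  const res = await fetch(`${baseUrl}/api/v1/screens/dwell`, {
    headers: { Authorization: `Bearer ${apiKey}` }, // auth scheme is an assumption
  });
  if (!res.ok) throw new Error(`dwell fetch failed: ${res.status}`);
  return res.json();
}
```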

Live presence ("X live now")

Beyond dwell, the SDK pings a lightweight /api/v1/screens/heartbeat endpoint every ~15 seconds while a screen is foregrounded. The dashboard subscribes to a Server-Sent Events stream and renders a live count next to every screen on the Heatmap viewer and the Lexicon Screens tab.

Properties of the live signal:

  • 30-second TTL. A user not seen for 30s is considered gone — handles tab-close, app-backgrounded, and abrupt network drops uniformly.
  • Visibility-gated. Browser tabs use the visibility API; native apps use AppState / ProcessLifecycleOwner / WidgetsBindingObserver. Backgrounded apps stop pinging immediately so we don't paint stale "still live" badges.
  • Best-effort. Failed heartbeats are silent; presence is decoration, not correctness. The TTL handles whatever the network can't.
  • Single in-process map by default. No Redis required for single-pod deployments. Multi-pod setups can swap in a Redis backplane via the same pattern Vision Realtime uses.
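The single-pod store described above amounts to a map of last-seen timestamps with the TTL applied at read time. A minimal sketch under those assumptions (the class and key format are illustrative, not the server's actual code):

```typescript
// In-process presence store: heartbeats write a timestamp; a user counts as
// live only if seen within the 30-second TTL. A Redis backplane would replace
// the Map for multi-pod deployments.

const TTL_MS = 30_000;

class PresenceStore {
  private lastSeen = new Map<string, number>(); // key: `${screen}:${sessionId}`

  heartbeat(screen: string, sessionId: string, now = Date.now()): void {
    this.lastSeen.set(`${screen}:${sessionId}`, now);
  }

  liveCount(screen: string, now = Date.now()): number {
    let count = 0;
    for (const [key, ts] of this.lastSeen) {
      if (key.startsWith(`${screen}:`) && now - ts <= TTL_MS) count++;
    }
    return count;
  }
}
```

Expiry is lazy — stale entries are simply skipped at read time — which keeps the heartbeat write path to a single map insert.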

The heartbeat is on by default in every SDK — no opt-in. If you need to disable it (a strict CSP, a regulated environment, a billing-tier downgrade), the SDK exposes a config flag per platform.
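On the client side, the visibility-gated loop described above can be sketched like this. The loop class is illustrative; only the `/api/v1/screens/heartbeat` path and ~15-second cadence come from the docs:

```typescript
// Heartbeat loop: ping immediately on start, then every ~15s; stop clears the
// timer so a backgrounded app sends nothing further.

class HeartbeatLoop {
  private timer?: ReturnType<typeof setInterval>;
  constructor(private send: () => void, private intervalMs = 15_000) {}

  start(): void {
    if (this.timer !== undefined) return;
    this.send();
    this.timer = setInterval(this.send, this.intervalMs);
  }

  stop(): void {
    if (this.timer !== undefined) { clearInterval(this.timer); this.timer = undefined; }
  }

  get running(): boolean { return this.timer !== undefined; }
}

// Best-effort send: failures are swallowed (presence is decoration).
const loop = new HeartbeatLoop(() => {
  fetch("/api/v1/screens/heartbeat", { method: "POST" }).catch(() => {});
});

// Browser wiring (native apps would hook AppState / ProcessLifecycleOwner):
// document.addEventListener("visibilitychange", () =>
//   document.visibilityState === "visible" ? loop.start() : loop.stop());
```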

Common pitfalls
