Experiments and Analysis


Last updated 3 months ago


Census provides several tools to help you manage and analyze your segments after creation. This guide walks through the different tabs available in the Segment interface and their functionality. For information about creating segments and using the Definition tab, see the Getting Started guide.

Experiments Tab

The Experiments tab allows you to create and manage split tests to measure the impact of your campaigns.

Setting up Split Tests

A split test randomly divides the people in your segment into one or more treatment cohorts plus a control cohort, which serves as a baseline for measuring the impact of your campaign. Each cohort has a percentage size you control, letting you set the relative sizes of each cohort.

Split testing enables a number of marketing efforts:

  • Create a simple treatment and control group, and measure the lift in conversion rate for users that received the treatment.

  • Divide a segment into multiple treatments for different channels and compare relative conversion rates of the same segment across each.

  • Use a treatment and control group to "ramp up" a very large campaign over time. Start with 10% of the segment and grow the treatment once you're confident it's performing as expected.

Cohort Behavior and Management

  • Cohort Stability: Within a segment, users are deterministically assigned to cohorts and will not change unless cohort percentages are modified

  • Minimum Requirements: Experiments must maintain at least 2 cohorts at all times

  • Cohort Deletion:

    • When a cohort is deleted, its users are distributed evenly among the remaining cohorts

    • The total percentage allocation must always equal 100%

    • If there are only 2 cohorts, neither can be deleted (must maintain minimum of 2)

  • Cohort Size Changes:

    • When reducing a cohort's percentage (e.g., 50% to 25%), affected users are distributed evenly among other cohorts

    • Distribution is approximately even but not guaranteed to be exactly equal

    • The same users will consistently move together when cohort percentages are adjusted

Best Practice: When running experiments that involve multiple phases (e.g., testing with 5% then expanding to 100%), consider using your destination platform's deduplication features or create new segments with exclusion rules to prevent double-targeting of users.
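The deterministic, percentage-based assignment described above can be sketched with a stable hash of the user ID. This is illustrative only, not Census's actual implementation; the function name and salt are hypothetical:

```python
import hashlib

def assign_cohort(user_id: str, cohorts: list[tuple[str, float]],
                  salt: str = "experiment-1") -> str:
    """Deterministically assign a user to a cohort.

    `cohorts` is a list of (name, percentage) pairs summing to 100.
    The same user_id always hashes to the same point in [0, 100),
    so assignments are stable unless the percentage boundaries move.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    point = int(digest, 16) % 10000 / 100  # stable point in [0, 100)
    cumulative = 0.0
    for name, pct in cohorts:
        cumulative += pct
        if point < cumulative:
            return name
    return cohorts[-1][0]  # guard against floating-point edge cases

cohorts = [("Treatment A", 25), ("Treatment B", 25), ("Control", 50)]
assign_cohort("user-42", cohorts)  # same result on every call
```

Because only the boundary positions matter, shrinking a cohort moves the same band of users together, which matches the behavior described above.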

Destinations Tab

The Destinations tab shows all active syncs from your segment and provides one-click sync toggles for easy management. Each cohort can be synced to their own set of destinations, including back to your warehouse.

Census will capture and report the audience match rates for segments synced to various ad destinations. For more information on how to view match rates and which services are supported, see Audience Match Rates.

Performance Tab

Note: The Performance tab is only available for Experiments. You must set up an Experiment with cohorts to use these features.

The Performance tab allows you to track and analyze the results of your experiments over time. You can create customized metrics to compare performance across your control and treatment cohorts.

Setting Up Metrics

To measure experiment performance, you first need to define the metrics that Census should measure. Each metric consists of:

  • The name of the metric, which will appear in the reporting UI.

  • The dataset it should be applied to. This should be the exact same dataset that you plan on segmenting.

  • The aggregation to apply: either a count of the number of filtered events, or a sum of a particular attribute on those events.

  • The related events dataset that should be aggregated over to calculate useful performance metrics. These are typically conversion-type events such as purchases or views of a particular bottom-of-funnel web page.

  • Optionally, a filter can be applied as well, so that the count/sum applies only to a subset of the events dataset.

Important: Create metrics before starting campaigns. Metrics are calculated only from the day they are created and are not backfilled historically; if you add a metric after an experiment starts, calculations begin from that day onward.

Metrics are defined on Event-type datasets that are related to other datasets. If you don't see an expected option when creating a metric, make sure your datasets' types and relationships are configured correctly. See Dataset Core Concepts for more details.

Understanding Performance Metrics

Normalized Values

When viewing performance metrics across different cohorts with varying sizes, Census provides "normalized" values to make comparisons more meaningful. A normalized value shows what the metric would be if each cohort represented 100% of the population. For example, if you have:

  • Treatment A: 25% of users

  • Treatment B: 25% of users

  • Control: 50% of users

The normalized values would be:

  • Treatment A: Raw result × 4

  • Treatment B: Raw result × 4

  • Control: Raw result × 2

This normalization makes it easier to compare the relative performance of cohorts regardless of their size differences.
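Normalization is simply scaling each cohort's raw result by 100 over its percentage of the population; a quick illustration (the numbers are hypothetical):

```python
def normalize(raw_value: float, cohort_percentage: float) -> float:
    """Scale a cohort's raw metric to what it would be at 100% of the population."""
    return raw_value * (100 / cohort_percentage)

normalize(200, 25)  # Treatment A at 25% -> 800.0
normalize(500, 50)  # Control at 50% -> 1000.0
```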

Uplift Calculation

The uplift percentage shown in performance tracking represents the percent difference between a cohort's normalized value and the control group's normalized value. It is calculated as:

uplift = ((cohort_normalized_value - control_normalized_value) / control_normalized_value) × 100
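As an illustrative sanity check, the formula can be expressed directly in code (the input values are hypothetical):

```python
def uplift(cohort_normalized: float, control_normalized: float) -> float:
    """Percent difference between a cohort's normalized value and the control's."""
    return (cohort_normalized - control_normalized) / control_normalized * 100

uplift(1200.0, 1000.0)  # -> 20.0 (the treatment outperformed control by 20%)
```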

Activity Tab

The Activity tab provides two key pieces of information:

  1. A chart showing how your segment size has changed over time

  2. A history log of all changes made to the segment, including who made each change

This tab is particularly useful for monitoring segment health and auditing changes to segment definitions.

Comparison Tab

The Comparison tab allows you to compare your segment against any other existing segment. This is particularly useful when you have many segments and want to understand overlap between audiences or verify that a new segment is sufficiently different from existing ones.

Additional Analysis Tools

Warehouse Writeback

To take advantage of Warehouse Writeback for your analysis, ensure it is enabled on your warehouse connection.

For the deepest level of analysis, Warehouse Writeback logs all sync activity back to your data source. Each cohort can be synced to their own set of destinations, including back to your warehouse, and users appearing in each cohort are available in Warehouse Writeback. You can use this data to determine when users were added to and removed from segments in each of the destinations your segment is synced to, or the relative conversion performance of users across cohorts.