Bring your own GCS Bucket

This feature is available to Census Paid Plans. Please contact your Census account representative for details before proceeding with these steps.

Buckets are used for two purposes:

  1. Temporary storage for warehouse unloads during sync runs

  2. Record-level sync tracking for observability

If you bring your own GCS bucket:

  1. It will only be used for warehouse unloads from BigQuery and select other warehouses. Please contact support for the full list, or if you'd like to use your GCS bucket as temporary unload storage for every warehouse.

  2. It will be used as the record-level sync tracking storage for every sync run, regardless of source or destination.

After following the steps below, you must provide your Census support representative with the following details. Please contact support via your established channels or at support@getcensus.com. Your Census representative will need:

  1. GCP Project ID

  2. GCS Bucket Name

  3. GCP Service Account Email

Configuring a bucket for Sync Tracking

1. Create a GCS Service Account

Log into Google Cloud Console and navigate to IAM & Admin -> Service Accounts, then:

  1. Click Create Service Account

  2. Enter a Service Account Name

  3. Keep the generated Service Account ID, generate a new one, or enter your own

  4. Copy the Service Account ID

  5. Configure the project permissions for the Service Account
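
If you prefer to script this step instead of clicking through the console, the sketch below uses the IAM API via the google-api-python-client library. It is a minimal illustration, not an official Census script: it assumes you are already authenticated with permission to manage service accounts in the project, and the account ID "census-gcs" and display name are placeholders.

    # Minimal sketch: create the Service Account programmatically.
    # "census-gcs" is a placeholder account ID -- use your own.
    import google.auth
    from googleapiclient import discovery

    credentials, project_id = google.auth.default()
    iam = discovery.build("iam", "v1", credentials=credentials)

    account = iam.projects().serviceAccounts().create(
        name=f"projects/{project_id}",
        body={
            "accountId": "census-gcs",
            "serviceAccount": {"displayName": "Census GCS Bucket"},
        },
    ).execute()

    # This email is the GCP Service Account Email that Census will ask for.
    print(account["email"])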

2. Generate a Service Account Key

  1. Log into Google Cloud Console and navigate to IAM & Admin -> Service Accounts

  2. Click the name of your Service Account

  3. Open the Keys tab and click Add Key -> Create New Key

  4. Choose the key type "JSON" and click Create

  5. Save the generated .json file in a secure location.
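
The key can also be generated with the IAM API. The following is a hedged sketch that reuses the placeholder "census-gcs" service account from the previous step; the returned privateKeyData field is base64-encoded JSON and must be decoded before saving.

    # Minimal sketch: create a JSON key for the service account via the IAM API.
    import base64

    import google.auth
    from googleapiclient import discovery

    credentials, project_id = google.auth.default()
    iam = discovery.build("iam", "v1", credentials=credentials)

    sa_email = f"census-gcs@{project_id}.iam.gserviceaccount.com"  # placeholder email
    key = iam.projects().serviceAccounts().keys().create(
        name=f"projects/{project_id}/serviceAccounts/{sa_email}",
        body={},  # an empty body defaults to a JSON (Google credentials file) key
    ).execute()

    # privateKeyData is base64-encoded; decode it and store the file securely.
    with open("census-gcs-key.json", "wb") as f:
        f.write(base64.b64decode(key["privateKeyData"]))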

3. Create a GCS Bucket

  1. Log into Google Cloud Console and navigate to your Project

  2. Go to Cloud Storage -> Buckets and click Create

  3. Give your bucket a name

  4. Configure the bucket settings:

    1. Location: choose the same location setting as your BigQuery project

    2. Storage Class: Standard

    3. Check Enforce Public Access Prevention

    4. Access Control: Uniform

    5. Leave versioning and encryption on their defaults.

    6. Lifecycle Policy: enable a lifecycle rule with:

      1. Action: Delete object

      2. Object conditions: 14+ days since the object was created, and object name matches the prefix 'sync-unloads/'.

  5. Once the bucket has been created, navigate to Permissions

  6. Grant your Service Account the following permissions:

    1. Storage Object User

    2. Storage Object Creator
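
For reference, the same bucket configuration can be expressed with the google-cloud-storage Python library. This is a sketch under assumptions, not an exact equivalent of the console flow: the project ID, bucket name, location, and service account email are placeholders, and the matches_prefix lifecycle condition requires a reasonably recent library version.

    # Minimal sketch: create and configure the bucket, then grant the roles above.
    # Project ID, bucket name, location, and service account email are placeholders.
    from google.cloud import storage

    client = storage.Client(project="your-gcp-project")

    bucket = storage.Bucket(client, name="your-census-bucket")
    bucket.storage_class = "STANDARD"
    bucket.iam_configuration.uniform_bucket_level_access_enabled = True  # Access Control: Uniform
    bucket.iam_configuration.public_access_prevention = "enforced"       # Enforce Public Access Prevention
    # Lifecycle rule: delete objects under 'sync-unloads/' after 14 days
    bucket.add_lifecycle_delete_rule(age=14, matches_prefix=["sync-unloads/"])
    client.create_bucket(bucket, location="US")  # match your BigQuery location

    # Grant the service account the two roles listed above.
    member = "serviceAccount:census-gcs@your-gcp-project.iam.gserviceaccount.com"
    policy = bucket.get_iam_policy(requested_policy_version=3)
    for role in ("roles/storage.objectUser", "roles/storage.objectCreator"):
        policy.bindings.append({"role": role, "members": {member}})
    bucket.set_iam_policy(policy)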

Configuring a bucket for temporary warehouse unloads from BigQuery

When creating your BigQuery connection:

  1. In Census, create a connection to BigQuery using the appropriate Project ID and Location.

  2. Paste the contents of the Service Account Key JSON file (from above) into the Service Account Key JSON (Optional) box.

  3. Click Connect and Confirm.

    Note: There is no need to copy and run the GCP Console commands provided in-app when following this setup guide. Please ignore them.


Last updated 7 months ago
