SFTP

This page describes how to use Census with SFTP.

Getting started

This guide shows you how to use Census to connect your SFTP server to your data warehouse and create your first sync.

Prerequisites

Before you begin, you'll need the following:

  • Census account: If you don't have one already, start with a free trial.

  • SFTP server: You'll need the host address, username, and password or private key.

  • Data source credentials: Have the proper credentials to access your data source. See our docs for each supported data source for further information: Azure Synapse, Databricks, Elasticsearch, Google BigQuery, Google Sheets, MySQL, Postgres, Redshift, Snowflake, and SQL Server.

Step 1: Connect SFTP

Log into Census and navigate to the Destinations page.

  1. Click New Destination.

  2. Select SFTP from the dropdown list.

  3. Enter a Name for your destination. This is only for your reference – it can be anything that makes sense to you.

  4. Enter authentication details for your SFTP server. Host and Username are always required. If your server requires a password instead of an SSH key, enter the Password. If your server uses SSH keys, you can leave the Password blank.

  5. Click Save Connection.

  6. If you're using SSH keys to authenticate your server, download the SFTP Public Key from this screen and upload it to your server. Then, click Test to verify that the connection works.

  • If you aren't using a password for your server, Census provides an RSA key pair with an OpenSSH-formatted public key.

Your end state should look something like this: [Screenshot: Destinations page with the SFTP server set up]

Step 2: Connect your data warehouse

The steps for connecting your data warehouse will depend on your technology. See the following guides: Databricks, Google BigQuery, Google Sheets, Postgres, Redshift, and Snowflake.

After setting up your warehouse, your Destinations page should look something like this: [Screenshot: Destinations page with data warehouse and SFTP server]

Step 3: Create your model

From inside your Census account, navigate to the Models page.

When defining models, you'll write SQL queries to select the data you want to sync. This can be as simple as selecting everything in a specific database table or as complex as creating new calculated values.

  1. Enter a name for your model. You'll use this to select the model later.

  2. Enter your SQL query. If you want to test the query, use the Preview button.

  3. Click Save Model.
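
If it helps to see one, here's a minimal example model (the analytics.users table and its column names are hypothetical) that selects a few fields to land in the file on your SFTP server:

    -- Hypothetical example: select only the columns you want in the exported file
    SELECT
        id AS user_id,
        email,
        first_name,
        last_name,
        created_at
    FROM analytics.users
    WHERE email IS NOT NULL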

Step 4: Create your first sync

The sync will move data from your warehouse to a new or existing CSV file on your SFTP server. In this step, you'll define how that will work.

From inside your Census account, navigate to the Syncs page and click Add Sync.

  1. Under What data do you want to sync?, choose your data warehouse as the Connection and your model as the Source.

  2. Under Where do you want to sync data to?, choose the name you assigned in Step 1 (we used SFTP) as the Connection. Enter the File Path for the CSV file where data will sync. The path can accept variables that will populate when the sync runs; see File Path Variables below. Confirm the file path in the Template Preview field.

  3. Under How should changes to the source be synced?, Replace will be selected automatically. SFTP also supports Update or Create (see Supported Sync Behaviors below).

  4. Under Which properties should be updated?, choose whether to sync only Selected Properties or Sync All Properties. Syncing all properties will add new properties to the sync if the model changes.

  5. To test your sync without actually syncing data, click Run Test and verify the results.

  6. Click Next. This will open the Confirm Details page, where you can see a recap of your setup.

  7. If you want to start a sync immediately, set the Run a sync now? checkbox.

  8. Click Create Sync.

When configuring your sync, the page should look something like this: [Screenshot: setting up a sync to an SFTP server]

Step 5: Confirm the synced data

Once your sync is complete, it's time to check your data. Go to the specified path on your SFTP server and check that the file updated correctly.

If everything went well, that's it! You've started syncing data from your warehouse to your SFTP server! 🥳️

And if anything went wrong, contact the Census support team to get some help.

Supported Sync Behaviors

Behaviors           Supported?    Objects
Update or Create    ✅            All
Replace             ✅            All

Update or Create Syncs

Update or Create syncs upload your whole dataset on the first run and only new changes on subsequent runs. Each sync run saves to a different file. The first run saves with "full" at the end of the file name. For example, filename_12_12_23_full.csv if it runs on 12/12/2023. Later syncs save with a timestamp at the end, like filename_12_12_23_1702426195.csv, so you can see how your data changes over time.
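
For instance, after a first run on 12/12/2023 and a later run the same day, the destination folder might contain files like the following (names follow the pattern above; the exact timestamp will differ for your syncs):

    filename_12_12_23_full.csv
    filename_12_12_23_1702426195.csv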

Learn more about all of our sync behaviors on our Core Concepts page, and let us know if you want Census to support additional sync behaviors for SFTP server connections.

File Path

When setting up a sync to SFTP, you provide the file path for the file Census will create or replace. The path can include folders. Data arrives as a single file at the designated path on your server.

Variables

When defining the File Path, you can use variables that will be set when the sync runs. This allows you to create and sync to new files that reflect the date and time of the sync.

Variable    Description                          Example Values
%Y          4-digit year                         1997
%y          2-digit year                         97
%m          Month with zero padding              07, 12
%-m         Month without zero padding           7, 12
%d          Day with zero padding                03, 23
%-d         Day without zero padding             3, 23
%H          Hour (24-hour) with zero padding     08, 18
%k          Hour (24-hour) without zero padding  8, 18
%I          Hour (12-hour) with zero padding     08, 12
%l          Hour (12-hour) without zero padding  8, 12
%M          Minute with zero padding             04, 56
%S          Second with zero padding             06, 54

Advanced Configuration

In addition to the file path, you can configure how the data is encoded as it is written. Primarily this is a question of file format:

  • CSV - The standard comma-separated values file. You can optionally specify an alternative delimiter, such as a pipe (|), and you can enable or disable the header row.

  • TSV - A tab-separated values file. You can enable or disable the header row.

  • JSON - A single JSON array of objects.

  • NDJSON - A newline-delimited list of JSON objects.

  • Parquet - A columnar storage format that is more efficient for certain types of data.

If your configured delimiter appears in a data value, Census automatically wraps that value in double quotes. For example, Hello, world is written as "Hello, world" if the chosen delimiter is a comma.
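
To make the formats concrete, here is how a single record (with made-up columns) might look in two of them:

    CSV (header row enabled, comma delimiter):
    user_id,email,plan
    42,jane@example.com,pro

    NDJSON (one JSON object per line):
    {"user_id": 42, "email": "jane@example.com", "plan": "pro"}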

In addition to file format, you can also provide a PGP Public Key to encrypt the data before it is written to the file. This is useful for ensuring that the data is secure in transit and at rest.

Need help connecting your server?

You can send our support team an email at support@getcensus.com or start a conversation from the in-app chat.
