Importing data from Snowflake


You can import datasets from Snowflake into the RapidCanvas platform. To do this, you must establish a connection to Snowflake by providing the account details. After the connection is established successfully, you can select the warehouse and database from which you want to fetch the datasets.

To import data from Snowflake:

  1. Hover over the menu icon and select Connectors. The Data connectors page is displayed, showing the total number of connectors.

  2. Click the plus icon at the top. You can also use the +New data connector button on the workspace to create a new connection.

  3. Click the Snowflake tile.

  4. Click Create Connection. The Data connectors configuration page is displayed.

  5. Specify this information to configure the Snowflake Data connector and access the files stored in this Snowflake account:

    Name: The name of the Data connector.

    User: The name of the user account.

    Account: The name of the Snowflake account.

    Choose one of these authentication options:

    Password: The password of the user account.

    Key Authentication:

      Private key: Provide a private key to enhance security and eliminate the need for passwords.

      Passphrase (optional): Provide the passphrase only for an encrypted private key. It protects the private key associated with your Snowflake connection. During the initial connection setup, you are required to enter the passphrase. The passphrase is not transmitted to Snowflake; it adds an additional layer of security to ensure safe authentication.
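The two authentication options above can be sketched as follows. This is an illustrative sketch only: the helper `build_snowflake_params` and its argument names are hypothetical and not part of the RapidCanvas or Snowflake APIs.

```python
def build_snowflake_params(user, account, password=None,
                           private_key=None, passphrase=None):
    """Hypothetical helper mirroring the connector form above.

    Exactly one of `password` (Password option) or `private_key`
    (Key Authentication option) should be supplied.
    """
    if (password is None) == (private_key is None):
        raise ValueError("Provide either a password or a private key, not both")
    params = {"user": user, "account": account}
    if password is not None:
        params["password"] = password
    else:
        params["private_key"] = private_key
        # A passphrase applies only to an encrypted private key.
        if passphrase is not None:
            params["passphrase"] = passphrase
    return params

print(build_snowflake_params("analyst", "myorg-myaccount", password="s3cret"))
```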

  6. Click Test to check whether the connection to the Data connector can be established successfully.

  7. Click Save to save the connection details.

  8. Specify this information on the Data tab. The fields on this tab are enabled only after you save the connector details and the connection is established.

    Warehouse: The warehouse to connect to.

    Role: The role (set of permissions) assigned to the user.

    Database: The name of the database to use.

    Schema: The schema that contains the data.

    Jsonquery: The JSON query used to fetch the data you want.

  9. Click RUN QUERY to run the query and fetch the data from the database.

Once the connection is established, the data imported from Snowflake is displayed on the platform in a table format.
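The Data tab fields correspond to standard Snowflake session-context statements (`USE ROLE`, `USE WAREHOUSE`, `USE SCHEMA`) followed by the fetch query. A minimal sketch of that mapping; the helper name `data_tab_to_sql` and the sample values are illustrative, not part of RapidCanvas:

```python
def data_tab_to_sql(warehouse, role, database, schema, query):
    """Illustrative only: show how the Data tab fields map onto
    Snowflake context statements followed by the fetch query."""
    return "\n".join([
        f"USE ROLE {role};",
        f"USE WAREHOUSE {warehouse};",
        f"USE SCHEMA {database}.{schema};",  # sets database and schema together
        query.rstrip().rstrip(";") + ";",
    ])

print(data_tab_to_sql("COMPUTE_WH", "ANALYST", "SALES_DB", "PUBLIC",
                      "SELECT * FROM orders LIMIT 100"))
```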

You can manage files, datasets, and published outputs for this data connector across different tabs:

  • Data Tab: View the files retrieved from this data connector.

  • Datasets Tab: See the projects where datasets fetched from this data connector have been used.

  • Schedulers Tab: View the outputs published to this connector. When creating a scheduler, users can configure an external connector as the destination to publish the generated outputs upon job execution.
