Connectors overview

The first step in creating machine learning models is to import a dataset. You can import data files from your local system, or establish a connection to in-house data connectors such as Google Cloud Platform (GCP), MongoDB, MySQL, Amazon S3, Azure Blob Storage, PostgreSQL, Amazon Redshift, Snowflake, or Fivetran connectors. Once the connection to a data connector is established, you can import the data stored in its storage account. The imported files can then be added to the canvas to build flows and make predictions. After running the data transformations, you can export the generated output back to the configured connector.
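For a sense of what an import from one of these connectors involves under the hood, the sketch below downloads a file from Amazon S3 using boto3. This is an illustrative stand-in, not the platform's implementation; the bucket name, object key, and file paths are placeholders.

```python
import boto3

# Credentials come from the environment, ~/.aws/credentials, or an IAM role;
# RapidCanvas handles this for you once the Amazon S3 connector is configured.
s3 = boto3.client("s3")

# Download one object from the bucket to the local filesystem.
s3.download_file(
    Bucket="example-bucket",    # placeholder bucket name
    Key="datasets/sales.csv",   # placeholder object key
    Filename="sales.csv",       # local destination path
)
```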

You can upload files containing categorical, numeric, time-series, and text data. The platform supports the following file formats: .csv, .xls, .xlsx, and .parquet.
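As an optional local check before uploading, the following pandas helper reads any of the four supported formats so you can inspect the column types. This is a sketch outside the platform; it assumes pandas is installed (plus xlrd/openpyxl for Excel and pyarrow for Parquet), and sales.csv is a placeholder file name.

```python
from pathlib import Path

import pandas as pd

# Map each supported extension to the pandas reader that handles it.
READERS = {
    ".csv": pd.read_csv,
    ".xls": pd.read_excel,        # legacy Excel files need xlrd
    ".xlsx": pd.read_excel,       # needs openpyxl
    ".parquet": pd.read_parquet,  # needs pyarrow or fastparquet
}

def load_dataset(path: str) -> pd.DataFrame:
    """Load a file in any platform-supported format, or raise if unsupported."""
    suffix = Path(path).suffix.lower()
    if suffix not in READERS:
        raise ValueError(f"Unsupported file format: {suffix}")
    return READERS[suffix](path)

# Placeholder file name; inspect categorical/numeric/text columns before uploading.
df = load_dataset("sales.csv")
print(df.dtypes)
```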

Connectors dashboard

This is the page you see when you click Data connectors in the left navigation. It gives you quick access to the list of data connectors available in this tenant. Each card shows the total number of projects in which the data connector is used, the creation date of the data connector, and the source from which the data is imported.

Various sections on the connectors dashboard

This section describes the options available on the connectors dashboard:

Filter: You can filter the data connectors by type, such as Google Cloud Platform (GCP), MongoDB, MySQL, Amazon S3, Azure Blob Storage, Amazon Redshift, Snowflake, and Fivetran connectors. Select the check box corresponding to the data connector type you want to filter by. You can also see the number of data connectors created for each type.

+: Use this option to create a new data connector.

Switch from list view to card view: Use this option to display the data connectors as cards instead of a list.

Switch from card view to list view: Use this option to display the data connectors as a list instead of cards.

Search: Use the search bar at the top of the Data connectors dashboard to find a specific data connector in the list.

Editing the details of a Data connector

Use this procedure to modify the details of a data connector. In the card view, use the ellipsis icon to edit the details; in the list view, use the pencil icon corresponding to the data connector you want to modify.

  1. Select the data connector that you want to modify.

  2. Click the ellipsis icon on the data connector widget or card and select Edit Data connector.

  3. Modify the required details and click Save.

Deleting a data connector

Use this procedure to delete a data connector created in a tenant.

  1. Select the data connector that you want to delete.

  2. Click the ellipsis icon on the data connector widget or card and select Delete data connector.

  3. A dialog box prompts you to confirm the action. Click Delete to permanently remove the data connector from the list.

Note: Click Cancel to discard the action.
