Creating a scheduler

Use this procedure to create a scheduler for a scenario within a project.

  1. From the left navigation menu, select Projects. The Projects dashboard is displayed.

  2. Select the project for which you want to create a scheduler. You can create schedulers for different scenarios in a project.

    The Canvas page is displayed.

  3. Click the Scheduler tab on the project navigation menu on the left to open the Scheduler page.

  4. Do one of the following:

  • Click the plus icon in the top-right corner of the page.

  • Click +New Scheduler. This option is visible only when no schedulers have been created in this project.

A page is displayed where you can configure the scheduler to run this data pipeline at a set time interval.

  5. Click the default scheduler name at the top to give the scheduler a custom name.

  6. Select the scenario on which you want to run the project flow at the scheduled frequency.

  7. Select the scheduler frequency. Possible values:

    • Daily - Displays Hrs and Min drop-downs to select the time at which the job should be triggered.

    • Weekly - Displays the days of the week and the time at which the scheduler should run.

    • Cron - Displays a field that accepts a unix-cron expression (see the examples below).
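
The Cron option follows the standard unix-cron format. The snippet below lists a few illustrative expressions, assuming the common five-field layout (minute, hour, day of month, month, day of week); verify the exact syntax against the Cron field in the scheduler.

```python
# Illustrative unix-cron expressions (assumed five-field format:
# minute hour day-of-month month day-of-week).
EVERY_DAY_6AM   = "0 6 * * *"    # every day at 06:00
EVERY_MON_230AM = "30 2 * * 1"   # every Monday at 02:30
EVERY_4_HOURS   = "0 */4 * * *"  # at minute 0 of every 4th hour
FIRST_OF_MONTH  = "0 0 1 * *"    # at midnight on the 1st of each month
```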

  8. View the project canvas.

  9. Click Save to create the scheduler. This also enables the +Destination option, which lets you configure the data connector to which the generated output datasets or the input dataset are published.

    The project variables button is visible only if variables are defined at the project level. After creating the scheduler, you can change the values of the project variables.

  10. Click + Destination. This opens the Destinations side panel.

Note: This button is enabled only if you have configured external data sources in your workspace.

  11. Click + DESTINATION.

  12. Select the dataset that you want to add to the destination. If the list is long, use the search option to find the dataset you want.

  13. Select the destination from the drop-down list. The list shows only the external data sources configured under this tenant, excluding Snowflake and Fivetran connectors.

ℹ️ Info: When you select an SQL connector to synchronize or copy the output dataset generated after running the project, the table name column is displayed. Here, you can provide the table name and select either "Append" or "Replace". "Append" adds the new dataset to the existing one, provided both datasets share the same schema; "Replace" replaces the existing dataset with the new one.

If you choose MongoDB as the data connector, you can provide the database name and collection. If the provided collection name already exists, the new dataset is appended to the existing collection.
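
As a rough sketch of the two SQL write modes, the example below uses an in-memory SQLite table; the table, columns, and rows are hypothetical, and this is not how RapidCanvas writes to the connector internally.

```python
import sqlite3

# Hypothetical table and data, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE predictions (id INTEGER, score REAL)")
conn.execute("INSERT INTO predictions VALUES (1, 0.91)")  # existing data

new_rows = [(2, 0.87), (3, 0.64)]  # output of the latest scheduled run

def append(rows):
    # "Append": add the new output to the existing table.
    # Both datasets must share the same schema.
    conn.executemany("INSERT INTO predictions VALUES (?, ?)", rows)

def replace(rows):
    # "Replace": discard the existing contents and keep only the new output.
    conn.execute("DELETE FROM predictions")
    conn.executemany("INSERT INTO predictions VALUES (?, ?)", rows)

append(new_rows)  # the table now holds rows 1, 2 and 3
```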

  14. Provide the destination folder and destination file name. Each time the job runs at the scheduled time, the output file is saved in this folder under this name.

  15. Select the To create new file for every run check box to create a new file after every job run; each new file is saved with the run ID. Clearing this check box overwrites the existing file on every run.
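
To make the two behaviors concrete, the helper below sketches the naming difference; the function and the run-ID pattern are hypothetical, and the actual naming RapidCanvas applies may differ.

```python
# Hypothetical helper: the exact file-naming pattern may differ in
# RapidCanvas; only the new-file vs. overwrite distinction matters here.
def output_path(folder: str, filename: str, run_id: str,
                new_file_per_run: bool) -> str:
    if new_file_per_run:
        # Check box selected: a distinct file per run, tagged with the run ID.
        return f"{folder}/{filename}_{run_id}.csv"
    # Check box cleared: the same path is reused, overwriting the previous file.
    return f"{folder}/{filename}.csv"

print(output_path("exports", "daily_output", "RUN-20250421", True))
# exports/daily_output_RUN-20250421.csv
```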

  16. Click Save to save this destination. This button is enabled only after you fill in all the required destination fields.

    Note:

    You can store files in multiple destinations. To add another destination, click + DESTINATION. If you no longer want to save the output to a configured destination, click the delete icon to remove it.

  17. Close the window after configuring the destination for the job. Once the destination details are set, the destination node appears on the scheduler canvas.

  18. Click GLOBAL VARIABLES to change the configured parameters for this job.

  19. Change the value for each key as needed. Note that you cannot change the key itself.

Note: The GLOBAL VARIABLES button is enabled only when global variables are declared at the project level. To configure global variables, refer to Configuring global variables at the project level.
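
To make the key/value distinction concrete, here is a hypothetical set of project variables; the names and values are invented for illustration.

```python
# Hypothetical project variables; names and values are invented.
project_variables = {
    "input_bucket": "s3://example-bucket/raw",  # value: editable per scheduler
    "score_threshold": "0.75",                  # value: editable per scheduler
}

# Allowed: change a value before the scheduled run.
project_variables["score_threshold"] = "0.80"

# Not allowed: renaming a key (for example, "score_threshold" to
# "threshold") -- keys are fixed at the project level.
```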