Code recipes

Code recipes let Notebook users add functional code logic to a recipe in the machine learning flow without creating a separate custom template. With the AI capabilities integrated into the platform, you can send a query to the AI and retrieve generated code, which can then be used to build custom code recipes in the Notebook at different stages of model building. Set the ask_ai parameter to True to have the AI generate the code, or set it to False to write the code for the recipe manually.

Fetching code from the AI

Use this code block to fetch the code logic you want to use in the flow from the AI, based on the query passed as input.

First, create a recipe for a dataset inside the project.

recipe = project.addRecipe([titanic], name="code_recipe")

Next, use the following code block to send your query to the AI and get the generated code.

code_response = recipe.generate_code(
    user_input="Show dataframe having people against Fare. Plot histogram of number of people against Fare. Plot survived by gender",
    ask_ai=True,
    with_signature=True,
    outputs=["output1"],
    charts=["chart1", "chart2"]
)

print(code_response.get_body())

The following table describes each parameter of the generate_code call:

  • user_input: The input given to the AI to generate the code.
  • ask_ai: Set this flag to True to use the AI capabilities and fetch code from the AI for the query passed, or to False to manually write the code within the recipe.
  • with_signature: By default, this is set to True.
  • outputs: The name(s) of the output(s) generated by the template.
  • charts: The name(s) of the chart(s) generated by the template.

Output from the AI

The following is the code block generated by the AI for the given user input.

def transform(entities):
    input_df_1 = entities['titanic']
    

    import pandas as pd
    import matplotlib.pyplot as plt
    output_df_1 = input_df_1[['PassengerId', 'Fare']]
    plt.hist(input_df_1['Fare'], bins=20)
    plt.xlabel('Fare')
    plt.ylabel('Number of People')
    plt.title('Histogram of Number of People against Fare')
    fig_1 = plt.figure(1)
    plt.show()
    survived_by_gender = input_df_1.groupby(['Sex', 'Survived'])['Survived'].count().unstack()
    survived_by_gender.plot(kind='bar', stacked=True)
    plt.xlabel('Gender')
    plt.ylabel('Number of People')
    plt.title('Survived by Gender')
    fig_2 = plt.figure(2)
    plt.show()
    

    return {
        'output1': output_df_1,
        'chart1': fig_1,
        'chart2': fig_2,
    }
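The generated function follows a simple contract: it receives the input datasets as a dictionary keyed by dataset name and returns a dictionary keyed by the declared output and chart names. The following is a minimal runnable sketch of that contract, exercised locally on a small synthetic DataFrame (the toy data and bin count are illustrative, not part of the platform output):

```python
import matplotlib
matplotlib.use("Agg")  # render charts off-screen so the sketch runs headless
import pandas as pd
import matplotlib.pyplot as plt

def transform(entities):
    # Input datasets arrive keyed by dataset name.
    input_df_1 = entities['titanic']
    # One tabular output: PassengerId against Fare.
    output_df_1 = input_df_1[['PassengerId', 'Fare']]
    # One chart output: histogram of Fare.
    fig_1 = plt.figure()
    plt.hist(input_df_1['Fare'], bins=5)
    plt.xlabel('Fare')
    plt.ylabel('Number of People')
    # Keys must match the outputs/charts declared on generate_code.
    return {'output1': output_df_1, 'chart1': fig_1}

# Synthetic stand-in for the titanic dataset.
toy = pd.DataFrame({
    'PassengerId': [1, 2, 3, 4],
    'Fare': [7.25, 71.28, 7.92, 53.10],
})
result = transform({'titanic': toy})
print(sorted(result))            # ['chart1', 'output1']
print(result['output1'].shape)   # (4, 2)
```

Running the function outside the platform like this is a convenient way to sanity-check the logic before the recipe executes in the flow.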

Creating a code recipe manually

Use this procedure to manually write the code within a recipe for executing logic in the flow. In the generate_code call, set the ask_ai flag to False so you can write the code within the recipe yourself.

code_response = recipe.generate_code(
    user_input="Create train and test split based on Survived column. Do not do any group by and maintain all columns",
    ask_ai=False,
    with_signature=True,
    outputs=["output_1", "output_2"]
)

code_response.update("""def transform(entities):
    input_df_1 = entities['titanic']
    

    import pandas as pd
    from sklearn.model_selection import train_test_split
    train_df, test_df = train_test_split(input_df_1, test_size=0.2,
        stratify=input_df_1['Survived'], random_state=42)
    output_df_1 = train_df[['PassengerId', 'Survived', 'Pclass', 'Name', 'Sex',
        'Age', 'SibSp', 'Parch', 'Ticket', 'Fare', 'Cabin', 'Embarked']]
    output_df_2 = test_df[['PassengerId', 'Survived', 'Pclass', 'Name', 'Sex',
        'Age', 'SibSp', 'Parch', 'Ticket', 'Fare', 'Cabin', 'Embarked']]
    

    return {
        'output_1': output_df_1,
        'output_2': output_df_2,
    }""")

For more information on using code recipes in your projects, refer to the following sample projects:

Sample projects
