Prediction services


A Prediction Service lets you send real-time data to a model and receive predictions immediately. You create an endpoint for the model that is exposed as an API, then upload a test dataset; the API runs the model's predictions on the uploaded data.

Creating and Testing a Prediction Service

Use this procedure to create a prediction service, that is, an API endpoint for a model. A prediction service can be created only for models generated by running a project's data pipeline or for models that were added manually. After the model is exposed as an API, you can test it by uploading a dataset and generating predictions.

Steps to Create a Prediction Service

  1. Select Artifacts & Models from the left navigation menu. The Artifacts tab is displayed, showing a list of all artifacts in the tenant.

  2. Click the Models tab to see the list of models created in this tenant.

  3. Click +Add in the Prediction Service column for the model whose API you want to create. The Prediction Service tab is displayed.

  4. In the Details section, specify the following:

    • Name: The name of the prediction service.

    • Description: A brief description of the prediction service.

    • Environment: The environment in which you want to test the prediction service.

    • Pre-process & Post-process: If needed, add pre-processing and post-processing steps using the integrated code editor (a sketch of what such steps might look like follows these instructions).

  5. Specify the Configuration Options:

    • Timeout: The duration (in minutes) after which the incoming request should time out.

    • Concurrency: The number of parallel requests you can send (ranges from 5 to 100).

  6. Enable the Save History option to view a detailed record of activities in the prediction service. Disable this option to stop tracking logs.

  7. Click Save to create the endpoint for the model.
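The integrated code editor's exact hook signatures are not documented on this page, so the following is only a minimal sketch of the idea behind pre- and post-processing steps, assuming hooks that receive and return a pandas DataFrame. All function and column names here are hypothetical.

```python
import pandas as pd

def pre_process(df: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical pre-processing hook: clean incoming rows before
    they reach the model (names and signature are illustrative only)."""
    df = df.dropna(subset=["feature_1"])           # drop rows missing a required feature
    df["feature_2"] = df["feature_2"].str.upper()  # normalize a categorical column
    return df

def post_process(predictions: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical post-processing hook: reshape the model output before
    it is returned to the caller."""
    predictions["label"] = predictions["score"].gt(0.5).map({True: "yes", False: "no"})
    return predictions
```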

Saving the service generates a unique endpoint for the model, along with a cURL command you can use to call it.
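The generated cURL command contains the actual endpoint URL, authentication header, and payload schema. As a rough illustration only, calling such an endpoint from Python might look like the sketch below; the URL, token, and feature names are placeholders, not the real RapidCanvas API.

```python
import requests

ENDPOINT = "https://<your-rapidcanvas-host>/prediction-services/<service-id>"  # placeholder
API_TOKEN = "<your-api-token>"  # placeholder

# Input rows must match the feature columns the model expects (assumed schema).
payload = {"data": [{"feature_1": 3.2, "feature_2": "A"}]}

response = requests.post(
    ENDPOINT,
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=60,  # client-side timeout; the service also enforces its own Timeout setting
)
response.raise_for_status()
print(response.json())  # predictions returned by the model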

Testing the Prediction Service

  1. Select the file format. Possible values:

    • JSON

    • CSV/XLSX/XLS File

    • Canvas Datasets

  2. Click Browse to upload a file that matches the selected format (for example, a .json file for JSON, or a .csv, .xlsx, or .xls file). A sketch of preparing such test files follows these steps.

  3. Click Test to check the prediction results.

  4. Click History to view a comprehensive list of all executions of the prediction service. For more details, see Viewing the Prediction Service History.
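As mentioned in step 2, the test file you upload must match the selected format. Below is a minimal sketch of preparing such files, with hypothetical feature names that you would replace with the columns your model expects.

```python
import json
import pandas as pd

# Hypothetical feature names; use the columns your model was trained on.
rows = [
    {"feature_1": 3.2, "feature_2": "A"},
    {"feature_1": 1.7, "feature_2": "B"},
]

# CSV input for the CSV/XLSX/XLS option
pd.DataFrame(rows).to_csv("test_input.csv", index=False)

# JSON input for the JSON option
with open("test_input.json", "w") as f:
    json.dump(rows, f, indent=2)
```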

After testing the prediction service on an uploaded dataset, you will see two options:

  • Download as CSV: Download the output as a CSV file.

  • Add to Canvas: Add the dataset to the canvas for further analysis.

Note: You can review logs from the past 30 days by clicking the Logs option. Logs provide detailed information, including:

  • Types of queries executed

  • Number of successful queries

  • Number of failed queries

Logs are accessible only if the Save History toggle is enabled. Additionally, you can export logs using:

  • Export: Download logs as a .txt file via the export option in the side panel.

  • Open in New Tab: View logs in a separate tab for better visibility and analysis.

Viewing the Prediction Service History

Use this procedure to view a record of all prediction service executions from the past 30 days.

Steps to View Prediction Service History

  1. Select Artifacts & Models from the left navigation menu. The Artifacts tab is displayed, showing a list of all artifacts in the tenant.

  2. Click the Models tab to see the list of models created in this tenant.

  3. Search for the model whose prediction service history you want to view.

  4. Click the link under the Prediction Service column. This navigates to the Prediction Service page for the selected model.

  5. Click History at the top of the Test Prediction Service section. This displays a record of all times the prediction service was executed in the last 30 days.

  6. Review the Prediction Service History information:

  • Start Time: The timestamp indicating when the prediction service execution began.

  • End Time: The timestamp marking when the prediction service execution was completed.

  • User: The user who initiated the prediction service run.

  • Status: The current status of the prediction service run (e.g., success, failure).

  • Info: Details of the request and response generated after the prediction service execution.

  • Tracking ID: A unique tracking ID generated for every run.

  • Request ID: A unique request ID generated for each run. To view this in the table, enable the Request ID column from the table settings.

Filtering Prediction Service Records

Use the Filter option to refine records based on their status:

  • Success: Displays only successful executions.

  • Failure: Displays only failed executions.

  • Both: Displays all executions.