Artifacts & Models
This page covers how to manage artifacts and trained models and how to create prediction services from those models.
An artifact is a versatile data entity that generalizes a dataset and can exist in various formats such as ZIP files, folders, XLS, Parquet, and more. Artifacts can be uploaded to the canvas flow and used as inputs for running recipes. For a recipe or template to work with artifacts, it must be explicitly configured for artifact input in the flow. Running such a recipe also generates an artifact as output, enabling seamless data processing workflows.
A trained model, on the other hand, is an output generated after executing a model builder transformation within a machine learning pipeline. All models created within projects under a specific tenant are stored in the Model Catalog, allowing users to access and reuse these models for making predictions on live data.
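As a minimal sketch only, the snippet below shows how uploading an artifact and looking up a trained model in the Model Catalog might look through a REST-style API. The base URL, endpoint paths, field names, and API token are hypothetical placeholders, not the product's documented interface.

```python
import requests

# Hypothetical placeholders -- substitute the real API host, token, and IDs.
BASE_URL = "https://api.example.com/v1"
HEADERS = {"Authorization": "Bearer <API_TOKEN>"}
PROJECT_ID = "my-project"

# Upload a Parquet file as an artifact so it can be used as a recipe input.
with open("sales_data.parquet", "rb") as f:
    resp = requests.post(
        f"{BASE_URL}/projects/{PROJECT_ID}/artifacts",
        headers=HEADERS,
        files={"file": ("sales_data.parquet", f)},
        data={"name": "sales_data"},
    )
resp.raise_for_status()
artifact = resp.json()
print("Uploaded artifact:", artifact["id"])

# Look up a trained model in the Model Catalog for reuse in this project.
resp = requests.get(
    f"{BASE_URL}/model-catalog/models",
    headers=HEADERS,
    params={"name": "churn_classifier"},
)
resp.raise_for_status()
print("Matching models:", [m["id"] for m in resp.json()])
```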
Artifacts and Models at Different Levels
Project Level
Artifacts and models generated as outputs from recipes or used as inputs in the project pipeline can be managed and viewed at the project level. Users can:
Upload new artifacts or reuse artifacts from other projects.
Add models from other projects to use within the current project.
Create prediction services directly from the project-level view.
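The sketch below illustrates the project-level flow of creating a prediction service from a model and sending it live records for scoring, assuming a REST-style API. The endpoint paths, the model ID, and response fields such as endpoint_url are hypothetical examples.

```python
import requests

BASE_URL = "https://api.example.com/v1"   # hypothetical host
HEADERS = {"Authorization": "Bearer <API_TOKEN>"}
PROJECT_ID = "my-project"
MODEL_ID = "churn_classifier_v3"          # hypothetical model ID

# Create a prediction service from a model available in this project.
resp = requests.post(
    f"{BASE_URL}/projects/{PROJECT_ID}/prediction-services",
    headers=HEADERS,
    json={"model_id": MODEL_ID, "name": "churn-scoring"},
)
resp.raise_for_status()
service = resp.json()

# Send a batch of live records to the service endpoint for scoring.
resp = requests.post(
    service["endpoint_url"],
    headers=HEADERS,
    json={"records": [{"customer_id": "C-102", "tenure_months": 14, "plan": "pro"}]},
)
resp.raise_for_status()
print(resp.json())
```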
Workspace Level
Artifacts and models produced as outputs or used as inputs across various projects within the workspace can be accessed at the workspace level. Users can:
Upload new artifacts and view models from multiple projects in one centralized location.
Create prediction services using models generated across various projects within the workspace.
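For the workspace-level view, a minimal sketch of listing models produced across projects in one place might look like the following; the workspace endpoint and the project_id field are assumptions for illustration.

```python
import requests

BASE_URL = "https://api.example.com/v1"   # hypothetical host
HEADERS = {"Authorization": "Bearer <API_TOKEN>"}
WORKSPACE_ID = "analytics-team"           # hypothetical workspace ID

# List models produced by any project in the workspace (centralized view).
resp = requests.get(f"{BASE_URL}/workspaces/{WORKSPACE_ID}/models", headers=HEADERS)
resp.raise_for_status()

# Group models by the project that produced them.
by_project = {}
for model in resp.json():
    by_project.setdefault(model["project_id"], []).append(model["name"])

for project, names in by_project.items():
    print(project, "->", ", ".join(names))
```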