Experiment Class

Represents the main entry point for creating and working with experiments in Azure Machine Learning.

An Experiment is a container of trials that represent multiple model runs.

Experiment constructor.

Inheritance
azureml._logging.chained_identity.ChainedIdentity → Experiment
azureml.core._portal.HasExperimentPortal → Experiment

Constructor

Experiment(workspace, name, _skip_name_validation=False, _id=None, _archived_time=None, _create_in_cloud=True, _experiment_dto=None, **kwargs)

Parameters

Name Description
workspace
Required

The workspace object containing the experiment.

name
Required
str

The experiment name.

kwargs
Required

A dictionary of keyword args.

_skip_name_validation
Default value: False
_id
Default value: None
_archived_time
Default value: None
_create_in_cloud
Default value: True
_experiment_dto
Default value: None

Remarks

An Azure Machine Learning experiment represents the collection of trials used to validate a user's hypothesis.

In Azure Machine Learning, an experiment is represented by the Experiment class and a trial is represented by the Run class.

To get or create an experiment from a workspace, you request the experiment using the experiment name. The experiment name must be 3-36 characters long, start with a letter or a number, and contain only letters, numbers, underscores, and dashes.


   experiment = Experiment(workspace, "MyExperiment")

If the experiment is not found in the workspace, a new experiment is created.

There are two ways to execute an experiment trial. If you are interactively experimenting in a Jupyter Notebook, use start_logging. If you are submitting an experiment from source code or some other type of configured trial, use submit.

Both mechanisms create a Run object. In interactive scenarios, use logging methods such as log to add measurements and metrics to the trial record. In configured scenarios, use status methods such as get_status to retrieve information about the run.

In both cases, you can use query methods like get_metrics to retrieve the current values, if any, of any trial measurements and metrics.
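
For example, a minimal sketch (assuming run came from start_logging or submit, and that an "Accuracy" metric was logged):


   # Read back any metrics logged so far on the run.
   metrics = run.get_metrics()
   print(metrics.get("Accuracy"))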

Methods

archive

Archive an experiment.

delete

Delete an experiment in the workspace.

from_directory

(Deprecated) Load an experiment from the specified path.

get_docs_url

URL to the documentation for this class.

get_runs

Return a generator of the runs for this experiment, in reverse chronological order.

list

Return the list of experiments in the workspace.

reactivate

Reactivate an archived experiment.

refresh

Return the most recent version of the experiment from the cloud.

remove_tags

Delete the specified tags from the experiment.

set_tags

Add or modify a set of tags on the experiment. Tags not passed in the dictionary are left untouched.

start_logging

Start an interactive logging session and create an interactive run in the specified experiment.

submit

Submit an experiment and return the active created run.

tag

Tag the experiment with a string key and optional string value.

archive

Archive an experiment.

archive()

Remarks

After archival, the experiment will not be listed by default. Attempting to write to an archived experiment will create a new active experiment with the same name. An archived experiment can be restored by calling reactivate as long as there is not another active experiment with the same name.
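
For example, a minimal sketch (assuming an existing Experiment object named experiment):


   # Archive the experiment; it no longer appears in default listings.
   experiment.archive()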

delete

Delete an experiment in the workspace.

static delete(workspace, experiment_id)

Parameters

Name Description
workspace
Required

The workspace to which the experiment belongs.

experiment_id
Required

The ID of the experiment to delete.
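
A minimal sketch (assuming experiment is an existing Experiment object in workspace):


   from azureml.core import Experiment

   # Permanently delete the experiment by its ID.
   Experiment.delete(workspace, experiment.id)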

from_directory

(Deprecated) Load an experiment from the specified path.

static from_directory(path, auth=None)

Parameters

Name Description
path
Required
str

Directory containing the experiment configuration files.

auth

The auth object. If None, the default Azure CLI credentials are used, or the API prompts for credentials.

Default value: None

Returns

Type Description

Returns the Experiment object.

get_docs_url

URL to the documentation for this class.

get_docs_url()

Returns

Type Description
str

The documentation URL.

get_runs

Return a generator of the runs for this experiment, in reverse chronological order.

get_runs(type=None, tags=None, properties=None, include_children=False)

Parameters

Name Description
type

Filter the returned generator of runs by the provided type. See add_type_provider for creating run types.

Default value: None
tags

Filter runs by "tag" or {"tag": "value"}.

Default value: None
properties

Filter runs by "property" or {"property": "value"}

Default value: None
include_children

By default, fetch only top-level runs. Set to True to list all runs.

Default value: False

Returns

Type Description

A generator of the runs matching the supplied filters.
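
A short usage sketch (the tag name and value are illustrative):


   # Iterate over top-level runs tagged quality=production, newest first.
   for run in experiment.get_runs(tags={"quality": "production"}):
       print(run.id, run.get_status())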

list

Return the list of experiments in the workspace.

static list(workspace, experiment_name=None, view_type='ActiveOnly', tags=None)

Parameters

Name Description
workspace
Required

The workspace from which to list the experiments.

experiment_name
str

Optional name to filter experiments.

Default value: None
view_type

Optional enum value to filter or include archived experiments.

Default value: ActiveOnly
tags

Optional tag key or dictionary of tag key-value pairs to filter experiments on.

Default value: None

Returns

Type Description

A list of experiment objects.
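
For example, a minimal sketch (the "All" view type is an assumption based on the ViewType enum; verify it against your SDK version):


   from azureml.core import Experiment

   # List experiments, including archived ones.
   # "All" is assumed valid alongside the default "ActiveOnly".
   for exp in Experiment.list(workspace, view_type="All"):
       print(exp.name, exp.archived_time)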

reactivate

Reactivate an archived experiment.

reactivate(new_name=None)

Parameters

Name Description
new_name
Required
str

No longer supported.

Remarks

An archived experiment can only be reactivated if there is not another active experiment with the same name.
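
A minimal sketch:


   # Restore an archived experiment; this fails if an active experiment
   # with the same name already exists.
   experiment.reactivate()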

refresh

Return the most recent version of the experiment from the cloud.

refresh()

remove_tags

Delete the specified tags from the experiment.

remove_tags(tags)

Parameters

Name Description
tags
Required
[str]

The tag keys to remove.
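
A short sketch (the tag keys are illustrative):


   # Remove the given tag keys from the experiment.
   experiment.remove_tags(["DeploymentCandidate", "modifiedBy"])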

set_tags

Add or modify a set of tags on the experiment. Tags not passed in the dictionary are left untouched.

set_tags(tags)

Parameters

Name Description
tags
Required

A dictionary of tags to store on the experiment object.
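
For example (tag names and values are illustrative):


   # Add or update several tags at once; tags not named here are untouched.
   experiment.set_tags({"team": "forecasting", "stage": "dev"})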

start_logging

Start an interactive logging session and create an interactive run in the specified experiment.

start_logging(*args, **kwargs)

Parameters

Name Description
experiment
Required

The experiment.

outputs
Required
str

Optional outputs directory to track. For no outputs, pass False.

snapshot_directory
Required
str

Optional directory to take a snapshot of. Set to None to take no snapshot.

args
Required
kwargs
Required

Returns

Type Description
Run

The started run.

Remarks

start_logging creates an interactive run for use in scenarios such as Jupyter Notebooks. Any metrics that are logged during the session are added to the run record in the experiment. If an output directory is specified, the contents of that directory are uploaded as run artifacts upon run completion.


   experiment = Experiment(workspace, "My Experiment")
   run = experiment.start_logging(outputs=None, snapshot_directory=".", display_name="My Run")
   ...
   run.log("Accuracy", accuracy)
   run.complete()

Note

run_id is automatically generated for each run and is unique within the experiment.

submit

Submit an experiment and return the active created run.

submit(config, tags=None, **kwargs)

Parameters

Name Description
config
Required

The config to be submitted.

tags

Tags to be added to the submitted run, {"tag": "value"}.

Default value: None
kwargs
Required

Additional parameters passed to the configuration's submit function.

Returns

Type Description
Run

A run.

Remarks

Submit is an asynchronous call to the Azure Machine Learning platform to execute a trial on local or remote hardware. Depending on the configuration, submit will automatically prepare your execution environments, execute your code, and capture your source code and results into the experiment's run history.

To submit an experiment you first need to create a configuration object describing how the experiment is to be run. The configuration depends on the type of trial required.

An example of how to submit an experiment from your local machine is as follows:


   from azureml.core import Experiment, RunConfiguration, ScriptRunConfig

   # run a trial from the train.py code in your current directory
   experiment = Experiment(workspace, "MyExperiment")
   config = ScriptRunConfig(source_directory='.', script='train.py',
       run_config=RunConfiguration())
   run = experiment.submit(config)

   # get the url to view the progress of the experiment and then wait
   # until the trial is complete
   print(run.get_portal_url())
   run.wait_for_completion()

For details on how to configure a run, see the configuration type details.

  • ScriptRunConfig

  • azureml.train.automl.automlconfig.AutoMLConfig

  • azureml.pipeline.core.Pipeline

  • azureml.pipeline.core.PublishedPipeline

  • azureml.pipeline.core.PipelineEndpoint

Note

When you submit the training run, a snapshot of the directory that contains your training scripts is created and sent to the compute target. It is also stored as part of the experiment in your workspace. If you change files and submit the run again, only the changed files will be uploaded.

To prevent files from being included in the snapshot, create a .gitignore or .amlignore file in the directory and add the files to it. The .amlignore file uses the same syntax and patterns as the .gitignore file. If both files exist, the .amlignore file takes precedence.
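
For example, an .amlignore file containing patterns like the following (entries are illustrative) excludes those paths from the snapshot:


   # .amlignore uses the same pattern syntax as .gitignore (illustrative entries)
   data/
   logs/
   *.ckpt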

For more information, see Snapshots.

tag

Tag the experiment with a string key and optional string value.

tag(key, value=None)

Parameters

Name Description
key
Required
str

The tag key.

value
Required
str

An optional value for the tag.

Remarks

Tags on an experiment are stored in a dictionary with string keys and string values. Tags can be set, updated, and deleted. Tags are user-facing and generally contain meaningful information for the consumers of the experiment.


   experiment.tag('DeploymentCandidate')
   experiment.tag('modifiedBy', 'Master CI')
   experiment.tag('modifiedBy', 'release pipeline') # Careful, tags are mutable

Attributes

archived_time

Return the archived time for the experiment. The value is None for an active experiment.

Returns

Type Description
str

The archived time of the experiment.

id

Return the ID of the experiment.

Returns

Type Description
str

The ID of the experiment.

name

Return the name of the experiment.

Returns

Type Description
str

The name of the experiment.

tags

Return the mutable set of tags on the experiment.

Returns

Type Description

The tags on an experiment.

workspace

Return the workspace containing the experiment.

Returns

Type Description

Returns the workspace object.

workspace_object

(Deprecated) Return the workspace containing the experiment.

Use the workspace attribute.

Returns

Type Description

The workspace object.