SageMaker

The MLEM SageMaker extension allows you to deploy MLEM models to AWS SageMaker. You can learn more about SageMaker here.

Requirements

$ pip install mlem[sagemaker]
# or
$ pip install sagemaker boto3

To be able to deploy to SageMaker you need to do some AWS configuration. These are not MLEM-specific requirements; rather, they are needed for any SageMaker interaction.

Here is the list:

  • AWS User Credentials
  • SageMaker access for this user (policy arn:aws:iam::aws:policy/AmazonSageMakerFullAccess)
  • ECR access for this user (policy arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess)
  • AWS IAM Role with SageMaker access
  • S3 Access

You can configure these manually or use existing resources. You can also use Terraform with this template and a helper script (Terraform needs to be installed).
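
If you prefer to script the manual route instead, attaching the two user policies with boto3 might look like the sketch below (the user name is an assumption; the IAM role and S3 access still need to be set up separately):

import boto3

# Attach the two required policies to an existing IAM user
# (the user name "mlem" is illustrative; adjust to your setup)
iam = boto3.client("iam")
for arn in [
    "arn:aws:iam::aws:policy/AmazonSageMakerFullAccess",
    "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess",
]:
    iam.attach_user_policy(UserName="mlem", PolicyArn=arn)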

The Terraform helper script is not part of the MLEM public API, so you'll need to run it manually like this:

from mlem.contrib.sagemaker.env_setup import sagemaker_terraform

# Applies the Terraform template and exports the created user's
# credentials to creds.csv
sagemaker_terraform(export_secret="creds.csv")

It's recommended to use the AWS CLI with a separate profile configured for MLEM. You can also provide credentials with AWS environment variables.
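
For example, you can verify that your profile (or environment variables) resolves to working credentials with boto3 (the profile name "mlem" here is just an example):

import boto3

# Sanity check: resolve the MLEM profile and print the account it belongs to
session = boto3.Session(profile_name="mlem")
print(session.client("sts").get_caller_identity()["Account"])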

Configuring and running deployment

A SageMaker environment declaration can be used to hold your SageMaker configuration.

$ mlem declare env sagemaker env.mlem \
    --role <role> \
    --account <account> \
    --region <region> \
    --bucket <bucket> \
    --ecr_repository <repo>
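
The same environment can be sketched from Python. Note that this is only a sketch: the class name SagemakerEnv and its module path mirror the CLI but are assumptions, so check mlem.contrib.sagemaker for the authoritative definitions.

from mlem.contrib.sagemaker.meta import SagemakerEnv  # module path assumed

env = SagemakerEnv(
    role="<role>",
    account="<account>",
    region="<region>",
    bucket="<bucket>",
    ecr_repository="<repo>",
)
env.dump("env.mlem")  # writes the same declaration the CLI command produces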

You can also pre-declare the SageMaker deployment itself.

$ mlem declare deployment sagemaker app.mlem \
    --env env.mlem \
    --method predict \
    --instance_type ml.t2.medium
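
Continuing the Python sketch from above, the matching deployment declaration would look roughly like this (again, the class name and fields are assumptions mirroring the CLI flags):

from mlem.contrib.sagemaker.meta import SagemakerDeployment  # assumed

app = SagemakerDeployment(
    env="env.mlem",
    method="predict",
    instance_type="ml.t2.medium",
)
app.dump("app.mlem")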

Then, to deploy a model, run

$ mlem deployment run --load app.mlem --model <path>
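
If you prefer the Python API, the rough equivalent is mlem.api.deploy. Treat this as a sketch: the exact keyword names are assumptions, so check the function's signature.

from mlem.api import deploy

# Loads app.mlem, builds and pushes the image, and creates the endpoint
deployment = deploy("app.mlem", model="<path>")  # keyword name assumed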

What happens internally

Once you run this sweet mlem deployment run ... command, a number of things will happen:

  1. If you did not specify a pre-built image, a new Docker image will be built. It will include all of the model's requirements and will be pushed to the configured ECR repository.
  2. The model is packaged and uploaded to the configured S3 bucket as per this doc.
  3. An endpoint configuration is created as per this doc.
  4. The model is deployed, creating a SageMaker endpoint (see the boto3 sketch below).
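
For the curious, steps 2-4 correspond roughly to the following low-level boto3 calls. All resource names and URIs below are illustrative; MLEM performs these calls for you.

import boto3

sm = boto3.client("sagemaker")

# Steps 2-3: register the model artifact and an endpoint configuration
sm.create_model(
    ModelName="my-model",
    PrimaryContainer={
        "Image": "<account>.dkr.ecr.<region>.amazonaws.com/<repo>:latest",
        "ModelDataUrl": "s3://<bucket>/model.tar.gz",
    },
    ExecutionRoleArn="arn:aws:iam::<account>:role/<role>",
)
sm.create_endpoint_config(
    EndpointConfigName="my-config",
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": "my-model",
            "InitialInstanceCount": 1,
            "InstanceType": "ml.t2.medium",
        }
    ],
)

# Step 4: create the actual endpoint
sm.create_endpoint(EndpointName="my-endpoint", EndpointConfigName="my-config")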

After this command exits, it can still take some time on SageMaker's side to actually spin up VMs with your model. You can check the status with

$ mlem deployment status --load app.mlem

or block until the model is ready with

$ mlem deployment wait --load app.mlem -i starting
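
Both commands map to the endpoint status on the SageMaker side. If you ever need to check or wait on it directly, boto3 exposes the same information (the endpoint name is illustrative):

import boto3

sm = boto3.client("sagemaker")

# One-off status check, analogous to `mlem deployment status`
print(sm.describe_endpoint(EndpointName="my-endpoint")["EndpointStatus"])

# Block until the endpoint is InService, analogous to `mlem deployment wait`
sm.get_waiter("endpoint_in_service").wait(EndpointName="my-endpoint")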

Making requests

MLEM SageMaker deployments are fully compatible with the SageMaker InvokeEndpoint API; however, it's a lot easier to use the MLEM SagemakerClient. To obtain one, just call the get_client method on your deployment object.

from mlem.api import load_meta

service = load_meta("...")
client = service.get_client()

You can then use this client instance to invoke your model as if it were local.

data = ...  # pd.DataFrame or whatever model.predict accepts
preds = client.predict(data)
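
Because the deployment is compatible with the InvokeEndpoint API, you can also call it with the low-level runtime client. The endpoint name and payload format below are illustrative; MLEM's client normally handles serialization for you.

import json

import boto3

runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint(
    EndpointName="my-endpoint",  # illustrative name
    ContentType="application/json",
    Body=json.dumps({"data": [[1, 2, 3, 4]]}),  # payload format assumed
)
print(response["Body"].read())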

MLEM does not support batch invocations yet. We will add support for them soon.
