MLEM SageMaker deployments allow you to deploy MLEM models to AWS SageMaker. You can learn more about SageMaker here.
$ pip install mlem[sagemaker]
# or
$ pip install sagemaker boto3
To be able to deploy to SageMaker, you need to do some AWS configuration. These are not MLEM-specific requirements; they are needed for any SageMaker interaction.
Here is the list:
- arn:aws:iam::aws:policy/AmazonSageMakerFullAccess
- arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess
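If you prefer to set this up programmatically rather than clicking through the AWS console, a minimal boto3 sketch can attach both managed policies to the IAM role you plan to use with MLEM (the role name below is a hypothetical placeholder):

import boto3

iam = boto3.client("iam")
for policy_arn in (
    "arn:aws:iam::aws:policy/AmazonSageMakerFullAccess",
    "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess",
):
    # attaching an already-attached managed policy is a no-op
    iam.attach_role_policy(RoleName="mlem-sagemaker-role", PolicyArn=policy_arn)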
You can configure those manually or use existing ones. You can also use Terraform with this template and helper script (Terraform needs to be installed).
This script is not part of the MLEM public API, so you'll need to run it manually, like this:
from mlem.contrib.sagemaker.env_setup import sagemaker_terraform

# export the generated secret (AWS credentials) to creds.csv
sagemaker_terraform(export_secret="creds.csv")
It's recommended to use the AWS CLI with a separate profile configured for MLEM. You can also provide credentials via AWS environment variables.
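For example, either of the standard mechanisms understood by boto3 (which MLEM uses under the hood) will do; the profile name below is just an example:

import os

# Option 1: point boto3 at a dedicated profile created beforehand
# with `aws configure --profile mlem`
os.environ["AWS_PROFILE"] = "mlem"  # profile name is an example

# Option 2 (instead of option 1): set the standard AWS credential variables directly
# os.environ["AWS_ACCESS_KEY_ID"] = "<key id>"
# os.environ["AWS_SECRET_ACCESS_KEY"] = "<secret key>"
# os.environ["AWS_DEFAULT_REGION"] = "<region>"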
A SageMaker Environment declaration can be used to hold your SageMaker configuration.
$ mlem declare env sagemaker env.mlem \
--role <role> \
--account <account> \
--region <region> \
--bucket <bucket> \
--ecr_repository <repo>
You can also pre-declare the SageMaker Deployment itself.
$ mlem declare deployment sagemaker app.mlem \
--env env.mlem \
--method predict \
--instance_type ml.t2.medium
Then, to deploy a model, run:
$ mlem deployment run --load app.mlem --model <path>
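The same step is also available from Python through mlem.api.deploy. The sketch below is hedged: the argument order is an assumption that mirrors the CLI call, so check mlem.api.deploy in your MLEM version before relying on it.

from mlem.api import deploy

# assumed to mirror `mlem deployment run --load app.mlem --model <path>`
deployment = deploy("app.mlem", "<path>")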
Once you run this mlem deployment run ... command, a number of things will happen.
After this command exits, however, it can take some time on SageMaker's side to actually spin up VMs with your model. You can check the status with:
$ mlem deployment status --load app.mlem
or block until the model is ready with:
$ mlem deployment wait --load app.mlem -i starting
MLEM SageMaker deployments are fully compatible with the SageMaker InvokeEndpoint API; however, it's a lot easier to use the MLEM SagemakerClient. To obtain one, just call the get_client method on your deployment object:
from mlem.api import load_meta

# load the deployment metafile saved by `mlem deployment run`
service = load_meta("...")
client = service.get_client()
You can then use this client instance to invoke your model as if it were local.
data = ... # pd.DataFrame or whatever model.predict accepts
preds = client.predict(data)
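For instance, assuming the deployed model is a tabular model that accepts a two-column pandas DataFrame (the metafile path and column names below are made up for illustration):

import pandas as pd
from mlem.api import load_meta

client = load_meta("app.mlem").get_client()  # path to the deployment metafile is an example

# hypothetical feature frame -- columns and dtypes must match what the model's predict() expects
data = pd.DataFrame({"sepal_length": [5.1, 6.2], "sepal_width": [3.5, 2.9]})
print(client.predict(data))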
MLEM does not support batch invocations yet. We will add support for them soon.