The MLEM SageMaker extension allows you to deploy MLEM models to AWS SageMaker. You can learn more about SageMaker here.
```shell
$ pip install mlem[sagemaker]
# or
$ pip install sagemaker boto3
```
To deploy to SageMaker, you need to do some AWS configuration first. These requirements are not MLEM-specific; they apply to any SageMaker interaction.
Here is the list:
This setup script is not part of MLEM's public API, so you'll need to run it manually, like this:
```python
from mlem.contrib.sagemaker.env_setup import sagemaker_terraform

sagemaker_terraform(export_secret="creds.csv")
```
A SageMaker Environment declaration can be used to hold your SageMaker configuration.
```shell
$ mlem declare env sagemaker ... \
    --role <role> \
    --account <account> \
    --region <region> \
    --bucket <bucket> \
    --ecr_repository <repo>
```
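For reference, `mlem declare` persists the declaration as a small YAML file. Under the flags above it would look roughly like this — the field names mirror the CLI options, but the exact layout may differ between MLEM versions, so treat this as a sketch:

```yaml
object_type: env
type: sagemaker
role: <role>
account: <account>
region: <region>
bucket: <bucket>
ecr_repository: <repo>
```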
You can also pre-declare the SageMaker Deployment itself.
```shell
$ mlem declare deployment sagemaker ... \
    --env ... \
    --method predict \
    --instance_type ml.t2.medium
```
To run the deployment, run

```shell
$ mlem deployment run ... --model <path>
```
Once you run the `mlem deployment run ...` command, a number of things will happen.
After this command exits, it can still take some time on SageMaker's side to actually spin up VMs with your model. You can check the status with
```shell
$ mlem deployment status ...
```
or block until the model is ready with
```shell
$ mlem deployment wait ... -i starting
```
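Conceptually, waiting is just polling the endpoint status until it settles. If you need custom logic (a different timeout, logging, alerting), the same idea can be sketched in plain Python. This is a minimal sketch, not MLEM's implementation: `get_status` is a placeholder for however you fetch the status (for example, SageMaker's `DescribeEndpoint` call), and the status strings follow SageMaker's endpoint states:

```python
import time


def wait_until_ready(get_status, ready="InService", failed=("Failed",),
                     interval=5.0, timeout=600.0):
    """Poll get_status() until the endpoint is ready, has failed, or we time out.

    get_status is a placeholder callable returning a SageMaker endpoint
    status string such as "Creating", "InService", or "Failed".
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == ready:
            return status
        if status in failed:
            raise RuntimeError(f"deployment failed with status {status!r}")
        time.sleep(interval)
    raise TimeoutError(f"deployment not ready after {timeout:.0f}s")
```

The `failed` tuple lets you treat additional terminal states (for example, a rollback state) as errors without changing the loop.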
MLEM SageMaker deployments are fully compatible with the SageMaker API; however, it's a lot easier to use the MLEM `SagemakerClient`. To obtain one, just call the `get_client` method on your deployment object.
```python
from mlem.api import load_meta

service = load_meta("...")
client = service.get_client()
```
You can then use this `client` instance to invoke your model as if it were local.
```python
data = ...  # pd.DataFrame or whatever the model's predict method accepts
preds = client.predict(data)
```
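Because the deployment is a regular SageMaker endpoint, you can also bypass the MLEM client and call it through `boto3` directly. This is a hedged sketch: the endpoint name, region, and JSON payload layout are placeholders — substitute the values from your own deployment, and check that your serving container actually accepts JSON:

```python
import json


def invoke_raw(endpoint_name, payload, region="us-east-1"):
    """Invoke a SageMaker endpoint via the plain boto3 runtime API.

    endpoint_name, region, and the JSON payload layout are placeholders --
    use the values from your own deployment.
    """
    import boto3  # installed as part of mlem[sagemaker]

    runtime = boto3.client("sagemaker-runtime", region_name=region)
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=json.dumps(payload),
    )
    # The container's raw response body, as bytes
    return response["Body"].read()
```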
MLEM does not support batch invocations yet. We will add support for them soon.
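Until batch support lands, a simple client-side workaround is to split the data yourself and call the endpoint once per chunk. A minimal sketch — the `chunk_size` default and the sliceable-rows input are assumptions; adapt them to whatever your model accepts:

```python
def predict_in_chunks(client, rows, chunk_size=100):
    """Call client.predict on fixed-size slices of rows and concatenate results.

    client is any object with a predict() method (for example, the MLEM
    SagemakerClient); rows is assumed to be sliceable, like a list.
    """
    preds = []
    for start in range(0, len(rows), chunk_size):
        # One endpoint call per chunk keeps each request payload small
        preds.extend(client.predict(rows[start:start + chunk_size]))
    return preds
```

Smaller chunks keep individual request payloads under SageMaker's invocation size limit at the cost of more round trips.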