Saving machine learning models to files, or loading them back into Python
objects, may seem like a simple task at first. For example, many ML
libraries can serialize/deserialize model objects to/from files on their own.
However, MLEM adds some "special sauce" by inspecting the objects, saving
their metadata to .mlem metafiles, and using that metadata intelligently later on.
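To make the idea concrete, here is a minimal sketch of the pattern using only the standard library: a plain serialized artifact next to a hand-written JSON sidecar describing it. The metadata keys and the `predict` method entry are illustrative assumptions, not MLEM's actual metafile schema (MLEM generates its .mlem files automatically by inspecting the object):

```python
import json
import pickle
import tempfile
from pathlib import Path

model = {"weights": [0.1, 0.2, 0.3]}  # stand-in for a real ML model object

with tempfile.TemporaryDirectory() as tmp:
    artifact = Path(tmp) / "model.pkl"
    metafile = Path(tmp) / "model.pkl.meta.json"

    # Plain serialization: the binary file alone says nothing about its contents.
    artifact.write_bytes(pickle.dumps(model))

    # Sidecar metadata (hand-written here; MLEM derives this automatically
    # by inspecting the saved object).
    metafile.write_text(json.dumps({
        "object_type": "model",
        "artifact": artifact.name,
        "methods": {"predict": {"args": ["data"]}},
    }, indent=2))

    # Loading back: the artifact restores the object, while the metadata
    # tells a tool what the object is and how it can be used.
    restored = pickle.loads(artifact.read_bytes())
    meta = json.loads(metafile.read_text())

print(restored == model)    # True
print(meta["object_type"])  # model
```

The point of the sidecar file is that tools can read it without unpickling (or even being able to import) the model itself.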
The metadata in .mlem files is what reliably enables actions like
packaging and serving different model types in various ways. MLEM automates
many of the pain points of typical ML workflows by codifying and managing
the information about our ML models (or other objects).
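As a rough illustration of why that codified information matters, the sketch below picks a serving recipe from metadata alone, without ever loading the model. The metafile contents, `model_type` values, and `serving_plan` helper are all hypothetical; MLEM's real metafiles and serving logic are richer than this:

```python
# Hypothetical metafile contents for two different model types.
metafiles = [
    {"object_type": "model", "model_type": "sklearn",
     "methods": ["predict", "predict_proba"]},
    {"object_type": "model", "model_type": "torch",
     "methods": ["__call__"]},
]

def serving_plan(meta):
    """Choose a serving recipe from metadata alone, without loading the model."""
    if meta["model_type"] == "sklearn":
        return f"expose REST endpoints for {', '.join(meta['methods'])}"
    if meta["model_type"] == "torch":
        return "expose a single tensor-in/tensor-out endpoint"
    raise ValueError(f"no serving recipe for {meta['model_type']}")

for meta in metafiles:
    print(serving_plan(meta))
```

Because the decision is driven entirely by metadata, the same dispatch works whether the target is a REST server, a Docker image, or a cloud deployment.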