Apply deployed model (possibly remote) against provided data.
```python
def apply_remote(
    client: Union[str, Client],
    *data: Union[str, MlemData, Any],
    method: str = None,
    output: str = None,
    target_project: str = None,
    index: bool = False,
    **client_kwargs,
) -> Optional[Any]
```
Usage:

```python
from mlem.api import apply_remote

res = apply_remote(client_obj, data, method="predict")
```
This API is the underlying mechanism for the `mlem apply-remote` command. It runs inference on entire datasets against models that are deployed remotely or are being served locally. The API requires an explicit client object that knows how to make requests to the deployed model.
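For instance, with a model served over HTTP, the client is constructed explicitly and passed as the first argument. Here is a minimal sketch, assuming a model is already being served at the given host and port and accepts a pandas DataFrame with these made-up feature columns:

```python
import pandas as pd

from mlem.api import apply_remote
from mlem.runtime.client import HTTPClient

# Explicit client pointing at wherever the model is served;
# the host and port here are placeholders.
client = HTTPClient(host="127.0.0.1", port=8080)

# Hypothetical input batch; the expected columns depend on the deployed model.
batch = pd.DataFrame(
    {
        "sepal length (cm)": [5.1, 6.2],
        "sepal width (cm)": [3.5, 2.9],
        "petal length (cm)": [1.4, 4.3],
        "petal width (cm)": [0.2, 1.3],
    }
)

predictions = apply_remote(client, batch, method="predict")
```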
Parameters:

- `client` (required) - The client to access methods of the deployed model.
- `data` (required) - Input to the model.
- `method` (optional) - Which model method to use. If None, the model's only method is used; if more than one is available, the call will fail.
- `output` (optional) - If a value is provided, it is treated as a path and the output is saved there (see the sketch after the exceptions list).
- `target_project` (optional) - The path to the project to save the results to.
- `index` (optional) - Whether to index the saved output in the MLEM root folder.
- `client_kwargs` (optional) - Keyword arguments for the underlying client implementation being used.

Exceptions:

- `WrongMethodError` - Thrown if a wrong method name is provided for the model.
- `InvalidArgumentError` - Thrown if arguments are invalid, e.g. when `method` cannot be None.
- `NotImplementedError` - Saving several input data objects is not implemented yet.
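As a sketch of the saving-related parameters, the call below also persists the predictions; the `predictions` path, host, and port are illustrative assumptions, and `index=True` is meant to behave as described above:

```python
from sklearn.datasets import load_iris

from mlem.api import apply_remote
from mlem.runtime.client import HTTPClient

X, _ = load_iris(return_X_y=True)
client = HTTPClient(host="0.0.0.0", port=8080)  # placeholder address

# Run prediction and save the result under the (hypothetical) "predictions" path,
# indexing it in the MLEM project root.
apply_remote(
    client,
    X,
    method="predict",
    output="predictions",
    index=True,
)
```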
Example:

```python
from numpy import ndarray
from sklearn.datasets import load_iris
from mlem.api import apply_remote
from mlem.runtime.client import HTTPClient

# Load some example data to send to the served model.
train, _ = load_iris(return_X_y=True)

# Client for the model served over HTTP at this address.
client = HTTPClient(host="0.0.0.0", port=8080)
# Run the model's `predict` method on the whole dataset.
res = apply_remote(client, train, method="predict")
assert isinstance(res, ndarray)
```
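As a follow-up on the exceptions listed above, the sketch below provokes `WrongMethodError` by asking for a method the served model does not have; it assumes the same placeholder address and that the error class is importable from `mlem.core.errors`:

```python
from sklearn.datasets import load_iris

from mlem.api import apply_remote
from mlem.core.errors import WrongMethodError  # assumed import path
from mlem.runtime.client import HTTPClient

X, _ = load_iris(return_X_y=True)
client = HTTPClient(host="0.0.0.0", port=8080)

try:
    # The served model is assumed not to expose a method with this name.
    apply_remote(client, X, method="no_such_method")
except WrongMethodError:
    print("the deployed model has no method called 'no_such_method'")
```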