We welcome contributions to MLEM by the community. See the Contributing to the Documentation guide if you want to fix or update the documentation or this website.
Please search the issue tracker before creating a new issue (a problem report or an improvement request). Feel free to add issues related to the project.
For problems with the mlem.ai site, please use this GitHub repository.
If you feel that you can fix or implement it yourself, please read the sections below to learn how to submit your changes.
Write tests for your changes in tests/. You can skip this step if the effort to create tests for your change is unreasonable. Changes without tests will still be considered.
We will review your pull request as soon as possible. Thank you for contributing!
Get the latest development version. Fork and clone the repo:
$ git clone [email protected]:<your-username>/mlem.git
Make sure that you have Python 3.7 or higher installed. On macOS, we recommend
using brew to install Python. For Windows, we recommend an official python.org
release. pip version 20.3 or newer is required.
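A quick way to check both requirements before installing (the exact output format varies by system):

```shell
# Verify the interpreter and pip meet the minimum versions
python3 --version         # expect Python 3.7 or newer
python3 -m pip --version  # expect pip 20.3 or newer
```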
Install MLEM in editable mode with
pip install -e ".[tests]". But before we do
that, we strongly recommend creating a virtual environment:

$ cd mlem
$ python3 -m venv .env
$ source .env/bin/activate
$ pip install -e ".[tests]"
Install coding style pre-commit hooks with:
$ pip install pre-commit
$ pre-commit install
That's it. You should be ready to make changes, run tests, and make commits! If you experience any problems, please don't hesitate to ping us in our chat.
We have unit tests in
tests/unit/ and functional tests in
tests/func/. Consider writing the former to ensure complicated functions and classes behave as expected.
For specific functionality, you will need to use functional tests alongside
pytest fixtures to create a temporary
directory, a Git and/or MLEM repo, and bootstrap some files. See the existing
functional tests for some usage examples.
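As a sketch of that pattern, here is a hypothetical test (not from the MLEM suite) that uses pytest's built-in tmp_path fixture to bootstrap a file in a temporary directory:

```python
import json


def test_config_roundtrip(tmp_path):
    """tmp_path is a pytest fixture providing a fresh temporary directory."""
    # Bootstrap a file in the temporary directory
    cfg = {"name": "model", "version": 1}
    path = tmp_path / "config.json"
    path.write_text(json.dumps(cfg))
    # Read it back and check it survived the round trip
    assert json.loads(path.read_text()) == cfg
```

Because the fixture hands each test its own directory, tests never leak files into each other or into the working tree.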
The simplest way to run tests:
$ cd mlem
$ python -m tests
This uses pytest to run the full test suite and report the result. At the very
end you should see something like this:

$ python -m tests
...
============= 434 passed, 6 skipped, 8 warnings in 131.43 seconds ==============
Otherwise, for each failed test you should see the following output, to help you identify the problem:
...
[gw2] [ 84%] FAILED tests/unit/test_progress.py::TestProgressAware::test
tests/unit/test_prompt.py::TestConfirm::test_eof
tests/test_updater.py::TestUpdater::test
...
=================================== FAILURES ===================================
____________________________ TestProgressAware.test ____________________________
...
======== 1 failed, 433 passed, 6 skipped, 8 warnings in 137.49 seconds =========
You can pass any additional arguments to the script that will override the
default pytest options.
To run a single test case:
$ python -m tests tests/func/test_metrics.py::TestCachedMetrics
To run a single test function:
$ python -m tests tests/unit/utils/test_fs.py::test_get_inode
To pass additional arguments:
$ python -m tests --pdb
We follow PEP 8 and check that our code is formatted with black.
For docstrings, we try to adhere to the Google Python Style Guide.
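For illustration, here is a hypothetical function documented in that style (the name and fields are made up, not part of MLEM's API):

```python
from typing import Optional


def describe_object(path: str, rev: Optional[str] = None) -> dict:
    """Summarize a saved object.

    Args:
        path: Location of the object on disk.
        rev: Optional Git revision to read from.

    Returns:
        A dict with the path and revision that were requested.
    """
    return {"path": path, "rev": rev}
```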
(component): (short description)

(long description)

Fixes #(GitHub issue id).

Example:

remote: add support for Amazon S3

Fixes #123
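A commit following this convention could be created like so (demonstrated in a throwaway repository; the file name and identity settings are arbitrary):

```shell
# Set up a scratch repo just for the demo
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email "[email protected]" && git config user.name "You"
echo "stub" > remote_s3.py && git add remote_s3.py
# Subject follows "(component): (short description)"; the body carries "Fixes #N"
git commit -q -m "remote: add support for Amazon S3" -m "Fixes #123"
# Show the resulting subject and body
git log -1 --format="%s%n%n%b"
```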