Machine learning model serving infrastructure


install · docs · examples · we're hiring · chat with us

Demo


Key features

  • Multi-framework: deploy TensorFlow, PyTorch, scikit-learn, and other models.
  • Autoscaling: automatically scale APIs to handle production workloads.
  • ML instances: run inference on G4, P2, M5, C5, and other AWS instance types.
  • Spot instances: save money by running workloads on AWS spot instances.
  • Multi-model endpoints: deploy multiple models in a single API (see the sketch below).
  • Rolling updates: update deployed APIs with no downtime.
  • Log streaming: stream logs from deployed models to your CLI.
  • Prediction monitoring: monitor API performance and prediction results.
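
The multi-model endpoint bullet can be illustrated with a rough sketch. The following is not taken from the Cortex documentation; it simply shows one way a Python predictor might hold several models and route requests between them based on a "model" field in the payload (the model names and the transformers pipelines are assumptions for illustration).

# predictor.py - illustrative multi-model sketch (not from the Cortex docs)

from transformers import pipeline  # hypothetical dependency, declared separately

class PythonPredictor:
    def __init__(self, config):
        # load each model once when the API starts
        self.models = {
            "sentiment": pipeline("text-classification"),
            "summarizer": pipeline("summarization"),
        }

    def predict(self, payload):
        # route the request to the model named in the payload
        model = self.models[payload["model"]]
        return model(payload["text"])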

Deploying a model

Install the CLI

$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.17/get-cli.sh)"

Implement your predictor

# predictor.py

class PythonPredictor:
    def __init__(self, config):
        self.model = download_model()

    def predict(self, payload):
        return self.model.predict(payload["text"])
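
download_model() above is a placeholder for your own model-loading code. As a hedged illustration only (not part of the original example), a concrete predictor for this sentiment API could load a Hugging Face transformers pipeline; the library and its default sentiment model are assumptions:

# predictor.py - concrete sketch; the transformers dependency is an assumption

from transformers import pipeline

class PythonPredictor:
    def __init__(self, config):
        # runs once per replica: download and load the model into memory
        self.model = pipeline(task="text-classification")

    def predict(self, payload):
        # payload is the parsed JSON request body
        return self.model(payload["text"])[0]["label"].lower()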

Configure your deployment

# cortex.yaml

- name: sentiment-classifier
  predictor:
    type: python
    path: predictor.py
  compute:
    gpu: 1
    mem: 4G

Deploy your model

$ cortex deploy

creating sentiment-classifier

Serve predictions

$ curl http://localhost:8888 \
    -X POST -H "Content-Type: application/json" \
    -d '{"text": "serving models locally is cool!"}'

positive
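
The same request can be made from Python; a minimal client sketch, assuming the local address shown above:

# client.py - minimal client sketch; assumes the API is serving on localhost:8888

import requests

response = requests.post(
    "http://localhost:8888",
    json={"text": "serving models locally is cool!"},
)
print(response.text)  # -> positive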

Deploying models at scale

Spin up a cluster

Cortex clusters are designed to be self-hosted on any AWS account:

$ cortex cluster up

aws region: us-east-1
aws instance type: g4dn.xlarge
spot instances: yes
min instances: 0
max instances: 5

your cluster will cost $0.19 - $2.85 per hour based on cluster size and spot instance pricing/availability

○ spinning up your cluster ...

your cluster is ready!

Deploy to your cluster with the same code and configuration

$ cortex deploy --env aws

creating sentiment-classifier

Serve predictions at scale

$ curl http://***.amazonaws.com/sentiment-classifier \
    -X POST -H "Content-Type: application/json" \
    -d '{"text": "serving models at scale is really cool!"}'

positive

Monitor your deployment

$ cortex get sentiment-classifier

status   up-to-date   requested   last update   avg request   2XX
live     1            1           8s            24ms          12

class     count
positive  8
negative  4

How it works

The CLI sends configuration and code to the cluster every time you run cortex deploy. Each model is loaded into a Docker container, along with any Python packages and request handling code. The model is exposed as a web service using a Network Load Balancer (NLB) and FastAPI / TensorFlow Serving / ONNX Runtime (depending on the model type). The containers are orchestrated on Elastic Kubernetes Service (EKS) while logs and metrics are streamed to CloudWatch.

Cortex manages its own Kubernetes cluster so that end-to-end functionality like request-based autoscaling, GPU support, and spot instance management can work out of the box without any additional DevOps work.
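
As a rough mental model only (this is not Cortex's internal code), the serving pattern is equivalent to wrapping the predictor in a small FastAPI app, which Cortex containerizes and places behind the load balancer:

# conceptual sketch - not Cortex's actual implementation

from fastapi import FastAPI, Request

from predictor import PythonPredictor  # the class shown earlier

app = FastAPI()
predictor = PythonPredictor(config={})  # instantiated once per container

@app.post("/")
async def predict(request: Request):
    payload = await request.json()     # parse the JSON request body
    return predictor.predict(payload)  # delegate to the user's predictor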


Examples
