Here are 40 public repositories matching the model-serving topic:
In this repository, I will share some useful notes and references about deploying deep learning-based models in production.
Code samples for the Lightbend tutorial on writing microservices with Akka Streams, Kafka Streams, and Kafka
Updated May 30, 2019 · Scala
Common library for serving TensorFlow, XGBoost and scikit-learn models in production.
flink-jpmml is a library for dynamic, real-time machine learning predictions, built on top of PMML standard models and the Apache Flink streaming engine.
Updated May 9, 2019 · Scala
A production-ready FastAPI skeleton app for serving machine learning models.
Updated Aug 29, 2020 · Python
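
As a rough illustration of the pattern such a skeleton typically implements (this is a generic sketch, not code from the repository above; the model file name and the flat feature-vector input schema are assumptions), a minimal FastAPI prediction endpoint might look like this:

    # Minimal sketch of a FastAPI model-serving endpoint. The model file
    # name (model.pkl) and the single flat feature vector are illustrative
    # assumptions, not the actual layout of the skeleton repository above.
    import pickle
    from typing import List

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI(title="model-serving sketch")

    # Load a pre-trained model once at startup (assumed to be a pickled
    # scikit-learn estimator saved next to this file).
    with open("model.pkl", "rb") as f:
        model = pickle.load(f)

    class PredictRequest(BaseModel):
        features: List[float]  # one flat feature vector per request

    class PredictResponse(BaseModel):
        prediction: float

    @app.post("/predict", response_model=PredictResponse)
    def predict(req: PredictRequest) -> PredictResponse:
        # scikit-learn expects a 2-D array: one row per sample.
        y = model.predict([req.features])
        return PredictResponse(prediction=float(y[0]))

Run it with, for example, uvicorn main:app (assuming the file is saved as main.py) and POST JSON such as {"features": [5.1, 3.5, 1.4, 0.2]} to /predict.
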
A scalable, high-performance serving system for federated learning models
Updated Sep 10, 2020 · Java
Code and presentation for Strata Model Serving tutorial
Updated Sep 26, 2019 · Scala
An umbrella project for multiple implementations of model serving
Updated Sep 18, 2017 · Scala
fastText model serving service
Updated Jul 16, 2020 · Rust
mlserve turns your Python models into a RESTful API and serves a web page with a form generated to match your input data.
Updated Sep 4, 2020 · Python
Deploy DL/ML inference pipelines with minimal extra code.
Updated Aug 7, 2020 · Python
BentoML Example Projects Gallery
Updated Sep 11, 2020 · Jupyter Notebook
Kubeflow example of machine learning/model serving
Updated Jan 11, 2020 · Jupyter Notebook
A collection of model deployment libraries and techniques.
Generic Model Serving Implementation leveraging Flink
Titus 2: Portable Format for Analytics (PFA) implementation for Python 3.4+
Updated Apr 19, 2020 · Python
Production ready templates for deploying Driverless AI (DAI) scorers.
Updated Sep 10, 2020 · Java
Speculative model serving with Flink
Updated Sep 24, 2018 · Scala
Implementation of model serving in pipelines
Updated Nov 11, 2019 · Scala
TensorFlow Serving with Docker / Docker Compose
Updated Jan 27, 2020 · Python
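
TensorFlow Serving's stock Docker image exposes a REST API on port 8501, so a setup like the one that entry describes can be exercised with a small client. Below is a hedged sketch; the model name my_model, the host path, and the four-feature input row are assumptions, not details from the repository:

    # Sketch of calling a TensorFlow Serving container over its REST API.
    # Assumes a container started along the lines of:
    #   docker run -p 8501:8501 -e MODEL_NAME=my_model \
    #       -v /path/to/saved_model:/models/my_model tensorflow/serving
    # The model name ("my_model") and the input shape are assumptions.
    import requests

    SERVING_URL = "http://localhost:8501/v1/models/my_model:predict"

    def predict(instances):
        """Send a batch of input rows to TF Serving and return its predictions."""
        resp = requests.post(SERVING_URL, json={"instances": instances}, timeout=10)
        resp.raise_for_status()
        return resp.json()["predictions"]

    if __name__ == "__main__":
        # One example row with four features; adjust to the model's signature.
        print(predict([[5.1, 3.5, 1.4, 0.2]]))

The same request shape works whether the container is started directly with docker run or wrapped in a docker-compose service.
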
Serve deep learning models easily.
Updated Jul 14, 2020 · Python
Experimental implementation of speculative model serving
Updated May 30, 2019 · Scala
Machine learning logistics and serving platform
Updated Aug 17, 2020 · JavaScript
🍦 Serve doddle-model in a pipeline implemented with Apache Beam
Updated Nov 19, 2018 · Scala
Serving layer for large machine learning models on Apache Flink
A wiki for discussion of FlinkML concepts.
Describe the bug
I believe this is a bug, but I might just be missing how to properly use the containerize functionality.
The Dockerfile generated by https://github.com/bentoml/BentoML/blob/master/bentoml/clipper/__init__.py#L97 includes two ARGs for pip repositories. However, the containerize functionality doesn't seem to expose these arguments. The problem is that this will override other p