🤘 awesome-semantic-segmentation
Building a modern functional compiler from first principles. (http://dev.stephendiehl.com/fun/)
Klipse is a JavaScript plugin for embedding interactive code snippets in tech blogs.
Python package for the evaluation of odometry and SLAM
End-to-end Automatic Speech Recognition for Mandarin and English in TensorFlow
SuperCLUE: A comprehensive benchmark for general-purpose Chinese large models | A Benchmark for Foundation Models in Chinese
Your open-source LLM evaluation toolkit. Get scores for factual accuracy, context retrieval quality, tonality, and more to understand the quality of your LLM applications.
🤗 Evaluate: A library for easily evaluating machine learning models and datasets (see the usage sketch after this list).
☁️ 🚀 📊 📈 Evaluating the state of the art in AI
(IROS 2020, ECCVW 2020) Official Python Implementation for "3D Multi-Object Tracking: A Baseline and New Evaluation Metrics"
Avalanche: an End-to-End Library for Continual Learning based on PyTorch.
Multi-class confusion matrix library in Python (see the usage sketch after this list).
An open-source visual programming environment for battle-testing prompts to LLMs.
Evaluation code for various unsupervised automated metrics for Natural Language Generation.
OpenCompass is an LLM evaluation platform, supporting a wide range of models (LLaMA, LLaMA 2, ChatGLM2, ChatGPT, Claude, etc.) over 50+ datasets.
Short and sweet LISP editing
The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models".
FuzzBench - Fuzzer benchmarking as a service.
XAI - An eXplainability toolbox for machine learning
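For the 🤗 Evaluate entry above, a minimal usage sketch: loading a metric by name and scoring predictions against references. It assumes the `evaluate` package is installed; the label vectors are made-up illustrative data.

```python
import evaluate

# Load a metric by name; "accuracy" is one of the built-in metrics.
accuracy = evaluate.load("accuracy")

# Compare predictions against reference labels.
results = accuracy.compute(
    references=[0, 1, 1, 0],
    predictions=[0, 1, 0, 0],
)
print(results)  # {'accuracy': 0.75} -- 3 of 4 predictions match
```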
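For the multi-class confusion matrix entry above, a short sketch against the PyCM API (assuming the entry refers to the `pycm` package); the actual/predicted vectors are toy data for illustration.

```python
from pycm import ConfusionMatrix

# Toy actual vs. predicted label vectors (illustrative data only).
actual = [0, 1, 2, 2, 1, 0]
predicted = [0, 1, 2, 1, 1, 0]

cm = ConfusionMatrix(actual_vector=actual, predict_vector=predicted)
print(cm.Overall_ACC)  # overall accuracy across all classes
print(cm)              # prints the matrix plus per-class statistics
```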