The Wayback Machine - https://web.archive.org/web/20230729055429/https://github.com/topics/instruction-tuning

instruction-tuning

Here are 38 public repositories matching this topic...

Otter

🦦 Otter, a multi-modal model based on OpenFlamingo (open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing improved instruction-following and in-context learning ability.

  • Updated Jul 29, 2023
  • Python

We unified the interfaces of instruction-tuning (IFT) data (e.g., CoT data, still being expanded), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-Tuning) for easy use, building a convenient platform for LLM-IFT research. Meanwhile, we created a new branch, tabular_llm, to build a Tabular LLM for table-intelligence tasks.

  • Updated Jul 26, 2023
  • Jupyter Notebook
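As a rough illustration of what "unifying the interfaces of instruction-tuning data" can mean in practice, the sketch below renders heterogeneous IFT records (including CoT examples) into one prompt/response text format. The field names (`instruction`, `input`, `output`) and the section headers follow the common Alpaca-style convention; they are assumptions for illustration, not this repository's actual API.

```python
# Hypothetical sketch: normalizing instruction-tuning (IFT) records,
# such as CoT data, into a single prompt/response training string.
# The Alpaca-style field names and "### ..." headers are assumptions.

def format_ift_example(example: dict) -> str:
    """Render one instruction-tuning record as a training prompt."""
    prompt = f"### Instruction:\n{example['instruction']}\n\n"
    # Optional context field; omitted entirely when empty.
    if example.get("input"):
        prompt += f"### Input:\n{example['input']}\n\n"
    prompt += f"### Response:\n{example['output']}"
    return prompt

# A CoT-style record: the reasoning steps live in the output field.
cot_record = {
    "instruction": "What is 12 * 7? Think step by step.",
    "input": "",
    "output": "12 * 7 = (10 * 7) + (2 * 7) = 70 + 14 = 84. The answer is 84.",
}
print(format_ift_example(cot_record))
```

Once every dataset is flattened into this shape, the same tokenization and training loop can consume CoT data, plain instructions, or tabular tasks interchangeably, which is what makes swapping in LoRA or P-Tuning adapters straightforward.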

BayLing ("百聆") is an English/Chinese LLM equipped with advanced language alignment, showing superior capability in English/Chinese generation, instruction following, and multi-turn interaction, and achieving 90% of ChatGPT's performance on multiple tests.

  • Updated Jul 11, 2023
  • Python
