Using Low-Rank Adaptation (LoRA) to quickly fine-tune diffusion models.
Updated Jun 13, 2023 · Jupyter Notebook
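Several of the repositories listed here apply the same LoRA idea: freeze the pretrained weight matrix W and learn only a low-rank correction. A minimal plain-Python sketch of that update rule (illustrative only, not taken from any repo above; the matrix sizes and helper names are made up for the example):

```python
# LoRA update rule: instead of updating a frozen weight W (d_out x d_in)
# directly, train two small matrices B (d_out x r) and A (r x d_in) and
# merge them as W' = W + (alpha / r) * B @ A.

def matmul(X, Y):
    """Plain-Python matrix multiply for small illustrative matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))]
            for i in range(len(X))]

def lora_merge(W, A, B, alpha, r):
    """Merge a rank-r LoRA update into the frozen base weight W."""
    scale = alpha / r
    delta = matmul(B, A)  # (d_out x r) @ (r x d_in) -> (d_out x d_in)
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy example: d_out = d_in = 2, rank r = 1 (values chosen arbitrarily).
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen base weight
B = [[1.0], [2.0]]             # d_out x r, trainable
A = [[3.0, 4.0]]               # r x d_in, trainable
W_merged = lora_merge(W, A, B, alpha=1, r=1)
# delta = B @ A = [[3, 4], [6, 8]], so W_merged = [[4, 4], [6, 9]]
```

Because the merged weight has the same shape as W, inference after merging costs nothing extra; only training touches the small A and B factors.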
Open Source Pre-training Model Framework in PyTorch & Pre-trained Model Zoo
Easily build, customize and control your own LLMs
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
Fine-tuning ChatGLM-6B with PEFT | Efficient PEFT-based ChatGLM fine-tuning
An open platform for operating large language models (LLMs) in production. Fine-tune, serve, deploy, and monitor any LLM with ease.
This repo contains a PyTorch implementation of a pretrained BERT model for multi-label text classification.
Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo
[MICCAI 2019] Implementation and Pre-trained Models for Models Genesis
LibFewShot: A Comprehensive Library for Few-shot Learning.
LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models
Fine-tuning LLaMA with PEFT (PT+SFT+RLHF with QLoRA)
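The parameter-efficient methods in the adapter and PEFT repositories above all trade a full weight update for a much smaller trainable set. A hypothetical back-of-the-envelope comparison (the dimension and rank below are illustrative choices, not taken from any listed repo): full fine-tuning of one d_out x d_in projection trains d_out * d_in parameters, while its rank-r LoRA adapter trains only r * (d_in + d_out).

```python
# Trainable-parameter counts: full fine-tuning of one projection matrix
# versus its rank-r LoRA adapter.

def full_params(d_out, d_in):
    """Parameters updated by full fine-tuning of a d_out x d_in weight."""
    return d_out * d_in

def lora_params(d_out, d_in, r):
    """Parameters in the rank-r adapter: B is d_out x r, A is r x d_in."""
    return r * (d_in + d_out)

# Example: a 4096 x 4096 attention projection with rank r = 8
# (sizes chosen for illustration).
d = 4096
full = full_params(d, d)        # 16,777,216 trainable parameters
adapter = lora_params(d, d, 8)  # 65,536 trainable parameters
ratio = full / adapter          # the adapter is 256x smaller
```

This 256x reduction per layer is what makes single-GPU fine-tuning of 7B-scale models practical, especially when combined with quantized base weights as in QLoRA.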
An easy-to-understand tutorial on using LoRA (Low-Rank Adaptation) within the diffusers framework, aimed at AI generation researchers
An easy-to-use Natural Language Processing library and framework for predicting, training, fine-tuning, and serving state-of-the-art NLP models.
AI wizard powers for mere mortals
simpleT5 is built on top of PyTorch Lightning
BOND: BERT-Assisted Open-Domain Named Entity Recognition with Distant Supervision
Fine-tune EleutherAI GPT-Neo and GPT-J-6B to generate Netflix movie descriptions using Hugging Face and DeepSpeed