AI Code Completions
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), so it combines the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.
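To illustrate the "train in parallel, run as an RNN" claim, here is a toy sketch in Python: a plain exponential moving average (an assumed stand-in, not RWKV's actual time-mixing formulas) computed once recurrently and once for all positions at once.

```python
# Toy illustration of parallel training vs. recurrent inference for a
# linear recurrence; the decay constant and 1-D stream are made up.
import numpy as np

x = np.random.default_rng(0).standard_normal(8)  # toy token stream
decay = 0.9

# Recurrent form: O(1) state per step, how inference would run.
state, recurrent = 0.0, []
for t in range(len(x)):
    state = decay * state + x[t]
    recurrent.append(state)

# Parallel form: the same values for every position computed at once,
# how a full training sequence can be processed in parallel.
T = len(x)
weights = np.tril(decay ** (np.arange(T)[:, None] - np.arange(T)[None, :]))
parallel = weights @ x

print(np.allclose(recurrent, parallel))  # True
```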
An implementation of model-parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.
Chinese version of GPT2 training code, using a BERT tokenizer.
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
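LoRA freezes a pretrained weight matrix W and learns only a low-rank update BA scaled by alpha/r, so very few parameters are trained. A minimal NumPy sketch of that idea (the matrix names and sizes are illustrative, not loralib's actual API):

```python
# Illustrative LoRA forward pass: y = x W^T + (alpha/r) * x A^T B^T.
# W stays frozen; only the small factors A and B would be trained.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 768, 768, 8, 16    # hypothetical sizes

W = rng.standard_normal((d_out, d_in))     # pretrained weight, frozen
A = rng.standard_normal((r, d_in)) * 0.01  # low-rank factor, trainable
B = np.zeros((d_out, r))                   # zero-init so the adapter starts as a no-op

def lora_linear(x):
    """Frozen path plus scaled low-rank update."""
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.standard_normal((4, d_in))         # a batch of 4 activations
print(lora_linear(x).shape)                # (4, 768)
```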
This repository contains demos I made with the Transformers library by HuggingFace.
GPT2 for Chinese chitchat (a GPT2 model for Chinese casual conversation, implementing DialoGPT's MMI idea).
Open Source Pre-training Model Framework in PyTorch & Pre-trained Model Zoo
An unnecessarily tiny implementation of GPT-2 in NumPy.
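For a flavor of what such a minimal NumPy GPT-2 looks like, here is a sketch of single-head causal self-attention, the core operation of the model; the function and variable names are illustrative, not the repository's actual code.

```python
# Causal self-attention in plain NumPy.
# Shapes: x is (seq_len, d_model); Wq, Wk, Wv, Wo are (d_model, d_model).
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def causal_self_attention(x, Wq, Wk, Wv, Wo):
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(x.shape[-1])            # (seq_len, seq_len)
    mask = np.triu(np.full(scores.shape, -1e10), k=1)  # block attention to future tokens
    return softmax(scores + mask) @ v @ Wo

rng = np.random.default_rng(0)
T, D = 5, 16                                           # toy sequence length and width
x = rng.standard_normal((T, D))
Wq, Wk, Wv, Wo = (rng.standard_normal((D, D)) for _ in range(4))
print(causal_self_attention(x, Wq, Wk, Wv, Wo).shape)  # (5, 16)
```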
Kashgari is a production-level NLP transfer-learning framework built on top of tf.keras for text labeling and text classification; it includes Word2Vec, BERT, and GPT2 language embeddings.
Toolkit for Machine Learning, Natural Language Processing, and Text Generation, in TensorFlow. This is part of the CASL project: http://casl-project.ai/
Prompt Engineering | Use GPT or other prompt-based models to get structured output. Join our Discord for prompt engineering, LLMs, and other recent research.
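A minimal sketch of the general pattern for getting structured output from a prompt-based model: ask for JSON with fixed keys, then parse the reply. The `complete` function below is a hypothetical placeholder for whatever completion call you use, not this project's API.

```python
# Prompt a model for JSON and parse it, with a fallback if parsing fails.
import json

def complete(prompt: str) -> str:
    # Hypothetical stand-in for a real model call; returns a canned reply
    # so the example runs end to end.
    return '{"name": "Ada Lovelace", "field": "mathematics"}'

prompt = (
    "Extract the person's name and field from the text below.\n"
    'Respond with JSON only, using the keys "name" and "field".\n\n'
    "Text: Ada Lovelace wrote about early mechanical computation."
)

raw = complete(prompt)
try:
    record = json.loads(raw)   # parse the model's reply into a dict
except json.JSONDecodeError:
    record = None              # model ignored the requested format
print(record)
```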
Large-scale pretraining for dialogue
Easily build, customize and control your own LLMs
Rust native ready-to-use NLP pipelines and transformer-based models (BERT, DistilBERT, GPT2,...)
Simple UI for LLM Model Finetuning
GPT2 for Multiple Languages, including pretrained models (multilingual GPT2 support, with a 1.5-billion-parameter Chinese pretrained model).