Pre-Training with Whole Word Masking for Chinese BERT (the Chinese BERT-wwm model series; see the whole word masking sketch after this list)
BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.); see the usage sketch after this list
A Lite BERT for Self-Supervised Learning of Language Representations; a large collection of pre-trained Chinese ALBERT models
Chinese Language Understanding Evaluation (CLUE) benchmark: datasets, baselines, pre-trained models, corpus, and leaderboard
Open Source Pre-training Model Framework in PyTorch & Pre-trained Model Zoo
Pre-trained Chinese RoBERTa models: RoBERTa for Chinese; see the loading sketch after this list
Rust-native, ready-to-use NLP pipelines and transformer-based models (BERT, DistilBERT, GPT2, ...)
news-please - an integrated web crawler and information extractor for news that just works
The implementation of DeBERTa
A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT2, decoders, etc.) on CPU and GPU.
CLUENER2020: fine-grained named entity recognition for Chinese
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"; see the usage sketch after this list
Official implementation of the papers "GECToR – Grammatical Error Correction: Tag, Not Rewrite" (BEA-20) and "Text Simplification by Tagging" (BEA-21)
Integrating the Best of TF into PyTorch, for Machine Learning, Natural Language Processing, and Text Generation. This is part of the CASL project: http://casl-project.ai/
A collection of high-quality pre-trained Chinese models: state-of-the-art large models, the fastest small models, and models specialized for similarity
PhoBERT: Pre-trained language models for Vietnamese (EMNLP-2020 Findings)
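A toy sketch of the whole word masking idea behind the Chinese-BERT-wwm entry: when any WordPiece sub-token of a word is picked for masking, every sub-token of that word is masked together. The token list and helper below are illustrative only, not the repository's actual code.

```python
# Toy whole word masking (WWM) illustration; not the repo's actual code.
# In WordPiece tokenization, "##" marks a continuation of the previous token.
tokens = ["philammon", "plays", "the", "ly", "##re"]

def whole_word_mask(tokens, pick):
    """Mask every sub-token of the word that contains index `pick`."""
    start = pick
    while tokens[start].startswith("##"):  # walk back to the word's first piece
        start -= 1
    end = pick + 1
    while end < len(tokens) and tokens[end].startswith("##"):  # include trailing pieces
        end += 1
    return ["[MASK]" if start <= i < end else t for i, t in enumerate(tokens)]

# Masking either piece of "lyre" masks the whole word.
print(whole_word_mask(tokens, 4))  # ['philammon', 'plays', 'the', '[MASK]', '[MASK]']
```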
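For the BertViz entry, a minimal notebook sketch assuming the standard Hugging Face transformers API; the model ID and input sentence are placeholders.

```python
# Minimal BertViz sketch for a Jupyter notebook; the model ID and
# sentence are placeholders, not prescribed by BertViz itself.
from transformers import AutoModel, AutoTokenizer
from bertviz import head_view

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("The cat sat on the mat", return_tensors="pt")
outputs = model(**inputs)

# outputs.attentions is a tuple with one attention tensor per layer.
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
head_view(outputs.attentions, tokens)  # renders the interactive head view
```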
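For the Chinese RoBERTa entry, a loading sketch via Hugging Face transformers; `hfl/chinese-roberta-wwm-ext` is a published checkpoint, and its maintainers load it through the BERT classes rather than the RoBERTa ones.

```python
# Loading sketch for Chinese RoBERTa-wwm; per the HFL release notes,
# these checkpoints are used with BertTokenizer/BertModel.
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext")
model = BertModel.from_pretrained("hfl/chinese-roberta-wwm-ext")

inputs = tokenizer("今天天气很好", return_tensors="pt")  # "The weather is nice today"
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # torch.Size([1, seq_len, 768]) for the base model
```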
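For the loralib entry, a short sketch following the API shown in its README; the layer size and rank are illustrative choices, not recommended settings.

```python
# loralib sketch; the layer size and rank r are illustrative choices.
import torch
import torch.nn as nn
import loralib as lora

# A dense layer whose weight update is factored into rank-16 A/B matrices.
model = nn.Sequential(lora.Linear(768, 768, r=16))

# Freeze all parameters except the LoRA matrices before fine-tuning.
lora.mark_only_lora_as_trainable(model)

# After training, persist only the small LoRA delta.
torch.save(lora.lora_state_dict(model), "lora_checkpoint.pt")
```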