The Wayback Machine - https://web.archive.org/web/20220318174543/https://github.com/topics/pre-trained-language-models

pre-trained-language-models

Here are 27 public repositories matching this topic...

We tackle a company-name recognition task that starts from a small, low-quality training set, then apply techniques to speed up model training and improve prediction performance with minimal manual effort. The methods include lightweight pre-trained models such as Albert-small and Electra-small adapted to a financial corpus, knowledge distillation, and multi-stage learning. As a result, we improve the recall of company-name recognition from 0.73 to 0.92, and the resulting model runs four times faster than a BERT-BiLSTM-CRF baseline.

  • Updated Aug 10, 2020
  • Python
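The knowledge distillation mentioned in the description works by training the small student model to match the teacher's softened output distribution. A minimal sketch of the standard soft-target loss, assuming a temperature-scaled softmax and the conventional T² scaling (these are generic choices, not details taken from the repository):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature softens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-target loss: cross-entropy between the teacher's softened
    distribution and the student's, scaled by T^2 so its gradient
    magnitude stays comparable to a hard-label loss."""
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    ce = -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))
    return temperature ** 2 * ce
```

In practice this term is usually combined with the ordinary hard-label loss on the gold tags, with a weighting hyperparameter between the two.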
