Write With Transformer
Get a modern neural network to
auto-complete your thoughts.
This web app, built by the Hugging Face team, is the official demo of the 🤗/transformers repository's text generation capabilities.
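As an illustration (not the app's own code), here is a minimal sketch of text generation with the library, assuming the pipeline API and the publicly released "gpt2" checkpoint:

```python
from transformers import pipeline

# Load a text-generation pipeline backed by the released "gpt2" checkpoint.
generator = pipeline("text-generation", model="gpt2")

# Auto-complete a prompt with a short sampled continuation.
outputs = generator("Get a modern neural network to", max_length=40, num_return_sequences=1)
print(outputs[0]["generated_text"])
```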
Models
🦄 GPT-2
The almighty king of text generation, GPT-2 comes in four sizes, only three of which have been publicly released. Feared for its fake news generation capabilities, it currently stands as the most syntactically coherent model. A direct successor to the original GPT, it reinforces the already established pre-training/fine-tuning killer duo. From the paper: Language Models are Unsupervised Multitask Learners by Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever.
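As a hedged sketch, the released sizes can be inspected without downloading the full weights, assuming they correspond to the "gpt2", "gpt2-medium" and "gpt2-large" checkpoint names on the model hub:

```python
from transformers import GPT2Config

# Assumed hub names for the three publicly released GPT-2 sizes.
for name in ["gpt2", "gpt2-medium", "gpt2-large"]:
    config = GPT2Config.from_pretrained(name)  # fetches only the config JSON
    print(f"{name}: {config.n_layer} layers, hidden size {config.n_embd}")
```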
💯 XLNet
Overcoming the unidirectional limit while maintaining an independent masking algorithm based on permutation, XLNet improves upon the state-of-the-art autoregressive model, Transformer-XL. Using a bidirectional context while keeping its autoregressive approach, this model outperforms BERT on 20 tasks while keeping an impressive generative coherence. From the paper: XLNet: Generalized Autoregressive Pretraining for Language Understanding, by Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov and Quoc V. Le.
โ˜ ๏ธ GPT
Released by OpenAI, this seminal architecture has shown that large gains on several NLP tasks can be achieved by generative pre-training a language model on unlabeled text before fine-tuning it on a downstream task. From the paper: Improving Language Understanding by Generative Pre-Training, by Alec Radford, Karthik Naraimhan, Tim Salimans and Ilya Sutskever.
Do you want to contribute or suggest a new model checkpoint? Open an issue on 🤗/transformers 🔥.
“It is to writing what calculators are to calculus.”