Background
- Encoder, Decoder
- Auto-Encoder Pretraining (a minimal sketch follows this list)
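The notes above only name the encoder-decoder architecture and auto-encoder pretraining, so the following is a minimal PyTorch sketch of the general idea rather than the paper's exact setup: the `TinySeq2Seq` module, the vocabulary size, and the random token ids are all hypothetical placeholders. The point is simply that the encoder reads a (possibly corrupted) input and the decoder is trained to reconstruct the original sequence.

```python
import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    """Minimal encoder-decoder used only to illustrate autoencoder-style pretraining."""
    def __init__(self, vocab_size=1000, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4,
            num_encoder_layers=2, num_decoder_layers=2,
            batch_first=True,
        )
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, tgt_ids, tgt_mask=None):
        src = self.embed(src_ids)            # encoder input (e.g. corrupted sequence)
        tgt = self.embed(tgt_ids)            # decoder input (shifted original sequence)
        out = self.transformer(src, tgt, tgt_mask=tgt_mask)
        return self.lm_head(out)             # per-position vocabulary logits

# One autoencoder-style pretraining step (hypothetical data, for illustration only).
model = TinySeq2Seq()
src = torch.randint(0, 1000, (2, 16))        # corrupted / masked input ids
tgt = torch.randint(0, 1000, (2, 16))        # original ids the decoder must reconstruct
tgt_in = tgt[:, :-1]                         # teacher-forced decoder input
tgt_mask = nn.Transformer.generate_square_subsequent_mask(tgt_in.size(1))
logits = model(src, tgt_in, tgt_mask)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), tgt[:, 1:].reshape(-1)
)
loss.backward()
```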
Abstract
- Unlike prior work that pretrains by masking individual tokens, training here is carried out on contiguous segments of tokens (see the sketch below).
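To make the contrast concrete, here is a minimal sketch of token-level versus segment-level masking. The `[MASK]` symbol, the masking ratios, and the function names are assumptions for illustration, not the paper's exact procedure; the key difference is that the segment variant hides one contiguous span that the model must reconstruct.

```python
import random

MASK = "[MASK]"  # placeholder mask symbol (assumption)

def token_level_mask(tokens, ratio=0.15, seed=0):
    """Mask individual tokens independently (BERT-style baseline, for comparison)."""
    rng = random.Random(seed)
    return [MASK if rng.random() < ratio else t for t in tokens]

def segment_level_mask(tokens, ratio=0.5, seed=0):
    """Mask one contiguous segment covering roughly `ratio` of the sequence."""
    rng = random.Random(seed)
    seg_len = max(1, int(len(tokens) * ratio))
    start = rng.randint(0, len(tokens) - seg_len)
    target = tokens[start:start + seg_len]          # span the model must reconstruct
    masked = list(tokens)
    masked[start:start + seg_len] = [MASK] * seg_len
    return masked, target, start

tokens = "the quick brown fox jumps over the lazy dog".split()
masked, target, start = segment_level_mask(tokens)
print(token_level_mask(tokens))  # scattered [MASK] tokens
print(masked)                    # one contiguous [MASK] span in the encoder input
print(target)                    # tokens the decoder is trained to predict
```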
Methods
Reference