Language Models are Few-Shot Learners
https://arxiv.org/abs/2005.14165

1. Abstract

1. The recent trend in NLP research
Pre-training on a massive text corpus and then fine-tuning on a specific task has produced substantial gains on many NLP tasks and benchmarks. While this method is typically task-agnostic in architecture, it still requires a task-specific fine-tuning dataset for every new task.
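The paper's alternative to per-task fine-tuning is in-context few-shot learning: the task is specified entirely through the prompt, as a natural-language task description followed by K demonstrations and then the query. Below is a minimal sketch of that prompt format; the demonstrations mirror the paper's English-to-French example, but the helper name and function signature are illustrative, not from any released code.

```python
def build_few_shot_prompt(task_description: str,
                          demonstrations: list[tuple[str, str]],
                          query: str) -> str:
    """Concatenate a task description, K solved examples, and the new query."""
    lines = [task_description]
    for source, target in demonstrations:
        lines.append(f"{source} => {target}")  # one demonstration per line
    lines.append(f"{query} =>")                # the model completes this line
    return "\n".join(lines)

# K = 2 few-shot prompt in the style of the paper's translation example;
# a model that has learned the task in context should continue with "fromage".
prompt = build_few_shot_prompt(
    "Translate English to French:",
    [("sea otter", "loutre de mer"),
     ("peppermint", "menthe poivrée")],
    "cheese",
)
print(prompt)
```

No gradient updates happen here; the only "training signal" is the K examples placed in the context window, which is exactly the setting the paper evaluates at K = 0 (zero-shot), K = 1 (one-shot), and larger K (few-shot).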