Paper link: Language Models are Few-Shot Learners (arxiv.org)

1. Introduction

Limitations of recent work
- If a model can adapt to a task from only a few examples, it can be applied to a much wider range of tasks.
- During pre-training, large ..
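A minimal sketch of what "adapting from a few examples" means here: the task is specified entirely inside the prompt as input→output demonstrations followed by a query, with no gradient updates. The translation examples mirror the few-shot format shown in the paper; `complete_fn` is a hypothetical stand-in for any language-model completion call, not an API from the paper.

```python
# Few-shot prompting: the model sees K demonstrations plus a query,
# and "adapts" to the task purely in-context -- no fine-tuning.
# `complete_fn` below is a hypothetical completion call (assumption).

def build_few_shot_prompt(task_description, demonstrations, query):
    """Concatenate a task description, K input => output examples,
    and the final query into a single prompt string."""
    lines = [task_description, ""]
    for source, target in demonstrations:
        lines.append(f"{source} => {target}")
    lines.append(f"{query} =>")  # the model continues from here
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Translate English to French.",
    [("sea otter", "loutre de mer"),
     ("cheese", "fromage")],
    "peppermint",
)
# answer = complete_fn(prompt)  # any LM text-completion interface
print(prompt)
```

Because the task specification lives in the prompt, switching tasks only requires swapping the demonstrations, which is exactly why few-shot adaptation scales to many tasks without per-task training.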