Paper link: Language Models are Few-Shot Learners (arxiv.org)

1. Introduction

Limitations of recent work
- If a model can adapt to a task from only a few examples, it can be applied to a much wider range of tasks.
- A large amount of information is already learned during the pretraining process.
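To make the few-shot idea concrete, below is a minimal illustrative sketch (not code from the paper) of how a few-shot prompt is assembled for in-context learning: the task is specified entirely by a handful of input/output demonstrations in the prompt, with no gradient updates. The `build_few_shot_prompt` helper is a hypothetical name used only for illustration; the translation demonstrations follow the example task shown in the paper.

```python
# A minimal sketch (assumption, not the paper's code) of few-shot prompting:
# the "training" signal is just k demonstration pairs placed in the prompt,
# and the language model is expected to continue the pattern for a new query.

def build_few_shot_prompt(task_description, examples, query):
    """Assemble a few-shot prompt: task description, k demonstrations, then the query."""
    lines = [task_description, ""]
    for src, tgt in examples:
        lines.append(f"Q: {src}")
        lines.append(f"A: {tgt}")
        lines.append("")
    lines.append(f"Q: {query}")
    lines.append("A:")  # the model completes this line at inference time
    return "\n".join(lines)

# Example: an English-to-French translation task specified with three demonstrations.
examples = [
    ("sea otter", "loutre de mer"),
    ("cheese", "fromage"),
    ("plush giraffe", "girafe en peluche"),
]
prompt = build_few_shot_prompt("Translate English to French.", examples, "mint")
print(prompt)
```

The key point is that the same pretrained model handles a new task just by changing the prompt, which is why few-shot adaptation makes the model applicable to many more tasks than task-specific fine-tuning would.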