[Paper Review] Language Models are Few-Shot Learners
https://arxiv.org/abs/2005.14165
Language Models are Few-Shot Learners — Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fi… (arxiv.org)
Published in 2020, this paper covers GPT-3. Since parameters and models keep growing ever larger, it looks at curbing this while achieving efficient performance..
2025. 1. 31.