Foundations and Applications in Large-scale AI Models: Pre-training, Fine-tuning, and Prompt-based Learning
Abstract
Deep learning techniques have advanced rapidly in recent years, driving significant progress in pre-trained and fine-tuned large-scale AI models. In the natural language processing domain, for example, the traditional "pre-train, fine-tune" paradigm is shifting towards the "pre-train, prompt, and predict" paradigm, which has achieved great success on many tasks across different application domains, such as ChatGPT/Bard for conversational AI and P5 for unified recommendation. Moreover, there has been growing interest in models that combine the vision and language modalities (vision-language models), which are applied to tasks such as visual captioning and generation. Given this recent technological revolution, it is essential to examine these paradigm shifts and highlight the paradigms with the potential to solve a broad range of tasks. We thus provide a platform for academic and industrial researchers to showcase their latest work, share research ideas, discuss open challenges, and identify areas where further research is needed in pre-training, fine-tuning, and prompt-based learning methods for large-scale AI models. We aim to foster a strong research community focused on the challenges of large-scale AI models and on developing superior, impactful strategies that can change people's lives in the future.
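To make the paradigm contrast concrete, the sketch below illustrates "pre-train, prompt, and predict" with a cloze-style template: a pre-trained masked language model scores label words directly, with no task-specific fine-tuning. This is a minimal, illustrative example assuming the Hugging Face transformers library; the model name, prompt template, and verbalizer words ("great"/"terrible") are our own hypothetical choices, not part of any specific system discussed above.

```python
# Minimal sketch of prompt-based ("pre-train, prompt, and predict") inference,
# assuming the Hugging Face `transformers` library. The model, prompt template,
# and label words below are illustrative choices only.
from transformers import pipeline

# A pre-trained masked language model, used as-is (no fine-tuning).
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

review = "The movie was absolutely wonderful."
# Cloze-style prompt: the task is recast as filling in a label word.
prompt = f"{review} Overall, it was a [MASK] movie."

# Score only the two verbalizer words that stand in for the class labels.
for result in fill_mask(prompt, targets=["great", "terrible"]):
    print(result["token_str"], f"{result['score']:.4f}")
```

Under the earlier "pre-train, fine-tune" paradigm, the same task would instead require updating the model's weights on labeled task data before prediction.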