China’s super large-scale AI model “Wenhui” announced
On January 12, a joint research team from Alibaba, BAAI (Beijing Academy of Artificial Intelligence), Tsinghua University, and other institutions released “Wenhui”, a new super large-scale pre-training model oriented toward cognition.
The model not only improves AI’s language understanding but also enables AI creation grounded in common sense. In the future it will be applied to scenarios such as text understanding, human-computer interaction, and visual Q&A.
Unlike traditional AI training, which requires manually labeled data, the cognition-oriented pre-training language model offers a new way of learning: the AI first learns automatically from large amounts of language, text, and image data, memorizing and internalizing the regularities of the information and of human language expression. It then goes on to study professional domain knowledge, so that the AI masters both common sense and specialized knowledge at the same time.
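The two-stage recipe described above can be sketched in miniature. This is an illustrative toy, not Wenhui’s actual training code: `pretrain` stands in for self-supervised learning on unlabeled data (a next-token objective needs no manual labels), and `finetune` stands in for the later study of domain knowledge. All function and field names here are hypothetical.

```python
def pretrain(model, unlabeled_corpus):
    """Stage 1: self-supervised learning. The model predicts the next token
    from raw data, so no manual labeling is required."""
    for sample in unlabeled_corpus:
        inputs, targets = sample[:-1], sample[1:]  # next-token objective
        model["seen"].extend(inputs)               # stand-in for a gradient step
    return model

def finetune(model, domain_corpus):
    """Stage 2: further study of professional domain knowledge,
    on top of the general regularities learned in stage 1."""
    for sample in domain_corpus:
        model["domain"].extend(sample)
    return model

model = {"seen": [], "domain": []}
model = pretrain(model, [["the", "cat", "sat"], ["a", "dog", "ran"]])
model = finetune(model, [["qi", "jue", "poem"]])
```

The key property the sketch captures is ordering: general knowledge is acquired first from unlabeled data, and specialized knowledge is layered on afterward.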
“Wenhui” is currently the largest pre-training model in China, with 11.3 billion parameters. It was trained on a high-performance distributed framework developed by Alibaba that unifies multiple parallel strategies, combining model parallelism, pipeline parallelism, and data parallelism for distributed training.
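To make the parallel strategies concrete, here is a minimal illustration (not Alibaba’s framework) of how two of them partition work across four hypothetical devices: data parallelism shards the batch while every device keeps a full model copy, and pipeline parallelism splits the model’s layers into consecutive stages.

```python
layers = [f"layer{i}" for i in range(8)]  # the model's layers
batch = list(range(16))                   # one training batch of 16 samples

def shard_batch(batch, n_devices):
    """Data parallelism: each device processes a disjoint shard of the batch."""
    return [batch[i::n_devices] for i in range(n_devices)]

def pipeline_stages(layers, n_stages):
    """Pipeline parallelism: each device holds a consecutive slice of layers."""
    k = len(layers) // n_stages
    return [layers[i * k:(i + 1) * k] for i in range(n_stages)]

data_shards = shard_batch(batch, 4)      # 4 shards of 4 samples each
stages = pipeline_stages(layers, 4)      # 4 stages of 2 layers each
```

Model (tensor) parallelism, the third strategy mentioned, would further split the weight matrices inside each layer across devices; a unified framework schedules all three at once.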
“Wenhui” tackles two major challenges, multi-modal understanding and multi-modal generation: it can readily understand both text and image information and can complete creative tasks.
For example, given only the title, dynasty, and author of a poem, “Wenhui” can automatically generate a poem in classical style.
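A sketch of how such a task might be phrased as a prompt. Wenhui’s real interface is not public, so the field names and the `wenhui_generate` call are placeholders; only the three inputs (title, dynasty, author) come from the article.

```python
def build_prompt(title, dynasty, author):
    """Assemble the three user-supplied fields into a completion prompt
    (field labels in Chinese: title / dynasty / author / poem text)."""
    return f"题目：{title}\n朝代：{dynasty}\n作者：{author}\n诗文："

prompt = build_prompt("静夜思", "唐", "李白")
# poem = wenhui_generate(prompt)  # hypothetical call; would return model-written verse
```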
Currently, “Wenhui” supports a range of natural-language and cross-modal application tasks based on cognitive reasoning, and some applications will be launched soon.
Jingren Zhou, head of the Intelligent Computing Laboratory of the Alibaba DAMO Academy, said,
“The pre-training language model is one of the most innovative natural language models of the past 70 years, and its design is far more difficult than that of traditional models. The DAMO Academy research team will continue to overcome algorithmic and systems-engineering problems and accelerate the evolution of artificial intelligence toward cognitive intelligence.”