
BERT (language model)


BERT, short for Bidirectional Encoder Representations from Transformers, is a language model developed by Google[1]. The model uses a subword tokenization method called WordPiece to convert words into integer codes, and it learns the context of each word from both directions, left and right. BERT comes in two sizes, BASE and LARGE; the LARGE version is bigger, with 24 transformer encoder layers instead of 12. The model does not include a decoder, which makes it poorly suited to generating text. BERT has been recognized for its high performance on natural language understanding tasks, winning an award at the 2019 NAACL conference. It has been influential in the field of natural language processing, sparking the development of many follow-up models. Google uses BERT to enhance its search algorithms, and it is also used for text classification, machine reading comprehension, and more. Numerous studies and papers have been published on BERT, contributing to our understanding of its impact and effectiveness.
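To illustrate the WordPiece idea mentioned above: unknown words are split into known subword pieces by greedy longest-match-first search, with continuation pieces marked by a `##` prefix. The sketch below uses a toy three-entry vocabulary for illustration (BERT's real English vocabulary has roughly 30,000 entries), so the vocabulary and function name are assumptions, not BERT's actual implementation:

```python
def wordpiece_tokenize(word, vocab, unk="[UNK]"):
    """Split a single word into subword pieces, longest match first."""
    tokens = []
    start = 0
    while start < len(word):
        end = len(word)
        match = None
        # Try the longest remaining substring first, shrinking until a
        # vocabulary entry is found; non-initial pieces get a "##" prefix.
        while start < end:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece
            if piece in vocab:
                match = piece
                break
            end -= 1
        if match is None:
            # No piece of the word is in the vocabulary at this position.
            return [unk]
        tokens.append(match)
        start = end
    return tokens

toy_vocab = {"un", "##aff", "##able"}
print(wordpiece_tokenize("unaffable", toy_vocab))  # ['un', '##aff', '##able']
```

Each resulting piece is then mapped to its integer index in the vocabulary, which is what the model actually consumes.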

Definitions of terms
1. Google: Google is a world-renowned technology company, best known for its search engine. Founded in 1998 by Larry Page and Sergey Brin, the company has grown considerably, diversifying into various technology-related sectors. Google offers a wide range of products and services, including Gmail, Maps, Cloud, YouTube, and Android. It also produces hardware such as Pixel smartphones and Chromebooks. The company, part of Alphabet Inc. since 2015, is known for its innovation and for a corporate culture that encourages employees to work on personal projects. Although it faces various legal and ethical challenges, Google continues to influence the technology industry through its innovations and technical advances, such as the development of the Android OS and the acquisition of AI-focused companies.

Bidirectional Encoder Representations from Transformers (BERT) is a language model based on the transformer architecture, notable for its dramatic improvement over previous state-of-the-art models. It was introduced in October 2018 by researchers at Google. A 2020 literature survey concluded that "in a little over a year, BERT has become a ubiquitous baseline in Natural Language Processing (NLP) experiments counting over 150 research publications analyzing and improving the model."

BERT was originally implemented in the English language at two model sizes: (1) BERT-BASE: 12 encoders with 12 bidirectional self-attention heads totaling 110 million parameters, and (2) BERT-LARGE: 24 encoders with 16 bidirectional self-attention heads totaling 340 million parameters. Both models were pre-trained on the Toronto BookCorpus (800M words) and English Wikipedia (2,500M words).
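The parameter counts above can be roughly reproduced from the published configurations (BASE: hidden size 768, 12 layers, feed-forward size 3,072; LARGE: 1,024, 24, 4,096; both with a ~30,522-entry WordPiece vocabulary). The sketch below is an approximation that counts the main weight matrices and biases; the function name and exact bookkeeping are assumptions for illustration, and small heads such as the pooler are omitted:

```python
def bert_param_count(vocab=30522, hidden=768, layers=12, ffn=3072,
                     max_pos=512, type_vocab=2):
    """Approximate parameter count for a BERT-style transformer encoder."""
    # Embeddings: token, position, and segment tables, plus one LayerNorm.
    embed = (vocab + max_pos + type_vocab) * hidden + 2 * hidden
    # Self-attention: Q, K, V, and output projections (weights + biases).
    attn = 4 * (hidden * hidden + hidden)
    # Feed-forward block: expand to `ffn`, then project back to `hidden`.
    ff = hidden * ffn + ffn + ffn * hidden + hidden
    # Two LayerNorms per encoder layer (gain + bias each).
    norms = 2 * 2 * hidden
    return embed + layers * (attn + ff + norms)

print(f"{bert_param_count() / 1e6:.0f}M")  # 109M, i.e. roughly 110 million
```

Plugging in the LARGE configuration (`hidden=1024, layers=24, ffn=4096`) gives roughly 334 million parameters, matching the "340 million" figure once the omitted heads are included.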

" Retour à l'index des glossaires
fr_FRFR
Retour en haut