Draft:Semantic Embeddings in Language Modeling

Language Modeling

Semantic embeddings, such as those produced by Word2Vec and GloVe, are foundational components of language modeling in natural language processing (NLP). An embedding maps each word or token to a dense numerical vector in such a way that words with similar meanings receive nearby vectors, giving models a compact representation of lexical meaning.
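As an illustration of what such embeddings look like in practice, the sketch below loads a small set of pre-trained GloVe vectors and inspects word similarities. It assumes the gensim library is installed and that the `glove-wiki-gigaword-50` model is available through its downloader; any comparable set of pre-trained vectors would serve equally well.

```python
# Illustrative sketch: exploring pre-trained GloVe embeddings via gensim.
# Assumes gensim is installed; "glove-wiki-gigaword-50" is one of the small
# pre-trained vector sets exposed by gensim's downloader.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # 50-dimensional GloVe vectors

# Words with related meanings end up close together in the vector space.
print(vectors.most_similar("king", topn=3))

# Cosine similarity between two individual words.
print(vectors.similarity("coffee", "tea"))
```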

Language modeling is a central task in natural language processing (NLP) in which models are trained on unlabeled text in an unsupervised manner. This is significant because unlabeled text is available in far greater quantities than labeled data, which requires human annotation. One common formulation is predicting a missing word in a sentence: a randomly chosen word is masked out, and the model is trained to predict it from the context supplied by the rest of the sentence. Training on this task lets a language model learn the regularities of language and improves its ability to generate coherent, contextually appropriate text.
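The masking step itself is easy to reproduce. The short sketch below (plain Python, with a hypothetical toy sentence) shows how a single training pair can be generated by hiding one token and keeping the rest as context; a real language model is trained on a vast number of such pairs.

```python
import random

# Toy illustration of creating one masked-word training pair.
# The sentence and the "[MASK]" placeholder are assumptions for the example.
tokens = "the cat sat on the mat".split()

position = random.randrange(len(tokens))   # pick a random token to hide
target = tokens[position]                  # the word the model must predict
context = tokens[:position] + ["[MASK]"] + tokens[position + 1:]

print("input :", " ".join(context))
print("label :", target)
```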

Training Embeddings

Pre-trained semantic embeddings such as those mentioned above can be used directly, but it is instructive to see how such embeddings are trained. Several ideas can be used:

  • N-gram language modeling, where a token is predicted from the N tokens that precede it.
  • Continuous Bag-of-Words (CBoW), where the middle token $W_0$ is predicted from the surrounding tokens $W_{-N},\dots,W_{-1},W_1,\dots,W_N$ (see the sketch after this list).
  • Skip-gram, where the set of neighboring tokens $\{W_{-N},\dots,W_{-1},W_1,\dots,W_N\}$ is predicted from the middle token $W_0$.
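As a concrete illustration of the CBoW approach listed above, the following sketch trains a small embedding table with PyTorch on a toy corpus. The corpus, window size, embedding dimension, and training schedule are all placeholder choices for the example, not part of any reference implementation.

```python
# Minimal CBoW sketch in PyTorch: predict the middle token from its neighbours.
# The toy corpus, window size, and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

corpus = "the quick brown fox jumps over the lazy dog".split()
vocab = sorted(set(corpus))
word_to_idx = {w: i for i, w in enumerate(vocab)}
window = 2  # N: number of context tokens on each side

# Build (context, centre) training pairs.
pairs = []
for i in range(window, len(corpus) - window):
    context = corpus[i - window:i] + corpus[i + 1:i + 1 + window]
    pairs.append(([word_to_idx[w] for w in context], word_to_idx[corpus[i]]))

class CBoW(nn.Module):
    def __init__(self, vocab_size, dim=16):
        super().__init__()
        self.embeddings = nn.Embedding(vocab_size, dim)  # the vectors being learned
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, context_ids):
        # Average the context embeddings, then score every vocabulary word.
        avg = self.embeddings(context_ids).mean(dim=0)
        return self.out(avg)

model = CBoW(len(vocab))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(50):
    for context_ids, target in pairs:
        logits = model(torch.tensor(context_ids))
        loss = loss_fn(logits.unsqueeze(0), torch.tensor([target]))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# After training, model.embeddings.weight holds one vector per vocabulary word.
print(model.embeddings.weight.shape)
```

Skip-gram reverses the direction of prediction: the centre token's embedding is used to score each neighbouring token, so the same machinery applies with the training pairs built the other way around.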

Conclusion

Training word embeddings is therefore not an especially complex task, and custom embeddings can be trained for domain-specific text when general-purpose pre-trained vectors are not a good fit.
