DISTRIBUTED REPRESENTATIONS OF A WORD: A SURVEY

Authors

  • Mr. D. Koteswara Rao
  • Mr. T. Rathna Kumar
  • Mrs. K. Vineela

DOI:

https://doi.org/10.46243/jst.2018.v3.i06.pp59-67

Abstract

The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this study we present several extensions that improve both the quality of the vectors and the training speed. By subsampling the frequent words we obtain a significant speedup and also learn more regular word representations. We also describe negative sampling, a simple alternative to hierarchical softmax. Two inherent limitations of word-level representations are their indifference to word order and their inability to capture idiomatic phrases; the meanings of "Canada" and "Air," for example, cannot simply be combined to obtain "Air Canada."
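Since the abstract names two concrete techniques, the sketch below illustrates both under stated assumptions: the subsampling heuristic of Mikolov et al. (2013), where an occurrence of word w is kept with probability sqrt(t / f(w)) for a relative frequency f(w) and a threshold t around 1e-5, and the negative-sampling objective for a single (center, context) pair with k sampled negatives. The function names, toy vectors, and frequencies are illustrative, not taken from any paper's implementation.

import numpy as np

rng = np.random.default_rng(0)

def keep_probability(word_freq, t=1e-5):
    """Subsampling of frequent words: each occurrence of a word is kept
    with probability sqrt(t / f(w)), clipped to 1, so frequent words
    are discarded more often than rare ones."""
    return min(1.0, np.sqrt(t / word_freq))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def negative_sampling_loss(center_vec, context_vec, negative_vecs):
    """Negative-sampling objective for one (center, context) pair:
    maximize log sigma(v_context . v_center) plus, for each of the k
    sampled negatives, log sigma(-v_neg . v_center); returned negated
    so it can be minimized as a loss."""
    pos = np.log(sigmoid(context_vec @ center_vec))
    neg = sum(np.log(sigmoid(-n @ center_vec)) for n in negative_vecs)
    return -(pos + neg)

# Toy usage: a very frequent word is kept rarely, a rare word always.
print(keep_probability(0.05))   # ~0.014 for a stop-word-like frequency
print(keep_probability(1e-6))   # 1.0 (clipped) for a rare word

dim, k = 50, 5
center = rng.normal(size=dim)
context = rng.normal(size=dim)
negatives = rng.normal(size=(k, dim))
print(negative_sampling_loss(center, context, negatives))

The subsampling step happens before training, thinning the corpus, while the negative-sampling loss replaces the full softmax so each update touches only k + 1 output vectors instead of the whole vocabulary.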

Published

2018-11-21

How to Cite

Mr. D. Koteswara Rao, Mr. T. Rathna Kumar, & Mrs. K. Vineela. (2018). DISTRIBUTED REPRESENTATIONS OF A WORD: A SURVEY. Journal of Science & Technology (JST), 3(6), 1–6. https://doi.org/10.46243/jst.2018.v3.i06.pp59-67