
2 posts tagged with "unsupervised learning"


Harshil Shah · 14 min read

Re:infer's machine learning algorithms are based on pre-trained Transformer models, which learn semantically informative representations of sequences of text, known as embeddings. Over the past few years, Transformer models have achieved state-of-the-art results on the majority of common natural language processing (NLP) tasks.
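As a brief illustration of the idea, the sketch below shows one common way to obtain sentence embeddings from a pre-trained Transformer. It assumes the Hugging Face transformers library, the bert-base-uncased checkpoint and mean pooling purely for illustration; it is not Re:infer's implementation.

```python
# Minimal sketch: sentence embeddings from a pre-trained Transformer.
# Assumes the Hugging Face `transformers` library and PyTorch are installed.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentences = ["Please reset my password.", "I would like to close my account."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token representations (ignoring padding) to get one vector per sentence.
mask = inputs["attention_mask"].unsqueeze(-1)
embeddings = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # e.g. torch.Size([2, 768])
```

Each sentence is mapped to a fixed-size vector; semantically similar sentences end up close together in this vector space, which is what makes embeddings useful as inputs to downstream NLP tasks.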

But how did we get here? What has led to the Transformer being the model of choice for training embeddings? Over the past decade, the biggest improvements in NLP have been due to advances in learning unsupervised pre-trained embeddings of text. In this post, we look at the history of embedding methods, and how they have improved over time.

This post will:

  • Explain what embeddings are and how they are used in common NLP applications.
  • Present a history of popular methods for training embeddings, including traditional methods like word2vec and modern Transformer-based methods such as BERT.
  • Discuss the weaknesses of embedding methods, and how they can be addressed.

Harshil Shah · 21 min read

Businesses run on communication: customers reach out when they want something, and colleagues communicate to get work done. Every message counts. Our mission at Re:infer is to unlock the value in these messages and to help every team in a business deliver better products and services efficiently and at scale.

With that goal, we continuously research and develop our core machine learning and natural language understanding technology. The machine learning models at Re:infer use pre-training, unsupervised learning, semi-supervised learning and active learning to deliver state-of-the-art accuracy with minimal time and investment from our users.

In this research post, we explore a new unsupervised approach to automatically recognising topics and intents, along with their taxonomy structure, from a communications dataset. The aim is to improve both the quality of the insights we deliver and the speed with which they are obtained.