One of the things I'm interested in is how techniques that work in one context might work in other contexts, and what we can learn about those techniques when we go beyond their typical applications.
Word embeddings, aka word vectors, are typically trained on large corpora, such as Wikipedia, Common Crawl web pages, or massive numbers of tweets. One paper said something to the effect of "As long as your corpora have 100 million words, this technique will work."
But what if your corpus doesn't have 100 million words? What if you are interested in how an author uses words in just one book?
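To make the question concrete, here's a minimal, purely illustrative sketch of one way to get word vectors from a single small text: raw co-occurrence counts within a context window, compared with cosine similarity. This is not the technique the quoted paper describes (and real embedding methods like word2vec or GloVe do much more); the toy corpus and function names are mine, chosen just to show the scale of the problem.

```python
from collections import Counter, defaultdict
import math

def cooccurrence_vectors(tokens, window=2):
    """Build count-based word vectors from a (small) token list.

    Each word's "vector" is a Counter of how often every other word
    appears within `window` positions of it -- a bare-bones stand-in
    for trained embeddings that works even on a single book.
    """
    vectors = defaultdict(Counter)
    for i, word in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                vectors[word][tokens[j]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in set(u) & set(v))
    norm = (math.sqrt(sum(c * c for c in u.values()))
            * math.sqrt(sum(c * c for c in v.values())))
    return dot / norm if norm else 0.0

# A deliberately tiny "corpus" -- nowhere near 100 million words.
text = ("the cat sat on the mat the dog sat on the rug "
        "the cat chased the dog").split()
vecs = cooccurrence_vectors(text)
print(cosine(vecs["cat"], vecs["dog"]))
```

With a corpus this small, the similarities are noisy and dominated by a handful of shared contexts, which is exactly why the standard advice calls for enormous corpora.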