wordfreq 1.4: more words, plus word frequencies from Reddit

The wordfreq module is an easy Python interface for looking up the frequencies of words. It was originally designed for use cases where it was most important to find common words, so it would list all the words that occur at least once per million words: that’s about 30,000 words in English. An advantage of ending the list there is that it loads really fast and takes up a small amount of RAM.

But there’s more to know about word frequencies. There’s a difference between words that are used a bit less than once in a million words, like “almanac”, “crusty”, and “giraffes”, versus words that are used just a few times per billion, such as “centerback”, “polychora”, and “scanlations”. As I’ve started using wordfreq in some aspects of the build process of ConceptNet, I’ve wanted to be able to rank words by frequency even if they’re less common than “giraffes”, and I’m sure other people do too.

So one big change in wordfreq 1.4 is that there is now a ‘large’ wordlist available in the languages that have enough data to support it: English, German, Spanish, French, and Portuguese. These lists contain all words used at least once per 100 million words. The default wordlist is still the smaller, faster one, so you have to ask for the ‘large’ wordlist explicitly — see the documentation.
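Here's what that looks like in code — a quick sketch, assuming the wordlist keyword argument (the positional form used in the examples below works too):

import wordfreq

# The default wordlist only includes words that occur at least once per
# million words; rarer words come back with a frequency of 0.
freq_default = wordfreq.word_frequency('almanac', 'en')

# The 'large' wordlist goes down to once per 100 million words, at the
# cost of loading more data.
freq_large = wordfreq.word_frequency('almanac', 'en', wordlist='large')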

Including word frequencies from Reddit

The best way to get representative word frequencies is to include a lot of text from a lot of different sources. Now there’s another source available: the Reddit comment corpus.

Reddit is an English-centric site and 99.2% of its comments are in English. We still need to account for the exceptions, such as /r/es, /r/todayilearned_jp, /r/sweden, and of course, the thread named “HELP reddit turned spanish and i cannot undo it!”.

I used pycld2 to detect the language of Reddit comments. In this version, I decided to only use the comments that could be detected as English, because I couldn’t be sure that the data I was getting from other languages was representative enough. For example, unfortunately, most comments in Italian on Reddit are spam, and most comments in Japanese are English speakers trying to learn Japanese. The data that looks the most promising is Spanish, and I might decide to include that in a later version.
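The filtering looked roughly like this — a simplified sketch, not the exact build code, assuming pycld2's detect function (which returns a reliability flag and a ranked list of language guesses):

import pycld2

def is_confidently_english(text):
    # pycld2.detect returns (is_reliable, bytes_found, details), where
    # details[0] is the top guess as (language_name, code, percent, score).
    try:
        is_reliable, _, details = pycld2.detect(text)
    except pycld2.error:
        return False
    return is_reliable and details[0][1] == 'en'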

So now some Reddit-centric words have claimed a place in the English word list, alongside words from Google Books, Wikipedia, Twitter, television subtitles, and the Leeds Internet Corpus:

>>> wordfreq.word_frequency('upvote', 'en')
1.0232929922807536e-05

>>> wordfreq.word_frequency('eli5', 'en', 'large')
6.165950018614822e-07

One more thing: we only use words from comments with a score of 1 or more. This helps keep the worst examples of spam and trolling from influencing the word list too much.
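In sketch form, assuming the corpus's JSON-lines format with 'score' and 'body' fields:

import json

def scored_comment_text(path):
    # Yield the text of comments whose score is at least 1.
    with open(path, encoding='utf-8') as infile:
        for line in infile:
            comment = json.loads(line)
            if comment.get('score', 0) >= 1:
                yield comment.get('body', '')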

The Zipf frequency scale

When Marc Brysbaert let me include his excellent SUBTLEX data (word frequencies from television subtitles) as part of wordfreq, he asked me to include his preferred frequency scale as an option. I agreed, and I find it nicer than looking at raw frequencies.

The Zipf scale is a logarithmic scale of word frequency that’s meant to give you intuitive, small, positive numbers for reasonable words: it’s 9 plus the log (base 10) of the word frequency. This was easy to include in wordfreq, because it stores its frequencies on a log-10 scale anyway. You can now use the wordfreq.zipf_frequency function to see frequencies on this scale.

>>> wordfreq.zipf_frequency('people', 'en', 'large')
6.23

>>> wordfreq.zipf_frequency('cats', 'en', 'large')
4.42

>>> wordfreq.zipf_frequency('giraffes', 'en', 'large')
3.0

>>> wordfreq.zipf_frequency('narwhals', 'en', 'large')
2.1

>>> wordfreq.zipf_frequency('heffalumps', 'en', 'large')
1.78

>>> wordfreq.zipf_frequency('borogoves', 'en', 'large')
1.16

wordfreq is part of a stack of natural language tools developed at Luminoso and used in ConceptNet. Its data is available under the Creative Commons Attribution-ShareAlike 4.0 license.

Cramming for the test set: We need better ways to evaluate analogies

The publication of word2vec (as “Efficient Estimation of Word Representations in Vector Space” by Mikolov et al.) got a considerable amount of attention by demonstrating that a representation designed to predict words in context could also be used to predict analogies between words. The word2vec authors demonstrated this by including their own corpus of analogies for evaluation. Since then, other representations have been evaluated against that same corpus.

But a word representation that is better at capturing general knowledge of the relationships between things won’t necessarily do better on Mikolov et al.’s evaluation. That evaluation tests numerous examples of only a few types of analogies:

  • Geographical facts, such as “Athens : Greece :: Baghdad : Iraq”
  • Gender-swapping analogies, such as “man : woman :: king : queen”
  • Names of international currency, such as “Angola : kwanza :: Armenia : dram”
  • Morphological relationships, such as “free : freely :: happy : happily”
  • Factoids about multi-word named entities, such as “Baltimore : Baltimore Sun :: Cleveland : Cleveland Plain Dealer”

The multi-word named entities are usually considered separately. Even word2vec, which this evaluation was designed to evaluate, required a differently-trained vector space to be able to get entities like “Cleveland Plain Dealer” into its vocabulary.

Conceptnet Numberbatch and analogy questions

I’ve been posting about the state-of-the-art set of word embeddings, Conceptnet Numberbatch, and you might wonder how it does on word2vec’s analogies. So even though I’m not a big fan of the word2vec analogy data, I ran a quick evaluation to find out, using Omer Levy’s 3CosMul metric for choosing the best analogies (sketched after the results below). Here’s how it scored, broken down by the type of question:

  • Geography: 95.6%
  • Gender: 95.8%
  • Currency: 45.5%
  • Morphology: ???
  • Multi-word: 2.2% (most terms are out-of-vocabulary)
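For reference, here’s roughly what 3CosMul does when answering “a : b :: c : ?” — a sketch of Levy and Goldberg’s formula with similarities shifted to be non-negative, not the evaluation code I actually ran:

import numpy as np

def similarity(u, v):
    # Cosine similarity, shifted from [-1, 1] to [0, 1] so the ratio below behaves.
    return (1 + np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))) / 2

def three_cos_mul(vectors, a, b, c, epsilon=0.001):
    # Choose d to maximize sim(d, c) * sim(d, b) / (sim(d, a) + epsilon).
    best_term, best_score = None, -1.0
    for term, d in vectors.items():
        if term in (a, b, c):
            continue
        score = (similarity(d, vectors[c]) * similarity(d, vectors[b])
                 / (similarity(d, vectors[a]) + epsilon))
        if score > best_score:
            best_term, best_score = term, score
    return best_term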

Let’s talk about the question marks next to “Morphology”. It doesn’t make sense to ask Numberbatch about morphology. Like most English NLP systems but unlike word2vec, Numberbatch expects morphology to be handled as a separate step. This is a better plan than forgetting everything we know about morphology and hoping the system can rediscover it.

The overwhelming majority of the morphology questions look like “write : writes :: work : works”. Notice that answering this question involves nothing about the meanings of the words “write” and “work”. In fact, the less a system knows about meaning, the less there will be to distract it from its morphological task of adding the letter “s”.

Numberbatch has the same representation for “write” and “writes”, and I think this is reasonable for a system focused on semantics. They have the same meaning, just different morphology. If you want to do morphology, ask a lemmatizer.

So Numberbatch does well on some categories, and it could probably be tuned to do better. But I think this tuning would be counterproductive, because it would reward memorized facts over general knowledge.

Teaching to the test

word2vec’s evaluation was a fine demonstration of the capabilities of word2vec when it was published, but it doesn’t make much sense as a gold standard.

I believe that a system that aces the whole evaluation could be made out of existing tools, and it wouldn’t have very much to do with semantic vectors. Given the analogy A : B :: C : D, it would just look up A and B in Wikipedia and Wiktionary, find connections between them, and return the thing that C is connected to in the same way. Using a pre-parsed version of Wikipedia and Wiktionary would help, and those are things I’ve been working with. You could add in a lemmatizer, but the best lemmatizers are basically condensed versions of Wiktionary anyway.

This would be a silly thing to make. It’s like telling a human student exactly what’s on the test, and letting them bring as many notes as they want. Nothing is left but a test of ability to look things up.

From a machine learning point of view, you might call it “training on the test set”, but I don’t think it’s quite the same thing. There’s no training step involved here. Call it “cramming for the test set” instead. The analogy evaluation is a test of whether your system knows facts and morphology, so knowing facts and morphology is how you succeed at it.

Let’s put this back in perspective, though. The reason the word2vec paper was remarkable is that word2vec wasn’t designed to know facts, or even to be able to make analogies at all. It was designed to predict words in the context of other words, and it happened to be able to make analogies. That was the cool part.

Now that we expect word vectors to be able to form analogies, let’s expect more from our analogies.

English tests for people and computers

Above, I compared a computer running an evaluation to a human learner taking a test. If you want to test whether a human understands analogies, you don’t ask them 10,000 questions about geography. You ask them a lot of different things. So I went looking for analogy tests for people.

I think these kinds of analogy “equations” are falling out of favor in education, probably for good reason. They’re artificial and they have a lot to do with test-taking skills. They’re not on the SAT anymore, so if you really want to know whether a high-schooler gets analogies, now you use a separate test called the Miller Analogies Test. I think they’re still pretty reasonable for computers. Computers like equations, and they have mad test-taking skills.

Here are some simple analogies that a semantic representation should be able to make, which I found on a website of resources for English teachers:

  • mouth : eat :: feet : walk
  • awful : bad :: fantastic : good
  • brick : wall :: page : book
  • poor : money :: sad : happiness
  • June : July :: Monday : Tuesday
  • umbrella : rain :: sunscreen : sun

And here are some more difficult ones, from a test-prep book for the Miller Analogies Test:

  • articulate : speech :: coordinated : movement
  • inception : conclusion :: departure : arrival
  • scintillating : dullness :: boisterous : calm
  • elucidate : clarity :: illuminate : light
  • shard : pottery :: splinter : wood
  • attenuate : signal :: dampen : enthusiasm

These examples of analogies from tests also come with multiple-choice distractors, in contrast to the word2vec evaluation, where the vocabulary of all the questions is used as the set of distractors.

Unlike geographical facts, these questions don’t have answers that can simply be looked up. There’s no data set that would name the relationship between “articulate” and “speech” for you in such a way that you can apply the same relationship to “coordinated”. You need a system that can discover a representation of that relationship, and that’s what a good set of semantic vectors can do.

It seems that we can evaluate our semantic systems by giving them tests that were originally designed for people. This approach to semantic evaluation has been used, for example, by Peter Turney, who used SAT questions in “A Uniform Approach to Analogies, Synonyms, Antonyms, and Associations” and related publications.

And now for the big problem: people who write test questions write them under extremely restrictive terms of use. I’d better hope fair use really exists so I can even quote twelve of them here. Turney’s results can no longer be reproduced, through no fault of his, because he is not allowed to distribute his test data.

It would be great if someone who wrote test-prep questions would cooperate with the NLP community and make some of their questions available as an evaluation. I tried e-mailing the website that had the first set of questions on it. I never got a response, and I assume they’re filtering my e-mail as “Strange AI guy” now.

Making it possible to evaluate analogies

There are some great data sets out there about word similarities. MEN-3000, Rare Words, and WordSim-353 are all good examples. They’re in convenient text formats, they’re usually split into development and test sets, and they’re free to redistribute so that your experiments are reproducible.

There should be a way to get analogies up to the same standard. I’ve heard that other people who do this kind of semantics are also looking for a good analogy evaluation. We could get an evaluation corpus the traditional way, with human effort, and divide up the task of making an analogy test for computers among researchers and their students. It wouldn’t be enough for one person or one research group to write all the questions, because they would only write the kinds of questions they expect to be able to handle.

If there were a grant that could fund this, we could more straightforwardly spend money on the problem: we could buy the rights to these test-prep materials from somebody, so that we can convert them into convenient evaluation data, use them, and release them under a Creative Commons license.

Whether their preference is for neural networks, semantic graphs, or logical inferences, many schools of thought on computational semantics agree that analogies are an interesting and relevant task. We should take the opportunity to make our progress on this task measurable and reproducible by obtaining an open, sufficiently general corpus of analogies.

Conceptnet Numberbatch: a new name for the best word embeddings you can download

Recently at Luminoso, we’ve been promoting one of the open-source, open-data products of our research: a set of semantic vectors that we made by combining ConceptNet with other data sources. As I’m launching this new ConceptNet blog, it’s a good time to promote it some more, as it shows why the knowledge in ConceptNet is more important than ever.

Semantic vectors (also known as word embeddings from a deep-learning perspective) let you compare word meanings numerically. Our vectors are measurably better for this than the well-known word2vec vectors (the ones you download from the archived word2vec project page that are trained on Google News), and they’re also measurably better than the GloVe vectors.

To be fair, this system takes word2vec and GloVe as inputs so that it can improve them. One great thing about vector representations is that you can put them together into an ensemble that’s better than its parts.

The name we gave it when writing a paper about the system is quite a mouthful: the “ConceptNet Vector Ensemble”. I found myself stumbling over the name when giving updates on it at meetings, while trying to get people not to shorten it to “ConceptNet”, which is a much broader project. It’s hard to get this to catch on as an improvement over word2vec if it has such an anti-catchy name.

Last week, Google released an English parsing model named “Parsey McParseface”. Everybody has heard about it. Giving your machine-learning model a silly Internetty name seems to be a great idea.

And that’s why the ConceptNet Vector Ensemble is now named Conceptnet Numberbatch.

It even remains an accurate, descriptive name! I bet Google’s parser doesn’t even have a face.

What does Conceptnet Numberbatch do?

Conceptnet Numberbatch is a set of semantic vectors: it associates words and phrases in a variety of languages with lists of 600 numbers, representing the gist of what they mean.

Some of the information that these vectors represent comes from ConceptNet, a semantic network of knowledge about word meanings. ConceptNet is collected from a combination of expert-created resources, crowdsourcing, and games with a purpose.

If you want to apply machine learning to the meanings of words and sentences, you probably want your system to start out knowing what a lot of words mean. By comparing semantic vectors, you can find search results that are “near misses” that don’t exactly match the search term, you can tell when one sentence is a paraphrase of another sentence, and you can discover the general topics that are being talked about by finding clusters of vectors.

Here’s an example that we can step through. Suppose we want to ask Conceptnet Numberbatch whether Benedict Cumberbatch is more like an actor or an otter. We start by looking up the rows labeled cumberbatch, actor, and otter in Numberbatch. This gives us a 600-dimensional unit vector for each of them. Here are all of them graphed component-by-component:

These are pretty hard for us to compare visually, but arrays of numbers are quite easy for computers to work with. The important thing here is that vectors that are similar will point in similar directions (which means they have a high dot product as unit vectors). When we look at them component-by-component here, that means that a vector is similar to another vector when they are positive in the same places and negative in the same places. We can visualize this similarity by multiplying the vectors component-wise:

The cumberbatch * actor plot shows a lot more positive components and fewer negative components than cumberbatch * otter, particularly near the left side. The term cumberbatch is like actor in many ways, and unlike it in very few ways. Adding up the component-wise products, we find that cumberbatch is 0.35 similar to actor on a scale from -1 to 1, and it’s only 0.04 similar to otter.
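In code, that comparison is just a dot product — a sketch, where `vectors` stands for whatever mapping from term labels to unit-length arrays you’ve loaded from the Numberbatch data:

import numpy as np

def similarity(vectors, term1, term2):
    # For unit vectors, the dot product is the sum of the component-wise
    # products, which is the cosine similarity.
    return float(np.dot(vectors[term1], vectors[term2]))

# similarity(vectors, '/c/en/cumberbatch', '/c/en/actor')  ->  about 0.35
# similarity(vectors, '/c/en/cumberbatch', '/c/en/otter')  ->  about 0.04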

Another way to understand these vectors is to rank the semantic vectors that are most similar to them. Here are examples for the three vectors we looked at:

otter

/c/en/otter                  1.000000
/c/en/japanese_river_otter   0.993316
/c/en/european_otter         0.988882
/c/en/otterless              0.951721
/c/en/water_mammal           0.938959
/c/en/otterlike              0.872185
/c/en/otterish               0.869584
/c/en/lutrine                0.838774
/c/en/otterskin              0.833183
/c/en/waitoreke              0.694700
/c/en/musteline_mammal       0.680890
/c/en/raccoon_dog            0.608738

actor

/c/en/actor                  1.000001
/c/en/role_player            0.999875
/c/en/star_in_film           0.950550
/c/en/actorial               0.900689
/c/en/actorish               0.866238
/c/en/work_in_theater        0.853726
/c/en/star_in_movie          0.844339
/c/en/stage_actor            0.842363
/c/en/kiruna_stamell         0.813768
/c/en/actress                0.798980
/c/en/method_act             0.777413
/c/en/in_film                0.770334

cumberbatch

/c/en/cumberbatch            1.000000
/c/en/cumbermania            0.871606
/c/en/cumberbabe             0.853023
/c/en/cumberfan              0.837851
/c/en/sherlock               0.379741
/c/en/star_in_film           0.373129
/c/en/actor                  0.367241
/c/en/role_player            0.367171
/c/en/hiddlestoner           0.355940
/c/en/hiddleston             0.346617
/c/en/actorfic               0.344154
/c/en/holmes                 0.337961

We evaluated Numberbatch on several measures of semantic similarity. A system scores highly on these tests when it makes the same judgments about which words are similar to each other that a human would. Across the board, Numberbatch is the system with the most human-like similarity judgments. The code and data that support this are available on GitHub.

How does this fit into ConceptNet in general?

ConceptNet is a semantic network of knowledge about word meanings. Since 2007, long before anyone called these “word embeddings”, we’ve provided vector representations of the terms in ConceptNet that can be compared for similarity. We used to make these by decomposing the link structure of ConceptNet using SVD. Now, a variation on Faruqui et al.’s retrofitting does the job better, and that’s what Numberbatch does.

The current version of Numberbatch, 16.04, uses a transformed version of ConceptNet 5.4. It’s not available through the ConceptNet API — for now, you download Numberbatch separately from its own GitHub page.

ConceptNet 5.5 is going to arrive soon, and a new version of Numberbatch based on that data will be merged into its codebase.

Wait, why did the N become lowercase?

You sure ask the important questions, hypothetical reader. Keeping the N in ConceptNet capitalized would be more consistent, but it’d break the flow. You’d probably read “ConceptNet Numberbatch” in a way that sounds less like a double-dactyl name than “Conceptnet Numberbatch” does.

Capitalize the N if you want. Lowercase all the letters if you want. The orthography of these project names isn’t sacred anyway. ConceptNet itself originated from a project that could be called “OpenMind Commonsense”, “OpenMind CommonSense”, “Open Mind Commonsense”, or various other variations until we let it settle on four normal words, “Open Mind Common Sense”. (OMCS was named in the ’90s. Give everyone involved a break.)

Please explain the name and why otters are involved

There’s a fine Internet tradition of concocting names that sound very approximately like “Benedict Cumberbatch”, and now we’ve adopted one such name for our research. For more details, you should read A Linguist Explains the Rules of Summoning Benedict Cumberbatch on The Toast. Then, if you manage to come back from there, you should gaze upon Red Scharlach’s Otters Who Look Like Benedict Cumberbatch.

Conceptnet Numberbatch is entirely our own choice of name, and should not indicate affiliation with or endorsement by any person or any otter.

Coincidentally, back in the day, ConceptNet 3 was partly developed on a PowerMac named “otter”.

The particular otter at the top of this post was photographed by Bernard Landgraf, who has taken several excellent nature photos for Wikipedia. The photo is freely available under a Creative Commons Attribution-ShareAlike 3.0 license.

No otters were harmed in the production of this research.

An introduction to the ConceptNet Vector Ensemble

Originally published on April 6, 2016.

Here’s a big idea that’s taken hold in natural language processing: meanings are vectors. A text-understanding system can represent the approximate meaning of a word or phrase by representing it as a vector in a multi-dimensional space. Vectors that are close to each other represent similar meanings.

A fragment of a concept-cloud visualization of the ConceptNet Vector Ensemble (CNVE). Words that appear close to each other are similar.

Vectors are how Luminoso has always represented meaning. When we started Luminoso, this was seen as a bit of a crazy idea.

It was an exciting time when the idea of vectors as meanings was suddenly popularized by the Google research project word2vec. Now this isn’t considered a crazy idea anymore; it’s considered the effective thing to do.

Luminoso’s starting point — its model of word meanings when it hasn’t seen any of your documents — comes from a vector-based representation of ConceptNet 5. That gives it general knowledge about what words mean. These vectors are then automatically adjusted based on the specific way that words are used in your domain.

But you might well ask: if these newer systems such as word2vec or GloVe are so effective, should we be using them as our starting point?

As the girl in the Old El Paso commercial asks: why not both?

The best representation of word meanings we’ve seen — and we think it’s the best representation of word meanings anyone has seen — is our new ensemble that combines ConceptNet, GloVe, PPDB, and word2vec. It’s described in our paper, “An Ensemble Method to Produce High-Quality Word Embeddings”, and it’s reproducible using this GitHub repository.

We call this the ConceptNet Vector Ensemble. These domain-general word embeddings fill the same niche as, for example, the word2vec Google News vectors, but by several measures, they represent related meanings more like people do.

A comparison of some word-embedding systems on two measures of word relatedness. Our system, CNVE, is the red dot in the upper right.

Expanding on “retrofitting”

Manaal Faruqui’s Retrofitting, from CMU’s Language Technologies Institute, is a very cool idea.

Every system of word vectors is going to reflect the set of data it was trained on, which means there’s probably more information from outside that data that could make it better. If you’ve got a good set of word vectors, but you wish there was more information it had taken into account — particularly a knowledge graph — you can use a fairly straightforward “retrofitting” procedure to adjust the vectors accordingly.
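In outline, the procedure looks something like this — a minimal sketch of the Faruqui et al. update rule with uniform weights, not the code we actually run:

import numpy as np

def retrofit(original, neighbors, iterations=10, alpha=1.0, beta=1.0):
    # original: dict of term -> vector from the starting embedding
    # neighbors: dict of term -> list of terms linked to it in the knowledge graph
    vectors = {term: vec.copy() for term, vec in original.items()}
    for _ in range(iterations):
        for term, links in neighbors.items():
            linked = [vectors[other] for other in links if other in vectors]
            if term not in original or not linked:
                continue
            # Pull each vector toward the average of its graph neighbors,
            # while keeping it anchored to its original position.
            vectors[term] = ((alpha * np.sum(linked, axis=0) + beta * original[term])
                             / (alpha * len(linked) + beta))
    return vectors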

Starting with some vectors and adjusting them based on new information — that sure sounds like what I just described about what Luminoso does, right? Faruqui’s retrofitting is not the particular process we use inside Luminoso’s products, but the general idea is related enough to Luminoso’s proprietary process that working with it was quite natural for us, and we found that it does work well.

There’s one idea from our process that can be added to retrofitting easily: if you have information about words that weren’t in your vocabulary to start with, you should automatically expand your vector space to include them.
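One simple way to do that, sketched here under the same assumptions as the retrofitting sketch above (the real process is more involved):

import numpy as np

def expand_vocabulary(vectors, neighbors):
    expanded = dict(vectors)
    for term, links in neighbors.items():
        if term in expanded:
            continue
        known = [expanded[other] for other in links if other in expanded]
        if known:
            # A term that's in the graph but not the embedding starts out as
            # the average of its known neighbors; retrofitting refines it.
            expanded[term] = np.mean(known, axis=0)
    return expanded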

Faruqui describes some retrofitting combinations that work well, such as combining GloVe with WordNet. I don’t think anyone had tried doing anything like this with ConceptNet before, and it turns out to be a pretty powerful source of knowledge to add. And when you add this idea of automatically expanding the vocabulary, now you can also represent all the words and phrases in ConceptNet that weren’t in the vocabulary of your original vector space, such as words in other languages.

The multilingual knowledge in ConceptNet is particularly relevant here. Our ensemble can learn more about words based on the things they translate to in languages besides English, and it can represent those words in other languages with the same kind of vectors that it uses to represent English words.

There’s clearly more to be done to extend the full power of this representation to non-English languages. It would be better, for example, if it started with some text in other languages that it could learn from and retrofit onto, instead of relying entirely on the multilingual links in ConceptNet. But it’s promising that the Spanish vectors that our ensemble learns entirely from ConceptNet, starting from having no idea what Spanish is, perform better at word similarity than a system trained on the text of the Spanish Wikipedia.

On the other hand, you have GloVe

For some reason, everyone in this niche talks about word2vec and few people talk about the similar system GloVe, from Stanford NLP. We were more drawn to GloVe as something to experiment with, as we find the way it works clearer than word2vec.

When we compared word2vec and GloVe, we got better initial results from GloVe. Levy et al. report the opposite. I think what this shows is that a whole lot of the performance of these systems is in the fine details of how you use them. And indeed, when we tweak the way we use GloVe — particularly when we borrow a process from ConceptNet to normalize words to their root form — we get word similarities that are much better than word2vec and the original GloVe, even before we retrofit anything onto it.

You can probably guess the next step: “why don’t we use both?” word2vec’s most broadly useful vectors come from Google News articles, while GloVe’s come from reading the Web at large. Those represent different kinds of information. Both of them should be in the system. In the ConceptNet Vector Ensemble, we build a vector space that combines word2vec and GloVe before we start retrofitting.
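Here’s one illustrative way to merge two vector spaces, shown only as a sketch — the paper describes the actual ensemble method, which aligns vocabularies more carefully and reduces dimensionality afterward:

import numpy as np

def concatenate_embeddings(space_a, space_b):
    # For terms that appear in both spaces, normalize each vector and
    # concatenate them, so information from both sources is preserved.
    merged = {}
    for term in set(space_a) & set(space_b):
        vec_a = space_a[term] / np.linalg.norm(space_a[term])
        vec_b = space_b[term] / np.linalg.norm(space_b[term])
        merged[term] = np.concatenate([vec_a, vec_b])
    return merged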

The data flow of building the ConceptNet Vector Ensemble.

You can see that creating state-of-the-art word embeddings involves ideas from a number of different people. A few of them are our own — particularly ConceptNet 5, which is entirely developed at Luminoso these days, and the various ways we transformed word embeddings to make them work better together.

This is an exciting, fast-moving area of NLP. We’re telling everyone about our vectors because the openness of word-embedding research made them possible, and if we kept our own improvement quiet, the field would probably find a way to move on without it at the cost of some unnecessary effort.

These vectors are available for download under a Creative Commons Attribution Share-Alike license. If you’re working on an application that starts from a vector representation of words — maybe you’re working in the still-congealing field of Deep Learning methods for NLP — you should give the ConceptNet Vector Ensemble a try.

wordfreq 1.2 is better at Chinese, English, Greek, Polish, Swedish, and Turkish

Originally posted on October 29, 2015.
Wordfreq 1.2 example code: examples in Chinese and British English.

In a previous post, we introduced wordfreq, our open-source Python library that lets you ask “how common is this word?”

Wordfreq is an important low-level tool for Luminoso. It’s one of the things we use to figure out which words are important in a set of text data. When we get the word frequencies figured out in a language, that’s a big step toward being able to handle that language from end to end in the Luminoso pipeline. We recently started supporting Arabic in our product and improved Chinese enough to take the “BETA” tag off of it; having the right word frequencies for those languages was a big part of that.

I’ve continued to work on wordfreq, putting together more data from more languages. We now have 17 languages that meet the threshold of having three independent sources of word frequencies, which we consider important for those word frequencies to be representative.

Here’s what’s new in wordfreq 1.2:

  • The English word list has gotten a bit more robust and a bit more British by including SUBTLEX, adding word frequencies from American TV shows as well as the BBC.
  • It can fearlessly handle Chinese now. It uses a lovely pure-Python Chinese tokenizer, Jieba, to handle multiple-word phrases, and Jieba’s built-in wordlist provides a third independent source of word frequencies. Wordfreq can even smooth over the differences between Traditional and Simplified Chinese.
  • Greek has also been promoted to a fully-supported language. With new data from Twitter and OpenSubtitles, it now has four independent sources.
  • In some applications, you want to tokenize a complete piece of text, including punctuation as separate tokens. Punctuation tokens don’t get their own word frequencies, but you can ask the tokenizer to give you the punctuation tokens anyway (see the example after this list).
  • We added support for Polish, Swedish, and Turkish. All those languages have a reasonable amount of data that we could obtain from OpenSubtitles, Twitter, and Wikipedia by doing what we were doing already.
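The punctuation option mentioned above looks something like this — assuming the include_punctuation keyword argument; check the README for the exact interface:

import wordfreq

words_only = wordfreq.tokenize("Hold on, let me look that up.", 'en')
words_and_punct = wordfreq.tokenize("Hold on, let me look that up.", 'en',
                                    include_punctuation=True)
# words_and_punct includes ',' and '.' as separate tokens; words_only does not.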

When adding Turkish, we made sure to convert the case of dotted and dotless İ’s correctly. We know that putting the dots in the wrong places can lead to miscommunication and even fatal stabbings.

The language in wordfreq that’s still only partially supported is Korean. We still only have two sources of data for it, so you’ll see the disproportionate influence of Twitter on its frequencies. If you know where to find a lot of freely-usable Korean subtitles, for example, we would love to know.

Let’s revisit the top 10 words in the languages wordfreq supports. And now that we’ve talked about getting right-to-left right, let’s add a bit of code that makes Arabic show up with right-to-left words in left-to-right order, instead of middle-to-elsewhere order like it came out before.

Code showing the top ten words in each language wordfreq 1.2 supports.
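If you’d rather copy code than read a screenshot, a minimal version looks something like this — assuming wordfreq’s top_n_list and available_languages functions, and leaving out the right-to-left display fix:

import wordfreq

for lang in sorted(wordfreq.available_languages()):
    print(lang, ' '.join(wordfreq.top_n_list(lang, 10)))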

Wordfreq 1.2 is available on GitHub and PyPI.

wordfreq: Open source and open data about word frequencies

Originally posted on September 1, 2015.

Often, in NLP, you need to answer the simple question: “is this a common word?” It turns out that this leaves the computer to answer a more vexing question: “What’s a word?”

Let’s talk briefly about why word frequencies are important. In many cases, you want to assign more significance to uncommon words. For example, a product review might contain the word “use” and the word “defective”, and the word “defective” carries way more information. If you’re wondering what the deal is with John Kasich, a headline that mentions “Kasich” will be much more likely to be what you’re looking for than one that merely mentions “John”.

For purposes like these, it would be nice if we could just import a Python package that could tell us whether one word was more common than another, in general, based on a wide variety of text. We looked for a while and couldn’t find it. So we built it.

wordfreq provides estimates of the frequencies of words in many languages, loading its data from efficiently-compressed data structures so it can give you word frequencies down to 1 occurrence per million without having to access an external database. It aims to avoid being limited to a particular domain or style of text, getting its data from a variety of sources: Google Books, Wikipedia, OpenSubtitles, Twitter, and the Leeds Internet Corpus.

The 10 most common words that wordfreq knows in 15 languages. Yes, it can handle multi-character words in Chinese and Japanese; those just aren’t in the top 10. A puzzle for Unicode geeks: guess where the start of the Arabic list is.

Partial solutions: stopwords and inverse document frequency

Those who are familiar with the basics of information retrieval probably have a couple of simple suggestions in mind for dealing with word frequencies.

One is to come up with a list of stopwords, words such as “the” and “of” that are too common to use for anything. Discarding stopwords can be a useful optimization, but it’s far too blunt an operation to solve the word frequency problem in general. There’s no place to draw the bright line between stopwords and non-stopwords, and in the “John Kasich” example, it’s not the case that “John” should be a stopword.

Another partial solution would be to collect all the documents you’re interested in, and re-scale all the words according to their inverse document frequency or IDF. This is a quantity that decreases as the proportion of documents a word appears in increases, reaching 0 for a word that appears in every document.
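For concreteness, IDF is usually computed like this (a standard textbook form, not code from wordfreq):

import math

def inverse_document_frequency(term, documents):
    # documents: a list of sets (or other containers) of the terms in each
    # document; the term is assumed to appear in at least one of them.
    doc_count = sum(1 for doc in documents if term in doc)
    # log(N / df): 0 when the term appears in every document, larger when rarer.
    return math.log(len(documents) / doc_count)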

One problem with IDF is that it can’t distinguish a word that appears in a lot of documents because it’s unimportant, from a word that appears in a lot of documents because it’s very important to your domain. Another, more practical problem with IDF is that you can’t calculate it until you’ve seen all your documents, and it fluctuates a lot as you add documents. This is particularly an issue if your documents arrive in an endless stream.

We need good domain-general word frequencies, not just domain-specific word frequencies, because without the general ones, we can’t determine which domain-specific word frequencies are interesting.

Avoiding biases

The counts of one resource alone tend to tell you more about that resource than about the language. If you ask Wikipedia alone, you’ll find that “census”, “1945”, and “stub” are very common words. If you ask Google Books, you’ll find that “propranolol” is supposed to be 10 times more common than “lol” overall (and also that there’s something funny going on, so to speak, in the early 1800s).

If you collect data from Twitter, you’ll of course find out how common “lol” is. You also might find that the ram emoji “🐏” is supposed to be extremely common, because that guy from One Direction once tweeted “We are derby super 🐏🐏🐏🐏🐏🐏🐏🐏🐏🐏🐏🐏🐏🐏🐏🐏🐏🐏🐏🐏”, and apparently every fan of One Direction who knows what Derby Super Rams are retweeted it.

Yes, wordfreq considers emoji to be words. Its Twitter frequencies would hardly be complete without them.

We can’t entirely avoid the biases that come from where we get our data. But if we collect data from enough different sources (not just larger sources), we can at least smooth out the biases by averaging them between the different sources.

What’s a word?

You have to agree with your wordlist on the matter of what constitutes a “word”, or else you’ll get weird results that aren’t supported by the actual data.

Do you split words at all spaces and punctuation? Which of the thousands of symbols in Unicode are punctuation? Is an apostrophe punctuation? Is it punctuation when it puts a word in single quotes? Is it punctuation in “can’t”, or in “l’esprit”? How many words is “U.S.” or “google.com”? How many words is “お早うございます” (“good morning”), taking into account that Japanese is written without spaces? The symbol “-” probably doesn’t count as a word, but does “+”? How about “☮” or “♥”?

The process of splitting text into words is called “tokenization”, and everyone’s got their own different way to do it, which is a bit of a problem for a word frequency list.

We tried a few ways to make a sufficiently simple tokenization function that we could use everywhere, across many languages. We ended up with our own ad-hoc rule including large sets of Unicode characters and a special case for apostrophes, and this is in fact what we used when we originally released wordfreq 1.0, which came packaged with regular expressions that look like attempts to depict the Flying Spaghetti Monster in text.

A particularly noodly regex.

But shortly after that, I realized that the Unicode Consortium had already done something similar, and they’d probably thought about it for more than a few days.

Word splitting in Unicode. Not pictured: how to decide which of these segments count as “words”.

This standard for tokenization looked like almost exactly what we wanted, and the last thing holding me back was that implementing it efficiently in Python looked like it was going to be a huge pain. Then I found that the regex package (not the re package built into Python) contains an efficient implementation of this standard. Defining how to split text into words became a very simple regular expression… except in Chinese and Japanese, because a regular expression has no chance in a language where the separation between words is not written in any way.
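As a rough sketch of what that looks like — not wordfreq’s actual expression — the regex package can apply Unicode’s default word-boundary rules through its WORD flag:

import regex  # the third-party 'regex' package, not the built-in 're'

# (?w) turns on Unicode's default word-boundary rules (UAX #29); the
# character class keeps apostrophes inside words like "can't".
TOKEN_RE = regex.compile(r"(?w)\b[\w']+\b")

def rough_tokenize(text):
    return TOKEN_RE.findall(text.lower())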

So this is how wordfreq 1.1 identifies the words to count and the words to look up. Of course, there is going to be data that has been tokenized in a different way. When wordfreq gets something that looks like it should be multiple words, it will look them up separately and estimate their combined frequency, instead of just returning 0.

Language support

wordfreq supports 15 commonly-used languages, but of course some languages are better supported than others. English is quite polished, for example, while Chinese so far is just there to be better than nothing.

The reliability of each language corresponds pretty well with the number of different data sources we put together to make the wordlist. Some sources are hard to get in certain languages. Perhaps unsurprisingly, for example, not much of Twitter is in Chinese. Perhaps more surprisingly, not much of it is in German either.

The word lists that we’ve built for wordfreq represent the languages where we have at least two sources. I would consider the ones with two sources a bit dubious, while all the languages that have three or more sources seem to have a reasonable ranking of words.

  • 5 sources: English
  • 4 sources: Arabic, French, German, Italian, Portuguese, Russian, Spanish
  • 3 sources: Dutch, Indonesian, Japanese, Malay
  • 2 sources: Chinese, Greek, Korean

Compact wordlists

When we were still figuring this all out, we made several 0.x versions of wordfreq that required an external SQLite database with all the word frequencies, because there are millions of possible words and we had to store a different floating-point frequency for each one. That’s a lot of data, and it would have been infeasible to include it all inside the Python package. (GitHub and PyPI don’t like huge files.) We ended up with a situation where installing wordfreq would either need to download a huge database file, or build that file from its source data, both of which would consume a lot of time and computing resources when you’re just trying to install a simple package.

As we tried different ways of shipping this data around to all the places that needed it, we finally tried another tactic: What if we just distributed less data?

Two assumptions let us greatly shrink our word lists:

  • We don’t care about the frequencies of words that occur less than once per million words. We can just assume all those words are equally informative.
  • We don’t care about, say, 2% differences in word frequency.

Now instead of storing a separate frequency for each word, we group the words into 600 possible tiers of frequency. You could call these tiers “centibels”, a logarithmic unit similar to decibels, because there are 100 of them for each factor of 10 in the word frequency. Each of them represents a band of word frequencies that spans about a 2.3% difference. The data we store can then be simplified to “Here are all the words in tier #330… now here are all the words in tier #331…” and converted to frequencies when you ask for them.
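The conversion between a frequency and its tier is simple — a sketch of the idea, though the sign convention and storage format in the real data may differ:

import math

def frequency_to_tier(freq):
    # One tier per hundredth of a factor of 10: a "centibel".
    return round(math.log10(freq) * 100)

def tier_to_frequency(tier):
    # Adjacent tiers differ by a factor of 10 ** 0.01, about 2.3%.
    return 10 ** (tier / 100)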

Some tiers of word frequencies in English.

This let us cut down the word lists to an entirely reasonable size, so that we can put them in the repository, and just keep them in memory while you’re using them. The English word list, for example, is 245 KB, or 135 KB compressed.

But it’s important to note the trade-off here, that wordfreq only represents sufficiently common words. It’s not suited for comparing rare words to each other. A word rarer than “amulet”, “bunches”, “deactivate”, “groupie”, “pinball”, or “slipper”, all of which have a frequency of about 1 per million, will not be represented in wordfreq.

Getting the package

wordfreq is available on GitHub, or it can be installed from the Python Package Index with the command pip install wordfreq. Documentation can be found in its README on GitHub.

wordfreq usage example: comparing the frequency per million words of two spellings of “café”, in English and French.
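The code in that example boils down to something like this (a sketch; the exact numbers depend on which version of the data you have):

import wordfreq

def per_million(word, lang):
    # word_frequency returns a proportion; scale it to occurrences per million words.
    return wordfreq.word_frequency(word, lang) * 1e6

for word in ['cafe', 'café']:
    for lang in ['en', 'fr']:
        print(word, lang, per_million(word, lang))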

ftfy (fixes text for you) 4.0: changing less and fixing more

Originally posted on May 21, 2015.

ftfy is a Python tool that takes in bad Unicode and outputs good Unicode. I developed it because we really needed it at Luminoso — the text we work with can be damaged in several ways by the time it gets to us. It’s become our most popular open-source project by far, as many other people have the same itch that we’re scratching.

The coolest thing that ftfy does is to fix mojibake — those mix-ups in encodings that cause the word más to turn into mÃ¡s or even mÃƒÂ¡s. (I’ll recap why this happens and how it can be reversed below.) Mojibake is often intertwined with other problems, such as un-decoded HTML entities (m&aacute;s), and ftfy fixes those as well. But as we worked with the ftfy 3 series, it gradually became clear that the default settings were making some changes that were unnecessary, and from time to time they would actually get in the way of the goal of cleaning up text.

ftfy 4 includes interesting new fixes to creative new ways that various software breaks Unicode. But it also aims to change less text that doesn’t need to be changed. This is the big change that made us increase the major version number from 3 to 4, and it’s fundamentally about Unicode normalization. I’ll discuss this change below under the heading “Normalization”.

Mojibake and why it happens

Mojibake is what happens when text is written in one encoding and read as if it were a different one. It comes from the Japanese word “•¶Žš‰»‚¯” — no, sorry, “文字化け” — meaning “character corruption”. Mojibake turns everything but basic ASCII characters into nonsense.

Suppose you have a word such as “más”. In UTF-8 — the encoding used by the majority of the Internet — the plain ASCII letters “m” and “s” are represented by the familiar single byte that has represented them in ASCII for 50 years. The letter “á”, which is not ASCII, is represented by two bytes.

Text: m á s
Bytes: 6d c3 a1 73

The problem occurs when these bytes get sent to a program that doesn’t quite understand UTF-8. This program probably thinks that every character is one byte, so it decodes each byte as a character, in a way that depends on the operating system it’s running on and the country it was set up for. (This, of course, makes no sense in an era where computers from all over the world can talk to each other.)

If we decode this text using Windows’ most popular single-byte encoding, which is known as “Windows-1252” and often confused with “ISO-8859-1”, we’ll get this:

Bytes: 6d c3 a1 73
Text: m Ã ¡ s

The real problem happens when this text needs to be sent back over the Internet. It may very well send the newly-weirdified text in a way that knows it needs to encode UTF-8:

Intended text: m á s
Actual text: m Ã ¡ s
Bytes: 6d c3 83 c2 a1 73

So, the word “más” was supposed to be four bytes of UTF-8, but what we have now is six bytes of what I propose to call “Double UTF-8”, or “WTF-8” for short.

WTF-8 is a very common form of mojibake, and the fortunate thing is that it’s reasonably easy to detect. Most possible sequences of bytes are not UTF-8, and most mojibake forms sequences of characters that are extremely unlikely to be the intended text. So ftfy can look for sequences that would decode as UTF-8 if they were encoded as another popular encoding, and then sanity-check by making sure that the new text looks more likely than the old text. By reversing the process that creates mojibake, it turns mojibake into the correct text with a rate of false positives so low that it’s difficult to measure.
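For example, here’s the word from earlier getting fixed, along with the manual round trip that ftfy automates and sanity-checks for you (output from a recent version of ftfy):

>>> import ftfy
>>> ftfy.fix_text('mÃ¡s')
'más'

>>> 'mÃ¡s'.encode('windows-1252').decode('utf-8')
'más'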

Weird new mojibake

We test ftfy on live data from Twitter, which due to its diversity of languages and clients is a veritable petri dish of Unicode bugs. One thing I’ve found in this testing is that mojibake is becoming a bit less common. People expect their Twitter clients to be able to deal with Unicode, and the bugs are gradually getting fixed. The “you fail at Unicode” character � was 33% less common on Twitter in 2014 than it was in 2013.

Some software is still very bad at Unicode — particularly Microsoft products. These days, Microsoft is in many ways making its software play nicer in a pluralistic world, but they bury their head in the sand when it comes to the dominance of UTF-8. Sadly, Microsoft’s APIs were not designed for UTF-8 and they’re not interested in changing them. They adopted Unicode during its awkward coming-of-age in the mid ’90s, when UTF-16 seemed like the only way to do it. Encoding text in UTF-16 is like dancing the Macarena — you probably could do it under duress, but you haven’t willingly done it since 1997.

Because they don’t match the way the outside world uses Unicode, Microsoft products tend to make it very hard or impossible to export and import Unicode correctly, and easy to do it incorrectly. This remains a major reason that we need ftfy.

Although text is getting a bit cleaner, people are getting bolder about their use of Unicode and the bugs that remain are getting weirder. ftfy has always been able to handle some cases of files that use different encodings on different lines, but what we’re seeing now is text that switches between UTF-8 and WTF-8 in the same sentence. There’s something out there that uses UTF-8 for its opening quotation marks and Windows-1252 for its closing quotation marks, before encoding it all in UTF-8 again, â€œlike this”. You can’t simply encode and decode that string to get the intended text “like this”.

ftfy 4.0 includes a heuristic that fixes some common cases of mixed encodings in close proximity. It’s a bit conservative — it leaves some text unfixed, because if it changed all text that might possibly be in a mixed encoding, it would lead to too many false positives.

Another variation of this is that ftfy looks for mojibake that some other well-meaning software has tried to fix, such as by replacing byte A0 with a space, because in Windows-1252 A0 is a non-breaking space. Previously, ftfy would have to leave the mojibake unfixed if one of its characters was changed. But if the sequence is clear enough, ftfy will put back the A0 byte so that it can fix the original mojibake.

Does this seem gratuitous? These are things that show up both in ftfy’s testing stream and in real data that we’ve had to handle. We want to minimize the cases where we have to tell a customer “sorry, your text is busted” and maximize the cases where we just deal with it.

Normalization

NFC (the Normalization Form that uses Composition) is a process that should be applied to basically all Unicode input. Unicode is flexible enough that it has multiple ways to write exactly the same text, and NFC merges them into the same sensible way. Here are two ways to write más, as illustrated by the ftfy.explain_unicode function.

This is the NFC normalized way:

U+006D m [Ll] LATIN SMALL LETTER M
U+00E1 á [Ll] LATIN SMALL LETTER A WITH ACUTE
U+0073 s [Ll] LATIN SMALL LETTER S

And this is a different way that’s not NFC-normalized (it’s NFD-normalized instead):

U+006D m [Ll] LATIN SMALL LETTER M
U+0061 a [Ll] LATIN SMALL LETTER A
U+0301 ́ [Mn] COMBINING ACUTE ACCENT
U+0073 s [Ll] LATIN SMALL LETTER S

If you want the same text to be represented by the same data, running everything through NFC normalization is a good idea. ftfy does that (unless you ask it not to).
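In Python, that normalization is one call to the standard library:

>>> import unicodedata
>>> decomposed = 'ma\N{COMBINING ACUTE ACCENT}s'   # the NFD form: four code points
>>> unicodedata.normalize('NFC', decomposed)
'más'

>>> len(decomposed), len(unicodedata.normalize('NFC', decomposed))
(4, 3)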

Previous versions of ftfy were, by default, not just using NFC normalization, but the more aggressive NFKC normalization (whose acronym is quite unsatisfying, because the K stands for “Compatibility”). For a while, it seemed like normalizing even more was even better. NFKC does things like convert fullwidth letters into normal letters, and convert the single ellipsis character “…” into three periods.

But NFKC also loses meaningful information. If you were to ask me what the leading cause of mojibake is, I might answer “Excel™”. After NFKC normalization, I’d instead be blaming something called “ExcelTM”. In cases like this, NFKC is hitting the text with too blunt a hammer. Even when it seems appropriate to normalize aggressively because we’re going to be performing machine learning on text, the resulting words such as “exceltm” are not helpful.
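You can see the difference directly:

>>> import unicodedata
>>> unicodedata.normalize('NFKC', 'Excel™')
'ExcelTM'

>>> unicodedata.normalize('NFC', 'Excel™')
'Excel™'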

So in ftfy 4.0, we switched the default normalization to NFC. We didn’t want to lose the nice parts of NFKC, such as normalizing fullwidth letters and breaking up the kind of ligatures that can make the word “fluffiest” appear to be five characters long. So we added those back in as separate fixes. By not applying NFKC bluntly to all the text, we change less text that doesn’t need to be changed, even as we apply more kinds of fixes. It’s a significant change in the default behavior of ftfy, but we hope you agree that this is a good thing. A side benefit is that ftfy 4.0 is faster overall than 3.x, because NFC normalization can run very quickly in common cases.

Future-proofing emoji and other changes

ftfy’s heuristics depend on knowing what kind of characters it’s looking at, so it includes a table where it can quickly look up Unicode character classes. This table normally doesn’t change very much, but we update it as Python’s unicodedata gets updated with new characters, making the same table available even in previous versions of Python.

One part of the table is changing really fast, though, in a way that Python may never catch up with. Apple is rapidly adding new emoji and modifiers to the Unicode block that’s set aside for them, such as 🖖🏽, which should be a brown-skinned Vulcan salute. Unicode will publish them in a standard eventually, but people are using them now.

Instead of waiting for Unicode and then Python to catch up, ftfy just assumes that any character in this block is an emoji, even if it doesn’t appear to be assigned yet. When emoji burritos arrive, ftfy will be ready for them.

Developers who like to use the UNIX command line will be happy to know that ftfy can be used as a pipe now, as in:

curl http://example.com/api/data.txt | ftfy | sort | uniq -c

The details of all the changes can be found, of course, in the CHANGELOG.

Has ftfy solved a problem for you? Have you stumped it with a particularly bizarre case of mojibake? Let us know in the comments or on Twitter.