ftfy (fixes text for you) 4.4 and 5.0

ftfy is Luminoso’s open-source Unicode-fixing library for Python.

Luminoso’s biggest open-source project is ConceptNet, but we also use this blog to provide updates on our other open-source projects. And among these projects, ftfy is certainly the most widely used. It solves a problem a lot of people have with “no faffing about”, as a grateful e-mail I received put it.

When you use the ftfy.fix_text() function, it detects and fixes such problems as mojibake (text that was decoded in the wrong encoding), accidental HTML escaping, curly quotes where you expected straight ones, and so on. (You can also selectively disable these fixes, or run them as separate functions.)
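
Here’s a quick sketch of what that looks like (the keyword argument and the functions in ftfy.fixes are the ones I know from the 4.x/5.0 API; check help(ftfy.fix_text) for the full list in your version):

>>> import ftfy
>>> from ftfy.fixes import unescape_html, uncurl_quotes

>>> # Keep the curly quotes, but still undo the accidental HTML escaping
>>> ftfy.fix_text('&lt;em&gt;“quoted”&lt;/em&gt;', uncurl_quotes=False)
'<em>“quoted”</em>'

>>> # Or call the individual fixers yourself
>>> uncurl_quotes(unescape_html('&quot;hello&quot;'))
'"hello"'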

Here’s an example that fixes some multiply-mangled Unicode that I actually found on the Web:

>>> print(ftfy.fix_text("¯\\_(ツ)_/¯"))
¯\_(ツ)_/¯

Another example, from a Twitter-bot gone wrong:

>>> print(ftfy.fix_text("#правильноепитание"))
#правильноепитание

So we’re proud to present two new releases of ftfy, versions 4.4 and 5.0. Let’s start by talking about the big change:

A control panel labeled in Polish, with a big red button with the text 'Drop Python 2 support' overlaid. Photo credit: “The Big Red Button” by włodi, used under the CC-By-SA 2.0 license

That’s right: as of version 4.4, ftfy is better at dealing with encodings of Eastern European languages! After all, sometimes your text is in Polish, like the labels on this very serious-looking control panel. Or maybe it’s in Czech, Slovak, Hungarian, or a language with similar accented letters.

Before Unicode, people would handle these alphabets using a single-byte encoding designed for them, like Windows-1250, which would be incompatible with other languages. In that encoding, the photographer’s name is the byte string w\xb3odi. But now the standard encoding of the Web is UTF-8, where the same name is w\xc5\x82odi.

The encoding errors you might encounter due to mixing these up used to be underrepresented in the test data I collected. You might end up with the name looking like “wĹ‚odi” and ftfy would just throw up its hands like ¯\_(ツ)_/¯. But now it understands what happened to that name and how to fix it.
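
To see the round trip concretely, here’s a quick sketch (the encode/decode lines are plain Python; the ftfy call illustrates what version 4.4 is now expected to do with that name):

>>> "włodi".encode('utf-8').decode('windows-1250')   # how the mojibake happens
'wĹ‚odi'
>>> import ftfy
>>> ftfy.fix_text('wĹ‚odi')   # and how ftfy 4.4 should undo it
'włodi'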

Oh, but what about that text I photoshopped onto the button?

The same image, cropped to just the 'Drop Python 2 support' button.

Yeah, I was pulling your leg a bit by talking about the Windows-1250 thing first.

ftfy 5.0 is the same as ftfy 4.4, but it drops support for Python 2. It also gains some tests that we’re happier to not have to write for both versions of Python. Depending on how inertia-ful your use of Python is, this may be a big deal to you.

Three at last!

Python 3 has a string type that’s a pretty good representation of Unicode, and it uses it consistently throughout its standard library. It’s a great language for describing Unicode and how to fix it. It’s a great language for text in general. But until now, we’ve been writing ftfy in the unsatisfying language known as “Python 2+3”, where you can’t take advantage of anything that’s cleaner in Python 3 because you still have to do it the Python 2.7 way also.

So, following the plan we announced in April 2015, we released two versions at the same time. They do the same thing, but ftfy 5.0 gets to have shorter, simpler code.

It seems we even communicated this to ftfy’s users successfully. Shortly after ftfy 5.0 appeared on PyPI, the bug report we received wasn’t about where Python 2 support went; it was about a regression introduced by the new heuristics. (That’s why 4.4.1 and 5.0.1 are out already.)

There’s more I plan to do with ftfy, especially fixing more kinds of encoding errors, as summarized by issue #18. It’ll be easier to make it happen when I can write the fix in a single language.

But if you’re still on Python 2 — possibly due to forces outside your control — I hope I’ve left you with a pretty good option. Thousands of users are content with ftfy 4, and it’s not going away.

One more real-world example

>>> from ftfy.fixes import fix_encoding_and_explain
>>> fix_encoding_and_explain("NapĂ\xadšte nám !")
('Napíšte nám !', [('encode', 'sloppy-windows-1250', 2), ('decode', 'utf-8', 0)])

How Luminoso made ConceptNet into the best word vectors, and won at SemEval

I have been telling people for a while that ConceptNet is a valuable source of information for semantic vectors, or “word embeddings” as they’ve been called since the neural-net people showed up in 2013 and renamed everything. Let’s call them “word vectors”, even though they can represent phrases too. The idea is to compute a vector space where similar vectors represent words or phrases with similar meanings.

In particular, I’ve been pointing to results showing that our precomputed vectors, ConceptNet Numberbatch, are the state of the art in multiple languages. Now we’ve verified this by participating in SemEval 2017 Task 2, “Multilingual and Cross-lingual Semantic Word Similarity”, and winning in a landslide.

A graph of the SemEval multilingual task results, showing the Luminoso system performing above every other system in every language, except for two systems that only submitted results in Farsi.

Performance of SemEval systems on the Multilingual Word Similarity task. Our system, in blue, shows its 95% confidence interval.

A graph of the SemEval cross-lingual task results, showing the Luminoso system performing above every other system in every language pair.

Performance of SemEval systems on the Cross-lingual Word Similarity task. Our system, in blue, shows its 95% confidence interval.

SemEval is a long-running evaluation of computational semantics. It does an important job of counteracting publication bias. Most people will only publish evaluations where their system performs well, but SemEval allows many groups to compete head-to-head on an evaluation they haven’t seen yet, with results released all at the same time. When SemEval results come out, you can see a fair comparison of everyone’s approach, with positive and negative results.

This task was a typical word-relatedness task, the same kind that we’ve been talking about in previous posts. You get a list of pairs of words, and your system has to assess how related they are, which is a useful thing to know in NLP applications such as search, text classification, and topic detection. The score is how well your system’s responses correlate with the responses that people give.
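
In code, that evaluation boils down to something like this (a sketch, assuming a vectors dict that maps words to numpy arrays and a list of human-rated pairs):

>>> import numpy as np
>>> from scipy.stats import spearmanr

>>> def relatedness(vectors, word1, word2):
...     """Cosine similarity between two word vectors."""
...     v1, v2 = vectors[word1], vectors[word2]
...     return v1.dot(v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))

>>> def evaluate(vectors, pairs):
...     """Spearman correlation between our similarities and the human ratings."""
...     ours = [relatedness(vectors, w1, w2) for w1, w2, _ in pairs]
...     gold = [human_score for _, _, human_score in pairs]
...     return spearmanr(ours, gold).correlation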

The system we submitted was not much different from the one we published and presented at AAAI 2017 and that we’ve been blogging about. It’s the product of the long-running crowd-sourcing and linked-data effort that has gone into ConceptNet, and lots of research here at Luminoso about how to make use of it.

At a high level, it’s an ensemble method that glues together multiple sources of vectors, using ConceptNet as the glue, and retrofitting (Faruqui, 2015) as the glue gun, and also building large parts of the result entirely out of the glue, a technique which worked well for me in elementary school when I had to make a diorama.
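
If you haven’t seen retrofitting, the core of it fits in a few lines. Here’s a rough sketch of the update rule from Faruqui et al. (not our production code): each word’s vector is repeatedly nudged toward the average of its neighbors in the knowledge graph, while being pulled back toward its original value.

>>> import numpy as np

>>> def retrofit(vectors, neighbors, iterations=10, alpha=1.0, beta=1.0):
...     """Nudge each vector toward its graph neighbors, anchored to the original."""
...     new = {word: vec.copy() for word, vec in vectors.items()}
...     for _ in range(iterations):
...         for word, original in vectors.items():
...             nbrs = [n for n in neighbors.get(word, []) if n in new]
...             if not nbrs:
...                 continue
...             neighbor_sum = sum(new[n] for n in nbrs)
...             new[word] = (alpha * original + beta * neighbor_sum) / (alpha + beta * len(nbrs))
...     return new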

The primary goal of this SemEval task was to submit one system that performed well in multiple languages, and we did the best by far at that. Some systems attempted only one or two languages, and those at least get to appear in the per-language breakdown of the results. I notice that the QLUT system (I think that’s the Qilu University of Technology) is in a statistical tie with us in English, but submitted no other languages, and that two Farsi-only systems did better than us in Farsi.

On the cross-lingual results (comparing words between pairs of languages), no other system came close to us, even in Farsi, showing the advantage of ConceptNet being multilingual from the ground up.

The “baseline” system submitted by the organizers was Nasari, a knowledge-graph-based system previously published in 2016. Often the baseline system is a very simplistic technique, but this baseline was fairly sophisticated and demanding, and many systems couldn’t outperform it. The organizers, at least, believe that everyone in this field should be aware of what knowledge graphs can do, and it’s your problem if you’re not.

Don’t take “OOV” for an answer

The main thing that our SemEval system added, on top of the ConceptNet Numberbatch data you can download, is a strategy for handling out-of-vocabulary words. In the end, so many NLP evaluations come down to how you unk your OOVs. Wait, I’ll explain.

Most machine learning over text considers words as atomic units. So you end up with a particular vocabulary of words your system has learned about. The test data will almost certainly contain some words that the system hasn’t learned; those words are “Out of Vocabulary”, or “OOV”.

(There are some deep learning techniques now that go down to the character level, but they’re messier. And they still end up with a vocabulary of characters. I put the Unicode snowman ☃ into one of those systems and it segfaulted.)

Some publications use the dramatic cop-out of skipping all OOV words in their evaluation. That’s awful. Please don’t do that. I could make an NLP system whose vocabulary is the single word “chicken”, and that would get it a 100% score on some OOV-skipping evaluations, but the domain of text it could understand would be quite limited (Zongker, 2002).

In general, when a system encounters an OOV word, there has to be some strategy for dealing with it. Perhaps you replace all OOV words with a single symbol for unknown words, “unk”, a strategy common enough to have become a verb.
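
In its simplest form, unking is a one-liner (illustrative, with a toy vocabulary):

>>> vocab = {'the', 'cat', 'sat'}
>>> tokens = ['the', 'cromulent', 'cat', 'sat']
>>> [token if token in vocab else '<unk>' for token in tokens]
['the', '<unk>', 'cat', 'sat']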

SemEval doesn’t let you dodge OOV words: you need to submit some similarity value for every pair, even if your system has no idea. “Unking” would not have worked very well for comparing words. It seemed to us that a good OOV strategy would make a noticeable difference in the results. We made a few assumptions:

  • The most common OOV words are inflections or slight variations of words that are known.
  • Inflections are suffixes in most of the languages we deal with, so the beginning of the word is more important than the end.
  • In non-English languages, OOV words may just be borrowings from English, the modern lingua franca.

So, in cases where it doesn’t help to use our previously published OOV strategy of looking up terms in ConceptNet and replacing them with their neighbors in the graph, we added these two OOV tricks:

  • Look for the word in English instead of the language it’s supposed to be in.
  • Look for known words that have the longest common prefix with the unknown word.
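
Here’s a rough sketch of those two fallbacks (simplified from what we actually submitted; vectors is a dict keyed by (language, word) pairs):

>>> def common_prefix_length(a, b):
...     length = 0
...     while length < min(len(a), len(b)) and a[length] == b[length]:
...         length += 1
...     return length

>>> def oov_vector(vectors, word, lang):
...     """Simplified OOV fallback: try English, then the longest shared prefix."""
...     if (lang, word) in vectors:
...         return vectors[lang, word]
...     # Trick 1: maybe the word is just a borrowing from English
...     if ('en', word) in vectors:
...         return vectors['en', word]
...     # Trick 2: fall back on the known word in this language that shares
...     # the longest prefix with the unknown word
...     candidates = [w for (l, w) in vectors if l == lang]
...     best = max(candidates, key=lambda w: common_prefix_length(w, word))
...     return vectors[lang, best]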

This strategy made a difference of about 10 percent in the results. Without it, our system still would have won at the cross-lingual task, but would have narrowly lost to the HCCL system on the individual languages. But that comparison handicaps us artificially: everyone got to decide on their own OOV strategy as part of the task. When the SemEval workshop happens, I’ll be interested to see what strategies other people used.

What about Google and Facebook?

When people talk about semantic vectors, they generally aren’t talking about what a bunch of small research groups came up with last month. They’re talking about the big names, particularly Google’s word2vec and Facebook’s fastText.

Everyone who makes semantic vectors loves to compare to word2vec, because everyone has heard of it, and it’s so easy to beat. This should not be surprising: NLP research did not stop in 2014, but word2vec’s development did. It’s a bit hard to use word2vec as a reference point in SemEval, because if you want non-English data in word2vec, you have to go train it yourself. I’ve done that a few times, with awful results, but I’m not sure those results are representative: of course I’m using data I can get myself, while the most interesting thing about word2vec is that it comes already trained on Google’s wealth of data.

A more interesting comparison is to fastText, released by Facebook Research in 2016 as a better, faster way to learn word vectors. Tomas Mikolov, the lead author on word2vec, is now part of the fastText team.

fastText has just released pre-trained vectors in a lot of languages. It’s trained only on Wikipedia, which should be a warning sign that the data is going to have a disproportionate fascination with places where 20 people live and albums that 20 people have listened to. But this lets us compare how fastText would have done in SemEval.

The fastText software has a reasonable OOV strategy — it learns about sub-word sequences of characters, and falls back on those when it doesn’t know a word — but as far as I can tell, they didn’t release the sub-word information with their pre-trained vectors. Lacking the ability to run their OOV strategy, I turned off our own OOV strategy to make a fair comparison:

Luminoso performs comfortably above word2vec and fastText in this graph.

Comparison of released word vectors on the SemEval data, without using any OOV strategy.

Note that word2vec is doing better than fastText, due to being trained on more data, but it’s only in English. Luminoso’s ConceptNet-based system, even without its OOV strategy, is doing much better than these well-known systems. And when I experiment with bolting ConceptNet’s OOV onto fastText, it only gets above the baseline system in German.

Overcoming skepticism and rejection in academic publishing

Returning to trying to publish academically, after being in the startup world for four years, was an interesting and frustrating experience. I’d like to gripe about it a bit. Feel free to skip ahead if you don’t care about my gripes about publishing.

When we first started getting world-beating results, in late 2015, we figured that they would be easy to publish. After all, people compare themselves to the “state of the art” all the time, so it’s the publication industry’s job to keep people informed about the new state of the art, right?

We got rejected three times. Once without even being reviewed, because I messed up the LaTeX boilerplate and the paper had the wrong font size. Once because a reviewer was upset that we weren’t comparing to a particular system whose performance had already been superseded (I wonder if it was his). Once because we weren’t “novel” and were just a “bag of tricks” (meanwhile, the fastText paper has “Bag of Tricks” in its title). In the intervening time, dozens of papers have claimed to be the “state of the art” with numbers lower than the ones we blogged about.

I gradually learned that how the result was framed was much more important than the actual result. It makes me appreciate what my advisors did in grad school; I used to have less interesting results than this sail through the review process, and their advice on how to frame it probably played a large role.

So this time, I worked on a paper that could be summarized with “Here’s data you can use! (And here’s why it’s good)”, instead of with “Our system is better than yours! (Here’s the data)”. AAAI finally accepted that paper for their 2017 conference, where we’ve just presented it and maybe gotten a few people’s attention, particularly with the shocking news that ConceptNet still exists.

The fad-chasers of machine learning haven’t picked up on ConceptNet Numberbatch either, maybe because it doesn’t have “2vec” in the name. (My co-worker Joanna has claimed “2vec” as her hypothetical stage name as a rapper.) And, contrary to the example of systems that are better at recognizing cat pictures, Nvidia hasn’t yet added acceleration for the vector operations we use to their GPUs. (I jest. Mostly in that you wouldn’t want to do something so memory-heavy on a GPU.)

At least in the academic world, the idea that you need knowledge graphs to support text understanding is taking hold from more sources than just us. The organizers’ baseline system (Nasari) used BabelNet, a knowledge graph that looks a lot like ConceptNet except for its restrictive license. Nasari beat a lot of the other entries, but not ours.

But academia still has its own built-in skepticism that a small company can really be the world leader in vector-based semantics, even though the SemEval results make it pretty clear. I’ll believe that academia has really caught up when someone graphs against us instead of word2vec the next time they say “state of the art”. (And don’t forget to put error bars or a confidence interval on it!)

How do I use ConceptNet Numberbatch?

To make it as straightforward as possible:

  • Work through any tutorial on machine learning for NLP that uses semantic vectors.
  • Get to the part where they tell you to use word2vec. (A particularly enlightened tutorial may tell you to use GloVe 1.2.)
  • Get the ConceptNet Numberbatch data, and use it instead.
  • Get better results that also generalize to other languages.
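
For example, if the tutorial loads word2vec with gensim, the swap can be this small (a sketch; the filename is whichever Numberbatch release you downloaded, since Numberbatch is distributed in the same text format that word2vec uses):

>>> from gensim.models import KeyedVectors

>>> # instead of the GoogleNews word2vec binary:
>>> # vectors = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
>>> vectors = KeyedVectors.load_word2vec_format('numberbatch-en-17.02.txt.gz', binary=False)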

One task where we’ve demonstrated this ourselves is in solving analogy problems.

Whether this works out for you or not, tell us about it on the ConceptNet Gitter.

How does Luminoso use ConceptNet Numberbatch?

Luminoso provides software as a service for text understanding. Our data pipeline starts out with its “background knowledge”, which is very similar to ConceptNet Numberbatch, so that it has a good idea of what words mean before it sees a single sentence of your data. It then reads through your data and refines its understanding of what words and phrases mean based on how they’re used in your data, allowing it to accurately understand jargon, common misspellings, and domain-specific meanings of words.

If you rely entirely on “deep learning” to extract meaning from words, you need billions of words before it starts being accurate. Collecting billions of words is difficult, and the text you collect is probably not the text you really want to understand.

Luminoso starts out knowing everything that ConceptNet, word2vec, and GloVe know and works from there, so it can learn quickly from the smaller number of documents that you’re actually interested in. We package this all up in a visualization interface and an API that lets you understand what’s going on in your text quickly.

Announcing ConceptNet 5.5 and conceptnet.io

ConceptNet is a large, multilingual knowledge graph about what words mean.

This is background knowledge that’s very important in NLP and machine learning, and it remains relevant in a time when the typical thing to do is to shove a terabyte or so of text through a neural net. We’ve shown that ConceptNet provides information for word embeddings that isn’t captured by purely distributional techniques like word2vec.

At Luminoso, we make software for domain-specific text understanding. We use ConceptNet to provide a base layer of general understanding, so that our machine learning can focus on quickly learning what’s interesting about text in your domain, when other techniques have to re-learn how the entire language works.

ConceptNet 5.5 is out now, with features that are particularly designed for improving word embeddings and for linking ConceptNet to other knowledge sources.

The new conceptnet.io

With the release of ConceptNet 5.5, we’ve relaunched its website at conceptnet.io to provide a modern, easy-to-browse view of the data in ConceptNet.

The old site was at conceptnet5.media.mit.edu, and I applaud the MIT Media Lab sysadmins for the fact that it keeps running and we’ve even been able to update it with new data. But none of us are at MIT anymore — we all work at Luminoso now, and it’s time for ConceptNet to make the move with us.

ConceptNet improves word embeddings

Word embeddings represent the semantics of a word or phrase as many-dimensional vectors, which are pre-computed by a neural net or some other machine learning algorithm. This is a pretty useful idea. We’ve been doing it with ConceptNet since before the term “word embeddings” was common.

When most developers need word embeddings, the first and possibly only place they look is word2vec, a neural net algorithm from Google that computes word embeddings from distributional semantics. That is, it learns to predict words in a sentence from the other words around them, and the embeddings are the representation of words that make the best predictions. But even after terabytes of text, there are aspects of word meanings that you just won’t learn from distributional semantics alone.

To pick one example, word2vec seems to think that because the terms “Red Sox” and “Yankees” appear in similar sentences, they mean basically the same thing. Not here in Boston, they don’t. Same deal with “high school” and “elementary school”. We get a lot of information from the surrounding words, which is the key idea of distributional semantics, but we need more than that.

When we take good word embeddings and add ConceptNet to them, the results are state-of-the-art on several standard evaluations of word embeddings, even outperforming recently-released systems such as fastText.

Comparing the performance of available word-embedding systems. Scores are measured by Spearman correlation with the gold standard, or (for SAT analogies) by the proportion of correct answers. The orange bar is the embeddings used in ConceptNet 5.5.

We could achieve results like this with ConceptNet 5.4 as well, but 5.5 has a big change in its representation that makes it a better match for word embeddings. In previous versions, English words were all reduced to a root form before they were even represented as a ConceptNet node. There was a node for “write”, and no node for “wrote”; a node for “dog”, and no node for “dogs”. If you had a word in its inflected form, you had to reduce it to a root form (using the same algorithm as ConceptNet) to get results. That helped to make the data more strongly connected, but it made ConceptNet harder to line up with other resources.

This stemming trick only ever applied to English, incidentally. We never had a consistent way to apply it to all languages. We didn’t even really have a consistent way to apply it to English; any stemmer is either going to have to take into account the context of a sentence (which ConceptNet nodes don’t have) or be wrong some of the time. (Is “saw” a tool or the past tense of “see”?) The ambiguity and complexity just become unmanageable when other languages are in the mix.

So in ConceptNet 5.5, we’ve changed the representation of word forms. There are separate nodes for “dog” and “dogs”, but they’re connected by the “FormOf” relation, and we make sure they end up with very similar word vectors. This will make some use cases easier and others harder, but it corrects a long-standing glitch in ConceptNet’s representation, and incidentally makes it easier to directly compare ConceptNet 5.5 with other systems such as word2vec.
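
You can see the new representation in the API. For example, a query like this one (a sketch using requests; see the API documentation for the full set of query parameters) asks which root form “dogs” points to:

>>> import requests
>>> response = requests.get('http://api.conceptnet.io/query',
...                         params={'start': '/c/en/dogs', 'rel': '/r/FormOf'}).json()
>>> '/c/en/dog' in [edge['end']['@id'] for edge in response['edges']]
True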

Solving analogies like a college applicant

ConceptNet picks the right answer to an SAT question.

One way to demonstrate that your word-embedding system has a good representation of meaning is to use it to solve word analogies. The usual example, pretty much a cliché by now, is “man : woman :: king : queen”. You want those word vectors to form something like a parallelogram in your vector space, indicating that the relationships between these words are parallel to each other, even if the system can’t explain in words what the relationship is. (And I really wish it could.)
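
The standard way to exploit that parallelogram is plain vector arithmetic (a sketch, assuming vectors maps words to unit-length numpy arrays):

>>> import numpy as np

>>> def analogy(vectors, a, b, c):
...     """Find the word d such that a : b :: c : d, by vector arithmetic."""
...     target = vectors[b] - vectors[a] + vectors[c]
...     target /= np.linalg.norm(target)
...     candidates = {w: v.dot(target) for w, v in vectors.items() if w not in (a, b, c)}
...     return max(candidates, key=candidates.get)

>>> analogy(vectors, 'man', 'woman', 'king')
'queen'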

In an earlier post, Cramming for the Test Set, I lamented that the Google analogy data that everyone’s been using to evaluate their word embeddings recently is unrepresentative, and it’s a step down in quality from what Peter Turney has been using in his analogy research since 2005. I did not succeed in finding a way to open up some good analogy data under a Creative Commons license, but I did at least contact Turney to get his data set of SAT questions.

The ConceptNet Numberbatch word embeddings, built into ConceptNet 5.5, solve these SAT analogies better than any previous system. It gets 56.4% of the questions correct. The best comparable previous system, Turney’s SuperSim (2013), got 54.8%. And we’re getting ever closer to “human-level” performance on SAT analogies — while particularly smart humans can of course get a lot more questions right, the average college applicant gets 57.0%.

We can aspire to more than being comparable to a mediocre high school student, but that’s pretty good for an AI so far!

The Semantic Web: where is it now?

By now, the words “Semantic Web” probably make you feel sad, angry, or bored. There were a lot of promises about how all we needed to do was get everyone to put some RDF and OWL in their XML or whatever, and computers would get smarter. But few people wanted to actually do this, and it didn’t accomplish much when they did.

But there is a core idea of the Semantic Web that succeeded. We just don’t call it the Semantic Web anymore: we call it Linked Data. It’s the idea of sharing data, with URLs, that can explain what it means and how to connect it to other data. It’s the reason Gmail knows that you have a plane flight coming up and can locate your boarding pass. It’s the reason that editions of Wikipedia in hundreds of languages can be maintained and updated. I hear it also makes databases of medical research more interoperable, though I don’t actually know anything about that. Given that there’s this shard of the Semantic Web that does work, how about we get more semantics in it by making sure it works well with ConceptNet?

The new conceptnet.io makes it easier to use ConceptNet as Linked Data. You can get results from its API in JSON-LD format, a new format for sharing data that blows up some ugly old technologies like RDF+XML and SPARQL, and replaces them with things people actually want to use, like JSON and REST. You can get familiar with the API by just looking at it in your Web browser — when you’re in a browser, we do a few things to make it easy to explore, like adding hyperlinks and syntax highlighting.
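
Here’s the same thing from Python (a sketch with requests; the keys are standard JSON-LD, so the response describes itself):

>>> import requests
>>> obj = requests.get('http://api.conceptnet.io/c/en/dog').json()
>>> obj['@id']
'/c/en/dog'
>>> type(obj['edges'])   # the edges are plain JSON you can iterate over, no RDF toolchain required
<class 'list'>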

When I learned about JSON-LD, I noticed that it would be easy to switch ConceptNet to it, because the API looked kind of like it already. But what really convinced me to make the switch was a strongly-worded rant by W3C member Manu Sporny, which both fans and foes of Semantic Web technologies should find interesting, called “JSON-LD and Why I Hate the Semantic Web”. A key quote:

If you want to make the Semantic Web a reality, stop making the case for it and spend your time doing something more useful, like actually making machines smarter or helping people publish data in a way that’s useful to them.

Sounds like a good plan to me. We’ve shown a couple of ways that ConceptNet is making machines smarter than they could be without it, and some applications should be able to benefit even more by linking ConceptNet to other knowledge bases such as Wikidata.

Find out more

ConceptNet 5.5 can be found on the Web and on GitHub.

The ConceptNet documentation has been updated for ConceptNet 5.5, including an FAQ.

If you have questions or want more information, you can visit our new chat room on Gitter.

wordfreq 1.5: More data, more languages, more accuracy

wordfreq is a useful dataset of word frequencies in many languages, and a simple Python library that lets you look up the frequencies of words (or word-like tokens, if you want to quibble about what’s a word). Version 1.5 is now available on GitHub and PyPI.
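
Basic usage is two functions deep: word_frequency returns a word’s share of all words, and zipf_frequency puts the same number on a friendlier log scale.

>>> import wordfreq
>>> wordfreq.zipf_frequency('the', 'en') > wordfreq.zipf_frequency('thesaurus', 'en')
True
>>> wordfreq.word_frequency('szó', 'hu') > 0   # 'szó' means 'word'; Hungarian is one of the newly supported languages
True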

wordfreq can rank the frequencies of nearly 400,000 English words. These are some of them.

These word frequencies don’t just come from one source; they combine many sources to take into account many different ways to use language.

Some other frequency lists just use Wikipedia because it’s easy, but then they don’t accurately represent the frequencies of words outside of an encyclopedia. The wordfreq data combines whatever data is available from Wikipedia, Google Books, Reddit, Twitter, SUBTLEX, OpenSubtitles, and the Leeds Internet Corpus. Now we’ve added one more source: as much non-English text as we could possibly find in the Common Crawl of the entire Web.

Including this data has led to some interesting changes in the new version 1.5 of wordfreq:

  • We’ve got enough data to support 9 new languages: Bulgarian, Catalan, Danish, Finnish, Hebrew, Hindi, Hungarian, Norwegian Bokmål, and Romanian.
  • Korean has been promoted from marginal to full support. In fact, none of the languages are “marginal” now: all 27 supported languages have at least three data sources and a tokenizer that’s prepared to handle that language.
  • We changed how we rank the frequencies of words when data sources disagree. We used to use the mean of the frequencies. Now we use a weighted median.

Fixing outliers

Using a weighted median of word frequencies is an important change to the data. When the Twitter data source says “oh man you guys ‘rt’ is a really common word in every language”, and the other sources say “No it’s not”, the word ‘rt’ now ends up with a much lower value in the combined list because of the median.
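
If you’re curious what that looks like mechanically, here’s a toy weighted median (a sketch for intuition, not wordfreq’s actual code); with equal weights it simply outvotes the outlier:

>>> def weighted_median(values, weights):
...     """Return the value where half of the total weight falls on either side."""
...     half = sum(weights) / 2
...     running = 0
...     for value, weight in sorted(zip(values, weights)):
...         running += weight
...         if running >= half:
...             return value

>>> # One source (say, Twitter) claims 'rt' is hugely frequent; the others barely see it.
>>> weighted_median([3e-4, 1e-7, 2e-7, 0.0, 1e-7], [1, 1, 1, 1, 1])
1e-07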

wordfreq can still analyze formal or informal writing without its top frequencies being spammed by things that are specific to one data source. This turned out to be essential when adding the Common Crawl: when text on the Web is translated into a lot of languages, there is an unreasonably high chance that it says “log in”, “this website uses cookies”, “select your language”, the name of another language, or is related to tourism, such as text about hotels and restaurants. We wanted to take advantage of the fact that we have a crawl of the multilingual Web, without making all of the data biased toward words that are overrepresented in that crawl.

A typical "select your language" dropdown.

The reason the median is weighted is so we can still compare frequencies of words that don’t appear in a majority of sources. If a source has never seen a word, that could just be sampling noise, so its vote of 0 for what the word’s frequency should be counts less. As a result, there are still source-specific words, just with a lower frequency than they had in wordfreq 1.4:

>>> # Some source data has split off "n't" as a separate token
>>> wordfreq.zipf_frequency("n't", 'en', 'large')
2.28

>>> wordfreq.zipf_frequency('retweet', 'en', 'large')
1.57

>>> wordfreq.zipf_frequency('eli5', 'en', 'large')
1.45

Why use only non-English data in the Common Crawl?

Mostly to keep the amount of data manageable. While the final wordfreq lists are compressed down to kilobytes or megabytes, building these lists already requires storing and working with a lot of input.

There are terabytes of data in the Common Crawl, and while that’s not quite “big data” because it fits on a hard disk and a desktop computer can iterate through it with no problem, counting every English word in the Common Crawl would involve intermediate results that start to push the “fits on a hard disk” limit. English is doing fine because it has its own large sources, such as Google Books.

More data in more languages

A language can be represented in wordfreq when there are 3 large enough, free enough, independent sources of data for it. If there are at least 5 sources, then we also build a “large” list, containing lower-frequency words at the cost of more memory.

There are now 27 languages that make the cut. There perhaps should have been 30: the only reason Czech, Slovak, and Vietnamese aren’t included is that I neglected to download their Wikipedias before counting up data sources. Those languages should be coming soon.
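
You can ask the library itself what it covers (a sketch; function names as in wordfreq 1.5):

>>> import wordfreq
>>> len(wordfreq.available_languages())   # one entry per fully supported language
27
>>> wordfreq.zipf_frequency('ord', 'da') > 0   # Danish ('ord' means 'word') is one of the nine new languages
True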

Here’s another chart showing the frequencies of miscellaneous words, this time in all the languages:

A chart of selected word frequencies in 27 languages.

Getting wordfreq in your Python environment is as easy as pip install wordfreq. We hope you find this data useful in helping computers make sense of language!

Yes, people do want pre-computed word embeddings

The very informative tutorial by Vlad Niculae on Word Mover’s Distance in Python includes this step:

We could train the embeddings ourselves, but for meaningful results we would need tons of documents, and that might take a while. So let’s just use the ones from the word2vec team.

I couldn’t have asked for a better justification for ConceptNet and Luminoso in two sentences.

When we present new results from ConceptNet Numberbatch, which works far better than word2vec alone, one objection we hear is that the embeddings are pre-computed and aren’t based on your data. (Luminoso is a SaaS platform that retrains them to your data, in the cases where you do need that.)

Pre-baked embeddings are useful. People are resigning themselves to using word2vec’s pre-baked embeddings because they don’t know they can have better ones. I dream of the day when someone writing a new tutorial like this says “So let’s just use ConceptNet Numberbatch.”