wordfreq 1.2 is better at Chinese, English, Greek, Polish, Swedish, and Turkish

Originally posted on October 29, 2015.
Wordfreq 1.2 example code: examples in Chinese and British English.

In a previous post, we introduced wordfreq, our open-source Python library that lets you ask “how common is this word?”

Wordfreq is an important low-level tool for Luminoso. It’s one of the things we use to figure out which words are important in a set of text data. When we get the word frequencies figured out in a language, that’s a big step toward being able to handle that language from end to end in the Luminoso pipeline. We recently started supporting Arabic in our product and improved Chinese enough to take the “BETA” tag off of it; having the right word frequencies for those languages was a big part of that.

I’ve continued to work on wordfreq, putting together more data from more languages. We now have 17 languages that meet the threshold of having three independent sources of word frequencies, which we consider important for those word frequencies to be representative.

Here’s what’s new in wordfreq 1.2:

  • The English word list has gotten a bit more robust and a bit more British by including SUBTLEX, which adds word frequencies from American TV shows as well as the BBC.
  • It can fearlessly handle Chinese now. It uses a lovely pure-Python Chinese tokenizer, Jieba, to handle multiple-word phrases, and Jieba’s built-in wordlist provides a third independent source of word frequencies. Wordfreq can even smooth over the differences between Traditional and Simplified Chinese.
  • Greek has also been promoted to a fully-supported language. With new data from Twitter and OpenSubtitles, it now has four independent sources.
  • In some applications, you want to tokenize a complete piece of text, including punctuation as separate tokens. Punctuation tokens don’t get their own word frequencies, but you can ask the tokenizer to give you the punctuation tokens anyway (there’s a quick sketch of this right after this list).
  • We added support for Polish, Swedish, and Turkish. All those languages have a reasonable amount of data that we could obtain from OpenSubtitles, Twitter, and Wikipedia by doing what we were doing already.
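Here’s that quick sketch of the punctuation option. I’m assuming the keyword argument is spelled include_punctuation; check the wordfreq documentation for the exact name and defaults.

from wordfreq import tokenize

tokenize('Hello, world!', 'en')
# ['hello', 'world']
tokenize('Hello, world!', 'en', include_punctuation=True)
# ['hello', ',', 'world', '!']   (approximate output; punctuation comes back as its own tokens)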

When adding Turkish, we made sure to convert the case of the dotted and dotless letters İ and ı correctly. We know that putting the dots in the wrong places can lead to miscommunication and even fatal stabbings.
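To make the İ issue concrete, here’s a hedged illustration (not wordfreq’s internal code) of why Turkish needs special treatment, using Python’s default, non-Turkish-aware str.lower():

'DİYARBAKIR'.lower()
# 'di̇yarbakir' -- the dotless I wrongly gains a dot, and İ becomes 'i' plus a stray combining dot

def turkish_lower(text):
    # Handle the dotted and dotless capitals first, then lowercase the rest.
    return text.replace('İ', 'i').replace('I', 'ı').lower()

turkish_lower('DİYARBAKIR')
# 'diyarbakır'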

The language in wordfreq that’s still only partially supported is Korean. We still only have two sources of data for it, so you’ll see the disproportionate influence of Twitter on its frequencies. If you know where to find a lot of freely-usable Korean subtitles, for example, we would love to know.

Let’s revisit the top 10 words in the languages wordfreq supports. And now that we’ve talked about getting right-to-left right, let’s add a bit of code that makes Arabic show up with right-to-left words in left-to-right order, instead of middle-to-elsewhere order like it came out before.
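The screenshot below shows the actual code; here’s a minimal sketch of one way to do it, using Unicode’s directional isolate characters (my choice for this sketch, not necessarily what the original code used):

from wordfreq import available_languages, top_n_list

FSI, PDI = '\u2068', '\u2069'   # first-strong isolate / pop directional isolate

for lang in sorted(available_languages()):
    words = top_n_list(lang, 10)
    # Isolating each word keeps right-to-left words (such as Arabic) readable
    # while the list itself stays in left-to-right order.
    print(lang, ' '.join(FSI + word + PDI for word in words))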

Code showing the top ten words in each language wordfreq 1.2 supports.

Wordfreq 1.2 is available on GitHub and PyPI.

wordfreq: Open source and open data about word frequencies

Originally posted on September 1, 2015.

Often, in NLP, you need to answer the simple question: “is this a common word?” It turns out that answering it requires the computer to first answer a more vexing question: “What’s a word?”

Let’s talk briefly about why word frequencies are important. In many cases, you want to assign more significance to uncommon words. For example, a product review might contain the word “use” and the word “defective”, and the word “defective” carries way more information. If you’re wondering what the deal is with John Kasich, a headline that mentions “Kasich” will be much more likely to be what you’re looking for than one that merely mentions “John”.

For purposes like these, it would be nice if we could just import a Python package that could tell us whether one word was more common than another, in general, based on a wide variety of text. We looked for a while and couldn’t find it. So we built it.

wordfreq provides estimates of the frequencies of words in many languages, loading its data from efficiently-compressed data structures so it can give you word frequencies down to 1 occurrence per million without having to access an external database. It aims to avoid being limited to a particular domain or style of text, getting its data from a variety of sources: Google Books, Wikipedia, OpenSubtitles, Twitter, and the Leeds Internet Corpus.
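A minimal usage sketch (the numbers in the comments are rough illustrations, not exact values):

from wordfreq import word_frequency

word_frequency('the', 'en')          # around 0.05 -- a few percent of all English words
word_frequency('defective', 'en')    # on the order of a few occurrences per million
word_frequency('sdlkfjsldkf', 'en')  # 0.0 -- rarer than wordfreq's cutoff, so it reports zero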

The 10 most common words that wordfreq knows in 15 languages. Yes, it can handle multi-character words in Chinese and Japanese; those just aren’t in the top 10. A puzzle for Unicode geeks: guess where the start of the Arabic list is.

Partial solutions: stopwords and inverse document frequency

Those who are familiar with the basics of information retrieval probably have a couple of simple suggestions in mind for dealing with word frequencies.

One is to come up with a list of stopwords, words such as “the” and “of” that are too common to use for anything. Discarding stopwords can be a useful optimization, but that’s far too blunt an operation to solve the word frequency problem in general. There’s no place to draw the bright line between stopwords and non-stopwords, and in the “John Kasich” example, it’s not the case that “John” should be a stopword.

Another partial solution would be to collect all the documents you’re interested in, and re-scale all the words according to their inverse document frequency or IDF. This is a quantity that decreases as the proportion of documents a word appears in increases, reaching 0 for a word that appears in every document.
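As a hedged sketch of the definition (the plain, unsmoothed form; real systems often add smoothing):

import math

def inverse_document_frequency(term, documents):
    # documents is a list of sets of words; IDF is log(N / number of documents containing the term).
    containing = sum(1 for doc in documents if term in doc)
    return math.log(len(documents) / containing) if containing else float('inf')

docs = [{'the', 'product', 'is', 'defective'},
        {'the', 'product', 'works'},
        {'the', 'box', 'was', 'damaged'}]
inverse_document_frequency('the', docs)        # 0.0 -- it appears in every document
inverse_document_frequency('defective', docs)  # log(3/1), about 1.1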

One problem with IDF is that it can’t distinguish a word that appears in a lot of documents because it’s unimportant, from a word that appears in a lot of documents because it’s very important to your domain. Another, more practical problem with IDF is that you can’t calculate it until you’ve seen all your documents, and it fluctuates a lot as you add documents. This is particularly an issue if your documents arrive in an endless stream.

We need good domain-general word frequencies, not just domain-specific word frequencies, because without the general ones, we can’t determine which domain-specific word frequencies are interesting.

Avoiding biases

The counts of one resource alone tend to tell you more about that resource than about the language. If you ask Wikipedia alone, you’ll find that “census”, “1945”, and “stub” are very common words. If you ask Google Books, you’ll find that “propranolol” is supposed to be 10 times more common than “lol” overall (and also that there’s something funny going on, so to speak, in the early 1800s).

If you collect data from Twitter, you’ll of course find out how common “lol” is. You also might find that the ram emoji “🐏” is supposed to be extremely common, because that guy from One Direction once tweeted “We are derby super 🐏🐏🐏🐏🐏🐏🐏🐏🐏🐏🐏🐏🐏🐏🐏🐏🐏🐏🐏🐏”, and apparently every fan of One Direction who knows what Derby Super Rams are retweeted it.

Yes, wordfreq considers emoji to be words. Its Twitter frequencies would hardly be complete without them.

We can’t entirely avoid the biases that come from where we get our data. But if we collect data from enough different sources (not just larger sources), we can at least smooth out the biases by averaging them between the different sources.
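Here’s a hedged sketch of the averaging idea (the concept, not wordfreq’s actual build code): treat a word as having frequency 0 in any source that lacks it, average across sources, and renormalize.

def merge_sources(sources):
    # sources: a list of {word: frequency} dicts, one per data source.
    vocab = set().union(*sources)
    merged = {w: sum(src.get(w, 0.0) for src in sources) / len(sources) for w in vocab}
    total = sum(merged.values())
    return {w: f / total for w, f in merged.items()}

wikipedia = {'the': 0.05, 'census': 0.004, 'lol': 0.00001}
twitter   = {'the': 0.04, 'census': 0.00001, 'lol': 0.005}
merged = merge_sources([wikipedia, twitter])   # 'census' and 'lol' both land somewhere in between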

What’s a word?

You have to agree with your wordlist on the matter of what constitutes a “word”, or else you’ll get weird results that aren’t supported by the actual data.

Do you split words at all spaces and punctuation? Which of the thousands of symbols in Unicode are punctuation? Is an apostrophe punctuation? Is it punctuation when it puts a word in single quotes? Is it punctuation in “can’t”, or in “l’esprit”? How many words is “U.S.” or “google.com”? How many words is “お早うございます” (“good morning”), taking into account that Japanese is written without spaces? The symbol “-” probably doesn’t count as a word, but does “+”? How about “☮” or “♥”?

The process of splitting text into words is called “tokenization”, and everyone’s got their own different way to do it, which is a bit of a problem for a word frequency list.

We tried a few ways to make a sufficiently simple tokenization function that we could use everywhere, across many languages. We ended up with our own ad-hoc rule, involving large sets of Unicode characters and a special case for apostrophes. That is in fact what we used when we originally released wordfreq 1.0, which came packaged with regular expressions that look like attempts to depict the Flying Spaghetti Monster in text.

A particularly noodly regex.

But shortly after that, I realized that the Unicode Consortium had already done something similar, and they’d probably thought about it for more than a few days.

Word splitting in Unicode. Not pictured: how to decide which of these segments count as “words”.

This standard for tokenization looked like almost exactly what we wanted, and the last thing holding me back was that implementing it efficiently in Python looked like it was going to be a huge pain. Then I found that the regex package (not the re package built into Python) contains an efficient implementation of this standard. Defining how to split text into words became a very simple regular expression… except in Chinese and Japanese, because a regular expression has no chance in a language where the separation between words is not written in any way.
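Here’s a hedged sketch of what that looks like. This is not wordfreq’s exact pattern, but it shows the idea, assuming the regex module’s WORD flag (which makes \b and \B follow Unicode’s default word-boundary rules):

import regex   # the third-party 'regex' package, not the built-in 're'

TOKEN_RE = regex.compile(r'\w(?:\B\S)*', regex.WORD)

TOKEN_RE.findall("l'esprit d'escalier isn't easy to translate.")
# expect something like: ["l'esprit", "d'escalier", "isn't", 'easy', 'to', 'translate']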

So this is how wordfreq 1.1 identifies the words to count and the words to look up. Of course, there is going to be data that has been tokenized in a different way. When wordfreq gets something that looks like it should be multiple words, it will look them up separately and estimate their combined frequency, instead of just returning 0.
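A quick usage sketch of that behavior (the exact formula for combining token frequencies is an internal detail I’m not reproducing here):

from wordfreq import word_frequency

word_frequency('New York', 'en')   # one estimate for the two-token phrase, not 0
word_frequency("can't", 'en')      # contractions get looked up however the tokenizer splits them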

Language support

wordfreq supports 15 commonly-used languages, but of course some languages are better supported than others. English is quite polished, for example, while Chinese so far is just there to be better than nothing.

The reliability of each language corresponds pretty well with the number of different data sources we put together to make the wordlist. Some sources are hard to get in certain languages. Perhaps unsurprisingly, for example, not much of Twitter is in Chinese. Perhaps more surprisingly, not much of it is in German either.

The word lists that we’ve built for wordfreq represent the languages where we have at least two sources. I would consider the ones with two sources a bit dubious, while all the languages that have three or more sources seem to have a reasonable ranking of words.

  • 5 sources: English
  • 4 sources: Arabic, French, German, Italian, Portuguese, Russian, Spanish
  • 3 sources: Dutch, Indonesian, Japanese, Malay
  • 2 sources: Chinese, Greek, Korean

Compact wordlists

When we were still figuring this all out, we made several 0.x versions of wordfreq that required an external SQLite database with all the word frequencies, because there are millions of possible words and we had to store a different floating-point frequency for each one. That’s a lot of data, and it would have been infeasible to include it all inside the Python package. (GitHub and PyPI don’t like huge files.) We ended up with a situation where installing wordfreq would either need to download a huge database file, or build that file from its source data, both of which would consume a lot of time and computing resources when you’re just trying to install a simple package.

As we tried different ways of shipping this data around to all the places that needed it, we finally tried another tactic: What if we just distributed less data?

Two assumptions let us greatly shrink our word lists:

  • We don’t care about the frequencies of words that occur less than once per million words. We can just assume all those words are equally informative.
  • We don’t care about, say, 2% differences in word frequency.

Now instead of storing a separate frequency for each word, we group the words into 600 possible tiers of frequency. You could call these tiers “centibels”, a logarithmic unit similar to decibels, because there are 100 of them for each factor of 10 in the word frequency. Each of them represents a band of word frequencies that spans about a 2.3% difference. The data we store can then be simplified to “Here are all the words in tier #330… now here are all the words in tier #331…” and converted to frequencies when you ask for them.
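Here’s a worked sketch of the arithmetic (illustrative; not wordfreq’s actual storage code):

import math

def freq_to_tier(freq):
    # Tier k corresponds to a frequency of 10 ** (-k / 100).
    return round(-100 * math.log10(freq))

def tier_to_freq(tier):
    return 10 ** (-tier / 100)

freq_to_tier(1e-6)                      # 600 -- one occurrence per million words
tier_to_freq(331) / tier_to_freq(330)   # about 0.977, so each tier is roughly 2.3% lower than the last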

Some tiers of word frequencies in English.

This let us cut down the word lists to an entirely reasonable size, so that we can put them in the repository, and just keep them in memory while you’re using them. The English word list, for example, is 245 KB, or 135 KB compressed.

But it’s important to note the trade-off here, that wordfreq only represents sufficiently common words. It’s not suited for comparing rare words to each other. A word rarer than “amulet”, “bunches”, “deactivate”, “groupie”, “pinball”, or “slipper”, all of which have a frequency of about 1 per million, will not be represented in wordfreq.

Getting the package

wordfreq is available on GitHub, or it can be installed from the Python Package Index with the command pip install wordfreq. Documentation can be found in its README on GitHub.

Comparing the frequency per million words of two spellings of “café”, in English and French.
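For reference, here’s a hedged sketch along the lines of that screenshot (the printed numbers depend on the wordfreq data you have installed):

from wordfreq import word_frequency

for word in ('café', 'cafe'):
    for lang in ('en', 'fr'):
        per_million = word_frequency(word, lang) * 1e6
        print(word, lang, round(per_million, 2))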

ftfy (fixes text for you) version 3.0

Originally posted on August 26, 2013.

About a year ago, we blogged about how to ungarble garbled Unicode in a post called Fixing common Unicode mistakes with Python — after they’ve been made. Shortly after that, we released the code in a Python package called ftfy.

You have almost certainly seen the kind of problem ftfy fixes. Here’s a shoutout from a developer who found that her database was full of place names such as “BucureÅŸti, Romania” because of someone else’s bug. That’s easy enough to fix:

>>> from ftfy import fix_text

>>> print(fix_text(u'BucureÅŸti, Romania'))
Bucureşti, Romania

>>> fix_text(u'Sokal’, L’vivs’ka Oblast’, Ukraine')
"Sokal', L'vivs'ka Oblast', Ukraine"

A reddit commenter has helpfully reminded me of the technical name for this phenomenon, which is mojibake.

We’ve kept developing this code because of how directly useful it is. Today, we’re releasing version 3.0 of ftfy. We’ve made it run faster, made it start up faster, made it fix more kinds of problems, and reduced its rate of false positives to near zero, so that now we can just run it on any text anyone sends us.

(I know that “near zero” is not a useful description of an error rate. To be more precise: We test ftfy by running the live stream of Twitter through it and looking at the changes it makes. Since the last bugfix, it has handled over 7,000,000 tweets with no false positives.)

We’ve also made sure that the code runs on both Python 2 and Python 3, and gives equivalent results on all versions, even when the text contains “astral characters” such as emoji that are handled inconsistently in Python 2.

You can get ftfy from GitHub or by using your favorite Python package manager, such as:

pip install ftfy

If ftfy is useful to you, we’d love to hear how you’re using it. You can reply to the comments here or e-mail us at info@luminoso.com.

Fixing Unicode mistakes and more: the ftfy package

Originally posted on August 24, 2012.

There’s been a great response to my earlier post, Fixing common Unicode mistakes with Python. This is clearly something that people besides me needed. In fact, someone already made the code into a web site, at fixencoding.com. I like the favicon.

I took the suggestion to split the code into a new standalone package. It’s now called ftfy, standing for “fixes text for you”. You can install it with pip install ftfy.

I observed that I was doing interesting things with Unicode in Python, and yet I wasn’t doing it in Python 3, which basically makes me a terrible person. ftfy is now compatible with both Python 2 and Python 3.

Something else amusing happened: at one point, someone edited the previous post and WordPress barfed HTML entities all over its text. All the quotation marks turned into a literal “&#8220;”, for example. So, for a bit, that post was setting a terrible example about how to handle text correctly!

I took that as a sign that I should expand ftfy so that it also decodes HTML entities (though it will leave them alone in the presence of HTML tags). While I was at it, I also made it turn curly quotes into straight ones, convert Windows line endings to Unix, normalize Unicode characters to their canonical forms, strip out terminal color codes, and remove miscellaneous control characters. The original fix_bad_unicode is still in there, if you just want the encoding fixer without the extra stuff.
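Here’s a hedged sketch of those extra fixes in action, using fix_text’s defaults (exact behavior and option names may vary between ftfy versions):

from ftfy import fix_text

fix_text('Here&rsquo;s an HTML entity')          # "Here's an HTML entity"
fix_text('“Curly quotes” become straight')       # '"Curly quotes" become straight'
fix_text('Windows line endings\r\nbecome Unix')  # 'Windows line endings\nbecome Unix'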

Fixing common Unicode mistakes with Python after they’ve been made

Originally posted on August 20, 2012.

Update: not only can you fix Unicode mistakes with Python, you can fix Unicode mistakes with our open source Python package ftfy. It’s on PyPI and everything.

You have almost certainly seen text on a computer that looks something like this:

If numbers arenâ€™t beautiful, I donâ€™t know what is. â€“Paul ErdÅ‘s

Somewhere, a computer got hold of a list of numbers that were intended to constitute a quotation and did something distinctly un-beautiful with it. A person reading that can deduce that it was actually supposed to say this:

If numbers aren’t beautiful, I don’t know what is. –Paul Erdős

Here’s what’s going on. A modern computer has the ability to display text that uses over 100,000 different characters, but unfortunately that text sometimes passes through a doddering old program that believes there are only the 256 characters that fit in a single byte. The program doesn’t even bother to check what encoding the text is in; it just uses its own favorite encoding and turns a bunch of characters into strings of completely different characters.

Now, you’re not the programmer causing the encoding problems, right? Because you’ve read something like Joel Spolsky’s The Absolute Minimum Every Developer Absolutely, Positively Must Know About Unicode And Character Sets or the Python Unicode HOWTO and you’ve learned the difference between text and bytestrings and how to get them right.

But the problem is that sometimes you might have to deal with text that comes out of other code. We deal with this a lot at Luminoso, where the text our customers want us to analyze has often passed through several different pieces of software, each with their own quirks, probably with Microsoft Office somewhere in the chain.

So this post isn’t about how to do Unicode right. It’s about a tool we came up with for damage control after some other program does Unicode wrong. It detects some of the most common encoding mistakes and does what it can to undo them.
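Here’s a simplified sketch of the core trick (ftfy’s fix_bad_unicode adds heuristics on top of this to avoid mangling text that’s already correct): re-encode the text in the single-byte encoding it was probably mis-decoded with, then decode the bytes as UTF-8.

def undo_mojibake(text):
    for encoding in ('windows-1252', 'latin-1'):
        try:
            fixed = text.encode(encoding).decode('utf-8')
        except (UnicodeEncodeError, UnicodeDecodeError):
            continue
        return fixed
    return text

undo_mojibake('ErdÅ‘s')   # 'Erdős'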
