It’s Thursday, April 18th, 2019. Which means, for Americans and the world, it’s officially Mueller Time!
That said, it’s a workday and I don’t really have time to read 440 or so pages written by lawyers, even legendary lawyers, like the crew in the Special Counsel’s Office. What I do have time for is a word cloud. I’ve never really made a word cloud before, so let’s get to it! Here’s one flavor of the finished product. Simple, elegant, and quite Soviet.
Step one: Find some words!
Find some words. (Done.)
While the original report is not searchable, a variety of folks have now offered searchable versions. I found mine on Google Drive.
Copy/Paste the words from a searchable PDF into a powerful word processor.
Notepad didn’t work for me; for some reason the clipboard contents were too large for it. I first tried OneNote, but I was worried it wouldn’t support the next step. So I resorted to Notepad++, because my next step is to get this into a single column of words (along with garbage characters, symbols, and numbers).
In Notepad++, I performed a find/replace to convert every space (‘ ‘) into a line break.
This was my reference.
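For anyone who’d rather script this step than lean on Notepad++, here’s a minimal Python sketch of the same find/replace, splitting pasted text into one word per line. The sample string is just a stand-in for the actual report text:

```python
# Stand-in for the text pasted out of the searchable PDF.
text = "To the best of my recollection"

words = text.split()        # split on any run of whitespace
column = "\n".join(words)   # one token per line, like the find/replace result
print(column)
```

This also sidesteps the clipboard-size problem entirely, since the text never has to pass through a text editor.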
Open my file in Tableau Prep for cleaning.
Rename my column “Text”
Clean -> Remove Punctuation
Clean -> Remove Numbers
Create a calculated field to clean up some of the weird remaining special characters
Sort the remaining text in descending order by count([Text]) and exclude the common English-language words that aren’t pertinent. Examples include common articles, helper verbs, etc.: “the”, “to”, “of”, “and”, “that”, ...
This part’s got a bit of subjectivity to it, but my frame of reference is that ambiguous or less evocative words, like the pronouns “him”, “his”, and “I”, aren’t helpful to the viz. I made this an iterative process.
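The cleaning steps above (remove punctuation, remove numbers, drop uninteresting words) can be sketched in Python with regular expressions. The token list and stop-word set here are small illustrative samples, not the author’s actual exclusion list:

```python
import re

# Hypothetical sample tokens and a tiny stop-word list for illustration.
tokens = ["The", "Special", "Counsel's", "Office,", "the", "President", "2019", "and"]
stopwords = {"the", "to", "of", "and", "that", "him", "his", "i"}

cleaned = []
for t in tokens:
    t = re.sub(r"[^\w\s]", "", t)   # Clean -> Remove Punctuation
    t = re.sub(r"\d", "", t)        # Clean -> Remove Numbers
    if t and t.lower() not in stopwords:
        cleaned.append(t)

print(cleaned)  # ['Special', 'Counsels', 'Office', 'President']
```

In practice the stop-word list grows iteratively, the same way it does in Prep: sort by count, spot an uninteresting word, add it to the set, repeat.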
Lastly, I’m going to filter out the rarely occurring words. (This is something one could also do in Tableau, but it’s useful to understand how to do it in Tableau Prep Builder.) I know this data set is going to have a long tail, and many of those rare instances are either garbage data, an artifact of converting a PDF image to text, or words that wouldn’t appear on the word cloud anyway, so this keeps the data set a bit more manageable.
First, and this is the interesting bit, duplicate the [Text] Field, because in Prep Builder, you need one instance to Group By and one instance to Count() or otherwise to aggregate.
Then, add an Aggregate step, and put the original [Text] in the Group By side and the duplicated [Text] in the Aggregate side and set it to Count.
I then Joined the original Data set to the Count, because what I really want is to let Tableau do the aggregations after I filter out the least commonly used words.
Add another clean step and filter by calculation. I opted for >= 10 because I want this data set to perform reasonably well.
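The aggregate/join/filter pattern from the last few steps can be sketched in Python with a Counter. The word list here is made up, but the >= 10 threshold mirrors the one used above:

```python
from collections import Counter

# Hypothetical cleaned word column: two frequent words and one garbage token.
words = ["Russia"] * 12 + ["Comey"] * 11 + ["typo"]

counts = Counter(words)                        # Aggregate: Group By word, Count
kept = [w for w in words if counts[w] >= 10]   # Join counts back, filter rare words
print(sorted(counts.items(), key=lambda kv: -kv[1]))
```

Note that `kept` still has one row per instance, not one row per word; like the Join in Prep Builder, this leaves the final aggregation to Tableau.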
If you still see ‘common’ words which aren’t interesting, revisit Step 5 or exclude them in Tableau.
Output to a Hyper file and build out your Word Cloud in Tableau.
I also opted to filter out additional ‘ambiguous’ words in Tableau Desktop.
I built a filter in Tableau Desktop to further limit the number of words by count. I settled on roughly the top 60 words, which kept the viz manageable.
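A top-N-by-count filter like that one looks roughly like this in Python. N is 3 here rather than ~60, and the counts are invented, just to keep the demo small:

```python
from collections import Counter

# Hypothetical word counts; real values would come from the Hyper extract.
word_counts = Counter({"president": 500, "trump": 400, "russia": 300,
                       "campaign": 250, "email": 40})

top_n = dict(word_counts.most_common(3))  # keep only the N most frequent words
print(top_n)
```

Anything outside the top N simply never reaches the word cloud, which is exactly what the count filter accomplishes in Desktop.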
You can find my workbook on Tableau Public: https://public.tableau.com/profile/josephschafer#!/vizhome/MuellerReportWordCloud/MuellerReportWordCloudrwb