I’d presumed it was trivial to show that good grammar can improve your chances of success — not that good grammar is an indication of ability, but merely that having good grammar skills lends an appearance of credibility and competence that may or may not be backed up with actual skills for the task at hand. I strongly suspect, for instance, that a resume written in accordance with the basic rules of English grammar will be more likely to bring its writer an interview, all else being equal. Rather like legacy status in an application to an Ivy League school — except with an at-least-tenuous link to ability — I’ve imagined it serves as a little bonus.*

But having recently seen a few ham-handed attempts at this yield results approximately as convincing as a child’s insistence that their imaginary friend was the one who knocked over the vase, I’m beginning to re-think my presumption.

For instance, I’ve recently found this terrible post and infographic from Grammarly that purports to show that — well, it’s a little hard to say, because they’ve managed to write 500-some words without ever having a clear thesis. The infographic reports the grammatical error rates for three pairs of competing companies, and juxtaposes this with corporate data on the three pairs, presumably to look for correlations between the two.

I believe their claim is that fewer grammar mistakes are made by more successful companies. That’s a pretty weak claim, seeing as it doesn’t even require causation. We’d see this pattern if greater success led to improved grammar, perhaps by having money to hire editors; we’d see it if better grammar increased the company’s performance; we’d see it if the two were caused by an unobserved third variable. That said, the study doesn’t even find evidence for this tepid claim, and perhaps that is why they carefully fail to make the claim explicit.
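
To make the third-variable possibility concrete, here’s a minimal simulation sketch in Python. The numbers are entirely made up (nothing here comes from Grammarly’s data); it just shows how an unobserved factor, say overall company scale, can drive both success and error rates, so the two correlate without either causing the other:

```python
import random

random.seed(0)
n = 1000

# Hypothetical unobserved confounder: overall company scale/resources.
size = [random.gauss(0, 1) for _ in range(n)]

# Scale boosts success, and scale (money for editors, say) cuts error rates;
# grammar and success never touch each other directly.
success = [s + random.gauss(0, 1) for s in size]
errors = [-s + random.gauss(0, 1) for s in size]

def corr(xs, ys):
    """Pearson correlation, computed by hand to stay dependency-free."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Prints roughly -0.5: fewer errors "predict" success purely via the confounder.
print(corr(success, errors))
```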

The post tells the reader that “major errors undermine the brand’s credibility” and that investors “may judge” them for it, but even these weak statements are watered down by the concluding paragraphs. This restraint from overstating their case is hardly laudable; it’s clear that the reader is intended to look at these numbers and colors, this subtle wrinkled-paper background on the infographic, and draw the conclusion that Grammarly has stopped short of stating: you need a (i.e., their) grammar checker or you will lose market share!**

[The infographic's conclusion]

The only testable claim in the infographic’s conclusion (“they must demonstrate accurate writing!”) isn’t borne out by the 1500 pixels preceding it.

It might not seem worth bothering with a breakdown of the bad science going on in this infographic. Alas, the results were uncritically echoed in a Forbes blog post, and the conclusions were only strengthened in the re-telling. So let’s look at exactly why this analysis fails to establish anything more than that people will see proof of their position in any inconclusive data.

Let’s start by looking at the data underpinning the experiment. The company took 400 (!) words from the most recent LinkedIn postings (!) of three (!) pairs (!) of competing multinational corporations. We’re not even looking at the equivalent of a single college admission essay from each company, in an age where companies are producing more publicly consumable text than ever before.
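
To put a number on how little 400 words can show, here’s a back-of-the-envelope sketch; the error counts below are hypothetical, chosen only to illustrate the uncertainty, not taken from the infographic:

```python
import math

def error_rate_ci(errors, words, z=1.96):
    """Rough 95% confidence interval (normal approximation) for a
    per-word error rate estimated from a small sample."""
    p = errors / words
    half = z * math.sqrt(p * (1 - p) / words)
    return max(p - half, 0.0), p + half

# Hypothetical: one company makes 4 errors in its 400 words, its rival 8.
for label, errs in [("Company A", 4), ("Company B", 8)]:
    lo, hi = error_rate_ci(errs, 400)
    print(f"{label}: {errs}/400 = {errs/400:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

The intervals come out around [0.000, 0.020] and [0.006, 0.034]: they overlap so heavily that even a two-to-one difference in observed errors is consistent with the two companies having identical underlying error rates.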

Not to mention, I looked at the LinkedIn posts from Coke, one of the companies tested. Nine of their last ten posts were, in their entirety: “The Coca-Cola Company is hiring: [position] in [location]”. The tenth was “Coke Studio makes stars out of singers in India [link]”. How do you assess grammaticality from such data?

Awesome Data, Great Jobs!

Well, let’s suppose the data is appropriate and see what results we get from it. Remember: the hypothesis is that lower error rates are correlated with higher corporate success (e.g., market share, revenue). Do we see that in the head-to-head comparisons?

  • The first comparison is between Coke and Pepsi. Pepsi has more errors than Coke, and, fitting the hypothesis, Coke has a higher market share! But Pepsi has higher revenues, as the infographic notes (and then dismisses because it doesn’t fit the narrative). So we start with inconclusive data.
  • The second comparison is between Google and Facebook. Google makes fewer errors and has higher corporate success. Let’s take this one at face value: evidence in favor.
  • The third comparison is between Ford and GM. Ford makes fewer errors but is worse on every financial metric than GM. “However, these numbers are close”, the infographic contends. Evidence against.

So we have three comparisons. In one, it’s ambiguous which company is more successful. The two “decisive” comparisons are split. The data is literally equal in favor of and in opposition to the conclusion. It is insulting that anyone could present such an argument and ask someone to believe it. If a student handed this in as an assignment, I would fail them without hesitation.***
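
Even a perfect sweep wouldn’t have rescued the argument. With only three paired comparisons, a simple sign test (my choice of test, sketched below; nothing like it appears in the infographic) can’t come anywhere near conventional significance:

```python
from math import comb

def sign_test_p(successes, n):
    """One-sided sign test: probability of at least this many favorable
    comparisons out of n if each outcome were a fair coin flip."""
    return sum(comb(n, k) for k in range(successes, n + 1)) / 2 ** n

print(sign_test_p(3, 3))  # 0.125: even a flawless 3-for-3 sweep isn't significant
print(sign_test_p(1, 2))  # 0.75: the actual 1-of-2 decisive split is pure chance
```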

What’s richest about this to me is that the central conceit of this study is that potential consumers will judge poor grammar skills as indicative of poor capability as a company. I’ve never found convincing evidence that bad grammar is actually indicative of poor ability outside of writing; the construction crew that put together my house probably don’t know when whom can be used, but my house is a lot more stable than it would be if Lynne Truss and I were the ones cobbling it together. But for all those people out there saying that good grammar is indicative of good logic, this clearly runs counter to that claim. Grammarly’s showing itself incapable of making a reasoned argument or marshalling evidence to support a claim, yet their grammar is fine. How are poor logic skills not a more damning inability than poor grammar skills, especially when “poor grammar” often means mistakenly writing between you and I?

The Kyle Wienses out there will cluck their tongues and think “I would never hire someone with bad grammar”, without even thinking that they’ve unquestioningly swallowed far worse logic. Sure enough, the Forbes post generated exactly the comments you’d expect:

“I figuratively cringe whenever grammar worthy of decayed shower scum invades my reading; it makes you wonder just how careful the company is of other corporate aspects (oh, gee, I don’t know, say, quality as well)”

With comments like that, maybe these people are getting the company that best reflects them: superficial and supercilious, concerned more with window-dressing to appear intelligent than with actually behaving intelligently.


*: I, of course, don’t mean that obsessing over different than or the like is relevant, but rather higher-level things like subject-verb agreement and coherent sentence structure.

**: Though Grammarly makes an automated grammar checker, it wasn’t used to assemble this data. Nor was it run on this data, so we don’t know if it would even provide a solution to help out these grammatically deficient brands.

***: I don’t mean to imply that this would be convincing if only the data were better and all three comparisons went the right way. There’s no statistical analysis, not even a whiff of it, and there’s no way you could convince me of any conclusion from this experiment as currently devised. But at least if the comparisons went the right way, I could understand jumping the gun and saying you’ve found evidence. As it is, it’s imagining a gun just to try to jump it.
