
It’s a dark night; you’re in an unfamiliar city, slightly lost, but pretty sure you’ll know where you are if you just get to the next corner. The streets are quiet. A stranger steps out of the gloom in front of you, and announces that certain words don’t mean what you think they mean. They’re words that you use but have never really felt comfortable with, words that you use mostly because you’ve heard them in set phrases, words like plethora.

Plethora, you wonder, could it be I’m using it wrong? That niggling uncertainty kicks in, the same niggling uncertainty that’s pushed you to educate yourself all these years. It creeps further, darkening your mind. Have I been using words wrong? Your breath quickens — how many others have heard me say them before this stranger came up and told me I was wrong? Have I used one of them lately? Have I been judged? Your pulse races. Did I just say one? — is, is that why this stranger materialized to announce it was wrong?

The stranger says more words are being used wrong, by others, by you. These words are more common, common enough to be known but not common enough to be well-known: myriad, enormity. Oh God, you think, I’ve used those words in business writing! The uncertainty changes into certainty, certainty that you are wrong, and worse, that people know it. Important people know it. That’s why you haven’t been promoted, it’s why your friends were laughing that one time and didn’t say why. The stranger has you now. The stranger knows the dark spots on your soul. The stranger is almost touching you now, so close, so close. Your eyes meet. The stranger’s eyes widen; this is it, the final revelation.  Do you dare listen?  You can’t listen, you must listen:

“And you’re using allow wrong, too!”

At which point the spell is broken — because c’mon, you’re not using allow wrong. You’d definitely have noticed that. You push the stranger out of the way, and realize your hotel’s just on the next block.

In the unfamiliar city of the Internet, I encountered such a stranger: Niamh Kinsella, writer of the listicle “14 words you’ve been using incorrectly this whole time”. Kinsella argues that your usage doesn’t fit with the true definition of these words, by which she usually means an early, obsolete, or technical meaning of the word.

Her first objection is to plethora, which she defines as “negative word meaning a glut of fluid”. And so it was in the 1500s, when it entered the language as a medical term. This medical meaning persists in the present day, but additional figurative meanings branched off of it long ago — so long ago, in fact, that one of the meanings branched off, flourished for 200 years, and still had enough time to fade into obsolescence by now. The extant figurative meaning, the one that most everyone means when they use plethora, is antedated to 1835 by the Oxford English Dictionary, at which point it was usually a bad thing (“suffering under a plethora of capital”, the OED quotes). But by 1882 we see the modern neutral usage: “a perfect plethora of white and twine-colored thick muslin”.

The second objection is to myriad, and here Kinsella deviates by ignoring the early usage. She hectors: “It’s an adjective meaning countless and infinite. As it’s an adjective, it’s actually incorrect to say myriad of.” But in fact myriad entered English as a noun, either as a transliteration of the Greek term for “ten thousand”, or as an extension of that very large number to mean “an unspecified very large number” (both forms are antedated by the OED to the same 1555 work). The adjectival form doesn’t actually appear until two centuries later, the 1700s. Both nominal and adjectival forms have been in use from their inception to the present day; claiming that one or the other is the only acceptable form is just silly.*

There’s no point in continuing this after the third objection, which is to using allow in cases that do not involve the explicit granting of permission. To give you an idea of what folly this is, think of replacements for allows in a supposedly objectionable sentence like “A functional smoke alarm allows me to sleep peacefully.” The first ones that come to my mind are lets, permits, gives me the ability, and enables. That’s the sign of a solid semantic shift; four of my top five phrasings of the sentence are all verbs of permission with the permission shifted to enablement. Kinsella herself has no beef with it when she isn’t aiming to object, judging by her lack of objection to an article headlined “Are we allowed optimism now?”.

This enablement usage isn’t new, either; the OED cites “His condition would not allow of his talking longer” from 1732. (Permit without permission is antedated even further back, to 1553.) This oughtn’t even to be up for debate; even if it were completely illogical — which, as an example of consistent semantic drift, it’s not — the fact that it is so standard in English means that it is, well, standard. It is part of English, and no amount of insisting that it oughtn’t to be makes a difference. It’s similar to the occasional objection I see to Aren’t I?: even if I agreed it didn’t make sense, virtually every (non-Scottish/Irish) English speaker uses it in place of amn’t I?, so it’s right. End of discussion.

Why do we fall for this over and over again? Why do we let people tell us what language is and isn’t based on assertions that never have any references (Kinsella cites no dictionaries) and rarely hold up to cursory investigation? I don’t know, but my guess is that it appeals to that universal mixture of insecurity and vanity that churns inside each of us.

We are convinced that we must be doing everything wrong, or — and perhaps worse — that we’re doing most things right but there’s some unexpected subset of things that we have no idea we’re doing wrong. So if someone tells us we’re wrong, especially if they candy-coat it by saying that it’s not our fault, that everyone’s wrong on this, well, we just assume that our insecurities were right — i.e., that we were wrong. But then, aware of this new secret knowledge, these 14 weird tricks of language use, our vanity kicks in. Now we get to be the ones to tell others they’re wrong. Knowing these shibboleths gives you the secret knowledge of the English Illuminati. Between our predisposition to believe we’re wrong, our desire to show others up by revealing they’re wrong, and our newfound membership in this elite brotherhood, what incentive do we have to find out that these rules are hogwash? All that comes out of skepticism is, well, this: me, sitting at my laptop, writing and rewriting while the sun creeps across a glorious sky on a beautiful day that I could have been spending on the patio of my favorite coffee shop, approaching my fellow patrons, dazzling them with my new conversation starter: “I bet you use plethora wrong. Allow me to explain.”

*: In fact, Kinsella undermines her own definition of “countless and infinite” in her supposedly correct example by using “countless and infinite” to describe the finite set of stars in the universe, so maybe she’s just in love with the sound of her own hectoring.

I have it on bad authority that English has died. You may have heard the linguistic Chicken Littles milling about the Internet, each trying to come up with a more hyperbolic statement about the death of the language — or perhaps even society as a whole — because “twerk is now a real word”, whatever that’s supposed to mean. Ben Zimmer has a nice run-down of this “perfect lexicographical storm”, and if you’ve been lucky enough to have missed out on it, let me offer a few sample Tweets:

The last one’s best because it really couldn’t be more wrong. No one has the power to make something “officially” a word,* and it wasn’t the Oxford English Dictionary but the Oxford Dictionaries Online that added these entries. (The differences between the OED and ODO are detailed here.) I mean, seriously, if you’re going to lecture someone, can’t you at least put in the little effort it takes to be right?

For some reason, many media outlets can’t, at least not when they’ve got new dictionary entries on the brain. The wrong dictionary is cited, the new entries are never read,** and the purpose of a dictionary is always misunderstood — which is to record common words, not exclude them.

In light of all the misinformation out there, let’s calm down and look at what’s actually happened, why it’s happened, and what it means.

What has happened? The Oxford Dictionaries Online (ODO), in one of their quarterly updates, added a set of new definitions to their online dictionary, including ones for emoji, cake pop, and, yes, twerk. The ODO “offers guidance on how the English language is used today, based on the Oxford English Corpus. Words can be removed when they are no longer used”, as noted on their page explaining that the ODO and OED are not the same thing.

Nothing has “become a word”, nothing has been “officially” recognized, nor “added to the language”. One dictionary — one that focuses on contemporary usage — has added these words so that people who are unaware of them or unaware of how they’re used (me, in cases like balayage) can find out from a more reliable source than Urban Dictionary. The words already existed and were in common enough use that a group of lexicographers decided that their definitions should be noted and made available.

Why did this happen? Angus Stevenson explains in the ODO announcement:

“New words, senses, and phrases are added to Oxford Dictionaries Online when we have gathered enough independent evidence from a range of sources to be confident that they have widespread currency in English. [...] Each month, we add about 150 million words to our corpus database of English usage examples collected from sources around the world. We use this database to track and verify new and emerging words and senses on a daily basis.”

These words were added for one reason: they are currently sufficiently common that the lexicographers at ODO feel it will be useful for people to be able to find out what these words mean and how they are used. This does not imply that the lexicographers like or dislike these words, nor that they want to see them used more or less. In the same way that a meteorologist is compelled to state the expected weather regardless of whether they’d prefer something else, so too are the lexicographers bound to the language we give them, like it or lump it.***

['conk' in my desk dictionary]

A century ago, conk could have been a contentious addition, yet within a decade of its appearance, Rudyard Kipling was using it.

What does it mean? Well, let’s start with what it doesn’t mean. It doesn’t mean that these words are in “the dictionary”, because there is no “the” dictionary; there are a wide range of dictionaries, with different purposes and different criteria for adding entries. There is no central authority on English, so nothing’s ever “officially” a word or not. It also doesn’t mean that you have to like these words, nor that you have to use them or understand them. It doesn’t mean that all future dictionaries will now be forced to include these words in perpetuity, regardless of the lifespan of the words.

English is the same today as it was two days ago; it’s just a little better documented. The ODO’s update means that if you choose to use these words, other people will be able to find out what they mean, and if other people choose to use them, you will be able to find out what they mean. For the words that show staying power, more and more dictionaries will contain them, and those words that don’t will disappear. (The OED does not remove words once they’re in, but many dictionaries do, including the ODO at the center of the current dust-up.)

Lastly, if you’re worried that defining selfie and supercut and their ilk makes our generation look silly, or self-involved, or obsessed with stupid Internet trifles, well, maybe we are. Change begins at home; stop clicking on cat videos and waging arguments through memes. Stop making Miley Cyrus the top news story in place of Syria and the NSA and things that matter. Talk about ideas instead of contrived distractions. Dictionaries are reflections of our time; one can’t blame the mirror for an ugly face.

[A disclaimer: I am a linguist, not a lexicographer. If you are a lexicographer, we'd all love to hear any additional insights you have, and of course, please correct me if I've mischaracterized anything. If you are not a lexicographer but are interested in hearing more about lexicography, you can't go wrong with Ben Zimmer's or Kory Stamper's writings.]


*: This whole idea of “X is (not) a word” doesn’t even make sense anyway — see discussions by Arnold Zwicky and Stan Carey. A word is a word if it is used with a consistent meaning by some group of language users. For linguists, we have different possible definitions of a word (orthographic words, phonological words, etc.), so the matter’s actually pretty complicated — are idioms words, for instance?

**: In 2011, the actual OED did add a new entry for heart, v., based on its slang usage for “love”. The OED’s announcement noted the new form derived in part from the famous “I♥NY” logo, but nowhere in the entry does ♥ or <3 appear. That didn't stop Time, the Daily Mail, and many others from claiming that the OED had added its first graphical/symbolic entry and clucking their tongues as expected.

***: My impression is that lexicographers like more than they lump, as you can tell from the excitement of their update announcement.

If someone were to lend me a time machine and ask me to go back and figure out exactly what first set me down my road to dedicated descriptivism, I would first ask them if perhaps there wasn’t a better use for this marvelous contraption. But if they persisted, the coordinates I’d start with would be my elementary school days. I suspect it was some time around then that I first asked for permission to do something and was met with one of the archetypal prescriptions.

“Can I go to the bathroom?”, I surely must have asked, and just as surely a teacher must have answered, “I don’t know, can you?”

The irritation that I felt at this correction was so severe that even though I can’t remember when this happened, nor who did it to me, I still can call to mind the way it made me seethe. It was clear to me that the pedant was wrong, but I couldn’t figure out quite how to explain it. So, at the risk of sounding like I’m trying to settle a two-decade-old grudge, let’s look at whether it makes sense to correct this. I say that the answer is no — or at the very least, that one oughtn’t to correct it so snootily.

Let’s examine the “error” that the authority figure is correcting.  Can, we are told, addresses the ability to do something, whereas may addresses permission.  Mom said I can count to ten means that dear ol’ Mum believes in my ability to count to ten, although she may not want me to do so; Mom said I may count to ten means that Mum is allowing me to do so, although she need not believe that I am able to.*

At any given time, there are a lot of things that one is capable of doing (can do) and a lot of things that one is permitted to do (may do), and a few things that fall into both categories.  The prescriptivist idea is that there is a fairly clear distinction between the two categories, though, and so it is important to distinguish them.

Except, well, it’s not so important after all; can and may were tightly intertwined in early English, and were never fully separated.  The OED lists an obsolete usage [II.4a] of may as meaning “be able; can”.  This is first attested in Old English, and continues through to at least 1645.  Furthermore, may meaning “expressing objective possibility” [II.5] is attested from Old English to the present day (although it is noted as being rare now).  Examples of these are given in (1) and (2).  So we see that may does not always address the issue of permission, that may has encroached upon can‘s territory at times in the past and continues to do so to this day.

(1) No man may separate me from thee. [1582]
(2) Youth clubs may be found in all districts of the city. [1940]

As for can, I found no historical evidence of it referring to permission in the distant past.  Back then, may was apparently the dominant one, stealing usages from can.  The OED gives a first citation for can meaning “to be allowed to” in 1879, by Alfred, Lord Tennyson, and does call the usage colloquial, at least on the British side of the pond.  But still, we’ve got it attested 130 years ago by a former Poet Laureate of the UK.  That’s a pretty good lineage for the permission usage.

Furthermore, I think (at least in contemporary American English) that the may I usage is old-fashioned to the point of sounding stilted or even affected outside of highly formal contexts. Just to back up my intuition, here’s the Google Books N-grams chart comparing May I go and Can I go:

can-may

You can see there’s a changeover in the mid-1960s, when the usage of May I finishes plunging and Can I starts rocketing away. As you well know, this sort of fairly sudden change in relative frequency tends to generate a backlash against the newly-prominent form as a sign of linguistic apocalypse, so there’s no real surprise that people would loudly oppose permissive Can I. As always, the loud opposition to it is one of the surest signs that it’s passed a point of no return. By my youth, Can I was ensconced as the question of choice, and nowadays, I doubt many of our kids are being corrected on it — though it remains prominent enough in our zeitgeist to function as a set-up for a range of uninspired jokes.

So historically, what can we say of can and may and permission and ability? We’ve seen something of a historical switch. In the distant past, may could indicate either permission or ability, while can was restricted to ability. Over time, may‘s domain has receded, and can‘s has expanded. In modern usage, can has taken on permission senses as well as its existing ability senses. May, on the other hand, has become largely restricted to the permission sense, although there are some “possibility”-type usages that still touch on ability, especially when speaking of the future:

(3) We may see you at Breckenridge then.

The can expansion is a bit recent in historical terms, but that still means it’s been acceptable for over a hundred years — judging by the Tennyson citation — and commonplace for the last fifty or so. The recency explains the lingering resentment at permissive can, but it doesn’t justify it. Permissive can is here to stay, and there’s no reason to oppose it.**

*: Not to telegraph my argument, but even here I find Mom said I can count to sound more like a statement of permission than ability.

**: I have some thoughts on whether it’s really even possible to draw a clear line between permission and ability — in essence addressing the question of whether the smearing together of can and may is an accident or inevitability. I’ll try to put them together at some point & link to them, but given my history of failing to follow through with follow-up posts, I’m going to leave it as only a possibility, not a promise.

This blog was linked to a while ago in a Reddit discussion of uninterested and disinterested. (My opinion on them is that uninterested is restricted to the “unconcerned” meaning, while disinterested can mean either “unconcerned” or “impartial”, and that’s an opinion based on both historical and modern usage. In fact, despite the dire cries that people are causing the two words to smear together, it actually looks like the distinction between them is growing over time.)

The reason I bring this up again is that one of the Redditors was proposing that having a strong distinction could make sense, because:

“Some people draw a distinction between disinterested and uninterested. There is nothing to lose and perhaps subtlety to be gained by using that distinction yourself. Therefore observing the distinction should always be recommended.”

But I’ve already asked my question about this in the title: is there really nothing to lose? Is there no cost to maintaining a strict distinction between words? Or, more generally, is there no cost to maintaining a grammar rule?

Well, in a myopic sense, no, there’s nothing much to lose by having the rule. In the case of uninterested and disinterested, it would be hard to argue that not being able to use disinterested to mean “unconcerned” is a substantial loss. It can be done, though: I, for instance, am a great lover of alliteration, and as a result, I like to have synonyms with as many different initial letters as possible. There’s a cost, small though it may be, to not having disinterested available as I’m constructing sentences. But that’s a triviality.

A more substantial consequence is that it introduces a discontinuity in the historical record. If we decide that from now on disinterested only means “impartial”, then historical and current uses of the “unconcerned” sense will be opaque to people taught the hard-and-fast rule. That’s problematic because, despite the belief of some people that this is an illiterate usage, it’s actually common even among good writers. This, again, isn’t a big problem; we regularly understand misused words, especially ones whose intended meanings are very close to their actual meanings. Saying that we can’t have a rule of grammar because sometimes it isn’t followed is the sort of whateverism that people accuse descriptivists of, not a reasonable concern.*

No, the true cost is a higher-level cost: the overhead of having another distinction. This might also seem trivial. After all, we have tons and tons of usage rules and distinctions, and a lexical distinction like this is really little more than remembering a definition. But let me illustrate my point with an example I recently saw on Tumblr (sorry for the illegibility):

The distinction here is well-established: affect is almost always the verb, effect almost always the noun.** Yet here we see that it is costly to maintain the distinction. First, it’s costly to remember which homophone goes in which role. Second, it’s costly to make an error, as people may mock you for it. Third, it’s very easy to get it wrong, as the replier did here.

If there were really no downside to adding an additional rule, we’d expect to see every possibly useful distinction be made. We’d expect, for instance, to have a clear singular/plural second-person distinction in English (instead of just you). I’d expect to see an inclusive/exclusive first-person plural distinction as well, as I sometimes want to establish whether I’m saying we to include the person I’m speaking to or not. The subjunctive wouldn’t be disappearing, nor would whom.

But not all of these distinctions are made. In fact, relatively few of the possible distinctions we could make at the word level are made. And that suggests that even if the reasons I’ve listed for not maintaining a lot of distinctions aren’t valid, there must be something that keeps us from making all the distinctions we could make.

So next time someone says “there oughta be a rule”, think about why there isn’t. Rules aren’t free, and only the ones whose benefits outweigh their costs are going to be created and maintained. The costs and benefits change over time, and that’s part of why languages are forever changing.


*: Of course, if the distinction is regularly violated, then it’s hardly whateverist to say that it doesn’t exist.

**: Affect is a noun in psychology, effect a verb meaning “to cause” that is largely reviled by prescriptivists.


About The Blog

A lot of people make claims about what "good English" is. Much of what they say is flim-flam, and this blog aims to set the record straight. Its goal is to explain the motivations behind the real grammar of English and to debunk ill-founded claims about what is grammatical and what isn't. Somehow, this was enough to garner a favorable mention in the Wall Street Journal.

About Me

I'm Gabe Doyle, a graduate student/doctoral candidate in Linguistics at UC San Diego. I have a Bachelor's in math from Princeton and a Master's in linguistics from UCSD.

In my research, I look at how humans manage one of their greatest learning achievements: the acquisition of language. I build computational models of how people can learn language with cognitively-general processes and as few presuppositions as possible.

I focus on learning problems that have traditionally been viewed as difficult, such as combining multiple information sources or learning without negative data or ungrammatical examples. My dissertation models how children can use multiple cues to segment words from child-directed speech, and how phonological constraints can be inferred based on what children do and don't hear adults say.


