
If someone were to lend me a time machine and ask me to go back and figure out exactly what first set me down my road to dedicated descriptivism, I would first ask them if perhaps there wasn’t a better use for this marvelous contraption. But if they persisted, the coordinates I’d start with would be my elementary school days. I suspect it was some time around then that I first asked for permission to do something and was met with one of the archetypal prescriptions.

“Can I go to the bathroom?”, I surely must have asked, and just as surely a teacher must have answered, “I don’t know, can you?”

The irritation that I felt at this correction was so severe that even though I can’t remember when this happened, nor who did it to me, I still can call to mind the way it made me seethe. It was clear to me that the pedant was wrong, but I couldn’t figure out quite how to explain it. So, at the risk of sounding like I’m trying to settle a two-decade-old grudge, let’s look at whether it makes sense to correct this. I say that the answer is no — or at the very least, that one oughtn’t to correct it so snootily.

Let’s examine the “error” that the authority figure is correcting.  Can, we are told, addresses the ability to do something, whereas may addresses permission.  Mom said I can count to ten means that dear ol’ Mum believes in my ability to count to ten, although she may not want me to do so; Mom said I may count to ten means that Mum is allowing me to do so, although she need not believe that I am able to.*

At any given time, there are a lot of things that one is capable of doing (can do) and a lot of things that one is permitted to do (may do), and a few things that fall into both categories.  The prescriptivist idea is that there is a fairly clear distinction between the two categories, though, and so it is important to distinguish them.

Except, well, it's not so important after all; can and may were tightly intertwined in early English, and were never fully separated. The OED lists an obsolete usage [II.4a] of may as meaning "be able; can". This is first attested in Old English, and continues through to at least 1645. Furthermore, may meaning "expressing objective possibility" [II.5] is attested from Old English to the present day (although it is noted as being rare now). Examples of these are given in (1) and (2). So we see that may does not always address the issue of permission, and that may has encroached upon can's territory at times in the past and continues to do so to this day.

(1) No man may separate me from thee. [1582]
(2) Youth clubs may be found in all districts of the city. [1940]

As for can, I found no historical evidence of it referring to permission in the distant past. Back then, may was apparently the dominant one, stealing usages from can. The OED gives a first citation for can meaning "to be allowed to" in 1879, by Alfred, Lord Tennyson, and does call the usage colloquial, at least on the British side of the pond. But still, we've got it attested 130 years ago by a former Poet Laureate of the UK. That's a pretty good lineage for the permission usage.

Furthermore, I think (at least in contemporary American English) that the may I usage is old-fashioned to the point of sounding stilted or even affected outside of highly formal contexts. Just to back up my intuition, here’s the Google Books N-grams chart comparing May I go and Can I go:

[Google Books N-grams chart comparing "May I go" and "Can I go"]

You can see there's a changeover in the mid-1960s, when the usage level of May I finishes plunging and that of Can I starts rocketing away. As you well know, this sort of fairly sudden change in relative frequency tends to generate a backlash against the newly-prominent form as a sign of linguistic apocalypse, so there's no real surprise that people would loudly oppose permissive Can I. As always, the loud opposition to it is one of the surest signs that it's passed a point of no return. By my youth, Can I was ensconced as the question of choice, and nowadays, I doubt many of our kids are being corrected on it — though it remains prominent enough in our zeitgeist to function as a set-up for a range of uninspired jokes.
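
If you'd like to poke at the numbers yourself, here's a minimal sketch of how you might pull the same comparison in Python. It leans on the Ngram Viewer's unofficial JSON endpoint, so the URL, parameters, and corpus id are all assumptions on my part that could break whenever Google changes things:

    import requests

    # The Ngram Viewer's JSON endpoint is unofficial and undocumented;
    # the corpus id below (26 = English 2019, I believe) is an assumption.
    resp = requests.get(
        "https://books.google.com/ngrams/json",
        params={"content": "May I go,Can I go", "year_start": 1900,
                "year_end": 2008, "corpus": 26, "smoothing": 3},
        timeout=30,
    )
    resp.raise_for_status()
    series = {s["ngram"]: s["timeseries"] for s in resp.json()}

    # Walk the years to find roughly where "Can I go" overtakes "May I go".
    pairs = zip(series["May I go"], series["Can I go"])
    for offset, (may, can) in enumerate(pairs):
        if can > may:
            print("Changeover around", 1900 + offset)
            break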

So historically, what can we say of can and may and permission and ability? We've seen something of a historical switch. In the distant past, may could indicate either permission or ability, while can was restricted to ability. Over time, may's domain has receded, and can's has expanded. In modern usage, can has taken on permission senses as well as its existing ability senses. May, on the other hand, has become largely restricted to the permission sense, although there are some "possibility"-type usages that still touch on ability, especially when speaking of the future:

(3) We may see you at Breckenridge then.

The can expansion is a bit recent in historical terms, but that still means it’s been acceptable for over a hundred years — judging by the Tennyson citation — and commonplace for the last fifty or so. The recency explains the lingering resentment at permissive can, but it doesn’t justify it. Permissive can is here to stay, and there’s no reason to oppose it.**

*: Not to telegraph my argument, but even here I find Mom said I can count to sound more like a statement of permission than ability.

**: I have some thoughts on whether it's really even possible to draw a clear line between permission and ability — in essence addressing the question of whether the smearing together of can and may is an accident or an inevitability. I'll try to put them together at some point & link to them, but given my history of failing to follow through with follow-up posts, I'm going to leave it as only a possibility, not a promise.


This blog was linked to a while ago in a Reddit discussion of uninterested and disinterested. (My opinion on them is that uninterested is restricted to the “unconcerned” meaning, while disinterested can mean either “unconcerned” or “impartial”, and that’s an opinion based on both historical and modern usage. In fact, despite the dire cries that people are causing the two words to smear together, it actually looks like the distinction between them is growing over time.)

The reason I bring this up again is that one of the Redditors was proposing that having a strong distinction could make sense, because:

“Some people draw a distinction between disinterested and uninterested. There is nothing to lose and perhaps subtlety to be gained by using that distinction yourself. Therefore observing the distinction should always be recommended.”

But I’ve already asked my question about this in the title: is there really nothing to lose? Is there no cost to maintaining a strict distinction between words? Or, more generally, is there no cost to maintaining a grammar rule?

Well, in a myopic sense, no, there’s nothing much to lose by having the rule. In the case of uninterested and disinterested, it would be hard to argue that not being able to use disinterested to mean “unconcerned” is a substantial loss. It can be done, though: I, for instance, am a great lover of alliteration, and as a result, I like to have synonyms with as many different initial letters as possible. There’s a cost, small though it may be, to not having disinterested available as I’m constructing sentences. But that’s a triviality.

A more substantial consequence is that it introduces a discontinuity in the historical record. If we decide that from now on disinterested only means "impartial", then historical and current uses of the "unconcerned" sense will be opaque to people taught the hard-and-fast rule. That's problematic because, despite the belief of some people that this is an illiterate usage, it's actually common even among good writers. This, again, isn't a big problem; we regularly understand misused words, especially ones whose intended meanings are very close to their actual meanings. Saying that we can't have a rule of grammar because sometimes it isn't followed is the sort of whateverism that people accuse descriptivists of, not a reasonable concern.*

No, the true cost is a higher-level cost: the overhead of having another distinction. This might also seem trivial. After all, we have tons and tons of usage rules and distinctions, and a lexical distinction like this is really little more than remembering a definition. But let me illustrate my point with an example I recently saw on Tumblr (sorry for the illegibility):

[Tumblr screenshot: a reply "correcting" someone's choice between affect and effect, and getting it wrong]

The distinction here is well-established: affect is almost always the verb, effect almost always the noun.** Yet here we see that it is costly to maintain the distinction. First, it’s costly to remember which homophone goes in which role. Second, it’s costly to make an error, as people may mock you for it. Third, it’s very easy to get it wrong, as the replier did here.

If there were really no downside to adding an additional rule, we’d expect to see every possibly useful distinction be made. We’d expect, for instance, to have a clear singular/plural second-person distinction in English (instead of just you). I’d expect to see an inclusive/exclusive first-person plural distinction as well, as I sometimes want to establish whether I’m saying we to include the person I’m speaking to or not. The subjunctive wouldn’t be disappearing, nor would whom.

But not all distinctions are made. In fact, relatively few of the possible distinctions we could make at the word level are made. And that suggests that even if the reasons I've listed for not maintaining a lot of distinctions aren't valid, there must be something that keeps us from making all the distinctions we could make.

So next time someone says “there oughta be a rule”, think about why there isn’t. Rules aren’t free, and only the ones whose benefits outweigh their costs are going to be created and maintained. The costs and benefits change over time, and that’s part of why languages are forever changing.


*: Of course, if the distinction is regularly violated, then it’s hardly whateverist to say that it doesn’t exist.

**: Affect is a noun in psychology, effect a verb meaning “to cause” that is largely reviled by prescriptivists.

I’ve mentioned my fondness for compiling historical grammatical errors as a reminder that we are not, point of fact, destroying what used to be a perfect language. Previously, I’d found unnecessary quotation marks in a 1960 World Series celebration, it’s for its in a 1984 John Mellencamp video, and an apostrophe incorrectly marking a plural in a famous 1856 editorial cartoon. But these were all punctuation-based errors. Today’s is a proper grammatical error, and one that people full-throatedly bemoan nowadays.

I found this error by admitting to myself that I am secretly an old man, and coming to terms with it by spending much of the summer sitting in parks, reading books on naval history and international relations. One of them, Nathaniel Philbrick's Sea of Glory, tells the story of the U.S. Exploring Expedition, which discovered Antarctica and created the country's first accurate naval charts for the Pacific islands. It's a good book, but it turned great by having two interesting old quotes four pages apart.

In the first, the Expedition is approaching Fiji and takes on another pilot due to the many coral reefs in the area:

“Wilkes felt it necessary to secure yet another experienced pilot at Tonga named Tom Granby. ‘You will find when we get to the Islands,’ Wilkes assured Granby, ‘that I know as much about them as you do.’ Granby smiled. ‘You may know all about them on paper,’ he replied, ‘but when you come to the goings in and goings out, you will see who knows best, you or myself.'”

Myself here is clearly non-standard, as no first-person pronoun has appeared anywhere in the sentence. The standard rule for reflexives, known as Principle A in Government and Binding theory, and discussed in pretty much every introductory syntax class, is that a reflexive must be bound in its governing category. Or, to say it in a more theory-agnostic and somewhat looser way, the coreferent of the reflexive (I/me for myself) has to appear within the smallest clause that contains the reflexive, and structurally "above" the reflexive. The syntactic specifics depend on which syntactic theory you're adhering to, but luckily they don't really matter here; there's no possible coreferent anywhere within the sentence, so any standard definition of Principle A will label the sentence ungrammatical.
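
If you want that looser statement in concrete form, here's a toy sketch of my own. It's a drastic simplification: linear order within the clause stands in for structural "aboveness", and agreement, c-command, and governing categories are all swept under the rug. But it's enough to show why Granby's sentence fails the standard rule:

    # Toy Principle A: a reflexive is licensed only if a suitable
    # antecedent appears earlier in its clause. Linear order stands in
    # for structural "aboveness" here, which is a big cheat.
    ANTECEDENTS = {
        "myself": {"i", "me"},
        "yourself": {"you"},
        "ourselves": {"we", "us"},
    }

    def licensed(clause: str, reflexive: str) -> bool:
        words = [w.strip(",.").lower() for w in clause.split()]
        return any(w in ANTECEDENTS[reflexive]
                   for w in words[:words.index(reflexive)])

    # No I/me anywhere in the clause, so the reflexive is unlicensed:
    print(licensed("you will see who knows best, you or myself", "myself"))  # False
    # An ordinary bound reflexive for comparison:
    print(licensed("I saw myself", "myself"))  # True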

Turning from this syntactic jungle to the Fijian jungle, a few pages later the Expedition lands on an island and hikes to its peak:

“Almost two years at sea had left them ill-prepared for such a demanding hike. ‘I have seldom witnessed a party so helpless as ourselves appeared,’ Wilkes wrote, ‘in comparison with the natives and white residents, who ran over the rocks like goats.'”

Again, it’s obvious that this is a non-standard usage, since no first-person plural noun phrase appears in the sentence to justify the reflexive.

Now, I’ve been marking these as non-standard rather than incorrect, and there’s a reason for this that is more than a desire to be non-judgmental. These supposedly erroneous uses of reflexives are widespread — so much so that I’d argue they’re at least borderline acceptable in many people’s forms of Informal Spoken English. That means that they ought to be explainable, that there ought to be some option in the rules of English that allow you to consider these uses acceptable without having to change much else in the language. I’m going to speculate for the rest of this post, so feel free to bail out here.

But before you bail, let me just brag about where I get to read.

Here’s my idea, which I don’t think is novel.* Reflexives are allowed only when, in some sense, there’s a sufficiently salient coreferent for the reflexive. Salience is standardly assessed syntactically, meaning that a coreferent appears structurally above the reflexive, and close enough to remain salient when the reflexive appears. But there is pragmatic salience as well, for people and things who haven’t been explicitly mentioned but remain prominent in the discourse all the same. And what is more pragmatically salient than the speaker? In both of these cases, it seems that the speaker is thinking of themselves as sufficiently salient to trigger the reflexive.

My intuition is that there are more instances of inappropriate reflexives for first person (myself, ourselves) than second person (yourself), and more of either than for third person (himself, herself, itself, themselves). I did a quick corpus search on COCA for sentence-initial As for *self, and the intuition wasn't fully borne out; as for myself was the most common, but combined as for him/herself showed up almost as often (64 to 60), and as for yourself only registered one instance. So maybe I'm totally off-base on the specifics.** But something is going on that allows so many people to view reflexives as standard in positions that we don't expect to see them, and like it or not, that needs explaining.
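
For the curious, the count itself is nothing fancy. Here's a rough sketch of the same search over any plain-text corpus you happen to have on hand; COCA itself is licensed, so "corpus.txt" below is just a placeholder filename, not a real resource:

    import re
    from collections import Counter

    # Count sentence-initial "As for Xself/Xselves" in a plain-text file.
    # "corpus.txt" is a placeholder; COCA isn't freely redistributable.
    pattern = re.compile(r"(?:^|[.?!]\s+)As for (\w+sel(?:f|ves))\b", re.M)

    with open("corpus.txt", encoding="utf-8") as f:
        counts = Counter(m.group(1).lower() for m in pattern.finditer(f.read()))

    for form, n in counts.most_common():
        print(form, n)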


*: If you know of any references to discussions about this issue, please share. I’m not primarily a syntactician, and didn’t see anything in a cursory search of the literature, but I really doubt this discussion hasn’t been had before.

**: I think the as for *self construction may be a special case. Most of the third-person uses look to be about how some third party views themself, and while one can state one’s own introspections and speculate about a third party’s, it’s a little bit weird to tell someone their own introspections. That could artificially deflate the second-person counts.

I think the best explanation of this construction may be as an indicator that we are switching mental spaces, if you’re familiar with that theory. Saying as for Xself establishes a new mental space focused on X and their inner workings or opinions, rather than the more generic mental space of the rest of the conversation. Sorry, I’m really going down a rabbit hole here.

You know I hate it when people mock English-as-a-second-language speakers for their grammatical missteps. If your sense of humor is so unrefined as to find ESL speakers’ errors jestworthy, I think you’re a boor. Internet society doesn’t think the same, but then again, Internet society also thinks it’s acceptable to shout “FIRST!” in a comment thread and that being racist when you know better is somehow subversive.

So I hope you won’t think me hypocritical for mocking someone whose knowledge of English is clearly lacking. There’s a key difference, though, in that English is this person’s native language. On an old post talking about one of the only, I recently got this comment:

“‘One of the only’ is poor grammar because ‘one of’ implies plural and ‘the only’ implies one. ‘One of the one’ doesn’t do much for logic.”

No.

If you have gone a sizable portion of your life speaking and hearing English (which I assume one has to have to be bloviating on what’s poor grammar) and you think that only implies one, then you do not know English. And yet, this is a common misconception:

“How can something be ‘one of the only’ when ‘only’ means ‘one?'”

“‘One of the only’ – could this be correct usage? ‘Only’ means ‘alone, solely.'”

"Only refers to one or sole and has no meaning."

Guys, I don’t know where you think you’ve gotten the authority to lecture people on English, but if you can’t understand the meaning of only, you do not have that authority.* Sure, in some situations, only refers to a single item, as in:

(1a) This is my only stick of gum. Do not eat it.

But only really means “this and no more”, where “this” can be singular or plural or mass. I could just as readily say:

(1b) These are my only sticks of gum. Do not eat them.

You absolutely cannot be fluent in English and not have been exposed to perfectly acceptable usages of plural only. Google Books N-grams shows that over the past 200 years of published works, one in every 100,000 pairs of words is "only two". Including "only three/four/five" gets us up to 1 in 50,000. Given that a person hears around that many (50,000) words each day, and that there are many other uses of plural only, it's a conservative estimate to say that a fluent English speaker is exposed to plural only at least once a day.
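
Here's that back-of-envelope arithmetic spelled out, with both figures being rough assumptions rather than measurements:

    # Rough arithmetic behind the "once a day" estimate above.
    words_heard_per_day = 50_000              # ballpark daily exposure (assumption)
    pairs_per_day = words_heard_per_day - 1   # consecutive word pairs heard
    plural_only_rate = 1 / 50_000             # "only two/three/four/five" pairs
    print(pairs_per_day * plural_only_rate)   # ~1 exposure per day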

Non-singular only isn’t questionable, it isn’t obscure, it isn’t rare, it isn’t debatable. Only does not mean or imply or refer to “one” in general. If you think it does, you are not sufficiently informed to correct anyone’s usage.


*: Which is weird, because even some authors who are well-regarded by the literary set (though not by linguists) claim this. Richard Lederer & Richard Dowis's book "Sleeping Dogs Don't Lay" contains the absurd assertion that one of the only is both oxymoronic and new. Neither is true, not even a little, and yet Lederer is the author of a newspaper column as well as tens of books on English.
