
People pop in fairly regularly to complain about “one of the only”, which I’m just really not that interested in. Usually the complaints are in response to my argument a few years ago that it was perfectly grammatical and interpretable (specifically rebutting Richard Lederer’s silly claim that only is equivalent to one and therefore is inappropriate for referring to multiple items). I haven’t gotten as many only=one complaints lately, but I’ve now received a new objection, presented as part of a comment by Derek Schmidt:

When [only] precedes a noun used in plural, it implies that there are no other similar items that belong to the list. “The only kinds of writing utensils on my desk are pencils and pens and highlighters.” […] But I have many of those pens, so if someone asked if they could borrow a pen, and I said, “No, that’s one of the only writing utensils on my desk!” that would be a little disingenuous and if someone was standing at my desk and saw the number of writing utensils, they would be baffled and think me a fool. Rightly so. Because they would understand it (logically, even) as meaning “that’s one of the few”, which is very false. So… “one of the only” means about as much as “one of them”.

To buttress his point, he referred me to a grammar column in the Oklahoman, which I never grow tired of noting was once called the “Worst Newspaper in America” by the Columbia Journalism Review. That was 14 years ago now, and I sometimes wonder if it is fair to keep bringing this up. Then I read Gene Owens’s grammar column in it and I wish the CJR had been harsher.*

About one example of “one of the only”, Owens writes:

“Now I can understand if he were the only English speaker or if he were only one of a few English speakers,” Jerry said, “but I don’t know how he could be one of the only English speakers.” That’s easy, Jerry. If he was any English speaker at all, he was one of the only English speakers in the area. In fact, he was one of the only English speakers in the world. […] The TV commentator probably meant “one of the few English speakers in the area.” But even if the colonel was “one of the many English speakers in the area,” he still was one of the only ones.

It continues in this vein for a while, but his point seems to be approximately the same as Schmidt’s, boiling down to the following statements:

  • It is grammatical to say “one of the only”.
  • It is used regularly in place of “one of the few”.
  • Examining it literally, one could say “one of the only” to describe something that there are many of.
  • This would be a strange situation to use it in.
  • Therefore “one of the only” oughtn’t be used even in the case where it wouldn’t be strange.

Up till the last sentence, I agree. In fact, I don’t think any of those points are controversial.** But the last sentence is a big leap, and one that we demonstrably don’t make in language. Would it be silly of me to say:

(1) I have three hairs on my head.

Thankfully I’m still young and hirsute enough to have many more than three hairs on my head, and I think we’d all agree it would be a silly statement. But, parsing it literally, it is true: I do have three hairs on my head, though in addition I have another hundred thousand. In case this is such a weird setting that you don’t agree it’s literally true, here’s another example:

(2) Some of the tomatoes I purchased are red.

If I show you the bin of cherry tomatoes I just bought, and they’re all red, am I lying? No, not literally. But I am being pragmatically inappropriate — you expect “some” to mean “some but not all”, just as you expect “three” to generally mean “three and no more”. These are examples of what’s known as a scalar implicature: we expect people to use the most restrictive form available (given their knowledge of the world), even though less restrictive forms may be consistent too.***
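This “most restrictive form available” reasoning has since been formalized in computational pragmatics. Here’s a toy sketch, in Python, of a Rational Speech Acts (RSA)-style model applied to the tomato example; the state names, utterance set, and uniform priors are my own illustrative assumptions, not anything from the post:

```python
# Toy Rational Speech Acts (RSA)-style model of the scalar implicature
# "some" -> "some but not all". Everything here (states, utterances,
# uniform priors) is an illustrative assumption, not the post's analysis.

states = ["some-not-all", "all"]   # how many of the tomatoes are red
utterances = ["some", "all"]       # what the speaker might say

def literal(u, s):
    """Literal semantics: 'some' is true in both states; 'all' only in 'all'."""
    return True if u == "some" else s == "all"

def normalize(d):
    total = sum(d.values())
    return {k: v / total for k, v in d.items()}

def L0(u):
    """Literal listener: uniform over the states where the utterance is true."""
    return normalize({s: 1.0 if literal(u, s) else 0.0 for s in states})

def S1(s):
    """Speaker: prefers utterances that point the literal listener at state s."""
    return normalize({u: L0(u)[s] for u in utterances})

def L1(u):
    """Pragmatic listener: reasons about why the speaker chose utterance u."""
    return normalize({s: S1(s)[u] for s in states})

print(L1("some"))  # most of the probability lands on "some-not-all"
```

Hearing “some”, the pragmatic listener reasons that a speaker in the “all” state would likely have said “all”, and so shifts probability toward “some but not all” — exactly the expectation that speakers use the most restrictive form available.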

To return to Schmidt’s example, it may be truthful but absurd to protest that one of 30 pens on my desk is “one of my only pens”. The truth value is the same when I protest that one of two pens on my desk is “one of my only pens”, but the pragmatic appropriateness is not. Upon hearing “one of the only”, the listener knows, having never really heard it used to mean “one of many”, that pragmatically it will mean “one of the (relatively) few”.

There is, perhaps, nothing in the semantics to block its other meanings, but no one ever uses it as such, just as no one ever says they have three hairs when they have thousands. This is a strong constraint on the construction, one that people on both sides of the argument can agree on. I guess the difference is whether you view this usage restriction as evidence of people’s implicit linguistic knowledge (as I do) or as evidence of people failing to understand their native language (as Schmidt & Owens do).

Finally, and now I’m really splitting hairs, I’m not convinced that “one of the only” can always be replaced by “one of the few”, as the literalists suggest. If we’re being very literal, at what point do we have to switch off of few? I wouldn’t have a problem with saying “one of the only places where you can buy Cherikee Red”, even if there are hundreds of such stores, because relative to the number of stores that don’t sell it, they’re few. But saying “one of the few” when there’s hundreds? It doesn’t bother me, but I’d think it’d be worse to a literalist than using “one of the only”, whose only problem is that it is too true.

Summary: If a sentence could theoretically be used to describe a situation but is never used to describe such a situation, that doesn’t mean that the sentence is inappropriate or ungrammatical. It means that people have strong pragmatic constraints blocking the usage, exactly the sort of thing that we need to be aware of in a complete understanding of a language.

*: I am being unfair. Owens’s column is at least imaginative, and has an entire town mythos built up over the course of his very short columns. But I never understand what grammatical point he’s trying to make in them, and as far as I can tell, I’d disagree with it if I did. As for the “worst newspaper” claim, this was largely a result of the ownership of the paper by the Gaylord family, who thankfully sold it in 2011, though the CJR notes it’s still not great.

**: Well, it might be pragmatically appropriate to use “one of the few” in cases where the number of objects is large in absolute number but small relative to the total, such as speaking about a subset of rocks on the beach or something.  I’m not finding a clear example of this, but I don’t want to rule it out.

***: Scalar implicatures were first brought to my attention when one of my fellow grad students (now a post-doc at Yale), Kate Davidson, was investigating them in American Sign Language. Here’s an (I hope fairly accessible and interesting) example of her research in ASL scalar implicature.

I was reading through a brief response by Erin Brenner to Bill Walsh’s contention that the try and X construction should be opposed. (You know, like “I’ll try and write a new post sometime soon.”) Walsh’s basic point:

“‘I have to enforce this peeve,’ [Walsh] said. ‘You try to do something. To try and do something is to (a) try to do it, and (b) do it, which is not the intended meaning of the phrase.’”

And Brenner’s:

“The problem I have with Walsh’s reasoning is that try and is an idiom. There’s no point in trying to make sense of an idiom’s grammar; an idiom has its own unique (‘peculiar,’ says the American Heritage Dictionary) grammar. It doesn’t have to make literal sense.”

I agree with Brenner here. Sure, try and X doesn’t seem to make much sense.* But it doesn’t matter if it makes sense; if we’re trying to study language, we don’t get to say “I don’t understand this data” and throw it away. We’re stuck with the fact that people say and write try and X (the OED even offers an example from Paradise Regained, and Google Books has one from 1603) and it feels natural to most people.

When Walsh says that “To try and do something is to (a) try to do it, and (b) do it”, it’s clear what he’s getting at, but he’s wrong because that’s not what it means. What it means is what people use it to mean, and people overwhelmingly use it to mean (approximately) “try to do something”. That’s how language works; if everyone thinks a construction means X, then it means X.

It’s a similar problem with could care less, which people exasperatedly complain should be couldn’t care less. Of course it “should”. But everyone understands could care less to mean what it’s used to mean (with the possible exception of non-native English speakers and obstinate native speakers). And whenever most everyone agrees on what something means, whether it be a word or a phrase or an idiom, that’s right, no matter how illogical it seems.

That might sound weird. But if we’re going to treat language as something to be studied, as a science, then that ties our hands a bit. Quantum mechanics is a hot mess, and it sure would have been easier if it were Newtonian physics all the way down. But physicists don’t get to say, “nah, that’s crazy, let’s just keep using Newtonian models.”** Taxonomists don’t get to say “nope, platypuses are too strange, we just won’t classify them.” And so on.


Thankfully, taxonomists don’t have to classify Psyduck.

You can have an unassailable argument for why we shouldn’t be able to get the meaning out of a word or phrase or construction, but if everyone understands it, your argument is wrong. This is an essential fact of language. There are rules in language, but if the language itself breaks them, then it’s a shortcoming of the rule, not of the language.

So what can we say about try and? We can try to put together an explanation for how the unexpected meaning arose, looking at possible ancestries for it, possible analogical routes that might have spurred it. We can classify where and when and how it’s used (it’s generally informal, for instance). But when it comes time to figure out how it makes sense, it could well end up that all we can do is throw up our hands, call it an idiom, and move on. After all, what’s really interesting about language (at least for linguists) is the higher-level stuff like phonology or syntax or computational psycholinguistics; idioms are just the charming baubles that catch our eyes.

Of course, none of this means that one can’t be against an idiom — only that its supposed illogic is one of the weakest reasons to oppose it. I don’t have a problem with Walsh correcting try and in situations where it’s inappropriate or likely to cause confusion (e.g., formal writing or writing directed at an international audience). I do the same with non-literal literally, not because it’s confusing or incomprehensible or uneducated or new — it’s not — but because it feels cheap and hyperbolic to me, especially when used regularly. But these are stylistic choices, not grammatical ones. They aren’t returning logic to language.

*: It makes a little bit more sense when you think of the construction as an analogue of come and X or go and X, and realize that and in this situation is indicating dependence between the attempt and the action rather than simultaneity. The seeming noncompositionality of the construction may be in part due to language change, as the MWDEU notes that various related constructions (e.g., begin and) were common in the past, and thus when try and emerged, the dependent sense of and may have been more productive. In fact, the MWDEU hypothesizes that try and may predate try to.

**: Of course, they can do this when they’re staying at macroscopic scales, where quantum effects are undetectable — and thank God for that, or I’d’ve never survived college physics classes.

I hate when someone starts a monologue by needlessly invoking a dictionary definition for some word. Few openings can ruin a graduation speech faster than “Webster’s defines ‘scholarship’ as …”. (Even the Yahoo! Answers community knows this.) For most common words, the dictionary definition is just a simplified, neutered form of the rich definition that native speakers have in their heads. There’s no need to tell me less about a word than I already know.

Unfortunately, I simply can’t come up with another way to start today’s post. I recently ran across this analysis of can’t help but, an idiom that (if you can believe it) the author finds illogical:

“Try to avoid the can’t help but construction. While it has been around for a while, most grammarians agree that it’s not the most logical construction. It’s considered to be a confused mix of the expressions can but and can’t help.”

Before we try to “logically” analyze idioms, let’s reflect for a moment on what an idiom is. Here it comes — The Oxford English Dictionary defines an idiom (in its third noun sense) as:

“A form of expression, grammatical construction, phrase, etc., used in a distinctive way in a particular language, dialect, or language variety; spec. a group of words established by usage as having a meaning not deducible from the meanings of the individual words.”

I’ve bolded that last bit because that’s the key point: an idiom is an idiom when its meaning is well-known among users of the language but does not come from strict interpretations of the words themselves. If you say someone has idiomatically kicked the bucket, there’s no bucket, there’s no kicking motion, and it actually means they died. Logical analysis of kick the bucket won’t get you anywhere near the actual meaning.

With that in mind, let’s look at can’t help but. Surely, most fluent English speakers — including those who disparage it as “illogical” — know what it means. If that meaning can be deduced from the words and syntax of the construction, then hooray, it’s fine, because it’s grammatical. If that meaning cannot be deduced from the words and syntax of the construction, then hooray, it’s still fine, because it fits exactly the definition of an idiom. It doesn’t matter if the meaning is deducible or “logical”, whatever that means. (For some thoughts on why I put “logical” in quotation marks when talking of grammatical logic, see Emily Morgan’s post on the logic of language.)

You might think that I’ve done some rhetorical sleight-of-hand in the last paragraph by saying that can’t help but either makes sense or is an idiom. What if it isn’t an idiom, but just an illogical corruption of can help but? I’ve got two thoughts on that.

The first is a simple matter of history. The OED records the use of can’t help but starting in 1894, but I’m finding it in Google Books further back than that. Here are examples from 1852 [Uncle Tom’s Cabin], 1834, and 1823. Similar investigation antedates can help but to around the same time, with examples from 1842 and 1834. There’s no clear evidence that one form predates the other, so there’s no evidence that cannot help but is a corruption of the correct form.

The second point is that the supposedly logical alternatives can help but and can’t help make no more sense than cannot help but. I don’t understand the above claim that can’t help but is “not the most logical construction”. Maybe it isn’t; I’ll grant that it’s not as immediately interpretable as “I am walking” or something. But if can’t help but isn’t logical, why are the alternatives can help but and can’t help logical? What meaning is there for help that makes can’t help eating the cake mean “can’t stop myself from eating”? Whatever it is, it’s strictly idiomatic; you couldn’t, for example, write “I am helping eat the cake” with the meaning “I’m stopping myself from eating the cake”. In fact, it means exactly the opposite!*

For confirmation, I checked in the OED, and this meaning occurs only in these idioms. So can help but and can’t help aren’t “logical” either; they’re the result of people applying idiomatic knowledge to the interpretation of the construction. As soon as you expect help to mean something other than its standard aid-related usages, you’re going idiomatic, and logic pretty much goes out the window.

This is a long way of arguing that can help but and can’t help but are both grammatically reasonable. Shouldn’t we decide on one form over the other? Well, no. I know that prescriptivists love doing that, but it’s not the way language really works. The fact of the matter is that both are common, and in the opinion of the Merriam-Webster Dictionary of English Usage, both are standard.

But if that still won’t placate you, if you simply must be told which one is better, the perhaps surprising answer is that it’s the “illogical” can’t help but. The Corpus of Historical American English has 243 examples of can not help but to a mere 6 of can help but, and Google N-grams shows cannot help but dominating since 1840. (And personally, can help but doesn’t exist in my idiolect.) If you want to write the more common form, go with can’t help but. If can help but seems better to you, go with that.

Summary: Can’t help but is a perfectly standard idiom, meaning “can’t stop myself from”. It’s also the more common choice, historically and contemporarily, over can help but, even though both options are grammatical and standard in English. (Can’t help Xing is fine too, of course.)

*: Furthermore, doesn’t can’t help Xing have the potential to be even more confusing than can’t help but? If I say “I can’t help putting together your bike today”, am I saying that I can’t do it or I can’t stop myself from doing it?

One of the major problems I have with hard-line prescriptivists is that they follow their convictions to the point of absurdity, arguing that something completely standard ought to be changed because it doesn’t conform to a rule they’ve decided is inviolable. Today’s example is aren’t I.

Yes, I has a problem. Well, it’s not so much a problem with I, but with its companion am. Unlike the other conjugated forms of to be, am doesn’t form a contraction with not. Are and is are flexible, contracting equally readily with a pronoun (we’re) or the negation (isn’t). But am apparently fancies itself too good to consort with a debased negation. And so we find a hole in the English language, a word that should exist but doesn’t: amn’t.

Unlike am, English as a whole is flexible, and so another word (aren’t) pulls overtime and fills the hole. And this earns the ire of the accountants of the English language, who fume and fuss that this isn’t in the job description of aren’t. Didn’t they negotiate an agreement between subjects and verbs that aren’t can work with you and we and they and other plural subjects, but not with I?

So there is a hole in English, and there is a word that fills it. But filling the hole requires breaking a common rule in English. What do you do? If you are like pretty much every speaker of English, you break that rule. But there are those who put rules above reasonability and consider aren’t I bad grammar. Let’s look into the matter.

History. Aren’t is first attested in the Oxford English Dictionary in 1794. Google Books offers examples from 1726 and 1740. All of these are instances with you or they as the subject. As for aren’t I:

(1a) Aren’t I rich? You know I am! Aren’t I handsome? Look at me. [1878]
(1b) “I’ve got threepence,” she said, “Aren’t I lucky?” [1876]
(1c) “Aren’t I?” seems to be thought the correct thing; but why should we say “Aren’t I” any more than “I are not”? [1872]

Aren’t I appears in Google Books by the 1870s, and writing is conservative with respect to spoken usage, so aren’t I likely appeared in speech much earlier. In the earliest attestation — (1c) from 1872 — aren’t I was already perceived as standard. No one still alive today spoke pre-aren’t-I English. So if it’s been standard for 130 years, why wouldn’t it be fine still? Here are some possible (but misguided) objections to it.

Logic? The primary objection to aren’t I is that it has subject-verb disagreement. You wouldn’t say I aren’t, so you can’t say aren’t I. The first part of that is correct, but the second doesn’t follow. After all, if I aren’t being incorrect blocks aren’t I, why doesn’t are not you being incorrect block aren’t you?

You can’t apply simple logic to language and expect there to be no exceptions. Emily Morgan has noted before that the logic of language is far more complex than prescriptivists make it out to be.

Informality? One site claims that aren’t I is unacceptable in formal writing. But that’s the case for all contractions, not just aren’t I, because they’re informal transcriptions of speech. The fact that aren’t I doesn’t appear in formal writing is no more a condemnation of it than the fact that aren’t you doesn’t appear in formal writing. (And, by the way, both do appear in formal writing.)

Alternatives. Now, let’s say you’re unconvinced that we should leave well enough alone, and you really want to fix aren’t I. How are you going to do it? Look at the prominent alternatives that are available for aren’t I: am I not, amn’t I, ain’t I. Am I not is fine if you’re being poetic or intensely formal or need to stress the negation, but in most cases, it’s going to sound completely unnatural and overly stuffy. Amn’t I is perfectly fine if you are Irish or Scottish, where it persists as a standard form, but it’s exceedingly rare outside of those Englishes, and you’ll look affected if you use it in another dialect. Furthermore, it’s hard to pronounce the neighboring m and n distinctly, so people may think you’re using ain’t I instead. Ain’t I, of course, used to be a standard form, and Fowler himself fought in its favor, but nowadays is one of the most condemned words in the English language, one that will make even most moderate prescriptivists write you off as ill-bred.

The fact of the matter is that there is no other option that is acceptable in most English dialects and at an appropriate formality level. This is why aren’t I has taken hold.

Suppletion & Syncretism. I want to conclude with two final reasons why aren’t I shouldn’t concern you: suppletion & syncretism. Suppletion is a specific type of irregularity, where one irregular form fills in for (or overtakes) the regular form. Usually, suppletion refers to a case where the irregular form comes from an unrelated paradigm: e.g., better instead of gooder in English, or mejor instead of más bueno in Spanish. No one complains that better is wrong because gooder follows the rules better. With aren’t I, the suppletive form is only from a different part of the paradigm, not a whole different paradigm, but the basic idea is the same. There is a seemingly regular rule (add n’t to the conjugated verb) that in one instance is ignored in favor of an irregular form. If you want aren’t I done away with, you ought to want to see better consigned to the scrap heap as well.

Furthermore, it’s only suppletion from a contemporary perspective. Actually, we’re dealing with syncretism, where two distinct syntactic forms happen to look identical. David Crystal has a very nice explanation of the history behind aren’t I, which came from people mistaking an’t for aren’t in non-rhotic (“silent-r”) dialects. Genealogically, the aren’t in aren’t I and the aren’t in aren’t you aren’t the same. Which means that, technically speaking, aren’t I isn’t an example of subject-verb disagreement; it’s a case of mistaken identity of one aren’t for another.

Summary: No, aren’t I isn’t incorrect. It’s been in use for at least 130 years, the alternatives are all insufficient, and the “logical” arguments against it are fallacious. It’s no more incorrect than using better instead of gooder.


About The Blog

A lot of people make claims about what "good English" is. Much of what they say is flim-flam, and this blog aims to set the record straight. Its goal is to explain the motivations behind the real grammar of English and to debunk ill-founded claims about what is grammatical and what isn't. Somehow, this was enough to garner a favorable mention in the Wall Street Journal.

About Me

I'm Gabe Doyle, currently a postdoctoral scholar in the Language and Cognition Lab at Stanford University. Before that, I got a doctorate in linguistics from UC San Diego and a bachelor's in math from Princeton.

In my research, I look at how humans manage one of their greatest learning achievements: the acquisition of language. I build computational models of how people can learn language with cognitively-general processes and as few presuppositions as possible. Currently, I'm working on models for acquiring phonology and other constraint-based aspects of cognition.

I also examine how we can use large electronic resources, such as Twitter, to learn about how we speak to each other. Some of my recent work uses Twitter to map dialect regions in the United States.
