I’m a little surprised that I’ve been blogging for almost five years now and never got around to talking about whether there’s a difference between the words disinterested and uninterested. I suppose I’ve avoided it because the matter has already been excellently discussed by many others, and I didn’t think I needed to add my voice to that choir. But now it’s become something of a glaring omission in my mind, so it’s time to fix that.

Let’s skip to the end and fill in the middle later: there is a difference, but in Mark Liberman’s words, it’s “emergent and incomplete, rather than traditional and under siege”. For some people, there’s a clean separation, for others an overlap. In the language in general, uninterested is limited to the “unconcerned” meaning, while disinterested can mean either “unconcerned” or “unbiased”.

How do two distinct meanings arise from such similar words? The problem lies at the root — namely, interest, which can be without (1a) or with (1b) bias:

(1a) I espouse a relatively dull orthodox Christianity and my interest in Buddhism is strictly cultural, aesthetic.
(1b) Upon consignment of your car, it’s in my interest to do everything possible to present your car to potential buyers.

So, when one adds a negative prefix to interest(ed), is it merely disavowing concern, or bias as well? I don’t know of any inherent difference between dis- and un- that would solve that question, and historically, no one else seemed to either. Though I don’t have relative usage statistics, the Oxford English Dictionary cites both forms with both meanings early in their history:

(2a) How dis-interested are they of all Worldly matters, since they fling their Wealth and Riches into the Sea. [c1677-1684]
(2b) The soul‥sits now as the most disinterested Arbiter, and impartial judge of her own works, that she can be. [1659]

(2c) He is no cold, uninterested, and uninteresting advocate for the cause he espouses. [1722]
(2d) What think you of uninterested Men, who value the Publick Good beyond their own private Interest? [1709]

But we both know that it’s no longer the 18th century, and I strongly suspect that you find (2d) to be a bit odd. The OED agrees, and marks this meaning (uninterested as “unbiased”) as obsolete. I looked over the first 50 examples of uninterested in COCA (Corpus of Contemporary American English) as well and found no examples like (2d). If it still exists, it’s rare or dialectal. Uninterested meaning “unconcerned” (2c) is, of course, alive and well.
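
(If you want to run a similar spot-check yourself, here's a minimal sketch. COCA itself is only searchable through its web interface at english-corpora.org, so this assumes you have some plain-text corpus of your own in a hypothetical file called corpus.txt; the keyword-in-context lines it prints are what you'd then eyeball for an "unbiased" reading.)

```python
import re

# Hypothetical local corpus file; COCA itself has to be searched through its
# web interface, so substitute whatever plain-text corpus you have on hand.
CORPUS_FILE = "corpus.txt"

def concordance(word, path=CORPUS_FILE, window=60, limit=50):
    """Print up to `limit` keyword-in-context lines for `word`."""
    pattern = re.compile(r"\b" + re.escape(word) + r"\b", re.IGNORECASE)
    with open(path, encoding="utf-8") as f:
        text = f.read()
    for hit, match in enumerate(pattern.finditer(text)):
        if hit >= limit:
            break
        left = text[max(0, match.start() - window):match.start()].replace("\n", " ")
        right = text[match.end():match.end() + window].replace("\n", " ")
        print(f"...{left}[{match.group(0)}]{right}...")

concordance("uninterested")  # eyeball each line for an "unbiased" reading
```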

So really, it's not a question of whether people are confusing uninterested and disinterested, but rather a question of whether disinterested has two possible meanings. We're certainly told that people are confusing them, and that it is imperative that disinterested be kept separate from uninterested. For instance:

"The constant misuse of disinterested for uninterested is breaking down a very useful distinction of meaning."

Is it really? Suppose disinterested could just as easily take either meaning, and that this somehow rendered it unusable.* You’d still be able to use unbiased, impartial, objective, or unprejudiced for the one meaning, and indifferent, unconcerned, and uninterested for the other. We’re not losing this distinction at all.

Setting aside such misguided passion, let’s look at how disinterested actually is (and has been) used. As we saw in (2a) & (2b), disinterested started out being used for both meanings. This persisted, according to the Merriam-Webster Dictionary of English Usage (MWDEU), through the 19th century without complaint. Noah Webster’s 1828 American Dictionary disinterestedly lists both senses, and it’s not until 1889 that MWDEU finds the first complaint. Opposition to disinterested for “unconcerned” appears to have steadily grown since then, especially in America.

But despite all the grousing, "unbiased" disinterested is hardly in dire straits. MWDEU's searches found that 70% of all uses of disinterested in their files between 1934 and the 1980s were of this sense, and that this percentage actually increased during the 1980s.*** Furthermore, the MWDEU notes that the use of disinterested for "unconcerned" usually has a subtle difference from uninterested. Disinterested is often used to indicate that someone has lost interest as opposed to having been uninterested from the start.** This fits with other un-/dis- pairs, such as unarmed/disarmed.

Summary: Far from losing an existing distinction, it seems that we’re witnessing a distinction emerging. Uninterested is now restricted to an “unconcerned” meaning. Disinterested covers impartiality, but it also can take the “uninterested” meaning, often indicating specifically that interest has been lost. Because many people object to this sense of disinterested, you may want to avoid it if you’re uninterested in a fight. Will the distinction ever fully emerge, and the overlap be lost? Would that this desk were a time desk…


*: I think it goes without saying that having multiple meanings does not make a word unusable. In case it doesn’t, consider the much more confusing words fly, lead, and read.

**: Compare, for instance, I grew disinterested to I grew uninterested. I definitely prefer the former.

***: MWDEU notes that while the distributions of the two senses overlap, the division is clearer than people let on; "unbiased" disinterested tends to modify an abstract noun like love, whereas "unconcerned" disinterested tends to modify humans and to appear with in in tow.

It’s National Grammar Day, so as usual, I’m taking the opportunity to look back on some of the grammar myths that have been debunked here over the last year. But before I get to that, let’s talk briefly about language change.

Language changes. There's no question about that — just look at anything Chaucer wrote and it's clear we're no longer speaking his language. These changes aren't limited to the periphery of the language; they reach its very core. Case markings that were once crucial have been lost, leaving us with subject/object distinctions only for pronouns (and even then, not all of them). Negation, tense marking, verbal moods: all of these have changed, and they continue to do so now.

Some people take the stance that language change is in and of itself bad, that it represents a decline in the language. That’s just silly; surely Modern English is no worse than Old English in any general sense.

Others take a very similar, though much more reasonable, stance: that language change is bad because consistency is good. We want people to be able to understand us in the future. (I’m thinking here of the introductory Shakespeare editions I read in high school, where outdated words and phrases were translated in footnotes.)

So yes, consistency is good — but isn’t language change good, too? We weed out words that we no longer need (like trierarch, the commander of a trireme). We introduce new words that are necessary in the modern world (like byte or algorithm). We adapt words to new uses (like driving a car from driving animals). This doesn’t mean that Modern English is inherently better than Old English, but I think it’s hard to argue Modern English isn’t the better choice for the modern world.

Many writers on language assume that the users of a language are brutes who are always trying to screw up the language, but the truth is we’re not. Language users are trying to make the best language they can, according to their needs and usage. When language change happens, there’s a reason behind it, even if it’s only something seemingly silly like enlivening the language with new slang. So the big question is: is the motivation for consistency more or less valid than the motivation for the change?

I think we should err on the side of the change. Long-term consistency is nice, but it’s not of primary importance. Outside of fiction and historical accounts, we generally don’t need to be able to extract the subtle nuances from old writing. Hard though it may be to admit it, there is very little that the future is going to need to learn from us directly; we’re not losing too much if they find it a little harder to understand us.

Language change, though, can move us to a superior language. We see shortcomings in our native languages every time we think "I wish there was a way to say…" A language is probably improved by making it easier to say the things that people have to or want to say. And when a language change appears, there's presumably a reason for it; when it's widely adopted, there's presumably a compelling reason for it.

The benefits of consistency are fairly clear, but the exact benefit or motivation for a change is more obscure. That’s why I tend to give language change the benefit of the doubt.

Enough of my philosophizing. Here’s the yearly clearinghouse of 10 busted grammar myths. (The statements below are the reality, not the myth.)

Each other and one another are basically the same. You can forget any rule about using each other with two people and one another with more than two. English has never consistently imposed this restriction.

There is nothing wrong with I’m good. Since I was knee-high to a bug’s eye, I’ve had people tell me that one must never say “I’m good” when asked how one is doing. Well, here’s an argument why that’s nothing but hokum.

The S-Series: Anyway(s), Backward(s), Toward(s), Beside(s). A four-part series on words that appear both with and without a final s. Which ones are standard, and where?

Amount of is just fine with count nouns. Amount of with a count noun (e.g., amount of people) is at worst a bit informal. The combination is useful for suggesting that the pluralized count noun is best thought of as a mass or aggregation.

Verbal can mean oral. In common usage, people tend to use verbal to describe spoken language, which sticklers insist is more properly described as oral. But outside of certain limited contexts where light ambiguity is intolerable, verbal is just fine.

Twitter’s hashtags aren’t destroying English. I’ve never been entirely clear why, but many people insist that whatever the newest form of communication is, it’s going to destroy the language. Whether it’s the telegraph, the telegram, text messages, or Twitter, the next big thing is claimed to be the nail in English’s coffin. And yet, English survives.

Changing language is nothing at all like changing math. Sometimes people complain that allowing language to change due to common usage would be like letting the angles of a triangle sum to more than 180 degrees if enough people thought they did. This is bosh, and here's why.

And a few myths debunked by others:

Whom is moribund and that’s okay. (from Mike Pope) On rare occasions, I run across someone trying very hard to keep whom in the language, usually by berating people who haven’t used it. But the truth is that it’s going to leave the language, and there’s no reason to worry. Mike Pope explains why.

Uh, um, and other disfluencies aren’t all bad. (from Michael Erard, at Slate) One of the most interesting psycholinguistic papers I read early in grad school was one on the idea that disfluencies were informative to the listener, by warning them of a complicated or unexpected continuation. Michael Erard discusses some recent research in this vein that suggests we ought not to purge the ums from our speech.

Descriptivism and prescriptivism aren’t directly opposed. (from Arrant Pedantry) At times, people suggest that educated linguists are hypocritical for holding a descriptivist stance on language while simultaneously knowing that some ways of saying things are better (e.g., clearer, more attractive) than others. Jonathon Owen shines some light on this by representing the two forces as orthogonal continua — much more light than I’ve shone on it with this summary.

Some redundant stuff isn’t really redundant. (from Arnold Zwicky, at Language Log) I’m cheating, because this is actually a post from more than five years ago, but I found it within the last year. (This is an eleventh myth anyway, so I’m bending rules left and right.) Looking at pilotless drones, Arnold Zwicky explains how an appositive reading of adjectives explains away some seeming redundancies. If pilotless drones comes from the non-restrictive relative clause “drones, which are pilotless”, then there’s no redundancy. A bit technical, but well worth it.

Want to see somewhere between 10 and 30 more debunked myths? Check out some or all of the last three years of NGD posts: 2011, 2010, and 2009.

The English subjunctive may well be dying, but I am shedding no tears for it. This unconcern is, perhaps, a minority view amongst men of letters, for whom saying if I were instead of if I was is often a marker of a proper education, but I’m comforted by the fact that it is the majority view amongst users of English.

The subjunctive, if you’re not familiar with it, is a verbal mood* that appears in a variety of languages. It’s prominent in Romance languages (if you’ve taken French or Spanish, you’ve surely encountered it), and it exists to various extents in other Indo-European languages as well, including English. The basic idea of the subjunctive mood is that it expresses something counter to reality. For instance, one might say:

(1) If Alicia were the President, she’d get Party Down back on the air.

Normally, you'd say "Alicia was"; "Alicia were" would be a misconjugation. But because we're talking about a counterfactual situation (Alicia is not really the president), we can use the subjunctive mood instead. And in this counterfactual use of the subjunctive, the form of the verb to be is were, regardless of the subject.

Often you'll see people using the ordinary indicative in these situations, writing in (1) "if Alicia was the President". That's because the English subjunctive is pretty weak. It can be used in counterfactual situations, but it generally isn't required. Because it's optional and subtle (it looks just like the plural indicative forms of most verbs), it's no surprise it's disappearing.

Many grammarians wail and gnash their teeth over this loss, and try to explain how important the subjunctive is.** Some explain that the subjunctive stresses the counterfactual nature of the situation, as though if you saw "if Alicia was president" in (1), you'd be thinking "I didn't know Alicia was president!". Of course no one thinks this, because the counterfactuality is already established by the use of if.

What's interesting to me, though, is that there are some situations where the subjunctive is obligatory. And I say obligatory here meaning that I don't get the right meaning out of the sentence if the subjunctive isn't used. One occurred to me during a little monologue I was having in my head as I walked across campus the other day:

(2a) He’s obsessed with the idea that everybody admire him.
(2b) He’s obsessed with the idea that everybody admires him.

In (2a), with the subjunctive, our nameless character hopes that everybody admires him, suggesting a dearth of self-esteem. In (2b), with the indicative, our nameless character believes that everybody admires him, suggesting an overabundance of self-esteem.*** Here’s another one that just came to me, and here not using the subjunctive seems very awkward (although I’ve found examples of it in the corpus):

(3a) I require that it be done tomorrow
(3b) ?I require that it is done tomorrow

So, you might say, how can I idly declare the subjunctive on its way out while also declaring its necessity? Well, quite simply, if it disappears, we'll do something else. In the case of (3b), it seems that this indicative form is gaining traction. As for (2a), just changing the word idea to hope or desire gives the same irrealis reading without requiring the subjunctive. When language change happens, it doesn't become impossible to say something. It just becomes impossible to say it the old way.

The worst case scenario is that the meanings of (2a) and (2b) get said the same way (with the indicative form admires), that they become a little bit ambiguous, and that we have to rely on context to tell them apart. Even that isn’t a bad situation, since we already do that with so many other things in language. The difference is critical in our current form of English, but it probably won’t be in future forms.


*: The subjunctive is properly called a mood, not a tense, because it exists across tenses; there are past, present, and future subjunctives. This Wikipedia article has some good info on this. The “standard” mood of English is known as the indicative, because it indicates what is really there.

**: I'm especially fond of the Academy of Contemporary English's thoughts on the matter: “[Not using the subjunctive forms] is so common, in fact, that few people realise that they are using bad English when they mix them up. The difference is of the utmost importance [...]”

NB: when only a few people notice a language distinction, it is not important, let alone of the utmost importance.

***: I won’t spoil the minor mystery by revealing which of the two I was actually thinking.

Hiding from my dissertation in a little alcove under the stairs on the bottom floor of the library, I was scanning through a book of grammar gripes. One of them was the common objection to transitive usage of the verb graduate. For instance, people will sometimes say:

(1) Now that they’ve graduated high school they can set their goals on college.

Those of an older bent will be more familiar with an intransitive usage where the graduated institution appears in an ablative* prepositional phrase:

(2) Yesterday the heir to the Notorious B.I.G. throne, young Tyanna graduated from high school at an undisclosed location.

And, you may be thinking, darn right! It’s graduated from, and it’s always been, and the kids are screwing up the language again. And it’s true that the transitive form in (1) is newer and seems to be gaining in popularity.** But it turns out that graduate from isn’t the original form, either. It used to be graduated at, as in this 1871 example:

(3) He graduated at Williams College in 1810, and studied theology with the Rev. Samuel Austin, DD, of Worcester, Mass.

So already, just going back 140 years, we’ve seen transitions from graduated at to graduated from to the plain graduated. But there’s an even more substantial change in the history of graduate. Graduating used to be something a school did to its students, not something the students did to the school. One was graduated at some school — witness this 1827 list of folks that Harvard graduated, such as:

(4) Jabez Chickering, Esq., son of Rev. Jabez Chickering, was graduated at Harvard University, in 1804; and settled in the profession of law in this town.

I’ve put together a Google Books N-grams graph illustrating the changes over time:

[The history of graduate]

Interestingly, it looks like the forms in (3) and (4) were both in use throughout 19th century American English. That’s a bit surprising because the two forms assign different roles to their subjects, but it just goes to show that grammatical ambiguity is tolerable when there’s no chance of confusing the roles. (It’s always clear that the person is getting the degree, and the university issuing it.) We see was graduated at start dropping off in the second half of the 19th century, graduated at remaining strong until the early 20th century, and graduated from taking off from there.
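
For the curious, here's roughly how you could pull the numbers behind a graph like this yourself. It's only a sketch: the JSON endpoint it hits is the unofficial, undocumented one that the Ngram Viewer's own page uses, so the corpus name and other parameters are assumptions that may need adjusting if Google changes things.

```python
import requests

# Unofficial, undocumented endpoint used by the Ngram Viewer itself;
# the parameters (especially the corpus identifier) are assumptions and may change.
NGRAM_URL = "https://books.google.com/ngrams/json"

phrases = ["was graduated at", "graduated at", "graduated from", "graduated high school"]

params = {
    "content": ",".join(phrases),
    "year_start": 1800,
    "year_end": 2000,
    "corpus": "en-2019",  # assumed identifier for the 2019 English corpus
    "smoothing": 3,
}

resp = requests.get(NGRAM_URL, params=params, timeout=30)
resp.raise_for_status()

for series in resp.json():
    # Each entry should carry an "ngram" label and a yearly "timeseries" of relative frequencies.
    peak_freq = max(series["timeseries"])
    peak_year = params["year_start"] + series["timeseries"].index(peak_freq)
    print(f'{series["ngram"]}: peak relative frequency {peak_freq:.3e} around {peak_year}')
```

From there you could feed the timeseries into a plotting library to redraw the graph, but even the printout should be enough to see the handoff from was graduated at to graduated at to graduated from.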

So while I graduated high school may not yet be standard, it will be, and there’s nothing wrong with it. It just isn’t what people used to say. For whatever reason, the younger generation likes to change how graduation works. There’s no reason to fret over it; it’ll change, and life will go on, and our kids will be just as grumpy as us when their kids re-reinvent the word’s usage.

*: Ablative is one of a set of words describing the cases that can be marked in a language. Ablative in particular indicates motion away from something; Wikipedia has a list of these, including such fun ones as illative and inessive. (Valid only for certain definitions of “fun”.)

**: I'm a little surprised, but I don't see any clear evidence in Google Books N-grams or the Corpus of Historical American English (COHA) of the transitive usage growing faster than the ablative intransitive. I suspect this is due to a strong avoidance of the transitive usage in writing, which is what both of these corpora are based on.

About The Blog

A lot of people make claims about what "good English" is. Much of what they say is flim-flam, and this blog aims to set the record straight. Its goal is to explain the motivations behind the real grammar of English and to debunk ill-founded claims about what is grammatical and what isn't. Somehow, this was enough to garner a favorable mention in the Wall Street Journal.

About Me

I'm Gabe Doyle, a graduate student/doctoral candidate in Linguistics at UC San Diego. I have a Bachelor's in math from Princeton and a Master's in linguistics from UCSD.

In my research, I look at how humans manage one of their greatest learning achievements: the acquisition of language. I build computational models of how people can learn language with cognitively general processes and as few presuppositions as possible.

I focus on learning problems that have traditionally been viewed as difficult, such as combining multiple information sources or learning without negative data or ungrammatical examples. My dissertation models how children can use multiple cues to segment words from child-directed speech, and how phonological constraints can be inferred based on what children do and don't hear adults say.


