
goofy recently posted at bradshaw of the future about momentarily and some strange advice Grammar Girl sent out about it. Her advice:

“Don’t use momentarily to mean ‘in a moment’; you may confuse people. If you mean in a moment, say or write that. There’s no need to use momentarily in such cases, and doing so will irritate language purists.”

A quick note first: both the “in a moment” and “for a moment” meanings of momentarily have been around for 140 years, so the purists are completely unjustified in their complaint. Also, sure, there’s no need to use momentarily here, but then, there’s no need to ever use any given word. You can always paraphrase or re-write the sentence.

But the real question is two-fold: whether the benefits of using a questionable word outweigh its costs, and whether there’s a better word. You might think of this as a satisficing condition and an optimization condition.* And I suspect — although I don’t know if anyone’s studying this, or what they’ve found — that there’s some sort of switch-off between the two methods depending on what production task you’re doing. When speed is one’s primary concern, presumably it’s sufficient to check that the word is beneficial; only when one has the luxury of time does full optimization kick in.

So is momentarily costly — i.e., will it confuse readers? goofy makes a good point about the potential confusion:

“If it’s more common for people to use momentarily to mean ‘in a moment’, then why advise people not to use it that way? It seems that Grammar Girl is essentially saying ‘don’t speak like everyone else in your speech community speaks.’ This seems counterproductive. […] it might confuse people – but if most people already use it that way, why should it be confusing?”

He gives the example of a pilot saying “we’ll land momentarily”, and notes that no one except an uncooperative listener will think “that means ‘for a moment’!” But one might harbor doubts. Maybe no one will end up with that interpretation, but maybe they’ll be distracted by it along the way. Yeah, that’s certainly possible — but listeners are more adept at ignoring irrelevant ambiguities than we tend to give them credit for.

The famous example of this from introductory linguistics classes is Time flies like an arrow. The first time someone sees this sentence, it just sounds like a standard aphorism, and the only meaning they’re likely to seriously consider is “time moves in a swift manner, akin to an arrow”. But this sentence is ambiguous, of course, as almost all sentences are. Many of the words have different senses and different parts of speech that they can take on.

If we switch from a Noun-Verb-Preposition reading of time flies like to a Noun-Noun-Verb one, we get: “‘Time flies’ (as opposed to houseflies or gadflies) appreciate an arrow”. There’s also a Verb-Noun-Preposition reading, yielding an imperative: “as though you were an arrow, record the time the flies take to complete a task”. There are other interpretations, too, but none of these is likely enough, given our world-knowledge and parsing probabilities, to register in our minds. We can reasonably expect that Time flies like an arrow will be correctly understood, without time lost to alternative interpretations, by any audience that isn’t actively looking for implausible interpretations.
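
To put some numbers on that intuition, here’s a toy sketch in Python of a probabilistic comprehender. The parses and their probabilities are invented for illustration (a real comprehender’s numbers would come from world knowledge and corpus statistics), but they show why the fringe readings never surface:

```python
# Toy model of how a listener settles on one reading of an ambiguous
# sentence. The probabilities below are made up for illustration.
parses = {
    "time moves swiftly, like an arrow": 0.97,  # Noun-Verb-Preposition
    "'time flies' appreciate an arrow": 0.01,   # Noun-Noun-Verb
    "time the flies as an arrow would": 0.01,   # Verb-Noun-Preposition
    "assorted fringe readings": 0.01,
}

def interpret(parses, threshold=0.9):
    """Pick the most probable parse; if it clears the threshold,
    the alternatives are pruned without ever registering."""
    best = max(parses, key=parses.get)
    return best if parses[best] >= threshold else None  # None: genuine ambiguity

print(interpret(parses))  # -> "time moves swiftly, like an arrow"
```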

So too should we expect momentarily to be correctly understood; claiming to have difficulty with it marks the complainer, not the speaker, as the one who doesn’t understand language. As an editor, one generally ought to foolproof writing, looking for and eliminating potential (even if fairly unlikely) misinterpretations. But there’s a difference between editing to protect fools from ambiguity and editing to protect uncooperative readers from ambiguity. The former is difficult, but generally doable. The latter is often simple, but generally worthless.**

Let me conclude with a good question from Jonathon Owen in the comments on goofy’s post:

“And if the problem is simply that purists will be annoyed, why not direct our efforts to teaching the purists not to be annoyed rather than teaching everyone else to avoid offending this very small but very vocal set of peevers?”

*: “Satisficing” is an idea I’m fond of, though one that doesn’t get talked about much outside of human decision-making tasks. In the familiar optimization strategy, you’re trying to find the best of all possible options, whereas a satisficing strategy is just looking for any option that’s better than some threshold. For instance, if you go to the store with two dollars and need to buy milk, you can optimize by comparing multiple sub-$2 cartons before picking the best of that lot, or you can satisfice by buying the first carton that costs less than two dollars.

Satisficing is generally faster and, if I remember my undergrad psych classes correctly, is common in human decision-making processes, especially when time is of the essence.
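
Here’s a minimal sketch of the difference in Python, using the milk-carton example above; the prices and the two-dollar threshold are invented for illustration:

```python
# Optimizing vs. satisficing over the milk-carton example.
# Prices are made up; "better" here just means cheaper.
prices = [2.49, 1.89, 1.79, 1.99, 2.10]

def optimize(options):
    """Examine every option and return the best one."""
    return min(options)

def satisfice(options, threshold=2.00):
    """Return the first option that clears the threshold, stopping early."""
    for price in options:
        if price < threshold:
            return price
    return None  # nothing acceptable found

print(optimize(prices))   # 1.79 -- the best carton, but checks all five
print(satisfice(prices))  # 1.89 -- the first acceptable carton
```

Note the trade-off: the satisficer misses the 1.79 carton, but it stops scanning as soon as something acceptable turns up, which is exactly why it’s faster.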

**: One exception, presumably, is in legal writing/contracts.

I’ve been noticing a lot of aspersions being cast against the comma splice recently. A quick sampling:

“The dreaded comma splice rears its ugly head again.”

“Splices are the worst, namely because there are probably over a hundred other ways to combine the clauses correctly”*

“My senior English teacher marked down any paper with even a single comma splice by two letter grades [… It] gave me a terror-loathing of comma splices that has never left me.”

A comma splice, also known more judgmentally as a comma fault, error, or blunder, occurs when a writer joins two independent clauses with only a comma. One might write, for instance:

(1) I'm going to the store, I'll be back soon.

Sure, there are lots of other ways to join the clauses above (I suspect fewer than 100), such as a semicolon, a dash, or a comma with a conjunction. The trouble is that each of the options carries with it a certain feel: the semicolon feels a bit formal, the dash a bit distant, the conjunction a bit unnecessary. The comma splice is light and airy, a gentle joining that fits the breezy style I wanted in that sentence.

But alas, that breeziness is abhorred by many English users, whether due to fear of punishment or their personal preferences. I can see where they're coming from, and surely you can too. Comma splices are often misused; the simplicity of their splice rarely sounds good with bulky clauses or ones that don't have an obvious connection. Continually using comma splices can make your writing sound like a bouquet of run-ons, and there's always the danger of confusion in using comma splices with clauses that look like lists.

But there's nothing inherently wrong, dreadful, or ungrammatical about a comma splice. In fact, if there's anything bad to be said about the comma splice, it's that it's old-fashioned.

Comma splices were unexceptional in the 18th century; the Merriam-Webster Dictionary of English Usage offers examples from Daniel Defoe, Jonathan Swift, and Benjamin Franklin. You might object that punctuation was in flux in those days. It’s a fair point, although I could rejoin that punctuation remains in flux through the present day. But also, we find that even as the punctuation system of English came together in the 19th century, comma splices remained common in letters. In fact, the earliest complaint against the comma splice found by the MWDEU staff only dates back to 1917.

That’s the historical side. So what about the informality? That 19th-century shift mentioned above is an early indication of the emerging informality of the splice; its continued appearance in letters but drop-off in published works suggests a growing opinion that it was informal. Stan Carey’s post on comma splices serves in part as a repository for modern splices, and most of his examples feel informal as well.**

[One of Stan’s examples, a spliced command, was quoted here.] I really like this splice, as it softens the command.

So what caused the change in perception? The MWDEU offers a potential explanation that I find reasonable: the very idea of the comma splice is based on the brief pauses in speech that have no equivalent in formal writing. Older English punctuation was less the mostly semantic/syntactic system we have now and more a system for marking how long a pause one would take if the passage were read aloud. Informal writing also tends to be punctuated more like speech; many of the punctuation choices I make in writing this blog, for instance, are motivated by how I’d say what I’ve written. Formal writing in the modern English punctuation system asks for more explanatory punctuation, and so the comma splice fell by the wayside. Sounds like a plausible hypothesis to me, though I don’t know of a good way to test it.

And that brings up the crux of why comma splices are demonized. They are informal, which means that virtually all style guides will be against them. (An aside: why are there no style guides for informal writing? I’d say it’s because it’s easy and obvious to write informally, but looking at how people write emails and comments and blogs, it certainly seems a lot of people could use guidance in translating from the voice in their heads to words on a screen.)

Of course, it’s fair for style guides to oppose informal things, as far as it goes. The problem is that style guides tend to do a poor job of saying “you only need to worry about this in formal writing”, and their readers do an even worse job of stopping themselves from applying any piddling rule from their preferred stylebook to the whole of English.

Speaking of which: E. B. White, he of Strunk & White and The Elements of Style, illustrates the need to deviate from style guides in informal situations. The fifth Elementary Rule of Usage in their book is Do not join independent clauses with a comma. In a 1963 letter, White wrote:

“Tell Johnny to read Santayana for a little while, it will improve his sentence structure.”

Now there’s a man who knows not to be pushed around by style guides.

Summary: Comma splices were perfectly normal in 18th-century punctuation. Starting in the 19th century, as English punctuation became codified, they were left somewhat on the outside, possibly due to their close connection to speech. They remain standard for informal writing, especially when short, closely connected clauses are being spliced. There is nothing inherently wrong with comma splices, although when overused or used by a tin-eared writer, they can sound like run-ons.

*: I’m especially fond of this one, since it sounds like the problem with comma splices is just that there are other options, not that there are better options. I love the ambiguity in the scope of other, and whether it covers “correctly”.

**: Stan also has some good advice on how and when he’d use or avoid comma splices, though our opinions differ a bit.

I was reading through a brief response by Erin Brenner to Bill Walsh’s contention that the try and X construction should be opposed. (You know, like “I’ll try and write a new post sometime soon.”) Walsh’s basic point:

“‘I have to enforce this peeve,’ [Walsh] said. ‘You try to do something. To try and do something is to (a) try to do it, and (b) do it, which is not the intended meaning of the phrase.’”

And Brenner’s:

“The problem I have with Walsh’s reasoning is that try and is an idiom. There’s no point in trying to make sense of an idiom’s grammar; an idiom has its own unique (‘peculiar,’ says the American Heritage Dictionary) grammar. It doesn’t have to make literal sense.”

I agree with Brenner here. Sure, try and X doesn’t seem to make much sense.* But it doesn’t matter if it makes sense; if we’re trying to study language, we don’t get to say “I don’t understand this data” and throw it away. We’re stuck with the fact that people say and write try and X (the OED even offers an example from Paradise Regained, and Google Books has one from 1603) and it feels natural to most people.

When Walsh says that “To try and do something is to (a) try to do it, and (b) do it”, it’s clear what he’s getting at, but he’s wrong because that’s not what it means. What it means is what people use it to mean, and people overwhelmingly use it to mean (approximately) “try to do something”. That’s how language works; if everyone thinks a construction means X, then it means X.

It’s a similar problem with could care less, which people exasperatedly complain should be couldn’t care less. Of course it “should”. But everyone understands could care less to mean what it’s used to mean (with the possible exception of non-native English speakers and obstinate native speakers). And whenever most everyone agrees on what something means, whether it be a word or a phrase or an idiom, that’s right, no matter how illogical it seems.

That might sound weird. But if we’re going to treat language as something to be studied, as a science, then that ties our hands a bit. Quantum mechanics is a hot mess, and it sure would have been easier if it were Newtonian physics all the way down. But physicists don’t get to say, “nah, that’s crazy, let’s just keep using Newtonian models.”** Taxonomists don’t get to say “nope, platypuses are too strange, we just won’t classify them.” And so on.

[Psyduck]

Thankfully, taxonomists don’t have to classify Psyduck.

You can have an unassailable argument for why we shouldn’t be able to get the meaning out of a word or phrase or construction, but if everyone understands it, your argument is wrong. This is an essential fact of language. There are rules in language, but if the language itself breaks them, then it’s a shortcoming of the rule, not of the language.

So what can we say about try and? We can try to put together an explanation for how the unexpected meaning arose, looking at possible ancestries for it, possible analogical routes that might have spurred it. We can classify where and when and how it’s used (it’s generally informal, for instance). But when it comes time to figure out how it makes sense, it could well end up that all we can do is throw up our hands, call it an idiom, and move on. After all, what’s really interesting about language (at least for linguists) is the higher-level stuff like phonology or syntax or computational psycholinguistics; idioms are just the charming baubles that catch our eyes.

Of course, none of this means that one can’t be against an idiom — only that its supposed illogic is one of the weakest reasons to oppose it. I don’t have a problem with Walsh correcting try and in situations where it’s inappropriate or likely to cause confusion (e.g., formal writing or writing directed at an international audience). I do the same with non-literal literally, not because it’s confusing or incomprehensible or uneducated or new — it’s not — but because it feels cheap and hyperbolic to me, especially when used regularly. But these are stylistic choices, not grammatical ones. They aren’t returning logic to language.

*: It makes a little bit more sense when you think of the construction as an analogue of come and X or go and X, and realize that and in this situation is indicating dependence between the attempt and the action rather than simultaneity. The seeming noncompositionality of the construction may be in part due to language change, as the MWDEU notes that various related constructions (e.g., begin and) were common in the past, and thus when try and emerged, the dependent sense of and may have been more productive. In fact, the MWDEU hypothesizes that try and may predate try to.

**: Of course, they can do this when they’re staying at macroscopic scales, where quantum effects are undetectable — and thank God for that, or I’d’ve never survived college physics classes.

Gender-neutral language really burns some people’s beans. One common argument against gender-neutral language is that it’s something new. See, everyone was fine with generic he up until [insert some turning point usually in the 1960s or 1970s], which means concerns about gender neutrality in language are just manufactured complaints by “arrogant ideologues” or people over-concerned with “sensitivity”, and therefore ought to be ignored.

I have two thoughts on this argument. The first: so what? Society progresses, and over time we tend to realize that certain things we used to think were just fine weren’t. The fact that we didn’t see anything wrong with it before doesn’t mean we were right then and wrong now. Furthermore, women have gained power and prominence in many traditionally male-dominated areas, so even if gender-neutral language had been unnecessary in the past (e.g., when all Congressmen were men), that wouldn’t mean it’s a bad idea now.

But my second thought is this: the very premise is wrong. Concerns about gender-neutral language date back far beyond our lifetimes. Here are a few examples:

Freshmen. In the mid-19th century, the first American women’s colleges appeared. One of the earliest of these, Elmira College, had to figure out what to call the first year students, i.e. freshmen. For its first ten years, Elmira referred to this class as the protomathians, before deciding to return to the established usage. Rutgers, similarly, proposed novian to replace “freshman” when they began accepting female students.

Mankind. You can go pretty far back in English and see examples of mankind being viewed as non-gender-neutral. This led some authors who wanted to avoid any confusion about whether they were including women to use the phrase “mankind and womankind”; here’s Anthony Trollope doing so in 1874, and other people’s attestations from 1858, 1843, 1783, and 1740. This suggests that mankind was viewed as sufficiently likely to be non-generic as to cause at least hesitation if not confusion. In a sense, this is an early generic he or she. Speaking of which…

He or she. He or she really gets people’s goats, and to some extent I can see why; it’s not short and simple like pronouns standardly are, and it can throw off the rhythm of the sentence. (This is why I prefer singular they.) Given that it’s ungainly, you might suspect, as most people do, that this is a new usage that only appeared once it was too politically incorrect to ignore women. But while it only started getting popular in the 70s, it’s been used much longer than that. Here it appears 19 times in two paragraphs in an 1864 book of Mormon Doctrine. Turning from religion to law, here it is in an 1844 Maryland law, and here it is in various British laws from 1815. Here’re examples from Acts passed by the First American Congress in 1790, and so on and so on.

Person as a morpheme. Another common complaint is about supposedly ugly new words like salesperson or chairperson or firefighter.* But such gender-neutralized forms were already being created as needed before the 1970s. Here’s salesperson used 100 times in a book from 1916.** Here’s another example, in the title of an article discussing paying commission to salespeople back in 1919. The OED offers even older examples, with tradesperson in 1886 and work-person in 1807.

Singular they. I know I sound like a broken record on this point, but singular they — using they in place of generic he for singular referents of unknown gender — has been around a long, long time. Henry Churchyard’s site lists off examples spanning from 1400 to the present day, with a special focus on Jane Austen’s 75 singular uses of their.

In conclusion, I’m definitely not saying that gender-neutral language was as prominent in the past as it is today. I’m just saying that when someone says that everyone was fine with non-neutral English up until the 1970s, they’re wrong. Clearly people were concerned about this before then, and adjusted the language to be gender-neutral when it seemed appropriate. This is not something totally new; it is not unprecedented; it is not a dastardly attempt to undermine the English language. It is just an expansion of an existing concern about English usage.


*: I just want to jump in and note that I find firefighter more precise and cooler-sounding than fireman; then again, I may have some unresolved issues with the latter term stemming from the difficulties I had in beating Fire Man when playing Mega Man.

**: The first part of this book is even titled “The Salesperson and Efficient Salesmanship”, showing gradient gender-neutrality decision-making, where gender-neutral forms are used when the gender is prominent or easily removed, and non-neutral forms when the gender is subtler or difficult to remove.

About The Blog

A lot of people make claims about what "good English" is. Much of what they say is flim-flam, and this blog aims to set the record straight. Its goal is to explain the motivations behind the real grammar of English and to debunk ill-founded claims about what is grammatical and what isn't. Somehow, this was enough to garner a favorable mention in the Wall Street Journal.

About Me

I'm Gabe Doyle, currently a postdoctoral scholar in the Language and Cognition Lab at Stanford University. Before that, I got a doctorate in linguistics from UC San Diego and a bachelor's in math from Princeton.

In my research, I look at how humans manage one of their greatest learning achievements: the acquisition of language. I build computational models of how people can learn language with cognitively-general processes and as few presuppositions as possible. Currently, I'm working on models for acquiring phonology and other constraint-based aspects of cognition.

I also examine how we can use large electronic resources, such as Twitter, to learn about how we speak to each other. Some of my recent work uses Twitter to map dialect regions in the United States.


