Weekly Read: The State of Jones

With the movie Free State of Jones opening this weekend, I thought it was a good chance to highlight this review of one of the books on which it’s based, from my old blog.

A title is a promise, at least for a work of nonfiction. It’s what draws you in, after all, and convinces you to give a book more attention. The full title of this book by Washington Post reporter Sally Jenkins and Harvard professor John Stauffer is The State of Jones: The Small Southern County That Seceded From the Confederacy. It’s a case that Jenkins and Stauffer don’t make.

Which is a shame, because the story they have to tell is fairly fascinating in its own right and something that a lot of Americans don’t know about. Revolving around a backwoods Mississippi “dirt farmer” named Newton Knight, it’s a tale of racial and class divisions before, during, and after the Civil War. Poor farmers from areas of Mississippi like Jones County had little interest in defending the ability of wealthy elites elsewhere to own slaves. Faced with the horrors of war in places like Corinth and Vicksburg and with families starving back at home due to shitty wartime economics, Knight and a group of others deserted from the Confederate army and headed back home.

Back in the Mississippi countryside, Knight and company organized an armed group that basically made life impossible for the Confederacy in Jones and surrounding counties. In addition to skirmishing with soldiers dispatched to arrest them for desertion, Knight’s group raided Confederate supply lines and tax collectors. It’s fair to say, based on the evidence presented in the book, that Jones County was effectively outside the sphere of Confederate power well before the end of the war.

But that’s not the same as secession. Maybe it’s because I’m a West Virginian and familiar with our unique history when it comes to the birth of the state and kind of sensitive about it, but secession is a formal, political act, not the de facto result of guerrilla military activity. Jenkins and Stauffer never provide evidence of such an act and, in fact, don’t really show whether Knight and his company were more pro-Union insurgents or simply a group of outlaws who gathered together to protect themselves and, as a side effect, cleared the Confederates from Jones.

It’s an important distinction because there was a hot debate when The State of Jones came out about its quality as a work of history. Detractors argued that Jenkins and Stauffer massaged the historical record (and filled in gaps with imaginative extrapolations) to make Knight more of a modern progressive figure than he actually was (see, e.g., here and here). As for the question of secession itself, in part two of her three-part review, professor Victoria Bynum (author of another book on Jones County) writes:

The old tale that Newt Knight and his band of renegades drew up a Constitution during the Civil War that declared Jones County, Mississippi, to have seceded from the Confederacy has been a favorite of journalists, folklorists, and even a few historians, since the late nineteenth century. Until historians finally shattered this myth, its effect was to paint the men of the Knight Company as hyper-secessionists rather than Unionists; i.e. as good old Southern white boys on a tear against any and all authority—rebels against the Rebellion, if you will.

Stauffer’s defense is, in my opinion, weak:

From Newton Knight’s perspective, neither he nor his fellow Unionists seceded from the Union, which means they were never part of the Confederacy. Knight insisted that since Jones County had voted against secession, it ‘never seceded from the Union into the Confederacy.’

But from the perspective of the Confederacy, Knight and his fellow Unionists did secede. Confederate officers wrote that Jones County was in ‘rebellion’ against the Confederacy, and they referred to Knight and his men as ‘traitors.’ These were the same terms Republicans used to describe Confederates.

It simply doesn’t work that way. Whatever irregularities existed with Jones County’s delegate to the Mississippi secession convention (the book alleges that he switched his position and voted for secession, even though the county had voted overwhelmingly against it), the convention voted to secede and the state as a whole was along for the ride. As was Virginia, of course, except for the counties west of the Alleghenies that stood up, said “bullshit to this,” and created, eventually, the state of West Virginia. Statewide votes are binding on the entire state. Individual disaffected voters don’t get to ignore results they don’t like.

Aside from the whole secession issue, The State of Jones has some other flaws that keep it from being easily recommended. For one thing, its focus shifts without any good reason from the more personal story of Knight and his family to broad depictions of several major engagements during the war (one of which, Bynum argues, Knight wasn’t present for). Those get tedious, mostly because they drive home the same point each time – war is hell, the Confederate foot soldier’s life was one of near constant starvation and disease, and it’s easy to see why anyone would want to escape it. Once we’ve gotten that point, do we really need it made over and over again?

Another problem with the book is, as noted above, its use of speculation and conjecture to fill in the blanks of Knight’s life and the lives of those around him. To be completely fair, Jenkins and Stauffer don’t hide it when they do it. To the contrary, many times they discuss a particular event, then transition into something along the lines of “we don’t know what Knight thought about this, but it might have been . . ..” Nonetheless, it’s frustrating to have the actual history wander down such dead ends.

I’m glad I read The State of Jones, if only because I knew nothing about this particular part of the Civil War before. But, after reading it and much of the discussion about it around the Web, I wouldn’t recommend it. There are other, more scholarly (if drier, perhaps), accounts out there. But The State of Jones is the one most likely to be encountered by the general public. That’s OK, if it serves as a jumping off point, rather than a comprehensive education.

Originally published March 15, 2013.

The New York Times had an interesting article on the movie and the director’s engagement with the issue of historical accuracy.

Unmasking Judas

Since the time I wrote this post in 2014, Big Big Train staged a set of fairly rare live gigs which, thankfully, were recorded. They’ve been sharing the results on YouTube, the second of which was “Judas Unrepentant.” Sounds like a good enough excuse for me to repost this. Watch, listen, read, and enjoy!

It took a while for Big Big Train’s The Underfall Yard, released in 2009, to grow on me.  Its successor, English Electric Volume One, still hasn’t*, for whatever reason, with the exception of one track.  It’s a song about something that always strikes me as fascinating – art forgery.

“Judas Unrepentant” is about a guy who forges art, but does it in a very clever way.  Rather than churn out reproductions of known classics, he has a different scheme:

Establishing provenance
Acquiring old frames with Christie’s numbers
Then paints a picture in the same style
Specializing in minor works by major artists

It’s quite brilliant, actually.  Reminds me of a story I heard Rick Nielsen of Cheap Trick tell about their early days – where every other bar band played the radio hits by Zeppelin or The Who, they’d learn the B-sides nobody paid much attention to, so it sounded like original material (although they never passed it off that way).

I always wondered if the song was completely fictional or inspired by a real forger.  Last night, I think I got the answer, thanks to a 60 Minutes piece on Wolfgang Beltracchi.  As the setup explains:

Wolfgang Beltracchi is a name you may never have heard before.  Very few people have. But his paintings have brought him millions and millions of dollars in a career that spanned nearly 40 years. They have made their way into museums, galleries, and private collections all over the world.  What makes him a story for us is that all his paintings are fakes. And what makes him an unusual forger is that he didn’t copy the paintings of great artists, but created new works which he imagined the artist might have painted or which might have gotten lost. Connoisseurs and dealers acknowledge that Beltracchi is the most successful art forger of our time — perhaps of all time. Brilliant not only as a painter, but as a conman of epic proportions.

Now, the song is not Beltracchi’s story.  For one thing, the song indicates that its hero wanted to get caught:

His time bombs are in place
And anachronisms
Clues pointing to the truth
If ever they are X-rayed

It’s clear from the story that Beltracchi didn’t want to get caught – which, eventually, he was.  He was sentenced to six years in prison and his wife/codefendant to four.  As for how he got caught?

But then in 2010, he got busted by this tube of white paint.

The Dutch manufacturer didn’t include on the tube that it contained traces of a pigment called titanium white. That form of titanium white wasn’t available when [Max] Ernst would have painted these works and Beltracchi’s high ride was over.

Which is interesting, because in the song, our hero:

Wrote legends in lead white
to trick the experts
And hoodwink the trained eye

Coincidence?  Could be.  But Beltracchi’s story must have been in the news in Europe sometime before “Judas Unrepentant” was written, so it makes sense that one served as inspiration for the other.

One thing I will say for the song is that it provides something the 60 Minutes piece doesn’t, which is answering why go through all the trouble?  Beltracchi is a staggeringly talented guy.  Presumably he could have been a successful artist under his own name, so why all the fraud?  “Judas Unrepentant” has an answer:

He’s painting revenge
Embittered by lack of success

* * *

Expressing contempt
For greedy dealers
Getting rich
At the artist’s expense

Revenge as the long con.  I like it, although it all comes to a tragic end, sadly.

I think what makes art forgers so interesting is that they tend to poke a finger in the eye of the art world, challenging its aesthetic bona fides and pointing out how, so often, people only care about the name attached to a work, not the work itself.  To that end, I applaud this collector:

This $7 million dollar fake Max Ernst is being shipped back to New York.  Its owner decided to keep it even after it had been exposed as a fake. He said it’s one of the best Max Ernsts he’s ever seen.

Because, in the end, the important thing shouldn’t be whether the signature on the bottom makes your friends jealous, but whether the art moves you and makes you think about it.

* The similarly named English Electric by OMD, however, grabbed me right away, for what it’s worth.

This post originally appeared at my old blog on February 24, 2014.

Stick Your High Art Where the Sun Don’t Shine!

Another blast from the past . . .

OK, not really. I’ve got nothing against what most people think of as “high” art – I enjoy quite a bit of it – I just object to the classification. Regardless of how well-meaning or merely taxonomic it strives to be, it carries an implied judgment of “low” art as being, somehow, not worth as much. By further implication, it suggests that those who enjoy or make “low” art are somehow lesser than those who deal with “high” art.

I bring this up because of a recent essay over at the New York Times philosophy blog by Gary Gutting (with an assist from Virginia Woolf) about the divergence. Along the way, he appears to argue that musical worth, at least (it’s unclear if his metrics would apply to literature, film, or visual arts) can actually be quantified and judged objectively.

Along the way, he lays down this assertion:

Centuries of unresolved philosophical debate show that there is, in fact, little hope of refuting someone who insists on a thoroughly relativist view of art. We should not expect, for example, to provide a definition of beauty (or some other criterion of artistic excellence) that we can use to prove to all doubters that, say, Mozart’s 40th Symphony is objectively superior as art to ‘I Want to Hold Your Hand.’ But in practice there is no need for such a proof, since hardly anyone really holds the relativist view.

* raises hand *

I’m not sure how many of us there are, but I for one will proudly admit to being a relativist on the quality of art. Someone’s interaction with art is so personal, so bound up in the quirks of our own experiences, that it’s impossible to convert that interaction to some kind of objective measurement. For the record, I’m not ignoring the objective fact of consensus – that I like something a majority of the world can’t stand doesn’t make them right and me wrong, but it does mean I’m swimming against the current.

Anyway, back to the philosopher, who continues:

We may say, ‘You can’t argue about taste,’ but when it comes to art we care about, we almost always do.

Well, yeah, people will argue about things that matter to them, be it art, politics, or sports. Just because we do doesn’t mean the arguments can be won on some kind of objective scale. Humans will argue about anything!

He goes on:

You may, for example, maintain that the Stones were superior to the Beatles (or vice versa) because their music is more complex, less derivative, and has greater emotional range and deeper intellectual content. Here you are putting forward objective standards from which you argue for a band’s superiority. Arguing from such criteria implicitly rejects the view that artistic evaluations are simply matters of personal taste. You are giving reasons for your view that you think others ought to accept

Several things strike me as wrong about this.

The most important one, I think, is that Gutting is conflating the manner in which someone defends a preference with the actual basis upon which that preference rests. I’ve listened to an awful lot of music in my four decades on the planet, from the most popular radio hits to the most obscure wind band compositions. A lot of those I’ve listened to because of “hey, if you liked X, you’ll like Y, too” recommendations. I’m not sure they’re worth any more than a coin flip when it comes to predicting whether I’ll like it or not. Some things move me, some things don’t. The same is true for everybody, isn’t it?

More likely, these “objective” standards upon which Gutting relies are not the considerations we have when we decide something moves us, but post-hoc rationalizations to try and explain why that thing moved us. At the end of the day, I can’t really say why I prefer Marillion to Magma.* I suppose I could dig into the construction of the various songs and come up with some reasons for it, but they’d be meaningless. Most of the time, I’d rather listen to Brave than Udu Wudu. But sometimes not, you know? I can’t really tell you why.

Gutting’s reference to “objective” standards makes me think of people who argue about whether one athlete is better than another when they’re separated by decades. Yes, statistics will be trotted out to argue that Pele is better than Lionel Messi (or vice versa), but they don’t prove anything. Too many years have passed, the game has changed, etc. Ultimately, we have our favorite in mind before the argument begins and scramble to find some justification for it. If it was as simple as “consult these objective measurements” there’d be nothing to argue about.

Another flaw in Gutting’s presentation is assuming that those things he lists are “objective” to begin with. I’ll give him a pass on complexity for now (although more on that later), but the others have not just some, but large amounts of, subjectivity inherent in them. Whether something is “derivative” is a value judgment, in the end. Any musician is influenced by other music she’s heard and is, to some extent, derivative of what’s come before. What’s the dividing line for being too derivative? What if it’s a parody, pastiche, or homage, anyway? Even more untethered from objective measurement are a piece’s “emotional range” and “intellectual content.”

As for complexity, how to measure it and what it means isn’t readily apparent. “Complex” generally implies some amount of difficulty, but any musician will tell you that sometimes playing something “simple” precisely and with musicality is more difficult than playing something that’s a tangled flurry of notes. Furthermore, that something is more complex doesn’t make it inherently more likely to connect with the listener. Quite the opposite, in fact. Returning to the Marillion/Magma example, few would argue if you called Magma’s music the more complex, but that wouldn’t lead inexorably to a conclusion that it was superior. For some folks it would be, for some folks it wouldn’t. For some people, there is a point where there are simply too many notes.

For another thing, using complexity as some sort of taxonomic tool fails to compare like with like. Of course a three-minute song recorded in the early days of multitrack recording by four guys is less “complex” than a half-hour long symphony written to be performed by a full orchestra made up of dozens of people. So what? How does that help us judge either piece? It’s like saying dessert is less nutritious than the main course – it utterly misses the point.

Someone in the comments to Gutting’s piece trotted out Duke Ellington’s aphorism:

There are simply two kinds of music, good music and the other kind

But even that’s not quite right – there’s what you like and what you don’t; what moves you and what doesn’t; what you want to hear and what you don’t. That a lot of people agree with you, or a consensus develops down through history that a particular work is a masterpiece doesn’t change that.

At the end of the day, as I said, art is personal. To label some of it “high” and some of it “low” throws up class barriers where none really exist. People like what they like. Sometimes, they like the same stuff you do. Sometimes they don’t. Deal with it.

* Before I get any angry letters in Kobaïan, I dig Vander’s bunch when I’m in the mood. Don’t take it personally.

This post originally appeared at my old blog on July 18, 2013.

The Other Side of Jury Nullification

I haven’t talked a lot about law on this blog, but I did at my old one. Here’s a post on jury nullification that I thought I’d bring back in light of this interesting discussion over at The Volokh Conspiracy.

Jury nullification is back in the news, thanks to a heavy-handed (and most likely unconstitutional) prosecution in New York.  The local US Attorney has charged a 78-year-old man with jury tampering because:

Since 2009, Mr. Heicklen has stood there and at courthouse entrances elsewhere and handed out pamphlets encouraging jurors to ignore the law if they disagree with it, and to render verdicts based on conscience.

That concept, called jury nullification, is highly controversial, and courts are hostile to it. But federal prosecutors have now taken the unusual step of having Mr. Heicklen indicted on a charge that his distributing of such pamphlets at the courthouse entrance violates a law against jury tampering.

Eugene Volokh does a good job of analyzing the First Amendment issues with the prosecution, but I’m more interested in the underlying issue of jury nullification.

Jury nullification really isn’t a thing in and of itself.  It’s more a side effect of the prohibition against double jeopardy in the Fifth Amendment.  When a jury acquits a defendant at trial, that’s the end of it.  The prosecution cannot seek appellate review of the verdict.  By contrast, a defendant can challenge the sufficiency of the evidence on appeal, although (as I’ve explained before) there’s little chance of success.

The upshot of that setup is that a jury can return a not guilty verdict for any reason it wants, from the state’s failure to prove its case to the jury’s disgust at the law being enforced.  Those of the libertarian/people power persuasion see jury nullification as an unfettered good, a way for the people to check the power of the state when it comes to unpopular laws or discriminatory applications of otherwise popular laws.

That’s all fine and dandy, in theory, but it strikes me as naive in practice.  After all, if we tell jurors to “render verdicts based on conscience” there’s no principle that limits it to acquittals.  Judges routinely instruct jurors to ignore evidence that comes out in court and instruct them about the burden of proof and other legal issues.  If they are free to disregard what the judge says, it could lead to all kinds of problems.

Maybe I’m just cynical, but from my experience it doesn’t look like jurors give the weight they should to the judge’s instructions in most cases. My completely unscientific conclusion is that the presumption of innocence and beyond-a-reasonable-doubt standard exist largely on paper at this point, not in the minds of actual jurors.  As a result, we already teeter dangerously close to a criminal justice system that makes convictions of innocent people too easy.  Any program that exacerbates that state of play can’t be altogether good.

Jury nullification has a long and storied history in this country, dating back at least to the libel trial of John Peter Zenger in 1735.  But that was a different era, one in which the basics of the law were much more within the grasp of potential jurors.  In the modern era, I’m not so sure that telling jurors they can and should go rogue won’t lead to more harm than good.  At the very least, it’s a problem that jury nullification advocates need to face head on.

And they’ll have to do better than some of the commenters on this article about the case over at Reason.  Asked to distinguish between jurors who acquit because they view the law as unjust and jurors who acquit for less lofty reasons (e.g., an all-white jury acquitting a Klansman who killed a black guy), the best they can do is a variant on the No True Scotsman fallacy – the second example isn’t “really” jury nullification.  Sadly, it produces the same result, so any theoretical distinction is moot.  In any case, further informing jurors that they can do whatever the hell they want would encourage bigotry and bias as much as more principled decisions.

NOTE: This post was originally published on February 28, 2011.

What’s the Point of a Review?

I’ve been writing reviews since the days way before blogs, when we had to chisel words by hand on individual monitor screens. That means I occasionally write about reviews, as in this piece from early last year. Even as a published author, I still see a need for bad reviews!

My Friday Reviews are the descendant of one of the features of my original, hand-crank-operated web page I had while I was in college and law school.  There I’d do reviews of just about every album I got, as part of a regular process of listening and figuring out what I thought about it.  I stopped doing those, largely because my reviews were winding up in one of two formats – gushing praise or harsh scorn.  If I didn’t really “feel” one of those, I didn’t even write it up.  I’d like to think I do better now, but it’s helpful to be able to pick and choose.

I bring all this up because of an interesting two-person article in the upcoming issue of the New York Times Sunday Book Review which asks the question, “do we really need negative book reviews?”

Now, as a struggling writer, I kind of like the idea of doing away with negative reviews. Who wants to see their work torn to shreds, after all?  But I’m not certain that would really be the best thing.

Francine Prose makes the case for not writing negative reviews.  It’s pretty simple:

Even so, I stopped [writing negative reviews]. I began returning books I didn’t like to editors. I thought, Life is short, I’d rather spend my time urging people to read things I love. And writing a bad book didn’t seem like a crime deserving the sort of punitive public humiliation (witch-dunking, pillorying) that our Puritan forefathers so spiritedly administered.

From my reading of professional critics, that seems to be the best part of the job – when they find something in need of a champion, a book or film that won’t reach a wider audience without some cheerleading.  It must be more rewarding than writing about what shit the latest Transformers movie is or whatever.  So I see the point.

On the other hand, however, that seems a bit too touchy-feely, doesn’t it?  To be fair, Prose (good name for a writer!) doesn’t argue for lying about the quality of books, just not writing reviews of bad things at all.  Which, come to think of it, might be even worse – being ripped apart is one thing, being ignored quite another.

Zoe Heller makes the case for negative reviews and it is, as well, pretty simple:

most writers do not write merely, or even principally, to escape from or console themselves. They write for other people. They write to have an effect, to elicit a reaction. That is why they scrap and struggle, often for years, to have their work published. Being sentient creatures, they are often distressed by what critics have to say about their work. Yet they accept with varying degrees of resignation that they are not kindergartners bringing home their first potato prints for the admiration of their parents, but grown-ups who have chosen to present their work in the public arena. I know of no self-respecting authors who would ask to be given points for ‘effort’ or for the fact that they are going to die one day.

Part of being an artist, at least one who shares his work with other people, is the need to deal with criticism.  My father is a first rate grammar-Nazi.  I have him read my fiction, even though it’s not the kind of thing he normally reads, because he will be precise and vicious with a red pen.  When my mother asked if I really wanted him to do that, I said, “because editors and agents will be kind and not point out those things?”  Being criticized is part and parcel of being a creative person.

Further, as Heller points out, reviews come with bylines and, hopefully, supporting argument as to whether a book is good or bad.  Real criticism goes miles beyond “it sucks” or even “it’s great!”  Critics who are savage just for the fun of it won’t garner a lot of respect or readers.

After all, as Prose admits, trying not to write a negative review is like trying not to eat too much at Thanksgiving.  You’re bound to find something that rubs you the wrong way, doesn’t work, and compels you to write about it.  Even if, as she also points out, in the end, nobody will really pay attention to what you have to say.

These days, when I write a review, I try to have something interesting to say about whatever the subject is. That’s why there isn’t a review posted every Friday.  Something’s got to strike my fancy somehow, either by being brilliant or flawed, but I won’t think twice about saying I think something sucks.  I just hope I have good enough reasons to make somebody else think, “yeah, all right.”  Agreement, of course, is not required.

So I think the answer is yes, we do need negative book reviews.  Whether we need “bad” reviews is, of course, a completely different question.

NOTE: This post was originally published on February 13, 2014.

Pulling a Town Out of Thin Air

Getting Moore Hollow ready for publication this fall made me think back to this piece from my old blog. Moore Hollow is set in West Virginia, but not in any place that actually exists on the map. Jenkinsville and Vandalia County were pulled straight from the ether. Maybe that’ll change someday.

One of the cool things about writing fiction is you get to make up stuff as you go along (it’s sort of the nature of the game).  Not just characters and what they do but, often just as important, where they do it.  You can build entire worlds and nations in your mind, not to mention cities.  I’ve even made some maps (crude, but effective – I’m not a cartographer, after all) of the world in which my Water Road books are set, as well as another world I’ve yet to write in.  It’s all quite fun.

But imagine that you could create a town out of thin air, as a fiction, only for it to pop up in real life?  Now that’s really cool!

Consider the strange case of Agloe, New York (not to be confused with the planet Algon, where an ordinary cup of drinking chocolate costs 4 million pounds).

Back in the 1930s, it wasn’t unusual for mapmakers to steal each other’s work.  After all, if a map reflects reality and someone copies the map, don’t they have a defense to plagiarism by arguing that both the original map and the alleged copy accurately reflect reality?  How can that lose?

Turns out, map makers got savvy and began including some fictional places to trap would-be copyists:

That’s what Otto G. Lindberg, director of the General Drafting Co., and his assistant, Ernest Alpers, did in the 1930s. They were making a road map of New York state, and on that out-of-the-way dirt road, they created a totally fictitious place called ‘Agloe.’ The name was a mix of the first letters in their names, Otto G. Lindberg’s (OGL) and Ernest Alpers’ (EA).

The trap set, it appeared to work, when the town of Agloe appeared on a map made by none other than Rand McNally a few years later.  Case closed, right?  Big check from Rand McNally to Lindberg and Alpers.  Not so fast – Rand McNally offered a defense: there really was a town called Agloe.  In fact, the official county map showed an Agloe General Store in that location.  Checkmate, cartographic honey pot.

But how’d that happen?

Good question. Here’s the ironic answer. The owners had seen Agloe on a map distributed by Esso, which owned scores of gas stations. Esso had bought that map from Lindberg and Alpers. If Esso says this place is called Agloe, the store folks figured, well, that’s what we’ll call ourselves. So, a made-up name for a made-up place inadvertently created a real place that, for a time, really existed. Rand McNally, one presumes, was found not guilty.

Then the store closed. It isn’t there anymore.

That said, according to the NPR story, Agloe held on for years on Google Maps until, recently, it once again vanished into thin air.

So, want to have an impact on the world?  Make a map and give it a fictional town.  It might come to life without you even knowing about it!

NOTE: This post originally appeared on my old blog on April 1, 2014.

If You’re Worried About Rosebud, You’re Missing the Point

It’s his sled. It was his sled from when he was a kid. There, I just saved you two long boobless hours.

Peter Griffin, spoiling Citizen Kane

Saw Gone Girl last weekend.  It’s really good, particularly if you like the kind of movie that takes place in an air of dread that’s perfectly summoned by David Fincher (with able assists from Trent Reznor and Atticus Ross).  I say that even knowing the big twist of the film going into it.  Not because I had read the book on which it’s based, but because my wife blurted it out during a TV commercial. She didn’t know I wanted to see it.

Point is, she didn’t really “spoil” the movie for me, in the true sense of the word.  That’s because the flick is good enough that it doesn’t rise or fall on the big “twist” (which, for what it’s worth, happens about halfway through – this isn’t The Sixth Sense we’re talking about).  In my opinion, any movie/book/TV show that rises or falls on that twist isn’t really worth watching.

What’s more, people seem to enjoy things more once they know how they turn out.  At least that’s what some research says.

Back in 2011, as The Atlantic reports, a study was published that sounds pretty neat:

Scientists asked 900 college students from the University of California, San Diego, to read mysteries and other short stories by writers like John Updike, Roald Dahl, Agatha Christie, and Raymond Carver. Each student got three stories, some with “spoiler paragraphs” revealing the twist, and some without any spoilers. Finally, the students rated their stories on a 10-point scale.

The results?  Readers preferred the spoiled stories.  But why would we want to know how it ends ahead of time?

One theory is that our anticipation of surprises actually takes away from our appreciation for the 99 percent of the movie that isn’t a monster twist. ‘The second viewing is always more satisfying than the first,’ Sternbergh said, ‘because you notice all the things you missed while you were busy waiting for the twist.’ Psychologists have observed that when we consume movies and songs for a second (or third, or hundredth time), the stories become easier to process, and we associate this ease of processing with aesthetic pleasure.

Think about this for a second.  Most of us have some piece of culture that we go back to again and again.  I know that the big escape at the end of Brazil takes place all inside Sam’s head, but I still watch it.  I know that Arthur and Ford wind up on a primitive Earth populated by a bunch of idiots expelled from a better planet, but I’ll still consume Hitchhiker’s Guide . . . again (in its many forms).  And I know Tommy goes back to being blind, deaf, and dumb at the end, but that doesn’t make “Pinball Wizard” kick any less ass.

Of course, there might be other reasons why spoilers really aren’t, including the uncomfortable recognition that we really like predictability more than we let on.  But, in this area at least, I’d like to not be completely cynical and think that, deep down, we realize that works built on the big twist only are, as someone else put it in the Atlantic piece:

like artistic flash paper: It excites for a moment but offers little lasting wonder.

After all, we want to be better than Peter Griffin.  Right?

Note: This piece was originally posted on my old blog on October 20, 2014.