Thursday, October 6, 2011

Atlas didn't shrug ...

... he shared his talents and our world is more interesting and better engineered for it.  He did not hoard his ability, but used it to spark our imaginations and enable the abilities and creativity of others.  His career was a celebration of the individual.  He will be missed.

Steve Jobs, 1955-2011

Wednesday, September 21, 2011

Not the senator from Massachusetts, yet

I've discussed the principle of the common good, or the common welfare, here a few times; to my mind it is the fundamental underlying principle of the American system, the thing that makes it all work.  Elizabeth Warren nails it:

Thursday, September 15, 2011

notGalileo

I thought that Rick Perry’s comparison of climate change deniers to Galileo would get some traction, as it’s such an obviously ridiculous analogy.  As I haven’t seen much discussion of it, maybe it’s not as obvious as I thought.  So let’s look at the two in a kind of Mad Libs, fill-in-the-blank format.  I will have to make one small adjustment first: Galileo gets to be the climate scientists, not the climate change deniers.

Once upon a very long time ago, back in the:
  1. 4th century BCE,
  2. Industrial Revolution,
there was a theory.  The theory said that:
  1. the sun and the planets revolve around the earth.
  2. the sky is so big that we can keep pumping garbage into our atmosphere without causing any damage.
It wasn’t a very well-backed theory and a lot of scientists argued with it, but it was popular and a lot of people believed it.  They liked believing it.  It made them feel:
  1. important.
  2. safe.
There were some problems with this theory, though.  Little things like the fact that it didn’t match real-world observations.  So:
  1. epicycles were added to the orbits, to explain away the problems.
  2. cities enacted the first clean air acts, banning the more obvious smog causing pollutants.
But this wasn’t enough.  So a new theory was developed; it was:
  1. Tycho Brahe’s idea that the other planets revolve around the sun while the sun revolves around the earth.
  2. the 1970s theory that pollutants would block the sun and cause another ice age.
Unfortunately, this theory didn’t match the real world either.  So another theory was found that matched all the real-world observations.  This theory was called:
  1. the Copernican or heliocentric model
  2. global warming, though I prefer climate instability
and it argued that:
  1. the earth and the planets all orbit the sun.
  2. pollutants in the atmosphere cause a greenhouse effect in which heat is trapped, slowly warming the planet and disrupting world weather patterns.
This theory was developed in:
  1. the early 17th century
  2. the late 20th and early 21st centuries
by:
  1. two renowned scientists, Johannes Kepler and Galileo Galilei, building on the 16th-century work of Copernicus.
  2. many well-respected climatologists.
Within a very short period of time, a very strong consensus developed among everyone with any credibility who looked at the data.  This time, the theory was right.  It matched the real-world observations perfectly and, over time, as more evidence came in from:
  1. observations made with the newly invented telescope,
  2. hundreds of researchers all over the world,
the new evidence was a perfect match for the theory.  In science, this is the real test.  Developing a theory that matches existing observations is great, but to be truly convincing, a theory must predict evidence that has not yet been collected.  This theory met that test and convinced everyone who relied on reason and science.  But not everyone does.  There were still skeptics, people who didn’t like the new theory, people who felt uncomfortable with change and preferred to rely on tradition and faith.  Some were ignorant and insecure, but others had self-interested motives to deny the new knowledge.  These people were afraid that accepting the reality of the earth’s functioning as a planet would weaken their power base.  They privileged their reading of the Bible over facts and evidence.  They were:
  1. the leaders of the Roman Catholic Church,
  2. the leaders of evangelical Christian movements in the US and politicians who rely on evangelical votes,
and they saw this theory as undermining the literal received truth of the Bible.  Their belief system was, they feared, too frail to survive having so much as one of its myriad elements challenged.  They could have reexamined their understanding of their sacred text (the divine being said the sun rose and set because he was speaking in language his people could understand), or they could have used the age-old ‘mysterious ways’ argument to claim that there is no need for faith and science to be in perfect sync (if the divine could allow the Holocaust, a little planetary movement should not be a big stretch).  But they didn’t.  They made the choice to take an absolutist position and attack science for its audacity in disagreeing with them.

But it’s hard to attack an abstract.  So instead they attacked the scientists.  They couldn’t reach them all; some they had no jurisdiction over.  So they left:
  1. Kepler, who was not only in far-off Germany but also a Protestant
  2. The European, Asian, etc. scientists
alone and focused their attacks on those they could hurt.  Not right away though.  Their first responses were actually encouraging.  The first real notice they took of the theory was in:
  1. 1611 when Galileo lectured on the subject in Rome.
  2. 1989 when Bill McKibben wrote the first popular (rather than abstrusely scientific) account of the evidence.
The most notable responses were:
  1. a review of Galileo’s work by Jesuit mathematicians at the Collegio Romano, who granted it certification.
  2. the first President Bush’s promise to “fight the greenhouse effect with the White House effect.”
Then they thought about it some more and decided it might be dangerous.  They tried to silence the theory by:
  1. calling Galileo before the Inquisition and instructing him not to "hold, teach or defend" his beliefs; that he continued to hold them was evident to anyone who spoke with him on the matter, but the church left him alone as long as he kept those ideas out of the public sphere -- the original "don't ask don't tell".
  2. blocking publication of federally funded research into the impact of climate instability, preventing NOAA and NASA scientists from speaking publicly, and censoring reports and websites.
When those efforts weren’t enough, they turned to discrediting the scientists by:
  1. recalling Galileo to the Inquisition and excommunicating him, not for his belief in science, but for disobeying their gag order.
  2. trolling through their emails looking for evidence of fakery, evidence they didn’t find, but that didn’t stop them from pretending they did.
Through these measures they managed to obscure the near-universal scientific consensus and convince the gullible that their position was something other than an attempt to bully the truth into submitting to their political ambitions.

It’s almost as if they’re the same story.

Wednesday, September 14, 2011

Fixing nothistory sounds

Modernity impinges on our awareness in the most historical of sites.  This can be good: visiting a medieval city is far more pleasant if the tourism board hasn't gone all out to recreate the pong of untreated sewage and the adults you share the space with have bathed since reaching puberty.  The absence of plague-bearing fleas can also be counted as an advantage.  But what about sound?  There have been many attempts to recreate musical sounds of the past using period instruments in ancient structures.  Dr. Damian Murphy, a sound artist and lecturer in the Department of Electronics at the University of York, has recreated the sounds of sites that are no longer intact.  You can listen to music as it sounded in Coventry Cathedral before it was destroyed in the Blitz, and to druidic ritual complete with the resonance created by the once-complete circle of Stonehenge but without the sounds of the nearby highway, here.
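
For the technically curious: the piece linked above doesn't spell out Murphy's pipeline, but the standard technique for this kind of reconstruction is convolution auralization.  You model the lost structure, derive its acoustic impulse response, and convolve a "dry" (echo-free) recording with that response, so the performance picks up the space's reverberation.  Here is a minimal sketch in Python; the file names are hypothetical placeholders, and it assumes mono WAV files at a shared sample rate:

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import fftconvolve

    # Load a "dry" (echo-free) performance and an impulse response for the
    # vanished space.  Both file names are hypothetical placeholders; the
    # sketch assumes mono WAV input recorded at the same sample rate.
    rate, dry = wavfile.read("dry_performance.wav")
    _, ir = wavfile.read("reconstructed_space_ir.wav")
    dry = dry.astype(np.float64)
    ir = ir.astype(np.float64)

    # Convolving the dry signal with the impulse response stamps the space's
    # echoes and reverberant decay onto the performance.
    wet = fftconvolve(dry, ir)
    wet /= np.abs(wet).max()  # normalize to avoid clipping

    wavfile.write("auralized.wav", rate, (wet * 32767).astype(np.int16))

The impulse response itself can be measured on site (a starter pistol or sine sweep in the real room) or, for a building that no longer exists, simulated from an architectural model, which is the part that makes reconstructions like Murphy's possible.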

Nice rescue of nothistory.

Monday, August 15, 2011

NotSerious campaigning

Of the many threads of discourse on the results of the Ames, Iowa straw poll, there are two that contradict each other entirely.  One is that the straw poll is a poor predictor of success in the nomination game and the other is that Pawlenty had to do well in the straw poll to be viable.

The first is based on history.  While there was a poll in 1979, it was a minor event with low turnout and not repeated soon.  So the history of the poll as part of the electoral scene starts in 1987.  Since I haven't seen it laid out in full detail anywhere else (it may have been, I just haven't seen it), here's the list of all previous placings versus nomination results (Republican only):


Year | 1st       | 2nd      | 3rd         | Nominee
1987 | Robertson | Dole     | G.H.W. Bush | Bush (3rd)
1995 | Dole      | Gramm    | Buchanan    | Dole (1st)
1999 | G.W. Bush | Forbes   | Dole        | Bush (1st)
2007 | Romney    | Huckabee | Brownback   | McCain (10th)

Of the four significant Ames, Iowa Republican straw polls, in two the winners became the Republican nominee, in one it was the third-place candidate, and most recently, the guy who came in 10th out of 11 won the nomination.  So how does coming in third spell doom for the Pawlenty campaign?  Not just third place, but one of only three candidates to break into double digits, putting him significantly ahead of most of the others.

The claim that he needed to do well in Ames is more than just a talk-show meme; Pawlenty said in no uncertain terms that his campaign "needed some lift" from Ames and that, having not gotten it, his candidacy was no longer viable.  He also said his campaign was about getting his record of achievement before the voters.  Not about charisma or a vision for America, but getting his record out there.  That's a valid strategy, but it's a long-haul strategy.  It's going to take time for voters to get past the flash-in-the-pan candidates to see your worth.  That is incompatible with a strategy that depends on winning a popularity contest at the first state fair of the campaign season.  These are basic realities that any serious politician or handler knows.  Which makes me wonder if Pawlenty was ever in it to win.  Either he has really, really bad political instincts or it was all a game.  

What kind of game?  Well, it does occur to me that ex-presidential candidate Pawlenty is a much more bookable speaker than ex-governor Pawlenty.  He said it himself: his campaign was not about a vision for America; it wasn't even about winning the presidency; it was about getting himself more recognition.  Which makes him more marketable.

Let's face it, politicians make a hell of a lot less money than the people they hobnob with, especially Republican politicians.  Compared to most of the big names in his party, he's a relative pauper.  Now is the perfect time to start building that fortune: his kids are about to start college and opportunities for Republican speakers are as good as they've ever been.  His local party offered him the chance to run for senator, but he's given them a definite no on that.

The way I see it, this all adds up to one thing: He's pulling a Palin, capitalizing on a failed foray into national politics by leaving public service for the service of Mammon.  The difference is, he meant to all along.  If I were one of his donors, I'd be pretty unhappy.

Tuesday, August 9, 2011

That's not ... what is that?

In my ongoing quest to provide historical context to contemporary issues, I could not possibly allow the latest incarnation of exploratory endeavor to pass without comment.  The platform of the Tea Party in Space* brings to mind a number of important historical precedents.  Alexander the Great's independent bid for world domination, taken with the reluctant permission of the Greek city-states, but not their financial backing, comes to mind as an example of the potential of individual initiative.  The voyages of Christopher Columbus and Vasco da Gama demonstrate the problems inherent in introducing the heavy hand of government into the quest for knowledge.  And yet, there is another precedent that strikes me as more relevant, more apt, more perfectly suited to this topic:


*I must be a sinner, for I covet that teacup.

Saturday, August 6, 2011

That's notCommon Law on assault

There has been an astonishing new development in criminal law, or at least some lawyers are hoping to make it so.  The story, as told by a local news show:


The part I find appalling is that, after two men opened fire on a Philadelphia bus, aiming at the passengers within, their defense is that they should not be charged with assault because no one was injured.

What a fascinating legal principle: "No harm, no foul."

Except that's not how our legal system works.  We have a little thing called the common law system, based on hundreds of years of precedents, mostly from England.  How did that happen?  A series of what are known as "reception statutes" was enacted in post-Revolution America.  Some were legislative acts, some were court decisions, the rest were written into state constitutions.  Here, for example, is the New York one, enacted in 1777:

[S]uch parts of the common law of England, and of the statute law of England and Great Britain, and of the acts of the legislature of the colony of New York, as together did form the law of the said colony on the 19th day of April, in the year of our Lord one thousand seven hundred and seventy-five, shall be and continue the law of this State, subject to such alterations and provisions as the legislature of this State shall, from time to time, make concerning the same.

By adopting English common law, with the right to make changes as needed, the new states avoided the legislative nightmare of having to create an entirely new legal system.  Given that the colonies the states derived from had also followed English common law, this meant they could keep their courts and legal systems as they were.  It was a practical decision, but also an ideological one; the newly independent states might no longer be subject to the British crown, but they still saw themselves as culturally and philosophically British.  And so, throughout the United States of America today, we still follow the English common law legal system.  Oddly enough, the words "no harm, no foul" are not found in that system.  Nor has such a clause been added by any American state.  Instead there is a legal definition of the term "assault" that goes something like this:

an intentional act by one person that creates an apprehension in another of an imminent harmful or offensive contact

The key elements here are intent and apprehension of harm.  The video shows a bunch of very scared people running and ducking, and in one case crawling under a seat.  That pretty much covers apprehension.  As for intent, aiming a gun at someone and pulling the trigger satisfies that condition.  

There has been a great deal of abuse of the rule of law in the U.S. in recent years, most notably in areas where massive harm has gone unpunished.  Notions like "too big to fail" and the idea that the need for restitution outweighs the need for prison corrupt our justice system.  They replace the notion of justice with harm minimization, a dubious concept in principle and even more dubious in execution.  But creating new definitions of criminal acts based on wishful thinking is not yet the common law for the common people; so far, that privilege is reserved for the super-rich.

Thursday, August 4, 2011

Really notInsane


Is Anders Behring Breivik insane?  I have no idea.  But I’ve heard the comment, several times now, that he must be because only the insane would imagine that a terrorist attack could garner support for their cause, rather than the other way around.  That’s nothistory.

There are plenty of examples of terrorist acts that have worked against the intention of the perpetrator.  Timothy McVeigh’s attack on the Alfred P. Murrah Building in Oklahoma City created sympathy for the federal civil service rather than the anti-government uprising he hoped for.  But there are also examples where terrorists have gotten exactly what they wanted.  Here are a few:

1181 Reginald de Chatillon-sur-Marne was unhappy with the fact that the Crusader states had signed treaties and settled into peaceful co-existence with their Islamic neighbors.  Not satisfied with possession of Jerusalem, he believed that all Muslims were agents of the Devil and that all good Christians should do everything in their power to eliminate the scourge of Islam from the earth.  So he got some followers and attacked a caravan of pilgrims on their way to Mecca.  The next year he attacked a ship full of similarly intentioned Muslims.  The problem for the Crusaders was that their entire mission had been founded on anti-Muslim propaganda, most of it based on falsified information.  To denounce Reggie would have required admitting that their own cause was false.  The atrocities themselves horrified the Islamic world, but it was the unwillingness of the Crusaders to do or say anything about them that convinced Saladin and his allies that the treaties were nothing but stalling tactics and that the Christians were, in truth, not reliable regional partners but their avowed enemies.  The Islamic world united behind Saladin and retook Jerusalem, which in turn ignited the third and some subsequent Crusades (the ones aimed at the Islamic world, that is; not the ones against Christians of non-papist inclinations).  A stable, amicable relationship was successfully turned into years of bloody warfare by the acts of a single terrorist leader.  The heads of the Crusader states played into Reginald’s hands by allowing the perception that they sanctioned his actions.

1932 The two major parties took 93% of the vote in the Japanese election, the militarists being relegated to an invisible portion of the “other”.  Three months later, an attack by junior naval officers in Tokyo left the prime minister dead.  The officers surrendered and turned their trial into a platform for their political views, the first major presentation of the expansionist, emperor-worshiping position.  The young, handsome, passionate officers’ willingness to sacrifice their lives for their beliefs garnered massive public support.  Petitions begging for leniency were signed in blood by hundreds of thousands of people.  To appease the newly energized pro-military forces, the new government was led, not by the head of the parliamentary party in the majority, but by military leadership, which immediately recognized the territorial gains in Manchuria, legitimizing the army’s adventurism, and increased the navy’s budget.  The next election, in 1936, proved that the support was personal, not political, as the militarists still couldn’t break into double digits or gain a single seat in the parliament, so the ideologues in the military did it all again with another terrorist attack that overturned the results of another election.  The militarization of Japan in the 1930s happened despite elections in which over 90% of the population consistently voted against expansion (the same results occurred in 1937 as well), because the government responded to terrorism with appeasement, with precisely the same results Chamberlain got on the continent next door.

2001 Is there anyone who doubts that the Al-Qaeda terrorists who took down the Twin Towers in New York got exactly what they wanted?  A decade ago, the overwhelming majority of Muslims in every country that takes polls had positive feelings towards the U.S.  Today, the U.S. is seen, at best, as untrustworthy and erratic, at worst as an outright enemy.  A decade ago, Muslim Americans were broadly accepted as part of the melting pot.  Today, there is a growing anti-Muslim industry that is prominent in the public discourse and making inroads into our government.  The terrorists wanted to turn Christians and Muslims against each other.  As horrified as we were by their actions, we gave them exactly what they wanted.  They succeeded, not because they were smarter than McVeigh or their cause was more just, but because we let them.  We played into their hands.

So far the Norwegians are not playing into Behring Breivik’s hands or trying to appease his co-believers.  So far, all accounts indicate, the backlash is promoting the Labor Party he was trying to overthrow.  But that doesn’t make him crazy for thinking it might have gone the other way.

Sunday, July 31, 2011

Sorry, Bill, Christians are not necessarily pacifists

I had thought that Bill O'Reilly would be quickly shown the error of his ways when he argued that Anders Behring-Breivik could not be Christian because "no one believing in Jesus commits mass murder."  Apparently not so much; he's still arguing that Christianity is a peaceful religion.  So, for anyone who is unclear on the concept, here is a brief chronology of some of the highlights of Christian murders throughout history:

313 Edict of Milan - Christianity is legalized in the Roman Empire.  Eusebius and Lactantius call for vengeance on those who have persecuted Christians, the latter declaring that, "[God's] fury is poured out like fire and the rocks are broken asunder by him."  Almost immediately pagan temples start being looted and destroyed, a process that would be repeated regularly for over a century.  Some contemporary sources claim that pagans were slaughtered during these acts, others deny it.

333 The first decree calling for capital punishment for heretics, in this case anyone found in possession of texts declared heretical.  Followed in 346 by capital punishment for anyone who visits a pagan temple and in 356 with capital punishment for anyone practicing paganism anywhere, including their own homes.

415 A Christian mob led by monks in Alexandria attacks Hypatia, a renowned philosopher, mathematician and astronomer, dragging her through the streets to a church where they murder her.  Her school and students relocate to Greece to escape further persecution.

532 Riots in Constantinople that started over a sporting rivalry turn political.  Justinian orders the execution of rioters; 30,000 unarmed civilians are slaughtered.


782 Charlemagne orders 4500 Saxons beheaded for returning to pagan belief after their coerced conversion to Christianity.


1095-6 Crusaders on their way to the Holy Land stopped off to massacre Jews in Mainz, Cologne and other towns along the way.  In 1099, they massacred over 60,000 people in Jerusalem.


1189 Richard I of England punished Jewish leaders for daring to try to attend his coronation (though he seems to have kept the gifts they brought).  The rumor that he ordered the death of all Jews led to massacres in both London and York in which unknown numbers of Jews were beaten to death and burned alive.


1191 Richard topped that with the mass execution of Muslim prisoners in Acre.  Philip of France, who had captured them and left them in the care of Conrad of Montferrat, had negotiated terms for their release, but Richard couldn't be bothered.


1209-1229 In the Albigensian Crusade, Cathars of Southern France who refused to convert to Catholicism were burned at the stake.


1234 The people of the district of Stedingen, Germany refused to accept the power of the Roman Church.  At the request of the local bishop, Pope Gregory IX declared yet another crusade, in which between 5,000 and 11,000 men, women and children were slaughtered.

1456 In the aftermath of the Battle of Belgrade, 25,000 Turks were slaughtered by Hungarian forces.

1502 Vasco da Gama had the hands, ears and noses of 800 sailors and passengers of a merchant vessel cut off, then had them tied up, their teeth knocked in, and what was left of them burned alive.  Their crime?  Arriving at a city that had offended him.  Most of the "great" explorers limited their massacres to the inhabitants of port cities.

1631 During the Thirty Years War, between 20,000 and 40,000 residents of the city of Magdeburg were slaughtered after the city was taken.


The burning of heretics in Tudor England.  Witch hunts.  The Inquisitions.  The slaughter of Native Americans by white Christians.  The Holocaust.  There's a lot more, but I think the point is made.

The denial of the pacifist nature of Christianity and justification of violence began soon after the legalization of the faith.  These arguments relied on both the text of the New Testament and arguments about the natural order of society.

354-430 Saint Augustine:  "If the Christian Religion forbade war altogether, those (soldiers) who sought salutary advice in the Gospel would rather have been counseled to cast aside their arms and to give up soldiering altogether.  On the contrary, they were told: 'Do violence to no man ... and be content with your pay' (Luke, 3:14).  If he commanded them to be content with their pay, he did not forbid soldiering." AND "The natural order conducive to peace among mortals demands that the power to declare and counsel war should be in the hands of those who hold the supreme authority."

1225-1274 Thomas Aquinas quoted Augustine at length in his own arguments for Christian violence, but made some additional points as well:  "As the care of the common weal is committed to those who are in authority, it is their business to watch over the common weal of the city, kingdom or province subject to them.  And just as it is lawful for them to have recourse to the sword in defending that common weal against internal disturbances, when they punish evil-doers, according to the words of the Apostle (Romans 13:4): 'He beareth not the sword in vain, for he is God's minister, an avenger to execute wrath upon him that doth evil'; so too it is their business to have recourse to the sword of war in defending the common weal against external enemies."  In other words, the state is permitted to commit acts of violence by taking on the role of God's minister.  As for private persons, to "have recourse to the sword by the authority of the sovereign ... through zeal for justice, and by the authority, so to speak, of God, is not to 'take the sword', but to use it as commissioned by another, wherefore it does not deserve punishment."

1483-1546 Martin Luther originally argued that only defensive wars were justifiable, but in 1529 called for a war of aggression against the Turks.  Like Augustine, he argued that the New Testament validates soldiering as a "legitimate and godly calling," and made the obvious connection that this could only be true if war is likewise a legitimate act.  Like both Augustine and Aquinas, he assumed that the state itself is an agent of God's will, and so justified only fighting for the state, never against it.


In the case of Anders Behring-Breivik, then, there is no question of whether a Christian can commit mass murder.  Christian states and individuals have done so consistently since the religion first became powerful enough to have that ability.

There is, however, a theological question.  Now that our rulers are no longer divinely appointed (to the extent that they ever were), is it a sin for a Christian to commit violence against a secular state which does not represent the will of God?

But theology is not history and I am not a theologian.  But then, neither is Bill O'Reilly.

Friday, July 29, 2011

That's not the general welfare

Thom Hartmann from Russia Today did my job for me.  A bit of historical perspective on the current debt ceiling debate and on the centrality in our constitution, as the founding fathers intended, of the government's responsibility to provide for the general welfare.  He gets a bit preachy in the last minute, but the history is solid and important:


Tuesday, July 19, 2011

That's notCleanAir

The image below is a magnified cross-section of exterior wall paint from a London house, built in 1705.  This particular house was painted rather often, on average every 4.2 years, providing a regular, 300-year-long record of paint types, weathering, etc.  A lovely find for a material culture specialist.  It also provides a rather graphic visual of the impact of environmental policy.  As nicely noted for us by Patrick Baty, it shows how the British Clean Air Act of 1956 ended a 250-year pattern of soot and industrial pollutant accumulation, visible as dark lines between the layers (marked by the 4th text-box down on the right-hand side):


All that filth, gone.  Filth that stuck to everything it touched.  Filth that used to get in people's lungs and on their skin.  Filth that covered crops and feed and livestock in the fields and was consumed in every meal.  Today's airborne pollutants are less visible, but they just as surely accumulate on surfaces, whether of buildings or human bodies or food sources.

Thursday, July 14, 2011

That's notDemocracy in the US, but it is in the UK

In a previous post, I wrote about how the idea of a free market, as created by Adam Smith, is actually supposed to work, as opposed to the politically motivated, wildly distorted image so common in recent public discussion.  That post was on the economy.  Smith's argument, however, contains a principle that extends far beyond economic matters.  This one:


Smith argued that justice includes “protecting, as far as possible, every member of the society from the injustice or oppression of every other member of it”


In other words, there are lines that even the rich and powerful are not supposed to be able to cross.  We used to know that in America. In 1911, John D. Rockefeller himself, the richest man in the world, the first billionaire, and possibly the richest man in world history, was cut down to size by the Supreme Court decision in Standard Oil Co. of New Jersey v. United States, when his oversized monolith of an oil company was broken into dozens of smaller entities forced into competition under the Sherman Anti-trust Act.  In the 1920s Teapot Dome scandal, for the first time a member of a president's cabinet, one of the most powerful political figures in the country, the Secretary of the Interior, Albert Fall, went to jail and the power of the U.S. Congress to compel witnesses in corruption investigations was established.  In the Watergate scandal, a presidency was lost and multiple high-ranking members of the executive went to jail because Congress and the overwhelming public opinion of the people demanded punishment for any public official so arrogant as to believe they could violate the civil rights of American citizens.

But truth and justice are no longer the American way.

In the last decade, Americans, both the people and the leadership, sat back and watched as constitutional protections against unlawful search and seizure and the right of habeas corpus were trampled, and felt no outrage.  In the last decade, Americans sat back and watched as elderly people lost the retirement savings they'd spent their lives working for because of illegal activities, and felt no outrage.  Americans still sit back and watch as homes are taken away from people who have made every one of their payments, by agencies not legally entitled to the ownership of their mortgages, and feel no outrage.  I could go on.  But I'd rather look at what's happening in England right now.


The British, both the people and the leadership, found out that the privacy of three children was violated and the country exploded.  This clip from parliament is wonderful:




That's the Leader of the Opposition, Ed Miliband, asking Prime Minister David Cameron to take down Rupert Murdoch's News of the World.  NotW is the primary media support and a key financial support of the Conservative Party that Cameron belongs to.  It's as if someone had asked George W. Bush to take down Fox News during his administration.  And Cameron agrees unequivocally.  The only argument is how to go about it and whether the Labour Party can make the Conservatives wear the blame.


That's how American politics used to work.  That kind of belief, that at the end of the day, no matter how big a corporation and how valuable it is to a political party, the role of government is to serve the people not the money, is what is supposed to make a democratically elected political leader better than a dictator.  That sense of outrage, spilling over from the people to the government, is what we like to call participatory politics.  If the people don't care about their rights and don't believe that their government should be held responsible for protecting them from what Justice Douglas called the problem of bigness, what's the point of a democracy?

Our history is full of examples of how democracy is supposed to function, of corruption prosecuted, of big fish being cut down to size, of the people caring enough about what's right and just to hold our elected officials responsible.  It's time to go back to school and study our history to see how that works.  Or just turn on the news and watch the Brits get it right.

Wednesday, June 22, 2011

temporary hiatus

That's Not History is on temporary hiatus for an in-depth, personal investigation of the U.S. medical system.  Your regularly scheduled nothistory should resume soonish (I hope).

In the meantime, two thoughts for the day.

1. Jon Stewart has world historian envy.  How cool is that?  It's like we're the new rocket scientists.  Which would mean I would trump my dad, who used to be a rocket scientist. 



2. We need a new name for the weird stuff going on with the weather.

Global warming sounds soothing.  It's like we've put our poor, beleaguered planet on a comfy couch, wrapped in a soft blanket, in front of a roaring fire, with a cup of hot chocolate.  Awwww.  And what's the opposite of global warming?  Global cooling.  What, we're trying to bring on another ice age?  Who wants that?

Climate change is confusing.  Change is good, right?  Especially if you're a progressive or lib'rul.  So how come the left is against change and the right is for it?  Too confusing, change the channel.  Oh, and those programs they have to stop climate change, like that cappy tradey thing?  What exactly are we capping and trading, huh?  It wouldn't happen to be AMERICAN JOBS, would it?  We've heard how those trade agreements go and we don't need any more of those, no sirree bob.

I propose:  Climate instability.

No one likes instability.  Economic instability means being out of work and the in-laws moving in with you because they've lost their retirement savings; no one wants that.  Unstable people are scary, like crazy Uncle Harry who no one wants to sit next to at Thanksgiving just in case he's off his meds; no one likes them either.  And the cure for climate instability is climate stabilization.  All you have to do is look out your window or watch the news to know something's going wrong with the weather, and fixing that sounds like a darned good idea.  Let's stabilize that sucker.

Sunday, May 8, 2011

Failed Messiah nothistory

The blog failedmessiah.com made a great nothistory pick this week.  The orthodox Jewish paper Der Zitung (literally, "The Newspaper") published this picture:



Notice the problem?  The invisible Hillary Clinton.  It seems they find photographs of women inherently sexually provocative, so they resorted to fauxtography.  This is nothistory at two levels.  First, because it misrepresents the event itself, and second, because the idea that a policy of removing women's images is somehow a holy act is a false reading of the history of Jewish teachings, specifically the ancient teaching of geneivat da'at (literally, "stealing knowledge/thought"), which forbids misrepresentation or deceit.  This teaching originated in second-century Babylon and has been reinforced and expanded upon by Judaic scholars ever since.

There is no historic Jewish teaching or recognized rabbinic authority that provides a blanket condemnation of the viewing of female images.  The only discussion of the issue that I can find or have ever heard of came some years ago in reference to one of the Holocaust museums' images of nude concentration camp victims.  Some Orthodox Jews protested that the images violated the modesty of the women, because of their nudity; there was no claim that the portrayal of women per se was sinful and no objection to the images of clothed women.  For the tortured logic of using a false reading of historical teachings to justify misrepresenting a historical event, Der Zitung wins a nothistory squared award.

Kudos, Failed Messiah.

Thursday, April 21, 2011

That's not the free market

David Barton recently said:  “It is a principle of free market. That's a Biblical principle, that's a historical principle. We have all these quotes from Ben Franklin and Jefferson and Washington and others on free market and how important that is to maintain. That is part of the reason we have prosperity. This is what the Pilgrims brought in, the Puritans brought in, this is free market mentality.”

Barton was actually talking about net neutrality, arguing that it’s a socialist concept and therefore anti-biblical.  But what caught my attention was his deification of the free market.  He is no more guilty of this than many, many others.  I’m not attacking him in particular, just using him as an example of something that has been bugging the hell out of me for decades.  Ever since I took introductory economics in college and had to read Adam Smith.

For those who haven’t heard of him (which would include almost every freshman I’ve ever taught), he’s the guy who came up with the theory of the free market, in a book called The Wealth of Nations, first published in 1776.  I won’t bother arguing the ridiculousness of claims to biblical or Pilgrim free marketism; the economic lives they lived were so fundamentally different from ours that valid comparisons are not possible.  Nor will I dispute that many of the founding fathers agreed with Smith’s notion of free markets; we know that Adams, Jefferson, Madison and Hamilton all read The Wealth of Nations and there is some evidence that Washington did as well.  From what I’ve read (which isn’t comprehensive), it seems that they agreed with its principles.  My peeve (and it’s much too rank to be a pet) is that most people who talk about the theory of the free market today have NO FRIKKEN CLUE what it actually is.

Smith’s argument was that government shouldn’t go into business.  At the time, the British government was investing madly in expanding the empire for the benefit of a small number of very wealthy corporations.  The theory was that colonies would be sources of raw materials for British industry and provide consumers for British manufactured goods.  As Smith explained:

A great empire has been established for the sole purpose of raising up a nation of customers who should be obliged to buy from the shops of our different producers all the goods with which these could supply them. For the sake of that little enhancement of price which this monopoly might afford our producers, the home-consumers have been burdened with the whole expense of maintaining and defending that empire. For this purpose, and for this purpose only, in the two last wars, more than a hundred and seventy millions [in pounds] has been contracted over and above all that had been expended for the same purpose in former wars. The interest of this debt alone is not only greater than the whole extraordinary profit, which, it ever could be pretended, was made by the monopoly of the colony trade, but than the whole value of that trade, or than the whole value of the goods, which at an average have been annually exported to the colonies.

Taxpayer money was being wasted in vast sums on costly military ventures that would benefit the few and burden the rest of society.  The government of the day argued that this was an investment in the wealth of the empire.  Today, economic historians agree with Smith: the empire cost the British taxpayer far more in cash outlays for troops, weapons, government outposts, etc. than it ever brought back in expanded markets (not just when Smith was writing, but for the entire period of the British Empire).  But the owners and investors of the British East India Company made huge piles of money.  The government spent bucketloads of taxpayer money to make their friends, the already rich, even richer.  Sound familiar?  The notion of the free market was that governments should not promote one business over another.  In Smith’s words:

The statesman who should attempt to direct private people in what manner they ought to employ their capitals, would not only load himself with a most unnecessary attention, but assume an authority which could safely be trusted, not only to no single person, but to no council or senate whatever, and which would nowhere be so dangerous as in the hands of a man who had folly and presumption enough to fancy himself fit to exercise it.

Smith pointed out, rightly, that governments tend to make lousy business decisions.  Their motives and modes of thinking are all wrong, based not on profit (which encourages economic growth) but on political expediency (which usually doesn’t) or cronyism (which never does).  The only exceptions I can think of in all of world history are Augustus Caesar’s decision to build a fort and a harbor in what is now Yemen and establish a shipping line to bring spices across the Indian Ocean under Roman control, and the Japanese government’s decision in the early post-WWII decades to promote the car industry when the best economists were telling them to stick with lower technology goods.  Those two examples both paid off handsomely.  Otherwise, governments that have tried to guess where the economy should go have generally gotten it wrong.  The more the government gets involved in trying to direct the economy, the worse it gets.  Communist countries of the 20th century are the worst-case examples of how devastating the consequences can be when governments try to make business decisions, with tens of millions of people paying with their lives for poor government choices.

But that’s not what Barton was talking about, nor what the majority of Americans think of when they talk about government and the free market.  They think that a free market means that a government should stay out of the economy altogether.  And that is absolutely NOT what Smith said, and absolutely NOT what the founding fathers meant by a free market.

Smith was pushing his government to stop acting like a business and start acting like a government.  And to Smith, acting like a government meant three things: defense, justice, and public works.  It’s that second one that matters here, because Smith argued that justice includes “protecting, as far as possible, every member of the society from the injustice or oppression of every other member of it,” including particularly the abuses of unregulated business.  He railed against the evil of monopoly, both in general and in particular (the dyers’ trade secrets and the British East India Company being favorite targets).  He argued that, left to their own devices, corporate interests would always cheat the public, for, “People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices.”

Unfortunately, he never managed to write the last book he had planned, the one devoted exclusively to government, so a lot of what we know about his thoughts on regulation comes from interpretation rather than direct statement.  For example, if Smith had not supported government regulation of business, The Wealth of Nations would have been a much, much shorter text: it has over 100 pages just on regulation of the banking industry.  You don’t really have to come out and say “I believe in government regulation of business” when you spend 100 pages describing how that regulation should work.  The real problem Smith had with government regulation was not its existence, but the frequency with which regulations served business interests rather than the people, like when companies get involved in writing the regulations:

The proposal of any new law or regulation of commerce which comes from this order [the business world], ought always to be listened to with great precaution, and ought never to be adopted till after having been long and carefully examined, not only with the most scrupulous, but with the most suspicious attention.  It comes from an order of men, whose interest is never exactly the same with that of the public, who have generally an interest to deceive and even oppress the public, and who accordingly have, upon many occasions, both deceived and oppressed it.

Smith never argues against regulation; he assumes there will be regulation.  His concern is with the nature of that regulation.  To have a free market, investments must be chosen freely by individuals and businesses, without government instructions or tax incentives, without tariffs or subsidies to artificially manipulate prices, and without the government using tax money to promote particular companies.  That’s a whole big category of government regulation that he opposed.  But, if the economy is left entirely to the companies?  According to Smith, collusion, price-fixing, monopolies, and other forms of anticompetitive behavior will damage the economy and bring everyone down with it.  Government should not direct the economy into one sector or another; that would be an unfree market.  But regulating company behavior?  THAT is the proper role of government in a free market.

That’s what Adam Smith wrote, that’s what the founding fathers read in The Wealth of Nations, and that is what they meant when they used the term and related language from Smith’s writing.  Governments should regulate business to protect society and the people from the evils of greedy businessmen.  In the American founding fathers’ own language, “We the People of the United States, in Order to… promote the general Welfare … do ordain and establish this Constitution for the United States of America.”  Regulating business is, and was always meant to be, an essential part of the constitutional obligation to promote the general welfare.

If you want to talk about a theory of economics in which the role of the government is to sit back and let business do whatever the hell it wants, go right ahead.  It’s called Libertarianism.  If you want to talk about the theory of the free market, the beliefs of the founding fathers, and the Constitution, you have to acknowledge the essential role of government regulation of economic behavior.

Tuesday, April 19, 2011

That's notKabuki

Talking Points Memo has an article on winners and losers in the first 100 days of the new congress.  Number three on the winners list is Kabuki Theater, referring to the posturing and posing of political pundits and personalities.  I won't argue the legitimacy of the metaphor; it's a common enough one.  But the image?



Sorry, Mr. Marshall, but that's notKabuki.  For starters, those are women, and all Kabuki performers are male.  Then there's the makeup; the metaphor came about because of the exaggerated character of Kabuki performance, most easily seen in the face paint.  Like this:



At a guess, given the backdrop and the styles of the kimono, I'd say they're probably Maiko, a term that technically refers to an apprentice geisha but these days is used to refer to anyone performing that style of traditional dance.

Monday, April 18, 2011

Not that 13th century

About a year ago, Foreign Policy ran a wonderful photo essay by Mohammad Qayoumi about Afghanistan in the 1950s and 60s, showing a rapidly modernizing society incorporating Western fashions and technologies but still retaining elements of tradition.  Women went to university while wearing headscarves, and men in pakol hats worked high-tech machinery.  Modern medicine was trucked to remote villages in the form of trained nurses, complete with those funny little white, winged caps that look like origami projects.  (I wonder whether Qayoumi realized the recurring role of headgear in his selection of images?)

I recently came across that essay again, but this time, already knowing the story the pictures told, I was more aware of the text.  In particular, the opening paragraph, where he described the inspiration for the essay:
On a recent trip to Afghanistan, British Defense Secretary Liam Fox drew fire for calling it "a broken 13th-century country." The most common objection was not that he was wrong, but that he was overly blunt. He's hardly the first Westerner to label Afghanistan as medieval. Former Blackwater CEO Erik Prince recently described the country as inhabited by "barbarians" with "a 1200 A.D. mentality." Many assume that's all Afghanistan has ever been -- an ungovernable land where chaos is carved into the hills. Given the images people see on TV and the headlines written about Afghanistan over the past three decades of war, many conclude the country never made it out of the Middle Ages.

From a European perspective, a 1200 A.D. mentality is pretty damning; Europe in the 13th century was not exactly an advanced part of the world, and hadn’t been for some time.  But from a Central Asian one?  What exactly does “making it out of the Middle Ages” mean in Central Asia?

If we set the Middle Ages at roughly the 5th to 15th centuries, Afghanistan in the Middle Ages was actually a pretty hopping place.  In fact, it started hopping a long time before the Middle Ages.

What is today Afghanistan has had a lot of names.  It, or parts of it, has been Bactria and the Hindu Kush and Kandahar and Kushan and Khorasan.  The area first appeared in world history when the early Mesopotamians established a colony there to mine the precious lapis lazuli that colored and decorated all their most precious objects, from the Standard of Ur to the Ishtar Gate.  That makes it the first place ever considered valuable enough for a projection of imperial power beyond the cultural boundaries of the metropole.  But that’s just history, the written record; kingdoms and cultures rose and fell in the area long before anyone wrote about them.

It was part of the Persian Empire before becoming a center of Hellenistic culture under the Seleucids.  In the second century BCE, a Chinese emissary by the name of Zhang Qian found that, “Its people cultivate the land and have cities and houses [an important measure of civilization in Chinese culture] … The people are poor in the use of arms and afraid of battle, but they are clever at commerce. ... The population of the country is large, numbering some 1,000,000 or more persons. The capital is called the city of Lanshi (Bactra) and has a market where all sorts of goods are bought and sold.”  His report to the emperor, with its description of the sophistication of the society and the value of its goods, led the previously insular Chinese court to declare the Bactrians worthy friends.

This was a critical turning point in world history.  When the Chinese emperor decided to encourage contact with Bactria, his armies were sent to establish control over the uninhabited portions of the route and protect travelers along the way.  Thus was the Silk Road created. 

Afghanistan’s role in world trade wasn’t limited to the opening of the Silk Road.  The land sits at the T-crossing of the East-West Silk Road and the North-South trade route from Central Asia into India.  All goods traveling overland between India, Europe, China, Japan, Persia, and everyone in between, went through what is today Afghanistan, where massive markets covered many square miles of ground.  Managing the markets meant not only knowing business and taking care of basic needs, like sanitation, clean water, and food, it also meant providing complex financial services, from money-changing to investment instruments.  Not to mention all the locals who got into the trade business themselves.  The area was the heart of a vast, international trading system, and its people were heavily involved in the world of commerce.  Nor was it a passive trade center.  From the first to the third centuries, the Kushan people ruled the area, and under their control and patronage, entirely new cultural traditions emerged from the mix of goods and ideas flowing into the area.  New artistic styles were developed, combining Greek, Indian and Chinese influences (most famously seen in the now lost Bamiyan Buddhas and the Dunhuang Caves paintings) and Buddhism was transformed from the private practice of individual, ascetic spiritualism to a religion of great temples and monasteries.  Theirs was a free-wheeling society, where money talked, fortunes could be made or lost in a deal, and people carved out their own destinies.

And in the Middle Ages?  The people of what is now Afghanistan added the art of war to their mastery of commerce and culture.  Western Afghanistan, the part in the mountains, resisted the Islamic Conquest when it was at its most vigorous, and even the Eastern territories were held for little more than a century before returning to native rule.  Compare that to the failures of the Byzantines and Crusaders centuries later when the Islamic World was well past its peak.

In the 13th century they were finally conquered, but it took the biggest, most expansive empire the world has ever known to do it.  Under the Mongols, and later their offshoot the Timurids, Afghanistan continued to be a key node in the vast Eurasian trade network of the re-emergent Silk Road, and its people continued to live their lives as free, independent individuals, masters of their own destinies, and world leaders in commerce and business.  In other words, exactly the sort of wealthy, experienced businessmen that the Erik Princes of the world aspire to be today, back when Secretary Fox’s predecessors were still land-tied serfs, laboring without pay and begging for scraps from their lords’ tables.

As Qayoumi showed, far better than I could, Afghanistan’s current problems do not stem from a rigid, unbending culture irrevocably tied to anti-Western traditions.  Neither are they the result of a history devoid of accomplishment.  'Medieval’ is only derogatory if you happen to be of European descent. And smug, superior mockery of other cultures and societies is nothistory.

Wednesday, April 13, 2011

Still not Christian history (necessarily)

This seems kind of redundant after my previous post, so I'll keep it brief.  Filmmaker Simcha Jacobovici has come up with "the best archaeological argument ever made that two of the nails from the crucifixion of Jesus have been found".  The logic is as follows: a tomb was discovered that might have been that of Caiaphas, the tomb contained nails that might have been used in a crucifixion, and (in Jacobovici's words) "since Caiaphas is only associated with Jesus's crucifixion, you put two and two together and they seem to imply that these are the nails."

First, the tomb is reported to be that of the entire Caiaphas family, not just the one guy, so the nails could have been associated with any of them.  Or with the construction of the tomb.  Or have fallen out of someone's pocket.

Second, Caiaphas is associated with Jesus' crucifixion TODAY.  At the time of his death?  Really, not so much.  For one thing, Josephus tells us he was in power for 18 years, not to mention that he had to have been a man of some importance already to have been appointed High Priest by the Roman rulers, ergo the trial of Jesus wasn't the only accomplishment of his life.  Christianity at the time was a teeny, tiny little cult.  No one but the Christians themselves would have been interested in those nails, so by what logic would they bury them with Jesus' antagonist?  I'm not saying they couldn't be those nails, but why on earth would they be?

Reuters did actual research, and couldn't find a single credible expert who gave any credence to the theory, including at the Israel Antiquities Authority, which was in charge of the dig.  Jacobovici himself says it's only a 'maybe'.  And yet, it's all over the freakin media.  A quick search in Google found dozens of articles, very few of which mentioned any doubts about its validity.  Because nothistory makes for good ratings.  Feh.

 

Wednesday, March 30, 2011

That’s not Christian History (necessarily)


Is it the greatest thing since the Dead Sea scrolls?  Maybe.  What it isn’t is definitively Christian.  At least not yet.

Some odd book-like objects were found about five years ago in a cave in Jordan.  Not your typical books, these are inscribed metal plates, bound (in some cases on all four sides) with more metal.  It’s possible that they are early Christian artifacts.  Preliminary dating evidence makes the timing possible.  The location of the find makes it possible.  But the supposed definitive proof is what I like to call nothistory.

This proof is described by Dr. Margaret Barker, identified as a past president of the Society for Old Testament Study, thus:
‘As soon as I saw that, I was dumbstruck,’ he [sic] said. ‘That struck me as so obviously a Christian image. There is a cross in the foreground, and behind it is what has to be the tomb [of Jesus], a small building with an opening, and behind that the walls of the city.
‘There are walls depicted on other pages of these books too and they almost certainly refer to Jerusalem. It is a Christian crucifixion taking place outside the city walls.’

Let’s start with the easy stuff.  Margaret Barker is indeed entitled to the title of doctor, but not through the academic route.  Her doctorate of divinity was honorary, granted by the Archbishop of Canterbury in 2008 for her work on the symbolism of the first temple.  Her thesis, not widely accepted, is that an earlier Jewish tradition can be inferred from the symbolism embedded in the physical structure of the first temple: a more mystical faith, including worship of a sacred tree associated with the Canaanite mother-goddess Asherah, that was replaced by a later, more strictly monotheistic Judaism.  She argues that disciples of the older form of Judaism had not yet disappeared by the time of Christ, that a number of them became his followers, and that theirs is the form of Judaism most influential in the teachings of the historical Jesus.  It’s an interesting idea, though her evidence seems (to the non-expert eye, anyway) pretty thin.  More worrying, her inclusion of medieval tropes like infant sacrifice is highly suspect, especially considering that many of her sources were written hundreds if not thousands of years after the fact.  While she is a popular figure in Latter-day Saint circles, critiques of her work are readily available on the web, though a search on “Margaret Barker Bible review” in JSTOR brought up no scholarly evaluations of her work.

The other easy hit is the notion of a Christian crucifixion.  I suggest we all agree that she meant a Roman crucifixion of Christians and move on.

Now let’s get to the actual evidence, or rather, the question raised by the supposed evidence: is an image of a cross, in front of a building with an opening, in front of city walls, necessarily Christian?  There are three images here to unpack: the city walls, the building with the opening, and the cross.

City Walls:  I have to wonder what makes city walls depicted on artifacts found in Jordan “almost certainly … Jerusalem.”  City walls of that period pretty much all looked the same, and most of the distinguishing features of Jerusalem’s current walls, like King David’s Gate, are of far more recent construction than the artifact in question.  We have no idea what might have distinguished Jerusalem’s walls from any other city’s walls in the 1st century CE.

A Building With an Opening:  Which apparently “has to be” the tomb of Jesus.  Just for the moment, we will assume that these artifacts are what they are hypothesized to be.  In that case, they were made by people who were intimately familiar with the details of the story of the historic Jesus, if not with the man himself.  They would have known that the tomb of Jesus was not a hole in a building; it was a hole in a cave.  There is, in fact, a building over that cave today, known as the Church of the Holy Sepulcher.  There are depictions of Jesus’ tomb that represent the church itself as the tomb.  If, however, you visit that church (as I have), you will be taken deep down below street level to peer through the semi-darkness at a niche cut out of the side of a small cave.  That’s what has historically been believed to be the tomb of Jesus.  Why would people familiar with, not the medieval imagination of Christ’s tomb, but the actual location itself, depict it as a building?  Unless we are further positing that the Church of the Holy Sepulcher is in the wrong place and that all those pilgrims prayed at the wrong hole in the wrong cave.  So far, though, no one’s said that.

 And, saving the best for last, The Cross: The notion that a cross symbolized Jesus to his contemporary followers requires four important assumptions.  First, that his crucifixion was performed on a cross-shaped structure.  Second, that crosses were used at the time to symbolize crucifixion.  Third, that crucifixions in the region were assumed to be Roman crucifixions of Christians.  Fourth, that the cross as a symbol of Roman crucifixion of Christians was necessarily seen as representative of the crucifixion of Jesus.

The Greek term in the New Testament, stauros, is typically translated as ‘cross’, but in fact meant ‘stake’ or ‘pole’.  Roman crucifixions were done either on stakes (with the hands bound or nailed together above the head) or on Tau-shaped crosses, resembling a capital letter T, that left the head unsupported.  Other civilizations also used Y-shaped structures for crucifixion, but the point is that the choices were stake or T, not a cross, and the only evidence we have indicates the stake.  A community of people close to the historical Jesus would not have associated a cross with his crucifixion.

Crosses were important symbols at the time, though not of crucifixion.  The cross as a symbol in the Mediterranean predates both Christianity and the Roman Empire.  The most popularly known is the Egyptian ankh, which looked slightly different, having a loop at the top.  The contemporary Christian version, the right-angled intersection of two straight lines, was used by a number of civilizations with a number of different meanings, including the joining of the male and female.  The first use by Christians was in the third century.  The cross’s pre-Christian pedigree was described thus (in a passage often misattributed to Tertullian, a Christian who lived in Roman Carthage from 160-220 CE, but actually from Alexander Hislop’s nineteenth-century polemic The Two Babylons):

The cross is adored with all the homage due only to the Most High; and for any one to call it, in the hearing of a genuine Romanist, by the Scriptural term, "the accursed tree," is a mortal offence. To say that such superstitious feeling for the sign of the cross, such worship as Rome pays to a wooden or a metal cross, ever grew out of the saying of Paul, "God forbid that I should glory, save in the cross of our Lord Jesus Christ"--that is, in the doctrine of Christ crucified--is a mere absurdity, a shallow subterfuge and pretence. The magic virtues attributed to the so-called sign of the cross, the worship bestowed on it, never came from such a source. The same sign of the cross that Rome now worships was used in the Babylonian Mysteries, was applied by Paganism to the same magic purposes, was honoured with the same honours. That which is now called the Christian cross was originally no Christian emblem at all, but was the mystic Tau of the Chaldeans and Egyptians

Which brings us to assumption three, that a crucifixion would be assumed to be a Roman crucifixion of Christians.  Crucifixion was certainly practiced in the Levant by the Romans, but also by the Persians and Seleucids.  Herodotus, Thucydides and Xenophon all (apparently, I’ve seen the citation but not the original texts) confirm the Persian practice.  The Hasmonean ruler of Judea, Alexander Jannaeus, was reported to have crucified 800 Pharisee prisoners in front of their wives and children.  As for the Romans, crucifixion was but one of many forms of the death penalty, which ranged from the mercifully quick beheading, through the symbolically rich punishment for patricide of being sewn into a leather sack with a cock, a dog, a serpent and a monkey and thrown into the ocean (or a lake, if the ocean wasn’t handy), to the slow, painful death from asphyxiation and/or thirst and exposure (depending on the method, though infection and blood loss could also come into play) of crucifixion.  As Josephus reported, in the aftermath of the Roman siege of Jerusalem, this ‘most wretched of deaths’ was meted out to hundreds of Jews.  Thus, the crucifixion of Jesus was hardly a singular act, in either its place or its time.

Finally, the most common symbol archeologists and historians associate with early Christians is the fish, not the cross.  The standard interpretation is that, at a time when crucifixion was common, the cross was seen as a symbol of horrific punishment, not of divinity.  And while the (contested) use of the cross as a Christian symbol may date to the third century, in 312 CE, when Constantine had a vision telling him he would be successful in battle if he fought under “a heavenly divine symbol,” he chose the Chi-Rho, not the cross.  (That is, as recounted by Lactantius.  Eusebius claimed that Constantine was convinced of the truth of the Christian faith when the sun’s rays formed a cross over a battlefield.  Constantine was Lactantius’ patron and appointed him to tutor his son, making Lactantius much more personally connected to Constantine than Eusebius, who was bishop of a distant province, an inveterate political manipulator, and whose account was conveniently written after Constantine’s death.)

It’s not that these artifacts could not be Christian.  They could.  But the leap from an image of a cross in front of a building with a hole in it in front of a city wall to ‘this must be the cross of Jesus before the tomb of Jesus before the walls of Jerusalem’ is a leap of faith, not of history.

Thursday, March 17, 2011

Shhh, don't tell anyone it's history

It’s always interesting when thought streams from different parts of life intersect. 

At the moment, I’m teaching the General Education/Core Curriculum/whatever-your-school-calls-those-classes-everyone-needs-to-take-some-of course on the history of the whole world from the mists of time to 1500.  The primary sources I’ve chosen for that course include documents on medieval European laws and court cases, which led me to wonder when and how things changed: when did the basic principles I recognize as the American legal system start to be practiced?  This led me to a barrister by the name of William Garrow, who was a transformative figure (or so my sources tell me, I’m hardly an expert) in the English legal system of the 1780s and 90s (after which he was first promoted to King’s Counsel and then demoted to a mere Member of Parliament before eventually wasting his final decades in such worthless occupations as judge, Solicitor General and Privy Councilor).  Before Garrow, the courts relied on the testimony of witnesses to a crime.  The prosecutor found the witnesses, the judge asked the questions, the jury evaluated the evidence, and the defense attorney … well, there usually wasn’t one, and the defendant wasn’t generally allowed to speak on his or her own behalf.  It all came down to the witnesses, who were not cross-examined!  Garrow changed all that.  By appearing in court for defendants and asking the witnesses questions, he is credited (by my sources) with playing a key role in the innovations of the adversarial system, rules of evidence, and the invalidation of hearsay evidence, in addition to personally introducing both the concept and the phrase “innocent until proven guilty”.

Garrow’s biographers, John Hostettler and Richard Braby, in Sir William Garrow: His Life, Times and Fight for Justice, picked a particular cross-examination from the Old Bailey records to demonstrate how it was done.  In this case, Garrow had already gotten the witness, a Mr. Fleming, to admit to being the receiver of the goods that the defendant was accused of stealing.  He had further gotten Fleming to admit that he was likely to be hanged for possession of stolen goods unless he could fix possession on another person.  Garrow then went in for the kill:

Garrow: Did you not do this to save your own neck; did you not make the disclosure to save your own life?

Fleming: I suppose I must answer, Mr. Garrow, in the affirmative, for I know no better: I certainly made this disclosure to save myself.

Garrow: Then you are now swearing, in order to fix this danger on somebody else to save yourself.

Fleming: I apprehend, Mr. Garrow, I am still in the same danger if I do not fix on the right person.

Notice that, in his last answer, Fleming argued that it would go badly for him if he made a false accusation.  The jury didn’t care.  They were horrified at the idea that they were expected to accept the testimony of someone who had such a direct interest in a conviction, and on that basis alone, acquitted the defendant.  It became, over time, an accepted tenet of the English legal system that interested witnesses were not reliable.

I mulled that over for a few days, trying to figure out why it felt somehow off.  Oversimplified.  Naïve, even.  Too likely to excuse real criminals who posed serious dangers to society.  Was it the jury’s refusal to consider the witness’s final statement?

Meanwhile, another stream of my life, starting in a completely different place, was heading right for that very point.  I recently got back in contact with a woman I’ve known since I was born, as far as I can tell.  Our parents were friends and the families got together several times a year, but as we grew up, as so often happens, we drifted off in different directions.  Reconnecting with her got me thinking about all the other people I’d known that way, and wondering where they are today.  The great gift of Google came through, and I found one.  She’s now called Alexandra Natapoff, but I’m pretty darned sure she’s the kid I used to know as Sasha.  She was a few years younger than me, and I don’t remember her all that well, but I remember her mother, and this Alexandra looks, sounds, and moves exactly the way her mother did back when we were kids.  How do I know that?  Because I found this video.  Which, in an astonishing coincidence, addressed my question about Garrow and the testimony of Mr. Fleming:

[Embedded video: Alexandra Natapoff discussing snitching in the American criminal justice system.]

We think of our legal system as inherited from the English system, based on the same principles and precedents of common law.  And yet, somewhere in the late 1700s, the two diverged.  Given that we were fighting a war of independence from England at the time, it’s understandable that our legal experts might not have been studying England’s latest innovations.  And yet, we did pick up the notions of the adversarial system, presumption of innocence, etc.  But we somehow didn’t adopt the inherent distrust of the snitch.*

Natapoff makes the point, rather effectively I think, that our reliance on snitches in the legal system has in recent years expanded to reliance on snitches in the wider American society.  We are socializing ourselves, myself obviously included (hence the mulling), to accept snitching as part of the normal practice of citizenship.  This can be a very good thing.  The anonymous bar employee who paid attention to a semi-drunk man’s ravings, got his license plate number off his car, and called the police saved the lives of hundreds of innocent people.  Less risky, perhaps, but just as important and valuable a protection of lives as the airplane passengers who jumped on the man trying to light his shoe on fire.  I applaud anonymous bar employee’s snitchy ways.  Hell, snitching is not only a civic duty; in this age of terrorism, both homegrown and imported, snitching is an act of patriotism!

Which is all well and good from a law enforcement perspective, but as national identities go, “We are the Snitches” isn’t exactly “Give me liberty or give me death.”  It doesn’t have the same ring, the same nobility, the same élan.  More importantly, snitching by its nature divides us, turns us against each other.  It makes us, not a nation united in common cause (freedom, the common welfare, life, liberty and the pursuit of happiness), but individuals perpetually suspicious of our neighbors, distrusting difference instead of celebrating the melting pot.  As national values go, it is fundamentally destructive.  Is this the America we want to be?  And what, if any, role does this valorization of snitching play in the atmosphere of distrust and anger in America today?



* I’m only addressing snitching, which is the act of informing on an individual, not whistleblowing, which is exposing wrongdoing by an organization.