
Thursday, July 09, 2009

Our Neanderthal Neighbors

Svante Pääbo, "Mapping the Neanderthal Genome":

One thing that we're beginning to see is that we are extremely closely related to the Neanderthals. They're our relatives. In a way, they're like a human ancestor 300,000 years ago. Which is something that leads you to think: what about the Neanderthals? What if they had survived a little longer and were with us today? After all, they disappeared only around 30,000 years ago, or, 2,000 generations ago. Had they survived, where would they be today? Would they be in a zoo? Or would they live in suburbia?...

For example, if the Neanderthals were here today, they would certainly be different from us. Would we experience racism against Neanderthals much worse than the racism we experience today amongst ourselves? What if they were only a bit different from us, but similar in many ways — in terms of language, technology, social groups? Would we still have this enormous division that we make today between humans and non-humans? Between animals and ourselves? Would we still have distanced ourselves from animals and made this dichotomy that is so strong in our thinking today? These things we will never know, right? But they are fascinating things to think about.

Thursday, June 11, 2009

Our Minds Are Made Of Meat

Jonah Lehrer on "Emotional Perception":

From its inception in the mid-1950's, the cognitive revolution was guided by a single metaphor: the mind is like a computer. We are a set of software programs running on 3 pounds of neural hardware. (Cognitive psychologists were interested in the software.) While the computer metaphor helped stimulate some crucial scientific breakthroughs - it led, for instance, to the birth of artificial intelligence and to insightful models of visual processing, from people like David Marr - it was also misleading, at least in one crucial respect. Computers don't have feelings. Because our emotions weren't reducible to bits of information or logical structures, cognitive psychologists diminished their importance.

Now we know that the mind is an emotional machine. Our moods aren't simply an irrational distraction, a mental hiccup that messes up the programming code. As this latest study demonstrates, what you're feeling profoundly influences what you see. Such data builds on lots of other work showing that our affective state seems to directly modulate the nature of attention, both external and internal, and thus plays a big role in regulating things like decision-making and creativity. (In short, positive moods widen the spotlight, while negative, anxious moods increase the focus.) From the perspective of the brain, it's emotions all the way down.



(From The Frontal Cortex.)

Sunday, May 31, 2009

Dating the Past

Historiscientific nerd alert: There's a hot new method of dating historical artifacts, specifically ceramic artifacts, based on their moisture uptake. But there's at least one big problem -- it assumes that mean temperatures have stayed constant over an artifact's lifetime. HNN's Jonathan Jarrett has the goods, in a paragraph so well-linked that I've cut-and-pasted it, links and all. (I also changed some of the punctuation and split Jarrett's long paragraph into a few short ones.)

Now, you may have heard mention of a thing called "the medieval warm period." This is a historical amelioration of temperature in Europe between, roughly, the tenth and twelfth centuries. This probably decreased rainfall and other sorts of weather bad for crops, therefore boosted agricultural yield, pumped more surplus into the economy, fuelled demographic growth and arguably deliquesced most European societies to the point where they changed in considerable degree.

However, because of the current debate on climate change, it has become a ball to kick around for climate "scientists," those who wish to argue that we're not changing the climate pointing to it and ice coverage in Norse-period Greenland (which was less than there is currently despite less carbon dioxide in the atmosphere then), while those who wish to argue that we are changing the climate (and, almost always, that this relates to CO2 output, which does seem like a weak link in the argument) dismiss it as legend or scorn the very few and unscientific datapoints, not really caring that the historical development of European society in the ninth to eleventh centuries just doesn't make sense without this system change from the ground. None of these people are medievalists and they're not trying to prove anything about the Middle Ages, so it gets messy, but there is a case about this temperature change that has to be dealt with.

This obviously has an impact on this research. If the sample were old enough, the errors and change probably ought to balance out. But if it were, from, say, the eighth century, then the moisture uptake in the four or five subsequent centuries would be higher than expected from the constant that this research used and the figure would be out, by, well, how much? The team didn't know: "The choice of mean lifetime temperature provides the main other source of uncertainty, but we are unable to quantify the uncertainty in this temperature at present."

We, however, need to know how far that could knock out the figures. Twenty years? More? It begins to push the potential error from a single sample to something closer to a century than a year. That is, the margin of historical error (as opposed to mathematical error) on this method could be worse than that of carbon-dating, and we don't actually know what it is.



Lots of good stuff in the whole, long post, including an annotated run-down of ALL of the ways we know how to date old things.
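
Just to get a feel for the size of the error, here's a toy calculation in Python. As I understand it, the method assumes moisture mass gain follows a (time)^(1/4) law; the temperature sensitivity I've plugged in (about 3% per degree C) and the timing and size of the warm spell are purely made-up illustration values, since the team itself says it can't quantify that part of the uncertainty.

```python
# Toy sensitivity check for moisture-uptake ("rehydroxylation") dating.
# ASSUMPTIONS (mine, for illustration only -- not the team's numbers):
#   * cumulative mass gain follows m(t) = a * t**0.25
#   * the rate constant a rises ~3% per degree C of mean temperature

def rate(temp_offset_c, pct_per_degree=0.03):
    """Rate constant relative to the assumed mean temperature (illustrative)."""
    return (1.0 + pct_per_degree) ** temp_offset_c

def mass_gain(segments):
    """Total mass gain over piecewise-constant temperature offsets.

    segments: (start_year, end_year, temp_offset_c) tuples covering the
    artifact's lifetime, in years since firing.
    """
    return sum(rate(off) * (t1 ** 0.25 - t0 ** 0.25) for t0, t1, off in segments)

true_age = 1200   # an eighth-century pot, roughly
# crude "medieval warm period": 1 C warmer from 200 to 600 years after firing
m = mass_gain([(0, 200, 0.0), (200, 600, 1.0), (600, true_age, 0.0)])

# Dating inverts m = a_assumed * t**0.25; with the rate normalised so that
# a_assumed == rate(0) == 1.0, the inferred age is just m**4.
inferred_age = m ** 4
print(f"true age {true_age} yr, inferred age {inferred_age:,.0f} yr "
      f"(error {inferred_age - true_age:+,.0f} yr)")
```

With those made-up numbers, a one-degree warm spell lasting four centuries shifts the inferred date by about thirty years -- right in Jarrett's "twenty years? More?" territory -- and the error scales with whatever the real temperature sensitivity turns out to be.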

Friday, May 29, 2009

It Is Not Logical

Andrew Hungerford -- aka the smartest, funniest dramatist * astrophysicist = lighting director you should know -- has written the best post on the physical holes in the new Star Trek movie that I think can be written.

Basically, almost nothing in the movie makes sense, either according to the laws established in our physical universe or the facts established in the earlier TV shows and movies.

Wherever possible, Andy provides a valiant and charitable interpretation of what he sees, based (I think) on the theory that "what actually happened" is consistent with the laws of physics, but that these events are poorly explained, characters misspeak, or the editing of the film is misleading. (I love that we sometimes treat Star Trek, Star Wars, etc., like the "historical documents" in Galaxy Quest -- accounts of things that REALLY happened, but that are redramatized or recorded and edited for our benefit, as opposed to existing ONLY within a thinly fictional frame.)

If you haven't seen the movie yet, you probably shouldn't read the post. It will just bother you when you're watching it, like Andy was bothered. If you have, and you feel like being justifiably bothered (but at the same time profoundly enlightened), check it out right now. I mean, now.

What Kinds of Math Do We Need?

Biologists are debating how much quantitative analysis their field needs; at Language Log, Mark Liberman pivots to linguistics:

The role of mathematics in the language sciences is made more complex by the variety of different sorts of mathematics that are relevant. In particular, some areas of language-related mathematics are traditionally approached in ways that may make counting (and other sorts of quantification) seem at least superficially irrelevant — these include especially proof theory, model theory, and formal language theory.

On the other hand, there are topics where models of measurements of physical quantities, or of sample proportions of qualitative alternatives, are essential. This is certainly true in my own area of phonetics, in sociolinguistics and psycholinguistics, and so on. It's more controversial what sorts of mathematics, if any, ought to be involved in areas like historical linguistics, phonology, and syntax...

Unfortunately, the current mathematical curriculum (at least in American colleges and universities) is not very helpful in accomplishing this — and in this respect everyone else is just as badly served as linguists are — because it mostly teaches things that people don't really need to know, like calculus, while leaving out almost all of the things that they will really be able to use. (In this respect, the role of college calculus seems to me rather like the role of Latin and Greek in 19th-century education: it's almost entirely useless to most of the students who are forced to learn it, and its main function is as a social and intellectual gatekeeper, passing through just those students who are willing and able to learn to perform a prescribed set of complex and meaningless rituals.)


My thoughts are still inchoate on this, so I'll throw it open -- is calculus 1) a waste of time for 80-90% of the folks who learn it, 2) unfairly dominating of the rest of useful mathematics, 3) one of the great achievements of the modern mind that everyone should know about, or 4) all of the above?

More to the point -- what kinds of maths (as they say in the UK) have you found to be most valuable to your later life, work, thinking, discipline, whatever?

And looking to the future - I don't think we have a mathematics entry as such in the New Liberal Arts book-to-come; but if we did, what should it look like?

The New Socialism is the New Humanism

We loooove Kevin Kelly around here at Snarkmarket. Robin tipped me off to his stuff and he's since joined Atul Gawande, Roger Ebert, Virginia Heffernan, Clay Shirky, Michael Pollan, Clive Thompson, Gina Trapani, Jason Kottke, Ben Vershbow, Hilzoy, Paul Krugman, Sy Hersh, and Scott Horton (among others) in the Gore-Gladwell Snarkfantastic Hall of Fame. Dude should have his own tag up in here.

But I think there's a rare misstep (or rather, misnaming) in his new Wired essay, "The New Socialism: Global Collectivist Society Is Coming Online." It's right there in the title. That S-word. Socialism.

Now, don't get me wrong. I like socialism where socialism makes sense. Almost everyone agrees that it makes sense to have a socialized police and military. I like socialized (or partially socialized) education, and I think it makes a lot of sense to have socialized health insurance, as part of a broad social safety net that helps keep people safe, capable, knowledgeable, working. Socialism gets no bad rap from me.

I know Kelly is using the word socialism as a provocation. And he takes pains to say that the new socialism, like the new snow, is neither cold nor wet:

We're not talking about your grandfather's socialism. In fact, there is a long list of past movements this new socialism is not. It is not class warfare. It is not anti-American; indeed, digital socialism may be the newest American innovation. While old-school socialism was an arm of the state, digital socialism is socialism without the state. This new brand of socialism currently operates in the realm of culture and economics, rather than government—for now...

Instead of gathering on collective farms, we gather in collective worlds. Instead of state factories, we have desktop factories connected to virtual co-ops. Instead of sharing drill bits, picks, and shovels, we share apps, scripts, and APIs. Instead of faceless politburos, we have faceless meritocracies, where the only thing that matters is getting things done. Instead of national production, we have peer production. Instead of government rations and subsidies, we have a bounty of free goods.


But I think of socialism as something very specific. It's something where a group of citizens pools their resources as part of a democratic (and at least partially technocratic) administering of benefits to everyone. This could be part of a nation-state or a co-op grocery store. And maybe this is too Hobbesian, but I think about it largely as motivated by a defense against something bad. Maybe there's some kind of general surplus-economy I'm missing where we can just socialize good things without risk. That'd be nice.

When masses of people who own the means of production work toward a common goal and share their products in common, when they contribute labor without wages and enjoy the fruits free of charge, it's not unreasonable to call that socialism.


But I'll put this out as an axiom: if there's no risk of something genuinely bad, no cost but opportunity cost, if all we're doing is passing good things around to each other, then that, my friend, is not socialism.

This is a weird paradox: what we're seeing emerge in the digital sphere is TOO altruistic to be socialism! There isn't enough material benefit back to the individual. It's not cynical enough! It solves no collective action problems! And again, it's totally individualistic (yet totally compatible with collectivities), voluntarist (yet totally compatible with owning one's own labor and being compensated for it), anti-statist (yet totally compatible with the state). It's too pure in its intentions and impure in its structure.

Kelly, though, says we've got no choice. We've got to call this collectivism, even if it's collective individualism, socialism:

I recognize that the word socialism is bound to make many readers twitch. It carries tremendous cultural baggage, as do the related terms communal, communitarian, and collective. I use socialism because technically it is the best word to indicate a range of technologies that rely for their power on social interactions. Broadly, collective action is what Web sites and Net-connected apps generate when they harness input from the global audience. Of course, there's rhetorical danger in lumping so many types of organization under such an inflammatory heading. But there are no unsoiled terms available, so we might as well redeem this one.


In fact, we have a word, a very old word, that precisely describes this impulse to band together into small groups, set collective criteria for excellence, and try to collect and disseminate the best, most useful, most edifying, most relevant bodies of knowledge as widely and as cheaply as possible, for the greatest possible benefit to the individual's self-cultivation and to the preservation and enrichment of the culture as a whole.

And that word is humanism.

The Soul of American Medicine

If I ever meet Atul Gawande, I'm giving him a high-five, a hug, and then I'm going to try to talk to him for about fifteen minutes about why I think he's special. From "The Cost Conundrum," in the new New Yorker:

No one teaches you how to think about money in medical school or residency. Yet, from the moment you start practicing, you must think about it. You must consider what is covered for a patient and what is not. You must pay attention to insurance rejections and government-reimbursement rules. You must think about having enough money for the secretary and the nurse and the rent and the malpractice insurance...

When you look across the spectrum from Grand Junction [Colorado] to McAllen [Texas]—and the almost threefold difference in the costs of care—you come to realize that we are witnessing a battle for the soul of American medicine. Somewhere in the United States at this moment, a patient with chest pain, or a tumor, or a cough is seeing a doctor. And the damning question we have to ask is whether the doctor is set up to meet the needs of the patient, first and foremost, or to maximize revenue.

There is no insurance system that will make the two aims match perfectly. But having a system that does so much to misalign them has proved disastrous. As economists have often pointed out, we pay doctors for quantity, not quality. As they point out less often, we also pay them as individuals, rather than as members of a team working together for their patients. Both practices have made for serious problems...

Activists and policymakers spend an inordinate amount of time arguing about whether the solution to high medical costs is to have government or private insurance companies write the checks. Here’s how this whole debate goes. Advocates of a public option say government financing would save the most money by having leaner administrative costs and forcing doctors and hospitals to take lower payments than they get from private insurance. Opponents say doctors would skimp, quit, or game the system, and make us wait in line for our care; they maintain that private insurers are better at policing doctors. No, the skeptics say: all insurance companies do is reject applicants who need health care and stall on paying their bills. Then we have the economists who say that the people who should pay the doctors are the ones who use them. Have consumers pay with their own dollars, make sure that they have some “skin in the game,” and then they’ll get the care they deserve. These arguments miss the main issue. When it comes to making care better and cheaper, changing who pays the doctor will make no more difference than changing who pays the electrician. The lesson of the high-quality, low-cost communities is that someone has to be accountable for the totality of care. Otherwise, you get a system that has no brakes.


The Enterprise As A Start-Up

This is a post about the new Star Trek movie that contains no spoilers.

However:

Here's my rule about movie and television spoilers. If you're giving information that's already given in a preview, then you're spoiling nothing that hasn't been spoiled already. Likewise, if you're giving information that can be reasonably inferred, no spoiling has occurred.

If you're not willing to entertain either of these possibilities, if you scrupulously avoid movie trailers or cast lists, and you still haven't seen this movie, then not only are you a weirdo, you also stopped reading this post long ago.

So, you will be shocked, shocked to learn that at one point in the new Star Trek movie, just as you've seen in the trailer, James T. Kirk sits in the captain's chair, and that by the end of the movie, most of the characters that we associate with the Enterprise's crew are working together on the Enterprise.

Okay? Good.

So here's Henry Jenkins's thoughtful post, "Five Ways to Start a Conversation About the New Star Trek Film," which DOES contain more detailed spoilers. My excerpt, however, does not:

In the past, we were allowed to admire Kirk for being the youngest Star Fleet captain in Federation history because there was some belief that he had managed to actually earn that rank... It's hard to imagine any military system on our planet which would promote someone to a command rank in the way depicted in the film. In doing so, it detracts from Kirk's accomplishments rather than making him seem more heroic. This is further compromised by the fact that we are also promoting all of his friends and letting them go around the universe on a ship together.

We could have imagined a series of several films which showed Kirk and his classmates moving up through the ranks, much as the story might be told by Patrick O'Brian or in the Hornblower series. We could see him learn through mentors, we could see the partnerships form over time, we could watch the characters grow into themselves, make rookie mistakes, learn how to do the things we see in the older series, and so forth. In comics, we'd call this a Year One story and it's well-trod space in the superhero genre at this point.

But there's an impatience here to give these characters everything we want for them without delays, without having to work for it. It's this sense of entitlement which makes this new Kirk as obnoxious as the William Shatner version. What it does do, however, is create a much flatter model for the command of the ship. If there is no age and experience difference between the various crew members, if Kirk is captain because Spock had a really bad day, then the characters are much closer to being equals than on the old version of the series.

This may be closer to our contemporary understanding of how good organizations work -- let's think of it as the Enterprise as a start-up company where a bunch of old college buddies decide they can pool their skills and work together to achieve their mutual dreams. This is not the model of how command worked in other Star Trek series, of course, and it certainly isn't the way military organizations work, but it is very much what I see as some of my students graduate and start to figure out their point of entry into the creative industries.


The Enterprise as a start-up! It reminds me of that story about the guys who started Silicon Valley's Fairchild Semiconductor.

Let me add that I think Jenkins is wrong about the way promotion is presented in the film -- Star Fleet actually appears to be remarkably meritocratic, much more deferential to performance and aptitude tests than to years served. Captain Pike tells Kirk that he could command his own starship (the second highest rank) in four years after leaving the academy. Chekhov is a starship navigator (and not, like Kirk or Uhura, a cadet) at only seventeen years old; Spock is a commander and academy instructor without there being a sense of a considerable age/experience gap between him and Kirk or Uhura. (He's introduced as "one of our most distinguished graduates," like he's a really good TA.)

But it's not academia; it's the NBA. You give these kids the ball.

The important point is that within this highly meritocratic structure, the crew members of the Enterprise are PARTICULARLY and precociously talented. Kirk is the fastest to rise to captain where fast rises are not uncommon. As I said to my friends after seeing the movie, it gets bonus points for emphasizing just how SMART these people are; Kirk, Spock, Uhura, Scotty, and Chekhov (among others) are explicitly presented as geniuses.

Okay, now I've probably actually included spoilers in this thing. So. What. Go see the movie already. Then read the rest of Jenkins's post. You'll enjoy them both.

(H/t: the awesome Amanda Phillips.)

The Ideas! The Ideas! Part... Whatever

Charlie Jane Anders, "Why Dollhouse Really Is Joss Whedon's Greatest Work":

The evil in Dollhouse is harder to deal with than the evil in Buffy because it's our evil. It's our willingness to strip other people of their humanity in order to get what we need from them. It's our eagerness to give up our humanity and conform to other people's expectations, in exchange for some vaguely promised reward. And it's our tendency to put any new piece of technology to whatever uses we can think of, whether they're positive or utterly destructive.

And that last bit, about technology, is the other main reason why Dollhouse is Whedon's most accomplished work, especially if you love science fiction like we do. Unlike Joss' other works, Dollhouse really is about the impact of new technology on society. It asks the most profound question any SF can ask: how would we (as people) change if a new technology came along that allowed us to...? In this case, it's a technology that allows us to turn brains into storage media: We can erase, we can record, we can copy. It's been sneaking up on us, but Dollhouse has slowly been showing how this radically changes the whole conception of what it means to be human. You can put my brain into someone else's body, you can keep my personality alive after I die, and you can keep my body around but dispose of everything that I would consider "me."

Friday, May 01, 2009

Google: The World's Medical Journal

A good anecdotal lead. Carolina Solis is a medical student who did research on parasitic infections caused by contaminated well water in rural Nicaragua.

Like many researchers, she plans to submit her findings for publication in a medical journal. What she discovered could benefit not just Nicaraguan communities but those anywhere that face similar problems. When she submits her paper, though, she says the doctors she worked with back in San Juan del Sur will probably never get a chance to read it.

"They were telling me their problems accessing these [journals]. It can be difficult for them to keep up with all the changes in medicine."


Hey, Matt, if you want to sink your teeth into a medical policy issue that's right up your alley, I think this is it.

There's legislation:

Washington recently got involved. Squirreled away in the massive $410 billion spending package the president signed into law last month is an open access provision. It makes permanent a previous requirement that says the public should have access to taxpayer-funded research free of charge in an online archive called PubMed Central. Such funding comes largely from the National Institutes of Health, which doles out more than $29 billion in research grants per year. That money eventually turns into about 60,000 articles owned and published by various journals.

But Democrats are divided on the issue. In February, Rep. John Conyers, D-Mich., submitted a bill that would reverse open access. HR 801, the Fair Copyright in Research Works Act, would prohibit government agencies from automatically making that research free. Conyers argues such a policy would buck long-standing federal copyright law. Additionally, Conyers argues, journals use their subscription fees to fund peer review in which experts are solicited to weigh in on articles before they're published. Though peer reviewers aren't usually identified or paid, it still takes money to manage the process, which Conyers calls "critical."


And cultural/generational change:

The pay-to-play model doesn't jibe with a generation of soon-to-be docs who "grew up Google," with information no farther than a search button away. It's a generation that never got lost in library stacks looking for an encyclopedia, or had to pay a penny for newspaper content. So it doesn't see why something as important as medical research should be locked behind the paywalls of private journals.

Copyright issues are nothing new to a generation that watched the recording industry deal its beloved original music sharing service, Napster, a painful death in 2001. Last October, it watched Google settle a class-action lawsuit brought on by book publishers upset over its Book Search engine, which makes entire texts searchable. And just last week, a Swedish court sentenced four founders of the Pirate Bay Web site to a year in prison over making copyrighted files available for illegal file sharing. And now the long-familiar copyright war is spilling over into medicine.


There's even WikiDoc.

And, the article doesn't mention this, but I'll contend there's a role for journalism to play. Here's a modest proposal: allow medical researchers to republish key findings of the research in newspapers, magazines, something with a different revenue structure, and then make it accessible to everyone. Not perfect, but a programmatic effort would do some good.

Speaking of which -- what are the new big ideas on the health/medicine beat? This is such a huge issue -- it feels like it should have its own section in the paper every day.

Every Day Like Paris For The First Time

Jonah Lehrer + Allison Gopnik on baby brains:

The hyperabundance of thoughts in the baby brain also reflects profound differences in the ways adults and babies pay attention to the world. If attention works like a narrow spotlight in adults - a focused beam illuminating particular parts of reality - then in young kids it works more like a lantern, casting a diffuse radiance on their surroundings.

"We sometimes say that adults are better at paying attention than children," writes Gopnik. "But really we mean just the opposite. Adults are better at not paying attention. They're better at screening out everything else and restricting their consciousness to a single focus."


This is the money-quote, though:

Gopnik argues that, in many respects, babies are more conscious than adults. She compares the experience of being a baby with that of watching a riveting movie, or being a tourist in a foreign city, where even the most mundane activities seem new and exciting. "For a baby, every day is like going to Paris for the first time," Gopnik says. "Just go for a walk with a 2-year-old. You'll quickly realize that they're seeing things you don't even notice."


I can confirm that this is true.

Also, peep this graph charting synaptic activity + density according to age (via Mind Hacks):

[Huttenlocher's graph of synaptic density by age]


Apparently, that's where the real action is: contra Lehrer's article, baby brains don't actually have more neurons than adults, but way more (and way denser) synapses (aka the connections between neurons).

Also, just to free associate on the whole synapse thing: I had knee surgery a few weeks ago to repair a torn quadriceps tendon, and I'm in physical therapy now. Part of my PT involves attaching electrodes to my thigh to induce my quad to flex (this is called "reeducating the muscle").

Anyways, it is always weird to confirm that we are just made out of meat, and that if you run enough electrical current through a muscle, it'll react whether or not your brain tells it to. That's all your brain is -- an extremely powerful + nuanced router for electricity.

Monday, April 13, 2009

Doctor Jones's Office Hours


Good-looking people enjoy what economists/sociologists call a "beauty premium." They get paid more and are seen as better at their jobs than people of average attractiveness. It works for men and for women. Men, for example, get a premium for being taller, in shape, and handsome, and for having a nice head of hair.

Now here's where it gets interesting. A new Israeli study suggests that male professors get a beauty bump, but female professors don't. The researchers guess that this is rooted in a "contradiction between... role images and gender images": somehow, female attractiveness is seen as incongruous with the paternal, traditional scholar/educator role of the professor, where male attractiveness isn't -- particularly, it seems, for female students. That's the idea, anyways.

I don't endorse this conclusion, but there's definitely something going on here. A couple of things that came to my mind on reading this:

Monday, March 23, 2009

There's Solitary and Then There's Solitary

The other day, a group of my friends, including two other PhDs, discussed the high rate of depression among graduate students. "It's the stress," one said; "the money!" laughed another. But I made a case that it was actually the isolation, the loneliness, that had the biggest effect. After all, you take a group of young adults who are perversely wired for the continual approval that good students get from being in the classroom with each other, and then lock them away for a year or two to write a dissertation with only intermittent contact from an advisor. That's a recipe for disaster.

So I read Atul Gawande's account of the human brain's response to solitary confinement with an odd shock of recognition:

Among our most benign experiments are those with people who voluntarily isolate themselves for extended periods. Long-distance solo sailors, for instance, commit themselves to months at sea. They face all manner of physical terrors: thrashing storms, fifty-foot waves, leaks, illness. Yet, for many, the single most overwhelming difficulty they report is the 'soul-destroying loneliness,' as one sailor called it. Astronauts have to be screened for their ability to tolerate long stretches in tightly confined isolation, and they come to depend on radio and video communications for social contact...

[After years of solitary, Hezbollah hostage Terry Anderson] was despondent and depressed. Then, with time, he began to feel something more. He felt himself disintegrating. It was as if his brain were grinding down. A month into his confinement, he recalled in his memoir, "The mind is a blank. Jesus, I always thought I was smart. Where are all the things I learned, the books I read, the poems I memorized? There's nothing there, just a formless, gray-black misery. My mind's gone dead. God, help me."

He was stiff from lying in bed day and night, yet tired all the time. He dozed off and on constantly, sleeping twelve hours a day. He craved activity of almost any kind. He would watch the daylight wax and wane on the ceiling, or roaches creep slowly up the wall. He had a Bible and tried to read, but he often found that he lacked the concentration to do so. He observed himself becoming neurotically possessive about his little space, at times putting his life in jeopardy by flying into a rage if a guard happened to step on his bed. He brooded incessantly, thinking back on all the mistakes he'd made in life, his regrets, his offenses against God and family.



But here's the weird part -- all of this isolation actually serves to select for a particular personality type. This is especially perverse when solitary confinement is used in prisons -- prisoners who realign their social expectations for solitary confinement effectively become asocial at best, antisocial generally, and deeply psychotic at worst.

Everyone's identity is socially created: it's through your relationships that you understand yourself as a mother or a father, a teacher or an accountant, a hero or a villain. But, after years of isolation, many prisoners change in another way that Haney observed. They begin to see themselves primarily as combatants in the world, people whose identity is rooted in thwarting prison control.

As a matter of self-preservation, this may not be a bad thing. According to the Navy P.O.W. researchers, the instinct to fight back against the enemy constituted the most important coping mechanism for the prisoners they studied. Resistance was often their sole means of maintaining a sense of purpose, and so their sanity. Yet resistance is precisely what we wish to destroy in our supermax prisoners. As Haney observed in a review of research findings, prisoners in solitary confinement must be able to withstand the experience in order to be allowed to return to the highly social world of mainline prison or free society. Perversely, then, the prisoners who can't handle profound isolation are the ones who are forced to remain in it. "And those who have adapted," Haney writes, "are prime candidates for release to a social world to which they may be incapable of ever fully readjusting."


I think we just figured out why so many professors are so deeply, deeply weird.

Wednesday, March 11, 2009

Tool Cultures

Kevin Kelly on technology and group identity:

Technologies have a social dimension beyond their mere mechanical performance.  We adopt new technologies largely because of what they do for us, but also in part because of what they mean to us. Often we refuse to adopt technology for the same reason: because of how the avoidance reinforces, or crafts our identity.



Most of Kelly's article focuses on tool cultures among Highland tribes in New Guinea, but Kelly's also recently written about technology adoption among the Amish -- which is, of course, unusually explicit about the relationship between technology and group identity.



I'm not sure about this hedge, though:

In the modernized west, our decisions about technology are not made by the group, but by individuals. We choose what we want to adopt, and what we don’t. So on top of the ethnic choice of technologies a community endorses, we must add the individual layer of preference. We announce our identity by what stuff we use or refuse. Do you twitter? Have a big car? Own a motorcycle? Use GPS? Take supplements? Listen to vinyl? By means of these tiny technological choices we signal our identity. Since our identities are often unconscious we are not aware of exactly why we choose or dismiss otherwise equivalent technology. It is clear that many, if not all, technological choices are made not on the technological benefits alone. Rather technological options have unconscious meaning created by social use and social and personal associations that we are not fully aware of.


But aren't these choices still deeply social? Partly it's about access: if you don't have daylong access to the web (or access to the web at all) you ain't twittering, son. But you're also not likely to do it if your friends and coworkers and neighbors don't twitter. Group identity is a lot more complex in the modernized west, sure -- but pure individual choice it ain't. In fact, our adoption of technology actually helps us form new groups and social identities that are not quite tribal/ethnic -- or it helps us reinforce those bonds.



P.S.: My title, "tool culture," isn't from Kelly's article, but from paleoanthropology. One of the things I love about the study of groups like the Neanderthals is that we have evidence of their tool use long after we have fossilized remains. We can actually distinguish between Neanderthal and human settlements based on their tools.



Neanderthals and homo sapiens definitely coexisted. People aren't sure whether Neanderthals interbred with modern humans or not, which makes it hard to know when exactly the Neanderthals died out. Wouldn't it be interesting, though, if a group of anatomically modern humans adopted Neanderthal tools? That technologies could reach not just across ethnicities, but across species as well?



Sunday, March 01, 2009

The New Media and the New Military

Whoa -- retired Marine officer Dave Dilegge and military blogger Andrew Exum (spurred by Thomas Ricks's new book The Gamble) look at the effect of the blogosphere on how the military shares information and tactics:

Ricks cited a discussion on Small Wars Journal once and also cited some things on PlatoonLeader.org but never considered the way in which the new media has revolutionized the lessons learned process in the U.S. military. [...] Instead of just feeding information to the Center for Army Lessons Learned and waiting for lessons to be disseminated, junior officers are now debating what works and what doesn't on closed internet fora -- such as PlatoonLeader and CompanyCommand -- and open fora, such as the discussion threads on Small Wars Journal. The effect of the new media on the junior officers fighting the wars in Iraq and Afghanistan was left curiously unexplored by Ricks, now a famous blogger himself.


It seems clear that blogging and internet forums disrupt lots of traditional thinking about the way information is generated and disseminated -- but it's a testament to how powerfully they can change readers' and writers' expectations that the disruption carries through even to the military, a top-down bureaucracy if ever there was one.

In related news, the recent New Yorker article about the low-recoil automatic shotguns mounted on robots was awesome.

Just as at a certain point, the military decided it was a waste to have a professional soldier cook a meal or clean a latrine, we'll come to see it as a waste for a professional soldier NOT to provide decentralized information that can help adjust intelligence and tactics: all soldiers will be reporters. Soon all of our wars are going to be fought by robots, gamers, and bloggers. Our entire information circuitry will have to change.

Thursday, November 13, 2008

Two Questions, Same Answer

Maybe I'm just feeling like a reductive ol' pragmatist lately, but I feel like these two very different problems have the same answer.

  1. Daniel Drezner, "Public Intellectual 2.0": "In the current era, many more public intellectuals possess social-science rather than humanities backgrounds. In Richard Posner's infamous list of top public intellectuals, there are twice as many social scientists as humanities professors. In a recent ranking published by Foreign Policy magazine, economists and political scientists outnumber artists and novelists by a ratio of four to one. Economics has supplanted literary criticism as the 'universal methodology' of most public intellectuals. That fact in particular might explain the strong belief in literary circles that the public intellectual is dead or dying."
  2. Kevin Kelly, "Anachronistic Science": "I've been wondering why science took so long to appear. Why didn't China, which invented so many other things in the first millennium, just keep on going and invent science by 1000 AD? For that matter why didn't the Greeks invent the scientific method during their heyday? What were they missing?... But they could have been, even back then. Aristotle appears to have lacked no materials which would have prevented him from doing simple experiments and observations. There were many things he could not see without telescope and microscope, but there were still hundreds, thousands, if not millions of things he could have measured with tools he did have. But he did not because he didn't have the mindset."
The answer is: methodology and mindsets are both circumlocutions for a more basic notion -- what problems do you need to solve?

I have lots of ideas about public intellectuals. For one thing, the academy is, if anything, WAY MORE publicly accessible than it has ever been, and more artists, critics, poets, novelists than ever have a home in universities. Folks like Walter Benjamin and Ezra Pound were "public" intellectuals because they couldn't get work anywhere but newspapers and magazines.

But I think the more interesting question to ask, rather than why public intellectuals have faded or shifted or drifted or whatever, is this: what problems would/do we need public intellectuals to solve?

Ditto science. It's a bit like asking why Charles Babbage didn't develop a "computer" like ours, with a typewriter and a screen. The guy made a machine that could print calculation tables. That was the problem he needed to solve.

Maybe this is less of a razor than I think it is. Readers, please help.

Monday, November 10, 2008

The Economic Powell Doctrine

Paul Krugman writes about how FDR's caution in doling out economic stimulus in the mid-30s nearly undid the whole recovery from the Great Depression:

After winning a smashing election victory in 1936, the Roosevelt administration cut spending and raised taxes, precipitating an economic relapse that drove the unemployment rate back into double digits and led to a major defeat in the 1938 midterm elections.

What saved the economy, and the New Deal, was the enormous public works project known as World War II, which finally provided a fiscal stimulus adequate to the economy’s needs.

In his blog, Krugman breaks it down:

Nearly every forecast now says that, in the absence of strong policy action, real GDP will fall far below potential output in the near future. In normal times, that would be a reason to cut interest rates. But interest rates can’t be cut in any meaningful sense. Fiscal policy is the only game in town.

Wait, there’s more. Ben Bernanke can’t push on a string – but he can pull, if necessary. Suppose fiscal policy ends up being too expansionary, so that real GDP “wants” to come in 2 percent above potential. In that case the Fed can tighten a bit, and no harm is done. But if fiscal policy is too contractionary, and real GDP comes in below potential, there’s no potential monetary offset. That means that fiscal policy should take risks in the direction of boldness.

After some sophisticated back-of-the-envelope calculations, Krugman comes up with a number of $600 billion. Short story is, if you go big early, you can squash the slump -- and if it turns out to be too much, you can always scale it back.
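
If you want to see roughly what that envelope looks like, here's a sketch; the round numbers below (GDP, the unemployment overshoot, the Okun's-law coefficient, the spending multiplier) are my stand-ins rather than Krugman's actual inputs, but arithmetic of this shape lands you in the $600 billion neighborhood.

```python
# Back-of-the-envelope stimulus arithmetic. All inputs are illustrative
# round numbers of my own, not Krugman's actual figures.

gdp = 14.5e12               # US GDP, roughly, in dollars
excess_unemployment = 3.0   # points of unemployment above the full-employment rate
okun_coefficient = 2.0      # percent of GDP lost per point of excess unemployment
multiplier = 1.5            # dollars of output produced per dollar of spending

output_gap = gdp * (okun_coefficient * excess_unemployment) / 100.0
stimulus_needed = output_gap / multiplier

print(f"output gap:      ${output_gap / 1e9:,.0f} billion")       # ~$870 billion
print(f"stimulus needed: ${stimulus_needed / 1e9:,.0f} billion")  # ~$580 billion
```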

At WHYY's Y Decide, the great Dan Pohlig trumps Krugman's WWII-as-public-works gag for an even better take on the problem:

I wonder if there’s an analogy from recent history that even our conservative friends would agree with.  Since I can’t think of one, maybe I’ll make one up.

Let’s say you overthrow a horrible dictator because you claim that he had something to do with an attack on your country.  So that part goes pretty easily but then you find yourself with this pernicious and deadly insurgency because you forgot that rebuilding a country and providing security is far more difficult and resource intensive than blowing up a bunch of stuff and capturing one lone ex-dictator (which it turns out, is far easier than capturing one lone nomadic terrorist).  So you send a few troops over but you find that things continue to get worse.  Every time you stop one bit of the insurgency, it just retreats and attacks a different part of the country where your forces aren’t located.  Your troops are dying.  The people who live in the country are dying.  Infrastructure is being blown up just as quickly as it’s built.

So you decide, this little-bit-at-a-time tactic just isn’t working and you decide to send a whole BUNCH of troops over at one time to stop all of the insurgents everywhere at once.  You could call it the “WAVE” or the “SWELLING” or maybe the “SURGE.”  After a few months, you find that by throwing a huge amount of resources over a short period of time, you can stop the insurgency for just enough time to let that nation’s own army come into line.  With things stabilizing, you can pull all of your troops out and call it Mission Accomplished… or something.

I'll go Dan one better, and make the hawkish liberal argument for both Iraq and the stimulus. The problem with the timing of the "surge," in both Iraq and (potentially) the economy, is that only by going big early can you really get a handle on the thing. Try to do it on the cheap, thinking you might be able to scale things up later, and you'll waste more blood and treasure (natch) putting out fires than you would spend in the first place by doing it right.

So we need to gather up our allies, get consensus, and do this thing with overwhelming force. We need to follow the Powell Doctrine for economic intervention.

Two quick notes. First -- why not Paul Krugman for Secretary of the Treasury? He hasn't always been kind to Obama, but not only is the dude awesome and liberal, he is a great communicator to the people, which is what we're going to need to get big-time buy-in from the American people for the big plans he'd want to support. Also, in case you hadn't heard, P-Krug just won the Nobel Prize.

Second -- a possible benefit of the gov't bailout of banks is that numbers in the hundreds of billions suddenly become feasible for economic projects in a way that they weren't even a year ago. You can say $600 billion for economic stimulus and not sound like you're overshooting it. You can push for universal health care, and the price tag doesn't seem overwhelming.

But we're also threatened by the fact that the bailout/stimulus becomes the default way we handle these problems. For instance, the Big Three's health care and pension plans are collapsing, a disaster that was bound to happen, and one which many commentators have said for years might create the political will for a move towards universal/socialized health care. But instead, it's happening while all this other mess is going on, so instead of changing the health care system, the Big Three will probably get some kind of gov't bailout, and still cut health care and pensions to retirees.

That is six kinds of suck. I'm with Dan and Paul and the gang. Obama should go big, long, and do the thing up right.

Tuesday, October 28, 2008

Overfished Derivative Waters

Jonah Lehrer on the failure of financial markets and the Grand Banks fisheries:

In the 1970's, the government instituted strict regulations that limited the total catch to just 16 percent of the total cod population. The tricky part, of course, was coming up with the population estimates in the first place. It's hard to know how many fish to catch if you don't know how many fish there are. But fishery scientists were confident that their sophisticated models were accurate. They had randomly selected areas of the ocean to sample and then, through the use of a complicated algorithm, arrived at their total estimate of the cod population. They predicted that the new regulations would allow the cod stock to steadily increase. Fish and the fishing industry would both thrive.

The models were all wrong. The cod population never grew. By the late 1980's, even the trawlers couldn't find cod. It was now clear that the scientists had made some grievous errors. The fishermen hadn't been catching 16 percent of the cod population; they had been catching 60 percent of the cod population. The models were off by a factor of four. "For the cod fishery," write Orrin Pilkey and Linda Pilkey-Jarvis, in their excellent book Useless Arithmetic: Why Environmental Scientists Can't Predict the Future, "as for most of earth's surface systems, whether biological or geological, the complex interaction of huge numbers of parameters make mathematical modeling on a scale of predictive accuracy that would be useful to fishers a virtual impossibility."

People love models, especially when they're big, complex and quantitative. Models make us feel safe. They take the uncertainty of the future and break it down into neat, bite-sized equations. But here's the problem with models, which is really a problem with the human mind. We become so focused on the predictions of the model - be it the cod population, or the risk of mortgage derivatives - that we stop questioning the basic assumptions of the model. (Instead, the confirmation bias seeps in and we devote way too much mental energy to proving the model true.) It's not just about black swans or random outliers. After all, there was no black swan event that triggered this most recent financial mess. There was simply an exquisite model, churning out extremely profitable predictions, that happened to be based on a false premise. Hopefully, the markets will recover quicker than the Atlantic cod.
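
The arithmetic is simple enough to put in a toy simulation. Everything below -- the growth rate, the starting stock, the overestimation factor -- is made up for illustration rather than taken from the actual fisheries data; the only numbers from the excerpt are the intended 16 percent quota and the roughly 60 percent harvest it turned into.

```python
# Toy version of the cod story: the quota is 16% of the *estimated* stock,
# but the model overstates the stock by roughly a factor of four, so the
# real harvest rate is closer to 60%. Parameters are illustrative.

def simulate(years, stock, growth_rate, quota_pct, overestimate):
    print(f"intended harvest: {quota_pct:.0%} of the stock; "
          f"actual harvest: {quota_pct * overestimate:.0%} of the stock")
    for year in range(1, years + 1):
        estimated_stock = stock * overestimate           # what the model reports
        catch = min(stock, quota_pct * estimated_stock)  # "16% of the population"
        stock = max(stock - catch, 0.0)
        stock += growth_rate * stock * (1.0 - stock)     # logistic regrowth, capacity = 1
        if year % 5 == 0:
            print(f"year {year:2d}: stock at {stock:.1%} of carrying capacity")

simulate(years=20, stock=0.6, growth_rate=0.3, quota_pct=0.16, overestimate=3.75)
```

In this toy model the stock collapses within a couple of decades; set overestimate to 1.0 and the same 16 percent quota settles into a sustainable equilibrium instead.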

Sunday, October 19, 2008