"The Craziest Space Racists Of All Time" at io9.com offers a decent overview of allegories of race and racism in science fiction -- although apparently racism magically enters sci fi only when it's conscious, explicit, and denounced -- but its real value is its citation of the great Mr Show sketch "Racist in the Year 3000":
Saturday, March 28, 2009
"The Craziest Space Racists Of All Time" at io9.com offers a decent overview of allegories of race and racism in science fiction -- although apparently racism magically enters sci fi only when it's conscious, explicit, and denounced -- but its real value is its citation of the great Mr Show sketch "Racist in the Year 3000":
Blogging for the NYT is a little like writing/directing your own movie:
Via Mark Thoma, Anatole Kaletsky writes:
Smith, Ricardo and Keynes produced no mathematical models.
Now, I have ~~Marshall McLuhan~~ John Maynard Keynes right here. Let's ask him:
Let Z be the aggregate supply price of the output from employing N men, the relationship between Z and N being written Z = φ(N), which can be called the aggregate supply function. Similarly, let D be the proceeds which entrepreneurs expect to receive from the employment of N men, the relationship between D and N being written D = f(N), which can be called the aggregate demand function...
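Keynes's definitions can be set out in compact notation (a sketch; the closing condition is the "point of effective demand" from the same chapter of the General Theory, paraphrased here rather than quoted above):

$$Z = \varphi(N) \qquad \text{aggregate supply price of the output of } N \text{ men}$$

$$D = f(N) \qquad \text{proceeds entrepreneurs expect from employing } N \text{ men}$$

$$f(N^*) = \varphi(N^*) \qquad \text{effective demand, fixing employment at } N^*$$

No "mathematical models," maybe, but those are functions, ready for the blackboard.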
Tuesday, March 24, 2009
Souleymane Sy Savané [Solo] is from the Ivory Coast. Red West [William] is from Memphis. We believe it. They fit into their roles like hands into gloves. You look at Red West and think, this man has been waiting all his life to play this role. He is 72, stands 6'2." You may have heard the name. He was a member of Elvis Presley's Memphis Mafia, a friend, driver and bodyguard starting in 1955, who appeared in bit parts in 16 Elvis movies. Since then he has worked for such directors as Robert Altman and Oliver Stone.
"I wanted a real Southerner," Bahrani told me after the film's premiere at Toronto 2008. "I wanted the accent, I wanted the mentality of the South. Red sent a video of himself doing a reading of the first scene. I think I watched it for three seconds; I hit pause and said, this is the guy that I wrote about. This is the guy. I called him; I said, 'Red, can you not point when you do the reading?' And I gave him one other direction, just to see, would he hear what I said and would he do it? He did it, he taped it, he sent it back; he had listened to everything I said. I brought the guy in and, I mean, there was just no doubt about it. He was the man."
Bahrani only asked him once about Elvis. "He told a great story. I think it was Elvis' cousin that was bringing drugs to him in the end, and Red didn't like it, which was one of the big conflicts of their falling-out. He said, the guy brought drugs, and he broke his foot and said, 'I'll work my way from your foot up to your face.'
The other thing you should know about Red West is that he was in Road House, playing a character named Red Webster. That is so bad ass.
Historian of Europe Karl Schlögel on the molecular movements of history:
The grand moments with which history usually preoccupies itself are inconceivable without the molecular events that make them possible. And the Europeans who make a career out of standing and speaking for Europe are nothing at all without the unknown Europeans whose stories are never told. We all know the stages of Brussels, Strasbourg, Paris or Maastricht, upon which "Europe's representatives" play their parts. It's not enough that we're kept up to date on all their entrances and declarations. It's always the same names, the same faces, the same gestures. In 1949, a group of townspeople from Aachen, Europeans of the first hour, created the Charlemagne Prize for "persons who have advanced the ideas of European understanding in political, economic, and spiritual relations."
In the list of those honoured since 1950, one more or less finds all the great Europeans, from Count Coudenhove-Kalergi to Vaclav Havel, from Jean Monnet to the Euro. One can extrapolate this line and list easily and without a great deal of imagination. But one could also award the prize to people who were indispensable to the Europe that has evolved since 1989. There are more than a few claimants for these honours: the transportation ministers and the engineers who built the bridges, streets, and rails that paved the way to a new Europe and brought Europeans closer to one another. The shippers and logistics experts who have made careers out of shortening distances and creating a sense of proximity should also be eligible. Nor should one leave out the transportation companies and founders of discount airlines who have radically altered the map of Europe in our heads. Now, we not only know where Palermo is, but also Tallinn; not only Lisbon, but also Riga and Odessa. They have established lines of transit between the Rhein-Main area and Galicia, between Warsaw and the English Midlands, between Lviv and Naples. The discount airlines have made Berlin a neighbour of Moscow and contributed to an increase in cosmopolitanism. Krakow now has a connection to Dublin.
Entire economies can no longer function without this flow of traffic. The renovation of apartments, the care for pensioners and for the infirm in cities - even those located far from the border - now lie in the hands of personnel crossing over our borders. The Aachen Prize Committee could easily get an idea of the eligibility of candidates by looking at their timetables, price lists, and bookkeeping methods. They would determine that there's not a place in Europe that can't be looked up. Every act of research would become a joyous virtual journey to the New Europe.
One of the arguments that Schlögel makes is that the fall of the Berlin Wall mattered less than the mid-1980s institution of an express train line between Moscow and West Berlin, connecting the Communist states to the allied "island" in West Berlin, enabling all sorts of traffic of black-market goods, ideas, and people across what had seemed like impermeable borders. "To this day, there is no memorial for the anonymous black marketeers of Patrice Lumumba University at the Zoological Garden railway station. Instead, a freedom memorial is being planned for the exact spot where absolutely nothing happened."
I really like this idea that a city is not only a place, or a set of people, but also a mental/kinetic map of all the places, people, and things connected to that place -- a perpetually unexhausted, evolving set of possibilities.
We've got language on the brain lately here at Snarkmarket, so Ron Silliman's link to a talk abstract by linguist Bob Port at Berkeley caught my eye.
Most of it's written in linguistese, but the main idea is that when we're talking, we're not manipulating a storehouse of meaningful sounds that we're carrying around in our heads, but kicking around each other's speech in a way that approximates but can't be reduced to these fixed categories. But we think that that's what we're doing, because when we learn how to read (matching symbols to sounds), that is kind of what we're doing, even if it isn't when we speak.
Here's the kicker. To explain/summarize this idea, Port writes: "All alphabets are a recent technology for low-bitrate representation of language."
Let me explain why I like this.
Language is one of our oldest technologies, and probably the most important. It's inevitable that we use other technologies to try to understand how it works. One of our other really old, really important technologies is writing, which is, in its own way, an heroic and powerful attempt to understand and functionalize how language works.
But writing is too powerful; not only does it change the way that the whole field of language works, it "restructures thought," as Father Ong would say, not least by making the whole field of language look a little more like writing.
Alphabetic writing isn't the only communication technology that affects how we see language; clay tablets, books and scrolls, dictionaries, the telegraph, file cabinets, and computer programming all give us different metaphors for thinking about how signs and communication work. But we've got a richer set of storage and communication technologies than ever before, which means we have a broader set of metaphors. We've got more metaphorical memory and processing power, kids!
Which means that we don't have to think of an alphabet as a permanent stone etching, an engraving on the heart, of what a linguistic sound looks like. We can think about it as a low-res copy, a functional representation, that flows in and out of our memory, gets remixed and mashedup and commented on and tagged by friends -- an evolving document.
I think it's a mistake to spend too much time dwelling on whether our current technology just introduces new distortions, because it inevitably does. It's just that asking language (which is what we're talking about) to give you something else is to ask language (even written language) to do something it does not really do. And that itself is three-quarters of the insight.
Monday, March 23, 2009
The other day, a group of my friends, including two other PhDs, discussed the high rate of depression among graduate students. "It's the stress," one said; "the money!" laughed another. But I made a case that it was actually the isolation, the loneliness, that had the biggest effect. After all, you take a group of young adults who are perversely wired for the continual approval that good students get from being in the classroom with each other, and then lock them away for a year or two to write a dissertation with only intermittent contact from an advisor. That's a recipe for disaster.
So I read Atul Gawande's account of the human brain's response to solitary confinement with an odd shock of recognition:
Among our most benign experiments are those with people who voluntarily isolate themselves for extended periods. Long-distance solo sailors, for instance, commit themselves to months at sea. They face all manner of physical terrors: thrashing storms, fifty-foot waves, leaks, illness. Yet, for many, the single most overwhelming difficulty they report is the 'soul-destroying loneliness,' as one sailor called it. Astronauts have to be screened for their ability to tolerate long stretches in tightly confined isolation, and they come to depend on radio and video communications for social contact...
[After years of solitary, Hezbollah hostage Terry Anderson] was despondent and depressed. Then, with time, he began to feel something more. He felt himself disintegrating. It was as if his brain were grinding down. A month into his confinement, he recalled in his memoir, "The mind is a blank. Jesus, I always thought I was smart. Where are all the things I learned, the books I read, the poems I memorized? There's nothing there, just a formless, gray-black misery. My mind's gone dead. God, help me."
He was stiff from lying in bed day and night, yet tired all the time. He dozed off and on constantly, sleeping twelve hours a day. He craved activity of almost any kind. He would watch the daylight wax and wane on the ceiling, or roaches creep slowly up the wall. He had a Bible and tried to read, but he often found that he lacked the concentration to do so. He observed himself becoming neurotically possessive about his little space, at times putting his life in jeopardy by flying into a rage if a guard happened to step on his bed. He brooded incessantly, thinking back on all the mistakes he'd made in life, his regrets, his offenses against God and family.
But here's the weird part -- all of this isolation actually serves to select for a particular personality type. This is especially perverse when solitary confinement is used in prisons -- prisoners who realign their social expectations for solitary confinement effectively become asocial at best, antisocial generally, and deeply psychotic at worst.
Everyone's identity is socially created: it's through your relationships that you understand yourself as a mother or a father, a teacher or an accountant, a hero or a villain. But, after years of isolation, many prisoners change in another way that Haney observed. They begin to see themselves primarily as combatants in the world, people whose identity is rooted in thwarting prison control.
As a matter of self-preservation, this may not be a bad thing. According to the Navy P.O.W. researchers, the instinct to fight back against the enemy constituted the most important coping mechanism for the prisoners they studied. Resistance was often their sole means of maintaining a sense of purpose, and so their sanity. Yet resistance is precisely what we wish to destroy in our supermax prisoners. As Haney observed in a review of research findings, prisoners in solitary confinement must be able to withstand the experience in order to be allowed to return to the highly social world of mainline prison or free society. Perversely, then, the prisoners who can't handle profound isolation are the ones who are forced to remain in it. "And those who have adapted," Haney writes, "are prime candidates for release to a social world to which they may be incapable of ever fully readjusting."
I think we just figured out why so many professors are so deeply, deeply weird.
Idris Elba, best known for playing Stringer Bell in seasons 1-3 of The Wire, is now playing Charles Minor, Michael's new boss on The Office. (Which, when you think about it, is essentially the same story David Simon would have told if he'd ever gotten around to the put-upon postmillennial office workers of America.)
Part of Stringer's conceit on The Wire is that he wants to turn drug dealing into a modern business. He wants even his front businesses to run well. But it's still dissonant, to say the least, to watch this Baltimore man-god walk among the paper salesmen in Scranton. Rex and the commenters at Fimoculous cracked me up.
Rex: Yeah, that totally threw me too: Stringer Bell on The Office last night...
kittyholmes: I guess he's finally using all those business classes.
jed: Well, he did run the copy shop.
Nancy Franklin on the not-so-secret geography of NBC's Kings:
Watching the show, you feel a tension as you try to decide whether it's holding a mirror up to the present or whether it's making an argument about where the world may soon be headed. We have already noticed, in the aerial establishing shots of Shiloh, that "Kings" is filmed in Manhattan, and that the city isn't just a film location. It's never stated, but it's clear that Shiloh was New York City, before it was destroyed to the point where even its name disappeared. There are inconsistencies that give you pause: the Time Warner Center is still standing -- in fact, it's the home of the King's court -- but the Empire State Building, I noticed with an actual start, is gone, as is the Chrysler Building. A tall building that resembles the planned Freedom Tower is (thanks to special effects) in midtown. The exterior of the palace is a well-known apartment building, the Apthorp, on the Upper West Side, a block from Zabar's and H & H Bagels. (We don't see those emporiums in the show, but I'm going to assume that they still exist in the world of "Kings"; otherwise, let me tell you, there is real cause for despair in the realm.)
I like the show, but it might be a bad sign for its longevity that even I, who made a point of watching and actually liked the pilot episode, missed the broadcast of episode two last night (and rewatched Lost online with my wife instead). Oops.
It's weird to talk about "the best show on TV" now that The Sopranos and The Wire are off the air, and the end of Battlestar Galactica brings that particular third-way contrarian option to a close.
There's the old Yeats joke: when Swinburne died, WBY said, "Now I'm king of the cats" -- and he was (probably) for the next thirty years. It's strange now that the new king of cats might actually be on broadcast TV rather than cable -- but Mad Men aside, that's where we seem to be -- and there are a LOT of genuinely ambitious network shows out there.
Rex makes the case for Dollhouse, which has indeed picked up. If you were (figuratively) buying stock in a show, it'd be a hot bet. But I'm going to stick with Lost in the drama category (no one does it like you), 30 Rock for character-based comedy (Liz Lemon is our decade's female answer to Homer Simpson), and The Daily Show/The Colbert Report hour for sheer cultural relevance -- simply put, nothing else is essential.
Sorry if those answers seem boring, but that's just how it is sometimes.
A.O. Scott on the new Neorealism in American cinema:
WHAT KIND OF MOVIES do we need now? It's a question that seems to arise almost automatically in times of crisis. It was repeatedly posed in the swirl of post-9/11 anxiety and confusion, and the consensus answer, at least among studio executives and the entertainment journalists who transcribe their insights, was that, in the wake of such unimaginable horror, we needed fantasy, comedy, heroism. In practice, the response turned out to be a little more complicated -- some angry political documentaries and earnest wartime melodramas made it into movie theaters during the Bush years, and a lot of commercial spectacles arrived somber in mood and heavy with subtext -- but such exceptions did little to dent the conventional wisdom.
And as a new set of worries and fears has crystallized in recent months -- lost jobs and homes, corroded values and vanished credit -- the dominant cultural oracles have come to pretty much the same conclusions... But what if, at least some of the time, we feel an urge to escape from escapism? For most of the past decade, magical thinking has been elevated from a diversion to an ideological principle. The benign faith that dreams will come true can be hard to distinguish from the more sinister seduction of believing in lies. To counter the tyranny of fantasy entrenched on Wall Street and in Washington as well as in Hollywood, it seems possible that engagement with the world as it is might reassert itself as an aesthetic strategy. Perhaps it would be worth considering that what we need from movies, in the face of a dismaying and confusing real world, is realism.
Postwar Italy turned inward after fascism, wartime defeat, and economic collapse to create some of the greatest films in history, by Roberto Rossellini, Luchino Visconti, the young Federico Fellini, and possibly my favorite postwar filmmaker, Vittorio De Sica. These films, usually using amateur actors (with glorious exceptions like the great Anna Magnani), location settings, and astonishingly free yet lucid cinematography and editing, portrayed hidden corners of the world from the networks of the Italian resistance to the pawnshops of the impoverished Italian (non)working classes.
The best heir to De Sica's throne is doubtless Ramin Bahrani, whose debut Chop Shop is the best new independent movie I've seen in years. He's got a new one, Goodbye Solo, that also looks great. If Chop Shop was Bahrani's Bicycle Thief, then Goodbye Solo looks like his Umberto D.
Thursday, March 19, 2009
When I first heard that Sopranos creator David Chase was making an HBO miniseries about the movie business, I thought it would be a roman à clef or something entertaining but insidery like The Player or "Entourage." But this actually sounds pretty cool:
The series, ‘Ribbon of Dreams,’ will begin with the behind-the-scenes roles played by two fictional characters — one a cowboy with some violence in his past, the other a mechanical engineer — who work for the famous early film director D. W. Griffith. It will follow them and their professional heirs through the development of the movie business...
The project is expected to cover each period of Hollywood movies, beginning with silent westerns and comedies, through the golden era of the studio system, to the emergence of auteur film directors in the 1970s, and up to the current mix of studio blockbusters and independent films. The cast of characters will also include many of the biggest names of Hollywood’s past, including John Wayne and Bette Davis...
I love this stuff, and I bet I will be very into this.
Wednesday, March 18, 2009
Sparta had a great army, lots of places had great olive oil, and plenty of city-states had plebiscite democracy. So why was life in Athens so great?
[Josiah] Ober's hypothesis is that Athens's participatory institutions essentially turned the city into a knowledge-generating and knowledge-aggregating machine, and also supported the effective deployment of useful knowledge over time. Athenian institutions and culture functioned so that the right useful knowledge made it to the right people at the right time, resulting in the production of consistently better-than-average decisions. Athenian institutions and culture also functioned to provide an effective balance between innovation, on the one hand, and, on the other, learning or routinization, which brings efficiency. To overcome the problem of dispersed and latent knowledge, the Athenians used "networking and teaming." To overcome alignment problems, they built up stores of common knowledge through extensive publicity mechanisms and an emphasis on "interpresence"--frequent and large public gatherings--and "intervisibility" in public spaces, the capacity of all members of an audience to see each other as well as the speaker; and these stores of common knowledge worked particularly well to sustain systems of reward and sanction able to motivate ordinary citizens. To minimize transaction costs in areas such as trade, they standardized rules and exchange practices and widely disseminated knowledge about them. The Athenians invested more resources than did their competitors in ensuring that their laws did not contradict each other, and in archiving and widely publishing final versions.
One particular example that the reviewer Danielle Allen (aka The Smartest Classicist I Know) examines is a ship-building competition authorized by the citizens of Athens: not only did public competitions like these encourage innovation in building, but since they were publicly judged, they helped disseminate expert knowledge throughout the populace, as the people learned what made one ship better than another.
Allen also looks long at what lessons American democracy can learn from Athens; one big (if obvious) conclusion is that the polis is a lot more nimble than an empire or even a republic, but from the interconnected micropolitical structures of the polis, one might actually be able to sustain the macropolitics of a democratic republic:
As Ober notes, the immediate usefulness of the Athenian model pertains not directly to nation-states that are vastly larger than the city-state of Athens, with its population of approximately 250,000, but to the wide variety of smaller scale organizations that make up the sub-units of any given nation-state. To unleash the full value of participatory democracy at the level of the nation-state, a citizenry would do best to focus on tapping participatory democracy at the local level and throughout the variety of organizational types that make up modern society. Then there would be the further question of how well each of these sub-units is connected to the rest. If participatory democratic practices on a smaller scale and in various contexts do indeed increase the knowledge resources of the citizenry of a nation-state as a whole, then the structures of representative government, too, should function better.
It's a very Athenian conclusion, that democracy is a function of knowledge (and vice versa), but I think it's a welcome one.
Sunday, March 15, 2009
The challenge however is not to reshape Paris, but rather to extend its inherent beauty to its outskirts, les banlieues -- a web of small villages, some terribly grand and chic (Neuilly, Versailles, Saint Mandé, Vincennes, Saint Germain-en-Laye), others modest and provincial-looking (Montreuil, Pantin, Malakoff, Montrouge, Saint Gervais) and others still, socially ravaged and architecturally dehumanised (La Courneuve, Clichy-sous-bois). And also to link them. But how do you bring together so many different styles and the city's "enormous disparity", as Richard Rogers calls it, into one Grand Paris -- especially when the city is so clearly defined geographically by its gates, shadows of former fortifications, and now le périphérique, the circular road encasing Paris? The simple answer is: by being bold. But also by understanding the fabric of French society and its psyche...
As a Parisian born and bred, I thought the most convincing presentation came from Parisian architect and sometime presidential candidate Roland Castro. He seems the only one to really understand the Parisian mentality, the importance of architecture and politics, grandeur and charm, poetry and citizenship. He not only suggests moving the Elysée Palace to the tough north-eastern suburbs, but also proposes to create new cultural landmarks and governmental buildings, together with a New York-style Central Park on the grim housing project of La Courneuve. The idea is to inject grandeur (as conveyed by the cultural and official institutions) and if possible, beauty, to Paris's many environs.
Via io9.com. You can read about the actual Haussmann here.
Cross-posted (as always) at Snarkmarket.
Saturday, March 14, 2009
During the wrenching transition to print, experiments were only revealed in retrospect to be turning points. Aldus Manutius, the Venetian printer and publisher, invented the smaller octavo volume along with italic type. What seemed like a minor change -- take a book and shrink it -- was in retrospect a key innovation in the democratization of the printed word, as books became cheaper, more portable, and therefore more desirable, expanding the market for all publishers, which heightened the value of literacy still further...
That is what real revolutions are like. The old stuff gets broken faster than the new stuff is put in its place. The importance of any given experiment isn't apparent at the moment it appears; big changes stall, small changes spread. Even the revolutionaries can't predict what will happen. Agreements on all sides that core institutions must be protected are rendered meaningless by the very people doing the agreeing. (Luther and the Church both insisted, for years, that whatever else happened, no one was talking about a schism.) Ancient social bargains, once disrupted, can neither be mended nor quickly replaced, since any such bargain takes decades to solidify.
And so it is today. When someone demands to know how we are going to replace newspapers, they are really demanding to be told that we are not living through a revolution. They are demanding to be told that old systems won't break before new systems are in place. They are demanding to be told that ancient social bargains aren't in peril, that core institutions will be spared, that new methods of spreading information will improve previous practice rather than upending it. They are demanding to be lied to.
There are fewer and fewer people who can convincingly tell such a lie.
Also see Shirky ventriloquize our own Matt Thompson: "Society doesn't need newspapers. What we need is journalism. For a century, the imperatives to strengthen journalism and to strengthen newspapers have been so tightly wound as to be indistinguishable. That's been a fine accident to have, but when that accident stops, as it is stopping before our eyes, we're going to need lots of other ways to strengthen journalism instead."
Friday, March 13, 2009
Jon Stewart's The Daily Show has never, in my memory, turned its entire half hour into an interview of a single guest -- and they get huge guests. But that's what they did yesterday for CNBC's Jim Cramer. And it's a doozy.
Last week, as part of its Santelli-inspired critique of CNBC, Stewart ran two series of clips of Cramer offering pretty terrible financial advice, first with a bunch of other CNBC pundits, and then (after Cramer loudly and publicly complained) of Cramer by himself. In this interview, Stewart shows unaired clips of Cramer (who used to run a hedge fund) from 2006:
- talking about how easy it is to manipulate the markets through the media;
- admitting that he used to do it, particularly to make money on a short sell;
- suggesting that other hedge fund managers do the same, as it's a fast and satisfying way to make money;
- offering specific advice on how to do this right then with a particular stock (Apple Computer).
As Stewart says, we want Jim Cramer the journalist to protect us from Jim Cramer the financial schemer. Instead of being a watchdog, CNBC became a cheerleader.
The entire interview is amazing. Let me quote James Fallows and Sean Quinn on what went down.
Fallows, "It's true: Jon Stewart has become Edward R. Murrow":
Yes, it is clichéd to praise Stewart as the "true" voice of news; and, yes, it is too piñata-like to join the smacking of CNBC.... But I found this -- the Stewart/Cramer slaughter -- incredible...
Just before leaving China -- ie, two days ago -- I saw with my wife the pirate-video version of Frost/Nixon, showing how difficult it is in real time to ask the kind of questions Stewart did. I know, Frost was dealing with a former president. Still, it couldn't have been easy to do what Stewart just did. Seeing this interview justified the three-day trip in itself.
Sean Quinn, "Stewart Destroys CNBC, Cramer, Disses 'Doucheborough'":
On the day in October 2004 that Jon Stewart made up his mind to end CNN's Crossfire, viewers didn't have advance warning. By contrast, last night's epic takedown of CNBC and ~~Fast Money~~ Mad Money host Jim Cramer built over an eight-day period, including the advance hype of a Thursday morning front-page, above-the-fold story on America's most widely-circulated newspaper, USA Today.
It did not disappoint. In addition to an extensive confrontation that included footage of Cramer admitting to the ease of manipulating markets, Stewart indicted CNBC's sins of commission in fueling hype that led to the economic crisis.
Quinn also pulls the money quote from Stewart:
I understand you want to make finance entertaining, but it's not a (bleeping) game. And when I watch that, I get, I can't tell you how angry that makes me. Because what it says to me is: you all know. You all know what's going on. You know, you can draw a straight line from those shenanigans to the stuff that was being pulled at Bear, and AIG, and all this derivative market stuff that is this weird Wall Street side bet.
Watch it, it's worth it.
Crossposted (w/clips!) at Snarkmarket.
Thursday, March 12, 2009
My friend (and fellow Penn Comparative Literature alumnus) Mark Sample on what's uncritical about the critical essay:
[C]ritical thinking stands in opposition to facile thinking. Critical thinking is difficult thinking. Critical thinking is being comfortable with difficulty. And this is something else that separates the expert learner from the novice learner: experts are at ease with uncertainty, while novices are uncomfortable with what they don’t understand, and they struggle to come up with answers — and quickly come up with answers — that eliminate complexity and ambiguity. The historian and cognitive psychologist Samuel Wineburg calls this tendency to seek answers over questions “schoolish” behavior, because it is exactly the kind of behavior most schools reward.
I want my students to break out of this schoolish mode of behavior. Instead of thinking like students — like novices, I want them to think more like experts, and I must coach them to do so. It requires intellectual risk-taking on their part, and on my part, it requires mindfulness, patience, and risk-taking as well.
I love what Mark's asking his students to do instead:
This is the primary reason I’ve integrated more and more public writing into my classes. I strive to instill in my students the sense that what they think and what they say and what they write matters — to me, to them, to their classmates, and through open access blogs and wikis, to the world.
In addition to making student writing public, I’ve also begun taking the words out of writing. Why must writing, especially writing that captures critical thinking, be composed of words? Why not images? Why not sound? Why not objects? The word text, after all, derives from the Latin textus, meaning that which is woven, strands of different material intertwined together. Let the warp be words and the weft be something else entirely.
With this in mind, I am moving away from asking students to write toward asking them to weave. To build, to fabricate, to design. I don’t want my students to become miniature scholars. I want them to be aspiring Rauschenbergs, assembling mixed media combines, all the while through their engagement with seemingly incongruous materials, developing a critical thinking practice about the process and the product.
"Aspiring Rauschenbergs!" I'm way more committed to writing (writ large) and literature (read wide) than Mark is -- but still, this makes me feel even more excited to seek out new modes of anti-teaching. Let's stay on the move.
Increasingly, Chinese people don't actually have to write (rite? right?) out these characters by hand. More and more, they key them in with mobile phones or at computers. And when they do that, it's just as easy to 'write' a traditional-style, complex, information-dense character as a streamlined new one. (Reason: you key in clues about the character, either its pronunciation or its root form, and then click to choose the one you want.) So -- according to current arguments -- the technology of computers and mobile phones could actually revive an important, quasi-antique style of writing.
Hmm -- Fallows is definitely one-up on me, since he reads Chinese and I don't, but I wonder whether other considerations (e.g. screen size and corresponding size of characters) might still put some pressure towards some kind of simplification of the character form. A lot of that information-density just turns into noise if it has to be packed into a tiny space.
Alternatively, kids (it's always kids, at first) might start using "abbreviations" that minimize the number of keystrokes required to type useful phrases -- maybe by not choosing the precisely "correct" character but an approximation of it (the root or a related pronunciation or whatever), like our "lol," "brb," "btw," etc.
In short, technology rarely has a purely stabilizing effect on tradition -- it might help block a particular chirographic attempt at reform/revolution, but only to displace it in favor of its own matrix. (And yes, I just quoted Spock from The Wrath of Khan.)
Wednesday, March 11, 2009
One reason why Alan Moore (like lots of other people) may have thought that Watchmen was unfilmable was the use of subtle associations and tiny messages that could only be revealed by long scrutiny of the individual pages and panels. According to Moore, in Watchmen we see:
a sort of “under-language” at work ... that is neither the “visuals” nor the “verbals,” but a unique effect caused by a combination of the two. A picture can be set against text ironically, or it can be used to support the text, or it can be completely disjointed from the text - which forces the reader into looking at the scene in a new way.... the reader has the ability to stop and linger over one particular “frame” and work out all of the meaning in that frame or panel. (Quoted in "Reading Space in Watchmen.")
Well, movies don't allow that same kind of attention at full speed in the theater. They DO allow it in the freeze-frame -- and Zack Snyder's Watchmen title sequence actually slows down and freezes the frame for you. Now Meredith Woerner's got the goods on the easter eggs in the title sequence for Watchmen, and at least one is a doozy:
The opening shot, with Nite Owl giving a fist full of justice has a big Batman reference. First, check out the posters to the right. Look familiar? And isn't that Mr. and Mrs. Wayne at the back entrance of the opera, being saved from a bloody death? And according to commenter Rainbucket, the opera bills say: "Die Fledermaus" (The Bat). So can we safely come to the conclusion that the original Nite Owl stopped Batman from popping up in their universe?
Technologies have a social dimension beyond their mere mechanical performance. We adopt new technologies largely because of what they do for us, but also in part because of what they mean to us. Often we refuse to adopt technology for the same reason: because of how the avoidance reinforces, or crafts our identity.
Most of Kelly's article focuses on tool cultures among Highland tribes in New Guinea, but Kelly's also recently written about technology adoption among the Amish -- which is, of course, unusually explicit about the relationship between technology and group identity.
I'm not sure about this hedge, though:
In the modernized west, our decisions about technology are not made by the group, but by individuals. We choose what we want to adopt, and what we don’t. So on top of the ethnic choice of technologies a community endorses, we must add the individual layer of preference. We announce our identity by what stuff we use or refuse. Do you twitter? Have a big car? Own a motorcycle? Use GPS? Take supplements? Listen to vinyl? By means of these tiny technological choices we signal our identity. Since our identities are often unconscious we are not aware of exactly why we choose or dismiss otherwise equivalent technology. It is clear that many, if not all, technological choices are made not on the technological benefits alone. Rather technological options have unconscious meaning created by social use and social and personal associations that we are not fully aware of.
But aren't these choices still deeply social? Partly it's about access: if you don't have daylong access to the web (or access to the web at all) you ain't twittering, son. But you're also not likely to do it if your friends and coworkers and neighbors don't twitter. Group identity is a lot more complex in the modernized west, sure -- but pure individual choice it ain't. In fact, our adoption of technology actually helps us form new groups and social identities that are not quite tribal/ethnic -- or it helps us reinforce those bonds.
P.S.: My title, "tool culture," isn't from Kelly's article, but from paleoanthropology. One of the things I love about the study of groups like the Neanderthals is that we have evidence of their tool use long after we have fossilized remains. We can actually distinguish between Neanderthal and human settlements based on their tools.
Neanderthals and Homo sapiens definitely coexisted. People aren't sure whether Neanderthals interbred with modern humans or not, which makes it hard to know when exactly the Neanderthals died out. Wouldn't it be interesting, though, if a group of anatomically modern humans adopted Neanderthal tools? That technologies could reach not just across ethnicities, but across species as well?
Sunday, March 08, 2009
Robert Sullivan, in the New York Times, has some suggestions to remedy the venial sins of cyclists:
NO. 1: How about we stop at major intersections? Especially where there are school crossing guards, or disabled people crossing, or a lot of people during the morning or evening rush. (I have the law with me on this one.) At minor intersections, on far-from-traffic intersections, let's at least stop and go.
NO. 2: How about we ride with traffic as opposed to the wrong way on a one-way street? I know the idea of being told which way to go drives many bikers bonkers. That stuff is for cars, they say. I consider one-way streets anathema; they make for faster car traffic and more difficult crossings. But whenever I see something bad happen to a biker, it's when the biker is riding the wrong way on a one-way street.
There will be caveats. Perhaps your wife is about to go into labor and you take her to the hospital on your bike; then, yes, sure, go the wrong way in the one-way bike lane. We can handle caveats. We are bikers.
NO. 3: How about we stay off the sidewalks? Why are bikers so incensed when the police hand out tickets for this? I'm only guessing, but each sidewalk biker must believe that he or she, out of all New York bikers, is the exception, the one careful biker, which is a very car way of thinking.
NO. 4: How about we signal? Again, I hear the laughter, but the bike gods gave us hands to ring bells and to signal turns. Think of the possible complications: Many of the bikers behind you are wearing headphones, and the family in the minivan has a Disney DVD playing so loudly that it's rattling your 30-pound Kryptonite chain. Let them know what you are thinking so that you can go on breathing as well as thinking.
As a pedestrian and transit rider, I heartily concur -- cyclists shouldn't believe themselves incapable of doing harm just because they are marginally less sucky than motorists. And longtime readers, if you're wondering, yes -- I am still pissed off at Will Wilkinson.
Hat tip to LF, Hong Kong Snarkorrespondent.
New York to San Francisco in one week on an Amtrak sleeper car. My wife forwarded me this email with one sentence: "This is my dream trip."
There are two reasons why people lose economic confidence. In the first case, there's enough instability that you just don't know what's going to happen. In the second case, you have a pretty good idea about tomorrow, but you know that things are going to be genuinely bad.
If you know things are going to be genuinely bad, then given sufficient resources, you can prepare for them: save money, make a budget, gather information and make plans. In particular, if you know (for example) that your income is going to drop or your rent is going to go up by a preset amount, you can budget accordingly. But if you really have no idea about tomorrow -- whether you could get your pay cut, or get outright fired, whether gasoline prices could halve or double -- then you just lurch from day to day, not knowing quite what to do, afraid to spend, afraid to save, generally, afraid.
This is where colleges and universities are now:
Colleges facing a financial landscape they have never seen before are trying to figure out how many students to accept, and how many students will accept them.
Typically, they rely on statistical models to predict which students will take them up on their offers to attend. But this year, with the economy turning parents and students into bargain hunters, demographics changing and unexpected jolts in the price of gas and the number of applications, they have little faith in those models.
"Trying to hit those numbers is like trying to hit a hot tub when you're skydiving from 30,000 feet," said Jennifer Delahunty, dean of admissions and financial aid at Kenyon College in Ohio. "I'm going to go to church every day in April."
As the article points out, this uncertainty generally favors students -- colleges are throwing the kitchen sink at applicants: aid packages, higher admit rates, etc. -- except in those cases where universities (like in California) don't know whether they can seat everyone they admit in the event of a budget failure.
It's also pretty hellish on teaching applicants, too. I'm on the academic job market this year, and as the first wave of the economic catastrophe hit endowments and state budgets in October and November, schools cancelled tenure-track searches, suddenly uncertain about whether they could invest in long-term positions. Don't worry, everyone said. There will be plenty of visiting and term-limited positions -- after all, the schools still need someone to teach the courses, right? Then, the enrollment projections got all screwy. Before, schools weren't sure whether they could take a chance on hiring you for a lifetime. Now they don't know whether they can hire you for a year, or if there will be one or ten dozen students when you get there.
On the other hand, both of the two schools where I already work are unusually aggressive in getting me to stick around for next year. I'd like to think it's just because I am so awesome, but I know that economic catastrophe puts the entire already-migratory part-time teaching corps in flux: folks content to work for not a lot of money suddenly find their spouses out of work or forced to move. Some places know that they're going to have way more students and less time and money to devote to them -- so any teacher they can count on to reliably fill a classroom for adjunct money is like a U.S. treasury note: sure, it's not ideal, but where else are you going to put your cash?
It's all gone screwy, and nobody seems to know what's going to happen -- in the one industry where we still enjoy a competitive advantage with the rest of the world. And it's made an already weird job process a hundred times weirder.
Cross-posted at Snarkmarket.
Saturday, March 07, 2009
In English, the names of (some) vowel sounds are given by a smaller subset of those sounds -- so "A" involves one of the pronunciations of "a," ditto "E," "I," and "O," with the exception of "U," which by all rights ought to be "oo" instead of "yoo." Let's just chalk this up to the Y-as-an-assistant-vowel phenomenon, whereby the "U" in words like "cute" or "fume" is pronounced "yoo." And "I" is a diphthong, but that's neither here nor there.
Consonants are generally either given by a pronunciation of a consonant plus a vowel ("B" = "bee") or a vowel plus the consonant ("S" = "ess"). "W" is weird, as is "H," and "Y" is and always shall be a mess. "Q" is, surprisingly, not bad, even if it slights the typical sound of the consonant -- arguably, so does "C."
Consonants are even harder than vowels to articulate completely in isolation, so it seems obvious that you need SOME vowel with the consonant. But why do some letters get the vowel in front and others the vowel behind? And while most letters get the short e in front or the long E behind, this isn't universal -- "J" and "K" could just as easily be "Jee" and "Kee" (assuming that "G" was "ghi" or "gay" or "goo" or something else).
You could say that as a general rule, names of letters avoid being homonyms with meaningful words, but "B," "C," "J," "P," and "T" violate this rule -- in the case of "B," pretty drastically.
I'm willing to entertain the possibility that there is some partial motivation for the sounds we use -- maybe "M," "N," or "S" appear more often at the end of words than other letters, so they get known by an end-consonant sound.
Think with me -- imagine an alphabet where all the names of consonants were reversed, so that:
"B" = "ebb"
"C" = "ack" / K = "eck"
"D" = "edd,"
"M" = "mee"
"N" = "nee"
and so on. What would be wrong with that pronunciation of the alphabet?
Cross-posted at Snarkmarket.
Friday, March 06, 2009
Ron Charles looks for college radicals -- er, kids reading radical books:
Here we have a generation of young adults away from home for the first time, free to enjoy the most experimental period of their lives, yet they're choosing books like 13-year-old girls -- or their parents. The only specter haunting the groves of American academe seems to be suburban contentment.
Where are the Germaine Greers, the Jerry Rubins, the Hunter Thompsons, the Richard Brautigans -- those challenging, annoying, offensive, sometimes silly, always polemic authors whom young people used to adore to their parents' dismay? [Abbie] Hoffman's manual of disruption and discontent -- "Steal This Book" -- sold more than a quarter of a million copies when it appeared in 1971 and then jumped onto the paperback bestseller list. Even in the conservative 1950s, when Hemingway's plane went down in Uganda, students wore black armbands till news came that the bad-boy novelist had survived. Could any author of fiction that has not inspired a set of Happy Meal toys elicit such collegiate mourning today? Could a radical book that speaks to young people ever rise up again if -- to rip-off LSD aficionado Timothy Leary -- they've turned on the computer, tuned in the iPod and dropped out of serious literature?
Gotta love that "13-year-old girls" crack -- because 13-year-old boys, you know, they're all reading Middlemarch. Is Steal This Book "serious literature" now? This whole schtick is some kind of weird fever dream, muddling nostalgias, a botched amalgam of Thomas Frank and Harold Bloom. It can't quite make up its mind which version of cultural decay it wants to endorse.
Speaking from ground zero, kids are as hard up for reasonably radical social messages as ever -- remember No Logo? Remember Fight Club? My students do. It wasn't so long ago.
Ultimately, though, radical literature is only as strong as the social movements that nourish it. Malcolm X, Eldridge Cleaver, Hunter S. Thompson, and co. had lots of readers because Something Big was happening, people were organizing and doing things, living new ways and trying new politics, and other people wanted to know what it was all about. If people are tuning into the internet rather than books, or rather than the newspaper, or rather than television or anything else, it's not least because it's on the internet that they're finding out all about what's new. Which means that all of those other media begin to serve a slightly different function. I think escapist YA lit is stealing more of its audience from television and the movies than campus radicals, but that's just my guess -- which is apparently as good as Charles's.
Cross-posted at Snarkmarket.
Posted by Tim at 7:25 PM
Thursday, March 05, 2009
There are so many reasons to enjoy Maximum PC's"Computer Data Storage Through the Ages -- From Punch Cards to Blu-Ray," but I like the way it relates the technologies to the broader culture. For instance:
Elvis Presley, Buddy Holly, and magnetic tape all rose to prominence in the 1950s, and it was the latter that helped shape the recording industry. Magnetic tape also changed the computing landscape by making long-term storage of vast amounts of data possible. A single reel of the oxide-coated half-inch tape could store as much information as 10,000 punch cards, and most commonly came in lengths measuring anywhere from 2400 to 4800 feet. The long length presented plenty of opportunities for tears and breaks, so in 1952, IBM devised bulky floor-standing drives that made use of vacuum columns to buffer the nickel-plated bronze tape. This helped prevent the media from ripping as it sped up and slowed down.
Likewise, the audio quality of cassette tapes improved, "ushering in the era of boom boxes and parachute pants (thanks, M.C. Hammer)." And "the floppy disk might one day go down as the only creature as resistant to extinction as the cockroach."
But my favorite digital storage media, hands-down, is paper tape:
Similar to punch cards, paper tape contained patterns of holes to represent recorded data. But unlike its rigid counterpart, rolls of paper tape could feed much more data in one continuous stream, and it was incredibly cheap to boot. The same couldn't be said for the hardware involved. In 1966, HP introduced the 2753A Tape Punch, which boasted a blistering fast tape punch speed of 120 characters per second and sold for $4,150. Yikes!
One thing I've always wondered about these early paper-based computer programs is whether they were copyrighted -- and whether that, in part, led to the adoption of paper. One of Thomas Edison's clever exploitations of copyright loopholes was to take celluloid moving pictures (which weren't initially eligible for copyright) and copy them onto a long, continuous paper print -- this meant that an entire feature film could be copyrighted as a single "photograph."
I also wonder if/why early computer programmers didn't use celluloid instead of paper. You can move it a lot faster than paper tape, and it's generally stronger -- except, perhaps, if you punch it with lots of little holes.
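Incidentally, the 10,000-punch-card comparison above is easy to sanity-check. Here's a back-of-envelope calculation in Python, assuming the standard 80-character IBM punch card (the article doesn't give that figure, so treat it as my assumption):

```python
# Rough capacity of one early magnetic tape reel, using the article's
# claim that a single reel held as much data as 10,000 punch cards.
# Assumption (mine, not the article's): a standard IBM card held 80 characters.
CHARS_PER_CARD = 80
CARDS_PER_REEL = 10_000

reel_chars = CHARS_PER_CARD * CARDS_PER_REEL
print(f"~{reel_chars:,} characters per reel (~{reel_chars / 1024:.0f} KiB)")
# prints: ~800,000 characters per reel (~781 KiB)
```

Less than a megabyte per reel -- which puts those 2400-foot lengths, and the floor-standing vacuum-column drives needed to keep the tape from snapping, in perspective.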
Cross-posted at Snarkmarket.
Kottke reports that there's a "Pepsi Natural" on the way to market -- featuring cane sugar in lieu of corn syrup, and served in that most magnificent of beverage transportation devices, a solid glass bottle.
Needless to say, I approve of both of these retro-novations. In fact, I make semi-regular trips to my local Mexican wholesaler to pick up soda served this way. But I'm strictly a Coca-Cola man. Let's hope Coke follows Pepsi's lead, and soon.
This is the part of the post where I quote Andy Warhol:
What's great about this country is America started the tradition where the richest consumers buy essentially the same things as the poorest. You can be watching TV and see Coca-Cola, and you can know that the President drinks Coke, Liz Taylor drinks Coke, and just think, you can drink Coke, too. A Coke is a Coke and no amount of money can get you a better Coke than the one the bum on the corner is drinking. All the Cokes are the same and all the Cokes are good.
Let's make sugar cheap, corn expensive, and bring back those Cokes!
Some time yesterday afternoon, the six-story Cologne Archives, housing documents dating as far back as the tenth century, as well as the private papers of writers such as Karl Marx, Hegel, and Heinrich Böll, and also all of the minutes taken at Cologne town council meetings since 1376, collapsed as if hit by a missile, only there was no missile, but rather, some sort of structural flaw that caused the building to start cracking and tumbling down. Most visitors, plus some construction workers on the roof, were able to get out in time, although two or three persons may be buried underneath the rubble. Ironically, the Archives contained many documents that had been recuperated from library buildings destroyed by Allied bombing during the Second World War, and a small nuclear bomb-proof room that had been constructed in the basement to house the most rare materials was, at the time of the building's collapse, only being used to store cleaning materials...
Ever since about 1999, when I first discovered what had been going on in Chechnya, I have been kind of obsessed with the city of Grozny and its "disappearance," so to speak, as well as with the ways in which Grozny's destruction has not registered as more than a "blip" on the consciousness of the international media [ask most people, even educated people, if they know what happened in Grozny and they will look at you, like, huh?; show them the photographs, especially the ones in black and white, and ask them to guess where they are, and most answer "Dresden"], and I have returned often to this site as a way of thinking through certain questions that have to do with ruins, traumatic history, and memory. The Russian government purposefully refused to re-build the city for almost a decade as a "lesson" to the Chechen rebels and yet many Chechens [not counting the 500,000 or so displaced by the bombings who chose to migrate elsewhere, such as Georgia] continued living there in pretty much post-apocalyptic living conditions [a perfect breeding ground, too, for the suicide terrorism that soon flourished there]. More recently, Russia has rebuilt the city but at such lightning speed that the whole place looks like one of those towns that spring up overnight near Disney World in Florida.
Wednesday, March 04, 2009
Starting Wednesday, owners of these Apple devices can download a free application, Kindle for iPhone and iPod Touch, from Apple’s App Store. The software will give them full access to the 240,000 e-books for sale on Amazon.com, which include a majority of best sellers.
The move comes a week after Amazon started shipping the updated version of its Kindle reading device. It signals that the company may be more interested in becoming the pre-eminent retailer of e-books than in being the top manufacturer of reading devices.
But Amazon said that it sees its Kindle reader and devices like the iPhone as complementary, and that people will use their mobile phones to read books only for short periods, such as while waiting in grocery store lines.
‘We think the iPhone can be a great companion device for customers who are caught without their Kindle,’ said Ian Freed, Amazon’s vice president in charge of the Kindle. [emphasis mine]
Mr. Freed said people would still turn to stand-alone reading devices like the $359 Kindle when they want to read digital books for hours at a time. He also said that the experience of using the new iPhone application might persuade people to buy a Kindle, which has much longer battery life than the iPhone and a screen better suited for reading.
I think this is pretty cool, and it can potentially benefit everybody -- if reading e-books on the iPhone takes off, iTunes could make a play for the market. In the meantime, it might even help Apple sell some iPhones -- for Apple, the money's in the hardware. Meanwhile, Amazon gets to take a crack at a bunch of readers who can now read e-books on a device that, whatever its relative limitations for reading, is one they already own.
As the only Kindle-less Snarkmaster, let me say this: I'd really like a freeware Kindle Reader for my MacBook Pro. I like to read to relax, sure; but I also like to read where I do my work (a good deal of which involves reading books). I'm sure whatever prohibitions you'd wind up having to put on the books (no cut-and-pasting?) would make the experience stink. But it is one I would be willing to accept.
Let me put forward this thesis. There will be a lot of portable digital reading devices in the near future: dedicated readers, phones and PDAs, digital paper that you can wad up and throw away, tiny projectors that can use any sufficiently bright surface. But the most important one is and will continue to be the laptop computer. People in the electronic reading business need to continue to think about how they can make that experience both better and sustainable.
And let me also advance thesis #2: Don't let the race to greater portability convince you that this is the end of the game. We need software and hardware that take advantage of BIG reading surfaces -- from the TV-sized screen in your kitchen or living room to Penn Station and the Library of Congress. We don't all always read tucked away in our own private worlds, nor should we -- sometimes reading needs to be a spectacle, on a big public wall, where you can always be dimly aware of it, where it can't ever be fully ignored.
Tuesday, March 03, 2009
A fistful of education-related tabs that have been sitting in my RSS reader, waiting for me to say something insightful about them:
- The Library Web Site of the Future (Inside Higher Ed): "Several years ago academic institutions shifted control of their Web sites from technology wizards to marketing gurus. At the time there was backlash. The change in outlook was perceived as a corporate sellout, a philosophical transformation of the university Web site from candid campus snapshot to soulless advertiser of campus wares to those who would buy into the brand... I was one of the resisters. Now I think the marketing people got it right. The first thing librarians must do after ending the pretense that the library Web site succeeds in connecting people to content is understand how and why the institutional homepage has improved and what we can learn from it. Doing so will allow academic libraries to discover answers to that first question: how to create user community awareness about the electronic resources in which the institution heavily invests." My thoughts: Isn't it weird to have a portal at all? Why not something like Firefox's Ubiquity, that just lets you type "pubmed liver cancer" to connect directly to the resource? (Note: part of the genius of Ubiquity is that it shows you what commands are possible! It is potentially more user-friendly than any drilldown portal.)
- To Keep Students, Colleges Cut Anything But Aid (New York Times): "The increases highlight the hand-to-mouth existence of many of the nation’s smaller and less well-known institutions. With only tiny endowments, they need full enrollment to survive, and they are anxious to prevent top students from going elsewhere. Falling even a few students short of expectations can mean laying off faculty, eliminating courses or shelving planned expansions. 'The last thing colleges and universities are going to cut this year is financial aid,' said Kathy Kurz, an enrollment consultant to colleges. 'Most of them recognize that their discount rates are going to go up, but they’d rather have a discounted person in the seat than no one in the seat.'" My thoughts: It's weird. If students don't enroll, we'll have to lay off faculty. So, in order to pay for an increased aid budget, we must lay off faculty.
- In Tough Times, Humanities Must Justify Their Worth (NYT): "As money tightens, the humanities may increasingly return to being what they were at the beginning of the last century, when only a minuscule portion of the population attended college: namely, the province of the wealthy. That may be unfortunate but inevitable, Mr. Kronman said. The essence of a humanities education — reading the great literary and philosophical works and coming 'to grips with the question of what living is for' — may become 'a great luxury that many cannot afford.'" My thoughts: Boooooo. This article stinks.
- See Also: Siamese Twins (Wyatt Mason/Harpers): "Fowler’s Modern English Usage, in any of its incarnations, is pure pleasure. There’s doubtless a medicinal value to its entries, but they entertain so deeply and purely that it all goes down very sweetly. Over the years, I’m sure I’ve read it more for pleasure than with purpose, less in the hope of resolving a confusion over 'pleonasm' than to discover that “pleonasm” was something at all. Where the New Oxford American Dictionary defines the term as 'the use of more words than are necessary to convey meaning, either as a fault of style or for emphasis,' Fowler’s offers a little lesson." My thoughts: I love this.
- Collective Graduate School Action (The Economist): "If you're going to go back to school, now is the time to do it. Not only is the opportunity cost of the time spent extremely low—wages aren't likely to rise any time soon, and there may not be a job available anyway—but so too is the opportunity cost of the money invested. What, you'd rather have that tuition sitting in the market right now? Or in a home?" My thoughts: Clearly, it depends on the school and your goals. But not everyone should listen to that siren song. I entered graduate school during the last Big Recession. Now I'm leaving during the next Great Depression. There are no sure-fire ways to ride these out -- and a dissertation can be as much an anchor as a lifeboat.
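For what it's worth, the Ubiquity-style shortcut I floated in the first item above -- type "pubmed liver cancer" and land directly on the resource -- is at bottom just a dispatch from a command word to a URL template. A minimal sketch in Python; the command table and URLs here are my own illustrations, not Ubiquity's actual configuration:

```python
from urllib.parse import quote_plus

# Hypothetical command table: a short command word -> a search-URL template.
COMMANDS = {
    "pubmed": "https://pubmed.ncbi.nlm.nih.gov/?term={q}",
    "worldcat": "https://www.worldcat.org/search?q={q}",
}

def dispatch(line: str) -> str:
    """Expand 'pubmed liver cancer' straight into the resource's search URL."""
    cmd, _, query = line.partition(" ")
    return COMMANDS[cmd].format(q=quote_plus(query))

print(dispatch("pubmed liver cancer"))
# prints: https://pubmed.ncbi.nlm.nih.gov/?term=liver+cancer
```

The genuinely clever part of Ubiquity -- showing you which commands are possible as you type -- would sit on top of a lookup like this, but the core is no more than a dictionary.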
Monday, March 02, 2009
Where are the antiquaries of yesteryear? Do they now collect twentieth century pulp fiction? Classic sci-fi? Modernist design magazines? Is it too expensive to collect earlier works? Are collectors and antiquaries the same thing, anyway?
Part of a longer, typically smart post about amateur scholars' access to materials -- particularly those electronic databases for which colleges and universities pay through the nose. Vive Digital Humanism!
The mutual dependency of city and suburb is both physical and psychological. City dwellers and suburbanites need each other to reinforce their own sense of place and identity despite ample evidence that what we once thought were different places and lifestyles are increasingly intertwined and much less distinct. The revenge of the suburb on the city wasn’t simply the depletion of its urban population or the exodus of its retailers and office workers, but rather the importation of suburbia into the heart of the city: chain stores and restaurants, downtown malls, and even detached housing. If the gift of urban planners to suburbia was the tenets of the New Urbanism, it has been re-gifted, returned to cities not as tips for close-knit communities but as recipes for ever more intensive consumer experiences. Suburbia has returned to the city just as most suburbs are experiencing many of the things about city life it sought to escape, both positive and negative: congestion, crime, poverty, racial and ethnic diversity, cultural amenities, and retail diversity. At the same time, cities have taken on qualities of the suburbs that are perceived as both good and bad, such as the introduction of big box retailing, urban shopping malls, and reverse suburban migration by empty nesters, who return to the city to enjoy the kind of life they lived before they had kids to raise. For every downtown Olive Garden there is an Asian-fusion restaurant opening in a strip mall; for every derelict downtown warehouse there is an empty suburban office building waiting to be converted into lofts; the Mall of America is the largest shopping center in the country, but SoHo may be the nation’s largest retail neighborhood; and everywhere we have Starbucks.
Blauvelt's exhibit on suburbia, Worlds Away: New Suburban Landscapes, is at the Walker Art Center in Minneapolis -- in the Target Gallery. Where else?
Sunday, March 01, 2009
Whoa -- retired Marine officer Dave Dilegge and military blogger Andrew Exum (spurred by Thomas Ricks's new book The Gamble) look at the effect of the blogosphere on how the military shares information and tactics:
Ricks cited a discussion on Small Wars Journal once and also cited some things on PlatoonLeader.org but never considered the way in which the new media has revolutionized the lessons learned process in the U.S. military. [...] Instead of just feeding information to the Center for Army Lessons Learned and waiting for lessons to be disseminated, junior officers are now debating what works and what doesn't on closed internet fora -- such as PlatoonLeader and CompanyCommand -- and open fora, such as the discussion threads on Small Wars Journal. The effect of the new media on the junior officers fighting the wars in Iraq and Afghanistan was left curiously unexplored by Ricks, now a famous blogger himself.
It seems clear that blogging and internet forums disrupt lots of traditional thinking about the way information is generated and disseminated -- but it's a testament to how powerfully they can change readers' and writers' expectations that the disruption carries through even to the military, the top-down bureaucracy if ever there was one.
In related news, the recent New Yorker article about the low-recoil automatic shotguns mounted on robots was awesome.
Just as, at a certain point, the military decided it was a waste to have a professional soldier cook a meal or clean a latrine, we'll come to see it as a waste for a professional soldier NOT to provide decentralized information that can help adjust intelligence and tactics: all soldiers will be reporters. Soon all of our wars are going to be fought by robots, gamers, and bloggers. Our entire information circuitry will have to change.