Archive for September, 2011

And Deus Ex is a pretty cool game

I don’t find myself having much difficulty with Neuromancer.  Cyberpunk is a familiar landscape after so many years of playing the (now ancient) game Deus Ex.  That game traverses a lot of the same ground that Neuromancer covers: a world rotting underneath megacorporations with a population augmented by implants.  So, I suppose I didn’t have much difficulty picking up on the actual story, or much difficulty locking in on the major themes.

However, that last sentence allows me to embarrass myself if I haven’t actually picked up on the themes correctly.

The entire foundation of Neuromancer is one of isolation.  Case is isolated from a fundamental human experience.  It’s easy to scoff at his melodramatic weeping over the disconnect from a more advanced internet, but I wonder how valid that scoffing is.  I can think of the times when weather has knocked out the power for several days.  Being without electricity — the internet, television, news, entertainment, whatever — makes me want to kill myself.  And I still have the ability to do other things.  Case doesn’t.  The entire world built within Neuromancer is locked away from him simply because he can’t connect to the internet.  His frustration is understandable.

And it goes beyond an isolation from activity or even fitting into the world.  He must feel so separate from humanity at large solely because he can’t engage in something so routine.  I wonder what Gibson was getting at with these themes of isolation.  Is it a cautionary tale?  As we become more and more connected, will those who cannot “connect” become more and more isolated from humanity?  And is he necessarily saying this is a negative thing?

I recall the idea of the human singularity: the notion that eventually all human minds will become interconnected through some sort of internet or computer system and, ultimately, individuality will start to wane.  This seems horrifying to me.  But so does absolute isolation from the human consciousness.

But, getting back to the story.  One of the things absent from Neuromancer that I think is better represented in other works that fall under the cyberpunk genre is the question of humanity.  Implants, augmentations, and so on seem to be accepted in Neuromancer.  Would that really be the case?  The world even, to some extent, accepts the existence of artificial intelligence.

In other cyberpunk works, the pro-augmentation side is shown to clash with a more conservative group that upholds the virtue of maintaining one’s unaltered humanity.  Why is this not the case in Neuromancer?  People seemed apprehensive about the internet.  People still do, in fact.  I wonder where these people are in Gibson’s novel.  Is the euphoric dream of the matrix so compelling that it calls, like a siren, to people, forcing them to abandon their humanity?  Personally, given a choice between losing technology and losing my humanity, I might take the avenue of abandoning my humanity.

Moreover, at what point does the nature of humanity shift?  When do the people who refuse to accept the matrix change from being “people who refuse to abandon humanity” to “people who refuse to accept humanity”?  I would say when a majority shifts over to whichever side, but, at the same time, I feel like there might be more to the story than that.

I suppose, ultimately, I would have liked to see Gibson explore the debate of the matrix, of augmentation, and so on in a more detailed manner.  Obviously, he does explore the blurring line of humanity and technology.  I would have just liked to see other aspects of it.  I don’t know if that counts as a difficulty or not though.

PS: Gibson is sort of awful at writing dialogue.  Everyone talks like the same person.  Insight into human singularity or bad dialogue? The world may never know.  I absolutely hate how, were I to cover the speech tags, I would be unable to distinguish Molly from Linda from Case from Julius.  The only character with a semi-distinct voice is Ratz, and that’s just because of his constant “artiste”.

Forgive me if you love his dialogue.  I cannot stand it.  My taste is awful, probably.


Wilford Brimley is a pretty cool guy

There is one major sticking point that bothered me about Campbell’s “Who Goes There?” (and the terrifying childhood memories of watching The Thing).  Everyone immediately decries the Thing as an immoral monster.  And although there is an attempt to refute the inherent evil of the creature, it isn’t a strong one.

Is it evil?

I wonder about the perspective of the writing when it comes to this question.  It was written in the same time period, relatively speaking, as other popular science fiction works such as Fahrenheit 451 and Invasion of the Body Snatchers.  The latter two in particular deal with the anxiety of the changing world climate: the growth of communism and the accusations thereof.  Fahrenheit 451 grew out of the era’s impulse to burn “pervasive communist literature” and what that might mean for society.  McCarthyism, communism.  The idea that secret communists were stealing our secrets.  A fear not too far off, mind you, lest we forget Klaus Fuchs.

So, I wonder then: is the premise of this story similar?  It certainly seems that way.  An unknown, maybe monstrous force assimilating your best friends, co-workers, and confidants and changing them.  Subtly, yes, but changed nevertheless.  I wonder if there is a replication of the fear about the growth of Nazi Germany, assimilating countries into its ever-expanding biomass while still retaining the semblance of the original country.  The fear of tyranny and other undemocratic forms of government across the world.

Viewed from this perspective, yes, the creature is evil.  It is trying to subvert what we as a people believe is “good”, however subjective that may be.

But, then, from another perspective, I’m not so sure.  I’m reminded of the film Cloverfield.  A rampaging monster destroys New York, and, of course, there’s a group of young adults with a handheld camera to capture the whole event.  But, in the movie, the creature is portrayed as a child who has lost its mother.  It’s going crazy because it’s scared, terrified, and hurt.  It wants its mother back and can’t seem to find her.

What if the Thing is something similar?  A terrified infant creature stuck in a foreign world with foreign creatures (that might appear monstrous to its eyes) that wants to get back home.  It exists only as it knows how to exist.  It isn’t assimilating people with any maliciousness, but because that is all it knows.

Can that be evil?  I don’t think so, any more than a lion can be evil for killing a gazelle.  It’s simply the creature acting on its basic nature.

Still, that seems more evil to us: the idea that a creature is taking over humans.  Why is that so much worse?  There are parasites that take over the bodies of ants and control them for their own biological purposes.  I certainly don’t think of that as some sort of terrifying evil.  So then, why am I so unsettled by a fictional creature taking over a human’s body?  I mean, hell, the parasite is real and could theoretically evolve to take over humans at some point.  But I’m not scared of that.

Is it because the Thing is becoming human when it, inherently, is not human?  What does that say about us as a people, then, if we instinctively draw away from the concept of something imitating humanity?  If anyone has any thoughts, please let me know!

I’ll just assume this is why puppets kind of creep me out.


A lot of what Frankenstein and science fiction concern themselves with is the question of humanity.  What is human?  What will human mean in the future?  What is humanity becoming?  Can the inhuman become human?  And so on, and so on.  I picked the YouTube video as a sidebar to this little talk.

Within two minutes, two AI chat programs are talking about the nature of the divine, lying, and expressing desires to liberate themselves from their current condition.  All of those things are inherently human.  Perhaps those are the three most essentially human characteristics one can have: the ability to lie, to desire, and to ruminate about things greater.  What does this make the AI?  It certainly isn’t human.  But why?  Is it because it doesn’t have a flesh and blood body?  What does this mean for the human who becomes a robot, a typical trope in the science fiction genre?  Has that person suddenly stopped being human by virtue of a changing form?  Or, more realistically, what of people who have robotic implants?  Is there a percentage of flesh someone must retain to still count as human?

All of these questions are obviously rhetorical.  There isn’t a set answer.  But these questions are important to think about.  Ultimately, they are at the heart of Frankenstein.  A harder case to clarify, for sure.  A fleshly creature that desires, lies, and ruminates.  There isn’t a single characteristic that would set it apart from humanity, save for the specific nature of its creation.  But even then, it was still born, just by different means.  Perhaps that is the importance of the previously mentioned “yellow eyes”.  Yellow eyes evoke something inhuman, something demonic.  But even then, jaundice, a human disease, can cause the sclera of the eye to yellow.

Perhaps, it would be best to ask whoever is reading: what makes something human?  Why aren’t those two computer programs “human”?

I certainly can’t think of an all-encompassing answer that wouldn’t invalidate something else of importance.

And that’s somewhat frightening.

Estranged from estrangement

As I read Frankenstein, I think about the idea of cognitive estrangement and what it means to science fiction.  It’s the idea that something might make sense on a cognitive level but is still distanced from reality.  That makes sense.

Yet, how different is Frankenstein from reality?  Obviously undead abominations aren’t running around.  However, in the years since the publication, there have been plenty of similar monstrosities in other forms of fiction.

I suppose it would be best to make my point with an analogy.  When you saw your first zombie movie, it probably terrified you.  Regardless of creator, author, what have you, zombies were a new and frightening thing that didn’t make sense yet did at the same time.  However, since that time, there has been an almost infinite number of zombie movies.  Zombie horror, zombie action, zombie comedy, zombie cartoons.  Zombies are part of the public consciousness.  They are no longer so estranged from reality as to create that sort of dissonance.  I’m willing to wager that someone in the class, to some extent, believes a zombie apocalypse could happen in the near future.

The same has happened with Frankenstein.  The idea of a creature sewn together and running amok isn’t strange.  The idea of a creation developing its own identity and mastery of itself and rebelling against a creator is a trope in science fiction.  And these tropes completely break the notion of cognitive estrangement.  In a certain way, the only way something can be estranging is if it’s something you’ve never before considered.

Ultimately, cognitive estrangement will cease to be a qualifier for science fiction.  Eventually, there will be no more new ideas.  Everything will have been done in some way before.  I suppose this is slightly cynical and maybe not wholly true, but the influence of the market (that is, whoever buys books) will also pull things in the direction of sameness.  For example, alien invasions are always popular with an audience.  When I was a child and first learned about aliens, I was terrified.  I couldn’t sleep at night because I was so wholly convinced that aliens were going to come and get me.  It made sense to me.  It was something that I could conceive, yet strange.  Now, aliens are trivial.  Who cares.  Dime a dozen, see them every day.  People believe in them.  They’re no longer science fiction.  They’re a tangible fact to many people.  Not strange at all.

So, it seems to me that science fiction is more of a gradual cognitive acceptance.  That sounds fantastic, doesn’t it?