Sunday, November 29, 2009

The Future of the Internet: Applications

FarmVille and Mafia Wars! Just the names of the two Facebook applications that dominated my Live Feed (until I recently removed them) drive me crazy. The open platform that allows them to exist on Facebook also opened the gateway to other kinds of spam. In a recent NPR interview, former Facebook spammer Dennis Yu described several ways people let spammers into their Facebook lives simply by participating in applications. Yu explains:
"When a user clicks Accept, that they want to join, most of their profile information is now available. That can be used to create a very intense, addictive game, but it can also be used to [sic] advertising, and when you can use that data inside an ad to inject a user's name, their profile picture, the information of their friends, it creates highly relevant, highly targeted advertising, very smart ads. We can call them appvertising..."
These "appvertisements" can now be opted out of by in-the-know users, but many of the less savvy users might be taken in by these targeted ads. The real trouble comes when spammers use the collected information to hack profiles. This has happened to several friends of mine, who suddenly send uncharacteristic comments, messages, or status updates wanting friends to go to certain websites. Luckily, these incidents can be mostly alleviated by changing passwords.

Appvertising and spammers are just two of the downsides to open platforms like Facebook's, and MySpace's before it. Identity theft, phishing, and viruses have also become problems on these once mostly safe sites. Before Facebook opened its platform in 2007, there were only a select few Facebook-created applications for sharing photos, groups, and events. Now, it seems like there is an application for everything, including those crazy addictive FarmVille-type games. Some people would vote for the more sterile Facebook of the past, but others clearly enjoy the new offerings an open platform can bring, even if the bad comes along with the good.

This is the dilemma at the heart of Jonathan Zittrain's book, "The Future of the Internet -- and How to Stop It." Zittrain argues that the open platforms, or generativity, of PCs and the Internet, while beneficial to their initial development and popularity, are now becoming more of a negative than a positive. In 2008, when his book was published, Zittrain noted that 90 percent of email was spam (p. 99), that 80 percent of that spam came from zombie computers sending it without their owners' knowledge (p. 46), and that nearly half of those computers were in North America. These are staggering statistics, and they are beginning to make people sit up and take notice. Last week I wrote about the prevalence of viruses on Windows-based PCs, but according to Zittrain, the truth is that all personal computers are likely to be infected with some kind of virus, especially if they are hooked up to an always-on Internet connection like broadband or a T1 line.

The solution in the past has been to just throw more bandwidth at the spam problem, so that the email lines don't get so clogged that the relevant email can't get through (p. 99). However, at some point that solution will stop working, and then people will have to face the real problem: there are people exploiting the generative nature of PCs and the Internet, and they're ruining it for everybody. People like Alan Ralsky, the self-proclaimed "King of Spam," and many even more malicious people have used the easily re-programmable nature of PCs and the Internet not just to make a profit, but to take advantage of and hurt people in the process. Although Ralsky was caught and punished, there are thousands more to take his place, and as more patches are made to the system, even more workaround programs will be written.

For some, the solution to these problems is to just close the whole thing off and create tethered technologies. Tethered (or appliance) technologies like iTunes, TiVo, and Xboxes are not generative, meaning that they cannot be changed or reprogrammed by the consumer, but must be used in the way the manufacturer intended (p. 3). These technologies can be reprogrammed, and often are, but only by the remote manufacturer. Tethered technologies still serve many useful functions, but they allow for less innovation, and their content and functionality are not fully controlled by the user, even if the user has technically bought the product outright. This can of course lead to undesirable outcomes, like a service getting changed or canceled, and the user losing content or functionality that he or she already paid for.

Zittrain adds iPhones to the tethered technology list, but I disagree with him, since there are thousands of applications (written by third and fourth parties) that can be downloaded and added to iPhones, changing what they can do. Perhaps technologies like the iPhone could be the happy medium between too much freedom and too little. Apple does allow applications created by virtually anyone to be uploaded to its App Store. I say virtually, because although anyone can create and submit applications, they first have to pay to become a developer, then go through online training to learn how to develop, and finally their applications have to pass Apple's standards before they can be added to the App Store. Except for the having-to-pay-for-the-privilege part, this program seems to me to be a great solution to the programming woes of traditional generative technologies. In the App Store setting, programming is allowed and even encouraged, but standards have to be met before new programs can be released to the general public. And, unlike Facebook, iPhone/iPod Touch users do not have to send applications all of their personal information in order to download them. Apple's program may not be perfect, but I think it's on the right track.

Sunday, November 22, 2009

Exploit: The Dark Side of Networks

My computer's hard drive died this weekend. Although it is clearly inconvenient and will be costly to repair or replace, it's not the end of the world. In fact, I did not lose any information pertinent to this class or the other one I am taking this semester, because all of it is stored on the web, either via this blog or in files I have emailed to myself. In this case, connectivity via the internet saved my academic butt.

However, not all stories of connectivity end quite so happily. My office subscribes to Vyvx, a nationwide fiber network that is used, among other things, to broadcast live video feeds. Its headquarters and main hub are in Tulsa, Oklahoma, pretty much the geographic center of the continental US, chosen for easy access to the whole country. But in the spring of 2006, their basement (which contains the servers) flooded, taking the nationwide network down. Since Vyvx has a near monopoly in the fiber connectivity world, almost every live shot scheduled for every network that day had to be canceled or re-booked with a satellite truck (if possible). Not only did Vyvx lose thousands of dollars that day, but so did the TV networks and production companies that use its service. The stations lost more than money; they lost content for their shows, which their producers had to scramble to replace at the last minute. Although Vyvx works for all of its clients 99.9% of the time, that day has been etched into the memories of all involved. It is truly an example of how "...networks fail only when they succeed. Networks cultivate the flood, but the flood is what can take down the network" (p. 96). Although the flood was not a malicious attack, it exposed the vulnerability of the Vyvx network to exploitation.

According to Merriam-Webster, to exploit is "to make productive use of, or to make use of meanly or unfairly for one's own advantage."

According to Wikipedia, an exploit is "a piece of software, a chunk of data, or sequence of commands that take advantage of a bug, glitch, or vulnerability in order to cause unintended or unanticipated behavior to occur on computer software, hardware, or something electronic (usually computerized)."

In their book, The Exploit: A Theory of Networks, Galloway and Thacker emphasize that the more homogeneous a network is (especially a software-driven network), the easier it is to exploit once a vulnerability has been found. In particular, "the more Microsoft solidifies its global monopoly, the greater chance for a single software exploit to bring down the entire grid" (p. 17). Computer viruses tend to target Microsoft computers (PCs) far more often than Apple computers (Macs) or others for this very reason.

Big computer viruses and worms come around every now and then. The latest was the Conficker worm, which may have been one big April Fools' joke on everyone, but it really got the Windows world in a panic, prompting many patch downloads and plenty of fearful computer users on April 1, 2009. More recent viruses are connected with the Twilight craze, and again, they only affect Windows users. These Twilight viruses exploit two overlapping groups: people who are into Twilight and Windows-based computers, both of which happen to be very popular right now. Twilight fans tend (for the most part) to be younger females, who are very likely to be on the web. These fans are excited to get the inside scoop on Twilight: New Moon, which came out in theaters this week, and will be searching online for interviews with the actors and any other inside material they can find. Knowing this, hackers have created websites that show up in search results, prompting unsuspecting fans to download viruses, thinking that they are getting interviews or a sneak peek at the movie.

Although it would be possible to exploit these groups on a case-by-case basis, the network of the World Wide Web makes it infinitely easier. Web browsers are also susceptible to virus contamination, especially, you guessed it, Internet Explorer. With a usage share of nearly 65%, Internet Explorer is by far the most popular web browser. So, since "computer viruses thrive in environments that have low levels of diversity" (p. 84), viruses have targeted Internet Explorer much more often than Firefox and Safari, the second and third most popular browsers. The reason is obvious: beyond homogeneity, a monopoly lets a virus use the network to its advantage and affect the largest group of people possible.

So, is the solution to vulnerability to buck the trend or get out of the network altogether? Would the relief that comes with knowing you won't have to worry about exploitation be worth the pain of being disconnected? And even if it is, “the idea of connectivity is so highly privileged today that it is becoming more and more difficult to locate places or objects that don’t, in some way, fit into a networking rubric” (p. 26). Plus, you would lose the benefits to be gained from being connected, like not losing everything you've done all semester when your computer dies, among other communication and information losses. So maybe the simple solution (if there is one) is this: be smart. Have a contingency plan. And back up your files.

Sunday, November 15, 2009

The Science of an Infected Age

When reading this week's book, "Six Degrees: The Science of a Connected Age" by Duncan Watts, I couldn't stop thinking about the Swine Flu epidemic. Watts discusses networks: how they are organized, how to use them, and how they affect our lives. One of the many ways in which networks affect us is by spreading things: trends, ideas, information, and even diseases. The more connected a person is, the more access to, and influence on, the world they have. But being highly connected also has its drawbacks.

This has been made very apparent by the Swine Flu outbreak. The group most affected by the outbreak is people in schools; whether they work at them or go to them, this group seems to be the most likely to get the disease. That's not just because children are susceptible to disease (as it turns out, everyone is likely to get it, though interestingly people over 65 are less at risk, possibly because they survived a previous outbreak), but because they are in contact with a large number of people (who may not have the best hygiene) every day. As you may have noticed, the epidemic went mostly into remission over the summer, while school was not in session, but as soon as classes started again, here came Swine Flu all over again.

But not just school people have gotten the flu. Some people who have not seen the inside of a school building in years still manage to get it. This is where networks come into play. For example, a little boy gets Swine Flu from someone in his class, then comes home and infects his dad, who goes to work and infects someone there, who goes to a worship service and infects someone there, and so on. Pretty soon the disease has spread to people who have never even heard of the original little boy, but they are connected to him through the chain of disease. Not a pretty story for sure, but it does illustrate the chain of connections through a network.

By now almost all of us at least know someone who has had swine flu, if we haven't had it ourselves (knock on wood). A year ago, we had never even heard of it, and feared the "looming" Bird Flu pandemic instead. The high contagiousness of this disease made the "slow growth phase" pretty much non-existent; it skipped straight to the "explosive phase" (p. 172). Luckily, unlike his example of *shudder* the Ebola virus, the Swine Flu does not guarantee death, but because of this, it does not reach the "burn out" phase quite as fast. If, God forbid, something like the Ebola virus did break out in the United States, or any highly populated area, it could be devastating, like the plague, but faster. According to Watts, the only people who are "safe" from an outbreak are those who have been "removed," either by recovery, inoculation, or death (p. 168).
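Watts's phases line up with the classic SIR (susceptible-infectious-removed) model that epidemiologists use. Here is a minimal sketch of that model in Python; the infection and removal rates are illustrative assumptions of mine, not numbers from the book:

    # A minimal SIR sketch of the outbreak phases Watts describes. The rates
    # beta and gamma are made-up illustrative values, not figures from Six Degrees.
    def sir_step(s, i, r, beta=0.3, gamma=0.1):
        """One time step: infections move S -> I, removals move I -> R."""
        new_infections = beta * s * i  # susceptible people meeting infected ones
        new_removals = gamma * i       # recovery, inoculation, or death
        return (s - new_infections,
                i + new_infections - new_removals,
                r + new_removals)

    s, i, r = 0.999, 0.001, 0.0  # fractions of the population
    for day in range(121):
        if day % 30 == 0:
            print(f"day {day:3d}: susceptible={s:.3f} infected={i:.3f} removed={r:.3f}")
        s, i, r = sir_step(s, i, r)

Run it with a higher infection rate and the slow growth phase all but vanishes; the curve jumps straight to explosive growth, which is exactly what the Swine Flu's contagiousness did. And the only people the infection can never touch are those already in the removed column.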

So what do you do to control the disease? UT Dallas has this advice on the Student Health Center website:

Take these everyday steps to protect your health:

  • Cover your nose and mouth with a tissue when you cough or sneeze. Throw the tissue in the trash after you use it.
  • Wash your hands often with soap and water, especially after you cough or sneeze. Alcohol-based hand cleaners are also effective.
  • Avoid touching your eyes, nose or mouth. Germs spread this way.
  • Try to avoid close contact with sick people.
  • CDC recommends that people with influenza-like illness remain at home until at least 24 hours after they are free of fever (100° F [37.8°C]), or signs of a fever without the use of fever-reducing medication

Once you have contracted a virus, the best way to protect yourself and others is to cut yourself off from your network. This is also true for computer viruses. Once a computer is infected with a virus, it should be cut off from the network until the virus has been eliminated.

But what about the massive power outages or factory breakdowns Watts discusses? In the case of the power outages, cutting off "infected" areas via the automatic breaker system, which was supposed to protect the grid, actually made the situation worse by rerouting power to other branches of the system and overwhelming them to the point of overload and meltdown. Sometimes cutting off a problem area in a network is like cutting off your nose to spite your face: it does more harm than good.

The networks in our lives are very powerful, but also very fragile, tools. The same chain or group of people that can help you get a new job or introduce you to your future spouse can also spread hurtful gossip or expose you to the swine flu. So wash your hands and be careful who you defriend; you never know how it will affect you down the line.

Monday, November 9, 2009

If you're not on MySpace (or Facebook), you don't exist.

In 2005, at the age of 23, I finally gave in to peer pressure and joined Facebook, "only to look at pictures." Now, 4 years later, it is the main mode of communication between many of my friends and me. A lot has changed in those 4 years. For example, when I joined, Facebook was only for people with college email addresses, and only certain colleges were included. I was slightly troubled when high schoolers were allowed to join, but utterly dismayed when it was opened up to everybody. Especially when my parents (and in-laws) started to join! Although I am an adult, married, financially independent, and really not very scandalous, there are things on Facebook that I do not need or want my parents (and later my boss) to see. Because of this, I had to use lots of privacy settings on them. Most of the stuff I don't want certain people to see is pictures and comments that other friends post, what danah boyd, in her dissertation, calls "co-constructed" material on my profile (p. 136). Other things are lived out loud and publicly. As soon as my husband and I got engaged, we rushed home and "made it official" on Facebook. All of this is to say that although I am not and have never been a teenager on Facebook (or MySpace), I can still easily relate to many teens' situations involving social media and adults.

Social media sites are places to connect (or reconnect) with friends. They are a place to hang out with friends you don't see very often and to continue conversations with friends you see all the time. And to share with both. Like I said earlier, I joined Facebook solely to look at and share pictures with my friends (specifically, pictures of a cruise we had just been on), but soon after joining, I was hooked, refining my profile and checking others' to see what they were doing. Like boyd's teens, I didn't want to let my profile get stale, because I thought this would leave a bad impression (p. 141). But then, like some of the other teens, I decided too much activity gave the impression that I have no life (whether or not this is actually true), and I stopped updating as much.

I refused (and still refuse) to join MySpace for several reasons, legitimate or not. First, because everyone else was doing it, and I was "too cool" (p. 194). Second, (whether this is fair or not) I felt that MySpace was for the less educated (p. 202). Third, I felt that Facebook and MySpace serve the same social function, so once I was on Facebook, I didn't feel the need to join MySpace (p. 198). And fourth, I did not want to have to pick my "Top Friends" (p. 222); bridesmaids were hard enough!

Although my husband and I are technically grown, independent adults, we still have some of the same fears and power struggles with our families that teens do, and these are reflected in the way we interact with them on Facebook. Both of us have our parents on the "Limited Profile" view on our Facebook accounts. For me, this means that they can't see pictures that friends have tagged me in, and they can't see groups that I have joined or certain applications that I have on my profile. Why? Because our parents still feel the need to confront us and reprimand us about lifestyle choices they don't agree with. For example, about a year ago, my husband (who was 25 at the time) posted a comment on a friend's wall referencing drinking wine. At the next family gathering, his dad pulled us aside to ask whether we drink from time to time, and to warn us about it. Even in our mid-twenties, we still have to worry about our parents (and extended families) "misunderstanding" (p. 165) and "not giving us enough credit" (p. 247).

So, what makes a twenty-something different from a teen when it comes to social media and the way it affects our lives, and vice versa? Not much, except for the power society and parents actually hold over teens. As much as I hide certain things from my parents and other "adults" in my life, the main motivation is to avoid embarrassment and awkward situations. Teens, on the other hand, have to deal with punishments varying from grounding all the way up the scale. According to boyd, adults seek to restrict teens' actions (both online and offline) because they are afraid of and for them. "Teenagers are alternately viewed by adult society as a nuisance who must be restricted or an impressionable population who must be protected; they are both deviant and vulnerable" (p. 242). Parents and schools can control whether a teen has easy access to the internet, which parts, and for how long. Both can punish a teen for "private" material posted on social media sites, like the two girls I mentioned last week (p. 261). Although many of us may have bosses or workplaces that don't understand or don't subscribe to social media, much less allow it in the workplace, we are still free to use it at home. Although we may not have the time or take the opportunity, as adults we are allowed to go out at all hours to hang out with our friends in public places. Not so for teens, who have curfews, anti-loitering laws, and other restrictions placed on when and where they can hang out with their friends. In many cases, social media sites have replaced the mall and the park as the place for teens to hang out with friends (p. 277). So teens live out their private lives in the "public" realm of the internet, knowing that others can look in, but focusing on their friends.

A lot has changed in the 13.5 years since I was 13.5, but one thing has stayed the same: teens need a place to hang out with their friends, in a social setting, away from school, and as society changes, so will the places they find to congregate. Adults and teens aren't that different, but in many ways, they are miles apart.

Sunday, November 1, 2009

Visibly Invisible

When I was 13 years old, I got on my dad's computer and found my way into an online chat room. I met people online who would immediately message me "a/s/l" for "Age/Sex/Location," and I realized that online, I could be anything I wanted to be. Of course, as a 13-year-old girl, what I wanted to be was a 16-year-old girl, which nearly got me in trouble with some gross online stalkers who wanted to meet me in real life.

I've heard more than one story about friends meeting someone from a dating site, only to find out that the person was not exactly "as advertised."

The point is that online, people can be anything they want to be. And they often are. They can be a different age, sex, race, religion, nationality, etc., all with a little bit of imagination and typing. Lisa Nakamura found that "when [online] users are free to choose their own race, all were assumed to be white. And many of those who adopted non-white personae turned out to be white male users" (p. 391).

According to Lisa Nakamura in "Cybertyping and the Work of Race in the Age of Digital Reproduction," the phenomenon of stereotyping has made its way online into what she calls "cybertyping." Specifically, the word cybertype describes "the distinctive ways that the Internet propagates, disseminates, and commodifies images of race and racism" (p. 318). In theory, the Internet could and should be a place above and beyond race and gender. In reality, this has not been the case. Even when race and gender are not specified online, white and male are almost always assumed. "One of the symptoms of cybertyping is this convenient 'disappearance of awareness' of American racial minorities, a symptom that 'multiculturalist' Internet advertising and the discourse of technology work hard to produce" (p. 327). If a certain race is not put in front of our faces, most Americans will not see or think about it, especially online, where the person behind the screen is not easily seen.

The irony of this is that in today's society we are seen by more cameras and observers than ever before. If you go out in public, you can bet that you will be seen by at least one camera. If you run a red light or a tollbooth, you may be sent a picture of yourself in the mail. We are living in a version of Bentham's Panopticon, or even Orwell's Big Brother society. Whether or not we are actually being observed at any particular time, there is a chance that we are. Under the Patriot Act, the government can even check our emails and phone calls if they think we may be doing something illegal. At the end of Discipline and Punish, Foucault asks, "Is it surprising that prisons resemble factories, schools, barracks, hospitals, which all resemble prisons?" (p. 486). The regimented, disciplined, observed way of life that started in government and prisons has now come to dominate other institutions that affect our daily lives.

Has this quest for discipline and observation reached into our homes through the internet? I have already suggested that the government can monitor communications if it thinks you are a terrorist, but what about more everyday regimentation online for the "regular" user?

On the way home from work the other day, I heard a familiar story. A couple of teenage girls took some provocative pictures over summer break and put them on their MySpace pages. Someone at the school started passing the pictures around, and now the girls are suspended from participation in sports and have to make a public apology. These girls did something (questionable or not) on their own time in their own homes, but the act of putting pictures of it online made them part of the public system and subject to its rules. The sports teams at those girls' school have rules about behavior both inside and outside the classroom. Their actions became known and punishable because so much of people's lives is now public on the internet. The system can reach into our lives like never before and change them for better or worse.

For all of the positive things that the internet can bring (organization, communication, digitization, and more), it is far from a perfect place. It is not accessible to everyone. The poor and many minorities have not seen the saturation that white middle-class America has. But even if that saturation does catch up, as it did with television, the internet may still not be as diverse or equal as people might think. "Mainstream film and television depicts African Americans in consistently negative ways despite extremely high usage rates of television by African Americans. Hence, the dubious goal of 100% 'penetration' of African American communities by Internet technologies cannot, by and of itself, result in more parity or even accuracy in representations of African Americans" (Nakamura, p. 330).

Although the internet has vastly changed society in many ways, we still have a long way to go as a culture in achieving racial equality and in finding a balance between power and privacy.

Monday, October 26, 2009

Here Comes Wikipedia (and Everybody Else Too)

Clay Shirky's Here Comes Everybody was by far the most enjoyable and readable book that we've read so far this semester. I would recommend it to anybody involved in media, whether professionally or as a hobby. I may even find a way to recommend it to my boss!

While discussing this book with my dad at lunch on Sunday, the subject of Wikipedia came up. Not surprising, since Shirky uses it as a prominent example of the possibilities of organizing people over the Internet. My dad, of course, is one of the Wikipedia naysayers, convinced that something that can be edited by anybody cannot be relied on to be correct in any way. He pointed to the article on Barack Obama, which months before the election already said that he was the 44th president of the United States. That may well have been the case, but if he had checked back the next day, the incorrect information would probably have already been fixed. My dad would probably have preferred the failed Nupedia, which relied on academic experts donating their time to create articles that were highly regulated and had to go through several levels of approval before being published (p. 111). Although, had the website survived, these articles would have been more academically accepted, there would have been far fewer of them, and they would never be as up to date as the existing Wikipedia, which can be updated as events are happening.

The amazing thing about Wikipedia and other sites like it is not just that it exists, but that it functions, and functions well. The fact that people are willing to come together and work on an encyclopedia for free, and that it is actually correct most of the time, is the truly amazing thing. Or is it? According to Shirky, this type of behavior is popping up all over the net. From collaborative programmers creating Linux to long-lost friends finding each other and planning reunions, people who may never have worked together before are now cooperating to create things.

The reason people are doing this (get ready for an economics word) is that the transaction cost has been greatly lowered (p. 47). Things that used to be so prohibitively expensive or inconvenient that they were barely, if ever, considered possible are now as simple as the click of a button.

Prior to the Internet, and especially Facebook, I probably would never have stayed in contact with many (if any) high school or college friends. Now, I know what they are up to, can share in their life changes (so many babies!!), and can coordinate meeting up with them when they are in town, all nearly effortlessly. My 5-year college reunion is coming up in a few weeks, and although I'm excited about seeing everyone, I kind of already know what's going on in their lives. What I'm really excited to do is finally meet all of those husbands and babies I've seen pictures of and see Kelly's new haircut in real life. No matter that I have not seen or technically talked to most of these people since graduation, I still feel like I am part of their lives. Because of this type of connection, a group of my friends has been able to coordinate an informal reunion during the official reunion weekend.

Of course, as Shirky points out, participation in this coordination is not in any way equal. There are one or two people who have been doing the majority of the discussion and preparation on the email list, while most people have only stated whether or not they will be able to attend. This is what Shirky refers to (economics word again) as the power law distribution. "A power law describes data in which the nth position has 1/nth of the first position's rank" (p. 125), and it seems to be in effect for any participation, coordination, or creation that goes on on the web. For example, the vast majority of Wikipedia users end their participation at that: using. A small percentage of users do end up (for one reason or another) becoming contributors. The vast majority of these contributors end up (like myself) only making one edit to one article. A small percentage of those contributors make more than one edit, and as the number of edits goes up, the number of contributors making that many edits falls off steeply, resulting in a fraction of a percent of people being responsible for the vast majority of the work.
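To see just how lopsided that 1/nth rule is, here is a minimal sketch in Python; the number of contributors and the top contributor's edit count are made-up values for illustration, not data from Shirky:

    # A minimal sketch of Shirky's power law rule: the nth-ranked
    # contributor does 1/nth as much as the first. Numbers are illustrative.
    contributors = 1000
    first_place = 100.0  # edits by the top contributor (made-up)

    edits = [first_place / rank for rank in range(1, contributors + 1)]
    total = sum(edits)

    # Share of all edits made by the top 20% of contributors
    top_fifth = sum(edits[: contributors // 5])
    print(f"Top 20% of contributors make {top_fifth / total:.0%} of the edits")

With these made-up numbers, the top fifth of the contributors ends up with roughly four fifths of the edits.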

This is also referred to as the 80/20 rule, meaning that 20% of the participants account for 80% of the work. This statistic would never fly in the corporate (or any paying) world, but it works on the web, because the transaction cost is so low. People are working for free, because they want to. And other people's work, or lack of work, does not take away from the work they have done. The ones doing the most work (like the people organizing our informal reunion) are the ones who care the most, and the ones who participate the least are the ones who have the least time or interest but still want to be a part. Shirky contends that social media provides for both types of users, and everyone in between, thus creating a place where everyone can participate in the way they are most willing and able, and in doing so create things and possibilities that were never before conceivable.

Wikipedia works against all odds because people care enough to make sure that it does. The same thing applies to any of the thousands (or maybe millions) of other collaborative Internet sites: they work because of love. Not the squishy kind of love, but the passionate kind. Social media opens up doors to success (and even more failure) that allow people to explore and pursue their passions in ways they have not been able to before. What doors has it opened for you?

Sunday, October 18, 2009

The Public Sphere of Starbucks

In high school, and sometimes in undergrad, I used to go to Starbucks and other coffee houses to hang out with my friends. We would meet and spend all evening hanging out and talking. We'd even meet other customers and talk with them; it wasn't just a cup of coffee, it was an event. I can't remember the last time I did that. These days, if I make time to even go inside a Starbucks, or any coffee establishment, it's to grab a quick cup and get on my way. I still see people settled in for a long evening, but most are typing on their laptops or studying or in deep conversation with one or two other people. What happened? Is it just my habits that have changed, or is it society as a whole?

According to sociologist and author Bryant Simon, Starbucks culture has changed, or at least capitalized on changes in, American culture. He contends that Starbucks has contributed to the death of the traditional public sphere found in places like coffee shops, public libraries, town halls, and church meetings. These used to be places where people could go to enjoy public discourse and debate, to discuss the goings-on in the world with whoever would listen. Simon notes that in many instances "public spaces have become less available — and less desirable — since municipal resources are focused elsewhere." But what about Starbucks? It still provides a public place where people come to hang out and talk, but things like small tables, to-go cups, and especially Wi-Fi make spontaneous interaction less likely to occur. People are more likely to work on their own or talk only with the people they came with. But is this a symptom of Starbucks culture or of something bigger? To answer this question, we must first look at the history of the public sphere.

According to Jürgen Habermas's essay "The Public Sphere: An Encyclopedia Article," the public sphere is "a realm of our social life in which something approaching public opinion can be formed. Access is guaranteed to all citizens. A portion of the public sphere comes into being in every conversation in which private individuals assemble to form a public body" (p. 73). The public sphere is theoretically a place where any member of a society can come and have a voice in determining public opinion on a matter. This is more than just voting in an election or a poll; this is public discourse, where real conversation and debate take place.

Newspapers began to affect the public sphere as soon as they came into being. They were a way to disseminate information and opinion without physically gathering the public body. But then, "in the transition from literary journalism of private individuals to the public services of the mass media, the public sphere was transformed by the influx of private interests, which received special prominence in the mass media" (Habermas, 76). According to Habermas, the commercialization of the media, especially electronic media, changed communication and the "pureness" of the public sphere. With the invention of things like public relations, the line between news and advertisement became blurred. "Discourse degenerated into publicity, and publicity used the increasing power of electronic media to alter perceptions and shape beliefs," adds Pieter Boeder. So, although in some ways the public sphere was growing, it was also distorting, becoming a somewhat tainted place. At the same time, what happened in the media became a source of conversation for the more classic public arenas.

The real change came with the introduction of the Internet. Suddenly, nearly instant communication across the world was possible, opening up the possibility of a global public sphere. "Although news media increasingly transcend national borders," Boeder warns, "this process does not automatically create a public sphere at a transnational or global level." Too many things, from political barriers to language barriers, stand in the way of a truly global public sphere. "Media globalization does not automatically entail the creation of a singular global public sphere, but rather a process of gradual blurring and differentiation of the public sphere to a multi-layered media structure, accompanied by an increase in interconnections" (Boeder).

One thing we must keep in mind is that "the Internet is above all a decentralized communication system" (Mark Poster, "Cyberdemocracy," p. 262). As much as it brings the world together, it is not one big public forum where all users can meet and discuss public issues. Instead, it is comprised of millions of individual sites and pages, where people with similar interests can (if they want to) discuss those interests. Even social networking sites that bring people together, like Facebook and Twitter, are only as public as the user makes them. One cannot read and discuss every single tweet or status update or blog post that occurs on the Internet. Not only are some set to "private," but it's just physically impossible. Instead, users have to select the posts that are interesting and relevant to them. Which raises the question: "If 'public' discourse exists as pixels and screens generated at remote locations by individuals one has never and probably will never meet, as it is in the case of the Internet with its 'virtual communities' and 'electronic cafes,' then how is it to be distinguished from 'private' letters, print face and so forth?" (Poster, p. 265). The line between public and private has blurred, so that someone can be sitting in a public place with her laptop, having a discussion with friends all over the world via the Internet.

With the prevalence of the Internet, "the alliance of the public sphere with a particular place or territory diminishes" (Boeder), in favor of virtual communities. So, just because public discourse is not happening as much between patrons of the same Starbucks doesn't mean it's not happening on their laptops.

Monday, October 12, 2009

On Your Marx...

"Language is as old as consciousness, language is practical consciousness that exists also for other men, and for that reason alone is really exists for me personally as well; language, like consciousness, only arises from the need, the necessity, of intercourse with other men ." - Karl Marx, The German Ideology


What is language other than the encoding of one's consciousness, distributed in the form of words, pictures, movements, etc., for someone else to decode? Hopefully they take some kind of accurate meaning from it. According to philosophers like Plato, the only way to achieve truth was through meaningful discourse (or, as Marx calls it, "intercourse"). For them, the truth does not exist unless it is communicated. Although more modern philosophers like Stuart Hall believe that "reality exists outside of language," they also concede that "it is constantly mediated by and through language and what we can know and say has to be produced in and through discourse" (Stuart Hall, "Encoding/Decoding," pp. 166-167). So, even if truth or reality does indeed exist outside of language, it still has to be communicated effectively to have any meaning.

According to Saussure's Course in General Linguistics, "the linguistic sign unites, not a thing and a name, but a concept and a sound-image" (p. 78). The concept (signified) is invoked by the sound-image (signifier). In order to communicate effectively, two (or more) human beings must share the same signifier/signified "code." Without this shared code, communication can be very hard, if not impossible. "If no 'meaning' is taken, there can be no 'consumption'" (Hall, p. 164).

My father-in-law recently had his hearing checked, and apparently the doctor said that he has frequency damage that makes it hard for him to hear female voices. Assuming this is true, it is difficult for his ear to decode what is encoded in the female voice. Some people might contend that most men have a hard time decoding anything that females encode and try to communicate to them. That, however, may actually have to do with encoding and decoding different levels of a sign. "The level of connotation of the visual sign, of its contextual reference and positioning in different discursive fields of meaning and associations, is the point where already coded signs intersect with deeply semantic codes of a culture and take on additional, more active ideological dimensions" (Hall, 168). For example, taking part of the Marx quote I used earlier, "intercourse with other men," out of context can lead to quite a misunderstanding, because people can read into different (and wrong) connotative levels of that particular phrase.

According to Dictionary.com, the most dominant connotation (or definition) of the word intercourse is "dealings or communication between individuals, groups, countries, etc." The second most dominant connotation is "interchange of thoughts, feelings, etc.," and the third (and least dominant) is "sexual relations." This may no longer be accurate in everyday language, which is why "we say dominant, not 'determined', because it is always possible to order, classify, assign and decode an event within more than one 'mapping'" (Hall, 169).

The example above, and others like it, are constantly on the mind of the communicator. The 6:00 news and most local newspapers tend to make sure that they communicate in as straightforward a way as possible, making sure that the correct information gets out to the intended audience and that the right message gets across. "Broadcasters are concerned that the audience has failed to take the meaning as they - the broadcasters - intended...that viewers are not operating within the 'dominant' or 'preferred' code. Their ideal is 'perfectly transparent communication'" (Hall, 170). However, some other media, intended for less broad, possibly more intelligent audiences, like NPR's Wait Wait... Don't Tell Me!, actually want the consumer to read the less dominant code, opting for a more tongue-in-cheek form of communication. The larger the audience's vocabulary and ability to decode the communicated material, the richer the potential for true, in-depth communication.

Marx believed that "the real intellectual wealth of the individual depends entirely on the wealth of his real connections" (The German Ideology, Ch. 1). In his opinion, the more people an individual really communicated and identified with, the richer his pool of knowledge. Although I can't say I agree with everything Marx says, I do agree with this claim. The more real connections a person has in his environment, be it at home, at work, or in a community, the better off that person is. Feeling as though you are understood by someone is one of the best, most satisfying feelings in the world. Second only to someone completely understanding you and liking you anyway.

Sunday, October 4, 2009

New Media: Databases Dressed to Impress

When I was about 11 years old, my dad bought my brother and me this really cool new game, Doom. Like Wolfenstein 3D, which we had been playing for about a year, it put us in the shoes of the main character: we saw and did everything from his perspective, exploring, killing bad guys, and getting to the next level. We had lots of fun playing that game for about a month, and got pretty good at it, until the gory, realistic graphics gave me nightmares and we weren't allowed to play it anymore. Little did I know that 1993 was such a big year for the video game. Doom ended up being the granddaddy of the popularization of "first-person" video games.

In Lev Manovich's 2001 book, The Language of New Media (which can be found in its entirety online), Doom and Myst (which also came out in 1993) are used as examples of "how computer games use — and extend — cinematic language" (91). Doom and Myst both used "cinematics" to create mood and a realistic feel for the worlds they were depicting. Manovich argues that new media as a whole borrows most of what it is from pieces of traditional media, especially cinema. Doom and Myst, for example, have opening credits and back stories. The player gets to be the main character, acting out the rest of the story to achieve the goal.

In fact, Manovich contends that a lot of what people see as differentiators between new and traditional media actually existed before new media. For example, multimedia display (which Bolter and Grusin would call hypermediacy) is not a unique hallmark of new media, because cinema has combined moving images with "sound and text (be it intertitles of the silent era or the title sequences of the later period) for a whole century. Cinema thus was the original modern 'multimedia'" (67). He also refutes the idea that interactivity separates new media, where the user can interact with a media object, from traditional media, where the order of presentation was fixed, by pointing to the "ellipses in literary narration, missing details of objects in visual art and other representational 'shortcuts'" that "required the user to fill-in the missing information" (71).

So what is new media? For one thing, Manovich argues that most new media objects are, at heart, databases. They store information to be accessed by the user. Whether the new media object is a website, a video game, or a mobile phone OS, new media devices
appear as collections of items on which the user can perform various operations: view, navigate, search. The user experience of such computerized collections is therefore quite distinct from reading a narrative or watching a film or navigating an architectural site (194).
This is especially true of the web: no matter what the packaging of a website, it is in the end a way of displaying a collection of data. Social media sites are collections of profiles and status updates for the user to access and interact with. News sites are, again, databases of stories to search and read. Even gaming websites are collections of data for the user to interact with, but they better exemplify Manovich's other attribute of new media: the algorithm.
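Before moving on to the algorithm, it is worth seeing how literally the database claim can be taken. Here is a minimal sketch in Python of a website reduced to its database form; the Post and Site names and their operations are my own illustrative inventions, not anything from Manovich:

    # A minimal sketch of a new media object as Manovich describes it:
    # a collection of items plus the operations view, navigate, search.
    from dataclasses import dataclass

    @dataclass
    class Post:
        title: str
        body: str

    class Site:
        def __init__(self, items):
            self.items = items
            self.position = 0

        def view(self):  # look at the current item
            return self.items[self.position]

        def navigate(self, step=1):  # move through the collection
            self.position = (self.position + step) % len(self.items)
            return self.view()

        def search(self, term):  # filter the collection by keyword
            return [p for p in self.items if term.lower() in p.title.lower()]

Whatever the packaging, a blog, a news site, or a social network is some version of this: stored items plus a handful of operations the user can perform on them.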

Most computer games have at least some kind of narrative or goal which the player strives to achieve. In order to attain that goal, the player must figure out and perform the game's algorithm. For example, "when a new block appears, rotate it in such a way so it will complete the top layer of blocks on the bottom of the screen making this layer disappear" (197) is the algorithm for Tetris. Narratives, one of the results of algorithms in new media, are in some ways the opposite of databases. However, I submit that narratives need databases in order to exist. Even if the data is in a specific order, to be given out in a specific way, it is still needed. And the algorithm is what holds the two together.

"In Doom and Myst — and in a great many other computer games — narrative and time itself are equated with the movement through 3D space, the progression through rooms, levels, or words (215)." This story-telling through interacting with visuals in "space" instead of only seeing or reading about them is a key element of new media. Manovich likes to talk about video games, but computer Operating Systems are also prime examples of this feature. Computer desktops contain icons that, when clicked, take the user to organized folders, documents, programs, or the Internet. All are laid out in space the way the user arranges them. Websites are also often navigable spaces meant to be explored, for example JK Rowling's Official Website. Although, like most websites it is at heart a database of news and information about the author, it is set up as her office, to be explored and interacted with to get the full experience.

So, according to Manovich, what really sets new media apart from old media is its computer-ness. Although it reflected cinema in its look and its storytelling, and books and board games in its interactivity, Doom, as scary and "realistic" as it was, required a computer to be created and to function. It was, after all, just a database dressed to impress.

Sunday, September 27, 2009

Leave My Aura Alone!

The veritable wealth of information on which to write for this week's readings was overwhelming. The text I highlighted in Walter Benjamin's "The Work of Art in the Age of Mechanical Reproduction" and Bill Nichols's "The Work of Culture in the Age of Cybernetic Systems" more than doubled the allowable word count for our entries. With that in mind, I had real difficulty deciding what to focus on for this week's blog. I must admit that when Professor David Parry mentioned Benjamin's focus on the aura of artwork, I imagined Phoebe from Friends trying to cleanse Ross's aura. But that is not really the aura Benjamin is talking about.

Benjamin's concept of aura has to do with the authenticity of an item, namely a piece of art. Benjamin defines authenticity as "the essence of all that is transmissible from its beginning, ranging from its substantive duration to its testimony to the history which it has experienced" (Benjamin 221). Meaning that the aura of a piece of art is not only its presence, but its cultural significance and history.
An ancient statue of Venus, for example, stood in a different traditional context with the Greeks, who made it an object of veneration, than with the clerics of the Middle Ages, who viewed it as an ominous idol. Both of them, however, were equally confronted with its uniqueness, that is, aura (Benjamin 223).
For Benjamin, the presence of the aura of an original piece of art was very important, and the prevalence of not only the reproduction of art, but of art created to be reproduced, greatly concerned him. He was chiefly concerned with photographs and film, which are created to be reproduced. "From a photographic negative, for example, one can make any number of prints; to ask for the 'authentic' print makes no sense" (Benjamin 224).

Film, which is created specifically in pieces, for the camera, to be edited together and projected for an audience later, especially concerned Benjamin. His greatest concern was for the film actor, because "the film actor lacks the opportunity of the stage actor to adjust to the audience during his performance, since he does not present his performance to the audience in person" (Benjamin 228). The actor, then, is performing for the camera and not for any live audience, only for the idea of the audience who may one day view the edited and projected piece. Benjamin believed that since the actor got his aura from acting for the audience, the film actor's aura is depleted: not only is the audience not present for the original performance, but there is no full performance of the original piece. Because of this, he believed that film as an art has a diminished aura. However, he contends that
The film responds to the shriveling of the aura with an artificial build-up of the “personality” outside the studio. The cult of the movie star, fostered by the money of the film industry, preserves not the unique aura of the person but the “spell of the personality,” the phony spell of a commodity (Benjamin 231).
People’s fascination with movie stars, Benjamin believes, is a reaction to the lack of a real aura surrounding the art. He does, however, concede (in a footnote) that film making "not only permits in the most direct way but virtually causes mass distribution. It enforces distribution because the production of a film is so expensive that an individual who for instance might afford to buy a painting no longer can afford to buy a film" (Benjamin 244).

Benjamin, who died trying to escape the Nazis in World War II, did not live long enough to see the invention of the computer. If he had, his idea of aura would have been turned on its head. Bill Nichols attempted to build upon Benjamin’s ideas and expand them for the age of cybernetics.

Nichols contended that "If mechanical reproduction centers [sic] on the question of reproducibility and renders authenticity and the original problematic, cybernetic simulation renders experience, and the real itself, problematic" (Nichols 30). By the 1980s the computer had superseded film and television as the new, state-of-the-art form of expression. Not only did it lack an aura in the way Benjamin conceived it, but computers, along with other scientific advances, were pushing the boundaries between art and artificial intelligence.

At this point, for Nichols, "The question of whether film or photography is an art is here secondary to the question of whether art itself has not been radically transformed in form and function" (Nichols 24). The integration of function into art in our daily lives has become so prolific that in many cases it is hard to tell the difference. Is my iPhone a piece of art? What about my GPS or my video editing system?
The chip is pure surface, pure simulation of thought. Its material surface is its meaning - without history, without depth, without aura, affect or feeling. The copy reproduces the world, the chip simulates it (Nichols 33).
The Bruce Willis movie “Surrogates,” in which people can replace themselves with lifelike remote-controlled robots, came out this weekend. In the movie, it is hard to know whether you are interacting with the real person or with the robot simulating and being controlled by that person. Although an AI movie itself is not news, the fact that CNN is reporting that this idea may not be so far-fetched after all is food for thought. If, in the near future, lifelike robots interact with humans on a daily basis, whose aura is Phoebe going to cleanse? Ross’s or Robot Ross’s? And would she be able to tell the difference?

Sunday, September 20, 2009

How Google Books Nearly Saved my Life

Finding this week's book was slightly difficult for me. I made the mistake of not immediately going to the bookstore or Amazon to get my books at the beginning of the semester, so now I have the challenge of finding them the week they are due. This week was particularly challenging, since the only copy at the UT Dallas Library was checked out and the local bookstores were sold out as well. I eventually found it at the downtown Dallas library (yes, I did grace the halls of a library after all), but not before I found it online! Almost the whole text of Elizabeth Eisenstein's The Printing Revolution in Early Modern Europe can be found on Google Books! Since I did in fact need the whole text of at least the first half, I kept looking until I found a hard copy. But the point is, I found it and thousands of other books online, free to anyone with access to the Internet. In fact, Google now lets you custom print millions of books at home for free, or, for a small sum, you can have your very own copy of an out-of-print book printed and bound for you.

The magnitude of the printing press's impact on the dissemination of information, allowing many more people access to books and information than ever before, is approached only by the Internet. Before the invention of the printing press, books were written and copied by hand, and just one book took a great deal of time and money to create. If someone wanted more than one copy, they either had to hire several scribes or wait for the same scribe to copy the original (or the copy) again, and it was pretty much guaranteed that the two copies would not be identical. For example:
In 1483, the Ripoli Press charged three florins per quinterno for setting up and printing Ficino's translation of Plato's Dialogues. A scribe might have charged one florin per quinterno for duplicating the same work. The Ripoli Press produced 1,025 copies; the scribe would have turned out one (16).
Each of those 1,025 copies was identical and could be sold for less while still making a profit. This meant that many more people had access to books. It changed not only the possibilities of learning (learning by reading instead of only from a live instructor), but also the possibilities of writing, mass communication, organization, and record keeping. The Internet has taken these possibilities even further. Not only can people access and read books and other information on the web, they can also take classes remotely, publish their own works on blogs and other websites, and organize and disseminate endless amounts of information.
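Looking back at Eisenstein's numbers, the economics are worth spelling out. Here is a back-of-the-envelope sketch in Python, assuming (my reading, not necessarily hers) that the press's three-florin charge covered the entire 1,025-copy run:

    # Back-of-the-envelope math on the Ripoli Press example (p. 16),
    # assuming the 3-florin charge covered the whole 1,025-copy run.
    press_charge = 3.0        # florins per quinterno, for the full run
    copies_printed = 1025
    scribe_charge = 1.0       # florins per quinterno, for a single copy

    press_cost_per_copy = press_charge / copies_printed
    advantage = scribe_charge / press_cost_per_copy

    print(f"Press: ~{press_cost_per_copy:.4f} florins per quinterno per copy")
    print(f"About {advantage:.0f} times cheaper per copy than a scribe")

On that assumption, the press's per-copy cost comes out several hundred times lower than the scribe's: the entire revolution in one number.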

Books were not the only media revolutionized by the printing press. Although movable type and block letters had been around for centuries, the printing press made maps, icons, and charts easier to replicate and mass produce. The result was that more people had access, recognized inaccuracies, and created new, better maps and charts. Although easily reusable wood blocks meant that the same picture could be used on one page to illustrate Verona and on another Mantua (59), they could also be used to recreate accurate, recognizable pictures of the kings. In the same way, the Internet has made it far easier to look up pictures of celebrities and get instant updates on statistics and information. The consequence is that on the Internet, as with the early printing press, not everything is looked over and fact-checked right away. However, also like the Internet, the freely offered collective knowledge of readers often led to quickly reprinted new, "more accurate" editions of books.

Then, like now, questions of intellectual property began to arise. Before the invention of the printing press, there were no authors or copyrights. Stories were written in books and then copied by a scribe, or memorized by a wandering scholar or minstrel, and credited to Anonymous, if credited at all.
The terms plagiarism and copyright did not exist for the minstrel. It was only after printing that they began to hold significance for the author (84).
Now, with the easy access to copying and publishing that the Internet provides, the issues of creative property have returned once again. The free books that can be accessed on Google Books are supposed to be out from under copyright rules and considered public domain; however, not all of the books offered up that way have actually been so. Fan fiction sites have also raised the question of who owns the rights to made-up worlds and characters. Even the simple act of reposting a story from a news site can be considered piracy if the original story is not properly credited or linked back to.

Comparing the printing press revolution and the Internet revolution could be a book (or website) of its own. The question is, is the Internet really as big, as life-changing, no, world-changing, as the invention of the printing press? I'm sure we'll be able to print a copy of that book soon.

Monday, September 14, 2009

You keep using that word. I do not think it means what you think it means.

Reading this week's book, Remediation, was a little bit more difficult than the previous readings. It caused what I like to call "brain fuzz," where you get to the end of a paragraph and think, "wait, what did I just read?" I think this is partly because Bolter and Grusin have a nasty habit of making up words they do not explain, and then proceeding to use them juxtaposed against one another in sometimes ambiguous ways. At least the difficult words in the other readings could be looked up in the dictionary.

This is by far the most current of our readings; it was written not only in a time when I was alive, but when I was cognizant of, and actually using, the technology described in the book. So I was a little surprised not to know or understand a lot of the references. The movie Strange Days and the music video Telecommunications Breakdown were used ad nauseam throughout the book, but I have never heard of either of them. And I'll be honest, I had NO idea what a MUD was, and the context clues were about as clear as mud. Multiuser Dungeon? Really? So, it's basically World of Warcraft meets chat room. (Do they even have chat rooms anymore?) Well, I guess as much as things change, they stay the same. So, after discovering that this book was a weird kind of time capsule, I decided to look past the dated material and focus on the content.

Bolter and Grusin argue that media continues to build upon itself as it evolves from one medium to another. It takes part of the old medium and "remediates" it, or uses the old thing in a new way, to show how the new technology improves on the old. One way in which they contend media tries to improve on itself is the ever-present, ever-unattainable quest for immediacy, or reality captured and experienced through media. For example, a photograph is a more realistic version of a painting, a movie (short for "moving picture") is a more realistic version of a picture, a "talkie" is a more realistic version of a movie, and so on. Each medium builds upon the last to fulfill the previous medium's broken promises of realism. This is reminiscent of McLuhan's rear-view mirror theory, in which we can only understand our current media in terms of past media. Bolter and Grusin take it one step further in contending that new media IS old media, with a facelift (which, by the way, they contend is a remediation of the self).

They also contend that media is both trying to be immediate (to feel so real that it disappears) and hypermediate (to constantly remind the viewer or hearer of itself). One of my favorite types of media is the theme park, and not just any theme park, Walt Disney World! Walking down Main Street USA, one is submerged in the sights and sounds of Disney's "make believe" world. In this sense the theme park is very much immediate; it reaches every one of your senses and brings you into its fantasy. Even the lines bring you into the story of a Mission to Mars or a journey back in time. However, a quick trip inside for some air will jolt the vacationer back to reality as he is bombarded with food and souvenirs for sale, at quite a hefty price tag. Or, when walking from one "land" to another, the park guest hears a change in music, costume, and architecture and is again reminded of the media surrounding him. The whole park is full of remediation everywhere one looks. At the end of every ride is a gift shop with souvenirs for that ride. Chances are that the ride was based on a Disney movie, which was probably based on a book. The park itself is a remediation of Disneyland, which was a remediation of the TV show Disneyland, which, in turn, was a remediation of the theme park Disneyland. Disney doesn't call it remediation, though; they call it Synergy.

This back-and-forth remediation is also seen between television and the web, mostly by TV stations that now have websites. In a world where information is free and immediate, channels like CNN use their website to send viewers back to the TV station, and the TV station to send viewers back to the website. Both mediums are full of hypermediated material with multiple boxes of scrolling information and pictures; each is a veritable wealth of information and stimulation. I believe CNBC is the station responsible for the "octobox" and the "decabox," showing 8 or even 10 "live" people at once talking with each other about the day's news. Although it is in some ways immediate, because the people are "live" and talking to each other in real time, it is also extremely hypermediated: with not only the sheer quantity of talking heads, but also the graphics and music along with them, one cannot help but remember that she is watching TV.

Although technology (especially the web) has moved forward by leaps and bounds in the ten years since Remediation was written, its examples can still be seen in the new (and let's face it, remediated) generations of technology available today.

Sunday, September 6, 2009

Oral Culture and Your Local Library

I joined my local branch of the Dallas Library last November so I could get my hands on some information about my new dog. After two trips, I called it quits. They did not have much useful information, the hours were inconvenient, and I could find everything I needed online. Without late fees. 24 hours a day. I am not the only one fed up with traditional libraries; most of the people I know have not stepped into a library in years. Why would they, when a world of information is right at their fingertips through the World Wide Web?

Is the library going to be another casualty of the digital age, falling by the wayside like so many newspapers and record stores before it? The answer, of course, is yes. Or maybe no. According to John D. Sutter's CNN article, libraries may have a future, with or without books. Like any other piece of technology, the written word has to change with the times or it becomes obsolete. As a source of knowledge and free information, the library as we know it is likely to soon become extinct. However, some are changing with the times.

Librarians realize that the "one-way flow of information from book to patron isn't good enough anymore (Sutter)." The same frustration with the unchanging nature of books is expressed in Walter Ong's "Orality and Literacy" when he says,
"a written text is basically unresponsive. If you ask a person to explain his or her statement, you can get an explanation; if you ask a text, you get nothing back, except the same, often stupid words which called for your question in the first place (79)."
According to Sutter, many forward-thinking libraries are solving this problem by beginning to blog and tweet about what is going on in their neighborhoods, as well as offering digital, non-book services like video, gaming, and music labs. This is just the type of change that Marshall McLuhan called for in his 1969 Playboy interview. Forty years ago, he saw the change coming when he said,
"Book learning is no longer sufficient in any subject; the children all say now, 'Let's talk Spanish,' or 'Let the Bard be heard,' reflecting their rejection of the old sterile system where education begins and ends in a book."
By meeting these needs through turning themselves into community gathering centers where people can debate ideas, and by making stories come alive by acting them out and recording them, modernized libraries are in some ways returning to their oral culture roots.

But are the values of oral, or preliterate, culture better than those of bookish, literate culture? In preliterate, oral culture, events were remembered and agreed upon through collective consciousness, and "customary law, trimmed of material no longer in use, was automatically always up to date and thus youthful (Ong, 98)." People did not consult a written, unchanging book; they consulted each other. These days, some people would say that we remember too much through the printed word. With the removal of the physical book, a more interactive society becomes possible. Although the traditional function of a library may be challenged, the digital age may be able to use new media to "retribalize" society. Theorists like McLuhan believed that “Print centralizes socially and fragments physically, whereas electronic media bring man together in a tribal village that is a rich and creative mix, where there is actually more room for creative diversity than within the homogenized mass urban society of Western man (Playboy).” He believed that electronic media could bring the world together in a way similar to that in which printed media had torn it apart. However, in some ways, written media has brought the world together for thousands of years, allowing people to communicate with each other even if they do not speak the same “mother tongue,” through Learned Latin.

According to Ong, “Without Learned Latin, it appears that modern science would have got underway with greater difficulty, if it had gotten under way at all. Modern science grew in Latin soil, for philosophers and scientists through the time of Isaac Newton commonly both wrote and did their abstract thinking in Latin (114).” So without Learned (written) Latin, the world might very well still be in the Dark Ages scientifically, and much more segregated socially. The thinkers of the world were able to consult the same texts to convey and discover information in a way that was not possible through any other spoken or written language. Through new media, this type of communication without boundaries is once again available, but on a much larger scale.

Although some level of affluence is needed to own new technology, access to it is free and becoming much more available through forward-thinking libraries. So, if you walk into your local library branch in the next few years and see that, although it lacks books, it’s got a great interactive media lab and discussion forum, don’t be surprised. Just summon your inner tribesman and join in.

And tweet me. Maybe then I’ll give the library a chance again. If it’s open.

Saturday, August 29, 2009

Plato and Social Media in the Workplace

Several months ago my bosses called a staff meeting at work to talk about how to use a new marketing tool we were to begin using: Facebook. Now I, and the rest of our 20-something staff, had been secretly (and sometimes not so secretly) using it for years, much longer than either of our superiors, but we still needed to be taught the "correct" way to use it in the work environment. Since then, I have seen several articles on teaching people the correct way to use Facebook and other social media in a corporate environment. Simple rules, like staying organized by separating your work life from your personal life online, always assuming that everything is “on the record,” and knowing your facts and citing your sources (A Corporate Guide to Social Media), are great policies; they not only improve the use of social media, they keep employees from getting fired.

My coworkers and I thought being taught how to use Facebook was silly at first, but we came to realize that we can still learn a better way to do something we already know how to do. Even in Plato's day, statesmen and other educated men who had been writing speeches for years had much to learn. In Plato’s Phaedrus, after reading a clunky, unorganized speech by Lysias the orator, Socrates determines that “every discourse ought to be a living creature, having a body of its own and a head and feet; there should be a middle, beginning, and end (436),” in order to make better sense. In addition to making sure thoughts flow in a logical way, he also stresses that “a speech should also end in a recapitulation,” meaning that “there should be a summing up of the arguments in order to remind the hearers of them (441).” These ideas seem obvious and mundane to literate people in the 21st century, but someone had to sit down, analyze the organization of basic rhetoric, and come up with rules for its best use. Whether or not Socrates’ rules were ever applied to improve Lysias’ work, they are used in, and improve, all forms of rhetoric today.

Plato, as a media critic, was not only concerned with the flow of writing, but also with the perfection, clarity, and appropriateness of the writing for each audience. To him it was not only important to say the right things, but to know all the right ways to say them, and at what times. Socrates tells Phaedrus that when “he knows the times and seasons of all these things, then, and not until then, he [an orator] is a perfect master of his art; but if he fail in any of these points, whether in speaking or teaching or writing them, and yet declares that he speaks by rules of art, he who says ‘I don’t believe you’ has the better of him (448).” In the same way, if we do or say something inappropriate on a social media website, we can be just as easily discredited in everything we do. This is why it is important to refer to your sources on social media sites with hyperlinks, etc., and to always remember that everything is on the record. As this startling NPR story about Google's deal with publishers points out, even if you delete a scandalous or just plain inappropriate comment or blog post, there is a chance that it has already been saved as a screen capture or printed out, and there is little you can do about it. If you are using social media at work as a representative of your company, or even just of your professional self, this is something to keep in mind.

One of the reasons for Socrates’ concern about perfection in writing was its permanence. He was concerned that people gave too much implied intelligence to the written words of speeches: “but if you want to know anything and put a question to one of them, the speaker always gives one unvarying answer (453).” In this area, we find some exception in social media. It is inherently interactive, and if someone has a question about a blog post or a tweet, all they have to do is reply or comment, and they will probably get an answer; maybe not just from the original author, but from others following the author. The many wiki sites are a popular example of this phenomenon. You can send a question out into the void at breakfast and get hundreds of answers back by lunchtime. Although the words are not yet sentient themselves, you are more likely to get an answer from them than you were in Plato’s day.

As has been the trend, any new media is scary at first to the general public, especially to intellectuals who are worried about innovations making people lazy and stupid. Even Plato feared that the letters of the written word would become a crutch and those who used them would know “only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality (452).” People are saying the exact same things about the generations coming up knowing and using digital media. It is not the end of knowledge, but just the transference of knowledge to a new medium. It means a change in the way we communicate at work as well as at home. We should not be afraid of these tools, but learn how to use them, and use them properly.

Don't let the evolution scare you. It’s not business as usual. And that’s OK.