Alan Turing’s “Computing Machinery and Intelligence”

http://www.loebner.net/Prizef/TuringArticle.html

Turing begins his paper by describing the rules of a test he calls the "imitation game" (what would later come to be known as the Turing Test) as a means of answering the question, "Can machines think?" In the game, there are three participants: an interrogator, a man, and a woman. The object of the game for the interrogator is to determine which is the man and which is the woman, only by asking each of them questions. The questions are administered in such a way that the interrogator gleans no information beyond the answers themselves – for example, through the passing of typed notes. Turing's suggestion is that if the man or the woman were replaced by a machine, and the interrogator found it equally difficult to distinguish between human and computer, then it could be said that the machine in question "can think."
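
To make the structure concrete, here is a toy sketch of the game as a text-only protocol (my own illustration, not Turing's, though the sample exchange about adding 34957 to 70764, with its deliberately wrong answer, is taken from the paper's sample dialogue):

```python
# A toy rendering of the imitation game: the interrogator interacts with two
# hidden players through text alone, identified only by anonymous labels.

def human(question):
    return "I'd have to work that out on paper."

def machine(question):
    # In Turing's sample dialogue, asked to add 34957 to 70764, the machine
    # pauses and answers 105621 -- deliberately wrong, to seem more human.
    return "(pause of about 30 seconds) 105621."

def interrogate(players, questions):
    """players: dict of anonymous label -> answer function."""
    return {label: [p(q) for q in questions] for label, p in players.items()}

if __name__ == "__main__":
    transcript = interrogate({"X": human, "Y": machine},
                             ["Add 34957 to 70764."])
    for label, answers in transcript.items():
        print(label, answers)  # the interrogator must guess from this alone
```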

In the next section of his paper, Turing discusses possible criticisms of the way in which he has reframed the question. He argues that his proposed method factors the physical appearance of a machine out of our perception of whether or not it can think, and that "[t]he question and answer method seems to be suitable for introducing almost any one of the fields of human endeavour that we wish to include." He also argues that even though the game may seem weighted against the machine (just as it would be very difficult for a human to trick someone into thinking they are a computer), this doesn't matter as long as one accepts that it is possible for a machine to be built that can take this test.

In the next few sections of the paper, Turing clarifies that the "machine" in his description of the game means a "digital computer," and then goes on to describe various qualities of digital computers. He discusses the elements of a digital computer ("store," "executive unit," and "control") and describes their finite-state nature. A reference is made to Babbage's Analytical Engine as an example of a machine that is a digital computer despite not being an electronic one. A section is also spent arguing that, because of this discrete-state nature, a digital computer can simulate any other discrete-state machine. This implies that if any one digital computer can be made to play the imitation game well, then the broader question of "can machines think?" is answered.
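
To see why this universality matters, here is a minimal sketch (mine, loosely modeled on Turing's example of a wheel machine that clicks between a few positions) of a digital computer mimicking an arbitrary discrete-state machine given only its transition table:

```python
# Because a discrete-state machine is fully described by its transition
# table, a digital computer can mimic it by simple table lookup.

TRANSITIONS = {            # (state, input) -> next state
    ("q1", 0): "q2", ("q1", 1): "q1",
    ("q2", 0): "q3", ("q2", 1): "q2",
    ("q3", 0): "q1", ("q3", 1): "q3",
}

def simulate(start, inputs, transitions=TRANSITIONS):
    """Step the simulated machine through a sequence of inputs."""
    state = start
    for symbol in inputs:
        state = transitions[(state, symbol)]
    return state

print(simulate("q1", [0, 0, 1, 0]))  # -> q1
```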

These sections specifying the machine described in the original outline of the imitation game are followed by a list of possible arguments in opposition to the claim that it is possible to construct a machine that can think. These arguments and my brief interpretations of Turing’s responses to them are as follows:

  1. “Thinking is a function of man’s immortal soul. God has given an immortal soul to every man and woman, but not to any other animal or to machines. Hence no animal or machine can think.”

Turing’s response: If God is truly an omnipotent being, then should it not be within his power to assign a soul to an animal, or, similarly, to a machine, and thus also give the power to think?

  2. “The consequences of machines thinking would be too dreadful. Let us hope and believe that they cannot do so.”

Turing’s response: This argument is so trivial that it needn’t even be considered.

  3. Mathematics has shown that there are questions which no mechanical procedure can answer correctly. Doesn’t this mean that there are problems which can’t be solved by digital computers, as discrete-state machines, but which could be solved by humans?

Turing’s response: Although this is a strong argument, do we assign too much importance to our ability to answer questions that a machine theoretically cannot? Besides, those who make this argument would probably be willing to discuss the question through the criteria of the imitation game anyway.

  4. Unless a being can express emotions and be conscious of these emotions, it cannot be said that this being can think.

Turing’s response: It’s possible to test this quality using the imitation game – say a machine has been programmed such that it writes a poem. It’s possible for an interrogator to ask questions about the poem to assess whether the machine was conscious of its decisions in writing it.

  5. There will be things that a machine cannot do (“…be kind, resourceful, beautiful, friendly…”).

Turing’s response: This is largely an issue of storage capacity – given enough storage, machines can exhibit a much greater diversity of behaviors.

  6. Machines can only do what they are programmed to do – they cannot exhibit some behavior that was not already defined in the programming.

Turing’s response: What this argument is really suggesting is that machines can never surprise us. But can it really be said that humans are capable of genuinely new thought, if all their ideas grow out of things they have learned?

  7. The nervous system is not a discrete-state machine – it is continuous. How can a computer, as a discrete-state machine, simulate human thought then?

Turing’s response: A digital machine can simulate a continuous machine so closely that the difference will not be clear to the interrogator in the imitation game.

  8. There is no set of rules which can describe what a person should do for every possible scenario.

Turing’s response: Although it may be difficult to comprehend, we cannot say for sure that there is not one set of rules which can be used to predict all of our behavior. Even in a simple case where a computer is given a number and then returns another with no indication of what it has done, it is difficult or impossible to guess the rule that will predict every possible output – that does not mean that one does not exist.
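As a hedged illustration of this response (my example, not Turing's): the rule below is complete and deterministic, yet from a few input/output pairs alone an observer would be hard pressed to guess it, let alone predict every output.

```python
# A complete "rule of behavior" that is nevertheless hard to reverse-engineer
# from samples: deterministic, but its outputs look patternless at a glance.

def opaque_rule(n):
    return (n * n + 7) % 1000  # the hidden rule

for n in (3, 14, 25, 99):
    print(n, "->", opaque_rule(n))  # 3 -> 16, 14 -> 203, 25 -> 632, 99 -> 808
```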

  9. A machine cannot exhibit extra-sensory perception.

Turing’s response: In this case, a special imitation game will have to be set up with a “telepathy-proof room” to be sure that the machine is not being influenced by psycho-kinetic powers.

In the final section of the paper, Turing returns to oppositional argument number 6 – Ada Lovelace’s argument that a computer can only do what it has been programmed to do. He discusses a process which he believes could both overcome the difficulty of programming a machine that could successfully pass the imitation game and answer the Lovelace argument. In this process, rather than trying to program a fully functional machine from the start, it might be better to create a “learning machine” – a machine that begins with only a base set of rules and then continually updates its rules of interaction as it learns.

SSDs: Speeding Up Storage

This is a summary of “Engadget Primed: SSDs and you,” the “going further” reading from November 7th, when we talked about hardware enhancements. Although the main focus of the article was on modern advances in solid state storage, it started by chronicling previous important developments in the history of data storage. The article is very lengthy, so I’ve tried to summarize it as briefly as I can while still getting the important points, but this post will be a long one.

One of the first systems it mentioned was IBM’s RAMAC, which we also discussed in class. The system included IBM’s first disk storage unit, with a whopping storage capacity of 4.4 MB. We’ve come a long way since: today a 60GB disk is considered tiny, and 1TB is par for the course.

IBM 350 Disk Storage Unit - Engadget

After recounting the early players in mechanical storage, the article goes on to explain the technology behind modern mechanical drives like the ones still in predominant use today. Surprisingly, at its roots the technology is barely different from that in the first IBM hard drives. A hard drive has one or more spinning magnetic platters. Bits (0’s and 1’s) are recorded based on how each magnetic grain is polarized. The primary differences between early rotary hard drives and newer models are the speed at which they rotate (the IBM 350 spun at 1200 RPM; modern hard drives typically spin at 7200 RPM) and, most importantly, the data density. There is a limit to how much data can physically be stored in a given area of a magnetic disk, though:

Eventually, though, magnetic storage runs into fundamental laws of physics. In this case, those immutable rules are represented by the superparamagnetic effect (SPE). Once we shrink magnetic grains below a certain threshold, they become susceptible to random thermal variations that can flip their direction.

Essentially, traditional hard drives are physically running out of space to store any more data. Manufacturers have pushed the limits to about 3TB, but at some point, it’s not possible to store more data without drastically affecting performance.

Finally, after working through the background information, the article delves into SSDs and how they work. I’ve been using an SSD for about 6 months now, and had read some articles about them previously, but hadn’t learned about the inner workings in nearly this much detail. I suggest you read the article, because I can’t possibly fit all of the details into this space, but here’s an overview.

So how does flash work, and what makes it different from traditional magnetic drives? The short answer is that instead of storing data magnetically, flash uses electrons to indicate ones and zeroes. You might already recognize why this is a plus: no moving parts. That means no noise, no head crashes, and greater energy efficiency since you don’t have to move a mechanical arm. And unlike DRAM, it’s non-volatile — it doesn’t need constant power to retain information.

Non-magnetic storage has actually existed for many years, and has previously been used in specialized applications such as space probes and data acquisition systems for oil exploration. The first consumer-targeted flash storage for use as a storage disk on a regular computer showed up around 2005 in a Samsung laptop. With 32GB of flash storage, the laptop cost almost $4000. If you thought SSDs are expensive now, think again.

The article goes on to explain in-depth how the underlying physics of solid state storage work. To summarize a few of the most interesting points:

  • There are two types of flash memory: SLC and MLC (single-level cell and multi-level cell). MLC memory can store twice as much data in a given amount of space, but takes longer to read and write, and degrades faster.
  • SSDs slow down over time as they fill up with data. This has to do with how the memory cells are wear-leveled and how data is actually written to the SSD (a toy sketch of the idea follows this list). Some SSDs actually ship with extra space built in (used by the device but not reported to the operating system) to account for this.
  • SSDs wear out relatively quickly. Most MLC-based flash has a limit of around 10,000 write cycles per cell (SLC lasts closer to 100,000). This limit can be reached in as little as a year. An article on Coding Horror about the hot/crazy scale of SSDs recounts numerous SSD failures, with none lasting more than 2 years.
  • SSDs are more power efficient than HDDs because they have no moving parts.
  • The controller chip being used has a huge impact on performance. Early SSD controllers were often low-quality, resulting in poor performance and short lifespans.
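
Since wear leveling and over-provisioning come up twice in that list, here is the toy sketch promised above (my own illustration; real SSD firmware is far more sophisticated): writes are steered to the least-worn flash block, and a logical-to-physical map hides the shuffling from the operating system.

```python
# Toy flash translation layer: least-worn-first allocation spreads erase
# cycles evenly across blocks. Note there are 8 physical blocks backing only
# 4 logical ones -- a crude stand-in for over-provisioned spare space.

class ToyFTL:
    def __init__(self, num_blocks, erase_limit=10_000):  # MLC-order endurance
        self.erase_counts = [0] * num_blocks
        self.mapping = {}                 # logical block -> physical block
        self.free = set(range(num_blocks))
        self.erase_limit = erase_limit

    def write(self, logical_block):
        old = self.mapping.get(logical_block)
        # Steer the write to the least-worn free block.
        target = min(self.free, key=lambda b: self.erase_counts[b])
        self.free.remove(target)
        self.mapping[logical_block] = target
        if old is not None:
            # The stale copy must be erased before reuse; erases cause wear.
            self.erase_counts[old] += 1
            if self.erase_counts[old] < self.erase_limit:
                self.free.add(old)        # else: block retired for good

ftl = ToyFTL(num_blocks=8)
for i in range(100):
    ftl.write(i % 4)                      # hammer 4 logical blocks
print(ftl.erase_counts)                   # wear ends up spread over all 8
```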

SSDs are still a rapidly evolving technology. Only recently have prices started to come down into the $1/GB range. As the technology advances, it will become increasingly affordable – right now, SSDs are primarily used by developers, gamers, and other power users, but they are starting to make their way into the mainstream, especially when integrated with consumer products like tablets and ultralight laptops. The big question that remains is: are they worth it?

In my view, not quite yet. As the price continues to drop, SSDs will soon reach the point where they are economically feasible for everyone, but presently, it’s impractical to store large amounts of data on them. A popular choice right now is to have a relatively small (60-160GB) SSD with boot files, applications, and some working data on it, and a secondary mechanical hard drive to store large files for which transfer speed is less important. This is a challenge in laptops though, where space is at a premium and it’s often difficult to fit both a hard drive and solid state drive into one machine. The solution I use is an optical bay caddy: I’ve removed the DVD burner from my laptop, and put a hard drive caddy in its place. I have a 128GB SSD as my primary drive, and store large files on the hard drive. Some laptops are now shipping with a hard drive and a small SSD both built in, as well.

Solid state storage is a fascinating technology, both for its physical underpinnings and the effects it’s having on the computing industry. I foresee a time in the near future when SSDs will be standard fare, and we will all wonder how we lived without them. For those of us who’ve already had a taste, that question has already presented itself.

Reading Summary: Nov. 4, 1952: Univac Gets Election Right, But CBS Balks

http://www.wired.com/science/discoveries/news/2008/11/dayintech_1104

In the summer of 1952, Remington Rand, the manufacturer of the Univac, approached CBS News with the idea of using Univac to predict the results of the election that fall. Sig Mickelson and Walter Cronkite, the news chief and anchor, respectively, thought it would be interesting and “at least be entertaining to use an ‘electronic brain’” in their analysis of the election. When election time came, however, they disregarded Univac’s predictions of the election’s outcome.

To prepare for the election, Eckert and Mauchly worked with a former colleague from the University of Pennsylvania to write a program that compared the results from previous elections to the results of the 1952 election as they came in. Interestingly, they had to work at Mauchly’s house because he wasn’t allowed to work at the company anymore, due to his blacklisting as pro-Communist. The plan was to connect Univac technicians to the CBS studios via teletype machine; as the results came in, the data would be transferred to Univac by copying it onto paper tape.

Polls conducted before the election had indicated that the Democrat, Illinois Gov. Adlai Stevenson, would finish anywhere from a landslide victory to barely ahead of the Republican, Gen. Dwight D. Eisenhower. Because of this, Mickelson scoffed when Univac predicted that Eisenhower would win with 438 electoral votes, giving 100-1 odds that Eisenhower would gain the 266 electoral votes needed to win. He actually refused to air the results. A second calculation with more data backed up this prediction (after a brief miscalculation caused by an extra zero in Stevenson’s totals).

The final results of the election? An Eisenhower landslide: 442 electoral votes to 89, only 1 percent off of Univac’s prediction. After the final results, CBS confessed that Univac had made an accurate prediction hours earlier that they hadn’t aired. In the 1956 election, all three major networks used computer analysis in their election-night newscasts.

Reading Summary: Is Online Privacy a Generational Issue?

Link to article: http://www.wired.com/geekdad/2009/10/is-online-privacy-a-generational-issue/

To start the article, West divides internet users into two groups: digital immigrants and digital natives. Digital immigrants were born before the existence of a given digital technology – in this article, the internet. Digital natives were born after the creation of the technology and have grown up using it. In the context of the internet, teenagers and young adults would be considered digital natives, while middle-aged and elderly internet users would be considered digital immigrants.

According to West, there is a perception that digital natives do not value their privacy as much as digital immigrants do. This may be because digital immigrants think about privacy in terms of the ability to conceal information from others. Digital natives, on the other hand, think about privacy as sharing certain information with specific groups and not with others. This is why social networks such as Facebook now allow their users to choose what content they want to be public and what content they only want certain groups of people to see.
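
A minimal sketch of that "share with some groups, not others" model (my own illustration, not from the article): each post carries an audience, and visibility is a membership check rather than all-or-nothing secrecy.

```python
# Per-post audience control: privacy as selective sharing, not concealment.

GROUPS = {
    "close_friends": {"alice", "bob"},
    "coworkers": {"carol"},
    "public": None,                  # None means visible to everyone
}

def can_see(viewer, post):
    audience = GROUPS[post["audience"]]
    return audience is None or viewer in audience

post = {"text": "New job!", "audience": "close_friends"}
print(can_see("alice", post))        # True
print(can_see("carol", post))        # False
```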

The article goes on to cite a Pew study about online privacy. According to the study, 60% of adults and 66% of teens restrict access to information on their social networking profiles. The article concludes by saying that privacy is not all or nothing, public or private. Instead, we should expect to be able to choose the level of privacy that we want certain information to have. This allows us to have the benefits of communicating and sharing online without the loss of privacy that would otherwise come with it.

Class Summary: 11/30

We had quite a busy day today, jamming 9 presentations into the 50-minute class.

First, Nathan gave a presentation on fonts, starting with a history of printing. This began with the printing press in 1440, where documents were duplicated by creating molds of each page. Luckily we’ve come a long way from this difficult process. Nathan contrasted two kinds of fonts: bitmap fonts and outline fonts. Bitmap fonts are just a matrix of points that make up the character.

The problem with bitmap fonts is that they are not scalable, so they must be made for a variety of sizes. Outline fonts define vectors and drawing instructions so that they are scalable. Font technology has been very important in broadening our printed communication abilities.
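
A toy illustration (mine, not from the presentation) of the difference: a bitmap glyph is literally a matrix of points, so scaling it can only duplicate pixels, which is why each size must be designed separately, while an outline font can re-rasterize its curves at any size.

```python
# A bitmap glyph is just a matrix of on/off points; naive scaling duplicates
# pixels and adds no detail, hence the need for one bitmap per size.

GLYPH_A = [
    "..#..",
    ".#.#.",
    "#...#",
    "#####",
    "#...#",
]

def scale(glyph, factor):
    out = []
    for row in glyph:
        wide = "".join(ch * factor for ch in row)  # stretch horizontally
        out.extend([wide] * factor)                # stretch vertically
    return out

print("\n".join(scale(GLYPH_A, 2)))  # a blocky 2x 'A', no new detail
```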

Next, Mai talked about HCI (Human-Computer Interaction) and GUIs (Graphical User Interfaces). GUIs were a huge advance in HCI over the previous text-based command interfaces. Douglas Engelbart invented the mouse and created the first GUI. Engelbart’s work led to Xerox creating the first ever GUI computer. Other interfaces that are emerging include touchscreen, gesture, 3D, and tactile interfaces.

Nick next talked about Evidence-Based Medicine (EBM), which uses statistical analysis of medical data across numerous parameters to improve patient care. This field arose from the same problem that Hollerith faced: too much data to analyze. Two pioneers of the field, Dr. Robert Ledley and Dr. Lee Lusted, headed an NIH initiative to integrate computers into hospitals. Ledley’s “Metal brain for diagnosis” was a primitive diagnosis program, where one could push buttons for exhibited symptoms. EBM has struggled because of the divide between doctors and computer people, but it can be incredibly effective: one example was a massive increase in survival rates for Acute Respiratory Distress Syndrome.

Then I (Andrew) gave a presentation on Arthur Lee Samuel and the field of machine learning. I talked about Samuel’s checkers program, the first ever machine learning program. Then I talked about other games that computers are trying to master, one of the most challenging of which is Go. Finally, I attempted to give a quick introduction to the field of machine learning, and supervised vs. unsupervised learning. I gave a few scenarios where these methods could be applied, and some real-life applications like Google News and Facebook.


Kevin then talked about the origin of video games, starting in 1958 with “Tennis for Two” by William Higinbotham, a game displayed on an oscilloscope. It was created to help draw public interest to the laboratory that Higinbotham worked at, because they were worried the technical pieces wouldn’t generate enough interest. It was a huge hit at the exhibition, with hundreds of people lining up to play. But no one expected interest in computer games to continue…

Then John talked about the important impact of computers on the financial market. In the 1950’s, if you wanted to know the value of a stock, you had to call your broker, who would look through paper ticker strips or, if the price couldn’t be found, send someone out onto the floor to find it. With a huge advance, Quotron I, the ticker was fed in and written onto magnetic tape. Next, Quotron II had a screen that could show important features, including yearly highs and lows. Now, stock information is accessible at a single mouse click with sites like Yahoo Finance. Since the 1950’s, with these advances, we have gone from 3.8 million to 3.5 billion shares traded per day.

We next learned about slot machines from Jenelle. Back in the day, these machines were known as “one-armed bandits”. Charles Fey invented these early mechanical slot machines, which were unreliable and unpopular in casinos. Nowadays, the randomization is done by a computer, which determines where the reels stop spinning. The digitization of slot machines has turned them into one of the top attractions in casinos.
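
A hedged sketch of what that digitization amounts to (my illustration; real machines use regulated RNGs and weighted virtual reels): a pseudorandom draw decides the stops before the reels finish spinning, and the spinning itself is just animation.

```python
# Digital slot logic in miniature: the outcome is a pseudorandom draw;
# the spinning reels are presentation only.

import random

REEL = ["cherry", "bar", "seven", "bell", "lemon", "blank"]

def spin(rng=random):
    stops = [rng.randrange(len(REEL)) for _ in range(3)]  # one stop per reel
    return [REEL[s] for s in stops]

symbols = spin()
print(symbols, "JACKPOT!" if len(set(symbols)) == 1 else "")
```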

Manali next presented on the history of hearing aids. Initially, hearing aids had to be concealed under clothing or fans or elsewhere, because they were much too large. Some important advances contributed to miniaturizing hearing aids, including vacuum tubes, the micro-telephone, the printed circuit, transistors, and the integrated circuit. In fact, the first application of integrated circuits was for hearing aids.

Finally, Sarah talked about photography. The history of cameras began with the camera obscura, which projected the scene onto a screen behind the pinhole. Later, bitumen plates were placed inside these pinhole cameras, which allowed the image to be stored temporarily, though it faded over time. Advancement accelerated in the 19th century, with silver chloride in 1839, printing using negatives in 1841, hand-held cameras in 1879, camera film in 1889, and color film in 1935; and finally, in 1991, people realized that digital cameras were here to stay.

And that’s the end of the course. We concluded by making ice cream sundaes and having a dance party on the table.

Have a good winter break everyone, it’s been fun.

Andrew

Class Summary: 11/28

Class on 11/28 started off with a 30-minute talk on the subject of space computing. The first topic was a brief overview of command sequences and satellite simulation, presented by Cody, who has previous experience in this field as an intern at the Jet Propulsion Laboratory. Since space missions are very costly and sensitive, we learned that command sequences sent to spacecraft are typically simulated on ground computers before being uplinked and executed on the spacecraft itself.

The conversation then moved to the effects of the radiation levels found in space on computers, and how computers and software on spacecraft have to be designed to be radiation hardened. The key reason is that space is filled with charged particles, trapped in planetary magnetospheres or in the solar wind. When these particles strike computer components, they have a tendency to cause unexpected changes to stored data and program states, in what we learned is called a single-event upset, or SEU. We also learned that continued exposure to radiation can cause permanent damage to electronics. Due to these effects, a process known as radiation hardening is important to keep computers in space operating reliably for long periods of time. The basics of radiation hardening were covered, including the use of different materials in integrated circuits, less susceptible designs for particularly sensitive components, hardware redundancy and error checking, and careful software design.
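
As a miniature of the hardware-redundancy idea mentioned above (my sketch, certainly not flight code): triple modular redundancy keeps three copies of a value, and a majority vote masks an SEU in any single copy.

```python
# Triple modular redundancy in one line: a bitwise two-out-of-three vote
# recovers the stored value even if an SEU corrupts one copy.

def majority_vote(a, b, c):
    return (a & b) | (a & c) | (b & c)

stored = [0b1010_1100] * 3           # three redundant copies of one byte
stored[1] ^= 0b0000_0100             # an SEU flips a bit in one copy
print(bin(majority_vote(*stored)))   # vote still yields 0b10101100
```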

Dr. Wagstaff spoke about a project that she had worked on at JPL called Binary Instrument Toolkit for Fault Localized Injection of Probabilistic SEUs, or BITFLIPS for short. This project is a set of programs for testing software against spurious radiation effects, simulating the space radiation environment in which bits in memory may be flipped unexpectedly. Despite testing measures like these, there are usually still unexpected problems encountered during actual space missions.
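
In the spirit of BITFLIPS (this is my toy sketch, not the tool's actual interface), fault injection can be as simple as randomly flipping bits in a buffer and watching whether the software under test notices or crashes:

```python
# Toy SEU injector: with some probability per byte, flip one random bit.

import random

def inject_seus(buf: bytearray, p_flip=1e-3, rng=random):
    flips = 0
    for i in range(len(buf)):
        if rng.random() < p_flip:
            buf[i] ^= 1 << rng.randrange(8)   # flip one random bit
            flips += 1
    return flips

data = bytearray(b"\x00" * 10_000)
print(inject_seus(data), "bits flipped")      # ~10 expected at p = 1e-3
```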

On the topic of software, debugging problems on distant spacecraft also came up, with the Mars rover Spirit as an example. Dr. Wagstaff told the story of Spirit’s flash anomaly, which occurred shortly after it landed on Mars. Communication was lost with the rover, but ground stations still picked up occasional signals from Spirit. Through debugging on the ground, it was found that the file system on the MERs routinely had indexing overflows that caused unexpected system restarts. After finding this, the MER team developed a workaround but could not fix the fundamental problem.

Another topic discussed in relation to radiation hardening is how hardened hardware tends to lag behind current computer technology. An example of this is the main on-board computer of the recently launched Mars Science Laboratory, a.k.a. Curiosity. Its $200,000 RAD750 computer sports a radiation-hardened version of IBM’s PowerPC 750 core, clocked at 200MHz. Although it is the year 2011, this hardware is similar to that of a very dated first-generation Power Macintosh G3. Older missions, like the Mars Exploration Rovers Spirit and Opportunity, have even more limited 20MHz RAD6000 computers that might be on par with a fast calculator. Despite these challenges, even the MERs were capable of basic autonomy feats such as image-based obstacle avoidance.

Curiosity, running a limited 200MHz RAD750 processor

Aside from the harsh radiation environment and limited computer hardware, the communications side of space computing was also discussed in class. Unlike terrestrial networks, where data can be transferred across the globe fast enough to be mostly unnoticeable, communicating with spacecraft outside of Earth orbit involves long delays due to the finite speed of light. For example, at Mars’s furthest point from the Earth, round-trip communication with Mars probes may take upwards of 40 minutes. In addition to the long delays, communicating with deep space requires large and complex radio equipment, such as the very large dish antennas of the Deep Space Network that we discussed in class. On top of all of this, the point was made that data rates between Earth and places in deep space are commonly low, restricting the amount of data that can be sent to and from spacecraft far from Earth.

70m Antenna - Deep Space Network, Madrid Station
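
The delay figure above follows directly from the speed of light; here is a quick back-of-the-envelope check (the maximum Earth-Mars distance of roughly 401 million km is an approximation on my part):

```python
# Light-time to Mars at maximum separation: no protocol trick can beat this.

C_KM_S = 299_792            # speed of light, km/s
MARS_MAX_KM = 401_000_000   # rough maximum Earth-Mars distance

one_way_s = MARS_MAX_KM / C_KM_S
print(f"one way: {one_way_s / 60:.1f} min, "
      f"round trip: {one_way_s / 30:.1f} min")  # ~22 min and ~45 min
```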

After our discussion on space computing, the class transitioned to student presentations on various topics they have been researching this term. Austin Sharp gave the first presentation, on early digital computers in the USSR, including the Strela and the BESM, built during the mid-1950’s for artillery and nuclear weapons calculations respectively. Although the Soviets were catching up with the U.S. in many other fields at the time, these early Soviet digital computers ultimately failed to meet their goals. One reason given for this was the high level of competition, rather than cooperation, between the Strela and BESM teams. Austin noted that cooperation between von Neumann, Goldstine, Eckert, and Mauchly in the U.S. ultimately resulted in ENIAC and the start of many successful computer projects that the USSR could not rival at the time.

Cody Hyman made the second presentation, on general-purpose electronic analog computing following World War II. This talk covered the importance and common use of electronic analog computers after the war. Analog electronic computers are devices that use analog circuits to model other systems, and they could typically solve certain classes of problems faster than the digital computers of the day. Some of the first electronic analog computers were designed specifically for simulating guided missiles, but they quickly became more generalized and went into mass production. While almost entirely extinct today, analog computers were presented as an important and widely used tool in science and engineering between 1950 and 1970, in applications ranging from the flight computers on the Apollo lunar landers to ICBMs, cooling simulators for nuclear reactors, and airplane design.

Austin Valeske made the third and final presentation of the day, on the Airy tape, one of the first noted instances of debugging. This now-familiar technique in computer science came about when Maurice Wilkes, the creator of EDSAC, found that a program to evaluate the Airy integral (the solution to the differential equation y''(x) = x·y(x)) contained 20 errors in its 126 lines. This led to the investigation of techniques including peeping, where one looks into the memory after each instruction; post-mortem debugging, where the memory is saved after the program terminates; and using interpreters to step through the program.
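
For flavor, the kind of computation the Airy tape attempted can be sketched in a few modern lines (my illustration, certainly not Wilkes's EDSAC code): step the equation y'' = x·y forward from initial conditions.

```python
# Numerically integrate the Airy equation y''(x) = x * y(x) with simple
# Euler steps -- the sort of calculation the original 126-line program did.

def airy(y, dy, x_end, h=1e-4):
    x = 0.0
    while x < x_end:
        y, dy = y + h * dy, dy + h * (x * y)  # update y from y', y' from y''
        x += h
    return y

# Ai(0) ≈ 0.3550, Ai'(0) ≈ -0.2588 (standard values, rounded)
print(airy(0.3550, -0.2588, 2.0))  # should land near Ai(2) ≈ 0.035
```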


Class Summary: 11/23

Dr. Wagstaff began with the announcement, as she also emailed, that if you have missed a class, you can make up your missed participation points by posting “a thoughtful, contentful comment that shows you’ve read/understood/digested the material we covered that day.”
She also reminded us that presentations are next week, and that they should be kept to 4.5 minutes so we have time for questions and discussion. Speakers will be cut off at 5 minutes, so practice with a timer to get down to 4.5.

Identity and Privacy were the topics for today. We passed around slips of paper with quotes about identity and privacy from the reading, read the quotes, and discussed them. We talked about MUDs (Multi-User Dungeons), MUSHes (Multi-User Shared Hallucinations), and MMORPGs. A MUD is a text-based virtual fantasy game. Similarly, a MUSH is a text-based virtual domain, but not necessarily a game. We talked about how these games allow people to change themselves and be whatever they want. One aspect of these cyber worlds that we discussed was gender changing. We discussed the motivations behind pretending to be the opposite gender: curiosity, experimentation, challenge, TinySex (cyber sex). In these online worlds, you can pretend to be anything and meet interesting people, overlooking the fact that they are probably lying right back to you. Dr. Wagstaff recommended The Guild, an online TV show written by Felicia Day, which satirizes these communities.
We next talked about privacy. There are some scary concerns when it comes to internet privacy. One example is that researchers could predict with 78% accuracy whether a male user is gay by analyzing his Twitter account. Even social security numbers can be found by mining social network profiles. Another concern was that iPhones log all of your locations on-device. There is a cool app called iPhone Tracker, which shows you where you’ve been and where you spend a lot of time. But since all the data is stored on-board, rather than on a secure server, it could be bad if your phone were stolen.

We had to cut the class early so we could fill out class evaluations.

Happy Thanksgiving everyone!

Assignment #5

My five mind habits:

  1. Communication: Everything is instant.  I assume if I haven’t gotten a response back within a certain amount of time (depending on the method of communication) that I will not receive a message back at all.  It’s not that replies never come after these intervals; I am just surprised when they do.  For a text message this is about fifteen minutes, for a missed phone call about an hour, and for an email one day.  In the case of a delayed reply, I will obsessively check my inbox/phone for whether I have received a reply, as if my mind cannot move on until I do.
  2. Planning: Everything is last minute.  If I want to see someone that day, I text or call and the plans are made.  The more casual the interaction, the less time beforehand the plans are made, and always electronically.  The maximum time plans are made beforehand is about two weeks, and this is for a large party (for which invites are sent via Facebook).  However, these electronic invitations have created an interesting mind-habit: doubt about the real number of people attending.  This is more and more true for larger and larger groups.  This is caused by the anonymity of the mass text and Facebook invites, where you can respond with “maybe” or not at all without feeling rude.
  3. Travel: Everything takes forever.  Even with the use of cars and planes, the speed of long-distance travel has not changed in decades, whereas the speed of communication has rocketed forward in recent years.  In the days of endless airport security lines, I have often felt myself yearning for instantaneous travel that is as quick as email.  Now, this may be simply natural human impatience, but I find that these yearnings are tied to technology in my mind.  With so much being instant in our world, long-distance seems impossibly slow.
  4. Meeting new people: the Facebook stalk.  The second you meet a person of interest, you Google them.  In middle school, we used to check last year’s yearbook for interesting tidbits, which is far less revealing than a Facebook page.  You can learn whether or not a person is single, how old they are, where they work, where they go to/went to school, and if they have any damning quirks, all before having a real human interaction with them.  A recent episode of How I Met Your Mother called “Mystery vs. History” outlined this strange new conundrum in terms of the dating world, where all the mystery can be Googled away once you know the person’s full name.  Google and Facebook have caused an expectation of instant and full knowledge of a person.  I know that when I was first getting to know my boyfriend before we were dating, his lack of a Facebook page was very infuriating.  I ended up Googling his name just to satisfy my appetite for insider facts.  This is the new mind habit—the expectation of little privacy.
  5. Studying: the Wikipedia/Sparknotes effect.  The first thing I do when studying is look up the topic on the internet.  Instead of poring over textbooks and assigned readings again before a test or big assignment, I look it up.  The best way I’ve found to frame the way I study is to find the topics that other people think are important and focus on those.  This is not a perfect system, however, and has backfired for me on a couple exams.  However, it is always effective at one thing: settling the nerves.  The mind-habit at play here is again the quest for instantaneous information.  I want to know the key topics so I can do well, but I do not want to spend hours re-reading to get to that point.  In a way, I think studying has suffered the most in the new world of technology, as it has instilled an impatience detrimental to the effectiveness of test preparation.  Whereas research has become faster and more expansive (in good and bad ways—but if you know the type of source to look for, the internet is still faster than a library), good study habits have, perhaps, begun to erode.

The future:

In writing this, there is one very clear answer for the area I want technology to leap forward in: travel.  After flying twelve hours to get from Portland to Beijing last year, airplane travel has truly lost its appeal for me.  Airports are crowded, there is a strong possibility that security will grope you, you have to get to the airport at least an hour early, the plane air is always dirty, and the plane seats are impossibly small.  In the end, ever since 9/11, airplane travel has simply lost its charm. Although airplane travel for the masses was truly an amazing innovation, it’s time to again leap forward.  The Star Trek transporter is, of course, the ideal.  Just the thought of instantaneous, safe travel to anywhere around the world is so exciting.  But how would the arrival of science fiction transporters affect the world?  Once widespread, it would mean the elimination of automobiles and airplanes, at least for civilians.  The military outcomes would be vast and, in a way, unknowable.  Would warheads be beamed to other countries instead of being flown and dropped?  Would you need a password, or several passwords, to beam to certain locations?  Would hacking/cracking these transports be the new computer hack?  No matter what, the introduction of these transporters would completely change the way we see the world, and the way we travel it.


Class Summary: 11/21/11

Class today started with a few administrative reminders, noting that Assignment 5 is due on Wednesday and final presentations will begin next week, with the schedule of speakers posted on the website.

The subject for today’s class was exploring how computers have affected the way we communicate with one another. To start, five scenarios were proposed, and the class had to indicate who they would share that information with, and via what method. The scenarios, and the general responses, were:

  • Car breaks down: the consensus was to call someone with a cell who could come and help, or in the event of a lack of reception to manually flag down a passing motorist (computer free, presumably).
  • You get a new job: this would be worthy of a Facebook status, and perhaps a call or email to friends and/or family.
  • You broke up with your significant other: talking to people in person, generally closer friends and family; however, texting someone to initiate this interaction was also mentioned. Notably absent: updating the Facebook “relationship” status.
  • You’re having a great day: some felt this was worthy of a Facebook status, others felt that it could come up naturally in conversation.
  • Mom’s (figurative) cancer in remission: close friends and family only, in person or over the phone (privacy here was a much greater concern).

Variation occurred due to individual personalities, as well as the response sought from the contacted parties. Generally, though, the more private the information the more private the modes of communication.

From here, we transitioned into a discussion of Turkle’s paper, regarding how technology has presented us with new social problems, as illustrated by the tech conference in which no one was paying attention to the speaker, and merely playing on their laptops or smartphones. Problems noted by the class:

  • Introduction of a general lack of attention span. People were more interested in their own email or something on the internet than the speaker of the conference that they had flown somewhere to attend.
  • Email in itself has made it so we expect response times to be much more rapid (minutes as opposed to days).
  • We now spend less face-to-face time with other humans. Initially, it was funny to text or IM someone you could just physically say something to, but the irony has since worn off; however, it still remains useful in the context of discussing or sharing something on the internet, or when talking out loud would be disruptive.
  • Animals are no longer real enough. This actually ended up being a subjective problem: how real something needs to be in person seems to depend on the viewer’s interest in it (e.g. Galapagos turtles as seen by a 14-year-old vs. by an evolutionary biologist).
  • Relentless consumption vs. thinking and introspection (i.e. passive vs. active brain activity). There is some need for a balance between the two (thinking is hard), but the internet and various other devices have made it perhaps dangerously easy to get lost in a sea of RSS feeds and never surface to actually “think” about anything.
  • Technology’s effect on kids. Sub-issues mentioned were technology fostering bad habits, perhaps actually altering mental development patterns (e.g. lack of attention span), and setting different boundaries and/or losing some independence (e.g. “growing up tethered”; if you have relied on a cell phone your entire life, what do you do when it dies/breaks/etc?).

This brought us to kids and PowerPoint, and how both children and especially middle-school teachers have abused it to the point of losing the efficacy it was initially designed to provide. Generally, a good rulebook can be found here.

Our last topic was the “mind habits” computers seem to have imposed on us. Most of the discussion centered around social networking and instant communication apparatuses. Sites like Facebook and Twitter have made it far too easy to share even the most mundane details of your daily life with everyone you have ever met, and to do so in the simplest way possible, thanks to character limits (I have little hope for the poor fellow who had to post pictures of his handwritten tweets on flickr when Twitter went down).

Another interesting topic was the internet as brain extension (another here). With so much information at our fingertips, it can be hard (a) not to be continually searching for newer and newer information, and (b) to remember the facts themselves, rather than just where to find them.

Finally, we concluded class with the topic of computers as a proxy for physical intimacy (i.e. being in contact with people when you’re actually alone), or even the concept of robot friends, including robotic pets and companions for the elderly. It seemed like no one felt the companionship was necessarily a bad thing, but people were a bit weirded out by the concept of giving affection to something that can only mimic reciprocation.

Class Summary: 11/16

We went back to the topic of virtual property, using the example article about a Chinese gamer who was sentenced to life in prison for stabbing to death another gamer who had stolen and sold his Dragon Sabre sword. The discussion revolved around a basic question: is virtual property real property, protected by law, or simply bits and data?

Without further elaboration, of course, the question seems too easy, because the answer is: it depends.

The argument went both ways with equally convincing reasons:

Intuitively, the Dragon Sabre sword was absolutely the private property of Qiu (the perpetrator), because he actually invested his time and even money to acquire the virtual weapon. However, one could also say that it’s just data and bits in the online game. As a matter of fact, when a gamer signs up for an account with a gaming company, the license agreement often grants the company complete access and rights to the data created by gamers. In an extreme case, if the company shuts down or a technical glitch causes the loss of data, gamers cannot sue the gaming company for monetary compensation for their virtual accumulations. In this sense, virtual property is fundamentally not the gamer’s genuine property at all.

There was also an argument that operated on a hypothesis: if Qiu were a programmer and had created the Dragon Sabre himself, he would arguably have sole ownership of it. Nevertheless, the sword would still have been created on the gaming company’s framework, so one could just as easily not regard Qiu as the property owner.

We went on to list some examples of virtual property to gain more insight into the matter: emails, cloud data, media (photos, music, videos, e-books), frequent flyer miles, stocks and investments, domain names, and so on.

An example of virtual property turning into real money is gold farming. It started with massively multiplayer online role-playing games such as Ultima Online and Lineage, where players have to perform certain tasks to accumulate in-game currency in order to upgrade or purchase in-game items. This is such a tedious job that some players “hire” other players to “farm gold” and pay them in real money. Although many gaming companies have banned exchanging in-game currency for real-world cash among players because it’s deemed cheating, the job is so lucrative that many players in developing countries, especially China, have taken it up as full-time employment. This shows that such activity is largely considered fair exchange, and thus blurs the line between real and virtual currency.

Another example is the online simulation game Second Life. Players build their own virtual world in the game by creating an avatar, dressing it up, buying clothes, and later buying real estate, building houses, landscapes, and so on. The more creative players get, the higher the demand for programmers to create requested items (e.g. simulations of existing landmarks, castles, tourist destinations, etc.). Those items are frequently bought and sold on the Second Life marketplace. There is even a virtual NASA Jet Propulsion Lab on Second Life:

JPL Explorer Island Entrance

An interesting anecdote that shows the concerns over virtual property is the e-book checkout service at public libraries. Unlimited e-book checkouts (though each loan expires) have been speculated to put the printing industry at a huge disadvantage compared to e-publishing. The reasons are simple: libraries don’t have to worry about their e-books being worn out, or about stocking extra copies of a popular physical title to meet high demand. Unlimited checkout means patrons can renew an e-book over and over, forever, which is a tremendous reduction in libraries’ costs. So recently, the publisher HarperCollins announced a 26-checkout limit on e-book loans, meaning that after an e-book has been checked out 26 times, the library has to renew the license for that e-book. This move is apparently intended to bring the cost of loaning e-books in line with that of physical books, to counter an advantage of e-publishing that could imperil traditional publishing.

We also pondered a moral question: is it OK to pirate something that you have already paid for in a different form? For example, if you already own a library of paper books bought from Amazon, and you’ve just purchased a Kindle, why can’t Amazon just send you electronic copies of all the books you’ve purchased? Or if you bought a music CD, is it legitimate to download or copy the MP3s from a friend, since you have paid for all the songs anyway? Obviously, the current state of copyright regulation does not allow this, so it leaves media consumers perpetually frustrated.

Relevant news: Congress has recently introduced the PROTECT IP Act, also known as United States Senate Bill S.968, focused on combating websites that facilitate copyright infringement, including websites and web servers registered overseas. If reported, even without being brought to court, a website can be blocked, cut off from its revenue streams, and, roughly speaking, given an Internet death penalty. “The bill is supported by copyright and trademark owners in business, industry and labor groups, spanning all sectors of the economy. It is opposed by numerous businesses and individuals, pro bono, civil, human rights and consumer rights groups, and education and library institutions.” (Wikipedia)

Finally, we touched on the Twitter article, focusing on the hashtag (the # sign). For those not familiar with Twitter, a hashtag is a kind of keyword or category tag embedded in a tweet so that other people can search for it or use it to find related tweets, e.g. #cs407. But from the article, we could see that many people use hashtags for sarcasm – in other words, they tag something with the complete opposite of what the tweet actually says.

Reference:
Protect IP Act. (n.d.). In Wikipedia. Retrieved November 16, 2011, from http://en.wikipedia.org/wiki/Protect_IP_Act