The Evolution of Computing and its Impact on History

Category Archives: Class Summary

Class Summary: 11/30

04 Sunday Dec 2011

Posted by Andrew Atkinson in Class Summary

We had quite a busy day today, jamming 9 presentations into the 50-minute class.

First, Nathan gave a presentation on fonts, starting with a history of printing. This began with the printing press in 1440, when documents were duplicated by creating molds of each page. Luckily we’ve come a long way from this difficult process. Nathan contrasted two kinds of fonts: bitmap fonts and outline fonts. Bitmap fonts are just a matrix of points that makes up the character.

The problem with bitmap fonts is that they are not scalable, so a separate version must be made for each size. Outline fonts define vectors and drawing instructions, so they are scalable. Font technology has been very important in broadening our printed communication abilities.
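To see why scaling a bitmap font fails, here is a toy Python sketch (the glyph and function names are made up for illustration): nearest-neighbor scaling just turns each pixel into a bigger block, so the enlarged character is exactly as blocky as the original.

```python
# A toy 4x4 bitmap glyph for the letter "L": 1 = ink, 0 = blank.
GLYPH_L = [
    [1, 0, 0, 0],
    [1, 0, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
]

def scale_bitmap(glyph, factor):
    """Nearest-neighbor upscaling: every pixel becomes a factor-by-factor block.

    The result is bigger but just as blocky, which is why bitmap fonts need a
    separate hand-made matrix for every point size.
    """
    return [[pixel for pixel in row for _ in range(factor)]
            for row in glyph for _ in range(factor)]

def show(glyph):
    for row in glyph:
        print("".join("#" if p else "." for p in row))

show(GLYPH_L)
print()
show(scale_bitmap(GLYPH_L, 3))  # three times larger, and three times blockier
```

An outline font, by contrast, re-rasterizes the character’s curves at each new size, which is why it stays smooth.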

Next, Mai talked about HCI (Human-Computer Interaction) and GUIs (Graphical User Interfaces). GUIs were a huge advance in HCI over the previous text-based command interfaces. Douglas Engelbart invented the mouse and created the first GUI. Engelbart’s work led to Xerox creating the first GUI-based computer. Other interfaces that are emerging include touchscreen, gesture, 3D, and tactile interfaces.

Nick next talked about Evidence-Based Medicine (EBM), which uses statistical analysis of medical data across numerous parameters to improve patient care. This field arose from the same problem that Hollerith faced: too much data to analyze. Two pioneers of the field, Dr. Robert Ledley and Dr. Lee Lusted, headed an NIH initiative to integrate computers into hospitals. Ledley’s “metal brain for diagnosis” was a primitive diagnosis program, where one could push buttons for exhibited symptoms. EBM has struggled because of the divide between doctors and computer people, but it can be incredibly effective: one example was a massive increase in survival rates for Acute Respiratory Distress Syndrome.

Then I (Andrew) gave a presentation on Arthur Lee Samuel and the field of machine learning. I talked about Samuel’s checkers program, the first ever machine learning program. Then I talked about other games that computers are trying to master, one of the most challenging of which is Go. Finally, I attempted to give a quick introduction to the field of machine learning and the difference between supervised and unsupervised learning, with a few scenarios where these methods could be applied and some real-life applications like Google News and Facebook; a small sketch follows below.
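Since the talk only had time for a quick sketch of supervised vs. unsupervised learning, here is a minimal Python illustration with made-up one-dimensional data: the supervised half learns from labels we provide, while the unsupervised half (a tiny two-cluster k-means) has to discover the groups on its own.

```python
# Toy 1-D data: daily message counts from two (unknown) groups of users.
data = [1.0, 1.5, 2.0, 9.0, 9.5, 10.0]

# Supervised learning: we are GIVEN labels and learn a rule from them.
labels = ["light", "light", "light", "heavy", "heavy", "heavy"]

def classify(x, data, labels):
    """Predict the label of the nearest labeled training point."""
    nearest = min(range(len(data)), key=lambda i: abs(data[i] - x))
    return labels[nearest]

print(classify(8.0, data, labels))  # -> "heavy"

# Unsupervised learning: no labels; discover the structure ourselves.
def kmeans_1d(points, c1, c2, steps=10):
    """Two-cluster k-means on a 1-D list, starting from guesses c1 and c2."""
    for _ in range(steps):
        a = [p for p in points if abs(p - c1) <= abs(p - c2)]
        b = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1 = sum(a) / len(a)  # move each center to the mean of its cluster
        c2 = sum(b) / len(b)
    return c1, c2

print(kmeans_1d(data, 0.0, 5.0))  # -> centers near 1.5 and 9.5
```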

Kevin then talked about the origin of video games, starting in 1958 with “Tennis for Two” by William Higinbotham, a game displayed on an oscilloscope. It was created to help draw public interest to the laboratory where Higinbotham worked, because the staff worried that the technical exhibits wouldn’t generate enough interest. It was a huge hit at the exhibition, with hundreds of people lining up to play. But no one expected interest in computer games to continue…

Then John talked about the important impact of computers on the financial market. In the 1950s, if you wanted to know the value of a stock, you had to call your broker, who had to look through paper ticker strips or, if the quote couldn’t be found, send someone out onto the floor to find it. A huge advance, the Quotron I, fed the ticker in and wrote it onto magnetic tape. Next, the Quotron II had a screen that could show important figures, including yearly highs and lows. Now, stock information is accessible with a single mouse click on sites like Yahoo Finance. With these advances, since the 1950s we have gone from 3.8 million to 3.5 billion trades per day.

We next learned about slot machines from Jenelle. Back in the day, these machines were known as “one-armed bandits.” Charles Fey invented these early mechanical slot machines, which were unreliable and unpopular in casinos. Nowadays, the randomization is done by a central computer, which determines when the reels stop spinning. The digitization of slot machines has turned them into one of the top attractions in casinos.

Manali next presented on the history of hearing aids. Initially, hearing aids had to be concealed under clothing or fans or elsewhere, because they were much too large. Several important advances contributed to miniaturizing hearing aids, including vacuum tubes, the micro-telephone, the printed circuit, transistors, and the integrated circuit. In fact, the first commercial application of integrated circuits was in hearing aids.

Finally, Sarah talked about photography. The history of cameras began with the camera obscura, which projected the scene onto a screen behind a pinhole. Later, bitumen plates were placed inside these pinhole cameras, which allowed the image to be stored, though it faded over time. Advancement accelerated in the 19th century, with silver chloride plates in 1839, printing from negatives in 1841, hand-held cameras in 1879, camera film in 1889, and color film in 1935; and finally, in 1991, people realized that digital cameras were here to stay.

And that’s the end of the course. We concluded by making ice cream sundaes and having a dance party on the table.

Have a good winter break everyone, it’s been fun.

Andrew

Class Summary: 11/28

29 Tuesday Nov 2011

Posted by Cody Hyman in Class Summary

Class on 11/28 started off with a 30-minute talk on the subject of space computing. The first subject was a brief overview of command sequencing and satellite simulation, presented by Cody, who had previous experience with this field as an intern at the Jet Propulsion Laboratory. Because space missions are very costly and sensitive, all command sequences sent to spacecraft are typically simulated on ground computers before being uplinked and executed on the spacecraft itself.

The conversation then moved to the effects of space radiation on computers, and how computers and software on spacecraft have to be designed to be radiation hardened. The key reason is that space contains very large numbers of charged particles, trapped in planetary magnetospheres or streaming past in the solar and cosmic wind. When these particles strike computer components, they tend to cause unexpected changes to stored data and program state, in what we learned is called a single event upset, or SEU. We also learned that continued exposure to radiation can cause permanent damage to electronics. Due to these effects, a process known as radiation hardening is important for keeping computers in space operating reliably for long periods of time. The basics of radiation hardening were covered, including the use of different materials in integrated circuits, less susceptible designs for particularly sensitive components, hardware redundancy and error checking, and careful software design.
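One of the simplest of those hardening ideas to illustrate is hardware redundancy with error checking. Here is a minimal Python sketch of triple modular redundancy (an illustration only, not any actual flight design): the same word is kept in three copies, and a bitwise majority vote silently outvotes a single upset copy.

```python
def majority_vote(a, b, c):
    """Bitwise 2-of-3 majority vote, the core of triple modular redundancy.

    Each output bit is 1 iff at least two of the three copies have a 1 there,
    so a single-event upset in any one copy is masked.
    """
    return (a & b) | (a & c) | (b & c)

value = 0b10110010           # the same word stored in three redundant registers
upset = value ^ (1 << 5)     # one copy takes a radiation hit in bit 5

assert majority_vote(value, value, upset) == value  # the flip is voted away
```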

Dr. Wagstaff spoke about a project she had worked on at JPL called the Binary Instrument Toolkit for Fault Localized Injection of Probabilistic SEUs, or BITFLIPS for short. This project is a set of programs for testing how software responds to spurious radiation effects, simulating the space radiation environment in which bits in memory may be flipped unexpectedly. Despite testing measures like these, unexpected problems are usually still encountered during actual space missions.
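BITFLIPS itself instruments real programs under test; purely to illustrate the underlying idea, here is a toy Python sketch (all names hypothetical) that injects a random single-bit flip into a stored value and shows a simple parity check detecting it.

```python
import random

def inject_seu(word, width=16, rng=random):
    """Flip one uniformly chosen bit of `word`, simulating a single-event upset."""
    return word ^ (1 << rng.randrange(width))

# A one-bit parity checksum lets software detect (though not correct) the upset.
reading = 0x3A7F
parity = bin(reading).count("1") % 2

corrupted = inject_seu(reading)
assert bin(corrupted).count("1") % 2 != parity  # any single flip changes parity
```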

On the topic of software, debugging problems on distant spacecraft also came up, with the Mars rover Spirit as an example. Dr. Wagstaff told the story of Spirit’s flash anomaly, which occurred after it landed on Mars. Communication was lost with the rover, but ground stations still picked up occasional signals from Spirit. Through debugging on the ground, it was found that the file system on the MERs routinely had indexing overflows that caused unexpected system restarts. After finding this, the MER team developed a workaround but could not fix the fundamental problem.

Another topic discussed in relation to radiation hardening is how it tends to lag behind current computer technology. An example of this is the main on-board computer of the recently launched Mars Science Laboratory, a.k.a. Curiosity. Its $200,000 RAD750 computer sports a radiation-hardened version of IBM’s PowerPC 750 core, clocked at 200 MHz. Although it is the year 2011, this hardware is similar to that of a very dated first-generation Apple Macintosh G3. Older missions, like the Mars Exploration Rovers Spirit and Opportunity, have even more limited 20 MHz RAD6000 computers that might be on par with a fast calculator. Despite these constraints, even the MERs were capable of basic autonomy feats such as image-based obstacle avoidance.

Curiosity, running a limited 200 MHz RAD750 processor

Aside from the harsh radiation environment and limited computer hardware, the communications side of space computing was also discussed in class. Unlike terrestrial networks, where data can be transferred across the globe fast enough to be mostly unnoticeable, communicating with spacecraft outside of Earth orbit involves long delays due to the finite speed of light. For example, when Mars is at its furthest point from Earth, round-trip communication with Mars probes can take upwards of 40 minutes. In addition to the long delays, communicating with deep space requires large and complex radio equipment, such as the very large dish antennas of the Deep Space Network that we discussed in class. On top of all this, the data rates between Earth and places in deep space are commonly low, restricting the amount of data that can be sent to and from spacecraft far from Earth.

70m Antenna - Deep Space Network, Madrid Station

After our discussion of space computing, the class transitioned to student presentations on topics students have been researching this term. Austin Sharp made the first presentation, on early digital computers in the USSR, including the Strela and the BESM, built during the mid-1950s for artillery and nuclear weapons calculations respectively. Although the Soviets were catching up with the U.S. in many other fields at the time, these attempts at digital computers ultimately failed to meet their goals. One reason given was the high level of competition, rather than cooperation, between the Strela and BESM teams. Austin noted that cooperation among von Neumann, Goldstine, Eckert, and Mauchly in the U.S. ultimately resulted in ENIAC and the start of many successful computer projects that the USSR could not rival at the time.

Cody Hyman made the second presentation, on general-purpose analog electronic computing following World War II. This talk covered the importance and common use of electronic analog computers after WWII. Analog electronic computers are devices that use analog circuits to model other systems, and they could typically solve certain classes of problems faster than the digital computers of the day. Some of the first analog electronic computers were designed specifically for simulating guided missiles, but they quickly became more generalized and went into mass production. While almost entirely extinct today, analog computers were presented as an important and widely used tool in science and engineering between 1950 and 1970, with applications ranging from the flight computers on the Apollo lunar landers to ICBMs, cooling simulators for nuclear reactors, and airplane design.

Austin Valeske made the third and final presentation of the day, on the Airy tape, one of the first noted instances of debugging. This now-familiar practice came about when Maurice Wilkes, the creator of EDSAC, found that a program to evaluate the Airy integral (the solution of the differential equation y''(x) = x y(x)) contained 20 errors in its 126 lines. This led to the investigation of techniques including “peeping,” where one looks into memory after each instruction; post-mortem debugging, where memory is saved after the program terminates; and using interpreters to step through the program.
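To get a feel for what that 126-line program had to do, here is a minimal numerical sketch of the same equation (my illustration, not Wilkes’ EDSAC code; the initial conditions and step size are arbitrary choices):

```python
def airy_curve(y=1.0, dy=0.0, x=-10.0, h=0.001, x_end=2.0):
    """March y''(x) = x * y(x) forward with simple Euler steps.

    Real work would use a higher-order integrator, but the structure --
    updating y and y' a small step at a time -- is the same.
    """
    points = []
    while x < x_end:
        y, dy = y + h * dy, dy + h * (x * y)
        x += h
        points.append((x, y))
    return points

# The solution oscillates while x < 0 and grows rapidly once x > 0,
# which is just what the Airy equation predicts.
for x, y in airy_curve()[::3000]:
    print(f"x = {x:6.2f}   y = {y:10.4f}")
```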

Class Summary: 11/23

24 Thursday Nov 2011

Posted by Andrew Atkinson in Class Summary

Dr. Wagstaff began with the announcement, also sent by email, that if you have missed a class, you can make up your missed participation points by posting “a thoughtful, contentful comment that shows you’ve read/understood/digested the material we covered that day.”
She also reminded us that presentations are next week, and to keep them to 4.5 minutes so we have time for questions and discussion. Speakers will be cut off at 5 minutes, so practice with a timer to get down to 4.5 minutes.

Identity and privacy were the topics for today. We passed around slips of paper with quotes about identity and privacy from the reading, then read and discussed them. We talked about MUDs (Multi-User Dungeons), MUSHes (Multi-User Shared Hallucinations), and MMORPGs. A MUD is a text-based virtual fantasy game. Similarly, a MUSH is a text-based virtual domain, but not necessarily a game. We talked about how these games allow people to change themselves and be whatever they want. One aspect of these cyber worlds that we discussed was gender changing, and the motivations behind pretending to be the opposite gender: curiosity, experimentation, challenge, TinySex (cybersex). In these online worlds, you can pretend to be anything and meet interesting people, overlooking the fact that they are probably lying right back to you. Dr. Wagstaff recommended The Guild, a web series written by Felicia Day, which satirizes these communities.

We next talked about privacy. There are some scary concerns when it comes to internet privacy. One example is that researchers could predict with 78% accuracy whether a male is gay by analyzing his Twitter account. Even social security numbers can be found by mining social network profiles. Another concern was that iPhones log all of your locations on the device. There is a cool app called iPhone Tracker, which shows you where you’ve been and where you spend a lot of time. But since all the data is stored on the phone rather than on a secure server, it could be bad if your phone were stolen.

We had to cut the class early so we could fill out class evaluations.

Happy Thanksgiving everyone!

Class Summary: 11/21/11

21 Monday Nov 2011

Posted by Nick Lowery in Class Summary

Class today started with a few administrative reminders: Assignment 5 is due on Wednesday, and final presentations will begin next week, with the schedule of speakers posted on the website.

The subject for today’s class was exploring how computers have affected the way we communicate with one another. To start, five scenarios were proposed, and the class had to indicate who they would share that information with, and via what method. The scenarios, and the general responses, were:

  • Car breaks down: the consensus was to call someone with a cell phone who could come and help, or, in the event of no reception, to manually flag down a passing motorist (computer-free, presumably).
  • You get a new job: this would be worthy of a Facebook status, and perhaps a call or email to friends and/or family.
  • You broke up with your significant other: talking to people in person, generally closer friends and family; however, texting someone to initiate this interaction was also mentioned. Notably absent: updating the Facebook “relationship” status.
  • You’re having a great day: some felt this was worthy of a Facebook status, others felt that it could come up naturally in conversation.
  • Mom’s (figurative) cancer in remission: close friends and family only, in person or over the phone (privacy here was a much greater concern).

Variation occurred due to individual personalities, as well as the response sought from the contacted parties. Generally, though, the more private the information the more private the modes of communication.

From here, we transitioned into a discussion of Turkle’s paper, regarding how technology has presented us with new social problems, as illustrated by the tech conference at which no one was paying attention to the speaker, everyone merely playing on their laptops or smartphones. Problems noted by the class:

  • Introduction of a general lack of attention span. People were more interested in their own email or something on the internet than in the speaker at the conference they had flown somewhere to attend.
  • Email in itself has made it so we expect response times to be much more rapid (minutes as opposed to days).
  • We now spend less face-to-face time with other humans. Initially, it was funny to text or IM someone you could just physically say something to, but the irony has since worn off; however, it still remains useful in the context of discussing or sharing something on the internet, or when talking out loud would be disruptive.
  • Animals are no longer real enough. This actually ended up being a subjective problem: how much seeing something in person matters seems to depend on the viewer’s interest in that thing (e.g. Galapagos turtles to a 14-year-old vs. to an evolutionary biologist).
  • Relentless consumption vs. thinking and introspection (i.e. passive vs. active brain activity). There is some need for a balance between the two (thinking is hard), but the internet and various other devices have made it perhaps dangerously easy to get lost in a sea of RSS feeds and never surface to actually “think” about anything.
  • Technology’s effect on kids. Sub-issues mentioned were technology fostering bad habits, perhaps actually altering mental development patterns (e.g. lack of attention span), and setting different boundaries and/or losing some independence (e.g. “growing up tethered”; if you have relied on a cell phone your entire life, what do you do when it dies/breaks/etc?).

This brought us to kids and PowerPoint, and how both children and especially middle-school teachers have abused it to the point of losing the efficacy it was initially designed to provide. Generally, a good rulebook can be found here.

Our last topic was the “mind habits” computers seem to have imposed on us. Most of the discussion centered on social networking and instant communication apparatuses. Sites like Facebook and Twitter have made it far too easy to share even the most mundane details of one’s daily life with everyone you have ever met, and to do so in the simplest way possible, thanks to character limits (I have little hope for the poor fellow who had to post pictures of his handwritten tweets on Flickr when Twitter went down).

Another interesting topic was the internet as a brain extension. With so much information at our fingertips, it can be hard a) not to be continually searching for newer and newer information, and b) to remember any of the facts themselves, rather than only where to find them.

Finally, we concluded class with the topic of computers as a proxy for physical intimacy (i.e. being in contact with people when you’re actually alone), or even the concept of robot friends, including robotic pets and companions for the elderly. It seemed like no one felt the companionship was necessarily a bad thing, but people were a bit weirded out by the concept of giving affection to something that can only mimic reciprocation.

Class summary: 11/16

16 Wednesday Nov 2011

Posted by Mai Nguyen in Class Summary

We went back to the topic of virtual property, using the example article about a Chinese gamer who was sentenced to life in prison for stabbing to death another gamer who had stolen and sold his Dragon Sabre sword. The discussion revolved around a basic question: is virtual property real property, protected by law, or simply bits and data?

Stated that baldly, of course, the question seems uninteresting, because the answer is: it depends.

The argument went both ways with equally convincing reasons:

Intuitively, the Dragon Sabre sword was absolutely the private property of Qiu (the perpetrator), because he actually invested his time and even money to acquire the virtual weapon. However, one could also say that it’s just data and bits in an online game. As a matter of fact, when a gamer signs up for an account, the license agreement often grants the gaming company complete access and rights to the data created by gamers. In an extreme case, if the company shuts down or a technical glitch causes the loss of data, gamers cannot sue the gaming company for any monetary compensation for their virtual accumulations. This means the virtual properties are fundamentally not the gamers’ genuine property at all.

There was also a hypothetical argument that if Qiu were a programmer and had created the Dragon Sabre himself, he would have sole ownership of the property. Nevertheless, the sword was created on the gaming company’s framework, so one could easily disregard Qiu as the property owner.

We went on to list some virtual properties to gain more insight into the matter. Examples include: emails, cloud data, media (photos, music, videos, e-books), frequent flyer miles, stocks and investments, domain names, and so on.

An example of virtual property turning into real money is gold farming. It started with massively multiplayer online role-playing games such as Ultima Online and Lineage, where players have to perform certain tasks to accumulate in-game currency in order to upgrade or purchase in-game items. This is such a tedious job that some players “hire” other players to “farm gold” and pay them in real money. Although many gaming companies have banned exchanging in-game currency for real-world cash among players because it’s deemed cheating, the job is indeed so lucrative that many players in developing countries, especially China, have taken it up as full-time employment. This shows that such activity is largely considered fair exchange, and thus blurs the line between real and virtual currency.

Another example is the online simulation game Second Life. Players build their own virtual world by creating an avatar, dressing it up, buying clothes, and later buying real estate, building houses, landscapes, and so on. The more creative players get, the higher the demand for programmers to create requested items (e.g. simulations of existing landmarks, castles, tourist destinations, etc.). Those items are frequently bought and sold on the Second Life marketplace. There is even a virtual NASA Jet Propulsion Lab on Second Life:

JPL Explorer Island Entrance

An interesting anecdote that shows the concerns over virtual property is the e-book checkout service at public libraries. Unlimited e-book checkouts (though each with an expiry) have been speculated to put the printing industry at a huge disadvantage compared to e-publishing. The reasons are simple: libraries don’t have to worry about e-books wearing out, or about stocking extra copies of a popular physical title, and patrons can renew an e-book over and over, forever. This is a tremendous reduction in libraries’ costs. So recently, the publisher HarperCollins announced a 26-checkout limit on e-book loans, meaning that after an e-book has been checked out 26 times, the library has to renew its license. This move is apparently intended to bring the cost of loaning e-books in line with that of physical books, offsetting the advantage of e-publishing that could imperil traditional publishing.

We also pondered a moral question: is it OK to pirate something you already paid for in a different form? For example, if you already own a library of paper books from Amazon, and you’ve just purchased a Kindle, why can’t Amazon just send you electronic copies of all the books you’ve purchased? Or if you bought a music CD, is it legitimate to download or copy the MP3s from a friend, since you have paid for all the songs anyway? Obviously, the current state of copyright regulation does not allow that, which leaves media consumers perpetually frustrated.

Relevant news: Congress has recently introduced the PROTECT IP Act, also known as United States Senate Bill S.968, aimed at seriously combating websites that facilitate copyright infringement, including websites and web servers registered overseas. If reported, even without being brought to court, a website can be blocked, cut off from its revenue, and, roughly speaking, sentenced to an Internet death penalty. “The bill is supported by copyright and trademark owners in business, industry and labor groups, spanning all sectors of the economy. It is opposed by numerous businesses and individuals, pro bono, civil, human rights and consumer rights groups, and education and library institutions.” (Wikipedia)

Finally, we touched on the Twitter article, focusing on the hashtag (the # sign). For those not familiar with Twitter, a hashtag is a kind of keyword or category tag embedded in a tweet so that other people can search for it or use it to find related tweets, e.g. #cs407. But from the article, we could see that many people use hashtags for sarcasm, i.e. tagging something with the complete opposite of what the hashtag says.

Reference:
Protect IP Act. (n.d.). In Wikipedia. Retrieved November 16, 2011, from http://en.wikipedia.org/wiki/Protect_IP_Act

Class Summary 11-14-11

14 Monday Nov 2011

Posted by Sarah Fine in Class Summary

Today we spoke about software and property, picking up where we left off last week.  We were shown a 1985 ad for the HP-85 personal computer, because Jon could not get full power to the machine he brought last week.  The sewing-machine-sized box was advertised as portable, friendly, expandable, and capable of “full-screen editing.”

We then discussed the Time magazine article assigned as the reading for today’s class.  Interesting points on the “Machine of the Year” were: the back-up power provided by a hand crank, the fear that computers would completely replace human jobs, the very un-PC jab at the Japanese out of fear of their success in the computing field, and the failure to predict the huge fields of software and tech support.  A notable point was the very low 1980s estimate for the maximum number of personal computers in the 2000s: 80 million.  Now, ~300 million computers are sold each year worldwide.  Although this includes the frequent replacement of personal computers, the modern availability of the computer is far beyond what was predicted.

Next, we discussed “A Brief History of Hackerdom” by Eric Raymond.  He makes an interesting distinction between hacking and cracking, a distinction not made by the media.  Where cracking is breaking into a system with malicious intent (the definition the mainstream media uses for hacking), hacking is entering a system without permission but without malicious intent, perhaps to understand and explore a system or to expose fatal security flaws.  Raymond’s point was that early hackers created the first internet culture by using the ARPANET to communicate about the innovations they were making and/or discovering.  Although the ARPANET did not connect all computers like the modern internet, by logging into shared servers a user could join discussion boards or download games.  A fun thing to come out of this early internet culture was Blinkenlights.

Next, we read the following quote by Donald Knuth (1974):

“Computer programming is an art, because it applies accumulated knowledge to the world, because it requires skill and ingenuity, and especially because it produces objects of beauty.”

This is not a common view of computer programming, as it is seen as more of a math skill than a creative one.  However, a program that works efficiently requires a measure of elegance and ingenuity beyond simply solving a problem.  As a class, we decided that Knuth’s view of computer programming is the ideal, as the real world applies constraints like limited time and funds.  Although not all computer programming is artistic, all computer programming could be.

We then defined some terms:

—Open source: the code is available to be viewed, with or without a monetary fee

—Free software: two meanings, free as in beer, or free as in speech.  Free as in beer means it costs no money, while free as in speech means you are free to use, modify, and share it without restriction, with or without a monetary fee

Software can also be “free” or “open” when it is developed in the open, with any hacker allowed to edit and/or add code to the software.  This is very similar to the way Wikipedia is run.  In this way, saying software is “open” or “free” is a complex statement, with several possible meanings.

11/9 Class Summary: Guest Lecture by Jon Brewster

09 Wednesday Nov 2011

Posted by Austin Sharp in Class Summary

Today, Jon Brewster from Hewlett-Packard gave a guest lecture, entitled “Will Compile For Food (life in corporate America)”. He has worked at HP since 1977, and graduated from OSU in 1980.

When he was at OSU in about 1976, there was a large analog computer or two in Dearborn Hall. These computers actually added and divided voltages, rather than using voltages to represent bits. This made them very fast (compared to contemporary digital computers) for physical simulations, such as a flight simulator.

Jon explained which projects HP’s Corvallis location worked on. Originally they mainly built calculators. These were very useful in their day: programmable, modular (memory modules could be added), and based on reverse Polish notation (which looks like 4 3 + instead of 4 + 3). He even wrote a universal Turing machine on one of these.
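The appeal of reverse Polish notation is that a simple stack evaluates it with no parentheses or precedence rules, which suited calculator hardware well. A minimal Python sketch of the idea (mine, not HP’s firmware):

```python
def eval_rpn(expression):
    """Evaluate a reverse Polish expression such as "4 3 +" with a stack."""
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    stack = []
    for token in expression.split():
        if token in ops:
            b = stack.pop()   # the second operand sits on top of the stack
            a = stack.pop()
            stack.append(ops[token](a, b))
        else:
            stack.append(float(token))
    return stack.pop()

print(eval_rpn("4 3 +"))           # 7.0, the example from the talk
print(eval_rpn("15 7 1 1 + - /"))  # 15 / (7 - (1 + 1)) = 3.0
```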

In 1984, HP released its first personal computer, which included many other firsts for the company: its first mouse, inkjet printer, 3.5″ disk, window system, flat panel, and Unix system. Since there were no standards, all of the drivers and the operating system were built from the ground up at HP. They had to cross-compile C code to get it working on the processor, which involved a complicated bootstrapping process of running compilers on themselves.

Between 1987 and 1993, HP led a consortium that standardized the X Window System, so that it was easier for application makers to write for any machine. This consortium beat Sun Microsystems’ very nice window system, because the standardization was helpful to developers and cost customers nothing.

Jon also dropped a crucial knowledge bomb around this point: “If you don’t answer email from your 6-year-old daughter, it’s not okay.”

In the mid-1990s, Jon went to Hawaii to work at an observatory, replacing very old equipment (a computer that used Fortran and 16-bit manual input) with a more modern Unix/C/X Window system. He has since become quite the astronomy hobbyist and operates his own automated mini-observatory in Monmouth, controlled entirely by JavaScript.

Since about 1998, HP Corvallis has focused on eServices. Jon is extremely excited about eServices, particularly about using agile development processes (in this case, Scrum) to deliver software in small increments and adjust easily to changing requirements. This is likely an ideal situation for agile; despite Jon’s disdain for waterfall development, some projects need a larger up-front perspective, even if eServices do not. eServices also have the upsides of making it easy to push updates, keep code in-house, test, and gather data from customers and users.

Finally, Jon began to talk about the cloud. He explained that the cloud doesn’t simply mean an application that stores no information locally; rather, it is a different processing and data-storage paradigm, one that distributes both processing power and data over many servers, instead of having many servers crunch numbers against one main database. This avoids the database bottleneck and makes it easier to expand capacity without overbuying, so it works well for websites like Facebook and Google. Jon called this ‘map reduction’: computing in parallel across a ‘blizzard’ of machines, and then reducing to the answer needed.
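The pattern Jon described is better known today as MapReduce. As a toy illustration (my sketch, not HP’s or Google’s infrastructure), here is a parallel word count: each worker in the “blizzard” maps over its own chunk independently, and a reduce step merges the partial counts.

```python
from collections import Counter
from functools import reduce
from multiprocessing import Pool

def map_count(chunk):
    """Map step: one worker counts the words in its own chunk of text."""
    return Counter(chunk.split())

def merge(total, partial):
    """Reduce step: fold a partial count into the running total."""
    total.update(partial)
    return total

if __name__ == "__main__":
    chunks = ["the cloud is many machines",
              "many machines crunch the data",
              "the data lives on many machines"]
    with Pool() as pool:                         # the 'blizzard' of workers
        partials = pool.map(map_count, chunks)   # map phase, in parallel
    totals = reduce(merge, partials, Counter())  # reduce phase
    print(totals.most_common(3))  # [('the', 3), ('many', 3), ('machines', 3)]
```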

Unfortunately, we did not have time to see whether Jon’s ancient HP PC worked, but his enthusiasm in relaying the developments of the last 30 years in computing was much appreciated.

Class Summary: 11/7

08 Tuesday Nov 2011

Posted by Cody Hyman in Class Summary

Continuing on from the last class, we began the morning discussing objections to and shortcomings of the Turing test, the state of artificial intelligence, and more on the proceedings of previous Loebner Prize competitions.

While the judges are still not being convinced by machines, one argument against the Turing test brought up in class is its inability to discern between intelligence and the appearance of intelligence. Searle’s Chinese Room, a thought experiment, was brought up to illustrate this point. The experiment imagines a person who knows no Chinese, placed in a room and tasked with writing responses to messages written in Chinese using a book containing the appropriate Chinese response to any possible input. If the entire process is done from the book, the person would be conversing in Chinese but would not have any understanding of what they are reading or writing. Likewise, the machines attempting to win the Loebner Prize simply respond in a programmed fashion and do not understand what they are saying. The use of the Turing test as a metric for machine intelligence was also questioned, as it is fairly subjective and, as we saw with the Loebner Prize, depends on the judge’s experience with pseudo-intelligent conversation machines.
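Taken literally, Searle’s “book” is nothing but a lookup table. A toy Python version (with a hypothetical two-entry phrase book) makes the point: the replies can be perfectly fluent while comprehension appears nowhere in the process.

```python
# Searle's "book": every input mapped to a canned reply. Whoever operates
# this table "converses" in Chinese without understanding a single symbol.
PHRASE_BOOK = {
    "你好": "你好！",              # "Hello" -> "Hello!"
    "你会说中文吗？": "会一点。",  # "Do you speak Chinese?" -> "A little."
}

def room_reply(message):
    """Look the message up in the book; no understanding is involved."""
    return PHRASE_BOOK.get(message, "请再说一遍？")  # "Please say that again?"

print(room_reply("你好"))
```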

Other AI-related topics were also discussed, relating to the history of AI. The term AI originated in 1956 with John McCarthy’s proposal for the Dartmouth conference, a summer-long research effort to study the learning capabilities of machines. McCarthy was unable to reach his lofty goals; indeed, many of them have not been attained since. We also learned about the general slump in AI research (the “AI winter”) that continued up through the 1990s, until computer hardware adequate to tackle many of the problems started to become available.

After concluding our discussion of artificial intelligence, the Loebner Prize, and the Turing test, we transitioned into talking about advancements in computer architecture leading to microcomputers.

A modern integrated circuit

We began this discussion by outlining the evolution of computers from vacuum tubes to discrete transistors, to integrated circuits, and eventually microcomputers. Integrated circuits are entire circuits created on a single piece of semiconductor through the process of photolithography. We discussed how this method of making circuits is advantageous over hand assembly, as it produces smaller, cheaper, more efficient, and less error-prone circuits, a necessity for the creation of microcomputers. Today almost every computer is constructed mostly from integrated circuits.

Along with the introduction of the integrated circuit, we discussed the advances in memory that have made the modern computer possible. Two categories of memory were analyzed in class: serially accessed storage memory and random access memory (RAM).

On the subject of RAM, we watched a short video discussing three types: magnetic core memory, static RAM (SRAM), and dynamic RAM (DRAM). Although antiquated today, core memory was predominant in the early ages of electronic computing, up through the 1970s. We learned how core memory uses wires hand-woven through magnetic rings (toroids), and electric currents, to store bits of data magnetically. Being hand assembled, these devices commonly held only up to a few kilobits. We also briefly discussed the two forms of RAM commonly seen today, SRAM and DRAM, which use transistors and capacitors, respectively, to store bits of information.

After talking about RAM, we moved on to how information has been stored on computers, and how storage methods have changed since the introduction of electronic computers. The first method to catch on after the era of punch cards was data tape, which stored data magnetically on large reels. One example was the 1.6 kilobit/inch tape shown the previous week, where one large 700-inch reel could hold only 140 kB. Another storage device was the floppy disk, which progressed from monstrous 8” 80 kB disks down to 5.25” and later 3.5” 1.44 MB disks. The class then examined the innards of a 3.5” floppy disk to see the magnetic film disk inside the cartridge where the data is actually stored.
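Those tape figures are self-consistent, which makes a nice sanity check:

```latex
\[
1.6\ \tfrac{\text{kbit}}{\text{inch}} \times 700\ \text{inches}
  = 1120\ \text{kbit}
  = \tfrac{1120}{8}\ \text{kB}
  = 140\ \text{kB}
\]
```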

Similar in function to floppy disks are hard disk drives, which were also discussed. Using multiple spinning magnetic disks and read/write heads on movable arms, hard disks can store very large amounts of data in comparison to other storage media. Two old, disassembled hard disk drives were passed around the room for a hands-on look at how the devices work (which hasn’t changed much in recent decades).

On a side topic, we also discussed the increasing use of non-volatile solid-state (semiconductor-based) storage drives in place of hard disks. These devices replace moving parts with integrated circuits, allowing faster operation; however, they have not yet matched the storage density of traditional hard disks.

At the end of the conversation, we discussed the origins of the magnetic-platter hard disk with the IBM 350 “RAMAC” disk drive. Utilizing fifty 2-foot-diameter platters spinning at 1200 RPM and a single read/write head, the IBM 350 could hold 5 MB of data. In comparison to modern disk drives this seems outlandish, but the device was a computing breakthrough when it was introduced in 1956.

Class summary: 11/2

06 Sunday Nov 2011

Posted by Mai Nguyen in Class Summary

We first started with a brainstorming session on the question: what can humans do?

A majority of the answers that came up were in the high-level intelligence category: recognize emotions, translate foreign languages, compose music, write poems, create something new, recognize contexts/patterns/3D objects, make medical decisions, rephrase, paint, etc. These were meant to distinguish computers from humans.

It was surprising that none of us mentioned physical activity, like driving. Computer drivers are supposed to be more reliable than humans, without distractions such as texting, talking on the phone, or listening to music. And yet we are still frightened by the scenario of an unmanned vehicle, so we always want a human override in case something goes wrong.

The discussion revolved around the question “Can machines think?”, which, in Turing’s time, received many knee-jerk objections and which Turing himself deemed too meaningless to deserve discussion. Instead, the question should be whether a machine can do well in a behavioral game that involves the presence of a mind or thoughts. The first such game was the Imitation Game, designed by Alan Turing. He described the game as follows:

Suppose there is a man (A), a woman (B), and an interrogator (C) who can be of either gender. The interrogator is separated from the other two. The object of the game is for the interrogator to determine which of the other two is the man and which is the woman. The question then is: what if we let a machine take the part of A in this game? Would it be able to “fake” being a man and fool the interrogator? Such questions are more precise than “Can machines think?”

It is noteworthy that Turing was aware of the major objections to the claim that machines can think, so he went on to label nine objections and give his arguments against each (though these were not discussed in class):

1. The theological objection: God has granted humans souls, and a soul is what makes us able to think. Animals and machines, regardless of having a physical body, do not have souls, so they cannot think.

2. The “heads in the sand” objection: if machines could really think, the consequences would be very frightening. Humans could lose their sense of superiority and uniqueness, as well as face the fear of being replaced or decimated by intelligent machines. Such predictions have been negatively depicted in science fiction movies like “I, Robot”, “Terminator”, and “Eagle Eye”.

3. The mathematical objection: computers cannot answer all mathematical questions based solely on logic.

4. The consciousness objection: the absence of emotions and feelings suggests that computers cannot have anything equivalent to the human brain.

5. The disability objection: a list of things that computers cannot do, such as be friendly, be kind, tell right from wrong, have a sense of humor, fall in love, etc.

6. Lady Lovelace’s objection: machines can only get as smart as we tell them to be, and can only do what we program them to do, based on Ada Lovelace’s description of the Analytical Engine.

7. The continuity-of-the-nervous-system objection: human brains are not digital; they have continuous nervous responses, whereas computers operate on a discrete, on-or-off basis. The objection claims that without continuous response, machines cannot have intelligence.

8. The informality-of-behavior objection: machines operate on fixed sets of rules, while there is no strict rule for what a human ought to do in every possible set of circumstances. It follows, the objection goes, that humans are definitely not machines.

9. The extrasensory perception argument: Turing was somehow quite convinced by claims of human telepathy, so he set up the conditions of the game such that mind-reading would be impossible for interrogators. The objection was that humans could use telepathy to figure out whether other participants were humans or machines; Turing argued that machines could be telepathic as well.

We then had a mini debate over the prospects of artificial intelligence. The biggest obstacle now for AI is how to make machines remember and learn from experience. Some hilarious examples were shown in the two following videos:

Video: AI vs. AI: Two chatbots talking to each other

Video: Two Bots Talking: Fake Kirk and A.L.I.C.E.

Class Summary: 10/31

01 Tuesday Nov 2011

Posted by Kevin Hess in Class Summary

Class began with a discussion of some of the factual inaccuracies found in Jacquard’s Web. Although the book is very readable, some technical accuracy was sacrificed for the sake of its narrative. One example of a factual inaccuracy in the book is its suggestion that the ENIAC was programmed using punched cards, when in reality the machine was programmed using patch cables.

Discussion turned to the reading “The Past and Future History of the Internet,” by Barry M. Leiner et al., and to the early formation of the internet. One of the earliest forms of the internet began with DARPA (Defense Advanced Research Projects Agency) and the creation of ARPANET (Advanced Research Projects Agency Network). The first successful communication over ARPANET was sent on October 29, 1969, between UCLA and the Stanford Research Institute.

Log showing first communication over ARPANET

The question of who, exactly, invented the internet has an ambiguous answer. Because the creation of the internet was such a collaborative, community effort, the best answer is probably that not one single person was solely responsible.

Of particular note was the use of packet switching, rather than circuit switching, in ARPANET. In a circuit-switched network, a direct, physical connection has to be made between the two communicating parties; to make different connections, the actual infrastructure of the network has to be changed (for example, telephone operators switching cable connections). In a packet-switched network, on the other hand, lines are shared (multiplexing) and traffic is managed by routers. The physical infrastructure doesn’t have to be changed to accommodate different connections, and ideal routes through the network can be determined dynamically. This kind of network is what makes the internet as we know it possible.
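As a toy illustration of the packet idea (my sketch, not ARPANET’s actual protocols): the sender numbers each packet, the shared network is free to deliver them in any order, and the receiver reassembles the message from the sequence numbers.

```python
import random

def packetize(message, size=8):
    """Split a message into (sequence_number, payload) packets."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def network_deliver(packets):
    """A shared, multiplexed network may deliver packets in any order."""
    shuffled = packets[:]
    random.shuffle(shuffled)
    return shuffled

def reassemble(packets):
    """The receiver restores the original order using the sequence numbers."""
    return "".join(payload for _, payload in sorted(packets))

message = "packets share lines; circuits reserve them"
assert reassemble(network_deliver(packetize(message))) == message
```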

The next topic of discussion was competition between humans and computers, and, more specifically, the supercomputers Deep Blue and Watson. Deep Blue was a chess-playing computer created by IBM for the sole purpose of playing chess. Deep Blue calculated its moves using brute-force analysis, meaning that millions of possible moves were considered every turn to find the most advantageous one. This processing-heavy analysis was possible because of Deep Blue’s advanced processing capabilities and specialized hardware. At its time, Deep Blue was among the biggest and most powerful supercomputers in the world; it could evaluate around 200 million positions per second. Renowned chess champion Garry Kasparov was defeated by Deep Blue in 1997.

Video: Deep Blue beat G. Kasparov in 1997
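Chess is far too large to search to the end, which is why Deep Blue needed specialized hardware and evaluation heuristics, but the underlying brute-force game-tree idea fits in a few lines. A minimal sketch, using the toy game of Nim in place of chess:

```python
def best_move(stones):
    """Exhaustive game-tree search for a toy game of Nim.

    Players alternate taking 1-3 stones; whoever takes the last stone wins.
    Like Deep Blue's brute force (on a vastly smaller scale), we try every
    legal move and recurse to the end of the game, returning a move that
    leaves the opponent with no winning reply, or None if all moves lose.
    """
    for take in (1, 2, 3):
        if take == stones:
            return take              # taking the last stone wins outright
        if take < stones and best_move(stones - take) is None:
            return take              # opponent is left in a lost position
    return None                      # every move loses against best play

print(best_move(9))  # 1: leave the opponent 8 stones, a losing position
print(best_move(8))  # None: multiples of 4 are lost for the player to move
```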

The other supercomputer we discussed was Watson, also designed by IBM. Watson was created to be a contestant on the game show Jeopardy!. Because success on Jeopardy! requires speedy interpretation of puns and other linguistic tricks, this is a daunting task for a computer; it requires complex language analysis. But with the ability to evaluate around 200 million pages of content per question and almost 3,000 processor cores, Watson was able to defeat Jeopardy! star Ken Jennings in a special match (video below). A simplified explanation of Watson’s method: it selects key words from clues, runs them through its 15 terabytes of knowledge stores, and then calculates the probability that the answer it has found is correct. If this probability meets a certain threshold, Watson buzzes in.

Video: Jeopardy! IBM Watson Day 3
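The buzz decision at the end of Watson’s pipeline can be sketched in a few lines; the scores below are made up for illustration, not Watson’s actual scoring machinery.

```python
def decide_to_buzz(candidates, threshold=0.5):
    """Buzz in only if the best candidate answer is confident enough.

    `candidates` maps candidate answers to estimated probabilities of being
    correct (in the real system, the output of large-scale evidence scoring).
    """
    answer, confidence = max(candidates.items(), key=lambda item: item[1])
    return answer if confidence >= threshold else None

# Toy scores for a clue whose answer the system is fairly sure of:
scores = {"Stanford": 0.72, "UCLA": 0.18, "MIT": 0.06}
print(decide_to_buzz(scores))                 # 'Stanford' -- buzz in
print(decide_to_buzz(scores, threshold=0.9))  # None -- stay silent
```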

Although Watson’s algorithms and processing speed allowed it to determine the correct answer much of the time, its occasional erratic behavior betrays its non-human nature: for example, choosing a person’s name as an answer when the clue is clearly asking for a book, or the oddly specific bet amounts chosen through statistical analysis. This, however, raises the question: is “human-like” behavior the ideal for artificial intelligence, or simply a bar to be exceeded?
