Class Summary 11-14-11

Today we spoke about Software and Property, picking up where we left off last week.  We were shown a 1985 ad for the HP-85 personal computer, because Jon could not get the machine he brought last week fully powered.  The sewing-machine-sized box was advertised as portable, friendly, expandable, and capable of “full-screen editing.”

We then discussed the Time magazine article assigned as the reading for today’s class.  Interesting points about the “Machine of the Year” piece included: the back-up power provided by a hand crank, the fear that computers would completely replace human jobs, the very un-PC jab at the Japanese out of fear of their success in the computing field, and the failure to predict the huge fields of software and tech support.  A notable point was the very low 1980s estimate of the maximum number of personal computers in the 2000s: 80 million.  Today, roughly 300 million computers are sold each year worldwide.  Even accounting for how frequently personal computers are replaced, the modern availability of computers is far beyond what was predicted.

Next, we discussed “A Brief History of Hackerdom” by Eric Raymond.  An interesting distinction was drawn between hacking and cracking, one the media does not make.  Where cracking is breaking into a system with malicious intent (the definition the mainstream media uses for hacking), hacking is entering a system without permission but without malicious intent, perhaps to understand and explore the system or to expose serious security flaws.  Raymond’s point was that early hackers created the first internet culture by using the ARPANET to communicate about the innovations they were making and discovering.  Although the ARPANET did not connect all computers the way the modern internet does, by logging into shared servers a user could join discussion boards or download games.  A fun thing to come out of this early internet culture was Blinkenlights.

Next, we read the following quote by Donald Knuth (1974):

“Computer programming is an art, because it applies accumulated knowledge to the world, because it requires skill and ingenuity, and especially because it produces objects of beauty.”

This is not a common view of computer programming, as it is usually seen as more of a math skill than a creative one.  However, a program that works efficiently requires a measure of elegance and ingenuity beyond simply solving a problem.  As a class, we decided that Knuth’s view of computer programming is the ideal, since the real world imposes constraints like limited time and funds.  Although not all computer programming is artistic, all computer programming could be.

We then defined some terms:

Open source: the source code is available to be viewed, with or without a monetary fee.

Free software: two senses, free as in beer or free as in speech.  Free as in beer means it costs no money, while free as in speech means it is available without restriction, with or without a monetary fee.

Another way that software can be “free” or “open” is when it is developed in the open, with any hacker able to edit and/or add their code to the software.  This is very similar to the way Wikipedia is run.  In this way, saying software is “open” or “free” is a complex statement with several possible meanings.

11/9 Class Summary: Guest Lecture by Jon Brewster

Today, Jon Brewster from Hewlett-Packard gave a guest lecture, entitled “Will Compile For Food (life in corporate America)”. He has worked at HP since 1977, and graduated from OSU in 1980.

When he was at OSU in about 1976, there was a large analog computer or two in Dearborn Hall. These computers actually added and divided voltages directly, rather than using voltages to represent bits. This made them very fast (compared to contemporary digital computers) for physical simulations, such as flight simulators.

Jon explained which projects HP’s Corvallis location has worked on. Originally it mainly built calculators. These were very useful in their day: programmable, modular (memory modules could be added), and based on Reverse Polish Notation (which looks like 4 3 + instead of 4 + 3). He even wrote a universal Turing machine on one of them.
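As a quick illustration of how RPN works (a minimal sketch of my own, not anything from Jon’s calculators), an RPN expression is evaluated with a stack: operands are pushed, and each operator consumes the top two values.

```python
# Minimal Reverse Polish Notation evaluator: operands are pushed onto a
# stack, and each operator pops its two arguments and pushes the result.
def eval_rpn(expression):
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    stack = []
    for token in expression.split():
        if token in ops:
            b = stack.pop()   # the second operand was pushed last
            a = stack.pop()
            stack.append(ops[token](a, b))
        else:
            stack.append(float(token))
    return stack.pop()

print(eval_rpn("4 3 +"))      # 7.0, i.e. 4 + 3
print(eval_rpn("4 3 + 2 *"))  # 14.0 -- no parentheses needed for (4 + 3) * 2
```

Because the operator comes after its operands, the order of operations is always explicit and no parentheses are needed.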

In 1984, HP released its first personal computer, which included many other firsts for the company: its first mouse, inkjet printer, 3.5″ disk, window system, flat panel, and Unix system. Since there were no standards, all of the drivers and the operating system were built from the ground up at HP. They had to cross-compile C code to get it running on the processor, which involved a complicated bootstrapping process of running compilers on themselves.

Between 1987 and 1993, HP led a consortium that standardized the X Window System, so that it was easier for application makers to write for any machine. This consortium won out over Sun Microsystems’ very nice window system, because the standardization was helpful to developers and cost customers nothing.

Jon also dropped a crucial knowledge bomb around this point: “If you don’t answer email from your 6-year-old daughter, it’s not okay.”

In the mid-1990s, Jon went to Hawaii to work at an observatory, replacing very old equipment (a computer that used Fortran and 16-bit manual input) with a more modern Unix/C/X Window System setup. He has become quite the astronomy hobbyist and operates his own automated mini-observatory in Monmouth, controlled entirely by JavaScript.

Since about 1998, HP Corvallis has focused on eServices. Jon is extremely excited about eServices, particularly about using Agile development processes (in this case, Scrum) to deliver software in small increments and adjust easily to changing requirements. This is likely an ideal situation for Agile; despite Jon’s disdain for waterfall development, some projects do need a larger up-front perspective, even if eServices do not. eServices also have the upsides of making it easy to push updates, keep code in-house, test, and gather data from customers and users.

Finally, Jon began to talk about the cloud. He explained that the cloud doesn’t simply mean an application that stores no information locally; rather, it is a different processing/data-storage paradigm that distributes both processing power and data over many servers, rather than having many servers that crunch numbers while pulling from one main database. This avoids the database bottleneck and makes it easier to expand capacity without overbuying, so it works well for sites like Facebook and Google. Jon called this ‘map reduction’ (MapReduce): computing in parallel across a ‘blizzard’ of machines, and then reducing to the answer needed.
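To make the idea concrete, here is a minimal sketch of the classic MapReduce word-count example (my own illustration in Python, not HP’s, Google’s, or Facebook’s actual code): each machine maps over its own slice of the data, and a reduce step combines the partial results into the final answer.

```python
from collections import Counter
from functools import reduce

# Pretend each string is the slice of a huge log held by one server.
chunks = [
    "the cloud is many machines",
    "many machines map in parallel",
    "then reduce to the answer",
]

# Map: every machine independently counts the words in its own slice.
partial_counts = [Counter(chunk.split()) for chunk in chunks]

# Reduce: combine the partial results into one final answer.
total = reduce(lambda a, b: a + b, partial_counts, Counter())

print(total.most_common(3))  # e.g. [('the', 2), ('many', 2), ('machines', 2)]
```

The map step is embarrassingly parallel, which is exactly why adding more machines adds capacity without creating a single database bottleneck.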

Unfortunately, we did not have time to see whether Jon’s ancient HP PC worked, but his enthusiasm in relaying the developments of the last 30 years in computing was much appreciated.

Class Summary: 11/7

Continuing on from the last class, we began the morning discussing objections and shortcomings of the Turing test, the state of artificial intelligence, and more on the proceedings of previous Loebner prize competitions.

While the judges are still not being fooled by machines, one argument against the Turing test brought up in class is its inability to discern between intelligence and the appearance of intelligence. Searle’s Chinese Room, a thought experiment, was brought up to illustrate this point. The experiment imagines a person in a room, with no knowledge of the Chinese language, tasked with writing responses to messages written in Chinese using a book that supplies a Chinese response for any possible input. If the entire process is done from the book, the person would appear to be conversing in Chinese but would have no understanding of what they are reading or writing. Likewise, the machines attempting to win the Loebner Prize respond in a programmed fashion and do not understand what they are saying. The use of the Turing test as a metric for machine intelligence was also questioned, as it is fairly subjective and, as we saw with the Loebner Prize, depends on the judge’s experience with pseudo-intelligent conversation machines.
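A loose sketch of Searle’s point (my own illustration, not his formulation): the “book” is just a lookup table, and a program that follows it can keep up a plausible exchange with zero understanding of the symbols it is shuffling.

```python
# The "book": canned responses keyed on patterns in the incoming message.
# Following these rules mechanically produces conversation with no understanding.
rule_book = {
    "hello": "Hello! How are you today?",
    "how are you": "I am fine, thank you for asking.",
    "weather": "Yes, lovely weather we are having.",
}

def chinese_room(message):
    for pattern, response in rule_book.items():
        if pattern in message.lower():
            return response
    return "How interesting. Tell me more."  # fallback when no rule matches

print(chinese_room("Hello there"))
print(chinese_room("What do you think of the weather?"))
```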

Other AI-related topics were also discussed, relating to the history of AI. The term AI originated in 1956 with John McCarthy’s proposal for the Dartmouth conference, a summer-long research project to study the learning capability of machines. McCarthy was unable to reach his lofty goals, and many of them have not been attained even today. We also learned about the general slump in AI research (the “AI winter”) that continued through the 1990s, until computer hardware adequate for many of these problems started to become available.

After concluding our discussion on artificial intelligence, the Loebner Prize, and the Turing Test, we transitioned into talking about advancements in computer architecture leading to microcomputers.

A modern integrated circuit

We began this discussion by outlining the evolution of computers from vacuum tubes to discrete transistors, to integrated circuits, and eventually to microcomputers. Integrated circuits are entire circuits created on a single piece of semiconductor through the process of photolithography. We discussed how this method of making circuits is advantageous over hand assembly, as it can produce smaller, cheaper, more efficient, and less error-prone circuits, a necessity for the creation of microcomputers. Today almost every computer made is constructed mostly from integrated circuits.

Along with the introduction of the integrated circuit, we discussed the advances in memory that have made the modern computer possible. Two categories of memory were analyzed in class: serially accessed storage and random access memory (RAM).

On the subject of RAM, we watched a short video discussing three types: magnetic core memory, static RAM (SRAM), and dynamic RAM (DRAM). Although antiquated today, core memory was predominant in the early decades of electronic computing, up through the 1970s. We learned how core memory uses wires hand-woven through magnetic rings (toroids), with electric currents storing bits of data magnetically. Being hand-assembled, these devices commonly held only up to a few kilobits. We also briefly discussed the two forms of RAM commonly seen today, SRAM and DRAM, which use transistors and capacitors, respectively, to store bits of information.

After talking about RAM, we moved on to cover how information has been stored on computers, and how the methods of doing so have changed since the introduction of electronic computers. The first method to catch on after the era of punch cards was magnetic data tape, which stored data magnetically on large reels. One example was the 1.6 kilobit/inch tape shown during the previous week, where one large 700-inch reel could hold only 140 kB. Another storage device was the floppy disk, which progressed from monstrous 8” 80 kB disks down to 5.25” and later 3.5” 1.44 MB disks. The class then examined the innards of a 3.5” floppy disk to see the magnetic film disk inside the cartridge where the data is actually stored.
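As a quick sanity check on the tape figures quoted above (using the density and reel length given in class), the capacity works out as stated:

```python
density_kbits_per_inch = 1.6   # recording density quoted in class
reel_length_inches = 700       # length of the reel shown in class

capacity_kbits = density_kbits_per_inch * reel_length_inches  # 1120 kilobits
capacity_kbytes = capacity_kbits / 8                          # 8 bits per byte
print(capacity_kbytes)                                        # 140.0 kB
```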

Similar in function to a floppy disk are hard disk drives, which were also discussed. Using multiple spinning magnetic platters and read/write heads on movable arms, hard disks are able to store very large amounts of data in comparison to other storage media. Two old, disassembled hard disk drives were passed around the room for a hands-on look at how the devices work (which hasn’t changed much in recent decades).

On a side topic, we also discussed the increasing use of non-volatile solid state (semiconductor-based) storage drives in place of hard disks. These devices replace moving parts with integrated circuits, allowing for faster operation; however, they have not yet matched the storage density of traditional hard disks.

At the end of the conversation we also discussed the origins of the magnetic-platter hard disk with the IBM 350 “RAMAC” disk drive. Utilizing fifty 2-foot-diameter platters spinning at 1200 RPM and a single read/write head, the IBM 350 could hold 5 MB of data. In comparison to modern disk drives this seems laughably small, but the device was a computing breakthrough when it was introduced in 1956.

Class summary: 11/2

We started with a brainstorming session for the question: What can humans do?

A majority of the answers fell into the high-level intelligence category: recognize emotions, translate foreign languages, compose music, write poems, create something new, recognize contexts/patterns/3D objects, make medical decisions, rephrase, paint, etc.  These were offered as things that distinguish humans from computers.

It was surprising that none of us mentioned physical activity, like driving. Computer drivers are supposed to be more reliable than humans, free of distractions such as texting, talking on the phone, or listening to music. And yet we are still frightened by the scenario of an unmanned vehicle, so we always want a human override in case something goes wrong.

The discussion revolved around the question “Can machines think?”, which, in Turing’s time, received a lot of knee-jerk objections and was deemed too meaningless to deserve discussion by Turing himself. Instead, the question should be whether a machine can do well in a behavioral game that would seem to require the presence of a mind or thoughts. The first such game was the Imitation Game, designed by Alan Turing. He described the game as follows:

Suppose there is a man (A) and a woman (B) and an interrogator (C) who can be of either gender. The interrogator is separated from the other two. The object of the game is for the interrogator to determine which of the other two is the man and which is the woman. The question then is, what if we let a machine take the part of A in this game? Would it be able to “fake” being a man and fool the interrogator? Such questions are more precise than “Can machines think?”.

It is noteworthy that Turing was aware of the major objections to the claim that machines can think, so he went on to list nine objections and give his arguments against them (though these were not discussed in class):

1. The theological objection: God has granted humans a soul, and the soul is what makes us able to think.  Animals and machines, regardless of having a physical body, do not have a soul, so they cannot think.

2. The “heads in the sand” objection: if machines could really think, the consequences would be too frightening.  Humans would lose their sense of superiority and uniqueness, and would face the fear of being replaced or decimated by intelligent machines.  Such predictions have been depicted darkly in science fiction movies like “I, Robot”, “Terminator”, and “Eagle Eye”.

3. The mathematical objection: based on results such as Gödel’s incompleteness theorem, there are mathematical questions that a machine operating purely on logic can never answer, whereas (the objection claims) humans are not subject to this limitation.

4. The consciousness objection: the absence of emotions and feelings suggests that computers cannot have anything equivalent to the human brain.

5. The disability objection: a list of things that computers supposedly cannot do, such as be friendly, be kind, tell right from wrong, have a sense of humor, fall in love, etc.

6. Lady Lovelace’s objection: machines can only be as smart as we tell them to be, and can only do the things we program them to do, based on Ada Lovelace’s description of the Analytical Engine.

7. The argument from continuity in the nervous system: human brains are not digital; they produce continuous responses, whereas computers operate on a discrete basis of being on or off.  The objection claims that without continuous response, machines cannot have intelligence.

8. The argument from informality of behavior: machines operate on fixed sets of rules, while there is no strict rule for what a human ought to do in every possible set of circumstances.  It follows, the objection claims, that humans are not machines.

9. The extrasensory perception argument: Turing was, surprisingly, fairly convinced by the evidence for human telepathy, so he set up the conditions of the game such that mind-reading was impossible for interrogators.  The objection was that humans could use telepathy to figure out whether the other participants are humans or machines, and Turing argued that machines could be telepathic as well.

 

We then had a mini debate over the prospects of Artificial Intelligence. The biggest obstacle now for AI is how to make machines remember and learn from experience. Some hilarious examples were shown in the following two videos:

AI vs. AI: Two chatbots talking to each other:

 

Two Bots Talking: Fake Kirk and A.L.I.C.E.

 

 

Class Summary: 10/31

Class began with a discussion of some of the factual inaccuracies found in Jacquard’s Web. Although the book is very readable, some technical accuracy was sacrificed for the sake of its narrative. One example of a factual inaccuracy found in the book was the suggestion that the ENIAC was programmed using punched cards, when in reality the machine was programmed using patch cables.

Discussion turned to the reading “The Past and Future History of the Internet,” by Barry M. Leiner et al., and to the early formation of the internet. One of the earliest forms of the internet began with DARPA (Defense Advanced Research Projects Agency) and the creation of ARPANET (Advanced Research Projects Agency Network). The first successful communication over ARPANET was sent on October 29, 1969, between UCLA and the Stanford Research Institute (SRI).

Log showing first communication over ARPANET

The question of who, exactly, invented the internet has an ambiguous answer. Because the creation of the internet was such a collaborative, community effort, the best answer is probably that not one single person was solely responsible.

Of important note was the use of packet switching, rather than circuit switching, in ARPANET. In a circuit-switched network, a direct, physical connection has to be made between the two communicating parties. To make different connections, the actual infrastructure of the network has to be reconfigured (for example, telephone operators switching cable connections). In a packet-switched network, on the other hand, lines are shared (multiplexing) and traffic is managed by routers. In this system, the physical infrastructure of the network doesn’t have to be changed to accommodate different connections, and good routes through the network can be determined dynamically. This kind of network is what makes the internet as we know it possible.
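A minimal sketch of the packet idea (illustrative Python only; real IP packets carry far more header information): the message is chopped into numbered packets that can travel independently, possibly arrive out of order, and then be reassembled at the destination.

```python
import random

def to_packets(message, size=8):
    """Split a message into (sequence_number, payload) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Sort by sequence number and glue the payloads back together."""
    return "".join(payload for _, payload in sorted(packets))

packets = to_packets("Packets share lines and may take different routes.")
random.shuffle(packets)      # the network may deliver packets out of order
print(reassemble(packets))   # the original message is restored
```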

The next topic of discussion was competition between humans and computers, and, more specifically, the supercomputers Deep Blue and Watson. Deep Blue is the name of a chess-playing computer created by IBM for the sole purpose of playing chess. Deep Blue calculated its moves using brute-force analysis, meaning that millions of possible moves were considered every turn to find the most advantageous one. This processing-heavy analysis was possible because of Deep Blue’s advanced processing capabilities and its specialized chess hardware. In its time, Deep Blue was among the most powerful supercomputers in the world; it could evaluate around 200 million positions per second. World champion Garry Kasparov was defeated by Deep Blue in 1997.

Video: Deep Blue beat G. Kasparov in 1997
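Deep Blue’s real search was far more sophisticated (custom chess chips, alpha-beta pruning, opening and endgame databases), but the core brute-force idea resembles the classic minimax sketch below, shown over a made-up toy game rather than chess.

```python
# Minimax: recursively score every position reachable within a few moves and
# pick the move whose worst-case outcome is best. Deep Blue's real search added
# alpha-beta pruning, hand-tuned evaluation, and special-purpose chess chips.
def minimax(state, depth, maximizing, moves_fn, apply_fn, score_fn):
    moves = moves_fn(state)
    if depth == 0 or not moves:
        return score_fn(state), None
    best_score = float("-inf") if maximizing else float("inf")
    best_move = None
    for move in moves:
        score, _ = minimax(apply_fn(state, move), depth - 1, not maximizing,
                           moves_fn, apply_fn, score_fn)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move

# Toy game: players alternately add 1 or 2 to a running total (stopping at 10);
# the "max" player prefers a higher final total, the "min" player a lower one.
toy_moves = lambda total: [1, 2] if total < 10 else []
toy_apply = lambda total, move: total + move
toy_score = lambda total: total

print(minimax(0, 4, True, toy_moves, toy_apply, toy_score))  # (best score, best first move)
```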

The other supercomputer we discussed was Watson, also designed by IBM. Watson was created to be a contestant on the game show Jeopardy!. Because success on Jeopardy! requires speedy interpretation of puns and other linguistic tricks, this is a daunting task for a computer; it requires complex language analysis. But, with the ability to evaluate around 200 million pages of content per question and almost 3,000 processor cores, Watson was able to defeat Jeopardy! star Ken Jennings in a special match (video below). A simplified explanation of Watson’s method: it selects key words from clues, runs them through its 15-terabyte knowledge stores, and then calculates the probability that the answer it has found is correct. If this probability meets a certain threshold, Watson buzzes in.

Video: Jeopardy! IBM Watson Day 3
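A heavily simplified sketch of that buzz-in logic (the candidate answers, scores, and threshold here are invented for illustration; Watson’s real pipeline combined hundreds of scoring algorithms):

```python
def decide_to_buzz(candidate_answers, threshold=0.5):
    """candidate_answers: list of (answer, estimated probability it is correct)."""
    best_answer, confidence = max(candidate_answers, key=lambda pair: pair[1])
    if confidence >= threshold:
        return best_answer   # confident enough: buzz in with this answer
    return None              # stay quiet rather than risk a wrong answer

# Hypothetical candidates produced by searching the knowledge store for a clue.
candidates = [("Bram Stoker", 0.83), ("Dracula", 0.41), ("Transylvania", 0.07)]
print(decide_to_buzz(candidates))  # Bram Stoker
```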

Although Watson’s algorithms and processing speed allow it to determine the correct answer much of the time, its occasional erratic behavior betrays its non-human nature: for example, choosing a person’s name as an answer when the clue clearly calls for a book, or the oddly specific bet amounts chosen through statistical analysis. This, however, raises the question: is “human-like” behavior the ideal for artificial intelligence, or simply a bar to be exceeded?

Computers: The Ultimate Machines of War

War is the driver of history – many of our most prolific inventions have been the result of advances in military technology. Wars have influenced the development of computers in particular, both necessitating their advancement and at times delaying researchers’ progress. The earliest computing machines were used primarily for generating mathematical tables, which could be applied to nearly any industry, and for tabulating massive amounts of data. Military leaders quickly recognized how computers could be applied to their needs, though. One area in which this was particularly evident was cryptography – the business of encrypting and cracking secret messages.

The Enigma machine was one of the first electromechanical devices used for the encryption and decryption of secret messages. Built by German engineers Arthur Scherbius and Richard Ritter in the 1920s, it used a series of wired rotors to route keypresses through the machine, encoding each letter. After each letter was encoded, the rotors advanced, so each letter in the message would be encrypted differently. With various other mechanisms to complicate reversing the code, there were about 10,000,000,000,000,000 possible combinations (Singh, pg. 136). The Enigma machine was used by Nazi Germany in World War II to encrypt nearly all of its radio transmissions. Deciphering this information was of crucial importance to the British military, and an entire campus at Bletchley Park was established as a base of operations for the cryptanalysts. One of the foremost researchers, Alan Turing, developed machines of his own, designed to brute-force the ciphers based on known pieces of information. These machines, which he dubbed “bombes”, were another significant step in the progress of computing. By the end of the war, 49 of them were in use at Bletchley Park (Singh, pg. 181).

War did not solely advance the progress of computers, though. With significant resources being put into fighting World War II, available computers were almost exclusively purposed for wartime calculations. An example of this was the Harvard Mark I, often regarded as the first fully functional and reliable electromechanical computer. Built by IBM under the direction of Howard Aiken, it was donated to Harvard University for use in research.

When the Mark I was completed in 1944, IBM gave it to Harvard as a gift. That spring it was installed at the university but was immediately leased for the duration of the war by the US Navy, desperate for gunnery and ballistics calculations. Aiken, a naval reserve officer, was put in charge of the Mark I for the Bureau of Ships – Williams, pg. 112

Although the actual construction of the Mark I was completed rapidly, it was quickly commissioned by the Navy, and researchers at the university were shortchanged of the opportunity to truly take advantage of these new computing resources. Had it not been for the war, computer research might not have progressed quite so rapidly, but the technology would have gotten into the hands of civilians much faster.

The role of women in the history of computing goes back as far as Babbage’s era. Frequently, women were employed in the task of completing the manual calculations (human “calculators”) for men’s work in engineering, astronomy, and other fields. The division was largely a matter of sexist beliefs that only men should do the actual innovation, but that manual computation was a waste of their time (Ceruzzi, pg. 240). Nevertheless, jobs as “computers” were very popular with women, as they were still a step up from the common secretarial work which was oftentimes the only other option. The women who took these jobs were often very proud of their work, for though the labor was menial, it did require significant mathematical abilities (Ceruzzi, pg. 239).

It is therefore unsurprising that many of the first computer “programmers” were women as well – it was merely an extension of the existing tradition of men determining what needed to be calculated, and women executing the calculation. Many of the women programmers were hired because they were mathematicians – one of the few fields women could study at the advanced level – even though they had absolutely no formal training with computers (Williams, pg. 113).

Computers are crucial in modern warfare, to a far greater extent than Aiken or Hopper could have ever predicted. Fighter jets are flown by advanced electronics, computer-powered satellites beam high-resolution imagery to military intelligence agencies, and cyber-espionage is of increasing concern for governments worldwide.

One of the areas in which military technology has come to depend extensively on computers is unmanned aerial vehicles (UAVs), often referred to as “drones”. The United States Air Force and other government agencies have been using UAVs since the mid-1990s. Originally they served in reconnaissance missions, but they have since been fitted with missiles and other weapons (USAF, 2010). Some have questioned the ethical implications of such dangerous weapons being controlled by soldiers who operate them from across the world, far from where the fighting occurs. Most of the approximately 700 Predator drones in Iraq are controlled from an Air Force base in Nevada, where soldiers may have limited perception of what is actually happening on the ground when they fire their missiles (Harris, 2006).

Another area in which computers are of increasing importance in modern warfare is not on the battlefield but in cyberwarfare. A highly publicized example was the Stuxnet virus, which infected the industrial control systems running uranium enrichment centrifuges in Iran. It is unknown to this day who was responsible for the virus, but investigations have implicated both the Israeli and American governments as possible perpetrators of the attack. The virus caused centrifuges to spin out of control, resulting in serious damage to the enrichment systems. This set back Iran’s nuclear program by years, which may have disrupted its attempts to build an atomic bomb. With various nations establishing dedicated cyberdefense units in their militaries, this type of sabotage may become the next major front of global wars.

John von Neumann was quoted as saying,

If you say why not bomb them tomorrow, I say, why not today. If you say at five o’clock, I say why not one o’clock. – Rheingold

The “them” in this quote refers to the USSR. Von Neumann was very closely involved with the development of atomic weapons before and during the Cold War, and was a strong advocate of a preventive strike against the USSR. This was a time of intense fear of global war, and von Neumann believed that if the United States did not attack the USSR first, it would surely be decimated. Nevertheless, his views were seen by many as extreme, and a majority of the American population did not advocate a preemptive attack against the USSR. Looking back at the Cold War era from our current vantage point gives us perspective on just how devastating an all-out nuclear war would have been – which von Neumann’s proposed attack surely would have provoked. Yet had I been alive at that time, it is difficult to say whether I would have agreed or disagreed. To be sure, there was significant fear of the Soviets, but as a generally pacifistic individual, I believe I would have advocated against starting a war with such a formidable opponent.

 

Sources:

Ceruzzi, P. E. (1991). When Computers Were Human. IEEE Annals of the History of Computing, 13(3), 237-244.

Harris, F. (2006, June 2). In Las Vegas a pilot pulls the trigger. In Iraq a Predator fires its missile. The Telegraph.

Rheingold, H. (2000). Johnny Builds Bombs and Johnny Builds Brains. In Tools for Thought.

Singh, S. (2000). The Code Book: The Science of Secrecy from Ancient Egypt to Quantum Cryptography.

USAF. (2010, July 20). MQ-1B Predator. Retrieved October 29, 2011, from US Air Force Information Factsheets: http://www.af.mil/information/factsheets/factsheet.asp?fsID=122

Williams, K. (n.d.). Improbable Warriors: Mathematicians Grace Hopper and Mina Rees in World War II.

 

Class Summary: 10/26

We began class by talking about the paper “When Computers Were Human,” written by Paul Ceruzzi. Dr. Wagstaff started by asking us whether or not we felt that the reading questions were helpful. There was a general lack of response due to it being early and several students having not yet arrived. We then went on to talk about the paper itself, which describes how, in the early 1940s, humans did all of the necessary scientific and engineering calculations by hand or with mechanical calculators. This was a barrier in many scientific fields, as the work took an increasingly long time as calculations became more complex. Many research facilities had to keep hiring more and more human computers to keep up with all the calculations they had to do. Even this did not yield big increases in computing power, because unlike electronic computers, whose power has increased exponentially, human computation could not scale that way.

We then began discussing what the job of being a human computer was like. Most computers were women, and their job usually consisted almost entirely of doing math problems, not any design or testing; that was left to the engineers, who were usually men. To us today, the job of being a computer sounds pretty boring, but women at the time considered it a good job, mostly because it was better than most other jobs they could get and it paid well. It also allowed them to contribute to the war effort, and the Navy employed many women computers who were given ranks and titles.

The video “ENIAC” was then shown. The part we watched was an interview with a woman who worked on ENIAC. She talked about how they had to program with patch cables, and how reliability was always a big issue: you could never be sure that the machine was working correctly. To deal with this, they would run a test program before and after the actual program, so that they could make sure the machine was working properly beforehand and that nothing had gone wrong while the program ran.

Next we watched the video “FIRST COMPUTER ENIAC”. The part we watched talked about how they had to physically wire the machine. ENIAC was built in a circular room, so the lead programmer would stand in the middle yelling instructions to women standing next to particular parts of the computer, who would wire the machine. The video also showed someone double-checking some of the calculations with an abacus, because at the time that was much more reliable than the computer.

We then discussed the UNIVAC computer, which was finished in 1951. Computers need the ability to do logical operations, and they also need memory. In today’s computers, operations are done in the CPU. At the time of UNIVAC, they were done with vacuum tubes. Vacuum tubes are large and burned out frequently, so they were not ideal. Later computers used transistors instead of vacuum tubes, which are much smaller and more reliable. Today billions of transistors can be fit onto one silicon chip. Vacuum tubes are not used much anymore, although they still appear in certain devices like amplifiers.

For memory, the UNIVAC used mercury delay lines, which sent acoustic waves through tubes full of mercury. Mercury was used so that the waves would propagate slowly. The tube of mercury had to be kept at a constant temperature; otherwise the waves would propagate too slowly or too quickly.  A few years after UNIVAC, magnetic tape came into use as storage, which was a huge advance at the time. There were machines to convert a deck of punch cards into a magnetic tape, and the tape could then be read by the computer much faster than punch cards.
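As a loose software analogy (my own sketch, not how the UNIVAC was actually built or programmed), a delay line behaves like a fixed-length queue whose bits constantly recirculate, so a given bit can only be read or rewritten at the moment it emerges from the line:

```python
from collections import deque

line = deque([0] * 16)   # a tiny 16-bit "tube of mercury"

def tick(write_bit=None):
    """One time step: the oldest bit emerges from the line; either recirculate
    it unchanged or replace it with a new bit being written into memory."""
    emerging = line.popleft()
    line.append(emerging if write_bit is None else write_bit)
    return emerging

tick(write_bit=1)        # write a 1 into the slot that just emerged
for _ in range(15):
    tick()               # wait while the rest of the line circulates past
print(tick())            # the stored 1 finally comes back around: prints 1
```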

We then watched the video: UNIVAC: Remington-Rand Presents the UNIVAC. The video talks about how to program the UNIVAC.  Programmers at the time would write a program and then had to compile it themselves. Then typists would type the compiled program into a console that would put the program on a magnetic tape. This machine was also backwards-compatible with punch cards.

Dr. Wagstaff then gave each of us an unpunched card on which we attempted to write a short message. This took a while, even for short messages, and we talked about ways the process could be made easier. One idea was to make the more commonly used letters, such as e and a, easier to punch than other letters. This concluded our discussion for the class, and Dr. Wagstaff asked us to find and bring in a fact about IBM’s Deep Blue computer or Watson, the computer that competed on Jeopardy!.

What if the Difference Engine existed in the 1800s?

The second assignment for this class focused on speculating about alternate history.

What if Charles Babbage had completed his Difference Engine in the 1830s, and the engines were then mass-produced, spreading outward into all areas of calculation?

Students each selected a key event from the period 1830 to 1880 and discussed how it might have been altered by the availability of Difference Engine technology. These events included political, scientific, technological, and financial happenings that together provide an eclectic view of the time period.

The alternate histories make for fascinating reading. Links are provided above to submissions the students have shared publicly. Read on!

Class Summary 10-24-11

We started class by reviewing the feedback from last class—it seems that everyone is pretty satisfied, which is great!  From now on, we will be doing roughly a half-and-half mix of reading review and class discussion, which is very similar to the previous class format.  A new element will be focused (ungraded) reading questions to aid the reading process, given out via email.

There will be a new blog post with the class’s topics for Assignment #2, as we all kicked butt on the assignment with an average of 9.3/10.

We ended with Hollerith last time, so today we continued right where we left off.  It was at this moment in history that IBM started down the path to creating the modern computer by making calculating machines.  The company began as CTR, and became successful selling punch cards to other companies, a very lucrative venture because each punch card could only be used once.  Howard Aiken designed the first real computer in the modern sense, produced by IBM at a cost of $1.5 million in modern dollars, and it wasn’t even a commercial venture: the machine was given to Harvard University and named the Harvard Mark I (although officially it was called the Automatic Sequence Controlled Calculator).

This brings us to today’s real topic, Grace Hopper.  She worked as a mathematician for the Navy, and when Aiken requested women from WAVES to do calculations (women were often used for calculations during this time, which I found surprising), Hopper was assigned to work on the Mark I at Harvard.  We read her paper, “The Education of a Computer,” in which she talked about programming computers.  Programming in her time was very low-level compared to modern programming.  She envisioned using programming languages to speed up and improve the accuracy of computer calculations, because at the time only raw numbers could be entered into computers.  To demonstrate this process, we played a game called “Robo Rally” in groups of four.

Each team of two was given a robot piece and told the following instruction set:

000: Forward 1

101: Forward 2

010: Turn right

011: Turn left

100: Back up 1

The goal of the game is to reach the goal marker in 10 moves or fewer.  The conveyor belt spaces move you one or two spaces after your turn (depending on the number of arrows in the space), the gear spaces rotate you 90 degrees after your turn, and the black spaces are death holes that you fall through and die.

The actual game, of course, does not work this way.  Instead of binary codes, players are given cards with symbols on them representing possible moves.  This is better because sequences of zeros and ones have no semantic meaning to us, and it is very easy to make clerical errors.  Hopper rightly saw that a language for programming computers would make programming far more human-friendly.
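To see why raw opcodes are so error-prone, here is a minimal interpreter (my own sketch; it tracks only position and heading and ignores the board’s conveyors, gears, and holes) for the binary codes in the table above, along with the kind of symbolic layer Hopper was arguing for:

```python
# Interpreter for the binary robot codes used in class. The robot tracks only
# its position and heading; conveyors, gears, and death holes are ignored.
OPCODES = {
    "000": ("move", 1),
    "101": ("move", 2),
    "010": ("turn", 1),    # turn right
    "011": ("turn", -1),   # turn left
    "100": ("move", -1),   # back up 1
}

HEADINGS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # north, east, south, west

def run(program):
    x, y, heading = 0, 0, 0
    for code in program:
        action, amount = OPCODES[code]
        if action == "move":
            dx, dy = HEADINGS[heading]
            x, y = x + dx * amount, y + dy * amount
        else:
            heading = (heading + amount) % 4
    return x, y

# The raw binary program versus a symbolic version of the exact same program:
print(run(["101", "010", "000"]))                               # (1, 2)
mnemonics = {"FWD2": "101", "RIGHT": "010", "FWD1": "000"}
print(run([mnemonics[m] for m in ["FWD2", "RIGHT", "FWD1"]]))   # (1, 2) again
```

Both programs do exactly the same thing, but the mnemonic version is far harder to mistype, which is the heart of Hopper’s argument.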

Grace Hopper did eventually create a programming language, which she called FLOW-MATIC.  At the time, programmers used flow charts to plan their programs before translating them into machine code, which inspired the name FLOW-MATIC.  Remington Rand, where Hopper worked on the UNIVAC, advertised the change thusly: “Mastering the knowledge of the complicated techniques and symbols of conventional computer flow charts requires a long training period.  Flow-Matic charting, however, can be easily grasped by anyone with knowledge of the application to be programmed.”

A fun anecdote about Grace Hopper is the story of the “first bug,” which happened to the Harvard Mark II in 1947.  At the time, the word “bug” was used to describe a flaw in a physical design, but after a moth was actually found in the machinery, disrupting a computer program, “bug” became a term for a programming flaw or mistake, and “debugging” became the process of fixing such mistakes.

Finally, we watched this video:

Grace Hopper 60 Minutes Interview in 1982

For next time, the readings are available on the syllabus.  Questions will be sent out via email, including a request for a saying of fewer than 80 characters to be brought in by each student.

Class Summary: 10/19

Class today began with a short feedback session, in which we filled out short notes stating something we thought was going well in the class, and something that we thought could be improved.

From there we jumped straight into sharing interesting quotes and passages from the reading, “Johnny Builds Bombs and Johnny Builds Brains”. The favorite quotes covered diverse topics: how von Neumann managed to win vast amounts of financial support from the government, likely due to his charisma and well-placed connections (rather unlike our old friend Babbage); the somewhat lucky rise of Mauchly and Eckert, and their fortuitous partnership with Goldstine, who had grown increasingly frustrated with army policies; von Neumann’s diverse and rather charmed existence, with his intellectually star-studded parties and major contributions to the fields of game theory, quantum physics, and operations research (and later to life itself and the construction of automata, to be called von Neumann machines); and finally the mess about who came up with which ideas first during this period of extremely rapid innovation.

This last topic led into a discussion of rights and patents during this period. It began with the innovation of the stored program (and the infamous First Draft, which led many to give the credit solely, and perhaps unjustly, to von Neumann, whose name was likely the only one on the manuscript because it was only a draft), as well as the dispute between von Neumann and co.’s ENIAC machine and Atanasoff and Berry’s ABC. The disagreement stemmed from a short visit by Mauchly to Atanasoff, during which the exchange of ideas eventually leading to the construction of the ENIAC may or may not have taken place. While that disagreement was “settled” by a Minnesota court in 1973 (in favor of the ABC), discussion is still ongoing and unclear about who was responsible for which ideas during this time of extremely rapid innovation.

Discussion then flowed into an attempt to organize the figures and machines that took part in the computer revolution. What we came up with as a class was sort of a mish-mash of connected events and tangled ideas; however, this disarray was actually reflective of the time, in which many people were sharing ideas with others, as well as coming to similar conclusions through independent work. Dr. Wagstaff has graciously organized this information by hardware technology and chronologically:

1. Mechanical computers: Differential Analyzer (1931)

2. Electromechanical computers: Z3 (1941), Harvard Mark 1 (1944)

3. Electronic computers:

– ABC (1942, first vacuum tube logic, 300 tubes, binary representation, not programmable)

– Colossus (1944, 1500 tubes, limited programming with cables)

– ENIAC (1946, 18,000 tubes, decimal representation, programmed with cables)

– EDSAC (1949, Cambridge, 3000 tubes, binary, first stored program, using mercury delay line memory)

– Manchester Mark 1 (1949, first stored program, using cathode ray tube memory)

– ACE (1950, 1450 vacuum tubes, mercury delay line memory, 1 MHz)

– EDVAC (1951, 6,000 tubes, mercury delay lines)

– UNIVAC (1951, 5,200 tubes, mercury delay lines, first commercially available computer in US)

A bit more detail can also be found on this Wikipedia page, which includes a fully chronological table of events from the 1940s. There were also several theoretical constructs included in this discussion, including self-replicating von Neumann machines, universal Turing machines, and how one could turn a Turing machine into a von Neumann machine by adapting the Turing machine’s output with a robotic construction device.

As for the figures, we discussed how the various groups formed and influenced one another. This ends up being somewhat of a web, so I shall arbitrarily choose Turing as our starting point. Turing made his initial contributions while at Princeton studying under Church, a mathematical logician. It was here that he first came into contact with von Neumann, a student of Hilbert’s, though the contact didn’t foster much in the way of later partnerships; Turing returned to England to aid with code breaking during the war, and later drew up the plans for ACE. ACE was eventually built after Turing left the NPL (and was, at the time, the fastest computer in the world at 1 MHz), while Turing oversaw the construction of the Manchester Mark 1. On the other side of the pond, von Neumann joined with Goldstine, Mauchly, and Eckert in efforts that eventually led to the construction of the ENIAC (this group’s interactions with those who constructed the ABC were mentioned earlier), which later blossomed into the EDVAC. Other, somewhat more independent, mentions were Aiken, who was responsible for the Harvard Mark I, and the group at MIT who constructed the Differential Analyzer.

And, finally, any discussion mentioning von Neumann machines would be incomplete without the thoughts of philosopher Randall Munroe.