Bodies & Minds

After the sensation and awareness of other plants and animals, we now turn to what is between our ears. This page is about building minds out of stuff - matter.


FIRST: Computers don’t do anything you can’t do with a pencil and paper. Fortunately, they do it much faster. In fact, it was the need for speed that led us to build modern computers.

My friend and co-worker at UT Austin, Florence Turk, was a computer. That was her job title during WWII. The main focus for computers in those days was to compute artillery trajectories. It was the urge for speed that led us from Florence to machines.


Computers can be built out of almost anything.

The abacus is just a frame and some beads. 


The Digi-Comp was made out of plastic:


Some MIT students built a more elaborate one out of Tinkertoys:

BTW, there is a good survey of computer history at Wikipedia, which shows the continual push for speed.

Now about my use of computers:


Back in the early 1960s, when computers were still very expensive, I built logic for textile looms in my father’s machine shop, made out of telephone relays, programming the steps of a loom across their contacts with a soldering iron. Although hardware is now billions of times faster and I now build complex programs out of software, our programming ability has not advanced much beyond my soldering iron.

[Actually, modern computing started in early-1800s France, with textile looms made by Joseph Jacquard and controlled by punched cards.]


Then, in the late 1960s, I programmed the first supercomputer, the CDC 6600. This was a milestone machine, the first capable of a megaflop [1 million floating-point operations per second]. BUT it cost $8.5 million USD in 1966 ($80 million now?), required its own building with motor generators to power it, and 30-40 of us mathematicians, physicists, EEs [electrical engineers] and musicians to make it go. In comparison, my iPhone today is thousands of times faster, costs $300 and goes in my pocket.

[This picture is from the 1970s, when we had both a CDC 6600 and a 6400 at UT Austin. Charles Warlick, in the suit, was Director of the Computation Center in the 1960s and 70s. Under his leadership, we created leading-edge software while supplying the academic needs of a large university.]
Here is a link to a video describing the CDC 6600 - the world’s first supercomputer.

Later in Austin, I headed up a development center for a computer manufacturer and then founded a company manufacturing a UNIX computer of my own.

Here is the point about computers for our coming discussion about thinking machines: 

On these and similar projects, I wrote system software. This is the software inside computers that most people are unaware exists. It turns the concrete parts of the computer - chips and wires - into abstract things for the user - application programs and data files and displays.

For example, there is a little program inside your computer that takes keystrokes from your keyboard, gathers them, converts them into representations of letters and passes them to your word processor. Similarly, there is another little program that takes data from your word processor and converts it into signals which paint your screen with letters. All these little programs are coordinated by a central program that shares resources and sequences them. We usually call this central program the supervisor or dispatcher.
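To make this concrete, here is a toy sketch of those little programs in Python. The names and the queue-based design are my illustration, not a real operating system's API - real drivers run as interrupt handlers, not function calls - but the shape is the same: one program converts key codes to letters, another paints them, and a dispatcher sequences them.

```python
from collections import deque

def keyboard_driver(raw_codes, out_queue):
    """Little program #1: convert raw key codes into letters and pass them on."""
    for code in raw_codes:
        out_queue.append(chr(code))      # e.g. 104 -> 'h'

def display_driver(in_queue, screen):
    """Little program #2: take letters and 'paint' them onto the screen."""
    while in_queue:
        screen.append(in_queue.popleft())

def dispatcher(raw_codes):
    """Central program: shares the queue and sequences the little programs."""
    queue, screen = deque(), []
    keyboard_driver(raw_codes, queue)    # step 1: gather and convert keystrokes
    display_driver(queue, screen)        # step 2: paint the screen
    return "".join(screen)

print(dispatcher([104, 105]))            # -> hi
```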

Your head works similarly. There are automatic programs that keep you breathing and keep your heart beating. There are programs that build a scene inside your head, using input from your eyes, but mainly what your head expects to see based on past experience. 

About computers, whether digital or biological:

Our computers are simple digital machines that are reliable, repeatable and deterministic - built of wires and chips. They switch reliably between two voltages that we can label zero - 0 and one - 1. This allows us to perform symbolic operations, such as arithmetic on numbers. From there, we can define symbols that can do everything from planetary orbits to word processing and Pac-Man games. Compared to our brains, they are toys.
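A quick illustration of that leap from voltages to symbols: the same pattern of 0s and 1s means nothing by itself - we decide whether it stands for a number or a letter.

```python
# One byte of 0s and 1s - think of them as the two voltage levels.
bits = "01000001"

value = int(bits, 2)    # read the pattern as a number
letter = chr(value)     # read the same pattern as a letter

print(value, letter)    # -> 65 A
```

The hardware only switches voltages; everything above that - numbers, letters, planetary orbits, Pac-Man - is interpretation we layer on top.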

Our brains, in contrast, are not simple, not deterministic, and seldom repeatable or reliable. They are complex networks of living cells, passing waves of pulses back and forth - across more connections than there are stars in the Milky Way galaxy. Although we know where a few functions are located - speech, sight and motor control - we have no idea how they work - yet.

They are alike because they are both equivalent computing machines - Turing machines, as we call them now. And as Alan Turing showed us, all computing machines above a minimum standard can perform the same mathematical procedures. They may differ in size, speed, complexity and construction, but they are equivalent mathematically.

Now, let’s look briefly at the hardware of our brains: First we have to reach back in history.


First, comb jellies. Like all jellies, they have a body plan of radial, circular symmetry. They have some neurons that keep them swimming, but no brain - just some neurons. 


It was the chordates that stretched their body plan out in a line instead of a circle. They also stretched their neurons out into a line, gathering some neurons on one end along with senses like sight and hearing - making a head and leading to the beginning of our brains.

About neurons: 

Neurons are cells that grew wires - yes, insulated connectors! They have inputs on one side - called dendrites - and a long output - called an axon - on the other side. The dendrites sum up all the inputs connected to them, and if a certain total is met, the neuron fires its axon. The axon connects to many dendrites of other neurons. Conditioning, development and learning adjust the neurons to form a network. Then you spend most of your life programming the network between your ears.
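That sum-and-fire behavior can be sketched in a few lines of Python. This is the artificial-neuron abstraction of the idea, not a biological simulation, and the weights and threshold below are made-up numbers for illustration:

```python
def neuron_fires(inputs, weights, threshold):
    """Dendrites sum the weighted inputs; the axon fires
    only if the total reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return total >= threshold

# Three incoming connections, each firing (1) or silent (0),
# with illustrative connection strengths:
print(neuron_fires([1, 1, 0], [0.6, 0.5, 0.9], threshold=1.0))  # -> True
print(neuron_fires([1, 0, 0], [0.6, 0.5, 0.9], threshold=1.0))  # -> False
```

Learning, in this picture, is the slow adjustment of those weights - which is exactly what "programming the network between your ears" amounts to.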

So you can see how nature can build a computer out of neurons. Looks much better than beads, cardboard and clickety-clack relays, doesn’t it?

Here are some pictures of neurons - the first, an animation and the second an actual video of a carefully stained neuron in action:

Above is an illustration of a neuron transmitting pulses to other neurons.

Below is an actual microscope video, showing actual proteins traversing the axons of the neurons. What careful stain work and photography - a fantastic picture!! 

The following is a link to more of an explanation:

video of an actual neuron in action

Now let’s compare our digital computers and the neural computers in our heads. A good analogy is highway traffic. You may drive 100 MPH on some rural two-lane highway, but the traffic load, the total number of cars going by, is low.


On the other hand, you may complain about rush hour traffic on the urban freeway, crawling at 20 MPH, but because the freeway is many lanes wide on each side, the total number of cars going by is quite large. How about this?


Our brains have about 80 billion neurons and trillions of interconnections. Neurons are slow: firing and recovering - one neuron tick - takes milliseconds. Each is analogous to one vehicle on a highway, carrying data. BUT neurons have trillions of connections - a highway trillions of lanes wide - so the amount of data moved on every tick is enormous. And they are very fault tolerant: if one connection out of trillions misfires, it’s only one bit out of trillions - no big deal. We say that our brains are very parallel.

Our computers are fast and getting faster - clocking millions of times faster than neurons - but the data path is very narrow. Once again, think of the amount of data traffic flowing as the product of the cycle speed and the data path width. We say that our computers are very serial. This makes each bit of data more sensitive: often if one bit is missed, the machine crashes. Users get the BSOD - blue screen of death! Here is a humorous? version:
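The parallel-versus-serial comparison is just that product, width times rate. Here is a back-of-the-envelope sketch - the numbers are round illustrative figures in the spirit of the estimates above, not measurements:

```python
def throughput(lanes, items_per_lane_per_sec):
    """Total traffic = path width x cycle rate (cars/min on the highway)."""
    return lanes * items_per_lane_per_sec

# Brain: trillions of slow connections, ~100 pulses/sec each (illustrative).
brain = throughput(lanes=1_000_000_000_000, items_per_lane_per_sec=100)

# CPU: a narrow 64-bit path clocked at ~3 GHz (illustrative).
cpu = throughput(lanes=64, items_per_lane_per_sec=3_000_000_000)

print(f"brain ~ {brain:.0e} bits/s, cpu ~ {cpu:.0e} bits/s")
```

Even with each lane crawling, the trillion-lane highway moves more total traffic - which is the whole point of the analogy.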


Haven’t you seen this before?

Here is the circuit layout of a 6502 processor, used in the first Apple computer:


One measure of computing speed is instructions per second, or IPS - analogous to the traffic rate on a highway, cars per minute. An instruction is an operation like one addition or one multiplication. The first supercomputer I worked on long ago was famous for doing one million instructions per second, or 1 MIPS. The smartphone in your pocket today [2014] is over a thousand times faster than my old supercomputer. Computers almost double in power every couple of years - Moore’s Law.

Current estimates for your brain’s computing power range to about 10^17 IPS. The latest supercomputer as of 2014 does about 10^16 IPS. So we are approaching the speed of the human brain. And we can now build about as much storage as goes in one human brain.
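You can put a rough timescale on "approaching": if machines double in power every couple of years, how long does it take to cover that last factor of ten? A quick calculation, taking the two estimates above at face value:

```python
import math

brain_ips = 10**17          # rough estimate for the human brain
super_ips = 10**16          # rough estimate, 2014 supercomputer

# Number of doublings needed to close a 10x gap:
doublings = math.log2(brain_ips / super_ips)   # ~3.3 doublings

# Moore's Law: roughly one doubling every 2 years.
years = doublings * 2
print(round(years, 1))      # -> 6.6
```

So by raw speed alone, the gap is less than a decade - which makes the next paragraph’s point all the sharper: speed is not the hard part.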

BUT we have no idea how to program this other than manually - like me soldering relays long ago. We can only do trivial problems. The only way we can program so much speed and storage is to do it the way nature does it - through learning. We don’t know how to do that - yet.

Consider your own learning experience. When you are born, you can breathe, your heart beats and your digestive system is starting up [sort of - kids are messy]. Nearly everything else is learned. Humans are pretty premature and undeveloped when they are born. 

You start by crying until someone puts a nipple in your mouth. [NOTE: This does not work at my age!!] Then you learn how to look and wave your arms and legs around, etc. Fortunately, there are some learning centers already built in. You are not starting with a completely blank slate. One important center is the language center. When the language learning center activates for a few years, the results are astonishing.

Humans are amazing learning machines !!! So the task that computer scientists are working on is: How do we make our machines learn like a human baby?

This was made three days after the iPad came out.

WOW. Just like the little French girl making up the story about Winnie the Pooh, kids are amazing learning machines  !!!


Next >>> We’ll talk some more of this in The Return

© Gareth Harris 2019