The Treasure We Found

Remember the tales about great explorers returning home empty-handed, only to discover that the treasure they were seeking was waiting back at home all the time?
We are the treasure we sought - and the prize?

[Image: A Beautiful Mind]

[Image: brain-computer interfaces]


From the soundtrack of A Beautiful Mind: “A Kaleidoscope of Mathematics”


Billions of years after the Big Bang, humans first showed up in the fossil record about 2 million years ago.

Then our current form, Homo sapiens, appeared about 200,000 years ago.

And civilization, just 10,000 years ago.

— 

Some small change to what we inherited turned our brains into powerful computers.

The difference between our DNA and that of our bonobo and chimpanzee cousins is about one percent.

Modern humans acquired the current brain version only recently - let’s call it brain 2.0.

We don’t know how to use it very well - yet. Thinking on this level is new.

But brains are curious.

As Dorothy Parker said:
“The cure for boredom is curiosity.
There is no cure for curiosity.”

Curiosity drives us to seek experience.

We convert experience into knowledge by thinking.

______________________________________

Three major components of thinking [according to me] are: 

information, symbols & language. 

[Image: a field of colored dots containing the number 45]

Information: There is lots of data around us - like the dots on the right here. Sometimes we perceive information in the data - like the number 45.

There are many issues about how much information is actually contained within data - one measure is the SNR, the Signal-to-Noise Ratio. We have all had trouble following conversations in a noisy room. [I am getting older. It is getting worse.]

The first really thorough treatment of this problem came from Claude Shannon as he worked for the telephone company [Bell Labs] in the 1940s. He came up with such measures as the capacity of a telephone line. This was a serious problem when all phones were connected with copper wire. For example, I had a really noisy phone connection when I lived by the salt water of Tybee Island and could barely transmit computer data over that line, although my brain and ears could pick voices out of the noise.
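
Shannon’s most famous result puts a hard ceiling on how fast information can move through a noisy channel: C = B × log2(1 + S/N), where B is the bandwidth and S/N the signal-to-noise ratio. Here is a minimal sketch of that formula in Python; the bandwidth and noise figures below are my own illustrative assumptions, not measurements of any real phone line.

    # A minimal sketch of the Shannon-Hartley capacity formula.
    # The numbers used are illustrative assumptions, not measurements.
    import math

    def channel_capacity(bandwidth_hz, snr_db):
        """Theoretical maximum bit rate of a noisy channel, in bits per second."""
        snr_linear = 10 ** (snr_db / 10)               # convert decibels to a power ratio
        return bandwidth_hz * math.log2(1 + snr_linear)

    # A clean voice-grade phone line: ~3 kHz of bandwidth, ~30 dB signal-to-noise.
    print(round(channel_capacity(3000, 30)))   # about 29,900 bits per second
    # The same line drowned in noise: ~5 dB signal-to-noise.
    print(round(channel_capacity(3000, 5)))    # about 6,200 bits per second

Drag the signal-to-noise ratio down and the ceiling drops with it - roughly what that noisy line was doing to the data.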

Information can also be transferred from one point to another deliberately, as opposed to broadcasting it everywhere. We are doing it right now by using WRITING - symbols on paper that represent sounds we make with our mouths. Reading goes at a certain speed, and many of us try to improve that speed. Sometimes we also focus on increasing our comprehension - getting the meaning of the message through correctly. But when we increase our speed, we increase our error rate. Talking goes at one speed, reading at another, internet transmission faster still, etc.

So there is an enormous flow of data around us. We hear sounds. We see objects. Correct perception of the information in the data around us can be quite critical - some shapes are bushes, but some are lions!

We are equipped to perceive with sensors - mainly our eyes and ears. Our ears can hear from about 20 to 20,000 cycles per second in the sound spectrum, plus bumps. Our eyes can see from red to violet. Our skin can feel infrared [below red] heat and our insect cousins can see into the ultraviolet.

Early point-to-point data transmission by humans included calls, hand signals, smoke signals and couriers. One of the most striking was the famous “talking drums” of Africa, with which a skilled user could mimic human speech over miles. Along with the Industrial Revolution came Samuel Morse and his telegraph code, which was later used by radio operators. [SOS = ··· ––– ···] Many early computers used teletypes, which could send and receive 10 characters per second.

I remember one day in the late 1970s. I was leading a computer R&D lab in Austin. We had all been working on early forms of networking for about 10 years. While in Cambridge, I stopped by to visit Martin Richards [the creator of BCPL, the predecessor of the programming language C]. He had a test circuit on his desk. We got excited when we sent about a megabyte - a million characters - between computers in a couple of minutes. Today, while typing this at home, I received a gigabyte file - about a billion characters - in a few seconds.
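
To put those memories side by side, here is a small back-of-the-envelope sketch in Python; the timings are rough assumptions taken from the story above, not measurements.

    # Rough comparison of transfer speeds from the story above.
    # The timings are approximate recollections, not measurements.
    examples = {
        "teletype, ~1970":      (10,             1),        # 10 characters per second
        "lab test link, ~1978": (1_000_000,      2 * 60),   # ~1 MB in a couple of minutes
        "home internet, ~2016": (1_000_000_000,  5),        # ~1 GB in a few seconds
    }

    for name, (chars, seconds) in examples.items():
        print(f"{name:22s} {chars / seconds:>15,.0f} characters per second")

The jump from the teletype to that gigabyte file is roughly a factor of twenty million.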

______________________________________

Symbols: We deal with information in chunks. Our eyes recognize corners, edges and motion in scenes. Our ears can separate sounds into words and sentences. About 80% of our information comes in through our eyes, and we remember best when we can visualize data. But the part of our thinking that may be most accessible to study is our handling of objects represented by words and similar symbols.

A good example is the letter A, or alpha in the Greek alphabet - Α - derived from the Phoenician aleph. It started out as a picture of an ox’s head. [Aleph is the West Semitic word for ox.] The name for this collection of symbols is “alphabet,” from the first two letters of Greek - alpha α and beta β - seen below:

[Image: the Greek alphabet]

Sometimes we make up new letters to fit the sounds in the language we are speaking, such as þ [thorn] and ð [eth], used for the “th” in English this or that. Sometimes we lose letters because of technology. When printing became popular, typesetters could only use the letters they had in their type box. In France, then England, letters uncommon to those languages were left out, or combinations of other letters were substituted, leading to “ye” for þe, or “the,” etc. þ and ð still exist in Nordic languages like Icelandic, and modern computing has removed many limits in typography. Here is a fun article about lost letters.

Most of us are familiar with the Roman alphabet used today for English. Some of us use Greek for math, etc. I studied ancient Koiné Greek in seminary. All schools used to require Greek and Latin. Writing systems are built of pictures [Egyptian, Mayan, Chinese, etc.], of syllables [as in Sequoyah’s famous Cherokee syllabary and Japanese kana], or of the smallest units of pronounced sound [as in the letters of the Phoenician alphabet]. Most of us in the west now use alphabets derived from Phoenician to represent phonemes - the smallest units of sound we make.

- - - more to come - - -

Kids love words. And they not only learn the language of adults around them, they make up languages of their own. 

Once we get information inside our heads, we seem to save it in the form of symbols, ideas or concepts. Symbols can be words or pictures or assembled out of other words or pictures. For example - - - 

- - - more to come - - -

______________________________________



Language: our most accessible clue to how we think

Once again I will resort to a story: About 1967, I left working as a physicist for Lockheed and moved my family to Austin so I could attend the Episcopal Seminary of the Southwest there. Looking around for a job to support my family, I ended up across the street at UT - the University of Texas - at about the same time as one of the first supercomputers, the Control Data CDC 6600. I got a job there because I knew FORTRAN - a programming language. [Old computing joke: Somebody told me to learn a foreign language to impress the girls, so I learned FORTRAN.]

At that point my awareness of computing was very limited. I was approaching computers as a physicist, seeing them as giant calculators - good only for crunching numbers. BUT … one of my tasks at UT was writing a command parser for an early database system, which could process statements of the form: “Print Name, Address where State = Texas and City = Austin;” I had no idea how to do this - we were all neophytes in those days - but I was handed a paper by Bob Floyd of Stanford University that changed my life.

This paper described a method for specifying language grammars. A large percentage of our computer work in those days was in providing tools for people to use computers through a more language-like interface, instead of writing code on the “bare metal” like us hotshot systems programmers. While pursuing this approach to my problem, I learned more about syntax [order] and semantics [meaning] and became aware of the work of linguists such as Noam Chomsky at MIT, and also Bob Simmons, who was at UT at the time and helped guide me.
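
To give a feel for what such a command parser does, here is a toy sketch in modern Python. It is my own illustration, not the original UT code and not Floyd’s method; it simply splits a Print statement into the fields to show and the conditions to match, following the little grammar in the comments.

    # A toy parser [not the original] for commands like
    #   "Print Name, Address where State = Texas and City = Austin;"
    # Grammar, roughly:
    #   command   -> "Print" fields "where" condition { "and" condition } ";"
    #   fields    -> name { "," name }
    #   condition -> name "=" value
    import re

    def parse_command(text):
        tokens = re.findall(r"[A-Za-z]+|[,=;]", text)
        assert tokens[0] == "Print" and tokens[-1] == ";", "not a Print command"
        where = tokens.index("where")
        fields = [t for t in tokens[1:where] if t != ","]       # drop the commas
        conditions, rest = [], tokens[where + 1:-1]
        while rest:                                             # consume name "=" value triples
            name, _, value = rest[:3]
            conditions.append((name, value))
            rest = rest[4:] if rest[3:4] == ["and"] else rest[3:]
        return {"fields": fields, "where": conditions}

    print(parse_command("Print Name, Address where State = Texas and City = Austin;"))
    # {'fields': ['Name', 'Address'], 'where': [('State', 'Texas'), ('City', 'Austin')]}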

[Image: sentence diagram]

Let me see if I can describe why this is so important. Remember when you were taught to diagram sentences in school? It looked like this: “I finished the dishes after Tim ate his doughnut and we cleaned the kitchen.”

Each of the symbols here is a word, and the structure - the diagram - is a mechanism for recognizing a sentence. Where did such a mechanism come from? How many other similar mechanisms are in our heads? Computers are automata - mechanisms for processing symbols - and so are our heads!
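
To make the “mechanism” point concrete, here is a sketch of a grammar written the way a computer sees it, using the Python nltk library [assumed installed]. The grammar is my own toy fragment, covering only the clause “Tim ate his doughnut” rather than the whole example sentence.

    import nltk   # natural language toolkit; assumed installed

    # A toy context-free grammar: each line is a rewrite rule,
    # just like the patterns behind a school sentence diagram.
    grammar = nltk.CFG.fromstring("""
        S    -> NP VP
        NP   -> Name | Det N
        VP   -> V NP
        Name -> 'Tim'
        Det  -> 'his'
        N    -> 'doughnut'
        V    -> 'ate'
    """)

    parser = nltk.ChartParser(grammar)
    for tree in parser.parse("Tim ate his doughnut".split()):
        tree.pretty_print()   # prints the same structure a sentence diagram draws by hand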


First let’s pursue language - one of our more accessible abilities. Here is a brief intro to the concept of a universal grammar built into human beings, an idea championed by Noam Chomsky, who revolutionized our understanding of linguistics in the latter half of the 20th century: [Note: everybody is younger in this early clip!]


I urge you to read everything you can understand by two leaders in linguistics: Noam Chomsky and Steven Pinker. You can also find them on YouTube. One of the best intros to language comes from Steven Pinker, author of The Language Instinct and The Blank Slate; here is a link to Linguistics as a Window to Understanding the Brain [50 minutes] - worth your time.

He makes the point: 

language is not thought - and this leads to the big question:

What underlies all our mental activity, like speaking & listening, looking & seeing?

This is important: even though language and thought seem like such a major change from our animal cousins, the amount of change in our mental machinery might be quite small. After all, there is only a 1 or 2% total difference in our DNA, and that covers all of our anatomy.

Our difference relates to what computer scientists call computational equivalence - the notion that all computers are functionally equivalent, from simple pencil and paper or an abacus all the way up to the mightiest supercomputer. If something can be computed by a procedure, then it can be computed by any equivalent computer that can carry out the procedure, no matter how simple. The differences relate mainly to capacity and speed, not to function. Whether it takes a microsecond, a day or a week to count a list of items, the result is the same.
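
A small sketch of that idea in Python: two very different procedures for the same job - plodding through a list of numbers one at a time versus jumping straight to Gauss’s closed-form formula - give exactly the same answer; only the time taken differs.

    # Two functionally equivalent ways to sum the numbers 1..n.
    # One counts item by item, like pencil and paper; the other uses
    # the closed-form formula n(n+1)/2. Same result, different speed.
    def sum_by_counting(n):
        total = 0
        for i in range(1, n + 1):      # one item at a time
            total += i
        return total

    def sum_by_formula(n):
        return n * (n + 1) // 2        # a single multiplication and division

    n = 10_000_000
    assert sum_by_counting(n) == sum_by_formula(n)
    print(sum_by_formula(n))           # 50000005000000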

______________________________________

Now we return to understanding ourselves as computing animals embedded in a universe of information,
making symbols and creating new ideas:

Like fish swimming in the ocean, we swim in a vast sea of information. 
Like fish discovering water, we are only now becoming aware of this sea.

That awareness is called consciousness.
It will carry us deep into the waters of thought itself. 

Soon we will meet intelligence beyond our own.
But it will come not from the stars.
It will come from us.

______________________________________

Remember IBM’s Watson computer winning at Jeopardy - beating the humans?

[Image: IBM Watson]

BUT ... before you assume that we will deliberately build some super-duper computer, consider other ways this might happen. The surprise may be that instead of a single computer, we gradually integrate our brains into a network of humans and computers together, as we build and grow through the Internet. As Sun Microsystems used to say: “The network IS the computer.”

Consider:

In 2009 DARPA released 10 red balloons in the U.S. and offered a prize of $40,000 to the first team to find them all. Many thought this would be a long, intense search. But MIT students put together a team that solved the problem in 9 hours, using a social network of humans and their technologies such as GPS devices, cell phones and networking.

[Image: MIT red balloon team]

Read more here and see the video showing the search.

Meanwhile, let’s consider recent increases in computing power per dollar:

[Image: chart of brain power vs. computer power]

BUT - although we can now assemble about as many logic elements as there are in the human brain,

we have no idea yet how to program on that scale.

Again, a reminder and cautionary note about our hubris:

[Image: rotary stepping switch]

Before programming that first supercomputer, I built and programmed custom textile machines using a rotary stepping switch like this one. Now our computers are billions of times faster and their capacity is approaching that of the human brain, BUT we can barely program them. Since the 1940s, our ability to program has not advanced significantly beyond me using a soldering iron on a stepping switch.

… work in progress,

at least I hope to see some progress … ? ! ?

========

The hidden question is: Can we do this before we damage the planet enough to destroy ourselves?



    Next >>> Worlds Without End

© Gareth Harris 2016       --------        Contact email: garethharris@mac.com        --------         see also: GarethHarris.com