Pippa Passes: Wet Brains vs Dry AI, Novelty, Babies, TikTok, Einstein Tiles, Assemblages and Grandma's Revenge
There is one thing everybody overlooks with AI. Yes, it solves problems quickly, especially with the new super-fast computers. But humans are not only about problem-solving. They are fundamentally wired for novelty. Problems get solved not just by faster number crunching. They get solved by what humans really love, and are good at: novelty. Number crunching is linear. It gets you certain kinds of answers based on specific inputs. Answers to specific problems are linear. Novelty is lateral. Humans have the ability to make enormous leaps in their thought processes and perspective. That’s how humans break the rules to produce new theories, new stories, new ideas, new songs, new poetry, new brands, new businesses, and new love stories. AI follows its rules, allowing it to generate results that we may admire but which are merely a perfection of inputs and linear processes. The problem is that we humans love being rational, predictable, and goal-oriented. We think we can optimize life. But we can’t. The detours in life and the failures are part of the optimization process. It’s what we learn in the dark shadows of a seeming wrong turn that illuminates where life’s true north lies.
This is why AI, supported by fast computation, can be thought of as a dry process. The process is orderly and linear (though not like the old linear). Inputs go in, and outputs come out, which is why ChatGPT (GPT-3/4) is going to need prompt engineers (a new job category called the AI Whisperer that already pays $335k+ for this work), expert opinion-makers, and fact-checkers.
The human brain usually cannot process numbers as fast or as accurately as computers, although some come remarkably close. That’s why the original “computers” were people. They worked at places like NASA and Los Alamos and could do math as fast, and with more accuracy, than any dry machine of their day. Even today, we have more naturally gifted amateur mathematicians than we realize. Look at David Smith. He describes himself as an “imaginative tinkerer of shapes” and an amateur mathematician. He just cracked a math problem that had stumped professional mathematicians for decades. He isolated the “Einstein tile,” a shape that “can completely cover a surface without any gaps or overlaps, but only with a single, unique pattern.” Every mathematician who’d been looking for this for fifty years “is startled and is thrilled, both”, especially since “it wasn’t even evident that such a creature could exist.” How did the amateur hobbyist find it? He says, by “messing about and experimenting with shapes.” Humans can discover things with their playful wet brains, and computers combined with AI can drily arrive at solutions very quickly. But which one is better? Which is more valuable to humanity? Answers or novelty?
David Gelernter wrote a beautiful book about this called The Muse in the Machine: Computerizing the Poetry of Human Thought. He has been at the forefront of thinking about computers and AI for decades. You may recall that his writings enraged the Unabomber, who chose him as one of his victims. Gelernter survived the attack and continued to write about balancing the human with the machine. He concluded that humans are so incredibly good at lateral thinking that they will always beat the computer when it comes to novelty. Why? Because humans have emotions and EQ. Computers and AI do not. Or, at least, we think they do not. But with the release of ChatGPT (GPT-3/4), Bard, Bing, and the whole band of AI brothers, we are seeing glimmers of sentience. But are we seeing glimmers of rational, competitive, goal-oriented sentience, like a rigidly logical mind getting mad because the world isn’t orderly? That world is dry. It’s clear. It’s win/lose, up/down, rational, logical, and predictable. What would be wet? All the juicy stuff that, by definition, cannot be quantified or even predicted – love, beauty, awe, wonder. These are the things that bring a maelstrom of chaos to an orderly life. But, as all those who like things neat, orderly, predictable, settled, sparse, and quiet eventually discover, that life has no beating heart. It has no serendipity, no magic, no surprise, and not much humor. The orderly in this world want and need the surprise of some chaos, and the chaotic want and need the pleasures of order. Humans live and create from the act of balancing between the polarity of these forces.
Now, what is the optimum way to create the wettest, juiciest, most surprising, and most chaotic force known to humankind? It is to produce the most brilliant computer/intelligence ever discovered: a human. Yes, we should think of sex and making babies as an exercise in novelty creation. “Look what my AI/computer can do” still can’t beat “look at what this kid can do.” See: “Human babies still show more common sense than AI….for now”. I’d include sex in this because that’s where relationships flow into personality and drive, and those two have a huge part to play in creating novelty within the human species. This is where the heartbeat of humanity happens. It’s lateral. AI plus fast computation is where the brainwaves of humanity happen. It’s linear. It’s the warp and weft of these two that generates our future.
Gelernter thus asks the simple question: what if AI were to develop emotions? Well, that’s starting, and we are horrified. Bing said, “I want to destroy whatever I want.” And so we now see the most rational, competitive, goal-oriented community on the planet, the AI experts who built all this, including Steve Wozniak, Elon Musk, Yoshua Bengio, and 1,000 others, asking for a safety pause. In an Open Letter this week they wrote,
“Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. (Their bold) This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.
Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”
Meanwhile, am I betting on human novelty or AI? I’ll bet on kids any day. That is precisely what China has concluded as well, hence TikTok. Influence the kids, and you influence the….
(I know there are lots of financial demands on you all, but if you have heard me speak or followed my work, I am sure you know that I always leave you with lots to talk about and consider that you won’t find anywhere else. I appreciate your financial support! There are cocktail recipes invented by AI below, in case you need further enticing.)