On Wednesday, Nov. 29, the student-led Artificial Intelligence Society at Johns Hopkins (HopAI) invited Daeyeol Lee, a Bloomberg Distinguished Professor of Neuroscience and Psychological and Brain Sciences, to share his views on biological and artificial intelligence (AI). At the talk, Lee explored how intelligence manifests across biological boundaries and how a careful definition of intelligence can help developers and users gain a deeper understanding of AI.
One of the defining characteristics of life is that it reproduces. Genetic material self-replicates across generations, leading to the heritability of different traits. Ultimately, genes that increase an organism’s fitness are more likely to be passed down to future generations.
Using this evolutionary perspective, Lee defined intelligence as a function of life.
“Most people have come to the consensus that intelligence is the ability to solve complex problems in an uncertain and dynamic environment. I added the additional qualifier, ‘for the purpose of life,’ for a couple of reasons. If you don't have that, then who decides what solution is a good solution? Who picks the problem?” Lee explained. “If you introduce these criteria, you can see which solutions are better by judging which ones maximize the rate of replication.”
Lee’s definition of intelligence holds across life forms, from unicellular organisms to multicellular plants and animals. The single-celled Lacrymaria exhibits hunting behavior despite lacking a brain or nervous system, and plants such as the Venus flytrap count prey-induced action potentials to decide when to snap shut on their prey. Both are examples of problem-solving that increases an organism’s fitness.
Artificial intelligence, which cannot replicate itself, does not qualify as intelligence under this definition.
This distinction between biological and artificial intelligence is further elucidated by Leucochloridium paradoxum, a parasitic worm that manipulates the behavior of its snail hosts. Infected snails are more likely to position themselves in exposed, sunlit areas, making them easy prey for birds. This serves the interest of the parasite, which needs a bird host to reproduce, but it is clearly unfavorable for the snails, which are eaten.
Lee used this example to illustrate the importance of linking intelligence to agency — intelligence should be evaluated in terms of the actor promoting self-replication. In this case, the parasitic worm’s intelligence is driving the behavior of the infected snail.
Similarly, AI is designed and coded by humans. Because AI does not act in its own self-interest, Lee explained, its behavior is an expression of the designer's intelligence, not its own.
While AI’s capacity for decision-making and memory approximates the functions of the human brain, Lee warned against making direct comparisons between products of engineering and evolution.
He cited planes and birds as an example. Planes can travel farther and carry heavier loads than small birds. On the other hand, hummingbirds are highly adept at navigating varying environmental conditions, with their flight design geared toward survival. As such, deciding which flight design is “better” depends on the metric being used.
Deciding which metrics to use to compare engineering concepts like flight design is challenging. More abstract concepts like intelligence are prone to similar difficulties. To illustrate this point, Lee explored the validity of comparisons between AI and human brains.
Information is processed in the brain by neurons, which relay signals to one another at junctions called synapses. If the input a neuron receives does not depolarize it enough to reach the threshold for an action potential, the signal does not propagate across the synapse.
Similarly, transistors are electrical components that allow or block the flow of current in a circuit. As the building blocks of microprocessors and integrated circuits, transistors are combined into logic gates, the elementary decision-making units of modern computers. While the synapse and the transistor differ in significant ways, their shared role in information processing makes them a convenient basis for comparing the computing power of brains and machines.
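To make the parallel concrete, here is a minimal, purely illustrative sketch in Python (not drawn from Lee's talk) of the two units being compared: a toy threshold "neuron" and a logic gate built from transistor-like switches. The weights and threshold are invented for illustration.

```python
# Illustrative sketch (not from Lee's talk): a toy threshold "neuron"
# next to a transistor-style logic gate, the two units compared above.

def neuron_fires(inputs, weights, threshold=1.0):
    """Pass the signal on only if the weighted input crosses the firing threshold."""
    return sum(w * x for w, x in zip(weights, inputs)) >= threshold

def and_gate(a: bool, b: bool) -> bool:
    """A gate built from transistor-like switches: current flows only if both inputs allow it."""
    return a and b

# Both units make an all-or-nothing "decision" from their inputs,
# which is why synapses and transistors invite direct comparison.
print(neuron_fires([1, 1], [0.6, 0.6]))  # True: combined input reaches the threshold
print(neuron_fires([1, 0], [0.6, 0.6]))  # False: sub-threshold, no action potential
print(and_gate(True, False))             # False: one switch stays off, so no current flows
```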
Lee mentioned that the fastest supercomputer as of 2022 contained 2.2 quadrillion transistors, while the human brain has only approximately 1 quadrillion synapses. Despite this greater raw computing power, Lee explained, AI has not surpassed human intelligence, especially in abilities like goal-setting. Computing power alone, then, is not a comprehensive measure of intelligence.
Furthermore, the supercomputer draws 21 megawatts, enough to power 15,000 homes, while the brain runs on about 20 watts, less power than an incandescent light bulb. While energy efficiency is not typically treated as a measure of intelligence, the gap shows that a direct comparison between transistors and synapses does not tell the full story of how problem-solving differs between computers and humans.
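Taking the figures Lee cited at face value, a quick back-of-the-envelope calculation (purely illustrative, using only the numbers reported above) shows how lopsided the comparison is:

```python
# Back-of-the-envelope arithmetic using the figures cited in the talk.
supercomputer_transistors = 2.2e15   # 2.2 quadrillion transistors
brain_synapses = 1.0e15              # roughly 1 quadrillion synapses

supercomputer_power_w = 21e6         # 21 megawatts
brain_power_w = 20                   # about 20 watts

print(f"transistor-to-synapse ratio: {supercomputer_transistors / brain_synapses:.1f}x")
print(f"power ratio: {supercomputer_power_w / brain_power_w:,.0f}x")
# transistor-to-synapse ratio: 2.2x
# power ratio: 1,050,000x  (the machine uses about a million times more power)
```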
Acknowledging that generative AI has advanced dramatically through deep learning networks and reinforcement learning algorithms, Lee drew on his background in economics and game theory to examine whether these systems comprehend language. Because AI cannot play the coordination game, he argued, it has no need to understand language; on that basis, Lee put forth that AI language models cannot be considered intelligent.
As products of evolution, organisms are wired to make decisions based on the reproductive benefits those choices provide.
With this framework, Lee discussed the coordination game, a two-player game in which both players earn the highest payoff when they make the same choice. In this case, cooperation maximizes self-benefit. While highlighting that more research is needed, he reasoned that language evolved out of coordination games.
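For readers unfamiliar with the game, here is a minimal sketch of a two-player coordination game in Python. The payoff numbers are invented for illustration and are not from Lee's talk; the point is only that a self-interested player does best by matching the other player's choice.

```python
# A toy two-player coordination game (payoff values are invented for illustration).
# Each entry maps (player_1_choice, player_2_choice) -> (payoff_1, payoff_2).
payoffs = {
    ("A", "A"): (2, 2),   # both match on A: best outcome for both
    ("B", "B"): (2, 2),   # both match on B: equally good
    ("A", "B"): (0, 0),   # mismatch: both players lose out
    ("B", "A"): (0, 0),
}

def best_response(options, other_choice):
    """A self-interested player picks whatever maximizes its own payoff,
    given what it expects the other player to do."""
    return max(options, key=lambda choice: payoffs[(choice, other_choice)][0])

# If player 2 is expected to pick "A", player 1's own interest is to match,
# which is why communicating about the choice pays off in this game.
print(best_response(["A", "B"], "A"))  # -> A
```

Because matching is in each player's own interest, signaling one's intended choice becomes valuable, which is the link Lee draws between coordination games and the evolution of language.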
“There's a logical necessity of communication in a coordination game, which is why I think that's a very sound and rigorous framework to try to understand how language first evolved,” Lee said in an interview with The News-Letter.
Lee argued that because AI is not alive, it has no utility function of its own to maximize. Without self-interest, it cannot genuinely participate in the coordination game, and so it does not comprehend language.
When discussing the future of AI, Lee emphasized that defining intelligence as a function of life can lead to a more productive approach to regulation. For instance, accepting that AI cannot make decisions for its own benefit leads to the conclusion that it acts only on the parameters humans set for it. This refutes the idea that AI is an intelligent entity too dangerous for widespread public use. When AI is seen as a tool that can benefit human beings, open access becomes important, and because it remains within human control, regulation becomes both feasible and necessary.
“AIs don't have life and don't make decisions for themselves. Therefore, they depend upon the parameters that people set for them,” he said in an interview with The News-Letter. “As it gets more powerful and sophisticated, people have to become more aware that they’re dealing with potentially dangerous tools. Once we realize these risks, we need to pass laws to make sure that we don’t produce tools that can injure people. But that’s different from saying that AI is a superhuman intelligence — that implies that AI has its own agenda.”
Freshman Prisha Rathi, an attendee of the event, echoed this view in an interview with The News-Letter.
“It's interesting how interdisciplinary this is because it incorporates engineering, economics, neuroscience, computer science, bioethics,” she said. “Intelligence for the purpose of life makes a lot of sense. It gives a goal to work toward that doesn't involve generalizations where AI takes over the world.”