Medieval Thinking and Political AGI

A cherry-picked quote from a review of a new book:

“In his new book, Pearl, now 81, elaborates a vision for how truly intelligent machines would think. The key, he argues, is to replace reasoning by association with causal reasoning. Instead of the mere ability to correlate fever and malaria, machines need the capacity to reason that malaria causes fever. Once this kind of causal framework is in place, it becomes possible for machines to ask counterfactual questions—to inquire how the causal relationships would change given some kind of intervention—which Pearl views as the cornerstone of scientific thought.” [1]
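To make the associational/causal distinction concrete, here is a minimal sketch in Python, using an invented two-variable structural causal model in which malaria causes fever; all of the rates are made up for illustration. Observing fever raises the probability of malaria, but forcing fever by intervention leaves malaria at its base rate, which is exactly the distinction the quote draws.

```python
import random

# An invented structural causal model: malaria -> fever.
# All probabilities here are made-up illustration values.

def sample(intervene_fever=None):
    """Draw one (malaria, fever) pair; optionally force fever, i.e. do(fever)."""
    malaria = random.random() < 0.01                         # base rate of malaria
    if intervene_fever is None:
        fever = random.random() < (0.9 if malaria else 0.1)  # malaria causes fever
    else:
        fever = intervene_fever                              # intervention severs the arrow
    return malaria, fever

def p_malaria_given_fever(n=100_000, intervene=None):
    """Estimate P(malaria | fever) by simple Monte Carlo counting."""
    hits = total = 0
    for _ in range(n):
        malaria, fever = sample(intervene)
        if fever:
            total += 1
            hits += malaria
    return hits / total

# Association: seeing fever raises the probability of malaria (about 0.08 here).
print(p_malaria_given_fever())
# Intervention: *forcing* fever leaves malaria at its 0.01 base rate,
# because do(fever) cuts the incoming causal arrow from malaria.
print(p_malaria_given_fever(intervene=True))
```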

The exploration of causal reasoning, and how it works, dates back to medieval times, and specifically to the work of the Scholastics, on whose theories our modern legal system is built.

Recall a classic example: a dead body is discovered, and a man with a bloody sword stands above it. Did the man commit the crime, or was he simply the first on the scene, who picked up the sword? How can you know? What is the “probability” of guilt?

The Scholastics struggled mightily with the concept of “probability”. Eventually, Blaise Pascal and many others (Huygens, etc.) developed the modern mathematical theory that explains dice games, and how to place bets on them. [2] This mathematical theory is called “probability theory”, and it works. However, in a courtroom trial for murder, it is not this theory that is applied to determine the “probability of innocence or guilt”.

What actually happens is that the prosecution and the defense assemble two different “cases”, two competing “theories”, two different networks of “facts”, of “proofs”: one network showing innocence, the other showing guilt. The jury is asked to select one network or the other as the true model of what actually happened.

The networks consist of logically inter-connected “facts”. Ideally, those facts are related in a self-consistent fashion, without contradicting one another. Ideally, the network of facts connects to the broader shared network that we call “reality”. Ideally, the various “facts” have been demonstrated to be “true”, by eye-witness testimony, or by forensic reasoning (which is itself a huge network of “facts”, e.g. that it takes x hours for blood to clot, y hours for rigor mortis to set in). Ideally, the network connections themselves are “logical”, rather than being various forms of faulty argumentation (appeals to emotion, appeals to authority, etc.). You really have to put yourself into a Medieval state of mind to grok what is happening here: men pacing in long, fur-trimmed coats, presenting arguments, writing hundred-page-long commentaries-on-commentaries-on-commentaries, picking apart statements that seem superficially coherent and believable, but can be shown to be built from faulty reasoning.
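As a minimal sketch of what such a network of “facts” might look like in code (the names and structure here are my own invention, not a real courtroom formalism): facts assert or deny propositions, record which other facts support them, and a case can be checked for the internal self-consistency described above.

```python
from dataclasses import dataclass, field

@dataclass
class Fact:
    proposition: str        # e.g. "the accused struck the blow"
    asserted: bool          # True = claimed to hold, False = denied
    supports: list = field(default_factory=list)  # facts this one rests on

def contradictions(network):
    """Return the propositions that are both asserted and denied in one network."""
    seen, clashes = {}, set()
    for fact in network:
        prior = seen.get(fact.proposition)
        if prior is not None and prior != fact.asserted:
            clashes.add(fact.proposition)
        seen[fact.proposition] = fact.asserted
    return clashes

sword = Fact("accused held the bloody sword", True)
guilt = Fact("accused struck the blow", True, supports=[sword])
alibi = Fact("accused struck the blow", False)   # the rival network's claim

print(contradictions([sword, guilt]))            # set(): internally consistent
print(contradictions([sword, guilt, alibi]))     # the two cases clash on one fact
```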

What is being done, here? Two different, competing models of the world are being built: in one model, the accused is guilty; in the other, the accused is innocent. Clearly, both models cannot be right. Only one can be believed, and not the other. Is there a basis on which to doubt one of the models? Is that doubt “reasonable”? If the accused is to be harmed, viz. imprisoned, and it has been shown that the accused is guilty “beyond a reasonable doubt”, then one must “believe”, accept as “reality”, that model, that version of the network of facts. The other proof is unconvincing; it must be wrong.
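One hedged way to formalize this weighing of two competing models is Bayesian model comparison. The sketch below scores the “guilty” and “innocent” models by how well each explains the agreed evidence; the likelihoods are invented, and the numeric threshold standing in for “reasonable doubt” is my own placeholder, since the law supplies no number.

```python
def posterior_guilt(prior_guilt, lik_if_guilty, lik_if_innocent):
    """Bayes' rule for two competing models, 'guilty' vs. 'innocent'."""
    num = prior_guilt * lik_if_guilty
    den = num + (1.0 - prior_guilt) * lik_if_innocent
    return num / den

p = posterior_guilt(prior_guilt=0.5,      # start undecided between the two cases
                    lik_if_guilty=0.80,   # bloody sword fits the guilt story well
                    lik_if_innocent=0.15) # "first on the scene" fits it less well
print(p)                                  # about 0.84: strong, but is doubt "reasonable"?

REASONABLE_DOUBT = 0.99                   # an invented threshold; the law gives no number
print(p > REASONABLE_DOUBT)               # False: acquit, under this toy standard
```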

I think that it is possible to build AI/AGI systems, within the next 5-10 years, that construct multiple, credible, competing networks of “facts”, tied together with various kinds of evidence, inference, deduction, relation and causation. Having constructed such competing models, such alternative world-views of a deeper “reality”, these AI/AGI systems will be able to disentangle the nature of reality in a rational manner. And, as I hope to have demonstrated, the mode of reasoning that these systems will employ will be distinctly Medieval.

[Image “medieval-university”: University students being disciplined.]

P.S. The current tumult in social media, society and politics is very much one of trying to build different, competing “models of reality”, of explanations for the world-as-it-is. In the roughest of outlines, this is (in the USA) the red-state vs. blue-state political divide, the “liberal” vs. “conservative” argument. Looking more carefully, one can see differences of opinion, of world-view, on a vast number of topics. Every individual human appears to hold a puzzle-piece, a little micro-vision of what is (social) reality, a quasi-coherent tangle of networked facts that stick together, for them, and that they try to integrate into the rest of society, via identity politics, via social-media posts, via protest marches. [4]

The creation and integration of (competing) models of reality is no longer just a courtroom activity; it is a modern-day social-media and political obsession. It is possible today, unlike before, only because of the high brain-to-brain (person-to-person) data bandwidth that the Internet and social media now provide. One can encounter more competing theories of reality than ever before, and one can investigate them in greater detail than before.

If and when we build AGI systems capable of simultaneously manipulating multiple competing models of the world, I think we can take a number of lessons from social science and psychology as to how these networks might behave. There is currently tremendous concern about propaganda and brain-washing (beliefs in quasi-coherent networks of “facts” that are nonetheless disconnected from mainstream “reality”). There is tremendous concern about the veracity of mainstream media, and various well-documented pathologies thereof: viz. the need to be profitable forces mainstream media to propagate outrageous but bogus and unimportant news. The equivalent AGI risk is that the sensory-input system floods the reasoning system with bogus information, and that there is no countervailing mechanism to adjust for it. Viz.: we cannot just unplug journalism and journalists from the capitalist system; nor is it clear that doing so would improve the quality of the broadcast news.

Some of the issues facing society arise because human brains are not sufficiently integrated, in the sense of “integrated information”. [3] Any individual human mind can make sense of only one small part of the world; we do not have the short-term memory, the long-term memory, or the reasoning capacity to take on more. This is not a limitation we can expect in AGI. However, instead of having individual humans each represent the pro and con of a given issue, it is reasonable to expect that the AGI will simultaneously develop multiple competing theories, in an attempt to find the better, stronger one – the debate does not stop; it only gets bigger and more abstract.

Another on-line concern is that much of on-line political posting and argumentation is emotionally driven, anchored in gut-sense arguments, which could certainly be found to be full of logical fallacies, if they were to be picked apart. “Humans are irrational”, it is said. But are they really? In Bayesian inference, one averages together, blurs together vast tracts of “information”, and reduces it to a single number, a probability. Why? Because all of those inputs look like “noise”, and any given, specific Bayesian model cannot discriminate between all the different things going on. Thus, average it together, boil it all down, despite the fact that this is sure to erase important distinctions (e.g. “all cats have fur”, except, of course, when they don’t). This kind of categorical, lumped-together, based-on-prior-experience, “common sense” kind of thinking sure seems to be exactly what we accuse our debate opponents of doing: either they’re “ignoring the details”, or failing to “see the big picture”.

I don’t see how this is avoidable in the reasoning of Bayesian networks, or any other kind of network reasoning. Sooner or later, the boundaries of a fact network terminate in irrelevant facts, or details that are too small to consider. Those areas are necessarily fuzzed over, averaged, and ignored: gut-intuition and common sense will be applied to them, and this suffers from all of the well-known pitfalls of gut-sense reasoning. AGI might be smarter than us; and yet, it might suffer from a very similar set of logical deficiencies and irrational behaviors.
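A toy numeric version of the averaging described above, with breed frequencies and fur probabilities invented for illustration: marginalizing over breeds boils everything down to one number that is nearly right on average and completely wrong for the exception.

```python
# Invented figures: P(breed | cat) and P(fur | breed).
breeds = {
    "tabby":   (0.70, 1.00),
    "persian": (0.28, 1.00),
    "sphynx":  (0.02, 0.00),   # the hairless exception
}

# Marginalize: P(fur | cat) = sum_b P(b | cat) * P(fur | b)
p_fur_cat = sum(p_breed * p_fur for p_breed, p_fur in breeds.values())
print(p_fur_cat)               # ~0.98: "all cats have fur", near enough on average

# The lumped-together number erases the distinction entirely for a sphynx:
sphynx_error = abs(breeds["sphynx"][1] - p_fur_cat)
print(sphynx_error)            # ~0.98: maximally wrong on the exception
```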

The future will look very much like today. But different.

[1] Judea Pearl, The Book of Why: The New Science of Cause and Effect

[2] James Franklin, The Science of Conjecture: Evidence and Probability Before Pascal

[3] “Integrated information theory”, Wikipedia

[4] Meaningness – Thinking, feeling, and acting—about problems of meaning and meaninglessness; self and society; ethics, purpose, and value. (A work in progress, consisting of a hypertext book and a metablog that comments on it.)


One Response to Medieval Thinking and Political AGI

  1. Jeff Thompson says:

    I agree that the way forward in AI is causal generative models which compete to predict the data. This is very different from deep learning and other approaches which do “pattern recognition” or try to extract regularities from the data. It’s interesting to me that you made this post, because it seems that the “patternist” theory pursued at OpenCog is mostly of the second type – mining data for patterns and regularities. I wonder if there is also OpenCog research that pays more attention to starting with causal generative models (or probabilistic programs) that predict the data.
