
We’ll be honest. If you had told us a few decades ago we’d teach computers to do what we want, it would work some of the time, and you wouldn’t really be able to explain or predict exactly what it was going to do, we’d have thought you were crazy. Why not just get a person? But the dream of AI goes back to the earliest days of computers or even further, if you count Samuel Butler’s letter from 1863 musing on machines evolving into life, a theme he would revisit in the 1872 book Erewhon.
Of course, early real-life AI was nothing like what you wanted. Eliza seemed pretty conversational, but you could quickly confuse the program. Hexapawn learned how to play an extremely simplified version of chess, but you could just as easily teach it to lose.
But the real AI work that looked promising was the field of expert systems. Unlike our current AI friends, expert systems were highly predictable. Of course, like any computer program, they could be wrong, but if they were, you could figure out why.
Experts?
As the name implies, expert systems drew from human experts. In theory, a specialized person known as a “knowledge engineer” would work with a human expert to distill his or her knowledge down to an essential form that the computer could handle.
This could range from the simple to the fiendishly complex, and if you think it was hard to do well, you aren’t wrong. Before we get into details, an example will help you follow how it works.
From Simple to Complex
One simple fake-AI game has the computer try to guess an animal you are thinking of. This was a very common BASIC game back in the 1970s. At first, the computer asks a single yes-or-no question that the programmer put in. For example, it might ask, “Can your animal fly?” If you say yes, the program guesses you are thinking of a bird. If not, it guesses a dog.
Suppose you say it does fly, but you weren’t thinking of a bird. The program then asks what you were thinking of. Perhaps you say, “a bat.” It then asks you for a question that would distinguish a bat from a bird. You might say, “Does it use sonar?” The computer remembers this, and over repeated play it builds up a binary tree database of questions and guesses. It learns how to guess animals. You can play a version of this online and find links to the old source code, too.
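If you want to see the trick in code, here is a minimal sketch of that learning loop in Python (the originals were in BASIC, and the prompts and node layout here are our own invention). Each internal node of the tree holds a yes/no question, each leaf holds a guess, and a wrong guess grafts a new question into the tree:

class Node:
    def __init__(self, text, yes=None, no=None):
        self.text = text          # a question (internal node) or an animal (leaf)
        self.yes = yes
        self.no = no

    def is_leaf(self):
        return self.yes is None and self.no is None

def ask(prompt):
    return input(prompt + " (y/n) ").strip().lower().startswith("y")

def play(node):
    if not node.is_leaf():
        # Internal node: follow the yes or no branch.
        play(node.yes if ask(node.text) else node.no)
        return
    if ask("Is it a " + node.text + "?"):
        print("I guessed it!")
        return
    # Wrong guess: learn a new animal and a question that separates it.
    animal = input("I give up. What were you thinking of? ")
    question = input("Type a yes/no question that is true for a " + animal +
                     " but not a " + node.text + ": ")
    node.yes, node.no, node.text = Node(animal), Node(node.text), question

root = Node("Can your animal fly?", Node("bird"), Node("dog"))
while True:
    play(root)
    if not ask("Play again?"):
        break

Every game either ends in a correct guess or adds one more question to the tree, and that is all the “learning” there is.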
Of course, this is terrible. It is easy to populate the database with stupid questions or ambiguous ones. Do ants live in trees? We don’t usually think of them that way, but carpenter ants do. Besides, sometimes you simply don’t know the answer, or you aren’t sure of it.
So let’s look at a real expert system, Mycin. Mycin, from Stanford, took data from doctors and determined what bacteria a patient probably had and what antibiotic would be the optimal treatment. It turns out doctors get this wrong often enough that there is real value in a tool that points them toward the right treatment.
This is really a very specialized animal game where the questions are preprogrammed. Is it gram positive? Is it in a normally sterile site? What’s more, Mycin used certainty factors, a scheme loosely inspired by Bayesian probability, so you could say how sure you were of an answer, or even admit you didn’t know. For example, -1 meant definitely not, +1 meant definitely, 0 meant I don’t know, and -0.5 meant probably not, but maybe. You get the idea. The system ran on a DEC PDP-10 and had about 600 rules.
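To make the idea concrete, here is a small Python sketch of the certainty-factor arithmetic usually described in the Mycin literature. This is the textbook combination rule, not Mycin’s actual code, and the sample numbers are invented:

def combine_cf(a, b):
    # Combine two certainty factors, each in the range [-1, 1].
    if a >= 0 and b >= 0:
        return a + b * (1 - a)                        # two supporting pieces of evidence
    if a <= 0 and b <= 0:
        return a + b * (1 + a)                        # two opposing pieces of evidence
    return (a + b) / (1 - min(abs(a), abs(b)))        # mixed evidence

# Two rules each weakly suggest the same organism (0.4 apiece)...
print(combine_cf(0.4, 0.4))     # 0.64: more confident than either rule alone
# ...and a later answer argues against it.
print(combine_cf(0.64, -0.5))   # 0.28: confidence drops but stays positive

The point is that evidence accumulates gradually and never reaches certainty unless some answer is a hard +1 or -1.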
The system used LISP and could paraphrase rules into English. For example:
(defrule 52
  if   (site culture is blood)
       (gram organism is neg)
       (morphology organism is rod)
       (burn patient is serious)
  then .4
       (identity organism is pseudomonas))

Rule 52:
If   1) THE SITE OF THE CULTURE IS BLOOD
     2) THE GRAM OF THE ORGANISM IS NEG
     3) THE MORPHOLOGY OF THE ORGANISM IS ROD
     4) THE BURN OF THE PATIENT IS SERIOUS
Then there is weakly suggestive evidence (0.4) that
     1) THE IDENTITY OF THE ORGANISM IS PSEUDOMONAS
In tests, the program did as well as real doctors, even specialists. Even so, it was never used clinically because of ethical concerns and the poor usability of entering patient data at a timesharing terminal. You can see a 1988 video about Mycin below.
Under the Covers
Mycin wasn’t the first or only expert system, and plenty of earlier programs had dabbled in probabilities and other similar techniques. DENDRAL, from the 1960s, used rules to interpret mass spectrometry data and has a good claim to being the first. XCON was DEC’s rule-based way of configuring hardware orders, and the later SID system generated over 90% of the VAX 9000’s CPU logic design from its rules. Everyone “knew” back then that expert systems were the wave of the future!
Expert systems generally fall into two categories: forward chaining and backward chaining. Mycin was a backward chaining system.
What’s the difference? You can think of each rule as an if statement. Just like the example, Mycin knew that “if the site is blood and it is gram negative and…. then….” A forward chaining system starts from the facts it has, fires any rule whose conditions all match, treats the conclusions as new facts, and keeps going until nothing new fires.
Of course, you can prune as you go. In the sample, if a hypothetical forward-chaining Mycin asked whether the site was blood and the answer was no, it was done with rule 52.
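A toy forward chainer fits in a few lines of Python. The rules and facts below are invented for illustration and only gesture at rule 52; the essential loop of “fire any rule whose conditions are all present, add its conclusion as a new fact, repeat” is what makes it forward chaining:

# Each rule: (set of conditions that must all be known, fact to conclude).
RULES = [
    ({"site is blood", "gram is neg", "morphology is rod"},
     "organism may be pseudomonas"),
    ({"organism may be pseudomonas"}, "consider an antipseudomonal drug"),
]

def forward_chain(facts):
    facts = set(facts)
    fired = True
    while fired:                         # sweep until no rule adds anything new
        fired = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)    # the conclusion becomes a new fact
                fired = True
    return facts

print(forward_chain({"site is blood", "gram is neg", "morphology is rod"}))

Notice that the second rule only fires because the first one ran, which is the “rules trigger more rules” behavior mentioned further down.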
However, the real Mycin was backward chaining. It would assume something was true and then set out to prove or disprove it. As it received more answers, it could see which hypotheses to pursue and which to discard. As the surviving rules became more likely, one conclusion would eventually emerge.
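Backward chaining runs the same kind of rules in the other direction. Here is an equally toy Python sketch, nothing like Mycin’s real implementation: start from a hypothesis, find rules that could conclude it, recursively try to establish their conditions, and only ask the user about facts no rule can derive:

# Same invented rule format: (set of conditions, conclusion).
RULES = [
    ({"site is blood", "gram is neg", "morphology is rod"},
     "organism is pseudomonas"),
]

known = {}   # facts already established this session (fact -> True/False)

def prove(goal):
    if goal in known:
        return known[goal]
    concluding = [conds for conds, concl in RULES if concl == goal]
    if concluding:
        # Some rule can conclude the goal: try to prove its conditions first.
        result = any(all(prove(c) for c in conds) for conds in concluding)
    else:
        # Nothing derives this fact, so it becomes a question for the user.
        result = input(goal + "? (y/n) ").strip().lower().startswith("y")
    known[goal] = result
    return result

if prove("organism is pseudomonas"):
    print("Evidence supports pseudomonas.")
else:
    print("Could not establish pseudomonas.")

A real system would also carry certainty factors along and stop asking once a hypothesis was effectively ruled in or out, which is roughly the behavior described above.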
If that’s not clear, you can try a college lecture on the topic from 2013, below.
Of course, in a real system, rules may also trigger other rules. There were probably as many actual approaches as there were expert systems. Some, like Mycin, were written in LISP. Some were in C. Many used Prolog, which has features aimed at just the kind of things you need for an expert system.
What Happened?
Expert systems are actually very useful for a certain class of problems, and there are still examples of them hanging around (for example, Drools). However, some problems that expert systems tried to tackle, like speech recognition, turned out to be much better handled by neural networks.
Part of the supposed charm of expert systems was that, like every new technology, they were supposed to mature to the point where management could get rid of those annoying programmers. That really wasn’t the case. (It never is.) The programmers just got new titles as knowledge engineers.
Even NASA got in on the action. They produced CLIPS, a tool for building expert systems in C, released it to the public, and it is still available. If you want to try your hand, there is a good book out there.
Meanwhile, you can chat with Eliza if you don’t want to spend time chatting with her more modern cousins.
