
Talk:Symbolic artificial intelligence


Changes to the Lead Text to Correct Some Inaccuracies and Misleading Statements


Current Lead Text


Since many people may only read the introductory paragraphs, it is important to ensure they are correct. Unfortunately, the middle paragraph of the current lead section has some key inaccuracies and parts that are misleading. I am referring to this paragraph:

"Symbolic AI was the dominant paradigm of AI research from the mid-1950s until the middle 1990s. Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field. However, in the late 80s and 90s specific technical problems (such as brittleness and intractability) showed the limits of the symbolic approach. AI research turned to new methods (called "sub-symbolic" at the time) including connectionism, soft computing, mathematical optimization and neural networks. These methods were directed towards specific problems with specific solutions, rather than general intelligence. "Deep learning" (a sub-symbolic approach) had spectacular success in handling vision, speech recognition, speech synthesis, image generation, and machine translation. But by 2020 difficulties with bias, explanation, comprehensibility, and robustness have become more apparent with deep learning approaches and AI researchers have called for combining the best of both the symbolic and neural network approaches."

Parts that are Misleading or Inaccurate


The problem with these sentences is that they give an erroneous view of symbolic AI, especially in the three sentences in bold (added). Specifically, it propagates these viewpoints:

  1. The key problems of symbolic AI that led to the Second AI Winter were primarily (a) brittleness, and (b) computational intractability.
  2. Russell and Norvig, 2021, pages 21 and 23-25, are claimed to support that contention. Those pages do not; see the clarifications below.
  3. From the mid-1990s, AI research focused on connectionism, soft computing, mathematical optimization and neural networks.
  4. These methods were "new methods", implying there had been no prior work on connectionism, soft computing, mathematical optimization or neural networks.
  5. All of these methods were considered sub-symbolic, not just neural networks and connectionism.
  6. There is an implication that by a "turn to" these methods, AI researchers turned away from symbolic approaches. That is not true.
  7. Russell and Norvig, 2021, pages 25-26 is claimed to support "AI research turned to new methods (called "sub-symbolic" at the time) including connectionism, soft computing, mathematical optimization and neural networks". It does not.

Clarifications

However, in the late 80s and 90s specific technical problems (such as brittleness and intractability) showed the limits of the symbolic approach.
Russell and Norvig, page 21, describe problems from earlier AI work, around 1957-1959, prior to the Lighthill Report in 1973, which indeed mentioned combinatorial explosion as a problem.
Attempts to address computational intractability were part of early efforts on heuristic search. The A* algorithm, published in 1968, is one such means of focusing search to address this problem.
Applying expert knowledge is another approach to focusing search, as Feigenbaum mentions in his CACM interview.
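To illustrate for other editors how a heuristic focuses search, here is a minimal A*-style sketch in Python (my own toy example; the grid and names are illustrative, not drawn from Russell and Norvig or Kautz):

<syntaxhighlight lang="python">
import heapq

def a_star(start, goal, neighbors, h):
    # Minimal A* search: expand nodes in order of f(n) = g(n) + h(n).
    # An admissible heuristic h focuses the search, which is how A*
    # addresses the combinatorial explosion of blind search.
    frontier = [(h(start), 0, start, [start])]   # entries: (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

def neighbors(p):                                 # 4-connected 5x5 toy grid
    x, y = p
    return [((x + dx, y + dy), 1)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

goal = (4, 4)
h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan distance
print(a_star((0, 0), goal, neighbors, h))
</syntaxhighlight>

With a zero heuristic this degenerates to uniform-cost (blind) search; the point is that the heuristic, like expert knowledge, prunes the effective search space.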
So, yes, this was a problem for the First AI Winter, but at this point we are talking about the Second AI Winter, which started in 1988.
The main reasons for the failure of expert systems leading to the Second AI Winter are described as problems with knowledge acquisition and handling uncertainty by Kautz. Similarly, Russell and Norvig on P. 24 say, "It turned out to be difficult to build and maintain expert systems for complex domains, in part because the reasoning methods used by the systems broke down in the face of uncertainty and in part because the systems could not learn from experience."
Brittleness applies to both deep learning and symbolic systems. It was indeed a problem for expert systems, but it is not unique to symbolic AI. I am still including it, however.
Instead of brittleness and intractability, Kautz cites two key technical problems for the end of enthusiasm for expert systems:
"The first challenge was the need for principled and practical methods for probabilistic reasoning." [Kautz, 2022, p. 110]
"The second unsolved challenge for the expert system approach was named the “knowledge acquisition bottleneck.” " [Kautz, 2022, p. 110]
AI research turned to new methods (called "sub-symbolic" at the time) including connectionism, soft computing, mathematical optimization and neural networks. These methods were directed towards specific problems with specific solutions, rather than general intelligence.
Kautz characterizes the next two decades and the primary focus on p. 110: "overcoming these challenges set the workplan for the next two decades of research in AI." The focus was not on subsymbolic systems, but instead on handling uncertainty and the knowledge acquisition bottleneck.
Both Kautz and Russell & Norvig cite probabilistic reasoning (Bayesian networks, HMMs, and later statistical relational learning) and machine learning (primarily symbolic approaches, but also SVMs and other classifiers, including Valiant's theoretical work).
It is not until about 2012 that deep learning takes off. Russell and Norvig say 2011, citing speech recognition, but Kautz starts that period as 2012, and most references I have seen, such as Marcus, also use 2012.
Indeed, [Marcus, 2019] describes neural net research as being considered unsuccessful until around 2012:

"Still, many people continued in Rosenblatt's tradition for decades. And until recently, his successors too struggled mightily. Until Big Data became commonplace, the general consensus in the Al community was that the so-called neural-network approach was hopeless. Systems just didn't work that well, compared to other methods.

... A revolution came in 2012, when a number of people, including a team of researchers working with Hinton, worked out a way to use the power of GPUs to enormously increase the power of neural networks.

Suddenly, for the first time, Hinton's team and others began setting records, most notably in recognizing images in the ImageNet database we mentioned earlier. Competitors Hinton and others focused on a subset of the database: 1.4 million images, drawn from one thousand categories. Each team trained its system on about 1.25 million of those, leaving 150,000 for testing. Before then, with older machine-learning techniques, a score of 75 percent correct was a good result; Hinton's team scored 84 percent correct, using a deep neural network, and other teams soon did even better; by 2017, image labeling scores, driven by deep learning, reached 98 percent."

These methods were directed towards specific problems with specific solutions, rather than general intelligence.
The implication is that symbolic AI had never been focused on narrow AI before, but only on AGI. That is not the case. Clearly, expert systems are an obvious case of narrow AI. So, I propose just dropping this sentence.

Approach to Address these Problems


I think the problem is that the overall explanation is too coarse and does not break down the relevant periods: the Second AI Winter; the period immediately following it, when probabilistic reasoning and symbolic machine learning received much greater focus; and then the period in which deep learning took off (circa 2012). Finally, a shift to a greater focus on hybrid systems appears to have started about 2020.

I propose refining the introductory discussion to break out these periods and reserving "sub-symbolic" to describe only neural nets and connectionism, and not using it to encompass probabilistic methods, Bayesian approaches, or optimization. The latter techniques can be used for symbolic AI, deep learning, and in various hybrid logical-probabilistic approaches, such as Markov Logic Networks.
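For reference, the standard Markov Logic Network formulation (due to Richardson and Domingos; I am quoting it from memory, so please verify before citing) attaches a weight <math>w_i</math> to each first-order formula and defines a distribution over possible worlds:

<math display=block>P(X = x) = \frac{1}{Z} \exp\Big(\sum_i w_i \, n_i(x)\Big)</math>

where <math>n_i(x)</math> is the number of true groundings of formula <math>i</math> in world <math>x</math> and <math>Z</math> is the normalizing partition function. This is why I would describe MLNs as hybrid logical-probabilistic rather than sub-symbolic: the formulas stay symbolic, and only their weights are numeric.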

Regarding the use of "soft", fuzzy logic was introduced in 1965, and Danny Hillis founded Thinking Machines Corporation in 1983. So, there wasn't a sudden shift to soft and sub-symbolic approaches in the late 80's and neural nets didn't become dominant until about 2012. We can certainly talk more about fuzzy logic and other extensions to logic later on.

Revised Sentences and New Paragraph for Deep Learning


Here is what I propose, discussed one part at a time:

Symbolic AI was the dominant paradigm of AI research from the mid-1950s until the middle 1990s.
{ No changes. }
Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the ultimate goal of their field.
{ Just clarified that AGI was an **ultimate** goal and not the only one. Certainly, many others were focused on specific applications, just as they are today. E.g., theorem provers, planners, symbolic mathematics, etc. And of course, expert systems were, by definition, narrow AI.}
An early boom, with early successes such as the Logic Theorist and Samuel's checkers-playing program, led to unrealistic expectations and promises and was followed by the First AI Winter as funding dried up.[1][2]
{ I'm adding a bit more of the early history and a mention of the first AI winter. Russell mentions Samuel's program. Kautz mentions the Logic Theorist. }
A second boom (1969-1986) occurred with the rise of expert systems, their promise of capturing corporate expertise, and an enthusiastic corporate embrace.[3][4] That boom, along with some early successes, e.g., with XCON at DEC, was again followed by later disappointment.[4] Difficulties arose with knowledge acquisition, maintaining large knowledge bases, and brittleness in handling out-of-domain problems. A second AI Winter (1988-2011) followed.[5]
{ Boom and bust again, with time periods from first Russell then Kautz and some specifics on why there was a boom. It all looked very promising back then, initially. }
Subsequently, AI researchers focused on addressing underlying problems in handling uncertainty and in knowledge acquisition.[5] Uncertainty was addressed with formal methods such as Hidden Markov Models, Bayesian reasoning, and statistical relational learning. Symbolic machine learning addressed the knowledge acquisition problem with contributions including Version Space, Valiant's PAC learning, Quinlan's ID3 decision-tree learning, case-based learning, and inductive logic programming to learn relations.[6]
{ Some specifics about symbolic machine learning until I write that section. For handling uncertainty, Bayesian reasoning and HMMs are mentioned by Russell & Norvig while Kautz mentions Bayesian reasoning and statistical relational reasoning. }
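To make the knowledge-acquisition point concrete for other editors (this is a toy sketch of my own, not proposed article text), here is a minimal ID3-style decision-tree learner; the point is that the rules are induced from examples rather than hand-engineered by a knowledge engineer:

<syntaxhighlight lang="python">
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def id3(rows, labels, attrs):
    # Minimal ID3: greedily split on the attribute with the highest
    # information gain; recurse until the examples at a node are pure.
    if len(set(labels)) == 1:
        return labels[0]                          # pure leaf
    if not attrs:
        return Counter(labels).most_common(1)[0][0]
    def gain(a):
        remainder = 0.0
        for v in set(r[a] for r in rows):
            sub = [l for r, l in zip(rows, labels) if r[a] == v]
            remainder += len(sub) / len(labels) * entropy(sub)
        return entropy(labels) - remainder
    best = max(attrs, key=gain)
    tree = {}
    for v in set(r[best] for r in rows):
        idx = [i for i, r in enumerate(rows) if r[best] == v]
        tree[(best, v)] = id3([rows[i] for i in idx], [labels[i] for i in idx],
                              [a for a in attrs if a != best])
    return tree

# Toy data of my own: learn when to play from weather attributes.
rows = [{"outlook": "sunny", "windy": "no"}, {"outlook": "sunny", "windy": "yes"},
        {"outlook": "rain", "windy": "no"}, {"outlook": "rain", "windy": "yes"}]
labels = ["play", "play", "play", "stay"]
print(id3(rows, labels, ["outlook", "windy"]))
</syntaxhighlight>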

---

Next, I'd start a new paragraph just to address deep learning and history to the present:

Neural networks, a sub-symbolic approach, had been pursued from early days. Early examples are Rosenblatt's perceptron learning work, the backpropagation work of Rumelhart, Hinton and Williams,[7] and work in convolutional neural networks by LeCun et al. in 1989.[8]
{ Brief intro and overview. I could cite say even more as to William Grey Walter's work on cybernetics and the work of Minsky and Papert on neural networks, but I'm trying to keep it shorter.}
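For other editors unfamiliar with the early work, Rosenblatt's perceptron learning rule is tiny; a minimal sketch (toy AND data of my own, not proposed article text):

<syntaxhighlight lang="python">
# Perceptron learning rule: nudge the weights toward each misclassified
# example. Converges for linearly separable data such as AND.
def train_perceptron(data, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred                  # 0 if correct, +/-1 if wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(train_perceptron(data))                    # a linear separator for AND
</syntaxhighlight>

(As Minsky and Papert showed, no such weights exist for XOR, which is the limitation the later backpropagation work overcame with hidden layers.)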
However, neural networks were not viewed as successful until about 2012:

Until Big Data became commonplace, the general consensus in the AI community was that the so-called neural-network approach was hopeless. Systems just didn't work that well, compared to other methods. ...A revolution came in 2012, when a number of people, including a team of researchers working with Hinton, worked out a way to use the power of GPUs to enormously increase the power of neural networks.[9]

{ Explain why we view 2012 as the jumping-off point for when deep learning really takes off. It seems necessary to make this point, as so much of what you read attempts to rewrite history to imply that first there was symbolic AI, and then deep learning, end of story. That's just not true.}
Over the next several years, deep learning had spectacular success in handling vision, speech recognition, speech synthesis, image generation, and machine translation.
{ Acknowledge incredible results. }
However, by 2020, as inherent difficulties with bias, explanation, comprehensibility, and robustness became more apparent with deep learning approaches, AI researchers began calling for combining the best of both the symbolic and neural network approaches[10][11] and for addressing areas that both approaches have difficulty with, such as common-sense reasoning.[9]
{ Finally, how both approaches may work best together. I may need to add more on hybrid approaches that extend logic to handle probability; I intend to put that in somewhere. }

New Paragraphs


Symbolic AI was the dominant paradigm of AI research from the mid-1950s until the middle 1990s.[12][13] Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the ultimate goal of their field.[14] An early boom, with early successes such as the Logic Theorist and Samuel's checkers-playing program, led to unrealistic expectations and promises and was followed by the First AI Winter as funding dried up.[1][2] A second boom (1969-1986) occurred with the rise of expert systems, their promise of capturing corporate expertise, and an enthusiastic corporate embrace.[3][4] That boom, along with some early successes, e.g., with XCON at DEC, was again followed by later disappointment.[4] Difficulties arose with knowledge acquisition, maintaining large knowledge bases, and brittleness in handling out-of-domain problems. A second AI Winter (1988-2011) followed.[5] Subsequently, AI researchers focused on addressing underlying problems in handling uncertainty and in knowledge acquisition.[6] Uncertainty was addressed with formal methods such as Hidden Markov Models, Bayesian reasoning, and statistical relational learning.[15][16] Symbolic machine learning addressed the knowledge acquisition problem with contributions including Version Space, Valiant's PAC learning, Quinlan's ID3 decision-tree learning, case-based learning, and inductive logic programming to learn relations.[6]

Neural networks, a sub-symbolic approach, had been pursued from early days and were to reemerge strongly in 2012. Early examples are Rosenblatt's perceptron learning work, the backpropagation work of Rumelhart, Hinton and Williams,[17] and work in convolutional neural networks by LeCun et al. in 1989.[18] However, neural networks were not viewed as successful until about 2012: "Until Big Data became commonplace, the general consensus in the AI community was that the so-called neural-network approach was hopeless. Systems just didn't work that well, compared to other methods. ...A revolution came in 2012, when a number of people, including a team of researchers working with Hinton, worked out a way to use the power of GPUs to enormously increase the power of neural networks."[9] Over the next several years, deep learning had spectacular success in handling vision, speech recognition, speech synthesis, image generation, and machine translation. However, since 2020, as inherent difficulties with bias, explanation, comprehensibility, and robustness became more apparent with deep learning approaches, an increasing number of AI researchers have called for combining the best of both the symbolic and neural network approaches[10][11] and addressing areas that both approaches have difficulty with, such as common-sense reasoning.[9] Veritas Aeterna (talk) 23:44, 9 August 2022 (UTC)[reply]

References

  1. ^ a b Kautz 2020, pp. 107–109.
  2. ^ a b Russell & Norvig 2021, p. 19.
  3. ^ a b Russell & Norvig 2021, pp. 22-23.
  4. ^ a b c d Kautz 2020, pp. 109–110.
  5. ^ a b c Kautz 2020, p. 110.
  6. ^ a b c Kautz 2020, pp. 110–111.
  7. ^ Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J. (1986). "Learning representations by back-propagating errors". Nature. 323 (6088): 533–536. doi:10.1038/323533a0. ISSN 1476-4687.
  8. ^ LeCun, Y.; Boser, B.; Denker, J. S.; Henderson, D.; Howard, R. E.; Hubbard, W.; Jackel, L. D. (1989). "Backpropagation Applied to Handwritten Zip Code Recognition". Neural Computation. 1 (4): 541–551.
  9. ^ a b c d Marcus and Davis 2019.
  10. ^ a b Rossi, Francesca. "Thinking Fast and Slow in AI". AAAI. Retrieved 5 July 2022.
  11. ^ a b Selman, Bart. "AAAI Presidential Address: The State of AI". AAAI. Retrieved 5 July 2022.
  12. ^ Kolata 1982.
  13. ^ Russell & Norvig 2003, p. 5.
  14. ^ Russell & Norvig 2021, p. 24.
  15. ^ Russell & Norvig 2020, p. 25.
  16. ^ Kautz 2020, p. 111.
  17. ^ Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J. (1986). "Learning representations by back-propagating errors". Nature. 323 (6088): 533–536. doi:10.1038/323533a0. ISSN 1476-4687.
  18. ^ LeCun, Y.; Boser, B.; Denker, J. S.; Henderson, D.; Howard, R. E.; Hubbard, W.; Jackel, L. D. (1989). "Backpropagation Applied to Handwritten Zip Code Recognition". Neural Computation. 1 (4): 541–551.

Adding the Rest of the Section on History


I'm adding the section on Uncertain Reasoning after the Second AI Winter today, then should have the section on Machine Learning within a few days. I'm trying to keep it short enough to be readable while still hitting key highlights. Veritas Aeterna (talk) 00:16, 18 August 2022 (UTC)[reply]

Here's the section on machine learning. I'm focusing on contributions made in symbolic machine learning primarily and especially in the period after the Second AI Winter up until about 2011. Veritas Aeterna (talk) 22:45, 20 August 2022 (UTC)[reply]

Adding the Sections on Techniques and Controversies


I have started working on the remaining two sections on techniques and controversies. For techniques, I will mostly give brief mentions of key algorithms, projects, or contributions, with links to the appropriate Wikipedia pages. I will try to keep the overviews brief -- a sentence or less -- so the article does not grow unmanageably long.

For the controversies section, I intend to include some comments from Gary Marcus discussing the cultural animus against symbolic AI in the deep learning community, along with criticisms of symbolic AI from Hinton. I am first moving the discussion of "GOFAI" to the controversy section, as it is definitely not a neutral term; rather, it has the negative connotation of "old-fashioned", implying that symbolic AI has been entirely superseded.

Hopefully I can have the Techniques section sometime this week and the Controversies sometime next week. Veritas Aeterna (talk) 23:10, 29 August 2022 (UTC)[reply]

OK, I have added the Techniques and Contributions section and will start on the Controversies section next week.

Update: Added content for the section on Controversies. Added a new reference to Rodney Brooks's paper, "Intelligence without Representation".

Veritas Aeterna (talk) 02:21, 4 September 2022 (UTC) Veritas Aeterna (talk) 04:41, 13 September 2022 (UTC)[reply]

Broken short citations


In this edit, I added a reference that was missing for a broken short citation (Marcus & Davis 2019), but I noticed that there are other broken short citations, which I didn't fix (Marcus 2019; Marcus 2020; Marcus 2022).

Also, Veritas Aeterna, please note that headings should be in sentence case per MOS:HEAD. I corrected a lot of improperly formatted headings in the aforementioned edit. Thanks, Biogeographist (talk) 17:20, 25 September 2022 (UTC)[reply]

Hi, thanks so much for your improvements. Sometimes when I have pasted in quoted materials I have indeed picked up curly apostrophes or quotes. I’ll also check the short citations by searching for sfn and ensuring there is a proper citation. But if you know a quicker way to style check and proof a Wikipedia page, please let me know. Something like flake8 for checking Python code or lint for C…only for Wikipedia pages, if we have such a tool. Oh, and yes you’re right about the headings, thanks for fixing them!! Veritas Aeterna (talk) 23:11, 25 September 2022 (UTC)[reply]
Some people use {{Automated editing}} tools that may include linter-like functions, but I don't use those and don't know what they can do. On my computer I have some text services that I use on- and off-wiki: I do conversion of quote marks using one of those. If there are more serious formatting issues in an article, I will edit the text in a dedicated text editor for use of regex, etc. Biogeographist (talk) 00:51, 26 September 2022 (UTC)[reply]

GOFAI Philosophical Discussions


CharlesTGillingham wanted to move discussions of the term GOFAI to a page under Philosophy, to which I agreed. However, removing the entire section on the Qualification Problem, which Turing first raised, removes a key part of the discussion of Controversies, so I have restored it here rather than reverting his recent sequence of changes entirely. I don't mind adding a See Also to that GOFAI discussion, if he likes. Veritas Aeterna (talk) 21:32, 5 July 2023 (UTC)[reply]

Could we describe the qualification, ramification and frame problems from McCarthy's point of view? He was the one who identified these.
It's only tangentially related to Dreyfus -- it follows from his critique of the "epistemological assumption", but he never actually wrote a program that didn't work because of it. McCarthy did. I think McCarthy's experience is much more grounded in computer science, and much more interesting and useful for future work.
The frame, ramification and qualification problems are, in my view, related to the common sense knowledge problem. When you try to describe real-world situations or goals, your symbolic description tends to get longer and longer the more you think about it, because you keep thinking of more special cases and details that have to be specified. It's really difficult to know whether you've got it right and can safely stop working on it. McCarthy developed logic that could side-step any obviously inessential stuff if necessary, but this doesn't really solve the problem -- it just kicks it down the road.
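For concreteness (I'm writing this from memory, so check Lifschitz's survey before relying on it), circumscription minimizes the extension of a chosen predicate P subject to the axioms A, in second-order form roughly:

<math display=block>\operatorname{Circ}[A; P] \;=\; A(P) \,\land\, \forall p \,\bigl( A(p) \rightarrow \lnot (p < P) \bigr)</math>

where <math>p < P</math> means <math>p</math> holds of strictly fewer tuples than <math>P</math>. Reading P as an "abnormality" predicate, anything the axioms don't force to be abnormal is assumed normal, which is exactly the side-stepping of inessential detail I described above.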
This is a computer science problem. It relates to the brittleness of expert systems, the difficulty of creating a useful universal ontology, and it's the way the value-alignment problem is usually framed. It belongs in this article.
Dreyfus doesn't, in my view. He's too far away from the code to be useful. ---- CharlesTGillingham (talk) 22:46, 5 July 2023 (UTC)[reply]
I can think of two solutions here:
(1) I just change this part to what it was before when it was labeled "Philosophical: critiques from Dreyfus and other philosophers" and restore the previous content, perhaps linking in the GOFAI article, or removing the sentence on Haugeland completely if you think it inaccurate.
(2) I add a paragraph describing McCarthy's views on the frame problem and use of circumscription in an attempt to address it, then you can review and see what you think.
Let me know which you prefer but please don't do anything just yet.
Also, I can move the part on Dreyfus to the section on Situated Robotics.
Similarly, please let me know which you prefer, but please don't do anything just yet.
By the way, the Reply formatting isn't working too well for me -- it keeps adding new material at the front of sentences. I don't know why. Veritas Aeterna (talk) 01:00, 6 July 2023 (UTC)[reply]
I still feel Dreyfus has always been irrelevant to the practice of symbolic AI. Minsky said that he "misunderstands and should be ignored." Wikipedia should cover him where he belongs, in philosophy & cognitive science. He doesn't belong in this article. ---- CharlesTGillingham (talk) 16:29, 6 July 2023 (UTC)[reply]
I can remove the Dreyfus references -- trying to work with you here. I would make similar arguments about GOFAI, which I think you disagree with, as I never knew anyone who held the GOFAI view. Most were familiar with the physical symbol system hypothesis and have come to view the "sufficient" part of it as overly strong.
However, I think the following part on embodied cognition remains relevant, and it can be moved to the section covering Rodney Brooks's work.
"The embodied cognition approach claims that it makes no sense to consider the brain separately: cognition takes place within a body, which is embedded in an environment. We need to study the system as a whole; the brain's functioning exploits regularities in its environment, including the rest of its body. Under the embodied cognition approach, robotics, vision, and other sensors become central, not peripheral."
So, I am moving that and trying to write transitions to tie it into Brooks's approach. See if it works for you, too, once I have finished the edits.
Regarding (1) vs (2) did you have a preference? Or yet another alternative?
Thanks for slowing things down.
Right now I am submitting some white papers to NSF and DARPA on deadlines. Veritas Aeterna (talk) 19:16, 6 July 2023 (UTC)[reply]
That would be (2). Add McCarthy, delete Dreyfus and keep Brooks.
My idea here is this: In this article, we cover problems that were discovered by computer scientists working in AI, and disputes between computer scientists. We cover criticism of the PSSH/Cognitivism/GOFAI/Strong AI hypothesis elsewhere. ---- CharlesTGillingham (talk) 00:41, 7 July 2023 (UTC)[reply]
OK, let me work on this and give it a try. Should have it by Monday. BTW I took a class from McCarthy, but it was on LISP, and he could say all combinations of CAR, CADR, CADDR, etc. that you can think of. FOL and automatic theorem proving was at UT-Austin. Veritas Aeterna (talk) 22:01, 7 July 2023 (UTC)[reply]
I rewrote much of the first section and changed the title to emphasize problems encountered applying FOL to dynamic situations (the Frame Problem and the Qualification Problem) and similar difficulties with common-sense reasoning. There are no references to Haugeland and Dreyfus there anymore, and much more discussion of McCarthy's contributions with circumscription and with his view of common-sense reasoning in the Advice Taker. Suggestions welcome. Veritas Aeterna (talk) 00:25, 9 July 2023 (UTC)[reply]
This all looks very good. ---- CharlesTGillingham (talk) 18:09, 9 July 2023 (UTC)[reply]
Great! I'm glad we could reach an agreement. Veritas Aeterna (talk) 20:51, 10 July 2023 (UTC)[reply]