Talk:Symbolic artificial intelligence


GOFAI vs non-GOFAI

Is there a reliable source for the following sentence? (I have removed it as it strikes me as wrong):

Often, GOFAI(R) is used to distinguish systems that do not employ connectionist or statistical machine learning algorithms, which have come to play a major role in AI, robotics and computer vision since the late-1990s.

I understand the defining feature of GOFAI to be the use of symbolic representations, not the use of statistics or connectionist architectures. So decision tree learning is GOFAI and support vector machines are not, although both are statistical machine learning methods. I believe there are some connectionist architectures where the elements are logical propositions, which would make them GOFAI. Pgr94 (talk) 11:20, 26 July 2008 (UTC)
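
To make this distinction concrete, here is a minimal, illustrative sketch (assuming scikit-learn is installed; the abbreviated feature names are mine). A trained decision tree can be printed as explicit if-then rules over named attributes, a symbolic representation, while a trained SVM exposes only numeric coefficients:

    # Both models below are statistical machine learning; only the tree
    # yields an explicitly symbolic (rule-like) representation.
    from sklearn.datasets import load_iris
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_iris(return_X_y=True)

    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
    # Prints nested if-then rules over named features, readable as symbols.
    print(export_text(tree, feature_names=["sep_len", "sep_wid", "pet_len", "pet_wid"]))

    svm = SVC(kernel="rbf").fit(X, y)
    # The SVM's learned "knowledge" is just arrays of numbers.
    print(svm.dual_coef_.shape, svm.support_vectors_.shape)

Both classifiers are statistical, but only the first produces a symbolic representation, which is the distinction drawn above.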

Improving this Article by Clarifying the Contributions of Symbolic AI

Currently, the article reads more as a caricature of symbolic AI, with some parts correctly described. It reads as if the intention is to minimize the work that was done from around the time of the Dartmouth Conference to the current time, and summarize it all as dead and buried, with no contributions. That is hardly a neutral point of view.

The first correction is an edit to point out that symbolic AI was more than expert systems.

Next, I propose expanding "Techniques" to "Techniques and Contributions" and using it to cover (proposed sections): Symbolic programming languages, Search, Planning, Automated Reasoning, Symbolic learning approaches, Knowledge-based systems, and finally Agents and Multi-agent systems.

The intended result is to complement the existing article Artificial Intelligence, but with sections that focus on the specific contributions in ideas, along with exemplary systems, for Symbolic AI in particular.

Further, I'd add a discussion of Daniel Kahneman's Type I and Type II reasoning as it is commonly used for comparing and contrasting Symbolic AI versus Deep Learning approaches.

Some of the existing sections I could see belonging to a History or Controversies section. Currently, it seems a bit scattered.

Comments, suggestions, or thoughts from anyone watching this page? Veritas Aeterna (talk) 00:17, 5 July 2022 (UTC)

Removed Sentence that Symbolic AI was Abandoned, and Reorganized Intro Slightly

I continued with some incremental improvements to the opening section.

Removed this sentence:

   "However, the symbolic approach would eventually be abandoned in favor of subsymbolic approaches, largely because of technical limits."

in order to more accurately describe how symbolic AI fell out of favor but was never "abandoned"; the scales have since shifted back to more balanced views.

Also, moved the paragraph higher up due to its importance. I'm thinking the next three paragraphs belong more in a Controversies section, although we may wish to mention the increased emphasis on statistical AI in the years right before the deep learning explosion. Veritas Aeterna (talk) 21:47, 5 July 2022 (UTC)

Disagree, as this isn't historically accurate. Let's discuss in the section I added below. ---- CharlesTGillingham (talk) 04:11, 25 July 2022 (UTC)

Proposed Reorganization

Below is a proposed reorganization. The introductory text would remain here of course. I'd hoover up some sections into a short history of symbolic AI, which would be intended to complement the main article on History of artificial intelligence.

For now, I'm just moving the content of the erroneously titled section "Abandoning the symbolic approach 1990s" to a new "Controversies" section, and changing the section "Origins" to "Foundational Ideas". To start, I'm leaving the existing content on the physical symbol system in place while moving the part about the Logic Theorist to "Dominant paradigm 1955-1990".

Thoughts or comments welcome, otherwise I will proceed to make these changes incrementally.

I'm also trying to ensure that all changes I make are consistent with the main article Artificial Intelligence, but hopefully complementary, focusing more on the Symbolic AI aspects, of course.

Foundational ideas

Primary

The physical symbol system hypothesis

The knowledge principle: In the knowledge lies the power

Secondary

Inspiration from human and animal cognition and behavior

Advantages of an explicit symbolic representation of knowledge

A short history

Origins and dominant paradigm, 1955-1990

The Knowledge Revolution: boom and bust, 1980-1993

Probabilistic reasoning and greater rigor, 1993-2012

Challenges from deep learning, 2012-2020

Neurosymbolic AI, 2020-Present

Techniques and Contributions

Symbolic programming languages

To cover:

  • LISP
  • Prolog

Knowledge Representation

To cover:

  • Symbols, terms, and first-order logic
  • Ontologies
  • Frames and defaults

Search

To cover:

  • A*, alpha-beta pruning, and descendants
  • Heuristic search as one way to apply knowledge
  • Look-ahead in game-playing
  • SAT techniques

Planning

To cover:

  • GPS and STRIPS-based planners
  • Planners for dynamic worlds and actions with uncertain outcomes

Automated Reasoning

To cover:

  • FOL, theorem provers
  • Constraints & constraint-based reasoning
  • Rules & Production systems
  • Knowledge sources & blackboards
  • Metalevel reasoning
  • Reasoning in knowledge representation systems
    • Inheritance
    • Defaults
    • Ontologies
  • Handling uncertainty and vagueness
    • Certainty factors
    • Fuzzy logic
    • Extensions to FOL

Knowledge-based Systems

To cover:

  • Knowledge acquisition
  • KBS architectures
  • Explanations, e.g., by reasoning maintenance systems

Symbolic learning approaches to Machine Learning

To cover:

  • From instruction: advice taking
  • Learning by analogy
  • Learning from expert explanations
  • Case-based learning

Agents and Multi-agent Systems

To cover:

  • The agent perspective as a unifying view of AI
  • Agent swarms and inter-agent communications

Subsymbolic approaches

To cover:

  • Situated robotics
  • Connectionism
  • Neural networks, before deep learning

Controversies

Philosophical: critiques from Dreyfus and other philosophers

Funding and practicality: AI Winters

Veritas Aeterna (talk) 23:21, 8 July 2022 (UTC)

Veritas Aeterna (talk) 23:06, 8 July 2022 (UTC)

Adding in a History Section to Subsume the Two Sections with Date Ranges

I'm adding a history section and using categories for AI boom and bust periods from Henry Kautz's AI Magazine article, The Third AI Summer: AAAI Robert S. Engelmore Memorial Lecture.[1] There is considerable overlap of time periods with the History of AI article, though they often differ by a few years. I have stayed with Henry Kautz's time periods, except at the end, where I broke up the period he called THE SECOND AI WINTER: THE DISRESPECTED SCIENCE, 1988–2011, into two parts. The first part is the AI winter itself, which I just call "The Second AI Winter". The second part is similar to what the History of AI article titles simply AI 1993-2011, which I have called AI: MORE RIGOROUS FOUNDATIONS to be clearer.

All this was done so I could fold the existing sections "Dominant paradigm 1955-1990" and "Success with expert systems 1975–1990" into a larger, more encompassing and intuitive timeline that is as consistent as I can make it with Henry Kautz's timeline and with that in History of AI.

I have also tried to clarify a bit more the distinction between the "neats" and the "scruffies" in an unbiased way, taking into account both the existing text and the finer distinctions referred to in https://en.wikipedia.org/wiki/History_of_artificial_intelligence#Frames_and_scripts:_the_%22scuffles%22

So I changed:

  • ‘Logic-based’ to ‘Modeling formal reasoning with logic: the “neats” ‘

And

  • ‘Anti-logic or “scruffy” ‘to ‘Modeling implicit common-sense knowledge with frames and scripts: the “scruffies” ‘

Veritas Aeterna (talk) 01:45, 13 July 2022 (UTC)

References

  1. ^ Kautz, Henry (2022-07-06). "The Third AI Summer: AAAI Robert S. Engelmore Memorial Lecture". AI Magazine. 43 (1): 93–104. doi:10.1609/aimag.v43i1.19122. ISSN 2371-9621. Retrieved 2022-07-12.

Expanding the Section on Key Ideas and Clarifying the First Summer

1. I added the maxim, "In the knowledge lies the power" to the section on Foundational Ideas.

2. I also added a brief discussion of the Type I and Type II distinction and their relation to symbolic and deep learning.

3. Finally, I added a bit more under the first time period in the short history of AI, e.g., that the Logic Theorist was able to prove 38 elementary theorems from Whitehead and Russell's Principia Mathematica.

Veritas Aeterna (talk) 21:40, 14 July 2022 (UTC)

Expanding the section "The second AI Summer: knowledge is power, 1978–1987" and adding in references at the end

The next change I'm making is to add a more detailed discussion of expert systems, both with examples and a discussion of architecture. I've also added references at the end and will switch to shortened footnotes where I can.

I made some changes in wording to clarify that it was not the increased memory available but rather the limitations of weak problem-solving methods that motivated knowledge-based systems.

I am also working on draft changes to this article over at https://en.wikipedia.org/wiki/User:Veritas_Aeterna/Work_in_Progress,_Symbolic_Artificial_Intelligence before moving them over here.

(talk) 00:23, 19 July 2022 (UTC)

History is a bit off.

Deleted

Fixed a bit of history

@Veritas Aeterna: I'm glad you added the material that argues symbolic and sub-symbolic methods are complementary, and a hybrid approach will be needed. I emphatically agree with this. There are things that symbolic reasoning can do that neural networks will never be able to do on their own. I also appreciate Kahneman's insight that human brains seem to work this way. I've thought this for a long time.

However, some of the dates and discussion in the lede were not correct. The article needs to capture the experience of the 80s and 90s (the twilight of symbolic AI) more accurately. There was a collapse in confidence in AI (as a whole) in the early 1990s. This was preceded by a lot of criticism of the symbolic approach in the 1980s, mostly by people who had higher hopes for "connectionism" (like Geoffrey Hinton, Rumelhart, etc.) or for some version of Rodney Brooks' approach. In other words, in the late 80s and early 90s (1) symbolic AI was failing (for very real reasons that most people understood and discussed at the time) and (2) soft computing, neural networks, optimization and other "statistical" methods offered ways forward that didn't have these problems.

I watched the Rossi talk given in the citation. See slide 9: it talks about 3 "phases" in the history of AI: (1) "High level cognition" (Symbolic AI), (2) "Data driven" (Sub-symbolic AI), and (3) "Reunification". So she agrees that symbolic AI fell out of favor, and that it was replaced by data-driven ("statistical") approaches. She does not say, as the article currently does, that this happened in 2012. (It happened in the 1990s.)

If you listen to the talk, she's saying that the next phase could be "Reunification". The article gives the impression that this reunification is already happening or has happened. On slide 10, she quotes the AI100 report: "The pendulum has swung towards learning systems" (in other words, away from symbolic AI), but that "We think we're seeing the beginning of the end of that trend and a move towards more hybrid designs in AI." The beginning of a trend. The trend hasn't happened yet.

Have a look at the lede and see if it's consistent with your sources. If your sources disagree with R&N and my recollection, then let's talk. I'll leave the rest of the article to you. (I have some more notes in the next section). ---- CharlesTGillingham (talk) 06:03, 25 July 2022 (UTC)

Notes

  1. The history section only needs to cover the history of symbolic AI.
    1. W. Grey Walter's turtles were sub-symbolic, as were perceptrons, Samuel's checkers player, and Oliver Selfridge's learning programs. We don't need this.
    2. Don't need the first AI winter, as it didn't really affect the paradigm, and it was only tangentially caused by the paradigm.
    3. Don't need history after the 80s, because it's not symbolic AI any more.
  2. Instead, I think we could use a limits of symbolic AI section to explain the rise of sub-symbolic methods.
  3. As I said above "neurosymbolic" computing hasn't happened yet, at least not at the level of these other approaches. We should have a final section Hybrid approaches that covers (in a paragraph or two) "limits of machine learning with neural networks" and has "calls for hybrid approach" using the material you have in the first section. --- CharlesTGillingham (talk) 07:16, 25 July 2022 (UTC)

Conversation on Changes to the Symbolic AI Article

I'm glad you added the material that argues symbolic and sub-symbolic methods are complementary, and a hybrid approach will be needed. I emphatically agree with this. There are things that symbolic reasoning can do that neural networks will never be able to do on their own. I also appreciate Kahneman's insight that human brains seem to work this way. I've thought this for a long time.

Hi, Charles, thanks for your suggestions. Daniel Kahneman's ideas are quite widespread now in the industry, and I've seen his ideas presented countless times, namely that the two approaches are complementary. It provides a very useful way of looking at both approaches, where we don't have to say there is a single correct way to proceed with AI.

However, some of the dates and discussion in the lede were not correct. The article needs to capture the experience of the 80s and 90s (the twilight of symbolic AI) more accurately. There was a collapse in confidence in AI (as a whole) in the early 1990s. This was preceded by a lot of criticism of the symbolic approach in the 1980s, mostly by people who had higher hopes for "connectionism" (like Geoffrey Hinton, Rumelhart, etc.) or for some version of Rodney Brooks' approach. In other words, in the late 80s and early 90s (1) symbolic AI was failing (for very real reasons that most people understood and discussed at the time) and (2) soft computing, neural networks, optimization and other "statistical" methods offered ways forward that didn't have these problems.

I lived through all this, working in the AI industry after grad school. I got my PhD in CS in the mid-80s. To say that time was the twilight of symbolic AI is only correct if you measure success by commercial funding and media coverage. Yes, AI receded from the media limelight and the LISP-based hardware companies went under. But work in symbolic AI research continued in universities and, to a lesser extent, in industry, although often under other guises.

I think overall, the approach to AI history I'm advocating here is consistent with both Henry Kautz [1] and Russell & Norvig. Both express the view that after the second AI winter there was a period when the field went back to addressing problems with handling uncertainty and then began incorporating Bayesian and more statistical approaches. However, there was no sudden burst of sub-symbolic research; instead the work was more on Bayesian approaches to expert systems and new approaches to machine learning such as inductive logic programming, decision trees, symbolic machine learning, and probabilistic logic approaches such as statistical relational learning (e.g., Markov Logic Networks). I'm not saying there was no work in neural networks, just that it was not the primary focus of the field.

To imply that the field instead turned to sub-symbolic methods at the time implies that areas such as neural networks and deep learning became predominant then, which is not the case. Instead, the explosion of deep learning is widely dated to around 2012, when one of Hinton's deep-learning-based neural networks, AlexNet, roundly beat all competitors on an ImageNet benchmark. E.g., Russell and Norvig, section 1.3.8, date it as 2011-present, and Kautz dates it as 2012-?. Please also see the quote from Henry Kautz that I mention below regarding the fourth sentence.

Let me address some problems in the second paragraph as it reads now.

1. The first sentence is fine.

2. The second, "Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field." is overly broad. Perhaps if we said "ultimate goal" that would be more precise, since many other researchers, especially those in KBS (knowledge-based systems), were pursuing more limited models of emulating human skill, dialogue, and thinking. Obviously, expert systems were not intended to be all-intelligent, just performant in one area. And, yet others, like John Anderson, were building cognitive architectures simulating human performance.

3. The third, "However, in the late 80s and 90s specific technical problems (such as brittleness and intractability) showed the limits of the symbolic approach.", I would rewrite to: "However, in the late 80s and 90s, specific technical problems, such as brittleness, difficulties handling uncertainty, and problems with acquiring knowledge from subject matter experts and maintaining the large knowledge bases that resulted, showed the limits of the symbolic approach at that time."

So, basically, not saying symbolic AI is dead and buried, just that it had to pause and address problems.

4. The fourth sentence, "AI research turned to new methods (called "sub-symbolic" at the time) including connectionism, soft computing, mathematical optimization and neural networks.[6]" I think is incorrect. Symbolic AI was not abandoned for sub-symbolic AI. There was research in these areas before the second AI Winter, including genetic algorithms and neural networks. Danny Hillis's work on connectionism was different from most neural network work now. If I recall correctly, it focused on spreading activation and message passing, not on back propagation. And those of us in AI didn't say we were doing sub-symbolic work.

Certainly, there was a massive shift from around 2012 on, and then it seemed as if symbolic AI had all but disappeared, and those in the deep learning camp presented it as if it were dead and buried and had never made any useful contributions. Also, as Kautz points out,

Overcoming the knowledge acquisition bottleneck led the field of AI to a renewed focus on machine learning. For most of the second winter, however, few researchers returned to the roots of machine learning in artificial neural networks. [1]

which contradicts the fourth sentence.

If you listen to the talk, she's saying that the next phase could be "Reunification". The article gives the impression that this reunification is already happening or has happened. On slide 10, she quotes the AI100 report: "The pendulum has swung towards learning systems" (in other words, away from symbolic AI), but that "We think we're seeing the beginning of the end of that trend and a move towards more hybrid designs in AI." The beginning of a trend. The trend hasn't happened yet.

Thanks for mentioning the talks; I need to add them to the citations, as they are really important and more accessible than the papers.

I went back to her talk. In the context of Bart Selman's talk, which occurred just a day or two earlier and which she refers to, and given the title, "Thinking Fast and Slow", it is clear that they believe this new trend has begun, not just that it might. See also her slide 10, and these spoken words, quoting Kautz: "...there is a violent agreement on the need to bring together neural and symbolic traditions...". Further context: she is at IBM, they are working on neurosymbolic systems, and she presents an example of neurosymbolic research from her work later on.

I'd also recommend the video of [Kautz's talk] and his coverage. For the future of AI, starting at 29:01, Kautz says, "We essentially have violent agreement on the need to bring together the neural and symbolic traditions.", but there is disagreement on how to do this. He proposes a taxonomy of six kinds of neuro-symbolic systems.

Going back to the Second AI Winter (about 16:42), he cites the problem of expert system maintenance foremost. The collapse of AI workstations was more due to the availability of equivalent performance for LISP and Prolog on alternative, standard workstations. He also shows how the collapse was an impetus to other successful work: "I would argue that it's kind of the drive to model expert knowledge combined with the shortcomings of knowledge engineering that really led to knowledge induction or modern machine learning in expert domains: so, decision tree learning, inductive logic programming, and decision theoretic expert systems, and other such work." (about 16:22-16:42) There is no mention of subsymbolic systems such as deep learning until 2012.

Other Notes


A brief digression: Rossi also presents an alternative overview of AI history on slide 9 that might also work in the introductory part of the Symbolic AI article, although it is less detailed (just one slide), of course. For [Selman's talk] (start about 1:45:00 in!), you'll see he also dates the deep learning revolution at 2012. His main theme is a reunification of subfields such as vision, NLP, planning, etc., and that we can "use output from a perceptual system and leverage a broad range of existing AI techniques" (slide 95) that we could not before. The parts where he addresses combinations of symbolic and neural reasoning start at slide 114 (1:58:17), although he casts this more as combining knowledge-driven and data-driven approaches. He emphasizes that "scientific knowledge has an explanatory, causal component. It's cumulative" (about 2:01:00), unlike data. He says "Concept discovery is central to scientific discovery." (2:03:22). He also talks about systems that integrate reasoning and learning, but his focus is a bit more on the reunification of subfields.

Have a look at the lede and see if it's consistent with your sources. If your sources disagree with R&N and my recollection, then let's talk. I'll leave the rest of the article to you. (I have some more notes in the next section).

Thanks, Charles. Thanks for not just reverting my edits. Feel free to write on my user page. For now, I suggest we just talk. I'll just add the references I mention here.

I'd like to expand the section on neurosymbolic systems and bring in material from [AI: The 3rd Wave]. For example, just in the abstract: "Neural-symbolic computing has been an active area of research for many years seeking to bring together robust learning in neural networks with reasoning and explainability via symbolic representations for network models. ... The insights provided by 20 years of neural-symbolic computing are shown to shed new light onto the increasingly prominent role of trust, safety, interpretability and accountability of AI." shows there is much more work in this area than most people know about.

Later, I'd also like to expand the section on symbolic machine learning, which seems to be largely overlooked now. Instead, you almost get the impression that there was no machine learning done until neural networks, which is not true.

This week is fairly busy so I may not be able to get to either until later this week or early next week.

I wanted to put all the arguments against Symbolic AI under Controversies. I'm not sure I'd consider mathematical optimization or statistical classifiers as subsymbolic AI, but rather tools that can be used for either kind of AI. E.g., Dan Roth uses ILP (integer linear programming) for coreference resolution, and I've seen optimization used in abductive reasoning. For statistical classifiers, certainly decision trees are symbolic, but I'd agree random forests are more arguable, harder to interpret. And an SVM is also more cryptic.

If you wanted to expand the arguments against symbolic AI there, from the standpoint of sub-symbolic AI, you could add text there.


Veritas Aeterna (talk) 00:21, 26 July 2022 (UTC)


So we're both veterans of the same era. I was at Cal until 1989, and then worked in industry from 1990-95 (at Aion corporation, an expert system shell company, which eventually merged with Intellicorp). So we were both there.
I mostly agree with you.
when (sentence 4). You make good points about the dates. I'm happy to use the 2012 date, but it has to be clear that this paradigm shift had been coming for a long time. There was no sudden shift in either 2012 or 1995. There was a huge overlap, and the whole transition took three decades -- 1986 (publication of Parallel Distributed Processing) to 2015 (investment in machine learning skyrockets). (See below)
who (sentence 4). You're right, it wasn't the whole field: the machine learning people and the KBS people were working in parallel, through the whole period 1980-2012. So I agree that it's wrong to say "AI research" changed -- the two had become, in many ways, different fields. We should say something like "new work in machine learning showed steady progress while symbolic AI research slowed."

_____

A more complete history of the decline would include all these turning points:

  1. Discovery of intractability (Karp, Cook, Lighthill)
  2. Around 1980 or so, symbolic AI slowed down; it stopped producing dramatic, astonishing successes at the same rate it had 1956-1980. This is not the kind of thing that is easy to verify, but if you think about all those amazing accomplishments of the 60s and 70s and compare with the 80s, you see what I mean.
  3. In the 1980s, intelligent critiques began to show up, from Hinton, Brooks, etc., and people began to whisper "maybe Dreyfus was right". (I took three philosophy classes from Dreyfus (and more from Searle) so maybe my perspective is skewed.)
  4. In the 1980s machine learning starts small. Rumelhart, Hinton and others show that there is another way. The machine learning world begins working on a different path.
  5. The Lisp machine collapse in 1988.
  6. In the 90s - 2010s machine learning and symbolic AI are working in parallel, but machine learning is gaining. They barely talk to each other -- some are even trying to rename or split the field (e.g., "Computational intelligence"). They're publishing separate textbooks. However, machine learning is moving forward, becoming clearer, more unified and more scientific. Symbolic AI not so much -- dramatic widely accepted new results are rare. Symbolic AI becomes small, but mostly just relative to other things, which are growing fast.
  7. Around 1995 you have the media and the business world turning on AI, and this is when investment really dries up. The nadir of the second AI winter. This is the date the article chose, because, media-wise, it's a turning point in the history of AI.
  8. And then finally, in 2012, it becomes obvious that machine learning has won, and the transition is over. But, by this point, there are people who are too young to remember and who aren't even aware that the older paradigm existed -- they studied "Data science" or "Computational intelligence" or something and never learned anything about it.
  9. In 2015, massive investment in AI starts for real. At this point, people in the business world use the term "AI" as synonymous with "machine learning with neural networks". Symbolic AI is invisible in the wider world.
The point is that the date is arguable and I don't think the sources will agree. I think I could probably find a source that gives Hinton's work in the 80s credit for changing the paradigm, or a source that (like Wikipedia) put the date in the second AI winter. All of these dates are incorrect from a certain point of view.

In my view, the mid-nineties is the middle of an S-curve that starts in the 80s and bottoms out in the 2010s, but (as you point out) is showing signs of an uptick here in the 2020s.

____

On sentence 2, I want to make a larger point:
There are several transitions that happened in the period 1980-2010:
  • Symbolic -> sub-symbolic
  • Hard computing -> soft computing
  • "Full" AI rhetoric -> "narrow" AI results
  • Scruffiness and speculation -> formal methods
These are orthogonal transitions, but because they are coincident, sometimes popular sources (and even some researchers) confuse these, sadly when they are presenting a "good guys"/"bad guys" narrative about the history or future of AI.
You point out, correctly, that plenty of late-80s symbolic AI was "narrow". It's also true that AI 1960-1975 was vociferously pursuing "general AI". The full->narrow transition happened inside and outside of symbolic AI. The second sentence was trying to avoid critiques of symbolic AI coming from uninformed sources that advocate for AGI; it's important for the reader to understand that Ben Goertzel and the AGI people were objecting to "narrow" AI and not symbolic AI. Their target was modern successful machine learning, which is (almost) always narrow. In fact, it is the AGI people who are most likely to pursue and promote ideas in the "neurosymbolic" area we've been talking about. The second sentence isn't really necessary, but, a long time ago, it replaced a sentence that incorrectly claimed the opposite.
Another point, not in the article. You also mentioned that hard->soft was something that happened inside symbolic AI as well as outside. Bayesian networks, fuzzy logic were all symbolic, all happening around the early 90s. It's also true that statistical machine learning (optimization, neural networks, "computational intelligence" stuff) is mostly naturally soft. It's also true that statistical machine learning was eventually better at creating applications that need to handle uncertainty and "fuzziness". Some popular sources (from 2005 or so) derided GOFAI on the basis that it wasn't "soft" -- which isn't true or fair.
(Just for completeness: "formal" includes both statistical machine learning and AI research based on mathematical logic, and "scruffiness" applies to the "tuning" of machine learning models and ad-hoc, hand-coded knowledge representation. So the orthogonality is more obvious on this measure.)

____

So basically, in my opinion, you're half right about everything and your point of view is needed here. Keep up the good work. ---- CharlesTGillingham (talk) 17:37, 27 July 2022 (UTC)

Continuing the Conversation on the Symbolic AI Article

Hi, Charles. Actually, I was across the Bay at your frenemy school, got my MSEE there while taking all the AI courses I could, including from Feigenbaum, McCarthy, and Winograd. Grad school and PhD in AI was UT-Austin, where I was in between the neats and the scruffies. I have worked in the field from the early 1980s to the present, have one book published in the field, some book chapters and journal articles, a few patents, and altogether some 40+ refereed publications in AI conferences or workshops—including AAAI and IJCAI. I have worked on research contracts for DARPA, ONR, IARPA, ARI, along with many corporate research projects. I didn't have any of this on my user profile, but thought I should add some of it now, as it seems more relevant. From 1987-2005, I was in various AI groups including FMC's AI group, Stanford Knowledge Systems Laboratories, and for the last nine years of that time span at Teknowledge. I knew Tom Kehler from Texas Instruments' AI group. I'm still in AI. I like Counting Crows, too.

I think we should not portray all of AI as monolithic, where first there was only symbolic AI, then at some point everyone switched gears and now there is only subsymbolic AI. Instead, there have always been subgroups—multiple strands—with competing theories and overlapping histories. E.g., Minsky's early work was on neural nets and backpropagation appears to have been invented multiple times in the 60s, then popularized by Hinton in 1986. So, even at the start and through the heights of Symbolic AI, it wasn't all one or the other.

We also need to distinguish between:

  • Media perception
  • Government and commercial funding
  • Research community perceptions

So, in both of the AI winters that Kautz mentions, both symbolic AI and neural net research continued, but to lesser extents. And after deep learning exploded circa 2012, symbolic AI still continued. And over the past twenty years there has also been a thread of researchers looking at neurosymbolic AI.

And to your point:

   ...people in the business world use the term "AI" as synonymous with "machine learning with neural networks". Symbolic AI is invisible in the wider world.

Yes, I would agree that much of the business world treats AI as the same as deep learning and symbolic AI is invisible to the wider public. But, we also want to paint an accurate picture of the state of the field, including where leaders of the field see the research going.

I know Hinton is certainly biased against symbolic AI. I was at an AAAI conference where he was invited to speak. When asked about those who viewed symbols as necessary to reasoning—or a similar question, I can't remember the exact phrasing—he said, bluntly, that they should "Just get over it." Gary Marcus has also pointed out that there is a significant bias in the deep learning community against the use of symbols or attempts to incorporate knowledge.

So, the misconceptions I want this article to address, by showing these are not the case, are:

  1. Symbolic AI died in the second AI winter.
  2. Symbolic AI only relies on rule-based methods.
  3. Symbolic AI made no contributions worth mentioning.
  4. Subsymbolic AI fixes the problems of symbolic AI, and has no problems itself.
  5. There are no examples of symbolic machine learning, instead machine learning was invented later.
  6. There are no examples of hybrid neuro-symbolic systems.

Some examples of neuro-symbolic systems include:

  • AlphaGo
  • Rossi's work
  • Research by Garcez and Lamb[2]


There is more. Marcus also points out that Google's search uses both its knowledge graph and a large language model, as a sample hybrid system, even though it is not considered an AI system. I can start writing the neurosymbolic section to address all this in a better way. I agree that it has not happened "at the level of these other approaches", but it is happening, there are good examples, and Kautz even has a taxonomy of the various approaches so far.

After that, I plan to add a discussion and examples of symbolic machine learning for the period following the AI winter.

Basically, I was just about half-way done with the article when we started talking. So, I hadn't started the section on the First AI Winter. I think we can address the concern about intractability there. Also, I haven't even started the section on techniques.

For now, I added the section below, using Kautz's language, see if it addresses your needs.

The first AI winter was a shock:

During the first AI summer, many people thought that machine intelligence could be achieved in just a few years. The Defense Advanced Research Projects Agency (DARPA) launched programs to support AI research with the goal of using AI to solve problems of national security; in particular, to automate the translation of Russian to English for intelligence operations and to create autonomous tanks for the battlefield. Researchers had begun to realize that achieving AI was going to be much harder than was supposed a decade earlier, but a combination of hubris and disingenuousness led many university and think-tank researchers to accept funding with promises of deliverables that they should have known they could not fulfill. By the mid-1960s neither useful natural language translation systems nor autonomous tanks had been created, and a dramatic backlash set in. New DARPA leadership canceled existing AI funding programs.

...

Outside of the United States, the most fertile ground for AI research was the United Kingdom. The AI winter in the United Kingdom was spurred on not so much by disappointed military leaders as by rival academics who viewed AI researchers as charlatans and a drain on research funding. A professor of applied mathematics, Sir James Lighthill, was commissioned by Parliament to evaluate the state of AI research in the nation. The report stated that all of the problems being worked on in AI would be better handled by researchers from other disciplines — such as applied mathematics. The report also claimed that AI successes on toy problems could never scale to real-world applications due to combinatorial explosion.[3]

Notes

  1. ^ Kautz, p. 11.
  2. ^ Garcez 2020.
  3. ^ Kautz 2022, p. 109.

Working on Neuro-symbolic AI Section

Just a note that I am currently working on another part of this article addressing neuro-symbolic AI. See https://en.wikipedia.org/w/index.php?title=User:Veritas_Aeterna/Work_in_Progress,_Symbolic_Artificial_Intelligence&action=edit&section=18 for work in progress. I'm currently revising the text and adding in the citations.

There are three key sections:

  • Motivation
  • Kinds of neuro-symbolic AI, with examples
  • Open research questions

I should be able to put this in within the next few days, or at least start adding the references.

There also needs to be some discussion, or at least a reference to, the controversies between deep learning adherents who swear off of symbols, such as Hinton, and those in symbolic AI. I'm not sure yet whether to put it in this section or in the controversies section.

I'm also aware we need to add a section on symbolic machine learning, partly as people seem to have forgotten the rich history of these contributions. That will be next. Then finally the controversies section. I'm happy for help there, especially with regards to philosophical arguments against symbolic AI from Dreyfus, Searle, and other philosophers. Veritas Aeterna (talk) 20:49, 4 August 2022 (UTC)

Added in the new section. It seems like both 'neurosymbolic' and 'neuro-symbolic' are used, but the latter is slightly more popular and more readable, so I went with that. Added in the new citations and tried to fix some existing ones that seemed to have been entered incorrectly. I tried to avoid getting into the symbolic versus neural debate in this section; it seems like that can go in the controversies section more easily, as it can get long!

Veritas Aeterna (talk) 01:52, 6 August 2022 (UTC)

Changes to the Lead Text to Correct Some Inaccuracies and Misleading Statements

Current Lead Text

Since many people may only read the introductory paragraphs, it is important to ensure they are correct. Unfortunately, the middle paragraph of the current lead section has some key inaccuracies and parts that are misleading. I am referring to this paragraph:

"Symbolic AI was the dominant paradigm of AI research from the mid-1950s until the middle 1990s. Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field. However, in the late 80s and 90s specific technical problems (such as brittleness and intractability) showed the limits of the symbolic approach. AI research turned to new methods (called "sub-symbolic" at the time) including connectionism, soft computing, mathematical optimization and neural networks. These methods were directed towards specific problems with specific solutions, rather than general intelligence. "Deep learning" (a sub-symbolic approach) had spectacular success in handling vision, speech recognition, speech synthesis, image generation, and machine translation. But by 2020 difficulties with bias, explanation, comprehensibility, and robustness have become more apparent with deep learning approaches and AI researchers have called for combining the best of both the symbolic and neural network approaches."

Parts that are Misleading or Inaccurate

The problem with these sentences is that they give an erroneous view of symbolic AI, especially in the three sentences in bold (added). Specifically, they propagate these viewpoints:

  1. The key problems of symbolic AI that led to the Second AI Winter were primarily (a) brittleness, and (b) computational intractability.
  2. Russell and Norvig, 2021, pages 21 and 23-25, are claimed to support that contention. Those pages do not; see the clarifications below.
  3. From the mid-1990s, AI research focused on connectionism, soft computing, mathematical optimization and neural networks.
  4. These methods were "new methods", implying there had been no prior work on connectionism, soft computing, mathematical optimization or neural networks.
  5. All of these methods were considered sub-symbolic, not just neural networks and connectionism.
  6. There is an implication that by a "turn to" these methods, AI researchers turned away from symbolic approaches. That is not true.
  7. Russell and Norvig, 2021, pages 25-26 is claimed to support "AI research turned to new methods (called "sub-symbolic" at the time) including connectionism, soft computing, mathematical optimization and neural networks". It does not.

Clarifications

However, in the late 80s and 90s specific technical problems (such as brittleness and intractability) showed the limits of the symbolic approach.
Russell and Norvig, page 21, are describing problems from earlier AI work, around 1957-1959, and prior to the Lighthill Report in 1973, which indeed mentioned combinatorial explosion as a problem.
Attempts to address computational intractability go back to the early use of heuristics in search: heuristic search, such as the A* algorithm, published in 1968, is one means of addressing this problem (a minimal A* sketch appears at the end of this Clarifications section).
Applying expert knowledge is another approach to focusing search, as Feigenbaum mentions in his CACM interview.
So, yes, this was a problem for the First AI Winter, but at this point we are talking about the Second AI Winter, starting in 1988.
The main reasons for the failure of expert systems leading to the Second AI Winter are described as problems with knowledge acquisition and handling uncertainty by Kautz. Similarly, Russell and Norvig on p. 24 say, "It turned out to be difficult to build and maintain expert systems for complex domains, in part because the reasoning methods used by the systems broke down in the face of uncertainty and in part because the systems could not learn from experience."
Brittleness applies to both deep learning and symbolic systems. It was indeed a problem for expert systems, but it is not unique to symbolic AI. I am still including it, however.
Instead of brittleness and intractability, Kautz cites two key technical problems for the end of enthusiasm for expert systems:
"The first challenge was the need for principled and practical methods for probabilistic reasoning." [Kautz, 2022, p. 110]
"The second unsolved challenge for the expert system approach was named the "knowledge acquisition bottleneck."" [Kautz, 2022, p. 110]
AI research turned to new methods (called "sub-symbolic" at the time) including connectionism, soft computing, mathematical optimization and neural networks. These methods were directed towards specific problems with specific solutions, rather than general intelligence.
Kautz characterizes the next two decades and their primary focus on p. 110: "overcoming these challenges set the workplan for the next two decades of research in AI." It is not a focus on subsymbolic systems, but instead on handling uncertainty and the knowledge acquisition bottleneck.
Both Kautz and Russell & Norvig cite probabilistic reasoning (Bayesian networks, HMMs, and later statistical relational learning) and machine learning (primarily symbolic approaches, but also SVMs and other classifiers, including Valiant's theoretical work).
It is not until about 2012 that deep learning takes off. Russell and Norvig say 2011, citing speech recognition, but Kautz starts that period at 2012, and most references I have seen, such as Marcus, also use 2012.
Indeed, [Marcus, 2019] describes neural net research as being considered unsuccessful until around 2012:

"Still, many people continued in Rosenblatt's tradition for decades. And until recently, his successors too struggled mightily. Until Big Data became commonplace, the general consensus in the Al community was that the so-called neural-network approach was hopeless. Systems just didn't work that well, compared to other methods.

... A revolution came in 2012, when a number of people, including a team of researchers working with Hinton, worked out a way to use the power of GPUs to enormously increase the power of neural networks.

Suddenly, for the first time, Hinton's team and others began setting records, most notably in recognizing images in the ImageNet database we mentioned earlier. Competitors Hinton and others focused on a subset of the database-1.4 million images, drawn from one thousand categories. Each team trained its system on about 1,25 million of those, leaving 150,000 for testing. Before then, with older machine-learning techniques, a score of 75 percent correct Was a good result; Hinton's team scored 84 percent correct, using a deep neural network, and other teams soon did even better; by 2017, Image labeling scores, driven by deep learning, reached 98 percent."

These methods were directed towards specific problems with specific solutions, rather than general intelligence.
The implication is that symbolic AI had never been focused on narrow AI before, but only on AGI. That is not the case: clearly, expert systems are an obvious case of narrow AI. So, I propose just dropping this sentence.
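
As noted above, here is a minimal, illustrative A* sketch (the Python code, grid example, and names are mine, not from any cited source), showing how an admissible heuristic focuses search and thereby pushes back against combinatorial explosion:

    import heapq

    def a_star(start, goal, neighbors, h):
        # Best-first search on f = g + h; with h = 0 this degenerates into
        # uniform-cost search, which is what blows up combinatorially.
        frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
        best_g = {}
        while frontier:
            f, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in best_g and best_g[node] <= g:
                continue  # already reached this node at least as cheaply
            best_g[node] = g
            for nxt, step_cost in neighbors(node):
                g2 = g + step_cost
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
        return None  # no path exists

    # Toy usage: a 10x10 grid with the admissible Manhattan-distance heuristic.
    def grid_neighbors(p):
        x, y = p
        return [((x + dx, y + dy), 1)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < 10 and 0 <= y + dy < 10]

    goal = (9, 9)
    print(a_star((0, 0), goal, grid_neighbors,
                 lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])))

The better the heuristic, the fewer nodes A* expands; this is the sense in which heuristic search is one way to apply knowledge to tame intractability.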

Approach to Address these Problems

I think the problem is that overall the explanation is too coarse, and does not break down the periods of the Second AI Winter, the period immediately following that when probabilistic reasoning and symbolic machine learning received much greater focus, and then the period in which deep learning took off (circa 2012). Finally, a shift to a greater focus on hybrid systems appears to have started about 2020.

I propose refining the introductory discussion to break out these periods and reserving "sub-symbolic" to describe only neural nets and connectionism, and not using it to encompass probabilistic methods, Bayesian approaches, or optimization. The latter techniques can be used for symbolic AI, deep learning, and in various hybrid logical-probabilistic approaches, such as Markov Logic Networks.

Regarding the use of "soft", fuzzy logic was introduced in 1965, and Danny Hillis founded Thinking Machines Corporation in 1983. So, there wasn't a sudden shift to soft and sub-symbolic approaches in the late 80's and neural nets didn't become dominant until about 2012. We can certainly talk more about fuzzy logic and other extensions to logic later on.

Revised Sentences and New Paragraph for Deep Learning

Here is what I propose, discussed one part at a time:

Symbolic AI was the dominant paradigm of AI research from the mid-1950s until the middle 1990s.
{ No changes. }
Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the ultimate goal of their field.
{ Just clarified that AGI was an **ultimate** goal and not the only one. Certainly, many others were focused on specific applications, just as they are today. E.g., theorem provers, planners, symbolic mathematics, etc. And of course, expert systems were, by definition, narrow AI.}
An early boom, with successes such as the Logic Theorist and Samuel's checkers-playing program, led to unrealistic expectations and promises and was followed by the First AI Winter as funding dried up.[1][2]
{ I'm adding a bit more of the early history and a mention of the first AI winter. Russell mentions Samuel's program. Kautz mentions the Logic Theorist. }
A second boom (1969-1986) occurred with the rise of expert systems, their promise of capturing corporate expertise, and an enthusiastic corporate embrace.[3][4] That boom, and some early successes, e.g., with XCON at DEC, was followed again by later disappointment.[4] Difficulties with knowledge acquisition, maintaining large knowledge bases, and brittleness in handling out-of-domain problems arose. A second AI Winter (1988-2011) followed.[5]
{ Boom and bust again, with time periods from first Russell then Kautz and some specifics on why there was a boom. It all looked very promising back then, initially. }
Subsequently, AI researchers focused on addressing underlying problems in handling uncertainty and in knowledge acquisition.[5] Uncertainty was addressed with formal methods such as Hidden Markov Models, Bayesian reasoning, and statistical relational learning. Symbolic machine learning addressed the knowledge acquisition problem with contributions including Version Space, Valiant's PAC learning, Quinlan's ID3 decision-tree learning, case-based learning, and inductive logic programming to learn relations.[6]
{ Some specifics about symbolic machine learning until I write that section. For handling uncertainty, Bayesian reasoning and HMMs are mentioned by Russell & Norvig while Kautz mentions Bayesian reasoning and statistical relational reasoning. }

---

Next, I'd start a new paragraph just to address deep learning and history to the present:

Neural networks, a sub-symbolic approach, had been pursued from early days. Early examples are Rosenblatt's perceptron learning work, the backpropagation work of Rumelhart, Hinton and Williams,[7] and work in convolutional neural networks by LeCun et al. in 1989.[8]
{ Brief intro and overview. I could cite say even more as to William Grey Walter's work on cybernetics and the work of Minsky and Papert on neural networks, but I'm trying to keep it shorter.}
However, neural networks were not viewed as successful until about 2012:

Until Big Data became commonplace, the general consensus in the AI community was that the so-called neural-network approach was hopeless. Systems just didn't work that well, compared to other methods. ...A revolution came in 2012, when a number of people, including a team of researchers working with Hinton, worked out a way to use the power of GPUs to enormously increase the power of neural networks.[9]

{ Explain why we view 2012 as the jumping-off point for when deep learning really takes off. It seems necessary to make this point, as so much of what you read attempts to rewrite history to imply that first there was symbolic AI, and then deep learning, end of story. That's just not true.}
Over the next several years, deep learning had spectacular success in handling vision, speech recognition, speech synthesis, image generation, and machine translation.
{ Acknowledge incredible results. }
However, by 2020, as inherent difficulties with bias, explanation, comprehensibility, and robustness became more apparent with deep learning approaches, AI researchers have begun calling for combining the best of both the symbolic and neural network approaches[10][11] and addressing areas that both approaches have difficulty with, such as common-sense reasoning.[9]
{ Finally, how both approaches may be best together. I may need to add more on hybrid approaches that extend logic to handle probability, I intend to put that in somewhere. }

New Paragraphs

Symbolic AI was the dominant paradigm of AI research from the mid-1950s until the middle 1990s.[12][13] Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the ultimate goal of their field.[14] An early boom, with successes such as the Logic Theorist and Samuel's checkers-playing program, led to unrealistic expectations and promises and was followed by the First AI Winter as funding dried up.[1][2] A second boom (1969-1986) occurred with the rise of expert systems, their promise of capturing corporate expertise, and an enthusiastic corporate embrace.[3][4] That boom, and some early successes, e.g., with XCON at DEC, was followed again by later disappointment.[4] Difficulties with knowledge acquisition, maintaining large knowledge bases, and brittleness in handling out-of-domain problems arose. A second AI Winter (1988-2011) followed.[5] Subsequently, AI researchers focused on addressing underlying problems in handling uncertainty and in knowledge acquisition.[6] Uncertainty was addressed with formal methods such as Hidden Markov Models, Bayesian reasoning, and statistical relational learning.[15][16] Symbolic machine learning addressed the knowledge acquisition problem with contributions including Version Space, Valiant's PAC learning, Quinlan's ID3 decision-tree learning, case-based learning, and inductive logic programming to learn relations.[6]

Neural networks, a sub-symbolic approach, had been pursued from early days and reemerged strongly in 2012. Early examples are Rosenblatt's perceptron learning work, the backpropagation work of Rumelhart, Hinton and Williams,[17] and work in convolutional neural networks by LeCun et al. in 1989.[18] However, neural networks were not viewed as successful until about 2012: "Until Big Data became commonplace, the general consensus in the AI community was that the so-called neural-network approach was hopeless. Systems just didn't work that well, compared to other methods. ...A revolution came in 2012, when a number of people, including a team of researchers working with Hinton, worked out a way to use the power of GPUs to enormously increase the power of neural networks."[9] Over the next several years, deep learning had spectacular success in handling vision, speech recognition, speech synthesis, image generation, and machine translation. However, since 2020, as inherent difficulties with bias, explanation, comprehensibility, and robustness became more apparent with deep learning approaches, an increasing number of AI researchers have called for combining the best of both the symbolic and neural network approaches[10][11] and addressing areas that both approaches have difficulty with, such as common-sense reasoning.[9] Veritas Aeterna (talk) 23:44, 9 August 2022 (UTC)

References

  1. ^ a b Kautz 2020, pp. 107–109.
  2. ^ a b Russell & Norvig 2021, p. 19.
  3. ^ a b Russell & Norvig 2021, pp. 22-23.
  4. ^ a b c d Kautz 2020, pp. 109–110.
  5. ^ a b c Kautz 2020, p. 110.
  6. ^ a b c Kautz 2020, pp. 110–111.
  7. ^ Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J. (1986). "Learning representations by back-propagating errors". Nature. 323 (6088): 533–536. doi:10.1038/323533a0. ISSN 1476-4687.
  8. ^ LeCun, Y.; Boser, B.; Denker, J. S.; Henderson, D.; Howard, R. E.; Hubbard, W.; Jackel, L. D. (1989). "Backpropagation Applied to Handwritten Zip Code Recognition". Neural Computation. 1 (4): 541–551.
  9. ^ a b c d Marcus & Davis 2019.
  10. ^ a b Rossi, Francesca. "Thinking Fast and Slow in AI". AAAI. Retrieved 5 July 2022.
  11. ^ a b Selman, Bart. "AAAI Presidential Address: The State of AI". AAAI. Retrieved 5 July 2022.
  12. ^ Kolata 1982.
  13. ^ Russell & Norvig 2003, p. 5.
  14. ^ Russell & Norvig 2021, p. 24.
  15. ^ Russell & Norvig 2020, p. 25.
  16. ^ Kautz 2020, p. 111.
  17. ^ Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J. (1986). "Learning representations by back-propagating errors". Nature. 323 (6088): 533–536. doi:10.1038/323533a0. ISSN 1476-4687.
  18. ^ LeCun, Y.; Boser, B.; Denker, J. S.; Henderson, D.; Howard, R. E.; Hubbard, W.; Jackel, L. D. (1989). "Backpropagation Applied to Handwritten Zip Code Recognition". Neural Computation. 1 (4): 541–551.

Adding the Rest of the Section on History

I'm adding the section on Uncertain Reasoning after the Second AI Winter today, then should have the section on Machine Learning within a few days. I'm trying to keep it short enough to be readable while still hitting key highlights. Veritas Aeterna (talk) 00:16, 18 August 2022 (UTC)

Here's the section on machine learning. I'm focusing on contributions made in symbolic machine learning primarily and especially in the period after the Second AI Winter up until about 2011. Veritas Aeterna (talk) 22:45, 20 August 2022 (UTC)[reply]

Adding the Sections on Techniques and Controversies[edit]

I have started working on the remaining two sections on techniques and controversies. For techniques, I will briefly mention key algorithms, projects, or contributions, with links to the appropriate Wikipedia pages. I will keep each overview brief -- a sentence or less -- so the article does not grow unmanageably long.

For the controversies section, I intend to include some comments from Gary Marcus discussing the cultural animus against symbolic AI in the deep learning community, along with criticisms of symbolic AI from Hinton. I am first moving the discussion of "GOFAI" to the controversies section, as it is definitely not a neutral term; rather, it carries the negative connotation of "old-fashioned", implying that symbolic AI has been entirely superseded.

Hopefully I can have the Techniques section sometime this week and the Controversies sometime next week. Veritas Aeterna (talk) 23:10, 29 August 2022 (UTC)[reply]

OK, I have added the Techniques and Contributions section and will start on the Controversies section next week.

Update: Added content for the section on Controversies. Added a new reference to Rodney Brooks's paper, "Intelligence without Representation".

Veritas Aeterna (talk) 02:21, 4 September 2022 (UTC) Veritas Aeterna (talk) 04:41, 13 September 2022 (UTC)[reply]

Broken short citations[edit]

In this edit, I added a reference that was missing for a broken short citation (Marcus & Davis 2019), but I noticed that there are other broken short citations, which I didn't fix (Marcus 2019; Marcus 2020; Marcus 2022).

Also, Veritas Aeterna, please note that headings should be in sentence case per MOS:HEAD. I corrected a lot of improperly formatted headings in the aforementioned edit. Thanks, Biogeographist (talk) 17:20, 25 September 2022 (UTC)[reply]

Hi, thanks so much for your improvements. Sometimes when I have pasted in quoted materials I have indeed picked up curly apostrophes or quotes. I'll also check the short citations by searching for sfn and ensuring there is a proper citation. But if you know a quicker way to style-check and proof a Wikipedia page, please let me know -- something like flake8 for checking Python code, or lint for C, only for Wikipedia pages, if we have such a tool. Oh, and yes, you're right about the headings, thanks for fixing them! Veritas Aeterna (talk) 23:11, 25 September 2022 (UTC)[reply]
Some people use {{Automated editing}} tools that may include linter-like functions, but I don't use those and don't know what they can do. On my computer I have some text services that I use on- and off-wiki: I do conversion of quote marks using one of those. If there are more serious formatting issues in an article, I will edit the text in a dedicated text editor for use of regex, etc. Biogeographist (talk) 00:51, 26 September 2022 (UTC)[reply]

GOFAI Philosophical Discussions[edit]

CharlesTGillingham wanted to move discussion of the term GOFAI to a page under Philosophy, to which I agreed, but removing the entire section on the qualification problem, which Turing first raised, removes a key part of the discussion of controversies, so I have restored it here rather than reverting his recent sequence of changes entirely. I don't mind adding a See Also link to that GOFAI discussion, if he likes. Veritas Aeterna (talk) 21:32, 5 July 2023 (UTC)[reply]

Could we describe the qualification, ramification and frame problems from McCarthy's point of view? He was the one who identified these.
It's only tangentially related to Dreyfus -- it follows from his critique of the "epistemological assumption", but he never actually wrote a program that didn't work because of it. McCarthy did. I think McCarthy's experience is much more grounded in computer science, and much more interesting and useful for future work.
The frame, ramification, and qualification problems are, in my view, related to the common-sense knowledge problem. When you try to describe real-world situations or goals, your symbolic description tends to get longer and longer the more you think about it, because you keep thinking of more special cases and details that have to be specified. It's really difficult to know whether you've got it right and can safely stop working on it. McCarthy developed logics that could side-step obviously inessential details when necessary, but this doesn't really solve the problem -- it just kicks it down the road.
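To make that concrete, here is a minimal sketch of how McCarthy's circumscription handles a default, using the standard flying-birds illustration (textbook boilerplate, not drawn from this discussion). A default rule is written with an abnormality predicate:

    \forall x \, (\mathit{Bird}(x) \land \lnot \mathit{Ab}(x) \rightarrow \mathit{Flies}(x))

Circumscribing Ab -- minimizing its extension -- means that from Bird(Tweety) alone we may conclude Flies(Tweety). Later adding a qualification such as \forall x \, (\mathit{Penguin}(x) \rightarrow \mathit{Ab}(x)) together with Penguin(Tweety) non-monotonically retracts that conclusion. The catch is exactly the one noted above: every exception still has to be thought of and written down as another Ab axiom, so the problem is deferred rather than solved.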
This is a computer science problem. It relates to the brittleness of expert systems and the difficulty of creating a useful universal ontology, and it is the way the value-alignment problem is usually framed. It belongs in this article.
Dreyfus doesn't, in my view. He's too far away from the code to be useful. ---- CharlesTGillingham (talk) 22:46, 5 July 2023 (UTC)[reply]
I can think of two solutions here:
(1) I just change this part back to what it was when it was labeled "Philosophical: critiques from Dreyfus and other philosophers" and restore the previous content, perhaps linking in the GOFAI article, or removing the sentence on Haugeland completely if you think it inaccurate.
(2) I add a paragraph describing McCarthy's views on the frame problem and his use of circumscription in an attempt to address it; then you can review it and see what you think.
Let me know which you prefer but please don't do anything just yet.
Also, I can move the part on Dreyfus to the section on Situated Robotics.
Similarly, please let me know which you prefer, but please don't do anything just yet.
By the way, the Reply formatting isn't working too well for me -- it keeps adding new material at the front of sentences. I don't know why. Veritas Aeterna (talk) 01:00, 6 July 2023 (UTC)[reply]
I still feel Dreyfus has always been irrelevant to the practice of symbolic AI. Minsky said that he "misunderstands and should be ignored." Wikipedia should cover him where he belongs, in philosophy & cognitive science. He doesn't belong in this article. ---- CharlesTGillingham (talk) 16:29, 6 July 2023 (UTC)[reply]
I can remove the Dreyfus references -- I'm trying to work with you here. I would make similar arguments about GOFAI, which I think you disagree with, as I never knew anyone who held the GOFAI view. Most were familiar with the physical symbol system hypothesis and have come to view the "sufficient" part of it as overly strong.
However, I think the following part on embodied cognition remains relevant, and can be moved to the section covering Rodney Brooks's work.
"The embodied cognition approach claims that it makes no sense to consider the brain separately: cognition takes place within a body, which is embedded in an environment. We need to study the system as a whole; the brain's functioning exploits regularities in its environment, including the rest of its body. Under the embodied cognition approach, robotics, vision, and other sensors become central, not peripheral."
So, I am moving that and trying to write transitions to tie it into Brooks's approach. See if it works for you, too, once I have finished the edits.
Regarding (1) vs (2) did you have a preference? Or yet another alternative?
Thanks for slowing things down.
Right now I am submitting some white papers to NSF and DARPA on deadlines. Veritas Aeterna (talk) 19:16, 6 July 2023 (UTC)[reply]
That would be (2). Add McCarthy, delete Dreyfus and keep Brooks.
My idea here is this: In this article, we cover problems that were discovered by computer scientists working in AI, and disputes between computer scientists. We cover criticism of the PSSH/Cognitivism/GOFAI/Strong AI hypothesis elsewhere. ---- CharlesTGillingham (talk) 00:41, 7 July 2023 (UTC)[reply]
OK, let me work on this and give it a try. Should have it by Monday. BTW, I took a class from McCarthy, but it was on LISP, and he could say all the combinations of CAR, CADR, CADDR, etc. that you can think of. FOL and automatic theorem proving were at UT-Austin. Veritas Aeterna (talk) 22:01, 7 July 2023 (UTC)[reply]
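For anyone reading along who doesn't know the convention behind those names: each A or D between the C and the R composes CAR and CDR, so CADR is "the CAR of the CDR", i.e., the second element. A purely illustrative toy analogue in Python (the function names just mirror the Lisp accessors):

    # Toy Python analogues of the Lisp list accessors (illustrative only)
    def car(lst): return lst[0]                 # first element
    def cdr(lst): return lst[1:]                # rest of the list
    def cadr(lst): return car(cdr(lst))         # second element: CAR of the CDR
    def caddr(lst): return car(cdr(cdr(lst)))   # third element

    print(cadr(["a", "b", "c"]))   # "b"
    print(caddr(["a", "b", "c"]))  # "c"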
I rewrote much of the first section and changed the title to emphasize problems encountered applying FOL to dynamic situations (the frame problem and the qualification problem) and similar difficulties with common-sense reasoning. There are no references to Haugeland and Dreyfus there anymore, and much more discussion of McCarthy's contributions with circumscription and with his view of common-sense reasoning in the Advice Taker. Suggestions welcome. Veritas Aeterna (talk) 00:25, 9 July 2023 (UTC)[reply]
This all looks very good. ---- CharlesTGillingham (talk) 18:09, 9 July 2023 (UTC)[reply]
Great! I'm glad we could reach an agreement. Veritas Aeterna (talk) 20:51, 10 July 2023 (UTC)[reply]
