Talk:Rule-based machine learning


Sorting out 'rule-based system'

This article presently excludes rule-based machine learning (RBML) from rule-based systems (RBS), but the rule-based system article is largely written in generic language and should conceptually cover all rule-based systems ... until it doesn't. It's a big mess.

For my own notes, I'm using rule-based system as an inclusive, conceptual term, and I've marshalled the hand-crafted aspects into [[boutique rule-based system]] (works for me!), but I doubt that would fly here. Any better ideas fit for public consumption? — MaxEnt 00:18, 18 March 2017 (UTC)

Look what I found in my notes: "A new thinking came about in the early '80s when we changed from rule-based systems to a Bayesian network. Bayesian networks are probabilistic reasoning systems. An expert will put in his or her perception of the domain. A domain can be a disease, or an oil field—the same target that we had for expert systems." (from A Conversation With Judea Pearl). There was once a scalability problem associated with rule-based systems on the inference side, quite independent of the hand-construction hassle. It's not apparent how RBML side-steps the inference problem. Or are the rules in this context perhaps made of some incompatible green cheese? Are the old-style rules more like logic, and the new-style rules more like formulas? The article really needs to explain this distinction better. — MaxEnt 00:28, 18 March 2017 (UTC)
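To make the logic-vs-formula question concrete, here is a toy sketch (my own illustration in Python; the toy diagnosis domain, the rule weights, and the function names are all invented for the example): an old-style rule is a crisp implication that either fires or doesn't, while an RBML-style rule carries a learned weight and gets combined numerically with the other rules.

<syntaxhighlight lang="python">
# Old style: a hand-crafted, logic-like production rule.
# Human-readable, but brittle: it fires or it doesn't.
def expert_rule(symptoms):
    if "fever" in symptoms and "cough" in symptoms:
        return "flu"
    return "unknown"

# New style: formula-like rules, each contributing weighted evidence.
# The weights here are made up for illustration; in RBML they would
# be learned from data rather than set by hand.
WEIGHTED_RULES = [
    ({"fever", "cough"}, "flu",  0.8),
    ({"fever"},          "flu",  0.3),
    ({"sneezing"},       "cold", 0.6),
]

def evidence(symptoms, label):
    """Sum the weights of every matching rule for the given label."""
    return sum(w for cond, lab, w in WEIGHTED_RULES
               if lab == label and cond <= symptoms)

print(expert_rule({"fever", "cough"}))      # -> flu (rule fired)
print(evidence({"fever", "cough"}, "flu"))  # -> 1.1 (0.8 + 0.3)
</syntaxhighlight>

If that reading is right, chaining crisp rules is where the old inference-scalability problem bit, and summing weights is closer to evaluating a formula, which may be part of how RBML side-steps it; but the article should say so explicitly.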

Singular vs Smooth Model (following on MaxEnt's remarks above)

I wonder if "Singular Model" (as opposed to RBML in the article) would be better renamed "Smooth Model". My thought behind this is that the recent research trend around Neurosymbolic AI (see Hybrid_intelligent_system) is also about trying to uncover the implicit decision rules embedded in a neural network. See, for example, Lamb et al.[1] on this. Hence one can assume that any model is a kind of decision model whose knowledge is stored in a more or less crisp/diffuse way (human-interpretable rules vs. a vague set of weights). RBML would therefore sit higher on this "crispy" (readable) scale than more global methods (e.g. neural networks). Same for Bayesian networks. Any thoughts on this? — GenEars (talk) 16:40, 5 November 2021 (UTC)
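To illustrate the scale I have in mind, a small sketch (my own example, assuming scikit-learn is available; the four data points and feature names are invented): the same data fitted by a decision tree, whose learned knowledge prints as readable IF/THEN rules, and by a logistic regression, whose learned knowledge is just a weight vector.

<syntaxhighlight lang="python">
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.linear_model import LogisticRegression

X = [[0, 0], [0, 1], [1, 0], [1, 1]]  # two binary features
y = [0, 0, 0, 1]                      # label = f0 AND f1

# Crisp end of the scale: the learned model *is* a set of rules.
tree = DecisionTreeClassifier().fit(X, y)
print(export_text(tree, feature_names=["f0", "f1"]))

# Diffuse end: the same knowledge stored as weights plus a bias;
# recovering the AND rule from these numbers is exactly the kind of
# rule extraction the neurosymbolic work is after.
logit = LogisticRegression().fit(X, y)
print(logit.coef_, logit.intercept_)
</syntaxhighlight>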

References

  1. ^ Lamb, Luis C.; Garcez, Artur; Gori, Marco; Prates, Marcelo; Avelar, Pedro; Vardi, Moshe (21 May 2020). "Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective". arXiv:2003.00330 [cs]. http://arxiv.org/abs/2003.00330.