Newton Howard

Newton Howard is a brain and cognitive scientist, the founder and former director of the MIT Mind Machine Project[1][2] at the Massachusetts Institute of Technology (MIT). He is a professor of computational neurology and functional neurosurgery at Georgetown University.[3] He was a professor at the University of Oxford, where he directed the Oxford Computational Neuroscience Laboratory.[4][5] He is also the director of MIT's Synthetic Intelligence Lab,[6] the founder of the Center for Advanced Defense Studies[7] and the chairman of the Brain Sciences Foundation.[8] Howard is also a senior fellow at the John Radcliffe Hospital in Oxford, a senior scientist at INSERM in Paris and a P.A.H. at the CHU Hospital in Martinique.

His research areas include cognition, memory, trauma, machine learning, comprehensive brain modeling, natural language processing, nanotechnology, medical devices and artificial intelligence.

Education and career

Howard earned his B.A. from Concordia University Ann Arbor and an M.A. in Technology from Eastern Michigan University. He went on to study at MIT and at the University of Oxford where, as a graduate member of the Faculty of Mathematical Sciences, he proposed the Theory of Intention Awareness (IA).[9] He also received a doctorate in Cognitive Informatics and Mathematics from the University of Paris-Sorbonne, where he was also awarded a Habilitation à Diriger des Recherches for his work on the Physics of Cognition (PoC).[10]

Howard is an author and national security advisor[11][12] to several U.S. government organizations,[13] and his work has contributed to more than 30 U.S. patents and over 90 publications. In 2009, he founded the Brain Sciences Foundation (BSF),[8] a nonprofit 501(c)(3) organization with the goal of improving the quality of life for those suffering from neurological disorders.

Research

Howard is known for his Theory of Intention Awareness (IA),[14] which provides a possible model for explaining volition in human intelligence, recursively throughout all layers of biological organization. He next developed the Mood State Indicator (MSI),[15] a machine learning system capable of predicting emotional states by modeling the mental processes involved in human speech and writing. The Language Axiological Input/Output system (LXIO)[15] was built upon this MSI framework and found to be capable of detecting both sentiment and cognitive states by parsing sentences into words, then processing each through time-orientation, contextual-prediction and subsequent modules, before computing each word's contextual and grammatical function with a Mind Default Axiology. The key significance of LXIO was its ability to incorporate conscious thought and bodily expression (linguistic or otherwise) into a uniform code schema.[15]
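
As a rough illustration of the word-by-word pipeline described above (a sketch only, not Howard's published implementation: the lexicon, the negation rule, the time-orientation markers and the weighting are hypothetical stand-ins), such an analysis could be organized as follows:

    # Minimal sketch of an LXIO-style word-by-word pipeline (illustrative only).
    # The axiology lexicon, past-tense markers and negation rule below are
    # hypothetical; they merely mirror the high-level description: parse words,
    # pass each through time-orientation and context modules, then score each
    # word against a default axiology.

    HYPOTHETICAL_AXIOLOGY = {"win": +1, "lose": -1, "hope": +1, "fear": -1}
    PAST_MARKERS = {"yesterday", "was", "lost", "won"}

    def time_orientation(word):
        """Crude stand-in for a time-orientation module."""
        return "past" if word in PAST_MARKERS else "present/future"

    def contextual_prediction(prev_words):
        """Toy context module: negation among the two preceding words flips valence."""
        return -1 if any(w in {"not", "never", "no"} for w in prev_words[-2:]) else +1

    def mood_score(sentence):
        """Return an overall +/- mood score for the sentence."""
        words = sentence.lower().split()
        score = 0.0
        for i, word in enumerate(words):
            base = HYPOTHETICAL_AXIOLOGY.get(word, 0)
            weight = 0.5 if time_orientation(word) == "past" else 1.0  # hypothetical weighting
            score += base * weight * contextual_prediction(words[:i])
        return score

    print(mood_score("I hope we will not lose"))  # prints 2.0 in this toy example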

In 2012, Howard published the Fundamental Code Unit (FCU)[16] theory, which uses unitary mathematics (ON/OFF +/-) to correlate networks of neurophysiological processes with higher-order function. In 2013, he proposed the Brain Code (BC)[17] theory, a methodology for using the FCU to map entire circuits of neurological activity to behavior and response, effectively decoding the language of the brain.[18]
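
A worked toy example of the ON/OFF (+/-) idea, under the assumption (not taken from the published papers) that the sign simply records whether sampled activity lies above or below a baseline, would be:

    # Illustrative ON/OFF (+/-) encoding of activity, in the spirit of the
    # Fundamental Code Unit description above. The sample rates, baseline and
    # thresholding rule are hypothetical.

    def unary_encode(rates, baseline):
        """Map each sampled firing rate to '+' (above baseline) or '-' (at or below)."""
        return "".join("+" if r > baseline else "-" for r in rates)

    samples = [12.0, 3.5, 9.8, 1.2, 15.4]        # spikes/s in successive windows (made up)
    print(unary_encode(samples, baseline=8.0))   # prints "+-+-+"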

In 2014, he hypothesized a functional endogenous optical network within the brain,[citation needed] mediated by neuropsin (OPN5). This self-regulating cycle of photon-mediated events in the neocortex involves sequential interactions among three mitochondrial sources of endogenously generated photons during periods of increased neural spiking activity: (a) near-UV photons (~380 nm), a byproduct of free radical reactions; (b) blue photons (~470 nm) emitted by NAD(P)H upon absorption of near-UV photons; and (c) green photons (~530 nm) generated by NAD(P)H oxidases upon absorption of the NAD(P)H-generated blue photons. Howard argues that the bistable nature of this nanoscale quantum process provides evidence that an on/off (unary +/-) coding system exists at the most fundamental level of brain operation.
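
The hypothesized cascade is an ordered three-step cycle; the schematic below restates that ordering in code (the approximate wavelengths come from the description above, while the data structure and printout are only an illustrative simplification):

    # Schematic of the hypothesized endogenous photon cascade described above.
    # Approximate wavelengths are taken from the text; everything else
    # (the dataclass, the linear listing of steps) is an illustrative simplification.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class PhotonStep:
        source: str
        emits_nm: int                     # approximate emission wavelength
        absorbs_nm: Optional[int] = None  # wavelength absorbed to trigger this step

    CASCADE = [
        PhotonStep("free radical reaction byproduct", 380),    # near-UV, during spiking
        PhotonStep("NAD(P)H", 470, absorbs_nm=380),             # blue, after near-UV absorption
        PhotonStep("NAD(P)H oxidases", 530, absorbs_nm=470),    # green, after blue absorption
    ]

    for step in CASCADE:
        trigger = f"~{step.absorbs_nm} nm photons" if step.absorbs_nm else "increased spiking activity"
        print(f"{step.source}: emits ~{step.emits_nm} nm (triggered by {trigger})")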

Transformers sculptures

In 2021, Howard installed two two-ton (1,814 kg) sculptures depicting Bumblebee and Optimus Prime, characters from the Transformers media franchise, outside his home in the Georgetown neighborhood of Washington, D.C. His inspiration for the sculptures came from his work with artificial intelligence and "because the Transformers represent human and machine living in harmony, if you will."[3][19] The reaction from locals was mixed, and he ran into legal issues with local government officials. He was eventually granted permission to keep the statues installed for six months, but they remained in place after that period.[3][19][20]

References

  1. ^ "MIT Mind Machine Project". Mind Machine Project. Massachusetts Institute of Technology.
  2. ^ Chandler, David (December 7, 2009). "Rethinking artificial intelligence". MIT News. Massachusetts Institute of Technology.
  3. ^ a b c Austermuhle, Martin (March 3, 2021). "Optimus Prime Faces A New And Unexpected Foe: Georgetown's Historic District". NPR. Retrieved May 8, 2022.
  4. ^ "Nuffield Department of Surgical Sciences". Nuffield Department of Surgical Sciences. University of Oxford.
  5. ^ "Oxford Computational Neuroscience Laboratory". Oxford Computational Neuroscience Laboratory. University of Oxford.
  6. ^ "Synthetic Intelligence Lab". Synthetic Intelligence Lab. Massachusetts Institute of Technology.
  7. ^ "Center for Advanced Defense Studies". Center for Advanced Defense Studies.
  8. ^ a b "Brain Sciences Foundation". Brain Sciences Foundation.
  9. ^ Howard, Newton (2002). Theory of Intention Awareness in Tactical Military Intelligence: Reducing Uncertainty by Understanding the Cognitive Architecture of Intentions. Bloomington, IN: Author House First Books Library.
  10. ^ Howard, Newton (1999). "The Logic of Uncertainty and Situational Understanding". Center for Advanced Defense Studies (CADS)/Institute for the Mathematical Complexity & Cognition (MC) Centre de Recherche en Informatique, Université Paris Sorbonne.
  11. ^ JMO, CWID (2007). "CWID - Coalition Warrior Interoperability Demonstration" (PDF). CWID JMO.
  12. ^ NATO, MIP (2007). "Joint C3 Information Exchange Data Model Overview" (PDF). MIP-NATO Management Board.
  13. ^ Howard, Newton (2013). "Development of a Diplomatic, Information, Military, Health, and Economic Effects Modeling System" (PDF). Massachusetts Institute of Technology.
  14. ^ Howard, Newton (2002). Theory of Intention Awareness in Tactical Military Intelligence: Reducing Uncertainty by Understanding the Cognitive Architecture of Intentions. Bloomington, IN: Author House First Books Library.
  15. ^ a b c Howard, Newton; Guidere, Mathieu (January 2012). "LXIO: The Mood Detection Robopsych" (PDF). The Brain Sciences Journal. 1: 98–109. doi:10.7214/brainsciences/2012.01.01.05.
  16. ^ Howard, Newton (2012). "Brain Language: The Fundamental Code Unit" (PDF). The Brain Sciences Journal. Brain Sciences Foundation.
  17. ^ Howard, Newton (2013). "The Twin Hypotheses". Advances in Artificial Intelligence and Its Applications. Lecture Notes in Computer Science. Vol. 8265. Springer. pp. 430–463. doi:10.1007/978-3-642-45114-0_35. ISBN 978-3-642-45113-3.
  18. ^ Howard, Newton (2015). The Brain Language. London, UK: Cambridge Scientific Publishing. ISBN 978-1-908106-50-6.
  19. ^ a b Kois, Dan (March 4, 2021). "You Gotta Love These Two Enormous Transformers Statues This Guy Erected on His Fancy Georgetown Block". Slate. Retrieved May 8, 2022.
  20. ^ Mathews, Topher (January 3, 2022). "Transformers Carry On". The Georgetown Metropolitan. Retrieved May 8, 2022.