Relational Nets Over Attractors, A Primer: Part 1, Design for a Mind, Version 2

73 Pages. Posted: 9 Jul 2022. Last revised: 18 Jul 2022

Date Written: July 13, 2022

Abstract

Miriam Yevick’s 1975 holographic logic suggests we need both symbols and networks to model the mind. I explore that premise by adapting Sydney Lamb’s relational network notation to represent a logical structure over basins of attraction in a collection of attractor landscapes, each belonging to a different neurofunctional area (NFA) of the cortex. Peter Gärdenfors provides the idea of a conceptual space, a low-dimensional projection of the high-dimensional phase space of an NFA. Vygotsky’s account of language acquisition and internalization is used to show how the mind is indexed. We then define a MIND as a relational network of logic gates over the attractor landscape of a neural network loosely partitioned into many NFAs. An INDEXED MIND consists of a GENERAL network and an INDEXING network adjacent to and recursively linked to it. A NATURAL MIND is one whose substrate is the nervous system of a living animal. An ARTIFICIAL MIND is one whose substrate is inanimate matter engineered by humans to be a mind; it becomes AUTONOMOUS when it is able to purchase its compute with services rendered.
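To make the abstract's definitions concrete, here is a minimal, hypothetical sketch in Python: each NFA is modeled as a labeled set of attractor basins, and a MIND as a collection of logic gates defined over which basins are currently active. All class and field names (NFA, Gate, Mind, settle, step) are illustrative assumptions, not the paper's notation or implementation.

```python
from dataclasses import dataclass, field


@dataclass
class NFA:
    """A neurofunctional area: a labeled set of attractor basins."""
    name: str
    basins: set[str]
    active: set[str] = field(default_factory=set)  # basins currently settled into

    def settle(self, basin: str) -> None:
        # Activating a basin stands in for the NFA's state falling into that attractor.
        if basin not in self.basins:
            raise ValueError(f"{basin!r} is not a basin of {self.name}")
        self.active.add(basin)


@dataclass
class Gate:
    """A logic gate over basins: fires when all of its input basins are active."""
    inputs: list[tuple[str, str]]   # (NFA name, basin name) pairs
    output: tuple[str, str]         # basin to activate when the gate fires


@dataclass
class Mind:
    """A relational network of gates over a collection of NFAs (the GENERAL net).

    An INDEXED MIND would add a second such structure (the INDEXING net)
    recursively linked to this one; that is omitted here for brevity.
    """
    nfas: dict[str, NFA]
    gates: list[Gate]

    def step(self) -> None:
        # Fire every gate whose input basins are all active.
        for gate in self.gates:
            if all(basin in self.nfas[nfa].active for nfa, basin in gate.inputs):
                out_nfa, out_basin = gate.output
                self.nfas[out_nfa].settle(out_basin)


# Usage: two sensory NFAs jointly drive a basin in a conceptual NFA.
vision = NFA("vision", {"red", "round"})
taste = NFA("taste", {"sweet", "sour"})
concepts = NFA("concepts", {"apple", "lemon"})
mind = Mind(
    nfas={n.name: n for n in (vision, taste, concepts)},
    gates=[Gate(inputs=[("vision", "red"), ("taste", "sweet")],
                output=("concepts", "apple"))],
)
vision.settle("red")
taste.settle("sweet")
mind.step()
print(concepts.active)  # {'apple'}
```

The design choice to represent basins as discrete labels is purely expository: it captures the logical structure over attractors that the abstract describes while abstracting away the underlying neural dynamics.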

Keywords: neuroscience, cognitive science, brain, complex neurodynamics, language, cognition, mind, artificial intelligence, deep learning, artificial neural nets

Suggested Citation

Benzon, William L., Relational Nets Over Attractors, A Primer: Part 1, Design for a Mind, Version 2 (July 13, 2022). Available at SSRN: https://ssrn.com/abstract=4141479 or http://dx.doi.org/10.2139/ssrn.4141479

