Open Access Review
The Quest for Cognition in Purposive Action: From Cybernetics to Quantum Computing
Pietro Morasso 1,*
1 Istituto Italiano di Tecnologia, Center for Human Technologies, Robotics, Brain, and Cognitive Sciences, 16152 Genoa, Italy
*Correspondence: pietro.morasso@iit.it (Pietro Morasso)
J. Integr. Neurosci. 2023, 22(2), 39; https://doi.org/10.31083/j.jin2202039
Submitted: 1 August 2022 | Revised: 7 November 2022 | Accepted: 9 November 2022 | Published: 16 February 2023
Copyright: © 2023 The Author(s). Published by IMR Press.
This is an open access article under the CC BY 4.0 license.
Abstract

Norbert Wiener and Nikolai Bernstein set the stage for a worldwide multidisciplinary attempt to understand how purposive action is integrated with cognition in a circular, bidirectional manner, both in the life sciences and in engineering. Such a ‘workshop’ is still open and far from a satisfactory level of understanding, despite the current hype surrounding Artificial Intelligence (AI). The problem is that Cognition is frequently confused with Intelligence, overlooking a crucial distinction: the type of cognition that is required of a cognitive agent to meet the challenge of adaptive behavior in a changing environment is Embodied Cognition, which is antithetical to the disembodied and dualistic nature of the current wave of AI. This essay offers a perspective formulation of a cybernetic framework for the representation of actions that, following Bernstein, is focused on what has long been considered the fundamental issue underlying action and motor control, namely the degrees of freedom problem. In particular, the paper reviews a solution to this problem based on a model of ideomotor/muscle-less synergy formation, namely the Passive Motion Paradigm (PMP). Moreover, it is shown how this modeling approach can be reformulated in a distributed manner based on a self-organizing neural paradigm consisting of multiple topology-representing networks with attractor dynamics. The computational implications of such an approach are also briefly analyzed, looking at possible alternatives to the von Neumann paradigm, namely neuromorphic and quantum computing, aiming in perspective at a hybrid computational framework for integrating digital, analog, and quantum information. It is also suggested that such a framework is crucial not only for the neurobiological modeling of motor cognition but also for the design of the cognitive architecture of autonomous Industry 4.0 robots that are supposed to interact and communicate naturally with human partners.

Keywords
cybernetics
degrees of freedom problem
embodied cognition
passive motion paradigm
equilibrium point hypothesis
self-organization
topology representing networks
quantum brain hypothesis
1. The Cybernetic Framework

The invention of cybernetics by Norbert Wiener more than seventy years ago [1] marked the acquisition of two main interdisciplinary concepts: (1) the large common ground between neurophysiology and engineering methodologies and (2) the unitary nature of the multi-scale/multi-level investigation of “cognitive agents”, whether biological or artificial.

There is no doubt that modern engineering methodologies, from informatics to signal processing, control, and telecommunications, are in some measure spin-offs with a common cybernetic origin. At the same time, we should remember that Wiener and colleagues [2] advocated for neurophysiology “a new step in the study of that part of neurophysiology which concerns not solely the elementary processes of nerves and synapses but the performance of the nervous system as an integrated whole”. This was the preliminary background that suggested, a few years later, the proposal of “Cybernetics” [1] to denote “the entire field of control and communication theory, whether in the machine or in the animal”, based on the concept that “the problems of control engineering and communication engineering were inseparable and that they centered not around the techniques of electrical engineering but the much more fundamental notion of the message, whether this should be transmitted by electrical, mechanical, or nervous means”.

One of the early accomplishments of cybernetics was to establish the role of feedback both in engineering design and in biology. Not much attention was devoted to cognition per se, although it was clear that feedback and feedback control may require the integration, in the closed loop, of specific information/knowledge, as reflected in the concept of “control by informative feedback”. A system that relies on feedback for its behavior and stability is fundamentally linked and integrated with the surrounding environment, partially destroying the clarity and rationale of simple causal reasoning: in the closed loop between two interacting systems (say a purposive agent and its environment), the first system influences the second and the second influences the first, leading to a circular pattern of interaction that requires analyzing the system as a whole. Such intrinsic circularity of purposive action inexorably leads to the notion of cognition as a necessary side-effect of feedback control, a concept that was clearly stated by Maturana and Varela [3], thus complementing the overall view of Cybernetics: they proposed Enactivism, namely a position in cognitive science that argues that cognition arises through a dynamic interaction between an acting organism and its environment. Stretching this concept to the extreme, the authors stated that it is valid for all organisms, with or without a nervous system, because living implies adaptation to a specific environment.

However, even avoiding such an extreme position and focusing attention on purposive agents like humans and humanoid robots, the close relation suggested by Enactivism between feedback and adaptation to the environment clarifies why and how cognition is fundamental for purposive action, both at the phylogenetic and the ontogenetic level. Along the same lines, a specific view of the nature of cognition has emerged, known as Embodied Cognition [4]. This theory, fully positioned in the cybernetic framework, is fundamentally opposed to the many forms of dualism that, since the time of Descartes, separated the body from the mind, pragmatic from intellectual activities, hardware from software, and so on: cognitive processes are shaped by and integrated with the sensory and motor processes of the entire body in its continuous interaction with the environment. Such circularity also resonates with another stream of psychological research on cognitive development in children, expressed, in particular, by the “circular reaction strategy” advocated by Jean Piaget [5], namely the efference-reafference cycle that allows humans, particularly during early development, to learn sensory-motor transformations via an active exploration of the environment [6, 7, 8]. Moreover, this strategy is a self-organizing paradigm that can be associated with self-organizing neurodynamics for learning sensory-motor transformations [9, 10, 11, 12, 13, 14].

At the same time, we cannot ignore a divergent line of research that initially had a strong link with cybernetics, namely Artificial Intelligence (AI) [15]. Apart from the unresolved issue of the specific relation between cognition and intelligence, it is a fact that AI, departing from the cybernetic context, was articulated into two main streams, one focused on the symbolic representation of knowledge (Symbolic AI) and another (Connectionist AI) focused on the acquisition of knowledge via supervised training (typically by using the backpropagation algorithm) of feedforward neural networks [16]. The former stream is currently active in the development of cognitive architectures for robotics [17] and the latter in deep learning techniques [18] aiming at Artificial General Intelligence (AGI). In both cases, however, the type of cognition that is aimed at is overall “disembodied” and what is lost is one of the fundamental elements of cybernetics, namely the issue of self-organization through agent-environment interaction, learning, and adaptation. However, we should not forget that the self-organization issue was kept alive, since the early days of cybernetics, by two minority streams of research in neural networks: (1) self-organizing neural networks [12] trained by unsupervised Hebbian learning or self-supervised learning [19], inspired by the already mentioned Piagetian “circular reaction strategy”; (2) associative memories [20, 21, 22, 23, 24], also trained by Hebbian learning and characterized by a collective energy function that implies attractor dynamics.

In this framework, articulated around the central role of cybernetics for understanding the organization of purposive actions in humans and robots, we should also consider the markedly original work of Nikolai Bernstein [25]. Although he is mostly known for the definition of the “degrees of freedom problem” as the crucial topic underlying the organization of action, his research achievements are indeed multifaceted and synergic in many senses with cybernetics. His early research was aimed at overcoming the limitations of Pavlov’s theory of conditioned reflexes: Bernstein clearly stated in 1924 that such a theory cannot explain human skills because it ignores their purposeful character [26]. A similar evolution occurred in Western neurophysiology when the limitations of the spinal reflex as the basic building block of motor neurophysiology, advocated by Charles Sherrington [27], became evident, thus promoting the emergence of cognitive neuroscience. It is important to note that the importance of feedback, central in the development of Wiener’s cybernetics, was equally clear in Bernstein’s mind, not only as a crucial ingredient of motor control (providing corrections of the ongoing movement) but, even more importantly, from the cognitive point of view, suggesting a mechanism of anticipation or “ante factum corrections” [26]. In agreement with the rationale of embodied cognition, Bernstein suggested that the organization of purposive actions is driven by the goal, which is the “meaning of the action” and plays the role of an invariant in the production of the action. This line of research led to the theory of non-individualized control of complex systems and the principle of minimal interaction [28].

According to Berthoz and Petit [29], Bernstein was one of the first to conceive anticipation/prediction as a constructive element of purposive action, although we should also consider that this issue has a clear link with the ideomotor theory of action, dating back to James’ Principles of Psychology [30] and recently revisited [31]. Moreover, the concept that the “idea” of an action, i.e., the predicted/desired sensory consequences of an action, applies both to real (overt) and imagined (covert) actions [32] is reflected in the concept of Kinesthetic Imagination [33, 34] as a driving force for the acquisition of skilled behavior.

In summary, the cybernetic framework for the representation of action must be focused on what has long been considered the fundamental issue underlying action and motor control, namely the degrees of freedom problem. On the other hand, the solution to this problem suggested by Bernstein consisted, essentially, of “freezing” the number of possibilities at the beginning of motor learning, an idea revisited years later [35] under the name of Uncontrolled Manifold (UCM). The problem here is that the selection of the Degrees of Freedom (DoFs) that need to be “frozen” for a specific task is far from straightforward, and thus the advocated reduction of complexity for the brain tends to disappear. The same kind of conceptual contradiction can also be found in the proposal of “muscle synergies” [36] as basic building blocks for the construction of natural motor behavior. The alternative, proposed by the author [37, 38], is based on a computational model for the generation of “muscle-less synergies” or “ideomotor synergies” that apply both to covert (imagined) actions and overt (real) actions, in agreement with the theory of the Neural Simulation of Action formulated by Jeannerod [32]. The rest of the paper builds upon ideomotor/muscle-less synergy formation to clarify how this modeling approach is a natural heir of cybernetics, on one side, and can be grounded, on the other, on self-organizing neural modeling, including possible quantum computing implications.

The rationale of muscle-less synergies is supported by the discovery of motor imagery [39, 40, 41, 42] and, as a consequence, the distinction between ideomotor synergy formation and synergy control: the former item has mainly the purpose of anticipating the consequences of a plan of action through an internal simulation that includes the selection and recruitment of the required DoFs as well as their ranking according to the degree of relative relevance for the action; the latter item, which is relevant only in the case of overt action, includes the activation of the relevant muscles by blending different control strategies: feedforward, feedback, and stiffness (via coactivation of antagonistic muscles). The kinesthetic patterns generated by the muscle-less synergies are also crucial for optimally tuning the mentioned control strategies.

The main purpose of this review paper is to offer a computational perspective for the design of cognitive architectures of Industry 4.0 robots that are supposed to interact and communicate with human partners. The working hypothesis is that, to achieve that purpose, humans and robots should share the overall computational organization, although the detailed “hardware” may be quite different. This is the innovative contribution of the paper, which is meant to re-evaluate the deep rationale of cybernetics, namely the belief that neurobiology and neurotechnology can feed and improve each other: a belief that also underlies the field of integrative neuroscience.

2. A Computational Model of Ideomotor Synergy Formation

The computational model of synergy formation for muscle-less or ideomotor synergies is based on the Passive Motion Paradigm (PMP) [43, 44]. It was conceived for explaining and reproducing biological motion, with particular attention to the spatio-temporal invariants that characterize common human gestures such as reaching in 2D [45], reaching in 3D [46], whole-body gesturing as when writing on a blackboard [47], handwriting and hand-drawing [48], and bimanual coordination [49]. The invariant features indicate that the figural and kinematic aspects of human gestures are not independent, and this figural-kinematic link holds whatever the number of DoFs recruited for a specific action. In particular, in point-to-point, unconstrained movements the trajectory is (approximately) straight and the speed profile is bell-shaped and symmetric, whatever the starting point, direction, and length. In common gestures, where the figural aspect is meant to express a specific meaning and is composed as a sequence of primitive gestures, the figural-kinematic linkage is expressed by the anti-correlation of the speed and curvature profiles: the times of peak curvature coincide with the times of minimum speed and the times of maximum speed coincide with the times of minimum curvature (Fig. 1, Ref. [38]).

Fig. 1.

Spatio-temporal or figural-kinematic invariants in trajectory formation. (A) Planar reaching movements between six target points; note the invariant straight point-to-point trajectories and the invariant bell-shaped speed profiles. (B) Three examples of continuous hand gestures displayed as digitized trajectories, including the profiles of the velocity (V) and curvature (C); note the anti-correlation of the two profiles. From Morasso, P. “A vexing question in motor control: the degrees of freedom problem”. Front. Bioeng. Biotechnol. 9:783501, 2022. [38].
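The figural-kinematic coupling can be checked numerically in a simple case that is known to satisfy the 2/3 power law exactly, namely an ellipse traced at a constant angular rate. The following sketch (Python; the ellipse parameters are arbitrary illustrative choices) verifies both the anti-correlation of the speed and curvature profiles and the constancy of the ratio between speed and curvature to the power –1/3:

```python
import numpy as np

# elliptic gesture traced at constant angular rate: the classic case in which
# the speed-curvature anti-correlation and the 2/3 power law hold exactly
a, b, w = 2.0, 1.0, 2*np.pi
t = 0.13 + np.linspace(0, 1, 1000, endpoint=False)   # one full cycle
x, y = a*np.cos(w*t), b*np.sin(w*t)
vx, vy = -a*w*np.sin(w*t), b*w*np.cos(w*t)           # first derivatives
ax, ay = -a*w*w*np.cos(w*t), -b*w*w*np.sin(w*t)      # second derivatives

v = np.hypot(vx, vy)                                  # speed profile
k = np.abs(vx*ay - vy*ax) / v**3                      # curvature profile

# the peak of curvature coincides with the minimum of speed (anti-correlation)
print(t[np.argmax(k)] % 1, t[np.argmin(v)] % 1)
# and v = g * k**(-1/3) with g constant: the relative spread of g is ~0
g = v * k**(1/3)
print(np.std(g) / np.mean(g))
```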

There have been attempts to explain the two main features of the spatio-temporal invariants in terms of specific mathematical models: for example, the minimization of jerk for the approximation of reaching movements [50] and the 2/3 power law for reproducing repetitive curved shapes [51]. The proposed model, based on the PMP, was originally conceived in the framework of the Equilibrium Point Hypothesis (EPH) [52, 53, 54], namely the idea that the motor system has point-attractor dynamics determined by the visco-elastic properties of muscles. In other words, the body is viewed as a network of spring-like elements that store elastic energy, contributing to the global potential energy that recapitulates, in a smooth, analog manner, the complex set of bodily interactions, providing a “landscape”, with hills and valleys, that induces the overall body model to navigate “passively” in the landscape, attracted by the nearest equilibrium configuration. The minimization of potential energy is a “global process” arising from “local interactions”. The brain can tune such local interactions in a task-oriented manner, modifying the shape of the landscape and the corresponding force field; thus, there is no need for the brain to represent and control actions directly and continuously, because an indirect and discontinuous intervention is sufficient: preparing new equilibrium points in advance, in anticipation of the future course of the action. This is a general concept that is fully in tune with the Bernsteinian viewpoint, whereas it seems at odds with the emphasis on continuous feedback attributed to the cybernetic point of view that, in a narrow interpretation, could regard the generation and control of actions as a “servomechanism”. But this is just a narrow view, and the PMP model is intended to inherit the main ideas on the representation and generation of actions from both the general cybernetic view, on one side, and the Bernsteinian view, through EPH, on the other, with a clear link to artificial neural networks with attractor dynamics such as the associative memories proposed by Hopfield [20].
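To make the landscape metaphor concrete, the following minimal sketch (Python; a single “body” node and invented anchor points and stiffness values, not parameters taken from the cited literature) relaxes a node attached by spring-like elements to three anchors: gradient descent on the global elastic energy is the “passive motion”, and re-tuning one stiffness reshapes the landscape and displaces the equilibrium point, which is how the brain is assumed to steer the process indirectly:

```python
import numpy as np

# a "body" node connected by spring-like elements to three anchor points;
# the global potential energy E(p) = sum_i 0.5 * k_i * |p - a_i|^2 defines
# a landscape whose valley is the equilibrium configuration
anchors = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
k = np.array([1.0, 1.0, 1.0])           # spring stiffnesses (tunable)

def grad_energy(p):
    return (k[:, None] * (p - anchors)).sum(axis=0)

p = np.array([2.0, 2.0])                # initial configuration
for _ in range(500):                    # passive gradient descent in the landscape
    p -= 0.05 * grad_energy(p)
print(p)                                # equilibrium: stiffness-weighted centroid

k = np.array([4.0, 1.0, 1.0])           # re-tuning a stiffness reshapes the landscape
for _ in range(500):
    p -= 0.05 * grad_energy(p)
print(p)                                # ...and displaces the equilibrium point
```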

The PMP model solves the degrees of freedom problem in an implicit manner, whatever the degree of kinematic redundancy, by avoiding ill-posed transformations, like inverse kinematics, and counting only on well-posed computations, such as mapping joint angles to end-effector position (direct kinematics) and mapping end-effector forces to joint torques (direct statics).

In agreement with the theory of the neural simulation of action [32], as a unifying mechanism for motor cognition, the PMP model is suggested to apply both to overt or real actions, which imply the activation of muscle synergies, and to covert actions, which refer to the specific cognitive aspects of action, in terms of anticipation and imagination: the goals of action are expressed as a set of elastic force fields applied to specific parts of the body and then diffused to the whole-body network. Remarkably, force fields are additive, thus providing a natural composition of complex gestures in terms of motor primitives. Moreover, the original PMP model of synergy formation was extended [55] by incorporating a non-linear gating mechanism of the virtual force field, similar to the GO-signal of the vector-integration-to-endpoint (VITE) model [56], which corresponds to the well-known cortical-subcortical loop and induces terminal attractor dynamics in the synergy-formation model [57, 58].

Fig. 2 (Ref. [38]) shows a simplified version of the PMP model as a pair of non-linear interconnected modules: module A operates in the low-dimensional exteroceptive or egocentric space (typically 3D); module B operates in the high-dimensional proprioceptive space (nD, where n is the number of DoFs of the model). The input to module A is a final target point PT, which triggers the generation of a force field FT in the exteroceptive space pointing to the final target: its intensity (modulated by the gain matrix KT) is proportional to the distance of the moving target pT(t) from the final target. The force field is gated by the Γ-command, which is a non-linear gain function, null before start time and then quickly increasing to infinity at the prescribed termination time: this induces a gradient descent in the force field, producing a moving target point pT(t) that reaches the final target in the prescribed time, with a bell-shaped velocity profile.

Fig. 2.

Simplified PMP model of synergy formation, as a pair of non-linear interconnected modules (A and B). A operates in the low-dimensional exteroceptive or egocentric space (typically 3D) and B in the high-dimensional proprioceptive space (nD, where n is the number of DoFs of the body model). The input to module A is a final target point PT, which triggers the generation of a force field FT in the exteroceptive space pointing to the final target: its intensity (modulated by the gain matrix KT) is proportional to the distance of the moving target pT(t) from the final target PT; the force field is gated by the Γ-command, which is a non-linear gain function, null before start time and then quickly increasing to infinity at the prescribed termination time: this induces a gradient descent in the force field, producing the moving target point pT(t) that reaches the final target in the prescribed time, with a bell-shaped velocity profile. The second module (B) implements the PMP by applying to the end-effector a force field Fee, proportional to the distance of the end-effector position pee(t) from the moving target point pT(t) (modulated by the gain matrix Kee). This force field is mapped, through the transposed Jacobian matrix JT, into the corresponding torque field in the proprioceptive space; the torque field is gated by the same Γ-command of module A and mapped into the joint velocity vector q˙ using the compliance matrix C: this matrix ranks the degree of involvement of the different DoFs in the synergy formation process. The joint velocity vector q˙ is transformed into the velocity vector of the end-effector p˙ee through the Jacobian matrix J, finally yielding the evolution of the position of the end-effector pee(t) via integration and thus closing the loop. The two gradient-descent processes, in the exteroceptive and proprioceptive domains respectively, are synchronized by the same gating signal, which provides an overall terminal-attractor dynamics: the moving target reaches the final target together with the end-effector and the final body configuration at the termination time of the Γ-function. From Morasso, P. “A vexing question in motor control: the degrees of freedom problem”. Front. Bioeng. Biotechnol. 9:783501, 2022. [38].

The second module (B) implements the PMP by applying to the end-effector a force field Fee, proportional to the distance of the end-effector position pee(t) from the moving target point pT(t) (modulated by the gain matrix Kee). This force field is mapped, through the transposed Jacobian matrix JT, into the corresponding torque field in the proprioceptive space; the torque field is gated by the same Γ-command of module A, and the result is mapped into the joint velocity vector q˙ using the compliance matrix C, which ranks the degree of involvement of the different DoFs in the synergy formation process. The joint velocity vector is transformed into the velocity vector of the end-effector (p˙ee) through the Jacobian matrix J, finally yielding the evolution of the position of the end-effector pee(t) via integration and thus closing the loop. The two gradient-descent processes, in the exteroceptive and proprioceptive domains respectively, are synchronized by the same gating signal, i.e., the Γ-command, which provides an overall terminal-attractor dynamics: the moving target reaches the final target together with the end-effector and the final body configuration at the termination time of the Γ-function.

As clarified above, the interaction/integration between the two representation levels (exteroceptive and proprioceptive) is provided by the Jacobian matrix of the kinematic transformation (or forward kinematic function), which is the main component of the body model:

(1) $p_{ee} = f(q), \qquad J = \dfrac{\partial p_{ee}}{\partial q}$

The simulation of the model of synergy formation in Fig. 2 consists of the integration of the following ordinary differential equations (ODEs):

(2) $\begin{cases} \dfrac{dp_T}{dt} = \Gamma(t)\, K_T \left( P_T - p_T(t) \right) \\ \dfrac{dq}{dt} = \Gamma(t)\, C\, J^T K_{ee} \left( p_T(t) - p_{ee}(t) \right) \\ \dfrac{dp_{ee}}{dt} = J\, \dfrac{dq}{dt} \end{cases}$

From the kinematic point of view, the Jacobian is not an invertible operator: it provides a unique solution in the mapping from the proprioceptive to the exteroceptive domain (q˙ → p˙ee) but it is ill-posed in the opposite direction because infinitely many proprioceptive patterns (or none) can match a given exteroceptive pattern. However, the situation is reversed if we consider generalized forces instead of generalized movements: the mapping from exteroceptive to proprioceptive patterns becomes well-posed through the transpose Jacobian. Such complementarity is the basic rationale of the PMP model: the instantiation of the target induces a force field in the exteroceptive manifold and this field is mapped to the proprioceptive manifold through the transpose Jacobian, producing a high-dimensional torque field that drives the concurrent gradient descent of the body model, ultimately providing the trajectory of the end-effector through the same Jacobian: this closes the causal loop between the two manifolds. Remarkably, although the Jacobian operator of a redundant kinematic system is not invertible, it is possible to regularize the DoF problem by separating the process into two streams that move in opposite directions and in different domains. The crucial point is that the simulation of this model of synergy formation is consistent with the spatio-temporal invariants of biological motion.
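As an illustration of how Eqns. 1 and 2 work together, the following sketch (Python) integrates the model with a forward Euler scheme for a hypothetical 4-DoF planar arm; the link lengths, gains, compliance values, and the minimum-jerk time base used to build the Γ-command are illustrative assumptions, not values taken from the cited papers:

```python
import numpy as np

def gamma(t, T, cap):
    """Terminal-attractor gating: xi is a minimum-jerk time base and
    Gamma = xi_dot/(1 - xi) diverges at t = T (capped for Euler stability)."""
    s = min(t / T, 1.0)
    xi = 10*s**3 - 15*s**4 + 6*s**5
    xi_dot = (30*s**2 - 60*s**3 + 30*s**4) / T
    return min(xi_dot / max(1.0 - xi, 1e-12), cap)

def fkin(q, lengths):
    """Direct kinematics of a planar n-link arm (a well-posed mapping)."""
    th = np.cumsum(q)
    return np.array([np.sum(lengths*np.cos(th)), np.sum(lengths*np.sin(th))])

def jacobian(q, lengths):
    """Direct differential kinematics: J = d(p_ee)/dq (Eqn. 1)."""
    th = np.cumsum(q)
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        J[0, i] = -np.sum(lengths[i:]*np.sin(th[i:]))
        J[1, i] =  np.sum(lengths[i:]*np.cos(th[i:]))
    return J

lengths = np.array([0.3, 0.25, 0.2, 0.15])   # hypothetical 4-DoF planar arm
q = np.array([0.3, 0.4, 0.5, 0.6])           # initial joint configuration
P_T = np.array([0.2, 0.6])                   # final target (exteroceptive space)
p_T = fkin(q, lengths)                       # moving target starts at the hand
K_T, K_ee = 1.0, 8.0                         # scalar stand-ins for K_T, K_ee
C = np.diag([1.0, 1.0, 0.5, 0.25])           # compliance: ranks DoF involvement
T_end, dt = 1.0, 1e-3
for t in np.arange(0.0, T_end, dt):
    G = gamma(t, T_end, 0.12/dt)             # cap keeps the Euler integration stable
    p_ee = fkin(q, lengths)
    J = jacobian(q, lengths)
    p_T = p_T + dt*G*K_T*(P_T - p_T)                   # Eqn. 2, module A
    q = q + dt*G*(C @ (J.T @ (K_ee*(p_T - p_ee))))     # Eqn. 2, module B
print(fkin(q, lengths), "vs target", P_T)    # the end-effector lands near the target
```

No matrix inversion is ever performed: redundancy is absorbed by the transpose Jacobian and the compliance matrix, and the speed profile of the end-effector is approximately bell-shaped, as prescribed by the Γ-command.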

It is also worth considering that although the mathematical formulation of the Jacobian matrix can be derived explicitly in closed form through standard, although generally complicated, methods, these methods are unlikely to match biological reality. A more biologically plausible approach is based on the circular reaction strategy: the general idea is that the purposive agent performs a random set of movements, where the joint angular patterns are distributed in an approximately uniform manner in the proprioceptive manifold, keeping track of both the joint angles and the position/orientation of the end-effector (the training set). In other words, learning is behaviorally unsupervised, in the sense that the training set is autonomously generated by the agent in this babbling phase. Moreover, the neural representation of the Jacobian matrix can be obtained by training a feedforward, multilayer neural network using the backpropagation method. Although for this kind of network the connection weights between input, hidden, and output neurons are generally unidirectional, it can be demonstrated [59] that the same network can be used in both directions: from the proprioceptive input neurons to the exteroceptive output neurons it approximates the Jacobian, and in the opposite direction it approximates the transpose Jacobian. Multilayer feedforward networks are far from being biologically plausible, in particular because the training method (backpropagation) is quite implausible. However, the overall plausibility of the PMP computational architecture, summarized in Fig. 2, is based on the underlying circular reaction strategy and its self-organizing flavor. This argument, as explained in the next section, is further motivated by a neural formulation of the computational model of synergy formation that uses Hebbian learning instead of backpropagation.
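The following sketch illustrates the babbling idea for a toy 2-link planar arm (Python; the arm, the network size, and the learning parameters are invented for the example): a one-hidden-layer network is trained by plain backpropagation on self-generated data, and differentiating the trained network yields an approximation of J whose transpose can be used in the opposite direction, in the spirit of [59]:

```python
import numpy as np

rng = np.random.default_rng(1)
L1, L2 = 0.3, 0.25                        # hypothetical 2-link planar arm

def fkin(q):
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0]+q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0]+q[1])])

def jac(q):                               # analytic Jacobian, for comparison only
    s1, s12 = np.sin(q[0]), np.sin(q[0]+q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0]+q[1])
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

# motor babbling: uniformly sampled joint angles and the observed hand positions
Q = rng.uniform(0.0, np.pi/2, size=(5000, 2))
P = np.array([fkin(q) for q in Q])

# one-hidden-layer network p = W2 tanh(W1 q + b1) + b2, trained by backpropagation
W1 = rng.normal(0, 0.5, (20, 2)); b1 = np.zeros(20)
W2 = rng.normal(0, 0.1, (2, 20)); b2 = np.zeros(2)
lr = 0.01
for _ in range(50):                       # epochs of per-sample gradient descent
    for q, p in zip(Q, P):
        h = np.tanh(W1 @ q + b1)
        e = (W2 @ h + b2) - p             # output error
        dh = (W2.T @ e) * (1 - h**2)      # error backpropagated to the hidden layer
        W2 -= lr*np.outer(e, h); b2 -= lr*e
        W1 -= lr*np.outer(dh, q); b1 -= lr*dh

def net_jacobian(q):
    """Differentiating the trained network yields J; the same stored weights,
    read in the opposite direction, yield the transpose Jacobian J.T."""
    h = np.tanh(W1 @ q + b1)
    return W2 @ ((1 - h**2)[:, None] * W1)

q0 = np.array([0.5, 0.8])
print(net_jacobian(q0))                   # approximates...
print(jac(q0))                            # ...the analytic Jacobian, from babbling alone
```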

3. A Neural Formulation of the Computational Model of Ideomotor Synergy Formation

The computational model of Fig. 2 is a lumped system, namely a model in which the dependent variables of interest are a function of time alone and the analysis of the dynamic behavior of the model implies solving a set of ordinary differential equations (ODEs). Although we clarified how this model can capture relevant aspects of sensorimotor cognition, the biological plausibility of the model and its computational efficiency could be enhanced by a distributed implementation based on neural networks. One step in this direction was already provided in the previous section by showing how to integrate into the lumped formulation of the model a feedforward neural network for representing the Jacobian matrix, which is a crucial element of the Passive Motion Paradigm. However, this does not improve the overall biological plausibility, due in particular to the artificial nature of the backpropagation training mechanism; moreover, integrating a neural network representation in the lumped model yields an implausible hybrid computational structure. An additional step forward, outlined in this section, is a fully distributed implementation of the model of ideomotor synergy formation in which the variables of interest are distributed on collections of Processing Elements (PEs) that we call Sensorimotor Neural Fields. In particular, we propose two interacting neural fields, one related to exteroceptive or egocentric information and the other to proprioceptive information, in analogy to the subdivision of the model of Fig. 2 into two modules (A and B). More specifically, both neural fields are represented by Topology Representing Networks (TRNs) [60], extended in such a way as to incorporate an attractor neurodynamics inspired by the Hopfield associative memory model [20, 22] and trained by unsupervised Hebbian learning. In passing, we wish to observe that although both neural paradigms were conceived more than 30 years ago and their computational potential was somehow obscured by the recent emphasis and commercial success of supervised learning in deep feedforward neural networks [18], both neural models are still active research paradigms [23, 24, 61, 62].

At the same time, we should clarify in which sense the proposed Sensorimotor Neural Fields are related to Neural Field research at large, pioneered by Amari and Wilson & Cowan for developing a continuum approximation of the neural activity of specific cortical areas [63, 64]. Typically, the numerous neural field models developed over time are tissue-level partial differential equations (PDEs) that describe the spatiotemporal evolution of coarse-grained variables in populations of neurons: the grains in such neural areas typically reflect micro- or macro-columns and thus represent a mean-field model, averaging neural activity over a time interval of the order of several milliseconds. Consequently, neural fields are usually continuous and coarse-grained in time and space. We should also mention that there is some similarity between neural field models and neural mass models [65], with the difference that the latter neglect spatial extension. Since neural field models are nonlinear, spatially extended systems, they are capable, in principle, of supporting the formation of spatio-temporal patterns, such as bumps (for population coding) and traveling waves. One of the common assumptions in most neural field models is that the networks are homogeneous and isotropic, typically distributed on a bi-dimensional manifold. In contrast, the neural sensorimotor field model described in the following focuses on representing the dimensionality of the sensorimotor manifolds through the lateral connectivity of the PEs, organized indeed as Topology Representing Networks. In summary, such sensorimotor neural fields are coarse-grained in time and space but are not continuous in space, implementing a tessellation of high-dimensional manifolds, and thus are represented by a large set of ODEs rather than PDEs.

3.1 Neural Sensorimotor Fields

A neural sensorimotor field can be intended as a collection of neural assemblies or processing elements (PEs), such as cortical micro-columns, logically distributed on some smooth hyper-surface or manifold, encoded by the connectivity patterns of the PEs. A TRN, in the context of this model, can be used to represent neural fields. All the PEs of a neural field receive a common thalamocortical input vector x and compete for activation: each PE or hyper-neuron of the field (PEi, i = 1, … N) is characterized by a prototype vector Πi, which plays the role of center of the corresponding receptive field, and by a set of lateral intra-connections Cij with other PEs of the network. Such connections are bidirectional and symmetric (Cij = Cji), without self-connections (Cii = 0): such a pattern of connectivity mirrors the Hopfield model [20], and this implies that the state of the network is governed by a potential energy function and the corresponding point-attractor neuro-dynamics. However, while in the Hopfield model the pattern of cross-connectivity is preassigned and typically equal to full connection, in the case of TRNs the cross-connections are generally sparse and emerge during training based on Hebbian adaptation and competitive activation, facilitating the emergence of the topological structure of the input signal.

Now, suppose that during training the input sensory signal is an n-dimensional vector that is generated by sampling in an approximately uniform manner a finite manifold M, hosted in Rn but with lower dimensionality: x ∈ M ⊂ Rn. In other words, it is assumed that the information provided by x is redundant and thus the real (hidden) dimensionality of M is less than n: for example, the representation of peri-personal space (that we may assume to be 3D) is obtained by the brain by integrating different exteroceptive signals (visual-binocular and audio-binaural), yielding a multimodal sensory signal with a dimensionality larger than 3. In any case, at the end of the training the prototype vectors of the map and the corresponding receptive fields, adapted according to the competitive Hebbian rule, will be distributed across the manifold in such a way as to fill it, and the cross-connectivity will be compatible with the hidden 3D nature of the sensory signals, independent of the redundancy of x.

The relation between the connectivity of a TRN and the dimensionality of the input sensory signal can be derived from the theory of dense sphere packing [66] and, in particular, from the definition of the “kissing number” K: for the regular tessellation of nD space, K is the number of hyperspheres with an equal radius that “touch” a given hypersphere in the densest packing. Recalling that no general algorithm exists for computing K as a function of n, it is worth considering a few notable cases: K = 2, 6, 12, 24, 40, 72, 126, 240 for n = 1, 2, 3, 4, 5, 6, 7, 8. In practice, the tessellation of the manifold offered by a trained TRN will not be perfectly regular, but the notion of kissing number as an indicator of hidden dimensionality can be associated with the distribution, over the network, of the number of cross-connections of each PE. Thus, if the hidden dimensionality of the input vector x is 3, we may expect that the PEs located well inside the manifold will have an average number of cross-connections close to 12, declining to 6 or less near the border of the manifold: in this sense, a well-trained TRN is “Topology-Representing” and implies a well-organized tessellation of the manifold M. More specifically, the tessellation of M will cover it with a set of Voronoi hyper-polyhedra, one for each prototype vector (M = {Wi, i = 1…N}), linked according to the corresponding Delaunay triangulation: in particular, the fact that the Voronoi hyper-polyhedron Wi of a given PE is adjacent to the hyper-polyhedron Wj of another PE implies that the two PEs are connected (Cij ≠ 0). Each Voronoi polyhedron Wi of the map, associated with the corresponding prototype vector Πi, may be considered the receptive field of the PE in the case of hard competitive dynamics (winner-take-all). For a more biologically plausible competition, the receptive field of a given PE may include several Voronoi polyhedra, according to the population code concept [67]. Generally, it has been shown that cortico-cortical organization is not static but changes with ontogenetic development together with patterns of thalamocortical connections [68, 69].
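The link between emergent connectivity and hidden dimensionality can be illustrated with a compact sketch of competitive Hebbian edge growth in the style of a TRN (Python; for brevity only the winning PE is adapted, a simplification of the full neural-gas rule, and all parameters are illustrative). The input is 3-D but is sampled from a 2-D disc, and the mean number of lateral connections per PE approaches the 2-D kissing number:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    """Sensory signal: a 2-D disc embedded in R^3, so the input vector is
    3-D but its hidden dimensionality is 2."""
    r, th = np.sqrt(rng.uniform(0, 1, n)), rng.uniform(0, 2*np.pi, n)
    return np.stack([r*np.cos(th), r*np.sin(th), np.zeros(n)], axis=1)

N = 200
proto = sample(N)                         # prototype vectors Pi_i of the PEs
edges = {}                                # (i, j) -> age of the lateral connection
MAX_AGE = 50

for x in sample(20000):
    d = np.linalg.norm(proto - x, axis=1)
    i, j = np.argsort(d)[:2]              # the two best-matching PEs
    proto[i] += 0.02 * (x - proto[i])     # Hebbian adaptation of the winner
    key = (min(i, j), max(i, j))
    edges[key] = 0                        # create/refresh the winner-pair edge
    for k in list(edges):                 # age the other edges of the winner
        if i in k and k != key:
            edges[k] += 1
            if edges[k] > MAX_AGE:        # prune stale edges
                del edges[k]

deg = np.zeros(N)
for a, b in edges:
    deg[a] += 1; deg[b] += 1
print("mean lateral connections per PE:", deg.mean())
# close to the 2-D kissing number K = 6 for interior PEs (lower near the
# border), revealing the hidden dimensionality despite the 3-D input
```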

Although the classic neural field models [11, 12] are based on a flatness assumption, this hypothesis is contradicted by the fact that the structure of lateral connections is not genetically determined but depends mostly on activation during development: such connections are known to grow exuberantly after birth, reaching their full extent within a short period, followed by a pruning process which ends in a well-defined pattern of connectivity, characterized by a large amount of non-local connections. Such connections to non-neighboring microcolumns are organized into characteristic patterns: collaterals of pyramidal axons typically travel a characteristic lateral distance without giving off terminal branches and then produce tightly packed terminal clusters; the characteristic distance is not a universal cortical parameter and is not distributed in a purely random fashion but differs across cortical areas [70, 71, 72]. Thus, the development of lateral connections depends on the cortical activity caused by the external inflow, in such a way as to capture and represent the (hidden) correlations in the input channels.

3.2 Multi-Field Representation

For the neuronal formulation of the synergy formation model we need at least two fields, representing two sensorimotor manifolds: one related to the representation of exteroceptive (egocentric or distal) space (MA) and the other to proprioceptive (proximal) space (MB). In a biological context, we may suppose that the two maps are trained concurrently by using the circular reaction strategy, namely by randomly sampling the proprioceptive space (producing a training set of proprioceptive signal vectors q) and evaluating the corresponding set of exteroceptive signal vectors p. Competitive Hebbian learning is applied independently to the prototype vectors of both fields (A and B) as well as to the corresponding intra-connections (CA, CB); moreover, in the same process, it is possible to use the same competitive learning for growing a set of inter-connections among PEs of the two fields (CAB).

Fig. 3 is a sketch of the two trained maps: the receptive field centers of the PEs in MA (light blue region) are denoted by blue circles and the PEs in MB (light green region) by green circles. An example of intra-connections in the first manifold is shown for a single PE (blue lines) as well as intra-connections in the second manifold (green lines); moreover, the figure shows the inter-connections (red lines) that depart from a selected (red colored) neuron of the first manifold and terminate in the corresponding (red colored) neurons of the second manifold. Such a pattern of one-to-many connectivity exemplifies, at the neural level, the concept of motor redundancy: the same end-point position can be achieved by multiple joint configurations. The highlighted set of neurons in the proprioceptive manifold corresponds to the neuronal representation of the no-motion-manifold or self-motion-manifold of the kinematic transformation [73], referred to in Eqn. 1. The inter-connections of the figure are symmetric and bidirectional; as better explained in the following, they implement in a neuronal way the bidirectional use of the Jacobian matrix in Fig. 2.

Fig. 3.

A sketch of two interacting neural fields, represented by two trained TRNs. One is hosted by the exteroceptive map (MA: light blue region) and the other by the proprioceptive map (MB: light green region). The receptive field centers of the PEs in the former manifold are denoted by blue circles and those in the latter by green circles. An example of intra-connections in MA is shown for a single neuron (blue lines), as well as intra-connections in MB (green lines); moreover, the figure shows the inter-connections (red lines) that depart from a selected (red colored) PE of the former neural field and terminate in the corresponding (red colored) PEs of the latter field. Such a pattern of one-to-many connectivity exemplifies, at the neuronal level, the concept of motor redundancy: the highlighted set of PEs in the proprioceptive field corresponds to the neuronal representation of the no-motion-manifold or self-motion-manifold of the kinematic transformation.

It is worth noting that this kind of neuronal representation of synergy formation, based on the circular reaction strategy, implicitly incorporates a treatment of joint limits that is more general and more robust than the one provided by the modeling framework depicted in Fig. 2. Since network training through circular reaction integrates the sensory-motor data acquired during the untargeted exploration of the environment, the learned prototypes automatically incorporate all the biomechanical constraints and guarantee a safe limitation of the planned patterns. Moreover, amplification phenomena can occur if the exploratory movements during circular reaction do not sample the workspace uniformly but are more concentrated in some areas: a sort of attentional proprioceptive fovea. In other words, the representation of sensory-motor spaces does not need to have a uniform and pre-fixed resolution, but a variable resolution that can be fine-tuned by experience. The two processes (planning and learning) could be made to co-exist without interference using autonomous mechanisms of selective attention and vigilance similar to those studied by Gaudiano and Grossberg [74].

A likely site for the computational model based on interacting TRNs is the posterior parietal cortex (PPC), particularly as regards association area 5 [75, 76], which is the crossroads between the somatosensory cortex (areas 1, 2, 3), the motor cortex (areas 4 and 6), the other part of the PPC (area 7) involved in the integration of external space structures, and sub-cortical as well as spinal circuits: the PPC processes a combination of peripheral and centrally generated inputs and is thus potentially suited to synthesizing neuronal representations in active movements. It is important to note that area 5 is activated in anticipation of intended movements [77] and is insensitive to load variations [78], i.e., it appears to deal with the purely geometric and kinematic aspects of movements.

3.3 Neural Dynamics of a Single Sensorimotor Neural Field

On top of the topological organization of the two interconnected TRNs, which correspond to the two modules A and B of Fig. 2, it is necessary to design the corresponding neuro-dynamics, first of a single neural field (module A) and then of the interconnected fields (A plus B). Lateral intra-connections have a crucial role in this process, although each of them may be individually too “weak”, thus going virtually unnoticed when mapping the receptive fields of cortical neurons; however, the total effect on the overall dynamics of cortical maps may be substantial, as suggested by the sharp increase of the number of intra-connections with the dimensionality of the manifold and by cross-correlation studies [79]. Lateral connections from superficial pyramids tend to be recurrent (and excitatory) because 80% of synapses are with other PEs and only 20% with inhibitory interneurons, most of which act within columns [80]: recurrent excitation is likely to be the underlying mechanism that produces the synchronized firing observed in distant mini-columns.

The existence (and preponderance) of massive recurrent excitation in the cortex is in contrast with what could be expected, at least in primary sensory areas, considering the ubiquitous presence of peristimulus competition (or the “Mexican-hat pattern”), which has been observed in many pathways, such as the primary somatosensory cortex, and has been confirmed by direct excitation of cortical areas as well as by correlation studies; in other words, in the cortex there is a significantly larger amount of long-range inhibition than expected from the density of inhibitory synapses. In general, “recurrent competition” has been assumed to be the same as “recurrent inhibition”, providing an antagonistic organization that sharpens responsiveness to an area smaller than would be predicted from the anatomical funneling of inputs. Thus, an intriguing question is how long-range competition can arise without long-range inhibition; a possible solution is the mechanism of gating inhibition based on a competitive distribution of activation, proposed by Reggia [81] and further investigated by Morasso and Sanguineti [82].

The neurophysiological evidence summarized above about the organization of neural fields can be modeled in different manners and the following mean-field model of the dynamics of a single sensorimotor neural field is just an example. In this model, for simplicity, the generic cortical minicolumn is lumped into a single PE, characterized by an activity level Vi and two kinds of inputs, one coming from lateral connectivity and the other from thalamo-cortical pathways:

(3) $\dfrac{dV_i}{dt} = \Gamma(t)\left[ -\gamma_i V_i + \dfrac{\sum_j C_{ij} V_j}{\sum_k V_k} + (1 - V_i)\, G(x, \Pi_i) \right], \qquad i = 1, \dots, N$

N is the number of PEs of the neural field. The Γ(t) factor provides the terminal attractor feature of the field neurodynamics, gating the overall input to each PE, which includes three contributions:

(1) The first contribution -γiVi is a self-inhibition of the PE, weighted by the parameter γi: this term is consistent with the intra-columnar nature of inhibitory synapses and it gives the PE the character of a “leaky integrator”.

(2) The second contribution, Σj Cij Vj / Σk Vk, is a recurrent excitatory input, expressing the massive lateral excitatory connections: the connection weights Cij that link the given PE with the connected neighbors are positive and symmetric. This term also includes an element of gating inhibition, because the activity level of each PE is normalized according to the average activity of its immediate neighbors. The symmetry of the lateral connections supports the point-attractor dynamics of the map. The gating inhibition allows the population code that characterizes the state of the manifold at any given time to be much sharper than the receptive field of any PE.

(3) The third contribution is related to the external thalamo-cortical input x, broadcast to all the PEs of the map and filtered according to the receptive field properties of the given unit. In the implementation of Eqn. 3, the receptive field of a given PE is a Gaussian G(x, Πi), centered on the prototype vector Πi. This contribution also includes a shunting interaction term, an idea borrowed from Grossberg [10], whereby the input is scaled by the factor (1 – Vi), for further sharpening the population code.

Shunting interaction, together with gating inhibition, is crucial for inducing the emergence of a manifold-wide behavior that is analogous to the synergy formation process described above for the computational model of Fig. 2.

In summary, the transient behavior of the map can be described as follows: after the sudden shift of the input variable x (say, the selection of a new target) there is first a spreading of activity throughout the map, which initially flattens the population code, distributing the pattern over a large part of the network, followed by a re-sharpening process around the target (which builds up faster and faster as the diffused waveform reaches the target area). The combination of the two processes is the propagation of the population code toward the new target, following a “geodesic” in the characteristic manifold of the map, with a bell-shaped speed profile as a consequence of the non-linear gating of the Γ-command. A biologically plausible implementation of this non-linear gating is related to the basal-thalamo-cortical loop and the well-established role of the basal ganglia in the initiation and speed-control of voluntary movements [57].

Fig. 4 shows a simple simulation of a neural field that illustrates the described computational mechanism. The input environmental variable x is two-dimensional, varying in a circular domain. The map includes 128 PEs that after training are arranged in a regular tessellation of the input domain; the receptive fields are radially symmetric, with a large size, comparable to the range of variation of the input signal. In the figure, the initial state of the field is centered around the point (+0.2, –0.2); the final target (–0.2, +0.2) is then instantiated, triggering a transient that lasts 1 s and consists of the shift of the population code from the initial to the final position.

Fig. 4.

The figure shows the graphical output of the simulation of a simple neural field or map whose dynamics is described by Eqn. 3. The input environmental variable x is two-dimensional, varying in a circular domain. The neural field includes 128 PEs that after training are arranged in a regular tessellation of the input domain. The initial peak of activity of the map (T = 0) is located in (+0.2, –0.2); the final target is then instantiated at position (–0.2, +0.2), thus triggering a transient that lasts 1 s, shifting the population code from the initial to the final position. This distributed dynamic behavior mirrors the dynamics of block A in Fig. 2 where the moving target is attracted to the final target with a bell-shaped velocity profile.
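A drastically simplified rendition of this simulation is sketched below (Python): a small regular grid of PEs stands in for the trained TRN, the lateral weights are binary, and the gains and receptive field width are illustrative choices; the sketch is only meant to show the relocation of the population code driven by the update rule of Eqn. 3:

```python
import numpy as np

# PEs tessellating a square input domain (a stand-in for a trained TRN)
side = 12
gx, gy = np.meshgrid(np.linspace(-0.4, 0.4, side), np.linspace(-0.4, 0.4, side))
proto = np.stack([gx.ravel(), gy.ravel()], axis=1)       # prototype vectors Pi_i
N = len(proto)

# symmetric lateral intra-connections between neighbouring PEs only
dist = np.linalg.norm(proto[:, None] - proto[None, :], axis=2)
C = ((dist > 0) & (dist < 0.09)).astype(float)

def G(x, sigma=0.15):
    """Gaussian receptive fields centred on the prototypes."""
    return np.exp(-np.sum((proto - x)**2, axis=1) / (2*sigma**2))

def gamma(t, T, cap):
    """Terminal-attractor gating (capped for Euler stability)."""
    s = min(t / T, 1.0)
    xi = 10*s**3 - 15*s**4 + 6*s**5
    return min(((30*s**2 - 60*s**3 + 30*s**4)/T) / max(1 - xi, 1e-9), cap)

V = G(np.array([0.2, -0.2]))             # initial population code
x_new = np.array([-0.2, 0.2])            # newly instantiated target input
T, dt = 1.0, 1e-3
for t in np.arange(0.0, T, dt):
    rec = C @ V / max(V.sum(), 1e-9)     # recurrent excitation with gating inhibition
    V += dt*gamma(t, T, 0.4/dt) * (-V + rec + (1 - V)*G(x_new))
print(proto[np.argmax(V)])               # the code has relocated near (-0.2, +0.2)
```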

If we compare the lumped implementation of module A in Fig. 2 with the distributed implementation with a neural field whose PEs are characterized by Eqn. 3, we may say that the force field, explicitly represented in the former case by the formula FT = KT ΔpT, is implicitly implemented in the distributed model of Eqn. 3 by the diffusion of a distance field from the designated target throughout the map (the corresponding force field is the associated gradient field).

3.4 Combined Neuro-Dynamics of Interacting Neural Fields

Both neural fields of the distributed synergy formation process (the exteroceptive field MA and the proprioceptive field MB) are characterized by a copy of Eqn. 3, where Cij refers to the intra-connections of each map. Since the two neural fields are characterized by very different dimensionality (the proprioceptive field has obviously a much greater dimensionality than the exteroceptive field), the corresponding patterns of intra-connections emerging from self-supervised learning will be quite different. In any case, the intrinsic dynamics of each map is capable of maintaining the stability of the population codes on the corresponding manifolds. Their consistency, i.e., the fact that the current position of the end-effector coded in map A is geometrically consistent with the current articulation of the body model coded in map B, is provided by the inter-connections between map A and map B. In particular, for achieving this result it is sufficient to introduce in the equation that characterizes the dynamics of map B the following ‘external’ term for each PE of the map, where CikAB are the inter-connection weights that link map A to map B:

(4) $h_i^{ext} = \sum_k C^{AB}_{ik} V_k, \qquad i = 1, \dots, N_B$

This term plays the same role as the Jacobian matrix in module B of Fig. 2. In other words, the force field that drives the motion of the population code in map A is reflected onto map B, starting the co-evolution of the two maps, synchronized by the common gating command.

In summary, the distributed implementation of the model of Fig. 2 is characterized by the following distributed set of ODEs:

(5) $\begin{cases} \dfrac{dV_i}{dt} = \Gamma(t)\left[ -\gamma_i V_i + \dfrac{\sum_j C^A_{ij} V_j}{\sum_k V_k} + (1 - V_i)\, G(p, \Pi^A_i) \right] & i = 1, \dots, N_A \\ \dfrac{dW_i}{dt} = \Gamma(t)\left[ -\gamma_i W_i + \dfrac{\sum_j C^B_{ij} W_j}{\sum_k W_k} + (1 - W_i)\, G(q, \Pi^B_i) + h_i^{ext} \right] & i = 1, \dots, N_B \end{cases}$

Here Vi is the instantaneous activity level of each PE of map A and Wi is the corresponding activity level of each PE of map B; {CijA} are the intra-connection weights of map A and {CijB} the corresponding weights of map B; {ΠiA} are the prototype vectors or centers of the receptive fields of the PEs in map A and {ΠiB} the corresponding prototype vectors in map B; p is the exteroceptive thalamo-cortical input to map A and q the corresponding proprioceptive input to map B.
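A self-contained numerical sketch of this coupled dynamics is given below (Python) for a deliberately minimal redundant system in which the exteroceptive variable is x = q1 + q2, so that every target x has a whole no-motion manifold of joint solutions. The grid geometry, the binary lateral connectivity, the winner-pair rule for growing CAB, and the omission of the proprioceptive inflow to map B during the covert transient (leaving the inter-field term of Eqn. 4 as the only external drive) are all illustrative simplifications:

```python
import numpy as np

rng = np.random.default_rng(3)

# a minimally redundant "body": exteroceptive variable x = q1 + q2, so every
# target x corresponds to a whole no-motion manifold of joint configurations
NA, side = 40, 15
protoA = np.linspace(0.0, 2.0, NA)                    # Pi^A: hand positions
g1, g2 = np.meshgrid(np.linspace(0, 1, side), np.linspace(0, 1, side))
protoB = np.stack([g1.ravel(), g2.ravel()], axis=1)   # Pi^B: joint configurations
NB = len(protoB)

# intra-connections: nearest grid neighbours, binary and symmetric
dA = np.abs(protoA[:, None] - protoA[None, :])
CA = ((dA > 0) & (dA < 0.08)).astype(float)
dB = np.linalg.norm(protoB[:, None] - protoB[None, :], axis=2)
CB = ((dB > 0) & (dB < 0.08)).astype(float)

# inter-connections grown during a circular-reaction (babbling) phase
CAB = np.zeros((NB, NA))
for _ in range(20000):
    q = rng.uniform(0, 1, 2)                           # random joint pattern
    i = np.argmin(np.abs(protoA - q.sum()))            # winner in map A
    k = np.argmin(np.linalg.norm(protoB - q, axis=1))  # winner in map B
    CAB[k, i] += 1.0
CAB /= CAB.max()

GA = lambda x, s=0.12: np.exp(-(protoA - x)**2 / (2*s**2))
GB = lambda q, s=0.10: np.exp(-np.sum((protoB - q)**2, axis=1) / (2*s**2))

def gamma(t, T, cap):
    s = min(t / T, 1.0)
    xi = 10*s**3 - 15*s**4 + 6*s**5
    return min(((30*s**2 - 60*s**3 + 30*s**4)/T) / max(1 - xi, 1e-9), cap)

q0 = np.array([0.2, 0.3])                 # initial configuration (x = 0.5)
V, W = GA(q0.sum()), GB(q0)               # initial population codes in A and B
x_target = 1.4                            # new exteroceptive target
T, dt = 1.0, 1e-3
for t in np.arange(0.0, T, dt):
    g = gamma(t, T, 0.4/dt)
    h_ext = CAB @ V / max(V.sum(), 1e-9)  # Eqn. 4: map A drives map B
    V += dt*g*(-V + CA@V/max(V.sum(), 1e-9) + (1 - V)*GA(x_target))
    W += dt*g*(-W + CB@W/max(W.sum(), 1e-9) + (1 - W)*h_ext)

m = W > 0.5*W.max()                        # read out the articulatory code
q_hat = W[m] @ protoB[m] / W[m].sum()
print(q_hat, q_hat.sum())                  # q1 + q2 is close to the 1.4 target
```

In this crude sketch the readout settles near the centroid of the no-motion manifold of the target; in the full model the initial code in map B biases the selection toward the configuration closest to the starting one, as in the acoustic-articulatory example described below.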

The biological plausibility of this model was tested in the case of speech motor control [83] with real data: in this case, the exteroceptive space is acoustic (the targets are spoken sequences) and the proprioceptive space characterizes the articulatory structure of the vocal tract, which includes tongue, jaw, lips, and larynx and thus, mechanically, has an infinite number of DoFs. However, it is expected that the number of functional DoFs or functional articulators used by the brain is limited, although large enough to allow some redundancy in speech production. This problem was addressed by using a training set that included several thousand samples of the acoustic output of a speaker, pronouncing Vowel-to-Vowel (VV) transition sequences, synchronized with a cineradiographic view of the vocal tract [84]. The acoustic samples were represented by the first five formants of the recorded sounds (in speech science, a formant is defined as a broad peak, or local maximum, in the spectrum) and were used for training an acoustic TRN, composed of 500 PEs, with a five-dimensional acoustic input vector. The digitized images of the vocal tract were analyzed by extracting 10 geometric indicators [85] that were used for training an articulatory TRN composed of 1000 PEs, with a ten-dimensional input vector.

After training, the analysis of the patterns of intra-connections allowed us to estimate the intrinsic dimensionality of the acoustic map as being in the range 3–4 and the corresponding dimensionality of the articulatory map in the range 4–5. The inter-connections, obtained in the combined training of the two maps, implicitly code the functional relationship between the two manifolds and also allow mapping the population code of one map as external input for the other: this induces a coupled acoustic-articulatory dynamics that is a general-purpose tool for solving several sensorimotor problems in a simple and unified framework.

Finally, the computational power of the dual-map model was demonstrated by testing its ability to generate coordinated acoustic-articulatory patterns in VV transitions compatible with the available experimental data, for example, the /a/→/e/ transition. The initial conditions in the two maps were chosen by centering the two population codes according to the available data vectors and allowing the dual neural fields to stabilize. The phoneme /e/ was then given as a new external input to the acoustic map at t = 0, while also activating the Γ-command. The two maps started co-evolving in time, as dictated by Eqn. 5, under the driving influence of the hext inter-coupling term (Eqn. 4). The population code in the acoustic map was attracted by the target phoneme /e/, producing a moving wave of activation in the five-dimensional articulatory manifold. At the end of the transient, the articulatory map settled in a configuration that implicitly selected, in the no-motion manifold of /e/, the configuration closest to the initial one. In other words, an effect of the cross-coupling was to establish a correspondence between phonemes and no-motion manifolds, allowing the map dynamics to operate as a navigation tool that carried out the inverse acoustic-articulatory mapping, without any explicit regularization or optimization procedure.

4. A Neuromorphic/Quantum Computing Modeling Framework?

In the previous section, it was shown that a neural architecture based on interacting TRNs is capable, in principle, of carrying out goal-oriented synergy formation processes for both covert and overt actions. Synergy formation for purposive actions is an essential kernel of cognition in the framework of the theory of Embodied Cognition, and it resonates with self-awareness-promoting practices such as Tai Chi: in fact, Tai Chi is described as “meditation in motion”, namely the generation of slow and smooth gesture sequences integrated with intentional and anticipatory motor imagery [86, 87].

The neuro-dynamics of the model implies a large number of modular PEs that may correspond to the mini-columns of the cerebral cortical areas. It is estimated that the brain contains about 10⁸ mini-columns, each including about 80–120 neurons [88, 89], i.e., on the order of 10¹⁰ cortical neurons overall. Thus, in principle, the brain has sufficiently powerful neural hardware available to support a body-wide TRN-based neuromotor architecture. On the other hand, the size of the simulation models in the literature is too small (of the order of thousands of PEs) to allow an in-depth investigation of a number of subtle computational aspects that link neuromotor modeling with memory and the accumulation of knowledge.

The distributed model sketched in the previous section is based on self-organizing principles operating both at the behavioral level, like the circular reaction strategy, and at the local level, like the competitive interactions of neighboring PEs that support global effects like the diffusion of force fields and then the propagation of population codes. The point is that this kind of computational architecture is as far away as possible from the von Neumann digital machines that are used for simulating very limited prototypes of the model.

At the same time, the dissemination of electrophysiological techniques based on multi-electrode, multi-site recording, which permits analyzing the correlation structure of cortical areas, has focused attention on cortical dynamics, suggesting that the cerebral cortex might exploit high-dimensional, non-linear dynamics for carrying out cognitive functions [90]. The cortical connectome, with its preponderance of reciprocal connections and the rich dynamics resulting from such reciprocal interactions, is indeed ideally suited to provide an internal representation of high-dimensional manifolds emerging from the non-linear dynamics of recurrently coupled networks, on the border between chaotic and attractor dynamics [91]. In this conceptual framework, which allows flexible and efficient computation to be performed in a distributed manner, the representation and internal simulation of purposive actions are distributed and encoded both in the discharge rate of individual PEs and in the specific temporal relations among the discharge sequences of distributed PEs in different cortical maps.
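
The flavor of such border dynamics can be conveyed by a minimal rate-based sketch (a generic assumed model, not the TRN architecture of the previous section): a random recurrent network whose gain g places it either in a quiescent attractor regime (g < 1) or in a rich, self-sustained quasi-chaotic regime (g > 1), the transition exploited in [91].

```python
import numpy as np

# Minimal sketch (assumed generic rate model): a random recurrent network
# whose gain g places it in an attractor regime (g < 1: activity decays to
# a fixed point) or a quasi-chaotic regime (g > 1: self-sustained irregular
# activity), i.e., on the border invoked in [91].
rng = np.random.default_rng(0)
N = 500

def run(g, T=3000, dt=0.1, tau=1.0):
    J = g * rng.normal(0, 1 / np.sqrt(N), (N, N))  # random recurrent weights
    x = rng.normal(0, 0.5, N)
    for _ in range(T):
        x += (dt / tau) * (-x + J @ np.tanh(x))    # tau*dx/dt = -x + J*tanh(x)
    return np.std(np.tanh(x))                      # residual activity level

print("g=0.8:", run(0.8))   # ~0 -> quiescent fixed point (attractor regime)
print("g=1.5:", run(1.5))   # O(1) -> self-sustained irregular activity
```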

On the other hand, one may ask whether this conceptual “digital” framework, based on the firing patterns of neuronal assemblies, is sufficient to capture the short-range “analog” interactions which support Hebbian learning, at the basis of TRNs, as well as the force fields and wave-like behavior of neural fields that implement the PMP model of synergy formation over a very large number of PEs. A possible solution, away from the von Neumann paradigm, could be offered by a large family of neuromorphic technologies, rapidly growing but still in their infancy [92]: this new generation of computing architectures is expected to store and process large amounts of information with much lower power consumption than von Neumann architectures. Neuromorphic architectures use very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures in the nervous system; generally speaking, the term neuromorphic covers analog, digital, and mixed-mode analog/digital VLSI systems. As a few examples, we may cite Stanford University’s Neurogrid system [93], Intel’s neuromorphic research chip Loihi [94], and Heidelberg University’s BrainScaleS Neuromorphic Hardware System [95], developed in the framework of the Human Brain Project. We are still far away from a deep understanding of how to design and assemble neuromorphic computing systems with billions of artificial neural assemblies that can match, at least approximately and in a coarse-grained manner, the organizing principles of brain function: the problem is that it is still unknown how computational functions can emerge from neuromorphic network structures [96]. Moreover, it is expected that the hybrid integration of neuromorphic technologies with other nanotechnology concepts, including photonics, can be extended down to quantum computing technologies [97].
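
The computational primitive that such chips emulate in silicon can be summarized by the leaky integrate-and-fire (LIF) model, sketched below with invented parameters (a generic textbook model, not the API of any specific chip): the membrane potential is the “analog” variable, while the emitted spikes are the “digital” events exchanged between cores.

```python
import numpy as np

# Minimal sketch of leaky integrate-and-fire (LIF) dynamics, the generic
# primitive that mixed analog/digital neuromorphic systems emulate in
# silicon (illustrative parameters; not the interface of any chip).
# The membrane potential v is the continuous 'analog' variable; the
# emitted spikes are the discrete 'digital' events exchanged between cores.
def lif(I, dt=1e-4, tau=0.02, R=1.0, v_th=1.0, v_reset=0.0):
    v, spike_times = 0.0, []
    for k, i_k in enumerate(I):
        v += (dt / tau) * (-v + R * i_k)   # analog leaky integration
        if v >= v_th:                      # digital threshold event
            spike_times.append(k * dt)
            v = v_reset
    return spike_times

I = np.full(5000, 1.5)                     # 0.5 s of supra-threshold input
print(len(lif(I)), "spikes in 0.5 s")      # regular firing near 45 Hz
```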

Beyond the amazing potential technological breakthroughs hinted at above, it is also necessary to face a fundamental question: should the analog component of cortical dynamics, underlying the measurable digital component expressed in terms of neuronal firing patterns, be described and understood in terms of classical physics (for example, using ordinary differential equations), in terms of quantum physics, or as a combination of the two? There is no doubt that the brain obeys quantum mechanics, as does any physical system, but the question is whether quantum effects are observable and measurable and, for specific brain functions, relevant or irrelevant in comparison with the account offered by classical physics.

Various general objections have been raised against any quantum brain hypothesis: even though neurons and neuronal components are small, they are still orders of magnitude too big for quantum effects to influence neuronal activity directly; moreover, the physiological features of the brain environment (warm, wet, and noisy) seem to imply the rapid destruction of any non-trivial quantum effect, such as superposition or entanglement [98]. In addition, major philosophical and conceptual problems surround the process of making measurements in quantum mechanics. From the engineering point of view, together with the lure of achieving asymptotic speedups, there is also the formidable challenge that quantum computations are difficult to implement, as exemplified by the fact that no scalable large quantum computer, with a size comparable to the human brain, is known so far despite the scale of the funding employed. More specifically, two key biophysical operations underlie information processing in the brain: chemical transmission across the synaptic cleft and the generation of action potentials. Both events involve thousands of ions and neurotransmitter molecules, driven by mechanical diffusion or by electrical potentials over tens of microns, and this is likely to inhibit the emergence of any coherent quantum state. Thus, spiking neurons can only receive and send classical, rather than quantum, information. On the other hand, spiking is the ‘digital’ final event that occurs on top and at the end of complex ‘analog’ processes that may or may not include quantum effects.

In contrast to classical physics, quantum mechanics is fundamentally indeterministic; effective unpredictability, however, also characterizes non-linear dynamical systems in general. Quantum effects are small and local, and thus it seems unlikely that they can influence brain dynamics at large in a non-trivial way. However, the complex non-linear dynamics of the brain is frequently characterized by quasi-chaotic behavior, or at least it operates on the edge of stability with high sensitivity to small fluctuations: such sensitivity may amplify small and local quantum effects and disseminate them over relatively long distances, such as across the sensorimotor maps suggested in the previous section [99].
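
A back-of-the-envelope illustration of such amplification can be given with the logistic map as a generic chaotic system (an assumption made purely for illustration, not a brain model): with a Lyapunov exponent of ln 2, a quantum-scale perturbation of order 10^-15 roughly doubles at each step and reaches order one within about 50 iterations.

```python
# Illustrative sketch: a quantum-scale perturbation (1e-15) amplified to
# order one by chaotic dynamics. For the logistic map at r = 4 the Lyapunov
# exponent is ln 2, so the gap roughly doubles at each step and reaches
# O(1) in about 50 steps.
def logistic(x, n, r=4.0):
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

x0 = 0.3
for n in (10, 30, 50, 70):
    gap = abs(logistic(x0, n) - logistic(x0 + 1e-15, n))
    print(f"n={n:2d}  divergence={gap:.3e}")
```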

In any case, there is mounting evidence of non-trivial quantum effects in the brain, at least in the sensory domain: rhodopsin, an important protein of retinal photoreceptors, was found to exhibit vibrational quantum coherence [100]; olfactory receptors appear to exploit quantum effects (electron tunneling) for the detection of odorant molecules [101]; magnetoreception, which is crucial for avian navigation skills, revealed robust quantum entanglement in the cryptochromes of the retina [102]. Moreover, quantum effects are expected to play a role in a fundamental neural function such as the opening of ion channels [103].

However, it is fair to say that, at present, non-trivial quantum effects have been neither detected nor hypothesized in the central nervous system for general cognitive functions, except for the highly controversial quantum-consciousness hypothesis formulated by Penrose and Hameroff, based on the supposed quantum computation carried out by the tubulin components of microtubules, the filamentous protein polymers that form the cytoskeleton of cells [104].

In any case, there is agreement that the neurodynamics underlying motor cognition should be characterized as a non-linear system, capable of oscillatory and/or chaotic behavior [105, 106]. In such a context, small (even infinitesimal) fluctuations due to generic noise or quantum effects need not average out at large scales but, at least in some cases, can be amplified in a multi-scale manner. In either case, such rich dynamic effects are intrinsically indeterministic but critical from the self-organization point of view.

In summary, the intricate interplay between quantum effects and non-linear complex dynamics might be able (a) to generate new persistent quantum-chaotic patterns at a microscopic scale and (b) to amplify quantum effects up to a macroscopic scale. How exactly the indeterminacy of the complex quantum dynamics of the brain is embedded in the classical neuronal mechanisms of cognitive organization remains to be investigated in depth. In the meantime, we may suggest a crucial side-effect of the possible massive exploitation of quantum effects in brain physiology: since local quantum interactions are likely to be very efficient from the energetic point of view, their exploitation/amplification through non-linear, quasi-chaotic global dynamics might be the key to the energetic efficiency of the neural control of actions in general. It is also worth mentioning that the global, quasi-chaotic dynamics should not be restricted to the brain per se, isolated from the environment, but should include body-environment interaction as well. An example, in this context, is the issue of intermittent control for the stabilization of unstable tasks, such as balancing inverted-pendulum paradigms [107, 108, 109, 110]. The intrinsic dynamics of the task involves a saddle-like instability that partitions the state space of the inverted pendulum into stable and unstable areas. The intermittent control strategy consists of opening/closing the feedback loop according to the current state. The result is a quasi-chaotic oscillation (the well-known sway motion in the case of upright standing) that enhances readiness for sudden phase transitions if the task changes and provides energetic efficiency, because no muscle energy is required for about 50% of the time (i.e., when the feedback loop is open).
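
The logic of intermittent control can be sketched in a few lines for a linearized inverted pendulum (x'' = ax + u). All parameters are invented, and the switching rule is deliberately simplified to a positional dead zone rather than the stable-manifold-based rule of the cited models [108, 109]; nevertheless, it reproduces the two signatures discussed above, namely a bounded quasi-oscillatory sway and a feedback loop that stays open a substantial fraction of the time.

```python
# Drastically simplified sketch of intermittent stabilization of a
# linearized inverted pendulum, x'' = a*x + u (saddle instability).
# The switching rule is a positional dead zone (an assumption, not the
# stable-manifold rule of [108, 109]): the PD loop is closed only when
# the tilt leaves a small region around upright.
a = 9.81                  # g/l for a 1 m pendulum (1/s^2)
Kp, Kv = 100.0, 20.0      # PD gains applied when the loop is closed
dead = 0.02               # dead zone half-width (rad): loop open inside
dt, steps = 0.001, 60000  # 60 s of simulated time

x, v, open_time, max_tilt = 0.05, 0.0, 0.0, 0.0
for k in range(steps):
    loop_open = abs(x) < dead              # state-dependent switching
    u = 0.0 if loop_open else -(Kp * x + Kv * v)
    open_time += dt if loop_open else 0.0
    x, v = x + v * dt, v + (a * x + u) * dt
    if k > steps // 2:                     # record sway after the transient
        max_tilt = max(max_tilt, abs(x))

print(f"sway bounded within +/-{max_tilt:.3f} rad; "
      f"loop open {100 * open_time / (steps * dt):.0f}% of the time")
```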

Quantum computing and quantum (neuro)biology are certainly linked but logically and technologically separate. The fundamental motivation of the former is the promise of achieving asymptotic speedups in hard computational problems that are crucial for specific applications, including genomics, genetics, biochemistry, and deep phenotyping [111]. The challenge for neurobiological modeling is to outline a hybrid framework for integrating digital information, analog information, and quantum information.

5. Conclusions

In the original spirit of cybernetics, we believe that a better and deeper understanding of the neurobiology of purposive action should have an impact on the design of robotic systems capable of similar functionality. Although this requirement has been largely ignored in most industrial robotics so far, it is now being re-evaluated in the framework of Industry 4.0, which implies a high degree of cognitive interaction and communication between humans and robotic partners. This is the reason for (re)taking inspiration from neurobiology in designing better robotic systems, capable of cognition in purposive action and of multi-agent interaction and cooperation, revisiting the general framework outlined by Nikolai Bernstein and Norbert Wiener.

In particular, it is worth considering the relationship between the Passive Motion Paradigm, which has a central role in the modeling framework outlined in this review, and Active Inference [112]. The issue was discussed by Friston and Parr [113], who highlighted the fact that the two concepts are strongly related also in a deep philosophical sense: “The anti-symmetry between active inference and passive motion speaks to the complementary but convergent view of how we use our forward models to generate predictions of sensed movements. This view is another example of Dennett’s ‘strange inversion’ [114], in which motor commands no longer cause desired movements – but desired movements cause motor commands (in the form of the predicted consequences of movement)”. Moreover, beyond simple goal-oriented actions, gesture representation, production, and understanding [87] are topics of crucial interest for human-robot communication, as emphasized by recent research activities along this line [115, 116].

Among the different issues implied by such a vision there is also the energetic efficiency of the neuro-biological implementation, characterized by the hybrid integration of computing tools: the optimality of human motion is appreciated by roboticists [117], and the energetic “frugality” of neural computational architectures, away from the von Neumann paradigm, is emphasized by the roadmap for the development of neuromorphic systems [92], including the focus on spiking neural control [118, 119].

Author Contributions

The author confirms being the sole contributor of this work and has approved it for publication.

Ethics Approval and Consent to Participate

Not applicable.

Acknowledgment

Not applicable.

Funding

This research was supported by internal funds of the RBCS (Robotics, Brain, and Cognitive Sciences) research unit of the Italian Institute of Technology, Genoa, Italy in the framework of the ICOG initiative (CDC22032).

Conflict of Interest

The author declares no conflict of interest.

References
[1]
Wiener N. Cybernetics: Or Control and Communication in the Animal and the Machine. MIT Press: Cambridge, Massachusetts. 1948.
[2]
Rosenblueth A, Wiener N, Bigelow J. Behavior, Purpose, and Teleology. Philosophy of Science. 1943; 10: 18–24.
[3]
Maturana HR, Varela FJ. Autopoiesis and Cognition: The Realization of the Living. Reidel Publ: Dordrecht. 1980.
[4]
Varela FJ, Thompson E, Rosch E. The embodied mind: Cognitive science and human experience. MIT Press: Cambridge, Massachusetts. 1991.
[5]
Piaget J. La Naissance de l’intelligence chez l’enfant. T Delachaux & Niestlé: Neuchâtel en Suisse. 1937.
[6]
Held R, Hein A. Movement-Produced Stimulation in The Development of Visually Guided Behavior. Journal of Comparative and Physiological Psychology. 1963; 56: 872–876.
[7]
Hein A, Held R, Gower EC. Development and segmentation of visually controlled movement by selective exposure during rearing. Journal of Comparative and Physiological Psychology. 1970; 73: 181–187.
[8]
Von Hofsten C. Early development of grasping an object in space-time. In: Goodale M, ed. Vision and action: The control of grasping. Ablex: Norwood, NJ. 1990; 65–79.
[9]
von der Malsburg C. Self-organization of orientation sensitive cells in the striate cortex. Kybernetik. 1973; 14: 85–100.
[10]
Grossberg S. Contour enhancement, short term memory, and constancies in reverberating neural networks. Studies in Applied Mathematics. 1973; 52: 213–257.
[11]
Amari S. Dynamics of pattern formation in lateral-inhibition type neural fields. Biological Cybernetics. 1977; 27: 77–87.
[12]
Kohonen T. Self-Organized Formation of Topologically Correct Feature Maps. Biological Cybernetics. 1982; 43: 59–69.
[13]
Amari S. Dynamical stability of formation of cortical maps. In: Arbib M, Amari S, eds. Dynamic interactions in neural networks: Models and data. Springer-Verlag: Berlin. 1989; 15–34.
[14]
Kohonen T. Self organization and associative memory. 3rd edn. Springer-Verlag: Berlin. 1989.
[15]
McCarthy J, Minsky M, Rochester N, Shannon C. A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. 1955. Available at: http://jmc.stanford.edu
[16]
Rosenblatt F. The perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review. 1958; 65: 386–408.
[17]
Kotseruba J, Tsotsos JK. A Review of 40 Years in Cognitive Architecture Research: Core Cognitive Abilities and Practical Applications. Artificial Intelligence Review. 2020; 53: 17–94.
[18]
LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015; 521: 436–444.
[19]
Kuperstein M. Infant neural controller for adaptive sensory-motor coordination. Neural Networks. 1991; 4: 131–145.
[20]
Hopfield JJ. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences of the United States of America. 1982; 79: 2554–2558.
[21]
Cohen M, Grossberg S. Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Transactions on Systems, Man, and Cybernetics. 1983; 13: 815–826.
[22]
Hopfield JJ. Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences of the United States of America. 1984; 81: 3088–3092.
[23]
Krotov D, Hopfield JJ. Dense associative memory for pattern recognition. Advances in Neural Information Processing Systems. 2016; 29: 1–12.
[24]
Ramsauer H, Schäfl B, Lehner J, Seidl P, Widrich M, Adler T, et al. Hopfield networks is all you need. 2020. (Preprint)
[25]
Bernstein N. The Co-ordination and Regulation of Movements. Pergamon Press: Oxford, UK. 1967.
[26]
Sirotkina IE, Biryukova EV. Futurism in Physiology: Nikolai Bernstein, Anticipation, and Kinaesthetic Imagination. In: Nadin M, ed. Anticipation: Learning from the Past. Springer: Cham. 2015.
[27]
Sherrington CS. The integrative action of the nervous system. 1st edn. Oxford University Press: Oxford, UK. 1906.
[28]
Gelfand IM, Gurfinkel VS, Fomin SV, Tsetlin ML. Models of the Structural-Functional Organization of Certain Biological Systems. MIT Press: Cambridge. 1971.
[29]
Berthoz A, Petit JL. The Physiology and Phenomenology of Action. Oxford University Press: Oxford, UK. 2008.
[30]
James W. The Principles of Psychology. Henry Holt and Company: New York. 1890.
[31]
Shin YK, Proctor RW, Capaldi EJ. A review of contemporary ideomotor theory. Psychological Bulletin. 2010; 136: 943–974.
[32]
Jeannerod M. Neural simulation of action: a unifying mechanism for motor cognition. NeuroImage. 2001; 14: S103–S109.
[33]
Gardner H. Frames of Mind: The Theory of Multiple Intelligence. Heinemann: London. 1983.
[34]
Biryukova EV, Bril B. Biomechanical analysis of tool use: a return to Bernstein’s tradition. The Journal of Psychology. 2012; 220: 53–54.
[35]
Scholz JP, Schöner G. The uncontrolled manifold concept: identifying control variables for a functional task. Experimental Brain Research. 1999; 126: 289–306.
[36]
d’Avella A, Saltiel P, Bizzi E. Combinations of muscle synergies in the construction of a natural motor behavior. Nature Neuroscience. 2003; 6: 300–308.
[37]
Mohan V, Bhat A, Morasso P. Muscleless motor synergies and actions without movements: From motor neuroscience to cognitive robotics. Physics of Life Reviews. 2019; 30: 89–111.
[38]
Morasso P. A Vexing Question in Motor Control: The Degrees of Freedom Problem. Frontiers in Bioengineering and Biotechnology. 2022; 9: 783501.
[39]
Decety J, Jeannerod M. Mentally simulated movements in virtual reality: does Fitts’s law hold in motor imagery? Behavioural Brain Research. 1995; 72: 127–134.
[40]
Grush R. The emulation theory of representation: motor control, imagery, and perception. The Behavioral and Brain Sciences. 2004; 27: 377–396; discussion 396–442.
[41]
Karklinsky M, Flash T. Timing of continuous motor imagery: the two-thirds power law originates in trajectory planning. Journal of Neurophysiology. 2015; 113: 2490–2499.
[42]
O’Shea H, Moran A. Does Motor Simulation Theory Explain the Cognitive Mechanisms Underlying Motor Imagery? A Critical Review. Frontiers in Human Neuroscience. 2017; 11: 72.
[43]
Mussa Ivaldi FA, Morasso P, Zaccaria R. Kinematic networks. A distributed model for representing and regularizing motor redundancy. Biological Cybernetics. 1988; 60: 1–16.
[44]
Mussa Ivaldi FA, Morasso P, Hogan N, Bizzi E. Network Models of Motor Systems with many Degrees of freedom. In: Fraser MD, ed. Advances in Control Networks and Large Scale Parallel Distributed Processing Models. Ablex Publishing Corporation: Norwood, NJ. 1989.
[45]
Morasso P. Spatial control of arm movements. Experimental Brain Research. 1981; 42: 223–227.
[46]
Morasso P. Three dimensional arm trajectories. Biological Cybernetics. 1983; 48: 187–194.
[47]
Morasso P. Trajectory formation. In: Morasso P, Tagliasco V, eds. Human Movement Understanding. Elsevier Science Publishers: North Holland. 1986; 9–58.
[48]
Morasso P, Mussa Ivaldi FA. Trajectory formation and handwriting: a computational model. Biological Cybernetics. 1982; 45: 131–142.
[49]
Morasso P. Coordination aspects of arm trajectory formation. Human Movement Science. 1983; 2: 197–210.
[50]
Flash T, Hogan N. The coordination of arm movements: an experimentally confirmed mathematical model. The Journal of Neuroscience: the Official Journal of the Society for Neuroscience. 1985; 5: 1688–1703.
[51]
Lacquaniti F, Terzuolo C, Viviani P. The law relating the kinematic and figural aspects of drawing movements. Acta Psychologica. 1983; 54: 115–130.
[52]
Feldman AG. Functional Tuning of the Nervous System with Control of Movement or Maintenance of a Steady Posture: II Controllable Parameters of the Muscle. Biophysics. 1966; 11: 565–578.
[53]
Feldman AG. Once more on the equilibrium-point hypothesis (lambda model) for motor control. Journal of Motor Behavior. 1986; 18: 17–54.
[54]
Bizzi E, Hogan N, Mussa-Ivaldi FA, Giszter S. Does the nervous system use equilibrium-point control to guide single and multiple joint movements? The Behavioral and Brain Sciences. 1992; 15: 603–613.
[55]
Mohan V, Morasso P. Passive motion paradigm: an alternative to optimal control. Frontiers in Neurorobotics. 2011; 5: 4.
[56]
Bullock D, Grossberg S. Neural dynamics of planned arm movements: emergent invariants and speed-accuracy properties during trajectory formation. Psychological Review. 1988; 95: 49–90.
[57]
Barhen J, Gulati S, Zak MM. Neural Learning of Constrained Nonlinear Transformations. Computer. 1989; 22: 67–76.
[58]
Zak M. Terminal Attractors for Addressable Memory in Neural Networks. Physics Letters A. 1988; 133: 218–222.
[59]
Morasso P. Gesture formation: A crucial building block for cognitive-based Human–Robot Partnership. Cognitive Robotics. 2021; 1: 92–110.
[60]
Martinetz T, Schulten K. Topology Representing Networks. Neural Networks. 1994; 7: 507–522.
[61]
Meyer-Bäse A, Jancke K, Wismüller A, Foo S, Martinetz T. Medical image compression using topology-preserving networks. Engineering Applications of Artificial Intelligence. 2005; 18: 383–392.
[62]
Vathy-Fogarassy A, Kiss A, Abonyi J. Topology Representing Network Map – A New Tool for Visualization of High-Dimensional Data. In: Gavrilova ML, Tan CJK, eds. Transactions on Computational Science I. Springer: Berlin, Heidelberg. 2008.
[63]
Amari SI. Learning Patterns and Pattern Sequences by Self-Organizing Nets of Threshold Elements. IEEE Transactions on Computers. 1972; 100: 1197–1206.
[64]
Wilson HR, Cowan JD. Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal. 1972; 12: 1–24.
[65]
Beurle RL. Properties of a mass of cells capable of regenerating pulses. Philosophical transactions of the Royal Society of London. Series B, Biological sciences. 1956; 240: 55–94.
[66]
Sloane NJA. The Packing of Spheres. Scientific American. 1984; 250: 116–125.
[67]
Georgopoulos AP, Kettner RE, Schwartz AB. Primate motor cortex and free arm movements to visual targets in three-dimensional space. II. Coding of the direction of movement by a neuronal population. The Journal of Neuroscience: the Official Journal of the Society for Neuroscience. 1988; 8: 2928–2937.
[68]
Knudsen EI, du Lac S, Esterly SD. Computational maps in the brain. Annual Review of Neuroscience. 1987; 10: 41–65.
[69]
Katz LC, Callaway EM. Development of local circuits in mammalian visual cortex. Annual Review of Neuroscience. 1992; 15: 31–56.
[70]
Gilbert CD, Wiesel TN. Morphology and intracortical projections of functionally characterised neurones in the cat visual cortex. Nature. 1979; 280: 120–125.
[71]
Schwark HD, Jones EG. The distribution of intrinsic cortical axons in area 3b of cat primary somatosensory cortex. Experimental Brain Research. 1989; 78: 501–513.
[72]
Calvin W. Cortical columns, modules and Hebbian cell assemblies. In: Arbib M, ed. The handbook of brain theory and neural networks (pp. 269–272). MIT Press: Cambridge, MA. 1995.
[73]
Burdick JW. On the inverse kinematics of redundant manipulators: Characterization of the self-motion manifolds. Advanced Robotics. Springer: Berlin, Heidelberg. 1989; 25–34.
[74]
Gaudiano P, Grossberg S. Vector associative maps: Unsupervised real-time error-based learning and control of movement trajectories. Neural Networks. 1991; 4: 147–183.
[75]
Hyvarinen J. The Parietal Cortex of Monkey and Man. Springer-Verlag: New York. 1982.
[76]
Stein J. Space and the parietal association areas. In: Paillard J, ed. Brain and space (pp. 185–222). Oxford University Press: Oxford, UK. 1991.
[77]
Crammond DJ, Kalaska JF. Neuronal activity in primate parietal cortex area 5 varies with intended movement direction during an instructed-delay period. Experimental Brain Research. 1989; 76: 458–462.
[78]
Kalaska JF, Cohen DA, Prud’homme M, Hyde ML. Parietal area 5 neuronal activity encodes movement kinematics, not movement dynamics. Experimental Brain Research. 1990; 80: 351–364.
[79]
Singer W. Development and plasticity of cortical processing architectures. Science (New York, N.Y.). 1995; 270: 758–764.
[80]
Nicoll A, Blakemore C. Patterns of local connectivity in the neocortex. Neural Computation. 1993; 5: 665–680.
[81]
Reggia JA, D’Autrechy CL, Sutton III GG, Weinrich M. A competitive distribution theory of neocortical dynamics. Neural Computation. 1992; 4: 287–317.
[82]
Morasso P, Sanguineti V. How the brain can discover the existence of external egocentric space. Neurocomputing. 1996; 12: 289–310.
[83]
Morasso PG, Sanguineti V, Frisone F, Perico L. Coordinate-free sensorimotor processing: computing with population codes. Neural Networks: the Official Journal of the International Neural Network Society. 1998; 11: 1417–1428.
[84]
Bothorel A, Simon P, Wioland F, Zerling JP. Cinéradiographie des voyelles et consonnes du français. Technical report, Travaux de l’Institut de Phonétique de Strasbourg: France. 1986.
[85]
Badin P, Gabioud B, Beautemps D. Cineradiography of VCV sequences: articulatory-acoustic data for speech production model. International Congress on Acoustics: Trondheim, Norway. 1995; 349–352.
[86]
Morasso P, Morasso M. Taichi Meets Motor Neuroscience: An Inspiration for Contemporary Dance and Humanoid Robotics. Cambridge Scholars Publishing: UK. 2021.
[87]
Morasso P, Mohan V. Pinocchio: A language for action representation. Cognitive Robotics. 2022; 2: 119–131.
[88]
Buxhoeveden DP, Casanova MF. The minicolumn hypothesis in neuroscience. Brain: a Journal of Neurology. 2002; 125: 935–951.
[89]
Johansson C, Lansner A. Towards cortex sized artificial neural systems. Neural Networks: the Official Journal of the International Neural Network Society. 2007; 20: 48–61.
[90]
Singer W, Lazar A. Does the Cerebral Cortex Exploit High-Dimensional, Non-linear Dynamics for Information Processing? Frontiers in Computational Neuroscience. 2016; 10: 99.
[91]
Laje R, Buonomano DV. Robust timing and motor patterns by taming chaos in recurrent neural networks. Nature Neuroscience. 2013; 16: 925–933.
[92]
Christensen DV, Dittmann R, Linares-Barranco B, Sebastian A, Le Gallo M, Redaelli A, et al. 2022 roadmap on neuromorphic computing and engineering. Neuromorphic Computing and Engineering. 2022; 2: 022501.
[93]
Boahen K. Neurogrid: A Mixed-Analog-Digital Multichip System for Large-Scale Neural Simulations. Proceedings of the IEEE. 2014; 102: 699–716.
[94]
Davies M, Srinivasa N, Lin TH, Chinya G, Cao Y, Choday SH, et al. Loihi: A Neuromorphic Manycore Processor with On-Chip Learning. IEEE Micro. 2018; 38: 82–99.
[95]
Grübl A, Billaudelle S, Cramer B, Karasenko V, Schemmel J. Verification and Design Methods for the BrainScaleS Neuromorphic Hardware System. Journal of Signal Processing Systems. 2020; 92: 1277–1292.
[96]
Suárez LE, Richards BA, Lajoie G, Misic B. Learning function from structure in neuromorphic networks. Nature Machine Intelligence. 2021; 3: 771–786.
[97]
Markovic D, Grollier J. Quantum neuromorphic computing. Applied Physics Letters. 2020; 117: 150501.
[98]
Koch C, Hepp K. Quantum mechanics in the brain. Nature. 2006; 440: 611.
[99]
Jedlicka P. Revisiting the Quantum Brain Hypothesis: Toward Quantum (Neuro)biology? Frontiers in Molecular Neuroscience. 2017; 10: 366.
[100]
Wang Q, Schoenlein RW, Peteanu LA, Mathies RA, Shank CV. Vibrationally coherent photochemistry in the femtosecond primary event of vision. Science (New York, N.Y.). 1994; 266: 422–424.
[101]
Huelga SF, Plenio MB. Vibrations, quanta and biology. Contemporary Physics. 2013; 54: 181–207.
[102]
Ball P. Physics of life: The dawn of quantum biology. Nature. 2011; 474: 272–274.
[103]
Vaziri A, Plenio M. Quantum coherence in ion channels: resonances, transport and verification. New Journal of Physics. 2010; 12: 085001.
[104]
Hameroff S. Quantum computation in brain microtubules? The Penrose–Hameroff ‘Orch OR’ model of consciousness. Philosophical Transactions. Series A, Mathematical, Physical, and Engineering Sciences. 1998; 356: 1869–1896.
[105]
van Vreeswijk C, Sompolinsky H. Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science (New York, N.Y.). 1996; 274: 1724–1726.
[106]
Harish O, Hansel D. Asynchronous Rate Chaos in Spiking Neuronal Circuits. PLoS Computational Biology. 2015; 11: e1004266.
[107]
Cabrera JL, Milton JG. On-off intermittency in a human balancing task. Physical Review Letters. 2002; 89: 158702.
[108]
Bottaro A, Yasutake Y, Nomura T, Casadio M, Morasso P. Bounded stability of the quiet standing posture: an intermittent control model. Human Movement Science. 2008; 27: 473–495.
[109]
Asai Y, Tasaka Y, Nomura K, Nomura T, Casadio M, Morasso P. A model of postural control in quiet standing: robust compensation of delay-induced instability using intermittent activation of feedback control. PLoS ONE. 2009; 4: e6169.
[110]
Morasso P, Cherif A, Zenzeri J. Quiet standing: The Single Inverted Pendulum model is not so bad after all. PLoS ONE. 2019; 14: e0213870.
[111]
Emani PS, Warrell J, Anticevic A, Bekiranov S, Gandal M, McConnell MJ, et al. Quantum computing at the frontiers of biological sciences. Nature Methods. 2021; 18: 701–709.
[112]
Friston K, Mattout J, Kilner J. Action understanding and active inference. Biological Cybernetics. 2011; 104: 137–160.
[113]
Friston KJ, Parr T. Passive motion and active inference: Commentary on “Muscleless motor synergies and actions without movements: From motor neuroscience to cognitive robotics” by Vishwanathan Mohan, Ajaz Bhat and Pietro Morasso. Physics of Life Reviews. 2019; 30: 112–115.
[114]
Dennett D. Darwin’s “strange inversion of reasoning”. Proceedings of the National Academy of Sciences of the United States of America. 2009; 106 Suppl 1: 10061–10065.
[115]
Morgenstern A, Goldin-Meadow S. Afterword: gesture as part of language or partner to language across the lifespan. In: Morgenstern A, Goldin-Meadow S, eds. Gesture in language: development across the lifespan. De Gruyter Mouton: Berlin. 2022.
[116]
Gontier N. Defining Communication and Language from Within a Pluralistic Evolutionary Worldview. Topoi. 2022; 41: 609–622.
[117]
Liu L, Ballard D. Humans use minimum cost movements in a whole-body task. Scientific Reports. 2021; 11: 20081.
[118]
DeWolf T. Spiking neural networks take control. Science Robotics. 2021; 6: eabk3268.
[119]
Sepulchre R. Spiking Control Systems. Proceedings of the IEEE. Institute of Electrical and Electronics Engineers. 2022; 110: 577–589.

Publisher’s Note: IMR Press stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
