Norbert Wiener and Nikolai Bernstein set the stage for a worldwide multidisciplinary attempt to understand how purposive action is integrated with cognition in a circular, bidirectional manner, both in the life sciences and in engineering. Such a ‘workshop’ is still open and far from a satisfactory level of understanding, despite the current hype surrounding Artificial Intelligence (AI). The problem is that cognition is frequently confused with intelligence, overlooking a crucial distinction: the type of cognition required of a cognitive agent to meet the challenge of adaptive behavior in a changing environment is Embodied Cognition, which is antithetical to the disembodied and dualistic nature of the current wave of AI. This essay is the perspective formulation of a cybernetic framework for the representation of actions that, following Bernstein, is focused on what has long been considered the fundamental issue underlying action and motor control, namely the degrees of freedom problem. In particular, the paper reviews a solution to this problem based on a model of ideomotor/muscle-less synergy formation, namely the Passive Motion Paradigm (PMP). Moreover, it is shown how this modeling approach can be reformulated in a distributed manner, based on a self-organizing neural paradigm consisting of multiple topology-representing networks with attractor dynamics. The computational implications of such an approach are also briefly analyzed, looking at possible alternatives to the von Neumann paradigm, namely neuromorphic and quantum computing, aiming in perspective at a hybrid computational framework for integrating digital, analog, and quantum information. It is also suggested that such a framework is crucial not only for the neurobiological modeling of motor cognition but also for the design of the cognitive architecture of the autonomous robots of Industry 4.0 that are supposed to interact and communicate naturally with human partners.
The invention of cybernetics by Norbert Wiener more than seventy years ago  marked the acquisition of two main interdisciplinary concepts: (1) the large common ground between neurophysiology and engineering methodologies and (2) the unitary nature of the multi-scale/multi-level investigation of “cognitive agents”, whether biological or artificial.
There is no doubt that modern engineering methodologies, from informatics to signal processing, control methodologies, and telecommunications are somehow spin-offs with a common cybernetic origin. At the same time, we should remember that Wiener and colleagues  advocated for neurophysiology “a new step in the study of that part of neurophysiology which concerns not solely the elementary processes of nerves and synapses but the performance of the nervous system as an integrated whole”. This was the preliminary background that suggested, a few years later, the proposal of “Cybernetics”  to denote “the entire field of control and communication theory, whether in the machine or in the animal”, based on the concept that “the problems of control engineering and communication engineering were inseparable and that they centered not around the techniques of electrical engineering but the much more fundamental notion of the message, whether this should be transmitted by electrical, mechanical, or nervous means”.
One of the early accomplishments of cybernetics was to establish the role of feedback both in engineering design and biology. Not much attention was devoted to cognition per se, although it was clear that feedback and feedback control may require the integration, in the closed loop, of specific information/knowledge, as reflected in the concept of “control by informative feedback”. A system that counts on feedback for its behavior and stability is fundamentally linked and integrated with the surrounding environment, partially destroying the clarity and rationale of simple causal reasoning: in the closed loop between two interacting systems (say a purposive agent and its environment), the first system influences the second and the second influences the first, leading to a circular pattern of interaction that requires analyzing the coupled systems as a whole. Such an intrinsic circularity of purposive action inexorably leads to the notion of cognition as a necessary side-effect of feedback control, a concept that was clearly stated by Maturana and Varela , thus complementing the overall view of Cybernetics: they proposed Enactivism, namely a position in cognitive science that argues that cognition arises through a dynamic interaction between an acting organism and its environment. Stretching this concept to the extreme, the authors stated that it is valid for all organisms, with or without a nervous system, because living implies adaptation to a specific environment.
However, even avoiding such extremization and focusing the attention on purposive agents like humans and humanoid robots, the close relation suggested by Enactivism between feedback and adaptation to the environment clarifies why and how cognition is fundamental for purposive action, both at the phylogenetic and ontogenetic level. Along the same line, there is also the emergence of a specific view on the nature of cognition known as Embodied Cognition . This theory, fully positioned in the cybernetic framework, is fundamentally opposed to the many forms of dualism that, since the time of Descartes, separated the body from the mind, pragmatic from intellectual activities, hardware from software, and so on: thus, cognitive processes are shaped by and integrated with the sensory and motor processes of the entire body in its continuous interaction with the environment. Such circularity also resonates with another stream of psychological research on cognitive development in children, expressed, in particular, by the “circular reaction strategy” advocated by Jean Piaget , namely the efference-reafference cycle that allows humans, particularly during early development, to learn sensory-motor transformations via an active exploration of the environment [6, 7, 8]. Moreover, this strategy is a self-organizing paradigm that can be associated with self-organizing neurodynamics for learning sensory-motor transformations [9, 10, 11, 12, 13, 14].
At the same time, we cannot ignore a divergent line of research that had initially a strong link with cybernetics, namely Artificial Intelligence (AI) . Apart from the unresolved issue of the specific relation between cognition and intelligence, it is a fact that AI, departing from the cybernetic context, was articulated into two main streams, one focused on the symbolic representation of knowledge (Symbolic AI) and another (Connectionist AI) focused on the acquisition of knowledge via supervised training (typically by using the backpropagation algorithm) of feedforward neural networks . The former stream is currently active in the development of cognitive architectures for robotics  and the latter stream in deep learning techniques  aiming at achieving Artificial General Intelligence (AGI). In both cases, however, the type of cognition that is aimed at is overall “disembodied” and what is lost is one of the fundamental elements of cybernetics, namely the issue of self-organization through agent-environment interaction, learning, and adaptation. However, we should not forget that the self-organization issue was kept alive, since the early days of cybernetics, by two minority streams of research in neural networks: (1) self-organizing neural networks  trained by unsupervised Hebbian learning or self-supervised learning , inspired by the already mentioned Piagetian “circular reaction strategy”; (2) associative memories [20, 21, 22, 23, 24], also trained by Hebbian learning and characterized by a collective energy function that implies attractor dynamics.
In this framework, articulated around the central role of cybernetics for understanding the organization of purposive actions in humans and robots, we should also consider the markedly original work of Nikolai Bernstein . Although he is mostly known for the definition of the “degrees of freedom problem” as the crucial topic underlying the organization of action, his research achievements are indeed multifaceted and synergic in many senses with cybernetics. His early research was aimed at overcoming the limitations of Pavlov’s theory on conditioned reflexes: it was clearly stated by Bernstein in 1924 that such a theory cannot explain human skills because it ignores their purposeful character . A similar evolution occurred in Western neurophysiology when the limitation of the spinal reflex as the basic building block of motor neurophysiology, advocated by Charles Sherrington , became evident, thus promoting the emergence of cognitive neuroscience. It is important to note that the importance of feedback, central in the development of Wiener’s cybernetics, was clear as well in the mind of Bernstein not only as a crucial ingredient of motor control (providing corrections of the on-going movement) but even more importantly from the cognitive point of view, suggesting a mechanism of anticipation or “ante factum corrections” . In agreement with the rationale of embodied cognition, Bernstein suggested that the organization of purposive actions is driven by the goal, which is the “meaning of the action” and plays the role of an invariant in the production of the action. This line of research led to the theory of non-individualized control of complex systems and the principle of minimal interaction .
According to Berthoz and Petit , Bernstein was one of the first to conceive anticipation/prediction as a constructive element of purposive action, although we should also consider that this issue has a clear link with the ideomotor theory of action, dating back to James’ Principles of Psychology  and recently revisited . Moreover, the concept that the “idea” of an action, i.e., the predicted/desired sensory consequences of an action, applies both to real (overt) and imagined (covert) actions  is reflected in the concept of Kinesthetic Imagination [33, 34] as a driving force for the acquisition of skilled behavior.
In summary, the cybernetic framework for the representation of action must be focused on what has long been considered the fundamental issue underlying action and motor control, namely the degrees of freedom problem. The solution to this problem suggested by Bernstein consisted, essentially, in “freezing” the number of possibilities at the beginning of motor learning, an idea revisited years later  under the name of Uncontrolled Manifold (UCM). The problem here is that the selection of the Degrees of Freedom (DoFs) that need to be “frozen”, for a specific task, is far from straightforward and thus the advocated reduction of complexity for the brain tends to disappear. The same kind of conceptual contradiction can also be found in the proposal of “muscle synergies”  as basic building blocks for the construction of natural motor behavior. The alternative, proposed by the author [37, 38], is based on a computational model for the generation of “muscle-less synergies” or “ideomotor synergies” that apply both to covert (imagined) actions and overt (real) actions, in agreement with the theory of the Neural Simulation of Action formulated by Jeannerod . The rest of the paper builds upon ideomotor/muscle-less synergy formation, to clarify how this modeling approach is a natural heir of cybernetics, on one side, and can be grounded, on the other, on self-organizing neural modeling, including possible quantum computing implications.
The rationale of muscle-less synergies is supported by the discovery of motor imagery [39, 40, 41, 42] and, as a consequence, the distinction between ideomotor synergy formation and synergy control: the former item has mainly the purpose of anticipating the consequences of a plan of action through an internal simulation that includes the selection and recruitment of the required DoFs as well as their ranking according to the degree of relative relevance for the action; the latter item, which is relevant only in the case of overt action, includes the activation of the relevant muscles by blending different control strategies: feedforward, feedback, and stiffness (via coactivation of antagonistic muscles). The kinesthetic patterns generated by the muscle-less synergies are also crucial for optimally tuning the mentioned control strategies.
The main purpose of this review paper is to offer a computational perspective for the design of cognitive architectures of the robots of Industry 4.0 that are supposed to interact and communicate with human partners. The working hypothesis is that to achieve that purpose humans and robots should share the overall computational organization, although the detailed “hardware” may be quite different. This is the innovative contribution of the paper, which is meant to re-evaluate the deep rationale of cybernetics, namely the belief that neurobiology and neurotechnology can feed and improve each other: a belief that also underlies the field of integrative neuroscience.
The computational model of synergy formation for muscle-less or ideomotor synergies is based on the Passive Motion Paradigm (PMP) [43, 44]. It was conceived for explaining and reproducing biological motion, with particular attention to the spatio-temporal invariants that characterize common human gestures such as reaching in 2D , reaching in 3D , whole-body gesturing as when writing on a blackboard , handwriting and hand-drawing , and bimanual coordination . The invariant features indicate that the figural and kinematic aspects of human gestures are not independent, and the figural-kinematic link occurs whatever the number of DoFs recruited for a specific action. In particular, in point-to-point, unconstrained movements the trajectory is (approximately) straight and the speed profile is bell-shaped and symmetric, whatever the starting point, direction, and length. In common gestures, where the figural aspect is meant to express a specific meaning and is composed as a sequence of primitive gestures, the figural-kinematic linkage is expressed by the anti-correlation of the speed and curvature profiles: the times of peak curvature coincide with the times of minimum speed and the times of maximum speed coincide with the times of minimum curvature (Fig. 1, Ref. ).
Spatio-temporal or figural-kinematic invariants in trajectory formation. (A) Planar reaching movements between six target points; note the invariant straight point-to-point trajectories and the invariant bell-shaped speed profiles. (B) Three examples of continuous hand gestures displayed as digitized trajectories, including the profiles of the velocity (V) and curvature (C); note the anti-correlation of the two profiles. From Morasso, P. “A vexing question in motor control: the degrees of freedom problem”. Front. Bioeng. Biotechnol. 9:783501, 2022. .
There have been attempts to explain the two main features of the spatio-temporal invariants in terms of specific mathematical models: for example, the minimization of jerk for the approximation of reaching movements  and the 2/3 power law for reproducing repetitive curved shapes . The proposed model, based on the PMP, was originally conceived in the framework of the Equilibrium Point Hypothesis (EPH) [52, 53, 54], namely the idea that the motor system has point-attractor dynamics determined by the visco-elastic properties of muscles. In other words, the body is viewed as a network of spring-like elements that store elastic energy, contributing to the global potential energy that recapitulates, in a smooth, analog manner, the complex set of bodily interactions, providing a “landscape”, with hills and valleys, that induces the overall body model to navigate “passively” in the landscape, attracted by the nearest equilibrium configuration. The minimization of potential energy is a “global process” arising from “local interactions”. The brain can tune such local interactions in a task-oriented manner, modifying the shape of the landscape and the corresponding force field; thus, there is no need for the brain to represent and control actions directly and continuously, because an indirect and discontinuous intervention is sufficient, by preparing new equilibrium points in advance, in anticipation of the future course of the action. This is a general concept that is fully in tune with the Bernsteinian viewpoint, whereas it seems at odds with the emphasis on continuous feedback attributed to the cybernetic point of view that, in a narrow interpretation, could regard the generation and control of actions as a “servomechanism”.
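As a minimal numerical illustration of the first invariant, the minimum-jerk model of a point-to-point reach (a standard textbook form; the start/end points and duration below are arbitrary assumptions, not values from this paper) produces a straight path with a symmetric, bell-shaped speed profile:

```python
import numpy as np

# Minimum-jerk reach: position follows the polynomial 10 s^3 - 15 s^4 + 6 s^5
# of normalized time s = t/T, applied along the straight line from x0 to xT.
x0, xT, T = np.array([0.0, 0.0]), np.array([0.3, 0.1]), 1.0
t = np.linspace(0.0, T, 1001)
s = t / T
xi = 10 * s**3 - 15 * s**4 + 6 * s**5      # normalized displacement, 0 -> 1
traj = x0 + xi[:, None] * (xT - x0)        # straight 2D trajectory

# Tangential speed: symmetric bell shape, peaking at mid-movement.
speed = np.linalg.norm(np.gradient(traj, t, axis=0), axis=1)
peak = t[np.argmax(speed)]
print(peak)                                # speed peaks at mid-movement
```

Whatever the direction and amplitude of the reach, only the scaling of the bell changes, which is exactly the invariance described above.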
But this is just a narrow view and the PMP model is intended to inherit the main ideas on the representation and generation of actions from both the general cybernetic view, on one side, and the Bernsteinian view, through EPH, on the other, with a clear link to the artificial neural networks with attractor dynamics such as associative memories proposed by Hopfield .
The PMP model solves the degrees of freedom problem in an implicit manner, whatever the degree of kinematic redundancy, by avoiding ill-posed transformations, like inverse kinematics, and counting only on well-posed computations, such as mapping joint angles to end-effector position (direct kinematics) and mapping end-effector forces to joint torques (direct statics).
In agreement with the theory on the neural simulation of action , as a unifying mechanism for motor cognition, the PMP model is suggested to apply both to overt or real actions, which imply the activation of muscle synergies, and to covert actions, which refer to the specific cognitive aspects of action, in terms of anticipation and imagination: the goals of action are expressed as a set of elastic force fields applied to specific parts of the body and then diffused to the whole-body network. Remarkably, force fields are additive, thus providing a natural composition of complex gestures in terms of motor primitives. Moreover, the original PMP model of synergy formation was extended , incorporating a non-linear gating mechanism of the virtual force field, similar to the GO-signal of the vector-integration-to-endpoint (VITE) model  which corresponds to the well-known cortical-subcortical loop and induces a terminal attractor dynamics to the synergy-formation model [57, 58].
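The PMP loop just described can be sketched numerically. The toy implementation below, for a planar 3-DoF arm reaching a 2D target (a redundant task), is only an assumed instantiation: the link lengths, the stiffness/admittance gains K and A, and the minimum-jerk form of the gating GO signal are illustrative choices, not parameters from the source. Note that only well-posed operations appear: direct kinematics and direct statics (the transpose Jacobian).

```python
import numpy as np

L = np.array([0.3, 0.3, 0.2])                 # assumed link lengths

def fkin(q):
    # Direct kinematics of a planar serial arm (well-posed).
    a = np.cumsum(q)
    return np.array([np.sum(L * np.cos(a)), np.sum(L * np.sin(a))])

def jac(q):
    # Jacobian of the forward kinematic function.
    a = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(a[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(a[i:]))
    return J

def pmp_reach(q, x_target, T=1.0, dt=0.001, K=20.0, A=1.0):
    steps = int(T / dt)
    for k in range(steps):
        s = (k + 1) / steps
        xi = 10 * s**3 - 15 * s**4 + 6 * s**5              # minimum-jerk GO signal
        xi_dot = (30 * s**2 - 60 * s**3 + 30 * s**4) / T
        gamma = min(xi_dot / (1.0 - xi + 1e-6), 30.0)      # terminal-attractor gating
        F = K * (x_target - fkin(q))                       # virtual force field at the end-effector
        tau = jac(q).T @ F                                 # direct statics (well-posed)
        q = q + gamma * A * tau * dt                       # relaxation in proprioceptive space
    return q

q0 = np.array([0.5, 1.0, 0.8])
target = np.array([0.25, 0.45])
qT = pmp_reach(q0, target)
print(np.linalg.norm(fkin(qT) - target))      # small residual at the end of the transient
```

The joint trajectory emerges from relaxation in the force field; no inverse kinematics is ever computed, so the same loop works unchanged for any number of DoFs.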
Fig. 2 (Ref. ) is a simplified version of the PMP model as a pair of non-linear interconnected modules: module A operates in the low-dimensional exteroceptive or egocentric space (typically 3D); module B operates in the high-dimensional proprioceptive space (nD, where n is the number of DoFs of the model). The input to block A is a final target point
Simplified PMP model of synergy formation, as a pair of non-linear interconnected modules (A and B). A operates in the low-dimensional exteroceptive or egocentric space (typically 3D) and B in the high-dimensional proprioceptive space (nD, where n is the number of DoFs of the body-model). The input to module A is a final target point
The second module (B) implements the PMP by applying to the end-effector a force
As clarified above, the interaction/integration between the two representation levels (exteroceptive and proprioceptive) is provided by the Jacobian matrix of the kinematic transformation (or forward kinematic function), which is the main component of the body model:
The simulation of the model of synergy formation in Fig. 2 consists of the integration of the following ordinary differential equations (ODEs):
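The equations themselves are not reproduced in this excerpt; in the standard PMP formulation the synergy-formation dynamics is usually written (with assumed, conventional symbols: $x = f(q)$ the forward kinematic function, $K$ a virtual stiffness, $A$ a virtual admittance, and $\Gamma(t)$ the GO-type gating signal) as:

```latex
\begin{aligned}
F &= K\,(x_T - x) && \text{virtual force field attracting the end-effector to the target } x_T\\
\tau &= J^{\mathsf{T}}(q)\,F && \text{direct statics: end-effector force $\rightarrow$ joint torques}\\
\dot q &= \Gamma(t)\,A\,\tau && \text{gated relaxation in the proprioceptive space}\\
\dot x &= J(q)\,\dot q && \text{direct kinematics: joint velocities $\rightarrow$ end-effector velocity}
\end{aligned}
```

Integrating this system makes the end-effector drift toward the target while the joint angles settle on one of the infinitely many compatible configurations, which is how the degrees of freedom problem is solved implicitly.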
From the kinematic point of view, the Jacobian is not an invertible operator: it provides a unique solution in the mapping from the proprioceptive to the exteroceptive domain (direct kinematics), but not in the opposite direction.
It is also worth considering that although the mathematical formulation of the Jacobian matrix can be derived explicitly in closed form through standard, although generally complicated, methods, these methods are unlikely to match biological reality. A more biologically plausible approach is based on the circular reaction strategy: the general idea is that the purposive agent performs a random set of movements, where the joint angular patterns are distributed in an approximately uniform manner in the proprioceptive manifold, keeping note of both the joint angles and the position/orientation of the end-effector (training set). In other words, learning is behaviorally unsupervised, in the sense that the training set is autonomously generated by the agent in such a babbling phase. Moreover, the neural representation of the Jacobian matrix can be obtained by training a feedforward, multilayer neural network using the backpropagation method. Although for this kind of network the connection weights between input, hidden, and output neurons are generally unidirectional, it can be demonstrated  that the same network can be used in both directions: from the proprioceptive input neurons to the exteroceptive output neurons it approximates the Jacobian and in the opposite direction it approximates the transpose Jacobian. Multilayer feedforward networks are far from being biologically plausible, in particular, because the training method (backpropagation) is quite implausible. However, the overall plausibility of the PMP computational architecture, summarized in Fig. 2, is based on the underlying circular reaction strategy and its self-organizing flavor. This argument, as explained in the next section, is further motivated by a neural formulation of the computational model of synergy formation that uses Hebbian learning instead of backpropagation.
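This bidirectional use of a single network can be checked directly. In the sketch below (a hypothetical one-hidden-layer network with random weights, standing in for a trained one; sizes and names are illustrative), the forward pass defines a Jacobian, and propagating a vector backwards through the same weights computes exactly the transpose-Jacobian action:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical network mapping n = 3 joint angles to m = 2 end-effector coordinates.
n, h, m = 3, 16, 2
W1 = 0.5 * rng.normal(size=(h, n))
W2 = 0.5 * rng.normal(size=(m, h))

def forward(q):
    # Proprioceptive input -> exteroceptive output.
    return W2 @ np.tanh(W1 @ q)

def jacobian(q):
    # Analytic Jacobian of the network: J = W2 diag(1 - tanh^2) W1.
    s = 1.0 - np.tanh(W1 @ q) ** 2
    return W2 @ (s[:, None] * W1)

def reverse_pass(q, u):
    # Propagating u backwards through the same weights yields J^T u,
    # i.e., the force -> torque mapping of the PMP.
    s = 1.0 - np.tanh(W1 @ q) ** 2
    return W1.T @ (s * (W2.T @ u))

q = rng.normal(size=n)
u = np.array([1.0, -2.0])
print(np.allclose(reverse_pass(q, u), jacobian(q).T @ u))
```

The identity holds for any weights, trained or not, which is why a single learned body model can serve both the direct-kinematic and the direct-static legs of the PMP loop.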
The computational model of Fig. 2 is a lumped system, namely a model in which the dependent variables of interest are a function of time alone and the analysis of the dynamic behavior of the model implies solving a set of ordinary differential equations (ODEs). Although we clarified how this model can capture relevant aspects of sensorimotor cognition, the biological plausibility of the model and its computational efficiency could be enhanced by a distributed implementation based on neural networks. One step in this direction was already provided in the previous section by showing how to integrate into the lumped formulation of the model a feedforward neural network for representing the Jacobian matrix, which is a crucial element of the Passive Motion Paradigm. However, this does not improve the overall biological plausibility, due in particular to the artificial nature of the back-propagation training mechanism; moreover, integrating a neural network representation in the lumped model yields an implausible hybrid computational structure. An additional step forward, outlined in this section, is a fully distributed implementation of the model of ideomotor synergy formation in which the variables of interest are distributed on collections of Processing Elements (PEs) that we call Sensorimotor Neural Fields. In particular, we propose two interacting neural fields, one related to exteroceptive or egocentric information and the other to proprioceptive information, in analogy to the subdivision of the model of Fig. 2 into two modules (A and B). More specifically, both neural fields are represented by Topology Representing Networks (TRN) , extended in such a way as to incorporate an attractor neurodynamics inspired by the Hopfield associative memory model [20, 22] and trained by unsupervised Hebbian learning.
In passing, we wish to observe that although both neural paradigms were conceived more than 30 years ago and their computational potential was somehow obscured by the recent emphasis and commercial success of supervised learning in deep feedforward neural networks , both neural models are still active research paradigms [23, 24, 61, 62].
At the same time, we should clarify in which sense the proposed Sensorimotor Neural Fields are related to Neural Field research at large, pioneered by Amari and Wilson & Cowan for developing a continuum approximation of the neural activity of specific cortical areas [63, 64]. Typically, the numerous neural field models developed over time are tissue-level partial differential equations (PDEs) that describe the spatiotemporal evolution of coarse-grained variables in populations of neurons: the grains in such neural areas typically reflect micro- or macro-columns and thus represent a mean-field model, averaging neural activity over a time interval of the order of several milliseconds. Consequently, neural fields are usually continuous and coarse-grained in time and space. We should also mention that there is some similarity between neural field models and neural mass models , with the difference that the latter models neglect spatial extensions. Since neural field models are nonlinear spatially extended systems, they are capable in principle of supporting the formation of spatio-temporal patterns, such as bumps (for population coding) and traveling waves. One of the common assumptions in most neural field models is that the networks are homogeneous and isotropic, typically distributed on a bi-dimensional manifold. In contrast, the neural sensorimotor field model described in the following focuses on representing the dimensionality of the sensorimotor manifolds through the lateral connectivity of the PEs, organized indeed as Topology Representing Networks. In summary, such sensorimotor neural fields are coarse-grained in time and space but are not continuous in space, implementing a tessellation of high-dimensional manifolds, and thus are represented by a large set of ODEs rather than PDEs.
A neural sensorimotor field can be intended as a collection of neural assemblies or processing elements (PEs), such as cortical micro-columns, logically distributed on some smooth hyper-surface or manifold, encoded by the connectivity patterns of the PEs. A TRN, in the context of this model, can be used to represent neural fields. All the PEs of a neural field receive a common thalamocortical input vector
Now, suppose that during training the input sensory signal is an n-dimensional vector that is generated by sampling in an approximately uniform manner a finite
The relation between the connectivity of a TRN and the dimensionality of the input sensory signal can be derived from the theory of dense sphere packing  and, in particular, from the definition of the “kissing number” K: for the regular tessellation of nD space, K is the number of hyperspheres of equal radius that “touch” a given hypersphere in the densest packing. Bearing in mind that no general algorithm exists for computing K as a function of n, it is worth considering a few notable cases: K = 2, 6, 12, 24, 40, 72, 126, 240 for n = 1, 2, 3, 4, 5, 6, 7, 8. In practice, the tessellation of the manifold offered by a trained TRN will not be perfectly regular, but the notion of kissing number as an indicator of hidden dimensionality can be associated with the distribution, over the network, of the number of cross-connections of each PE. Thus, if the hidden dimensionality of the input vector
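The connectivity/dimensionality link can be illustrated with a small simulation. The sketch below (grid size, jitter, and sample count are arbitrary assumptions) builds a TRN-like graph by a competitive Hebbian rule, connecting the two nearest prototypes for each uniformly sampled 2D input, and then compares the average number of cross-connections of interior PEs with the kissing numbers K(n):

```python
import numpy as np

rng = np.random.default_rng(1)

# 100 jittered prototypes on a 10x10 grid (an assumed stand-in for trained PEs).
side, jitter = 10, 0.25
grid = np.stack(np.meshgrid(np.arange(side), np.arange(side)), -1).reshape(-1, 2).astype(float)
proto = grid + rng.uniform(-jitter, jitter, grid.shape)

# Competitive Hebbian learning: for each sample, connect the two nearest PEs.
edges = set()
for x in rng.uniform(-0.5, side - 0.5, size=(20000, 2)):
    d = np.linalg.norm(proto - x, axis=1)
    i, j = np.argsort(d)[:2]
    edges.add((min(i, j), max(i, j)))

deg = np.zeros(len(proto))
for i, j in edges:
    deg[i] += 1
    deg[j] += 1

# Average degree of interior PEs (boundary PEs have fewer neighbors).
interior = [k for k, p in enumerate(proto)
            if 1.5 < p[0] < side - 2.5 and 1.5 < p[1] < side - 2.5]
avg = deg[interior].mean()

kissing = {1: 2, 2: 6, 3: 12, 4: 24}
n_est = min(kissing, key=lambda n: abs(kissing[n] - avg))
print(avg, n_est)
```

With a 2D input manifold the interior connectivity should settle near K(2) = 6, so the nearest kissing number recovers the hidden dimensionality of the sampled signal.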
Although the classic neural field models [11, 12] are based on a flatness assumption, this hypothesis is contradicted by the fact that the structure of lateral connections is not genetically determined but depends mostly on activation during development: such connections are known to grow exuberantly after birth, reaching their full extent within a short period, followed by a pruning process which ends in a well-defined pattern of connectivity, characterized by a large amount of non-local connections. Such connections to non-neighboring microcolumns are organized into characteristic patterns: collaterals of pyramidal axons typically travel a characteristic lateral distance without giving off terminal branches and then produce tightly packed terminal clusters; the characteristic distance is not a universal cortical parameter and is not distributed in a purely random fashion but differs across cortical areas [70, 71, 72]. Thus, the development of lateral connections depends on the cortical activity caused by the external inflow, in such a way as to capture and represent the (hidden) correlation in the input channels.
For the neuronal formulation of the synergy formation model we need at least two fields, representing two sensorimotor manifolds: one related to the representation of the exteroceptive, egocentric, or distal space and the other to the proprioceptive space.
Fig. 3 is a sketch of the two trained maps: the receptive field centers of the
A sketch of two interacting neural fields, represented by two trained TRNs: one hosted by the exteroceptive map and the other by the proprioceptive map.
It is worth noting that this kind of neuronal representation of synergy formation, based on the circular reaction strategy, implicitly incorporates a treatment of joint limits that is more general and more robust than the one provided by the modeling framework depicted in Fig. 2. Since network training through circular reaction integrates the sensory-motor data acquired during the untargeted exploration of the environment, the learned prototypes automatically incorporate all the biomechanical constraints and guarantee a safe limitation of the planned patterns. Moreover, amplification phenomena can occur if the exploratory movements during circular reaction do not sample the workspace uniformly but are more concentrated in some areas, a sort of attentional proprioceptive fovea. In other words, the representation of sensory-motor spaces does not need to have a uniform and pre-fixed resolution, but a variable resolution that can be fine-tuned by experience. The two processes (planning and learning) could be made to co-exist without interference using autonomous mechanisms of selective attention and vigilance similar to those studied by Gaudiano and Grossberg .
A likely site for the computational model based on interacting TRNs is the posterior parietal cortex (PPC), particularly as regards the association area 5 [75, 76], which is the crossroad between the somatosensory cortex (areas 1, 2, 3), the motor cortex (areas 4 and 6), the other part of the PPC (area 7) involved in the integration of external space structures, and sub-cortical as well as spinal circuits: PPC processes a combination of peripheral and centrally generated inputs and is potentially suitable to synthesize neuronal representations in active movements. It is important to note that area 5 is activated in anticipation of intended movements  and is insensitive to load variations , i.e., it appears to deal with the purely geometric and kinematic aspects of movements.
On top of the topological organization of the two interconnected TRNs, which correspond to the two modules A and B of Fig. 2, it is necessary to design the corresponding neuro-dynamics, first of a single neural field (module A) and then of the interconnected fields (A plus B). Lateral intra-connections have a crucial role in the process: although each of them may be individually too “weak”, thus going virtually unnoticed while mapping the receptive fields of cortical neurons, the total effect on the overall dynamics of cortical maps may be substantial, as suggested by the sharp increase of the number of intra-connections with the dimensionality of the manifold and by cross-correlation studies . Lateral connections from superficial pyramids tend to be recurrent (and excitatory) because 80% of synapses are with other PEs and only 20% with inhibitory interneurons, most of them acting within columns : recurrent excitation is likely to be the underlying mechanism which produces the synchronized firing that has been observed in distant mini-columns.
The existence (and preponderance) of massive recurrent excitation in the cortex is in contrast with what could be expected, at least in primary sensory areas, considering the ubiquitous presence of peristimulus competition (or “Mexican-hat pattern”), which has been observed in many pathways, such as the primary somatosensory cortex, and has been confirmed by direct excitation of cortical areas as well as by correlation studies; in other words, in the cortex there is a significantly larger amount of long-range inhibition than expected from the density of inhibitory synapses. In general, “recurrent competition” has been assumed to be the same as “recurrent inhibition”, providing an antagonistic organization that sharpens responsiveness to an area smaller than would be predicted from the anatomical funneling of inputs. Thus, an intriguing question is how long-range competition can arise without long-range inhibition; a possible solution is the mechanism of gating inhibition based on a competitive distribution of activation, proposed by Reggia  and further investigated by Morasso and Sanguineti .
The neurophysiological evidence summarized above about the organization of
neural fields can be modeled in different manners and the following mean-field
model of the dynamics of a single sensorimotor neural field is just an example.
In this model, for simplicity, the generic cortical minicolumn is lumped into a single PE, characterized by a time-varying activity level; N is the number of PEs of the neural field. The first element of the governing equation provides the terminal attractor feature of the field neurodynamics, gating the relaxation with a non-linear time-base function. The right-hand side of the equation then combines three contributions: (1) the first expresses the recurrent lateral interactions among neighboring PEs; (2) the second expresses the shunting interaction that implements gating inhibition; (3) the third is related to the external thalamo-cortical input.
Shunting interaction, together with gating inhibition, is crucial for inducing the emergence of a manifold-wide behavior that is analogous to the synergy formation process described above for the computational model of Fig. 2.
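The gating mechanism behind the terminal attractor can be made concrete with a minimal sketch (our own discretization, with illustrative parameters): a state relaxing toward a target under an ordinary attractor field becomes a terminal attractor when the relaxation is gated by a minimum-jerk time-base generator, reaching the target in finite time with a bell-shaped speed profile, like the shift of the population code in module A.

```python
import numpy as np

def time_base(tau):
    """Minimum-jerk time base xi(tau) and its derivative, 0 <= tau <= 1."""
    xi = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    xid = 30 * tau**2 - 60 * tau**3 + 30 * tau**4
    return xi, xid

def gated_relaxation(x0, xT, T=1.0, n=1000):
    """Integrate dx/dt = Gamma(t) * (xT - x) with Gamma = xi'/(1 - xi).

    The gating turns an ordinary attractor into a *terminal* attractor:
    x reaches xT at t = T, with a bell-shaped speed profile.
    (A sketch of the gating idea; the discretization is our own.)
    """
    dt = T / n
    x = np.asarray(x0, dtype=float).copy()
    xT = np.asarray(xT, dtype=float)
    traj = [x.copy()]
    for k in range(n - 1):                   # stop one step early: Gamma diverges at tau = 1
        xi, xid = time_base(k * dt / T)
        gamma = xid / (1.0 - xi) / T
        x = x + dt * gamma * (xT - x)
        traj.append(x.copy())
    traj.append(xT.copy())                   # terminal attractor: on target at t = T
    return np.array(traj)

# Same scenario as the neural-field simulation: from (+0.2, -0.2) to (-0.2, +0.2).
traj = gated_relaxation([0.2, -0.2], [-0.2, 0.2])
speed = np.linalg.norm(np.diff(traj, axis=0), axis=1)   # bell-shaped profile
```

With this gating, the solution follows the minimum-jerk time base exactly in the continuous limit, so the speed profile peaks at mid-movement and vanishes at both ends.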
In summary, the transient behavior of the map can be described as follows: after a sudden shift of the input variable x, the population code smoothly migrates from the initial activity peak to the new one, driven by the gated field dynamics.
Fig. 4 shows the graphical output of the simulation of a simple neural field or map whose dynamics is described by Eqn. 3, illustrating the computational mechanism described above. The input environmental variable x is two-dimensional, varying in a circular domain. The neural field includes 128 PEs that, after training, are arranged in a regular tessellation of the input domain. The initial peak of activity of the map (T = 0) is located at (+0.2, –0.2); the final target is then instantiated at position (–0.2, +0.2), triggering a transient that lasts 1 s and shifts the population code from the initial to the final position. This distributed dynamic behavior mirrors the dynamics of block A in Fig. 2, where the moving target is attracted to the final target with a bell-shaped velocity profile.
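Since the behavior above is described in terms of a moving population code, a small read-out sketch may help (the grid size, tuning width, and square layout are our own illustrative choices; the simulation in the text uses 128 PEs on a circular domain): the value encoded by the field is recovered as the activity-weighted centroid of the PEs' preferred input values.

```python
import numpy as np

# Hypothetical layout: a 12x12 grid of PEs tessellating a square input
# domain (stand-in for the 128 PEs on a circular domain of Fig. 4).
g = np.linspace(-0.5, 0.5, 12)
centers = np.array([(xc, yc) for xc in g for yc in g])

def bump(x, width=0.1):
    """Gaussian population activity evoked by the input value x."""
    d2 = ((centers - np.asarray(x)) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * width**2))

def decode(a):
    """Population-code read-out: activity-weighted centroid of the PEs."""
    return (a[:, None] * centers).sum(axis=0) / a.sum()

# A bump at the initial position of Fig. 4 decodes back to (about) that position.
x_hat = decode(bump([0.2, -0.2]))
```

The transient of Fig. 4 can then be pictured as this centroid sliding from (+0.2, –0.2) to (–0.2, +0.2) as the activity bump migrates across the map.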
If we compare the lumped implementation of module A in Fig. 2 with the distributed implementation based on a neural field whose PEs are characterized by Eqn. 3, we may say that the force field, explicitly represented by a formula in the former case, is implicitly encoded in the latter case by the lateral intra-connections that shape the relaxation of the population code.
Both neural fields of the distributed synergy formation process (the exteroceptive map A and the proprioceptive map B) are coupled by inter-connections that allow the population code of one map to act as external input for the other. This inter-connection term carries out the same role as the Jacobian matrix in module B of Fig. 2. In other words, the force field that drives the motion of the population code in map A is reflected onto map B, starting the co-evolution of the two maps, synchronized by the common gating command.
In summary, the distributed implementation of the model of Fig. 2 is characterized by a distributed set of coupled ODEs, in which the state variables are the instantaneous activity levels of the PEs of map A and of map B.
The biological plausibility of this model was tested with real data in the case of speech motor control: here the exteroceptive space is acoustic (the targets are spoken sequences) and the proprioceptive space characterizes the articulatory structure of the vocal tract, which includes tongue, jaw, lips, and larynx and thus, mechanically, has an infinite number of DoFs. However, the number of functional DoFs, or functional articulators, used by the brain is expected to be limited, although large enough to allow some redundancy in speech production. This problem was addressed by using a training set that included several thousand samples of the acoustic output of a speaker pronouncing Vowel-to-Vowel (VV) transition sequences, synchronized with a cineradiographic view of the vocal tract. The acoustic samples were represented by the first five formants of the recorded sounds (in speech science a formant is defined as a broad peak, or local maximum, in the spectrum) and were used for training an acoustic TRN, composed of 500 PEs, with a five-dimensional acoustic input vector. The digitized images of the vocal tract were analyzed by extracting 10 geometric indicators that were used for training an articulatory TRN, composed of 1000 PEs, with a ten-dimensional input vector.
After training, the analysis of the patterns of intra-connections allowed us to estimate the intrinsic dimensionality of the acoustic map in the range 3–4 and that of the articulatory map in the range 4–5. The inter-connections, obtained in the combined training of the two maps, implicitly code the functional relationship between the two manifolds and also allow the population code of one map to be fed as external input to the other: this induces a coupled acoustic-articulatory dynamics that provides a general-purpose tool for solving several sensorimotor problems in a simple and unified framework.
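The core of TRN training can be sketched as follows (a minimal reading of the Martinetz-Schulten competitive-Hebb rule; unit counts, learning rates, and schedules are our own illustrative choices, not those of the speech experiment): units move toward the inputs, and each sample creates an intra-connection between its two closest units, so that after convergence the connectivity pattern reflects the topology, and hence the intrinsic dimensionality, of the data manifold.

```python
import numpy as np

def train_trn(data, n_units=30, epochs=20, seed=0):
    """Minimal Topology-Representing Network (competitive Hebb) sketch."""
    rng = np.random.default_rng(seed)
    units = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    edges = set()
    for ep in range(epochs):
        lr = 0.3 * (0.01 / 0.3) ** (ep / (epochs - 1))   # annealed learning rate
        for x in data[rng.permutation(len(data))]:
            d = np.linalg.norm(units - x, axis=1)
            w, r = np.argsort(d)[:2]                     # winner and runner-up
            units[w] += lr * (x - units[w])              # neural-gas style update
            units[r] += 0.1 * lr * (x - units[r])
            if ep == epochs - 1:                         # competitive Hebb rule:
                edges.add((min(w, r), max(w, r)))        # link the two closest units
    return units, edges

# The intra-connection density grows with the intrinsic dimensionality of
# the data: a 1-D manifold yields a chain, a 2-D one a denser mesh.
rng = np.random.default_rng(1)
line = np.c_[np.linspace(0.0, 1.0, 400), np.zeros(400)]   # 1-D manifold in 2-D
square = rng.random((400, 2))                             # genuinely 2-D data
u1, e1 = train_trn(line)
u2, e2 = train_trn(square)
```

Counting the edges per unit in the two cases mimics, in miniature, how the intrinsic dimensionality of the acoustic and articulatory maps was estimated from their intra-connection patterns.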
Finally, the computational power of the dual-map model was demonstrated by testing its ability to generate coordinated acoustic-articulatory patterns in VV transitions compatible with the available experimental data, for example the /a/–/e/ transition. The initial conditions in the two maps were chosen by centering the two population codes according to the available data vectors and allowing the dual neural fields to stabilize. The phoneme /e/ was then given as new external input to the acoustic map at t = 0, while also activating the common gating command, thus triggering the coupled transient of the two maps.
In the previous section, it was shown that a neural architecture based on interacting TRNs is capable, in principle, of carrying out goal-oriented synergy formation processes for both covert and overt actions. Synergy formation for purposive actions is an essential kernel of cognition in the framework of the theory of Embodied Cognition, and it is in agreement with self-awareness-promoting practices such as Tai Chi: indeed, Tai Chi is defined as "meditation in motion", namely the generation of slow and smooth gesture sequences integrated with intentional and anticipatory motor imagery [86, 87].
The neuro-dynamics of the model implies a large number of modular PEs that may correspond to the mini-columns of the cerebral cortical areas, whose number in the human brain is estimated to be on the order of 10^8.
The distributed model sketched in the previous section is based on self-organizing principles operating both at the behavioral level (e.g., the circular reaction strategy) and at the local level (e.g., the competitive interactions among neighboring PEs that support global effects such as the diffusion of force fields and the consequent propagation of population codes). The point is that this kind of computational architecture is as far away as possible from the von Neumann digital machines currently used for simulating very limited prototypes of the model.
At the same time, the dissemination of electrophysiological techniques based on multi-electrode, multi-site recording, which permit the analysis of the correlation structure of cortical areas, has focused attention on cortical dynamics, suggesting that the cerebral cortex might exploit high-dimensional, non-linear dynamics for carrying out cognitive functions. The cortical connectome, with its preponderance of reciprocal connections and the rich dynamics resulting from such reciprocal interactions, is indeed ideally suited to provide an internal representation of high-dimensional manifolds emerging from the non-linear dynamics of recurrently coupled networks, on the border between chaotic and attractor dynamics. In this conceptual framework, which allows flexible and efficient computation to be performed in a distributed manner, the representation and internal simulation of purposive actions are distributed and encoded both in the discharge rate of individual PEs and in the specific temporal relations among the discharge sequences of distributed PEs in different cortical maps.
On the other hand, one may ask whether this conceptual "digital" framework, based on the firing patterns of neuronal assemblies, is sufficient to capture the short-range "analog" interactions that support Hebbian learning, at the basis of TRNs, as well as the force fields and wave-like behavior of the neural fields that implement the PMP model of synergy formation over a very large number of PEs. A possible solution, away from the von Neumann paradigm, could be offered by the large family of neuromorphic technologies, rapidly growing but still in their infancy: this new generation of computing architectures is expected to store and process large amounts of information with much lower power consumption than von Neumann architectures. Neuromorphic architectures use very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic the neuro-biological architectures of the nervous system; generally speaking, the term neuromorphic is used to describe analog, digital, and mixed-mode analog/digital VLSI systems. As a few examples, we may quote Stanford University's Neurogrid system, Intel's neuromorphic research chip Loihi, and Heidelberg University's BrainScaleS Neuromorphic Hardware System, developed in the framework of the Human Brain Project. We are still far away from a deep understanding of how to design and assemble neuromorphic computing systems with billions of artificial neural assemblies that can match, at least approximately and in a coarse-grained manner, the organizing principles of brain function: the problem is that it is still unknown how computational functions can emerge from neuromorphic network structures. Moreover, it is expected that the hybrid integration of neuromorphic technologies with other nanotechnology concepts, including photonics, can be extended down to quantum computing technologies.
Beyond the amazing potential technological breakthroughs hinted at above, it is also necessary to face a fundamental question: should the analog component of cortical dynamics, underlying the measurable digital component expressed in terms of neuronal firing patterns, be described and understood in terms of classical physics (for example, using ordinary differential equations), in terms of quantum physics, or as a combination of the two? There is no doubt that the brain obeys quantum mechanics, as does any physical system; the question is whether quantum effects are observable and measurable and whether, for specific brain functions, they are relevant or irrelevant in comparison with the account offered by classical physics.
Various general objections have been raised against any quantum brain hypothesis: even though neurons and neuronal components are small, they are still orders of magnitude too big for assuming that quantum effects may directly influence neuronal activity; moreover, the physiological features of the brain environment (warm, wet, and noisy) seem to imply the rapid destruction of any non-trivial quantum effect, such as superpositions or entanglements. In addition, major philosophical and conceptual problems surround the process of making measurements in quantum mechanics. From the engineering point of view, together with the lure of achieving asymptotic speedups, there is also the formidable challenge that quantum computations are difficult to implement, as exemplified by the fact that no scalable large quantum computer, with a size comparable to the human brain, is known so far, despite the scale of the funds employed. More specifically, two key biophysical operations underlie information processing in the brain: chemical transmission across the synaptic cleft and the generation of action potentials. Both events involve thousands of ions and neurotransmitter molecules, driven by mechanical diffusion or by electrical potentials over tens of microns, and this is likely to inhibit the emergence of any coherent quantum states. Thus, spiking neurons can only receive and send classical, rather than quantum, information. On the other hand, spiking is the 'digital' final event that occurs on top and at the end of complex 'analog' processes that may or may not include quantum effects.
In contrast to classical physics, quantum mechanics is fundamentally indeterministic, but this is true as well of non-linear dynamical systems in general. Quantum effects are small and local, and thus it seems unlikely that they can influence brain dynamics at large in a non-trivial way. However, the complex non-linear dynamics of the brain is frequently characterized by quasi-chaotic behavior, or at least it operates on the edge of stability with high sensitivity to small fluctuations: such sensitivity may amplify small, local quantum effects and help disseminate them over relatively long distances, such as across the sensorimotor maps suggested in the previous section.
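The amplification argument is easy to illustrate with a toy system (the fully chaotic logistic map, a generic stand-in for sensitive dynamics, not a brain model): a perturbation of one part in 10^12, of the order of a "microscopic" fluctuation, grows to macroscopic size within a few dozen iterations.

```python
def logistic_traj(x0, r=4.0, n=60):
    """Iterate the fully chaotic logistic map x <- r * x * (1 - x)."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_traj(0.3)
b = logistic_traj(0.3 + 1e-12)      # "microscopic" perturbation
gap = [abs(u - v) for u, v in zip(a, b)]
# Early on the gap is still of order 1e-11; late in the run the two
# trajectories have decorrelated and differ by a macroscopic amount,
# since the gap roughly doubles at every iteration (Lyapunov exponent ln 2).
```

The exponential doubling, not the size of the initial fluctuation, is what makes the microscopic-to-macroscopic transfer so fast.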
In any case, there is mounting evidence of non-trivial quantum effects in the brain, at least in the sensory domain: rhodopsin, an important protein of retinal photoreceptors, was found to exhibit quantum waves; olfactory receptors appear to exploit quantum effects (electron tunneling) for the detection of odorant molecules; and magnetoreception, which is crucial for avian navigation skills, revealed robust quantum entanglement in the cryptochromes of the retina. Moreover, quantum effects are expected to play a role in a fundamental neural function such as the opening of ion channels.
However, it is fair to say that, at present, non-trivial quantum effects have been neither detected nor hypothesized in the central nervous system in relation to general cognitive functions, except for the highly controversial quantum-consciousness hypothesis formulated by Penrose and Hameroff, based on the supposed quantum computation carried out by the tubulin components of microtubules, the filamentous protein polymers that form the cytoskeleton of cells.
In any case, there is agreement that the neurodynamics underlying motor cognition should be characterized as a non-linear system, capable of oscillatory and/or chaotic behavior [105, 106]. In such a context, small (even infinitesimal) fluctuations due to generic noise or quantum effects need not be averaged out in the large but, at least in some cases, can be amplified in a multi-scale manner. In either case, such rich dynamic effects are intrinsically indeterministic but critical from the self-organization point of view.
In summary, the intricate interplay between quantum effects and non-linear complex dynamics might be able (a) to generate new persistent quantum-chaotic patterns at a microscopic scale and (b) to amplify quantum effects up to a macroscopic scale. How exactly the indeterminacy of the complex quantum dynamics of the brain is embedded in classical neuronal mechanisms of cognitive organization remains to be investigated in depth. In the meantime, we may suggest a crucial side-effect of the possible massive exploitation of quantum effects in brain physiology: since local quantum interactions are likely to be very efficient from the energetic point of view, their exploitation/amplification through non-linear, quasi-chaotic global dynamics might be the key to the energetic efficiency of the neural control of actions in general. It is also worth mentioning that the global, quasi-chaotic dynamics should not be restricted to the brain per se, isolated from the environment, but should include body-environment interaction as well. An example, in this context, is the issue of intermittent control for the stabilization of unstable tasks, such as balancing inverted-pendulum paradigms [107, 108, 109, 110]. The intrinsic dynamics of the task involves a saddle-like instability that implies a partition of the state space of the inverted pendulum into stable and unstable areas. The intermittent control strategy consists of opening/closing the feedback loop according to the current state. The result is a quasi-chaotic oscillation (the well-known sway motion in the case of upright standing) that enhances readiness for sudden phase transitions if the task changes and provides energetic efficiency, because no muscle energy is required for about 50% of the time (i.e., when the feedback loop is open).
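The intermittent strategy can be sketched in a few lines (a linearized, noise-driven inverted pendulum with illustrative gains and thresholds; the cited studies use richer models): the feedback loop is opened whenever the state is close to the stable manifold of the saddle, letting the intrinsic dynamics carry the state toward equilibrium for free, and closed otherwise.

```python
import numpy as np

def intermittent_balance(T=20.0, dt=0.001, omega=3.0, kp=30.0, kd=8.0,
                         delta=0.05, noise=0.2, seed=0):
    """Intermittent stabilization of th'' = omega^2*th + u + noise.

    The uncontrolled saddle has a stable manifold thd = -omega*th; the
    unstable-mode coordinate p = omega*th + thd vanishes on it.  While
    |p| < delta the loop is OPEN (u = 0, zero control effort) and the
    intrinsic dynamics does the work; otherwise a PD loop is CLOSED.
    Gains and threshold are illustrative, not fitted to data.
    """
    rng = np.random.default_rng(seed)
    th, thd = 0.05, 0.0
    open_steps, peak = 0, 0.0
    n = int(T / dt)
    for _ in range(n):
        p = omega * th + thd                  # distance from the stable manifold
        if abs(p) < delta:
            u = 0.0                           # coast: let the saddle do the work
            open_steps += 1
        else:
            u = -kp * th - kd * thd           # close the PD loop
        thd += dt * (omega**2 * th + u + noise * rng.standard_normal())
        th += dt * thd
        peak = max(peak, abs(th))
    return peak, open_steps / n

peak, open_fraction = intermittent_balance()
# The pendulum sways in a bounded, quasi-chaotic manner while the loop
# stays open (no control effort) for a substantial fraction of the time.
```

The resulting bounded oscillation is the analogue of sway in upright standing: the system never settles, yet control effort is spent only intermittently.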
Quantum computing and quantum (neuro)biology are certainly linked but are logically and technologically separate. The fundamental motivation of the former is the promise of achieving asymptotic speedups in hard computational problems that are crucial for specific applications, including genomics, genetics, biochemistry, and deep phenotyping. The challenge for neurobiological modeling is to outline a hybrid framework for integrating digital information, analog information, and quantum information.
In the original spirit of cybernetics, we believe that a better and deeper understanding of the neurobiology of purposive action should have an impact on the design of robotic systems capable of similar functionality. Although this requirement has been practically ignored so far in most of industrial robotics, it is now being re-evaluated in the framework of Industry 4.0, which implies a high degree of cognitive interaction and communication between humans and robotic partners. This is the reason for (re)taking inspiration from neurobiology for designing better robotic systems, capable of cognition in purposive action and of multi-agent interaction and cooperation, revisiting the general framework outlined by Nikolai Bernstein and Norbert Wiener.
In particular, it is worth considering the relationship between the Passive Motion Paradigm, which has a central role in the modeling framework outlined in this review, and Active Inference. The issue was discussed by Friston and Parr, who emphasized that the two concepts are strongly related also in a deep philosophical sense: "The anti-symmetry between active inference and passive motion speaks to the complementary but convergent view of how we use our forward models to generate predictions of sensed movements. This view is another example of Dennett's 'strange inversion', in which motor commands no longer cause desired movements – but desired movements cause motor commands (in the form of the predicted consequences of movement)". Moreover, beyond simple goal-oriented actions, gesture representation, production, and understanding are topics of crucial interest for human-robot communication, as emphasized by recent research activities along this line [115, 116].
Among the different issues implied by such a vision there is also the energetic efficiency of the neuro-biological implementation, characterized by a hybrid integration of computing tools: the optimality of human motion is appreciated by roboticists, and the energetic "frugality" of neural computational architectures, away from the von Neumann paradigm, is emphasized by the roadmap for the development of neuromorphic systems, including the focus on spiking neural control [118, 119].
The author confirms being the sole contributor of this work and has approved it for publication.
This research was supported by internal funds of the RBCS (Robotics, Brain, and Cognitive Sciences) research unit of the Italian Institute of Technology, Genoa, Italy in the framework of the ICOG initiative (CDC22032).
The author declares no conflict of interest.
Publisher’s Note: IMR Press stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.