In attempting to comprehend how the world functions, science is compelled to explain natural events mechanistically [1]. The study of how neural systems allow us to perceive our environment has advanced quickly. Single-unit recordings served as the foundation for the majority of the early research on sensory and motor systems [2]. This provided us with a good grasp of the response characteristics of neurons, but there is likely more information that can only be obtained by studying the firing patterns of large populations of neurons [3]. We see hints of distributed representations in many brain regions and large-scale population codes when we examine hundreds of neurons recorded simultaneously [3]. One of the main topics of research in the study of nervous system function is neural coding. We need to grasp how brain circuits encode information before we can comprehend how they process it [4].
Action potentials are the main way that information is sent to, and processed by, the brain [5]. In the retina, for instance, the action potentials of ganglion cells, whose axons extend via the optic nerve and tract to the thalamus, contain all of the information about the spatiotemporal patterns of photoreceptor activity. The series of action potentials emitted by a neuron or group of neurons thus represents how information is encoded and transmitted to other neurons [6].
Action potentials are typically all-or-none, stereotyped waveforms, so all the information carried by a sequence of action potentials should be encoded in their timing. Such a code would be particularly complex because the properties of the stimulus (e.g., sound intensity or light touch) would have to be represented as functions of time. It is here that the coding model begins to break down.
Brette [7] argues that the metaphor of neural coding leads us to conceive of the brain in a way that is unnecessarily detached from its structure. The metaphor divorces the brain from its causal organization because coding variables, which relate to the temporal structure of experiments rather than to the dynamics of the system, are assigned causal significance.
According to accepted theories, the brain's neural networks perform computer-like functions [8]. The prevailing views are that (a) networks of synchronous cortical and thalamic oscillations bind information temporally, (b) patterns of neural network activity correlate with cognitive states, and (c) our cognitive skills are a unique product of neuronal computational complexity [9, 10]. Such "hard-wired" theories leave our mental life enigmatic, since it is difficult to explain how spatially distributed neuronal activity is bound into unitary objects and how thoughts and memories are coherently synthesized. Coding is invoked as a tool for understanding a hard-wired system, in particular for addressing the temporal binding problem.
We agree with Brette’s [7] contention that coding is neither necessary nor sufficient to understand cognitive events, thinking, and memory. Both algorithmic and random elements play a role in cognition, which is non-computable. Cognition therefore cannot be simulated, and explanations for subjective experience and free choice are still required [11]. According to convention, the brain’s neural networks achieve a certain level of complexity that gives rise to conscious experience, and neurons and their chemical synapses are the basic units of information [8].
In conventional thinking, the mind is computer-like in function and runs in the brain [8]. Such explanations overlook neurophysiological details that are incompatible with fitting the brain into a computational model. These details include broad apparent randomness at all levels of neural processing, including dendritic-dendritic processing, electrotonic gap junctions, glial cells (which comprise around 80% of the brain), cytoplasmic/cytoskeletal activity, and the living condition of the tissue (the brain is alive!). Additionally, emergence theory lacks testable hypotheses: consciousness "simply happens", with no set threshold or logic. Awareness of our senses and experiences, together with some capacity for volitional control or coordination, is what allows us to process cognitive events.
No matter how hard they look, neuroscientists and cognitive psychologists will never locate copies of words, images, grammar rules, or any other type of environmental stimulus in the brain, much less a duplicate of Bach’s St. Matthew Passion. The human brain is of course not empty, but it lacks most of the things people believe it to contain; not even basic items such as "memories" have a fixed storage location.
Although we have a long history of misconceiving the brain, the development of the computer in the 1940s added further confusion. For more than fifty years, psychologists [12], linguists [13], neuroscientists [8], and other specialists in human behavior have posited that the human brain works like a computer. One need only consider the brain of a newborn to see how absurd this idea is.
Because of evolution, babies of even altricial mammalian species, including humans, come into the world ready to engage with their environment. Although a baby’s vision is still developing, it can recognize its mother’s face right away and pays particular attention to faces. It can discriminate between several fundamental speech sounds and favors spoken sounds over non-speech sounds. Without a doubt, social connection-making is in our nature [14].
In addition, a healthy infant has over a dozen reflexes that are critical to its survival: pre-programmed responses to specific stimuli. When something brushes across its cheek, it turns its head towards it, and it sucks whatever enters its mouth. When submerged in water, it holds its breath. It grasps objects with such vigor that it can almost sustain its own weight. Most importantly, infants come equipped with powerful learning mechanisms that enable them to adapt quickly and become more adept at interacting with their environment, even if that environment differs greatly from the one their distant ancestors lived in [14].
We begin with senses, reflexes, and learning mechanisms; born without any of these abilities, we would certainly struggle to survive. But here is what we do not inherit at birth: representations, lexicons, algorithms, models, buffers, symbols, memories, images, knowledge, processors, encoders, programs, decoders, and subroutines, the components that enable digital computers to behave semi-intelligently. Not only are we not born with such things; we never develop them.
Neither words nor the rules governing their manipulation are stored in our minds. Visual stimuli are not mentally formed, stored in a short-term memory bank, and then transferred to a long-term memory store. We do not retrieve data, pictures, or text from memory registers. Computers do all of these things; organisms do none of them.
Computers process information in a literal sense: letters, numbers, images, words, and formulas. First, the data must be encoded into a format computers can use, which entails organizing patterns of ones and zeros ("bits") into manageable chunks ("bytes"). On a computer, each byte is made up of eight bits. The letters "r", "a", and "p" are each represented by a different pattern of bits, and those three bytes placed side by side make the word "rap". A million such bytes (one megabyte) might together represent a single image, such as the picture of my grandson Jack on my desktop, preceded by a special character that instructs the computer to expect an image rather than a word.
Computers move these patterns about in various physical storage spaces etched into electronic components. They retain internal rules that govern the copying and movement of these data arrays. A combined set of such rules is referred to as a "program" or "algorithm". An "application", as most people now call it, is a collection of algorithms that cooperate to help us do something.
Computers work with symbolic representations of the external environment. They really do store and retrieve data, really do process it, and really do possess physical memories. Computers are guided by algorithms in everything they do. Humans, by contrast, do not and never have been. In light of this, why do so many researchers discuss cognition and mental life as though they were generated by a computer system?
The information processing (IP) metaphor is problematic, even though it seems difficult, if not impossible, to explain intelligent human behavior without it. It distorts our thinking by using words and concepts that obscure what we are trying to understand. The IP paradigm is illogical: it rests on a flawed syllogism, with two plausible premises and an incorrect conclusion. The first reasonable premise is that all computers are capable of intelligent behavior. The second reasonable premise is that all computers are information processors. The erroneous conclusion is that all entities capable of intelligent behavior are information processors.
Technical jargon aside, the notion that people must process information simply because computers do is absurd. When the IP metaphor is eventually abandoned, historians will almost surely regard it the way we now regard the mechanical and hydraulic metaphors: as absurd. Let us lay the metaphor to rest and get on with observing and measuring phenomena.
GL—Writing, Conceptualization, Original draft.
Not applicable.
Not applicable.
This research received no external funding.
The author declares no conflict of interest. Gerry Leisman serves as a member of the Editorial Board of this journal. He had no involvement in the peer review of this article and no access to information regarding its peer review. Full responsibility for the editorial process for this article was delegated to Gernot Riedel.
Publisher’s Note: IMR Press stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
