A currently discussed hypothesis is that the brain functions as a predictor of future actions in the environment based on past experience ([1, 2, 3, 4, 5]; this idea has a long history [6, 7, 8, 9, 10, 11]). This predictive function is made possible by the adapted activity of a huge number of neurons.
It is well known that brain neurons have complex connectivity of two types: some neurons are mostly locally interconnected, while others additionally have wide-spread connections, a principle known as “small-world” organization [12]. The latter neurons belong to the so-called “rich club” forming a global hub that interconnects diverse parts of the whole brain [13, 14]. Such organization is not limited to morphological connections between neurons but is also revealed in functional connectivity. Functional connectivity, related to the ability to integrate different parts of the brain into a whole, is often analyzed in terms of information theory [15, 16] and, in particular, by the integrated information theory (IIT) proposed by Tononi and his colleagues [17, 18, 19, 20, 21, 22], which quantifies the advantage of cooperative work of brain parts. The integrated information coefficient Φ might be considered as a metric capturing an interior view of a system (“what the system is”) instead of describing external characteristics (“what the system does”) [23]. IIT has been successfully applied to many complicated non-biological systems as a metric of success [24, 25], but it has not been used to analyze the dynamics of neuronal activity in the brain during learning. A resembling approach applied to functional magnetic resonance imaging data in humans showed an increase of activity correlations between brain regions, in contrast with correlations inside regions, during performance of different tasks.
The integrated information coefficient Φ is calculated using the definition given by its authors:

Φ[X; τ, P] = Σ_{k=1}^{r} H(M_t^k | M_{t−τ}^k) − H(X_t | X_{t−τ}),   (1)

where H(A|B) is the conditional entropy of variable A given knowledge of variable B; X_t and X_{t−τ} are the system state vectors at time moments t and t − τ (see below); M_t^k and M_{t−τ}^k are the state vectors of the k-th system subset at time moments t and t − τ; and r is the number of subsets. Time here is discrete: it is a sequence of system states sampled at equal time intervals. The partition into subsets, the so-called minimum information partition (MIP), is performed in such a way that the subsets are maximally independent, i.e., the reduction of information after splitting the system must be minimal.
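As a concrete illustration, definition (1) can be estimated empirically for a small binary system by counting state transitions at lag τ. The sketch below is ours, not the authors' code: `phi_empirical` and `cond_entropy` are hypothetical names, and the conditional entropies are taken from the empirical joint distribution of present and lagged states.

```python
import numpy as np
from collections import Counter

def cond_entropy(samples):
    """H(A|B) in bits from a list of (a, b) sample pairs."""
    joint = Counter(samples)
    marg_b = Counter(b for _, b in samples)
    n = len(samples)
    h = 0.0
    for (a, b), c in joint.items():
        # -p(a,b) * log2 p(a|b), with p(a|b) = c / count(b)
        h -= (c / n) * np.log2(c / marg_b[b])
    return h

def phi_empirical(data, partition, tau=1):
    """Definition (1): sum of subset conditional entropies minus the
    whole-system conditional entropy.

    data: (T, N) binary array; partition: list of index lists covering
    all columns (the candidate partition P)."""
    def ce(idx):
        pres = [tuple(r) for r in data[tau:, idx]]
        past = [tuple(r) for r in data[:-tau, idx]]
        return cond_entropy(list(zip(pres, past)))
    whole = ce(list(range(data.shape[1])))
    return sum(ce(list(p)) for p in partition) - whole
```

The MIP is then the partition minimizing this value over all candidate partitions; by the standard information inequalities the result is non-negative for any empirical distribution.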
This approach has some limitations in application to brain neurons. First, due to the computational complexity, the coefficient Φ cannot be precisely estimated for large neuron groups, and it is debatable whether the approximate value reflects the functional connectivity of a brain. Second, when the state vectors are considered, information may be lost while converting the complex dynamics of spikes and field potentials into system state vectors. Finally, this approach does not consider systems with memory, handling information processing only for a single time step [19, 27]. Thus, it is necessary to check whether the coefficient Φ describes the behavior of a brain.
In this work we used the information integration coefficient Φ to quantify changes in neuronal activity during learning. Φ was estimated based on two sets of neuronal activity data from the hippocampus and amygdala in rats acquiring a spatial aversive task in a linear maze. We showed that Φ for the two structures increased as learning progressed.
2. Materials and Methods
2.1 Calculation of Φ
Φ was calculated with the autoregressive algorithm (Φ_AR). This approach was developed for performing calculations on time-series data, because the estimation of entropies in definition (1) requires estimating the whole distribution of the data, i.e., gathering statistics about the states of a system of neurons. For real time-series data, this distribution is often underestimated, which leads to instability of Φ. To build binary time-series data for the calculations, the timeline was divided into equal bins. If there was a spike in a bin, the bin has value 1; otherwise it has value 0. The bin size is chosen as a compromise, in order to keep all patterns of neuron spiking while avoiding excessive information. Such a binning method has been used for entropy calculations before.
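The binning step described above can be sketched as follows; `bin_spikes` is a hypothetical helper name of ours, marking each bin 1 if the neuron fired at least once in it.

```python
import numpy as np

def bin_spikes(spike_times, t_start, t_end, bin_size=0.2):
    """Convert spike times (seconds) into a binary series:
    1 if the neuron fired at least once in the bin, 0 otherwise."""
    edges = np.arange(t_start, t_end + bin_size, bin_size)
    counts, _ = np.histogram(spike_times, bins=edges)
    return (counts > 0).astype(int)
```

For example, spikes at 0.05, 0.31, 0.35 and 0.95 s over a 1-second window with 0.2-s bins yield the series 1, 1, 0, 0, 1.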
This algorithm has a parameter τ, which determines how many sequential states are considered (for how many steps the information is generated). This is a temporal scale for the system: it shows the degree to which information in a network predicts a future network state given an earlier state separated in time by a lag τ. The value of τ was determined experimentally by calculating Φ with different values of τ and finding the τ which gives the maximal average value of Φ over all learning sessions.
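The lag selection just described amounts to a simple grid search. The sketch below assumes an arbitrary Φ estimator passed in as a callable (`phi_est`); the function name `select_tau` is ours.

```python
import numpy as np

def select_tau(sessions, phi_est, taus):
    """Return the lag tau that maximizes the average value of a Phi
    estimator across all learning sessions.

    sessions: iterable of per-session data objects;
    phi_est:  callable phi_est(session, tau) -> float."""
    averages = {tau: np.mean([phi_est(s, tau) for s in sessions])
                for tau in taus}
    return max(averages, key=averages.get)
```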
The classical definition (1) cannot be used directly for the calculation of Φ. It is based on finding the partition minimizing Φ, which can only be found by a brute-force search. The number of possible ways to partition a system of N elements is the Bell number B_N. As the system size N increases, this number grows super-exponentially. Thus, another calculation
method, the so-called autoregressive Φ (Φ_AR), was used:

Φ_AR[X; τ] = (1/2) log( det Σ(X_t) / det Σ(E) ) − Σ_{k=1,2} (1/2) log( det Σ(M_t^k) / det Σ(E^k) ),   (2)

where Σ(A) is the covariance matrix of variable A; E is the vector of autoregression residuals (errors of the regression that predicts the value of X at time moment t based on the value of X at time moment t − τ); M^1 and M^2 are the two system subsets, with E^1 and E^2 the corresponding subset residuals; and n_k is the size of subset M^k, used in the normalization of candidate partitions. The subsets are selected so as to minimize the resulting Φ_AR value. Because only two subsets are used in this method, the number of candidate partitions is given by a Stirling number of the second kind and grows only exponentially. But for a large number of neurons the calculation still cannot be completed in a reasonable time, so an approximation was applied.
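The combinatorial gap between the two searches can be checked directly: B(N) counts all partitions of N elements (full MIP search), while the Stirling number S(N, 2) = 2^(N−1) − 1 counts only bipartitions. The helper names below are ours.

```python
def bell(n):
    """Bell number B(n): number of partitions of an n-element set,
    computed via the Bell triangle."""
    row = [1]
    for _ in range(n - 1):
        nxt = [row[-1]]          # each row starts with the previous row's last entry
        for v in row:
            nxt.append(nxt[-1] + v)
        row = nxt
    return row[-1]

def bipartitions(n):
    """Stirling number S(n, 2): ways to split n elements into two
    non-empty subsets."""
    return 2 ** (n - 1) - 1
```

For 15 neurons, B(15) is about 1.4 billion candidate partitions, while the bipartition search examines only 16 383.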
Experimentally, we found that 15 neurons is the limit beyond which the computations become intractable. To perform the calculations, at each session we selected the 15 neurons with the highest variance of activity. In previous works, other approximations based on simplified partitions were used.
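Under a Gaussian approximation, the autoregressive estimate (2) reduces to linear regressions and covariance determinants. The sketch below is ours, not the original MATLAB code: it searches all bipartitions of a small group, and it omits the partition-size normalization mentioned above.

```python
import numpy as np
from itertools import combinations

def pred_info(data, idx, tau):
    """0.5 * log(det cov(present) / det cov(residuals)) for columns idx:
    how well the lagged past of a (sub)system linearly predicts its present."""
    x = data[:, idx] - data[:, idx].mean(axis=0)
    past, pres = x[:-tau], x[tau:]
    coef, *_ = np.linalg.lstsq(past, pres, rcond=None)  # autoregression
    resid = pres - past @ coef
    det = lambda m: np.linalg.det(np.atleast_2d(np.cov(m, rowvar=False)))
    return 0.5 * np.log(det(pres) / det(resid))

def phi_ar(data, tau=1):
    """Minimum over bipartitions of: whole-system predictive information
    minus the sum of the two parts' predictive information."""
    n = data.shape[1]
    whole = pred_info(data, list(range(n)), tau)
    best = np.inf
    for size in range(1, n // 2 + 1):
        for part in combinations(range(n), size):
            rest = [i for i in range(n) if i not in part]
            best = min(best, whole - pred_info(data, list(part), tau)
                                   - pred_info(data, rest, tau))
    return best
```

With ~15 binary-binned neurons, the 2^14 − 1 bipartitions remain enumerable; beyond that the search becomes the bottleneck, matching the limit reported above.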
In some cases, the algorithm could not return a value. It is known that this happens when some neurons have little or no variance in their activities, i.e., their bins are almost always on or almost always off. In such a case, the neuron with the least variance was excluded from the analysis and the calculation was performed again, and so on. If the algorithm was unable to calculate Φ on any set of neurons in some period, this period was excluded from the statistical analysis. Each obtained Φ value was normalized by the number of neurons used in the calculation, to make the obtained values comparable and to investigate their dynamics through each animal's learning sessions.
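The preprocessing described in this subsection (keeping high-variance neurons, dropping near-constant ones, normalizing Φ by group size) can be sketched as follows; the function names and the variance threshold are our assumptions.

```python
import numpy as np

def select_neurons(binary, max_n=15, min_var=1e-6):
    """Keep up to max_n highest-variance neurons, dropping near-constant
    ones (bins almost always 0 or almost always 1)."""
    var = binary.var(axis=0)
    keep = [i for i in np.argsort(var)[::-1] if var[i] > min_var][:max_n]
    return binary[:, keep], keep

def normalized_phi(phi_value, n_neurons):
    """Normalize Phi by the number of neurons used, so values obtained
    with different group sizes remain comparable."""
    return phi_value / n_neurons
```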
To achieve a higher time resolution, each session was divided into 8 equal periods, and the information integration coefficient Φ was also calculated for each period. Calculations were performed in MATLAB. Neural and behavioral data manipulations were performed with the FMAToolbox open-source library.
2.2 Behavioral Paradigm
We used an open dataset [34, 35] in order to analyze behavior and neuronal activity during learning, so details of the experimental design may be found in the original publication. Animal experiments were approved by the Institutional Animal Care and Use Committee (IACUC) at New York University Medical Center, as mentioned in the original manuscript. In short, male Long-Evans rats (N = 3, 300 g, 3 months) were initially trained on a simple spatial task in a linear track. During behavior sessions the animals ran from one end of the track to the other to get water from a water well. After these shaping (initial training) sessions, microelectrode arrays were implanted into the amygdala and the dorsal hippocampus. After a recovery period of 5 days, the animals started learning the second procedure. Two air-puff sites were located at equal distances from the middle of the track, and an air puff was applied at one of the two sites when the rat moved in a particular direction. The direction and the site varied pseudorandomly across trials.
2.3 Data Analysis
Along with the information integration coefficient Φ, we quantified learning success for every period of each session. Learning success was assessed as the number of rewards in a period divided by the length of the period. Correlations between the two variables were calculated using Spearman's correlation; the significance of Spearman's r was assessed using Student's t test. The Bonferroni correction for multiple comparisons was applied.
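The statistical pipeline just described can be sketched with plain rank arithmetic; these helper names are ours, and the p-value itself would come from the t distribution with n − 2 degrees of freedom.

```python
import numpy as np

def spearman_r(x, y):
    """Spearman rank correlation (average ranks for ties)."""
    def ranks(a):
        a = np.asarray(a, dtype=float)
        order = np.argsort(a)
        r = np.empty(len(a))
        r[order] = np.arange(1, len(a) + 1)
        for v in np.unique(a):          # average the ranks of tied values
            r[a == v] = r[a == v].mean()
        return r
    rx, ry = ranks(x), ranks(y)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

def t_statistic(r, n):
    """Student's t for testing Spearman's r with n samples (|r| < 1)."""
    return r * np.sqrt((n - 2) / (1 - r ** 2))

def bonferroni(p, m):
    """Adjust a single p-value for m comparisons."""
    return min(1.0, p * m)
```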
3. Results
Two animals acquired the complete task and showed learning progress; the third did not exhibit significant progress, and its performance decreased at the beginning of the second half of the learning process. At most, each animal performed about 100–150 rewarded trials per day (session). The number of learning days differed between rats but was sufficient to estimate the presence or absence of progress.
The integrated information coefficient Φ was calculated for each day and each period. The bin size was set to 0.2 seconds (see Materials and Methods section). The characteristic temporal scales τ for all animals and brain regions were determined (Table 1).
Table 1. Number of steps τ for information generation used to calculate the integrated information coefficient Φ for each animal and brain region.
We found a positive correlation between the integrated information coefficient Φ for the neural activity in the studied brain areas and the number of rewards obtained by the animals. Such positive correlations were evident in both the hippocampus (r = 0.3792, p = 0.0001, ***) and the amygdala (r = 0.3592, p = 0.0001, ***) (Fig. 1).
Fig. 1. Integrated information coefficient Φ for all animals in the hippocampus and amygdala vs. number of rewards in the corresponding periods. X axis: Φ, Y axis: relative number of rewards; each point represents a 1/8-length period of a session. Left: Φ for hippocampal neurons; right: Φ for amygdala neurons. Statistical significance is given taking into account the Bonferroni correction.
We reasoned that those behavioral adaptations were a function of whole-brain activity and estimated the mutual relationships between the two brain areas by calculating correlations between the integrated information coefficient Φ for neuronal activity in the hippocampus and the integrated information coefficient Φ for neuronal activity in the amygdala. Φ in the hippocampus and the amygdala were significantly correlated for the two animals that exhibited a steady learning progress: Spearman r = 0.4299, p = 0.0001 for the first animal, and Spearman r = 0.4206, p = 0.0001 for the second animal (Fig. 2).
Fig. 2. Learning progress and integrated information coefficient Φ for each animal in the hippocampus and amygdala. Top: learning progress of the animals. X axis: day of learning, Y axis: number of rewards. Bottom: correlation of Φ values for the hippocampus and amygdala in the periods. X axis: Φ for neural activity in the hippocampus, Y axis: Φ for neural activity in the amygdala; each point represents a 1/8-length period of a session. Statistical levels are calculated taking into account the Bonferroni correction for multiple comparisons.
We also found that Φ for the hippocampal neurons correlated with Φ for the amygdala neurons on the day the maximal number of rewards occurred (maximal learning progression) and on the day before, but not on the day after it (r = 0.7678, p < 0.0001, ****; r = 0.5520, p < 0.0001, ****; r = 0.1343, p = 0.2608, ns; correspondingly) (Fig. 3). The maximal learning progression occurred on the 10th day for the first rat, on the 10th day for the second rat, and on the 12th day for the third rat. We also computed the correlation between Φ and the number of rewards on the days before and after this point, separately. Φ for the amygdala and the hippocampus neurons was significantly correlated with the relative number of rewards only before the maximal progression day, but not after it. For the hippocampus, the values were Spearman r = 0.4268, p = 0.0001 (before) and Spearman r = 0.09521, p = 0.4263 (after). For the amygdala, the values were Spearman r = 0.3933, p = 0.0001 (before) and Spearman r = 0.2372, p = 0.0118 (after).
Fig. 3. Integrated information coefficient Φ for all animals in the hippocampus and amygdala around the day of maximal learning progression. Top row: days before the day of maximal learning progression and that day itself; bottom row: days after the day of maximal learning progression. X axis: Φ for neural activity in the hippocampus, Y axis: Φ for neural activity in the amygdala; each point represents a 1/8-length period of a session. Statistical levels are calculated taking into account the Bonferroni correction for multiple comparisons. Statistical significance was found only for the days before the day of maximal learning progression and that day itself.
4. Discussion
Our results suggest that the concept of integrated information, as formalized by the integrated information coefficient Φ, can be usefully applied to assess learning progress. Unlike conventional assessment tools based on observable values, which are limited by being environment-specific and task-specific, the integrated information measure might be used as a predictor of near-future success in learning or as an indicator of optimal brain functioning during adaptation. On the way to the desired outcome, a particular neuronal pattern is selected and fixated. Given that the integrated information metric was originally proposed for consciousness level assessment [21, 36, 37], these results open the question of what exactly is assessed by this measure.
The metric of integrated information may also be used for establishing interactions among various brain regions during learning. Similar estimations were quantified as an “information flow” earlier [15, 16, 38]. In this study, we showed a degree of joint regional contribution underlying behavioral adaptations to environmental demands. The behavioral paradigm applied in this study consists of two parts: a simple spatial task in the linear track, mostly based on hippocampal place neurons, and a spatial aversive task, presumably subserved by the amygdala neurons. Our results suggest that neurons of both regions participate in the integration needed for successful learning of the second skill. These results are also in accordance with reconsolidation research findings indicating that any learning is based on earlier acquired experience, which is required to reconsolidate again [39, 40, 41]. The integrated information approach opens a new way to investigate reconsolidation processes and suggests a possibility to find out whether reconsolidation processes are regionally restricted to particular brain zones under any given learning.
Another possibility is to assess functional connectivity using this approach. As we showed, the correlated complexity of activity varied through learning progress and dropped on the day after the maximal progression, which might be related to consolidation/reconsolidation during sleep [34, 42].
In this study we proposed a new approximation for the calculation of Φ. As mentioned in the Materials and Methods section, we used an approximation preserving the algorithm of selection of a minimum information partition, but sacrificing the coverage of all neurons. We supposed that if neurons have a multi-level organization, then important structural changes can be detected even in small groups of neurons, and these changes can approximately reflect changes of a whole anatomical region. This approximation successfully demonstrated a correlation with a classical success metric over 48 days of learning. Thus, the 15-neuron approximation is acceptable for performing calculations of Φ. More investigations are required to compare Φ calculated with the proposed approximation and Φ calculated with known approximations. It is necessary to note that, given technical advances in large neuronal population recording, it will be a challenging task to track the evolution of Φ for a bigger number of neurons in the same way.
Author Contributions
OS conceived and designed the experiments; IN performed the calculations and analyzed the data; IN and OS wrote the paper.
Ethics Approval and Consent to Participate
For this paper, an open dataset was used. Animal experiments were conducted in
Gyorgy Buzsaki lab and approved by the Institutional Animal Care and Use
Committee (IACUC) at New York University Medical Center.
Acknowledgment
Funding support from RFBR (Project No 20-34-90023) is greatly appreciated.
Funding
This study was supported by a grant from the Russian Foundation for Basic Research (Project No 20-34-90023).
Conflict of Interest
The authors declare no conflict of interest.
Publisher’s Note: IMR Press stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.