1 Introduction

1.1 The Hippocampal Formation

The hippocampus is a brain region that belongs to the archicortex, a cortical tissue with four or five layers, instead of the more typical six layers of the neocortex. Mammals have two hippocampi, one on each side of the brain, and each hippocampus appears as a curved structure inside the temporal lobe. In this chapter, we will discuss methods to reconstruct the rodent hippocampus in a computer model. Since the architecture of the hippocampus is largely preserved across mammals, some of the insights may generalize beyond the rodent.

In rodents, the hippocampus appears as a prominent structure just below the neocortex. When we say hippocampus, we refer to four subregions: dentate gyrus (DG) and cornu ammonis 1, 2, and 3 (CA1, CA2, and CA3). Some authors use the term hippocampus proper to refer to CA1, CA2, and CA3 only. Finally, the term hippocampal formation also includes the subiculum, presubiculum, parasubiculum, and entorhinal cortex.

The hippocampus plays an important role in several cognitive functions, such as learning and memory (Jarrard 1993) and spatial navigation (O’Keefe and Nadel 1978). The hippocampus is also implicated in several pathologies. For example, in Alzheimer’s disease, the hippocampus seems to be affected in early stages, before the disease spreads to the entire brain. In epilepsy, the temporal lobe is often the focus of seizures, since the hippocampal formation needs considerably less current to elicit epileptiform activity than other cortical areas. Additionally, the hippocampus, in particular CA1, is highly vulnerable to ischemic or hypoxic insults, making this region critical in cerebrovascular diseases.

The hippocampus has facilitated many discoveries due to its particular structure and properties. First of all, it has a relatively simple and ordered structure, with four layers, in which excitatory cells populate only one layer. The different hippocampal fields are connected almost unidirectionally, and long-range fibers travel orthogonally to the main dendritic axes of pyramidal cells. Furthermore, the synapses are highly plastic, changing their strength in response to pre- and postsynaptic cell behavior. Finally, neurons can be grown in culture, and acute or cultured slices survive in vitro for a sufficiently long time to be used in experiments. All these properties make the hippocampus a convenient benchmark for understanding general principles of the brain. Key discoveries that benefited from experiments on the hippocampus include the characterization of excitatory and inhibitory synapses (Kandel et al. 1961; Hamlyn 1963; Blackstad and Flood 1963; Andersen et al. 1964a, b, 1966a, b; Curtis et al. 1970), the discovery of long-term plasticity (Bliss and Lømo 1970, 1973), and the study of oscillations and their behavioral correlates (Buzsáki 2005).

This interest in the hippocampus has generated a wealth of data that grows daily. While this is undoubtedly positive, it is clear that more data do not necessarily bring more knowledge. Data are always sparse, heterogeneous, and conflicting, and strategies are necessary to convert all of this into a better understanding of the hippocampus. Computer models can help to accelerate this process. Recent seminal works (Ecker et al. 2020; Markram et al. 2015; Bezaire et al. 2016; to cite the most pertinent references) showed how it is possible to reconstruct a brain region even though the available data are incomplete and heterogeneous. This supports the idea that a faithful reconstruction of the hippocampus in a computer model, though challenging, is ultimately feasible.

1.2 Principles for Building a Computer Model of the Hippocampus

We can build a model for different purposes, and our choice will affect the modeling approach. Here, we present a model that targets two main goals. First, the model should integrate the available data on the hippocampus to provide a meaningful snapshot of what we know. Second, the model should allow us to study a variety of phenomena and not be restricted to any particular hypothesis. In reconstructing the neocortical microcircuitry, Markram et al. (2015) developed a computational reconstruction process that produces neuroanatomically detailed models from biological first principles and that can be generalized to other brain regions. The previous chapter of this book, “Computational Concepts for Reconstruction and Simulation of Brain Tissue,” presents the underlying approach and serves as a prerequisite for the present chapter, which applies this process to the hippocampus.

The reconstruction of the neocortical microcircuitry in Markram et al. (2015) is the result of a model building process that integrates data at different scales and of different modalities. The process is data-driven, without preconceptions about the particular hypotheses one may want to test. The model includes elements for which one can find sufficient experimental constraints (e.g., single cell reconstructions and electrophysiological recordings, analyses of connectivity, pair recordings), representing a starting point for further refinements and the integration of new data.

It is useful to consider a brain tissue model as a compound model of different building blocks: morphologies, ion channels, single cell electrical models, connections, synapses, and volume (Fig. 11.1). For a full reference on these components, see Markram et al. (2015) and Chap. 10. In the present chapter, we follow this structure and take into account the particularities of the hippocampus in terms of data availability, functions, and specific challenges. As described in Sect. 10.5: Validation, once we have built a model of a component, we validate it before integrating it into the compound model. This process offers an alternative to the conventional method of “hand-tuning” the parameters of the model or of a building block to match the emergent properties at higher scales. With this method, the failure to capture an emergent behavior instead triggers the modeler to re-examine the input data and model assumptions. While such a systematic approach can be more time-consuming than hand-tuning, it proves more reproducible and extensible and provides more insight into the causal relationship between the building blocks and the emergent behavior of the brain tissue model.

Fig. 11.1 Circuit building workflow. The number in parentheses shows the section number in this chapter where the topic is discussed

This chapter describes methods that can be applied to any of the hippocampal subregions in different species. We will also examine some concrete examples, in particular the adult rat CA1 (Romani et al., in preparation; hippocampushub.eu), for which ample data are available.

2 Morphologies

In this section, we discuss the different cell types to include in a computational model of the hippocampus, beginning with the different morphological types (m-types). There are several classes of morphologies in CA1, but there is no single, universally accepted classification (for a systematic census of morphologies, visit hippocampome.org). First of all, there are several methods to classify cells, and they do not always arrive at the same conclusions. Each cell is potentially different from all the others, and different classifications recognize different patterns, thereby defining different classes. Furthermore, the techniques used to classify cells have evolved over time, and new techniques appear regularly in the toolbox of the anatomists. Together with an increasingly better understanding of the brain, this leads to a continuous revision of the classifications. As a result, the same morphology can be identified in many different ways, or the same name can identify different cell types (Petilla Interneuron Nomenclature Group et al. 2008).

A good starting point is the set of cell types that are well established and characterized by strong experimental data. At least for CA1, several reviews (Bezaire and Soltesz 2013; Klausberger and Somogyi 2008) and public resources (neuromorpho.org, hippocampome.org) help us to identify this core set of morphologies (Table 11.1).

Table 11.1 Core set of cell types. Numbers of reconstructed and identified cells found in neuromorpho.org with three different filter options: hippocampus (+), rat hippocampus CA1 (++), rat hippocampus CA1 and complete 3D neurites (+++)

The core set of cell types found in the hippocampus is presented in Table 11.1. What the table does not show is that the somata of these cell types can be found in different layers. Neurons of the same type show visible differences in their morphology if their somata lie in different layers, even if axons and dendrites preserve a similar distribution across layers. For this reason, we classify cells that belong to the same class but have different soma locations as different morphological types. A useful convention, used in Markram et al. (2015), is to prefix the cell type acronym with the acronym of the layer that hosts the soma. The hippocampal strata are clearly defined layers arranged along the depth axis: stratum pyramidale (SP), stratum oriens (SO), stratum radiatum (SR), and stratum lacunosum moleculare (SLM). For example, we can identify cell types by their soma locations with abbreviations such as SP_AA, SO_Tri, SR_SCA, and SLM_PPA.

So far, we have treated CA1 as a uniform region. In reality, many model parameters change along the three axes of the hippocampus: the longitudinal, transverse, and radial axes. Restricting the discussion to morphological features, we already mentioned how the morphology varies when the soma is located in different layers (i.e., along the radial axis). Cells can show morphological differences depending on their exact location even within the same layer. For example, we can distinguish deep and superficial pyramidal cells, whose somata lie, respectively, at the bottom or the top of stratum pyramidale. Deep pyramidal cells (bursting or early bursting) have more extensive tuft dendrites, while superficial ones (non-bursting or late bursting) have more extensive basal dendrites (Graves et al. 2012). Along the longitudinal axis of the hippocampus, we can also observe differences in the morphology of pyramidal cells (Mizuseki et al. 2011; Lee et al. 2014; Masurkar et al. 2017). At first glance, PCs along the transverse axis seem quite homogeneous, but this apparent homogeneity masks a diversity of PCs in terms of connectivity, properties, and functions. Lorente de Nó (1934) already divided CA1 into “a, b, and c” subfields on the basis of the different connectivity of pyramidal cells. Newer studies have revealed additional differences in both the anatomy and physiology of pyramidal cells along this axis (Igarashi et al. 2014).

Differences within the hippocampus emerge not only at the level of morphology, but also at the level of the physiological properties of the cells, connectivity, cell density, and so on. This high heterogeneity supports the idea that the hippocampus processes different types of inputs, possibly in parallel (Andersen et al. 1969, 2000; Danielson et al. 2016; Deguchi et al. 2011; Geiller et al. 2017; Sloviter and Lømo 2012). While we will not take this inhomogeneity into account, for the sake of simplicity, the reader should keep it in mind because it may have profound implications for how the hippocampus works in the real brain.

After identifying the cell types to consider, we have to collect their morphological reconstructions. Public resources contain a large number of reconstructions we can potentially use. However, not all the available reconstructions share the same quality. The optimal dataset should match the target species (rat), age (adult), and region (CA1), and include a classification, a 3D morphology, the full dendritic arbor, and, when possible, the full axonal arbor. Unfortunately, the number of publicly available reconstructions that meet the above criteria is lamentably small. In Table 11.1, we show the result of a search in neuromorpho.org and the number of available morphologies that match our quality criteria partially (1173 cells) or completely (269 cells).

Before it can be used for modeling, any neuron reconstruction needs first to be checked carefully to identify and fix reconstruction errors (Donohue and Ascoli 2011; Winnubst et al. 2019) that can affect the building of models.

A set of curated high-quality reconstructions in Neurolucida ASCII format is available in the “Live Papers” section under Resources/Morphologies/View of the Brain Simulation Platform (https://humanbrainproject.github.io/hbp-bsp-live-papers/2018/migliore_et_al_2018/migliore_et_al_2018.html).

3 Ion Channels

Hippocampal neurons express a variety of ion channels whose particular distributions and densities define each cell’s electrical behavior. Since precise information about the types of ion channels expressed by a particular cell type is not available, even for the well-characterized ones, we have to assume which channels to include. While it would be desirable to model neurons with genetically identified ion channels (Ranjan et al. 2019) (channelpedia.epfl.ch), and this approach may become possible in the near future, it is not yet feasible at the time of this writing. A more pragmatic approach is to define a set of currents that can reproduce the diversity of the firing patterns of our chosen hippocampal cell types. The Hodgkin–Huxley formalism (Hodgkin and Huxley 1952) has been widely used to build phenomenological models of currents; it offers flexibility and efficiency, making it suitable for large-scale networks of multi-compartmental neuron models (a minimal sketch of the formalism follows the list below).

Considering the firing patterns of rat CA1 cells (see Sect. 11.4.1 and Fig. 11.2), we can restrict the ion currents to the following ones:

  • Sodium (Na) current and potassium delayed-rectifier (KDR) current which are ubiquitous in neurons and are needed to support action potential generation;

  • Type A potassium current (KA) and hyperpolarization-activated current (Ih) which are major players in dendritic integration;

  • Type M potassium current (KM) which is responsible for spike adaptation;

  • Type D potassium current (KD) which is responsible for delayed firing and inverse adaptation (seen in a few types of interneurons);

  • Three calcium currents that cover the range of kinetics observed for voltage-dependent calcium channels (one fast and transient, one long-lasting, and one non-inactivating) (CaT, CaL, CaN);

  • A calcium pump that ensures that calcium entering through channels is extruded;

  • Two calcium-dependent potassium currents (one of them also voltage-dependent) (KCa and Cagk) that concur in generating a strong adaptation.
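To make the formalism concrete, here is a minimal sketch of a generic Hodgkin–Huxley-type current with a single first-order activation gate. The half-activation voltage, slope, time constant, peak conductance, and reversal potential are illustrative placeholders, not fitted hippocampal values; production models implement such currents as NEURON MOD files.

```python
import numpy as np

def gate_inf(v, v_half=-30.0, k=9.0):
    """Steady-state activation of a Hodgkin-Huxley gate (Boltzmann curve)."""
    return 1.0 / (1.0 + np.exp(-(v - v_half) / k))

def hh_current_step(v, m, dt, gbar=0.01, e_rev=-77.0, tau_m=5.0):
    """Advance the gate by one time step and return the ionic current.

    Implements I = gbar * m * (v - e_rev) with dm/dt = (m_inf(v) - m) / tau_m.
    All kinetic parameters are placeholders for illustration.
    """
    m = m + dt * (gate_inf(v) - m) / tau_m   # first-order gate relaxation
    return gbar * m * (v - e_rev), m

# Example: gate relaxing at a fixed depolarized voltage
m, v = 0.0, -20.0
for _ in range(100):
    i_ion, m = hh_current_step(v, m, dt=0.1)
```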

Fig. 11.2 Morpho-electrical composition. Firing patterns (electrical type or e-type) shown by the different morphologies (morphological type or m-type). Pyramidal cells: cACpyr classical accommodating. Interneurons: cAC classical accommodating, bAC bursting accommodating, cNAC classical non-accommodating

We can refit models of ion channels or take advantage of the large number of models publicly available (see, for example, the public ModelDB repository https://senselab.med.yale.edu/modeldb/). Nonetheless, the richness of available models is not always positive. Researchers have built several versions of the same currents or modified existing models, constraining them against different sets of experiments and making different assumptions that are not always explicit and documented. This forest of models can be appreciated by comparing their provenances (see the Ion Channel Genealogy website at icg.neurotheory.ox.ac.uk) (Podlaski et al. 2017). To take full advantage of the many models already available, we have to spend time checking whether they agree with the original experimental data and whether they match the experimental or modeling conditions we are going to implement.

When we pull together data to model ion currents, or when we pull together different ion current models, we are most likely merging two or more datasets. Datasets are often obtained under different experimental conditions, and we have to normalize them before merging. Two common problems are the liquid junction potential and differences in temperature (Markram et al. 2015). The liquid junction potential (LJP) (Neher 1992) arises when two different solutions are in contact and contain ions at different concentrations and with different mobilities. Due to the presence of the LJP, the recorded voltage does not correspond to the membrane potential. If the experimental data or the models are not corrected for the LJP, or the authors do not provide an estimate of it, we have to make this correction ourselves. We can estimate the LJP knowing the solutions used in the experiments; this calculation is facilitated by tools like JPCalc (Barry 1994). The other factor to consider is temperature, which affects the kinetic parameters, i.e., the time constants. We have to bring the time constants to the same reference temperature (generally the temperature of our simulations) using the Q10 temperature coefficient, which describes the change in kinetics caused by a 10 °C increase in temperature.
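As a minimal sketch of both normalizations (the Q10 of 2.3 and all numbers below are illustrative assumptions, not measured hippocampal constants):

```python
def correct_ljp(v_recorded_mV, ljp_mV):
    """Membrane potential after correcting for the liquid junction potential."""
    return v_recorded_mV - ljp_mV

def correct_tau(tau_exp_ms, t_exp_C, t_sim_C, q10=2.3):
    """Rescale a kinetic time constant from the experimental to the simulation
    temperature; time constants shrink as temperature increases."""
    return tau_exp_ms / q10 ** ((t_sim_C - t_exp_C) / 10.0)

# Example: 14 mV LJP, and a 12 ms time constant moved from 23 to 34 degrees C
print(correct_ljp(-65.0, 14.0))        # -79.0 mV
print(correct_tau(12.0, 23.0, 34.0))   # ~4.8 ms
```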

A curated set of ion channel models is available together with the single cell models (see Sect. 11.4).

4 Single Cell Models

In this section, we discuss how to constrain hippocampal single cell models, which means defining the set of ion channels and how they are distributed across the different morphologies. Unfortunately, this information is not completely accessible—even for cell types that are intensely studied like CA1 pyramidal cells. Despite that, as described in Chap. 10, computational methods exist to overcome this problem.

4.1 Electrophysiological Features

The simplest set of electrophysiological traces that can be used to constrain a model consists of single cell recordings in current clamp mode, where the soma is stimulated with a series of step currents. Ideally, the currents should cover a range of intensities, and the steps should be long enough to resolve the particular features of the firing patterns. For example, hyperpolarizing currents in CA1 pyramidal cells reveal a “sag” in the voltage response that is important to constrain the hyperpolarization-activated nonspecific-cation current (Ih). Depolarizing currents should have sufficient intensity to characterize the high firing rates of some cell types (e.g., the fast-spiking PV+ basket cells) or even reveal the depolarization block, a temporary arrest of firing due to an intense depolarizing input (Bianchi et al. 2012). Finally, steps of sufficient length are necessary, for example, to better characterize the adaptation of certain neurons or to reveal the first spike of late-spiking neurons, which may appear after several hundred milliseconds under near-threshold stimulation (Tricoire et al. 2010).

After we collect the electrophysiological recordings, we have to classify them on the basis of their firing patterns. Despite the huge variability in cell firing, the different patterns can be sorted into a limited number of classes that are largely agreed upon in the neuroscience community (Petilla Interneuron Nomenclature Group et al. 2008). Data on the hippocampus show that each morphological type (m-type) can express one or more electrical types (e-types), giving different morpho-electrical combinations (me-types) (Komendantov et al. 2019). If our dataset is big enough, we can also estimate the abundance of each me-type, information that will be important when we define the cell density in the network (see Sect. 11.5.2). Based on the data we collected, we derived the morpho-electrical composition shown in Fig. 11.2.

Markram et al. (2015) showed that an efficient way to constrain single cell models is to optimize them against features rather than the entire trace. Features are the salient elements of a trace that characterize the firing pattern (e.g., spike width, time to first spike, adaptation index). We can use the open-source Electrophysiological Feature Extraction Library (eFEL, https://github.com/BlueBrain/eFEL) or the Blue Brain Python E-feature extraction library (BluePyEfe, https://github.com/BlueBrain/BluePyEfe) to extract features for a subsequent model optimization. Feature extraction can also be performed in a web application of the HBP platform EBRAINS (https://ebrains.eu/service/feature-extraction/).
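A minimal sketch of feature extraction with the classic eFEL API, assuming a somatic trace stored in two NumPy arrays (the file names and the stimulus window are hypothetical):

```python
import efel
import numpy as np

# Hypothetical recording: time in ms and somatic voltage in mV
time = np.load('trace_time.npy')
voltage = np.load('trace_voltage.npy')

trace = {
    'T': time,
    'V': voltage,
    'stim_start': [100.0],  # ms
    'stim_end': [500.0],    # ms
}

features = efel.getFeatureValues(
    [trace],
    ['Spikecount', 'AP_width', 'time_to_first_spike', 'adaptation_index2'],
)
print(features[0])
```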

The resulting features may come from experiments performed under different conditions or may be used together with other experimental data. In any case, we have to normalize them by correcting for the LJP and by using threshold-based currents. We already discussed the LJP in Sect. 11.3. Regarding the second issue, we observe that the same cell type can respond differently to the same amount of current in different experiments. A way to normalize the result is to calculate the rheobase, the current necessary to bring the cell to the action potential threshold, and to inject step currents defined as a percentage of this rheobase (Markram et al. 2015). If the experiments are not done this way, we can still estimate the threshold current by interpolating the available data.
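A crude sketch of such an interpolation, fitting the spiking part of the f-I curve and extrapolating it to zero spikes (the sweep values below are invented for illustration):

```python
import numpy as np

def estimate_rheobase(step_currents_nA, spike_counts):
    """Extrapolate the f-I curve to zero spikes to estimate the rheobase."""
    i = np.asarray(step_currents_nA, dtype=float)
    s = np.asarray(spike_counts, dtype=float)
    slope, intercept = np.polyfit(i[s > 0], s[s > 0], 1)  # fit spiking steps
    return -intercept / slope

rheobase = estimate_rheobase([0.1, 0.2, 0.3, 0.4], [0, 0, 3, 7])
print(rheobase)          # ~0.225 nA
print(1.5 * rheobase)    # stimulus expressed as 150% of the rheobase
```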

4.2 Model Optimization

Once the target traces, or more specifically the target electrophysiological features, have been defined, we have to define the set of currents, the compartments in which they are located, and how they vary within each compartment. As discussed in the previous section, the set of active membrane properties includes a sodium current (Na), four types of potassium currents (KDR, KA, KM, and KD), three types of calcium currents (CaN, CaL, CaT), the hyperpolarization-activated nonspecific-cation current (Ih), two types of calcium-dependent potassium currents (KCa and Cagk), and a calcium extrusion mechanism in all compartments containing calcium channels. In general, channels are uniformly distributed in all dendritic compartments except KA and Ih, which in pyramidal cells are known to increase with distance from the soma (Hoffman and Johnston 1999; Magee 1999).

Figure 11.3 shows our first iteration on single cell models (Migliore et al. 2018). Note the following about pyramidal cells:

  • KM is only present in the soma and axon (Shah et al. 2008);

  • KD is not present, since it is implicated in delayed firing, which is not a feature observed in PCs;

  • KA has a different kinetics in dendrites, soma, and axon (Hoffman et al. 1997; Migliore et al. 1999);

  • KM has a different kinetics in the soma versus the axon; and

  • Na and KDR are treated separately in the soma and the rest of the neuron.

Fig. 11.3 Ion current distributions. Distribution of ion currents in pyramidal cells (a) and interneurons (b). Currents present in different compartments are distinguished using an additional letter: d dendrites, s soma, ax axon (adapted from Migliore et al. 2018)

Interneurons:

  • Given the limited knowledge of the currents in interneurons, we apply the same currents as in pyramidal cells, with a few exceptions;

  • KD is present since some interneurons show delayed firing; and

  • KA has the same kinetics in somas and dendrites because there is no experimental evidence of a different KA kinetics in the dendrites of interneurons.

We need to define the passive properties (capacitances and resistances) of the neurons and the maximal conductances of the ion channels. Passive properties are more easily accessible experimentally, and we can constrain them directly in the models. In contrast, peak conductances are normally unknown and have to be optimized. In summary, we combine the set of ion channels and the information about their distribution, the morphological reconstructions, and the passive properties (if known), and then optimize the remaining unknowns (mainly the peak conductances) to match the electrophysiological features.

For this purpose, we perform a multi-objective genetic optimization using the open-source Blue Brain Python Optimization Library BluePyOpt (Van Geit et al. 2016). BluePyOpt is part of a set of tools integrated into many online use cases of the Brain Simulation Platform (BSP) of the European Union’s Human Brain Project (https://www.humanbrainproject.eu/en/brain-simulation/). The entire workflow to build single cell models is also accessible in EBRAINS (https://ebrains.eu/service/hodgkin-huxley-neuron-builder/).

A typical optimization run for a pyramidal cell, configured to generate 128 individuals per generation, requires approximately 1 h per generation on 128 cores. Typical production runs require approximately 60 generations to reach an equilibrated state.
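The condensed sketch below follows the structure of the public BluePyOpt “simplecell” example: one free conductance on the soma, one step protocol, and one eFEL feature as the objective. The morphology file name, parameter bounds, and target feature values are hypothetical; a production optimization uses the full channel set and feature list described above.

```python
import bluepyopt as bpopt
import bluepyopt.ephys as ephys

# Morphology (hypothetical file name) and somatic locations
morph = ephys.morphologies.NrnFileMorphology('SP_PC.asc')
somatic = ephys.locations.NrnSeclistLocation('somatic', seclist_name='somatic')
soma_center = ephys.locations.NrnSeclistCompLocation(
    name='soma_center', seclist_name='somatic', sec_index=0, comp_x=0.5)

# One built-in mechanism and one free peak conductance (bounds are guesses)
hh_mech = ephys.mechanisms.NrnMODMechanism(
    name='hh', suffix='hh', locations=[somatic])
gna = ephys.parameters.NrnSectionParameter(
    name='gnabar_hh', param_name='gnabar_hh', locations=[somatic],
    bounds=[0.01, 0.3], frozen=False)

cell = ephys.models.CellModel(
    'SP_PC', morph=morph, mechs=[hh_mech], params=[gna])

# One step-current protocol with a somatic voltage recording
stim = ephys.stimuli.NrnSquarePulse(
    step_amplitude=0.3, step_delay=100, step_duration=400,
    location=soma_center, total_duration=600)
rec = ephys.recordings.CompRecording(
    name='step.soma.v', location=soma_center, variable='v')
protocol = ephys.protocols.SweepProtocol('step', [stim], [rec])

# One eFEL feature as the objective (target mean and std are placeholders)
feature = ephys.efeatures.eFELFeature(
    'step.Spikecount', efel_feature_name='Spikecount',
    recording_names={'': 'step.soma.v'}, stim_start=100, stim_end=500,
    exp_mean=8, exp_std=1)
objective = ephys.objectives.SingletonObjective('step.Spikecount', feature)
score_calc = ephys.objectivescalculators.ObjectivesCalculator([objective])

evaluator = ephys.evaluators.CellEvaluator(
    cell_model=cell, param_names=['gnabar_hh'],
    fitness_protocols={'step': protocol}, fitness_calculator=score_calc,
    sim=ephys.simulators.NrnSimulator())

# Same scale as the production runs described above
opt = bpopt.optimisations.DEAPOptimisation(
    evaluator=evaluator, offspring_size=128)
final_pop, hall_of_fame, logs, history = opt.run(max_ngen=60)
```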

This procedure produced a set of models that are publicly available in ModelDB (https://senselab.med.yale.edu/ModelDB/showmodel?model=244688#tabs-1) and in the “Live Papers” section of the BSP (https://humanbrainproject.github.io/hbp-bsp-live-papers/2018/migliore_et_al_2018/migliore_et_al_2018.html).

We constrained single cell models using mainly somatic features. For this reason, after the publication of Migliore et al. (2018), we further validated the neuron models for dendritic properties. In particular, we tested the excitability of the dendrites following synaptic inputs. This validation led to an improvement of the models, to which we added the following constraints:

  • a strong reduction of the amplitude of the back-propagating action potential as a function of distance from the soma, following experimental evidence. This feature was not explicitly included in the previous version, but the models predicted it anyway (see Fig. 4B in Migliore et al. 2018). However, it was not enough to limit the excitability under synaptic inputs in most neurons, which fired even for a single synaptic activation;

  • an exponential reduction of the sodium channels in the dendrites of interneurons; and

  • an independent optimization of channels peak conductance in the different regions of a neuron (soma, axon, and dendrites).

The new models are available in the “Live Papers” section of the BSP (https://appukuttan-shailesh.github.io/hbp-bsp-live-papers-dev/2020/ecker_et_al_2020/ecker_et_al_2020.html).

This refinement shows once again the importance of validation. Even for the most studied cell types, we cannot precisely constrain most of the model parameters. Furthermore, we often have to use experimental data that test the cells under unphysiological conditions. For example, we already discussed how the most popular protocol to characterize firing patterns (somatic injection of step currents) may leave dendritic electrical properties under-constrained. For this reason, single cell models should undergo extensive testing and validation. Sáray et al. (2020) developed a validation suite dedicated to single cells called HippoUnit (https://github.com/KaliLab/hippounit). Among other things, we can use HippoUnit to compare different models or different versions of the same model.

4.3 Library of Cell Models

Although we can identify a limited number of cell types, in reality each cell is unique in terms of anatomy and physiology. This high variability in the brain may play an important role that we should not ignore. At the same time, our morphological reconstructions and electrophysiological recordings most probably capture too little of this variability, and this may introduce a significant bias into our model.

Following Markram et al. (2015), we first produced a potentially unlimited number of unique cells by introducing noise into specific morphological features, branch lengths and rotations, while preserving the branching structure. This method normally produces cells with the same laminar distribution of axons and dendrites, and so it maintains the same cell types. In the few cases where the resulting cells did not retain their cell classes, we excluded them.

Having created thousands of unique morphologies, we would then have had to create electrical models for all of them, a task that would have required too much computer time. We overcame this problem by combining the set of morphologies with an initial set of electrical models and assessing whether the new combinations retained the correct firing pattern. This procedure increased the variability of the cells sufficiently.

5 Volume

We have defined a library of single cell models, and now we have to assemble them in a network model. In order to do that, we need to define the volume of the network and populate it with the single cell models.

5.1 Define the Volume

Previous modeling efforts for the hippocampus have pursued different strategies to model the network. An example of a CA1 model that does not take realistic space into account is the one from Cutsuridis et al. (2010).

Bezaire et al. (2016), on the other hand, define the volume using a regular geometrical shape that can be more or less constrained experimentally. While a regular volume simplifies the building and the analysis of the network, it has several disadvantages. For a subregion like CA1, which is curved and quite irregular, it is not straightforward to define a regular geometry with the same geometrical properties as the original volume.

Schneider et al. (2014) used an interesting hybrid approach. To constrain the volume of the rat dentate gyrus (DG), the authors started from a regular shape and applied a limited number of transformations to approximate the real volume. The result is a volume that can be described parametrically, but still captures part of the irregularities of the real tissue. Another disadvantage of using a simplified volume is that the resulting circuit is less reusable. For example, it will be more complicated to connect different networks, each defined in different simplified volumes.

Brain tissue models as described in Chap. 10, on the other hand, explicitly treat space as a modality which should be parameterized from an atlas. There are several public rat brain atlases, but not all of them contain sufficient details to be used for a large-scale model of the hippocampus. An example of an atlas with a satisfactory level of details is described in Ropireddy et al. (2012) and available at http://krasnow1.gmu.edu/cn3/hippocampus3d/.

An atlas-based volume is the most accurate approach and this is what we will consider in the rest of the chapter. However, it should be noted that the process of deriving an atlas is very laborious and error-prone; as a result, atlases are often quite noisy. For example, there could be sudden enlargement or shrinkage of the layer thickness, peninsulas or islands of one layer in another layer, holes, cavities, and detached regions. All those artifacts complicate the reconstruction and the analysis of the network.

5.2 Cell Placement

Once we have defined the set of morphologies and single cell models, we have to specify how many cells will populate the volume. In the case of the rat CA1, Bezaire and Soltesz (2013) provided useful estimates of the total numbers of cells of the different morphological types. We should combine this information with the proportions of the different firing patterns exhibited by each morphological type (see Sect. 11.4.1). Furthermore, it is important to remind the reader that Bezaire and Soltesz assumed the CA1 to be uniform in their calculations. We already discussed that CA1 is far from homogeneous, and the cell density also varies greatly within it. Despite these caveats, for the sake of simplicity, we can adopt the same working assumption of uniformity.

Once we have defined the number of cells, we have to position their cell bodies in the volume, rotate their morphologies correctly to follow the curvature of the hippocampus, and make sure that their dendrites and axons show up in the appropriate layers (Markram et al. 2015; Ropireddy et al. 2012).
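A toy sketch of the first step, sampling soma positions from a voxelized density map and jittering them within each voxel (the voxel size and the density values in the example are invented):

```python
import numpy as np

rng = np.random.default_rng(7)

def place_somata(voxel_centers_um, cells_per_voxel, voxel_size_um=10.0):
    """Sample soma positions proportionally to the expected count per voxel."""
    counts = np.asarray(cells_per_voxel, dtype=float)
    n_cells = int(round(counts.sum()))
    idx = rng.choice(len(counts), size=n_cells, p=counts / counts.sum())
    jitter = rng.uniform(-0.5, 0.5, size=(n_cells, 3)) * voxel_size_um
    return np.asarray(voxel_centers_um)[idx] + jitter

# Toy example: four voxels along the depth axis with uneven cell densities
centers = np.array([[0, 0, 5], [0, 0, 15], [0, 0, 25], [0, 0, 35]])
somata = place_somata(centers, cells_per_voxel=[2, 40, 5, 1])
```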

6 Connections

In this section, we discuss strategies to derive the connectome, the set of connections among cells. Different strategies are used in different published models (Bezaire et al. 2016; Cutsuridis et al. 2010) or in different parts of a model (internal versus afferent connections). Moving from single cells toward networks, the available experimental data become more and more sparse and heterogeneous. Among all the possible connections, only a minority have been described at all, some more precisely than others. Furthermore, when available, datasets usually have a small sample size along with high variability, and often the quality is poor. For example, most of the connectivity data come from light microscopy, where synaptic contacts are not always clearly resolvable, or from slices, where the cut can remove part of the connections. On the other hand, while datasets from electron microscopy are certainly more precise, their number is very limited, as is the volume of the sampled tissue.

The main challenge addressed in this section is how to predict the set of connections given the limited available datapoints. More precisely, our goal is to specify which pairs of cells are connected, how many synapses are present in each connection, and where the synapses are located in the morphology. To start, we can initially assume that the connectivity pattern is dictated by the morphologies in the space and the associated distribution of dendrites and axons. For simplicity, since there is not extensive evidence to the contrary, we can neglect the fact that cells with the same morphology but with differences in other properties (i.e., firing pattern, biochemical markers, transcriptome) may form different connections. The most prominent examples of this behavior are PV+ and CCK+ basket cells that show a different connectivity pattern (Bezaire and Soltesz 2013).

If cells with similar morphologies have similar connections, we can simplify our task. With M morphological types, there are M² potential pathways. Even if not all M² pathways are viable, it is convenient to assume that most of them are. When there is strong evidence for nonviable pathways, we can exclude them. The best-known examples are the axo-axonic cells, which seem to form connections only on pyramidal cells, and the interneuron-specific cells, which form connections only on other interneurons and not on pyramidal cells. Another feature we should take into account is the location of synapses. For example, excitatory cells tend not to form synapses on the somata of other excitatory cells (Markram et al. 2015). Finally, we should include a certain degree of variability in our connections to better capture real connectomes, so we should sample connectivity parameters from appropriate probabilistic distributions.

Chapter 10 discusses different approaches to computationally predict the connectome depending on what type of source data is available, apposition-based constraints and density-based constraints, and we used both approaches to model, respectively, internal and afferent connections of the CA1.

6.1 Apposition-Based Constraints

This approach requires that axons are sufficiently well reconstructed, at least within the region of interest. While this prerequisite is difficult to meet, meeting it drastically reduces the number of assumptions we have to make, and the resulting connectome will be much more predictive.

In this case, we can place potential synapses based on the proximity of axons and dendrites. The key parameter is the threshold distance between axon and dendrite that decides whether we can place a potential synapse. Reimann et al. (2015) showed that we cannot obtain a realistic connectome even if we optimize this parameter. Instead, the authors suggested a multi-step pruning algorithm that matches the sparse data in terms of bouton density and number of synapses per connection, thereby predicting the rest of the connectome more accurately. This algorithm was initially designed for the somatosensory cortex (SSCx) microcircuit, but we can apply the same strategy to the CA1. Even though it starts from sparse data, this approach appears to be quite predictive, and the resulting connectome also reproduced high-order connectivity patterns (motifs) in the SSCx (Gal et al. 2020; Nolte et al. 2020).
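A toy version of the touch-detection step, finding axo-dendritic appositions within a threshold distance using a k-d tree (the touch distance here is illustrative; the real pipeline then prunes the touches to match bouton densities and synapses per connection):

```python
import numpy as np
from scipy.spatial import cKDTree

def potential_synapses(axon_points, dendrite_points, touch_distance_um=2.5):
    """Return (axon_idx, dendrite_idx) pairs closer than touch_distance_um."""
    tree = cKDTree(dendrite_points)
    neighbors = tree.query_ball_point(axon_points, r=touch_distance_um)
    return [(i, j) for i, js in enumerate(neighbors) for j in js]

# Toy example with random 3D segment midpoints (um)
rng = np.random.default_rng(0)
axon = rng.uniform(0, 100, size=(1000, 3))
dend = rng.uniform(0, 100, size=(1000, 3))
print(len(potential_synapses(axon, dend)))
```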

In Fig. 11.4, we show parameters of the predicted connectome in the rat hippocampus CA1. Note that these results come from an instantiation of the circuit (i.e., with a particular set of morphologies, volume, positioning) and should not be used as expected values.

Fig. 11.4 Predicted connectome analysis of the rat hippocampus CA1. Bouton density (a), synapse convergence (b), and synapse divergence (c) for each morphological type (m-type). Average number of synapses per connection (d) and connection probability (e) for pairs of m-types. Connection probability as a function of soma distances between pyramidal cells (f)

6.2 Density-Based Constraints

We use this approach when we do not have axon reconstructions but still have volumetric information. In this case, we can think in terms of synapse distributions in space and, therefore, connection probabilities. At least for some pathways, we can find information on synapse distributions. Alternatively, we can examine how the axons are distributed in space, assuming that the probability of finding a synapse is proportional to the axon mass, which again yields a synapse distribution. In any case, once we have a synapse distribution, we can also define a connection probability. There are many ways to accomplish this task, and each may use a different order of constraints. This type of approach normally reduces the number of assumptions. For example, we do not have to specify all the viable pathways; we can let the algorithm determine them based on the connection probability.

We can find an application of this approach in the model of Bezaire et al. (2016). The authors defined their model in a simplified volume of CA1 (see Sect. 11.5.1) and used the volumetric information together with hypothetical axonal distributions to constrain the connectivity.

This approach is also useful to constrain long-range connections, for which we normally do not have sufficient axon reconstructions. In fact, we applied this method to reconstruct the Schaffer collaterals of the CA3 pyramidal cells, the most prominent innervation driving the CA1 network. Those fibers target both pyramidal cells and interneurons in CA1, mainly at the level of stratum oriens (SO) and stratum radiatum (SR). We can estimate that a pyramidal cell receives on average 20,879 synapses from Schaffer collaterals, while an interneuron receives on average 12,714 (Bezaire and Soltesz 2013).
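A sketch of how a layer-wise axonal density profile can be turned into synapse placements along the depth axis; the layer boundaries and relative densities below are invented for illustration, and only the total synapse count comes from the estimate above:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_synapse_depths(layer_edges_um, axon_density, n_synapses):
    """Sample synapse depths proportionally to the axon mass in each layer."""
    p = np.asarray(axon_density, dtype=float)
    p /= p.sum()                                  # normalize to probabilities
    layer = rng.choice(len(p), size=n_synapses, p=p)
    lo = np.asarray(layer_edges_um)[layer]
    hi = np.asarray(layer_edges_um)[layer + 1]
    return rng.uniform(lo, hi)                    # uniform within each layer

# Hypothetical layer boundaries (SO/SP/SR/SLM) and Schaffer axon densities,
# with most of the axonal mass in SO and SR as described in the text
edges = [0, 100, 150, 450, 600]
density = [0.30, 0.05, 0.60, 0.05]
depths = sample_synapse_depths(edges, density, n_synapses=20879)
```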

7 Synapses

Once the anatomical connections are defined, we have to assign physiological properties to the synapses. We restrict our discussion to chemical synapses and in particular to ionotropic receptors at the level of glutamatergic (AMPA and NMDA receptors) and GABAergic (GABAA receptor) transmission. This section addresses the parametrization of the synapses in the rat CA1 model. This work is fully described in Ecker et al. (2020), but we will summarize the main points for the benefit of the reader.

7.1 Postsynaptic Conductance

To model ionotropic receptors, we can use a conductance-based model (similar to the treatment of ion channels in Sect. 11.3) with a double-exponential conductance time course, which captures well the dynamics of hippocampal synapses. For the NMDAR component, we should also include the dependency of the conductance on the extracellular Mg2+ concentration, for which the phenomenological model of Jahr and Stevens (1990) is a widely used approach.
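In equations (a sketch: \( \hat{g} \) is the peak conductance, \( \mathcal{N} \) normalizes the peak of the double exponential to one, and the Mg2+ block follows the Jahr and Stevens parametrization with V in mV and [Mg2+]o in mM):

\[
g_{\mathrm{syn}}(t) = \hat{g}\,\mathcal{N}\left(e^{-t/\tau_{\mathrm{decay}}} - e^{-t/\tau_{\mathrm{rise}}}\right),
\qquad
g_{\mathrm{NMDA}}(t,V) = g_{\mathrm{syn}}(t)\left(1 + \frac{[\mathrm{Mg}^{2+}]_o}{3.57}\,e^{-0.062\,V}\right)^{-1}
\]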

Since most of the data on synapses come from somatic recordings, we have to take into account the space-clamp error (Bar-Yehuda and Korngreen 2008) in addition to the attenuation of the postsynaptic potential between the synapse location and the soma. To correct for both factors, we identify the synapse location, set a test value for the peak synaptic conductance, simulate a synaptic activation, and adjust the conductance until we obtain the expected postsynaptic potential (PSP) (Ecker et al. 2020). Following this procedure, we estimate the peak conductances of AMPAR and GABAAR, since NMDAR is normally blocked around the resting membrane potential. We cannot set the NMDAR peak conductance this way because its response is always contaminated by the AMPAR component. To overcome this problem, we can estimate it by combining the AMPAR peak conductance with the ratio between NMDA and AMPA conductances (NMDA/AMPA ratio), which is accessible experimentally at the level of the soma and which we can assume to be preserved at the level of synapses.
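A minimal sketch of this calibration loop; simulate_psp stands in for a full simulation of one synaptic activation and is not a real API:

```python
def calibrate_peak_conductance(simulate_psp, target_psp_mV,
                               g_init_nS=0.1, tol=0.01, max_iter=20):
    """Rescale the peak conductance until the somatic PSP matches the target.

    For small PSPs the amplitude is nearly linear in the conductance, so a
    proportional update converges in a few iterations.
    """
    g = g_init_nS
    for _ in range(max_iter):
        psp = simulate_psp(g)                  # run one synaptic activation
        if abs(psp - target_psp_mV) <= tol * target_psp_mV:
            break
        g *= target_psp_mV / psp               # proportional rescaling
    return g
```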

7.2 Short-Term Plasticity

If our synapse models produced only stereotypical responses, the resulting network model would have very limited validity in the time domain. Hippocampal synapses are highly plastic and show different dynamics at different time scales. Treating all the different forms of plasticity would require a book of its own and is beyond the scope of this chapter. Here, we introduce only short-term plasticity, which is relevant on time scales between milliseconds and seconds.

There are many possible models of short-term plasticity (Hennig 2013). Here, we use the Tsodyks–Markram model (Markram et al. 1998; Tsodyks and Markram 1997), a widely used model that is relatively efficient and able to capture the dynamics of hippocampal synapses. Since the original papers, the model has undergone several changes (for a review of the different variants, see Hennig 2013). Since hippocampal synapses show both facilitation and depression, we select a model version that captures both (see Ecker et al. (2020) for the version applied to the CA1 model).
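A minimal event-based sketch of a Tsodyks–Markram synapse with both facilitation and depression; the parameter values are illustrative placeholders, not the fitted hippocampal values of Table 11.2:

```python
import numpy as np

def tm_response_amplitudes(spike_times_s, U=0.5, tau_rec=0.8, tau_facil=0.05):
    """Relative amplitude of each synaptic response in a spike train.

    u is the running utilization (facilitation), R the available resources
    (depression); both are updated at each presynaptic spike.
    """
    u, R, last_t = 0.0, 1.0, None
    amps = []
    for t in spike_times_s:
        if last_t is not None:
            dt = t - last_t
            u *= np.exp(-dt / tau_facil)                  # facilitation decays
            R = 1.0 - (1.0 - R) * np.exp(-dt / tau_rec)   # resources recover
        u += U * (1.0 - u)       # each spike increments utilization
        amps.append(u * R)       # released fraction sets the response size
        R -= u * R               # released resources become unavailable
        last_t = t
    return np.array(amps)

# Example: ten stimuli at 50 Hz, as in the pair recordings of Fig. 11.5e
print(tm_response_amplitudes(np.arange(10) * 0.02))
```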

The model has several free parameters that have to be optimized to match pair recordings. To fully constrain the Tsodyks–Markram model, the pair recording should contain a series of stimuli, possibly at different frequencies. Protocols with fewer stimuli, like paired pulses, or with a limited number of frequencies generally under-constrain the model. Similarly to the case of single cell optimization, we can optimize the parameters against the salient features of the pair recordings, in this case the peaks of the synaptic responses. We can use the Python libraries eFEL or BluePyEfe to extract the features and BluePyOpt to optimize the models.

7.3 Multivesicular Release

We can expand the Tsodyks–Markram model to include stochastic multivesicular release, a transmission modality that also occurs in the hippocampus (Rudolph et al. 2015).

Following the classical model of del Castillo and Katz (1954), we can assume each synapse contains a number of vesicle release sites, also known as the size of the readily releasable pool (NRRP), at each of which a vesicle can be released with the same release probability U (corresponding to the release probability of the Tsodyks–Markram model). We can incorporate multivesicular release into the Tsodyks–Markram model to better capture the nature of certain pathways (a mathematical description can be found in Ecker et al. 2020). An implementation of this model is accessible from the BBP neocortical microcircuit portal (https://bbp.epfl.ch/nmc-portal/welcome) (Ramaswamy et al. 2015).

Having introduced the model formalism, we have to constrain its parameters. We already mentioned that we can optimize the Tsodyks–Markram parameters using pair recordings. If we include multivesicular release, we also have to constrain NRRP, which is unknown for most pathways. Lacking experimental estimates for NRRP, we can predict it using our model. Barros-Zulaica et al. (2019) showed that it is possible to predict NRRP by choosing the value that best matches the coefficient of variation (CV) of the first postsynaptic current in a pair recording. In the rat hippocampus, using this approach and available pair recordings (Kohus et al. 2016), Ecker et al. (2020) predicted that certain pathways could have multivesicular release (see Table 3 in Ecker et al. 2020).
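A sketch of that procedure under a simple binomial release model; the utilization, quantal variability, and experimental CV below are hypothetical values, not the fitted ones of Ecker et al. (2020):

```python
import numpy as np

rng = np.random.default_rng(0)

def first_psc_cv(nrrp, u, q_mean=1.0, q_cv=0.2, n_trials=2000):
    """CV of the first PSC when nrrp sites release independently with
    probability u and the quantal size varies trial to trial."""
    released = rng.binomial(nrrp, u, size=n_trials)
    quanta = rng.normal(q_mean, q_mean * q_cv, size=n_trials)
    amplitudes = released * quanta
    return amplitudes.std() / amplitudes.mean()

# Pick the NRRP whose simulated CV best matches the experimental one
cv_experimental = 0.4   # hypothetical value from a pair recording
best_nrrp = min(range(1, 25),
                key=lambda n: abs(first_psc_cv(n, u=0.5) - cv_experimental))
print(best_nrrp)
```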

More generally, when we approach the problem of constraining synaptic parameters in large-scale networks, we face two further problems: data heterogeneity and data sparseness.

7.4 Data Heterogeneity

Data are produced under different experimental conditions, and we should pay attention when merging different datasets. The general strategy is to normalize the data, adjusting them to reflect the same conditions. In the case of synaptic parameters, we have to consider at least three important sources of data heterogeneity: liquid junction potential, temperature, and calcium concentration. We already discussed the liquid junction potential and differences in temperature in Sect. 11.3 on ion channels.

The extracellular calcium concentration, [Ca2+]o, impacts the synaptic release probability and, consequently, the dynamics of the synapses. This relationship can be described by a Hill isotherm with n = 4 (Hill 1910; Markram et al. 2015). Since there are not many hippocampus-specific datasets, we can assume that the hippocampus behaves like the cortex and adopt the same parametrization previously used for the SSCx microcircuit (Ecker et al. 2020; Markram et al. 2015).
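A hedged sketch of this scaling, where \( U_{\max} \) and the half-effect concentration \( K_{1/2} \) are fitted constants (values not reproduced here):

\[
U([\mathrm{Ca}^{2+}]_o) = U_{\max}\,\frac{[\mathrm{Ca}^{2+}]_o^{\,4}}{K_{1/2}^{\,4} + [\mathrm{Ca}^{2+}]_o^{\,4}}
\]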

7.5 Data Sparseness

As we mentioned in the section on connections, data on synapses are very sparse compared to the multitude of different pathways in a brain region. Based on available data and similarities with other brain regions, Ecker et al. (2020) divided the connections into nine categories, depending on the type of connection (excitatory or inhibitory) and the biochemical markers of the pre- and postsynaptic cells:

  • pyramidal cell (PC) to PC;

  • PC to somatostatin-positive (SOM+) interneurons;

  • PC to somatostatin-negative (SOM−) interneurons;

  • parvalbumin-positive (PV+) interneurons to PC;

  • cholecystokinin-positive (CCK+) interneurons to PC;

  • SOM+ interneurons to PC;

  • nitric oxide synthase-positive (NOS+) interneurons to PC;

  • CCK-negative (CCK−) interneurons to CCK− interneurons; and

  • CCK+ interneurons to CCK+ interneurons.

Using the network model built up to this point and the available data, we predicted a series of synaptic parameters (see Table 11.2). As in the case of connections, those parameters should be used with caution: they reflect a particular set of assumptions and data. Still, we believe they provide a very useful reference for modeling efforts.

Table 11.2 Predicted synaptic parameters. Synaptic parameters from presynaptic (Pre) to postsynaptic (Post) cell types in the nine categories of connections. In parentheses, the synaptic type: excitatory (E), inhibitory (I), facilitating (1), depressing (2), pseudo-linear (3). Parameter abbreviations: \( \hat{g} \) peak conductance; τdecay decay time constant; USE utilization of synaptic efficacy; D (ms) depression time constant; F facilitation time constant; NRRP size of the readily releasable pool of vesicles. Values are presented as mean ± SD (adapted from Table 3 in Ecker et al. 2020)

8 Simulation Experiment

We have constrained single cell models, placed them in a volume, and predicted their connectivity and synaptic parameters (Fig. 11.5). We can now use the model to simulate not only single neurons or pairs of neurons, but also slices or the entire network.

Fig. 11.5 Rat CA1 model. Full-scale model of the rat CA1; only 1% of the cells and dendrites are shown for clarity (a). A 100 μm-thick slice; a pyramidal cell (PC) and a parvalbumin-positive basket cell (PVBC) are shown in blue and red, respectively (b). The same two cells, PC and PVBC, extracted from the circuit (c). Firing patterns of the PC (blue) and PVBC (red) following a step current at 200% of the rheobase; scale bar 10 mV, 100 ms (d). Pair recordings from PC to PVBC (blue) and from PVBC to PC (red) during a train of ten stimuli at 50 Hz; scale bar 0.1 mV, 50 ms (e)

A model contains variables, which depend on time, and parameters, which do not. Simulating the network means evaluating the variables along the time dimension and thereby showing how the network dynamics evolve. In computer simulations, time is discretized and the simulation evaluates all the variables at each time step. We can store the values of the variables during the simulation for subsequent analyses. In this section, we introduce four types of simulation conditions: spontaneous or evoked activity, in vitro or in vivo.

Without any external inputs, some networks can generate intrinsic activity. Two driving forces that trigger this spontaneous activity are pacemaker neurons (Le Bon-Jego and Yuste 2007) and spontaneous synaptic release (“minis”). To the best of our knowledge, there is little evidence for intrinsically active neurons in the hippocampus, while spontaneous vesicle release there is well documented (Kavalali 2015). Minis occur at very low frequencies (i.e., on the order of 0.01 Hz, Kavalali 2015), but given the multitude of synapses, their impact is significant. There are several reasons to study the network dynamics under spontaneous activity. In this condition, we can consider the network to be in its resting state, which already tells us much about the network properties. In the case of the rat CA1, the network shows very sparse (mean frequency <1 Hz) and random activity (Romani et al., in preparation). Moreover, as we will discuss, simulating the network without inputs is important for testing and validating the model.

While it is useful to study spontaneous activity, this condition rarely occurs in reality. Brain regions are heavily interconnected and are always exposed to a series of stimuli. We can mimic an external input by injecting currents into the somas, or we can model action potentials arriving through afferent fibers to our region of interest. The second approach requires an extension of our model, but it is the most accurate and flexible. In the case of CA1, we implemented a model of the Schaffer collaterals, which give rise to most of the synapses in CA1. Including a model of the Schaffer collaterals enables us to explore a variety of additional phenomena. Clearly, adding other innervations (e.g., the perforant path, projections from the medial septum) will expand the capabilities of our model even more.

Whether we want to look at spontaneous or evoked activity, we can simulate our network to mimic in vitro or in vivo conditions. Our ultimate goal is naturally to study how the hippocampus behaves in a living brain, but it is also useful to replicate in vitro conditions. In fact, most of the data are obtained in vitro, and therefore we may want to validate the network by comparing our in silico model with in vitro data, to gain insight or to extend some experimental findings. In vitro conditions may differ from in vivo ones for several reasons. The region of interest is normally cut and removed from its context. As a consequence, it does not receive most of the inputs from the regions connected to it, so the background activity is significantly compromised. Additionally, the external solution cannot exactly reproduce the environment of the region in the real brain. For example, the solution may lack important molecules (i.e., ions, hormones, neuromodulators) that influence the network behavior. Sometimes, the solution is altered on purpose to simplify experiments. For example, experimentalists use a higher Ca2+ concentration to make the synapses respond more strongly, rendering them more easily recordable. In general, reproducing experimental conditions accurately is quite challenging. The fact that experimental conditions are (apparently) under the control of the experimenters may give the illusion that replicating them is an easy task. Unfortunately, even well-written methods cannot fully capture the reality of an experiment, and our models may not include all the parameters necessary to match the experimental conditions. Considering all of that, we can conclude that an in vitro experiment can be reproduced only approximately.

If reproducing an in vitro experiment is challenging, this is even more true for an in vivo experiment. Here, we have to reproduce the extracellular solution and the background activity, and both are seldom known. While we cannot reproduce in vivo conditions exactly, we can nonetheless make approximations to get an idea of the direction in which the system moves when passing from in vitro to in vivo. Markram et al. (2015) approximated in vivo conditions by lowering the extracellular calcium concentration to match in vivo values (1.1–1.3 mM) and by applying a tonic depolarization to compensate for the reduced background activity.

9 Validation

Even when each of the building blocks is apparently well constrained, the correct behavior of the network is not guaranteed. The interactions of the different building blocks are often complex, and the overall behavior cannot be predicted by looking at each block individually. As a consequence, extensive validation is essential: it can unmask incorrect behavior of the building blocks inside the network, as well as flawed underlying assumptions. There are several types of validation, as described in Chap. 10.

9.1 Different Types of Validation

Once we assemble the network, we should already have validated each model component (see Sect. 10.5.1: High-Throughput Model Component Validation). This does not guarantee that a model component continues to behave as expected once embedded in a compound model, i.e., the network. For this reason, we should validate model components also in the context of the network (see Sect. 10.5.2: Sample-Based In Situ Model Component Validation). For example, we can inspect the position of the morphologies within a series of slices along the main axis of the hippocampus (Fig. 11.5, panel b). Another example is the validation of the single cell models. Many problems may occur at the level of single cells while the network activity still appears reasonable. Cells may enter a depolarization block (Bianchi et al. 2012) even when the input is expected to be low, or may get “stuck” at certain depolarization levels even in the absence of input.

We reconstruct a network using a multitude of constraints that may conflict with each other. This problem, together with the random number generation used in the model building process, means that the final model is not guaranteed to reflect the initial set of inputs. To confirm that the model is still consistent with the input data, we have to perform a new set of validations (see Sect. 10.5.3: Intrinsic Validation). For example, we can compare the analysis of the connectome (Fig. 11.4) against the input parameters or perform in silico pair recordings (Fig. 11.5, panel e).

With the three types of validation presented above, we assess the quality of our network in default conditions. Network manipulation is another useful approach to test our model. The idea is to apply simple manipulations (changing only one parameter at a time) for which we know the results, either quantitatively or qualitatively. For example, we can block GABAAR and check whether the network shows the increased activity we would expect.

After performing all the previously discussed checks, the model should be reasonably consistent with the input data and each component should work as expected. We can say that the network is valid within a space defined by its input parameters. While this is an important step, it could be limiting. We would like to use the model to explore uncharacterized regimes and make predictions. To achieve that, we need to test how much the model generalizes and goes beyond the input data. We need another set of validations that compare the model with new datasets and validate the emergent properties of the network (see Sect. 10.5.4: Extrinsic Validation).

The first simple emergent property we may want to check is the spontaneous activity under default parameters (Fig. 11.6). Even if we lack specific information on how the network behaves under those conditions, we should have an idea of what to expect. For example, we know that CA1 neurons fire at low frequency (~1 Hz) when the network is in a resting state (Czurkó et al. 1999; Hirase et al. 1999; Wiener et al. 1989). Cells that are too active or too silent could indicate issues in the model.

Fig. 11.6 Rat CA1 spontaneous activity. Simulation frame (a). Examples of single cell traces; scale bar of 10 (b). Firing rate distribution (c)

More complex validations are possible. We should select experiments that test different aspects of our region. In addition, the more our target experiments depend on many network components, the more strongly they validate the model. Once we have identified the set of experiments, we have to reproduce (as far as possible) the same experimental conditions, stimuli (if any), and analyses. Our model is an approximation of the real system, and the simulation is an approximation of the experimental conditions. If we also consider the high variability of biological systems, it is clear that we cannot expect a perfect match between simulation and experimental results. What we want is to reproduce the essence of the phenomena concerned. If this is not the case, we have to understand the reason(s) for the mismatch. For example, the model may lack an important component, some of our constraints or assumptions may be incorrect, or we may have failed to reproduce the exact experimental conditions. This exercise can be quite laborious, but it often leads to an improvement of the model.

Examples of more complex validations are the reproduction of the different types of oscillations observed in the hippocampus (Colgin 2016). Those rhythms range from slow oscillations, like theta (Buzsáki 2002), to high-frequency oscillations, like those observed during sharp-wave ripples (Buzsáki 2015), and have been correlated with different types of behavior.

9.2 Sensitivity Analysis

It is important to mention another general principle when simulating models: biological systems are quite robust despite their high variability. For this reason, our simulation results may not be very strong if they hold only in a narrow region of parameter space or for a particular stream of random numbers (if the model contains random processes). To address this problem, we can replicate the simulations with slightly different key parameters (e.g., inserting noise into the stimuli) or different random number seeds, and check whether the results are robust. An additional option is to create different instances of the network model, in which we vary key parameters within the biological range. For example, Markram et al. (2015) created six equivalent circuits by varying the cell composition, the selection and positioning of model neurons, and the synaptic connectivity.

10 Conclusions

In this chapter, we discussed how to adapt the approach of Markram et al. (2015), covered in Chap. 10: “Computational Concepts for Reconstruction and Simulation of Brain Tissue,” and apply it as a use case to reconstruct a large-scale model of the hippocampus, using the example of the rat CA1. The method is readily generalizable; only minor changes are needed to take into account the particular anatomy and physiology of the hippocampus and the data available for this brain region.

Despite the sparseness and heterogeneity of the data, reconstructing a faithful model of the hippocampus is a feasible task thanks to a series of strategies that mitigate the variable quality of the input data. Of crucial importance is the systematic use of validations that corroborate each building block and demonstrate the credibility of the final circuit.

If we proceed with rigor, we can use the final circuit model to perform in silico experiments and make predictions. There is a series of questions we can answer with our model that are not tied to any particular brain region but rather concern dynamical systems in general. For example, we can study which dynamical regimes the network can enter, or characterize the input–output (IO) function of the network.

Furthermore, each brain region has its own specific roles and properties, and research on each region generates its own questions. In this context, we can use the model to support an existing theory, reveal the mechanism behind a given behavior, and/or predict the behavior of the system under conditions that are not accessible experimentally. A prominent example is the different types of oscillations in the hippocampus: despite significant research, we lack a complete understanding of how those rhythms are generated and of their functional roles. What is clear is that brain rhythms, like other emergent network phenomena, can be explained only by considering different spatial and time scales. Only a biophysically detailed model, like the one we describe in Chap. 10 and this chapter, can provide a significant step forward in deciphering complex network behaviors and, more generally, novel insights into the fascinating brain region that is the hippocampus.

Funding and Acknowledgments

This study was supported by funding to the Blue Brain Project, a research center of the École polytechnique fédérale de Lausanne (EPFL), from the Swiss government’s ETH Board of the Swiss Federal Institutes of Technology.

Funding was also provided by The Human Brain Project through the European Union Seventh Framework Program (FP7/2007–2013) under grant agreement no. 604102 (HBP) and from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreements No. 720270 (Human Brain Project SGA1) and No. 785907 (Human Brain Project SGA2).

Michele Migliore continues to receive funding from the European Union’s Horizon 2020 Framework Program for Research and Innovation under the Specific Grant Agreement No. 945539 (Human Brain Project SGA3).

The authors would like to thank all the people involved in the hippocampus project over the last years.

We further thank Fabien Petitjean for the help with the figures, and Karin Holm for copyediting the chapter.