
Mathematical Models of Hippocampal Spatial Memory

Module by: Andrew Wu

Summary: Here we present a comparison of Spike-time Dependent Plasticity (STDP) and Calcium Dependent Plasticity (CaDP) in modeling spatial memory of a simplified place cell network in the hippocampus. This research was supported by Rice University's Department of Computational and Applied Mathematics as part of a Vertical Integration of Research and Education in the Mathematical Sciences (VIGRE) grant from the National Science Foundation. In VIGRE, teams of postdocs, faculty, undergraduate and graduate students collaborate in groups known as PFUGs to work on problems within a field of the mathematical sciences. This report is a compilation of research done as part of a PFUG on Hippocampal Spatial Memory.

Mathematical Models of Hippocampal Spatial Memory

2011 Summer VIGRE PFUG

Andrew Wu


Katie Ward

Dr. Steven Cox

Rice University

Department of Computational and Applied Mathematics


The hippocampus is a structure within the brain that is believed to be involved with memory, spatial representation, learning, and navigation. We are particularly interested in the concept of spatial memory: the ability of an animal (with a hippocampus) to internally represent its surroundings and orient itself within them. Spatial memory has been theorized to be facilitated by a group of cells in the hippocampus known as place cells [8], which only spike when the animal is in a specific location within its environment. Our project aims to continue the development of a computational model of these place cells that matches experimental data gathered by collaborators, and to analyze the network's interactions, dynamics, and equilibrium properties in order to better understand the mechanisms underlying place cell function in the hippocampus and the cells' greater role in spatial memory and other functions.

Biological experiments have been conducted to investigate how the hippocampal place cells are involved in learning about an animal's environment/location. One common experiment is to monitor rat hippocampal place cells as the rat moves around a track collecting food, as was conducted by Mehta et al. [6]. As the rat became more accustomed to the path and its environment, there were two noticeable phenomena among the place cells: increased place field size, or the increase in the area of the track in which a place cell fires (as a result of increased firing rates); and backward shift, or the tendency for place cells to fire in an upstream position compared to the original firing position (opposite to the direction of movement). After a few laps around the track, the place fields stabilize and stop their backward shift, indicating that they have finished "learning" the track.

The experiment whose data we will analyze is known as the Double Rotation experiment. In this experiment, the subject is initially "calibrated" by running a track similar to the one given in the Mehta experiment. Once this is done, the subject then runs the same track with the local landmarks/cues rotated counterclockwise and the distal cues rotated clockwise [5]. The firing of the place cells is then monitored to observe the effect of disorienting the rat. In this experiment the researchers were able to differentiate between two different sets of hippocampal cells: CA3 cells, which featured more recurrent connections and were stable; and CA1 cells, which were less stable and had fewer interconnected cells. Despite the rotation of local and distal cues, the hippocampal cells still show the same backward shift and final place cell stability, leading us to believe that these two phenomena are strongly associated with the development of spatial memory. As such, we want to understand the mechanisms behind these phenomena.

My mentor, Katie Ward, has developed a model to monitor the interactions between the hippocampal cells as well as a number of other types of cells (Grid Cells, Head-direction cells) involved in spatial memory during the Double Rotation experiment. Through collaboration with Dr. James Knierim at Johns Hopkins University, we plan to analyze the interaction of the recurrent connections between CA3 cells in the hippocampus and their influence on the final stability and weight distribution of place fields in order to better understand how these interactions are involved in developing spatial memory.

Modeling Hippocampal Place Cells

In order to model spatial memory in the hippocampus, we focus upon the CA3 place cells of the hippocampus. As they have more recurrent connections and show more stability than CA1 cells, they are the most likely candidate for the development of spatial memory.

We begin with the experimental setup used by Mehta, where a rat moves clockwise around a circular track. This simplified simulation setup and the parameters given within this chapter were taken from Gabbiani and Cox's book Mathematics for Neuroscientists [2]. In our simplified interpretation of place cell input, each part of the track provides a single place cell with excitatory input of uniform firing rate. We represent the rat's place cell network as a ring of 120 place cells with bidirectional connections between adjacent cells, as depicted in Figure 1 below. As such, each place cell has a three degree window in which it receives external stimulus. If we refer to the figure below and define degree zero as the one at which cell 1 begins to receive stimulus, cell 1 will receive input at degrees 0-3, cell 2 will receive input at degrees 3-6, etc. We define a cell's place field as the region of the track in which the cell receives enough stimulus to send signals to neighboring cells. Initially, a cell's place field is synonymous with its three degree window of external stimulus, as it receives enough external input to cause it to stimulate other cells. However, when we introduce synaptic plasticity to our model, we expect the cell's place field to change. We will discuss this in more detail along with synaptic plasticity.
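
As a concrete sketch, the ring architecture and its input windows can be set up as follows. This is a minimal Python illustration, not the original simulation code: the names `stimulated_cell` and `weights`, the 0-based indexing, and the use of a dense weight matrix are our own choices (the initial recurrent weight of 0.5 is taken from the "Experimental Setup Revisited" section).

```python
import numpy as np

N_CELLS = 120
DEG_PER_CELL = 360 / N_CELLS  # each cell gets a 3-degree window of external input

def stimulated_cell(track_angle_deg):
    """Index (0-based) of the place cell receiving external stimulus
    when the rat is at the given track angle."""
    return int(track_angle_deg % 360 // DEG_PER_CELL)

# Bidirectional plastic connections between adjacent cells on the ring:
# weights[i, j] is the synaptic weight from presynaptic cell i to postsynaptic cell j.
W_INIT = 0.5  # initial recurrent weight (hypothetical placement here; stated later in the text)
weights = np.zeros((N_CELLS, N_CELLS))
for i in range(N_CELLS):
    weights[i, (i + 1) % N_CELLS] = W_INIT  # clockwise neighbor
    weights[i, (i - 1) % N_CELLS] = W_INIT  # counterclockwise neighbor
```

Each cell thus has exactly two plastic recurrent connections, and the external input is determined purely by the rat's angular position.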

Figure 1: A portion of the 120 Cell Ring. Architecture consists of 120 integrate-and-fire place cells with excitatory plastic bidirectional connections and nonplastic external inputs of uniform firing rate. Figure from Mathematics for Neuroscientists.
Figure 1 (120cellring2.png)

Conductance-based Integrate and Fire Model

Place cells communicate with each other by transmitting and receiving electrical signals through synapses, just like other neurons. Whether or not the cell sends a signal depends primarily on its voltage: if it exceeds a specified firing threshold, V_th, the cell will depolarize (reach a positive voltage), send an action potential to neighboring postsynaptic cells, and undergo an obligatory refractory period during which the cell is held at a reset voltage, V_reset, and cannot fire again. If the cell voltage does not exceed the threshold, it will remain silent. For our model, we set the threshold V_th to -54 mV and the reset voltage V_reset to -60 mV in our simulation. We also use a refractory period t_ref = 5 ms.
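
The threshold/reset rule can be sketched in a few lines of Python. This is a hypothetical helper, not part of the original simulation code; the function name and signature are our own.

```python
V_TH = -54.0     # firing threshold (mV)
V_RESET = -60.0  # reset potential (mV)
T_REF = 5.0      # refractory period (ms)

def update_spike_state(v, t, last_spike_t):
    """Apply the integrate-and-fire threshold/reset rule at time t (ms).
    Returns (new_voltage, spiked, last_spike_t)."""
    if last_spike_t is not None and t - last_spike_t < T_REF:
        # Held at the reset voltage during the obligatory refractory period.
        return V_RESET, False, last_spike_t
    if v >= V_TH:
        # Spike: emit an action potential, reset, and start the refractory period.
        return V_RESET, True, t
    return v, False, last_spike_t
```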

Figure 2: Integrate and Fire cell Circuit Diagram. Consists of leak current, capacitive current, and excitatory postsynaptic current. In our simulation, all synapses are excitatory. Figure from Mathematics for Neuroscientists.
Figure 2 (circuit2.png)

We model the place cell's voltage using the Conductance-based Integrate and Fire model, as pictured in the circuit diagram above. Cell voltage, or the voltage difference across the cell membrane, which we define as V_m, is modified via the flow of charged ions across the cell membrane. In our integrate and fire model, we implement three parallel components through which cell voltage is adjusted. The first parallel component represents our leak current, resulting from the flow of chloride ions through leaky channels. The chloride ions travel across the membrane to reach their resting potential of -70 mV, which we denote as V_Cl or V_rest (used later in the text). The flow of ions is limited by a conductance g_Cl, which remains fixed at a value of 1 mS/cm². S represents Siemens, the reciprocal of the unit of resistance Ω. When we put these terms together using Ohm's law and solve for the chloride current, we get:

I_Cl = g_Cl (V_m − V_Cl)

The second parallel component of the integrate and fire model consists of a capacitor, indicative of the cell membrane's ability to separate electrical charge (in the form of ions). For this component, the capacitive current I_C represents the current due to the change in transmembrane voltage. This current takes the following form:

I_C = C_m dV_m/dt

Here, C_m denotes the membrane's capacitance, or ability to store charge. We use the value C_m = 20 μF/cm².

The third parallel component of our circuit involves current from synaptic inputs. This input acts as the driving force for depolarization (in this setup we do not use any inhibitory synapses). As such, we use an excitatory reversal potential of V_syn = 0 mV, which helps raise the cell voltage toward the firing threshold in the presence of input. The magnitude of the input current is limited by the synaptic conductance, which we denote g_syn in the circuit diagram (also referred to as g_E). The synaptic conductance, unlike the leak conductance of the chloride channels, is a variable conductance (denoted by the arrow through the resistor) whose magnitude is discussed in the next section. The synaptic input current is represented by the following:

I_syn = g_syn (V_m − V_syn)

Since these three currents are in parallel, we can use Kirchhoff's current law, which states that:

I_Cl + I_C + I_syn = 0

Substituting values in for the currents yields the following:

C_m dV/dt + g_Cl (V − V_Cl) + g_syn (V − V_syn) = 0

Note that this differential equation only applies as long as the membrane voltage remains subthreshold. Having explained the voltage dynamics of the Integrate and Fire model, we now discuss how our variable synaptic conductance is adjusted.
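
For simulation, the subthreshold equation can be advanced with a simple forward-Euler step, using the parameter values above. This is an illustrative sketch, not the original code: the function name and the choice of Euler integration are our own, and the actual simulation may use a different scheme. With C_m in μF/cm² and conductances in mS/cm², time is in ms.

```python
C_M = 20.0    # membrane capacitance (uF/cm^2)
G_CL = 1.0    # leak conductance (mS/cm^2)
V_CL = -70.0  # chloride (leak) reversal potential (mV)
V_SYN = 0.0   # excitatory synaptic reversal potential (mV)

def euler_voltage_step(v, g_syn, dt):
    """One forward-Euler step (dt in ms) of the subthreshold membrane equation
    C_m dV/dt = -g_Cl (V - V_Cl) - g_syn (V - V_syn)."""
    dv_dt = (-G_CL * (v - V_CL) - g_syn * (v - V_SYN)) / C_M
    return v + dt * dv_dt
```

With g_syn = 0 the voltage relaxes toward V_Cl with membrane time constant C_m/g_Cl = 20 ms; excitatory input (g_syn > 0) pulls it up toward V_syn.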

Synaptic Conductance

In our model, we represent input from external stimuli or neighboring place cells as an increase in synaptic conductance. We define the synaptic weight as the degree of increase in conductance of the postsynaptic cell due to presynaptic cell firing. As such, the excitatory synaptic conductance (g_E) of each cell takes the following form:

τ_E dg_E/dt = −g_E + Σ_i w_i^inp Σ_n δ(t − T_n^inp)

τ_E represents the decay time constant for g_E, which we set at 5 ms. w_i^inp represents the synaptic weight from presynaptic cell i. T_n^inp holds all of the presynaptic spike times for a single cell. We use δ to denote the Dirac delta function (the unit impulse function). As such, cell conductance decays exponentially without input and increases by the degree of the synaptic weight upon external stimulus or the firing of neighboring cells. Note that if there are multiple inputs, their effect on conductance is additive, resulting in faster cell depolarization.
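
In discrete time, these conductance dynamics amount to exponential decay between spikes plus a jump of size w for each arriving spike. The sketch below is our own illustration (function name and event handling are hypothetical); note how coincident inputs sum.

```python
import math

TAU_E = 5.0  # synaptic conductance decay time constant (ms)

def step_conductance(g_e, dt, incoming_weights=()):
    """Advance g_E by dt ms: decay exponentially with time constant TAU_E,
    then add the synaptic weight of each presynaptic spike arriving at the
    end of this step. Multiple inputs are additive."""
    g_e = g_e * math.exp(-dt / TAU_E)
    return g_e + sum(incoming_weights)
```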

Figure 3: Integrate and Fire dynamics. Cell 1 receives external stimulus and has a feedforward connection to Cell 2. Depolarization occurs following increases in conductance (which are a result of presynaptic input). Figure adapted from Mathematics for Neuroscientists.
Figure 3 (gV2.png)

Having established the means by which we model our hippocampal place cells, we now turn to the question of how we modify our synaptic weights, which we answer in the next chapter.

Table 1: Integrate and Fire Ring Parameters
Parameter Value Description
V_rest, V_Cl   -70 mV      Resting Membrane Potential
V_reset        -60 mV      Membrane Reset Potential
V_th           -54 mV      Firing Threshold
t_ref          5 ms        Refractory Period
g_Cl           1 mS/cm²    Chloride (Leak) Conductance
C_m            20 μF/cm²   Membrane Capacitance
V_syn          0 mV        Excitatory Synaptic Reversal Potential
τ_E            5 ms        Synaptic Conductance Time Constant

Synaptic Plasticity

Synaptic plasticity is the ability of neurons to change the strength of their connections in response to certain stimuli or lack thereof. It is believed to play a major role in learning and the development of memories.

There exist several theories on how place cells in the hippocampus are involved in spatial memory. Most are based upon the fundamental concepts of synaptic plasticity: Hebbian learning and long term potentiation (LTP). These dictate that the synaptic weight, or the strength of the connection between two neurons, will increase (LTP) under certain conditions, such as an increased rate of firing or synchronized firing times, and decrease when the opposite occurs (known as long term depression, or LTD) [3]. The biological mechanisms of synaptic plasticity have not been fully elucidated, but plasticity is believed to depend on the binding of glutamate receptors and on calcium-dependent signaling cascades [1].

In our discussion of synaptic plasticity, we aim to explain two distinct phenomena that we observe in Hippocampal Place Cells: (1): the backward shift in the firing of place fields (the region of the track in which the cell fires) and (2): the final stabilization of place fields. While there are other notable experimental phenomena in the experiments, such as expansion of place fields and increases in place cell firing rates, we will not discuss these in depth. We compare two different plasticity models: Spike-time Dependent Plasticity (STDP) and Calcium Dependent Plasticity (CaDP).

Spike-time Dependent Plasticity

One of the most prominent plasticity theories is Spike Time-Dependent Plasticity (STDP), which dictates that an increase or decrease in synaptic weight between two cells is based upon the time between the firing of the cells and the order of the firing [11]. The magnitude of the synaptic weight change, dW = f(Δt), where Δt = t_post − t_pre, is dictated by the following rule:

f(Δt) = A+ e^(−Δt/τ+),   Δt > 0
f(Δt) = −A− e^(Δt/τ−),   Δt ≤ 0

A+ and A− represent coefficients for the maximum values of potentiation and depression per spike pair, respectively. We set A+ = 0.4 and A− = 0.42. τ+ and τ− are the time constants for LTP and LTD, both of which are set to 20 ms. We depict a plot of weight changes versus spike times below.

Figure 4: STDP Weight Plot vs. Spike Times
Figure 4 (STDP.png)

The rule is fairly intuitive: if the presynaptic cell fires before the postsynaptic cell, the weight increases. If the postsynaptic cell fires before the presynaptic cell, the weight decreases. The closer the spike times are, the greater the magnitude of the weight change. As this weight change model is only dependent on spike times, more spike pairs will result in more weight changes; the weights would thus be able to increase without bound. To prevent excessive weight increases, we impose a maximum weight, which we denote W_max, of 5 in our simulations involving STDP. We also set a lower weight limit at zero, ensuring no negative synaptic weights. A more detailed analysis of STDP dynamics can be found in Georgene Jalbuena's report on Spike-time Dependent Plasticity.
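
The rule together with the hard bounds can be written directly in Python. This is a sketch under the stated parameter values; the function names are ours, not the original code's.

```python
import math

A_PLUS, A_MINUS = 0.4, 0.42   # max potentiation/depression per spike pair
TAU_PLUS = TAU_MINUS = 20.0   # LTP/LTD time constants (ms)
W_MAX = 5.0                   # imposed upper weight bound

def stdp_dw(dt):
    """Additive STDP weight change for dt = t_post - t_pre (ms)."""
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU_PLUS)    # pre before post: LTP
    return -A_MINUS * math.exp(dt / TAU_MINUS)      # post before (or with) pre: LTD

def apply_stdp(w, dt):
    """Update a weight and clip it to the [0, W_MAX] bounds used in the text."""
    return min(W_MAX, max(0.0, w + stdp_dw(dt)))
```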

There exist a number of variations on this plasticity rule. Some adjust synaptic weights based on the spike times of the most recent spike pair, whereas others account for all of the pre/postsynaptic spike pairs. We will not discuss this in depth: we find that the weight changes from accounting for all spike pairs are slightly increased, but otherwise differ very little. Since we scale the degree of weight change with A+ and A−, we base our analysis of STDP on results with weight changes involving only the most recent spike pair.

A variant of STDP that we do consider, known as Multiplicative STDP, proposes a more feasible means of synaptic weight change by incorporating the current weight into the weight change equation (see [9]). We describe this weight modification scheme as dW = g(Δt), where:

g(Δt) = f(Δt) (W_max − W),   Δt > 0
g(Δt) = f(Δt) W,             Δt ≤ 0

This version of STDP allows for a more asymptotic approach to the weight bounds. Additionally, it produces a behavior more similar to the experimental results, which showed that more marked changes in synaptic weights occur in the first few laps and become lessened in later laps.
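
The multiplicative scheme can be sketched as follows (again an illustrative fragment with our own names, restating the additive kernel f so the snippet is self-contained):

```python
import math

A_PLUS, A_MINUS = 0.4, 0.42
TAU_PLUS = TAU_MINUS = 20.0  # ms
W_MAX = 5.0

def f(dt):
    """Additive STDP kernel for dt = t_post - t_pre (ms)."""
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    return -A_MINUS * math.exp(dt / TAU_MINUS)

def multiplicative_dw(w, dt):
    """g(dt): potentiation scales with the headroom (W_MAX - w), depression
    with the current weight w, so weights approach the bounds asymptotically."""
    return f(dt) * (W_MAX - w) if dt > 0 else f(dt) * w
```

Because the update vanishes at the bounds, early laps (weights near w_init) produce large changes and later laps progressively smaller ones, matching the experimentally observed saturation.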

The STDP model is currently implemented in the Double Rotation experiment model. It reproduces the backward shift and results in final place field stability, as found in the results of the experiment. While it has been successful in reproducing experimental results, the model itself is still flawed, as it requires an arbitrary upper limit on the maximum synaptic weight in order to achieve place field stability. As such, STDP may not be the most biologically realistic model of synaptic plasticity. We explore a newer alternative to STDP, known as Calcium Dependent Plasticity, in the next chapter.

Table 2: STDP Parameters
Parameter Value Description
A+      0.4     Weight increase coefficient
A−      0.42    Weight decrease coefficient
τ+      20 ms   Weight increase decay constant
τ−      20 ms   Weight decrease decay constant
W_max   5       Maximum synaptic weight

Calcium Dependent Plasticity: A better model?

We introduce Calcium Dependent Plasticity (CaDP) as a more biologically realistic model for synaptic plasticity in comparison to STDP. The model was developed by Dr. Harel Shouval of the University of Texas Medical School and colleagues at Brown University (see [10], [13]). All of the equations and parameter values introduced in this chapter have been taken or adapted from his 2008 publication [14]. While weight changes in the STDP model were based entirely on the order of firing and the duration of the spike interval, the synaptic weights in the CaDP model incorporate spike timing as well as a form of rate-dependent plasticity. While STDP sets arbitrary upper bounds to achieve place field stability, CaDP can be accompanied by metaplasticity to obtain a stabilized backward shift [14]. Additionally, CaDP explicitly accounts for the interactions of biological parameters known to be involved with plasticity, including calcium, magnesium, and glutamate receptors.

Plasticity Equations

The CaDP rule is based upon a scheme where calcium levels directly affect the synaptic weights. At low calcium levels, there is no change in weight. At moderate calcium levels, we observe a depression of weights. At high calcium levels, we observe a potentiation of weights. To achieve this effect, we define a function, Ω, with the above properties:

Ω([Ca²⁺]) = σ([Ca²⁺], α2, β2) − 0.5 × σ([Ca²⁺], α1, β1)

where σ represents a sigmoid function, which we define as:

σ(x, a, b) = e^(b(x−a)) / (1 + e^(b(x−a)))

We set the parameters α1 and α2 to control the lower and upper calcium bounds of the LTD region, respectively. β1 and β2 adjust the concavity of the sigmoid function, where σ approaches the Heaviside function as β → ∞. In our analysis of CaDP, we use the parameters (α1, α2, β1, β2) = (0.3, 0.5, 40, 40).
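
The sign function Ω can be coded directly from these definitions (a sketch using the parameter values above; `sigma` and `omega` are our own names):

```python
import math

ALPHA1, ALPHA2 = 0.3, 0.5  # lower/upper calcium bounds of the LTD region
BETA1 = BETA2 = 40.0       # sigmoid steepness (-> Heaviside as beta -> infinity)

def sigma(x, a, b):
    """Sigmoid centered at a with steepness b."""
    return math.exp(b * (x - a)) / (1.0 + math.exp(b * (x - a)))

def omega(ca):
    """Sign of plasticity vs. [Ca2+]: ~0 at low calcium, negative (LTD)
    between ALPHA1 and ALPHA2, positive (LTP) above ALPHA2."""
    return sigma(ca, ALPHA2, BETA2) - 0.5 * sigma(ca, ALPHA1, BETA1)
```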

While we might be inclined to base our entire weight change regime on this omega function, we must also realize that fluctuations in synaptic calcium levels would cause any increase in weight at high calcium levels to be nullified as calcium decreases back through the LTD concentrations. As such, we must also implement a rate function, η, such that the potentiation due to high calcium levels outweighs the depression incurred as calcium falls through moderate concentrations and returns to equilibrium levels. Our equation for η takes the form:

η([Ca²⁺]) = p1 ([Ca²⁺] + p4)^p3 / (([Ca²⁺] + p4)^p3 + p2^p3)

We set the parameters (p1, p2, p3, p4) to (2, 0.5, 3, 0.00001) in our simulations of CaDP. As the behavior of this equation is not intuitive, we depict η along with Ω, both as functions of calcium concentration, in Figures 5A and 5B.

Figure 5: Plasticity Equations. (A): η as a function of Calcium concentration. Notice that η monotonically increases with calcium concentration. (B): Ω as a function of Calcium. Note that LTD occurs when 0.3 < [Ca2+] < 0.5 and LTP occurs when [Ca2+] > 0.5. (C): Weight change as a function of Calcium level.
Figure 5 (CaDPr2.png)

With our two calcium dependent functions, Ω and η, we define our synaptic weight change function, depicted in Figure 5C, as follows:

dw_i/dt = k [Ω([Ca²⁺]_i) × η([Ca²⁺]_i)]

where w_i denotes the synaptic weight at synapse i. Note that [Ca²⁺]_i indicates the postsynaptic Calcium concentration at synapse i. k denotes a scaling factor introduced in order to generate the appropriate weight changes. In our comparison of CaDP with STDP in the next chapter, we use a value of k = 1/200. It should be noted that while STDP weight updates occur following postsynaptic spikes, the CaDP regime constantly updates synaptic weights over time.
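
The rate function and the resulting weight derivative can be sketched as follows. To keep the snippet self-contained we pass in the value of the Ω function (defined above in the text) as an argument; the names `eta` and `weight_derivative` are our own.

```python
P1, P2, P3, P4 = 2.0, 0.5, 3.0, 1e-5  # rate function parameters
K = 1.0 / 200.0                        # weight change scaling factor

def eta(ca):
    """Learning-rate function: increases monotonically with [Ca2+],
    saturating at P1 for large calcium concentrations."""
    return P1 * (ca + P4) ** P3 / ((ca + P4) ** P3 + P2 ** P3)

def weight_derivative(ca, omega_value):
    """dw/dt = k * Omega([Ca2+]) * eta([Ca2+]), with omega_value the
    output of the Omega function defined earlier in the text."""
    return K * omega_value * eta(ca)
```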

Table 3: CaDP Learning Rule Parameters
Parameter Value Description
p1    2             Coefficient of Rate Function (η)
p2    0.5           Rate Function Constant
p3    3             Rate Function Constant
p4    1 × 10⁻⁵      Rate Function Constant
α1    0.3           Lower Bound of Calcium for LTD
α2    0.5           Upper Bound of Calcium for LTD
β1    40            Rate of Transition to LTD
β2    40            Rate of Transition to LTP
k     1/200         Weight Change Coefficient

Calcium equations

As the weight changes caused by this model are dependent on Calcium, we also implement a number of equations to regulate the postsynaptic Calcium levels. Calcium level is regulated by the differential equation below:

d[Ca²⁺]/dt = I(t, t_pre, V) − [Ca²⁺]/τ_Ca

where τ_Ca represents the decay time constant of calcium, which we set to 50 ms. I represents the Calcium influx of the synapse, which we model with the following equation:

I(t, t_pre, V) = g_N f(t, t_pre) H(V)

Note that this Calcium influx value should not be confused with the synaptic input current, I_syn. g_N represents the conductance of the available N-Methyl D-Aspartate receptors (NMDARs) at the synapse (not to be confused with the excitatory conductance from our Integrate and Fire model). This can either be kept at a constant value or regulated by Metaplasticity (see "Metaplasticity"). When not using metaplasticity, we use a value of g_N = −1 × 10⁻³. f(t, t_pre) is the spike-timing dependent portion of the influx, which we define as:

f(t, t_pre) = I_ff Θ(t − t_pre) e^((t_pre − t)/τ_ff) + I_sf Θ(t − t_pre) e^((t_pre − t)/τ_sf)

This spike-time regime consists of two components: a fast component, with a time constant of τ_ff = 50 ms, and a slow component, with a time constant of τ_sf = 200 ms. Θ represents the Heaviside function. The constants I_ff and I_sf are chosen such that I_ff + I_sf = 1. In our model, we use I_ff = 0.7 and I_sf = 0.3. Note that in this equation, the spiking is only explicitly dependent on the presynaptic firing time and not the postsynaptic firing time. Furthermore, note that unlike the STDP rule, the input presynaptic spikes are not added linearly: this spike-time equation only accounts for the most recent presynaptic spike.

H(V) represents the voltage dependence of the NMDARs, which we model as:

H(V) = (V − V_Ca) / (1 + e^(−0.062 V)/3.57)

V represents the postsynaptic voltage. Note that this V differs from our membrane voltage, V_m, in that V is specific to each pre/postsynaptic pair, whereas V_m is specific to an individual place cell. We define the postsynaptic voltage as V = V_rest + BPAP, where V_rest is the resting potential and BPAP is the Back-propagating Action Potential, which is described later in the text. V_Ca is the reversal potential for Calcium, which we set at 130 mV. The term V − V_Ca is a simplified approximation of the driving force for Calcium influx. The term in the denominator is the voltage dependence of the Magnesium block on the NMDA receptor [4]. A depolarized voltage results in the displacement of Magnesium ions from blocking NMDA receptors, allowing for more Calcium influx. We plot f(t, t_pre) and H(V) in Figure 6.
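
The two factors controlling calcium influx can be sketched as follows (our own function names; `t_pre = None` encodes "no presynaptic spike yet", which is a convention we introduce):

```python
import math

I_FF, I_SF = 0.7, 0.3         # fast/slow proportions (sum to 1)
TAU_FF, TAU_SF = 50.0, 200.0  # fast/slow decay time constants (ms)
V_CA = 130.0                  # calcium reversal potential (mV)

def f_spike(t, t_pre):
    """Spike-timing term of the calcium influx. Depends only on the most
    recent presynaptic spike time t_pre; the Heaviside factor makes it
    zero before that spike."""
    if t_pre is None or t < t_pre:
        return 0.0
    return (I_FF * math.exp((t_pre - t) / TAU_FF)
            + I_SF * math.exp((t_pre - t) / TAU_SF))

def h_voltage(v):
    """NMDAR voltage dependence: the (V - V_Ca) driving force times the
    Mg2+-block factor 1 / (1 + exp(-0.062 V) / 3.57)."""
    return (v - V_CA) / (1.0 + math.exp(-0.062 * v) / 3.57)
```

Since physiological voltages stay below V_Ca, H(V) is negative; together with the negative NMDAR conductance g_N this yields a positive calcium influx, largest at depolarized voltages.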

Figure 6: Control of Calcium Influx. (A): f as a function of time after presynaptic spiking. Presynaptic input is required for f to be nonzero. (B): H as a function of synaptic voltage. The magnitude of H increases with depolarized voltages, peaking around 10 mV.
Figure 6 (fH2.png)

To regulate postsynaptic voltage, we use a Back-propagating Action Potential (BPAP), which increases whenever the postsynaptic cell fires. The BPAP represents the postsynaptic cell's feedback, indicative of how recently the postsynaptic cell has spiked. The magnitude of the BPAP is dependent on the following equation:

BPAP(t) = 100 (I_fb e^(−t/τ_fb) + I_sb e^(−t/τ_sb))

Similar to f, the decay of the BPAP is regulated by a fast and a slow component. Upon a postsynaptic spike, the BPAP causes the voltage to reach a maximum of 100 mV above the resting potential. We allow a delay of 2 ms between postsynaptic firing and the delivery of the BPAP to the dendrites. The fast component has time constant τ_fb = 3 ms and the slow component has time constant τ_sb = 25 ms. I_fb and I_sb are also chosen such that they add up to one: here, we use I_fb = 0.75 and I_sb = 0.25. Note that whenever a cell fires, the BPAP is sent through all dendrites, increasing the voltage at all of its presynaptic connections. In this plasticity model, the BPAP provides postsynaptic feedback and drives calcium influx, which allows for the potentiation of synaptic weights. Along with the function f, the BPAP ensures Calcium dependence on both pre- and postsynaptic spike times.
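
The BPAP waveform, including the 2 ms dendritic delay, can be sketched as follows (our own function name and `None`-for-no-spike convention):

```python
import math

I_FB, I_SB = 0.75, 0.25     # fast/slow proportions (sum to 1)
TAU_FB, TAU_SB = 3.0, 25.0  # fast/slow decay time constants (ms)
BPAP_DELAY = 2.0            # ms between somatic spike and dendritic arrival

def bpap(t, t_post):
    """Back-propagating action potential (mV above rest) at time t,
    given the last postsynaptic spike time t_post. Peaks at 100 mV
    when it arrives at the dendrites, then decays biexponentially."""
    if t_post is None:
        return 0.0
    dt = t - (t_post + BPAP_DELAY)
    if dt < 0:
        return 0.0  # not yet arrived at the dendrites
    return 100.0 * (I_FB * math.exp(-dt / TAU_FB) + I_SB * math.exp(-dt / TAU_SB))
```

The postsynaptic voltage used by H(V) is then V = V_rest + BPAP(t, t_post).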

Table 4: Calcium Influx Parameters
Parameter Value Description
τ_Ca   50 ms                     Calcium influx decay constant
g_N    −1 × 10⁻³ μM/(ms·mV)      NMDAR conductance
I_ff   0.7                       Proportion of fast decay of f
I_sf   0.3                       Proportion of slow decay of f
τ_ff   50 ms                     f fast decay time constant
τ_sf   200 ms                    f slow decay time constant
V_Ca   130 mV                    Calcium Reversal Potential
I_fb   0.75                      Proportion of fast BPAP decay
I_sb   0.25                      Proportion of slow BPAP decay
τ_fb   3 ms                      Fast BPAP decay time constant
τ_sb   25 ms                     Slow BPAP decay time constant


Metaplasticity

To ensure the stabilization of synaptic weights after several laps around the track, we use metaplasticity to limit NMDAR conductance (g_N) after repeated high-frequency postsynaptic stimulation. Metaplasticity follows a voltage-dependent kinetic model of NMDAR insertion and removal from the synapse, as prescribed by the equation below:

dg_N/dt = a [k+ (g_t − g_N) − k− (V − V_rest)^n g_N]

a is a scaling factor used to control the rate of change in NMDAR conductance. k+ is the insertion rate of unused NMDA receptors into the synapse, which we set at 8 × 10⁻⁵. k− (V − V_rest)^n is the removal rate of NMDA receptors from the synapse: we use k− = 8 × 10⁻⁷, n = 2, and V_rest as the cell resting potential. As with our voltage-dependent H-function, V in the NMDAR conductance equation represents the postsynaptic voltage (as opposed to the membrane voltage). g_t is the maximum value for NMDAR conductance, which we set at −1 × 10⁻³. Note that all NMDAR conductance values are negative. We depict equilibrium NMDAR conductances as a function of voltage in Figure 7 below.
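
The metaplasticity kinetics can be advanced with a forward-Euler step (a sketch with our own function name and integration choice, using the parameter values above):

```python
A_META = 1.0    # metaplasticity rate coefficient a
K_PLUS = 8e-5   # NMDAR insertion rate k+
K_MINUS = 8e-7  # NMDAR removal rate coefficient k-
N_EXP = 2       # voltage exponent n
G_T = -1e-3     # maximum NMDAR conductance g_t (conductances are negative)
V_REST = -70.0  # resting potential (mV)

def step_gn(g_n, v, dt):
    """One forward-Euler step (dt in ms) of the metaplasticity equation
    dg_N/dt = a [k+ (g_t - g_N) - k- (V - V_rest)^n g_N]."""
    dg = A_META * (K_PLUS * (G_T - g_n) - K_MINUS * (v - V_REST) ** N_EXP * g_n)
    return g_n + dt * dg
```

At rest (V = V_rest) the removal term vanishes and g_N relaxes to g_t; sustained depolarization drives g_N toward zero, reducing the magnitude of calcium influx.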

Figure 7: Voltage-Dependence of Metaplasticity. Plot of equilibrium conductance values at fixed voltages. Depolarized voltages result in lower conductance values.
Figure 7 (g.png)

Metaplasticity acts to reduce the number of available NMDA receptors when the cell voltage is at consistently high levels. That is, upon increased firing rates, fewer NMDA receptors are available for calcium influx, resulting in less synaptic weight change, to the point where the synaptic weights stabilize. As such, we expect that the implementation of metaplasticity should allow us to limit the increase in synaptic weights without having to impose a fixed upper weight bound. Note that metaplasticity does not modify the characteristics of LTD under normal firing rates, meaning that we must still implement a fixed lower weight bound in our plasticity model.

Table 5: Metaplasticity Parameters
Parameter Value Description
a     1                         Metaplasticity rate coefficient
k+    8 × 10⁻⁵                  Rate of NMDAR insertion
k−    8 × 10⁻⁷                  NMDAR removal rate coefficient
g_t   −1 × 10⁻³ μM/(ms·mV)      Maximum NMDAR conductance
n     2                         Exponential voltage-dependence of NMDAR removal

Spike-Time Dependence of CaDP

In order to analyze the spike-time dependence of the CaDP model, we implement a regime of paired stimulation in which we induce cell firing at prespecified times and monitor the synaptic weights. We use 50 pre-post spike pairs stimulated at a constant rate of 1 Hz.

When plotting synaptic weight against the difference in spike times, we find that the CaDP curve emulates the STDP curve: we observe LTP when presynaptic firing precedes postsynaptic firing (Δt > 0, where Δt = tpost - tpre) and LTD when postsynaptic firing precedes presynaptic firing (Δt < 0). However, with the CaDP model, we also get a second region of LTD when the spike interval becomes too large (see Figure 8). The existence of this second depression window has been debated, though it is supported by some recent research [7], [12].

Figure 8: Spike-time dependence of CaDP. Plot depicts the final weights between pre/postsynaptic cells after 50 spike pairs. For this plot, we use k = 11500.
Figure 8 (bestCaDPscaled.png)

Calcium dependent plasticity involves a higher order of complexity in its dynamics and synaptic weight modifications in comparison to STDP. When implementing CaDP alongside the Integrate and Fire model, there appear to be redundancies: even though they may be related, we separate synaptic voltage from membrane voltage, NMDAR conductance from synaptic conductance, and so on. CaDP still incorporates presynaptic and postsynaptic spike times, via the f function and the BPAP respectively. While we do not discuss it in depth, CaDP also incorporates aspects of rate-dependent plasticity: higher spiking frequencies result in more potentiation of weights, whereas in STDP the frequency of spike pairs is disregarded. We will discuss more of the similarities and differences between the two plasticity models in the next chapter.

Comparison: STDP vs. CaDP

Having introduced both Spike-time Dependent Plasticity and Calcium Dependent Plasticity, we now compare the two models in their ability to explain the phenomena associated with spatial memory. We focus on two of these phenomena: (1) the backward shift of Hippocampal place fields and (2) the final stabilization of place fields.

Experimental Setup Revisited

We must first revisit the experimental setup and simulation methods that we use to compare the two plasticity models. We have a rat run clockwise around a circular track. In our models, we vary the number of simulated laps from 20-30. The 120 place cells used to model the ring are evenly spaced along the track and numbered in a clockwise fashion, with each cell receiving external input over a three-degree arc of the track. We set the duration of external stimulus for each place cell at 100 ms per lap. When the rat is within a cell's place field, the cell receives external input at a uniform rate of 50 Hz (that is, it receives synaptic input every 20 ms). The magnitude of external input, which we denote winp, has a fixed, nonplastic value of 10. This synaptic weight is large enough to ensure cell firing whenever external input is present. Within the 120-cell ring, each cell has bidirectional connections to its neighboring cells, as previously depicted in Figure 1 in Chapter 2. Each of these connections is plastic, and we set each to an initial weight winit = 0.5.
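The setup above can be sketched as simulation scaffolding: a ring of 120 cells with bidirectional nearest-neighbor plastic connections at winit = 0.5, a fixed external input weight winp = 10, and input events every 20 ms over the 100 ms stimulus window. The data-structure choices below are illustrative, not taken from the original code.

```python
# Scaffolding for the 120-cell ring: bidirectional nearest-neighbor plastic
# connections (initial weight 0.5) plus a fixed, nonplastic external input
# weight of 10 per cell.

N_CELLS = 120
W_INIT = 0.5       # initial plastic weight between ring neighbors
W_INP = 10.0       # fixed external input weight (guarantees firing)
INPUT_DT = 20.0    # ms between external input events (50 Hz)
INPUT_DUR = 100.0  # ms of external stimulus per cell per lap

# weights[(i, j)] is the plastic weight of the synapse from cell i to cell j.
weights = {}
for i in range(N_CELLS):
    j = (i + 1) % N_CELLS      # clockwise neighbor (ring wraps around)
    weights[(i, j)] = W_INIT   # forward (clockwise) connection
    weights[(j, i)] = W_INIT   # backward (counterclockwise) connection

# Times (ms, relative to stimulus onset) at which a cell inside its place
# field receives an external input event.
input_times = [t * INPUT_DT for t in range(int(INPUT_DUR / INPUT_DT))]

print(len(weights), "plastic synapses;", len(input_times), "inputs per lap")
```

With 120 cells and two plastic connections per neighboring pair, the ring contains 240 plastic synapses, and each cell receives five input events per lap.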

STDP Dynamics

Using the parameters we defined in the previous chapters, we run the simulation and track the synaptic weights between cells 1 and 2 on the ring over each lap. A plot of this data is given in Figure 9.

Figure 9: STDP Synaptic Weight Changes
Figure 9 (Sw.png)

We note that the connection from cell 1 to 2 increases in weight each lap while the reverse connection decreases each lap. This occurs because the clockwise connection is reinforced by the order of external input: since the rat travels through cell 1's firing field right before cell 2's, the STDP rule favors strengthening the connection from 1 to 2 and weakening the connection from 2 to 1. The weights change in an approximately uniform, stepwise fashion each lap until they reach the extrema set by the imposed weight bounds.
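The stepwise weight changes above follow from the pairwise additive STDP rule with hard bounds. The sketch below assumes the standard exponential-window form; the amplitudes, time constants, and bounds are placeholder values (the actual values are defined in an earlier chapter, not in this excerpt).

```python
import math

# Pairwise additive STDP update with hard weight bounds. A_PLUS/A_MINUS and
# the time constants are illustrative placeholder values.

A_PLUS, A_MINUS = 0.05, 0.05      # LTP / LTD amplitudes (assumed)
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # window time constants in ms (assumed)
W_MIN, W_MAX = 0.0, 10.0          # hard weight bounds (assumed)

def stdp_update(w, dt):
    """Apply one pre/post pairing with dt = t_post - t_pre (ms)."""
    if dt > 0:    # pre before post: potentiate
        w += A_PLUS * math.exp(-dt / TAU_PLUS)
    elif dt < 0:  # post before pre: depress
        w -= A_MINUS * math.exp(dt / TAU_MINUS)
    return min(max(w, W_MIN), W_MAX)  # clip to the imposed bounds

# Cell 1 fires just before cell 2 on every lap, so the 1->2 weight steps up
# by the same amount each lap while the 2->1 weight steps down (cf. Figure 9).
w_fwd, w_bwd = 0.5, 0.5
for lap in range(20):
    w_fwd = stdp_update(w_fwd, +5.0)
    w_bwd = stdp_update(w_bwd, -5.0)
print(f"forward: {w_fwd:.3f}, backward: {w_bwd:.3f}")
```

Because the update is independent of the current weight, each lap adds or subtracts the same increment, producing the uniform staircase seen in Figure 9 until a bound is hit.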

As the backward shift of hippocampal place fields is of primary importance to us, we track the firing locations of the place cells during the simulation. We denote the rat's location by its angular position along the track, setting zero degrees to the position where cell 1 first receives external stimulus. We monitor the degree on the track at which the cell first fires during each lap, which we call its firing degree, over the course of the simulation. We plot cell 2's firing degree in Figure 10 below.

Figure 10: STDP Backward Shift
Figure 10 (Sbshift.png)

The decrease in the firing degree of the cell is a result of earlier firing positions along the track and indicative of the backward shift of hippocampal place fields. The earlier firing positions are a result of the strengthening of synaptic weights: stronger weights allow for a greater increase in conductance upon presynaptic firing, allowing for faster depolarization of the postsynaptic cell and earlier firing, resulting in the backward shift.

The second point of interest in the simulation lies in the stabilization of the place fields. We notice a stabilization of place fields which coincides with the synaptic weights reaching their upper bounds. This implies that we do not achieve place field stability without applying an upper weight bound when using STDP, detracting from the feasibility of STDP as a standalone mechanism of synaptic plasticity.

We also analyze multiplicative STDP as an improved variation of STDP. We display the forward and backward synaptic weights as a function of lap in Figure 11.

Figure 11: Multiplicative STDP Weight Changes
Figure 11 (mSw.png)

We observe an asymptotic trend of the weight changes in this regime as opposed to the stepwise form of weight changes in regular, additive STDP. We also depict the place field backward shift in Figure 12.

Figure 12: Multiplicative STDP Backward Shift
Figure 12 (mSbshift.png)

When using multiplicative STDP, we see a faster backward shift in earlier laps due to the faster initial increase in synaptic weight. The shift also slows significantly in later laps as the weight increase slows, leading to final place field stabilization. This trend appears to agree with the nature of the backward shift in experimental results [9]. While this is a slightly improved fit to experimental data compared to additive STDP, the weight bounds are directly incorporated into the weight modification equations, making multiplicative STDP a better approximation of results than an explanation of the mechanisms behind synaptic plasticity.
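The asymptotic trend can be illustrated with the common weight-dependent form of multiplicative STDP, in which potentiation scales with the distance to the upper bound and depression with the distance to the lower bound. The rule's exact form and all parameter values below are assumptions for illustration, not the document's equations.

```python
import math

# Multiplicative (weight-dependent) STDP: the update shrinks as the weight
# approaches its bound, so weights converge asymptotically instead of
# stepping uniformly as in additive STDP. Parameters are illustrative.

ETA = 0.1                 # learning rate (assumed)
TAU = 20.0                # STDP window time constant in ms (assumed)
W_MIN, W_MAX = 0.0, 10.0  # bounds built directly into the rule

def mstdp_update(w, dt):
    k = math.exp(-abs(dt) / TAU)
    if dt > 0:
        return w + ETA * (W_MAX - w) * k  # potentiation slows near W_MAX
    return w - ETA * (w - W_MIN) * k      # depression slows near W_MIN

w = 0.5
trajectory = [w]
for lap in range(30):          # repeated pre-before-post pairings, one per lap
    w = mstdp_update(w, +5.0)
    trajectory.append(w)

# Successive increments shrink as w approaches W_MAX (asymptotic trend).
increments = [b - a for a, b in zip(trajectory, trajectory[1:])]
print(f"first step: {increments[0]:.3f}, last step: {increments[-1]:.3f}")
```

Because each increment is proportional to the remaining distance to the bound, the weight follows a geometric approach to W_MAX, reproducing the slowing backward shift in later laps.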

CaDP Dynamics

Having explained the advantages and disadvantages of STDP, we now look at CaDP and compare its simulation results with STDP. First, we will depict the changes in calcium concentrations in different synapses to better illustrate the mechanisms behind CaDP.

Figure 13: Calcium Dynamics. Dashed lines at 0.2 and 0.5 indicate the LTD and LTP cutoff calcium levels.
Figure 13 (cal1.png)

The plot above shows calcium changes in the forward connection from cell 1 to cell 2 and the two counterclockwise connections associated with cell 1. We observe that the clockwise synapse reaches a calcium level above the LTP threshold due to effective pre-post firing, while presynaptic firing without a postsynaptic response results in LTD calcium levels. Notice also that the initial rise in calcium concentration from baseline results from the peak in the f function triggered by presynaptic spiking, and the subsequent increases of calcium into the LTP region result from postsynaptic feedback via the back-propagating action potential. Furthermore, persistent postsynaptic cell firing following presynaptic stimulus sustains an LTP-level calcium concentration and allows for increased potentiation of synaptic weights.

The changes in calcium levels yield the following weight changes and backward shift over the simulation as depicted below in Figure 14.

Figure 14: Modeling Spatial Memory using CaDP. (A) Synaptic Weights over the course of the simulation. (B) Cell 2 Firing Degree during each lap.
Figure 14 (CaDPring2.png)

These plots show that CaDP is able to reproduce the same weight changes and backward shift found in STDP, with a couple of notable differences. In STDP, weight changes occur only following the occurrence of a postsynaptic spike, whereas in CaDP, weights are constantly being changed as a function of calcium levels. Furthermore, the depression of the backward connections occurs in an almost entirely different manner: in STDP, LTD was an inherent part of the learning rule whenever a postsynaptic spike preceded a presynaptic spike; in CaDP, LTD results from the absence of, or an extended delay before, the postsynaptic spike following presynaptic stimulation. As such, the second LTD window is an essential consequence of the CaDP model and drives the depression of synaptic weights.
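The continuous nature of the CaDP update can be sketched with a weight rule in the spirit of Shouval et al. [10], dw/dt = η(Ca)(Ω(Ca) - w), where Ω maps the calcium level to a target weight and η is a calcium-dependent learning rate. The sigmoid form of Ω, the choice of η, and all constants below are illustrative assumptions; only the thresholds 0.2 and 0.5 come from the document (Figure 13).

```python
import math

# Continuous calcium-dependent weight update: w moves toward a calcium-
# dependent target Omega(Ca) at a calcium-dependent rate eta(Ca), at every
# time step rather than only at spike events. Functional forms are assumed.

THETA_D, THETA_P = 0.2, 0.5  # LTD / LTP calcium thresholds (Figure 13)
BETA = 80.0                  # sigmoid steepness (assumed)
DT = 1.0                     # integration step, ms

def sig(x):
    return 1.0 / (1.0 + math.exp(-BETA * x))

def omega(ca):
    # Target weight: ~0.25 at low calcium (no change from baseline), ~0
    # between the thresholds (depression), ~1 above THETA_P (potentiation).
    return 0.25 + sig(ca - THETA_P) - 0.25 * sig(ca - THETA_D)

def eta(ca):
    return 0.01 * ca         # learning rate grows with calcium (assumed)

w = 0.5
for _ in range(2000):        # sustained LTP-level calcium (effective pairing)
    w += DT * eta(0.8) * (omega(0.8) - w)
w_after_ltp = w
for _ in range(2000):        # sustained LTD-level calcium (pre without post)
    w += DT * eta(0.35) * (omega(0.35) - w)
w_after_ltd = w
print(f"after LTP drive: {w_after_ltp:.3f}, after LTD drive: {w_after_ltd:.3f}")
```

Under this assumed rule, calcium held above the LTP threshold drives the weight up, while calcium trapped between the two thresholds drives it down, reproducing the qualitative behavior of both LTD windows.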

In these plots, we impose the same upper weight bound that we use for STDP, leading to a similar stabilization of place field firing. Removing this limit allows synaptic weights to increase without bound, as depicted in Figure 15. The problem of unbounded weight changes thus remains when implementing CaDP alone. However, CaDP provides a more sophisticated, biologically accurate account of how the synaptic weight changes occur, and we can attempt to curb the backward shift with more realistic mechanisms, such as metaplasticity, which we explore below.

Figure 15: Unbounded CaDP. Without imposing any methods to curb potentiation, synaptic weights will continue to increase without limit.
Figure 15 (unboundedW.png)

Applying Metaplasticity to CaDP

Now that we have demonstrated the ability of CaDP to replicate the results produced by STDP, we remove the weight bound and attempt to use metaplasticity to stabilize the final synaptic weights and place field distributions. Here we show the effect of metaplasticity on the synaptic weights over the course of the simulation.

Figure 16: Metaplasticity with CaDP stabilizes synaptic weights. Here we depict the stabilization of synaptic weights with the scaling of aa = 1 (left) and aa = 4 (right).
Figure 16 (metaW2.png)

We see that the weights stabilize at a magnitude of around 8-9, showing that metaplasticity can stabilize the final synaptic weights without an upper weight bound. As depicted in Figure 16, an increased rate of metaplasticity causes slower potentiation/depression of weights. Note that metaplasticity only limits the potentiation of weights: for depressed weights, a lower bound of zero must still be applied to avoid negative weights. The weights reach equilibrium once the firing rate is high enough that any further increase in weight would drive the removal of NMDA receptors faster than their insertion, decreasing the available NMDARs and halting additional calcium influx and synaptic weight change. These fluctuations in NMDA receptor availability are depicted in the NMDAR conductance plot in Figure 17.

Figure 17: NMDAR conductance in Metaplasticity. Metaplasticity involves slow insertion and removal of NMDAR receptors. Increased firing causes removal of NMDAR receptors in lap 20, reaching a new equilibrium conductance.
Figure 17 (mCaDPg.png)

Note that the NMDAR conductance decreases during periods of input to the postsynaptic cell. The decrease in NMDAR conductance slightly decreases the amount of calcium influx, limiting the potentiation/depression of weights. We witness a sharper decline in conductance once place fields overlap and firing rates increase. Once the cells fire fast enough that NMDAR removal equals or outpaces reinsertion, the conductance stabilizes, allowing the synaptic weights to stabilize as well.
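The slow decline and partial recovery of NMDAR conductance can be reproduced by integrating the metaplasticity dynamics over time. As before, the ODE form dg/dt = a(k+(gt - g) - k-(V - Vrest)^n g) and the resting potential are assumptions consistent with the terms described in the text; the parameters come from Table 5.

```python
# Forward-Euler integration of the assumed metaplasticity ODE
#   dg/dt = a*(k_plus*(g_t - g) - k_minus*(V - V_rest)**n * g),
# showing NMDAR conductance declining during a sustained depolarized episode
# and slowly relaxing back toward g_t at rest (cf. Figure 17).

A = 1.0                      # metaplasticity rate coefficient (Table 5)
K_PLUS, K_MINUS = 8e-5, 8e-7
G_T, N, V_REST = -1e-3, 2, -70.0
DT = 1.0                     # integration step, ms

def step(g, v):
    dg = A * (K_PLUS * (G_T - g) - K_MINUS * (v - V_REST) ** N * g)
    return g + DT * dg

g = G_T
# 10 s of sustained depolarization at -30 mV (high-activity episode) ...
for _ in range(10000):
    g = step(g, -30.0)
g_depol = g
# ... then 10 s back at rest, during which receptors are slowly reinserted.
for _ in range(10000):
    g = step(g, V_REST)
g_recovered = g

print(f"after depolarization: {g_depol:.3e}, after recovery: {g_recovered:.3e}")
```

Because reinsertion proceeds at the slow rate k+, recovery toward gt takes far longer than the activity-driven removal, matching the slow conductance dynamics described for Figure 17.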

Figure 18: Unbounded Backward Shift. Plot depicts the number of spikes for cell 60 on each degree of the track during each lap. Note consistent firing within place field until lap 20, when place field expands to cover the entire track.
Figure 18 (spikeplot.png)

While metaplasticity suffices to stabilize the final weights, with our current set of parameters we are unable to achieve place field stability. As displayed by the matrix of firing degrees for cell 60 (shown above in Figure 18), after continued potentiation of weights, the cell's place field rapidly expands to span the entire track. This is a result of the high firing rate at which metaplasticity takes effect: the upper bound on firing rate is reached only once the synaptic weights have become strong enough for the cells to stimulate each other independently of external input, resulting in continuous firing regardless of position and place fields spread across the entire track. More tuning of metaplasticity will be needed to obtain the desired final place field stability, most likely by increasing the rate of NMDAR removal or decreasing the rate of NMDAR insertion, which should lower the maximum firing rate and halt the backward shift.

After evaluating additive and multiplicative STDP alongside CaDP in explaining the phenomena associated with spatial memory in hippocampal place cells, we find that CaDP provides the most reasonable explanation for the mechanisms of synaptic plasticity. In CaDP, the mechanisms of potentiation and depression of weights are based upon biologically-supported pathways, whereas STDP does not explicitly draw connections between its weight changes and the biological mechanisms involved. In particular, the stabilization of synaptic weights is reasonably explained by coupling metaplasticity with CaDP, where STDP requires the use of an unrealistic upper weight limit to stabilize its place fields. The additional parameters involved with the model allow for more variations in the behavior of the system under different calcium LTD/LTP bounds, conductances, etc. One downside from a computational standpoint, especially when modeling a large network, lies in the additional parameters that are required to model the plasticity compared to STDP, which requires fewer parameters and yields similar results. However, if more biological accuracy and a wider range of tunability is desired, CaDP is the model of choice for synaptic plasticity.

Conclusions/Future Work

In this report we discussed the modeling of hippocampal place cells using two different plasticity models: Spike-time Dependent Plasticity and Calcium Dependent Plasticity. We have described the equations behind the plasticity models and their specific effects on the cell network interactions. We have discussed some of the differences between the two models, which include CaDP's rate-dependent plasticity as well as its inclusion of a second LTD window. We demonstrated that both models account for the backward shift of place fields observed experimentally. We also find that by coupling metaplasticity with CaDP we can achieve final stability of synaptic weights, whereas STDP relies on weight bounds to achieve this effect.

While implementing metaplasticity successfully stabilizes synaptic weights, our simplified model does not exhibit final place field stability. Further work will be needed to tune the parameters involved with metaplasticity in order to obtain final place field stability. Once we are able to achieve the desired effects with metaplasticity, a possible area of interest would be to develop a means of coupling metaplasticity with STDP and comparing the results to that of CaDP with metaplasticity.

Other areas of interest include augmenting the 120-cell ring model to contain overlapping place fields at the beginning of the simulation and observing how this affects the final weight distribution. Another modification would be to implement a Gaussian distribution of place cell input firing rates instead of a uniform distribution with a constant input rate. The initial synaptic weight distribution could also be modified so that the 120-cell network contained connections among all cells instead of the simplified ring architecture, with randomized initial weights to make the initial state of the place cells more realistic. Once these modifications have been well studied, the model could be incorporated into a larger model of the hippocampal neural network, involving inhibitory connections as well as grid cell and head-direction cell inputs.

The backward shift and the stabilization of place fields have been closely associated with the development of spatial memory. Modeling and understanding the biochemical and biophysical processes behind synaptic plasticity, and how they explain experimental results, will aid us in better comprehending the mechanisms behind spatial memory. Further progress in this field may provide us with the knowledge to better understand not only how we develop a memory of our environment but also how plasticity mechanisms in the hippocampus are involved in diseases such as Alzheimer's disease or epilepsy.


Acknowledgments

First and foremost, I would like to thank my mentor, Katie Ward, for all the help and guidance that she has given me this summer. I would also like to thank Dr. Cox for his support and encouragement. My thanks also goes out to Georgene Jalbuena for assisting me in learning LaTeX. Finally, I would like to thank Rice University's VIGRE program for sponsoring this research.


References

  1. Bear, M.F. and Connors, B.W. and Paradiso, M.A. (2007). Neuroscience: Exploring the Brain (3rd ed.). Lippincott Williams & Wilkins.
  2. Gabbiani, F. and Cox, S.J. (2010). Mathematics for Neuroscientists. Elsevier Academic Press.
  3. Hebb, D.O. (1949). The Organization of Behavior; A Neuropsychological Theory.
  4. Jahr, C.E. and Stevens, C.F. (1990). Voltage dependence of NMDA-activated macroscopic conductances predicted by single-channel kinetics. The Journal of Neuroscience, 10(9), 3178.
  5. Lee, I. and Yoganarasimha, D. and Rao, G. and Knierim, J.J. (2004). Comparison of population coherence of place cells in hippocampal subfields CA1 and CA3. Nature, 430(6998), 456–459.
  6. Mehta, M.R. and Barnes, C.A. and McNaughton, B.L. (1997). Experience-dependent, asymmetric expansion of hippocampal place fields. Proceedings of the National Academy of Sciences, 94(16), 8918.
  7. Nishiyama, M. and Hong, K. and Mikoshiba, K. and Poo, M. and Kato, K. (2000). Calcium stores regulate the polarity and input specificity of synaptic modification. Nature, 408(6812), 584–588.
  8. O'Keefe, J. and Nadel, L. (1978). The Hippocampus as a Cognitive Map. Oxford: Clarendon Press.
  9. Rubin, J. and Lee, D.D. and Sompolinsky, H. (2001). Equilibrium properties of temporally asymmetric Hebbian plasticity. Physical Review Letters, 86(2), 364–367.
  10. Shouval, H.Z. and Bear, M.F. and Cooper, L.N. (2002). A unified model of NMDA receptor-dependent bidirectional synaptic plasticity. Proceedings of the National Academy of Sciences, 99(16), 10831.
  11. Song, S. and Miller, K.D. and Abbott, L.F. (2000). Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. nature neuroscience, 3, 919–926.
  12. Wittenberg, G.M. and Wang, S.S.H. (2006). Malleability of spike-timing-dependent plasticity at the CA3–CA1 synapse. The Journal of neuroscience, 26(24), 6610.
  13. Yeung, L.C. and Shouval, H.Z. and Blais, B.S. and Cooper, L.N. (2004). Synaptic homeostasis and input selectivity follow from a calcium-dependent plasticity model. Proceedings of the National Academy of Sciences of the United States of America, 101(41), 14943.
  14. Yu, X. and Shouval, H.Z. and Knierim, J.J. (2008). A biophysical model of synaptic plasticity and metaplasticity can account for the dynamics of the backward shift of hippocampal place fields. Journal of neurophysiology, 100(2), 983.
