

5.2 Statistics of spike trains

In this section, we introduce some important concepts for the statistical description of neuronal spike trains. A central notion will be the interspike interval distribution which is discussed in the framework of a generalized input-dependent renewal theory. We start in Section 5.2.1 with the definition of renewal systems and turn then in Section 5.2.2 to interval distributions. The relation between interval distributions and neuronal models will be the topic of Sections 5.3 and 5.5.

5.2.1 Input-dependent renewal systems

We consider a single neuron such as an integrate-and-fire or SRM unit. Let us suppose that we know the last firing time $ \hat{{t}}$ < t of the neuron and its input current I. In formal spiking neuron models such as the SRM, the membrane potential u is then completely determined, i.e.,

$u(t\,|\,\hat{t}) = \eta(t - \hat{t}) + \int_0^{\infty} \kappa(t - \hat{t}, s)\, I(t - s)\, \mathrm{d}s$ ,    (5.1)

cf. Eq. (4.24). In particular, for the integrate-and-fire model with membrane time constant $\tau_m$ and capacitance $C$ we have

$u(t\,|\,\hat{t}) = u_r \exp\!\left(-\frac{t-\hat{t}}{\tau_m}\right) + \frac{1}{C} \int_0^{t-\hat{t}} \exp\!\left(-\frac{s}{\tau_m}\right) I(t - s)\, \mathrm{d}s$ ,    (5.2)

cf. Eq. (4.10). In general, part or all of the input current I could arise from presynaptic spikes. Here we simply assume that the input current I is a known function of time.

Given the input and the firing time $ \hat{{t}}$ we would like to predict the next action potential. In the absence of noise, the next firing time t(f) of a neuron with membrane potential (5.1) is determined by the threshold condition u = $ \vartheta$. The first threshold crossing occurs at

$t^{(f)} = \min\big\{ t > \hat{t} \;\big|\; u(t\,|\,\hat{t}) \ge \vartheta \big\}$ .    (5.3)

In the presence of noise, however, we are no longer able to predict the exact firing time of the next spike, but only the probability that a spike occurs. The calculation of the probability distribution of the next firing time for arbitrary time-dependent input I is one of the major goals in the theory of noisy spiking neurons.
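In the absence of noise, the threshold condition (5.3) can be evaluated directly. The following Python sketch does this numerically for the integrate-and-fire potential of Eq. (5.2) driven by a constant current; all parameter values are illustrative assumptions, and the integral is evaluated in closed form for constant input.

```python
import numpy as np

tau_m = 10.0    # membrane time constant (ms); illustrative value
C = 1.0         # membrane capacitance; illustrative value
u_r = 0.0       # reset potential
theta = 1.0     # firing threshold vartheta
I0 = 0.15       # constant input current; illustrative value

def u(t, t_hat):
    """Membrane potential u(t|t_hat) of Eq. (5.2), with the integral
    evaluated in closed form for a constant current I(t) = I0."""
    s = t - t_hat
    return u_r * np.exp(-s / tau_m) + (I0 * tau_m / C) * (1.0 - np.exp(-s / tau_m))

def next_spike(t_hat, dt=0.01, t_max=1000.0):
    """First threshold crossing, Eq. (5.3): min{t > t_hat | u(t|t_hat) >= theta}."""
    for i in range(1, int(t_max / dt)):
        t = t_hat + i * dt
        if u(t, t_hat) >= theta:
            return t
    return None  # potential stays below threshold: no further spike

t_f = next_spike(0.0)
# analytical interspike interval for constant drive (valid when I0*tau_m/C > theta):
T = -tau_m * np.log(1.0 - theta * C / (I0 * tau_m))
```

With these parameters the numerically detected crossing agrees with the analytical interval up to the step size dt.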

Equations (5.1) and (5.2) combined with a (stochastic) spike generation procedure are examples of input-dependent renewal systems. Renewal processes are a class of stochastic point processes that describe a sequence of events (spikes) in time (Cox, 1962; Papoulis, 1991). Renewal systems in the narrow sense (stationary renewal processes) presuppose stationary input and are defined by the fact that the state of the system, and hence the probability of generating the next event, depends only on the `age' $t - \hat{t}$ of the system, i.e., the time that has passed since the last event (last spike). The central assumption of renewal theory is that the state does not depend on earlier events (i.e., earlier spikes of the same neuron). The aim of renewal theory is to predict the probability of the next event given the age of the system.

Here we use the renewal concept in a broader sense and define a renewal process as a system where the state at time t (and hence the probability of generating an event at t) depends both on the time that has passed since the last event (i.e., the firing time $\hat{t}$) and the input $I(t')$, $\hat{t} < t' < t$, that the system received since the last event. Input-dependent renewal systems are also called modulated renewal processes (Reich et al., 1998), non-stationary renewal systems (Gerstner, 1995, 2000b), or inhomogeneous Markov interval processes (Kass and Ventura, 2001). The aim of a theory of input-dependent renewal systems is to predict the probability of the next event, given the timing $\hat{t}$ of the last event and the input $I(t')$ for $\hat{t} < t' < t$.

Example: Light bulb failure as a renewal system

A generic example of a (potentially input-dependent) renewal system is a light bulb. The event is the failure of the bulb and its subsequent exchange. Obviously, the state of the system only depends on the age of the current bulb, and not on that of any previous bulb that has already been exchanged. If the usage pattern of the bulbs is stationary (e.g., the bulb is switched on for 10 hours each night) then we have a stationary renewal process. If usage is irregular (higher usage in winter than in summer, no usage during vacation), the aging of the bulb will be more rapid or slower depending on how often it is switched on and off. We can use input-dependent renewal theory if we keep track of all the times we have turned the switch. The input in this case is the sequence of switching times. The aim of renewal theory is to calculate the probability of the next failure given the age of the bulb and the switching pattern.

5.2.2 Interval distribution

The estimation of interspike interval (ISI) distributions from experimental data is a common method to study neuronal variability given a certain stationary input. In a typical experiment, the spike train of a single neuron (e.g., a neuron in visual cortex) is recorded while driven by a constant stimulus. The stimulus might be an external input applied to the system (e.g., a visual contrast grating moving at constant speed); or it may be an intracellularly applied constant driving current. The spike train is analyzed and the distribution of intervals sk between two subsequent spikes is plotted in a histogram. For a sufficiently long spike train, the histogram provides a good estimate of the ISI distribution which we denote as P0(s); cf. Fig. 5.1A. We will return to the special case of stationary input in subsection 5.2.4.

Figure 5.1: A. Stationary interval distribution. A neuron is driven by a constant input (top). A histogram of the interspike intervals s1, s2,... can be used to estimate the interval distribution P0(s) (bottom). B. Input-dependent interval distribution. A neuron, stimulated by the current I(t) (top), has emitted a first spike at $ \hat{{t}}$. The interval distribution PI(t|$ \hat{{t}}$) (bottom) gives the probability density that the next spike occurs after an interval t - $ \hat{{t}}$.
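The histogram construction described above can be sketched as follows. The spike train here is synthetic (a Poisson train with an assumed rate of 20 Hz) and merely stands in for recorded data; with a real experiment one would use the measured spike times instead.

```python
import numpy as np

rng = np.random.default_rng(0)
nu = 0.02                       # assumed rate: 0.02 spikes/ms = 20 Hz
intervals = rng.exponential(1.0 / nu, size=50_000)
spike_times = np.cumsum(intervals)      # stand-in for a recorded spike train

# intervals s_k between subsequent spikes -> histogram estimate of P0(s)
isis = np.diff(spike_times)
counts, edges = np.histogram(isis, bins=100, range=(0.0, 300.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# for this synthetic train the true distribution is P0(s) = nu*exp(-nu*s);
# compare estimate and true density near s = 50 ms
idx = np.argmin(np.abs(centers - 50.0))
estimated = counts[idx]
expected = nu * np.exp(-nu * centers[idx])
```

For a sufficiently long spike train the normalized histogram converges to the underlying interval distribution, as claimed in the text.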

We now generalize the concept of interval distributions to time-dependent input. We concentrate on a single neuron which is stimulated by a known input current I(t) and some unknown noise source. We suppose that the last spike occurred at time $ \hat{{t}}$ and ask the following question. What is the probability that the next spike occurs between t and t + $ \Delta$t, given the spike at $ \hat{{t}}$ and the input I(t') for t' < t? For $ \Delta$t$ \to$ 0, the answer is given by the probability density of firing PI(t|$ \hat{{t}}$). Hence, $ \int_{{t_1}}^{{t_2}}$PI(t|$ \hat{{t}}$) dt is the probability to find a spike in the segment [t1, t2], given that the last spike was at $ \hat{{t}}$ < t1. The normalization of PI(t|$ \hat{{t}}$) is

$\int_{\hat{t}}^{\infty} P_I(t\,|\,\hat{t})\, \mathrm{d}t = 1 - p_I^{\rm inact}$ ,    (5.4)

where $p_I^{\rm inact}$ denotes the probability that the neuron stays inactive and never fires again. For excitatory input and a sufficient amount of noise the neuron will always emit further spikes at some point. We therefore assume in the following that $p_I^{\rm inact}$ vanishes.

The lower index I of PI(t|$ \hat{{t}}$) is intended to remind us that the probability density PI(t|$ \hat{{t}}$) depends on the time course of the input I(t') for t' < t. Since PI(t|$ \hat{{t}}$) is conditioned on the spike at $ \hat{{t}}$, it can be called a spike-triggered spike density. We interpret PI(t | $ \hat{{t}}$) as the distribution of interspike intervals in the presence of an input current I. In the following, we will refer to PI as the input-dependent interval distribution; see Fig. 5.1B. For renewal systems with stationary input PI(t|$ \hat{{t}}$) reduces to P0(t - $ \hat{{t}}$).

5.2.3 Survivor function and hazard

The interval distribution PI(t|$ \hat{{t}}$) as defined above is a probability density. Thus, integration of PI(t|$ \hat{{t}}$) over time yields a probability. For example, $ \int_{{\hat{t}}}^{t}$PI(t'|$ \hat{{t}}$) dt' is the probability that a neuron which has emitted a spike at $ \hat{{t}}$ fires the next action potential between $ \hat{{t}}$ and t. Thus

$S_I(t\,|\,\hat{t}) = 1 - \int_{\hat{t}}^{t} P_I(t'\,|\,\hat{t})\, \mathrm{d}t'$    (5.5)

is the probability that the neuron stays quiescent between $ \hat{{t}}$ and t. SI(t|$ \hat{{t}}$) is called the survivor function: it gives the probability that the neuron `survives' from $ \hat{{t}}$ to t without firing.

The survivor function SI(t|$ \hat{{t}}$) has an initial value SI($ \hat{{t}}$|$ \hat{{t}}$) = 1 and decreases to zero for t$ \to$$ \infty$. The rate of decay of SI(t|$ \hat{{t}}$) will be denoted by $ \rho_{I}^{}$(t|$ \hat{{t}}$) and is defined by

$\rho_I(t\,|\,\hat{t}) = - \frac{\mathrm{d}S_I(t\,|\,\hat{t})/\mathrm{d}t}{S_I(t\,|\,\hat{t})}$ .    (5.6)

In the language of renewal theory, $ \rho_{I}^{}$(t|$ \hat{{t}}$) is called the `age-dependent death rate' or `hazard' (Cox, 1962; Cox and Lewis, 1966).

Integration of Eq. (5.6) yields the survivor function

$S_I(t\,|\,\hat{t}) = \exp\!\left[ - \int_{\hat{t}}^{t} \rho_I(t'\,|\,\hat{t})\, \mathrm{d}t' \right]$ .    (5.7)

According to the definition of the survivor function in Eq. (5.5), the interval distribution is given by

$P_I(t\,|\,\hat{t}) = - \frac{\mathrm{d}}{\mathrm{d}t} S_I(t\,|\,\hat{t}) = \rho_I(t\,|\,\hat{t})\, S_I(t\,|\,\hat{t})$ ,    (5.8)

which has a nice intuitive interpretation: In order to emit its next spike at t, the neuron has to survive the interval ($ \hat{{t}}$, t) without firing and then fire at t. The survival probability is SI(t|$ \hat{{t}}$) and the hazard of firing a spike at time t is $ \rho_{I}^{}$(t|$ \hat{{t}}$) which explains the two factors on the right-hand side of Eq. (5.8). Inserting Eq. (5.7) in (5.8), we obtain an explicit expression for the interval distribution in terms of the hazard:

$P_I(t\,|\,\hat{t}) = \rho_I(t\,|\,\hat{t})\, \exp\!\left[ - \int_{\hat{t}}^{t} \rho_I(t'\,|\,\hat{t})\, \mathrm{d}t' \right]$ .    (5.9)

On the other hand, given the interval distribution we can derive the hazard from

$\rho_I(t\,|\,\hat{t}) = \frac{P_I(t\,|\,\hat{t})}{S_I(t\,|\,\hat{t})} = \frac{P_I(t\,|\,\hat{t})}{1 - \int_{\hat{t}}^{t} P_I(t'\,|\,\hat{t})\, \mathrm{d}t'}$ .    (5.10)

Thus, each of the three quantities $ \rho_{I}^{}$(t|$ \hat{{t}}$), PI(t|$ \hat{{t}}$), and SI(t|$ \hat{{t}}$) is sufficient to describe the statistical properties of an input-dependent renewal system. For stationary renewal systems, Eqs. (5.5)-(5.10) hold with the replacement
$P_I(t\,|\,\hat{t}) \longrightarrow P_0(t - \hat{t})$ ,    (5.11)
$S_I(t\,|\,\hat{t}) \longrightarrow S_0(t - \hat{t})$ ,    (5.12)
$\rho_I(t\,|\,\hat{t}) \longrightarrow \rho_0(t - \hat{t})$ .    (5.13)
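Numerically, the three descriptions can be converted into one another on a time grid via Eqs. (5.7), (5.8), and (5.10). The hazard chosen below (a constant rate after a short dead time) is an arbitrary illustration, not taken from the text.

```python
import numpy as np

ds = 0.01                              # time step (ms)
s = np.arange(0.0, 200.0, ds)
rho0 = np.where(s > 2.0, 0.1, 0.0)     # assumed hazard (1/ms): dead time then constant

# survivor function, Eq. (5.7): S0(s) = exp[-int_0^s rho0(s') ds']
S0 = np.exp(-np.cumsum(rho0) * ds)
# interval distribution, Eq. (5.8): P0(s) = rho0(s) S0(s)
P0 = rho0 * S0
# recover the hazard via Eq. (5.10): rho0 = P0 / S0
rho_back = np.divide(P0, S0, out=np.zeros_like(P0), where=S0 > 1e-12)
```

The recovered hazard matches the one we started from, and the computed interval distribution is normalized up to the probability mass beyond the end of the grid, consistent with Eq. (5.5).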

Eqs. (5.5)-(5.10) are standard results of renewal theory (Cox, 1962; Cox and Lewis, 1966; Perkel et al., 1967a,b; Gerstein and Perkel, 1972).

Example: From interval distribution to hazard function

Let us suppose that we have found under stationary experimental conditions an interval distribution that can be approximated as

$P_0(s) = \begin{cases} 0 & \text{for } s \le \Delta^{\rm abs} \\ a_0\,(s - \Delta^{\rm abs})\, \exp\!\left[-\frac{a_0}{2}\,(s - \Delta^{\rm abs})^2\right] & \text{for } s > \Delta^{\rm abs} \end{cases}$    (5.14)

with a constant a0 > 0; cf. Fig. 5.2A. From Eq. (5.10), the hazard is found to be

$\rho_0(s) = \begin{cases} 0 & \text{for } s \le \Delta^{\rm abs} \\ a_0\,(s - \Delta^{\rm abs}) & \text{for } s > \Delta^{\rm abs} \end{cases}$ .    (5.15)

Thus, during an interval $ \Delta^{{\rm abs}}_{}$ after each spike the hazard vanishes. We may interpret $ \Delta^{{\rm abs}}_{}$ as the absolute refractory time of the neuron. For s > $ \Delta^{{\rm abs}}_{}$ the hazard increases linearly, i.e., the longer the neuron waits the higher its probability of firing. In Section 5.3, the hazard (5.15) will be motivated by a non-leaky integrate-and-fire neuron subject to noise.
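A short numerical check of this example, using the parameter values of Fig. 5.2A: for the linearly increasing hazard (5.15), Eq. (5.7) integrates to the closed form $S_0(s) = \exp[-a_0 (s-\Delta^{\rm abs})^2/2]$, and the resulting interval distribution (5.14) peaks at $s = \Delta^{\rm abs} + a_0^{-1/2}$ (a derived fact, not stated in the text).

```python
import numpy as np

a0 = 0.01        # ms^-2, from Fig. 5.2A
delta = 2.0      # absolute refractory time Delta_abs (ms), from Fig. 5.2A

s = np.linspace(0.0, 100.0, 2001)
rho0 = np.where(s > delta, a0 * (s - delta), 0.0)      # hazard, Eq. (5.15)

# Eq. (5.7) in closed form: S0(s) = exp[-a0 (s - delta)^2 / 2]
S0 = np.exp(-0.5 * a0 * np.clip(s - delta, 0.0, None) ** 2)
P0 = rho0 * S0                                         # Eq. (5.8) -> Eq. (5.14)

# dP0/ds = 0 gives the most probable interval s = delta + 1/sqrt(a0)
s_peak = s[np.argmax(P0)]
```

With $a_0 = 0.01\,$ms$^{-2}$ and $\Delta^{\rm abs} = 2\,$ms the most probable interval comes out near 12 ms.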

Figure 5.2: A. Interval distribution $P_0(s)$ (top), survivor function $S_0(s)$ (middle), and hazard function (bottom), for a hazard defined by $\rho_0(s) = a_0\,(s - \Delta^{\rm abs})\, \Theta(s - \Delta^{\rm abs})$ with $a_0$ = 0.01 ms$^{-2}$ and $\Delta^{\rm abs}$ = 2 ms. B. Similar plots as in A, but for a hazard function defined by $\rho_0(s) = \nu\, \{1 - \exp[-\lambda\,(s - \Delta^{\rm abs})]\}\, \Theta(s - \Delta^{\rm abs})$ with $\nu$ = 0.1 kHz, $\lambda$ = 0.2 kHz, and $\Delta^{\rm abs}$ = 2 ms.
Example: From hazard functions to interval distributions

Interval distributions and hazard functions have been measured in many experiments. For example, in auditory neurons of the cat driven by stationary stimuli, the hazard function $ \rho_{0}^{}$(t - $ \hat{{t}}$) increases, after an absolute refractory time, to a constant level (Goldberg et al., 1964). We approximate the time course of the hazard function as

$\rho_0(s) = \begin{cases} 0 & \text{for } s \le \Delta^{\rm abs} \\ \nu\, \left[1 - e^{-\lambda\,(s - \Delta^{\rm abs})}\right] & \text{for } s > \Delta^{\rm abs} \end{cases}$    (5.16)

with parameters $\Delta^{\rm abs}$, $\lambda$, and $\nu$; cf. Fig. 5.2B. In Section 5.3 we will see how the hazard (5.16) can be related to neuronal dynamics. Given the hazard function, we can calculate the survivor function and the interval distribution. Application of Eq. (5.7) yields

$S_0(s) = \begin{cases} 1 & \text{for } s \le \Delta^{\rm abs} \\ e^{-\nu\,(s - \Delta^{\rm abs})}\, e^{\rho_0(s)/\lambda} & \text{for } s > \Delta^{\rm abs} \end{cases}$ .    (5.17)

The interval distribution is given by $P_0(s) = \rho_0(s)\, S_0(s)$. Interval distribution, survivor function, and hazard are shown in Fig. 5.2B.

Example: Poisson process

Let us compare the hazard functions of the two previous examples to the hazard of a homogeneous Poisson process that generates spikes stochastically at a fixed rate $\nu$. Since different spikes are independent, the hazard of a Poisson process is constant, $\rho_0(s) \equiv \nu$. In particular, the hazard does not depend on the last or any earlier spike. From Eq. (5.7) we find the survivor function $S_0(s) = \exp[-\nu\, s]$. The interval distribution is exponential,

$P_0(s) = \nu\, e^{-\nu\, s} \quad \text{for } s > 0$ .    (5.18)

Interval distribution and survivor function of a Poisson neuron with constant rate $\nu$ are plotted in Fig. 5.3A. The most striking feature of Fig. 5.3A is that the interval distribution has its maximum at s = 0, so that extremely short intervals are most likely. In contrast to a Poisson process, real neurons show refractoriness, so that the interval distribution $P_0(s)$ vanishes for $s \to 0$.
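A discrete-time simulation illustrates these properties. In each step of size dt a spike is emitted with probability $\nu\,$dt, independently of the firing history; rate and step size below are arbitrary choices for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
nu = 0.05          # assumed constant hazard (1/ms), i.e. 50 Hz
dt = 0.1           # time step (ms)
n_steps = 2_000_000

# constant hazard: spike with probability nu*dt in every step,
# regardless of the time since the last spike
spikes = rng.random(n_steps) < nu * dt
spike_times = np.nonzero(spikes)[0] * dt
isis = np.diff(spike_times)

mean_isi = isis.mean()                      # should approach 1/nu
counts, _ = np.histogram(isis, bins=40, range=(0.0, 80.0))
# short intervals are the most frequent: the exponential density (5.18)
# has its maximum at s = 0
```

The mean interval approaches $1/\nu$, and the interval histogram decays monotonically from its maximum at the shortest intervals, in line with Eq. (5.18).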

A simple modification of the Poisson process allows us to incorporate absolute refractoriness. We define a hazard function

$\rho_0(s) = \begin{cases} 0 & \text{for } s \le \Delta^{\rm abs} \\ r & \text{for } s > \Delta^{\rm abs} \end{cases}$ .    (5.19)

We call a process with hazard function (5.19) a Poisson neuron with absolute refractoriness. It generates a spike train with an interval distribution

$P_0(s) = \begin{cases} 0 & \text{for } s \le \Delta^{\rm abs} \\ r\, \exp\!\left[-r\,(s - \Delta^{\rm abs})\right] & \text{for } s > \Delta^{\rm abs} \end{cases}$ ;    (5.20)

see Fig. 5.3B. We may compare the hazard function of the Poisson neuron with absolute refractoriness with the more realistic hazard of Eq. (5.16). The main difference is that the hazard in Eq. (5.19) jumps from the state of absolute refractoriness to a constant firing rate, whereas in Eq. (5.16) the transition is smooth.
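Sampling from this process is particularly simple, since by Eq. (5.19) every interval is the refractory period plus an exponential waiting time. The value $\Delta^{\rm abs}$ = 5 ms follows Fig. 5.3B; the rate r = 0.1 kHz is an assumed value for the illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
delta_abs = 5.0    # ms, as in Fig. 5.3B
r = 0.1            # assumed hazard after the refractory period (1/ms)

# Eq. (5.19): hazard 0 during delta_abs, then constant r, so each
# interval is delta_abs plus an exponential waiting time of rate r
isis = delta_abs + rng.exponential(1.0 / r, size=100_000)

nu_mean = 1.0 / isis.mean()    # mean rate nu = 1/<s> = r/(1 + delta_abs*r)
```

No interval can be shorter than $\Delta^{\rm abs}$, and the mean rate is reduced below r by the refractory period.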

Figure 5.3: Interval distribution P0(s) (top), survivor function S0(s) (middle), and hazard function (bottom) for a Poisson neuron (A) and a Poisson neuron with absolute refractoriness ( $ \Delta^{{\rm abs}}_{}$ = 5ms) (B).

5.2.4 Stationary renewal theory and experiments

Renewal theory is usually associated with stationary input conditions. The interval distribution $P_0$ can then be estimated experimentally from a single long spike train. The applicability of renewal theory relies on the hypothesis that a memory back to the last spike suffices to describe the spike statistics. In particular, there should be no correlation between one interval and the next. In experiments, the renewal hypothesis can be tested by measuring the correlation between subsequent intervals. Under some experimental conditions, correlations are small, indicating that a description of spiking as a stationary renewal process is a good approximation (Goldberg et al., 1964).

The notion of stationary input conditions is a mathematical concept that cannot be easily translated into experiments. With intracellular recordings under in vitro conditions, constant input current can be imposed and thus the renewal hypothesis can be tested directly. Under in vivo conditions, the assumption that the input current to a neuron embedded in a large neural system is constant (or has stationary statistics) is questionable; see (Perkel et al., 1967a,b) for a discussion. While the externally controlled stimulus can be made stationary (e.g., a grating drifting at constant speed), the input to an individual neuron remains beyond experimental control.

Let us suppose that, for a given experiment, we have checked that the renewal hypothesis holds to a reasonable degree of accuracy. From the experimental interval distribution $P_0$ we can then calculate the survivor function $S_0$ and the hazard $\rho_0$ via Eqs. (5.5) and (5.10); see the examples in Section 5.2.3. If some additional assumptions regarding the nature of the noise are made, the form of the hazard $\rho_0(t - \hat{t})$ can be interpreted in terms of neuronal dynamics. In particular, a reduced hazard immediately after a spike is a signature of neuronal refractoriness (Goldberg et al., 1964; Berry and Meister, 1998).

In the case of a stationary renewal process, the interval distribution $P_0$ contains all the statistical information; in particular, the mean firing rate, the autocorrelation function, and the noise spectrum can be derived from it.

Mean firing rate

To arrive at an expression for the mean firing rate, we start with the definition of the mean interval,

$\langle s \rangle = \int_0^{\infty} s\, P_0(s)\, \mathrm{d}s$ .    (5.21)

The mean firing rate was defined in Section 1.4 as $\nu = 1/\langle s \rangle$. Hence,

$\nu = \left[ \int_0^{\infty} s\, P_0(s)\, \mathrm{d}s \right]^{-1} = \left[ \int_0^{\infty} S_0(s)\, \mathrm{d}s \right]^{-1}$ .    (5.22)
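Both expressions in Eq. (5.22) can be checked numerically. The sketch below uses the exponential interval distribution as a test case; the rate and integration grid are assumed values for the illustration.

```python
import numpy as np

nu = 0.05                      # assumed mean rate (1/ms)
ds = 0.001                     # integration step (ms)
s = np.arange(0.0, 400.0, ds)

P0 = nu * np.exp(-nu * s)      # exponential interval distribution as test case
S0 = np.exp(-nu * s)           # its survivor function, Eq. (5.5)

rate_from_P0 = 1.0 / (np.sum(s * P0) * ds)   # [ int s P0(s) ds ]^{-1}
rate_from_S0 = 1.0 / (np.sum(S0) * ds)       # [ int S0(s) ds ]^{-1}
```

Both numerical integrals recover the rate $\nu$, as Eq. (5.22) requires.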

The second equality sign follows from integration by parts, using $P_0(s) = -\mathrm{d}S_0(s)/\mathrm{d}s$; cf. Eq. (5.5).

Autocorrelation function

Let us consider a spike train $S_i(t) = \sum_f \delta(t - t_i^{(f)})$ of length T. The firing times $t_i^{(f)}$ might have been measured in an experiment or else generated by a neuron model. We suppose that T is sufficiently long so that we can formally consider the limit $T \to \infty$. The autocorrelation function $C_{ii}(s)$ of the spike train is a measure of the probability of finding two spikes at a time interval s, i.e.,

$C_{ii}(s) = \langle S_i(t)\, S_i(t+s) \rangle_t$ ,    (5.23)

where $ \langle$ . $ \rangle_{t}^{}$ denotes an average over time t,

$\langle f(t) \rangle_t = \lim_{T\to\infty} \frac{1}{T} \int_{-T/2}^{T/2} f(t)\, \mathrm{d}t$ .    (5.24)

We note that the right-hand side of Eq. (5.23) is symmetric so that Cii(- s) = Cii(s) holds.
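In practice the time average (5.24) is evaluated on a binned spike train. The sketch below estimates $C_{ii}(s)$ for a synthetic Poisson train; the rate and bin size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
nu = 0.02          # assumed rate (1/ms)
dt = 1.0           # bin width (ms)
n_bins = 1_000_000

# binned spike train S_i(t): 1/dt in bins containing a spike, 0 elsewhere
x = (rng.random(n_bins) < nu * dt).astype(float) / dt

def corr(lag):
    """Time average <S_i(t) S_i(t + lag*dt)>_t over the recorded segment."""
    if lag == 0:
        return np.mean(x * x)
    return np.mean(x[:-lag] * x[lag:])

c_zero = corr(0)    # ~ nu/dt: the discrete trace of the delta peak at s = 0
c_far = corr(50)    # distant bins are independent for a Poisson train: ~ nu^2
```

At zero lag the estimate grows like $\nu/$dt, the binned trace of a $\delta$ peak; at large lags it settles at $\nu^2$, the baseline discussed in Section 5.2.5.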

The calculation of the autocorrelation function for a stationary renewal process is the topic of Section 5.2.5.

Noise spectrum

The power spectrum (or power spectral density) of a spike train is defined as $\mathcal{P}(\omega) = \lim_{T\to\infty} \mathcal{P}_T(\omega)$, where $\mathcal{P}_T$ is the power of a segment of length T of the spike train,

$\mathcal{P}_T(\omega) = \frac{1}{T} \left| \int_{-T/2}^{T/2} S_i(t)\, e^{-i\omega t}\, \mathrm{d}t \right|^2$ .    (5.25)

The power spectrum $ \mathcal {P}$($ \omega$) of a spike train is equal to the Fourier transform $ \hat{{C}}_{{ii}}^{}$($ \omega$) of its autocorrelation function (Wiener-Khinchin Theorem). To see this, we use the definition of the autocorrelation function

$\hat{C}_{ii}(\omega) = \int_{-\infty}^{\infty} \langle S_i(t)\, S_i(t+s) \rangle_t\, e^{-i\omega s}\, \mathrm{d}s$
$\qquad\qquad = \lim_{T\to\infty} \frac{1}{T} \int_{-T/2}^{T/2} S_i(t) \int_{-\infty}^{\infty} S_i(t+s)\, e^{-i\omega s}\, \mathrm{d}s\, \mathrm{d}t$
$\qquad\qquad = \lim_{T\to\infty} \frac{1}{T} \int_{-T/2}^{T/2} S_i(t)\, e^{+i\omega t}\, \mathrm{d}t \int_{-\infty}^{\infty} S_i(s')\, e^{-i\omega s'}\, \mathrm{d}s'$ .    (5.26)

In the limit of T$ \to$$ \infty$, Eq. (5.25) becomes identical to (5.26) so that the assertion follows. The power spectral density of a spike train during spontaneous activity is called the noise spectrum of the neuron (Bair et al., 1994; Edwards and Wakefield, 1993). As we will see in the next subsection, the noise spectrum of a stationary renewal process is intimately related to the interval distribution P0(s).

5.2.5 Autocorrelation of a stationary renewal process

Noise is a limiting factor to all forms of information transmission and in particular to information transmission by neurons. An important concept of the theory of signal transmission is the signal-to-noise ratio. A signal that is transmitted at a certain frequency $ \omega$ should be stronger than (or at least of the same order of magnitude as) the noise at the same frequency. For this reason, the noise spectrum $ \mathcal {P}$($ \omega$) of the transmission channel is of interest. In this section we calculate the noise spectrum of a stationary renewal process. As we have seen above, the noise spectrum of a neuron is directly related to the autocorrelation function of its spike train. Both noise spectrum and autocorrelation function are experimentally accessible (Bair et al., 1994; Edwards and Wakefield, 1993).

Let $\nu_i = \langle S_i \rangle$ denote the mean firing rate (expected number of spikes per unit time) of the spike train. Thus the probability of finding a spike in a short segment $[t, t + \Delta t]$ of the spike train is $\nu_i\, \Delta t$. For large intervals s, firing at time t + s is independent of whether or not there was a spike at time t. Therefore, the expectation of finding a spike at t and another spike at t + s approaches, for $s \to \infty$, the limiting value $\lim_{s\to\infty} \langle S_i(t)\, S_i(t+s) \rangle_t = \lim_{s\to\infty} C_{ii}(s) = \nu_i^2$. It is convenient to subtract this baseline value and introduce a `normalized' autocorrelation,

$C_{ii}^0(s) = C_{ii}(s) - \nu_i^2$ ,    (5.27)

with $\lim_{s\to\infty} C_{ii}^0(s) = 0$. The Fourier transform of Eq. (5.27) yields

$\hat{C}_{ii}(\omega) = \hat{C}_{ii}^0(\omega) + 2\pi\, \nu_i^2\, \delta(\omega)$ .    (5.28)

Thus $ \hat{{C}}_{{ii}}^{}$($ \omega$) diverges at $ \omega$ = 0; the divergence is removed by switching to the normalized autocorrelation. In the following we will calculate $ \hat{{C}}_{{ii}}^{}$($ \omega$) for $ \omega$$ \ne$ 0.

In the case of a stationary renewal process, the autocorrelation function is closely related to the interval distribution P0(s). This relation will now be derived. Let us suppose that we have found a first spike at t. To calculate the autocorrelation we need the probability density for a spike at t + s. Let us construct an expression for Cii(s) for s > 0. The correlation function for positive s will be denoted by $ \nu_{i}^{}$ C+(s) or

$C_+(s) = \frac{1}{\nu_i}\, C_{ii}(s)\, \Theta(s)$ .    (5.29)

The factor $ \nu_{i}^{}$ in Eq. (5.29) takes care of the fact that we expect a first spike at t with rate $ \nu_{i}^{}$. C+(s) gives the conditional probability density that, given a spike at t, we will find another spike at t + s > t. The spike at t + s can be the first spike after t, or the second one, or the nth one; see Fig. 5.4. Thus for s > 0
$C_+(s) = P_0(s) + \int_0^{\infty} P_0(s')\, P_0(s - s')\, \mathrm{d}s' + \int_0^{\infty}\!\!\int_0^{\infty} P_0(s')\, P_0(s'')\, P_0(s - s' - s'')\, \mathrm{d}s'\, \mathrm{d}s'' + \ldots$    (5.30)


The infinite sum on the right-hand side of Eq. (5.30) obeys the recursive equation

$C_+(s) = P_0(s) + \int_0^{\infty} P_0(s')\, C_+(s - s')\, \mathrm{d}s'$ ,    (5.31)

as can be seen by inserting Eq. (5.30) on the right-hand side of (5.31).
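The equivalence of the explicit sum (5.30) and the recursion (5.31) can be checked on a discrete time grid. The sketch below uses a geometric (memoryless) interval distribution, the discrete-time analogue of the exponential case, for which $C_+$ comes out constant; bin size and rate are assumptions for the illustration.

```python
import numpy as np

ds = 0.1                       # bin width (ms)
n = 600
nu = 0.1                       # assumed rate (1/ms)
q = np.exp(-nu * ds)

# discrete interval distribution: geometric, P(ISI = i bins) for i >= 1
P0 = np.zeros(n)
P0[1:] = (1.0 - q) * q ** np.arange(n - 1)

# explicit sum of n-fold convolutions, Eq. (5.30)
term = P0.copy()
C_series = P0.copy()
for _ in range(200):
    term = np.convolve(term, P0)[:n]
    C_series = C_series + term

# recursion, Eq. (5.31): C_+ = P0 + P0 * C_+, iterated to a fixed point
C_rec = P0.copy()
for _ in range(250):
    C_rec = P0 + np.convolve(P0, C_rec)[:n]
```

Both constructions agree, and for this memoryless interval distribution $C_+$ is the same constant at every positive lag, equal to the per-bin firing probability $1 - q \approx \nu\,$ds.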

Figure 5.4: A. The autocorrelation of a spike train describes the chance to find two spikes at a distance s, independent of the number of spikes that occur in between. B. Fourier transform of the autocorrelation function $C_{ii}$ of a Poisson neuron with absolute refractoriness ($\Delta^{\rm abs}$ = 5 ms) and constant stimulation ($\nu$ = 100 Hz).

Due to the symmetry of $C_{ii}$, we have $C_{ii}(s) = \nu_i\, C_+(-s)$ for s < 0. Finally, for s = 0, the autocorrelation has a $\delta$ peak reflecting the trivial autocorrelation of each spike with itself. Hence,

$C_{ii}(s) = \nu_i \left[ \delta(s) + C_+(s) + C_+(-s) \right]$ .    (5.32)

In order to solve Eq. (5.31) for C+ we take the Fourier transform of Eq. (5.31) and find

$\hat{C}_+(\omega) = \frac{\hat{P}_0(\omega)}{1 - \hat{P}_0(\omega)}$ .    (5.33)

Together with the Fourier transform of Eq. (5.32), $\hat{C}_{ii}(\omega) = \nu_i\, [1 + 2\, {\rm Re}\{\hat{C}_+(\omega)\}]$, we obtain

$\hat{C}_{ii}(\omega) = \nu_i\, {\rm Re}\!\left\{ \frac{1 + \hat{P}_0(\omega)}{1 - \hat{P}_0(\omega)} \right\} \quad \text{for } \omega \ne 0$ .    (5.34)

For $ \omega$ = 0, the Fourier integral over the right-hand side of Eq. (5.30) diverges, since $ \int_{0}^{\infty}$P0(s)ds = 1. If we add the diverging term from Eq. (5.28), we arrive at

$\hat{C}_{ii}(\omega) = \nu_i\, {\rm Re}\!\left\{ \frac{1 + \hat{P}_0(\omega)}{1 - \hat{P}_0(\omega)} \right\} + 2\pi\, \nu_i^2\, \delta(\omega)$ .    (5.35)

This is a standard result of stationary renewal theory (Cox and Lewis, 1966) which has been applied repeatedly to neuronal spike trains (Bair et al., 1994; Edwards and Wakefield, 1993).

Example: Stationary Poisson process

In Section 5.2.3 we have defined the Poisson neuron as a stationary renewal process with constant hazard $ \rho_{0}^{}$(t - $ \hat{{t}}$) = $ \nu$. In the literature, a Poisson process is often defined via its autocorrelation

$C_{ii}(s) = \nu\, \delta(s) + \nu^2$ .    (5.36)

We want to show that Eq. (5.36) follows from Eq. (5.30).

Since the interval distribution of a Poisson process is exponential [cf. Eq. (5.18)], we can evaluate the integrals on the right-hand side of Eq. (5.30) in a straightforward manner. The result is

$C_+(s) = \nu\, e^{-\nu s} \left[ 1 + \nu s + \tfrac{1}{2}\,(\nu s)^2 + \ldots \right] = \nu$ .    (5.37)

Hence, with Eq. (5.32), we obtain the autocorrelation function (5.36) of a homogeneous Poisson process. The Fourier transform of Eq. (5.36) yields a flat spectrum with a $ \delta$ peak at zero:

$\hat{C}_{ii}(\omega) = \nu + 2\pi\, \nu^2\, \delta(\omega)$ .    (5.38)

The result could also have been obtained by evaluating Eq. (5.35).

Example: Poisson process with absolute refractoriness

We return to the Poisson neuron with absolute refractoriness defined in Eq. (5.19). Apart from an absolute refractory time $\Delta^{\rm abs}$, the neuron fires with rate r. For $\omega \ne 0$, Eq. (5.35) yields the noise spectrum

$\hat{C}_{ii}(\omega) = \nu \left\{ 1 + 2\, \frac{r^2}{\omega^2}\, \left[1 - \cos(\omega\, \Delta^{\rm abs})\right] + 2\, \frac{r}{\omega}\, \sin(\omega\, \Delta^{\rm abs}) \right\}^{-1}$ ;    (5.39)

cf. Fig. 5.4B. In contrast to the stationary Poisson process, Eq. (5.36), the noise spectrum of a neuron with absolute refractoriness $\Delta^{\rm abs} > 0$ is no longer flat. In particular, for $\omega \to 0$, the noise level is decreased by a factor $[1 + 2\, r\, \Delta^{\rm abs} + (r\, \Delta^{\rm abs})^2]^{-1} = (1 + r\, \Delta^{\rm abs})^{-2}$. Eq. (5.39) and generalizations thereof have been used to fit the power spectrum of, e.g., auditory neurons (Edwards and Wakefield, 1993) and MT neurons (Bair et al., 1994).

Can we understand the decrease in the noise spectrum for $ \omega$$ \to$ 0? The mean interval of a Poisson neuron with absolute refractoriness is $ \langle$s$ \rangle$ = $ \Delta^{{\rm abs}}_{}$ + r-1. Hence the mean firing rate is

$\nu = \frac{r}{1 + \Delta^{\rm abs}\, r}$ .    (5.40)

For $\Delta^{\rm abs} = 0$ we retrieve the stationary Poisson process of Section 5.2.3 with $\nu = r$. For finite $\Delta^{\rm abs}$ the firing is more regular than that of a Poisson process with the same mean rate $\nu$. We note that for finite $\Delta^{\rm abs} > 0$, the mean firing rate remains bounded even if $r \to \infty$. The neuron then fires regularly with period $\Delta^{\rm abs}$. Because the spike train of a neuron with refractoriness is more regular than that of a Poisson neuron with the same mean rate, the spike count over a long interval, and hence the spectrum for $\omega \to 0$, is less noisy. This means that Poisson neurons with absolute refractoriness can transmit slow signals more reliably than a simple Poisson process.
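This low-frequency noise reduction can be seen in simulation through spike-count statistics. The sketch below compares the Fano factor of spike counts in long windows (which tracks the low-frequency noise level of a renewal train) with and without the Poisson value of 1; the window length and parameter values are assumptions made for this illustration, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(4)
delta_abs = 5.0      # absolute refractory time (ms); illustrative value
r = 0.1              # hazard after the refractory period (1/ms); illustrative
n = 200_000

# intervals of a Poisson neuron with absolute refractoriness, Eq. (5.19)
isis = delta_abs + rng.exponential(1.0 / r, size=n)
t = np.cumsum(isis)

# spike counts in long windows; their variance reflects the noise level
# of the spectrum at low frequencies (a Poisson train would give Fano ~ 1)
T_win = 1000.0
n_win = int(t[-1] // T_win) - 1
counts = np.histogram(t, bins=n_win, range=(0.0, n_win * T_win))[0]

fano = counts.var() / counts.mean()
cv2 = isis.var() / isis.mean() ** 2     # long-window limit of the Fano factor
```

The measured Fano factor falls well below 1 and approaches the squared coefficient of variation of the intervals, confirming that refractoriness regularizes the spike count over long windows.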

Gerstner and Kistler
Spiking Neuron Models. Single Neurons, Populations, Plasticity
Cambridge University Press, 2002

© Cambridge University Press
This book is in copyright. No reproduction of any part of it may take place without the written permission of Cambridge University Press.