In this section, we introduce some important concepts for the statistical description of neuronal spike trains. A central notion will be the interspike interval distribution, which is discussed in the framework of a generalized input-dependent renewal theory. We start in Section 5.2.1 with the definition of renewal systems and then turn in Section 5.2.2 to interval distributions. The relation between interval distributions and neuronal models will be the topic of Sections 5.3 and 5.5.
We consider a single neuron such as an integrate-and-fire or SRM unit. Let us suppose that we know the last firing time t̂ < t of the neuron and its input current I. In formal spiking neuron models such as the SRM, the membrane potential u is then completely determined, i.e.,
Given the input and the firing time t̂, we would like to predict the next action potential. In the absence of noise, the next firing time t^(f) of a neuron with membrane potential (5.1) is determined by the threshold condition u = ϑ. The first threshold crossing occurs at
Eqs. (5.1) and (5.2), combined with a (stochastic) spike generation procedure, are examples of input-dependent renewal systems. Renewal processes are a class of stochastic point processes that describe a sequence of events (spikes) in time (Cox, 1962; Papoulis, 1991). Renewal systems in the narrow sense (stationary renewal processes) presuppose stationary input and are defined by the fact that the state of the system, and hence the probability of generating the next event, depends only on the `age' t - t̂ of the system, i.e., the time that has passed since the last event (last spike). The central assumption of renewal theory is that the state does not depend on earlier events (i.e., earlier spikes of the same neuron). The aim of renewal theory is to predict the probability of the next event given the age of the system.
Here we use the renewal concept in a broader sense and define a renewal process as a system where the state at time t (and hence the probability of generating an event at t) depends both on the time that has passed since the last event (i.e., the firing time t̂) and on the input I(t'), t̂ < t' < t, that the system received since the last event. Input-dependent renewal systems are also called modulated renewal processes (Reich et al., 1998), non-stationary renewal systems (Gerstner, 1995, 2000b), or inhomogeneous Markov interval processes (Kass and Ventura, 2001). The aim of a theory of input-dependent renewal systems is to predict the probability of the next event, given the timing of the last event and the input I(t') for t̂ < t' < t.
A generic example of a (potentially input-dependent) renewal system is a light bulb. The event is the failure of the bulb and its subsequent exchange. Obviously, the state of the system only depends on the age of the current bulb, and not on that of any previous bulb that has already been exchanged. If the usage pattern of the bulbs is stationary (e.g., the bulb is switched on for 10 hours each night), then we have a stationary renewal process. If usage is irregular (higher usage in winter than in summer, no usage during vacation), the aging of the bulb will be more rapid or slower depending on how often it is switched on and off. We can use input-dependent renewal theory if we keep track of all the times we have turned the switch. The input in this case is the sequence of switching times. The aim of renewal theory is to calculate the probability of the next failure given the age of the bulb and the switching pattern.
The estimation of interspike interval (ISI) distributions from experimental data is a common method to study neuronal variability given a certain stationary input. In a typical experiment, the spike train of a single neuron (e.g., a neuron in visual cortex) is recorded while it is driven by a constant stimulus. The stimulus might be an external input applied to the system (e.g., a visual contrast grating moving at constant speed); or it may be an intracellularly applied constant driving current. The spike train is analyzed and the distribution of intervals s_k between two subsequent spikes is plotted in a histogram. For a sufficiently long spike train, the histogram provides a good estimate of the ISI distribution, which we denote as P_0(s); cf. Fig. 5.1A. We will return to the special case of stationary input in subsection 5.2.4.
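As a sketch of this estimation step, the following snippet builds a normalized ISI histogram from a list of spike times. The spike train here is synthetic (drawn from an exponential interval distribution with an assumed mean of 50 ms) purely so that the procedure can be illustrated and checked:

```python
import numpy as np

# Hypothetical spike train: in an experiment the firing times would be
# measured; here they are generated from an assumed exponential ISI model.
rng = np.random.default_rng(0)
intervals = rng.exponential(scale=0.05, size=10_000)   # ISIs s_k, in seconds
spike_times = np.cumsum(intervals)

# Estimate the ISI distribution P0(s) as a normalized histogram (a density).
bin_width = 0.005
bins = np.arange(0.0, 0.5, bin_width)
counts, edges = np.histogram(np.diff(spike_times), bins=bins)
P0 = counts / (counts.sum() * bin_width)   # integrates to ~1 over the range
```

The mean interval recovered from the histogram should be close to the assumed 50 ms, which is a quick sanity check on the normalization.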
We now generalize the concept of interval distributions to time-dependent input. We concentrate on a single neuron which is stimulated by a known input current I(t) and some unknown noise source. We suppose that the last spike occurred at time t̂ and ask the following question. What is the probability that the next spike occurs between t and t + Δt, given the spike at t̂ and the input I(t') for t̂ < t' < t? For Δt → 0, the answer is given by the probability density of firing P_I(t | t̂). Hence, ∫_{t_1}^{t_2} P_I(t | t̂) dt is the probability to find a spike in the segment [t_1, t_2], given that the last spike was at t̂ < t_1. The normalization of P_I(t | t̂) is

∫_{t̂}^{∞} P_I(t | t̂) dt = 1 ,

i.e., the neuron is assumed to eventually emit a further spike.
The lower index I of P_I(t | t̂) is intended to remind us that the probability density P_I(t | t̂) depends on the time course of the input I(t') for t' < t. Since P_I(t | t̂) is conditioned on the spike at t̂, it can be called a spike-triggered spike density. We interpret P_I(t | t̂) as the distribution of interspike intervals in the presence of an input current I. In the following, we will refer to P_I as the input-dependent interval distribution; see Fig. 5.1B. For renewal systems with stationary input, P_I(t | t̂) reduces to P_0(t - t̂).
The interval distribution P_I(t | t̂) as defined above is a probability density. Thus, integration of P_I(t | t̂) over time yields a probability. For example, ∫_{t̂}^{t} P_I(t' | t̂) dt' is the probability that a neuron which has emitted a spike at t̂ fires the next action potential between t̂ and t. Thus

S_I(t | t̂) = 1 - ∫_{t̂}^{t} P_I(t' | t̂) dt'

is the probability that the neuron stays quiescent between t̂ and t; S_I(t | t̂) is called the survivor function.
The survivor function S_I(t | t̂) has an initial value S_I(t̂ | t̂) = 1 and decreases to zero for t → ∞. The rate of decay of S_I(t | t̂) will be denoted by ρ_I(t | t̂) and is defined by

ρ_I(t | t̂) = - [dS_I(t | t̂)/dt] / S_I(t | t̂) .

In the language of renewal theory, ρ_I(t | t̂) is called the hazard.
Integration of Eq. (5.6) yields the survivor function

S_I(t | t̂) = exp[ - ∫_{t̂}^{t} ρ_I(t' | t̂) dt' ] .
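Numerically, the relations between interval distribution, survivor function, and hazard translate into a few lines of code. The sketch below uses an exponential interval distribution (an assumed test case with rate 20 Hz, chosen only because its hazard is known to be constant): S_0 is obtained by cumulative integration, and the hazard follows as ρ_0(s) = P_0(s)/S_0(s), since dS_0/ds = -P_0(s):

```python
import numpy as np

# Survivor function and hazard from a discretized interval distribution.
# The exponential P0 below is an assumed test case (rate nu = 20 Hz) whose
# hazard should come out constant, rho0(s) = nu.
nu = 20.0
ds = 1e-4
s = np.arange(0.0, 0.5, ds)
P0 = nu * np.exp(-nu * s)

# S0(s) = 1 - int_0^s P0(s') ds'  (trapezoidal cumulative integration)
S0 = np.ones_like(s)
S0[1:] = 1.0 - np.cumsum(0.5 * (P0[:-1] + P0[1:])) * ds

# rho0(s) = P0(s) / S0(s), i.e. -dS0/ds divided by S0
rho0 = P0 / S0
```

The same three lines work for any empirically estimated P_0, which is how hazard functions are extracted from measured interval histograms.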
Let us suppose that we have found under stationary experimental conditions an interval distribution that can be approximated as
Interval distributions and hazard functions have been measured in many experiments. For example, in auditory neurons of the cat driven by stationary stimuli, the hazard function ρ_0(t - t̂) increases, after an absolute refractory time, to a constant level (Goldberg et al., 1964). We approximate the time course of the hazard function as
Let us compare the hazard functions of the two previous examples to the hazard of a homogeneous Poisson process that generates spikes stochastically at a fixed rate ν. Since different spikes are independent, the hazard of a Poisson process is constant, ρ_0(s) ≡ ν. In particular, there is no dependence of the hazard upon the last or any earlier spike. From Eq. (5.8) we find the survivor function S_0(s) = exp[-ν s]. The interval distribution is exponential,

P_0(s) = ν exp[-ν s]   for s > 0 .
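A minimal simulation illustrates the constant hazard: discretize time into steps dt and emit a spike in each step with probability ν·dt, independently of the firing history (all parameter values below are illustrative). The resulting intervals should have mean 1/ν and a coefficient of variation close to 1, the signature of an exponential distribution:

```python
import numpy as np

# Homogeneous Poisson neuron: in each time step dt a spike occurs with
# probability nu*dt, independent of the past (constant hazard).
rng = np.random.default_rng(1)
nu, dt, T = 10.0, 1e-4, 200.0          # rate (Hz), step (s), duration (s)
spikes = rng.random(int(T / dt)) < nu * dt
spike_times = np.nonzero(spikes)[0] * dt
isi = np.diff(spike_times)

mean_isi = isi.mean()                  # close to 1/nu
cv = isi.std() / mean_isi              # close to 1 for exponential ISIs
```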
A simple modification of the Poisson process allows us to incorporate absolute refractoriness. We define a hazard function that vanishes for s < Δ^abs and takes a constant value r thereafter,

ρ_0(s) = 0 for s < Δ^abs ,   ρ_0(s) = r for s > Δ^abs .
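In simulation, such a process is silent for Δ^abs after each spike and then fires with constant probability r·dt per time step. The sketch below (r, Δ^abs, dt, T are illustrative values) checks the two defining properties: no interval is shorter than Δ^abs, and the mean interval is Δ^abs + 1/r:

```python
import numpy as np

# Poisson neuron with absolute refractoriness: hazard 0 for s < delta_abs
# after each spike, constant r afterwards. Parameter values are illustrative.
rng = np.random.default_rng(2)
r, delta_abs, dt, T = 100.0, 0.005, 1e-4, 50.0
u = rng.random(int(T / dt))            # one uniform number per time step

spike_times = []
last_spike = -np.inf
for i, ui in enumerate(u):
    t = i * dt
    if t - last_spike >= delta_abs and ui < r * dt:
        spike_times.append(t)
        last_spike = t

isi = np.diff(spike_times)             # mean close to delta_abs + 1/r
```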
Renewal theory is usually associated with stationary input conditions. The interval distribution P_0 can then be estimated experimentally from a single long spike train. The applicability of renewal theory relies on the hypothesis that a memory back to the last spike suffices to describe the spike statistics. In particular, there should be no correlation between one interval and the next. In experiments, the renewal hypothesis can be tested by measuring the correlation between subsequent intervals. Under some experimental conditions, correlations are small, indicating that a description of spiking as a stationary renewal process is a good approximation (Goldberg et al., 1964).
The notion of stationary input conditions is a mathematical concept that cannot be easily translated into experiments. With intracellular recordings under in vitro conditions, constant input current can be imposed and thus the renewal hypothesis can be tested directly. Under in vivo conditions, the assumption that the input current to a neuron embedded in a large neural system is constant (or has stationary statistics) is questionable; see Perkel et al. (1967a,b) for a discussion. While the externally controlled stimulus can be made stationary (e.g., a grating drifting at constant speed), the input to an individual neuron is beyond experimental control.
Let us suppose that, for a given experiment, we have checked that the renewal hypothesis holds to a reasonable degree of accuracy. From the experimental interval distribution P_0 we can then calculate the survivor function S_0 and the hazard via Eqs. (5.5) and (5.10); see the examples in subsection 5.2.2. If some additional assumptions regarding the nature of the noise are made, the form of the hazard ρ(t | t̂) can be interpreted in terms of neuronal dynamics. In particular, a reduced hazard immediately after a spike is a signature of neuronal refractoriness (Goldberg et al., 1964; Berry and Meister, 1998).
In the case of a stationary renewal process, the interval distribution P_0 contains all the statistical information; in particular, the mean firing rate, the autocorrelation function, and the noise spectrum can be derived from it.
To arrive at an expression for the mean firing rate, we start with the definition of the mean interval,

⟨s⟩ = ∫_0^∞ s P_0(s) ds .

The mean firing rate is the inverse of the mean interval, ν = 1/⟨s⟩.
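As a numerical check of this relation (the exponential P_0 with ν = 20 Hz is an assumed example, so the recovered rate should come out at 20 Hz):

```python
import numpy as np

# Mean interval <s> = int_0^inf s P0(s) ds and mean rate nu = 1/<s>,
# evaluated on a grid for an assumed exponential interval distribution.
nu_true = 20.0
ds = 1e-4
s = np.arange(0.0, 2.0, ds)
P0 = nu_true * np.exp(-nu_true * s)

mean_interval = np.sum(s * P0) * ds    # approx 1/nu_true = 0.05 s
mean_rate = 1.0 / mean_interval        # approx 20 Hz
```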
Let us consider a spike train S_i(t) = Σ_f δ(t - t_i^(f)) of length T. The firing times t_i^(f) might have been measured in an experiment or else generated by a neuron model. We suppose that T is sufficiently long so that we can formally consider the limit T → ∞. The autocorrelation function C_ii(s) of the spike train is a measure for the probability to find two spikes at a time interval s, i.e.,

C_ii(s) = ⟨ S_i(t) S_i(t + s) ⟩_t ,

where ⟨·⟩_t denotes an average over time t.
The calculation of the autocorrelation function for a stationary renewal process is the topic of the next section.
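In practice, C_ii(s) can be estimated from a binned spike train by averaging products of bin contents at lag s. The sketch below uses a simulated Poisson train (assumed parameter values), for which the correlation at nonzero lags should settle at ν²:

```python
import numpy as np

# Estimate the autocorrelation C(s) ~ <S(t) S(t+s)> at nonzero lags from a
# binned (here: Poisson) spike train. Spikes are delta functions, so each
# product of bins is divided by dt**2.
rng = np.random.default_rng(3)
nu, dt, T = 50.0, 1e-3, 500.0
x = (rng.random(int(T / dt)) < nu * dt).astype(float)

lags = np.arange(1, 51)
C = np.array([np.mean(x[:-k] * x[k:]) for k in lags]) / dt**2

# For a Poisson train, C(s) fluctuates around nu**2 for all s > 0.
```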
The power spectrum (or power spectral density) of a spike train is defined as P(ω) = lim_{T→∞} P_T(ω), where P_T(ω) is the power of a segment of length T of the spike train,
P_T(ω) = ∫ ⟨ S_i(t) S_i(t + s) ⟩_t e^{-iωs} ds
       = (1/T) ∫_0^T dt ∫ S_i(t) S_i(t + s) e^{-iωs} ds
       = (1/T) ∫_0^T S_i(t) e^{+iωt} dt ∫ S_i(s') e^{-iωs'} ds' ,   (5.26)

where we have substituted s' = t + s in the last step. The power of the segment is thus the squared modulus of the Fourier transform of the spike train, divided by T; equivalently, the power spectrum is the Fourier transform of the autocorrelation function (Wiener-Khinchin theorem).
Noise is a limiting factor to all forms of information transmission and in particular to information transmission by neurons. An important concept of the theory of signal transmission is the signal-to-noise ratio. A signal that is transmitted at a certain frequency should be stronger than (or at least of the same order of magnitude as) the noise at the same frequency. For this reason, the noise spectrum of the transmission channel is of interest. In this section we calculate the noise spectrum of a stationary renewal process. As we have seen above, the noise spectrum of a neuron is directly related to the autocorrelation function of its spike train. Both noise spectrum and autocorrelation function are experimentally accessible (Bair et al., 1994; Edwards and Wakefield, 1993).
Let ν = ⟨S_i⟩ denote the mean firing rate (expected number of spikes per unit time) of the spike train. Thus the probability of finding a spike in a short segment [t, t + Δt] of the spike train is ν Δt. For large intervals s, firing at time t + s is independent of whether or not there was a spike at time t. Therefore, the expectation to find a spike at t and another spike at t + s approaches for s → ∞ a limiting value, ⟨S_i(t) S_i(t + s)⟩ = C_ii(s) → ν². It is convenient to subtract this baseline value and introduce a `normalized' autocorrelation,

C̃_ii(s) = C_ii(s) - ν² ,

with lim_{s→∞} C̃_ii(s) = 0.
In the case of a stationary renewal process, the autocorrelation function is closely related to the interval distribution P_0(s). This relation will now be derived. Let us suppose that we have found a first spike at t. To calculate the autocorrelation we need the probability density for a spike at t + s. Let us construct an expression for C_ii(s) for s > 0. The correlation function for positive s will be denoted by C_+(s).
Due to the symmetry of C_ii, we have C_ii(s) = C_+(-s) for s < 0. Finally, for s = 0, the autocorrelation has a δ peak reflecting the trivial autocorrelation of each spike with itself. Hence,

C_ii(s) = ν δ(s) + C_+(s) + C_+(-s) .
In Section 5.2.3 we have defined the Poisson neuron as a stationary renewal process with constant hazard ρ_0(t - t̂) = ν. In the literature, a Poisson process is often defined via its autocorrelation,

C_ii(s) = ν δ(s) + ν² .
Since the interval distribution of a Poisson process is exponential [cf. Eq. (5.18)], we can evaluate the integrals on the right-hand side of Eq. (5.30) in a straightforward manner. The result is
We return to the Poisson neuron with absolute refractoriness defined in Eq. (5.19). Apart from an absolute refractory time Δ^abs, the neuron fires with rate r. For Δ^abs → 0, Eq. (5.35) yields the autocorrelation function
Can we understand the decrease in the noise spectrum for ω → 0? The mean interval of a Poisson neuron with absolute refractoriness is ⟨s⟩ = Δ^abs + r^{-1}. Hence the mean firing rate is

ν = 1/⟨s⟩ = r / (1 + r Δ^abs) .
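Combining ⟨s⟩ = Δ^abs + r^{-1} with ν = 1/⟨s⟩ gives ν = r/(1 + r Δ^abs). A quick check (parameter values illustrative) shows that for small r the refractory period barely matters, while for large r the rate saturates at the ceiling 1/Δ^abs:

```python
# Mean firing rate of a Poisson neuron with absolute refractoriness:
# <s> = delta_abs + 1/r, hence nu = 1/<s> = r / (1 + r * delta_abs).
def mean_rate(r, delta_abs):
    return r / (1.0 + r * delta_abs)

low = mean_rate(10.0, 0.005)     # ~9.5 Hz: refractoriness barely matters
high = mean_rate(1e6, 0.005)     # approaches the ceiling 1/delta_abs = 200 Hz
```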
© Cambridge University Press
This book is in copyright. No reproduction of any part of it may take place without the written permission of Cambridge University Press.