# Dayan & Abbott Week 2

## Spike Trains and Firing Rates (cont.)

1. The brain is big, complicated, and messy.
2. We can only measure/simulate very simplified versions.
3. Almost always treat action potentials as identical even though they vary in duration/amplitude/shape.
4. Different ways of talking about spike sequences and firing rates:
    1. $\rho(t)$ = neural response function = list of times of action potentials.
    2. $r$ = spike-count rate = frequency of firing over an extended period, calculated from one trial.
    3. $r(t)$ = time-dependent firing rate = probability per unit time of firing at time $t$, averaged over many trials of the same stimulus. In other words, a rate that changes over small periods of time; must be calculated over multiple trials.
    4. $\langle r \rangle$ = average firing rate = frequency of firing over an extended period, averaged over multiple trials.
5. Figure 1.4 shows different methods for calculating $r(t)$.
6. It would be really useful to generate a map labelling known stimuli with the brain regions that respond to them.
7. From Figure 1.5 it is interesting that the curve shows a Gaussian response peaking at $s_{max}$. What is the benefit over a binary response?
8. The error bars in Fig 1.9 seem small compared to the interpretation of the figures on the left.
9. Are the following two interpretations of Fig 1.10c significantly different and equally plausible? a) The neuron is passing different information when it fires in rapid succession. b) The conditions for the neuron firing twice within 5 ms are arbitrary and only Fig 1.10a is important.
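The four rate definitions above can be made concrete with a short sketch. The simulated spike times, the 100 ms bin width, and names like `r_single` are my own choices, not from the book:

```python
import numpy as np

rng = np.random.default_rng(0)
T, rate, n_trials = 10.0, 10.0, 50  # trial length (s), true rate (Hz), number of trials

# Simulated spike-time lists for several trials (uniform placement stands in for data);
# each list plays the role of the neural response function rho(t) for one trial.
trials = [np.sort(rng.uniform(0, T, rng.poisson(rate * T))) for _ in range(n_trials)]

# Spike-count rate r: spikes in one trial divided by the trial length.
r_single = len(trials[0]) / T
print("spike-count rate from one trial: %.1f Hz" % r_single)

# Time-dependent rate r(t): histogram spike times across trials in 100 ms bins,
# then divide by (number of trials * bin width).
dt = 0.1
bins = np.arange(0, T + dt, dt)
counts, _ = np.histogram(np.concatenate(trials), bins=bins)
r_t = counts / (n_trials * dt)

# Average rate <r>: total spikes over all trials / (trials * T);
# this equals the time average of r(t).
r_avg = sum(len(t) for t in trials) / (n_trials * T)
print("average firing rate: %.2f Hz (time-mean of r(t): %.2f Hz)" % (r_avg, np.mean(r_t)))
```

Note that $\langle r \rangle$ comes out identical whether computed from raw counts or by averaging $r(t)$ over time, since both divide the same total spike count by the same total time.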

## Homogeneous Poisson Processes

#### Standard Questions/Notes

- To simplify things we can assume that history matters less:
  - Point process: firing could depend on the entire history of previous events.
  - Renewal process: firing depends only on the history since the last firing.
  - Poisson process: no dependence at all on previous events.
  - Homogeneous Poisson process: the firing rate is also independent of time, so spikes are placed randomly over the trial with probability determined by the known firing rate.

#### Technical Questions/Notes

Python script for generating and plotting a homogeneous Poisson spike train with a given length (s) and firing rate (Hz):
```python
import sys

import numpy as np
import matplotlib.pyplot as plt

# Initialise
T = 100  # length of trial in s
r = 10   # rate of firing in Hz
if len(sys.argv) == 3:
    T = int(sys.argv[1])
    r = int(sys.argv[2])
    print("Using given values. Length = %is, Rate = %iHz" % (T, r))
else:
    print("Using default values. Length = %is, Rate = %iHz" % (T, r))

# Generate data: add exponentially distributed inter-spike intervals
# until their sum exceeds the length of the trial.
intervals = []
while sum(intervals) < T:
    intervals.append(-np.log(np.random.rand()) / r)
intervals.pop()  # remove the last one as it falls beyond the end of the trial

# Output results
print("Number of spikes: %i (expected approx. %i)" % (len(intervals), r * T))
# Coefficient of variation = std of intervals / mean interval
CV = np.std(intervals) / np.mean(intervals)
print("Coefficient of Variation (should be approx 1) = %f" % CV)

sp_times = np.cumsum(intervals)  # ordered array of spike times

# Plot the generated spike train.
y_axis = [1] * len(sp_times)
plt.bar(sp_times, y_axis, width=1 / float(T * r))
plt.show()
```


The results show the problem with not modelling the refractory period: when the rate is set high, many spikes fall implausibly close together.
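A minimal way to address this is to add an absolute refractory period to each drawn interval, so no two spikes can be closer than some dead time. A sketch of that modification (the names `gen_intervals` and `t_ref` are my own; this shifts each exponential interval rather than redrawing, which also pulls the CV below 1):

```python
import numpy as np

rng = np.random.default_rng(0)

def gen_intervals(T, r, t_ref=0.0):
    """Draw inter-spike intervals (s) for a homogeneous Poisson process,
    optionally with an absolute refractory period t_ref added to each."""
    intervals = []
    while sum(intervals) < T:
        intervals.append(t_ref - np.log(rng.random()) / r)
    intervals.pop()  # last interval falls beyond the end of the trial
    return np.array(intervals)

for t_ref in (0.0, 0.005):  # no refractory period vs. a 5 ms one
    iv = gen_intervals(T=100, r=50, t_ref=t_ref)
    print("t_ref = %.3fs: %i spikes, CV = %.2f"
          % (t_ref, len(iv), np.std(iv) / np.mean(iv)))
```

With a refractory period the minimum interval is bounded away from zero and the CV drops below 1, which matches the book's point that real neurons are less variable than a pure Poisson process at high rates.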

Python script for checking that the generation leads to the expected distribution for the number of spikes over the time period (a Poisson distribution, approximately Gaussian for large $rT$):

```python
import sys

import numpy as np
import matplotlib.pyplot as plt

# Initialise
T = 10        # length of trial in s
r = 10        # rate of firing in Hz
trials = 10000
if len(sys.argv) >= 3:
    T = int(sys.argv[1])
    r = int(sys.argv[2])
    if len(sys.argv) >= 4:
        trials = int(sys.argv[3])
    print("Using given values. Length = %is, Rate = %iHz for %i trials" % (T, r, trials))
else:
    print("Using default values. Length = %is, Rate = %iHz for %i trials" % (T, r, trials))

# Generate data
no_spikes = []
while len(no_spikes) < trials:
    if len(no_spikes) % 10 == 0:
        print("%.1f%% completed...\r" % (len(no_spikes) / float(trials) * 100), end="")
    # add exponentially distributed intervals until the sum exceeds the trial length
    intervals = []
    while sum(intervals) < T:
        intervals.append(-np.log(np.random.rand()) / r)
    no_spikes.append(len(intervals) - 1)  # last interval falls beyond the trial

# Output results
print("Average number of spikes: %f (expected approx. %i)" % (np.mean(no_spikes), r * T))

# Histogram of spike counts across trials
density = []
for i in range(0, max(no_spikes) + 1):
    density.append(no_spikes.count(i))
print(density)

plt.plot(density, 'o')
plt.show()
```
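As a sanity check on the histogram, the empirical spike-count distribution can be compared against the analytic Poisson probabilities $P[n] = (rT)^n e^{-rT} / n!$. A minimal sketch (the helpers `count_spikes` and `poisson_pmf` are my own names; the PMF is computed in log space to avoid overflow at large $n$):

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def count_spikes(T, r):
    """Count spikes in one trial, generated from exponential intervals."""
    t, n = 0.0, 0
    while True:
        t += -math.log(rng.random()) / r
        if t >= T:
            return n
        n += 1

def poisson_pmf(n, mean):
    """P[n] = mean^n * exp(-mean) / n!, evaluated via logs for stability."""
    return math.exp(n * math.log(mean) - mean - math.lgamma(n + 1))

T, r, n_trials = 10, 10, 10000
counts = np.array([count_spikes(T, r) for _ in range(n_trials)])

for n in (80, 100, 120):
    empirical = np.mean(counts == n)
    print("n = %3i: empirical %.4f, analytic %.4f" % (n, empirical, poisson_pmf(n, r * T)))
```

For $rT = 100$ the analytic peak probability is about $1/\sqrt{2\pi rT} \approx 0.04$, so the empirical and analytic columns should agree to within sampling noise.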


Sample output: 