Dayan & Abbott Week 3 (Chapter 2 - Section 2.3)

Recap

  1. There are many different ways of capturing the informational content of a neuron's spike train. Each one throws away some possibly important information.
  2. The spike-triggered average stimulus captures the selectivity of a neuron to dynamic stimuli (a sketch of the computation follows this list).
  3. We can study responses of neurons in the retina, LGN, and V1. V2 is generally too complex and non-linear. But if a lot of V1 behaviour depends on feedback from V2, then surely V1 would be just as complicated?
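
A minimal sketch of how the spike-triggered average mentioned above could be computed from a discretised stimulus and a list of spike times. This is not from the book; the function name, window length, and binning are illustrative choices:

```python
import numpy as np

def spike_triggered_average(stimulus, spike_times, dt, window=0.3):
    """Average the stimulus over the window preceding each spike.

    stimulus    : 1D array, stimulus value in each time bin of width dt
    spike_times : iterable of spike times in seconds
    dt          : bin width in seconds
    window      : how far back to look before each spike, in seconds
    """
    n_lags = int(window / dt)
    sta = np.zeros(n_lags)
    n_used = 0
    for t in spike_times:
        i = int(t / dt)                  # bin containing the spike
        if i >= n_lags:                  # skip spikes too early for a full window
            sta += stimulus[i - n_lags:i]
            n_used += 1
    return sta / max(n_used, 1)          # entry 0 is the oldest lag, entry -1 is just before the spike
```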

Estimating Firing Rates

  1. Reverse correlation methods are used to construct a model that includes the effects of the stimulus over an extended period of time.
  2. Assume the firing rate at time $t$ is a weighted sum of the values of the stimulus at earlier times (written out as an equation below this list).
  3. Are we constrained to only stimuli that we can quantify? Is the brain also constrained in this way?
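
Written out, the weighted sum above becomes a convolution of the stimulus $s$ with a kernel $D(\tau)$ plus a background rate $r_0$ (notation assumed here, following the usual linear-estimate form):

$$
r_{\text{est}}(t) = r_0 + \int_0^{\infty} D(\tau)\, s(t - \tau)\, d\tau
$$

The kernel $D(\tau)$ plays the role of the weights: it says how much the stimulus value $\tau$ seconds in the past contributes to the estimated firing rate now.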

The Early Visual System

  1. Figure 2.4: Rod and cone receptors not spiking.
  2. Smoother representation of light intensity, sometimes inverted.
  3. Why is this smoother representation only suitable in the retina? (Distances are small)
  4. Retinal Ganglion spiking behaviour.
  5. Still from Figure 2.4: G2 fires when the light is on, G1 fires when the light is off.
  6. The optic chiasm is where the optic nerves partially cross: fibres from the nasal half of each retina cross, while the temporal fibres do not. The net result is that the right cerebral hemisphere senses and processes the left visual hemifield, and vice versa.
  7. Neurons in the retina, LGN, and primary visual cortex have receptive fields.
  8. Illumination outside the receptive field cannot generate a response directly, although it can affect responses to stimuli within the receptive field. (Is this the PP extended-line example?)
  9. Retinal ganglion and LGN cells respond best to circular spots of light surrounded by darkness, or dark spots surrounded by light.
  10. In primary visual cortex, many neurons respond best to oriented bars or boundaries (Gabor-patch-like receptive fields; see the sketch after this list).
  11. Static images are not very effective at evoking visual responses. (Are we just going to read PP into everything now?) Is this also true at the retina/LGN level?
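
As a rough illustration of items 9 and 10, here is a sketch of the two receptive-field shapes: a difference of Gaussians (centre-surround, retina/LGN-like) and a Gabor patch (oriented, V1-like). All widths, the orientation, and the spatial frequency are arbitrary choices, not values from the book:

```python
import numpy as np

# Grid over space; sizes and parameters below are arbitrary illustrative choices.
x, y = np.meshgrid(np.linspace(-2, 2, 101), np.linspace(-2, 2, 101))

# Centre-surround (difference of Gaussians): retina / LGN-like receptive field.
sigma_c, sigma_s = 0.3, 0.9
center   = np.exp(-(x**2 + y**2) / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
surround = np.exp(-(x**2 + y**2) / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
dog = center - surround                      # positive centre, negative surround

# Gabor patch (Gaussian envelope times an oriented sinusoid): V1 simple-cell-like.
theta, freq, sigma = np.pi / 4, 1.5, 0.5     # orientation, spatial frequency, envelope width
xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the preferred orientation
gabor = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)
```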

The Retinotopic Map

  1. The retinotopic map is the transformation from locations in the visual field to corresponding locations on the cortical surface.
  2. Lots of interesting Figures at the bottom of this page.
  3. The visual world is mapped onto the cortical surface in a topographic manner. This feels like common-sense. Should it be surprising (striking)?
  4. Eccentricity = angular distance from the fixation point. Azimuth = angle relative to the horizontal meridian.
  5. Pixel-by-pixel light intensities are not a useful parametrisation of a visual image because neurons adapt to the overall level of illumination.
  6. Instead use contrast: the difference between the local intensity and the background level, divided by the background level (see the sketch after this list).
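
A small sketch of the contrast parametrisation in item 6, assuming (my assumption, for the sketch) that the mean intensity stands in for the background level when none is given:

```python
import numpy as np

def contrast_image(intensity, background=None):
    """Convert raw intensities I(x, y) to contrast relative to a background level."""
    intensity = np.asarray(intensity, dtype=float)
    if background is None:
        background = intensity.mean()              # mean intensity stands in for the background
    return (intensity - background) / background   # dimensionless; unchanged if illumination is scaled overall
```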

The Nyquist Frequency

  1. A visual example of temporal aliasing.
  2. In the retina, aliasing is spatial rather than temporal: the density of photoreceptors sets a spatial Nyquist frequency, and higher spatial frequencies are aliased (see the sketch after this list).
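
A sketch of spatial aliasing with evenly spaced "photoreceptors": a grating above the Nyquist frequency set by the receptor spacing is indistinguishable, after sampling, from a lower-frequency grating. The spacing and frequencies are arbitrary illustrative values:

```python
import numpy as np

# Photoreceptors spaced d apart can only represent spatial frequencies up to 1 / (2d).
d = 0.01                                   # receptor spacing (arbitrary units)
nyquist = 1 / (2 * d)                      # 50 cycles per unit distance here
positions = np.arange(0, 1, d)             # receptor positions

f_high = nyquist + 10                      # grating above the Nyquist limit (60)
f_alias = 1 / d - f_high                   # frequency it is folded onto (40)

high  = np.cos(2 * np.pi * f_high * positions)
alias = np.cos(2 * np.pi * f_alias * positions)
print(np.allclose(high, alias))            # True: indistinguishable after sampling
```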

Misc Questions (not from the book)

  1. What do glial cells do?
  2. What happens when a neuron abandons a connection? Does that part of the axon form a new one somewhere?
  3. During the refractory period of a neuron (a few milliseconds), a 3 GHz processor can perform on the order of ten million operations ($3 \times 10^9$ cycles/s $\times$ ~3 ms $\approx 10^7$). Not close to what 80 billion neurons can do in parallel, but still interesting.