Kinds of Intelligence: Research Project Overview (Work in Progress Page)
From Consciousness to Active PredNet

Consciousness involves Self Awareness

As with all sentences about consciousness, this is both controversial and debatable. I personally find this statement, that consciousness involves self awareness, somewhat self-evident, but I think everything below still follows if the statement is weakened to say only that some parts of consciousness involve self awareness.

The link between consciousness and self awareness can be found in Hofstadter's work on strange loops.

Self Awareness requires Self Models

If this strange loop operates over a 'self', and this 'self' exists in some way, then it must exist in a self model.

Self Models are formed by Active Inference

If the predictive processing model of the mind is correct, then the natural place to look for self models is where action meets input. This is the point at which a model of the world is most efficient if it encodes the repeatable transformations of the input data that are caused by the agent's own actions.
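
To make this concrete, here is a minimal sketch (in PyTorch, with names and sizes that are illustrative assumptions rather than anything from a specific project) of a forward model trained at exactly that action-meets-input point: it learns the repeatable transformations the agent's own actions apply to its sensory input.

```python
# Sketch only: a forward model that learns how the agent's actions transform
# its sensory input. Dimensions and data are illustrative assumptions.
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    """Predicts the next observation from the current observation and action."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, obs_dim),
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, act], dim=-1))

# One training step: the model improves by capturing how its own actions
# transform the input -- the 'self'-shaped regularity in the data stream.
model = ForwardModel(obs_dim=16, act_dim=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
obs, act, next_obs = torch.randn(32, 16), torch.randn(32, 4), torch.randn(32, 16)
loss = nn.functional.mse_loss(model(obs, act), next_obs)
opt.zero_grad()
loss.backward()
opt.step()
```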

There are several different ways in which the term active inference is used in the literature:

  1. Proprioceptive error minimisation.
  2. Exteroceptive error minimisation.
  3. Active Exploration.

The first of these is directly linked to self models, though because the feedback is so direct it need not involve interacting with the environment, and as such it does not model the situated self. The second involves predictions about how actions change the incoming sensory data by changing the agent's perspective on (or directly changing) the environment.
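
As a toy illustration of the second sense, here is a sketch (assuming the ForwardModel from the block above and a small discrete set of candidate actions, both of which are my own assumptions) of an agent that picks the action whose predicted consequence best matches what it expects to sense next, so that acting itself reduces exteroceptive prediction error. It is purely illustrative, not a full active inference scheme.

```python
# Sketch only: choose the action whose predicted sensory consequence is
# closest to the agent's current expectation, so action reduces prediction error.
import torch

def choose_action(model, obs, candidate_actions, expected_obs):
    errors = []
    for act in candidate_actions:
        predicted = model(obs.unsqueeze(0), act.unsqueeze(0)).squeeze(0)
        errors.append(torch.mean((predicted - expected_obs) ** 2))
    # Lowest predicted error = the action that best makes the world match the prediction.
    return candidate_actions[int(torch.argmin(torch.stack(errors)))]
```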

Active Inference Minimises Prediction Error (AI version)

PredNet (Lotter, Kreiman, Cox) takes ideas from predictive coding and implements them in a deep net that predicts the next image in a video sequence. This works well for generative tasks in which learning occurs over large data sets, but it does not require any actuators on the agent's part.
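
For flavour, here is a minimal sketch of the next-frame prediction objective. This is not the PredNet architecture itself (which stacks convolutional LSTM layers with explicit error units); it is just a small convolutional predictor trained with an L1 loss on the next frame, to show the shape of the task. Shapes and data are illustrative assumptions.

```python
# Sketch only: a tiny next-frame predictor trained to minimise prediction error.
import torch
import torch.nn as nn

predictor = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1),
)
opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)

frames = torch.randn(8, 3, 64, 64)       # frames at time t
next_frames = torch.randn(8, 3, 64, 64)  # frames at time t+1

prediction = predictor(frames)
loss = nn.functional.l1_loss(prediction, next_frames)  # the prediction error to minimise
opt.zero_grad()
loss.backward()
opt.step()
```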

Active Inference Maximises Empowerment (AI version)

It is not clear whether simply minimising prediction error is enough to drive behaviour in an embedded system. Rewarding the lowest prediction error could lead to dark-room behaviour, where the agent finds a boring, unchanging corner of the environment and stays there.

A promising direction for an alternative intrinsic reward is empowerment. Empowerment is an information-theoretic measure of the coupling between an agent's inputs (sensors) and outputs (actuators) [Klyubin et al 2005]. An agent attempting to maximise empowerment is trying to maximise the influence its actions have over its future sensory input, i.e. the channel capacity from actuators to sensors.
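
As a concrete toy example (the transition table below is my own illustrative assumption, not something from the paper), one-step empowerment in a deterministic, discrete world reduces to the log of the number of distinct states the agent can reach in a single step:

```python
# Sketch only: 1-step empowerment for a deterministic, discrete environment.
# In this special case the channel capacity between actions and resulting
# states is simply log2 of the number of distinct reachable states.
import math

def one_step_empowerment(state, transitions):
    """transitions[(state, action)] -> next_state, assumed deterministic."""
    reachable = {next_s for (s, _a), next_s in transitions.items() if s == state}
    return math.log2(len(reachable)) if reachable else 0.0

# Toy world: state 0 can reach 3 distinct states, state 1 only itself.
transitions = {
    (0, "left"): 1, (0, "right"): 2, (0, "stay"): 0,
    (1, "left"): 1, (1, "right"): 1, (1, "stay"): 1,  # a "dark room"
}
print(one_step_empowerment(0, transitions))  # ~1.58 bits
print(one_step_empowerment(1, transitions))  # 0.0 bits
```

Note how the dark-room state has zero empowerment: every action leads to the same place, so an empowerment-maximising agent has a reason to leave.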

Other possibilities include maximising the rate of change of error minimisation (rewarding ongoing improvement in prediction rather than low error itself) and introducing homeostatic goals into the system. It is possible that some combination of these approaches is required.
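
As a sketch of the first of these options (the window size and simple averaging are illustrative assumptions), the intrinsic reward can be the recent drop in prediction error rather than the error itself, so an unchanging dark room, where no further improvement is possible, stops being rewarding:

```python
# Sketch only: reward recent improvement in prediction error, not low error itself.
from collections import deque

class LearningProgressReward:
    def __init__(self, window: int = 10):
        self.errors = deque(maxlen=2 * window)
        self.window = window

    def __call__(self, current_error: float) -> float:
        self.errors.append(current_error)
        if len(self.errors) < 2 * self.window:
            return 0.0
        old = list(self.errors)[: self.window]
        new = list(self.errors)[self.window :]
        # Positive while prediction error is still falling; zero once learning stalls.
        return (sum(old) / self.window) - (sum(new) / self.window)
```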