A follow-up to the non-technical interactive KL-divergence and gradient descent tutorials, showing gradient descent on the final 6-variable example from the initial post.
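As a minimal sketch of the technique the tutorial covers: gradient descent can fit a model distribution q to a fixed target p by repeatedly stepping down the gradient of KL(p‖q). The target values, learning rate, and step count below are illustrative assumptions, not the post's actual numbers; q is parameterised by logits through a softmax, for which the gradient of KL(p‖q) with respect to the logits is simply q − p.

```python
import math

def softmax(z):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def kl(p, q):
    # KL(p || q) for discrete distributions; terms with p_i = 0 contribute 0.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical target distribution over 6 outcomes (a stand-in for the
# 6-variable example in the original post).
p = [0.05, 0.10, 0.15, 0.20, 0.25, 0.25]

# Logits parameterising q; start from the uniform distribution.
z = [0.0] * 6
lr = 0.5
for step in range(500):
    q = softmax(z)
    # Gradient of KL(p || q) w.r.t. the logits z is (q - p).
    z = [zi - lr * (qi - pi) for zi, qi, pi in zip(z, q, p)]

q = softmax(z)  # after training, q is very close to p
```

After a few hundred steps q matches p to several decimal places; the interactive tutorial animates exactly this kind of trajectory.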
Integrated Information Theory (IIT) starts from five axioms intended to capture the essential aspects of every possible experience. These five axioms are then used to construct an information-theoretic theory of consciousness based on the physical cause-effect properties of a system. I propose to explore what happens when the axioms are translated to a level much closer to the original starting point: the representational level. This first installment looks at the axioms of IIT that we will take as the starting point for our theory, and suggests some modifications.
In previous posts I tried to 'save' our everyday intuitions about experience. In this post I introduce the reverse playing card experiment and my direct experience sampling results, both of which show my previous conclusions to be wrong. I then introduce my current attempt to think about the problem, through Dennettian Predictive Processing.
I present a classic experiment and show a nice example of the lack of precision in our peripheral vision. I argue that this doesn't show that we are commonly mistaken about the contents of consciousness, but just that the contents generally include high-level predictions about the world.
Is your refrigerator light on when the door is closed? How can you ever know? Perhaps consciousness works in the same way. Are you conscious when you're not specifically noticing it? How can you ever know? This post includes a tool for discussing this question that also works well as a mindfulness reminder.
Douglas Hofstadter thinks you are a strange loop. I think he's right. Unfortunately, the two best examples of strange loops are Gödel's incompleteness theorem and the human brain, neither of which is particularly easy to understand. In this post I break down the key components of strange loops (without too much logic). In the following weeks I will cover the ways in which we are strange loops, and how we can use modern AI to build them.
AI improves along the dimensions that we use to measure it. If we use a human-inspired definition of intelligence to determine our measures of success, we should expect more human-like AI. If we use a machine-oriented definition of intelligence, we should expect less human-like AI. I analyse the two different definitions of intelligence and conclude that, whether wise or foolish, we are currently walking the path towards human-like AI.
In the Benevolent Artificial Anti-Natalism scenario it is imagined that a superintelligence, not being susceptible to existence bias, might realise that human suffering is inevitable and use its powers to compassionately prevent the continued existence of the human race.