Artificial Minds

We’re getting wrapped around ourselves trying to describe the consciousness program. We need to start simply and iterate.

The basic outline of a minimal consciousness is a loop: apply Popper’s epistemology (conjecture and criticism) and evaluate with successively more complex test cases until we’re satisfied with what we’re interacting with. At that point it can start to be involved in the growth of its own mind and help us correct the errors we introduced in its initial prototype. The choice of programming language is not an issue, nor is the language in which you communicate with the mind.
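
As a concrete starting point, the loop might look something like the sketch below. Everything in it is a placeholder of my own (the names conjecture, criticize, make_test_cases, satisfied, and the toy scoring), not a real design; the only point is the shape: conjecture, criticize against progressively harder test cases, loop until satisfied.

```python
# A toy outer loop: conjecture, criticize against test cases, and raise the
# complexity of the cases each pass. Every name and scoring rule here is a
# hypothetical placeholder, not part of any real design.

import random


def conjecture(memory):
    """Guess a theory; here just a number nudged by how well past guesses did."""
    return random.random() + sum(memory) / (len(memory) + 1)


def criticize(theory, case):
    """Does the theory survive this test case? (Placeholder criterion.)"""
    return theory > case


def make_test_cases(complexity):
    """Successively more complex test cases; here simply more and harder ones."""
    return [random.random() * complexity for _ in range(complexity)]


def satisfied(passed, total):
    """Stop once a theory survives every case at the current complexity."""
    return total > 0 and passed == total


def main_loop(max_complexity=5):
    memory = []
    for complexity in range(1, max_complexity + 1):
        cases = make_test_cases(complexity)
        theory = conjecture(memory)
        passed = sum(criticize(theory, case) for case in cases)
        memory.append(passed / len(cases))
        print(f"complexity {complexity}: passed {passed}/{len(cases)}")
        if satisfied(passed, len(cases)):
            return theory
    return None


if __name__ == "__main__":
    main_loop()
```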

The first problem with communicating with the mind will be real-time concerns. We won’t be satisfied if it takes too long or communicates too quickly, so we have to artificially constrain the communication channel and allow it to use the additional time and resources (if any) to pre-process and do additional testing on its theories.
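
One way to picture the constrained channel, as a rough sketch: hold every reply until a fixed interval has passed and spend any spare time on extra testing. The interval, the keep_testing hook, and the polling step are all assumptions for illustration.

```python
# A sketch of the constrained channel: hold each reply until a fixed interval
# has elapsed and spend the spare time on additional testing. REPLY_INTERVAL,
# keep_testing, and the 0.05 s polling step are assumptions for illustration.

import time

REPLY_INTERVAL = 2.0  # seconds between outgoing messages (assumed value)


def keep_testing(theory, deadline):
    """Use whatever time remains before the deadline for extra test rounds."""
    rounds = 0
    while time.monotonic() < deadline:
        rounds += 1          # stand-in for running one more test case
        time.sleep(0.05)
    return rounds


def reply(theory):
    deadline = time.monotonic() + REPLY_INTERVAL
    extra_rounds = keep_testing(theory, deadline)
    # The reply is released only once the interval has elapsed, never sooner.
    print(f"reply: {theory!r} (after {extra_rounds} extra test rounds)")


if __name__ == "__main__":
    reply("kicking once is delightful")
```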

The next problem will be establishing an internal and an external cadence. The external cadence will determine the real-time communication speed, and the internal cadence will be used for detecting runaway processes and for various other uses of timing within the evaluation of ideas.
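
A rough sketch of the two cadences, with the ratio between them assumed: the external tick paces replies, and the internal tick meters the work done on ideas so a runaway evaluation can be cut off.

```python
# A sketch of the two cadences: an external tick paces communication with the
# outside world, and a faster internal tick meters the work done on ideas so a
# runaway evaluation can be cut off before the next reply is due. The ratio
# between the two clocks is an assumption.

EXTERNAL_TICK_SECONDS = 1.0          # one outgoing message per external tick
INTERNAL_TICKS_PER_EXTERNAL = 100    # budget of internal ticks per external tick


def evaluate(idea, budget=INTERNAL_TICKS_PER_EXTERNAL):
    """Spend internal ticks on an idea; abandon it if it would overrun the budget."""
    ticks = 0
    for _step in range(idea["steps"]):
        ticks += 1
        if ticks > budget:
            return None, ticks       # runaway: give up before the next external tick
    return idea["name"], ticks


if __name__ == "__main__":
    for idea in ({"name": "kick once", "steps": 3},
                 {"name": "kick forever", "steps": 10_000}):
        result, ticks = evaluate(idea)
        print(result or "abandoned (runaway process)",
              f"after {ticks} internal ticks "
              f"(external cadence: one reply every {EXTERNAL_TICK_SECONDS}s)")
```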

We will need to be slow to introduce evolutionary algorithms to take over for the parts we’re too lazy to implement. Those implementations should only be seen as temporary until the actual explanation of what is being achieved can be refactored into an explicit algorithm.

You don’t know what actions will do. You have to just try some and see what happens. You might pick any of: cry, kick, look around. All you know is that you want to balance your satisfaction vector, but at this point you can only pick at random.
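
The starting state might be as small as this; the action names, satisfaction axes, and numbers are just illustrative stand-ins.

```python
# The starting state, with assumed names and numbers: a handful of actions
# whose effects are unknown, and a satisfaction vector we would like to balance.

ACTIONS = ["cry", "kick", "look around"]
satisfaction = {"energy": 0.9, "boredom": 0.8, "comfort": 0.5}  # assumed axes
scores = {action: 0.0 for action in ACTIONS}                    # nothing known yet
```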


You consider kick, but kick has a score of 0 (they all do; you know nothing yet). You consider the others: all the same. With no clear winner this continues until the specified limit on iterations is reached, so it comes down to a random tie-break. One is picked.
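
A sketch of that selection step, with the iteration limit (20 here) and the exact tie-break rule as assumptions of mine:

```python
# Keep considering actions until the iteration limit; stop early only if one
# clearly beats the rest, otherwise break the tie at random.

import random

MAX_ITERATIONS = 20


def pick_action(scores):
    actions = list(scores)
    for i in range(MAX_ITERATIONS):
        action = actions[i % len(actions)]                    # consider each in turn
        others = [s for a, s in scores.items() if a != action]
        if scores[action] > max(others):                      # a clear winner
            return action
    top = max(scores.values())
    return random.choice([a for a in scores if scores[a] == top])  # random tie-break


if __name__ == "__main__":
    scores = {"cry": 0.0, "kick": 0.0, "look around": 0.0}
    print(pick_action(scores))                                # all zero, so random
```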


Say it’s “kick”, so you kick. You might expect this to just make you tired, but you have a lot of energy and are bored, so this is actually quite delightful. This must be recorded as a memory: an association between having chosen to kick and kicking having had a delightful effect. This memory vector goes back into the next step.
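
Recording the association could be as simple as the sketch below; the Memory fields and the +0.5 effect value are illustrative, and the score function shows how the memory feeds the next selection pass.

```python
# Recording the association: what was chosen, the context it was chosen in,
# and the effect it had. The fields and the +0.5 value are illustrative.

from dataclasses import dataclass


@dataclass
class Memory:
    action: str
    context: dict     # satisfaction at the time of the choice
    effect: float     # how delightful (or not) the outcome was


memories = []

# Bored and full of energy, we kicked, and it was delightful.
memories.append(Memory(action="kick",
                       context={"energy": 0.9, "boredom": 0.8},
                       effect=+0.5))


def score(action):
    """Next pass: an action's score is the sum of its remembered effects."""
    return sum(m.effect for m in memories if m.action == action)


print(score("kick"), score("cry"))   # 0.5 0.0 -- kick now stands out
```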


This time it’s no contest: all the other available actions are still at zero and kick is known to be delightful (+0.5). Evolution will pick “kick”, but it will also pick combinations of kicking plus other things; since the others score zero, including them doesn’t hurt. It will also consider a double-kick to be doubly delightful, and it will actually do a bunch of kicks, as many as it can cram in. Part of the fitness function prefers shorter solutions over long ones, so it will settle on several kicks instead of thousands.
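
Here’s a toy version of that trade-off. The per-kick delight (+0.5) comes from the memory above; the length penalty is my own stand-in for the “prefer shorter solutions” part of the fitness function, made quadratic purely so the optimum lands at several kicks rather than thousands. The tiny mutate/evolve loop is likewise just a sketch of an evolutionary search, not a claim about the real one.

```python
# A toy version of the trade-off: delight per action comes from memory, and a
# parsimony term penalizes length. The quadratic penalty and the 0.05
# coefficient are my own choices so the optimum is a few kicks, not thousands.

import random

ACTIONS = ["kick", "cry", "look around"]
DELIGHT = {"kick": 0.5, "cry": 0.0, "look around": 0.0}   # what memory says so far
LENGTH_PENALTY = 0.05                                     # assumed coefficient


def fitness(plan):
    # Memory says each kick adds +0.5; the penalty grows faster than the gain,
    # so piling on kicks helps only up to a point.
    return sum(DELIGHT[a] for a in plan) - LENGTH_PENALTY * len(plan) ** 2


def mutate(plan):
    plan = list(plan)
    if plan and random.random() < 0.5:
        plan.pop(random.randrange(len(plan)))             # drop an action
    else:
        plan.insert(random.randrange(len(plan) + 1),      # or cram another one in
                    random.choice(ACTIONS))
    return plan


def evolve(generations=200, population=30):
    pop = [[random.choice(ACTIONS)] for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: population // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=fitness)


if __name__ == "__main__":
    best = evolve()
    print(best, round(fitness(best), 2))   # typically settles on about five kicks
```

With these numbers an all-kick plan of length n scores 0.5*n - 0.05*n^2, which peaks at five kicks, so the parsimony pressure is what keeps the plan from growing into the thousands.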


Now reality will present a conflict. While kicking once while bored is delightful, doing 10 kicks is very tiring, especially when you’re already delighted. “1 kick was delightful” versus “10 kicks were tiring” is a conflict between entries in our memory vector: a problem to be solved in the next pass.
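
Detecting the conflict could start as crudely as this: two memories about the same action pointing in opposite directions get flagged for the next pass. The memory fields and effect values are assumptions carried over from the earlier sketches.

```python
# Spotting the conflict: two memories about the same action that point in
# opposite directions get flagged as a problem for the next pass.

memories = [
    {"action": "kick", "count": 1,  "effect": +0.5},   # one kick: delightful
    {"action": "kick", "count": 10, "effect": -0.4},   # ten kicks: very tiring
]


def conflicts(memories):
    """Pairs of memories about the same action with opposite-sign effects."""
    found = []
    for i, a in enumerate(memories):
        for b in memories[i + 1:]:
            if a["action"] == b["action"] and a["effect"] * b["effect"] < 0:
                found.append((a, b))
    return found


for pair in conflicts(memories):
    print("conflict to solve on the next pass:", pair)
```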


What tools does a mind need to analyze this conflict? Maybe there isn’t enough data yet. Maybe you’re just confused and are left in a state of confusion for now. The other zero-scored options start to look more attractive than 10 kicks in a row, but a single kick still looks delightful.
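
One possible (and deliberately naive) way the scores could shake out after the conflict, assuming we score by the closest matching memory and admit confusion when the only evidence disagrees:

```python
# A naive post-conflict scoring rule: use an exactly matching memory when one
# exists, return None (confusion) when the only evidence disagrees, and fall
# back to zero when there is no evidence at all. Values are assumed.

memories = [
    {"action": "kick", "count": 1,  "effect": +0.5},
    {"action": "kick", "count": 10, "effect": -0.4},
]


def score(action, count):
    exact = [m["effect"] for m in memories
             if m["action"] == action and m["count"] == count]
    if exact:
        return sum(exact) / len(exact)
    related = [m["effect"] for m in memories if m["action"] == action]
    if related and max(related) > 0 > min(related):
        return None                     # conflicting evidence: confusion
    return 0.0                          # no data yet


print(score("kick", 1))         # 0.5  -- a single kick still looks delightful
print(score("kick", 10))        # -0.4 -- ten kicks now scores below the zero options
print(score("look around", 1))  # 0.0  -- the zero options look attractive by comparison
print(score("kick", 5))         # None -- not enough data: confusion
```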