Behavior Generation in Bots

Having a shared workspace, where sensory and motor flows converge, facilitates the implementation of the multiple feedback loops required for adapted and effective behavior.

The winning simple behavior is continuously confronted with new options generated in the physical layer, providing a mechanism for interrupting behaviors in progress as soon as they are no longer considered the best option.

In general terms, the activation or inhibition of perception and behavior generation processes is modulated by CERA according to the implemented cognitive model of consciousness.

In other words, behaviors are assigned an activation level according to their distance to the active context in terms of the available sensorimotor space. Only the most active action is executed at the end of each “cognitive cycle.”

Distance to a given context is calculated based on sensory criteria like relative location and time.

For instance, suppose we have two actions, Action A (“shoot to the left”) and Action B (“shoot to the right”), and an active context pointing to the left side of the bot (because there is an enemy there). Action A will most likely be selected for execution, and Action B will either be discarded or kept in the execution queue (as long as it is not too old).
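A minimal Python sketch of this distance-based selection (the Action class, the aging threshold, and the activation formula are our own illustrative assumptions, not CC-Bot2 code):

```python
class Action:
    def __init__(self, name, position, created_at):
        self.name = name              # e.g. "shoot to the left"
        self.position = position      # (x, y) location the action targets
        self.created_at = created_at  # creation time, used for aging

def activation(action, context_position, now, max_age=2.0):
    """Activation level: higher for actions closer to the active context;
    zero for actions that are too old (they drop out of the queue)."""
    if now - action.created_at > max_age:
        return 0.0
    dx = action.position[0] - context_position[0]
    dy = action.position[1] - context_position[1]
    distance = (dx * dx + dy * dy) ** 0.5
    return 1.0 / (1.0 + distance)

def select_action(queue, context_position, now):
    """Pick the most active action at the end of a cognitive cycle."""
    live = [a for a in queue if activation(a, context_position, now) > 0.0]
    if not live:
        return None
    return max(live, key=lambda a: activation(a, context_position, now))

# Active context points to the left of the bot (an enemy is there):
now = 0.0
queue = [Action("shoot to the left", (-5.0, 0.0), now),
         Action("shoot to the right", (5.0, 0.0), now)]
best = select_action(queue, (-4.0, 0.0), now)
print(best.name)  # → shoot to the left
```

With the context on the left, the left-shooting action wins; if the cycle ran much later, both actions would have aged out and nothing would be selected.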

Figure 6 shows a schematic representation of typical feedback loops produced in the CERA architecture.

These loops are closed when the consequences of actions are perceived by the bot, triggering adaptive responses at different levels.

Figure 6. Different feedback loops produced in the CERA-CRANIUM architecture.

Curve (a) in Figure 6 represents the feedback loop produced when an instinctive reflex is triggered. Figure 6 curve (b) corresponds to a situation in which a mission-specific behavior is being performed unconsciously.

Finally, curve (c) symbolizes the higher-level control loop, in which a task is being performed consciously.

These three types of control loops are not mutually exclusive; in fact, the same percepts will typically contribute to simultaneous loops taking place at different levels.

CRANIUM workspaces are not passive short-term memory mechanisms. Instead, their operation is affected by a number of workspace parameters that influence the way the pandemonium works. These parameters are set by commands sent to physical and mission-specific layers from the CERA core layer. In other words, while CRANIUM provides the mechanism for specialized functions to be combined and thus generate meaningful representations, CERA establishes a hierarchical structure and modulates the competition and collaboration processes according to the model of consciousness specified in the core layer. This mechanism closes the feedback loop between the core layer and the rest of the architecture: core layer input (perception) is shaped by its own output (workspace modulation), which in turn determines what is perceived.
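As a rough illustration of this modulation loop, the following Python sketch shows a workspace whose internal competition is reshaped by parameters set from outside; the parameter names (`context`, `context_gain`) and the scoring rule are hypothetical, not taken from the actual CERA core layer:

```python
class Workspace:
    def __init__(self):
        self.percepts = []
        # Modulation parameters, adjustable via core-layer commands:
        self.context = (0.0, 0.0)   # current focus of attention
        self.context_gain = 1.0     # how strongly context biases competition

    def set_parameters(self, context=None, context_gain=None):
        """Command sent from the core layer to reshape the competition."""
        if context is not None:
            self.context = context
        if context_gain is not None:
            self.context_gain = context_gain

    def submit(self, name, position, salience):
        self.percepts.append((name, position, salience))

    def winner(self):
        """Competition: raw salience biased by proximity to the context."""
        def score(p):
            name, pos, salience = p
            d = ((pos[0] - self.context[0]) ** 2 +
                 (pos[1] - self.context[1]) ** 2) ** 0.5
            return salience + self.context_gain / (1.0 + d)
        return max(self.percepts, key=score)[0] if self.percepts else None

ws = Workspace()
ws.submit("footsteps behind", (0.0, -3.0), salience=0.4)
ws.submit("enemy to the left", (-2.0, 0.0), salience=0.5)
# Core layer shifts the context to the left and raises its influence:
ws.set_parameters(context=(-2.0, 0.0), context_gain=2.0)
print(ws.winner())  # → enemy to the left
```

The point of the sketch is the loop: what wins the competition (perception) depends on parameters the core layer set in response to earlier winners.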

The CC-Bot2 Implementation

The following table briefly describes some of the main specialized processors implemented in CC-Bot2 (note that several processors performing the very same task, but using different techniques, might coexist in the same workspace).

Specialized Processor | Layer    | Task
AttackDetector        | Physical | Detects conditions compatible with enemy attacks (health level decreasing, enemy fire, etc.).
AvoidObstacle         | Physical | Generates a simple obstacle-avoidance behavior.
BackupReflex          | Physical | Generates a simple backup movement in response to an unexpected collision.
ChasePlayer           | Mission  | Generates a complex player-chasing behavior.
EnemyDetector         | Physical | Detects the presence of an enemy based on given conditions, such as a previously detected attack and the presence of other players using their weapons.
GazeGenerator         | Physical | Generates a simple gaze movement directed towards the focus of attention.
JumpObstacle          | Physical | Generates a simple jump movement in order to avoid an obstacle.
KeepEnemiesFar        | Mission  | Generates a complex run-away movement in order to maximize the distance to detected enemies.
LocationReached       | Physical | Detects whether the bot has reached the spatial position marked as the goal location.
MoveLooking           | Physical | Generates a complex movement combining gaze and locomotion.
MoveToPoint           | Physical | Generates a simple movement towards a given location.
ObstacleDetector      | Physical | Detects the presence of an obstacle (which might prevent the bot from following its path).
RandomNavigation      | Physical | Generates a complex random wandering movement.
RunAwayFromPlayers    | Mission  | Generates a complex movement to run away from certain players.
SelectBestWeapon      | Mission  | Selects the best weapon currently available.
SelectEnemyToShoot    | Mission  | Decides which enemy is the best target to attack.

In our current implementation, specialized processors are created programmatically (see the sample code below) and dynamically assigned to their corresponding CERA layer. It is our intention to create a more elegant mechanism for the programmer to define the processor layout (a configuration text file, or even a GUI).

// ** ATTACK DETECTOR ** Generates a BeingDamaged percept
// every time the health level decreases
_CeraPhysical.RegisterProcessor(new AttackDetector());

// ** OBSTACLE DETECTOR ** Generates a simple Obstacle percept
// if there is any obstacle in the direction of movement
_CeraPhysical.RegisterProcessor(new ObstacleDetector());

// ** ENEMY DETECTOR ** Generates an Enemy Attacking
// complex percept every time the bot is being damaged
// and possible culprit(s) are detected.
_CeraMission.RegisterProcessor(new EnemyDetector());
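For illustration only, a specialized processor such as AttackDetector might be structured along the following lines. This Python sketch invents the base-class API (`process`, `post`); it is not the actual C# implementation:

```python
class Processor:
    """Minimal base: each processor reads sensed data and may post percepts."""
    def __init__(self):
        self.outbox = []

    def post(self, percept):
        self.outbox.append(percept)

class AttackDetector(Processor):
    """Posts a BeingDamaged percept whenever the health level decreases,
    i.e. conditions compatible with an enemy attack."""
    def __init__(self):
        super().__init__()
        self.last_health = None

    def process(self, sensed):
        health = sensed.get("health")
        if self.last_health is not None and health < self.last_health:
            # Health dropped: the bot is probably under attack.
            self.post({"type": "BeingDamaged",
                       "amount": self.last_health - health})
        self.last_health = health

detector = AttackDetector()
detector.process({"health": 100})
detector.process({"health": 85})   # damage taken → percept posted
print(detector.outbox[0]["type"])  # → BeingDamaged
```

A workspace would collect each processor's posted percepts and feed them back into the competition described earlier.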

Conscious-Robots Bot in Action

The following is an excerpt of a typical flow of percepts that ultimately generates the bot’s behavior (see Figure 7):

  1. The EnemyDetector processor detects a new enemy and creates a new “enemy detected” percept.
  2. The “enemy detected” percept is in turn received by the SelectEnemyToShoot processor, which is in charge of selecting the enemy to shoot. When an enemy is selected, the corresponding fire action is generated.
  3. Two processors receive the fire action: one in charge of aiming at the enemy and shooting, and another that creates new movement actions to avoid enemy fire.
  4. As the new movement actions have higher priority than actions triggered by other processors, like the RandomMove processor, they are more likely to be executed.
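The four steps above can be sketched roughly as follows; the wiring and the priority values are illustrative assumptions, not taken from the CC-Bot2 source:

```python
action_queue = []

def enemy_detector(sensed):
    # Step 1: detect a new enemy, emit an "enemy detected" percept.
    if sensed.get("enemy_visible"):
        return {"type": "enemy detected", "who": sensed["enemy_visible"]}
    return None

def select_enemy_to_shoot(percept):
    # Step 2: pick the target and generate the corresponding fire action.
    return {"type": "fire", "target": percept["who"], "priority": 5}

def aim_and_shoot(fire_action):
    # Step 3a: aim at the enemy and shoot.
    action_queue.append({"type": "shoot", "target": fire_action["target"],
                         "priority": fire_action["priority"]})

def evasive_movement(fire_action):
    # Step 3b: movement actions to avoid enemy fire.
    action_queue.append({"type": "dodge", "priority": 4})

def random_move():
    # Low-priority wandering, as produced by a random-movement processor.
    action_queue.append({"type": "wander", "priority": 1})

# One cognitive cycle:
random_move()
percept = enemy_detector({"enemy_visible": "PlayerX"})
if percept:
    fire = select_enemy_to_shoot(percept)
    aim_and_shoot(fire)
    evasive_movement(fire)

# Step 4: the highest-priority action wins the cycle.
best = max(action_queue, key=lambda a: a["priority"])
print(best["type"])  # → shoot
```

Here the combat-related actions outrank the wandering action, so the shot is what the bot actually executes this cycle.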

This is a very simple example of how the bot works.

However, it is usual to have much more complex scenarios in which several enemies are attacking the bot simultaneously, and the selected target might be any of them. In these cases, the attention mechanism plays a key role.

CERA-CRANIUM implements an attention mechanism based on active contexts. Percepts that are closer to the currently active context are more likely to be selected and further processed.

This helps maintain more coherent sequences of actions.

Future Work

CC-Bot2 is in fact a partial implementation of the CERA-CRANIUM model. Our Machine Consciousness model includes much more cognitive functionality that has not yet been implemented.

It is our aim to enhance the current implementation with new features like a model of emotions, episodic memory, different types of learning mechanisms, and even a model of the self.

After much hard work, we expect CC-Bot3 to be a much more human-like bot. We also plan to apply the same design to other games, like TORCS or Mario.

Although CC-Bot2 could not completely pass the Turing test, it achieved the highest humanness rating (31.8%). To this day, Turing-test-level intelligence has never been achieved by a machine.

There is still a long way to go before we can build artificial agents clever enough to parallel human behavior. Nevertheless, we believe this is a very promising line of research towards that ambitious goal.


We wish to thank Alex J. Champandard for his helpful suggestions and comments on the drafts of this article.