Most current efforts in the development of believable bots — bots that behave like human players — are based on classical AI techniques.
These techniques rest on relatively old principles, which are nevertheless being progressively refined and cleverly adapted to improve their performance and satisfy new game requirements.
Taking a different perspective, the approach we adopted for the design of our bot (CC-Bot2) runs counter to this trend.
Specifically, we implemented a computational model of the Global Workspace Theory (Baars, 1988), a kind of shared memory space where different agents — that we call specialized processors — can collaborate and compete with each other dynamically (see Figure 1).
We believe that applying new techniques from the field of Machine Consciousness might also provide good results, even in the short term.
In this article, we briefly describe the design of CC-Bot2, the winning Unreal Tournament bot developed by the Conscious-Robots team for the third edition of the 2K BotPrize.
The BotPrize competition is a version of the Turing test adapted to the domain of FPS video games (Hingston, 2009).
The ultimate goal of the contest is to develop a computer game bot able to behave the way humans do. A bot is considered to pass the Turing test (in this particular domain) if it is indistinguishable from human players.
CERA-CRANIUM Cognitive Architecture and CC-Bot2
As a result of our research line on Machine Consciousness, we have developed a new cognitive architecture called CERA-CRANIUM (Arrabales et al., 2009), which has been the basis for the development of CC-Bot2 (CERA-CRANIUM Bot 2). CERA-CRANIUM is a cognitive architecture designed to control autonomous agents, such as physical mobile robots or Unreal Tournament bots, based on a computational model of consciousness. The main inspiration for CERA-CRANIUM is the Global Workspace Theory (Baars, 1988). CC-Bot2 is a Java implementation of the CERA-CRANIUM architecture developed specifically for the 2K BotPrize competition.
CERA-CRANIUM consists of two main components (see Figure 2):
- CERA, a control architecture structured in layers, and
- CRANIUM, a tool for the creation and management of large numbers of parallel processes in shared workspaces.
As we explain below, CERA uses the services provided by CRANIUM to generate highly dynamic and adaptable perception processes orchestrated by a computational model of consciousness.
Basically, in terms of controlling a bot, CERA-CRANIUM provides a mechanism to synchronize and orchestrate a number of different specialized processors that run concurrently. These processors can be of many kinds: usually they are either detectors for given sensory conditions, like the “player approaching detector” processor, or behavior generators, like the “run away from that bully” processor.
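The notion of a specialized processor can be sketched in Java along the following lines. This is an illustrative sketch, not the actual CC-Bot2 code: the interface, the `PlayerApproachingDetector` class, and the distance threshold are all hypothetical, chosen only to show the detector/behavior-generator split described above.

```java
// Hypothetical sketch of the specialized-processor idea: each processor
// either detects a sensory condition or proposes a behavior, and exposes
// an activation level used for competition in the shared workspace.
interface SpecializedProcessor {
    // Estimated contribution to the workspace's current global solution.
    double activation();
    // Run one processing step over the shared workspace contents.
    void step();
}

public class ProcessorsSketch {
    // A detector that fires when another player is closer than a threshold.
    static class PlayerApproachingDetector implements SpecializedProcessor {
        double lastDistance = Double.MAX_VALUE;

        public double activation() {
            // 300 is an arbitrary illustrative distance, not a CC-Bot2 value.
            return lastDistance < 300 ? 1.0 : 0.0;
        }

        public void step() {
            // Would read player positions from the workspace in a real system.
        }
    }

    public static void main(String[] args) {
        PlayerApproachingDetector d = new PlayerApproachingDetector();
        d.lastDistance = 120;               // simulated sensor reading
        System.out.println(d.activation()); // prints 1.0
    }
}
```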
CERA is a layered cognitive architecture designed to implement a flexible control system for autonomous agents.
The current definition of CERA is structured in four layers (see Figure 3): the sensory-motor services layer, the physical layer, the mission-specific layer, and the core layer.
As in classical robot subsumption architectures, higher layers are assigned more abstract meaning; however, the definition of layers in CERA is not directly associated with specific behaviors.
Instead, each layer manages the specialized processors that operate on the sorts of representations handled at that level: the physical layer deals with data representations closely related to raw sensory data, while the mission-specific layer deals with higher-level, task-oriented representations.
The CERA sensory-motor services layer comprises a set of interfacing and communication services that implement the required access to both sensor readings and actuator commands.
These services provide the physical layer with a uniform access interface to the agent’s physical (or simulated) machinery.
In the case of CC-Bot2, the CERA sensory-motor layer is basically an adaptation layer to Pogamut 3.
The CERA physical layer encloses the low-level representations of the agent’s sensors and actuators.
Additionally, according to the nature of the acquired sensory data, the physical layer performs data preparation and preprocessing.
Analogous mechanisms are applied at this level to actuator commands, making sure, for instance, that command parameters stay within safety limits.
The representation we used for sensory data and commands in the CC-Bot2 physical layer is, in most cases, that of Pogamut 3, like “player appeared in my field of view” or “I am being damaged”.
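The kind of safety preprocessing mentioned above can be illustrated with a minimal sketch. The class and method names are hypothetical; the point is simply that a command parameter is clamped to a safe range before it reaches the sensory-motor services layer.

```java
// Minimal sketch (names hypothetical, not the CC-Bot2 API) of physical-layer
// preprocessing: actuator command parameters are clamped to safe limits.
public class CommandClampSketch {
    // Clamp a command parameter into [min, max].
    static double clamp(double value, double min, double max) {
        return Math.max(min, Math.min(max, value));
    }

    public static void main(String[] args) {
        // e.g. a turn request outside the allowed range is limited
        System.out.println(clamp(720.0, -180.0, 180.0)); // prints 180.0
    }
}
```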
The CERA mission-specific layer produces and manages elaborated sensory-motor content related both to the agent’s vital behaviors and to particular missions (in the case of a deathmatch game the mission is relatively clear and simple).
At this stage, single contents acquired and preprocessed by the physical layer are combined into more complex pieces of content with specific meaning relative to the agent’s goals (like “this player is my enemy” or “enemy x is attacking me”).
The mission-specific layer can be modified independently of the other CERA layers according to the assigned tasks and the agent’s needs for functional integrity.
The CERA core layer, the highest control level in CERA, encloses a set of modules that perform higher cognitive functions. The definition of these modules, and the interaction between them, can be adjusted in order to implement a particular cognitive model.
In the case of CC-Bot2, the core layer contains the code for the attention mechanism (many other modules could be added in the future).
The main objective of these core modules is to regulate the way CERA lower layers work (the way specialized processors run and interact with each other).
The physical and mission-specific layers are characterized by their inspiration in cognitive theories of consciousness, in which large sets of parallel processes compete and collaborate in a shared workspace in search of a global solution.
A CERA-controlled agent is thus endowed with two hierarchically arranged workspaces that operate in coordination with the aim of finding two global and interconnected solutions: one related to perception and the other related to action. In short, CERA has to continuously provide an answer to the following questions:
- What should be the next content of the agent’s conscious perception?
- What should be the next action to execute?
Typical agent control architectures focus on the second question while neglecting the first one. Here we argue that a proper mechanism to answer the first question is required in order to successfully answer the second question in a human-like fashion.
In any case, both questions have to be answered taking into account safe-operation criteria and the mission assigned to the agent.
Consequently, CERA is expected to find optimal answers that will eventually lead to human-like behavior.
As explained below, CRANIUM is used for the implementation of the workspaces that fulfill the needs established by the CERA architecture.
CRANIUM provides a subsystem in which CERA can execute many asynchronous but coordinated concurrent processes. In the CC-Bot2 implementation (Java), CRANIUM is based on a task dispatcher that dynamically creates a new execution thread for each active processor. A CRANIUM workspace can be seen as a particular implementation of a pandemonium, where daemons compete with each other for activation. Each of these daemons or specialized processors is designed to perform a specific function on certain types of data.

At any given time, the activation level of a particular processor is calculated based on a heuristic estimation of how much it can contribute to the global solution currently sought in the workspace. The concrete parameters used for this estimation are established by the CERA core layer. As a general rule, CRANIUM workspace operation is constantly modulated by commands sent from the CERA core layer.
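The pandemonium-style competition can be sketched as follows. This is not the actual CRANIUM API: the `Processor` interface, the `coreBias` parameter, and the activation numbers are invented for illustration. It shows only the selection rule — the daemon whose heuristic activation estimate is highest wins, and a modulation value coming from the core layer can change the outcome.

```java
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Illustrative sketch (hypothetical names, not the CRANIUM API) of daemons
// competing for activation in a workspace, modulated by the core layer.
public class WorkspaceSketch {
    interface Processor {
        String name();
        // Heuristic estimate of this processor's contribution to the global
        // solution, modulated by a bias value set by the CERA core layer.
        double activation(double coreBias);
    }

    // The daemon with the highest activation wins access to the workspace.
    static Processor select(List<Processor> daemons, double coreBias) {
        return Collections.max(daemons,
                Comparator.comparingDouble(p -> p.activation(coreBias)));
    }

    public static void main(String[] args) {
        Processor threat = new Processor() {
            public String name() { return "run away from that bully"; }
            public double activation(double b) { return 0.4 + b; }
        };
        Processor explore = new Processor() {
            public String name() { return "explore map"; }
            public double activation(double b) { return 0.6; }
        };
        // With no core bias, exploration wins; a safety bias flips the outcome.
        System.out.println(select(List.of(threat, explore), 0.0).name()); // explore map
        System.out.println(select(List.of(threat, explore), 0.5).name()); // run away from that bully
    }
}
```

Note that the core layer never picks a winner directly; it only reshapes the activation landscape, which matches the article's description of workspace operation being modulated by core-layer commands.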
In CC-Bot2 we use two separate but connected CRANIUM workspaces integrated within the CERA architecture. The lower-level workspace is located in the CERA physical layer, where specialized processors are fed with data coming from the CERA sensor services (Pogamut). The second workspace, located in the CERA mission-specific layer, is populated with higher-level specialized processors that take as input either information coming from the physical layer or information produced in the workspace itself (see Figure 4). The perceptual information flow is organized in packages called single percepts, complex percepts, and mission percepts.
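The bottom-up composition of percepts can be sketched as below. The types and the combination rule are hypothetical stand-ins for the actual CC-Bot2 classes; they illustrate only the three-stage flow in which single percepts are combined into complex percepts, which are in turn interpreted into mission percepts.

```java
import java.util.List;

// Hedged sketch of the bottom-up perceptual flow. Class names and the toy
// interpretation rule are illustrative, not the actual CC-Bot2 types.
public class PerceptFlowSketch {
    record SinglePercept(String event) {}            // e.g. "player appeared"
    record ComplexPercept(String meaning, List<SinglePercept> parts) {}
    record MissionPercept(String goalRelevantFact) {}

    // Physical-layer workspace: combine single percepts into a complex one.
    static ComplexPercept combine(SinglePercept a, SinglePercept b) {
        return new ComplexPercept(a.event() + " + " + b.event(), List.of(a, b));
    }

    // Mission-specific workspace: derive goal-related meaning.
    static MissionPercept interpret(ComplexPercept c) {
        // Toy rule: a sighting co-occurring with damage means an attack.
        if (c.meaning().contains("player appeared")
                && c.meaning().contains("damaged")) {
            return new MissionPercept("enemy is attacking me");
        }
        return new MissionPercept("nothing mission-relevant");
    }

    public static void main(String[] args) {
        ComplexPercept c = combine(new SinglePercept("player appeared"),
                                   new SinglePercept("I am being damaged"));
        System.out.println(interpret(c).goalRelevantFact()); // enemy is attacking me
    }
}
```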
In addition to the bottom-up flow involving perception processes, a top-down flow takes place simultaneously in the same workspaces in order to generate the bot’s actions.
The physical layer and mission-specific layer workspaces include single actions (directly translated into Pogamut commands), simple behaviors, and mission behaviors (see Figure 5).
One of the key differences between the CERA-CRANIUM bottom-up and top-down flows is that while percepts are iteratively composed to obtain more complex and meaningful representations, high-level behaviors are iteratively decomposed until a sequence of atomic actions is obtained.
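The iterative decomposition in the top-down flow can be sketched as a recursive expansion. The behavior names and the fixed decomposition table are invented for illustration; in CC-Bot2 the decomposition would be carried out by competing specialized processors, and the atomic actions would be translated into Pogamut commands.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (hypothetical behaviors) of the top-down flow: a
// mission behavior is recursively decomposed into simple behaviors and
// finally into a sequence of atomic actions.
public class DecompositionSketch {
    static List<String> decompose(String behavior) {
        switch (behavior) {
            case "attack enemy":
                return flatten(List.of("aim at enemy", "fire weapon"));
            case "aim at enemy":
                return List.of("TURN_TO(enemy)");
            case "fire weapon":
                return List.of("SHOOT");
            default:
                return List.of(behavior); // already atomic
        }
    }

    // Decompose each sub-behavior until only atomic actions remain.
    static List<String> flatten(List<String> behaviors) {
        List<String> atomic = new ArrayList<>();
        for (String b : behaviors) {
            atomic.addAll(decompose(b));
        }
        return atomic;
    }

    public static void main(String[] args) {
        System.out.println(decompose("attack enemy")); // [TURN_TO(enemy), SHOOT]
    }
}
```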
The top-down flow could be considered, to some extent, equivalent to behavior trees, in the sense that behaviors are associated with given contexts or scopes.
However, the way CERA-CRANIUM selects the next action is quite different, as the currently active context is periodically updated by the CERA core layer.
At the same time, the active context is calculated based on input from the bottom-up sensory flow.
Having an active context mechanism implies that, out of the set of actions that could potentially be executed, only the one located closest to the active context will be selected for execution. In the next subsection, we describe how the behavior of the agent is generated using this approach.
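Under illustrative assumptions, this closest-to-context selection rule can be sketched as follows. Here a context is reduced to a point and "closer" to Euclidean distance, which is only a stand-in for whatever context representation CC-Bot2 actually uses; the candidate actions are likewise hypothetical.

```java
import java.util.Comparator;
import java.util.List;

// Sketch (hypothetical names and context representation) of active-context
// action selection: among the candidates, the action whose associated
// context lies closest to the active context is the one executed.
public class ContextSelectionSketch {
    record Candidate(String action, double ctxX, double ctxY) {}

    static String select(List<Candidate> candidates, double activeX, double activeY) {
        return candidates.stream()
                .min(Comparator.comparingDouble(
                        c -> Math.hypot(c.ctxX() - activeX, c.ctxY() - activeY)))
                .map(Candidate::action)
                .orElseThrow();
    }

    public static void main(String[] args) {
        List<Candidate> options = List.of(
                new Candidate("pick up health", 0, 0),
                new Candidate("chase enemy", 10, 10));
        // The active context lies near the first action's context, so it wins.
        System.out.println(select(options, 1, 1)); // prints "pick up health"
    }
}
```

The design point is that action selection is indirect: the core layer never ranks actions itself, it only moves the active context, and proximity to that context decides which action runs.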