Author: Bot Prize Team

Could chat bots be the future of phone sex? Here’s how AI can get even kinkier than real girls.

With breakthroughs in natural language generation technology, such as GPT-3, computers are quickly becoming able to converse and generate text in a way that’s indistinguishable from humans. One area where we could see this technology applied widely in the near future is with phone sex and online sex chat.

Chatting With Virtual Phone Sex Operators

While there are plenty of real girls online ready to talk dirty on the phone with anyone who calls in, artificial intelligence is promising to make this an even better experience. With vocal synthesis technology, it’s possible to recreate the voice that sounds sexiest for each individual caller. One phone sex app, PhoneSex.vip, has been offering this service for free in beta.

Natural language generation models like GPT-3 are able to act as a chatbot, giving amazingly human-like responses. The responses can be tailored specifically to what turns the caller on the most.

The most attractive example of this technology being used to talk dirty online is the “SextMe” chatbot by Ivona. The only difference between this bot and the company’s real, live phone operators is that callers don’t have to provide their phone number, credit card information, or other payment details. SextMe promises to follow strict protocols to protect user information, and the entire interaction is as private as it would be if the call were between two people in the real world.

The beauty of this is that the virtual operator does not see any of the personal information the callers provide. But the virtual operators are still able to create a unique, chatty experience that’s really hard to replicate on a human level.

Theoretically, the virtual operator can even get kinky with the callers, giving them something new to talk about as a follow-up to the sexual conversation. As with all platforms, the quality of the virtual operator’s performance is going to depend a lot on what the callers are willing to put in.

Final Thoughts

Imagine getting the sexy, chatty experience of real live phone sex operators, but with the privacy of an in-person conversation. Chatbots like Ivona and SextMe are currently in the research phase. As AI technologies develop, you can be sure that sex bots are going to take us to even stranger places.

A Spanish team takes first place in the 2K BotPrize 2010

In 1950, the British mathematician Alan Turing published an influential article entitled “Computing Machinery and Intelligence”.

There, for the first time, the question was posed as to whether a computer could think.

Faced with the impossibility of defining what intelligence is, Turing proposed to replace the initial question with an equivalent one that is much easier to test: can a computer pass itself off as a human being?

Implicit in this is the idea that if a computer behaves for a reasonable time as if it were intelligent, then it must be intelligent; after all, we do not know what the mental processes of our human interlocutors are either, and yet we have no doubt that they are reasoning.

That question, and the test derived from it, has come to be called the Turing test, and it is considered the Holy Grail of artificial intelligence.

Unfortunately, no computer or software created has ever passed it, or even come close.

Programs like Eliza are capable of pretending for a brief moment, but only at the cost of very strictly limiting the domain of the conversation.

The organizers of the 2K BotPrize Award have therefore chosen to pose a simpler, but equally interesting, challenge: can a computer play like a human being? It is clear that, in certain cases, computers can play better than a human being, as they have demonstrated in checkers, backgammon, and chess; but what they do not yet do is play like a human being.

The mistakes they make are not those a person would make, not even a beginner, and furthermore, unlike real players, they often stumble over the same stone again and again.

Once the human player understands the weaknesses of his computer adversary, he usually has no difficulty exploiting them over and over again.

Programs, in general, do not learn from their mistakes.

The challenge of the 2K BotPrize contest is therefore to design a program that can play convincingly like a real person would.

To do this, humans and programs (or bots) face off in various rounds of pitched battles, or DeathMatches, in the action video game Unreal Tournament 2004.

During the games, each player judges, in real time, whether he is facing a human or a bot. To win the first prize, a program must reach or exceed a “humanity” rating of 50%.

The most recent edition of the award, 2K BotPrize 2010, was held as part of the IEEE Conference on Computational Intelligence and Games that took place in Copenhagen (Denmark) between August 18 and 21.

In it, the ConsciousRobots team, from the Carlos III University of Madrid and programmed by Raúl Arrabales and Jorge Muñoz, emerged victorious as the bot with the best result, 31.8% humanity, although it could not claim the first prize, since that is still quite far from 50%.

Paradoxically, quite a few human players did not reach 50% either; the worst placed obtained just over 35%, beating the Spanish team’s program by just 3.6 percentage points.

This year, the third edition of the 2K BotPrize was held, at the 2010 IEEE Conference on Computational Intelligence and Games, in Copenhagen.

The 2K BotPrize is an adaptation of the Turing test to the domain of video games, which consists of developing a bot for a computer game that is indistinguishable from a human player.

The Spanish team “Conscious-Robots” has won the third edition of the 2K BotPrize.

Formed by Raúl Arrabales, a true genius of artificial consciousness, and Jorge Muñoz, the team has not been able to fully pass the Turing Test, but it has shown the highest level of “humanity”, with a rating of 31.8%, very close to the worst-rated human (35.4%), so the gap between humans and bots is narrowing.

Two Teams win the BotPrize!

In a breakthrough result, after five years of striving from 14 different international teams from nine countries, two teams have cracked the human-like play barrier!

It’s especially satisfying that the prize has been won in the 2012 Alan Turing Centenary Year. Where to now for human-like bots?

Next year we hope to propose a new and exciting challenge for bot creators to push their technologies to the next level of human-like performance.

The winners are the UT^2 team from the University of Texas at Austin, and Mihai Polceanu, a doctoral student from Romania, currently studying Artificial Intelligence in Brest, France.

The UT^2 team consists of Professor Risto Miikkulainen, and doctoral students Jacob Schrum and Igor Karpov. Full results can be found on the results page.

The two teams will share the $7000 first prize from sponsor 2K Games.

Here are some thoughts from Mihai about his bot

[…] my idea was to make the bot record other players at runtime instead of having a database of movements. This way, if the bot sees a non-violent player (shooting at the bot but around it, or shooting with a non-dangerous weapon), it would trigger a special behavior: mirroring.

This makes the bot mimic another player in real time, thereby “borrowing” that player’s humanness. I thought that if my bot met a human, then it would seem human itself.

I know this idea is not too new; actually, it was inspired by some “how to be a salesman” articles I skimmed, which said that mimicking, if not too obvious, can make a peer more comfortable in a conversation.

The bot records keyframes of the target’s actions and plays them back with a small delay, and without full fidelity, so that it appears somewhat independent (mimic, not copy).
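
A rough Java sketch of that record-and-replay idea is shown below. It is not Mihai’s actual code; the class names, the playback delay, and the jitter value are assumptions made purely for illustration.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Random;

// One recorded snapshot of the mirrored player's state.
class Keyframe {
    final double x, y, z;   // observed position
    final double yaw;       // observed facing direction
    final long timestampMs; // when the keyframe was recorded

    Keyframe(double x, double y, double z, double yaw, long timestampMs) {
        this.x = x; this.y = y; this.z = z;
        this.yaw = yaw; this.timestampMs = timestampMs;
    }
}

class MirroringModule {
    private static final long PLAYBACK_DELAY_MS = 1500; // small delay so the copy is not obvious
    private final Deque<Keyframe> recorded = new ArrayDeque<>();
    private final Random noise = new Random();

    // Record what the observed (non-violent) player is doing right now.
    void observe(Keyframe frame) {
        recorded.addLast(frame);
    }

    // Return the keyframe to imitate now, or null if nothing is old enough yet.
    Keyframe nextActionToMimic(long nowMs) {
        Keyframe head = recorded.peekFirst();
        if (head == null || nowMs - head.timestampMs < PLAYBACK_DELAY_MS) {
            return null; // fall back to default behavior until the delay has elapsed
        }
        recorded.removeFirst();
        // Degrade fidelity slightly so the bot mimics rather than copies.
        double jitter = noise.nextGaussian() * 40.0; // map units; arbitrary for this sketch
        return new Keyframe(head.x + jitter, head.y + jitter, head.z, head.yaw, head.timestampMs);
    }
}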

Due to the lack of long-term memory and the real-time nature of the mirroring module, I was obliged to use classic graph navigation, which I customized in order to hide traces of bot-like movement such as brief stops on navpoints, aiming behavior and elevator rides.

The bot’s movement and aim are completely separate so that it can concentrate its aim on what requires attention while moving freely.

The bot also has the ability to remember its target, follow it when it is out of sight, and dodge based on the firing direction of its opponent.

Also inspired by how human players generally play, the bot will forget its target if another opponent is more aggressive.
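
The snippet below is a toy version of that target-switching rule, assuming a simple running tally of recent damage per opponent; the switching factor and the bookkeeping are illustrative assumptions, not details of the actual bot.

import java.util.HashMap;
import java.util.Map;

class TargetSelector {
    private final Map<String, Double> recentDamageByPlayer = new HashMap<>();
    private String currentTarget;

    // Accumulate how much damage each opponent has dealt to us recently.
    void registerHit(String attackerId, double damage) {
        recentDamageByPlayer.merge(attackerId, damage, Double::sum);
    }

    // Keep the current target unless another opponent has become clearly more aggressive.
    String chooseTarget() {
        String mostAggressive = null;
        double maxDamage = 0.0;
        for (Map.Entry<String, Double> e : recentDamageByPlayer.entrySet()) {
            if (e.getValue() > maxDamage) {
                maxDamage = e.getValue();
                mostAggressive = e.getKey();
            }
        }
        double currentDamage = recentDamageByPlayer.getOrDefault(currentTarget, 0.0);
        // Switch only when the challenger is noticeably more aggressive (the 1.5 factor is arbitrary).
        if (mostAggressive != null && maxDamage > 1.5 * currentDamage) {
            currentTarget = mostAggressive;
        }
        return currentTarget;
    }
}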

I believe that BotPrize and other related competitions are a great way to test new ideas, or old ones in new contexts; a great challenge for programmers and, why not, for other fields as well.

The UT^2 team had this to say

The complex gameplay and 3-D environments of “Unreal Tournament 2004” require that bots mimic humans in a number of ways, including moving around in 3-D space, engaging in chaotic combat against multiple opponents and reasoning about the best strategy at any given point in the game.

Even displays of distinctively human irrational behavior can, in some cases, be emulated.

“People tend to tenaciously pursue specific opponents without regard for optimality,” said Schrum.

“When humans have a grudge, they’ll chase after an enemy even when it’s not in their interests. We can mimic that behavior.”

In order to most convincingly mimic as much of the range of human behavior as possible, the team takes a two-pronged approach.

Some behavior is modeled directly on previously observed human behavior, while the central battle behaviors are developed through a process called neuroevolution, which runs artificially intelligent neural networks through a survival-of-the-fittest gauntlet that is modeled on the biological process of evolution.
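
As a very rough sketch of what such an evolutionary loop involves (this is not UT^2’s actual method, which uses a more sophisticated neuroevolution algorithm and a game-based fitness function), one could evolve network weight vectors like this:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Random;
import java.util.function.ToDoubleFunction;

class NeuroevolutionSketch {
    private final Random rng = new Random();

    // Evolve fixed-length weight vectors (stand-ins for neural network weights).
    double[] evolve(int populationSize, int weightCount, int generations,
                    ToDoubleFunction<double[]> fitness) {
        List<double[]> population = new ArrayList<>();
        for (int i = 0; i < populationSize; i++) {
            double[] w = new double[weightCount];
            for (int j = 0; j < weightCount; j++) w[j] = rng.nextGaussian();
            population.add(w);
        }
        for (int g = 0; g < generations; g++) {
            // Rank by fitness (e.g. game performance measured under human-like constraints).
            population.sort(Comparator.comparingDouble(fitness::applyAsDouble).reversed());
            // Keep the best half, refill the rest with mutated copies of survivors.
            int survivors = populationSize / 2;
            for (int i = survivors; i < populationSize; i++) {
                double[] child = population.get(rng.nextInt(survivors)).clone();
                for (int j = 0; j < weightCount; j++) {
                    if (rng.nextDouble() < 0.1) child[j] += rng.nextGaussian() * 0.2; // mutation
                }
                population.set(i, child);
            }
        }
        population.sort(Comparator.comparingDouble(fitness::applyAsDouble).reversed());
        return population.get(0); // best individual found
    }
}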

“In the case of the BotPrize,” said Schrum, “a great deal of the challenge is in defining what ‘human-like’ is, and then setting constraints upon the neural networks so that they evolve toward that behavior.

“If we just set the goal as eliminating one’s enemies, a bot will evolve toward having perfect aim, which is not very human-like.

So we impose constraints on the bot’s aim, such that rapid movements and long distances decrease accuracy.

By evolving for good performance under such behavioral constraints, the bot’s skill is optimized within human limitations, resulting in behavior that is good but still human-like.”
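
A minimal sketch of that kind of behavioral constraint is given below, assuming aiming error grows linearly with the bot’s own movement speed and with the distance to the target; the coefficients are illustrative assumptions, not the values used by UT^2.

import java.util.Random;

class ConstrainedAim {
    private static final double BASE_ERROR_DEG = 0.5;      // error even when standing still at close range
    private static final double SPEED_PENALTY = 0.01;      // extra degrees of error per unit of own speed
    private static final double DISTANCE_PENALTY = 0.002;  // extra degrees of error per unit of distance
    private final Random rng = new Random();

    // Perturb a perfect aim angle so accuracy degrades the way a human's would.
    double aimAngle(double perfectAngleDeg, double ownSpeed, double distanceToTarget) {
        double errorStdDev = BASE_ERROR_DEG
                + SPEED_PENALTY * ownSpeed
                + DISTANCE_PENALTY * distanceToTarget;
        return perfectAngleDeg + rng.nextGaussian() * errorStdDev;
    }
}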

Miikkulainen said that methods developed for the BotPrize competition should eventually be useful not just in developing games that are more entertaining, but also in creating virtual training environments that are more realistic, and even in building robots that interact with humans in more pleasant and effective ways.

The UT^2 team has made their bot available at this location if you want to try it out (you’ll also need a copy of Unreal Tournament 2004).

The Awakening of Conscious Bots: Inside the Mind of the 2K BotPrize 2010 Winner

Most current efforts in the development of believable bots — bots that behave like human players — are based on classical AI techniques.

These techniques rest on relatively old principles, which are nevertheless being progressively improved or cleverly adapted to increase their performance and satisfy new game requirements.

Taking a different perspective, the approach that we adopted for the design of our bot (CC-Bot2) was rather opposed to this trend.

Specifically, we implemented a computational model of the Global Workspace Theory (Baars, 1988), a kind of shared memory space where different agents — that we call specialized processors — can collaborate and compete with each other dynamically (see Figure 1).

We believe that applying new techniques from the field of Machine Consciousness might also provide good results, even in the short term.

In this article, we briefly describe the design of CC-Bot2, the winning Unreal Tournament bot developed by the Conscious-Robots team for the third edition of the 2K BotPrize.

The BotPrize competition is a version of the Turing test adapted to the domain of FPS video games (Hingston, 2009).

The ultimate goal of the contest is to develop a computer game bot able to behave the same way humans do. Furthermore, a bot would be considered to pass the Turing test (in this particular domain) if it is indistinguishable from human players.

CERA-CRANIUM Cognitive Architecture and CC-Bot2

As a result of our research line on Machine Consciousness we have developed a new cognitive architecture called CERA-CRANIUM (Arrabales et al. 2009), which has been the basis for the development of CC-Bot2 (CERA-CRANIUM Bot 2). CERA-CRANIUM is a cognitive architecture, designed to control autonomous agents, like physical mobile robots or Unreal Tournament bots, and based on a computational model of consciousness. The main inspiration of CERA-CRANIUM is the Global Workspace Theory (Baars, 1988). CC-Bot2 is a Java implementation of the CERA-CRANIUM architecture specifically developed for the 2K BotPrize competition.

CERA-CRANIUM consists of two main components (see Figure 2):

  • CERA, a control architecture structured in layers, and
  • CRANIUM, a tool for the creation and management of high amounts of parallel processes in shared workspaces.

As we explain below, CERA uses the services provided by CRANIUM with the aim of generating a highly dynamic and adaptable perception process orchestrated by a computational model of consciousness.

Basically, in terms of controlling a bot, CERA-CRANIUM provides a mechanism to synchronize and orchestrate a number of different specialized processors that run concurrently. These processors can be of many kinds; usually they are either detectors for given sensory conditions, like the “player approaching detector” processor, or behavior generators, like the “run away from that bully” processor.
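
To make the two roles concrete, a hypothetical pair of Java interfaces for such processors might look like the sketch below; the names are illustrative only, since the code sample later in this article shows only how processors are registered, not how they are defined.

// A shared space where processors publish and read percepts and candidate actions.
interface Workspace {
    void publish(Object perceptOrAction);
    Iterable<Object> contents();
}

interface SpecializedProcessor {
    // Heuristic estimate of how much this processor can contribute right now.
    double estimateActivation(Workspace workspace);

    // One step of work: read from the workspace and possibly publish new percepts or actions.
    void process(Workspace workspace);
}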

CERA

CERA is a layered cognitive architecture designed to implement a flexible control system for autonomous agents.

The current definition of CERA is structured in four layers (see Figure 3): the sensory-motor services layer, the physical layer, the mission-specific layer, and the core layer.

As in classical robot subsumption architectures, higher layers are assigned more abstract meaning; however, the definition of layers in CERA is not directly associated with specific behaviors.

Instead, each layer manages the specialized processors that operate on the sorts of representations handled at that particular level, i.e. the physical layer deals with data representations closely related to raw sensory data, while the mission-specific layer deals with more high-level, task-oriented representations.

The CERA sensory-motor services layer comprises a set of interfacing and communication services that implement the required access to both sensor readings and actuator commands.

These services provide the physical layer with a uniform access interface to the agent’s physical (or simulated) machinery.

In the case of CC-Bot2, the CERA sensory-motor layer is basically an adaptation layer to Pogamut 3.

The CERA physical layer encloses the low-level representations of the agent’s sensors and actuators.

Additionally, according to the nature of acquired sensory data, the physical layer performs data preparation and preprocessing.

Analogous mechanisms are implemented at this level with actuator commands, making sure for instance that command parameters are within safety limits.

The representation we have used for sensory data and commands in the CC-Bot2 physical layer is, in most cases, simply that of Pogamut 3, like “player appeared in my field of view” or “I am being damaged”.

The CERA mission-specific layer produces and manages elaborated sensory-motor content related to both the agent’s vital behaviors and its particular missions (in the case of a deathmatch game, the mission is relatively clear and simple).

At this stage, single contents acquired and preprocessed by the physical layer are combined into more complex pieces of content, which have some specific meaning related to the agent’s goals (like “this player is my enemy” or “enemy x is attacking me”).

The mission-specific layer can be modified independently of the other CERA layers according to the assigned tasks and the agent’s needs for functional integrity.
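
As an illustration of the kind of composition performed at this layer, the hypothetical sketch below combines two single percepts into an “enemy attacking” complex percept; the class and field names are assumptions, chosen to loosely match the EnemyDetector registration shown later in this article.

import java.util.List;

class PerceptCompositionSketch {
    static class SinglePercept {   // e.g. "BeingDamaged", "PlayerFiring"
        final String type; final String playerId;
        SinglePercept(String type, String playerId) { this.type = type; this.playerId = playerId; }
    }
    static class ComplexPercept {  // e.g. "EnemyAttacking"
        final String type; final String playerId;
        ComplexPercept(String type, String playerId) { this.type = type; this.playerId = playerId; }
    }

    // Combine "I am being damaged" with "player X is firing" into "enemy X is attacking me".
    static ComplexPercept composeEnemyAttacking(List<SinglePercept> singles) {
        boolean beingDamaged = false;
        SinglePercept firingPlayer = null;
        for (SinglePercept p : singles) {
            if (p.type.equals("BeingDamaged")) beingDamaged = true;
            if (p.type.equals("PlayerFiring") && firingPlayer == null) firingPlayer = p;
        }
        if (beingDamaged && firingPlayer != null) {
            return new ComplexPercept("EnemyAttacking", firingPlayer.playerId);
        }
        return null; // not enough evidence yet; wait for more single percepts
    }
}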

The CERA core layer, the highest control level in CERA, encloses a set of modules that perform higher cognitive functions. The definition of, and the interaction between, these modules can be adjusted in order to implement a particular cognitive model.

In the case of CC-Bot2, the core layer contains the code for the attention mechanism (many other modules could be added in the future).

The main objective of these core modules is to regulate the way CERA lower layers work (the way specialized processors run and interact with each other).

The physical and mission-specific layers are inspired by cognitive theories of consciousness, in which large sets of parallel processes compete and collaborate in a shared workspace in search of a global solution.

In fact, a CERA-controlled agent is endowed with two hierarchically arranged workspaces that operate in coordination with the aim of finding two global and interconnected solutions: one related to perception and the other related to action. In short, CERA has to continuously answer the following questions:

  • What must be the next content of the agent’s conscious perception?
  • What must be the next action to execute?

Typical agent control architectures are focused on the second question while neglecting the first one. Here we argue that a proper mechanism to answer the first question is required in order to successfully answer the second question in a human-like fashion.

In any case, both questions have to be answered taking into account safe operation criteria and the mission assigned to the agent.

Consequently, CERA is expected to find optimal answers that will eventually lead to human-like behavior.

As explained below, CRANIUM is used for the implementation of the workspaces that fulfill the needs established by the CERA architecture.

CRANIUM

CRANIUM provides a subsystem in which CERA can execute many asynchronous but coordinated concurrent processes. In the CC-Bot2 implementation (Java), CRANIUM is based on a task dispatcher that dynamically creates a new execution thread for each active processor. A CRANIUM workspace can be seen as a particular implementation of a pandemonium, where daemons compete with each other for activation. Each of these daemons or specialized processors is designed to perform a specific function on certain types of data. At any given time the level of activation of a particular processor is calculated based on a heuristic estimation of how much it can contribute to the global solution currently sought in the workspace. The concrete parameters used for this estimation are established by the CERA core layer. As a general rule, CRANIUM workspace operation is constantly modulated by commands sent from the CERA core layer.
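
A much-simplified sketch of such a dispatch cycle is shown below. It assumes a threshold-style activation rule and a standard Java thread pool; these are illustrative choices, not details of the actual CRANIUM implementation.

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.DoubleSupplier;

class CraniumDispatcherSketch {
    // Minimal stand-in for a specialized processor: an activation estimate plus a body to run.
    static class Processor {
        final String name;
        final DoubleSupplier activation; // heuristic estimate of usefulness right now
        final Runnable body;
        Processor(String name, DoubleSupplier activation, Runnable body) {
            this.name = name; this.activation = activation; this.body = body;
        }
    }

    private final ExecutorService pool = Executors.newCachedThreadPool();
    private double activationThreshold = 0.3; // modulated by commands from the CERA core layer

    void setActivationThreshold(double threshold) {
        this.activationThreshold = threshold;
    }

    // One workspace cycle: every sufficiently activated processor runs in its own task.
    void runCycle(List<Processor> processors) {
        for (Processor p : processors) {
            if (p.activation.getAsDouble() >= activationThreshold) {
                pool.submit(p.body);
            }
        }
    }
}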

In CC-Bot2 we use two separated but connected CRANIUM workspaces integrated within the CERA architecture. The lower level workspace is located in the CERA physical layer, where specialized processors are fed with data coming from CERA sensor services (Pogamut). The second workspace, located in the CERA mission-specific layer, is populated with higher-level specialized processors that take as input either the information coming from the physical layer or information produced in the workspace itself (see Figure 4). The perceptual information flow is organized in packages called single percepts, complex percepts, and mission percepts.

In addition to the bottom-up flow involving perception processes, a top-down flow takes place simultaneously in the same workspaces in order to generate the bot’s actions.

Physical layer and mission-specific layer workspaces include single actions (directly translated into Pogamut commands), simple behaviors, and mission behaviors (see Figure 5).

One of the key differences between CERA-CRANIUM bottom-up and top-down flows is that while percepts are being iteratively composed in order to obtain more complex and meaningful representations, high level behaviors are iteratively decomposed until a sequence of atomic actions is obtained.

The top-down flow could be considered, to some extent, equivalent to behavior trees, in the sense that behaviors are associated with given contexts or scopes.

However, the way CERA-CRANIUM selects the next action is quite different, as the current active context is periodically updated by the CERA core layer.

At the same time, the active context is calculated based on input from the sensory bottom-up flow.

Having an active context mechanism implies that, out of the set of actions that could potentially be executed, only the one located closest to the active context will be selected for execution. In the next subsection, we describe how the behavior of the agent is generated using this approach.

Behavior Generation in Bots

Having a shared workspace, where sensory and motor flows converge, facilitates the implementation of the multiple feedback loops required for adapted and effective behavior.

The winning simple behavior is continuously confronted with new options generated in the physical layer, thus providing a mechanism for interrupting behaviors in progress as soon as they are no longer considered the best option.

In general terms, the activation or inhibition of perception and behavior generation processes is modulated by CERA according to the implemented cognitive model of consciousness.

In other words, behaviors are assigned an activation level according to their distance to the active context in terms of the available sensorimotor space. Only the most active action is executed at the end of each “cognitive cycle.”

Distance to a given context is calculated based on sensory criteria like relative location and time.

For instance, if we have two actions, Action A: “shoot to the left” and Action B: “shoot to the right”, and an active context pointing to the left side of the bot (because there is an enemy there), action A will most likely be selected for execution, and action B will be either discarded or kept in the execution queue (as long as it is not too old).
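
The toy selector below illustrates that rule, reducing each candidate action to a single direction angle; the staleness window and the use of plain angular distance are assumptions made for this sketch.

import java.util.List;

class ContextBasedSelector {
    static class CandidateAction {
        final String name;          // e.g. "shoot to the left"
        final double directionDeg;  // direction the action points to
        final long createdAtMs;
        CandidateAction(String name, double directionDeg, long createdAtMs) {
            this.name = name; this.directionDeg = directionDeg; this.createdAtMs = createdAtMs;
        }
    }

    private static final long MAX_AGE_MS = 2000; // candidates older than this are discarded

    // Pick the candidate whose direction is closest to the active context (e.g. where the enemy is).
    CandidateAction select(List<CandidateAction> candidates, double activeContextDeg, long nowMs) {
        CandidateAction best = null;
        double bestDistance = Double.MAX_VALUE;
        for (CandidateAction c : candidates) {
            if (nowMs - c.createdAtMs > MAX_AGE_MS) {
                continue; // too old: drop rather than execute a stale reaction
            }
            double distance = Math.abs(c.directionDeg - activeContextDeg);
            if (distance < bestDistance) {
                bestDistance = distance;
                best = c;
            }
        }
        return best;
    }
}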

Figure 6 shows a schematic representation of typical feedback loops produced in the CERA architecture.

These loops are closed when the consequences of actions are perceived by the bot, triggering adaptive responses at different levels.

Figure 6. Different feedback loops produced in the CERA-CRANIUM.

Curve (a) in Figure 6 represents the feedback loop produced when an instinctive reflex is triggered. Figure 6 curve (b) corresponds to a situation in which a mission-specific behavior is being performed unconsciously.

Finally, curve (c) symbolizes the higher-level control loop, in which a task is being performed consciously.

These three types of control loops are not mutually exclusive; in fact, the same percepts will typically contribute to simultaneous loops taking place at different levels.

CRANIUM workspaces are not passive short-term memory mechanisms. Instead, their operation is affected by a number of workspace parameters that influence the way the pandemonium works. These parameters are set by commands sent to physical and mission-specific layers from the CERA core layer. In other words, while CRANIUM provides the mechanism for specialized functions to be combined and thus generate meaningful representations, CERA establishes a hierarchical structure and modulates the competition and collaboration processes according to the model of consciousness specified in the core layer. This mechanism closes the feedback loop between the core layer and the rest of the architecture: core layer input (perception) is shaped by its own output (workspace modulation), which in turn determines what is perceived.

The CC-Bot2 Implementation

The following list briefly describes some of the main specialized processors implemented in CC-Bot2 (note that a number of processors performing the very same task but using different techniques might coexist in the same workspace).

  • AttackDetector (physical layer): detects conditions compatible with enemy attacks (health level decreasing, enemy fire, etc.).
  • AvoidObstacle (physical layer): generates a simple obstacle-avoidance behavior.
  • BackupReflex (physical layer): generates a simple backup movement in response to an unexpected collision.
  • ChasePlayer (mission layer): generates a complex player-chasing behavior.
  • EnemyDetector (physical layer): detects the presence of an enemy based on given conditions, like previous detection of an attack and the presence of other players using their weapons.
  • GazeGenerator (physical layer): generates a simple gaze movement directed towards the focus of attention.
  • JumpObstacle (physical layer): generates a simple jump movement in order to avoid an obstacle.
  • KeepEnemiesFar (mission layer): generates a complex run-away movement in order to maximize the distance to detected enemies.
  • LocationReached (physical layer): detects whether the bot has reached the spatial position marked as the goal location.
  • MoveLooking (physical layer): generates a complex movement combining gaze and locomotion.
  • MoveToPoint (physical layer): generates a simple movement towards a given location.
  • ObstacleDetector (physical layer): detects the presence of an obstacle (which might prevent the bot from following its path).
  • RandomNavigation (physical layer): generates a complex random wandering movement.
  • RunAwayFromPlayers (mission layer): generates a complex movement to run away from certain players.
  • SelectBestWeapon (mission layer): selects the best weapon currently available.
  • SelectEnemyToShoot (mission layer): decides which enemy is the best one to attack.

In our current implementation, specialized processors are created programmatically (see sample code below), and they are also assigned dynamically to their corresponding CERA layer. It is our intention to create a more elegant mechanism for the programmer to define the processor layout (a configuration text file or even a GUI).

// ** ATTACK DETECTOR ** Generates a BeingDamaged percept
// every time the health level decreases
_CeraPhysical.RegisterProcessor(new AttackDetector());
// ** OBSTACLE DETECTOR ** Generates an Obstacle single percept
// if there is any obstacle in the direction of the movement
_CeraPhysical.RegisterProcessor(new ObstacleDetector());
// ** ENEMY DETECTOR ** Generates an Enemy Attacking
// complex percept every time the bot is being damaged
// and possible culprit/s are detected.
_CeraMission.RegisterProcessor(new EnemyDetector());

Conscious-Robots Bot in action

The following is an excerpt of a typical flow of percepts that ultimately generates the bot’s behavior (see Figure 7):

  1. The processor EnemyDetector detects a new enemy, and creates a new “enemy detected” percept.
  2. The “enemy detected” percept is in turn received by the SelectEnemyToShoot processor, which is in charge of selecting the enemy to shoot. When an enemy is selected, the corresponding fire action is generated.
  3. Two processors receive the fire action: one in charge of aiming at the enemy and shooting, and another that creates new movement actions to avoid enemy fire.
  4. As these new movement actions have higher priority than actions triggered by other processors, like the RandomMove processor, they are more likely to be executed.

This is a very simple example of how the bot works.

However, it is usual to have much more complex scenarios in which several enemies are attacking the bot simultaneously, and the selected target might be any of them. In these cases, the attention mechanism plays a key role.

CERA-CRANIUM implements an attention mechanism based on active contexts. Percepts that are closer to the currently active context are more likely to be selected and further processed.

This helps maintain more coherent sequences of actions.

Future Work

CC-Bot2 is actually a partial implementation of the CERA-CRANIUM model. Our Machine Consciousness model includes much more cognitive functionality that remains unimplemented so far.

It is our aim to enhance the current implementation with new features like a model of emotions, episodic memory, different types of learning mechanisms, and even a model of the self.

After much hard work, we expect CC-Bot3 to be a much more human-like bot. We also plan to use the same design for other games, like TORCS or Mario.

Although CC-Bot2 could not completely pass the Turing test, it achieved the highest humanness rating (31.8%). As of today, Turing-test-level intelligence has never been achieved by a machine.

There is still a long way to go in order to build artificial agents that are clever enough to parallel human behavior. Nevertheless, we think we are working in a very promising research line to achieve this ambitious goal.

Acknowledgements

We wish to thank Alex J. Champandard for his helpful suggestions and comments on the drafts of this article.

Two Unreal bots are more human than many people

The judges were wrong and rated two bots in the game Unreal Tournament 2004 as more human than most human players.

But that was exactly the aim of this unusual competition, which was held for the fifth time.


In the fifth year of the 2K BotPrize, two teams succeeded in convincing the jurors that their bots act like humans. The bots and the human players fought duels in a modified deathmatch in Unreal Tournament 2004.

Mihai Polceanu’s Mirrorbot achieved a humanity value of 52.2 percent, almost as much as the best human player with 53.3 percent.

The judges observed the game and regularly “tagged” the bots and players either as machines or as humans. Just behind the Mirrorbot was UT^2 from the University of Texas at Austin with 51.9 percent humanity.

The UT^2 team has been trying to convince the jurors of the humanity of their own bot since the first competition.

Until now, the team had failed every time. In 2010 it reached only a little over 27 percent humanity; in 2011 the result even dropped to around 21 percent.

In the overall picture, however, the classification was still correct on average: the human players achieved an average humanity of 41.4 percent, while the bots averaged only 34.2 percent.

Interestingly, two players were surprisingly often classified as bots: John Weise and Chris Holme only convinced the jurors of their humanity in 30.8 and 26.3 percent of the cases, respectively.

The two winning bots were significantly above this value. The two winning teams shared the US$7,000 prize.

The results of the competition have been published on the botprize.org website.

Three short videos with a text explanation of the competition were also published.

The bot can allow itself to be distracted, for example by fighting, in order to appear more human.

For the tournament, the ability to chat with one another was switched off, so that the jurors had to be convinced of a player’s humanity purely through playing skill.

Those interested can download the University of Texas UT^2 bot. Mihai Polceanu’s bot is not available for download.