Why an AI for Outwitters is hard to do
04-05-2012, 07:44 AM (This post was last modified: 05-05-2012 04:42 AM by Kamikaze28.)
Post: #1
Why an AI for Outwitters is hard to do
Artificial Intelligence is not easy. You might think it is, because you are used to seeing it in so many places nowadays: your iPhone (Siri), countless computer and console games, and even TV (if you watched IBM's Watson take on leading Jeopardy! champions). You may have seen science fiction movies or read novels where machines surpassed humans in their ability to think and act.
We are far away from the science fiction idea of AI, but board games and games in general have always been a very popular discipline in the field of AI research, mostly because they are fun and interesting, but also because they offer a wide array of problems to tackle and solve.
I would like to look at Outwitters from the vantage point of an AI creator to give you a little glimpse behind the scenes of AI. This will inevitably lead into some background on Artificial Intelligence, but fear not, brave readers: I will try my best to keep it simple and entertaining.

In AI terminology, an Artificial Intelligence is usually called an 'agent'. Agents basically do three things to achieve their goal:
  1. They observe their environment.
  2. They "think".
  3. They act on their environment.
I intentionally put the second point in quotes; we will get to that later. For now, let us look at the first point.
Environments come in many shapes and flavors but in the field of AI there is a small list of features that characterize each environment exactly. For Outwitters, this list of features would look like this:
  • partially observable - there is fog of war
  • strategic - the next state is only dependent upon the current state, the action of the agent and the action of its opponent
  • episodic - the agent experiences the environment in little chunks, turns in our case
  • static - the environment does not change while the agent is "thinking"
  • discrete - units can only be in cells and nowhere else
  • multi-agent - the human opponent is viewed as another agent
Now that we can characterize our environment, let us talk about the observation by the agent. You might think it would be sufficient to present the agent with the state of the game after the human's turn, but this would cut out a significant amount of information, namely what happened during the human's turn. Let me demonstrate this with a picture.

[Image: AI_1.png]

The Adorable runner moved there to destroy my Scallywag runner. Even though I cannot see it after this turn, I know for certain that it must still be there (unless there is a Mobi involved). The Adorable heavy, on the other hand, could have moved after my runner was destroyed, but based on this observation I can estimate where it is likely going to be in the next turn. The agent has to integrate all of this information to be competitive, because any skilled player would do the same thing.
This already touches on one of the major problems for my AI: dealing with uncertainty. It would be easy to create a simple reflex-based agent ("if you see this, then do that", skill level: lobotomized cockroach), but these kinds of agents are trivial to defeat. What we need is an agent that can deal with the fog of war. To do this, the agent would have to understand the basic rules of the game as well as all the special units' abilities. There are mathematical models to handle uncertainty, but believe me when I tell you that they are a nightmare of their own to wrap your head around.
Similar to the situation above, the agent would discover new information during its own turn. By spawning a unit or moving a unit, the fog of war changes and new enemy units may become visible. Uncertainty then becomes certainty and the whole situation changes.
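To make that bookkeeping a little more concrete, here is a tiny Python sketch of how an agent might track where a hidden unit could be. It is purely illustrative: the coordinates, the one-ring-of-cells-per-turn movement and the function names are my own simplifying assumptions, not anything from the actual game.

Code:
def neighbors(cell):
    # Axial hex-grid neighbors of a cell given as (q, r).
    q, r = cell
    return {(q + 1, r), (q - 1, r), (q, r + 1),
            (q, r - 1), (q + 1, r - 1), (q - 1, r + 1)}

def possible_positions(last_seen, turns_since_seen, visible_empty_cells):
    # Grow the set of reachable cells by one ring per enemy turn (a crude
    # simplification; real movement ranges differ per unit type), then remove
    # every cell we can currently see is empty.
    reachable = {last_seen}
    for _ in range(turns_since_seen):
        reachable |= {n for cell in reachable for n in neighbors(cell)}
    return reachable - set(visible_empty_cells)

# Example: an enemy runner last seen at (2, 3) one turn ago, while (2, 3)
# and (3, 3) are currently visible and empty.
print(possible_positions((2, 3), 1, [(2, 3), (3, 3)]))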

Next, we come to the "thinking" part. The reason I am using quotation marks all the time is that AIs do not think; they follow a predefined procedure to arrive at a specific action based on their observation and their own state, if they have one. Whether or not this could be called "thinking" is a different question entirely. How does an AI do this in the case of a game? Quite straightforwardly: it simulates all possible moves that it could make and all possible moves its opponent could make in response, and so forth, and then picks the one turn (set of moves) that promises the highest chance of winning. Now we are presented with two problems:
  • How many moves/turns do we simulate?
  • How do we score the states with regard to the agent's chance of winning?
The first question answers itself if you think about it: you cannot wait forever for the agent to come to a decision, and the agent does not have unlimited computing power and memory to simulate the game to its fullest extent. The second question is really tricky and there is no one right answer, but you can spend as much time as you want fiddling around and trying different approaches. For instance, there are two very different ways to win: destroy the base, or destroy all units and block your opponent's spawn point(s). Which of these goals should the agent strive for? What is more important: capturing an enemy flag or keeping your own? Is it worth building a runner and sending it into unknown territory to spy on the opponent? All these questions and more need to be answered before you can evaluate any state.
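To make the simulate-and-pick idea concrete, here is a bare-bones Python sketch of a depth-limited search. Everything in it is hypothetical: legal_turns, apply_turn and evaluate are stand-in names for things that do not exist yet, and writing them (especially evaluate) is exactly the hard part.

Code:
def minimax(state, depth, maximizing, legal_turns, apply_turn, evaluate):
    # Depth-limited minimax: simulate my turns and the opponent's replies down
    # to a fixed depth, then score the resulting positions heuristically.
    turns = legal_turns(state, maximizing)
    if depth == 0 or not turns:
        return evaluate(state), None
    best_score = float("-inf") if maximizing else float("inf")
    best_turn = None
    for turn in turns:
        child = apply_turn(state, turn, maximizing)
        score, _ = minimax(child, depth - 1, not maximizing,
                           legal_turns, apply_turn, evaluate)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_turn = score, turn
    return best_score, best_turn
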
To give you an example: Deep Blue, the famous chess computer which defeated the reigning chess world champion Garry Kasparov in 1997, checked 200 million positions per second and simulated 12 moves ahead (in special cases even 40 moves ahead).
Now that is chess; Outwitters is a whole different story. When thinking about game simulation, you can imagine a tree with the current state as the root/trunk and all possible states that you can reach from the given state as branches, and so on. An important feature of this tree is the branching factor, that is, how many states can come out of a given state. For chess, this branching factor is around 35. This is due to the rules of chess: during your turn you have to move exactly one piece. That is all you do in chess.
In Outwitters, you not only move units, but also use their abilities or build new units. And worst of all (from the AI's point of view), you can do more than one move in a turn, or do nothing. Similarly, the opponent can make very many moves in a turn or very few. This puts Outwitters' branching factor somewhere in the hundreds. What is so bad about this is that the broader the tree, the fewer moves ahead you can simulate in the same amount of time. The fewer moves the agent can look ahead, the less sophisticated its long-term strategies will be, and therefore the easier it is to beat. If we taught Deep Blue to play Outwitters, thinking 12 moves in advance would be like 2 turns (assuming Deep Blue could cope with the higher branching factor). That would result in an AI of novice difficulty.
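A quick back-of-the-envelope calculation (in Python) shows why this hurts so much. The Outwitters branching factor of 300 is just my rough estimate from above, not a measured number.

Code:
# Rough node counts for a full look-ahead: branching_factor ** depth.
chess_nodes      = 35 ** 12    # roughly the depth Deep Blue searched routinely
outwitters_nodes = 300 ** 12   # same depth, estimated branching factor of 300

print(f"chess, 12 plies:      {chess_nodes:.2e}")       # ~3.4e18
print(f"outwitters, 12 plies: {outwitters_nodes:.2e}")  # ~5.3e29
# At Deep Blue's 200 million positions per second, covering the second
# search space would take roughly 8 * 10 ** 13 years.
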
Last but not least, we have to talk about tuning the difficulty of an AI. If I did manage to get all the above problems solved, then I would have a nearly unbeatable AI. How do you diminish its skill level? Throw in random bad moves? That would produce some quite hilarious results. The answer would be to use different evaluation functions in the simulations, and given how much work this would have been the first time ... yeah, go figure.
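To illustrate what I mean by "different evaluation functions", here is a small Python sketch. The feature names and weights are invented for the example; the idea is that a weaker AI scores positions with a cruder function instead of playing random blunders.

Code:
def make_evaluator(weights):
    # Score a position as a weighted sum of (invented) features.
    def evaluate(features):
        return sum(weights.get(name, 0.0) * value
                   for name, value in features.items())
    return evaluate

novice_eval = make_evaluator({"own_units": 1.0})  # only cares about material
expert_eval = make_evaluator({"own_units": 1.0, "enemy_base_damage": 3.0,
                              "wit_income": 2.0, "flag_control": 2.5})

features = {"own_units": 4, "enemy_base_damage": 1,
            "wit_income": 2, "flag_control": 1}
print(novice_eval(features), expert_eval(features))  # 4.0 vs 13.5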

To sum up: I am not saying it is impossible to create an AI for Outwitters, but I am convinced that it would be a very difficult task to accomplish. It would also take a very long time and a ton of work.

I hope you enjoyed this read. If you have any questions, feel free to post them and I will try my best to answer them.

I am in no way affiliated with or authorized by One Man Left Studios, LLC.
Any information on Outwitters I present is founded on personal experience, public knowledge or the Outwitters Beta Test.
04-06-2012, 11:47 PM
Post: #2
RE: Why an AI for Outwitters is hard to do
Plus I would have to create a single player button. I already made enough buttons, you guys.
04-10-2012, 08:22 AM
Post: #3
RE: Why an AI for Outwitters is hard to do
(04-06-2012 11:47 PM)oneadamleft Wrote:  Plus I would have to create a single player button. I already made enough buttons, you guys.

LOL. Point taken.

Honestly, this would be so OneManLeft style if there were an FAQ:
"Why isn't there a single player mode?"
"I made enough buttons, you lonely poor human being."

-Don't play big maps against bernaferrari if your ego is important to you.

-Don't play any maps against Josecuervo if your ego is important to you.
04-12-2012, 05:12 AM (This post was last modified: 04-12-2012 05:14 AM by Harti.)
Post: #4
RE: Why an AI for Outwitters is hard to do
I have never programmed an AI myself, but I'd like to add an idea.

I think it would be possible to create an AI with certain heuristics and/or a predefined playstyle. Especially with the (very) limited number of actions a player can take, it should be much easier to decide on certain turns.
The AI will most likely not pass the Turing test (or, well, maybe it will), nor will it be competitive, but it would be a decent enough training partner for new players.

This is how it might work:
Each field on the map could have an integer value stating
- what units could reach and attack one on this space
- and for how much damage
- and how much wit that would cost (taking the AI's current wit as a measure)
(- if you can retaliate with Snipers or something)
- and if there are enemy production facilities nearby.

If the currently "thinking" unit's health could not sustain the damage, it is not likely to make the move (or it asks whether a Medic can push it a little). Turns would be:

- reach+kill opposing Special unit
- heal unit (if damage is high enough)
- reach+attack opposing base
- reach an opposing unoccupied wit flag
- reach a neutral unoccupied wit flag
- reach+kill unit on neutral wit flag (and get it yourself)
- reach one's own unoccupied wit flag (for defending purposes)
- reach+kill Sniper/Medic units
- reach+kill Heavy units
- reach+kill Soldiers
- kill Runners nearby
- etc...

Each unit (n is usually below 10) looks for the topmost heuristic that matches. If there is a match, it will think about what happens afterwards. It will calculate how much wit the unit itself "totally cost", i.e. recruitment cost (if any), add the amount of wit for movement, add 1 wit for attacking, and then subtract the opposing player's unit's "total cost". If it's a positive outcome, the unit performs the turn. If it's negative, it may perform the turn nonetheless (if there's a good reason for it, such as killing Special units *cough*) or not.
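Something like this little Python sketch is what I have in mind for the wit bookkeeping. The cost numbers are placeholders, not actual Outwitters values, and I have picked the sign so that a positive result means the trade gains wit value.

Code:
def net_wit_value(own_spawn_cost, movement_cost, target_spawn_cost, attack_cost=1):
    # What the attacking unit "totally cost" for this action versus what the
    # killed unit was worth; positive = favorable trade.
    spent = own_spawn_cost + movement_cost + attack_cost
    return target_spawn_cost - spent

# Example: a unit that cost 2 wit moves for 1 wit and attacks (1 wit)
# to kill a unit that cost 3 wit.
print(net_wit_value(own_spawn_cost=2, movement_cost=1, target_spawn_cost=3))  # -1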

The unit building process could also run on pretty basic heuristics. If there are not enough visible fields, the AI will decide to spawn a Runner. If there is no Medic and units are injured, it will build a Medic. If there are more Soldiers visible on the player's side than on the AI's side, it will build Soldiers or a Heavy. If there is a Heavy coming close, build a Sniper. If there is nothing important to do, save up 1-2 wit.
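Roughly like this, as a Python sketch; all the thresholds and names are made up for illustration.

Code:
def choose_spawn(visible_cells, total_cells, has_medic, injured_units,
                 enemy_soldiers, own_soldiers, heavy_approaching):
    # First matching rule wins; None means "save up wit instead".
    if visible_cells / total_cells < 0.4:   # too much fog of war
        return "runner"
    if not has_medic and injured_units:
        return "medic"
    if heavy_approaching:
        return "sniper"
    if enemy_soldiers > own_soldiers:
        return "soldier"
    return None

print(choose_spawn(12, 40, True, [], 2, 3, False))  # -> "runner"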

You could also give the bot a "rush a lot" attribute, causing it to constantly throw everything it has at the opponent.



Oh well. I don't see this being implemented this year, but I think it would be really cool to have an offline campaign or something. :D

jesusfuentesh Wrote:  Harti is like the silent lion. He never says any word, but when so, he was just waiting for his victim haha

04-12-2012, 06:28 AM
Post: #5
RE: Why an AI for Outwitters is hard to do
What you are describing here is a reflex-based agent: "if this, then do that." You can cobble together a heuristic all you want, but it will still remain a predictable and uncreative opponent which can be defeated by knowing its reflexes.
Case in point: you could guide your agent into ambushes all day long; it will always fall for them.
Furthermore, you have to re-evaluate everything after every move or spawn, since you could discover previously unknown enemy units, which would change the outcome of your "thinking" process.

Here's the next problem in your approach:

(04-12-2012 05:12 AM)Harti Wrote:  - reach+kill opposing Special unit
- heal unit (if damage is high enough)
- reach+attack opposing base
- reach an opposing unoccupied wit flag
- reach a neutral unoccupied wit flag
- reach+kill unit on neutral wit flag (and get it yourself)
- reach one's own unoccupied wit flag (for defending purposes)
- reach+kill Sniper/Medic units
- reach+kill Heavy units
- reach+kill Soldiers
- kill Runners nearby
- etc...
Your heuristic basically has to put all of these possible actions in an order of importance (preferably based on the current game state). You haven't even listed all possible "sub-goals", and with these 11 examples there are already 55 pairwise comparisons to be checked by possibly every unit on the field.

The reflex-based agent will not plan ahead, not even for the next turn. It will not anticipate possible enemy movements based on previous observations.

My estimate is that even new players would be able to defeat this AI by the 5th game, if not earlier. And after that, they can consistently defeat it by exactly reproducing the same moves as before, since the AI will react in exactly the same way. All you would get out of this is a tutorial after months of development. It's just not worth it in my opinion.

Fun fact: the conversation-AI equivalent of your idea would be ELIZA.

Sorry to disappoint you, but I hope you found the mental exercise worthwhile :)

04-12-2012, 09:05 AM
Post: #6
RE: Why an AI for Outwitters is hard to do
That was very insightful. Thanks!
Even with random numbers it may always be the same.

04-12-2012, 09:10 AM
Post: #7
RE: Why an AI for Outwitters is hard to do
(04-12-2012 09:05 AM)Harti Wrote:  Even with random numbers it may always be the same.

Making the overall heuristic *fuzzy* with some finely tuned random weights would remove the predictability to some extent, but it would retain all the other flaws.
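As a quick Python sketch of what such fuzzing could look like (the weight names are placeholders): jitter the evaluation weights, e.g. once per game, so the agent does not always pick the same move in the same situation.

Code:
import random

base_weights = {"material": 1.0, "flag_control": 2.0, "base_pressure": 3.0}

def fuzzed_weights(weights, noise=0.15, rng=random):
    # Scale each weight by a random factor in [1 - noise, 1 + noise].
    return {name: value * (1 + rng.uniform(-noise, noise))
            for name, value in weights.items()}

print(fuzzed_weights(base_weights))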

05-04-2012, 04:59 PM
Post: #8
RE: Why an AI for Outwitters is hard to do
I suppose that as far as programming the AI to predict uncertainty goes, people can't really predict that either, so you could program it to do what a human might do: try to maneuver in its own strategic way. For example, the AI could predict where the enemy units could move to a certain extent, so it could try its own strategy, like surrounding a possible area the enemy is in, spreading out its units, or retreating to reinforcements. An AI for a game like this could just have a few sets of default go-to strategies to use in case it has no idea what the enemy is doing... at least that's what I think. Wanting to be a video game programmer myself (although I have no experience programming, so I can't even begin to do so :P), I feel like making an AI might be the hardest part of making the game.
05-04-2012, 05:16 PM
Post: #9
RE: Why an AI for Outwitters is hard to do
(05-04-2012 04:59 PM)Solan Wrote:  I suppose that as far as programming the AI to predict uncertainty goes, people can't really predict that either, so you could program it to do what a human might do: try to maneuver in its own strategic way. For example, the AI could predict where the enemy units could move to a certain extent, so it could try its own strategy, like surrounding a possible area the enemy is in, spreading out its units, or retreating to reinforcements. An AI for a game like this could just have a few sets of default go-to strategies to use in case it has no idea what the enemy is doing... at least that's what I think. Wanting to be a video game programmer myself (although I have no experience programming, so I can't even begin to do so :P), I feel like making an AI might be the hardest part of making the game.

If the exploration of the game state works as intended, you won't actually have to define these go-to strategies; they will come as a bonus, because they are the most promising actions to take given the current state.
I would also disagree with your statement that humans don't keep track of uncertainties. Novice players may not do so, advanced players may do it subconsciously, and highly skilled players will probably do it consciously. It's an integral part of being human to handle uncertainty; we evolved this ability to survive, actually.
And to your last point: I completely agree that AI is probably the hardest part of a game like Outwitters, which is why OML completely omitted it in favor of the league system and matchmaking.

05-04-2012, 07:33 PM
Post: #10
RE: Why an AI for Outwitters is hard to do
Hmm, I didn't quite mean that humans don't keep track of uncertainties, more that humans can't completely predict what the opponent is doing... to a certain extent. What I meant was that, just like people can only guess what the enemy is doing, the AI could be programmed with a set of patterns for general things the enemy could be doing (like saving up, preparing for an attack, or being defensive) and have a general counter to each of them (like getting defensive if an attack is coming, saving up if... they're being defensive?). I feel like I can't quite explain my thoughts thoroughly, but I think that's clearer than what I said before.