Post by Slartucker on Feb 1, 2008 15:52:24 GMT -5
Hah! No. I started a whole bunch of projects in December, thinking I had lots of time to spare. Now I have little time even for those projects. Alas...
Post by Rycchus on Feb 3, 2008 13:04:26 GMT -5
I believe the idea is to treat it as a sequential move game, with you moving first. (I'm assuming from the way you phrased your question that you know how to do sequential move games.) If you're minimaxing, then you're basically assuming your opponent knows what you're going to play and will play against that - which is the same as you playing first and then him playing.
Sorry if this is obvious/no help.
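For concreteness, here's a rough sketch of that "you move first" idea (the payoff matrix and move names below are made up for illustration, not from the actual game):

```python
# Pure-strategy minimax under the sequential assumption: for each of my
# moves, suppose the opponent then picks whatever is worst for me, and
# choose the move whose worst case is least bad.

payoffs = {                         # payoffs[my_move][their_move] = my payoff
    "attack": {"attack": -1, "defend": 2},
    "defend": {"attack":  1, "defend": 0},
}

def minimax_move(payoffs):
    # worst-case payoff for each of my moves
    worst = {mine: min(row.values()) for mine, row in payoffs.items()}
    # pick the move whose worst case is largest
    return max(worst, key=worst.get)

print(minimax_move(payoffs))  # "defend" (worst cases: attack -1, defend 0)
```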
Post by azurekuhn on Feb 7, 2008 20:38:29 GMT -5
(quoting Rycchus) "I believe the idea is to treat it as a sequential move game, with you moving first. (I'm assuming from the way you phrased your question that you know how to do sequential move games.) If you're minimaxing, then you're basically assuming your opponent knows what you're going to play and will play against that - which is the same as you playing first and then him playing. Sorry if this is obvious/no help."

Unfortunately, this method isn't considered the correct strategy for the game. The idea is to maximize your average payoff, which you don't actually get by pretending the game is sequential. Rather, you work out a mixed strategy: play each of your options with a certain probability. (Think of a bolt/counter situation, where the right strategy is for the attacker to usually dummy and the defender to usually counter.) All this is rather unfortunate, since the sequential idea is quite easy. *pout, pout, complain*. *okay, not really*.

This page, students.cs.byu.edu/~cs670ta/Lectures/Minimax2.html, gives a description of the case where you have 2 choices which makes sense to me. The basic idea is to assume you use strategy A with probability p and B with probability 1-p. Each of your opponent's possible strategies gives you a linear function of p, and you look for the p where the minimum over those lines is as large as possible.

Where I'm stuck now is that various sources note that the more general problem with more than 2 choices involves more complicated linear programming, which none of these sources care to get into because it's beyond their scope. Hrmph. I'll keep looking for a decent source. Perhaps my advisor from school knows - he's an AI computer scientist, although his specialization is in natural language processing. Maybe we could try to interpret the chat that goes on during games instead... ;p
Post by Rycchus on Feb 8, 2008 6:50:43 GMT -5
I read this and spent a lot of time thinking about the differences between "minimising your maximum loss" (how I see it, and how it is usually described) and "maximising your average payoff" (your description). My conclusion is that a good player will do something in between - if you take each example literally, you can find a situation in which sticking rigidly to one or the other is clearly suboptimal.
Anyway, I think that the difference is subtle enough that to make an AI that "minimises your maximum loss" would work in almost all cases. Of course, to do this, you assume that "after" your move your opponent plays whatever would be worst for you - and then you choose the corresponding "least-bad" option on your part.
However, whilst I can understand the Warlocks and the mathematics (i.e. the theory), the coding and implementation (i.e. the practical side) are lost on me, so I'm not sure I can help you with that. I was just repeating what I'd heard others say elsewhere. From a theoretical point of view it makes sense to me, and if it's easier to code with a negligible difference in output? Seems like the right way to go.
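On the "difference is subtle" point, a quick illustration (numbers invented): when the matrix has a saddle point, the pure "least-bad" move already achieves the mixed-strategy value and the two approaches agree; in a bluffing-style matrix with no saddle point they come apart.

```python
# Compare the pure "minimise your maximum loss" value with the mixed value
# (brute-forced on a grid of p) for two invented 2x2 payoff matrices.

def pure_security(M):
    # best worst-case payoff achievable with a single (pure) row
    return max(min(row) for row in M)

def mixed_security(M, steps=10000):
    # scan mixes (p, 1-p) over the two rows, keep the best worst case
    best = float("-inf")
    for i in range(steps + 1):
        p = i / steps
        worst = min(p * M[0][j] + (1 - p) * M[1][j] for j in range(len(M[0])))
        best = max(best, worst)
    return best

saddle = [[3, 2], [1, 0]]   # has a saddle point: both methods guarantee 2
bluff  = [[2, -1], [-1, 1]] # no saddle point: mixing does strictly better
print(pure_security(saddle), mixed_security(saddle))  # 2 and 2.0
print(pure_security(bluff), mixed_security(bluff))    # -1 and roughly 0.2
```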
Post by Dubber on Feb 9, 2008 12:34:06 GMT -5
Meta comment here.
None of what you guys are discussing is new ground. You seem to be working further into the details of how each option won't work - at least to my ignorant eyes, anyway.
Caveat: I am not an AI geek/programmer