26 December 2007

Xmas

I'm down in Saint Louis for Xmas right now. On the way down I met this girl, Martha, and thought it was so totally random but then I remembered that's basically how I met Leila (s/bus/airplane/). Have I really been meeting random people everywhere all this time and only just now, now that I've got a system in place for this sort of thing, am I remembering them? I can't tell if my memory is getting better or if I'm getting better at not relying on it. If there were a way to test it....

Since Friday's shenanigans at the office I've been getting back into poker, to the point that I'm currently designing a kind of poker server that multiple clients will be able to connect to at once. Here's the twist: I want to attract other programmers to write plug-ins for it so that I can see what the bot-programming community is up to with poker. Historically, bots have not been very good at this sort of thing. They're not good at reading humans on so little information, whereas humans can read other humans very well because, well, we're all humans. That's a big advantage, particularly when a human can see someone's face: there are microscopic movements a face makes when the decision involves money. Certainly a computer could be trained to pick up on these too, but it would be pretty weird to literally build a poker robot rather than just a wee online bot.
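
Nothing about the server is nailed down yet, but here's a rough Python sketch of the sort of hook a bot plug-in might implement; every class and method name below is a placeholder I made up for illustration, not a settled API:

    # Hypothetical sketch of the hook a bot plug-in might implement.
    # All names here are placeholders, not the real server interface.

    class PokerBot:
        """Base class a plug-in would subclass to join a table."""

        def on_hand_start(self, hole_cards, players):
            """Called when a new hand is dealt to this bot."""
            pass

        def on_action(self, player, action, amount):
            """Called whenever any player bets, calls, raises, or folds."""
            pass

        def act(self, game_state):
            """Return this bot's move, e.g. ('fold',), ('call',), or ('raise', 40)."""
            raise NotImplementedError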

I think this is my new hobby, though, until I can get an apartment with a garage so I can go back to building stuff. I want to write simple bots and monitors, if for no other reason than to keep my AI skills sharp. In this case, I think my favoured approach is going to be a neural network with a set of inputs for each player, carrying a normalized signal from that player's current bet. For those not in the know, a back-propagation neural network is basically a computer simulation (and an inaccurate one, but authenticity is easily trumped by effectiveness) of a connected network of brain cells. Each layer is triggered by the outputs of the previous layer, multiplied by weights. Back-propagation is where, once a result is known, the weights are adjusted layer by layer so that the next result will be closer to the desired one.
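
To make that concrete, here's a minimal back-propagation net in Python along those lines, with one input per player carrying a normalized bet; the layer sizes, the names, and the single fold-or-stay output are stand-ins for illustration, not a finished design:

    import math
    import random

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    class TinyNet:
        """One hidden layer, trained by plain back-propagation.
        Inputs: one normalized bet signal per player (placeholder sizing)."""

        def __init__(self, n_inputs, n_hidden, learning_rate=0.1):
            self.lr = learning_rate
            self.w_hidden = [[random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
                             for _ in range(n_hidden)]
            self.w_out = [random.uniform(-0.5, 0.5) for _ in range(n_hidden)]

        def forward(self, inputs):
            self.inputs = inputs
            self.hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
                           for row in self.w_hidden]
            self.output = sigmoid(sum(w * h for w, h in zip(self.w_out, self.hidden)))
            return self.output

        def backprop(self, target):
            # Output error first, then push it back a layer and nudge the weights.
            out_delta = (target - self.output) * self.output * (1 - self.output)
            hid_deltas = [out_delta * self.w_out[j] * self.hidden[j] * (1 - self.hidden[j])
                          for j in range(len(self.hidden))]
            for j in range(len(self.w_out)):
                self.w_out[j] += self.lr * out_delta * self.hidden[j]
            for j, row in enumerate(self.w_hidden):
                for i in range(len(row)):
                    row[i] += self.lr * hid_deltas[j] * self.inputs[i]

    # e.g. ten players' normalized bets in, "fold (0) or stay in (1)" out
    net = TinyNet(n_inputs=10, n_hidden=4)
    bets = [0.0, 0.2, 0.2, 1.0, 0.0, 0.1, 0.0, 0.3, 0.0, 0.5]
    net.forward(bets)
    net.backprop(target=0.0)  # pretend the right move was to fold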

Since humans could intentionally play crazy during the first few hands to un-train the net for later, it seems like a pretty bad idea to back-propagate mid-game, but perhaps that would be safe as long as the human players can be trusted not to exploit it too much. Also, the more established the neural network's paths, the harder it would be for a human to exploit its mid-game learning, yet to a certain extent it would still be able to adapt its gameplay to each player's style. I've learned in the past that artificial neural networks are best kept small, though. My best one ever was actually not a neural network at all but a single perceptron (one "neuron" of a neural net) with a couple hundred inputs and meta-inputs. I did that mostly as an expedient, since back-propagation is way harder to program than perceptron learning, but it ended up being rock-solid. I'm not even really sure back-propagation was an option at the time, since this was for a class.
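
For anyone wondering why perceptron learning is so much easier to program, the whole rule fits in a few lines; this is a from-memory sketch rather than the actual class project, and the names are mine:

    class Perceptron:
        """A single unit with a hard threshold, trained by the perceptron rule."""

        def __init__(self, n_inputs, learning_rate=0.1):
            self.lr = learning_rate
            self.weights = [0.0] * n_inputs
            self.bias = 0.0

        def predict(self, inputs):
            total = self.bias + sum(w * x for w, x in zip(self.weights, inputs))
            return 1 if total > 0 else 0

        def train(self, inputs, target):
            # Only adjust when the prediction is wrong, and nudge each
            # weight toward the correct answer.
            error = target - self.predict(inputs)
            if error != 0:
                self.bias += self.lr * error
                self.weights = [w + self.lr * error * x
                                for w, x in zip(self.weights, inputs)]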

Oddly enough, a single perceptron with a sigmoid function to introduce nonlinearity into the threshold computation might suffice. At any rate, I'm much more experienced with perceptrons than with full neural networks, and I know a lot more tricks with them. The beauty of the way I'm programming this is that I can try both and have them battle against me and against each other! The notes are in my private wiki now . . . I suppose I should get going and start writing the monitor. Wish me luck, space cadets, and I'll see y'all at the tables.
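
P.S. For the curious, this is roughly what I mean by a single perceptron with a sigmoid squashing the threshold computation: the same single unit as above, but trained by gradient descent on the sigmoid's output. Again, just an illustrative sketch:

    import math

    class SigmoidPerceptron:
        """One unit with a sigmoid instead of a hard threshold,
        trained by gradient descent on squared error (delta rule)."""

        def __init__(self, n_inputs, learning_rate=0.1):
            self.lr = learning_rate
            self.weights = [0.0] * n_inputs
            self.bias = 0.0

        def predict(self, inputs):
            total = self.bias + sum(w * x for w, x in zip(self.weights, inputs))
            return 1.0 / (1.0 + math.exp(-total))

        def train(self, inputs, target):
            out = self.predict(inputs)
            # Gradient of squared error pushed back through the sigmoid.
            delta = (target - out) * out * (1.0 - out)
            self.bias += self.lr * delta
            self.weights = [w + self.lr * delta * x
                            for w, x in zip(self.weights, inputs)]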

3 comments:

Monkey said...

MAX- IF u R in Town call me asap-party Tomorrow night! Lost ur # Ceil

Fiola said...

play crazy like all in before anything?
you best keep that bot away from me.

leila said...

Only the cool kids take public transportation