Time to Fold, Humans: Computer beat 11 pro poker players using “intuition”

Researchers at the University of Alberta are chipping away at the complexities of artificial intelligence with their new “DeepStack” system, which can not only play a round of poker with you but also walk away with all of your money.

Poker might not sound fundamentally different from any of the other games in which A.I. systems are already proficient. But it is. The reason comes down to how much information is available to the system. In a game like chess or checkers, everything is out there on the board: the computer knows exactly as much as the other player, and the strategy comes from predicting how the other player will move next given that shared information.
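To make the distinction concrete, here is a minimal sketch of what a player (human or machine) can actually observe in each kind of game. The class names and fields are illustrative assumptions, not DeepStack’s actual data structures:

```python
# A minimal sketch contrasting what each player can observe in a perfect-
# information game (chess) versus an imperfect-information game (poker).
# These classes are illustrative assumptions, not DeepStack's data structures.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ChessState:
    board: List[str]   # every piece is public: both players see the same thing
    to_move: str

@dataclass
class PokerState:
    community_cards: List[str]                                       # public
    pot: int                                                         # public
    hole_cards: Dict[str, List[str]] = field(default_factory=dict)   # private per player

    def observation_for(self, player: str) -> dict:
        """What one player (or an A.I. in their seat) actually gets to see."""
        return {
            "community_cards": self.community_cards,
            "pot": self.pot,
            "my_hole_cards": self.hole_cards.get(player, []),
            # The opponent's hole cards stay hidden, so the system must reason
            # over every hand they could plausibly be holding.
        }

state = PokerState(community_cards=["Ah", "7d", "2c"], pot=120,
                   hole_cards={"p1": ["Ks", "Kd"], "p2": ["9h", "9s"]})
print(state.observation_for("p1"))   # sees the board and its own kings, not p2's nines
```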

But poker is a lot harder. “Poker is a game of imperfect information, where players’ private cards give them asymmetric information about the state of the game,” the researchers write in a study published in the journal Science on Thursday. That means that, to be successful, an A.I. system has to handle far more complex situations, with less known information, than systems playing other games. The researchers estimate that DeepStack had to contend with roughly 10^160 decision points, comparable to a game of Go, which involves about 10^170. The main goal, the researchers write, was to prevent the system from being exploited by its cagey human opponents.
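Those figures are easier to appreciate with a couple of quick calculations. The snippet below was written for this article rather than taken from the paper; it counts the hidden hands an opponent could hold and compares the scales quoted above:

```python
# Quick back-of-the-envelope numbers for the figures quoted above (written for
# this article, not taken from the paper's code).
from math import comb

# With your own 2 hole cards and 5 board cards known, the opponent's hidden
# hand is one of:
unseen_cards = 52 - 2 - 5
opponent_hands = comb(unseen_cards, 2)
print(opponent_hands)   # 990 possible two-card holdings to reason over

# Relative scale of the decision-point counts cited in the article:
poker_points = 10 ** 160
go_points = 10 ** 170
print(go_points // poker_points)   # Go's game is about 10^10 times larger
```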

After training the A.I. with 10 million randomly generated poker games, the researchers tested DeepStack’s abilities on professional poker players. They played nearly 45,000 games; based on a complex scoring system, the A.I. system was found to have defeated 10 of the 11 players who played at least 3,000 games.
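The article doesn’t describe how those training games were generated, so the following is only a hedged sketch of one plausible sampling scheme; the pot sizes and the full-board deal are assumptions made for illustration, not the procedure from the study:

```python
# A hedged sketch of how "randomly generated poker games" might be produced as
# training examples. The sampling scheme, pot sizes, and full-board deal are
# assumptions for illustration, not the procedure from the study.
import random

RANKS = "23456789TJQKA"
SUITS = "cdhs"
DECK = [r + s for r in RANKS for s in SUITS]   # 52 cards

def random_training_situation(seed=None):
    rng = random.Random(seed)
    deck = DECK[:]
    rng.shuffle(deck)
    return {
        "p1_hole": deck[0:2],
        "p2_hole": deck[2:4],
        "board": deck[4:9],                       # a full five-card board
        "pot": rng.choice([100, 200, 400, 800]),  # made-up pot sizes
    }

# Ten million situations like these (the count reported in the article) would
# form the data a value network learns from.
print(random_training_situation(seed=0))
```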

In the study, the researchers write that a system like DeepStack could be useful as a tool for military defense of “strategic resources” and to help doctors determine which medical treatments are best for a patient. Because DeepStack is designed to avoid being exploited, an A.I. system like it could be best suited to help humans plan for worst-case scenarios, like an enemy attack or a rare medical complication.

To do this, however, you’d need to know all the available actions and possible outcomes in advance, Michael Bowling, a computing science professor at the University of Alberta and one of the study’s authors, told Vocativ via email. Training such a system would require many examples of real-life scenarios, along with neural networks that can, as Bowling put it, “capture your intuition about future decisions.”
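As a rough illustration of the kind of network Bowling describes, here is a small sketch in PyTorch that maps a featurized game situation to an estimate of how future decisions will pay off. The 64-feature input, the layer sizes, and the name IntuitionNet are assumptions made for this example, not DeepStack’s actual architecture:

```python
# A small sketch of a value network: it maps a featurized game situation to an
# estimate of how future decisions will pay off. The 64-feature input, layer
# sizes, and name are assumptions for this example, not DeepStack's architecture.
import torch
import torch.nn as nn

class IntuitionNet(nn.Module):
    def __init__(self, n_features: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),          # a single estimated value for the situation
        )

    def forward(self, x):
        return self.net(x)

model = IntuitionNet()
situation = torch.randn(1, 64)          # a placeholder featurized game state
print(model(situation))                 # the network's learned "gut feel"
```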

All of this can be a little tricky when real-life events are so complex. Some researchers, such as Milind Tambe at the University of Southern California, are already turning to game theory (A.I.’s philosophical underpinning) as a way to address security threats. Bowling, who has collaborated with Tambe in the past, says security situations are “probably the ripest for these sorts of techniques.” And though that group already has a number of security-situation simulations and historical data with which to evaluate its systems, Bowling says, it isn’t yet clear how DeepStack will be used to improve them.
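To show in miniature what a game-theoretic approach to security looks like, here is a toy example that computes a mixed strategy that cannot be exploited. The 2×2 “patrol game,” its payoffs, and the use of scipy’s linear-programming solver are assumptions made for illustration, not anything from Tambe’s or Bowling’s work:

```python
# A toy illustration of the game-theoretic idea that links DeepStack to the
# security work mentioned here: compute a mixed strategy that cannot be
# exploited. The 2x2 "patrol game" and its payoffs are made-up assumptions.
import numpy as np
from scipy.optimize import linprog

# Defender's payoff for patrolling site A or B when the attacker hits A or B.
payoff = np.array([[ 1.0, -0.5],
                   [-1.0,  2.0]])

# Maximize the guaranteed value v subject to: for every attacker choice j,
# the defender's expected payoff sum_i p_i * payoff[i, j] >= v.
# Variables are [p_A, p_B, v]; linprog minimizes, so we minimize -v.
c = [0.0, 0.0, -1.0]
A_ub = np.hstack([-payoff.T, np.ones((2, 1))])   # encodes v - p . payoff[:, j] <= 0
b_ub = np.zeros(2)
A_eq = [[1.0, 1.0, 0.0]]                         # probabilities sum to 1
b_eq = [1.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1), (0, 1), (None, None)])
print("unexploitable patrol mix:", res.x[:2])    # about [0.667, 0.333]
print("guaranteed value:", res.x[2])             # about 0.333
```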

Agencies/Canadajournal



