Stock FAQs

Libratus stock price

by Ottis Little Published 2 years ago Updated 2 years ago

Is Libratus the best poker AI?

Inside Libratus, the Poker AI That Out-Bluffed the Best Humans. And it wasn't just any game of poker. Kim, 28, is among the best players in the world. The machine, built by two computer science researchers at Carnegie Mellon, is an artificially intelligent system that runs on a Pittsburgh supercomputer.

How did Libratus become so successful?

During the competition, the creators of Libratus were coy about how the system worked: how it managed to be so successful, how it mimicked human intuition in a way no other machine ever had. But as it turns out, this AI reached such heights because it wasn't just one AI.

Is Libratus the future of AI?

It's not just that AI spans many technologies. Humans are so often in the mix, too, actively improving, running, or augmenting the AI. Libratus is indeed a milestone, displaying a breed of AI that could play a role with everything from Wall Street trading to cybersecurity to auctions and political negotiations.

What role did neural networks play in Libratus?

Libratus, for one, did not use neural networks. Mainly, it relied on a form of AI known as reinforcement learning, a method of extreme trial-and-error. In essence, it played game after game against itself.
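The "game after game against itself" idea can be illustrated with a minimal self-play sketch. The example below uses regret matching on rock-paper-scissors as a stand-in game; this is a deliberately tiny illustration of trial-and-error self-play, not Libratus's actual algorithm (which ran counterfactual regret minimization at vastly larger scale):

```python
import random

random.seed(0)
ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors (a stand-in for poker actions)
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # payoff to whoever picks the row

def get_strategy(regret):
    """Regret matching: play each action in proportion to its positive regret."""
    pos = [max(r, 0.0) for r in regret]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def self_play(iterations=100_000):
    regret = [[0.0] * ACTIONS for _ in range(2)]
    strat_sum = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iterations):
        strats = [get_strategy(regret[p]) for p in (0, 1)]
        moves = [random.choices(range(ACTIONS), weights=strats[p])[0]
                 for p in (0, 1)]
        for p in (0, 1):
            opp = moves[1 - p]
            for a in range(ACTIONS):
                strat_sum[p][a] += strats[p][a]
                # regret: how much better action a would have done than the move played
                regret[p][a] += PAYOFF[a][opp] - PAYOFF[moves[p]][opp]
    total = sum(strat_sum[0])
    return [s / total for s in strat_sum[0]]

avg = self_play()
# the averaged self-play strategy approaches the Nash equilibrium (1/3, 1/3, 1/3)
```

The key property, shared with the real system, is that no human tells the agent how to play: the strategy emerges entirely from accumulated regret over self-play games.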


From Zero to Hero in 2 Years

Two years ago, a team from Carnegie Mellon University developed a computer program with the goal of beating the best players in heads-up no-limit hold'em, one of the more complex poker variants.

Who Was Playing?

Dong Kim, Jason Les, Jimmy Chou, and Daniel McAulay, four distinguished and well-versed poker players, represented the humans in this challenge.

Special Rules to Reduce Luck

This challenge lasted for 120,000 hands (30,000 per player) and ran from January 11-30. For each hand, the player and the AI started with 20,000 chips, with blinds at 50/100.

Results

After 20 days and 120,000 hands played, the result was shockingly unambiguous: Libratus beat each player and won at a rate of $14.72 per hand.
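The per-hand figure is easy to sanity-check. Using the widely reported final chip margin of $1,766,250 (tournament chips, not cash) over the 120,000 hands, with a big blind of 100:

```python
# Back-of-the-envelope check of the reported win rate.
total_winnings = 1_766_250  # Libratus's total chip margin, as widely reported
hands = 120_000
big_blind = 100  # blinds were 50/100, stacks reset to 20,000 each hand

per_hand = total_winnings / hands           # chips won per hand
mbb_per_hand = per_hand / big_blind * 1000  # milli-big-blinds per hand

print(round(per_hand, 2))   # 14.72
print(round(mbb_per_hand))  # 147
```

Expressed in the units poker researchers use, that is roughly 147 milli-big-blinds per hand, an enormous win rate for heads-up no-limit play.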

Maybe the AI Was Just Lucky?

While the rules of the challenge were set to reduce the luck factor as much as possible, chance still plays a big role in the results of each hand – even with mirrored hands and even with the elimination of all-in luck.
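The effect of mirrored (duplicate) hands on variance can be shown with a toy simulation. The model below is entirely invented for illustration: each deal contributes a large random "card luck" term plus a small fixed skill edge, and the mirrored copy gives the AI the opposite side of the same deal, so the luck largely cancels when the pair is averaged:

```python
import random
import statistics

random.seed(1)
SKILL_EDGE = 5  # assumed: chips per hand the stronger side earns on average

def play(card_luck, edge):
    # toy model: result = luck from the deal + skill edge + small play noise
    return card_luck + edge + random.gauss(0, 10)

plain, mirrored = [], []
for _ in range(5000):
    luck = random.gauss(0, 100)    # large swing from the cards dealt
    a = play(luck, SKILL_EDGE)     # AI holds the lucky side of the deal
    b = play(-luck, SKILL_EDGE)    # mirrored hand: AI holds the unlucky side
    plain.append(a)
    mirrored.append((a + b) / 2)   # average over the mirrored pair

# mirroring leaves the mean edge intact but slashes the variance
```

In this toy model the mirrored estimate keeps the same average edge while its standard deviation drops by more than an order of magnitude, which is exactly why the challenge used mirrored hands: far fewer hands are needed before skill dominates chance.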

How Does Libratus Work?

At its core, the Libratus AI is a huge set of strategies that define how to play in any given situation.
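For illustration, a "situation maps to an action distribution" strategy could be sketched like this. Every situation key, hand label, and probability below is invented; Libratus's real strategy is vastly larger and is computed by equilibrium finding, not written by hand:

```python
# A hypothetical fragment of a situation -> action-distribution table.
strategy = {
    # (street, hand bucket, pot context): {action: probability}
    ("preflop", "AKs", "unopened"): {"raise": 0.95, "call": 0.05},
    ("flop", "top_pair", "small_pot"): {"bet": 0.70, "check": 0.30},
}

def act(situation, table, default="check"):
    """Pick the highest-probability action for a known situation."""
    dist = table.get(situation)
    if dist is None:
        # unseen situation: fall back (the real system re-solves such spots)
        return default
    return max(dist, key=dist.get)
```

The interesting engineering question, addressed by the modules described later, is what to do for the astronomical number of situations that cannot all be stored in such a table.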

Complexity is Limited

How can a computer beat seemingly strong poker players? For most players, poker is a game of reads, guts, deception, and intuition.

How many days did Kim play Texas Hold'em?

And for twenty straight days, they played no-limit Texas Hold 'Em, an especially complex form of poker in which betting strategies play out ...

Where did Dong Kim play poker?

For almost three weeks, Dong Kim sat at a casino in Pittsburgh and played poker against a machine. But Kim wasn't just any poker player. This wasn't just any machine. And it wasn't just any game of poker. Kim, 28, is among the best players in the world. The machine, built by two computer science researchers at Carnegie Mellon, ...

Who built Libratus?

"We don't tell it how to play," says Noam Brown, a CMU grad student who built the system alongside his professor, Tuomas Sandholm. "It develops a strategy completely independently from human play, and it can be very different from the way humans play the game." But that was just the first stage.

Libratus versus humans

Pitting artificial intelligence (AI) against top human players demonstrates just how far AI has come. Brown and Sandholm built a poker-playing AI called Libratus that decisively beat four leading human professionals in the two-player variant of poker called heads-up no-limit Texas hold'em (HUNL).

Abstract

No-limit Texas hold’em is the most popular form of poker. Despite artificial intelligence (AI) successes in perfect-information games, the private information and massive game tree have made no-limit poker difficult to tackle.

Abstraction and equilibrium finding: Building a blueprint strategy

One solution to the problem of imperfect information is to simply reason about the entire game as a whole, rather than just pieces of it. In this approach, a solution is precomputed for the entire game, possibly using a linear program (10) or an iterative algorithm (17-21).
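Because the full game is far too large to solve directly, the blueprint is computed on an abstraction of the game: strategically similar hands are grouped into buckets, and one strategy is solved per bucket. The sketch below shows the bucketing idea only; the equity numbers and bucket count are invented, and real abstractions use much richer features than a single win probability:

```python
# A toy card abstraction: group hands by equity into a few buckets so the
# equilibrium finder solves a much smaller game.
def bucket(equity, n_buckets=5):
    """Map a hand's win probability (0..1) to a coarse bucket index."""
    return min(int(equity * n_buckets), n_buckets - 1)

# Illustrative (invented) preflop equities against a random hand:
hands = {"72o": 0.35, "T9s": 0.57, "QQ": 0.80, "AA": 0.85}
buckets = {h: bucket(e) for h, e in hands.items()}
# QQ and AA land in the same bucket, so they share one blueprint strategy
```

The price of abstraction is that distinct hands sharing a bucket are forced to play identically, which is precisely the weakness the subgame-solving module described next is designed to repair.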

Nested safe subgame solving

Although purely abstraction-based approaches have produced strong AIs for poker (25, 30, 32, 41), abstraction alone has not been enough to reach superhuman performance in HUNL.

Self-improvement

The third module of Libratus is the self-improver. It enhances the blueprint strategy in the background: it fills in missing branches in the blueprint abstraction and computes a game-theoretic strategy for those branches. In principle, one could conduct all such computations in advance, but the game tree is far too large for that to be feasible.
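The gist of that prioritization can be sketched as follows. The interface, blueprint bet sizes, and tolerance below are all assumptions made for illustration, not Libratus's actual code: the idea is simply to log opponent bet sizes that fall outside the blueprint abstraction and queue the most frequent ones for background solving:

```python
from collections import Counter

# Assumed blueprint bet sizes, expressed as fractions of the pot.
BLUEPRINT_BETS = {0.5, 1.0, 2.0}

def off_blueprint(observed_bets, tolerance=0.05):
    """Count observed pot-fraction bet sizes not close to any blueprint size."""
    missing = Counter()
    for b in observed_bets:
        if all(abs(b - s) > tolerance for s in BLUEPRINT_BETS):
            missing[round(b, 2)] += 1
    return missing

def next_branches_to_solve(observed_bets, k=2):
    """Pick the k most frequently used missing sizes to add to the blueprint."""
    return [size for size, _ in off_blueprint(observed_bets).most_common(k)]
```

Focusing the overnight computation on the branches opponents actually used is what made the self-improver practical: each day's play told the system which holes in its blueprint were worth patching first.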

Experimental evaluation

To evaluate the strength of the techniques used in Libratus, we first tested the overall approach of the AI on scaled-down variants of poker before proceeding to tests on full HUNL. These moderate-sized variants consisted of only two or three rounds of betting rather than four, and, at most, three bet sizes at each decision point.

Conclusions

Libratus presents an approach that effectively addresses the challenge of game-theoretic reasoning under hidden information in a large state space. The techniques that we developed are largely domain independent and can thus be applied to other strategic imperfect-information interactions, including nonrecreational applications.
