I resisted the temptation to tag this “go”.
The article says there should be a paper at Nature, but it doesn’t appear to be available yet.
The paper is up now. Nature.com.
I guess that explains why Mark Zuckerberg posted this morning about the Go AI team he keeps by his desk.
You can also find a PDF of the paper and a couple of videos on the official site.
Note that it beat the top European player, not the top player, and there is a large difference in skill between the two.
I’m (only) a 6k player myself, and that has taken a couple of years of haphazard study.
Official Go ranks run from 30k up to 1k (kyu), then from 1d to 9d (dan) amateur, and then from 1d to 9d professional. The amateur kyu and dan ranks indicate how many stones a given player can give another as a handicap. As I understand it, the professional ratings are “honorary” and don’t directly reflect anything besides having won an award; for professional performance, Elo ratings, akin to chess ratings, are the more accurate measure (professionals generally don’t play with handicaps). Previously, the highest-rated program was Crazy Stone, which is 6d amateur on the KGS Go server. The player they defeated would be around 9d amateur, but still well below the highest-rated professionals by a substantial margin based on Elo ratings. So this reflects a jump of 3-4+ stones in strength for Go algorithms. Which is to say that while this is indeed an unexpected jump in strength, computer Go was already doing quite well, and the previously best programs could beat 90%+ of amateurs.
Someone on HN recommended this helpful site:
As I understand it, the professional ratings are “honorary” and don’t directly reflect anything besides having won an award
Professional ranks are a reasonable proxy for skill level. A detailed resource on this topic is here. Brushing nuance aside, the general rule of thumb is that one rank difference for an amateur is worth one handicap stone in a game, while one rank difference for a professional is worth approximately a third of a stone. It’s a rough rule of thumb and there are exceptions. I agree that international Elo ratings are probably a more reliable way of comparing different players’ abilities, although international Elo is not really part of the culture of Go.
Comparing Fan Hui (2p) vs Lee Sedol (9p) by this rule of thumb, we can estimate about 2 1/3 stones of difference. My point is that this much difference at the professional level is an exponentially harder chasm to cross than the same difference at the amateur level. For yourself, consider the ease of progressing from 18k to 15k, and compare that with the ease of progressing from 9k to 6k.
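The rule of thumb above is easy to sanity-check. A minimal sketch, assuming the thread’s estimates (one amateur rank ≈ 1 handicap stone, one professional rank ≈ 1/3 stone); the function name is mine, and these are rough heuristics, not official ratings math:

```python
# Rough rank-gap-to-handicap-stones conversion, per the rule of thumb
# quoted in the thread. Illustrative only.

def amateur_stone_gap(rank_a, rank_b):
    """Amateur dan ranks: one rank difference ~ one handicap stone."""
    return abs(rank_a - rank_b)

def pro_stone_gap(rank_a, rank_b):
    """Professional dan ranks: one rank difference ~ 1/3 of a stone."""
    return abs(rank_a - rank_b) / 3.0

# Fan Hui (2p) vs Lee Sedol (9p): 7 rank steps ~ 2 1/3 stones.
print(round(pro_stone_gap(9, 2), 2))  # ≈ 2.33
```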
Semi-related: is there a framework somewhere for building and testing your own Go AI against? I’ve looked briefly before, but all I can find are Go servers that look like they were last updated in 1998. I imagine I’m just looking in the wrong place, since Go seems like a hot topic to train AI against. Does everyone just roll their own game engine/server to test against?
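For reference, the servers and tools mentioned in the replies below mostly speak the Go Text Protocol (GTP), a simple line-based text protocol. Here is a minimal sketch of a GTP command handler, assuming a toy pass-only engine (the bot name and structure are hypothetical; a real engine would plug actual move generation into genmove):

```python
# Minimal Go Text Protocol (GTP) handler sketch. GTP success responses
# are "= result" followed by a blank line; errors are "? message".

def _reply(msg=""):
    # "=\n\n" for empty results, "= msg\n\n" otherwise.
    return ("= " + msg).rstrip() + "\n\n"

def handle(line):
    """Handle one GTP command line and return the response string."""
    parts = line.strip().split()
    if not parts:
        return ""
    cmd = parts[0]
    if cmd == "protocol_version":
        return _reply("2")
    elif cmd == "name":
        return _reply("toy-bot")  # hypothetical engine name
    elif cmd == "version":
        return _reply("0.1")
    elif cmd in ("boardsize", "clear_board", "komi", "play"):
        return _reply()  # accept and ignore
    elif cmd == "genmove":
        return _reply("pass")  # a real engine chooses a move here
    elif cmd == "quit":
        return _reply()
    return "? unknown command\n\n"
```

Wire `handle` to a loop over `sys.stdin`, writing each response to `sys.stdout` and flushing, and controllers like GoGui can drive it.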
You most likely want to integrate with KGS, which is a server where mostly humans play each other.
Oooh, I feel silly now. I skimmed KGS, but it seemed to be for humans only. I dug deeper after your recommendation and it seems there is a computer-go room with more details. Thanks!
No problem! Getting oriented in a community is nontrivial. :) Good luck with your project, if you decide to spend time on it!
Besides playing on KGS, which is great for getting games against humans, here are a few other resources.
CGOS has traditionally been the place to get a lot of test games against other computer opponents; sadly, since the original developer (Don Dailey) passed away, it has been less stable and hence less used. Recently Hiroshi Yamashita set it up on another server, and it seems to be getting some traffic at http://www.yss-aya.com/cgos/.
Nick Wedd also holds a monthly computer tournament on KGS.
Finally the computer go community generally hangs out on the computer go mailing list (it can also be accessed through gmane).
Awesome, thanks for this!
There are some open-source bots; Pachi is one of the strongest. GoGui offers TwoGTP for automating testing.
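For the curious, a TwoGTP run looks roughly like this; a sketch assuming GoGui, GNU Go, and Pachi are installed, with paths and engine flags adjusted to your setup:

```shell
# Play 10 automated games between two GTP engines, saving SGF records.
# Engine commands are illustrative; point them at your own bot to test it.
gogui-twogtp \
  -black "gnugo --mode gtp" \
  -white "pachi" \
  -size 19 -games 10 -auto \
  -sgffile twogtp-out
```

Swapping your own engine in for one side gives you a batch of game records and a win/loss tally against a known baseline.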
“What if the universe,” Calo says, “is just a giant game of Go?”
The justification for the sensationalizing word is presumably that it happened in October and we are only now hearing about it.