I looked over the doc when it went around earlier. Might as well link to my comment elsewhere.
They start out saying “our primary goal with the garbage collector is polish”.
It looks like some things will cut typical/peak pauses. For example, avoiding a full stack rescan before mark termination could eliminate rare longer pauses that happen when a chunk of data is missed during the concurrent phase of collection and only caught during STW, and other changes may move more work out of the (already short) stop-the-world phases. And it looks like much of it is work they really want to do for the sake of clean design and code.
I’d note they’re careful to distinguish between latency and throughput; e.g., the presentation was titled “Latency Problem Solved.” They didn’t announce big throughput increases as a goal for 1.6, or talk about moving to a generational collector anytime soon. The language in their blog post (“prioritizing simplicity,” the jab at “enterprise” GC, which suggests C#/Java-style generational collectors, and the reference to this as a collector for the next decade) makes me think maybe generational’s not coming in, say, 1.7 either.
Like all Kremlinology this is a bit goofy, but, tl;dr, I think you should expect some things that improve pause times and consistency, plus some cleanup that helps the project move forward, but not, say, 90% slashed off the (background) CPU-seconds used by the collector. Worth noting the status quo is pretty decent for a lot of Web-app-y/Internet-service-y stuff, even memory hogs; the comment linked above has examples of big pauses in presumably big-heaped apps becoming very short.