You got my attention, but the lack of information started to lose it.
It’s fast! How fast? This fast! Great. Well, probably great – is that compared to a plain SQL pg_dump, a compressed one, or what? Oh, wait, that’s restore. So probably a binary dump. Compressed? Unknown.
Is it more space-intensive? No idea.
Is it scalable? No idea.
How does it work? No idea.
Reading the code, it turns out it’s using PostgreSQL’s internal template database capability, which performs a full copy. It’s a clever use of that facility, but not very scalable (in time or space). Still, if you’re primarily using it for development on mostly-empty databases that need schema work, then it sounds like a great idea. Though there’s a reason it’s compared to pg_restore instead of pg_dump – on snapshot it’s still doing a single-threaded full copy (while on restore it’s just doing a rename).
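For the curious, the Postgres mechanism boils down to a couple of statements (database names here are hypothetical; note also that the template copy fails if any other session is connected to the source database):

```sql
-- Snapshot: a full physical copy of the source DB via the template facility.
-- Postgres refuses this while any other session is connected to mydb.
CREATE DATABASE mydb_snapshot TEMPLATE mydb;

-- Restore: no copying at all, just a pair of renames.
ALTER DATABASE mydb RENAME TO mydb_old;
ALTER DATABASE mydb_snapshot RENAME TO mydb;
```

That’s why snapshot cost scales with database size while restore is near-instant.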
In MySQL, it’s copying table by table, including full inserts, so it’s both more complicated and likely to take longer (though, again, on restore it’s all renames). It looks like that support is still in progress. I’d advise adding “ENABLE KEYS”, and considering the sticky problem of triggers. I’m not sure I’d advise using this with MySQL before those problems are handled.
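As I read it, the MySQL path is roughly the following per table (names are hypothetical; note that DISABLE/ENABLE KEYS only affects MyISAM tables, and CREATE TABLE ... LIKE does not copy triggers – which is exactly the sticky problem above):

```sql
-- Snapshot: per-table structural copy plus a full row copy.
CREATE TABLE snapshot_db.users LIKE app_db.users;
ALTER TABLE snapshot_db.users DISABLE KEYS;  -- MyISAM only; InnoDB ignores it
INSERT INTO snapshot_db.users SELECT * FROM app_db.users;
ALTER TABLE snapshot_db.users ENABLE KEYS;

-- Restore: a cross-database rename, cheap like the Postgres case.
RENAME TABLE app_db.users TO old_db.users,
             snapshot_db.users TO app_db.users;
```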
TL;DR: Don’t use this on your 20TB production DBs. It will work well for developers with schema-only Postgres DBs.
I wish someone could fix the (presumed) typo in the title. It’s giving me nervous tics!