Webdevs not realizing their websites are loading slowly because their network is too fast is a problem, but I disagree that the solution is to give the devs crappier machines. It would be better to make them test using a proxy that simulates bad connections by delaying or dropping packets.
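On Linux, the kernel's netem queuing discipline can do this kind of simulation without a separate proxy; a minimal sketch (assumes a Linux box, root privileges, and that the test traffic goes over the interface you shape, here the loopback device):

```shell
# Add 200 ms of delay (with 50 ms of jitter) and 5% packet loss
# to all traffic on the loopback interface.
tc qdisc add dev lo root netem delay 200ms 50ms loss 5%

# Run the page-load test against the local dev server here...

# Remove the shaping when done.
tc qdisc del dev lo root netem
```

Shaping `lo` only affects local traffic; for testing against a remote staging server you would shape the outbound interface instead, or put the rules on a dedicated proxy box so individual dev machines stay untouched.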
I’ve always thought this was best addressed with more specific targets: something like the site loading, or the app completing operation X, within Y seconds on a particular reference machine and configuration. Developers then know that if they exceed the target, they have to work on performance until they can meet it, and if they’re under it, they’re okay and can work on features and bugfixes instead. Meanwhile, the engineering and business sides can evaluate the business impact and the engineering cost associated with any particular change to that number.
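A target like that is easy to enforce mechanically in CI. A minimal sketch, assuming measured timings are collected elsewhere (the names `BUDGETS_MS` and `check_budgets` are illustrative, not from any real tool):

```python
# Per-operation performance budgets, in milliseconds, for the
# reference machine/configuration the team agreed on.
BUDGETS_MS = {
    "page_load": 2000,  # the site must load in 2 s
    "search": 500,      # operation X must finish in 0.5 s
}

def check_budgets(measured_ms, budgets=BUDGETS_MS):
    """Return a list of (operation, measured, budget) for every miss.

    An empty list means every target was met, so the team can go work
    on features and bugfixes instead of performance.
    """
    return [
        (op, ms, budgets[op])
        for op, ms in measured_ms.items()
        if op in budgets and ms > budgets[op]
    ]

if __name__ == "__main__":
    misses = check_budgets({"page_load": 2400, "search": 310})
    for op, ms, budget in misses:
        print(f"{op}: {ms} ms exceeds the {budget} ms budget")
```

The nice property is that the output is a single yes/no per operation, which is exactly the shape of number the business side can attach a goal or a cost to.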