You can run a test that does a large download, enable tcpdump capture, and verify the shaping manually if you'd really like, but the verification that the code is doing what we expect is not run on every test.
The agents fully control the machines they run on and use netem+tc (winshaper on Windows, dummynet on Mac) to throttle the whole machine to whatever parameters are provided. There's no dependency on the browsers doing anything, and OS-level packet traffic-shaping is well understood and has been around for a long time.
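For reference, here is a minimal sketch of the kind of tc+netem setup involved on Linux. The interface name and the specific delay/rate numbers are illustrative, not the agents' actual commands (the real values would come from the requested connection profile):

```shell
# Illustrative only: add 28 ms of delay with netem, then cap egress at
# 5 Mbit/s with a token bucket filter chained under it, on eth0.
tc qdisc add dev eth0 root handle 1:0 netem delay 28ms
tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 5mbit burst 32kbit latency 400ms

# Remove the shaping when the test finishes.
tc qdisc del dev eth0 root
```

Because the qdisc sits at the OS network layer, every packet the machine sends goes through it, which is why nothing is required from the browser.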
As far as the networks themselves being able to keep up with the speeds and not add any extra latency, all of the test locations run on gigabit+ datacenter connections, except for LA and Dulles, which are both still 300+ Mbps.
The RTT latency is usually pretty easy to see in the connection-setup time for requests to a CDN like Akamai. It is typically just a few milliseconds above the configured setting, which is expected because the configured setting is for the last mile and there is still some real network-path overhead.
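If you want to sanity-check this yourself, timing a plain TCP connect gives a rough approximation of one RTT to the server. This is just a sketch (the function name and target are mine, not part of any agent code):

```python
import socket
import time


def tcp_connect_ms(host, port, timeout=5.0):
    """Time a TCP three-way handshake; roughly one RTT to the server."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; close it immediately
    return (time.perf_counter() - start) * 1000.0


if __name__ == "__main__":
    # Demo against a local listener so the script runs without network
    # access; in practice you would point it at a real host, e.g.
    # tcp_connect_ms("www.example.com", 443).
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    host, port = srv.getsockname()
    print(f"connect time: {tcp_connect_ms(host, port):.2f} ms")
    srv.close()
```

With shaping active you should see this number land a few milliseconds above the configured last-mile RTT, for the path-overhead reason described above.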