Artificial "Global" Test Agents?

This may seem like a silly question, but I’m still trying to ramp up on understanding how WPT works, let alone private instances of it 🙂

Since my website is still under development, I need to host my own private instance of WPT. That said, I really like having the ability to test from different geographical locations through the public instance of WPT.

Now, it would be ideal if I could stand up test agents in different physical locations, but that’s out of the question. The best I could do would be to have VMs in Seattle sit behind proxies in Europe and Asia, but that would severely limit the usefulness of my performance results.

Is there a means to simulate the performance characteristics of hitting a site from a different geographical location, similar to how traffic shaping is used to simulate different connection types?

Any help is much appreciated!


Don’t use a proxy for performance testing (even if it is local). Browsers behave differently when going through a proxy. Routing through a remote proxy is even worse because you’ll double the latency (and still pay the full latency for resources served close to the real location).

If you can poke firewall holes or otherwise allow outside access to the web server where you run WPT, your best option would be to use the EC2 AMIs I set up. You can spin testers up just for the time you want to do testing and then shut them down, and the small spot instances are usually < $0.10 per hour. Here is the info on using the EC2 instances:

The EC2 AMIs are available in all of the regions they support (California, Virginia, Ireland, Singapore and Tokyo)

Hmm. Unfortunately, I won’t be able to use Amazon’s service in my company due to cost and other reasons.

I was hoping to simulate the conditions programmatically. Is there any possibility for that? What are the driving factors for performance characteristic impacts across continents? Just the latency of reaching the server? We’re going to use a CDN for some content, but not all, so I expect minor impact to non-North American users when pulling the handful (~10) of site images we’re hosting locally.

For example, if I test from Dulles, VA on IE7 and from Delhi, India on IE7, it’s 2-3 times longer on most metrics.



Yes, latency is the driving factor for global performance. If your whole site were served from a single server with no CDN, it would be a trivial matter of increasing the latency in the connectivity configuration. When you start mixing in CDN-served content and 3rd-party content it gets exponentially more difficult.
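To see why latency dominates, here is a rough back-of-the-envelope sketch (the numbers are hypothetical, and this is not WPT code): for a single-origin page, load time grows roughly linearly with RTT, because every round trip (DNS, TCP handshake, each uncached request) pays the full latency.

```python
# Crude single-origin model: load time ~= round trips * RTT + raw transfer time.
# All numbers below are made-up illustrations, not measurements.

def estimated_load_time_ms(round_trips: int, rtt_ms: float, transfer_ms: float) -> float:
    """Very rough estimate of page load time for content served from one origin."""
    return round_trips * rtt_ms + transfer_ms

# Same page, same server, different simulated locations:
near = estimated_load_time_ms(round_trips=20, rtt_ms=50, transfer_ms=1000)   # e.g. a nearby agent
far  = estimated_load_time_ms(round_trips=20, rtt_ms=300, transfer_ms=1000)  # e.g. a distant agent

print(near, far)  # → 2000.0 7000.0
```

With these made-up numbers the distant case comes out about 3.5x slower, which is in the same ballpark as the 2-3x difference observed between Dulles and Delhi above.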

I wouldn’t say it would be impossible to model, but the effort involved would be significantly more than what it would cost to just deploy VMs (even if it’s not on EC2). You’d have to identify the servers that the page needs to talk to, ensure that they always resolve to the same IPs, determine the latency characteristics for each from various regions globally, and then configure dummynet with different latency pipes for each of them. That’s also assuming all of the services are trivial and don’t have back-end connections of their own with varying latency by region.
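The per-host dummynet setup described above could be scripted. A minimal sketch, assuming you already have the resolved IPs and target latencies (the hosts, IPs, and delays below are entirely made-up; `ipfw` is dummynet’s front end, and `pipe ... config delay` / `add pipe` are its real commands):

```python
# Sketch: generate one dummynet pipe per origin, each with its own added delay.
# The IP addresses and latencies are hypothetical placeholders.

hosts = {
    # resolved IP       delay (ms) to simulate for this origin from the target region
    "203.0.113.10": 150,  # main web server (far from the simulated region)
    "198.51.100.7": 20,   # CDN edge (close to the simulated region)
}

def dummynet_commands(hosts: dict) -> list:
    """Emit ipfw commands: a pipe with the desired delay per host,
    plus a rule sending that host's traffic through its pipe."""
    cmds = []
    for pipe_num, (ip, delay_ms) in enumerate(hosts.items(), start=1):
        cmds.append(f"ipfw pipe {pipe_num} config delay {delay_ms}ms")
        cmds.append(f"ipfw add pipe {pipe_num} ip from any to {ip}")
    return cmds

for cmd in dummynet_commands(hosts):
    print(cmd)
```

Even this sketch glosses over the hard parts the post mentions: pinning DNS so each host always resolves to the pinned IP, and the fact that third-party services have their own region-dependent back ends that no local pipe can model.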

I’ve looked at it several times, and at the end of the day the results would not have been close enough to real-world performance to justify the effort; we were better off deploying test agents where we needed them.