(Huge) Performance gaps between different Amazon Instance locations

We are using private Amazon EC2 instances for running automated daily benchmarks for several websites. In December we switched our test-agent location from Ireland to Frankfurt (to get more realistic latencies from Germany).

We expected slightly better results, since the tested websites are hosted in Germany. Instead, our test results (except time to first byte) are up to 7 seconds slower!

(See attached image):

  • To Dec 07: t2.micro / Ireland / ami-8b2c2fff
  • From Dec 07: t2.micro / Frankfurt / ami-54291f49
  • From Jan 07: m3.medium / Frankfurt / ami-54291f49
  • From Jan 12: (testing various instances…)

[attachment=471]

I’ve read that t2.micro instances are not sufficient / recommended for test agents, but the instance size doesn’t seem to have an impact on the t2.micro instances hosted in Ireland. (Maybe this is because ami-8b2c2fff is 32-bit?)

Obviously we’d like to stay with micro instances if possible. But is there also a 32-bit AMI for Frankfurt (if that is the reason)?
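
For reference, one way to check which agent AMIs and architectures are actually available in Frankfurt is to list them there. This is only a sketch using boto3; the "*webpagetest*" name filter is a guess at how the agent AMIs are named, not something I have verified:

    # Sketch: list AMIs in eu-central-1 (Frankfurt) and show their architecture.
    # Assumes boto3 credentials are configured; the name filter is an assumption.
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-central-1")

    response = ec2.describe_images(
        Filters=[
            {"Name": "name", "Values": ["*webpagetest*"]},
            {"Name": "architecture", "Values": ["i386", "x86_64"]},
        ]
    )

    for image in sorted(response["Images"], key=lambda i: i["Name"]):
        print(image["ImageId"], image["Architecture"], image["Name"])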

And why does the TTFB stay at the same level, while only fullyLoaded, docComplete and speedIndex differ from location to location?

t2.micros are too small; I normally go with m3.mediums - this is what the Server AMI fires up by default, I believe.

For my own WPT installs I allow 2 vCPUs and 2 GB of RAM for each Windows VM; at work I think we allow 2 vCPUs and 1.5 GB of RAM per VM.
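
If you launch the agents from a script rather than the console, switching size is just a matter of changing the instance type. A minimal boto3 sketch, where the key pair and security group are placeholders:

    # Sketch: launch a single test agent as an m3.medium in Frankfurt.
    # KeyName and SecurityGroupIds are placeholders, not real values.
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-central-1")

    result = ec2.run_instances(
        ImageId="ami-54291f49",            # Frankfurt agent AMI from the tests above
        InstanceType="m3.medium",          # instead of t2.micro
        MinCount=1,
        MaxCount=1,
        KeyName="my-keypair",              # placeholder
        SecurityGroupIds=["sg-12345678"],  # placeholder
    )

    print(result["Instances"][0]["InstanceId"])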

TTFB stays roughly the same across all agents, as it’s determined by the round trip to the server and the time the server takes to generate the page. The other metrics depend on how long the browser takes to process the page, and that will be CPU / memory dependent.

What does the CPU chart look like across the various tests? I’d expect it to be maxed out on the t2s.
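
If the CPU chart in the WPT results isn’t conclusive, you can also pull the instance-level CPU numbers from CloudWatch. A rough boto3 sketch, with the instance ID as a placeholder (Period=300 matches the 5-minute granularity of basic monitoring):

    # Sketch: average CPUUtilization for one agent instance over the last day.
    # The instance ID below is a placeholder.
    from datetime import datetime, timedelta
    import boto3

    cw = boto3.client("cloudwatch", region_name="eu-central-1")

    stats = cw.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        StartTime=datetime.utcnow() - timedelta(days=1),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Average"],
    )

    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], round(point["Average"], 1))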

Thanks Andy! I’ll have a closer look at the CPU usage over the next few days.

But why are results so different when using a t2.micro in Ireland and a t2.micro in Frankfurt?