What?!! No really. Here's my doomsday rationale:
- A scary proportion of the world’s energy is already spent powering data centres (~2%)
- Cloud data centre capacity is growing fast but is inefficiently used (average ~10% resource utilisation).
- Infrastructure as code makes it easy to overprovision in the cloud. If machine creation can be scripted, it can be reproduced: you can create 100 machines as easily as 1.
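To see why scripted machine creation makes overprovisioning so frictionless, here's a toy sketch in Python. `provision_vm` is a hypothetical stand-in for whatever your cloud SDK's create-instance call looks like; the point is only the shape of the code.

```python
# Toy sketch: once machine creation is a function call,
# 100 machines really is as easy as 1.
# `provision_vm` is hypothetical -- a stand-in for any cloud SDK call.

def provision_vm(name: str, cpus: int = 2, ram_gb: int = 8) -> dict:
    """Pretend to create a VM; a real SDK call would go here."""
    return {"name": name, "cpus": cpus, "ram_gb": ram_gb}

# One machine...
fleet = [provision_vm("web-1")]

# ...is exactly as easy as a hundred.
fleet = [provision_vm(f"web-{i}") for i in range(1, 101)]

print(len(fleet))
```

Nothing in the loop asks whether you need 100 machines, which is rather the point.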
So, in time, the 2% of energy currently being used in data centres could become 4% or higher. That’s a lot of power stations. And a lot of overprovisioning.
Why do we Overprovision in the Cloud?
Because we can.
DevOps has given us the power, and it’s not as if we have much choice. Autoscaling is not magic - you can’t scale up in real time, because bringing up a new VM takes at least a few minutes. So if you cannot predict the future (and if you can, your remuneration package must be good), then to handle unexpected demand you have to overprovision and keep that extra capacity sitting around hot-but-idle. And thank goodness we have that choice. In the old days we just fell over ;-)
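The "hot-but-idle" point can be put as back-of-envelope arithmetic: if a new VM takes minutes to boot, you must already be running enough capacity for the worst spike that could land within that window. All the numbers below are illustrative assumptions, not measurements.

```python
# Back-of-envelope sketch of why slow VM boot forces hot spare capacity.
# All numbers are illustrative assumptions.

baseline_rps = 1_000         # steady-state requests per second
worst_spike_factor = 3.0     # biggest surge we fear inside one boot window
vm_capacity_rps = 200        # what one VM can serve

# To survive a 3x spike with no autoscaling help during the boot window,
# we must already be running enough VMs for the spike:
needed_vms = int(worst_spike_factor * baseline_rps / vm_capacity_rps)
steady_vms = int(baseline_rps / vm_capacity_rps)

idle_fraction = 1 - steady_vms / needed_vms
print(f"{needed_vms} VMs provisioned, {steady_vms} busy, "
      f"{idle_fraction:.0%} sitting hot-but-idle")
```

With these made-up numbers, two thirds of the fleet is powered on purely as insurance - which is exactly the overprovisioning the rest of this post worries about.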
Cloud+devops gives us the option of overprovisioning to avoid failure. So we do overprovision, and that’s not a crazy judgement: if you fall over, your company might fail. Overprovisioning is just money.
What does philosophy have to say?
My favourite 18th-century German philosopher, Kant, would ask “what if everyone overprovisioned their infrastructure?”
The answer is higher energy usage in a world where energy generation is mostly CO2-producing fossil fuels. Hmm. Kant would say not ideal. It’s a shame to be sitting in a cloud of CO2, but it’s particularly galling if that’s just to keep data centre capacity idle.
What can we do?
Hope for AWS to save us!
VMs and cloud infrastructure don’t help with server utilisation. It feels like they should, but the data suggest that in practice they don’t - they probably make it worse. However, there are lots of new technologies coming along that do help: containers, microscaling, orchestrators and (potentially) serverless architectures.
Just look at Google: they use all of these technologies to achieve server utilisation of around 70%, which is more than five times what the rest of us manage. If we were all achieving that, then maybe devops wouldn't help destroy the world after all.
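The gap between those two utilisation figures is worth making concrete. For the same useful work, the number of servers you need scales roughly as the inverse of utilisation - a simplifying assumption, ignoring per-server overheads, but it gives the flavour:

```python
# Rough sketch: same workload, different utilisation.
# The 10% and 70% figures are the ones quoted in the text above;
# the 1/utilisation scaling is a simplifying assumption.

typical_utilisation = 0.10
google_utilisation = 0.70

# Servers needed scales as 1 / utilisation for the same useful work.
server_ratio = typical_utilisation / google_utilisation
print(f"At 70% utilisation you'd need {server_ratio:.0%} "
      f"of today's fleet for the same work")
```

Roughly one seventh of the machines - and, to first order, one seventh of the hot-but-idle energy.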