So yesterday (8th June, 2021), a large chunk of the internet simply disappeared and became inaccessible. The problem turned out to be a single service provider that all the affected sites were using: a CDN, or Content Delivery Network, called Fastly.
It is Fastly’s job to get content to the user more quickly than if the user hit the intended destination directly, but as has now been shown, when something goes wrong the number of sites that go down with it is massive.
The problem has been blamed on a software bug that was triggered when one of Fastly’s customers changed some settings within their account. This has once again raised the question of whether we rely too heavily on a handful of companies to run huge amounts of the internet’s core infrastructure. When a single customer of a service, making an innocent configuration change, can take out a large chunk of the internet for an hour, there is a problem.
While software bugs are common, incidents like this do highlight a possible over-reliance on a few key big companies. What I personally find even more amusing is that these companies claim to have “no single point of failure”, and then you watch vast numbers of websites disappear because one failing point took out far more than it should have.
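One way a site can soften its dependence on a single CDN is to fall back to another source when the edge fails. Below is a minimal, hypothetical sketch of that idea in Python; the function names and the simulated outage are purely illustrative and have nothing to do with Fastly’s actual API or the real bug.

```python
# Hypothetical sketch: try a list of content sources in order and
# return the first successful response, so an outage at one provider
# does not take the whole site down. The "fetchers" here are stand-ins
# for real HTTP requests.

def fetch_with_fallback(fetchers):
    """Try each fetcher in order; return the first successful response."""
    last_error = None
    for fetch in fetchers:
        try:
            return fetch()
        except ConnectionError as exc:
            last_error = exc  # this source failed; try the next one
    raise RuntimeError("all sources failed") from last_error

def cdn_edge():
    # Simulate the CDN edge being down, as in the outage
    raise ConnectionError("CDN edge unreachable")

def origin_server():
    # Simulate a direct (slower) hit to the origin server
    return "200 OK from origin"

print(fetch_with_fallback([cdn_edge, origin_server]))
```

In practice this kind of failover is usually done at the DNS or load-balancer level rather than in application code, but the principle is the same: have a second path ready before the first one breaks.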
What is the answer though?