Something weird and unexpected happened recently: the big Amazon cloud failed. Every media outlet is talking about it, and everyone is conveying the same sense of surprise.
Wait a second and let’s ask ourselves: why is it weird and unexpected that AWS failed? The cloud is a human creation, so it has failed, as expected, and will fail again. By the way, I am sure it failed a number of times in the past, but those failures weren’t big enough to be noticed like this latest event. Continue reading →
Everyone knows how hard it is to track down the cause of a bug that appears in only one environment; when that environment is Production, the problem is even worse, even if you have a solid error-logging system in place.
Recently we faced exactly this situation, and we had no clues to go on: all we knew was that the w3wp process was dying and the ASP.NET session remained locked. After some thought, we concluded that there was an infinite loop somewhere, and we had a vague idea of the area of code where it was happening, but we couldn’t reproduce it in any other environment, even after several hours of testing.
Just a quick note to let you know that Amazon is listening to us and has added a new feature to EC2: persistent storage.
As an AWS subscriber, yesterday I received an email in which Amazon announces that we “will be able to create volumes ranging in size from 1 GB to 1 TB, and will be able to attach multiple volumes to a single instance. Volumes are designed for high throughput, low latency access from Amazon EC2, and can be attached to any running EC2 instance where they will show up as a device inside of the instance…”.
The email ends by saying that the new functionality “will be publicly available later this year” and offers a link to request to join the private beta program; I signed up and will let you know as soon as I get my hands on it.