Since about 1 AM PDT, AWS US-East’s EBS service has been down. It’s been 24 hours now, and many people are getting mighty antsy about this disaster. According to their status site, “[a] networking event early this morning triggered a large amount of re-mirroring of EBS volumes in US-EAST-1. This re-mirroring created a shortage of capacity in one of the US-EAST-1 Availability Zones, which impacted new EBS volume creation as well as the pace with which we could re-mirror and recover affected EBS volumes.” The irony is that the automated recovery activity itself, the re-mirroring, is bringing down the entire EBS infrastructure in that availability zone, along with any EC2 instances that depend on EBS (which most probably do).
As for my servers, they seem to be OK over in US-WEST.