Today at about 12:26 PM and 12:34 PM (Pacific time), our network was briefly attacked by an extremely high volume of data — a “distributed denial of service” (DDoS) attack using forged (“spoofed”) source addresses. The volume of the attack was more than 50 times greater than our usual peak inbound data rate to all our servers combined. This caused Web sites and e-mail we host to be very slow, or to time out completely, for a few minutes. (All services are working normally now.)
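For the technically curious: the “source address” in each Internet packet is simply a field the sender fills in itself, and nothing on the wire verifies it, which is why it can be forged. A minimal sketch (illustrative only, using documentation-reserved example addresses, not real attack traffic) shows that the source address is just four bytes at a fixed offset in the IPv4 header:

```python
import socket
import struct

def ipv4_source(header: bytes) -> str:
    """Extract the source address from a 20-byte IPv4 header.

    The sender writes this field itself -- nothing authenticates
    it, which is what makes "spoofing" possible.
    """
    # The source address occupies bytes 12-15 of the IPv4 header.
    return socket.inet_ntoa(header[12:16])

# A forged header can claim any source address the attacker likes:
forged = struct.pack(
    "!BBHHHBBH4s4s",
    0x45, 0, 20, 0, 0, 64, 17, 0,        # version/IHL .. checksum
    socket.inet_aton("198.51.100.7"),    # spoofed source address
    socket.inet_aton("203.0.113.9"),     # real destination (target)
)
print(ipv4_source(forged))  # 198.51.100.7
```

Because every packet arrives claiming a different, forged origin, simple source-based filtering doesn’t work, and the attack has to be traced hop by hop instead.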
The same attack happened a week ago. Based on what we learned previously, we were able to trace the attack in more detail, and we have identified a specific controversial site that the attackers are targeting. We have moved that site to a different section of our network that can fail without affecting other sites, and we will work with the site owner to move it to a dedicated DDoS protection service.
We apologize for the problems caused by this incident. We know that achieving maximum uptime and availability is important for all of our customers.
Between 5:10 and 5:22 A.M. Pacific time this morning (June 15), one of our upstream network providers experienced a large distributed denial of service (DDoS) attack targeted at one of their other customers, overwhelming their core network routers. This resulted in many people being unable to connect to our network during this period.
The problem has been resolved (the provider has blocked the attack), and they tell us it should not recur. We sincerely apologize for the inconvenience this caused.
Between 3:29 PM and 3:33 PM Pacific time, our monitoring systems detected that most Internet users could not connect to our primary data center. Incoming e-mail was properly queued and delayed during this period, so no messages were lost.
We will follow up with the data center team, but the problem appears to have been resolved, and all services are operating normally. We’re continuing to monitor it closely, and we sincerely apologize for the inconvenience this caused our customers.
Updated: connectivity was lost for four minutes because the data center was fighting off a severe DoS attack.
Between 7:00 and 7:45 PM Pacific time Thursday night (March 11), we received two reports of slow or nonexistent network connections to sites on our servers.
Our automated monitoring systems didn’t detect any general problems, so the vast majority of customers were apparently unaffected — but we suspect that one of the “Internet backbones” between the affected customers and our data center had high packet loss during that period.
Both customers reported that the problem resolved itself by 7:45, and we haven’t received similar reports since, so there does not appear to be an ongoing problem. We’ll continue to monitor it closely.
Some of our customers may have noticed “high packet loss” today from about noon to 12:25 PM (Pacific time). This could make it seem like Web sites hosted on our servers were loading slowly, or even timing out.
The problem has been resolved by our upstream provider, but we are working with them to make sure it doesn’t recur.
Between 5:11 and 5:46 PM Pacific time today, some people who reach our servers via an “Internet backbone” called Global Crossing (including some Comcast cable customers) were unable to connect to our data center. Other users weren’t affected.
Global Crossing has apparently corrected the problem, and everything is now operating normally. We’ll continue to monitor this issue closely.
Since about 9:00 AM (Pacific time) this morning, we’ve been seeing network routing problems to some destinations on the Internet that use the “xo.net” backbone. For affected customers, this makes any access to your Web site extremely slow, perhaps even slow enough to seem completely unresponsive. Most customers will have no problems.
Our data center technicians are working on this problem. We’ll update this post as soon as the issue is resolved.
Update: This issue was resolved at approximately 10:20 AM, and all systems are operating normally.
Between 4:33 and 4:41 PM Pacific time, we experienced a short-lived problem where users who reach our servers via an “Internet backbone” called Global Crossing (including Comcast and Charter cable customers) were unable to connect. Other users weren’t affected.
The problem lasted for less than ten minutes, and everything is now operating normally.
We’re currently seeing about 15% “packet loss” from our data center to a handful of locations on the Internet (notably connections that go through the above.net backbone). Most people aren’t affected by this, but for those who are, it can cause connections to be slower than normal. We have a ticket open with the data center for this issue, and we’ll update this page when it’s resolved.
Update May 20: The packet loss problem was effectively resolved on Friday, although we’ve been monitoring the above.net backbone connection closely to ensure that there is no ongoing problem. Although we’ve seen a couple of short latency issues that we’re still following up with the data center about, customers are not experiencing any problems.
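For readers curious how a figure like “15% packet loss” is measured: monitoring tools send a stream of probe packets toward a destination and report the percentage that never arrive. A minimal sketch of the arithmetic (a hypothetical helper, not our actual monitoring code):

```python
def packet_loss_percent(sent: int, received: int) -> float:
    """Percentage of probe packets that never arrived."""
    if sent <= 0:
        raise ValueError("must send at least one probe packet")
    return 100.0 * (sent - received) / sent

# 15% loss means 15 of every 100 packets are dropped in transit
# and must be retransmitted, which is what makes pages feel slow:
print(packet_loss_percent(100, 85))  # 15.0
```

Even a modest loss rate slows connections noticeably, because each dropped packet forces the connection to pause and retransmit before the page can continue loading.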