What will undoubtedly become known as one of the darkest points in U.S.
history may likewise be remembered as one of the darkest points in the history
of the Internet. Arguably the single most significant period for the Web
occurred today, when the World Wide Web was called upon en masse for timely
information in the first few critical minutes following the act of terrorism
in New York City. And because of the tens of millions of users logging on from
all over the planet, the Internet couldn't handle the pressure, and most sites
were rendered inaccessible.
KUAM was the first to break local coverage, on-air and online, of the terrorist
attack at the World Trade Center that both horrified and shocked our nation
early this morning. When we got word about exactly what had happened, we
immediately scrambled our news team – reporters, producers, photographers, and
Web developers. Naturally, to find out what had happened and where things
stood, we looked to the 'Net. For the first time in my many years of being a
'Netizen, the Web was virtually inaccessible.
I theorized that the problem was traffic-based, and could have stemmed from any
number of sources:
- Our Internet service
- The host Web sites themselves, being overloaded with inbound traffic and page requests
- The Internet infrastructure itself, simply having too little bandwidth to accommodate the mammoth amount of traffic generated by the world
The second assumption proved the most accurate, as the sheer volume of what was
assumedly tens of thousands of people per minute hitting a site was simply too
much for sites to handle. The third assumption impacted the traffic to a minor
degree as well, as the north tower of the WTC housed thousands of phone lines
and backbone connections supporting the major network news sites, stunting
their connectivity.
The networks address the issue of high traffic
All leading Web site managers plan for a worst-case scenario when conducting
their capacity planning. The more astute ones take this a step further and
project what would happen if a site were pushed beyond its normal ability to
provide service in a timely manner (on the Web, meaning serving up a page
within a few seconds). Such demand planning is multiplied many times over for
news-oriented Web sites, due to the nature of their product. It became apparent
in the first 45 minutes after the incident took place that sites had been
pushed far beyond their means.
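As a rough illustration of that arithmetic, the sketch below compares a
hypothetical surge against a hypothetical farm's capacity; none of the figures
are measurements from the sites discussed here.

```python
# Back-of-envelope capacity check: can a site keep serving pages within
# a few seconds under a traffic spike? All figures are hypothetical.
SERVER_RPS = 100              # assumed requests/sec one server sustains
SERVERS = 8                   # assumed size of the Web farm
SPIKE_USERS_PER_MIN = 50_000  # assumed surge: tens of thousands/minute

capacity_rps = SERVER_RPS * SERVERS
demand_rps = SPIKE_USERS_PER_MIN / 60

print(f"demand: {demand_rps:.0f} req/s vs capacity: {capacity_rps} req/s")
if demand_rps > capacity_rps:
    # Past 100% utilization, request queues grow without bound and
    # pages stop arriving "within a few seconds" -- the worst case.
    print("overloaded: requests will queue and time out")
```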
CNN.com addressed the issue of traffic overload in arguably the best manner. In
a simpler and perhaps more intelligent fashion, the site scrapped its
traditional columnar, info-heavy homepage layout in favor of a scaled-down,
text-only version, in order to load faster and impose less strain on its
network servers (avoiding the complexities of a database call). But perhaps due
to its size and scope as a global news presence, CNN had the slowest
performance of the majors.
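CNN.com's actual setup isn't public, but the general technique is easy to
sketch: pre-render a stripped-down page once, then serve the same cached bytes
on every request so no database is touched per hit. The following is a minimal,
hypothetical illustration, not CNN's code.

```python
# Minimal sketch of a text-only fallback page: render once, serve the
# cached bytes to every visitor, hit no database per request.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Pre-rendered page; in practice an editor or a scheduled job would
# regenerate this file every few minutes.
FALLBACK_PAGE = b"""<html><body>
<h1>Breaking News</h1>
<p>Attack on the World Trade Center. Updates to follow.</p>
</body></html>"""

class StaticFallbackHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Same cheap response for every request: light on bandwidth,
        # zero backend load.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(FALLBACK_PAGE)))
        self.end_headers()
        self.wfile.write(FALLBACK_PAGE)

if __name__ == "__main__":
    HTTPServer(("", 8080), StaticFallbackHandler).serve_forever()
```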
ABCNews.com adopted a strategy similar to CNN's, after being largely
inaccessible in the hours following the initial ordeal.
MSNBC.com maintained its traditional layout, although the huge swells of page
requests to its site caused frequent timeouts, "Page Not Found" errors, and
errors indicating that too many people had accessed the site's data store
simultaneously (although it arguably had better performance than most of the
other majors). It was either on or off...and in the latter case, refreshing the
page normally got you in...but at a snail's pace.
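That "keep refreshing" behavior is, in effect, a retry loop with backoff. A
minimal client-side sketch, using a placeholder URL rather than any endpoint
named here:

```python
# Sketch of "keep refreshing" as a retry loop with exponential backoff.
import time
import urllib.request
from urllib.error import URLError

def fetch_with_retries(url, attempts=5, base_delay=1.0):
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except (URLError, TimeoutError):
            # The site is "off" right now: wait a little longer each
            # time instead of hammering an overloaded server.
            time.sleep(base_delay * (2 ** attempt))
    return None

page = fetch_with_retries("http://example.com/news")  # placeholder URL
print("got page" if page else "site unreachable")
```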
What's important to realize is that these are major sites with heavy-duty
architecture supporting them on the back-end. They are constructed to be
bandwidth-intensive, using advanced clustering, Web server farms, and
redundancy schemes intended to take on extremely high levels of page requests,
most able to accommodate as many as several tens of thousands of users
accessing the same resource.
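Reduced to its simplest form, the clustering idea is just spreading requests
across a pool of machines and skipping the ones that fall over. A toy
round-robin dispatcher, with hypothetical host names:

```python
# Toy illustration of a Web farm: rotate requests across a server pool,
# skipping hosts marked unhealthy. Host names are hypothetical.
from itertools import cycle

SERVERS = ["web1.example.net", "web2.example.net", "web3.example.net"]
healthy = {host: True for host in SERVERS}
pool = cycle(SERVERS)

def pick_server():
    # Inspect each host at most once per call; None means all are down.
    for _ in range(len(SERVERS)):
        host = next(pool)
        if healthy[host]:
            return host
    return None

healthy["web2.example.net"] = False  # simulate one machine failing
for request_id in range(5):
    print(f"request {request_id} -> {pick_server()}")
```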
Today’s traffic exceeded most projections by far.
Major sites use alter-egos to fend off traffic swells
With content often unreachable on their own networks, many of the majors
announced that their articles and exhibits could be accessed through
alternative sites...such as NewsHub.com.
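The reader's side of that arrangement amounts to failing over to a mirror: try
the primary, then each alternative in turn. A minimal sketch with placeholder
URLs standing in for the networks' mirror arrangements:

```python
# Sketch of mirror failover: try each site in order until one answers.
import urllib.request
from urllib.error import URLError

MIRRORS = [
    "http://primary.example.com/story",  # placeholder primary site
    "http://mirror1.example.com/story",  # placeholder NewsHub-style mirror
    "http://mirror2.example.com/story",
]

def fetch_from_mirrors(urls):
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return url, resp.read()
        except (URLError, TimeoutError):
            continue  # this mirror is swamped; try the next one
    return None, None

source, body = fetch_from_mirrors(MIRRORS)
print(f"served from: {source}" if body else "all mirrors down")
```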
KUAM.COM's inbound traffic levels also took a significant spike in the first
four hours, as ours was the only Guam news site with locally produced content.
Many of the majors also added more servers and capacity to their networks, and
as the situation became more globally known, the traffic started to subside and
become more workable. Networks scrambled their research teams on terrorism to
provide image galleries, streaming coverage, timelines, historical profiles,
city information, discussion forums, message boards, and chatrooms.
A problem noted by many sites running discussion forums was the excessive, if
expected, outcry from the online community, much of it including intense racial
slurs directed largely at Middle Eastern ethnicities, with common cries
encouraging violence in the most extreme measures.
Not just IP traffic...
As a network affiliate, we even received several e-mails from the networks
throughout the night, some of them saying, "We're being hammered by users
coming to the page...if you can't get through the first time, keep trying."
E-mail globally was largely unaffected by the traffic swell.
Verizon reported that cellular/mobile phone usage was double its normal
capacity, causing significant network congestion...and incomplete calls for
terrified citizens and family members.
Needless to say...the traffic on the Internet will return to normal within a
few weeks, if not days; everything will revert to some state of normalcy and
carry on. This is sadly the least common denominator between technology and
human emotion – a far cry from the healing that will have to begin as a nation
mourns a devastating tragedy.