The Internet is a busy place and it's growing by leaps and bounds, scientists say.
The Internet is a busy place. Every second, approximately 6,000 tweets are tweeted; more than 40,000 Google queries are searched; and more than 2 million emails are sent, according to Internet Live Stats, a website of the international Real Time Statistics Project.
But these statistics only hint at the size of the Web. As of September 2014, there were 1 billion websites on the Internet, a number that fluctuates by the minute as sites go defunct and others are born. And beneath this constantly changing (but sort of quantifiable) Internet that's familiar to most people lies the "Deep Web," which includes things Google and other search engines don't index. Deep Web content can be as innocuous as the results of a search of an online database or as secretive as black-market forums accessible only to those with special Tor software. (Tor isn't only for illegal activity, though; it's used wherever people have reason to be anonymous online.)
With about 1 billion websites, the Web is home to many more individual Web pages. One of these pages, www.worldwidewebsize.com, seeks to quantify the number using research by Internet consultant Maurice de Kunder. De Kunder and his colleagues published their methodology in February 2016 in the journal Scientometrics. To come to an estimate, the researchers sent a batch of 50 common words to be searched by Google and Bing. (Yahoo Search and Ask.com used to be included but are not anymore because they no longer show the total results.) The researchers knew how frequently these words have appeared in print in general, allowing them to extrapolate the total number of pages out there based on how many contain the reference words. Search engines overlap in the pages they index, so the method also requires estimating and subtracting the likely overlap.
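The extrapolation behind that estimate can be sketched in a few lines. This is only an illustration: the word frequencies and hit counts below are made up, standing in for the 50 real pilot words and live engine queries the researchers used.

```python
# Toy sketch of the word-extrapolation method described above.
# All numbers are hypothetical; the real study used 50 pilot words
# whose frequencies were known from a reference text corpus.

# Fraction of documents expected to contain each word (made up).
word_freq = {"the": 0.95, "music": 0.10, "quantum": 0.005}

# Result counts a search engine reports for those words (made up).
engine_hits = {"the": 44e9, "music": 4.5e9, "quantum": 0.24e9}

def index_size(hits, freq):
    """Extrapolate an engine's index size: pages containing a word,
    divided by the fraction of pages expected to contain it,
    averaged over the pilot words."""
    estimates = [hits[w] / freq[w] for w in hits]
    return sum(estimates) / len(estimates)

def combined_size(engine_a, engine_b, overlap_fraction):
    """Union of two indexes: add the sizes, then subtract the
    estimated overlap so shared pages aren't counted twice."""
    overlap = overlap_fraction * min(engine_a, engine_b)
    return engine_a + engine_b - overlap

google_size = index_size(engine_hits, word_freq)
```

The overlap subtraction is the key step: without it, adding Google's and Bing's indexes together would double-count every page both engines have crawled.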
According to these calculations, there were at least 4.66 billion Web pages online as of mid-March 2016. This calculation covers only the searchable Web, however, not the Deep Web.
So how much information does the Internet hold? There are three ways to look at that question, said Martin Hilbert, a professor of communications at the University of California, Davis.
"The Internet stores information, the Internet communicates information and the Internet computes information," Hilbert told Live Science. The communication capacity of the Internet can be measured by how much information it can transfer, or by how much information it does transfer at any given time, he said.
In 2014, researchers published a study in the journal Supercomputing Frontiers and Innovations estimating the storage capacity of the Internet at 10^24 bytes, or 1 million exabytes. A byte is a data unit comprising 8 bits, and is equal to a single character in one of the words you're reading now. An exabyte is 1 billion billion bytes.
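Those prefixes are easy to mix up, so here is a quick sanity check on the figures above (using the decimal convention, where each prefix step is a factor of 1,000):

```python
# Sanity check on the storage figures above (decimal prefixes).
EXABYTE = 10**18                    # 1 billion billion bytes

internet_storage = 10**24           # bytes, per the 2014 estimate
storage_in_exabytes = internet_storage / EXABYTE
# 10^24 bytes / 10^18 bytes per exabyte = 1 million exabytes
```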
One way to estimate the communication capacity of the Internet is to measure the traffic moving through it. According to Cisco's Visual Networking Index initiative, the Internet is now in the "zettabyte era." A zettabyte equals 1 sextillion bytes, or 1,000 exabytes. By the end of 2016, global Internet traffic will reach 1.1 zettabytes per year, according to Cisco, and by 2019, global traffic is expected to hit 2 zettabytes per year.
One zettabyte is the equivalent of 36,000 years of high-definition video, which, in turn, is the equivalent of streaming Netflix's entire catalog 3,177 times, Thomas Barnett Jr., Cisco's director of thought leadership, wrote in a 2011 blog post about the company's findings.
In 2011, Hilbert and his colleagues published a paper in the journal Science estimating the communication capacity of the Internet at 3 x 10^12 kilobits per second, a measure of bandwidth. This was based on hardware capacity, and not on how much information was actually being transferred at any moment.
In one particularly offbeat study, an anonymous hacker measured the size of the Internet by counting how many IP (Internet Protocol) addresses were in use. IP addresses are the waypoints of the Internet through which data travels, and each device online has at least one. According to the hacker's estimate, 1.3 billion IP addresses were in use online in 2012.
The Internet has vastly altered the data landscape. In 2000, before Internet use became ubiquitous, telecommunications capacity was 2.2 optimally compressed exabytes, Hilbert and his colleagues found. In 2007, the number was 65 optimally compressed exabytes. This capacity includes phone networks and voice calls as well as access to the enormous information reservoir that is the Internet. However, data traffic over mobile networks was already outpacing voice traffic in 2007, the researchers found.
If all of these bits and bytes feel a little abstract, don't worry: In 2015, researchers tried to put the Internet's size in physical terms. The researchers estimated that it would take 2 percent of the Amazon rainforest to make the paper to print out the entire Web (including the Dark Web), they reported in the Journal of Interdisciplinary Science Topics. For that study, they made some big assumptions about the amount of text online by estimating that an average Web page would require 30 pages of A4 paper (8.27 by 11.69 inches). With this assumption, the text on the Internet would require 1.36 x 10^11 pages to print a hard copy. (A Washington Post reporter later aimed for a better estimate and determined that the average length of a Web page was closer to 6.5 printed pages, yielding an estimate of 305.5 billion pages to print the whole Internet.)
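The arithmetic behind both printing estimates can be retraced. One caveat: the Web page counts implied below are back-calculated from the quoted totals, not taken directly from either source.

```python
# Retracing the printing estimates' arithmetic.

# The journal study assumed 30 A4 sheets per Web page and arrived
# at 1.36e11 sheets total, which implies this many Web pages:
study_sheets_per_page = 30
study_total_sheets = 1.36e11
study_implied_pages = study_total_sheets / study_sheets_per_page
# ~4.5 billion Web pages

# The Washington Post revision used ~6.5 sheets per Web page and
# arrived at 305.5 billion sheets, implying a larger page count:
wapo_sheets_per_page = 6.5
wapo_total_sheets = 305.5e9
wapo_implied_pages = wapo_total_sheets / wapo_sheets_per_page
# ~47 billion Web pages
```

Note that the two estimates differ not just in sheets per page but in the implied number of Web pages, which is part of why the totals are so far apart.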
Of course, printing out the Internet in text form wouldn't include the massive amount of nontext data hosted online. According to Cisco's research, 8,000 petabytes per month of IP traffic was dedicated to video in 2015, compared with about 3,000 petabytes per month for Web, email and data transfer. (A petabyte is a million gigabytes, or 2^50 bytes.) All told, the company estimated that video accounted for most Internet traffic that year, at 34,000 petabytes. File sharing came in second, at 14,000 petabytes.
Hilbert and his colleagues took their own stab at visualizing the world's information. In their 2011 Science paper, they calculated that the information capacity of the world's analog and digital storage was 295 optimally compressed exabytes. To store 295 exabytes on CD-ROMs would require a stack of discs reaching to the moon (238,900 miles, or 384,400 kilometers), and then a quarter of the distance from the Earth to the moon again, the researchers wrote. That's a total distance of 298,625 miles (480,590 km). By 2007, 94 percent of information was digital, meaning that the world's digital information alone would overshoot the moon if stored on CD-ROM. It would stretch 280,707.5 miles (451,755 km).
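The stack height is simple to approximate. The disc parameters below (about 730 MB of capacity and 1.2 mm of thickness per disc) are standard CD-ROM figures, not values quoted in the paper, so the result lands near, rather than exactly on, the researchers' number.

```python
# Rough check on the CD-ROM stack described above.
# Assumed disc parameters (typical values, not from the paper):
EXABYTE = 10**18
CD_CAPACITY_BYTES = 730 * 10**6    # ~730 MB per disc
CD_THICKNESS_KM = 1.2e-6           # 1.2 mm, expressed in km

MOON_DISTANCE_KM = 384_400

discs_needed = 295 * EXABYTE / CD_CAPACITY_BYTES
stack_height_km = discs_needed * CD_THICKNESS_KM
# on the order of 480,000 km, comfortably past the moon
```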
The Internet's size is a moving target, Hilbert said, but it's growing by leaps and bounds. There's just one saving grace when it comes to this deluge of information: Our computing capacity is growing even faster than the amount of data we store.
While world storage capacity doubles every three years, world computing capacity doubles every year and a half, Hilbert said. In 2011, humanity could carry out 6.4 x 10^18 instructions per second with all of its computers — similar to the number of nerve impulses per second in the human brain. Five years later, computational power is up in the ballpark of about eight human brains. That doesn't mean, of course, that eight people in a room could outthink the world's computers. In many ways, artificial intelligence already outperforms human cognitive capacity (though A.I. is still far from mimicking general, humanlike intelligence). Online, artificial intelligence determines which Facebook posts you see, what comes up in a Google search and even 80 percent of stock market transactions. The expansion of computing power is the only thing making the explosion of data online useful, Hilbert said.
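The doubling-time comparison is easy to put in numbers. A sketch: the 4.5-year span below is chosen to illustrate three clean doublings, slightly less than the roughly five years the article describes.

```python
# Doubling-time arithmetic behind the figures above.
def growth_factor(years, doubling_time_years):
    """How many times capacity multiplies over a span of years."""
    return 2 ** (years / doubling_time_years)

# Computing doubles every 1.5 years: three doublings in 4.5 years
# give an eightfold jump, matching the "eight human brains" ballpark.
compute_growth = growth_factor(4.5, 1.5)    # 8.0

# Storage doubles every 3 years: the same span gives only ~2.8x,
# which is why computing keeps pulling ahead of storage.
storage_growth = growth_factor(4.5, 3.0)
```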
"We're going from an information age to a knowledge age," he said.