Big Data is not a new concept. Although our contemporary methods for storing, organizing, and analyzing data have evolved drastically over the past two centuries, the practice of collecting data and using it for decision-making is much older. As new technologies emerged, such as the ability to store data on magnetic tape and the advent of the internet, the field of data analytics rapidly transformed. Even today, new methods for handling and visualizing data continue to shape the world of data analytics.

This article will explore the roots of the field of data analytics, from ancient times up to current practices.

Ancient Roots of Data Analytics

Although the term “data analytics” wouldn’t be coined for thousands of years, the ancients devised primitive yet effective ways to keep track of data:

  • 18,000 BCE: The Ishango Bone, discovered in 1960, provides some of the earliest evidence of prehistoric data storage. Paleolithic tribes marked notches onto bones or sticks to keep a record of their supplies and trading activity.
  • 2400 BCE: The abacus was used in ancient Babylon. It is the earliest known tool devoted exclusively to performing calculations. Along with the libraries of the era, the abacus represents an important stage in humanity’s ability to compute and to store large amounts of data.
  • 100-200 BCE: The Antikythera Mechanism, often considered the first mechanical computer, was constructed. Although the specific context in which it was used is not known, it is believed to have been built by Greek scientists. The Mechanism consists of around 30 interlocking bronze gears and is thought to have tracked the cycle of the Olympic Games as well as astronomical movements.

The History of Incorporating Statistics into Data Analytics

The advent of statistics played an integral role in the field of data:

  • 1663: The first recorded statistical data analysis was carried out in London by John Graunt. Graunt analyzed records of mortality and theorized that this information could be used as an early warning system for the bubonic plague.
  • 1865: The phrase “business intelligence” was coined by Richard Millar Devens.
  • 1880s: The invention of the Hollerith Tabulating Machine helped the US Census Bureau work through a backlog of census data that would otherwise have taken a decade to process. The machine used punch cards and reduced roughly a decade of number crunching to just a few months.

The Origins of Modern Data Storage

Modern data storage concepts began to emerge in the late 1920s, radically changing the way data could be kept:

  • 1928: German-Austrian engineer Fritz Pfleumer devised a way of storing information magnetically on tape. Magnetic storage remains in use today, both on tape and in the hard disk drives that store digital data magnetically.
  • 1944: Fremont Rider, a librarian at Wesleyan University, attempted to quantify the enormous amount of information being created. He concluded that, to keep up with the growing volume of academic and popular works, American libraries would need to double their capacity every 16 years.

The Advent of Large Data Centers

The emergence of large data centers further transformed the capacity for storing and accessing vast amounts of data:

  • 1965: The first data center was planned by the US government. It was designed to house 175 million sets of fingerprints and 742 million tax returns on magnetic tape.
  • 1970: Edgar Codd, a mathematician at IBM, devised the relational database model. This framework is still at the heart of contemporary data systems: it organizes data into tables of rows and columns and makes it available to any user who knows what they are looking for (see the sketch after this list).
  • 1989: The term “big data” was first used by author Erik Larson in an article he wrote for Harper’s Magazine. Around this time, the field of business intelligence grew in popularity as new systems and software were used to track a company’s or organization’s performance.
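
The relational model can be illustrated with a few lines of Python using the standard library’s sqlite3 module. This is only a minimal sketch of the idea Codd proposed; the table and column names (customers, id, name, city) are invented for the example.

```python
import sqlite3

# In-memory database for illustration; a real system would use a file or server.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A relation (table) is a set of rows with named, typed columns.
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
cur.executemany(
    "INSERT INTO customers (id, name, city) VALUES (?, ?, ?)",
    [(1, "Ada", "London"), (2, "Grace", "New York")],
)

# Anyone who knows the schema can retrieve data declaratively with SQL.
cur.execute("SELECT name FROM customers WHERE city = ?", ("New York",))
print(cur.fetchall())  # [('Grace',)]

conn.close()
```

The point of the design is that data lives in declared tables rather than in application-specific hierarchies, so any query that names the right columns can retrieve it.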

The Invention of the Internet

The world was forever changed when the internet was invented. The ways in which data was stored, shared, and analyzed also underwent a significant transformation:

  • 1991: Tim Berners-Lee announced the World Wide Web, an interconnected web of data built on top of the internet and accessible to users around the globe.
  • Mid-1990s: As the popularity of the internet grew, relational databases struggled to keep up with demand. Non-relational, or NoSQL, databases emerged to help manage the enormous flow of information (see the sketch after this list).
  • 1996: Digital storage options became cheaper than paper for the first time.
  • 1997: Google Search launched; it would go on to become the go-to tool for finding information on the internet.
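
To make the contrast with relational tables concrete, here is a loose sketch of the document-style storage that many non-relational systems use. The put and get helpers below are hypothetical and do not correspond to any particular database’s API.

```python
import json

# A toy document store: records are schema-free JSON documents keyed by an ID.
# (Illustrative only; real NoSQL systems add distribution, indexing, and durability.)
store = {}

def put(doc_id, document):
    """Save a detached copy of a JSON-serializable document under an ID."""
    store[doc_id] = json.loads(json.dumps(document))

def get(doc_id):
    """Fetch a document by ID, or None if it does not exist."""
    return store.get(doc_id)

# Unlike rows in a fixed relational schema, documents can differ in shape.
put("user:1", {"name": "Ada", "languages": ["Python", "SQL"]})
put("user:2", {"name": "Grace", "title": "Rear Admiral"})

print(get("user:2"))  # {'name': 'Grace', 'title': 'Rear Admiral'}
```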

The Emergence of Big Data

As early forms of big data began to emerge, the world of technology continued its rapid evolution:

  • 1990s: Data mining was first used to uncover patterns in huge data sets.
  • 1999: The term “big data” appeared in an academic paper for the first time, in “Visually Exploring Gigabyte Datasets in Real Time.” In addition, the term “Internet of Things” was coined to describe the growing number of online devices and their ability to communicate with one another without human involvement.
  • 2001: The phrase “software as a service” emerged, and with it the technology that would become core to the cloud-based applications still in use today.
  • 2005: Hadoop, an open-source framework designed to store and process huge data sets, was created (a plain-Python sketch of the map/reduce idea it popularized follows this list).
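
Hadoop itself is a distributed Java framework, but the map/reduce programming model it popularized can be sketched in plain Python. The snippet below illustrates the model only, not Hadoop’s actual API: a map phase emits key-value pairs, and a reduce phase aggregates them.

```python
from collections import defaultdict

def map_phase(documents):
    """Emit (word, 1) pairs from each document, as a mapper would."""
    for doc in documents:
        for word in doc.lower().split():
            yield word, 1

def reduce_phase(pairs):
    """Group pairs by key and sum the counts, as reducers would."""
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

docs = ["big data is not new", "big data keeps growing"]
print(reduce_phase(map_phase(docs)))
# {'big': 2, 'data': 2, 'is': 1, 'not': 1, 'new': 1, 'keeps': 1, 'growing': 1}
```

In a real Hadoop cluster, the map and reduce phases run in parallel across many machines, which is what makes the model practical for enormous data sets.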

The Current Use of the Term “Big Data”

The first two decades of the twenty-first century saw an even more rapid transformation in how data is managed and analyzed:

  • 2008: Worldwide servers processed over nine zettabytes, or 9 trillion gigabytes, of information.
  • 2009: Companies in the US with more than 1,000 employees were typically storing over 200 terabytes of data.
  • 2010: The concept of the data lake emerged: a large repository in which data is stored in its raw or natural format and structured only when it is read (see the schema-on-read sketch after this list).
  • 2014: For the first time in history, more people accessed digital data from mobile devices than from desktop computers.
  • 2015: AI, deep learning, and machine learning techniques became central to the field of data science.
  • 2015-2020: The number of datasets available for various data types more than doubled.
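
As a rough sketch of the data lake idea, the snippet below lands raw events as-is and applies a schema only at read time (“schema on read”). The directory layout and field names are hypothetical, chosen purely for illustration.

```python
import json
from pathlib import Path

# Land raw events exactly as they arrive; no upfront schema is enforced.
# The "lake/raw/events" path and the field names are made up for this example.
lake = Path("lake/raw/events")
lake.mkdir(parents=True, exist_ok=True)

raw_events = [
    '{"user": "ada", "action": "login", "ts": "2020-01-01T09:00:00"}',
    '{"user": "grace", "action": "purchase", "amount": 42.0}',
]
(lake / "events.jsonl").write_text("\n".join(raw_events))

# Schema on read: only now do we decide which fields matter for this analysis.
for line in (lake / "events.jsonl").read_text().splitlines():
    event = json.loads(line)
    print(event.get("user"), event.get("action"))
```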

In the past fifteen years alone, staggering advances in the fields of data science and data analytics have revolutionized the way mankind processes data. In the coming years, this trend is expected to continue, with new advances changing the data landscape daily.

Hands-On Data Analytics Classes

A great way to learn more about Data Analytics is to enroll in one of Noble Desktop’s data analytics classes. Courses are offered in New York City, as well as in the live online format, in topics like Python, Excel, and SQL.

In addition, more than 130 live online data analytics courses are also available from top providers. Topics offered include FinTech, Excel for Business, and Tableau, among others.

Courses range from three hours to six months and cost from $219 to $27,500.

Those who are committed to learning in an intensive educational environment may also consider enrolling in a data analytics or data science bootcamp. These rigorous courses are taught by industry experts and provide timely, small-class instruction. Over 90 bootcamp options are available for beginner, intermediate, and advanced students looking to master skills and topics like data analytics, data visualization, data science, and Python, among others.

For those searching for a data analytics class nearby, Noble’s Data Analytics Classes Near Me tool provides an easy way to locate and browse approximately 400 data analytics classes currently offered in the in-person and live online formats. Course lengths vary from three hours to 36 weeks and cost $119-$27,500.