History of IoT

By 2016, the vision of the Internet of things had evolved through the convergence of multiple technologies, including ubiquitous wireless communication, real-time analytics, machine learning, commodity sensors, and embedded systems. Traditional fields such as embedded systems, wireless sensor networks, control systems, and automation (including home and building automation) all contribute to enabling the Internet of things (IoT).

The concept of a network of smart devices was discussed as early as 1982, when a modified Coke machine at Carnegie Mellon University became the first Internet-connected appliance, able to report its inventory and whether newly loaded drinks were cold. Mark Weiser’s seminal 1991 paper on ubiquitous computing, “The Computer for the 21st Century”, as well as academic venues such as UbiComp and PerCom, produced the contemporary vision of the IoT. In 1994, Reza Raji described the concept in IEEE Spectrum as “[moving] small packets of data to a large set of nodes, so as to integrate and automate everything from home appliances to entire factories”. Between 1993 and 1996, several companies proposed solutions such as Microsoft’s at Work and Novell’s NEST. However, the field only began gathering momentum in 1999, when Bill Joy envisioned device-to-device (D2D) communication as part of his “Six Webs” framework, presented at the World Economic Forum in Davos.

The concept of the Internet of things became popular in 1999, through the Auto-ID Center at MIT and related market-analysis publications. Radio-frequency identification (RFID) was seen by Kevin Ashton (one of the founders of the original Auto-ID Center) as a prerequisite for the Internet of things at that point. Ashton prefers the phrase “Internet for things.” If all objects and people in daily life were equipped with identifiers, computers could manage and inventory them. Besides using RFID, the tagging of things may be achieved through such technologies as near field communication, barcodes, QR codes and digital watermarking.

In its original interpretation, one of the first consequences of implementing the Internet of things, by equipping all objects in the world with minuscule identifying devices or machine-readable identifiers, would be the transformation of daily life. For instance, instant and ceaseless inventory control would become ubiquitous, and a person’s ability to interact with objects could be altered remotely based on immediate or present needs, in accordance with existing end-user agreements. Such technology could, for example, grant motion-picture publishers much greater control over end-user private devices by remotely enforcing copyright restrictions and digital rights management: the ability of a customer who bought a Blu-ray disc to watch the movie could become dependent on the copyright holder’s decision, similar to Circuit City’s failed DIVX.

A significant transformation is to extend “things” from the data generated by devices to objects in physical space. A thought model for this future interconnection environment was proposed in 2004. The model includes the notion of a ternary universe consisting of the physical world, the virtual world, and the mental world, together with a multi-level reference architecture: nature and devices sit at the bottom level, followed by the level of the Internet, sensor networks, and mobile networks, with intelligent human-machine communities at the top level. This architecture supports geographically dispersed users in cooperatively accomplishing tasks and solving problems, using the network to actively promote the flow of material, energy, techniques, information, knowledge, and services within this environment. This thought model anticipated the development trend of the Internet of things.