Layer by layer, digitized information is changing our perception of the world, molding our personal needs and desires. The line between the real and the digital is blurring. We are no longer limited to a two-dimensional view of our environment. We are beginning to see beyond the smartphone screen. Where does it lead?
Welcome to the Spatial Web
At first, there was the limited interactivity of Web 1.0. Web 2.0 upgraded that to social interaction and multimedia content. Now, thanks to high-bandwidth networks and 5G connectivity, three major technologies are taking advantage of the digital elbow room and giving rise to Web 3.0. Welcome to the Spatial Web.
Three Technologies
The Spatial Web is the product of a trio of technologies stacking their abilities while riding that high-speed connectivity wave.
* Augmented Reality (AR) enhances real-world objects with computer-generated perceptual information.
* Artificial Intelligence (AI) processes enormous amounts of information and delivers it instantaneously.
* Sensors capture real-time data and feed it into both the physical and the virtual world.
Convergence of AR
In industries as diverse as health care and aviation, AR is combining the real with the not-so-real. Special eyeglasses, for instance, give the physicians wearing them relevant medical records and research even as they carry out their own analyses and diagnoses.
Similar glasses are in use at a major aircraft manufacturer, where they have improved production time by 25 percent and pushed error rates to near zero. AR headsets are also making strong inroads into training and orientation programs without reducing the productivity of other workers or tying up hardware.
Convergence of AI
Instant delivery of data is AI's responsibility when it is combined with AR technology. Embedded AI algorithms power AR experiences in everything from the arts to criminal investigations. The two systems are also proving vital to product workflow and parts tracing in all types of manufacturing: the need for manned scanners is disappearing, and warehouse accidents are declining. Some predict that AI, with its rapid retrieval, will someday lend a hand to the frustrated DIYer.
Convergence of Sensors
Along with its power needs, the AI-AR combination depends heavily on large numbers of specialized sensors to perceive its environment. Different sensors are designed for different tasks.
Depth sensors measure the light reflected from laser-projected dots to build maps of the environment.
Stereoscopic imaging, which uses two cameras, also measures depth. The technique is being challenged by two other industry giants with smaller, more consumer-friendly products. All of them, however, demand large amounts of energy.
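The stereoscopic approach rests on simple triangulation: the same point shows up at slightly different horizontal positions in the two camera images, and that disparity, together with the camera spacing and focal length, gives the distance. Here is a minimal sketch of that relationship; the focal length and baseline values are assumptions chosen purely for illustration, not taken from any real headset.

```python
# Stereo depth from disparity: Z = f * B / d, where f is the focal length
# in pixels, B is the baseline (distance between the two cameras), and
# d is the disparity in pixels. Illustrative numbers only.

def depth_from_disparity(disparity_px: float,
                         focal_length_px: float = 700.0,
                         baseline_m: float = 0.06) -> float:
    """Return the estimated depth in meters for one matched pixel pair."""
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive; zero would mean the point is at infinity.")
    return focal_length_px * baseline_m / disparity_px

# A point shifted 35 pixels between the left and right images sits
# roughly 1.2 m from the cameras under these assumed parameters.
print(round(depth_from_disparity(35.0), 2))  # -> 1.2
```

The same math also explains the energy cost: a real device must find those matching pixel pairs across two full video streams, many times per second.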
For natural viewing experiences, gaze-tracking sensors are in development that let users drive immersive screens not only with hand signals but with simple visual cues.
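At bottom, turning a visual cue into a selection is a geometry problem: extend the tracked gaze direction from the eye until it meets the display plane and see where that point lands. The sketch below is a hypothetical illustration of that single intersection step; real gaze-tracking pipelines also involve calibration and filtering, and every name and number here is an assumption.

```python
# Illustrative ray-plane intersection: project a gaze direction onto a
# flat display and report where it lands.
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

    def dot(self, o: "Vec3") -> float:
        return self.x * o.x + self.y * o.y + self.z * o.z

def gaze_point_on_screen(eye: Vec3, gaze_dir: Vec3,
                         screen_point: Vec3, screen_normal: Vec3):
    """Return the (x, y, z) where the gaze ray hits the screen plane, or None."""
    denom = gaze_dir.dot(screen_normal)
    if abs(denom) < 1e-9:          # gaze is parallel to the screen
        return None
    diff = Vec3(screen_point.x - eye.x,
                screen_point.y - eye.y,
                screen_point.z - eye.z)
    t = diff.dot(screen_normal) / denom
    if t < 0:                      # screen is behind the viewer
        return None
    return (eye.x + t * gaze_dir.x,
            eye.y + t * gaze_dir.y,
            eye.z + t * gaze_dir.z)

# Viewer at the origin looking slightly up and to the right at a screen
# half a meter away, with the screen facing back toward the viewer.
hit = gaze_point_on_screen(Vec3(0, 0, 0), Vec3(0.1, 0.05, 1.0),
                           Vec3(0, 0, 0.5), Vec3(0, 0, -1))
print(hit)  # -> (0.05, 0.025, 0.5)
```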
Convergence of Blockchain
AR is a power-hungry technology. When it is combined with AI and sensors, the computing power required far exceeds what a single GPU can provide.
Enter Blockchain, a networked database of computers that share their GPUs. Centralized GPU farms and cloud computing systems are competing for the same compute-heavy jobs, but Blockchain remains, for the moment, the favorite system.
In addition, one of the major proponents of the Blockchain architecture is developing a distributed network that would allow anyone with a GPU to contribute it for a commission of up to $300 a month in tokens. The tokens, in turn, can be redeemed for cash or used to create more AR content.
The convergence of AR + AI + sensors, tied together with Blockchain, is a combination of technologies that has blurred the edges of our past and continues to open onto a bright future.