Artificial Intelligence (AI) has transformed computing infrastructure in the 21st century in an effort to build machines capable of mimicking human-like behavior, relying primarily on rationality and logic alongside the emotional component of decision-making. In a tech-driven era, where innovation is accelerating at an unprecedented rate, one can only imagine what the future holds, especially considering the swarm of large companies and start-ups implementing AI across multiple segments of their businesses. More recently, budding fields such as the Internet of Things (IoT), Big Data, and gene editing based on CRISPR-Cas9 have been incorporating different forms of AI and machine learning into their working protocols to tackle challenges concerning process efficiency and rapid data collection.

IoT, a network of embedded systems communicating with each other over the internet, has been a pioneer in implementing AI across a variety of systems to understand and evaluate patterns from pools of data, aiding the user in predicting behavior under various circumstances. For instance, the automated vacuum-cleaning robots developed by iRobot run on a machine-learning framework designed to adapt to the layouts of different environments and select the most efficient movement pattern, without having to be externally programmed to do so. Such is also the case with the thermostats developed by Nest Labs (since acquired by Google), which improve energy efficiency by adapting to the temperature preferences of the user at different times of the day.

AI and Gene Editing

Gene editing represents a rather fascinating use of AI, especially considering the myriad challenges that arise from the similarity between multiple genomic regions, which, if improperly identified, could cause the alteration of the wrong stretch of DNA, with potentially fatal consequences. Recently, biologists discovered how to repurpose bacteria's own virus-fighting immune system into a programmable tool for editing genes, a method now known as the CRISPR-Cas9 gene-editing technique [3].

A potential application of AI involves developing machine-learning algorithms that help researchers identify off-target genomic regions, aiding selection of the appropriate target region for gene editing. If successful, this would represent a tremendous increase in the probability of selecting the correct DNA sequence to alter, with less room for unintended consequences.
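The idea above can be sketched in miniature. The snippet below simply counts mismatches between a guide sequence and candidate genomic sites and ranks the candidates; a real off-target model would learn position-dependent mismatch penalties from experimental data. The sequences here are invented for illustration.

```python
# Toy sketch: rank candidate genomic sites by similarity to a guide sequence.
# Fewer mismatches = better match; a trained model would weight positions.

def mismatches(guide: str, site: str) -> int:
    """Number of positions where the candidate site differs from the guide."""
    return sum(g != s for g, s in zip(guide, site))

def rank_sites(guide: str, candidates: list[str]) -> list[tuple[str, int]]:
    """Candidate sites sorted from best (fewest mismatches) to worst."""
    return sorted(((site, mismatches(guide, site)) for site in candidates),
                  key=lambda pair: pair[1])

guide = "GACGTTACGT"
candidates = ["GACGTTACGT", "GACGTAACGT", "TTCGTTACGA"]
print(rank_sites(guide, candidates)[0])  # ('GACGTTACGT', 0): a perfect match
```

In practice, a learned model would also penalize near-perfect matches elsewhere in the genome, since those are exactly the off-target sites the essay warns about.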

Artificial General Intelligence (AGI)

It is important to consider what AI actually is when one utters the terms ‘AI’, ‘machine learning’ or ‘deep learning’. The premise behind AI is the ability to define human intelligence in a way that computers can be programmed to simulate it. Thus, when trying to imagine the most sophisticated version of AI, one would anticipate a system possessing the numerous analytical, emotional and social qualities exhibited by humans, in addition to intellectual and decision-making capabilities. This is the proposition behind Artificial General Intelligence (AGI), which, if successfully implemented, would represent a significant landmark in the history of innovation and technology. For the purposes of this essay, which surveys applications that would entail AI governing their functionality, it is important to consider not only what could be achieved but, more importantly, what unintended consequences might follow.

AI in Materials Science and Nanotechnology

Materials science is governed by the fundamental physical and chemical forces at the sub-atomic level, which rely on the probability of one or more chemical constituents existing with one another in a stable state. Put simply, the chemical structure a compound adopts is the one in which it is inherently stable: the configuration with the lowest possible potential energy and thus the highest probability of finding its electrons at those locations.

Scientists and engineers around the world have succeeded in synthesizing millions of compounds by understanding the interactions between chemical constituents at the atomic level and characterizing the reaction mechanisms using a plethora of analytical techniques. These compounds and their respective recipes are recorded in countless databases, scientific journal articles, textbooks and even catalogs from commercial vendors.

Like human fingerprints, chemical compounds are unique, each possessing distinctive physical properties tied to its structure. This would allow scientists to leverage AI to accurately predict the synthesis techniques needed to obtain a specific chemical compound simply by encoding the desired range of physical properties, such as melting point, glass-transition temperature, thermal conductivity, toxicity, solubility parameters, etc. For instance, a group of researchers at MIT led by Dr. Ju Li developed a neural-network algorithm to fine-tune the optoelectronic and photonic properties of a class of semiconductors depending upon the strain induced within their electronic structure [2]. The algorithm was trained to associate the band gap of conventional semiconducting materials with their electronic configuration, as well as their response to external stimuli such as a strong electric field in the vicinity.
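To make the property-prediction idea concrete, here is a deliberately minimal stand-in, not the model from [2]: an ordinary least-squares fit of band gap versus applied strain on synthetic, made-up data. The MIT work used a neural network over full strain tensors; this shows only the core idea of learning a property-versus-strain mapping from simulated examples.

```python
# Toy sketch: learn band gap as a linear function of strain from synthetic data.

def fit_line(xs, ys):
    """Closed-form least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical training data: band gap (eV) shrinking with tensile strain (%).
strain   = [0.0, 1.0, 2.0, 3.0, 4.0]
band_gap = [1.10, 1.00, 0.90, 0.80, 0.70]

a, b = fit_line(strain, band_gap)
predicted = a * 2.5 + b  # interpolate the band gap at 2.5 % strain
print(round(predicted, 3))  # 0.85
```

A neural network replaces the straight line with a flexible nonlinear function, but the workflow is the same: train on simulated (strain, property) pairs, then query the model instead of re-running the expensive calculation.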

Given sufficient computing power and training data, the possibilities for leveraging AI to predict chemical structures would be limitless, leading towards the development of a superior class of robust, resistant, and environmentally friendly materials capable of being manufactured into various form factors.

The manipulation of xenon atoms with a scanning tunneling microscope by IBM researchers to spell the letters ‘IBM’ in 1989 was the first public demonstration of nanoscale manipulation in action, and it laid the foundation for the proliferation of nanomaterials into myriad substrates to achieve properties once considered impossible with traditional microscopic or macroscopic fillers. However, it wasn’t until the discovery of carbon nanotubes (CNTs) by Iijima in 1991 that the ‘nano-craze’ took off, setting up a platform for thousands of companies and start-ups to invest in the area and further exploit the properties of this unique class of materials.

One of the most important attributes of nanomaterials lies in their nearly defect-free structure and high surface-area-to-volume ratio, allowing them to be used as reinforcement fillers for achieving superior mechanical, electrical, optical, thermal and magnetic properties at exceptionally low weight loadings, which remains the foundation of nanocomposites. However, a persistent obstacle to achieving these properties is the inability to uniformly disperse nanomaterials in a given matrix, owing to their tendency to form large agglomerates that minimize their inherent surface free energy. In such a scenario, AI would be a tremendously useful tool for predicting the fundamental forces that underlie the interactions between nanomaterials and a carrier matrix, allowing engineers to develop computer simulations and understand the essential parameters needed to disperse nanomaterials accurately and generate defect-free nanocomposites.
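The surface-area-to-volume argument above can be verified with a one-line calculation. For a sphere the ratio works out to 3/r, so shrinking a particle from micrometers to nanometers multiplies the ratio a thousandfold, which is both why nanofillers are so effective and why they agglomerate so readily.

```python
import math

def sa_to_volume(radius_nm: float) -> float:
    """Surface-area-to-volume ratio of a sphere: (4*pi*r^2)/((4/3)*pi*r^3) = 3/r."""
    area = 4 * math.pi * radius_nm ** 2
    volume = (4 / 3) * math.pi * radius_nm ** 3
    return area / volume

# A 5 nm nanoparticle vs a 5 um (5000 nm) microparticle: ~1000x the SA/V ratio.
print(sa_to_volume(5) / sa_to_volume(5000))  # ~1000
```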

AI in Computer Simulations

As mentioned earlier, developing a model to accurately predict the behavior of a system relies on two factors: (i) the level of computing power available, and (ii) an understanding of the fundamental forces that govern the system. Computer simulations are useful tools for predicting outcomes without having to run experiments, as they are pre-programmed with mathematical models and statistical theorems that allow users to obtain answers with minimal error. In practice, simulation results are rarely accepted at face value; they are viewed as a starting point from which to carry out further experiments. With the intervention of relevant machine-learning algorithms, one can reasonably expect to reduce the error generated by computer simulations, as they provide a platform for unbounded testing of theoretical models as well as verification and validation. For instance, with the help of high-performance computing, a deep neural network developed by a group of researchers on the Deep Density Displacement Model (D^3M) project was able to simulate the evolution of the universe while tweaking myriad parameters (e.g., varying the amount of dark matter in the cosmos), reducing simulation times from 300 hours to just two minutes [1] with an error of just 2.8%. Using 8,000 simulations from multiple high-accuracy models as training data, the neural network learned to produce faster and more accurate results.

AI in Drug Discovery

Drug discovery remains the holy grail for identifying cures to some of the most daunting medical conditions and illnesses, some of which have poorly understood underlying causes (e.g., Alzheimer’s and Parkinson’s disease). Pharmaceutical giants invest billions of dollars in the research and development of curative drugs to augment their product lines, yet the process has shown consistently poor yields (roughly one drug per $2.6 billion invested) as well as a plethora of side effects, rendering it particularly inefficient [4]. The primary challenge remains identifying the myriad biochemical pathways a drug takes once incorporated within the body, and its potency in binding to the target receptor.

One application of AI and machine learning in this process could involve teaching a system the different metabolic pathways and their most probable outcomes within the body, allowing users to identify the mechanism by which a drug breaks down. Another fundamentally important aspect lies in the development of the drug itself, which entails identifying the toxicity issues associated with its chemical constituents. Similar to the process described in the preceding materials-science section, machine-learning algorithms that train a system on the effects of a particular chemical structure on a given receptor within the body could lead to an understanding of the underlying causes behind the efficacy of a class of drugs.
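One simple, standard building block for the toxicity-screening idea above is the Tanimoto coefficient: compounds are encoded as binary structural fingerprints, and a candidate that is very similar to a known toxic compound is flagged for closer review. The fingerprints below are invented bit sets for illustration; real pipelines derive them from molecular structure.

```python
# Toy sketch: compare compounds via Tanimoto similarity of fingerprint bits.

def tanimoto(a: set[int], b: set[int]) -> float:
    """|A intersect B| / |A union B| for sets of 'on' fingerprint bits."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

known_toxic   = {1, 4, 7, 9}       # hypothetical fingerprint of a toxic compound
candidate_one = {1, 4, 7, 9, 12}   # very similar: flag for closer review
candidate_two = {2, 3, 5}          # structurally dissimilar

print(tanimoto(known_toxic, candidate_one))  # 0.8
print(tanimoto(known_toxic, candidate_two))  # 0.0
```

Similarity screens like this are only a first filter; a learned model would go further and predict receptor binding and metabolic fate directly, as the paragraph above envisions.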

AI in Stock Market Analysis

Conventionally, stock market predictions have relied primarily on two techniques: (i) fundamental analysis, determining whether a company is over- or undervalued through assessment of balance sheets, 10-K and 10-Q filings, year-over-year (YoY) growth, earnings per share (EPS) and the price-to-earnings (P/E) ratio, along with assessment of the company’s portfolio, including its investors, partners and the products or services it provides; and (ii) technical analysis, entailing the use of indicators such as trading volume, moving averages, support-and-resistance targets, relative strength indices, oscillators, moving average convergence/divergence (MACD), etc. to predict the trendline followed by a stock.
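To ground one of the technical indicators just listed, here is a simple moving average (SMA) plus a naive crossover signal (a short-window SMA rising above a long-window SMA is conventionally read as bullish). The price series is made up for illustration; this is a textbook indicator, not a trading recommendation.

```python
# Toy sketch: simple moving average and a naive SMA crossover signal.

def sma(prices: list[float], window: int) -> float:
    """Simple moving average over the most recent `window` prices."""
    return sum(prices[-window:]) / window

prices = [10.0, 10.5, 11.0, 12.0, 13.0, 14.0]

short = sma(prices, 2)   # (13 + 14) / 2 = 13.5
long_ = sma(prices, 4)   # (11 + 12 + 13 + 14) / 4 = 12.5
signal = "bullish" if short > long_ else "bearish"
print(short, long_, signal)  # 13.5 12.5 bullish
```

A machine-learning approach, as discussed next, would treat indicators like these as input features alongside less conventional signals rather than as standalone rules.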

The implementation of machine-learning tactics would be tremendously beneficial, capitalizing on critically overlooked factors that affect market dynamics, such as rumors from various media sources that impart substantial momentum to a stock price, the effect of interest-rate decisions by the Federal Reserve, and social-media posts from notable authorities within a company, all of which can serve as training data for a system to generate algorithms that automate stock picking. Of course, developing a system to reliably select profitable stocks would require the capabilities of AGI; however, accurately identifying the various signals that correlate with inflection points in stock prices could be a ‘game-changer’ for investors and day-traders, and here AI presents numerous possibilities.

AI in Traffic Flow Systems

One of the most underrated applications of AI and neural networks is controlling traffic flow at a junction. Tracking the density of automobiles and pedestrians at different times of the day, as well as on different days of the week, could serve as valuable training data for developing AI models that control traffic flow across a junction.
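A minimal sketch of the control step, assuming the hard part (predicting queue lengths per approach from historical data) is already done: allocate green time within a signal cycle in proportion to the predicted queues, with a minimum green per approach. The junction layout, cycle length and counts below are all invented for illustration.

```python
# Toy sketch: proportional green-time allocation at a four-way junction.

def green_times(queues: dict[str, int], cycle_s: int = 120,
                min_green_s: int = 10) -> dict[str, int]:
    """Split a signal cycle proportionally to queue length, with a floor."""
    total = sum(queues.values())
    times = {}
    for approach, queue in queues.items():
        share = queue / total if total else 1 / len(queues)
        times[approach] = max(min_green_s, round(cycle_s * share))
    return times

observed = {"north": 30, "south": 10, "east": 15, "west": 5}
print(green_times(observed))
# {'north': 60, 'south': 20, 'east': 30, 'west': 10}
```

A trained model would supply the `observed` counts as time-of-day predictions rather than live measurements, letting the controller pre-empt congestion instead of merely reacting to it.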

AI in Cyber Security

Data acquisition is evolving at an unprecedented rate, with companies and businesses undertaking substantial efforts to gather user data to better understand user behavior, which in turn helps them design marketing strategies that target specific population segments. This requires a vast array of interconnected servers to store and process the data. However, a security breach or malware attack could pose a significant threat to this operation and lead to millions in losses, whether through server downtime or loss of data. Implementing machine learning and AI in this scenario would help administrators identify system vulnerabilities and the introduction of malware into databases, immediately propose solutions to eliminate threats, and design protection systems to prevent future attacks.
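The core of many intrusion-detection systems is "deviation from baseline". Here is the idea reduced to a z-score check on request rates; production systems use far richer features and learned models, and the traffic counts below are made up.

```python
import statistics

# Toy sketch: flag samples far from the baseline mean as anomalies.

def anomalies(samples: list[float], threshold: float = 2.5) -> list[int]:
    """Indices of samples more than `threshold` std-devs from the mean."""
    mean = statistics.fmean(samples)
    std = statistics.pstdev(samples)
    if std == 0:
        return []
    return [i for i, x in enumerate(samples)
            if abs(x - mean) / std > threshold]

requests_per_min = [100, 98, 103, 99, 101, 102, 100, 5000]  # last: attack?
print(anomalies(requests_per_min))  # [7]
```

Note that a single huge outlier also inflates the mean and standard deviation, which is why real systems prefer robust baselines (e.g., median-based) and learned per-feature thresholds.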

It is also important to consider that AI could be used as a tool by attackers themselves, which would again entail the implementation of a nested AI scheme to detect any unfamiliar patterns.

AI in FinTech and Digitization

Being at the forefront of adopting AI, the financial industry has reduced operating costs by in excess of 22% by automating tasks that once required human intervention, such as fraud identification, analysis of background reports, virtual assistants that recognize user needs and propose viable solutions, and analysis of loan agreements (COiN by J.P. Morgan), which substantially decreased the number of man-hours required to comb through the data. Potential advancements in the near future include tracking user expenses, making recommendations on purchases and expenditures, and implementing machine-learning algorithms to predict price swings in a product depending on market conditions.
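The expense-tracking idea can be sketched with the simplest possible baseline: keyword rules mapping transaction descriptions to categories. A deployed system would learn the mapping from labeled transaction histories; the merchant names, keywords and categories below are all invented for illustration.

```python
# Toy sketch: rule-based expense categorization (a stand-in for a learned model).

RULES = {
    "grocery": ["market", "grocer"],
    "transport": ["uber", "metro", "fuel"],
    "dining": ["cafe", "pizza", "restaurant"],
}

def categorize(description: str) -> str:
    """Return the first matching category, else 'other'."""
    text = description.lower()
    for category, keywords in RULES.items():
        if any(word in text for word in keywords):
            return category
    return "other"

transactions = ["Corner Market #12", "Uber trip", "Luigi's Pizza", "Rent May"]
print([categorize(t) for t in transactions])
# ['grocery', 'transport', 'dining', 'other']
```

The value of the machine-learning version is precisely in the `other` bucket: a learned classifier generalizes to merchants no hand-written rule anticipated.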

AI in Household Ergonomics

One of the more unconventional applications of AI is tracking and predicting the ergonomic placement of household appliances, furniture, and equipment in order to maximize the efficiency of storage space, as well as the creation of free space whenever necessary. Bumblebee SPACES Inc. is an AI start-up using machine learning to develop a form of robotic household, featuring movable furniture and possessions operated by human commands. It is fascinating to consider how such a dramatic change in household ergonomics could serve as an alternative to ever-increasing housing costs, especially in affluent parts of the country, creating an ecosystem of ‘use-on-demand’ facilities.

Understanding the Big Picture

AI will continue to advance substantially over the next few decades on the path towards AGI; however, there remain some fundamentally unanswered challenges that could still bring the field to a standstill. It is also important to understand AI’s true purpose, which is to make, or help humans make, better decisions and to reduce the probability of error; an ideal scenario would involve designing a system that reveals the numerous possibilities leading to favorable outcomes, with humans serving as the final decision-makers and the emotional component of the system.

Recently, Neuralink, a company led by Elon Musk, revealed its plan to implement flexible electrodes as brain implants that generate neural signals to connect with multiple devices or people while still maintaining human-like functionality. This would allow humans to outsource some decision-making to a computerized version of themselves, similar to a cyborg, while retaining their social character. Could this be the next milestone towards implementing AGI? For now, AGI seems a lifetime away, but there always remains the possibility of fundamental breakthroughs that could expedite its arrival.

References

  • [1] Siyu He, Yin Li, Yu Feng, Shirley Ho, Siamak Ravanbakhsh, Wei Chen, Barnabás Póczos. Learning to predict the cosmological structure formation. Proceedings of the National Academy of Sciences, 2019; 201821458. DOI: 10.1073/pnas.1821458116
  • [2] Proceedings of the National Academy of Sciences, March 2019; 116 (10): 4117–4122. DOI: 10.1073/pnas.1818555116
  • [4] Mayer Brezis. Big pharma and health care: unsolvable conflict of interests between private enterprise and public health. Isr J Psychiatry Relat Sci. 2008; 45(2): 83–94.

About the Author

Siddhant Iyer

University of Massachusetts Lowell

  • Field of Study: Polymer/Plastics Engineering
  • Expected Year of Graduation: 2020
  • Chosen Prompt: Artificial intelligence technology, its future development and impact on people’s lives.