Breakthroughs In AI

Recent advances have established AI as a near-ubiquitous technology: its potential spans fields from astronomy to agriculture.

This article sheds light on some of the significant current breakthroughs in AI that aim to solve frontier world problems.

Tiny AI/Tiny ML

This remarkable technology tackles one of the primary challenges of present-day AI: the high computing power required to deploy algorithms, which drives up carbon emissions and aggravates the climate change issues we already face.

It also makes these algorithms faster and more efficient to deploy.

"The pervasiveness of ultra-low-power embedded devices, coupled with the introduction of embedded machine learning frameworks like TensorFlow Lite* for Microcontrollers*, will enable the mass proliferation of AI-powered IoT* devices."

— Vijay Janapa Reddi, Associate Professor at Harvard University

*IoT (Internet of Things): interconnected objects that collect and exchange data over a wireless network, without any human involvement.

*Microcontrollers: small, low-power, single-chip computers embedded in devices such as sensors and appliances.

*TensorFlow Lite: a lightweight version of Google's TensorFlow framework for running machine-learning models on mobile and embedded devices.

How It Works

Training the model is similar to standard machine learning (train on a significant chunk of the data, test on the rest), but the key difference lies in the post-training stage: the model undergoes a process called deep compression, which involves:

1. Model Distillation (by pruning/knowledge distillation) to reduce the sparsity/redundancy within networks. While large networks have a high representative capacity, if that capacity is not saturated, the same function can be represented by a smaller network with a lower representation capacity (i.e., fewer neurons).

2. Quantization - The model is then quantized into a format compatible with the embedded system’s architecture. This reduces the storage size of the weights to match the system's bit arithmetic, with negligible impact on accuracy.

3. Huffman Encoding - The quantized weights are then compressed losslessly using a minimum-redundancy (Huffman) code.

*However, although Huffman coding is optimal among methods that encode symbols separately, it is not always optimal among all compression methods; arithmetic coding or asymmetric numeral systems replace it when a better compression ratio is required.

4. Compilation - The model is then compiled into C or C++ code (most microcontrollers work in these languages for efficient memory usage) and run by an on-device interpreter.
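As a toy illustration of the quantization step, here is a minimal sketch of symmetric 8-bit post-training quantization in NumPy. This is a simplified scheme chosen for clarity; TensorFlow Lite's actual quantization also handles zero-points and per-channel scales:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric linear quantization: float32 weights -> int8 values plus one scale."""
    scale = float(np.abs(weights).max()) / 127.0
    q = np.round(weights / scale).astype(np.int8)  # values land in [-127, 127]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(256).astype(np.float32)  # stand-in for trained weights
q, s = quantize_int8(w)
w_hat = dequantize(q, s)

# int8 storage is 4x smaller than float32; worst-case rounding error is scale/2
max_err = float(np.abs(w - w_hat).max())
```

The 4x storage saving comes purely from the dtype change; the single shared scale factor is what lets the microcontroller do integer arithmetic on the weights.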

Source: arxiv.org

Read More at Towards Data Science | Tiny ML

Wikipedia | Huffman coding
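The Huffman step above can be sketched in a few lines of Python. This is a textbook implementation for illustration, not the exact encoder used in any particular deployment pipeline:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a minimum-redundancy prefix code (assumes >= 2 distinct symbols)."""
    heap = [(freq, i, {sym: ""}) for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tie = len(heap)  # tie-breaker so the code dicts are never compared directly
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # two least-frequent subtrees...
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}   # ...are merged, prefixing
        merged.update({s: "1" + c for s, c in right.items()})  # their codes with 0/1
        tie += 1
        heapq.heappush(heap, (f1 + f2, tie, merged))
    return heap[0][2]

codes = huffman_codes("aaaabbc")
# Frequent symbols get shorter codes: "a" (freq 4) needs fewer bits than "c" (freq 1)
```

Because no code is a prefix of another, the compressed bitstream can be decoded unambiguously without separators, which is what makes the compression lossless.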

Why It Matters

Our devices no longer need to talk to the cloud for us to benefit from the latest AI-driven features, making them economically and ecologically sustainable at the same time!

Tiny AI will also make new applications possible, like mobile-based medical-image analysis. Tiny AI is also localized: the sensor and the AI algorithm run on the same device. This is better for privacy, since our data no longer needs to leave the device to improve a service or a feature. However, the system's key advantages are also its limitations: because of the limited storage space, it becomes essential to build a preferential infrastructure in which tasks are prioritized, and completed in that order, according to metrics like required computing power.

GPT-3: A Step Towards Artificial General Intelligence

GPT-3 (Generative Pre-trained Transformer 3) is the third version of OpenAI's language model: technically a language predictor, popularly described as a very powerful autocomplete tool.

The transformer is extremely powerful. Any type of text that’s been uploaded to the internet has likely become grist to GPT-3’s mighty pattern-matching mill: it is a large language model, trained on thousands of books and much of the internet. It can write code in practically any programming language, requiring just the pseudocode as input, and could even write and edit articles such as this one!

It can also generate text that is almost indistinguishable from that of humans. Intrigued? Here’s a fictional dialogue between Claude Shannon and Alan Turing penned by GPT-3.

With an algorithm feeding off the web non-stop, the repercussions are manifold: the web also contains misinformation and biased, discriminatory opinions.

Wary of its potential misuse, the researchers have chosen to limit access to approved customers after critically evaluating their use cases.

How It Works

The team trained GPT-3, an autoregressive language model, with 175 billion parameters. These are the weights of the connections between the network’s nodes and a good proxy for the model’s complexity: 10x more than any previous non-sparse language model, and applied without any fine-tuning!
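To make "autoregressive" concrete, here is a toy sketch of next-token generation: the model scores every word in its vocabulary given the tokens so far, and the chosen token is fed back in as context. The `toy_logits` function below is an assumed stand-in for the real 175-billion-parameter transformer:

```python
import numpy as np

VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def toy_logits(context_ids):
    """Stand-in for the transformer: GPT-3 computes these scores with learned weights."""
    rng = np.random.default_rng(hash(tuple(context_ids)) % (2**32))
    return rng.normal(size=len(VOCAB))

def generate(prompt_ids, n_tokens):
    """Autoregressive loop: each predicted token becomes context for the next step."""
    ids = list(prompt_ids)
    for _ in range(n_tokens):
        logits = toy_logits(ids)
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()                 # softmax over the vocabulary
        ids.append(int(np.argmax(probs)))    # greedy pick; GPT-3 can also sample
    return [VOCAB[i] for i in ids]

out = generate([0], 4)  # start from "the", predict 4 more tokens
```

The output here is gibberish by construction; the point is the loop structure, in which the only "knowledge" lives inside the function that produces the logits.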

Why It Matters

Its abilities, ranging from writing creative fiction to "bringing back" eminent dead personalities for discussions, show a plethora of possibilities for future AI technologies.

Fun Fact: it isn’t limited to just text; it can autocomplete images too!

“It’s impressive (thanks for the nice compliments!), but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out.”

— Sam Altman, OpenAI CEO

DeepMind

This AI laboratory is renowned amongst the AI community for building AlphaGo, the first computer program to defeat a world champion Go player (Lee Sedol, in 2016), an achievement often cited as a milestone on the road towards Artificial General Intelligence. (Deep Blue, the first computer to beat then world chess champion Garry Kasparov, was built by IBM back in 1997.)

AlphaFold

DeepMind's AlphaFold, a deep-learning* system, has cracked one of biology's grand challenges: it can predict the structure of proteins to within the width of an atom. A protein's biological function is tied to its 3D structure; knowledge of the final folded shape is essential to understanding how a specific protein works. The AI technique predicts a protein’s structure in a way that is data agnostic* and much cheaper than experimental techniques such as X-ray crystallography. This knowledge, in turn, helps researchers better understand diseases and design drugs.

*Deep Learning- A specialized form of machine learning that is based on Artificial Neural Networks.

*Data agnostic: not dependent on any particular source or format of experimental data, and hence more broadly applicable.

But First: What is Protein Folding?

Protein folding occurs largely in a cellular compartment called the endoplasmic reticulum. It is a vital cellular process: proteins must be correctly folded into specific three-dimensional shapes in order to function. Accumulation of misfolded proteins can cause diseases like Alzheimer’s and Parkinson’s, and unfortunately, some of these diseases are very common.

Credit: Edward Kinsman/Science Photo Library

DeepMind's success wasn't so much a function of picking the right neural nets but rather "how they set up the problem in a sophisticated enough way that the neural network-based modeling [could] answer the question."

Briana Brownell, data scientist and founder of the AI company PureStrategy

How It Works

The approach works in 2 stages:

  1. Multiple sequence alignments - A protein sequence is compared with similar ones in a database to reveal pairs of amino acids that aren’t found next to each other in the chain, but tend to appear in tandem. This suggests that the two amino acids are located near each other in the folded protein.

A neural network was then trained to predict the distance between two such paired amino acids in the folded protein. Because the algorithms were trained on precisely measured distances in known proteins, the distance predictions were highly accurate. A parallel neural network predicted the angles of the joints between consecutive amino acids in the folded chain.

  2. Gradient Descent - The exact set of distances and angles wasn’t known, so to predict the structure, the more feasible method of gradient descent was deployed to iteratively refine the structure until it came close to the predictions from the first step.
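The second stage can be sketched as plain gradient descent on a toy 2-D "structure": the coordinates are nudged step by step until their pairwise distances match the stage-1 predictions. The setup and potential below are simplified illustrations, not AlphaFold's actual energy function:

```python
import numpy as np

rng = np.random.default_rng(1)
true_pts = rng.standard_normal((5, 2))          # the "real" folded structure (2-D toy)

# Stage-1 output: predicted pairwise distances (here computed from the true structure)
i, j = np.triu_indices(5, k=1)
target = np.linalg.norm(true_pts[i] - true_pts[j], axis=1)

pts = true_pts + 0.3 * rng.standard_normal((5, 2))  # perturbed starting structure
lr = 0.02
for _ in range(5000):
    diff = pts[i] - pts[j]
    dist = np.linalg.norm(diff, axis=1)
    # Gradient of sum((dist - target)^2) with respect to each coordinate
    g = (2 * (dist - target) / np.maximum(dist, 1e-9))[:, None] * diff
    grad = np.zeros_like(pts)
    np.add.at(grad, i, g)     # each pair pulls/pushes both of its endpoints
    np.add.at(grad, j, -g)
    pts -= lr * grad

final = np.linalg.norm(pts[i] - pts[j], axis=1)
loss = float(np.mean((final - target) ** 2))    # near zero after refinement
```

Note that only relative distances are constrained: any rotation, translation, or mirror image of the recovered structure would fit the predictions equally well, just as for a real protein.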

Although computational predictions aren’t yet accurate enough to be widely used in drug design, the increasing accuracy allows for other applications, such as understanding how a mutated protein contributes to disease or knowing which part of a protein to turn into a vaccine for immunotherapy.

“These models are starting to be useful,”

John Moult, Biologist, University of Maryland & the founder of the CASP** biennial competition

**The Critical Assessment of protein Structure Prediction (CASP) is a competition where teams are challenged to design computer programs that predict protein structures from sequences.

Meanwhile, a holistic step to tackle ethical concerns:

AI has solved many scientific problems using deep learning and artificial neural network frameworks. This ever-increasing progress makes it imperative to focus not only on making AI more capable, but also on maximizing its societal benefit while guarding against negative implications.

For the first time, researchers who submit papers to NeurIPS, one of the biggest AI research conferences in the world, must now state the "potential broader impact of their work" on society as well as any financial conflict of interest, conference organizers told VentureBeat.

Interestingly, while AI technologies are starting to show promise, a few "Useless Machines," like the Leave Me Alone Box, sit at the very opposite end of the spectrum. Conceived at Bell Laboratories in the early 1950s by the computer scientist Marvin Minsky, a pioneer in the field of artificial intelligence, their sole function is to switch themselves off by operating their own "off" switch.

"There is something unspeakably sinister about a machine that does nothing—absolutely nothing—except switch itself off."

— Arthur C. Clarke, acclaimed sci-fi author, on seeing the Leave Me Alone Box on the desk of Claude Shannon, Minsky’s mentor

The Useless Machines offer a lesson to everyone trying to find solutions using AI: to remember when to switch off! One must deliberate whether an AI technique will actually provide correct solutions; in many cases it takes a large number of iterations and consumes precious time, only to deliver lackluster results. It is important to strategize rather than leapfrog into developing the tech stack.

The inventors first named it the "Ultimate Machine" — a name that didn't stick yet somehow revealed something of their invention's ironic self-enclosure.

REFERENCES:

10 Breakthrough Technologies 2020

-Vatsala Nema




Copyright © 2023 Chrysalis IISERB