Artificial Intelligence, popularly known as AI, has been the main driver of disruption in today’s tech world. While its subfields, such as machine learning, neural networks and deep learning, have already earned recognition through their wide-ranging applications and use cases, AI is still at an embryonic stage. New developments are continually taking place in the discipline, and they could soon transform the AI industry and open up new possibilities. There’s much more to AI than self-driving cars and friendly customer service chatbots. Let’s take a look at some of the promising AI technologies of tomorrow.
Generative AI
Generative artificial intelligence refers to programs that enable machines to learn from existing text, audio files and images, and to use what they learn to create new content.
Recent advances in AI have allowed many companies to develop algorithms and tools that generate artificial 3D and 2D images automatically. MIT Technology Review described generative AI as one of the most promising advances in AI of the past decade. It is poised to power the next generation of applications for automatic programming, content development, visual arts, and other creative, design, and engineering activities.
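Production generative models are large neural networks, but the core idea they share is simple: learn the statistical structure of existing data, then sample new content from that learned distribution. The toy sketch below illustrates this idea with a character-level Markov chain; it is purely illustrative and stands in for far more capable neural generators.

```python
import random
from collections import defaultdict

def train_markov(text, order=2):
    """Learn character-transition statistics from a training corpus."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context].append(text[i + order])
    return model

def generate(model, seed, length=40):
    """Sample new text one character at a time from the learned model."""
    order = len(seed)
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:
            break  # unseen context: stop generating
        out += random.choice(choices)
    return out

corpus = "the cat sat on the mat and the cat ran to the man"
model = train_markov(corpus, order=2)
print(generate(model, "th", length=20))
```

The same train-then-sample loop, scaled up to deep networks and billions of examples, is what produces generated images, audio, and text.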
Beyond design and marketing campaigns, generative AI has brought innovation to many areas:
1. Provides better customer service, facilitates and speeds up check-ins, enables performance monitoring, seamless connectivity, and quality control, and helps find new networking opportunities.
2. Helps in film preservation and colorization.
3. Helps in healthcare by rendering prosthetic limbs, organic molecules and other items from scratch, which can then be realized through 3D printing, CRISPR, and other technologies.
4. It can also enable early identification of potential malignancy, leading to more effective treatment plans. In the case of diabetic retinopathy, for instance, generative AI not only offers a pattern-based hypothesis but can also interpret the scan and generate content that helps inform the physician’s next steps.
5. IBM has used the technology to research antimicrobial peptides (AMPs) in the search for drugs to treat COVID-19.
Federated Learning
Federated learning is an approach to privacy-preserving model training in diverse, distributed networks. Mobile phones, wearable devices, and autonomous vehicles are just a few of the modern distributed networks generating a wealth of data each day. Due to the growing computational power of these devices—coupled with concerns about transmitting private information—it is increasingly attractive to store data locally and push network computation to the edge devices. Federated learning has emerged as a training paradigm for such settings. As we discuss in this post, federated learning requires fundamental advances in areas such as privacy, large-scale machine learning, and distributed optimization, and raises new questions at the intersection of machine learning and systems.
Potential applications of federated learning include tasks such as learning the activities of mobile phone users, adapting to pedestrian behavior in autonomous vehicles, and predicting health events like heart attack risk from wearable devices. We discuss two of these applications in more detail below.
Learning over smartphones: By jointly learning user behavior across a large pool of mobile phones, statistical models can power applications such as next-word prediction, face detection, and voice recognition. However, users may not be willing to physically transfer their data to a central server, whether to protect their personal privacy or to conserve their phones’ limited bandwidth and battery power. Federated learning has the potential to enable predictive features on smartphones without diminishing the user experience or leaking private information. Figure 1 illustrates an application where we aim to learn a next-word predictor in a large-scale mobile phone network based on users’ historical text data.
Learning across organizations: Organizations such as hospitals can also be viewed as remote ‘devices’ that contain a multitude of patient data for predictive healthcare. However, hospitals operate under strict privacy practices, and may face legal, administrative, or ethical constraints that require data to remain local. Federated learning is a promising solution for these applications, as it can reduce strain on the network and enable private learning between various devices/organizations. Figure 2 depicts an example application in which a model is learned from distributed electronic health data.
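The training loop behind both applications can be sketched in a few lines. The example below is a toy version of federated averaging (the FedAvg-style scheme commonly used in this setting), assuming a simple linear model and synthetic client data; the function names and data are illustrative. Note that only model weights, never raw data, travel between the clients and the server.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local gradient steps on a linear model (squared loss).
    Runs entirely on the client's own data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_averaging(clients, w, rounds=20):
    """Server loop: broadcast w, collect locally trained weights,
    and average them, weighted by each client's data size."""
    for _ in range(rounds):
        updates = [local_update(w, X, y) for X, y in clients]
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        w = np.average(updates, axis=0, weights=sizes)
    return w

# Synthetic example: three clients whose data share the weights [2, -1].
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (30, 50, 20):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w))

w = federated_averaging(clients, np.zeros(2))
print(w)  # should approach [2.0, -1.0]
```

A real deployment adds the pieces the post mentions: secure aggregation for privacy, compression to save bandwidth, and handling of stragglers and non-identically-distributed client data.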
Neural Network Compression
AI has made rapid progress in analyzing big data by leveraging deep neural networks (DNNs). However, a key disadvantage of any large neural network is that it is computationally and memory intensive, which makes it difficult to deploy on embedded systems with limited hardware resources. Further, as DNNs grow to carry out more complex computation, their storage needs also rise. To address these issues, researchers have come up with a set of AI techniques called neural network compression.
Generally, a neural network contains far more weights, represented at higher precision, than are required for the specific task it is trained to perform. If we wish to bring real-time intelligence to edge applications, neural network models must be smaller. To compress models, researchers rely on methods such as parameter pruning and sharing, transferred or compact convolutional filters, and knowledge distillation.
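As a minimal sketch of the first of these methods, the example below applies magnitude-based parameter pruning to a weight matrix with NumPy: the smallest-magnitude weights are zeroed out, leaving a sparse matrix that is cheaper to store and multiply. The function name and the 90% sparsity level are illustrative choices, not taken from any particular library.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the given fraction of weights with the smallest magnitudes,
    returning the pruned weights and the keep-mask."""
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)          # number of weights to drop
    threshold = np.partition(flat, k)[k]   # k-th smallest magnitude
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(1)
W = rng.normal(size=(256, 128))            # stand-in for a trained layer
W_pruned, mask = magnitude_prune(W, sparsity=0.9)
print(mask.mean())                         # fraction of weights kept, about 0.1
```

In practice, pruning is usually followed by fine-tuning to recover accuracy, and the surviving weights may additionally be shared or quantized to lower precision for further savings.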