Techniques and optimization algorithms in deep learning: A review

Authors

Nitin Liladhar Rane
Vivekanand Education Society's College of Architecture (VESCOA), Mumbai, India
Suraj Kumar Mallick
Department of Geography, Shaheed Bhagat Singh College, University of Delhi, New Delhi India
Ömer Kaya
Engineering and Architecture Faculty, Erzurum Technical University, Erzurum, Turkey
Jayesh Rane
Pillai HOC College of Engineering and Technology, Rasayani, India

Synopsis

Deep Learning (DL), a branch of Artificial Intelligence (AI), has transformed various industries by allowing machines to carry out activities that were once thought to be only possible through human intelligence. The rapid progress in deep learning methods and algorithms has played a key role in reaching unparalleled levels of accuracy and efficiency in diverse applications. This research explores the most recent and popular techniques and algorithms in deep learning, offering a detailed look at how they are created and used. Important focuses include convolutional neural networks (CNNs) for recognizing and processing images, recurrent neural networks (RNNs) and long short-term memory networks (LSTMs) for analysing sequential data and language processing, and generative adversarial networks (GANs) for generating authentic synthetic data. Moreover, the research delves into new advancements like transformers and self-attention mechanisms, which have greatly enhanced results in activities such as language translation and text generation. The research also explores new developments in transfer learning, federated learning, and explainable AI, showcasing their ability to improve model generalization, privacy, and interpretability. The study highlights the increased significance of incorporating these advanced methods across different fields such as healthcare, finance, autonomous driving, and robotics, in order to propel the upcoming wave of technological advancements. This research seeks to educate and provide direction to future research and development in the dynamic field of DL by examining advanced algorithms.

Keywords: Artificial intelligence, Machine learning, Deep learning, Techniques, Convolutional neural networks, Generative adversarial networks, Algorithms.

Citation: Rane, N. L., Mallick, S. K., Kaya, O., & Rane, J. (2024). Techniques and optimization algorithms in deep learning: A review. In Applied Machine Learning and Deep Learning: Architectures and Techniques (pp. 59-79). Deep Science Publishing. https://doi.org/10.70593/978-81-981271-4-3_3

3.1 Introduction

Deep learning (DL), a branch of machine learning, is widely regarded as revolutionary, with the potential to transform many scientific and industrial domains (Oprea et al., 2020; Lakshmanna et al., 2022; Abdar et al., 2021). Deep learning techniques based on artificial neural networks have advanced significantly in recent years, enabling computers to learn and make judgements more effectively and precisely than before (Lakshmanna et al., 2022; Mehrish et al., 2023; Murat et al., 2020; Wang et al., 2022). Deep learning algorithms have made great strides in a number of domains, including computer vision, natural language processing, speech recognition, and autonomous systems, extracting intricate patterns and representations from enormous volumes of data (Abdar et al., 2021; Domingues et al., 2020; Nagaraju & Chawla, 2020; Saleem et al., 2021). As the field continues to evolve, it is crucial to examine and understand the many methods and algorithms that underpin DL, their real-world uses, and the difficulties that come with implementing them (Oprea et al., 2020; Islam et al., 2021; Alyoubi et al., 2020; Altaheri et al., 2023). The fundamental idea behind DL is to use numerous layers of interconnected neurons to replicate the human brain's structure and behaviour (Nagaraju & Chawla, 2020; Mijwil et al., 2023; Bouguettaya et al., 2022; Khan et al., 2021). This hierarchical layer structure enables DL models to gradually uncover higher-level features from raw input data, resulting in predictions that are both more accurate and more resilient (Lakshmanna et al., 2022; Murthy et al., 2020; Sarker, 2021; Shoeibi et al., 2021). Convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs) have each made distinct contributions to DL by tackling specific issues and extending the capabilities of these systems (Amiri et al., 2024). CNNs are now the foundation of image and video analysis, while RNNs excel in sequential data tasks such as language translation and time series prediction.

Despite these advancements, DL still faces a number of challenges, including the need for large labelled datasets, computational complexity, and limited model interpretability (Lakshmanna et al., 2022; Hazra et al., 2021). Furthermore, integrating DL with reinforcement learning, transfer learning, and meta-learning presents both opportunities and difficulties (Mehrish et al., 2023; Murat et al., 2020; Amiri et al., 2024; Arya et al., 2023). Overcoming these obstacles requires a thorough understanding of the available techniques and algorithms, together with continuous innovation to expand and enhance DL's efficacy (Murat et al., 2020; Wang et al., 2022; Domingues et al., 2020). In this study, our goal is to offer an in-depth analysis of the current status of methods and algorithms in the field of DL, with a particular emphasis on their evolution, uses, and upcoming directions. Through a thorough review of the literature, we analyze the main findings from recent research, focusing on how DL methods have evolved and affected the field. We have analysed keywords and their co-occurrences to discover common themes and trends in the research community, and we have utilized cluster analysis to reveal connections and classifications among different DL methods, providing a thorough understanding of the subject.

Fig. 3.1 Deep learning architecture

Fig. 3.1 shows the hierarchical relationship between artificial intelligence (AI), machine learning (ML), neural networks (NN), and deep learning (DL). Each field sits above the subcategories it covers and represents a broader area. Deep learning can learn from more complex and detailed data using multi-layer neural networks, and it is currently the fastest-developing and most widely adopted approach within AI and ML. Combined with large datasets and powerful computational resources, deep learning models are highly effective at solving complex problems (Zhang et al., 2021; Tulbure et al., 2022).

Contributions of this chapter:

  • Reviewing the literature: analyzing current studies to identify main findings, research methods, and trends in DL approaches and algorithms.
  • Keyword analysis: discovering popular themes and emerging topics in DL by systematically examining keywords and their connections.
  • Cluster analysis: uncovering connections and classifications among different DL techniques and approaches, offering a detailed understanding of the subject area.

3.2 Methodology

To gather relevant research articles, conference papers, and reviews, a thorough literature review was first conducted using reputable academic sources such as IEEE Xplore, ACM Digital Library, Springer, and Elsevier. The primary goal of the search strategy was to locate papers that addressed deep learning techniques and algorithms in detail. Selection was restricted to peer-reviewed publications released during the previous ten years, ensuring that the results remained current and relevant. The search focused on terms such as "deep learning techniques," "deep learning algorithms," "neural networks," "convolutional neural networks," "recurrent neural networks," and "transformer models." Following the literature review, a keyword analysis was carried out to identify the terms and concepts that appeared most frequently in the collected articles. This involved extracting keywords from the abstracts and titles of the chosen articles and using text mining tools to measure how often they appeared. The aim was to identify the main topics and key areas of concentration within the domain of deep learning; analyzing the frequency and distribution of keywords enabled the identification of prevalent trends and important research focuses.

To examine the relationships between the identified terms, a co-occurrence analysis was performed. This analysis made it possible to better understand the interactions and connections between various deep learning principles and methodologies. By visualising the co-occurrence network, network analysis tools made it possible to identify significant clusters and critical nodes, yielding a clear picture of how deep learning research topics relate to and are organised around one another. Finally, to form meaningful categories out of the identified keywords and their co-occurrence patterns, a cluster analysis was performed. Clustering algorithms such as k-means and hierarchical clustering were used to group the keywords according to their co-occurrence frequency and similarity. This analysis identified distinct research clusters and subfields in deep learning, allowing for a detailed understanding of the numerous techniques and algorithms commonly researched together.

3.3 Results and discussions

Co-occurrence and cluster analysis of the keywords used in DL techniques

Fig. 3.2 depicts the co-occurrence and cluster analysis of keywords related to deep learning techniques. The network diagram displays various groupings, each corresponding to a specific sub-area of deep learning. "Deep learning," the principal node, is the most prominent, indicating both its primary significance and its wide-ranging influence. The central node is surrounded by several significant clusters with dense connections among related terms. The "convolutional neural network" (CNN) cluster is one of the most prominent groupings. Terms like "object detection," "image segmentation," "convolutional neural networks," and "image classification" belong to this group. The group's notable degree of interconnectedness highlights the significance of CNNs in image processing tasks. Because CNNs are skilled at identifying patterns and features in images, they are crucial tools in deep learning, particularly when working with visual data. The terms "image processing," "image segmentation," and "object detection" appear frequently, indicating how widely CNNs are used in computer vision. Adjacent to the CNN cluster is a significant group focused on "learning systems" and "information classification." This group contains concepts like "support vector machines," "semantics," and "feature extraction." The inclusion of "machine-learning" and "learning algorithms" in this category demonstrates that deep learning incorporates various machine learning techniques and extends beyond CNNs. The prominence of "classification" and "feature extraction" underscores how central they are to building effective learning systems: feature extraction converts raw data into a suitable format for modeling, while classification is a key component in applications such as image recognition and natural language processing.

The combination of "speech recognition" and "natural language processing" (NLP) marks yet another crucial area of deep learning. Phrases like "embeddings," "online social networking," and "data mining" highlight the variety of applications for natural language processing techniques. Embeddings, such as word embeddings, are essential for encoding textual data because they capture semantic meaning, which makes tasks like sentiment analysis and machine translation possible. NLP is crucial for deciphering user behaviour and trends, as demonstrated by its application to the analysis of large amounts of text from online platforms such as social networks and in data mining. The "internet of things" and "reinforcement learning" are the focus of another important group, which includes terms like "predictive models," "optimisation," and "decision making." Reinforcement learning is a powerful method in which agents learn to make choices by interacting with their environment and receiving feedback through rewards or penalties. The presence of "internet of things" points to the incorporation of reinforcement learning into IoT applications, enabling smart agents to improve processes and make independent choices. The emphasis on "optimization" and "predictive models" shows how reinforcement learning can enhance efficiency and performance across different systems.

Fig. 3.2 Co-occurrence analysis of the keywords used in DL techniques studies

Another group centres on the key concepts of "machine learning" and "artificial intelligence" (AI). Words like "diagnosis," "benchmarking," and "automation" illustrate the range of applications for AI techniques. The association of terms such as "controlled study" and "article" indicates that these procedures have been thoroughly evaluated and confirmed by empirical studies and rigorous research. Given that AI systems can increase diagnostic precision and streamline processes, the terms "diagnosis" and "automation" suggest that AI has a significant impact on industrial automation and healthcare. "Computers," "neural networks," and "algorithms" form another group, emphasising the core computational models and methods that underpin deep learning. The concepts of "brain," "image reconstruction," and "diagnostic accuracy" in this group emphasize how biologically inspired neural networks are utilized in medical imaging. Neural networks, designed to mimic the human brain, excel at capturing intricate patterns, which makes them well suited to tasks such as image reconstruction and enhancing diagnostic precision in healthcare.

Emerging techniques in DL

The introduction of transformer architectures is a noteworthy development in DL (Mehrish et al., 2023; Murat et al., 2020; Shoeibi et al., 2024; Boulemtafes et al., 2020; Dildar et al., 2021). Natural language processing (NLP) was the initial domain for transformers like BERT and GPT, which have since demonstrated exceptional performance on a variety of tasks (Murat et al., 2020; Domingues et al., 2020; Hasan et al., 2021; Kashyap et al., 2022; Mathew et al., 2021). Unlike typical RNNs, transformers process input data in parallel using self-attention mechanisms, which improves computational efficiency and enables the training of larger models. Transformers have proven so adaptable that they are now used in areas beyond NLP, such as computer vision, where Vision Transformers (ViTs) are achieving groundbreaking results in tasks like image classification. Another emerging method is self-supervised learning, which seeks to use vast quantities of unlabeled data to acquire valuable representations without requiring significant manual labeling. Self-supervised learning methods, like contrastive learning and masked language modeling, have been highly successful in narrowing the performance gap between supervised and unsupervised learning. Contrastive learning trains models to differentiate between similar and dissimilar data points, enabling them to learn meaningful representations. This method has been applied effectively in areas like image and speech recognition, and is set to greatly decrease the need for labeled datasets. Table 3.1 shows the emerging techniques in DL, and a code sketch of the self-attention operation follows below.
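
To make the self-attention idea concrete, the following is a minimal NumPy sketch of scaled dot-product attention, the core operation inside transformer layers. The token count, embedding size, and random inputs are purely illustrative; in a real transformer, Q, K, and V come from learned linear projections of the input rather than being the raw input itself.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each output row is a weighted mix of the rows of V, with weights
    given by a row-wise softmax over query-key similarity scores."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (n, n) similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V

# Self-attention over 4 toy tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(X, X, X)         # here Q = K = V = X
print(out.shape)                                    # (4, 8)
```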

Table 3.1 Emerging techniques in DL

| Sl. No | Technique | Description | Key Applications | Challenges |
|---|---|---|---|---|
| 1 | Transformers | Leveraging self-attention mechanisms, transformers epitomize a paradigm shift in processing sequential data. | NLP, machine translation, image analysis | Substantial computational demands |
| 2 | GANs (Generative Adversarial Networks) | By engaging two adversarial networks in a competitive dynamic, GANs are instrumental in synthesizing realistic data. | Image generation, video synthesis | Training instability and mode collapse |
| 3 | Capsule Networks | Capsule networks, designed to surmount the limitations of conventional CNNs, capture spatial hierarchies more effectively. | Image recognition, medical diagnosis | Computationally intensive and complex architecture |
| 4 | Self-Supervised Learning | By deriving meaningful representations from unlabeled data through predictive tasks, self-supervised learning reduces dependency on extensive labeled datasets. | NLP, image processing, speech recognition | Crafting effective pretext tasks |
| 5 | Reinforcement Learning | This technique entails learning optimal actions via iterative interactions with the environment, honing decision-making processes. | Robotics, game playing, autonomous driving | Sample inefficiency and extensive trial requirements |
| 6 | Federated Learning | Federated learning enables the training of models across decentralized devices, thereby preserving data privacy by obviating the need for data centralization. | Healthcare, finance, IoT | Communication overhead and synchronization issues |
| 7 | Neural Architecture Search (NAS) | Employing search algorithms to autonomously design neural network architectures, NAS epitomizes optimization and innovation in model architecture. | Model optimization, architecture design | Prohibitive computational resource requirements |
| 8 | Graph Neural Networks (GNNs) | By operating on graph structures, GNNs adeptly capture the intricate relationships between nodes, making them indispensable for relational data modeling. | Social networks, biological networks | Scalability challenges and computational complexity |
| 9 | Transfer Learning | Transfer learning harnesses pre-trained models to tackle new tasks, thereby leveraging pre-existing knowledge to enhance efficiency. | NLP, image classification | Potential risks of negative transfer |
| 10 | Few-Shot Learning | Few-shot learning is adept at performing tasks with minimal training examples, thus addressing the challenge of data scarcity in various domains. | Image recognition, NLP | High sensitivity to noise in training data |
| 11 | Meta-Learning | Meta-learning, or "learning to learn," enhances adaptability across diverse tasks, providing rapid adjustments to new challenges. | Reinforcement learning, hyperparameter tuning | Complex and demanding training procedures |
| 12 | Deep Reinforcement Learning | Merging the principles of DL and reinforcement learning, this technique excels in solving high-dimensional decision-making problems. | Robotics, gaming, finance | Necessitates extensive training data |

Within deep learning, the concept of meta-learning, also referred to as "learning to learn," is gaining popularity (Wang et al., 2022; Sharma et al., 2020; Xiao et al., 2020; Lee et al., 2021). Meta-learning algorithms are designed to adapt quickly to new tasks by leveraging prior knowledge from similar tasks. This approach is particularly useful when data is scarce or when conditions change rapidly. The popular Model-Agnostic Meta-Learning (MAML) approach trains models to perform well on novel tasks with minimal fine-tuning. By prioritizing flexibility, meta-learning has the potential to create AI systems that are more adaptable and can handle a range of problems. Graph neural networks (GNNs) are another major advance in DL, particularly for tasks involving relational data. GNNs extend conventional neural networks to data structured as graphs, enabling their use in analyzing social networks, studying molecular biology, and building recommendation systems. By modelling the relationships between nodes in a graph, GNNs learn more detailed and informative representations. Recent variants such as graph attention networks (GATs) and graph convolutional networks (GCNs) have shown significant performance gains, leading to increased research and adoption in the field.
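
As a concrete illustration of the neighbourhood-aggregation idea behind GCNs, here is a minimal NumPy sketch of a single graph convolution layer with symmetric normalisation, in the style of Kipf and Welling. The toy graph, feature dimensions, and random weights are illustrative assumptions only.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph convolution: aggregate each node's neighbourhood with
    symmetric normalisation, then apply a shared linear map and ReLU."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(0.0, A_norm @ H @ W)          # ReLU nonlinearity

# Toy graph: a 4-node path, 3 input features, 2 output features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))          # node feature matrix
W = rng.normal(size=(3, 2))          # weights (random here, learned in practice)
print(gcn_layer(A, H, W).shape)      # (4, 2)
```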

With the advent of Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), new avenues for producing realistic synthetic data have opened in the field of generative models (Murat et al., 2020; Xiao et al., 2020; Lee et al., 2021; Yang et al., 2020). A GAN pits a generator network against a discriminator network in a zero-sum game, driving the generator to produce highly realistic data. The quality and variety of generated images have improved significantly in recent years thanks to innovations like StyleGAN and BigGAN. VAEs, by contrast, offer a probabilistic approach to data generation, which is advantageous for applications requiring a strong theoretical foundation, such as anomaly detection and data compression. Neural architecture search (NAS) has also become a popular method for automatically designing neural network structures. Whereas conventional DL model design depends on human knowledge and intuition, NAS automates the search for optimal architectures against predetermined criteria, frequently steering the search with reinforcement learning or evolutionary algorithms. NAS has discovered new architectures that surpass human-designed equivalents on various benchmarks, showing the potential of automated design to accelerate progress in the field.
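
The adversarial objective can be stated compactly in code. The sketch below is a hedged illustration rather than a full training loop: it computes the standard discriminator loss and the commonly used non-saturating generator loss from discriminator outputs, with the probability values invented for demonstration.

```python
import numpy as np

def gan_losses(d_real, d_fake):
    """GAN losses given discriminator outputs in (0, 1): the discriminator
    maximises log D(x) + log(1 - D(G(z))); the generator, in the
    non-saturating form, maximises log D(G(z))."""
    eps = 1e-8                                    # avoid log(0)
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    g_loss = -np.mean(np.log(d_fake + eps))       # non-saturating generator loss
    return d_loss, g_loss

# Illustrative values: D is fairly confident on real data, unsure on fakes.
d_real = np.array([0.9, 0.8, 0.95])
d_fake = np.array([0.3, 0.4, 0.2])
print(gan_losses(d_real, d_fake))
```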

Federated learning, a promising technique, tackles privacy and data security concerns by allowing machine learning models to be trained on decentralized data sources (Abdar et al., 2021; Mehrish et al., 2023; Yang et al., 2020). Rather than consolidating data in one central location, federated learning trains models locally on devices, sharing and aggregating only model updates. This improves privacy and decreases data transfer requirements, making the approach appropriate for healthcare, finance, and other sensitive fields. Recent developments in federated learning centre on improving communication efficiency, resilience, and scalability, solidifying its importance as a critical research field amid increasing data privacy concerns. The significance of Explainable AI (XAI) is also increasing due to the growing complexity and opacity of DL models. XAI techniques seek to enhance the transparency and interpretability of AI models, allowing users to understand the rationale behind model predictions. Techniques like LIME and SHAP offer insight into model behavior by highlighting the impact of individual features on predictions. As regulations for AI transparency and accountability develop, the need for explainable models is likely to grow, driving further research in this field.

Few-shot learning is a growing technique focused on training models with only a small number of examples per class (Lakshmanna et al., 2022; Abdar et al., 2021; Mehrish et al., 2023; Boulemtafes et al., 2020; Dildar et al., 2021). This method is inspired by how humans can rapidly grasp new concepts from minimal data (Domingues et al., 2020; Nagaraju & Chawla, 2020; Khan et al., 2021; Murthy et al., 2020). Few-shot methods such as prototypical networks and matching networks have shown potential in tasks such as image categorization and language understanding. By allowing models to extrapolate from a handful of examples, few-shot learning can make AI more usable in real-world situations where data is limited. The merging of quantum computing and DL is another frontier that could transform the field. Quantum computing exploits the laws of quantum mechanics to perform computations that are infeasible for classical computers. Researchers are investigating quantum machine learning algorithms, such as quantum neural networks and quantum GANs, for their potential to solve difficult problems more efficiently than conventional approaches. Although still in its early stages, the combination of quantum computing and DL could yield advances in fields like cryptography, optimization, and materials science.
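
As an illustration of the few-shot idea, the following is a minimal NumPy sketch of prototypical-network classification at inference time: each class prototype is the mean of its support embeddings, and queries take the label of the nearest prototype. The episode sizes, embedding dimension, and synthetic embeddings are illustrative assumptions; in practice the embeddings come from a trained encoder.

```python
import numpy as np

def prototypical_predict(support_emb, support_labels, query_emb):
    """Classify queries by the nearest class prototype, where each
    prototype is the mean embedding of that class's support examples."""
    classes = np.unique(support_labels)
    protos = np.stack([support_emb[support_labels == c].mean(axis=0)
                       for c in classes])
    # Squared Euclidean distance from every query to every prototype.
    dists = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[np.argmin(dists, axis=1)]

# Toy 2-way 3-shot episode in a 4-d embedding space (values illustrative).
rng = np.random.default_rng(0)
support = np.vstack([rng.normal(0, 1, (3, 4)), rng.normal(3, 1, (3, 4))])
labels = np.array([0, 0, 0, 1, 1, 1])
queries = np.vstack([rng.normal(0, 1, (2, 4)), rng.normal(3, 1, (2, 4))])
print(prototypical_predict(support, labels, queries))  # expect [0 0 1 1]
```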

Optimization algorithms in DL

Gradient Descent and Its Variants

The core idea behind optimizing DL models is gradient descent (Abdar et al., 2021; Mehrish et al., 2023; Murat et al., 2020; Xiao et al., 2020). It iteratively adjusts a neural network's weights to lower the loss function. The simplest variant, Stochastic Gradient Descent (SGD), modifies the weights using the gradient of the loss with respect to the weights, computed on a single training sample at a time. SGD is simple, but its per-sample updates are highly variable, which leads to noisy gradients. Many variants of SGD have been developed to lessen this problem. A commonly used form is Mini-batch Gradient Descent, where the weights are adjusted using a small group of training samples, striking a balance between variance and computational speed. Another important variant is SGD with Momentum, which speeds up convergence by accumulating past gradients to damp fluctuations.
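
As a hedged sketch of the momentum update just described, the following NumPy snippet applies SGD with momentum to a toy quadratic loss; the learning rate, momentum coefficient, and loss function are illustrative choices, not prescriptions.

```python
import numpy as np

def sgd_momentum_step(w, grad, velocity, lr=0.01, beta=0.9):
    """One SGD-with-momentum update: the velocity accumulates past
    gradients, damping the oscillations of plain stochastic updates."""
    velocity = beta * velocity - lr * grad
    return w + velocity, velocity

# Toy quadratic loss L(w) = 0.5 * ||w||^2, so grad = w (illustrative).
w = np.array([2.0, -3.0])
v = np.zeros_like(w)
for _ in range(100):
    w, v = sgd_momentum_step(w, grad=w, velocity=v, lr=0.1)
print(w)  # shrinks toward the minimum at the origin
```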

Adaptive Learning Rate Methods

Building on the foundation established by gradient descent, adaptive learning rate methods have transformed optimization in DL by dynamically adjusting the learning rate for each parameter (Murat et al., 2020; Wang et al., 2022; Domingues et al., 2020; Khan et al., 2021). These methods significantly improve both convergence speed and the model's overall performance. One of the first adaptive methods, AdaGrad, scales each parameter's learning rate by its history of gradients. It performs well on sparse data, but its accumulated squared gradients cause the learning rate to decay too aggressively. RMSprop, which builds on AdaGrad, addresses this problem with a decaying average of squared gradients, which helps stabilize the learning rate. Adam (Adaptive Moment Estimation) merges the advantages of AdaGrad and RMSprop by computing individual learning rates for each parameter from estimates of the first and second moments of the gradients. Adam has become a default optimization algorithm because of its robustness and effectiveness across applications. Recent refinements of Adam, such as AdamW, decouple weight decay regularization from the gradient update, improving performance further. Fig. 3.3 shows the optimization algorithms in DL.
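
To make the moment estimates concrete, here is a minimal NumPy sketch of a single Adam update with bias correction, applied to a toy quadratic loss; all hyperparameter values and the loss itself are illustrative.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: first/second moment estimates with bias correction
    give each parameter its own effective learning rate."""
    m = b1 * m + (1 - b1) * grad         # first moment (mean of gradients)
    v = b2 * v + (1 - b2) * grad**2      # second moment (uncentred variance)
    m_hat = m / (1 - b1**t)              # bias correction for early steps
    v_hat = v / (1 - b2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Toy quadratic: minimise 0.5 * ||w||^2, so grad = w (illustrative).
w = np.array([2.0, -3.0])
m = np.zeros_like(w)
v = np.zeros_like(w)
for t in range(1, 2001):
    w, m, v = adam_step(w, grad=w, m=m, v=v, t=t, lr=0.05)
print(w)  # small values near the origin
```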

Fig. 3.3 Optimization algorithms in DL

Second-Order Methods

Whereas first-order methods such as gradient descent use only gradients, second-order methods exploit second-order information, such as the Hessian matrix, to capture the curvature of the loss surface. These methods can converge faster on complex models, although they are computationally intensive. Newton's method is the classical second-order technique: it iteratively updates the parameters using the inverse of the Hessian matrix. Appealing as this is in theory, computing the Hessian is prohibitively expensive for large neural networks. L-BFGS, a quasi-Newton technique that approximates the Hessian matrix, is therefore a better fit for DL tasks.
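
The sketch below illustrates Newton's update on a toy quadratic loss, where a single step lands exactly at the minimum. The matrix and starting point are illustrative, and in practice one would use a quasi-Newton approximation such as L-BFGS rather than the exact Hessian.

```python
import numpy as np

def newton_step(w, grad, hessian):
    """One Newton update: precondition the gradient with the inverse
    Hessian, using curvature for a better-scaled step than gradient descent."""
    return w - np.linalg.solve(hessian, grad)

# Toy quadratic L(w) = 0.5 * w^T A w with A positive definite: one Newton
# step from any start lands exactly at the minimum (matrix illustrative).
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])
w = np.array([2.0, -3.0])
w = newton_step(w, grad=A @ w, hessian=A)
print(w)  # ~[0, 0]
```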

Evolutionary Algorithms

Evolutionary algorithms optimize neural networks by iteratively refining a population of candidate solutions, drawing inspiration from natural evolution (Domingues et al., 2020; Nagaraju & Chawla, 2020; Saleem et al., 2021; Boulemtafes et al., 2020). In DL, techniques such as the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and Genetic Algorithms (GAs) have been applied to optimize in high-dimensional spaces and provide robustness against local minima. Genetic algorithms mimic natural selection through crossover, mutation, and selection operators, while CMA-ES efficiently traverses the search space by adapting the covariance matrix of the distribution from which new solutions are sampled. These techniques are particularly useful when gradient information is hard to come by or unreliable.
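
The following is a minimal, hedged sketch of a genetic algorithm with truncation selection, uniform crossover, and Gaussian mutation, minimizing a toy sphere function; the population size, generation count, and mutation scale are illustrative settings.

```python
import numpy as np

def genetic_minimize(fitness, dim, pop_size=50, generations=100,
                     mutation_scale=0.1, seed=0):
    """Minimal GA: keep the fitter half, recombine random parent pairs
    with a uniform crossover mask, then add Gaussian mutation noise."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(size=(pop_size, dim))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[: pop_size // 2]]    # selection
        pairs = rng.integers(len(parents), size=(pop_size, 2))
        mask = rng.random((pop_size, dim)) < 0.5              # crossover mask
        children = np.where(mask, parents[pairs[:, 0]], parents[pairs[:, 1]])
        pop = children + rng.normal(scale=mutation_scale, size=children.shape)
    return pop[np.argmin([fitness(ind) for ind in pop])]

# Toy objective: sphere function with its minimum at the origin.
best = genetic_minimize(lambda w: np.sum(w**2), dim=5)
print(best)
```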

Bayesian Optimization

Bayesian optimization is a powerful method for optimizing expensive black-box functions and is widely used for tuning hyperparameters in DL (Shoeibi et al., 2024; Boulemtafes et al., 2020; Dildar et al., 2021; Xiao et al., 2020). It builds a probabilistic surrogate model of the target function and uses it to select promising hyperparameters iteratively. Gaussian Processes (GPs) are commonly used as the surrogate, and an acquisition function balances exploration and exploitation when choosing the next candidate to evaluate. Improvements such as the Tree-structured Parzen Estimator (TPE) have increased efficiency and scalability, making Bayesian optimization common in automated machine learning.
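
Here is a minimal sketch of that loop, using scikit-learn's GaussianProcessRegressor as the surrogate and a lower-confidence-bound acquisition evaluated on random candidates. The one-dimensional objective, bounds, and kappa value are illustrative assumptions; practical libraries typically use richer acquisitions such as expected improvement.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def bayes_opt(objective, bounds, n_init=5, n_iter=20, kappa=2.0, seed=0):
    """Minimal Bayesian optimisation: fit a GP surrogate to past
    evaluations, then pick the next point by a lower confidence bound."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_init, 1))
    y = np.array([objective(x[0]) for x in X])
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
        cand = rng.uniform(lo, hi, size=(256, 1))
        mu, sigma = gp.predict(cand, return_std=True)
        x_next = cand[np.argmin(mu - kappa * sigma)]  # explore vs. exploit
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next[0]))
    return X[np.argmin(y)], y.min()

# Toy 1-D "hyperparameter" objective with a minimum near x = 2 (illustrative).
best_x, best_y = bayes_opt(lambda x: (x - 2.0) ** 2 + 0.1 * np.sin(5 * x),
                           bounds=(-5.0, 5.0))
print(best_x, best_y)
```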

Gradient-Free Optimization

When gradient information is unavailable or unreliable, it may be necessary to employ optimization techniques that do not rely on gradients (Bouguettaya et al., 2022; Khan et al., 2021; Murthy et al., 2020; Boulemtafes et al., 2020). These methods provide robustness and adaptability by exploring the parameter space directly (Lakshmanna et al., 2022; Nagaraju & Chawla, 2020; Saleem et al., 2021). Simulated Annealing (SA) is one such gradient-free method, modelled on the annealing process in metallurgy. To avoid being trapped in local minima, it accepts suboptimal solutions with a certain probability, which decreases as the search progresses. Particle Swarm Optimization (PSO), another gradient-free method, imitates the social dynamics of flocking birds: each particle explores the search space by adjusting its position according to both its own experience and the best-known position of the swarm.
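
A minimal sketch of simulated annealing follows, with Gaussian proposal moves, the standard Metropolis acceptance rule, and a simple 1/step cooling schedule; the objective function, proposal scale, and schedule are all illustrative choices.

```python
import numpy as np

def simulated_annealing(objective, x0, n_steps=5000, t0=1.0, seed=0):
    """Minimal SA: propose random moves, always accept improvements, and
    accept worse moves with a probability that shrinks as the system cools."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = objective(x)
    best, best_f = x.copy(), fx
    for step in range(1, n_steps + 1):
        temp = t0 / step                        # simple cooling schedule
        cand = x + rng.normal(scale=0.1, size=x.shape)
        fc = objective(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / temp):
            x, fx = cand, fc                    # accept (possibly uphill) move
            if fx < best_f:
                best, best_f = x.copy(), fx
    return best, best_f

# Toy multimodal objective (illustrative).
f = lambda w: np.sum(w**2) + np.sum(np.sin(3 * w))
print(simulated_annealing(f, x0=[2.0, -2.0]))
```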

The field of optimization algorithms in DL continues to advance. One trend is the creation of adaptive, flexible optimization methods that can change their behaviour during training. For example, the Lookahead Optimizer improves training stability and convergence by letting an inner optimizer explore several steps ahead before the slow weights are updated. Likewise, the Ranger optimizer merges RAdam and Lookahead, combining the advantages of each technique for robust optimization. Another development combines learning rate schedules with warm restarts. Cyclical Learning Rates (CLR) vary the learning rate within a set range, helping the model escape local minima and navigate the loss surface more effectively, while Stochastic Gradient Descent with Warm Restarts (SGDR) periodically resets the learning rate, enabling the model to visit multiple minima during convergence.
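
The triangular variant of CLR is easy to state in code. The sketch below computes the learning rate at a given training step; the base rate, maximum rate, and cycle length are illustrative hyperparameters.

```python
import numpy as np

def triangular_clr(step, base_lr=1e-4, max_lr=1e-2, cycle_len=2000):
    """Triangular cyclical learning rate: sweep linearly from base_lr up
    to max_lr and back down once per cycle."""
    pos = (step % cycle_len) / cycle_len     # position within the cycle
    tri = 1.0 - abs(2.0 * pos - 1.0)         # 0 -> 1 -> 0 triangle wave
    return base_lr + (max_lr - base_lr) * tri

# The schedule oscillates instead of decaying monotonically.
print([round(triangular_clr(s), 5) for s in (0, 500, 1000, 1500, 2000)])
```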

Optimization in Federated Learning

Federated learning, in which several devices cooperate to train a model in a decentralized fashion without exchanging raw data, poses unique optimization issues (Murat et al., 2020; Wang et al., 2022; Domingues et al., 2020; Hazra et al., 2021). Two key concerns are communication efficiency and data heterogeneity across devices. Federated Averaging (FedAvg) is the optimization approach most commonly used in the field: each device computes updates independently on its local data, and a central server aggregates them, balancing convergence rate against transmission costs. Innovations such as Federated Learning with Compression (FedCom) and Federated Learning with Differential Privacy (FL-DP) address communication efficiency and privacy, respectively.
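
The sketch below captures one communication round of the FedAvg pattern on a toy least-squares problem: local gradient steps on each client, then a size-weighted average at the server. The client data, local step count, and learning rate are illustrative, and real systems add client sampling, compression, and privacy mechanisms.

```python
import numpy as np

def fedavg_round(global_w, client_data, local_steps=5, lr=0.1):
    """One FedAvg round: each client trains locally from the global
    weights; the server averages updates weighted by dataset size."""
    updates, sizes = [], []
    for X, y in client_data:                       # raw data stays local
        w = global_w.copy()
        for _ in range(local_steps):
            grad = 2 * X.T @ (X @ w - y) / len(y)  # local least-squares grad
            w -= lr * grad
        updates.append(w)
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Two toy clients with different local datasets (shapes illustrative).
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(2)]
w = np.zeros(3)
for _ in range(10):                                # ten communication rounds
    w = fedavg_round(w, clients)
print(w)
```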

Optimization in Reinforcement Learning

Reinforcement learning algorithms, which aim to find optimal policies through interaction with an environment, also benefit from sophisticated optimization techniques (Bouguettaya et al., 2022; Khan et al., 2021; Murthy et al., 2020; Wu et al., 2020; Gul et al., 2022). Policy gradient methods such as REINFORCE and Proximal Policy Optimization (PPO) improve the policy by computing the gradient of the expected reward with respect to the policy parameters. Two important advances in RL optimization are Trust Region Policy Optimization (TRPO) and PPO: TRPO constrains policy changes to a trust region to keep updates stable, while PPO simplifies TRPO with a clipped objective function. These methods have significantly improved the efficiency and robustness of RL algorithms.
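
As a minimal illustration of the policy gradient idea, the sketch below runs REINFORCE with a softmax policy and a running-average baseline on a toy multi-armed bandit; the arm rewards, learning rate, and episode count are illustrative, and PPO-style clipping is omitted for brevity.

```python
import numpy as np

def reinforce_bandit(reward_means, n_episodes=3000, lr=0.1, seed=0):
    """Minimal REINFORCE: the update is (reward - baseline) multiplied by
    the gradient of log pi(action), nudging up well-rewarded actions."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(len(reward_means))        # softmax policy logits
    baseline = 0.0
    for _ in range(n_episodes):
        probs = np.exp(theta - theta.max())
        probs /= probs.sum()
        a = rng.choice(len(theta), p=probs)
        r = rng.normal(reward_means[a], 1.0)   # stochastic reward
        grad_log_pi = -probs
        grad_log_pi[a] += 1.0                  # grad of log softmax at arm a
        theta += lr * (r - baseline) * grad_log_pi
        baseline += 0.01 * (r - baseline)      # running-average baseline
    probs = np.exp(theta - theta.max())
    return probs / probs.sum()

# Arm 2 pays most on average (values illustrative); the policy finds it.
print(reinforce_bandit([0.0, 0.5, 1.0]))
```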

Ensemble deep learning

Using many models and combining their predictions yields more robust, accurate, and consistent outcomes through ensemble deep learning (Abdar et al., 2021; Mehrish et al., 2023; Murat et al., 2020; Domingues et al., 2020). Ensemble approaches rest on the idea that, by compensating for individual shortcomings and leveraging combined strengths, a group of diverse models can outperform any single model. The approach has gained significant traction in recent years, largely because of the need for higher performance in demanding tasks such as natural language processing, image recognition, and predictive analytics. A principal motivation for ensemble DL is its ability to reduce overfitting (Altaheri et al., 2023; Mijwil et al., 2023; Bouguettaya et al., 2022; Hazra et al., 2021). DL models, especially those with large architectures, are susceptible to overfitting the training data, resulting in poor generalization to unfamiliar data (Gul et al., 2022; Amiri et al., 2024; Arya et al., 2023). Ensemble methods combine predictions from multiple models to average out individual quirks and random variations, producing predictions that are more general and trustworthy. Methods like bagging, boosting, and stacking are regularly used to build these ensembles, each contributing in a distinct way to the robustness of the final model.

Bagging, short for Bootstrap Aggregating, involves training several instances of the same model on different subsets of the training dataset, where each subset is created by sampling randomly with replacement from the original data. The individual models are trained independently, and their predictions are combined through averaging or voting to reach the final decision. Bagging is highly effective at decreasing variance, which makes it a useful tool for high-variance learners like deep neural networks; random forests, a widely used ensemble technique, apply bagging principles to decision trees. Boosting, by contrast, trains models sequentially, with each new model aiming to correct the mistakes of its predecessors. This iterative procedure learns from errors, enhancing overall precision. AdaBoost and gradient boosting machines (GBMs) are well-known boosting methods, and in DL, techniques such as Gradient Boosted Neural Networks (GBNNs) have demonstrated significant potential. These techniques efficiently decrease both bias and variance, improving performance across different tasks; a sketch of the bagging pattern follows below.
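
The following is a minimal sketch of bagging with bootstrap resampling, using least-squares linear models as stand-in base learners purely for illustration; in a DL setting each member would be a neural network trained on its own resample.

```python
import numpy as np

def train_bagged(train_fn, X, y, n_models=10, seed=0):
    """Train each ensemble member on a bootstrap resample of the data."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.integers(len(y), size=len(y))   # sample with replacement
        models.append(train_fn(X[idx], y[idx]))
    return models

def bagging_predict(models, X):
    """Average member predictions (majority vote for classification)."""
    return np.mean([m(X) for m in models], axis=0)

def fit_linear(Xb, yb):
    """Toy base learner: least-squares fit returning a prediction function."""
    w = np.linalg.lstsq(Xb, yb, rcond=None)[0]
    return lambda Xq: Xq @ w

# Toy regression data (shapes and noise level illustrative).
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=100)
ensemble = train_bagged(fit_linear, X, y)
print(bagging_predict(ensemble, X[:3]))
```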

Stacking involves training multiple base models and using their predictions as inputs for a meta-learner, making it a powerful ensemble technique. The meta-learner combines the forecasts to generate the ultimate result. Stacking combines the advantages of various models, resulting in a versatile and highly efficient method. In the realm of DL, stacking may entail merging models of varied architectures, like convolutional neural networks (CNNs), recurrent neural networks (RNNs), and transformer models, to extract a wide range of features and patterns present in the data. The development of ensemble DL has been greatly impacted by the introduction of DL frameworks and hardware accelerators in recent times. Tools like TensorFlow, PyTorch, and Keras make it easy to create intricate ensemble models. The ability of GPUs and TPUs to process tasks simultaneously has made it possible to train large ensembles in a timely manner, allowing the use of ensemble methods in practical scenarios.
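
To complement the bagging sketch above, here is a minimal sketch of stacking at inference time, where base-model predictions become the meta-learner's input features. The two base functions and the fixed meta-weights are invented for illustration; in practice the meta-learner is itself trained on held-out base predictions.

```python
import numpy as np

def stack_predict(base_models, meta_model, X):
    """Stacking at inference: base predictions form the feature matrix
    that the meta-learner combines into the final output."""
    Z = np.column_stack([m(X) for m in base_models])
    return meta_model(Z)

# Toy setup: two "base models" and a meta-learner that has learned to
# weight them 70/30 (all functions and weights illustrative).
base = [lambda X: X.sum(axis=1), lambda X: X.mean(axis=1)]
meta = lambda Z: Z @ np.array([0.7, 0.3])
X = np.arange(6.0).reshape(3, 2)
print(stack_predict(base, meta, X))
```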

The success of ensemble models depends heavily on the diversity they incorporate. Diversity can be achieved by varying the model architectures, training data, hyperparameters, or even the training objectives. For example, in image recognition tasks, ensembles might merge CNNs with different structures such as ResNet, VGG, and Inception, each contributing distinct strengths. In natural language processing, ensembles might combine models such as BERT, GPT, and Transformer-XL, each specializing in different aspects of language understanding. Neural Architecture Search (NAS) is a developing area that has extended the capabilities of ensemble DL. NAS automates the design of neural network structures, potentially finding novel and highly efficient designs suited to ensembles. By merging architectures discovered through NAS with conventional hand-designed models, ensembles can achieve state-of-the-art performance on various tasks.

Ensemble DL is used in various fields thanks to its flexibility and effectiveness. In healthcare, ensemble models serve disease diagnosis, prognosis prediction, and medical image analysis; ensembles of DL models have achieved impressive accuracy in identifying illnesses such as diabetic retinopathy, pneumonia, and various cancers from medical images, while also proving resilient to changes in data quality and distribution. In finance, ensemble DL models support risk assessment, fraud detection, and algorithmic trading; financial institutions can reduce the likelihood of erroneous results by combining predictions from multiple models to make better decisions. In manufacturing, ensemble techniques are used for predictive maintenance to forecast equipment breakdowns and optimize maintenance timing, reducing downtime and operational costs. Interpretability and trust in ensemble models remain a focus of ongoing research. Although ensembles are recognized for their accuracy, they are frequently criticized as opaque models whose predictions are difficult to explain. Recent work in explainable AI (XAI) tackles this problem by developing methods that offer insight into the decision-making of ensemble models; approaches such as SHAP and LIME are being adapted to work with ensembles, helping stakeholders understand and trust model outputs.

Approaches such as knowledge distillation, which transfer knowledge from an ensemble of models to a single model, are gaining popularity because they retain much of the advantage of ensembles while reducing computational requirements. Another promising path combines ensemble methods with newer AI approaches such as federated learning and multi-task learning. Federated learning allows models to be trained on decentralized data sources without sharing the data, preserving privacy and security; applying ensemble techniques in a federated setting can harness the varied data distributions across nodes to improve the overall model. Multi-task learning, meanwhile, enables models to learn several interconnected tasks at the same time; here, ensembles can share information across tasks to improve generalization and efficiency.

3.4 Conclusions

The field of DL has seen impressive growth, driven by major improvements in techniques and algorithms that have transformed many areas. Modern DL methods such as transformers, GANs, and reinforcement learning are crucial innovations shaping the current landscape of the field. Transformers, equipped with self-attention mechanisms, have set new records in natural language processing (NLP), enabling unparalleled precision and effectiveness in tasks such as language translation, text summarization, and sentiment analysis. Models like GPT-4 and BERT have reached a level of text comprehension and generation approaching that of humans, underlining their significant influence. GANs have pushed the limits of creativity and realism in artificial intelligence. Thanks to their adversarial training approach, GANs have shown impressive abilities in creating extremely lifelike images, videos, and synthetic data for training other machine learning models. This has opened new possibilities in areas like computer vision, art, and entertainment, where demand for high-quality synthetic content continues to grow.

Reinforcement learning, particularly in its deep variants, has demonstrated great potential in decision-making and autonomous systems. Methods such as deep Q-networks (DQNs) and policy gradients have allowed agents to acquire optimal policies in challenging, ever-changing settings, leading to advances in robotics, gaming, and autonomous driving. By gradually improving through interaction with the environment, reinforcement learning algorithms can yield adaptive and intelligent systems. Moreover, the combination of DL with emerging technologies such as edge computing and federated learning is tackling important issues of data privacy, latency, and computational efficiency: edge computing enables immediate processing and decision-making at the location of the data, while federated learning permits joint model training across decentralized data sources without violating privacy. As the field progresses, it is crucial to prioritize scalability, interpretability, and ethical considerations to fully realize the capabilities of DL for the benefit of society.

References

Abdar, M., Pourpanah, F., Hussain, S., Rezazadegan, D., Liu, L., Ghavamzadeh, M., ... & Nahavandi, S. (2021). A review of uncertainty quantification in deep learning: Techniques, applications and challenges. Information Fusion, 76, 243-297.

Altaheri, H., Muhammad, G., Alsulaiman, M., Amin, S. U., Altuwaijri, G. A., Abdul, W., ... & Faisal, M. (2023). Deep learning techniques for classification of electroencephalogram (EEG) motor imagery (MI) signals: A review. Neural Computing and Applications, 35(20), 14681-14722.

Alyoubi, W. L., Shalash, W. M., & Abulkhair, M. F. (2020). Diabetic retinopathy detection through deep learning techniques: A review. Informatics in Medicine Unlocked, 20, 100377.

Amiri, Z., Heidari, A., Navimipour, N. J., Unal, M., & Mousavi, A. (2024). Adventures in data analysis: A systematic review of Deep Learning techniques for pattern recognition in cyber-physical-social systems. Multimedia Tools and Applications, 83(8), 22909-22973.

Arya, A. D., Verma, S. S., Chakarabarti, P., Chakrabarti, T., Elngar, A. A., Kamali, A. M., & Nami, M. (2023). A systematic review on machine learning and deep learning techniques in the effective diagnosis of Alzheimer’s disease. Brain Informatics, 10(1), 17.

Bouguettaya, A., Zarzour, H., Kechida, A., & Taberkit, A. M. (2022). Deep learning techniques to classify agricultural crops through UAV imagery: A review. Neural Computing and Applications, 34(12), 9511-9536.

Boulemtafes, A., Derhab, A., & Challal, Y. (2020). A review of privacy-preserving techniques for deep learning. Neurocomputing, 384, 21-45.

Dildar, M., Akram, S., Irfan, M., Khan, H. U., Ramzan, M., Mahmood, A. R., ... & Mahnashi, M. H. (2021). Skin cancer detection: A review using deep learning techniques. International Journal of Environmental Research and Public Health, 18(10), 5479.

Domingues, I., Pereira, G., Martins, P., Duarte, H., Santos, J., & Abreu, P. H. (2020). Using deep learning techniques in medical imaging: a systematic review of applications on CT and PET. Artificial Intelligence Review, 53, 4093-4160.

Gul, S., Khan, M. S., Bibi, A., Khandakar, A., Ayari, M. A., & Chowdhury, M. E. (2022). Deep learning techniques for liver and liver tumor segmentation: A review. Computers in Biology and Medicine, 147, 105620.

Hasan, A. M., Sohel, F., Diepeveen, D., Laga, H., & Jones, M. G. (2021). A survey of deep learning techniques for weed detection from images. Computers and Electronics in Agriculture, 184, 106067.

Hazra, A., Choudhary, P., & Sheetal Singh, M. (2021). Recent advances in deep learning techniques and its applications: an overview. Advances in Biomedical Engineering and Technology: Select Proceedings of ICBEST 2018, 103-122.

Islam, M. M., Karray, F., Alhajj, R., & Zeng, J. (2021). A review on deep learning techniques for the diagnosis of novel coronavirus (COVID-19). IEEE Access, 9, 30551-30572.

Kashyap, A. A., Raviraj, S., Devarakonda, A., Nayak K, S. R., KV, S., & Bhat, S. J. (2022). Traffic flow prediction models–A review of deep learning techniques. Cogent Engineering, 9(1), 2010510.

Khan, Z. Y., Niu, Z., Sandiwarno, S., & Prince, R. (2021). Deep learning techniques for rating prediction: a survey of the state-of-the-art. Artificial Intelligence Review, 54, 95-135.

Lakshmanna, K., Kaluri, R., Gundluru, N., Alzamil, Z. S., Rajput, D. S., Khan, A. A., ... & Alhussen, A. (2022). A review on deep learning techniques for IoT data. Electronics, 11(10), 1604.

Lee, S. W., Mohammadi, M., Rashidi, S., Rahmani, A. M., Masdari, M., & Hosseinzadeh, M. (2021). Towards secure intrusion detection systems using deep learning techniques: Comprehensive analysis and review. Journal of Network and Computer Applications, 187, 103111.

Mathew, A., Amudha, P., & Sivakumari, S. (2021). Deep learning techniques: an overview. Advanced Machine Learning Technologies and Applications: Proceedings of AMLTA 2020, 599-608.

Mehrish, A., Majumder, N., Bharadwaj, R., Mihalcea, R., & Poria, S. (2023). A review of deep learning techniques for speech processing. Information Fusion, 101869.

Mijwil, M., Salem, I. E., & Ismaeel, M. M. (2023). The significance of machine learning and deep learning techniques in cybersecurity: A comprehensive review. Iraqi Journal For Computer Science and Mathematics, 4(1), 87-101.

Murat, F., Yildirim, O., Talo, M., Baloglu, U. B., Demir, Y., & Acharya, U. R. (2020). Application of deep learning techniques for heartbeats detection using ECG signals-analysis and review. Computers in Biology and Medicine, 120, 103726.

Murthy, C. B., Hashmi, M. F., Bokde, N. D., & Geem, Z. W. (2020). Investigations of object detection in images/videos using various deep learning techniques and embedded platforms—A comprehensive review. Applied Sciences, 10(9), 3280.

Nagaraju, M., & Chawla, P. (2020). Systematic review of deep learning techniques in plant disease detection. International Journal of System Assurance Engineering and Management, 11(3), 547-560.

Oprea, S., Martinez-Gonzalez, P., Garcia-Garcia, A., Castro-Vargas, J. A., Orts-Escolano, S., Garcia-Rodriguez, J., & Argyros, A. (2020). A review on deep learning techniques for video prediction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(6), 2806-2826.

Saleem, M. H., Potgieter, J., & Arif, K. M. (2021). Automation in agriculture by machine and deep learning techniques: A review of recent developments. Precision Agriculture, 22(6), 2053-2091.

Sarker, I. H. (2021). Deep learning: a comprehensive overview on techniques, taxonomy, applications and research directions. SN Computer Science, 2(6), 420.

Sharma, V. K., & Mir, R. N. (2020). A comprehensive and systematic look up into deep learning based object detection techniques: A review. Computer Science Review, 38, 100301.

Shoeibi, A., Khodatars, M., Jafari, M., Ghassemi, N., Sadeghi, D., Moridian, P., ... & Gorriz, J. M. (2024). Automated detection and forecasting of covid-19 using deep learning techniques: A review. Neurocomputing, 127317.

Shoeibi, A., Khodatars, M., Jafari, M., Moridian, P., Rezaei, M., Alizadehsani, R., ... & Acharya, U. R. (2021). Applications of deep learning techniques for automated multiple sclerosis detection using magnetic resonance imaging: A review. Computers in Biology and Medicine, 136, 104697.

Tulbure, A. A., Tulbure, A. A., & Dulf, E. H. (2022). A review on modern defect detection models using DCNNs–Deep convolutional neural networks. Journal of Advanced Research, 35, 33-48.

Wang, N., Wang, Y., & Er, M. J. (2022). Review on deep learning techniques for marine object recognition: Architectures and algorithms. Control Engineering Practice, 118, 104458.

Wu, Y., Wei, D., & Feng, J. (2020). Network attacks detection methods based on deep learning techniques: a survey. Security and Communication Networks, 2020(1), 8872923.

Xiao, Y., Tian, Z., Yu, J., Zhang, Y., Liu, S., Du, S., & Lan, X. (2020). A review of object detection based on deep learning. Multimedia Tools and Applications, 79, 23729-23791.

Yang, S., Wang, Y., & Chu, X. (2020). A survey of deep learning techniques for neural machine translation. arXiv preprint arXiv:2002.07526.

Zhang, W., Li, H., Li, Y., Liu, H., Chen, Y., & Ding, X. (2021). Application of deep learning algorithms in geotechnical engineering: a short critical review. Artificial Intelligence Review, 1-41.
