Performance metrics analysis: Evaluating machine learning models in the detection of cloud-based DDoS

Authors

Chisom Elizabeth Alozie
University of the Cumberlands, United States

Synopsis

Tables 10 and 11 show the performance metric results of this experiment. The metrics used to evaluate the machine learning models are accuracy, precision, recall, F1-score, and computation time, reported for the newly generated dataset and the open-source dataset in Tables 10 and 11, respectively. An 80:20 split of the overall dataset was used for model building, where 80% was used for training and 20% for validation and testing. The objective of the evaluation is to assess the effectiveness of the DDoS datasets in detecting DDoS attacks in a cloud system. The results show that the new dataset performed very well, achieving 100% accuracy with Random Forest (RF), SVM, Decision Tree (DT), and KNN, and 98% with Naïve Bayes (NB). The F1-score was 100% for all models except NB, which achieved 98%. This indicates that a model trained on the newly generated dataset performs very well: it correctly predicts threats (precision) and captures all relevant cases of malicious traffic (recall) at a 100% rate for RF, DT, and KNN, at 99% for SVM, and at 97% for NB.
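A minimal sketch of this evaluation setup is given below, assuming scikit-learn. The synthetic data produced by make_classification is only a placeholder for the newly generated DDoS dataset (which is not reproduced here), and the Random Forest stands in for any one of the five models; the split ratio and metrics mirror those described above.

    import time

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import (accuracy_score, f1_score,
                                 precision_score, recall_score)
    from sklearn.model_selection import train_test_split

    # Placeholder data standing in for the newly generated cloud DDoS dataset.
    X, y = make_classification(n_samples=5000, n_features=20, random_state=42)

    # 80:20 split: 80% for training, 20% for validation and testing.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.20, random_state=42)

    model = RandomForestClassifier(n_estimators=100, random_state=42)

    # Computation time covers training plus prediction on the held-out 20%.
    start = time.perf_counter()
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    elapsed = time.perf_counter() - start

    print(f"Accuracy : {accuracy_score(y_test, y_pred):.4f}")
    print(f"Precision: {precision_score(y_test, y_pred):.4f}")
    print(f"Recall   : {recall_score(y_test, y_pred):.4f}")
    print(f"F1-score : {f1_score(y_test, y_pred):.4f}")
    print(f"Time (s) : {elapsed:.2f}")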
On the other hand, the CSE-CIC-IDS2018 dataset was used to train on a larger sample size, since time constraints did not allow a sample of that size to be generated for the new dataset. RF, DT, and KNN achieved 100% accuracy, followed by NB at 99%, while SVM performed the lowest with an accuracy of 95%, as shown in Table 11. This indicates that the CSE-CIC-IDS2018 dataset is a very good data source for detecting DDoS attacks on a cloud network.
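The comparison across the five classifiers could be reproduced along the lines of the sketch below. The synthetic data is again only a stand-in for the preprocessed CSE-CIC-IDS2018 flow features; the actual loading, feature selection, and preprocessing of that dataset are not shown here.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score, f1_score
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    # Stand-in for the preprocessed CSE-CIC-IDS2018 sample (benign vs. DDoS).
    X, y = make_classification(n_samples=10000, n_features=30, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.20, random_state=0)

    models = {
        "Random Forest": RandomForestClassifier(random_state=0),
        "SVM": SVC(),
        "Decision Tree": DecisionTreeClassifier(random_state=0),
        "KNN": KNeighborsClassifier(),
        "Naive Bayes": GaussianNB(),
    }

    # Train each model and report accuracy and F1 on the held-out 20%.
    for name, model in models.items():
        model.fit(X_train, y_train)
        y_pred = model.predict(X_test)
        print(f"{name:15s} accuracy={accuracy_score(y_test, y_pred):.4f} "
              f"F1={f1_score(y_test, y_pred):.4f}")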

Published

February 2, 2025

How to Cite

Alozie, C. E. (2025). Performance metrics analysis: Evaluating machine learning models in the detection of cloud-based DDoS. In Analysing Cloud DDoS Attacks Using Supervised Machine Learning (pp. 49-63). Deep Science Publishing. https://doi.org/10.70593/978-93-49307-78-0_4