Dynamic RBFN with vector attention-guided feature selection for spam detection in social media
Article, Complex and Intelligent Systems, 2026, DOI Link
Online social media platforms have emerged as primary engagement channels for internet users, leading to increased dependency on social network information. This growing reliance has attracted cybercriminals, resulting in a surge of malicious activities such as spam. Consequently, there is a pressing need for efficient spam detection mechanisms. Although several techniques have been proposed for social network spam detection, spammers continually evolve their strategies to bypass these systems. In response, researchers have focused on extracting additional features to better identify spammer patterns. However, this often introduces feature redundancy and complexity, which traditional machine learning-based feature selection methods struggle to manage in highly complex datasets. To address this, we propose a novel attention network-based feature selection method that assigns weights to features based on their importance, reducing redundancy while retaining relevant information. Additionally, an adaptive Radial Basis Function Neural Network (RBFN) is employed for spam classification, enabling dynamic weight updates to reflect evolving spam behaviors. The proposed method is evaluated against state-of-the-art feature selection methods, deep learning models, and existing spam detection techniques using accuracy, F-measure, and false-positive rate. Experimental results demonstrate that our approach outperforms existing methods, offering superior performance in detecting spam on social networks.
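A minimal sketch of the two described components, attention-based feature weighting followed by a Gaussian RBF classifier; the module shapes, center count, and dimensions below are illustrative assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn

class AttentionFeatureSelector(nn.Module):
    """Learns per-feature importance weights via a small attention head."""
    def __init__(self, n_features):
        super().__init__()
        self.score = nn.Linear(n_features, n_features)

    def forward(self, x):
        weights = torch.softmax(self.score(x), dim=-1)  # feature importance
        return x * weights, weights                     # re-weighted features

class RBFNClassifier(nn.Module):
    """Radial Basis Function network: Gaussian activations over learned centers."""
    def __init__(self, n_features, n_centers, n_classes):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_centers, n_features))
        self.log_gamma = nn.Parameter(torch.zeros(n_centers))
        self.out = nn.Linear(n_centers, n_classes)

    def forward(self, x):
        dist = torch.cdist(x, self.centers) ** 2         # squared distances
        phi = torch.exp(-self.log_gamma.exp() * dist)    # Gaussian RBF layer
        return self.out(phi)

selector = AttentionFeatureSelector(n_features=64)
rbfn = RBFNClassifier(n_features=64, n_centers=32, n_classes=2)
x = torch.randn(8, 64)                                   # dummy mini-batch
weighted, importance = selector(x)
logits = rbfn(weighted)
```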
Optimized CNN-Transformer Hybrid Model for Enhanced Brain Tumor Detection in Medical Imaging
Tatwa D.D., Elakkiya E., Antonyraj S., Nayak A., Sah L.R.B., Sah S.K.
Conference paper, 2025 4th OPJU International Technology Conference on Smart Computing for Innovation and Advancement in Industry 5.0, OTCON 2025, 2025, DOI Link
Detecting brain tumors manually from MRI scans is challenging, time-consuming, and often inaccurate due to similarities in tissue and tumor appearance. This highlights the need for an efficient automatic tumor detection system. We propose a deep learning-based model for brain tumor detection from 2D MRI scans. The model utilizes convolutional neural networks with transformer blocks to enhance spatial and contextual feature recognition. Trained on diverse tumor images, our approach showed superior performance compared to traditional methods such as SVM. Implemented using TensorFlow and Keras, this method supports accurate and rapid tumor detection for clinical applications. In our study, the CNN model achieved an accuracy of 99.46%, surpassing the current state-of-the-art results. This CNN-based approach can assist doctors in accurately detecting brain tumors in MRI images, potentially speeding up the treatment process significantly.
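A minimal Keras sketch in the spirit of the described hybrid, a convolutional stem followed by one transformer block; layer sizes, head counts, and the input shape are assumptions for illustration, not the published architecture:

```python
from tensorflow.keras import layers, models

def cnn_transformer(input_shape=(128, 128, 1), num_heads=4, num_classes=2):
    inp = layers.Input(shape=input_shape)
    # Convolutional stem extracts local spatial features from the MRI slice
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D()(x)
    # Flatten the feature map into a sequence of patch-like tokens
    seq = layers.Reshape((-1, 64))(x)
    # Transformer block adds global context via self-attention
    attn = layers.MultiHeadAttention(num_heads=num_heads, key_dim=64)(seq, seq)
    seq = layers.LayerNormalization()(seq + attn)
    ff = layers.Dense(128, activation="relu")(seq)
    ff = layers.Dense(64)(ff)
    seq = layers.LayerNormalization()(seq + ff)
    x = layers.GlobalAveragePooling1D()(seq)
    out = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inp, out)

model = cnn_transformer()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```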
Artificial intelligence based on multi objective algorithm for effective load forecasting
Raj S.A., Kumar S.V.D.A., Elakkiya E., Palamarthi G.K., Palepu S., Bashida S.
Book chapter, Integrated Technologies in Electrical, Electronics and Biotechnology Engineering, 2025, DOI Link
In recent years, researchers have directed more attention towards accurately predicting and maintaining stable loads, recognizing their profound impact on the economy and the crucial need for effective power system management. However, the majority of past studies have focused solely on either decreasing forecast errors or improving stability, with few delving into both simultaneously. Developing a forecasting model that addresses both objectives concurrently presents a formidable task, primarily due to the intricate nature of load behavior patterns. Hence, in order to accomplish both objectives concurrently, we propose and implement an Artificial Intelligence based Multi-Objective Algorithm (AIMOA). The suggested model demonstrates superior performance compared to baseline models across different real-world electricity datasets.
Hybrid Models for Enhanced Intrusion Detection on NSL-KDD and KDD CUP 99 Datasets
Elakkiya E., Chukka B., Kadiyam K.S.T., Pulagam P., Raj S.A.
Conference paper, 2025 4th OPJU International Technology Conference on Smart Computing for Innovation and Advancement in Industry 5.0, OTCON 2025, 2025, DOI Link
Intrusion detection is essential for safeguarding computer networks against malicious activities. This work integrates three advanced approaches to achieve robust intrusion detection, leveraging two distinct datasets. Firstly, a Graph Neural Network (GNN) and Tabular Transformer model utilize the KDD Cup 99 dataset to classify network intrusions, achieving best-in-class accuracy by effectively modeling complex relationships within the data. Secondly, a Generative Adversarial Network (GAN)-augmented Multilayer Perceptron (MLP) employs the NSL-KDD dataset to enhance data diversity, generating realistic synthetic samples that improve classification performance. Lastly, a hybrid framework combining Variational Autoencoders (VAEs) and GANs, also leveraging the NSL-KDD dataset, addresses class imbalance and data synthesis challenges, producing high-quality synthetic data while retaining essential features. Each approach achieves its best accuracy on its respective dataset, demonstrating significant advancements in intrusion detection accuracy, reducing false alarm rates, and ensuring computational efficiency.
Quantum Computers: Real-World Applications and Challenges
Natarajan G., Anna Bai S.C.P., Soman S., Elango E.
Book chapter, Quantum Computing and Artificial Intelligence: The Industry Use Cases, 2025, DOI Link
Quantum computing has aided in the advancement of Artificial Intelligence and Machine Learning technology. In recent years, quantum computing has grown in popularity, and Artificial Intelligence has emerged as one of its key application areas. The use of quantum computing, a novel technology that processes data using quantum physics theories, may dramatically boost the speed and effectiveness of machine learning. Quantum computing is based on quantum physics, which varies from classical physics in numerous ways. In quantum physics, particles such as electrons and photons, which can exist in several states at the same time, can be used to represent information. This means that quantum computers can process massive amounts of data far faster than ordinary computers, as well as solve complex problems that normal computers find difficult. Quantum computing can be used to speed up complex computations and simulations in the field of Machine Learning. A quantum computer, for example, may be used to rapidly evaluate large data sets and identify patterns that would be difficult to detect using traditional computing approaches. Clearly, the use of quantum computing can improve the efficiency of Machine Learning systems. Even Artificial Intelligence systems can be made significantly more effective and efficient by using a quantum computer to examine data and decide the most effective ways to carry out various tasks. This type of optimization has the potential to dramatically increase the efficiency of Artificial Intelligence systems while also cutting maintenance costs. This chapter begins with a basic introduction to quantum computing, quantum physics, the types of quantum computers, and their characteristics. Later, the advantages and disadvantages of quantum computers are discussed, and some of the real-world applications of quantum computers in the most prevalent domains, such as drug discovery, financial modeling, weather forecasting, traffic management, and environmental modeling, are explained. Finally, the challenges in implementing quantum computers are elucidated.
Exploring the Versatile Applications of Graph Neural Networks
Raja S.R., Natarajan G., Bose S., Elango E.
Book chapter, Graph Neural Networks: Essentials and Use Cases, 2025, DOI Link
This chapter explores the diverse applications of graph neural networks (GNNs), a powerful class of neural networks designed to work directly with graph-structured data. We begin with an overview of the fundamental principles of GNNs, highlighting their ability to capture complex relationships and dependencies in data represented as graphs. The chapter is organized into several key application domains, including social network analysis, where GNNs are used to predict user behaviour and enhance recommendation systems; bioinformatics, where they facilitate drug discovery and protein-protein interaction prediction; and natural language processing, where they assist in semantic understanding and relation extraction. Additionally, we delve into the realm of computer vision, demonstrating how GNNs can improve object detection and scene understanding by modelling spatial relationships. Furthermore, we examine emerging applications in areas such as financial fraud detection, traffic prediction, and knowledge graph completion. Each section discusses specific case studies, the architecture of the GNNs employed, and the results achieved, underscoring the versatility and effectiveness of GNNs across various fields. In conclusion, we reflect on the future of GNN research, highlighting potential challenges and opportunities for innovation, including the need for scalability, interpretability, and integration with other machine learning frameworks. This chapter serves as a comprehensive resource for researchers and practitioners looking to harness the power of GNNs in their respective domains.
Leveraging machine and deep learning (ML/DL) algorithms towards AI models for automating software development
Natarajan G., Elango E., Cyriac R., Muthusamy S.
Article, Advances in Computers, 2025, DOI Link
The rapid progress of artificial intelligence, which deploys algorithms based on deep learning and machine learning to automate various types of procedures, has reshaped software development. This chapter examines the role of ML/DL technology in AI-driven software development, focusing on its application in code construction, bug identification, program adaptability, and future maintenance. The efficiency of important ML and DL models in automating development processes is examined, including decision trees, neural networks, transformers, and generative models. Ethical issues, model interpretability, and difficulties in data quality are also discussed. This study demonstrates AI's ability to increase productivity, reduce development time, and improve software reliability by providing insights into ML/DL-driven automation.
Leveraging Artificial Intelligence and IoT for Healthcare 5.0: Use Cases, Applications, and Challenges
Natarajan G., Elango E., Soman S., Bai S.C.P.A.
Book chapter, Edge AI for Industry 5.0 and Healthcare 5.0 Applications, 2025, DOI Link
Since the beginning of the Industrial Revolution, successive manufacturing improvements have resulted in increasingly complicated, automated, and sustainable production techniques, enabling machines to be handled with ease of use, performance, and durability in modern expanding areas. People presently demand the human touch of mass personalization; hence, Industry 5.0 aids them in the transition from mass manufacturing to mass personalization. Industry 5.0 is enabling mass customization, and today's industry needs significant advancements in manufacturing processes, production system digitalization, and intelligence. Previously, Industry 4.0 enabled mass customization, which was insufficient. Type 1 diabetes, for example, is difficult to manage since people have different metabolic rates and body dimensions, as well as different skin thicknesses, behaviors, and lifestyles. The transition to Industry 5.0 enables the provision of an application that tracks people's habits and routines, developing a diabetic control approach and, eventually, a smaller, more discreet, and dependable gadget personalized to the individual. The ability to create an Industry 5.0 technique would thus be completely life-changing for diabetes patients. With the goal of developing symmetrical innovation, Industry 5.0 may gain insight via big data that creates a network of digital information. It may do what a human wishes by utilizing cooperative robots to increase precision and performance. For instance, collaborative robots can be used on the operating table to conduct novel surgery. According to Forrester's perspective, big data consists of four components: information volume, information diversity, information value, and the speed at which new information is generated and interpreted. The Internet of Things (IoT), in which connected, sensor-equipped equipment communicates data to other machines and computer systems, automates various operations, and collects vast amounts of new data types, is one of the reliable enablers. The essential role of artificial intelligence is discussed in this chapter. The role of IoT in modern medical equipment manufacturing is elaborated. Explainable AI advancements and sophisticated enhancements provided by Industry 5.0 are discussed. Later, modern healthcare systems integrated with Industry 5.0, along with their several applications and challenges, are depicted.
Machine Learning for Bioinformatics and Healthcare: Applications and Challenges
Natarajan G., Gnanasekaran R., Balasubramanian S., Elango E.
Book chapter, Studies in Big Data, 2025, DOI Link
The fusion of bioinformatics and machine learning (ML) has brought forth a new era of innovative healthcare. The various applications of machine learning in bioinformatics and healthcare are examined in this chapter, with an emphasis on how revolutionary these applications could be for disease identification, drug development, and customized therapy. Because of its ability to find intricate patterns in large datasets, machine learning has emerged as a key component of bioinformatics study. Machine learning algorithms are used in genomics to decipher the human genome, identify genetic variants associated with diseases, and predict an individual’s susceptibility to inherited disorders. Targeted medicines and individualized healthcare interventions are being made possible by this fresh understanding. Machine learning (ML) is a critical component of drug research and development since it helps identify promising drug candidates, speeds up chemical compound screening, and improves clinical trial designs. By employing data-driven insights and predictive modeling, machine learning (ML) is bringing innovative drugs to market faster and cheaper, which will eventually benefit patients all around the world. Moreover, ML-driven healthcare uses extend beyond the creation of new drugs and genomes. Machine learning is increasing the precision and effectiveness of healthcare delivery in a variety of domains, including patient risk assessment, therapeutic recommendation, and diagnostic assistance systems. These AI-powered tools enhance patient outcomes, lower diagnostic error rates, and enable healthcare professionals to make wiser decisions.
The evolution of healthcare: bridging conventional and quantum computing
Hanees A.L., Elango E., Natarajan G., Nagasubramanian G.
Book chapter, Quantum Computing for Healthcare Data: Revolutionizing the Future of Medicine, 2025, DOI Link
Quantum computing is poised to transform the medical field, complementing and in some cases replacing traditional computing. Pattern identification and predictive analysis are two kinds of tasks that quantum computing may expedite significantly. By contrast, classical computing, fueled by artificial intelligence techniques including machine learning and deep learning, relies primarily on enormous datasets to drive such operations. This development is anticipated to enable real-time visualization of complex medical records, leading to faster and more accurate diagnoses drawn from genetic and imaging information. By leveraging the mathematical capabilities of quantum computing, healthcare providers can anticipate significant advancements in individualized medicine, therapy optimization, and overall patient care, raising the standards for the delivery of medical services. Innovative collaborations already exist between the quantum computing and healthcare sectors, and it may be only a matter of time before the field of healthcare is drastically changed by quantum computing. The development of quantum technology means that an entirely new phase of computation is about to begin. Although rooted in a purely scientific subject, the laws of quantum mechanics and their technologies have the power to transform a variety of industries, including healthcare. Quantum convergence presents enormous opportunities throughout the medical sector. Technologies in general, and AI in particular, have already brought major improvements to healthcare, transforming the industry to provide better care, assistance, and diagnosis. In the same way, quantum computing promises to revolutionize its applications in healthcare. Personalized medicine that draws on pharmacokinetics, human physiology, and genomics is becoming the standard, and quantum computing is an ideal means to achieve it.
6G communication paradigm: A comprehensive exploration of the next generation connectivity
Natarajan G., Elango E., Anna Bai S.C.P., Gnanasekaran R.
Article, Advances in Computers, 2025, DOI Link
The introduction of 6G communication signals a watershed moment in the growth of wireless technology, ushering us into an age in which connection defies present boundaries. This chapter explores the multidimensional terrain of 6G, deconstructing its essential components and emphasizing its revolutionary potential. In terms of data speed, 6G promises to break previous records, with unparalleled gigabit-per-second speeds that revolutionize the idea of real-time communication. The chapter goes into the technological advances that are propelling this quantum leap, including enhanced modulation methods, unique antenna designs, and the incorporation of terahertz frequency ranges. Furthermore, the emphasis on ultra-low latency in 6G ushers in a paradigm change, allowing applications requiring immediate reaction, ranging from immersive augmented reality experiences to mission-critical autonomous systems. This chapter delves into the various technologies, such as edge computing and network slicing, that are aimed to reduce latency and improve overall user experience. The discussion then progresses to the lofty objective of accommodating a significant increase in device density. 6G envisions a future in which the Internet of Things (IoT) expands to unprecedented proportions, necessitating novel technologies to manage the sheer number of linked devices. A review of 6G's architectural advances, such as decentralized networks and better energy-efficient protocols, sheds light on the technology's ability to sustain this vast ecosystem. Beyond the technological elements, the chapter explores the possible societal consequences of 6G, which range from access democratization to improved healthcare services and intelligent urban planning. The ethical aspects, security measures, and worldwide consequences of 6G adoption are also examined. In summary, this chapter serves as a complete investigation of the technology, applications, and consequences that collectively form the next-generation communication paradigm.
Edge AI for Connected Healthcare in Internet of Medical Things for Smart Cities
Hanifa M.S.M., Alnaamani K.S.H., Natarajan G., Elango E.
Book chapter, Edge AI for Industry 5.0 and Healthcare 5.0 Applications, 2025, DOI Link
The convergence of edge AI and the Internet of Medical Things (IoMT) has ushered in a transformative era for connected healthcare, particularly in the context of smart cities. This chapter explores the integration of edge AI technologies within the IoMT framework to enhance the efficiency, accessibility, and intelligence of healthcare services in urban environments. Leveraging the proximity of edge computing to medical devices and sensors, this approach minimizes latency, reduces bandwidth consumption, and ensures real-time processing of critical healthcare data. The synergy between edge AI and IoMT not only facilitates advanced diagnostics and personalized treatment but also contributes to the development of proactive healthcare systems. The chapter delves into the challenges and opportunities associated with deploying edge AI in connected healthcare, addressing issues of security, privacy, and interoperability. Furthermore, it examines the impact of edge AI in optimizing resource utilization, improving patient outcomes, and fostering a data-driven healthcare ecosystem. As smart cities evolve, the integration of edge AI into the IoMT promises to revolutionize healthcare delivery, paving the way for more resilient, adaptive, and responsive medical systems.
Delineating artificial intelligence and its engrossing potentials for automated software developments
Elango E., Balasubramanian S., Natarajan G.
Article, Advances in Computers, 2025, DOI Link
Artificial Intelligence (AI) is revolutionizing software development, optimizing code production, increasing productivity, and automating complex operations. This study examines how AI can revolutionize software development through automated code synthesis, debugging, testing, and design. Software engineering is becoming increasingly automated through AI-driven models such as machine learning and deep learning, which reduce human labor while increasing accuracy and dependability. Predictive analytics, automatic refactoring, and generative models are examples of the AI-powered tools that are changing traditional development processes. This chapter highlights AI's ability to drive innovation and effectiveness in the software sector, and underlines its capabilities, difficulties, and possibilities for automated software development.
FLAGaTST: Fuzzy Logic Transformed Adversarial GAN and Time Series Transformer for Robust MPPT Under Partial Shading Conditions
Elakkiya E., Antony Raj S., Priya S.
Article, IEEE Access, 2025, DOI Link
Partial shading poses a significant challenge in photovoltaic systems by creating multiple peaks in the power-voltage curve, complicating the task of accurately tracking the Maximum Power Point. Traditional maximum power point tracking methods often struggle to identify the true Global Maximum Power Point, leading to suboptimal energy harvesting. This paper proposes a novel hybrid tracking framework that integrates fuzzy logic, synthetic data generation using Generative Adversarial Networks (GANs), and time-series modeling with Transformer architectures. Fuzzy logic improves resilience to input uncertainties by translating raw data into interpretable fuzzy values. GANs augment the dataset by generating realistic synthetic samples, thereby improving generalization. The Transformer model leverages self-attention mechanisms to capture long-term temporal patterns in solar irradiance and power profiles. By combining these strengths, the proposed method delivers a robust and accurate global maximum power point tracking solution, particularly under dynamic and partially shaded environments. Experimental results demonstrate its superior performance and scalability compared to conventional maximum power point tracking approaches.
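As one concrete piece of the described pipeline, a minimal sketch of the fuzzification step, assuming triangular membership functions and hypothetical irradiance breakpoints; the paper's actual membership design is not reproduced here:

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership: rises from a to peak b, falls back to zero at c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzify_irradiance(g):
    """Map raw irradiance (W/m^2) to interpretable fuzzy degrees."""
    return np.stack([
        triangular(g, -1, 0, 400),       # "low"
        triangular(g, 200, 500, 800),    # "medium"
        triangular(g, 600, 1000, 1400),  # "high"
    ], axis=-1)

g = np.array([150.0, 520.0, 950.0])      # sample irradiance readings
print(fuzzify_irradiance(g))             # fuzzy features fed to the Transformer
```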
Introduction to quantum computing in healthcare
Elango E., Nagasubramanian G., Kumar S.R.
Book chapter, Quantum Computing for Healthcare Data: Revolutionizing the Future of Medicine, 2025, DOI Link
The application of quantum computing to healthcare has the potential to completely transform methods used in medical research, diagnosis, and therapy. An overview of the possible uses of quantum computing in healthcare is given in this abstract. It looks into how quantum computing's capacity to handle enormous volumes of data at once can speed up difficult computations in fields like genetic research, medication development, and customized medicine. It also covers how quantum computing might improve healthcare data management systems' security and effectiveness by using cutting-edge encryption algorithms and optimization strategies. The abstract also looks at the opportunities and difficulties that lie ahead for incorporating quantum computing innovations into the healthcare environment, highlighting the revolutionary potential that these technologies might possess for enhancing patient outcomes and fostering medical innovation.
Optimizing Deep Learning for Pneumonia Diagnosis Using Chest X-Ray Data
Book chapter, Sensor Data Analytics for Intelligent Healthcare Delivery, 2025, DOI Link
DALEX (Model Agnostic Exploration, Explanation and Learning Implementation in Interpretable AI)
Anandaram H., Subramaniyan S.K., Sundaravadivazhagan B., Shanmuganathan B., Elango E.
Book chapter, Interpretable and Trustworthy AI: Techniques and Frameworks, 2025, DOI Link
Demystifying AI: A Comparative Study on Artificial General Intelligence and Artificial Super Intelligence
Natarajan G., Jeyaraman P., Sundaravadivazhagan B., Elango E.
Book chapter, Interpretable and Trustworthy AI: Techniques and Frameworks, 2025, DOI Link
Local Interpretable Model-Agnostic Explanations (LIME)
Elango E., Subramaniyan S.K., Anandaram H., Shanmuganathan B.
Book chapter, Interpretable and Trustworthy AI: Techniques and Frameworks, 2025, DOI Link
Challenges in Integrating Machine Learning and Deep Learning with VR/AR
Elango E., Natarajan G., Hanees A.L., Balasundaram I.
Book chapter, Virtual Reality and Augmented Reality with 6G Communication, 2025, DOI Link
Virtual reality (VR) and augmented reality (AR), once specialist technologies, have quickly transformed into potent tools with a broad range of applications across several industries. These technological advances build virtual worlds that either completely immerse people in virtual experiences (VR) or supplement the real world (AR). In recent decades, the integration of deep learning (DL) and machine learning (ML) algorithms has significantly expanded the fascinating potential and capacities of AR and VR by improving realism, customization, and engagement. This chapter examines the challenges that ML and DL are addressing, the great opportunities they present for the future, and the ways these algorithms are changing AR and VR.
Augmented Reality and Virtual Reality: Transforming the Learning Experience with AI Tools
Natarajan G., Raja S.R., Elango E., Soman S.
Book chapter, Virtual Reality and Augmented Reality with 6G Communication, 2025, DOI Link
To upgrade the learning experience in a constantly changing educational landscape, it is crucial to incorporate emerging technology into it. This chapter explores the complementary capabilities of augmented reality (AR) and virtual reality (VR), in collaboration with artificial intelligence (AI) tools, to upgrade educational practices. The combination of these technologies provides a dynamic platform for focused and individualized learning that serves a wide range of educational requirements. By projecting virtual content onto real-world situations, augmented reality in education strives to reinforce students' engagement and understanding. Virtual reality, on the other hand, assists the learning experience by creating fully simulated worlds. When combined with AI, these technologies can adapt material delivery to an individual's learning style, progress, and preferences, resulting in a more personalized learning experience. This chapter explores the numerous ways AR and VR applications can be enhanced with the assistance of AI tools in education. Student performance data can be analyzed with the aid of AI systems, providing educators with valuable information for personalized strategies. For a more responsive and engaging learning environment, natural language processing enables conversational and adaptive virtual tutors. Moreover, the impact of AR and VR on skill learning is discussed in this chapter. Learners can apply theoretical knowledge to practical use with the help of AI-powered immersive simulations that replicate real-world situations, which in turn supports critical thinking and problem-solving capabilities. Potential issues and concerns around the use of AR and VR with AI in education, such as access barriers, ethical worries, and the need for teacher training to successfully integrate emerging technology into the syllabus, are likewise reviewed in the chapter. The chapter concludes by emphasizing the transformational possibilities of joining augmented reality, virtual reality, and artificial intelligence in education. This progressive approach has the potential to overcome the drawbacks of conventional education and prepare students for the demands of a rapidly changing world by creating personalized, immersive, and responsive learning experiences.
Multiple Granularity Context Representation based Deep Learning Model for Disaster Tweet Identification
Conference paper, 2024 5th International Conference on Innovative Trends in Information Technology, ICITIIT 2024, 2024, DOI Link
Twitter has evolved into a pivotal platform for information exchange, particularly during emergencies. However, amidst the vast array of data, identifying tweets relevant to damage assessment remains a significant challenge. In response to this challenge, this study presents a novel approach designed to identify tweets related to damage assessment in times of crises. The challenge lies in sifting through an immense volume of data to isolate tweets pertinent to the specific event. Recent studies suggest that employing contextual word embedding approaches, such as transformers, rather than traditional context-free methods, can enhance the accuracy of disaster detection models. This study leverages multiple granularity level context representation at the character and word levels to bolster the efficiency of deep neural network techniques in distinguishing between disaster-related tweets and unrelated ones. Specifically, the weighted character representation, generated with the self-attention layer, is utilized to discern important information at the fine character level. Concurrently, Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) algorithms are employed in the word-level embedding to capture global context representation. The effectiveness of the proposed learning model is assessed by comparing it with existing models, utilizing evaluation measures, viz. accuracy, F1 score, precision, and recall. The results demonstrate the effectiveness of our model compared to existing methods.
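A minimal sketch of the two-granularity idea, a character branch weighted by self-attention alongside CNN and LSTM word branches; vocabulary sizes, sequence lengths, and layer widths are illustrative assumptions, not the paper's configuration:

```python
from tensorflow.keras import layers, models

MAX_CHARS, MAX_WORDS, CHAR_VOCAB, WORD_VOCAB = 280, 40, 100, 20000

# Character branch: fine-grained representation weighted by self-attention
char_in = layers.Input(shape=(MAX_CHARS,))
c = layers.Embedding(CHAR_VOCAB, 32)(char_in)
c = layers.MultiHeadAttention(num_heads=2, key_dim=32)(c, c)
c = layers.GlobalAveragePooling1D()(c)

# Word branch: CNN for local n-gram patterns, LSTM for global context
word_in = layers.Input(shape=(MAX_WORDS,))
w = layers.Embedding(WORD_VOCAB, 100)(word_in)
w_cnn = layers.GlobalMaxPooling1D()(layers.Conv1D(64, 3, activation="relu")(w))
w_lstm = layers.LSTM(64)(w)

merged = layers.Concatenate()([c, w_cnn, w_lstm])
out = layers.Dense(1, activation="sigmoid")(merged)   # disaster vs. not
model = models.Model([char_in, word_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```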
RBFN-Augmented DDoS Detection with CNN-GRU Fusion
Elakkiya E., Bista R.B., Shah C., Rajput A., Gupta A.K., Chaudhary R.
Conference paper, 2024 15th International Conference on Computing Communication and Networking Technologies, ICCCNT 2024, 2024, DOI Link
Distributed denial-of-service (DDoS) assaults represent a substantial menace in contemporary network security, demanding effective detection mechanisms to mitigate their escalating impact. Despite notable progress in related research, the diverse attack modes and fluctuating scale of malicious traffic continue to challenge the development of detection methods with optimal accuracy. This paper addresses this gap by proposing a comprehensive DDoS attack detection approach leveraging deep learning methodologies. The NSL-KDD dataset serves as the experimental foundation for training, testing, and validating the deep learning algorithms. The proposed method integrates the Minimum Redundancy Maximum Relevance (MRMR) feature selection algorithm, enhancing model performance, mitigating overfitting, and reducing computational complexity. The classifier comprises Convolutional Neural Network (CNN) and Gated Recurrent Unit (GRU) components. The CNN excels at object detection and localization within images or videos, while the GRU provides a dynamic mechanism for selectively updating the network's hidden state, effectively managing flow information. The experimental results demonstrate the efficacy of the proposed approach in achieving improved detection accuracy and robust performance against DDoS attacks.
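A minimal sketch of the MRMR selection step, assuming mutual information for relevance and mean absolute correlation as a cheap redundancy proxy (classic MRMR uses mutual information for both); the data and labels below are dummies:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def mrmr(X, y, k):
    """Greedy MRMR: maximize relevance to y, penalize redundancy among picks."""
    relevance = mutual_info_classif(X, y)
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        def score(j):
            if not selected:
                return relevance[j]
            # redundancy approximated by mean absolute correlation with picks
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                           for s in selected])
            return relevance[j] - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

X = np.random.rand(500, 41)              # NSL-KDD has 41 features
y = np.random.randint(0, 2, 500)         # dummy labels for illustration
print(mrmr(X, y, k=10))                  # indices passed to the CNN-GRU model
```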
Deepfake Detection Using Multi-Modal Fusion Combined with Attention Mechanism
Shirley C.P., Berin Jeba Jingle I., Abisha M.B., Venkatesan R., Yashvanth Ram R.V., Elango E.
Conference paper, 4th International Conference on Sustainable Expert Systems, ICSES 2024 - Proceedings, 2024, DOI Link
The proliferation of deepfake technology poses a significant challenge to the authenticity of digital content. This research explores the application of multimodal fusion techniques to enhance deepfake detection accuracy. By combining visual and audio features, the proposed method leverages the complementary nature of different data types to detect discrepancies introduced by deepfake manipulation. An attention mechanism is incorporated to focus on salient regions within each modality, further improving detection accuracy. Convolutional Neural Networks (CNNs) and Mel-Frequency Cepstral Coefficients (MFCCs) are employed for feature extraction, followed by feature fusion for deepfake detection. This approach demonstrates the effectiveness of multimodal fusion in combating the evolving threat of deepfake technology. By advancing deepfake detection techniques, this research contributes to safeguarding the integrity of digital content and preserving trust in media.
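A minimal sketch of attention-weighted fusion across a visual branch and an MFCC audio branch; the input shapes, layer sizes, and fusion head are illustrative assumptions rather than the published design:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Visual branch: frame-level CNN features (e.g., pooled face crops)
vis_in = layers.Input(shape=(64, 64, 3))
v = layers.Conv2D(32, 3, activation="relu")(vis_in)
v = layers.GlobalAveragePooling2D()(v)
v = layers.Dense(64)(v)

# Audio branch: MFCC matrix (assumed 100 frames x 13 coefficients)
aud_in = layers.Input(shape=(100, 13))
a = layers.LSTM(64)(aud_in)

# Attention-weighted fusion: learn how much to trust each modality
stacked = layers.Lambda(lambda t: tf.stack(t, axis=1))([v, a])  # (B, 2, 64)
scores = layers.Dense(1)(stacked)                               # (B, 2, 1)
alpha = layers.Softmax(axis=1)(scores)
fused = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))(
    [stacked, alpha])                                           # (B, 64)

out = layers.Dense(1, activation="sigmoid")(fused)              # real vs. fake
model = models.Model([vis_in, aud_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```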
DDoS Attack Detection using DeepDDoS: A Hybrid Approach using CNN, GRU and MLP Models
Elakkiya E., Chiratanagandla K.A., Jethy J., Shah N.K., Singh V.K.
Conference paper, 2024 5th IEEE Global Conference for Advancement in Technology, GCAT 2024, 2024, DOI Link
This study presents an innovative method for DDoS attack detection and mitigation that uses PCA-based feature selection together with a hybrid neural network model (DeepDDoS). The model includes Conv1D layers for extracting features, a MaxPooling layer for dimensionality reduction, and a GRU layer for capturing sequential patterns. Dropout layers mitigate overfitting, while Flatten layers prepare data for analysis. Conv1D layers enhance the model's ability to identify DDoS attack patterns. MaxPooling layers reduce spatial dimensions while preserving important information. The GRU layer captures temporal dependencies, facilitating robust attack pattern identification. The model incorporates MLP layers for classification, including three Dense layers. Empirical assessment confirms the model's effectiveness in precisely identifying and mitigating DDoS attacks, thereby strengthening cybersecurity defenses against evolving threats.
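A minimal Keras sketch following the layer sequence named in the abstract (Conv1D, MaxPooling, Dropout, GRU, Flatten, three Dense layers); the filter counts, unit sizes, and the PCA-reduced input dimension are assumptions:

```python
from tensorflow.keras import layers, models

def deep_ddos(n_features=20, n_classes=2):
    """Conv1D -> MaxPooling -> GRU -> MLP head, per the described pipeline."""
    inp = layers.Input(shape=(n_features, 1))        # PCA-reduced feature vector
    x = layers.Conv1D(64, 3, activation="relu", padding="same")(inp)
    x = layers.MaxPooling1D(2)(x)                    # dimensionality reduction
    x = layers.Dropout(0.3)(x)                       # mitigate overfitting
    x = layers.GRU(64, return_sequences=True)(x)     # sequential patterns
    x = layers.Flatten()(x)
    x = layers.Dense(128, activation="relu")(x)      # MLP classification head
    x = layers.Dense(64, activation="relu")(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(inp, out)

model = deep_ddos()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```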
Deep Learning for Antenna Parameter Optimization in Wireless Communications
Pavithra R., Elakkiya E., Duraisamy B., Gayathree K., Sworna Jo Lijha J.
Conference paper, 2nd International Conference on Self Sustainable Artificial Intelligence Systems, ICSSAS 2024 - Proceedings, 2024, DOI Link
Communication technology has undergone rapid growth in recent years, and the need to improve antenna performance has become equally important. The inclusion of optimization methods satisfies this need for increased performance. In this paper, an inset-fed rectangular patch antenna is optimized using deep learning along with Particle Swarm Optimization (PSO). The neural network is trained using the datasets, and PSO is adapted to optimize the antenna parameters. The output of the neural network is the relationship between the model parameters and the antenna parameters. The antenna designed from the neural network's output is found to have the best performance in terms of directivity, gain, efficiency, and miniaturization. The simulated results show that the antenna has a 24% reduction in size and a 70% improvement in efficiency, and it is suitable for applications in the 5.8 GHz (ISM) band.
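A minimal sketch of the optimization loop, with a quadratic placeholder standing in for the trained neural-network surrogate; the parameter set, bounds, and target values are hypothetical:

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Standard PSO minimizing `objective` over box-bounded parameters."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    pos = lo + np.random.rand(n_particles, len(lo)) * (hi - lo)
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(*pos.shape), np.random.rand(*pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# Placeholder surrogate: a trained neural network would map patch geometry
# (length, width, inset depth) to a predicted figure of merit at 5.8 GHz.
def surrogate_loss(params):
    target = np.array([16.0, 12.0, 4.0])        # hypothetical optimum (mm)
    return np.sum((params - target) ** 2)

bounds = np.array([[10, 25], [8, 20], [1, 8]])  # mm search ranges (assumed)
print(pso(surrogate_loss, bounds))
```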
Deep Learning in Smart Manufacturing: Advancements, Applications, and Challenges
Natarajan G., Bai S.C.P.A., Balasubramanian S., Elango E.
Book chapter, Intelligent Computing and Optimization for Sustainable Development, 2024, DOI Link
Smart manufacturing, which is also known as Industry 4.0, is an emerging breakthrough in technology for effective and enhanced manufacturing processes that utilize the Internet of Things (IoT) and Artificial Intelligence (AI). Deep Learning has become a versatile tool in smart manufacturing, allowing companies to reach levels of accuracy, productivity, and efficiency that were earlier unimaginable. This chapter provides a detailed study of the achievements, applications, and problems related to integrating deep learning with smart manufacturing. The initial part of this chapter explains the fundamental concepts of deep learning and concentrates mainly on neural network construction and training methods. Transformer-based models, recurrent neural networks (RNNs), and convolutional neural networks (CNNs) are important for drawing out precise trends and insights from a range of industrial data sources. The capability of deep learning to manage both structured and unstructured data, like text, sensor readings, and images, is explored to get a thorough knowledge of the ways it can possibly be utilized in the manufacturing business. There are numerous ways to employ deep learning in smart manufacturing. All the facets of the manufacturing process where deep learning is utilized optimally are explored in this chapter, including supply chain, anomaly detection, process optimization, quality control, and predictive maintenance. Moreover, the effect of generative frameworks, namely Generative Adversarial Networks (GANs), in enhancing product design, simulation, and prototyping is explored, giving an understanding of potential paths for innovating creative and effective goods. Though there are countless benefits of using deep learning in smart manufacturing, there are also a few drawbacks. The fundamental challenges of data standards, privacy, and quality are investigated in the chapter. It also examines the challenges of interpretability and explainability, which are pivotal to growing assurance in AI-driven decision-making in industries. Investigating the adaptability and processing needs of deep learning algorithms then yields suggestions for possible solutions and alternative designs to address these drawbacks. This chapter concludes by looking into the prospects of deep learning in smart manufacturing. The research mainly focuses on upcoming advances in transfer learning, federated learning, and reinforcement learning, all of which show promise in boosting the efficiency and extensibility of industrial processes. Additionally, to develop regulations and standards for the moral and legitimate use of AI technology in the industrial sector, the chapter encourages collaboration among universities, firms, and government agencies. Emphasizing the technology's applications, challenges, and opportunities provides an understanding of deep learning's potential to academics, professionals, and decision-makers and moves the industrial sector toward a progressively intelligent, efficient, and sustainable future.
Leveraging Artificial Intelligence and Machine Learning for Advanced Threat Detection in Smart Manufacturing
Natarajan G., Balasubramanian S., Elango E., Gnanasekaran R.
Book chapter, Artificial Intelligence Solutions for Cyber-Physical Systems, 2024, DOI Link
Smart manufacturing systems offer various benefits in terms of efficiency and productivity as they become more automated and networked. However, this rapid rise also raises new issues, chief among them being security. This chapter examines the application of machine learning (ML) and artificial intelligence (AI) techniques for enhanced threat detection in the framework of smart manufacturing. Systems may adapt, acquire knowledge, and react in real time to new threats when AI and ML technologies are used in smart industrial settings. The methods and tools used to develop and implement AI and ML techniques for threat detection in smart manufacturing facilities are covered in detail in this chapter, which emphasizes the significance of these tools for identifying sophisticated and subtle threats that might defy conventional security measures. This chapter covers the essential elements of AI and ML threat detection, such as feature selection, model training, data collection and preprocessing, and ongoing model adjustment. It additionally examines a variety of hazards, including supply chain attacks, insider threats, and vulnerabilities in Internet of Things devices, highlighting the importance of a multipronged strategy for threat identification. The chapter looks at case studies and practical implementations that show how AI and ML may be used to reduce risks, minimize downtime, and ensure the integrity of smart manufacturing processes. It looks into how these technologies may strengthen network security, anomaly detection, and predictive maintenance while safeguarding important resources and production procedures. Ultimately, this chapter highlights how important AI and ML are to enhancing the security of smart manufacturing by offering a proactive defence against new threats. By utilizing advanced analytics and automation, organizations can safeguard the continuity and stability of their smart manufacturing systems within a highly interconnected and delicate digital ecosystem.
CGFSSO: the co-operative guidance factor based Salp Swarm Optimization algorithm for MPPT under partial shading conditions in photovoltaic systems
Raj S.A., Elakkiya E., Rajmohan S., Samuel G.G.
Article, International Journal of Information Technology (Singapore), 2024, DOI Link
The increasing adoption of solar photovoltaic (PV) power generation stems from its renewable and eco-friendly attributes. However, conventional Maximum Power Point Tracking (MPPT) methods encounter difficulties in efficiently harnessing power from PV systems under Partial Shading Conditions (PSC). During PSC, these systems exhibit fluctuating power outputs due to shading, leading to challenges in identifying the Global Maximum Power Point (GMPP). The presented research introduces a pioneering Co-Operative Guidance factor based Salp Swarm Optimization algorithm (CGFSSO) tailored for MPPT in PSC scenarios within PV systems. The CGFSSO method focuses on precise GMPP localization with minimized oscillations by enhancing the update mechanism and effectively exploring the expansive search space. To assess its efficacy, the proposed CGFSSO approach undergoes comparison against conventional MPPT techniques, Fuzzy logic and Optimization based MPPT methods through rigorous simulation studies. The results underscore the CGFSSO method's exceptional performance in precisely tracking the GMPP and improving MPPT power efficiency when contrasted with established methodologies. This study signifies a promising stride towards optimizing power extraction from PV systems operating under demanding partial shading conditions.
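For orientation, a minimal sketch of the baseline salp swarm update that CGFSSO builds on; the paper's co-operative guidance factor and search-space tightening are only noted, not implemented, and the P-V objective below is a toy stand-in:

```python
import numpy as np

def salp_swarm(objective, lb, ub, n_salps=30, iters=100):
    """Baseline SSA; CGFSSO augments this update with a co-operative
    guidance factor and search-space tightening (not reproduced here)."""
    dim = len(lb)
    pop = lb + np.random.rand(n_salps, dim) * (ub - lb)
    fitness = np.array([objective(p) for p in pop])
    food = pop[fitness.argmin()].copy()           # best solution so far
    for t in range(iters):
        c1 = 2 * np.exp(-(4 * t / iters) ** 2)    # exploration/exploitation
        for i in range(n_salps):
            if i < n_salps // 2:                  # leaders move around food
                c2, c3 = np.random.rand(dim), np.random.rand(dim)
                step = c1 * ((ub - lb) * c2 + lb)
                pop[i] = np.where(c3 < 0.5, food + step, food - step)
            else:                                 # followers chain behind
                pop[i] = (pop[i] + pop[i - 1]) / 2
            pop[i] = np.clip(pop[i], lb, ub)
        fitness = np.array([objective(p) for p in pop])
        if fitness.min() < objective(food):
            food = pop[fitness.argmin()].copy()
    return food

# Toy multimodal P-V curve: duty cycle vs. negative power under shading
pv = lambda d: -(np.sin(3 * d[0]) ** 2 + 0.5 * np.sin(9 * d[0]) ** 2)
print(salp_swarm(pv, lb=np.array([0.0]), ub=np.array([1.0])))
```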
Plastic Litter Detection using YOLOv8 Algorithm
Dhanishtaa R., Elakkiya E., Gunashri R., Madhumathi R.
Conference paper, 2nd International Conference on Self Sustainable Artificial Intelligence Systems, ICSSAS 2024 - Proceedings, 2024, DOI Link
One of the most concerning environmental issues is water pollution, which is mostly brought on by plastic waste that is dumped into aquatic regions from land. These plastics pose a threat to the marine ecosystem's equilibrium, coastal communities, economic health, and marine animals. This inevitably has an impact on aquatic and human life. Even when they work well, the most widely used techniques have several drawbacks when it comes to identifying and measuring plastics. As a result, it is important to adopt alternative techniques that make it simple to recognize plastics and facilitate their removal using the latest technologies. This research study utilizes the You Only Look Once (YOLO) v8 Deep Learning (DL) algorithm for object detection, identifying plastics on the surface of water bodies and reporting the count of the plastics detected.
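A minimal inference sketch with the ultralytics package; the image path is a placeholder, and a model fine-tuned on plastic-litter data is assumed, with the generic pretrained checkpoint shown as a stand-in:

```python
from ultralytics import YOLO

# "yolov8n.pt" is the generic pretrained checkpoint; a checkpoint fine-tuned
# on a plastic-litter dataset would be loaded here instead.
model = YOLO("yolov8n.pt")

results = model("water_surface.jpg", conf=0.25)   # run detection on one frame
boxes = results[0].boxes
print(f"Detected {len(boxes)} objects")
for box in boxes:
    cls_name = model.names[int(box.cls)]          # class label per detection
    print(cls_name, float(box.conf))
```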
Role of Artificial Intelligence in Industry 4.0: Applications and Challenges
Elango E., Natarajan G., Balasubramanian S., Soman S.
Book chapter, AI-Driven Digital Twin and Industry 4.0: A Conceptual Framework with Applications, 2024, DOI Link
Artificial intelligence (AI) has enabled important advancements in Industry 4.0. Industries are generally focused on improving modeling and productivity and on lowering operating costs, in the hope of achieving this via joint efforts between people and robotics. Smart industries' interconnected manufacturing processes rely on numerous devices that communicate via AI automation systems by acquiring and comprehending all data types. The adoption of smart automation and robotics can significantly alter modern production. AI provides pertinent data to help decision-making and forewarn users of possible mistakes. Industries will have to use AI to analyze data transmitted via IoT devices in order to integrate connected machines and IoT devices into their innovations. Businesses now have the ability to carefully monitor all of their end-to-end processes. This chapter, which is based on a literature assessment, summarizes the crucial role of AI in the implementation of Industry 4.0. The research objectives are therefore designed to make this chapter accessible to researchers, practitioners, students, and business professionals. It starts out by going through the key technological aspects and characteristics of AI that are essential for Industry 4.0. Second, the substantial achievements enabling the deployment of AI for Industry 4.0, along with a variety of hurdles, are identified in this study. The chapter concludes by listing and discussing important applications of AI for Industry 4.0. We can observe from a thorough overview investigation that the benefits of AI are pervasive and that stakeholders must comprehend the type of software platform they need for the new manufacturing order. Additionally, this technology looks for relationships in order to prevent mistakes and ultimately predict them. Thus, Industry 4.0 objectives are gradually being achieved through advanced AI technologies.
Precision Agriculture: A Novel Approach on AI-Driven Farming
Elango E., Hanees A., Shanmuganathan B., Kareem Basha M.I.
Book chapter, Signals and Communication Technology, 2024, DOI Link
India is one of those unusual countries where strong agrarian roots and technological advancement coexist. While maintaining our strong agrarian traditions, we have advanced in technological terms; we now wish to eliminate the gap that exists between our technological advancements and their uses in farming. Agriculture faces numerous obstacles when it comes to planning and harvesting. Farmers must plan appropriate crops, starting with a study of the soil and land, fertilizer, seeding strategies, irrigation, pest control, monitoring, and the final harvest. Poorly prepared plans not only reduce farmers' profit margins but also undermine their livelihoods. For the first time in history, the Earth's population will surpass eight billion by 2024, placing further strain on the world's supply chain, which is already hampered by an unstable climate and a lack of water. Farmers are putting cutting-edge technological ideas into practice to meet future demands. For instance, IBM researchers are developing solutions that use cloud-connected devices, artificial intelligence (AI), and the Internet of Things (IoT) at every stage of the supply chain. To secure the global food supply, new IoT solutions are helping to monitor the health of beehives. Smart farming, robotic harvesters with AI-enhanced dexterity, diagnostic drones that can apply pesticides and fertilizer to rice fields, and other innovative technologies are also being used by farmers. To create and execute such plans, farmers should accurately identify and assess their land. They can, however, be constrained by the limited reach and efficiency of manual surveying and surveillance. These limitations give rise to planning and projection errors, which develop into future losses. Drone-backed surveying, mapping, and surveillance technologies are therefore "ripe" to fill this gap in the analytics process. Drones can carry elevated optical and thermal sensors, and drones from Ideas Furnace can even carry both thermal and optical surveillance gear at once, so every flight can perform a more thorough and extensive survey. Drones in agriculture are opening up new opportunities to boost crop yield through better farming practices and real-time access to information.
EEG Signal Processing for Action Recognition Using Machine Learning Paradigms
Abirami S.P., Misha Chandar B., Karthikeyan G., Elakkiya E.
Conference paper, Proceedings - 2024 OITS International Conference on Information Technology, OCIT 2024, 2024, DOI Link
Many intriguing applications, such as the ability to move prosthetic limbs and enable more fluid man-machine contact, may be made possible by automatic interpretation of brain readings. The problem of precisely categorizing EEG signals linked with memory categories is tackled first in the inquiry. With the restricted availability of pre-trained models for such signal classification, a Convolution-based Neural Network (CNN) is constructed from scratch. By using EEG recordings from UC Berkeley's Bio-Sense Lab, this study seeks to improve memory recall using machine learning and precise feature selection. Fifteen participants' EEG data are converted into the frequency domain, and the amplitudes are used as key characteristics. The selection of these qualities is improved by a self-attention mechanism, which maximizes the distinction among various memory categories. The primary focus is to evaluate the performance of the most advanced algorithms, with the secondary objective of outperforming previous methods in terms of classification accuracy. A fine-tuned subset of the frequency-based characteristics is evaluated using a Support Vector Machine (SVM) classifier. By showcasing the efficiency of self-attention in honing feature subsets, this study highlights the significance of feature engineering in EEG-based memory classification. This method is positioned as a promising advancement in the analysis of EEG data since it improves the separation between memory categories through the application of frequency-domain modifications and SVM classifiers. Furthermore, investigating time series features shows how well they may capture intricate patterns, pointing to fresh avenues for future neuro-informatics and cognitive study.
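A minimal sketch of the frequency-feature plus SVM stage, with random arrays standing in for the Bio-Sense Lab recordings; the sampling rate, channel count, and trial counts are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def frequency_features(eeg):
    """Amplitude spectrum per channel, concatenated into one feature vector."""
    spectrum = np.abs(np.fft.rfft(eeg, axis=-1))   # channels x freq bins
    return spectrum.flatten()

# Dummy data standing in for the Bio-Sense Lab recordings:
# 120 trials, 14 channels, 2 s at 256 Hz, binary memory-category labels
rng = np.random.default_rng(0)
trials = rng.standard_normal((120, 14, 512))
labels = rng.integers(0, 2, 120)

X = np.array([frequency_features(t) for t in trials])
clf = SVC(kernel="rbf", C=1.0)
print(cross_val_score(clf, X, labels, cv=5).mean())
```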
Deep Learning Approach for Disaster Tweet Classification
Conference paper, 2024 15th International Conference on Computing Communication and Networking Technologies, ICCCNT 2024, 2024, DOI Link
In the rapidly interconnected landscape of today, social media has established itself as an indispensable tool with profound implications. Among these, the rapid dissemination of disaster-related content emerges as a pivotal advantage, facilitating swift information flow during times of crisis. However, traditional methods for identifying such content often grapple with inherent delays and inefficiencies. These approaches, reliant on manual surveillance or basic keyword matching, struggle to keep stride with the real-time dynamics of social media. Consequently, this lag in identification can result in missed windows for prompt response and aid provision in critical scenarios. To remedy this, we advocate for the utilization of the advanced BERT pre-trained model. Our proposed methodology leverages BERT's contextual understanding of language, enabling it to discern disaster-related content swiftly and accurately. Even when working with a limited dataset, our model showcases remarkable proficiency, achieving an impressive 79% accuracy in identifying disaster-related tweets. This innovative approach expedites content identification, thereby reinforcing the efficiency of disaster response strategies. By embracing this novel paradigm, we unlock the potential to revolutionize disaster-related information sharing. The amalgamation of social media's immediacy with BERT's analytical prowess empowers stakeholders to stay attuned to unfolding events in real-time, enhancing the ability to deploy resources and assistance where they are most needed. In essence, our proposal not only streamlines disaster communication but also holds the promise of saving lives through timely and targeted interventions. Index Terms: social media, disaster-related content, BERT, real-time dynamics.
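A minimal sketch of one BERT fine-tuning step with Hugging Face transformers; the checkpoint, example tweets, and labels are illustrative, not the paper's dataset or training setup:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)            # disaster vs. not

tweets = ["Flooding reported downtown, roads closed",
          "This new album is a total disaster lol"]
labels = torch.tensor([1, 0])

batch = tokenizer(tweets, padding=True, truncation=True,
                  max_length=64, return_tensors="pt")
outputs = model(**batch, labels=labels)           # loss + logits
outputs.loss.backward()                           # one fine-tuning step
print(outputs.logits.softmax(dim=-1))
```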
The roadmap to AI and digital twin adoption
Elango E., Natarajan G., Hanees A.L., Bai S.C.P.A.
Book chapter, Digital Twin Technology and AI Implementations in Future-Focused Businesses, 2024, DOI Link
Organizations are quickly realizing the transformative possibilities of digital twins and artificial intelligence (AI) in this era of fast technical advancement. This chapter provides a brief synopsis of "The Roadmap to AI and Digital Twin Adoption," a comprehensive resource that delves into the key elements and techniques necessary for the successful integration of AI and digital twins across a range of sectors. This roadmap explores the mutually beneficial relationship between AI and digital twins, emphasizing how each may enhance overall performance, decision-making, and operational efficiency. It covers the fundamental concepts of AI, such as natural language processing, machine learning, and deep learning, and how important they are in relation to digital twins. The guide's emphasis extends to the practical use of AI and digital twins, offering guidance on data collection and management, model training, and algorithm choice.
Enhancing Privacy and Security in Online Education Using Generative Adversarial Networks
Natarajan G., Elango E., Hanees A.L., Pon Anna Bai S.C.
Book chapter, Enhancing Security in Public Spaces Through Generative Adversarial Networks (GANs), 2024, DOI Link
As online education grows in popularity, issues concerning learners' privacy and security have become increasingly important. This chapter delves into the creative use of generative adversarial networks (GANs) to handle the complex difficulties of protecting sensitive information in the online education scene. The chapter opens with a detailed assessment of the present situation of online education. The chapter focuses on the integration of GANs into the online education environment to improve privacy and security. The chapter delves into the technical features of GANs, demonstrating how these networks may be tailored to generate synthetic yet indistinguishable data, reducing the danger of privacy violations. In addition to privacy protection, the chapter investigates the function of GANs in improving the overall cybersecurity posture of online education platforms. Finally, the chapter emphasises Generative Adversarial Networks' transformational potential in altering the privacy and security environment of online education.
Prediction of Soil Fertility Using ML Algorithms and Fertilizer Recommendation System
Madhumathi R., Dhanishtaa R., Elakkiya E., Gunashri R., Arumuganathan T.
Conference paper, International Conference on Self Sustainable Artificial Intelligence Systems, ICSSAS 2023 - Proceedings, 2023, DOI Link
Prediction of soil nutrients is one of the important primary input management tasks for increasing crop yield. The nutrients present in the soil play a major role in the healthy growth of plants. There are many nutrients present in the soil, such as Nitrogen (N), Phosphorus (P), Potassium (K), Calcium (Ca), Magnesium (Mg), and Sulphur (S). Farmers do not have sufficient knowledge and information about the nutrients present in the soil. Therefore, they apply a lot of fertilizer to the soil, which causes unbalanced soil nutrient levels. Hence, the aim of this work is to predict the available nutrients in the soil using various Machine Learning (ML) techniques. This work focuses on the three macronutrients, namely N, P, and K, which are needed for healthy plant growth. Classification algorithms were used for the prediction of soil fertility. Among all of these algorithms, Random Forest gives the highest accuracy of 84%. With the results obtained, the model is deployed as a web app for recommending fertilizer, which is very helpful to farmers.
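A minimal sketch of the classification-plus-recommendation flow, with synthetic NPK readings in place of real soil-test records; the nutrient thresholds in the rule are hypothetical:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Dummy stand-in for soil-test records: N, P, K readings (kg/ha) with a
# fertility class label (0 = low, 1 = medium, 2 = high)
rng = np.random.default_rng(1)
X = rng.uniform([100, 5, 100], [500, 50, 600], size=(800, 3))
y = rng.integers(0, 3, 800)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
clf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))

def recommend(n, p, k):
    """Illustrative rule: flag whichever macronutrient falls below a threshold."""
    needs = [name for name, val, floor in
             [("urea (N)", n, 240), ("DAP (P)", p, 11), ("MOP (K)", k, 120)]
             if val < floor]
    return needs or ["no fertilizer needed"]

print(recommend(180, 9, 300))
```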
MS3A: Wrapper-Based Feature Selection with Multi-swarm Salp Search Optimization
Shathanaa R., Sreeja S.R., Elakkiya E.
Conference paper, Lecture Notes in Networks and Systems, 2023, DOI Link
View abstract ⏷
Feature selection is crucial in improving the effectiveness of classification or clustering algorithms, as a large feature set can affect classification accuracy and learning time. The feature selection process involves choosing the most pertinent features from an initial feature set. This work introduces a new feature selection technique based on the salp swarm algorithm. In particular, an improved variant of the salp swarm algorithm is presented, with modifications to different stages of the algorithm. The proposed work is evaluated by first studying its performance on standard CEC optimization benchmarks. In addition, the applicability of the introduced algorithm to feature selection problems is verified by comparing its performance with existing feature selection algorithms. The experimental analysis shows that the proposed methodology improves on existing algorithms for both numerical optimization and feature selection problems and reduces the feature subset size by 39.1% compared to the traditional salp swarm algorithm.
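As a sketch of the wrapper-based evaluation that such swarm searches optimize, the snippet below scores a binary feature mask by classifier accuracy traded off against subset size; the salp swarm update rules themselves are omitted, and the dataset, classifier, and weighting are stand-ins.

```python
# Wrapper-based feature-subset scoring sketch (the fitness a swarm search
# would optimize); the salp-swarm position updates are not reproduced here.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

def fitness(mask, alpha=0.99):
    """Score a binary mask: accuracy traded off against subset size."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(),
                          X[:, mask.astype(bool)], y, cv=3).mean()
    return alpha * acc + (1 - alpha) * (1 - mask.sum() / mask.size)

rng = np.random.default_rng(0)
mask = rng.integers(0, 2, X.shape[1])   # one random candidate "salp" position
print("fitness:", fitness(mask))
```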
Multi-cohort whale optimization with search space tightening for engineering optimization problems
Rajmohan S., Elakkiya E., Sreeja S.R.
Article, Neural Computing and Applications, 2023, DOI Link
View abstract ⏷
Metaheuristic algorithms have been widely studied and shown to be suitable for solving various engineering optimization problems. This paper presents a novel variant of the whale optimization algorithm, the multi-cohort whale optimization algorithm, for solving engineering optimization problems. The new algorithm improves on the existing whale optimization algorithm by dividing the population into cohorts and introducing a separate exploration procedure for each cohort. A new boundary update procedure for the search space is also introduced. In addition, opposition-based initialization and elitism are employed to aid quick convergence of the algorithm. The proposed algorithm is compared with whale optimization algorithm variants and other metaheuristic algorithms on different numerical optimization problems, and statistical analysis is performed to confirm its significance. The proposed and existing algorithms are further studied on three engineering optimization problems. The analyses show that the proposed algorithm achieves a 53.75% improvement in average fitness compared to the original whale optimization algorithm.
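For orientation, a compact Python sketch of the standard whale optimization position update (encircling and spiral phases) on a toy objective follows; the paper's cohort-specific exploration and boundary-tightening steps are not reproduced here.

```python
# Sketch of the standard whale-optimization update that the multi-cohort
# variant builds on (cohort exploration and boundary tightening omitted).
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                      # toy objective (assumption)
    return float(np.sum(x**2))

dim, n_whales, iters, b = 5, 20, 100, 1.0
X = rng.uniform(-5, 5, (n_whales, dim))
best = X[np.argmin([sphere(x) for x in X])].copy()

for t in range(iters):
    a = 2 - 2 * t / iters                      # linearly decreasing parameter
    for i in range(n_whales):
        r = rng.random(dim)
        A, C = 2 * a * r - a, 2 * rng.random(dim)
        if rng.random() < 0.5:                 # encircling-prey phase
            X[i] = best - A * np.abs(C * best - X[i])
        else:                                  # spiral bubble-net phase
            l = rng.uniform(-1, 1)
            X[i] = (np.abs(best - X[i]) * np.exp(b * l)
                    * np.cos(2 * np.pi * l) + best)
        if sphere(X[i]) < sphere(best):
            best = X[i].copy()

print("best fitness:", sphere(best))
```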
Stratified hyperparameters optimization of feed-forward neural network for social network spam detection (SON2S)
Article, Soft Computing, 2022, DOI Link
View abstract ⏷
Over the last decade, popularity and fascination for social networks have increased exponentially. This rapid growth has triggered cybercriminals to use social networks for malicious activities such as social network spam. The risk and danger in social networking sites urge the need for an accurate and efficient spam detection model. Traditionally, supervised and unsupervised classification algorithms are used to identify social network spam, but spammers often change their behavior to evade spam filtering techniques, resulting in huge data volatility. Traditional techniques are therefore sometimes ineffective at filtering spam in social networks. Hence, this work proposes a feed-forward neural network spam detection model that can exploit the hidden relationships in this complex data. To improve the model's accuracy and to speed up the training process, important hyperparameters such as learning rate, momentum term, neural network architecture, activation function, training algorithm, initial weight ranges, and initial weight tuning must be set well. As there is no general, predefined method for this process, this paper proposes a reinforcement learning and k-Norm factor-based shuffled frog leaping algorithm to find the optimal set of neural network parameters. In the first stage, the learning rate and momentum parameters, which lie in a continuous range, are tuned using reinforcement learning. In the second stage, the best possible combinations of the remaining parameter values are chosen using the proposed modified shuffled frog leaping algorithm, which uses k-Norm to improve exploitation. Experiments were carried out on the Tip spam dataset and a Twitter dataset using a feed-forward neural network with tuned parameters. The results show that the proposed algorithm achieves higher accuracy and a lower false-positive rate compared to other existing techniques.
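To make the two-stage idea concrete, the schematic sketch below uses an epsilon-greedy bandit as a stand-in for the reinforcement-learning stage and a small discrete search as a stand-in for the modified shuffled frog leaping stage; the objective, parameter grids, and update rule are all illustrative assumptions, not the paper's algorithm.

```python
# Schematic two-stage hyperparameter search: stage 1 picks learning rate
# and momentum with an epsilon-greedy bandit (a simplification of the RL
# stage); stage 2 searches discrete combinations of remaining parameters.
import itertools
import random

random.seed(0)

def validation_accuracy(lr, momentum, hidden, activation):
    """Placeholder: train the network and return validation accuracy."""
    return 0.9 - abs(lr - 0.01) - abs(momentum - 0.9) / 10 + hidden / 10000

# --- stage 1: epsilon-greedy choice over a discretized continuous range ---
arms = [(lr, m) for lr in (0.001, 0.01, 0.1) for m in (0.5, 0.9, 0.99)]
value = {a: 0.0 for a in arms}
for step in range(200):
    arm = random.choice(arms) if random.random() < 0.1 \
        else max(value, key=value.get)
    reward = validation_accuracy(*arm, hidden=32, activation="relu")
    value[arm] += 0.1 * (reward - value[arm])      # incremental value update
best_lr, best_m = max(value, key=value.get)

# --- stage 2: search the remaining discrete parameter combinations ---
grid = itertools.product((16, 32, 64), ("relu", "tanh"))
best = max(grid, key=lambda g: validation_accuracy(best_lr, best_m, *g))
print("chosen:", best_lr, best_m, best)
```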
TextSpamDetector: textual content based deep learning framework for social spam detection using conjoint attention mechanism
Elakkiya E., Selvakumar S., Leela Velusamy R.
Article, Journal of Ambient Intelligence and Humanized Computing, 2021, DOI Link
View abstract ⏷
Online Social Networks (OSNs) allow easy membership, leading to the registration of a huge population and the generation of voluminous information. These characteristics attract spammers to spread spam, which may cause annoyance, financial loss, or personal information loss to users and also weaken the reputation of social network sites. Most spam detection methods are based on user and content-based features using machine learning techniques. However, these annotated features are difficult to extract in real time due to the privacy policies of most social network sites. Even for the features that can be extracted, the manual extraction process is complex and time-consuming because of their large size. So there is a need for text-level spam detection that does not require extraction of such hand-crafted features. Existing deep learning-based or single-attention-mechanism-based text classification methods do not perform well, as social network data are sparse, with short texts and noise. Moreover, spammers avoid direct spam words and use indirect words to evade spam filtering techniques, resulting in the dynamic and non-stationary nature of social network spam texts. These indirect words contain hidden context that creates an attention drift problem. Hence, a conjoint attention mechanism comprising two attention mechanisms, namely normal attention and context-preserving attention, is proposed in this deep learning-based text-level spam detection technique (TextSpamDetector) to avoid the attention drift problem. One attention mechanism helps find the important words, while the other keeps the focus on the target context by referring to a higher-level abstraction of the context vector. These attention mechanisms refer to different context representations of the input text to find informative words in the structural context representation. This structural context representation, containing both local semantic features and global semantic dependency features, is generated by a CNN and a BiLSTM. The proposed model is evaluated against existing spam detection techniques on three datasets, and the experimental results show that it performs well in terms of accuracy, F-measure, and false-positive rate.
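The Keras sketch below shows only the structural skeleton implied by the abstract: a CNN for local semantics, a BiLSTM for global dependencies, and a single additive attention layer. The paper's conjoint and context-preserving attention mechanisms are more elaborate, and all dimensions here are assumptions.

```python
# Structural skeleton of a CNN + BiLSTM text encoder with one attention
# layer (the paper's conjoint attention is more elaborate; sizes assumed).
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, MAXLEN, EMB = 20000, 50, 128          # assumed dimensions

tokens = layers.Input(shape=(MAXLEN,), dtype="int32")
x = layers.Embedding(VOCAB, EMB)(tokens)
x = layers.Conv1D(64, 3, padding="same", activation="relu")(x)  # local semantics
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)  # global deps

# Simple additive attention over time steps: score, normalize, pool.
scores = layers.Dense(1, activation="tanh")(x)       # (batch, MAXLEN, 1)
weights = layers.Softmax(axis=1)(scores)
context = layers.Lambda(
    lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([x, weights])

spam_prob = layers.Dense(1, activation="sigmoid")(context)
model = tf.keras.Model(tokens, spam_prob)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```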
GAMEFEST: Genetic Algorithmic Multi Evaluation measure based FEature Selection Technique for social network spam detection
Article, Multimedia Tools and Applications, 2020, DOI Link
View abstract ⏷
Social network sites have become incredibly important in the present day. This popularity attracts attackers, who can easily reach a large population and access massive amounts of information to perform intrusion activities, including spamming, in Online Social Networks (OSNs). Spammers not only spread unsolicited messages but also perform malicious activities that harm users' financial or personal lives and tarnish the reputation of social network platforms. Efficient spam detection requires the selection of relevant features to portray spammer behavior. Most existing feature selection techniques use only one evaluation measure, such as distance, dependence, consistency, information, or classifier error rate. Each evaluation measure selects features from a different perspective and produces a different subset, and the detection rate differs accordingly. The majority of existing works focus on individual feature ranking and discard the lowest-weight features, yet a low-weight feature may yield more accurate predictions when combined with other features. So there is a need for a feature selection technique that considers the characteristics of all the evaluation measures to produce an appropriate subset, increasing the spam detection rate, and that assigns a weight to each combination of features. Accordingly, this paper proposes GAMEFEST, a new genetic algorithm-based feature subset selection technique that combines multiple evaluation measures. The performance of the proposed work has been evaluated on Twitter, Apontador, and YouTube datasets. Experimental results show that GAMEFEST with Minimum Surplus Crossover (MSC) improves the efficiency of the learning process and increases the spam detection rate.
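As an illustration of the underlying search, the minimal genetic-algorithm sketch below evolves binary feature masks; classifier accuracy and one-point crossover stand in for the paper's multi-measure fitness and Minimum Surplus Crossover.

```python
# Minimal GA feature-subset search sketch (the paper's multi-measure
# fitness and Minimum Surplus Crossover are replaced by classifier
# accuracy and one-point crossover for illustration).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
n_feat, pop_size, gens = X.shape[1], 20, 15

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(GaussianNB(),
                           X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, (pop_size, n_feat))
for g in range(gens):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]   # keep the fitter half
    children = []
    while len(children) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_feat)                    # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_feat) < 0.02                 # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", int(best.sum()), "fitness:", fitness(best))
```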
CIFAS: Community Inspired Firefly Algorithm with fuzzy cross-entropy for feature selection in Twitter Spam detection
Elakkiya E., Selvakumar S., Velusamy R.L.
Conference paper, 2020 11th International Conference on Computing, Communication and Networking Technologies, ICCCNT 2020, 2020, DOI Link
View abstract ⏷
Social network sites such as Facebook, Twitter, and YouTube are very popular among internet users for information sharing and communication. This popularity also attracts cybercriminals, who spread their malicious activities, including spam, through social networks. Spam may contain unsolicited information or malicious links that can harm Twitter users and make them feel insecure. Hence, there is a crucial need for effective spam detection, which requires distinctive features that clearly portray spammer behavior. This is accomplished by information-theoretic feature selection methods, as these methods select informative features that retain the original meaning. Most existing information theory-based feature selection methods focus on retaining the features that contain more information while removing features that contain less, which may lead to information loss. Also, features that are individually less significant may be useful when combined with other features. Hence, the Community Inspired Firefly Algorithm for Spam detection (CIFAS) is proposed to search for feature combinations that provide good performance, using fuzzy cross-entropy as the fitness function. Fuzzy cross-entropy is used to keep the information contained in the selected feature subset close to that contained in the full feature set. Experimental results show that CIFAS offers defense against spam comparable to the original feature set and outperforms existing approaches in terms of accuracy, false-positive rate, and F-measure on two standard Twitter datasets.
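For reference, a standard firefly-algorithm movement loop on a toy objective is sketched below; CIFAS's community mechanism and fuzzy cross-entropy fitness are replaced by placeholders, so this shows only the base search dynamics.

```python
# Standard firefly-algorithm movement sketch (CIFAS's community mechanism
# and fuzzy cross-entropy fitness are replaced by a toy objective here).
import numpy as np

rng = np.random.default_rng(0)

def brightness(x):
    return -float(np.sum(x**2))        # toy: brighter = lower sphere value

n, dim, iters = 15, 5, 100
beta0, gamma, alpha = 1.0, 1.0, 0.2    # attractiveness, absorption, step size
X = rng.uniform(-5, 5, (n, dim))

for t in range(iters):
    for i in range(n):
        for j in range(n):
            if brightness(X[j]) > brightness(X[i]):   # move i toward brighter j
                r2 = np.sum((X[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)    # attractiveness decays with distance
                X[i] += beta * (X[j] - X[i]) + alpha * rng.uniform(-0.5, 0.5, dim)

best = X[np.argmax([brightness(x) for x in X])]
print("best brightness:", brightness(best))
```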
Initial Weights Optimization using Enhanced Step Size Firefly Algorithm for Feed Forward Neural Network applied to Spam Detection
Conference paper, IEEE Region 10 Annual International Conference, Proceedings/TENCON, 2019, DOI Link
View abstract ⏷
Spam messages are unsolicited and unnecessary messages that may contain harmful code or links that activate malicious viruses and spyware. The increasing popularity of social networks attracts spammers to perform malicious activities on these platforms, so an efficient spam detection method is necessary for social networks. In this paper, a spam detection model based on a feed-forward neural network with backpropagation is proposed. The quality of the learning process is improved by tuning the initial weights of the feed-forward neural network using the proposed enhanced step size firefly algorithm, which reduces the time needed to find optimal weights during learning. The model is applied to a Twitter dataset, and the experimental results show that it performs well in terms of accuracy and detection rate and has a lower false-positive rate.
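A minimal sketch of how initial network weights can be encoded as a flat search vector and scored as a fitness for such an optimizer follows; the enhanced step size mechanism itself is not reproduced, and the toy data and architecture are assumptions.

```python
# Sketch: encode a network's initial weights as one flat vector and score
# it by error, as a fitness for a firefly-style optimizer (the enhanced
# step-size mechanism of the paper is not reproduced here).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 6, 1
X = rng.random((100, n_in))
y = (X.sum(axis=1) > 2).astype(float).reshape(-1, 1)   # toy labels

def unpack(v):
    """Split a flat vector into the two weight matrices."""
    w1 = v[: n_in * n_hid].reshape(n_in, n_hid)
    w2 = v[n_in * n_hid:].reshape(n_hid, n_out)
    return w1, w2

def fitness(v):
    """Mean squared error of one forward pass with weights v."""
    w1, w2 = unpack(v)
    h = np.tanh(X @ w1)
    out = 1 / (1 + np.exp(-(h @ w2)))                  # sigmoid output
    return float(np.mean((out - y) ** 2))              # lower is better

dim = n_in * n_hid + n_hid * n_out
swarm = rng.uniform(-1, 1, (20, dim))   # candidate initial-weight vectors
best = min(swarm, key=fitness)
print("best initial-weight MSE:", fitness(best))
# `best` would seed backpropagation; the firefly loop refines the swarm.
```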