Artificial intelligence based on multi objective algorithm for effective load forecasting
Dr Elakkiya E, Antony Raj S, Anil Kumar S V D, Girija Kumari Palamarthi, Sumitra Palepu, Shaik Bashida
Source Title: Integrated Technologies in Electrical, Electronics and Biotechnology Engineering, DOI Link
In recent years, researchers have directed more attention towards accurately predicting and maintaining stable loads, recognizing their profound impact on the economy and the crucial need for effective power system management. However, the majority of past studies have focused solely on either decreasing forecast errors or improving stability, with few delving into both simultaneously. Developing a forecasting model that addresses both objectives concurrently is a formidable task, primarily due to the intricate nature of load behavior patterns. Hence, to accomplish both objectives concurrently, we propose and implement an Artificial Intelligence based Multi-Objective Algorithm (AIMOA). Across different real-world electricity datasets, the proposed model demonstrates superior performance compared to baseline models.
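The abstract does not specify AIMOA's objective formulation, so the following is only a minimal Python sketch of one plausible scalarization, scoring a forecast on both error (MAPE) and stability (residual variance); the function name, weights, and toy data are hypothetical, not from the paper.

```python
import numpy as np

def multiobjective_fitness(actual, predicted, w_error=0.5, w_stability=0.5):
    """Hypothetical scalarized fitness over two objectives:
    forecast error (MAPE) and stability (variance of residuals)."""
    residuals = actual - predicted
    mape = np.mean(np.abs(residuals / actual)) * 100   # error objective
    stability = np.var(residuals)                      # stability objective
    return w_error * mape + w_stability * stability    # lower is better

# Toy hourly loads (MW), purely illustrative
actual = np.array([310.0, 325.0, 300.0, 290.0])
predicted = np.array([305.0, 330.0, 298.0, 295.0])
print(multiobjective_fitness(actual, predicted))
```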
Stacked hybrid model for load forecasting: integrating transformers, ANN, and fuzzy logic
Dr Elakkiya E, Antony Raj S, Arunkumar Balakrishnan, Bhavyasri Sanisetty, Revanth Balaji Bandaru
Source Title: Scientific Reports, Quartile: Q1, DOI Link
Modern energy management systems must include load forecasting so that utilities can plan and optimize electricity distribution, lower operating costs, and improve grid stability. With the addition of renewable energy sources and the advancement of smart grid technology, energy systems have become increasingly complex, making accurate forecasting increasingly challenging. Conventional techniques, including regression models and ARIMA, frequently underperform because they are unable to capture the complex multivariate relationships and temporal dependencies present in energy data. Furthermore, these techniques are prone to errors in the presence of noisy data and have scalability issues when applied to large, high-dimensional datasets. This paper presents a hybrid forecasting framework that combines artificial neural networks (ANN) with Time Series Transformers and the Fuzzy Logic Transform (FLT) to overcome these drawbacks. The Transformer architecture excels at capturing long-term dependencies and interdependencies between features through its self-attention mechanism. Meanwhile, FLT + ANN effectively preprocesses noisy, irregular data and models short-term nonlinear patterns. The combination of these techniques creates a robust framework capable of handling complex energy datasets while maintaining high accuracy. Extensive tests on actual energy datasets show that the proposed hybrid model outperforms both conventional and stand-alone methods. With RMSE and MAE reductions of up to 15–20%, the model outperforms baseline models such as Random Forests, Decision Trees, and Linear Regression. These findings demonstrate the potential of the proposed framework to transform load forecasting and enable more intelligent, effective energy systems.
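As an illustration of the stacking idea only, here is a minimal Keras sketch with a self-attention branch for long-range dependencies and a small ANN branch on a smoothed copy of the input window standing in for the FLT preprocessing; the layer widths, window length, and pooling-based smoothing are assumptions, not the paper's architecture.

```python
from tensorflow.keras import layers, Model

WINDOW, FEATURES = 24, 1                       # illustrative input window

inp = layers.Input(shape=(WINDOW, FEATURES))

# Transformer-style branch: self-attention over the load window
attn = layers.MultiHeadAttention(num_heads=2, key_dim=16)(inp, inp)
attn = layers.GlobalAveragePooling1D()(attn)

# ANN branch on a smoothed copy of the window (stand-in for FLT preprocessing)
smooth = layers.AveragePooling1D(pool_size=3, padding="same")(inp)
ann = layers.Dense(32, activation="relu")(layers.Flatten()(smooth))

# Stack both branch representations and regress the next load value
out = layers.Dense(1)(layers.Concatenate()([attn, ann]))
model = Model(inp, out)
model.compile(optimizer="adam", loss="mse")
model.summary()
```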
CGFSSO: the co-operative guidance factor based Salp Swarm Optimization algorithm for MPPT under partial shading conditions in photovoltaic systems
Dr Elakkiya E, S Antony Raj, Shathanaa Rajmohan, G Giftson Samuel
Source Title: International Journal of Information Technology (Singapore), Quartile: Q1, DOI Link
The increasing adoption of solar photovoltaic (PV) power generation stems from its renewable and eco-friendly attributes. However, conventional Maximum Power Point Tracking (MPPT) methods encounter difficulties in efficiently harnessing power from PV systems under Partial Shading Conditions (PSC). During PSC, these systems exhibit fluctuating power outputs due to shading, leading to challenges in identifying the Global Maximum Power Point (GMPP). The presented research introduces a Co-Operative Guidance Factor based Salp Swarm Optimization algorithm (CGFSSO) tailored for MPPT in PSC scenarios within PV systems. The CGFSSO method focuses on precise GMPP localization with minimized oscillations by enhancing the update mechanism and effectively exploring the expansive search space. To assess its efficacy, the proposed CGFSSO approach is compared against conventional, fuzzy logic based, and optimization based MPPT methods through rigorous simulation studies. The results underscore the CGFSSO method's superior performance in precisely tracking the GMPP and improving MPPT power efficiency when contrasted with established methodologies. This study signifies a promising stride towards optimizing power extraction from PV systems operating under demanding partial shading conditions.
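For context, the baseline salp swarm update that CGFSSO builds on can be sketched as follows on a toy two-peak power-duty curve mimicking partial shading; the co-operative guidance factor itself is the paper's contribution and is not reproduced here, and all constants and the power curve are illustrative.

```python
import numpy as np

def pv_power(duty):
    """Toy power-vs-duty-cycle curve with two peaks, mimicking partial shading."""
    return (40 * np.exp(-((duty - 0.3) / 0.08) ** 2)
            + 55 * np.exp(-((duty - 0.7) / 0.06) ** 2))

def ssa_mppt(n_salps=10, iters=50, lb=0.0, ub=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, n_salps)                   # duty-cycle positions
    best = x[np.argmax(pv_power(x))]                   # current GMPP estimate
    for t in range(iters):
        c1 = 2 * np.exp(-(4 * (t + 1) / iters) ** 2)   # exploration decay
        for i in range(n_salps):
            if i == 0:                                 # leader moves around the best
                step = c1 * ((ub - lb) * rng.random() + lb)
                x[i] = best + step if rng.random() < 0.5 else best - step
            else:                                      # followers track predecessors
                x[i] = (x[i] + x[i - 1]) / 2
        x = np.clip(x, lb, ub)
        candidates = np.append(x, best)
        best = candidates[np.argmax(pv_power(candidates))]
    return best, pv_power(best)

print(ssa_mppt())   # should land near the global peak at duty ~0.7
```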
Multiple Granularity Context Representation based Deep Learning Model for Disaster Tweet Identification
Source Title: 2024 5th International Conference on Innovative Trends in Information Technology, ICITIIT 2024, DOI Link
Twitter has evolved into a pivotal platform for information exchange, particularly during emergencies. However, amidst the vast array of data, identifying tweets relevant to damage assessment remains a significant challenge. In response, this study presents a novel approach designed to identify tweets related to damage assessment in times of crisis. The challenge lies in sifting through an immense volume of data to isolate tweets pertinent to the specific event. Recent studies suggest that employing contextual word embedding approaches, such as transformers, rather than traditional context-free methods, can enhance the accuracy of disaster detection models. This study leverages multiple-granularity context representation at the character and word levels to bolster the efficiency of deep neural network techniques in distinguishing between disaster-related tweets and unrelated ones. Specifically, a weighted character representation, generated with a self-attention layer, is utilized to discern important information at the fine character level. Concurrently, Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks are employed in the word-level embedding to capture global context representation. The effectiveness of the proposed learning model is assessed by comparing it with existing models using evaluation measures, viz. accuracy, F1 score, precision, and recall. The results demonstrate the effectiveness of our model compared to existing methods.
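A minimal Keras sketch of the two-granularity design described above, with a self-attention layer over character embeddings and a CNN + LSTM stack over word embeddings; vocabulary sizes, sequence lengths, and layer widths are illustrative assumptions, not the paper's settings.

```python
from tensorflow.keras import layers, Model

MAX_CHARS, MAX_WORDS = 280, 40          # illustrative sequence lengths
CHAR_VOCAB, WORD_VOCAB = 100, 20000     # illustrative vocabulary sizes

# Character branch: self-attention to weight informative characters
char_in = layers.Input(shape=(MAX_CHARS,))
c = layers.Embedding(CHAR_VOCAB, 32)(char_in)
c = layers.MultiHeadAttention(num_heads=2, key_dim=16)(c, c)
c = layers.GlobalAveragePooling1D()(c)

# Word branch: CNN for local n-gram patterns, LSTM for global context
word_in = layers.Input(shape=(MAX_WORDS,))
w = layers.Embedding(WORD_VOCAB, 100)(word_in)
w = layers.Conv1D(64, 3, activation="relu")(w)
w = layers.LSTM(64)(w)

# Fuse both granularities and classify disaster vs. non-disaster
out = layers.Dense(1, activation="sigmoid")(layers.Concatenate()([c, w]))
model = Model([char_in, word_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```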
Deep Learning Approach for Disaster Tweet Classification
Dr Elakkiya E, Rohit Bahadur Bista, Chandan Shah
Source Title: 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT), DOI Link
In the rapidly interconnected landscape of today, social media has established itself as an indispensable tool with profound implications. Among these, the rapid dissemination of disaster-related content emerges as a pivotal advantage, facilitating swift information flow during times of crisis. However, traditional methods for identifying such content often grapple with inherent delays and inefficiencies. These approaches, reliant on manual surveillance or basic keyword matching, struggle to keep stride with the real-time dynamics of social media. Consequently, this lag in identification can result in missed windows for prompt response and aid provision in critical scenarios. To remedy this, we advocate the use of the advanced BERT pre-trained model. Our proposed methodology leverages BERT's contextual understanding of language, enabling it to discern disaster-related content swiftly and accurately. Even when working with a limited dataset, our model showcases remarkable proficiency, achieving 79% accuracy in identifying disaster-related tweets. This approach expedites content identification, thereby reinforcing the efficiency of disaster response strategies. By embracing this paradigm, we unlock the potential to revolutionize disaster-related information sharing. The amalgamation of social media's immediacy with BERT's analytical prowess empowers stakeholders to stay attuned to unfolding events in real time, enhancing the ability to deploy resources and assistance where they are most needed. In essence, our proposal not only streamlines disaster communication but also holds the promise of saving lives through timely and targeted interventions. Index Terms: social media, disaster-related content, BERT, real-time dynamics
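A minimal sketch of the kind of BERT-based tweet classification described here, using the Hugging Face transformers API; the checkpoint is the generic bert-base-uncased, not the authors' fine-tuned model, so its classification head is randomly initialized until trained on labeled tweets.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Generic checkpoint as a placeholder; the classification head is
# randomly initialized and would need fine-tuning on labeled tweets.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

tweet = "Massive flooding reported downtown, roads closed"
inputs = tokenizer(tweet, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print("disaster" if logits.argmax(-1).item() == 1 else "not disaster")
```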
RBFN-Augmented DDoS Detection with CNN-GRU Fusion
Dr Elakkiya E, Rohit Bahadur Bista, Chandan Shah, Ankit Rajput, Ashish Kumar Gupta, Raj Chaudhary
Source Title: 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT), DOI Link
Distributed denial-of-service (DDoS) attacks represent a substantial menace in contemporary network security, demanding effective detection mechanisms to mitigate their escalating impact. Despite notable progress in related research, the diverse attack modes and fluctuating scale of malicious traffic continue to challenge the development of detection methods with optimal accuracy. This paper addresses this gap by proposing a comprehensive DDoS attack detection approach leveraging deep learning methodologies. The NSL-KDD dataset serves as the experimental foundation for training, testing, and validating the deep learning algorithms. The proposed method integrates the Minimum Redundancy Maximum Relevance (MRMR) feature selection algorithm, enhancing model performance, mitigating overfitting, and reducing computational complexity. The classifier comprises Convolutional Neural Network (CNN) and Gated Recurrent Unit (GRU) components: the CNN extracts and localizes discriminative patterns in the input, while the GRU provides a dynamic mechanism for selectively updating the network's hidden state, effectively managing flow information. The experimental results demonstrate the efficacy of the proposed approach in achieving improved detection accuracy and robust performance against DDoS attacks.
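The abstract does not give the exact MRMR variant used; the sketch below implements a common greedy approximation, scoring relevance with mutual information and redundancy with mean absolute correlation, on stand-in data in place of NSL-KDD.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def mrmr_select(X, y, k=20):
    """Greedy mRMR: relevance = mutual information with the label,
    redundancy = mean absolute correlation with already-picked features."""
    relevance = mutual_info_classif(X, y, random_state=0)
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        scores = np.full(X.shape[1], -np.inf)
        for j in range(X.shape[1]):
            if j in selected:
                continue
            redundancy = np.mean(
                [abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
            scores[j] = relevance[j] - redundancy
        selected.append(int(np.argmax(scores)))
    return selected

# Stand-in for the NSL-KDD feature matrix
X = np.random.rand(200, 40)
y = np.random.randint(0, 2, 200)
print(mrmr_select(X, y, k=5))
```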
DDOS Attack Detection using DeepDDOS: A Hybrid Approach using CNN, GRU and MLP model
Dr Elakkiya E, Kaushik Aadhithya Chiratanagandla, Jayesh Jethy, Nabin Kumar Shah, Vishal Kumar Singh
Source Title: 2024 5th IEEE Global Conference for Advancement in Technology (GCAT), DOI Link
This study presents an innovative method for DDoS attack detection and mitigation that combines PCA-based feature selection with a hybrid neural network (DeepDDoS) model. This model includes Conv1D layers for extracting features, a MaxPooling layer for dimensionality reduction, and a GRU layer for capturing sequential patterns. Dropout layers mitigate overfitting, while Flatten layers prepare data for analysis. Conv1D layers enhance the model's ability to identify DDoS attack patterns. MaxPooling layers reduce spatial dimensions while preserving important information. The GRU layer captures temporal dependencies, facilitating robust attack pattern identification. The model incorporates MLP layers for classification, comprising three Dense layers. Empirical assessment confirms the model's effectiveness in precisely identifying and mitigating DDoS attacks, thereby strengthening cybersecurity defenses against advancing threats.
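Following the layer ordering named in the abstract, here is a minimal Keras sketch of the DeepDDoS pipeline; the PCA dimension, layer widths, dropout rate, and dummy data are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from tensorflow.keras import layers, Model

N_COMPONENTS = 16                                   # assumed PCA dimension

# PCA step on raw flow features before the network (dummy data here)
raw = np.random.rand(100, 40)
reduced = PCA(n_components=N_COMPONENTS).fit_transform(raw)[..., np.newaxis]

inp = layers.Input(shape=(N_COMPONENTS, 1))
x = layers.Conv1D(32, 3, activation="relu")(inp)    # feature extraction
x = layers.MaxPooling1D(2)(x)                       # dimensionality reduction
x = layers.GRU(32, return_sequences=True)(x)        # sequential patterns
x = layers.Dropout(0.3)(x)                          # overfitting control
x = layers.Flatten()(x)                             # prepare for the MLP head
x = layers.Dense(64, activation="relu")(x)          # MLP: three Dense layers
x = layers.Dense(32, activation="relu")(x)
out = layers.Dense(1, activation="sigmoid")(x)
model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(reduced, np.random.randint(0, 2, 100), epochs=1, verbose=0)
```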
EEG Signal Processing for Action Recognition Using Machine Learning Paradigms
Dr Elakkiya E, Abirami S P, Misha Chandar B, Karthikeyan G
Source Title: 2024 OITS International Conference on Information Technology (OCIT), DOI Link
Automatic interpretation of brain readings may enable many intriguing applications, such as moving prosthetic limbs and more fluid man-machine interaction. The inquiry first tackles the problem of precisely categorizing EEG signals linked with memory categories. Given the restricted availability of pre-trained models for such signal classification, a Convolutional Neural Network (CNN) is constructed from scratch. Using EEG recordings from UC Berkeley's Bio-Sense Lab, this study seeks to improve memory recall classification through machine learning and precise feature selection. Fifteen participants' EEG data are converted into the frequency domain, and the amplitudes are used as key features. The selection of these features is refined by a self-attention mechanism, which maximizes the distinction among the memory categories. The primary focus is to evaluate the performance of state-of-the-art algorithms, with the secondary objective of outperforming previous methods in classification accuracy. A fine-tuned subset of the frequency-based features is evaluated using a Support Vector Machine (SVM) classifier. By showcasing the efficiency of self-attention in honing feature subsets, this study highlights the significance of feature engineering in EEG-based memory classification. The method improves the separation between memory categories through frequency-domain transformations and SVM classifiers, positioning it as a promising advancement in EEG analysis. Furthermore, investigating time-series features shows how well they can capture intricate patterns, pointing to fresh avenues for future neuro-informatics and cognitive research.
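A minimal sketch of the frequency-domain feature extraction plus SVM classification pipeline, run on synthetic signals in place of the BioSense recordings; the self-attention feature refinement step is omitted, and all shapes are illustrative.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
signals = rng.standard_normal((120, 256))   # 120 synthetic trials, 256 samples
labels = rng.integers(0, 2, 120)            # two memory categories

# Frequency-domain features: FFT amplitudes of each trial
features = np.abs(np.fft.rfft(signals, axis=1))

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))   # ~chance here, since data is noise
```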
MS3A: Wrapper-Based Feature Selection with Multi-swarm Salp Search Optimization
Dr Elakkiya E, Rajmohan Shathanaa, S R Sreeja
Source Title: Lecture Notes in Networks and Systems, Quartile: Q4, DOI Link
Feature selection is crucial in improving the effectiveness of classification or clustering algorithms, as a large feature set can affect classification accuracy and learning time. The feature selection process involves choosing the most pertinent features from an initial feature set. This work introduces a new feature selection technique based on the salp swarm algorithm. In particular, an improved variant of the salp swarm algorithm is presented, with modifications to different stages of the algorithm. The proposed work is evaluated by first studying its performance on standard CEC optimization benchmarks. In addition, the applicability of the introduced algorithm to feature selection problems is verified by comparing its performance with existing feature selection algorithms. The experimental analysis shows that the proposed methodology outperforms existing algorithms on both numerical optimization and feature selection problems and reduces the feature subset size by 39.1% when compared to the traditional salp swarm algorithm.
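A sketch of the standard wrapper fitness such selectors optimize, trading classification accuracy against subset size; the classifier choice (KNN), the weight alpha, and the toy data are assumptions, and MS3A's multi-swarm search itself is not reproduced here.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def wrapper_fitness(mask, X, y, alpha=0.99):
    """Wrapper objective: weigh classification accuracy against subset size."""
    if mask.sum() == 0:
        return 0.0                                   # empty subsets score worst
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask == 1], y, cv=3).mean()
    return alpha * acc + (1 - alpha) * (1 - mask.sum() / mask.size)

rng = np.random.default_rng(1)
X = rng.random((150, 20))
y = rng.integers(0, 2, 150)
mask = rng.integers(0, 2, 20)        # one candidate salp position, binarized
print(wrapper_fitness(mask, X, y))
```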
Multi-cohort whale optimization with search space tightening for engineering optimization problems
Dr Elakkiya E, Shathanaa Rajmohan, S R Sreeja
Source Title: Neural Computing and Applications, Quartile: Q1, DOI Link
Metaheuristic algorithms have been widely studied and shown to be suitable for solving various engineering optimization problems. This paper presents a novel variant of the whale optimization algorithm, the multi-cohort whale optimization algorithm, to solve engineering optimization problems. The new algorithm improves existing whale optimization by dividing the population into cohorts and introducing a separate exploration procedure for each cohort. A new boundary update procedure for the search space is also introduced. In addition, opposition-based initialization and elitism are employed to aid quick convergence of the algorithm. The proposed algorithm is compared with whale optimization algorithm variants and other metaheuristic algorithms on different numerical optimization problems, and statistical analysis is performed to confirm the significance of the results. The proposed and existing algorithms are further studied on three engineering optimization problems. The analyses show that the proposed algorithm achieves a 53.75% improvement in average fitness compared to the original whale optimization algorithm.
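A minimal sketch of two of the listed ingredients, opposition-based initialization and a cohort split with per-cohort leaders, on a toy objective; the cohort count, per-cohort exploration rules, and boundary update procedure are not reproduced from the paper.

```python
import numpy as np

def sphere(x):
    """Toy objective (minimization)."""
    return np.sum(x ** 2, axis=-1)

def opposition_init(n, dim, lb, ub, fobj, rng):
    """Opposition-based initialization: evaluate each random point and its
    mirror image in the bounds, keep the best n of the combined set."""
    pop = rng.uniform(lb, ub, (n, dim))
    opposite = lb + ub - pop
    both = np.vstack([pop, opposite])
    return both[np.argsort(fobj(both))][:n]

rng = np.random.default_rng(0)
pop = opposition_init(12, 5, -10.0, 10.0, sphere, rng)

# Divide the population into cohorts that would explore independently
cohorts = np.array_split(pop, 3)
for k, cohort in enumerate(cohorts):
    leader = cohort[np.argmin(sphere(cohort))]      # per-cohort elite
    print(f"cohort {k}: leader fitness {sphere(leader):.3f}")
```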