Transfer Learning Model for Anomaly Detection in Data Streaming – Data Engineering Perspective
Suryadevara G., Udayaraju P., Pachipulusu P., Gayathri M., Sitharam M., Kumar V.D.
Conference paper, 2nd International Conference on Machine Learning and Autonomous Systems, ICMLAS 2025 - Proceedings, 2025, DOI Link
The main objective of this paper is to implement a transfer learning model for predicting anomalies in online streaming data. Streaming data is a continuous data generation and transmission model involving huge volumes of data, which enables various kinds of vulnerable attacks on the network and negatively impacts overall network performance. Several earlier methods have been proposed to improve anomaly detection accuracy in streaming data, but their false positive rates remain high. This paper aims to increase the anomaly detection rate while reducing the false positive rate. Hence, it proposes a novel transfer learning method for designing an effective anomaly detection model for data streaming applications. It implements a long short-term memory (LSTM) network for managing the continuous generation and transfer of data, called streaming data, because it has built-in features such as the forget gate, which manages the memory by eliminating unwanted and redundant data flows in the streaming process. The LSTM model is deployed in a kind of MANET called a VANET, where it is applied to detect anomalies during vehicle communication. This paper achieves high prediction accuracy since it integrates various data analytics tasks, such as preprocessing, feature extraction, and classification, which feed quality data and enable fast analysis. The LSTM can detect anomalies including DoS, DDoS, Sybil, sinkhole, wormhole, and blackhole attacks. The simulation is carried out by implementing the LSTM in Python and executing it on a benchmark dataset to verify its efficacy. The output shows that the model provides higher accuracy, low latency, and high throughput and is suitable for many real-time applications such as IoT networks and cybersecurity.
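As a rough illustration of the detection mechanism described above, the sketch below trains an LSTM to forecast the next value of a traffic stream and flags windows whose prediction error exceeds a threshold; the windowing, layer sizes, and thresholding rule are illustrative assumptions rather than details from the paper.

```python
# Minimal sketch: LSTM-based anomaly flagging for a univariate stream.
import numpy as np
from tensorflow.keras import layers, models

def build_lstm_forecaster(window: int):
    # Predict the next traffic value from the previous `window` values;
    # the LSTM's forget gate discards stale stream history along the way.
    model = models.Sequential([
        layers.Input(shape=(window, 1)),
        layers.LSTM(32),
        layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

def anomaly_flags(model, X, y, threshold):
    # A window is anomalous when its forecast error exceeds the threshold
    # (the threshold would be chosen on clean validation traffic).
    errors = np.abs(model.predict(X, verbose=0).ravel() - y)
    return errors > threshold
```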
Prediction of Non-Alcoholic Fatty Liver Disease (NAFLD) Using DNA Pathological Data and Support Vector Machines
Prasad T.V.K.P., Bandla S.L., Srikanth N., Ramya Sree K., Aswani I., Udayaraju P., Rama Krishna B.L.V.S.
Article, Journal of Theoretical and Applied Information Technology, 2025
Non-Alcoholic Fatty Liver Disease (NAFLD) has emerged as one of the most prevalent liver disorders globally, affecting nearly one-third of the population, with particularly high incidence rates in countries like the UK. Despite its widespread occurrence, accurate estimation of its prevalence remains a challenge. Early-stage NAFLD, typically characterized by simple steatosis, can silently progress to more severe conditions such as non-alcoholic steatohepatitis (NASH), fibrosis, and cirrhosis if left untreated. This progression significantly compromises liver function and increases the risk of cardiovascular complications. However, current diagnostic methods, including magnetic resonance spectroscopy and ultrasound imaging, are often limited by cost, accessibility, and diagnostic specificity. Given the clinical urgency and the limitations of conventional diagnostics, this study addresses the critical need for an accessible and accurate method to detect early-stage liver disease—specifically, to predict NASH within the NAFLD spectrum. We propose a machine learning-based approach that leverages clinical and pathological data, including blood parameters and ultrasound-derived tissue characteristics, to support early detection. Using a dataset of 181 patients, we applied preprocessing techniques such as normalization and categorical encoding to prepare the data for modelling. Features such as integrated backscatter (IB), Q-factor, and homogeneity factor (HF) were extracted to quantify liver tissue characteristics. Support Vector Machine (SVM), chosen for its balance of simplicity and efficiency in handling high-dimensional datasets, was employed for classification and regression tasks. Experimental validation using Python-based implementations demonstrated the model's effectiveness, achieving an average accuracy of 89.95% across both clinical and imaging-derived datasets. This study underscores the potential of machine learning in improving early diagnosis of liver diseases and reducing their long-term clinical burden.
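For a concrete starting point, here is a minimal scikit-learn pipeline in the spirit of the approach above: normalization and categorical encoding feeding an SVM. The toy columns and values below are invented stand-ins, not the study's 181-patient dataset or its actual settings.

```python
# Minimal sketch: preprocessing + SVM classification on placeholder clinical data.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.svm import SVC

df = pd.DataFrame({
    "ib": [0.42, 0.55, 0.61, 0.38, 0.70, 0.45, 0.66, 0.40],   # integrated backscatter
    "q_factor": [1.2, 0.9, 0.7, 1.3, 0.6, 1.1, 0.8, 1.2],
    "hf": [0.8, 0.6, 0.5, 0.9, 0.4, 0.7, 0.5, 0.8],           # homogeneity factor
    "sex": ["F", "M", "F", "M", "M", "F", "M", "F"],
    "nash": [0, 1, 1, 0, 1, 0, 1, 0],                         # 1 = NASH within the NAFLD spectrum
})
pre = ColumnTransformer([
    ("num", StandardScaler(), ["ib", "q_factor", "hf"]),      # normalization
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["sex"]), # categorical encoding
])
clf = Pipeline([("pre", pre), ("svm", SVC(kernel="rbf"))])
X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="nash"), df["nash"], test_size=0.25, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```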
Behavioral Pattern Analysis Framework with Markov Chains and Graph Neural Networks for Scalable Customer Behavior Prediction in Travel Reservations
Kotaru C., Udayaraju P., Kolasani D., Sayana R., Tumkunta S., Gummadi V.
Conference paper, 2nd International Conference on Machine Learning and Autonomous Systems, ICMLAS 2025 - Proceedings, 2025, DOI Link
The ability to accurately predict and understand customer behavior is central to the success of modern Customer Data Platforms (CDPs), which rely on unified customer profiles to drive personalization, marketing strategies, and customer retention. Traditional segmentation models, such as Recency, Frequency, and Monetary (RFM), struggle to capture the dynamic, sequential, and interconnected nature of customer interactions in travel reservation environments. This paper introduces the Behavioral Pattern Analysis Framework (BPAF), a hybrid approach integrating Markov Chains and Graph Neural Networks (GNNs) for scalable, real-time customer behavior prediction. BPAF models customer interactions as a series of states (e.g., browsing destinations, searching for flights or accommodations, booking reservations) to track transitions through various customer journey stages. Markov Chains capture the sequential nature of customer actions, providing insights into the likelihood of moving from one state to another. Meanwhile, Graph Neural Networks model the complex relationships and dependencies between customer actions, allowing for a more comprehensive understanding of the interplay between behaviors across touchpoints. This combination enhances the ability to uncover advanced behavior patterns, such as delayed bookings, cross-category travel interests, and the influence of external factors (e.g., seasonality or promotions). Integrating Markov Chains and GNNs within BPAF ensures scalability by allowing the system to handle large datasets and model complex relationships efficiently. This hybrid framework can process vast amounts of data in real time and is particularly well-suited for dynamic environments like travel reservations, where customer behavior evolves rapidly and interactions span multiple channels. Experiments using real-world travel reservation datasets demonstrate that BPAF significantly outperforms traditional models in predicting customer actions, improving conversion rate predictions and enabling more accurate personalized recommendations. The paper concludes by discussing the potential of BPAF to drive more effective customer engagement, optimize booking strategies, and support growth through scalable, behavior-driven insights.
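The Markov-chain half of BPAF can be pictured in a few lines: estimate transition probabilities P(next state | current state) from observed journeys. The journey states below are invented examples, not data from the paper.

```python
# Minimal sketch: first-order transition probabilities from customer journeys.
from collections import defaultdict

def transition_matrix(journeys):
    counts = defaultdict(lambda: defaultdict(int))
    for journey in journeys:
        for a, b in zip(journey, journey[1:]):   # consecutive state pairs
            counts[a][b] += 1
    # Row-normalise counts into probabilities P(next | current).
    return {s: {t: c / sum(nxt.values()) for t, c in nxt.items()}
            for s, nxt in counts.items()}

P = transition_matrix([
    ["browse", "search_flight", "book"],
    ["browse", "search_hotel", "browse", "book"],
])
print(P["browse"])   # likelihood of each next action after browsing
```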
Intelligent Industrial IoT: A Data-Driven Approach for Smart Manufacturing and Predictive Maintenance
Putteti S., Santhi G., Mittoor G.R., Nagamani C., Udayaraju P.
Conference paper, Proceedings of 3rd International Conference on Augmented Intelligence and Sustainable Systems, ICAISS 2025, 2025, DOI Link
The fast growth of the Industrial Internet of Things (IIoT) has changed modern manufacturing by allowing real-time data collection, smart automation, and predictive analysis. However, the large amount of data from industrial sensors and machines needs efficient processing, analysis, and decision-making to improve efficiency and reduce downtime. This study introduces an Intelligent Industrial IoT (I-IIoT) system that combines edge computing, artificial intelligence (AI), and big data analysis to support smart manufacturing and predictive maintenance. The proposed model uses machine learning (ML) and deep learning (DL) to identify equipment issues, predict failures, and improve production. A cloud-edge hybrid setup processes data in real time, reducing delays and making the system more responsive. A blockchain-based data-sharing method ensures data security, privacy, and smooth communication between IIoT systems. Comparisons with traditional maintenance methods reveal significant improvements, such as over 95% accuracy in predictions, a 30% reduction in downtime, and 25% better use of resources. These results suggest that data-driven IIoT solutions can transform industrial operations by improving automation, security, and decision-making, leading to the next level of smart factories.
Application of Artificial Intelligence: Recurrent Neural Network and Convolution Neural Network Performs in Industrial Design and Optimization
Pamaiahgari V.P.R., Santhi G., Reddy K.J., Udayaraju P., Tumkunta S.
Conference paper, Proceedings of 3rd International Conference on Augmented Intelligence and Sustainable Systems, ICAISS 2025, 2025, DOI Link
Efficiency is paramount in industrial engineering. Many advanced AI algorithms are used in industrial environments to monitor, identify, and detect machinery conditions and other related activities. The primary challenge is accurate fault detection, which is essential for classifying defects in manufacturing. Hence, choosing a suitable AI algorithm for monitoring the industrial environment is most important. This paper intends to solve the problem of automatically optimizing the design, structure, and process of specific industrial products; it selects advanced AI models, namely RNN and CNN. The paper addresses the optimization issue using neural networks in the Recurrent Neural Network and Convolutional Neural Network architectures. The goal of the CNN model is to implement fault detection, selection of construction materials, and validation of the design through CAD methods utilizing feature extraction and pattern recognition. The RNN model assists the user by
Implementing Resource-Aware Scheduling Algorithm for Improving Cost Optimization in Cloud Computing
Reddy K.J., Udayaraju P., Pamaiahgari V.P.R., Kumar V.D.
Conference paper, 4th International Conference on Sentiment Analysis and Deep Learning, ICSADL 2025 - Proceedings, 2025, DOI Link
An uncountable number of computational resources are shared across various applications using a transformative technology called cloud computing. It is an emerging technology that offers scalable, on-demand resources. However, cost optimization and resource allocation are critical issues due to the increasing number of cloud users and their requests. This can be addressed by optimizing resource allocation to reduce the task/user queue, which is possible only if the requested resource is free; otherwise, the same or a relevant resource must be scheduled. This paper demonstrates a Resource-Aware Scheduling (RAS) algorithm that increases the speed of resource allocation by mapping user requests to resource availability. The proposed algorithm examines the log information of the resources, workload behaviour, resource availability, price, task efficiency, and wastage, and allocates the resources dynamically. To do that, it maps user information to resource information and schedules based on availability and priority. A simulation-based experiment is carried out in a private cloud space to demonstrate resource allocation, utilization, response time, and cost. The RAS is also compared with a traditional scheduling algorithm to evaluate its performance. The evaluation found that the RAS model is well suited for cloud resource allocation.
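A toy version of the request-to-resource mapping might look like the following: a priority queue of requests is matched against whichever resources are currently free, preferring the cheapest. The registry layout, priority rule, and cost tie-break are illustrative assumptions only.

```python
# Minimal sketch: priority-driven mapping of queued requests to free resources.
import heapq

resources = {"vm-a": {"free": True, "cost": 0.12},
             "vm-b": {"free": False, "cost": 0.10}}
queue = []   # entries are (priority, request_id); smaller priority runs first

def submit(priority, request_id):
    heapq.heappush(queue, (priority, request_id))

def schedule():
    while queue:
        free = [r for r, meta in resources.items() if meta["free"]]
        if not free:
            break                # request stays queued until a resource frees up
        prio, req = heapq.heappop(queue)
        target = min(free, key=lambda r: resources[r]["cost"])  # cheapest free resource
        resources[target]["free"] = False
        print(f"request {req} (priority {prio}) -> {target}")

submit(1, "job-42"); submit(0, "job-7"); schedule()
```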
A Hybrid Machine Learning Model for Analyzing the Dynamic Behavior of the Cloud Data for Optimal Resource Allocation and Scheduling to Enhance Cost Optimization
Kamma H., Udayaraju P., Santhi G., Tumkunta S.
Conference paper, Proceedings of 8th International Conference on Inventive Computation Technologies, ICICT 2025, 2025, DOI Link
The challenges of efficient resource allocation and scheduling in cloud computing, together with the dynamic and unpredictable nature of cloud data, lead to suboptimal cost management and continue to be serious problems. The aim of this work is to devise a hybrid machine learning model as the best solution for resource allocation and scheduling in cloud computing, so that costs can be minimized in a dynamic cloud environment while working efficiently. Unlike traditional approaches, which have trouble dealing with several factors at the same time, this work utilizes both supervised and reinforcement learning methodologies to devise an integrated solution. In detail, Long Short-Term Memory (LSTM) networks are employed to provide an accurate forecast of the workload's time series, while Deep Q-Networks (DQN) allow for smart decision-making on how best to distribute the resources. The system continuously monitors cloud operations, gathering real-time data not only on workload fluctuations but also on resource requests and utilization patterns, to build an adaptive scheduling model, which in turn enhances cost-efficient service quality. The experimental outcomes validate the model presented in this paper, as it effectively manages the problems of underutilization and over-provisioning with a 25% reduction in costs compared to traditional scheduling methods. This work merges predictive analytics and intelligent resource management, thus enabling cloud computing to achieve better cost, scalability, and performance in highly dynamic environments.
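To make the reinforcement-learning half concrete, the sketch below uses tabular Q-learning as a deliberately simplified stand-in for the paper's DQN: a scheduler learns whether to scale resources up, hold, or scale down given a coarse load level. The states, actions, and reward are invented for illustration.

```python
# Minimal sketch: tabular Q-learning for allocation decisions (simplified DQN stand-in).
import random
from collections import defaultdict

Q = defaultdict(float)                     # Q[(load_level, action)]
actions = ["scale_up", "hold", "scale_down"]
alpha, gamma, eps = 0.1, 0.9, 0.2          # learning rate, discount, exploration

def choose(state):
    if random.random() < eps:                          # explore
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])   # exploit

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# e.g. penalise over-provisioning when the forecast says load stays low
update("low", "scale_up", -1.0, "low")
```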
Developing A Recommendation System for Medical Expert Searching Using Graph Neural Networks
Cherukuri S.C., Udayaraju P., Suryadevara G., Tottempudi S.
Conference paper, 4th International Conference on Sentiment Analysis and Deep Learning, ICSADL 2025 - Proceedings, 2025, DOI Link
Graph Neural Networks (GNNs) are among the most robust paradigms for analyzing complex, related, and tightly coupled datasets. The main objective of this paper is to use one of these emerging data analytics models to develop a common recommendation system for consumer products used in daily human life. Thus, this paper implements the GNN model for interlinking various datasets of consumer products, such as food, medicine, and others. The paper's novelty lies in interconnecting multiple datasets, interlinking them using GNN-based graphs, and extracting features to provide accurate predictions. The applications of GNNs are explored to understand their functionality and capability for handling several heterogeneous datasets to develop a unified recommendation system. Existing recommendation systems struggle to capture the inter-relationships among multiple consumer datasets and to interconnect heterogeneous datasets; the GNN can address these issues and challenges. A graph structure is created dynamically to interconnect all the consumer product data items based on structure, context, usage, and user opinion and experience. Benchmark datasets were obtained from the UCI and Kaggle repositories, and the GNN was implemented in Python to evaluate its efficacy. The experimental outputs demonstrate that the GNN model is highly efficient at interconnecting heterogeneous datasets and creating similarity-aware recommendation systems.
Building Privacy-Aware Data Pipelines: Balancing Scalability, Compliance, and Performance in Modern Data Architectures
Mittoor G.R., Udayaraju P., Putteti S.
Conference paper, 2nd International Conference on Machine Learning and Autonomous Systems, ICMLAS 2025 - Proceedings, 2025, DOI Link
In cloud computing, data is generated from various sources in different locations, such as database management systems, data streaming environments, and files, and is automatically transferred to a data lake. Managing that data in the data lake is a challenging problem that involves developing a scalable model to integrate, process, and move it from one source to another with optimized cost and performance. Earlier research used various optimization, data management, and processing tools, but their performance efficiency is poor. This paper is motivated to improve the overall performance by implementing the Random Forest (RF) algorithm to profile the incoming data, which is then passed into a cost-effective data pipeline that ingests and processes massive amounts of incoming data daily. It also involves Apache Spark cloud services integrating the data seamlessly to manage storage and processing. Based on data privacy and governance policies, the RF model monitors and secures sensitive data in the input dataset. The simulation output is obtained by executing the proposed model in the Apache Spark cloud framework, and its efficiency is verified in terms of execution time, response time to user queries, and accuracy. A large-scale healthcare dataset simulated across Spark, S3, and an AWS data lake is used to confirm the efficacy of the privacy-aware data pipelining model, and the data transmission rate, claims, and cost efficiency are verified.
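The sensitive-data gate could be prototyped roughly as follows: a Random Forest trained on record-level features routes each record either to an encrypted branch or to the fast path of the pipeline. The feature names and labels are invented placeholders, not the paper's schema.

```python
# Minimal sketch: RF-based routing of records by sensitivity.
from sklearn.ensemble import RandomForestClassifier

# Placeholder features: has_patient_id, has_diagnosis_text, field_count
X_train = [[1, 0, 11], [0, 1, 2], [1, 1, 9], [0, 0, 1], [1, 1, 14], [0, 0, 3]]
y_train = [1, 0, 1, 0, 1, 0]    # 1 = sensitive under the governance policy
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def route(record_features):
    # Sensitive records take the secured branch; the rest take the fast path.
    return "encrypted-store" if rf.predict([record_features])[0] else "fast-path"

print(route([1, 0, 8]))
```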
Implementing A Two-Stage Authentication and Authorization Protocol for Improving the User Level Security in Cloud Applications
Conference paper, Proceedings of 3rd International Conference on Augmented Intelligence and Sustainable Systems, ICAISS 2025, 2025, DOI Link
Cloud applications have transformed how data is stored, processed, and accessed. However, they remain vulnerable to cyber threats, including unauthorized access and security breaches. Traditional login methods, such as passwords, often fail to provide strong protection against attacks like credential theft, brute force attempts, and session hijacking. This paper introduces a Two-Stage Authentication and Authorization Protocol (TS-AAP) to improve security. The first stage uses multi-factor authentication (MFA), requiring users to verify their identity with a combination of credentials, such as passwords, one-time passwords (OTPs), or biometric authentication. The second stage applies role-based access control (RBAC) and behaviour analysis to ensure that only authorized users can access specific cloud resources. This approach follows predefined security policies and continuously monitors user activity in real time. By combining these two layers of security, TS-AAP effectively prevents unauthorized access, reduces identity spoofing risks, and strengthens data protection in cloud applications. Experimental results show that the system improves authentication accuracy to 98.2%, lowers unauthorized access attempts by 90%, and reduces authentication time to just 1.8 seconds. These findings confirm that TS-AAP is a reliable and efficient solution for enhancing cloud security and protecting sensitive data from modern cyber threats.
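A stripped-down sketch of the two-stage flow is given below: stage one verifies a password plus a one-time password, and stage two applies an RBAC check. The in-memory role table, OTP handling, and bare SHA-256 hash are illustrative simplifications, not the protocol's actual design.

```python
# Minimal sketch: password + OTP (stage 1), then RBAC (stage 2).
import hashlib, hmac, secrets

ROLES = {"alice": "analyst"}                                  # RBAC assignments
PERMS = {"analyst": {"read_reports"}, "admin": {"read_reports", "delete_data"}}
OTP_STORE = {}

def start_login(user, password, stored_hash):
    # Stage 1a: constant-time password check; Stage 1b: issue an OTP.
    digest = hashlib.sha256(password.encode()).hexdigest()
    if not hmac.compare_digest(digest, stored_hash):
        return None
    OTP_STORE[user] = secrets.token_hex(3)    # delivered out-of-band in a real system
    return OTP_STORE[user]

def authorize(user, otp, action):
    # Stage 1 completes with OTP verification; stage 2 enforces RBAC.
    if OTP_STORE.pop(user, None) != otp:
        return False
    return action in PERMS.get(ROLES.get(user, ""), set())
```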
Optimized machine learning mechanism for big data healthcare system to predict disease risk factor
Thatha V.N., Chalichalamala S., Pamula U., Krishna D.P., Chinthakunta M., Mantena S.V., Vahiduddin S., Vatambeti R.
Article, Scientific Reports, 2025, DOI Link
Heart disease is becoming more and more common in modern society because of factors such as stress and inadequate diets. Early identification of heart disease risk factors is essential, as it allows for treatment plans that may reduce the risk of severe consequences and enhance patient outcomes. Predictive methods have been used to estimate the risk factor, but they often have drawbacks such as improper feature selection and overfitting. To overcome this, a novel Deep Red Fox belief prediction system (DRFBPS) has been introduced and implemented in Python. Initially, the data was collected and preprocessed to enhance its quality, and the relevant features were selected using red fox optimization. The selected features analyze the risk factors, and DRFBPS makes the prediction. The effectiveness of the DRFBPS model is validated using accuracy, F-score, precision, AUC, recall, and error rate. The findings demonstrate that DRFBPS produces accurate and reliable predictions, making it a practical tool in healthcare analytics. Its application in healthcare systems, including clinical decision-making and remote patient monitoring, proves its real-world applicability in enhancing early diagnosis and preventive care, providing a strong framework for predictive modeling in heart disease risk prediction.
Optimizing diabetic retinopathy detection with electric fish algorithm and bilinear convolutional networks
Pamula U., Pulipati V., Vijaya Suresh G., Jagannatha Reddy M.V., Bondala A.K., Mantena S.V., Vatambeti R.
Article, Scientific Reports, 2025, DOI Link
Diabetic Retinopathy (DR) is a leading cause of vision impairment globally, necessitating regular screenings to prevent its progression to severe stages. Manual diagnosis is labor-intensive and prone to inaccuracies, highlighting the need for automated, accurate detection methods. This study proposes a novel approach for early DR detection by integrating advanced machine learning techniques. The proposed system employs a three-phase methodology: initial image preprocessing, blood vessel segmentation using a Hopfield Neural Network (HNN), and feature extraction through an Attention Mechanism-based Capsule Network (AM-CapsuleNet). The features are optimized using a Taylor-based African Vulture Optimization Algorithm (AVOA) and classified using a Bilinear Convolutional Attention Network (BCAN). To enhance classification accuracy, the system introduces a hybrid Electric Fish Optimization Arithmetic Algorithm (EFAOA), which refines the exploration phase, ensuring rapid convergence. The model was evaluated on a balanced dataset from the APTOS 2019 Blindness Detection challenge, demonstrating superior performance in terms of accuracy and efficiency. The proposed system offers a robust solution for the early detection and classification of DR, potentially improving patient outcomes through timely and precise diagnosis.
Bio inspired feature selection and graph learning for sepsis risk stratification
Siri D., Kocherla R., Tumkunta S., Udayaraju P., Gogineni K.C., Mamidisetti G., Boddu N.
Article, Scientific Reports, 2025, DOI Link
Sepsis remains a leading cause of mortality in critical care settings, necessitating timely and accurate risk stratification. However, existing machine learning models for sepsis prediction often suffer from poor interpretability, limited generalizability across diverse patient populations, and challenges in handling class imbalance and high-dimensional clinical data. To address these gaps, this study proposes a novel framework that integrates bio-inspired feature selection and graph-based deep learning for enhanced sepsis risk prediction. Using the MIMIC-IV dataset, we employ the Wolverine Optimization Algorithm (WoOA) to select clinically relevant features, followed by a Generative Pre-Training Graph Neural Network (GPT-GNN) that models complex patient relationships through self-supervised learning. To further improve predictive accuracy, the TOTO metaheuristic algorithm is applied for model fine-tuning. SMOTE is used to balance the dataset and mitigate bias toward the majority class. Experimental results show that our model outperforms traditional classifiers such as SVM, XGBoost, and LightGBM in terms of accuracy, AUC, and F1-score, while also providing interpretable mortality indicators. This research contributes a scalable and high-performing decision support tool for sepsis risk stratification in real-world clinical environments.
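The class-balancing step can be reproduced directly with imblearn's SMOTE; the synthetic dataset below merely stands in for the WoOA-selected MIMIC-IV features.

```python
# Minimal sketch: SMOTE oversampling of an imbalanced cohort.
from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, weights=[0.9], random_state=0)  # ~10% positives
X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)
print(Counter(y), "->", Counter(y_res))   # minority class oversampled to parity
```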
Clustering-based binary Grey Wolf Optimisation model with 6LDCNNet for prediction of heart disease using patient data
Kumar L.K., Suma K.G., Udayaraju P., Gundu V., Mantena S.V., Jagadesh B.N.
Article, Scientific Reports, 2025, DOI Link
In recent years, the healthcare data system has expanded rapidly, allowing for the identification of important health trends and facilitating targeted preventative care. Heart disease remains a leading cause of death in developed countries, often leading to consequential outcomes such as dementia, which can be mitigated through early detection and treatment of cardiovascular issues. Continued research into preventing strokes and heart attacks is crucial. Utilizing the wealth of healthcare data related to cardiac ailments, a two-stage medical data classification and prediction model is proposed in this study. Initially, Binary Grey Wolf Optimization (BGWO) is used to cluster features, with the grouped information then utilized as input for the prediction model. An innovative 6-layered deep convolutional neural network (6LDCNNet) is designed for the classification of cardiac conditions. Hyper-parameter tuning for 6LDCNNet is achieved through an improved optimization method. The resulting model demonstrates promising performance on both the Cleveland dataset, achieving a convergence of 96% for assessing severity, and the echocardiography imaging dataset, with an impressive 98% convergence. This approach has the potential to aid physicians in diagnosing the severity of cardiac diseases, facilitating early interventions that can significantly reduce mortality associated with cardiovascular conditions.
Analyzing Public Sentiment on the Amazon Website: A GSK-Based Double Path Transformer Network Approach for Sentiment Analysis
Kumar L.K., Thatha V.N., Udayaraju P., Siri D., Kiran G.U., Jagadesh B.N., Vatambeti R.
Article, IEEE Access, 2024, DOI Link
Sentiment Analysis (SA) holds considerable significance in comprehending public perspectives and conducting precise opinion-based evaluations, making it a prominent theme in natural language processing research. With the increasing trend of online shopping and social media usage, there is a constant influx of diverse data types such as images, videos, audio, and text. Notably, text stands out as the most crucial form of unstructured data, demanding heightened attention from researchers. Given the voluminous nature of data, various methodologies have been proposed to effectively mine big datasets for valuable insights. The challenge of accurately identifying polarity in extensive customer evaluations persists due to the intricacies associated with handling large textual datasets derived from reviews, comments, tweets, and posts. This study addresses this challenge by presenting a straightforward architecture, the Double Path Transformer Network (DPTN), designed to model both global and local information for comprehensive review categorization. To enhance the synergy between the attention path and the convolutional path, the study advocates a parallel design that combines a robust self-attention mechanism with a convolutional network. The research employs the gaining-sharing knowledge optimization (GSK) approach to fine-tune hyperparameters, thereby improving the model's classification accuracy. Additionally, the investigation demonstrates that optimization algorithms and deep learning collaboratively manage class imbalances with finesse, even in the absence of explicit measures for such concerns. In the experimental analysis, the proposed model ultimately achieved an accuracy of 95.
A Deep Learning-based Optimization Model for Advertisement Campaign
Gummadi V., Ramadevi N., Udayaraju P., Ravulu C., Seelam D.R., Swamy S.V.
Conference paper, Proceedings of the 5th International Conference on Smart Electronics and Communication, ICOSEC 2024, 2024, DOI Link
Non-profit organizations often struggle to fully utilize the $10,000 online Ads grant to enhance their online presence and attract aligned donors due to a lack of expertise in digital marketing. This gap in effective utilization limits their potential reach and undermines their growth and mission impact. This proposal outlines a solution leveraging cloud computing and artificial intelligence (AI) to address this challenge. By integrating advanced cloud-based tools and AI-driven analytics, we can provide these organizations with tailored strategies and automated support to optimize their Google Ads campaigns. This approach will enable non-profits to maximize their advertising budget, refine targeting strategies, and improve engagement with prospective donors, ultimately fostering greater alignment with their causes and driving sustainable growth.
Exploring Spiking Neural Networks and Deep Learning Techniques for Occlusion Detection in AR and VR Images
Mounika B., Udayaraju P., Varma C.V., Narayana T.V., Jyothi P., Devi C.
Conference paper, Proceedings - 3rd International Conference on Advances in Computing, Communication and Applied Informatics, ACCAI 2024, 2024, DOI Link
This research investigates whether spiking neural networks (SNNs) can be used along with deep learning methods to find occlusions in virtual reality and augmented reality images. The paper first details the basic ideas behind SNNs and the benefits they offer, such as event-driven processing and low power usage, both of which are very important for real-time augmented and virtual reality systems. It then presents a novel occlusion recognition system that uses both deep learning and SNNs. Utilizing both virtual and real-world AR and VR datasets, experiments are conducted to test how well the method works. The results show a significant improvement in occlusion recognition accuracy compared to previous methods. The system's computational performance and resource requirements are also assessed, showing that it can be deployed on AR and VR devices with limited resources. This research demonstrates that spiking neural networks and deep learning methods can make it easier to find occlusions in AR/VR images. The method improves augmented and virtual reality experiences by removing this major obstacle, opening up new possibilities in many areas, such as education, training, simulations, and gaming.
An Identity Verification Governance Public Data Access Model for Analyzing Public Data Access Governance using Artificial Neural Network
Sarabu V.R., Udayaraju P., Gummadi V., Ravulu C., Joginipalli S.K., Gurram S.C.
Conference paper, Proceedings of the 4th International Conference on Ubiquitous Computing and Intelligent Information Systems, ICUIS 2024, 2024, DOI Link
The main objective of this paper is to design and implement AI algorithms for public data access governance. An Identity Verification Governance Data Access (IVGDA) model uses the ANN algorithm to secure public data access governance. The ANN algorithm is implemented to address and analyse the problems of public data access governance, including computational complexity, data security, and management. Most public sectors, organizations, and institutions face many problems balancing transparency with security, privacy, and scalability for diverse datasets. The ANN algorithm proposed here leverages different datasets of regular guidelines, data patterns, and interactions with multiple people connected in public data applications. Since public data is transferred globally, it is essential to transmit it with balanced privacy, transparency, and effective governance. This paper explains the efficiency of AI algorithms in creating a high-level security-based governance model for public data access. A simulation is created in Microsoft Active Directory, and the proposed IVGDA model is deployed to examine and evaluate the memberships and authorities of the users involved in the directory. The ANN analyses the account, profile, access rules and roles, data models, and other parameters, and the output is collected. Compared with the existing models, the IVGDA model establishes significant governance and outperforms the others. In the simulation, various processes such as user addition, deletion, group creation, group alteration, and profile changes are used to evaluate the performance of the IVGDA model. Additionally, this paper contributes to data governance by providing an advanced computing algorithm for understanding and optimizing public data access governance.
NLP Based TAG Algorithm for Enhancing Customer Data Platform and Personalized Marketing
Gummadi V., Udayaraju P., Kolasani D., Kotaru C., Sayana R., Neethika A.
Conference paper, Proceedings of 5th International Conference on IoT Based Control Networks and Intelligent Systems, ICICNIS 2024, 2024, DOI Link
As businesses strive to enhance decision-making and personalized marketing, the challenge of extracting valuable insights from vast datasets becomes paramount. Recent advancements in natural language processing (NLP), particularly Table-Augmented Generation (TAG), present a transformative approach by merging structured data with unstructured text generation. This integration enables businesses to derive actionable insights that improve marketing precision and operational strategies. This paper explores how TAG can effectively analyze extensive datasets, allowing organizations to create more targeted marketing initiatives and enhance customer engagement. By harnessing the power of TAG, businesses can generate personalized information that drives growth and fosters competitive advantages. The implications of TAG for strategic decision-making are examined, highlighting its role in optimizing resource utilization and facilitating data-driven storytelling. The findings of this research contribute to a deeper understanding of how advanced analytics can significantly enhance business effectiveness, laying the groundwork for future applications and research in the rapidly evolving landscape of personalized marketing and customer relationship management.
Enhancing Communication and Data Transmission Security in RAG Using Large Language Models
Gummadi V., Udayaraju P., Sarabu V.R., Ravulu C., Seelam D.R., Venkataramana S.
Conference paper, 4th International Conference on Sustainable Expert Systems, ICSES 2024 - Proceedings, 2024, DOI Link
Retrieval-augmented generation (RAG) enhances large language models (LLMs) by integrating external knowledge sources, enabling them to draw on more useful information and generate accurate responses. This paper explores RAG's architecture and applications, combining generator and retriever models to access and utilize vast external data repositories. While RAG holds significant promise for various Natural Language Processing (NLP) tasks like dialogue generation, summarization, and question answering, it also presents unique security challenges that must be addressed to ensure system integrity and reliability. RAG systems face several security threats, including data poisoning, model manipulation, privacy leakage, biased information retrieval, and harmful output generation. Security threats are a major concern in traditional RAG applications. To tighten security and enhance the model's efficiency in processing more complex data, this paper outlines key strategies for securing RAG-based applications and mitigating these risks. Ensuring data security through filtering, sanitization, and provenance tracking can prevent data poisoning and enhance the quality of external knowledge sources. Strengthening model security via adversarial training, input validation, and anomaly detection improves resilience against manipulative attacks. Implementing output monitoring and filtering techniques, such as factual verification, language moderation, and bias detection, ensures the accuracy and safety of generated responses. Additionally, robust infrastructure and access control measures, including secure data storage, secure APIs, and regulated model access, protect against unauthorized access and manipulation. Moreover, this study analyzes various use cases for LLMs enhanced by RAG, including personalized recommendations, customer support automation, content creation, and advanced search functionalities. The role of vector databases in optimizing RAG-driven generative AI is also discussed, highlighting their ability to efficiently manage and retrieve large-scale data for improved response generation. By adhering to these security measures and leveraging best practices from leading industry sources such as Databricks, AWS, and Milvus, developers can ensure the robustness and trustworthiness of RAG-based systems across diverse applications.
Developing An Intelligent Framework For Optimizing Apache Spark Tasks To Improve The Efficiency of Memory Configuration
Sarabu V.R., Udayaraju P., Ravulu C., Joginipalli S.K., Kukkadapu S., Narayana T.V.
Conference paper, Proceedings - 2024 OITS International Conference on Information Technology, OCIT 2024, 2024, DOI Link
Due to the massive data generated continuously in recent emerging applications, big data analytics must ease and speed up the process. Optimal configuration of memory and computational resources is critical for maximizing Apache Spark applications' performance and resource efficiency. This paper introduces an intelligent framework that dynamically generates tailored configuration parameters, including memory allocation, the number of executors, and cores per executor, based on the specific characteristics of a Spark job, such as data volume and execution plan complexity. The framework offers precise, data-driven recommendations that significantly enhance Spark job performance by leveraging historical job performance data, machine learning models, and heuristic-driven analysis. By automating this traditionally manual and time-consuming process, the framework improves resource utilization and job efficiency. It empowers users to achieve optimal configurations without requiring deep expertise in Spark tuning. This approach is particularly advantageous in large-scale data processing environments, where performance and efficiency are paramount.
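To give a flavor of the kind of recommendations involved, a purely heuristic baseline might map job characteristics to Spark settings as below; the thresholds and per-executor rules are illustrative guesses, not the framework's learned models.

```python
# Minimal sketch: heuristic Spark configuration from job characteristics.
def recommend_config(input_gb: float, shuffle_heavy: bool) -> dict:
    executors = max(2, int(input_gb // 8))           # assume ~8 GB of input per executor
    return {
        "spark.executor.instances": executors,
        "spark.executor.cores": 4,                   # a commonly cited sweet spot
        "spark.executor.memory": f"{12 if shuffle_heavy else 8}g",
        "spark.sql.shuffle.partitions": executors * 4 * 3,   # ~3 tasks per core
    }

print(recommend_config(64, shuffle_heavy=True))
```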
Performance Comparison of Different Digital and Analog Filters Used for Biomedical Signal and Image Processing
Duraivelu H., Dhamodharan U.S.R., Udayaraju P., Prakash S.J., Murugesan S.
Article, Journal of Information Technology Management, 2024, DOI Link
Getting highly accurate output in biomedical data processing for biomedical signals and images is challenging because biomedical data are generated by various electronic and electrical sources that can deliver the data with noise. Filtering is widely used for signal and image processing applications in medicine, multimedia, communications, biomedical electronics, and computer vision. The biggest problem in biomedical signal and image processing is developing a suitable filter for the system. Digital filters are more advanced in precision and stability than analog filters, and they are receiving more attention due to increasing advancements in digital technologies. Hence, most medical image and signal processing techniques use digital filters for preprocessing tasks. This paper briefly explains various filters used in medical image and signal processing. MATLAB is a well-known mathematical and analytical software platform with built-in tools to design filters and experiment with different inputs; this paper, however, implements filters such as the mean, median, weighted average, Gaussian, and bilateral filters in Python to verify their performance, so that a suitable filter can be selected for biomedical applications by comparing their results.
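All five filters named above are available in OpenCV, so a small Python harness suffices for the comparison; the kernel sizes, the weighted-average kernel, and the input file are assumptions for illustration.

```python
# Minimal sketch: applying the compared filters to a placeholder grayscale image.
import cv2
import numpy as np

img = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)      # placeholder biomedical image
weights = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=np.float32)
weights /= weights.sum()                                # weighted-average kernel
outputs = {
    "mean":      cv2.blur(img, (5, 5)),
    "median":    cv2.medianBlur(img, 5),                # strong against salt-and-pepper noise
    "weighted":  cv2.filter2D(img, -1, weights),
    "gaussian":  cv2.GaussianBlur(img, (5, 5), 0),
    "bilateral": cv2.bilateralFilter(img, 9, 75, 75),   # smooths while preserving edges
}
for name, out in outputs.items():
    cv2.imwrite(f"{name}.png", out)
```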
Hierarchical convolution neural network models for classifying the segmented OCT and OCTA images using U-Net model
Udayaraju P., Jeyanthi P., Sekhar B.V.D.S.
Article, Multimedia Tools and Applications, 2024, DOI Link
This work addresses the accurate prediction and identification of Alzheimer's disease and Choroidal Neovascularization (CNV) using Optical Coherence Tomography (OCT) and Optical Coherence Tomography Angiography (OCTA) images. Advanced methods are employed for early prediction and for classifying subtypes of Alzheimer's disease and CNV by leveraging a Hierarchical Convolutional Neural Network (HCNN) model on OCT and OCTA images for age-related macular degeneration. In the HCNN, the U-Net model is used for segmenting the OCT and OCTA images, the CNN-1 model is used for identifying CNV, the CNN-2 model is used for predicting the type of CNV as Type-1, Type-2, or Type-3, and the CNN-3 model is used for predicting Alzheimer's disease in OCT and OCTA images. Each model of the HCNN is implemented in Python using 65,000 AMED images, and the results are verified against existing methods; the HCNN model shows superior accuracy of 99.36% and reliability in classifying and detecting Alzheimer's disease and CNV. Type-1 CNVs are the most common, followed by Type-2 and Type-3. The suggested model achieved an accuracy rate of over 99% for CNV types. The approach significantly advances early disease detection and diagnostic accuracy, improving patient outcomes and enabling efficient treatment plans.
Advances in Alzheimer’s Detection: A Multi-Learning Fusion Approach Using Choroidal Neovascularization Analysis
Conference paper, Proceedings of 5th International Conference on IoT Based Control Networks and Intelligent Systems, ICICNIS 2024, 2024, DOI Link
Alzheimer's disease (AD) is a brain disorder that has a significant impact on the daily life of an affected person. Recently, several studies have revealed a strong connection between AD and several retinal and choroidal pathologies, such as Choroidal Neovascularization (CNV). This research is mainly focused on detecting AD from CNV via optical coherence tomography (OCT) images. OCT is used primarily to diagnose retinal diseases, as it provides high-resolution images of retinal layers and highlights the abnormalities present in the retinal images that indicate CNV. Linking CNV with retinal features of Alzheimer's requires unique techniques and analysis, as the brain and eyes are interconnected, sharing vascular and neural systems. In this work, the pre-trained model ResNet-101 with transfer learning is used to find the abnormal patterns belonging to AD in OCT images. The proposed Multi-Learning Fusion Model (MLFM) combines residual layers with Xtensible Convolutional Neural Networks (X-CNNs), which detect abnormalities from OCT images. In this context, the training model's residual layers transform the features of the vascular and neural systems. Furthermore, the MLFM exhibits high sensitivity in detecting subtle choroidal changes associated with AD progression. Finally, the quantitative results show that the proposed MLFM obtains an accuracy of 99.78% with accurate abnormal region detection.
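The transfer-learning backbone follows a standard torchvision pattern: load a pre-trained ResNet-101, freeze the residual layers, and replace the classification head. The two-class head below is an assumption about the label space; the fusion layers of the actual MLFM are not reproduced here.

```python
# Minimal sketch: ResNet-101 transfer learning for OCT images.
import torch.nn as nn
from torchvision import models

model = models.resnet101(weights=models.ResNet101_Weights.DEFAULT)  # pre-trained backbone
for p in model.parameters():
    p.requires_grad = False                      # freeze residual layers, reuse learned features
model.fc = nn.Linear(model.fc.in_features, 2)    # new head: AD-related pattern vs. normal
```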
Plant Disease Detection Using Deep Machine Learning Algorithm
Swetha D., Ratnagiri D., Sagar K.V., Raju G.N., Burra L.R., Udayaraju P., Devi A.G.
Article, Journal of Theoretical and Applied Information Technology, 2024
The world population is increasing rapidly. To cater to the daily needs of individuals, grain and vegetable production is imperative. This paper focuses on establishing technology support for farmers and minimizing diseases in plants. Tomato and bell pepper leaves are considered for disease detection. Contrast-limited adaptive histogram equalization (CLAHE) is applied to improve the contrast of the leaf image before processing with the machine learning algorithm. The contrast limiting is applied with a clip limit of 40. Bi-cubic interpolation is applied to minimize false edges between the leaf and its neighbouring tails. The qualitative parameters, including absolute mean brightness error (AMBE), mean square error (MSE), peak signal-to-noise ratio (PSNR), mean average error (MAE), and maximum deviation (MD), are analysed. MSE values of less than 1 indicate that the contrast adjustment is good. CNN classification is applied, and the disease detection accuracy with the CNN increases to 95.6 percent with increasing epochs. Accuracy-versus-epoch and loss-versus-epoch analyses are performed, and optimum tuning of the hyperparameters (β1) and (β2) is done in this study. The results achieved with this approach are well suited to plant disease detection for improving the crop yield rate.
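The preprocessing chain is easy to reproduce in OpenCV: CLAHE with the stated clip limit of 40, followed by bi-cubic resampling. The tile grid, target size, and file name below are assumptions.

```python
# Minimal sketch: CLAHE contrast enhancement + bi-cubic interpolation.
import cv2

img = cv2.imread("leaf.png", cv2.IMREAD_GRAYSCALE)         # placeholder leaf image
clahe = cv2.createCLAHE(clipLimit=40.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)                                # contrast-limited equalization
resized = cv2.resize(enhanced, (256, 256),
                     interpolation=cv2.INTER_CUBIC)        # bi-cubic resampling
cv2.imwrite("leaf_preprocessed.png", resized)
```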
Human activity-based anomaly detection and recognition by surveillance video using kernel local component analysis with classification by deep learning techniques
Praveena M.D.A., Udayaraju P., Chaitanya R.K., Jayaprakash S., Kalaiyarasi M., Ramesh S.
Article, Multimedia Tools and Applications, 2024, DOI Link
Existing abnormal-behavior detection methods have attempted to reduce execution time and computational complexity while improving efficiency, robustness against pixel occlusion, and generalizability. This research proposes a novel method for human activity-based anomaly detection and recognition from surveillance video utilizing DL methods. The input is collected as video and processed for noise removal and smoothing. Kernel local component analysis then extracts video features for human activity monitoring. The extracted features are classified using Bayesian network-based spatiotemporal neural networks, and the classified output shows the anomalous activities in the selected input surveillance video dataset. Simulation results are obtained for various crowd datasets in terms of mean average error, mean square error, training accuracy, validation accuracy, specificity, and F-measure. The proposed technique attained an MAE of 58%, an MSE of 63%, a specificity of 89%, an F-measure of 68%, and training and validation accuracies of 92% and 96%, respectively.
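As a readily available analogue of the kernel local component analysis step, scikit-learn's KernelPCA can project flattened frames into a compact feature space for the downstream classifier; the data shape and dimensions below are placeholders.

```python
# Minimal sketch: kernel-based feature extraction from video frames (KernelPCA stand-in).
import numpy as np
from sklearn.decomposition import KernelPCA

frames = np.random.rand(200, 1024)            # 200 flattened frames (placeholder data)
kpca = KernelPCA(n_components=32, kernel="rbf")
features = kpca.fit_transform(frames)         # compact features for the classifier stage
print(features.shape)                         # (200, 32)
```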
Enhanced stock market forecasting using dandelion optimization-driven 3D-CNN-GRU classification
Jagadesh B.N., RajaSekhar Reddy N.V., Udayaraju P., Damera V.K., Vatambeti R., Jagadeesh M.S., Koteswararao C.
Article, Scientific Reports, 2024, DOI Link
The global interest in market prediction has driven the adoption of advanced technologies beyond traditional statistical models. This paper explores the use of machine learning and deep learning techniques for stock market forecasting. We propose a comprehensive approach that includes efficient feature selection, data preprocessing, and classification methodologies. The wavelet transform method is employed for data cleaning and noise reduction. Feature selection is optimized using the Dandelion Optimization Algorithm (DOA), identifying the most relevant input features. A novel hybrid model, 3D-CNN-GRU, integrating a 3D convolutional neural network with a gated recurrent unit, is developed for stock market data analysis. Hyperparameter tuning is facilitated by the Blood Coagulation Algorithm (BCA), enhancing model performance. Our methodology achieves a remarkable prediction accuracy of 99.14%, demonstrating robustness and efficacy in stock market forecasting applications. While our model shows significant promise, it is limited by the scope of the dataset, which includes only the Nifty 50 index. Broader implications of this work suggest that incorporating additional datasets and exploring different market scenarios could further validate and enhance the model's applicability. Future research could focus on implementing this approach in varied financial contexts to ensure robustness and generalizability.
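The wavelet cleaning step can be sketched with PyWavelets: decompose the series, soft-threshold the detail coefficients, and reconstruct. The db4 wavelet, decomposition level, and threshold rule are illustrative choices, not the paper's.

```python
# Minimal sketch: wavelet-based denoising of a price series.
import numpy as np
import pywt

prices = np.cumsum(np.random.randn(512))       # placeholder closing-price series
coeffs = pywt.wavedec(prices, "db4", level=3)
threshold = 0.5 * np.std(coeffs[-1])           # crude noise estimate from finest details
denoised = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
clean = pywt.waverec(denoised, "db4")          # smoothed input for the 3D-CNN-GRU
```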
Enhanced botnet detection in IoT networks using zebra optimization and dual-channel GAN classification
Shareef S.K.K., Chaitanya R.K., Chennupalli S., Chokkakula D., Kiran K.V.D., Pamula U., Vatambeti R.
Article, Scientific Reports, 2024, DOI Link
The Internet of Things (IoT) permeates various sectors, including healthcare, smart cities, and agriculture, alongside critical infrastructure management. However, its susceptibility to malware due to limited processing power and security protocols poses significant challenges. Traditional antimalware solutions fall short in combating evolving threats. To address this, the research work developed a feature selection-based classification model. In the first stage, preprocessing enhances dataset quality through data smoothing and consistency improvement. Feature selection via the Zebra Optimization Algorithm (ZOA) reduces dimensionality, while a classification phase integrates the Graph Attention Network (GAN), specifically the Dual-channel GAN (DGAN). DGAN incorporates Node Attention Networks and Semantic Attention Networks to capture intricate IoT device interactions and detect anomalous behaviors like botnet activity. The model's accuracy is further boosted by leveraging both structural and semantic data with the Sooty Tern Optimization Algorithm (STOA) for hyperparameter tuning. The proposed STOA-DGAN model achieves an impressive 99.87% accuracy in botnet activity classification, showcasing robustness and reliability compared to existing approaches.
RETRACTED ARTICLE: A combined U-Net and multi-class support vector machine learning models for diabetic retinopathy macula edema segmentation and classification DME (Soft Computing, (2024))
Udayaraju P., Murthy K.S., Jeyanthi P., Raju B.V.S.R., Rajasri T., Ramadevi N.
Erratum, Soft Computing, 2024, DOI Link
The publisher has retracted this article in agreement with the Editor-in-Chief. The article was submitted to be part of a guest-edited issue. An investigation by the publisher found a number of articles, including this one, with a number of concerns, including but not limited to compromised editorial handling and peer review process, inappropriate or irrelevant references or not being in scope of the journal or guest-edited issue. Based on the investigation’s findings, the publisher no longer has confidence in the results and conclusions of this article. Author Pamula Udayaraju disagrees with this retraction. Authors K. Durga bhavani, P. Jeyanthi, Srihari Varma Mantena, T. Rajasri, Bh Raju, N. Ramadevi have not responded to correspondence regarding this retraction.
Artificial neural network-based secured communication strategy for vehicular ad hoc network
Sekhar B.V.D.S., Udayaraju P., Kumar N.U., Sinduri K.B., Ramakrishna B., Babu B.S.S.V.R., Srinivas M.S.S.S.
Article, Soft Computing, 2023, DOI Link
Vehicular ad hoc network (VANET) is an application-based network belonging to the class of mobile ad hoc networks. The nodes in the VANET are interconnected and communicate through wireless media and the Internet, which subsequently leads to data security issues. Several secured communication and routing protocols for VANETs have been proposed so far; however, security complaints are still rising. The motivation is to provide a secured communication strategy (SCS) in which each node of the VANET must be authenticated and authorized to participate in the communication network. The SCS comprises two phases, namely the node authentication and authorization phases. The details of every node are collected and saved in the authentication phase and validated in the authorization phase. Whenever a new node is admitted into the VANET, it must answer a set of credentials given by the admin. The logistic regression model is used to overcome the computational complexity. The simulation of the SCS is carried out in NS2, and the results are verified in terms of throughput, packet delivery ratio, packet loss, and delay. The performance is evaluated by comparing the results with earlier methods to prove its efficiency.
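The logistic-regression gate that keeps authorization cheap can be pictured as follows; the per-node features, labels, and 0.8 cutoff are invented for illustration.

```python
# Minimal sketch: probabilistic admission gate for VANET nodes.
from sklearn.linear_model import LogisticRegression

# Placeholder features per node: credentials answered, prior violations, registration age (days)
X = [[5, 0, 120], [2, 3, 4], [5, 1, 60], [1, 4, 2]]
y = [1, 0, 1, 0]                         # 1 = authorized to join the VANET
gate = LogisticRegression().fit(X, y)

def admit(node_features):
    # Cheap probabilistic check keeps authorization fast as node counts grow.
    return gate.predict_proba([node_features])[0, 1] > 0.8

print(admit([5, 0, 30]))
```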
An Integrated Learning Approach for Detecting Plant Diseases
Srinivasarao T., Priyanka B., Udayaraju P., Narayana T.V., Vinod Varma C., Srinivas L.V.
Conference paper, Smart Innovation, Systems and Technologies, 2023, DOI Link
The health of plants and the safety of food are closely linked. Plant diseases become more difficult for farmers to manage if they are not observed in the early stages; this endangers crop yields and may also reduce production. Researchers have identified several types of diseases that can cause huge losses to farmers. Machine learning (ML) algorithms are widely used to detect patterns of plant diseases, but detecting plant diseases accurately is complex. Deep learning (DL) is widely used to process complex and large datasets efficiently. In this paper, an integrated learning approach (ILA) is introduced to detect plant diseases on leaves. The ILA integrates noise filters for removing noise from leaf images and detects the depth of the infected region in the leaf image. Advanced training is used to train on strong features of plant diseases, which helps increase the accuracy of plant disease detection. Experiments are conducted on two publicly available datasets, the Kaggle plant diseases dataset and the PlantVillage dataset, which together consist of 87,848 leaf images of healthy and disease-infected plants. Performance is measured using sensitivity, specificity, accuracy, and detection rate.
Secured Communication Strategy for Vehicular Ad-Hoc Network
Udayaraju P., Prasanth Kumar G., Durga Bhavani K., Vinod Varma C., Narayana T.V., Jahnavi P.
Conference paper, Smart Innovation, Systems and Technologies, 2023, DOI Link
Vehicular ad-hoc network (VANET) is an application-based class of mobile ad-hoc network. Wireless communication between deployed nodes in a VANET is always possible, and the uniqueness of VANET is that nodes can communicate with each other while moving. This lacks data security, since any node can connect with any other node. Security issues continue to rise despite earlier studies providing protected communication and routing methods for VANETs. This paper provides a secured communication strategy (SCS) that requires each VANET node to be verified and authorized before being permitted to enter the communication network. The node authentication and authorization stages make up the two phases of the SCS. A set of credentials provided by the administrator must be answered before a new node may access VANET features. To overcome the computational complexity arising from the ever-increasing number of vehicular nodes and communication data, and to increase the speed of SCS processes, this paper uses the logistic regression model. Throughput, packet delivery ratio, delay, and packet loss are measured using the NS2 application to simulate the SCS. The performance is examined for effectiveness by comparing the outcomes with those of earlier approaches.
GW-CNNDC: Gradient weighted CNN model for diagnosing COVID-19 using radiography X-ray images
Udayaraju P., Narayana T.V., Vemparala S.H., Srinivasarao C., Raju B.S.R.K.
Article, Measurement: Sensors, 2023, DOI Link
COVID-19 is a dangerous virus that can cause death if it is not identified in the early stages. The virus was first identified in Wuhan, China, and it spreads very fast compared with other viruses. Many tests exist for detecting this virus, but side effects may arise during testing. Coronavirus tests are scarce; there are limited COVID-19 testing units and they cannot be produced quickly enough, causing alarm. Thus, we must rely on other diagnostic measures. There are three distinct types of COVID-19 testing systems: RT-PCR, CT, and CXR. RT-PCR has certain limitations and is the most time-consuming technique, while CT scans expose the patient to radiation, which may cause further disease. To overcome these limitations, the CXR technique emits comparatively less radiation, and the patient need not be close to the medical staff. COVID-19 detection from CXR images has been tested using a diversity of pre-trained deep learning algorithms, with the best methods fine-tuned to maximize detection accuracy. In this work, a model called GW-CNNDC is presented. The lung radiography images are segmented using the enhanced CNN model, deployed with the ResNet-50 architecture at an image size of 255*255 pixels. Afterward, the gradient-weighted model is applied, which localizes the specific affected area and shows whether the individual is impacted by COVID-19. This framework performs binary classification with high accuracy, precision, recall, and F1-score and low loss, and the model works efficiently on huge datasets in less time.
Developing a region-based energy-efficient IoT agriculture network using region-based clustering and shortest path routing for making sustainable agriculture environment
Priyanka B.H.D.D., Udayaraju P., Koppireddy C.S., Neethika A.
Article, Measurement: Sensors, 2023, DOI Link
Current technological developments have paved the way for various fields to thrive, explore, and enrich their applications with technologies such as Artificial Intelligence (AI), Internet technology, wireless technology, and the Internet of Things (IoT). This research focuses on providing energy-efficient software and IoT applications for sustainability. Sustainability integrates environmental, social, and economic resources so that a healthy community and diverse human needs can be sustained simultaneously; it can also be described as maintaining the environment, especially natural resources, while preserving social equality and economic stability for this generation and the next. To this end, the paper proposes a region-based clustering and cluster-head election model to improve the energy efficiency of IoT networks deployed in agriculture environments (REAN). The proposed methodology uses the Shortest Routing and Less Cost (SRLC) algorithm and the Region Clustering and Cluster Head Selection (RCHS) algorithm to provide energy-efficient software and IoT applications. Experimental results verify the performance of the IoT-based sustainable application in a real-time environment.
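As a hedged sketch of the cluster-head election idea: RCHS elects one head per region, and choosing the node with the most residual energy is a common criterion assumed here, since the abstract does not spell out the rule.

# Minimal sketch: per-region cluster-head election by residual energy.
# The (node_id, region_id, residual_energy_J) tuple layout is assumed.
from collections import defaultdict

nodes = [(1, "A", 4.2), (2, "A", 5.1),
         (3, "B", 3.3), (4, "B", 3.9), (5, "B", 2.8)]

def elect_cluster_heads(nodes):
    regions = defaultdict(list)
    for node in nodes:
        regions[node[1]].append(node)
    # The most energy-rich node in each region becomes its cluster head.
    return {region: max(members, key=lambda n: n[2])[0]
            for region, members in regions.items()}

print(elect_cluster_heads(nodes))  # {'A': 2, 'B': 4}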
Convolution neural network model for predicting various lesion-based diseases in diabetic macula edema in optical coherence tomography images
Saini D.J.B., Sivakami R., Venkatesh R., Raghava C.S., Sandeep Dwarkanath P., Anwer T.M.K., Smirani L.K., Ahammad S.H., Pamula U., Amzad Hossain M., Rashed A.N.Z.
Article, Biomedical Signal Processing and Control, 2023, DOI Link
Diabetic Macular Edema (DME), a rare eye disease found primarily in diabetic patients, is caused by fluid accumulating in the extra-cellular space of the macular area of the retina. It was formerly detected through fundus images, which offered low accuracy and made early detection difficult. Optical Coherence Tomography (OCT), an advanced imaging modality that provides a better view of the retinal structure, is widely adopted to overcome these issues; however, detection of DME from OCT images was carried out manually by medical professionals. Advances in machine learning have enabled easy processing of OCT images for DME detection, but classical machine learning algorithms achieved lower accuracy because they were limited to 2-dimensional datasets and their parameters, whereas the lesions are better detected using the 3-dimensional structure of the OCT images. Deep learning algorithms consist of multiple layers that increase efficiency and provide the features and parameters needed for earlier detection of DME. It is also important to identify DME lesion types such as hemorrhages, microaneurysms, and exudates. In this paper, a novel lesion-based CNN (LCNN) algorithm is proposed for efficient detection of the lesions that support better prediction. The proposed model is compared with other deep learning models, and the results show that the LCNN provides higher accuracy than standard deep learning models such as ResNet, VGG16, and Inception; the accuracy over models like AlexNet and Inception is increased to 96%.
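Because the abstract stresses that lesion cues live in the 3-dimensional OCT structure, a hedged sketch of a small 3-D CNN follows; the volume shape, layer sizes, and three-way lesion output are assumptions, not the paper's LCNN.

# Minimal sketch: a 3-D CNN over OCT volumes (depth, height, width, channel).
# All sizes are illustrative assumptions.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(32, 64, 64, 1)),
    layers.Conv3D(16, 3, activation="relu"),
    layers.MaxPooling3D(),
    layers.Conv3D(32, 3, activation="relu"),
    layers.GlobalAveragePooling3D(),
    layers.Dense(3, activation="softmax"),  # hemorrhage / microaneurysm / exudate
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])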
A hybrid multilayered classification model with VGG-19 net for retinal diseases using optical coherence tomography images
Udayaraju P., Jeyanthi P., Sekhar B.V.D.S.
Article, Soft Computing, 2023, DOI Link
Retinal diseases affect a wide range of people, often without any clear cause, and many are now identified by experts. Detecting retinal diseases in the early stages, with good accuracy, is very important. Deep learning (DL) techniques are commonly used for early prediction of retinal disorders: in DL, multiple layers accurately detect abnormalities in retinal images, and various datasets are available for this research. This paper uses a hybrid multilayered classification model (HMLC/CNN-VGG19) developed to categorize four kinds of retinal disorders (age-related macular degeneration, choroidal neovascularization, drusen, and diabetic retinopathy) as well as normal cases. The proposed HMLC is applied to OCT images gathered from different data sources, such as the UCI repository and Kaggle. The CNN and VGG-19 models used in the HMLC are implemented in Python over these datasets, and the experimental results are verified in terms of classification accuracy. Classification accuracy is high because the HMLC draws on advanced features from both the CNN and VGG-19 models. Performance is calculated using sensitivity, specificity, F1 score, and accuracy.
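A hedged sketch of the VGG-19 half of such a hybrid follows: a frozen ImageNet VGG-19 base with a small dense head for the five OCT categories. The head size and training settings are assumptions, not the paper's exact hybrid architecture.

# Minimal sketch: VGG-19 feature extractor plus a trainable classifier head.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19

base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # reuse ImageNet features; train only the head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(5, activation="softmax"),  # AMD, CNV, drusen, DR, normal
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])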
Sickle-shaped high gain and low profile based four port MIMO antenna for 5G and aeronautical mobile communication
Armghan A., Lavadiya S., Udayaraju P., Alsharari M., Aliqab K., Patel S.K.
Article, Scientific Reports, 2023, DOI Link
The article presents the construction of a four-port MIMO antenna in the form of a sickle. First a single-port element is designed and optimized, then a two-port structure is created, and finally a four-port design is completed, with the process repeated until the design is optimized. Three types of parametric analysis are considered: variations in length, widths of the sickle-shaped patches, and varying sizes of the DGS. The frequency range of 2–8 GHz is used for the structural investigation. A return loss of −18.77 dB was observed at 3.825 GHz for the single-element structure, while the optimized one-port structure provides a return loss of −19.79 dB at 3.825 GHz with a bandwidth of 0.71 GHz (3.515–4.225 GHz). The four-port design exhibits two bands, observed at 3 GHz and 5.43 GHz, with return losses of −19.79 dB and −20.53 dB and bandwidths of 1.375 GHz (2.14–3.515 GHz) and 0.25 GHz (5.335–5.585 GHz), respectively. Healthy isolation between the transmittance and reflectance responses is achieved, and the presented design was created with a low-profile material. The article compares the measured and simulated findings. The four-port design offers a total gain of 15.93 dB, a peak co-polar value of 5.46 dB, a minimum return loss of −20.53 dB, a peak field distribution of 46.43 A/m, and a maximum bandwidth of 1.375 GHz. All diversity parameters are within the allowable range: ECC near zero, negative TARC, near-zero MEG, DG of almost 10 dB, and zero CCL. The design is well suited for 5G and aeronautical mobile communication applications.
Early Diagnosis of Age-Related Macular Degeneration (ARMD) Using Deep Learning
Conference paper, Smart Innovation, Systems and Technologies, 2022, DOI Link
Retinal diseases are becoming more complicated for humans. Among them, age-related macular degeneration (ARMD) is an eye disease that may cause vision loss. ARMD has two forms, dry ARMD and wet ARMD. The dry form tends to worsen slowly, so most vision can be kept, whereas the wet form is a leading cause of permanent vision loss; if both eyes are affected, quality of life suffers. Early detection of ARMD can prevent vision loss in elderly persons. Deep learning (DL) is an artificial intelligence (AI) technique that works well on images of human body parts by generating patterns for decision-making. This paper discusses several preprocessing techniques, feature extraction techniques, and the early diagnosis of ARMD using deep learning algorithms. The performance of various algorithms is discussed on optical coherence tomography (OCT) images of dry and wet ARMD.
A review of different machine learning models to analyze collective behavior in social networks
Review, International Journal of Recent Technology and Engineering, 2019,
In social networks, collective behavior describes the behavior of individual users when they are exposed to different types of tasks in external environments such as social networks. Social networks such as Facebook, Twitter, and YouTube are used to predict the collective behavior of different users. In this paper, we present a basic study of the approaches used to predict user behavior across different social dimensions. We also describe how social networks can be used to model and predict sequential human behavior at the level of an individual's selections or preferences. The paper presents different behavior patterns in online social networks and describes other social-network tasks from a recommendation and advertising data-analysis perspective.
A survey on large scale bio-medical data implementation methods
Review, International Journal of Pharmaceutical Research, 2019,
In recent years, the volume of data in the world has risen significantly. Biomedical data are data recorded from a living being that are used to support the analysis and diagnosis of a specific disease. Like many other kinds of data, the volume of biomedical data has also grown over the last few years. Traditional processing systems are not adequate for this huge amount of data; big data methods deal with the storage and processing of large-scale, complex datasets for which conventional techniques prove unfit. In this paper, we examine several approaches to processing large amounts of biomedical data. We also examine several varieties of biomedical data and the challenges faced when handling them at large scale, and we review and discuss big data applications in four major biomedical sub-disciplines: bioinformatics, clinical informatics, public health informatics, and imaging informatics. We survey the recent progress and breakthroughs of big data applications in these healthcare areas and summarize the challenges and gaps that must be closed to improve and advance big data applications in healthcare.
Secure and access control data monitoring in vehicular ad hoc network
Article, International Journal of Innovative Technology and Exploring Engineering, 2019, DOI Link
Present-day innovations related to the Internet of Vehicles (IoV) have been extended to analyze traffic in management systems; they are used to characterize traffic and improve the efficiency of vehicle traffic. The platform can address the storage, analysis, and multi-terminal distribution of mass data and provide traffic information services to traffic management agencies and the public, making it a useful attempt to apply advanced information technology to the transportation industry, one that has drawn broad attention in the research community. To enable authentic and confidential communication among a group of fog nodes, in this paper we propose an efficient key exchange protocol based on ciphertext-policy attribute-based encryption (CP-ABE) to establish secure communication among the participants. To achieve confidentiality, authentication, integrity, and access control, we combine CP-ABE with digital signature techniques. We analyze the efficiency of our protocol in terms of security and performance, and we also implement it and compare it with a certificate-based scheme to illustrate its feasibility.
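To make the access-control idea behind CP-ABE concrete, here is a toy policy evaluator: a ciphertext carries an attribute policy, and only key holders whose attributes satisfy it may decrypt. This sketch checks policies only, performs no pairing-based cryptography, and every name in it is an assumption.

# Toy sketch of CP-ABE-style policy satisfaction (no real cryptography).
def satisfies(policy, attributes):
    # policy: an attribute string, or a nested ("and"/"or", [subpolicies]) pair.
    if isinstance(policy, str):
        return policy in attributes
    op, parts = policy
    results = [satisfies(p, attributes) for p in parts]
    return all(results) if op == "and" else any(results)

policy = ("and", ["fog_node", ("or", ["region:north", "region:east"])])
print(satisfies(policy, {"fog_node", "region:east"}))  # True -> may decrypt
print(satisfies(policy, {"vehicle", "region:east"}))   # False -> access denied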
A survey of methods for genome functional analysis in comparative genomics
Udayaraju P., Siva Varma P.B., Jeevana Sujitha M.
Article, International Journal of Engineering and Technology(UAE), 2018, DOI Link
In biomedical technologies, gene functional analysis is an emerging concept for understanding DNA sequences, gene-product analysis, and gene interaction in different real-time medical applications, including finding data sequences for gene functionalities. Many techniques have been used to advance the functionality of genome analysis. In this paper, we present an algorithmic, computation-oriented, and mathematical comparison of genome analysis methods. We develop techniques for dynamic and automatic computation of genome relations; these relations enable automatic identification of orthologs, separating them from redundant genes in the yeast genome. We present a method for automatic identification of protein-protein interactions based on patterns tied to specific presentations, and we develop a framework of functional proteins to support gene identification with accurate and reliable measures such as sensitivity and specificity. We also present methods for systematic de novo identification of motifs; these techniques do not depend on prior knowledge of gene function and thereby stand apart from the existing literature on computational motif finding. Based on the genome-wide conservation patterns of known elements, we designed three conservation criteria that we used to discover novel motifs. Our comparative results apply comparative genomics to improve the understanding of these elements. The proposed techniques are flexible enough to verify comprehensive gene data and provide reliable analysis of complicated genomes to human specifications.
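As a hedged illustration of the conservation-based motif discovery described here, the sketch below scores candidate k-mers by the fraction of genomes in which they occur and keeps those above a threshold; the toy sequences, k, and threshold are assumptions, and real pipelines work on aligned orthologous regions rather than raw strings.

# Minimal sketch: keep k-mers conserved across most genomes.
from itertools import product

def conservation(kmer, genomes):
    # Fraction of genomes in which the k-mer occurs at least once.
    return sum(kmer in g for g in genomes) / len(genomes)

def conserved_motifs(genomes, k=4, threshold=0.75):
    candidates = {"".join(p) for p in product("ACGT", repeat=k)}
    return {m for m in candidates if conservation(m, genomes) >= threshold}

genomes = ["ACGTACGTTGCA", "TTACGTGGACGT", "ACGTCCCCACGT", "GGGGACGTTTTT"]
print(sorted(conserved_motifs(genomes)))  # includes 'ACGT'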