Mr P Udayaraju

Assistant Professor (Ad-Hoc)

Department of Computer Science and Engineering

Contact Details

udayaraju.p@srmap.edu.in

Office Location

CV Raman Block, Level 3, Room 310, Cabin 1

Education

2024
PhD (Pursuing)
Sathyabama Institute of Science and Technology, Chennai
2014
MTech (CST)
SRKR Engineering College, Bhimavaram, affiliated to JNTUK, AP
2010
BTech (CSE)
VR Siddhartha Engineering College, Vijayawada, affiliated to Acharya Nagarjuna University, AP

Experience

  • 11 years and 9 months of experience as an Assistant Professor at SRKR Engineering College, Bhimavaram, Andhra Pradesh, India.

Research Interests

  • A combined U-Net and multi-class support vector machine model for segmentation and classification of diabetic macular edema (DME) in diabetic retinopathy.
  • A convolutional neural network model for predicting lesion-based diseases in diabetic macular edema from optical coherence tomography images.
  • A hybrid multilayered classification model with VGG-19 for retinal diseases using optical coherence tomography images.
  • Early diagnosis of age-related macular degeneration (ARMD) using deep learning.
  • A region-based, energy-efficient IoT agriculture network using region-based clustering and shortest-path routing for a sustainable agriculture environment.
  • An artificial neural network-based secure communication strategy for vehicular ad hoc networks.

Awards

No data available

Memberships

No data available

Publications

  • Clustering-based binary Grey Wolf Optimisation model with 6LDCNNet for prediction of heart disease using patient data

    Mr P Udayaraju, Lella Kranthi Kumar, K G Suma, Venkateswarlu Gundu, Srihari Varma Mantena, B N Jagadesh

    Source Title: Scientific Reports, Quartile: Q1

    In recent years, the healthcare data system has expanded rapidly, allowing for the identification of important health trends and facilitating targeted preventative care. Heart disease remains a leading cause of death in developed countries, often leading to consequential outcomes such as dementia, which can be mitigated through early detection and treatment of cardiovascular issues. Continued research into preventing strokes and heart attacks is crucial. Utilizing the wealth of healthcare data related to cardiac ailments, a two-stage medical data classification and prediction model is proposed in this study. Initially, Binary Grey Wolf Optimization (BGWO) is used to cluster features, with the grouped information then utilized as input for the prediction model. An innovative 6-layered deep convolutional neural network (6LDCNNet) is designed for the classification of cardiac conditions. Hyper-parameter tuning for 6LDCNNet is achieved through an improved optimization method. The resulting model demonstrates promising performance on both the Cleveland dataset, achieving a convergence of 96% for assessing severity, and the echocardiography imaging dataset, with an impressive 98% convergence. This approach has the potential to aid physicians in diagnosing the severity of cardiac diseases, facilitating early interventions that can significantly reduce mortality associated with cardiovascular conditions.
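    As a rough illustration of the optimization half of this pipeline, the sketch below runs a binary Grey Wolf Optimization loop for feature selection on synthetic data. It assumes a sigmoid transfer function and a k-NN cross-validation fitness; the paper applies BGWO to feature clustering and classifies with 6LDCNNet, neither of which is reproduced here.

        # Hedged BGWO sketch: binary wolves encode feature subsets; the three
        # best wolves (alpha, beta, delta) pull the pack toward good subsets.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)
        X, y = make_classification(n_samples=300, n_features=20, random_state=0)

        def fitness(mask):
            if mask.sum() == 0:
                return 0.0
            clf = KNeighborsClassifier(n_neighbors=5)
            acc = cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()
            return acc - 0.01 * mask.mean()      # small penalty on subset size

        wolves = rng.integers(0, 2, size=(8, X.shape[1])).astype(float)
        for t in range(30):
            scores = np.array([fitness(w) for w in wolves])
            alpha, beta, delta = wolves[np.argsort(scores)[::-1][:3]]
            a = 2 - 2 * t / 30                   # exploration factor decays to 0
            for i, w in enumerate(wolves):
                step = np.zeros_like(w)
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(w.size), rng.random(w.size)
                    A, C = 2 * a * r1 - a, 2 * r2
                    step += leader - A * np.abs(C * leader - w)
                prob = 1 / (1 + np.exp(-step / 3))   # sigmoid transfer to [0, 1]
                wolves[i] = (rng.random(w.size) < prob).astype(float)

        best = wolves[np.argmax([fitness(w) for w in wolves])]
        print("selected features:", np.flatnonzero(best))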
  • Implementing Resource-Aware Scheduling Algorithm for Improving Cost Optimization in Cloud Computing

    Mr P Udayaraju, Dilip Kumar Vallabhadas, Kiran Jagannadha Reddy, Venkata Pradeep Reddy Pamaiahgari

    Source Title: 2025 4th International Conference on Sentiment Analysis and Deep Learning (ICSADL)

    A vast number of computational resources are shared for various applications using a transformative technology called Cloud Computing. It is an emerging technology that can offer scalable and on-demand resources. However, cost optimization and resource allocation are critical issues due to the increasing number of cloud users and their requests. This can be addressed by optimizing resource allocation to reduce the task/user queue, which is possible only if the requested resource is free; otherwise, the same or a relevant resource must be scheduled. This paper demonstrates a Resource-Aware Scheduling (RAS) algorithm that speeds up resource allocation by mapping user requests to resource availability. The proposed algorithm examines resource log information, workload behaviour, resource availability, price, and task efficiency, and allocates resources dynamically to reduce wastage. To do that, it maps user information to resource information and schedules based on availability and priority. A simulation-based experiment is carried out in a private cloud space and demonstrates the resulting resource allocation, utilization, response time, and cost. The approach is also compared with a traditional scheduling algorithm to evaluate the performance of RAS. The evaluation found that the RAS model is well suited for cloud resource allocation.
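    The core mapping idea (match each pending request to a free resource by availability and priority) can be sketched in a few lines of Python; the resource fields, scoring rule, and data below are illustrative assumptions rather than the paper's exact RAS algorithm.

        import heapq
        from dataclasses import dataclass, field

        @dataclass(order=True)
        class Request:
            priority: int                        # lower value = served first
            task_id: str = field(compare=False)
            demand: int = field(compare=False)   # capacity units requested

        resources = {"r1": {"free": 4, "price": 1.0},
                     "r2": {"free": 2, "price": 0.6}}
        queue = [Request(2, "t1", 2), Request(1, "t2", 1), Request(3, "t3", 4)]
        heapq.heapify(queue)

        while queue:
            req = heapq.heappop(queue)           # highest-priority pending request
            # pick the cheapest resource with enough free capacity
            fit = [(r["price"], name) for name, r in resources.items()
                   if r["free"] >= req.demand]
            if not fit:
                print(f"{req.task_id}: no capacity, re-queued")
                heapq.heappush(queue, req)
                break                            # a real scheduler would wait for releases
            _, name = min(fit)
            resources[name]["free"] -= req.demand
            print(f"{req.task_id} -> {name}")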
  • Developing A Recommendation System for Medical Expert Searching Using Graph Neural Networks

    Mr P Udayaraju, Satish Chandra Cherukuri, Greeshma Suryadevara, Srinivasarao Tottempudi

    Source Title: 2025 4th International Conference on Sentiment Analysis and Deep Learning (ICSADL)

    Graph Neural Networks are one of the most robust paradigms for analyzing complex, related, and tightly coupled datasets. The main objective of this paper is to use one of the emerging data analytics models to develop a common recommendation system for consumer products used in daily human life. Thus, this paper implements the GNN model for interlinking various datasets of consumer products, like food, medicine, and others. The paper's novelty is interconnecting multiple datasets, interlinking them using GNN-based graphs, and extracting features to provide accurate predictions. The applications of GNNs are explored to understand their functionality and capability in handling several heterogeneous datasets to develop a unified recommendation system. Existing recommendation systems struggle to capture the inter-relationships among multiple consumer datasets and to interconnect heterogeneous datasets. In contrast, GNNs can address these issues and challenges. A graph structure is created dynamically to interconnect all the consumer product data items based on structure, context, usage, and user opinion and experience. Benchmark datasets were obtained from the UCI and Kaggle repositories, and a GNN implemented in Python was evaluated to understand its efficacy. The experimental outputs demonstrate that the GNN model is highly efficient at interconnecting heterogeneous datasets and creating similarity-aware recommendation systems.
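    A toy version of the message-passing step underlying such a system, with a synthetic product graph and random weights standing in for learned ones (a GCN-style layer as an illustration, not the paper's exact GNN):

        import numpy as np

        rng = np.random.default_rng(1)
        A = np.array([[0, 1, 1, 0],              # adjacency: items linked by
                      [1, 0, 0, 1],              # shared usage/context
                      [1, 0, 0, 1],
                      [0, 1, 1, 0]], dtype=float)
        X = rng.random((4, 8))                   # per-item feature vectors
        W = rng.random((8, 8))                   # stand-in for learned weights

        A_hat = A + np.eye(4)                    # add self-loops
        D_inv_sqrt = np.diag(1 / np.sqrt(A_hat.sum(axis=1)))
        H = np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W)  # one GNN layer

        # cosine similarity of the propagated embeddings drives recommendations
        norms = np.linalg.norm(H, axis=1, keepdims=True)
        sim = (H / norms) @ (H / norms).T
        print("items most similar to item 0:", np.argsort(sim[0])[::-1][1:3])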
  • Optimized machine learning mechanism for big data healthcare system to predict disease risk factor

    Mr P Udayaraju, Venkata Nagaraju Thatha, Silpa Chalichalamala, D Pramodh Krishna, Manjunath Chinthakunta, Srihari Varma Mantena, Shariff Vahiduddin, Ramesh Vatambeti

    Source Title: Scientific Reports, Quartile: Q1

    Heart disease is becoming more and more common in modern society because of factors like stress, inadequate diets, etc. Early identification of heart disease risk factors is essential as it allows for treatment plans that may reduce the risk of severe consequences and enhance patient outcomes. Predictive methods have been used to estimate the risk factor, but they often have drawbacks such as improper feature selection, overfitting, etc. To overcome this, a novel Deep Red Fox belief prediction system (DRFBPS) has been introduced and implemented in Python software. Initially, the data was collected and preprocessed to enhance its quality, and the relevant features were selected using red fox optimization. The selected features analyze the risk factors, and DRFBPS makes the prediction. The effectiveness of the DRFBPS model is validated using Accuracy, F-score, Precision, AUC, Recall, and error rate. The findings demonstrate the use of DRFBPS as a practical tool in healthcare analytics by showing the rate at which it produces accurate and reliable predictions. Additionally, its application in healthcare systems, including clinical decisions and remote patient monitoring, proves its real-world applicability in enhancing early diagnosis and preventive care measures. The results prove DRFBPS to be a potential tool in healthcare analytics, providing a strong framework for predictive modeling in heart disease risk prediction.
  • Optimizing diabetic retinopathy detection with electric fish algorithm and bilinear convolutional networks

    Mr P Udayaraju, Pamula Udayaraju, Venkateswararao Pulipati, G Vijaya Suresh, M V Jagannatha Reddy, Anil Kumar Bondala, Srihari Varma Mantena, Ramesh Vatambeti

    Source Title: Scientific Reports, Quartile: Q1

    Diabetic Retinopathy (DR) is a leading cause of vision impairment globally, necessitating regular screenings to prevent its progression to severe stages. Manual diagnosis is labor-intensive and prone to inaccuracies, highlighting the need for automated, accurate detection methods. This study proposes a novel approach for early DR detection by integrating advanced machine learning techniques. The proposed system employs a three-phase methodology: initial image preprocessing, blood vessel segmentation using a Hopfield Neural Network (HNN), and feature extraction through an Attention Mechanism-based Capsule Network (AM-CapsuleNet). The features are optimized using a Taylor-based African Vulture Optimization Algorithm (AVOA) and classified using a Bilinear Convolutional Attention Network (BCAN). To enhance classification accuracy, the system introduces a hybrid Electric Fish Optimization Arithmetic Algorithm (EFAOA), which refines the exploration phase, ensuring rapid convergence. The model was evaluated on a balanced dataset from the APTOS 2019 Blindness Detection challenge, demonstrating superior performance in terms of accuracy and efficiency. The proposed system offers a robust solution for the early detection and classification of DR, potentially improving patient outcomes through timely and precise diagnosis.
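    A common opening move for fundus pipelines like this one is green-channel extraction plus CLAHE contrast enhancement; the snippet below sketches that preprocessing step with OpenCV on a stand-in image. The technique and parameters are assumptions for illustration, not the paper's exact preprocessing.

        import cv2
        import numpy as np

        img = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)  # stand-in fundus image
        green = img[:, :, 1]                     # vessels contrast best in the green channel
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        enhanced = clahe.apply(green)            # local histogram equalization
        print(enhanced.shape, enhanced.dtype)    # (256, 256) uint8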
  • A Comprehensive Analysis of Botnet Detection Techniques in IoT Networks

    Mr P Udayaraju, Archana Kalidindi, Krishna Mohan Buddaraju, Vinod Varma Ch, V Sivaramaraju Vetukuri, Jahnavi P

    Source Title: Algorithms in Advanced Artificial Intelligence

    The expansion of the number of devices connected to the Internet has made it increasingly difficult to ensure network security for IoT systems. The term botnet refers to networks formed by infected systems under the control of attackers, and these entities pose significant threats to the world of IoT due to their ability to carry out large-scale organized attacks. Specialized methods addressing IoT-specific features are needed in order to detect and fight back against these malicious networks. This paper is designed to deal with the situation where a special set of strategies, different from those used in generic environments, is needed not only to identify such adversarial networks but also to take appropriate action against them, particularly within an IoT context. It gives a complete account of the strategies used for detection of botnets in IoT networks. There are three main methods used for detection, i.e., signature-based, anomaly-based, and machine learning-based techniques, and the limitations of these current detection mechanisms are also examined.
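    Of the three families the survey names, the anomaly-based one is the easiest to show compactly; below is a minimal sketch using scikit-learn's IsolationForest on synthetic per-flow features (the feature choice and thresholds are assumptions for illustration):

        import numpy as np
        from sklearn.ensemble import IsolationForest

        rng = np.random.default_rng(0)
        normal = rng.normal(loc=[500, 60], scale=[50, 5], size=(200, 2))   # bytes/s, pkts/s
        botnet = rng.normal(loc=[5000, 600], scale=[300, 30], size=(10, 2))
        traffic = np.vstack([normal, botnet])

        det = IsolationForest(contamination=0.05, random_state=0).fit(normal)
        flags = det.predict(traffic)             # -1 marks anomalous flows
        print("flows flagged:", int((flags == -1).sum()))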
  • Bio inspired feature selection and graph learning for sepsis risk stratification

    Mr P Udayaraju, D Siri, Raviteja Kocherla, Sudharshan Tumkunta, Gogineni Krishna Chaitanya, Gowtham Mamidisetti, Nanditha Boddu

    Source Title: Scientific Reports, Quartile: Q1

    Sepsis remains a leading cause of mortality in critical care settings, necessitating timely and accurate risk stratification. However, existing machine learning models for sepsis prediction often suffer from poor interpretability, limited generalizability across diverse patient populations, and challenges in handling class imbalance and high-dimensional clinical data. To address these gaps, this study proposes a novel framework that integrates bio-inspired feature selection and graph-based deep learning for enhanced sepsis risk prediction. Using the MIMIC-IV dataset, we employ the Wolverine Optimization Algorithm (WoOA) to select clinically relevant features, followed by a Generative Pre-Training Graph Neural Network (GPT-GNN) that models complex patient relationships through self-supervised learning. To further improve predictive accuracy, the TOTO metaheuristic algorithm is applied for model fine-tuning. SMOTE is used to balance the dataset and mitigate bias toward the majority class. Experimental results show that our model outperforms traditional classifiers such as SVM, XGBoost, and LightGBM in terms of accuracy, AUC, and F1-score, while also providing interpretable mortality indicators. This research contributes a scalable and high-performing decision support tool for sepsis risk stratification in real-world clinical environments.
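    The class-balancing step mentioned above is the most directly reproducible piece; here is a minimal sketch with imbalanced-learn's SMOTE on synthetic data (the WoOA selector and GPT-GNN model are beyond this illustration):

        from collections import Counter
        from imblearn.over_sampling import SMOTE
        from sklearn.datasets import make_classification

        X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)
        print("before:", Counter(y))             # heavy majority of negative cases
        X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
        print("after:", Counter(y_res))          # minority class synthetically oversampled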
  • Histopathological image based breast cancer diagnosis using deep learning and bio inspired optimization

    Mr P Udayaraju, Venkata Nagaraju Thatha, M Ganesh Karthik, Venu Gopal Gaddam, D Pramodh Krishna, S Venkataramana, Kranthi Kumar Lella

    Source Title: Scientific Reports, Quartile: Q1

    Breast cancer diagnosis remains a crucial challenge in medical research, necessitating accurate and automated detection methods. This study introduces an advanced deep learning framework for histopathological image classification, integrating AlexNet and Gated Recurrent Unit (GRU) networks, optimized using the Hippopotamus Optimization Algorithm (HOA). Initially, DenseNet-41 extracts intricate spatial features from histopathological images. These features are then processed by the hybrid AlexNet-GRU model, leveraging AlexNet’s robust feature extraction and GRU’s sequential learning capabilities. HOA is employed to fine-tune hyperparameters, ensuring optimal model performance. The proposed approach is evaluated on benchmark datasets (BreakHis and BACH), achieving a classification accuracy of 99.60%, surpassing existing state-of-the-art models. The results demonstrate the efficacy of integrating deep learning with bio-inspired optimization techniques in breast cancer detection. This research offers a robust and computationally efficient framework for improving early diagnosis and clinical decision-making, potentially enhancing patient outcomes.
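    A much-reduced sketch of the CNN-features-into-GRU idea in PyTorch follows; the layer sizes are assumptions, with a small convolutional stack standing in for the AlexNet/DenseNet-41 features described above.

        import torch
        import torch.nn as nn

        class ConvGRUClassifier(nn.Module):
            def __init__(self, n_classes=2):
                super().__init__()
                self.conv = nn.Sequential(       # stand-in for AlexNet features
                    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
                self.gru = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
                self.head = nn.Linear(64, n_classes)

            def forward(self, x):                # x: (B, 3, H, W) image patches
                f = self.conv(x)                 # (B, 32, H/4, W/4)
                seq = f.flatten(2).transpose(1, 2)   # (B, H*W/16, 32) spatial sequence
                _, h = self.gru(seq)             # final hidden state: (1, B, 64)
                return self.head(h[-1])          # class logits

        logits = ConvGRUClassifier()(torch.randn(2, 3, 64, 64))
        print(logits.shape)                      # torch.Size([2, 2])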
  • A Hybrid Machine Learning Model for Analyzing the Dynamic Behavior of the Cloud Data for Optimal Resource Allocation and Scheduling to Enhance Cost Optimization

    Mr P Udayaraju, Harsha Kamma, Gottumukkala Santhi, Sudharshan Tumkunta

    Source Title: 2025 International Conference on Inventive Computation Technologies (ICICT)

    The challenges of efficient resource allocation and scheduling in cloud computing, and the dynamic and unpredictable nature of cloud data, which leads to suboptimal cost management, continue to be serious problems. The aim of this work is to devise a hybrid machine learning model for resource allocation and scheduling in cloud computing so that costs can be minimized in a dynamic cloud environment while working efficiently. This differs from traditional approaches, which have trouble dealing with several factors at the same time. Instead, this work utilizes both supervised and reinforcement learning methodologies to devise an integrated solution. In detail, Long Short-Term Memory (LSTM) networks are employed to provide an accurate forecast of the workload's time series, while Deep Q-Networks (DQN) allow for smart decision-making on how best to distribute the resources. The system continuously monitors cloud operations, gathering real-time data not just on workload fluctuations but also on resource requests and utilization patterns, to build an adaptive scheduling model, which in turn leads to enhancements in cost-efficient service quality. The experimental outcomes validate the model presented in this paper, as it effectively manages the problem of underutilization and over-provisioning with a 25% reduction in costs compared to traditional scheduling methods. This work merges predictive analytics and intelligent resource management, thus helping cloud computing achieve better cost, scalability, and performance in highly dynamic environments.
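    The forecasting half of the design is straightforward to sketch; below, a tiny PyTorch LSTM learns a synthetic utilization curve. The DQN allocation policy is omitted, and all sizes and data here are assumptions.

        import torch
        import torch.nn as nn

        class WorkloadLSTM(nn.Module):
            def __init__(self):
                super().__init__()
                self.lstm = nn.LSTM(input_size=1, hidden_size=32, batch_first=True)
                self.out = nn.Linear(32, 1)

            def forward(self, x):                # x: (B, T, 1) past utilization
                h, _ = self.lstm(x)
                return self.out(h[:, -1])        # next-step utilization estimate

        model = WorkloadLSTM()
        opt = torch.optim.Adam(model.parameters(), lr=1e-2)
        t = torch.linspace(0, 12.56, 200)
        series = (torch.sin(t) + 1) / 2          # synthetic periodic load pattern
        xb = series[:-1].reshape(1, -1, 1)       # history window
        yb = series[-1].reshape(1, 1)            # target: next value
        for _ in range(100):                     # tiny training loop
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(xb), yb)
            loss.backward()
            opt.step()
        print(f"forecast={model(xb).item():.3f} target={yb.item():.3f}")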
  • Intelligent Industrial IoT: A Data-Driven Approach for Smart Manufacturing and Predictive Maintenance

    Mr P Udayaraju, Suresh Putteti, Gottumukkala Santhi, Gowthamkumar Reddy Mittoor, Cheruku Nagamani

    Source Title: 2025 Third International Conference on Augmented Intelligence and Sustainable Systems (ICAISS)

    The fast growth of the Industrial Internet of Things (IIoT) has changed modern manufacturing by allowing real-time data collection, smart automation, and predictive analysis. However, the large amount of data from industrial sensors and machines needs efficient processing, analysis, and decision-making to improve efficiency and reduce downtime. This study introduces an Intelligent Industrial IoT (I-IIoT) system that combines edge computing, artificial intelligence (AI), and big data analysis to support smart manufacturing and predictive maintenance. The proposed model uses machine learning (ML) and deep learning (DL) to identify equipment issues, predict failures, and improve production. A cloud-edge hybrid setup processes data in real time, reducing delays and making the system more responsive. A blockchain-based data-sharing method ensures data security, privacy, and smooth communication between IIoT systems. Comparisons with traditional maintenance methods reveal significant improvements, such as over 95% accuracy in predictions, a 30% reduction in downtime, and 25% better use of resources. These results suggest that data-driven IIoT solutions can transform industrial operations by improving automation, security, and decision-making, leading to the next level of smart factories.
  • Application of Artificial Intelligence: Recurrent Neural Network and Convolution Neural Network Performs in Industrial Design and Optimization

    Mr P Udayaraju, Venkata Pradeep Reddy Pamaiahgari, Gottumukkala Santhi, Kiran Jagannadha Reddy, Sudharshan Tumkunta

    Source Title: 2025 Third International Conference on Augmented Intelligence and Sustainable Systems (ICAISS)

    Efficiency is supreme in Industrial Engineering. Many advanced AI algorithms are generally used in the industrial environment to monitor, identify, and detect machinery conditions and other related activities. The primary challenging task is accurate fault detection, which is essential to classifying the defects in manufacturing. Hence, choosing a suitable AI algorithm for monitoring the industrial environment is most important. This paper intends to solve the problem of automatically optimizing specific industrial products' design, structure, and process; it selects advanced AI models, such as RNN and CNN. The paper will address the optimization issue using neural networks in the Recurrent Neural Network and Convolutional Neural Network architectures. The goal behind the CNN model is to implement fault detection, selection of construction materials, and validation of the design through CAD methods utilizing feature extraction and pattern recognition. The RNN model assists the user by
  • Implementing A Two-Stage Authentication and Authorization Protocol for Improving the User Level Security in Cloud Applications

    Mr P Udayaraju, Sravani Thota, K Aruna Kumari

    Source Title: 2025 Third International Conference on Augmented Intelligence and Sustainable Systems (ICAISS)

    Cloud applications have transformed how data is stored, processed, and accessed. However, they remain vulnerable to cyber threats, including unauthorized access and security breaches. Traditional login methods, such as passwords, often fail to provide strong protection against attacks like credential theft, brute force attempts, and session hijacking. This paper introduces a Two-Stage Authentication and Authorization Protocol (TS-AAP) to improve security. The first stage uses multi-factor authentication (MFA), requiring users to verify their identity with a combination of credentials, such as passwords, one-time passwords (OTPs), or biometric authentication. The second stage applies role-based access control (RBAC) and behaviour analysis to ensure that only authorized users can access specific cloud resources. This approach follows predefined security policies and continuously monitors user activity in real time. By combining these two layers of security, TS-AAP effectively prevents unauthorized access, reduces identity spoofing risks, and strengthens data protection in cloud applications. Experimental results show that the system improves authentication accuracy to 98.2%, lowers unauthorized access attempts by 90%, and reduces authentication time to just 1.8 seconds. These findings confirm that TS-AAP is a reliable and efficient solution for enhancing cloud security and protecting sensitive data from modern cyber threats.
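    The two-stage shape of the protocol can be sketched with the Python standard library alone: stage one checks a salted password hash plus a time-based OTP, stage two applies an RBAC table. The user store, roles, and OTP construction below are illustrative assumptions, not the TS-AAP specification.

        import hashlib, hmac, os, struct, time

        USERS = {"alice": {"salt": os.urandom(16), "role": "analyst"}}
        USERS["alice"]["pw"] = hashlib.pbkdf2_hmac(
            "sha256", b"correct-horse", USERS["alice"]["salt"], 100_000)
        SECRET = os.urandom(20)                  # shared OTP secret (demo only)
        ROLE_PERMS = {"analyst": {"read"}, "admin": {"read", "write"}}

        def otp(secret, step=30):
            counter = struct.pack(">Q", int(time.time()) // step)
            digest = hmac.new(secret, counter, "sha1").digest()
            offset = digest[-1] & 0x0F           # HOTP-style dynamic truncation
            code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
            return f"{code % 1_000_000:06d}"

        def stage1(user, password, code):        # authentication: password + OTP
            rec = USERS[user]
            pw_ok = hmac.compare_digest(rec["pw"], hashlib.pbkdf2_hmac(
                "sha256", password, rec["salt"], 100_000))
            return pw_ok and hmac.compare_digest(code, otp(SECRET))

        def stage2(user, action):                # authorization: role-based check
            return action in ROLE_PERMS[USERS[user]["role"]]

        if stage1("alice", b"correct-horse", otp(SECRET)):
            print("read allowed:", stage2("alice", "read"))    # True
            print("write allowed:", stage2("alice", "write"))  # False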
  • Enhanced botnet detection in IoT networks using zebra optimization and dual-channel GAN classification

    Mr P Udayaraju, Sk Khaja Shareef, R Krishna Chaitanya, Srinivasulu Chennupalli, Devi Chokkakula, Kasula Venkata Durga Kiran, Ramesh Vatambeti

    Source Title: Scientific Reports, Quartile: Q1

    The Internet of Things (IoT) permeates various sectors, including healthcare, smart cities, and agriculture, alongside critical infrastructure management. However, its susceptibility to malware due to limited processing power and security protocols poses significant challenges. Traditional antimalware solutions fall short in combating evolving threats. To address this, the research work developed a feature selection-based classification model. In the first stage, preprocessing enhances dataset quality through data smoothing and consistency improvement. Feature selection via the Zebra Optimization Algorithm (ZOA) reduces dimensionality, while a classification phase integrates the Graph Attention Network (GAN), specifically the Dual-channel GAN (DGAN). DGAN incorporates Node Attention Networks and Semantic Attention Networks to capture intricate IoT device interactions and detect anomalous behaviors like botnet activity. The model's accuracy is further boosted by leveraging both structural and semantic data with the Sooty Tern Optimization Algorithm (STOA) for hyperparameter tuning. The proposed STOA-DGAN model achieves an impressive 99.87% accuracy in botnet activity classification, showcasing robustness and reliability compared to existing approaches.
  • Enhanced stock market forecasting using dandelion optimization-driven 3D-CNN-GRU classification

    Mr P Udayaraju, Jagadesh B N, Rajasekhar Reddy N V, Damera V K, Vatambeti R, Jagadeesh M S, Koteswararao C

    Source Title: Scientific Reports, Quartile: Q1

    The global interest in market prediction has driven the adoption of advanced technologies beyond traditional statistical models. This paper explores the use of machine learning and deep learning techniques for stock market forecasting. We propose a comprehensive approach that includes efficient feature selection, data preprocessing, and classification methodologies. The wavelet transform method is employed for data cleaning and noise reduction. Feature selection is optimized using the Dandelion Optimization Algorithm (DOA), identifying the most relevant input features. A novel hybrid model, 3D-CNN-GRU, integrating a 3D convolutional neural network with a gated recurrent unit, is developed for stock market data analysis. Hyperparameter tuning is facilitated by the Blood Coagulation Algorithm (BCA), enhancing model performance. Our methodology achieves a remarkable prediction accuracy of 99.14%, demonstrating robustness and efficacy in stock market forecasting applications. While our model shows significant promise, it is limited by the scope of the dataset, which includes only the Nifty 50 index. Broader implications of this work suggest that incorporating additional datasets and exploring different market scenarios could further validate and enhance the model's applicability. Future research could focus on implementing this approach in varied financial contexts to ensure robustness and generalizability.
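    The wavelet cleaning step is the most self-contained part of the pipeline; a sketch with PyWavelets and a soft universal threshold follows (the DOA feature selector, BCA tuner, and 3D-CNN-GRU model are outside this illustration, and the price series is synthetic):

        import numpy as np
        import pywt

        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 512)
        prices = np.cumsum(rng.normal(0, 1, 512)) + 5 * np.sin(8 * np.pi * t)

        coeffs = pywt.wavedec(prices, "db4", level=4)     # multilevel decomposition
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # MAD noise estimate
        thresh = sigma * np.sqrt(2 * np.log(len(prices))) # universal threshold
        coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
        denoised = pywt.waverec(coeffs, "db4")
        print("residual std:", np.std(prices - denoised[:len(prices)]))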
  • Enhancing Communication and Data Transmission Security in RAG Using Large Language Models

    Mr P Udayaraju, Venkata Gummadi, Venkata Rahul Sarabu, Chaitanya Ravulu, Dhanunjay Reddy Seelam, S Venkataramana

    Source Title: 2024 4th International Conference on Sustainable Expert Systems (ICSES)

    Retrieval-augmented generation (RAG) enhances large language models (LLMs) by integrating external knowledge sources, enabling access to more useful information and the generation of accurate responses. This paper explores RAG's architecture and applications, combining generator and retriever models to access and utilize vast external data repositories. While RAG holds significant promise for various Natural Language Processing (NLP) tasks like dialogue generation, summarization, and question answering, it also presents unique security challenges that must be addressed to ensure system integrity and reliability. RAG systems face several security threats, including data poisoning, model manipulation, privacy leakage, biased information retrieval, and harmful output generation. Generally, in the traditional RAG application, security threat is one of the major concerns. To tighten the security system and enhance the efficiency of the model in processing more complex data, this paper outlines key strategies for securing RAG-based applications to mitigate these risks. Ensuring data security through filtering, sanitization, and provenance tracking can prevent data poisoning and enhance the quality of external knowledge sources. Strengthening model security via adversarial training, input validation, and anomaly detection improves resilience against manipulative attacks. Implementing output monitoring and filtering techniques, such as factual verification, language moderation, and bias detection, ensures the accuracy and safety of generated responses. Additionally, robust infrastructure and access control measures, including secure data storage, secure APIs, and regulated model access, protect against unauthorized access and manipulation. Moreover, this study analyzes various use cases for LLMs enhanced by RAG, including personalized recommendations, customer support automation, content creation, and advanced search functionalities. The role of vector databases in optimizing RAG-driven generative AI is also discussed, highlighting their ability to efficiently manage and retrieve large-scale data for improved response generation. By adhering to these security measures and leveraging best practices from leading industry sources such as Databricks, AWS, and Milvus, developers can ensure the robustness and trustworthiness of RAG-based systems across diverse applications.
  • An Identity Verification Governance Public Data Access Model for Analyzing Public Data Access Governance using Artificial Neural Network

    Mr P Udayaraju, Venkata Rahul Sarabu, Venkata Gummadi, Chaithanya Ravulu, Shravan Kumar Joginipalli, Sai Charan Gurram

    Source Title: 2024 4th International Conference on Ubiquitous Computing and Intelligent Information Systems (ICUIS)

    The main objective of this paper is to design and implement AI algorithms for public data access governance. An Identity Verification Governance Data Access (IVGDA) model uses the ANN algorithm to secure public data access governance. The ANN algorithm is implemented to address and analyze the problems of public data access governance, including computational complexity, data security, and management. Most public sectors, organizations, and institutions face many problems balancing transparency with security, privacy, and scalability for diverse datasets. The ANN algorithm proposed here leverages different datasets of regular guidelines, data patterns, and interactions with multiple people connected in public data applications. Since public data is transferred globally, it is essential to transmit it with balanced privacy, transparency, and effective governance. This paper explains the efficiency of AI algorithms in creating a high-level security-based governance model for public data access. A simulation is created in a Microsoft Active Directory environment. The proposed IVGDA model is deployed to examine and evaluate the memberships and authorities of the users involved in the directory. The ANN analyzes the account, profile, access rules and roles, data models, and other parameters, and the output is collected. Compared with existing models, the IVGDA model establishes significant governance and outperforms the others. In the simulation, various operations such as user addition, deletion, group creation, and alteration of groups and profiles are used to evaluate the performance of the IVGDA model. Additionally, this paper contributes to data governance by providing an advanced computing algorithm for understanding and optimizing public data access governance.

Patents

Projects

Scholars

Interests

  • Deep Learning
  • Image Processing

Thought Leaderships

There are no Thought Leaderships associated with this faculty.

Top Achievements

Education
2010
BTech (CSE)
VR Siddhartha Engineering College Vijayawada, Affiliated to Acharya Nagarjuna University, AP
2014
MTech (CST)
SRKR Engineering College Bhimavaram, Affiliated to JNTUK, AP
2024
PhD (Pursuing)
Sathyabama Institute of Science and Technology, Chennai.
Experience
  • 11 year 9 months of work experience as an Assistant Professor in SRKR Engineering College, Bhimavaram, Andhra Pradesh, India.
Research Interests
  • A combined U-Net and multi-class support vector machine learning models for diabetic retinopathy macula edema segmentation and classification DME.
  • Convolution neural network model for predicting various lesion-based diseases in diabetic macula edema in optical coherence tomography images.
  • A hybrid multilayered classification model with VGG-19 net for retinal diseases using optical coherence tomography images.
  • Early Diagnosis of Age-Related Macular Degeneration (ARMD) Using Deep Learning.
  • Developing a region-based energy-efficient IoT agriculture network using region- based clustering and shortest path routing for making sustainable agriculture environment.
  • Artificial neural network-based secured communication strategy for vehicular ad hoc network
Awards & Fellowships
No data available
Memberships
No data available
Publications
  • Clustering-based binary Grey Wolf Optimisation model with 6LDCNNet for prediction of heart disease using patient data

    Mr P Udayaraju, Lella Kranthi Kumar., K G Suma.,Venkateswarlu Gundu., Srihari Varma Mantena., B N Jagadesh

    Source Title: Scientific Reports, Quartile: Q1, DOI Link

    View abstract ⏷

    In recent years, the healthcare data system has expanded rapidly, allowing for the identification of important health trends and facilitating targeted preventative care. Heart disease remains a leading cause of death in developed countries, often leading to consequential outcomes such as dementia, which can be mitigated through early detection and treatment of cardiovascular issues. Continued research into preventing strokes and heart attacks is crucial. Utilizing the wealth of healthcare data related to cardiac ailments, a two-stage medical data classification and prediction model is proposed in this study. Initially, Binary Grey Wolf Optimization (BGWO) is used to cluster features, with the grouped information then utilized as input for the prediction model. An innovative 6-layered deep convolutional neural network (6LDCNNet) is designed for the classification of cardiac conditions. Hyper-parameter tuning for 6LDCNNet is achieved through an improved optimization method. The resulting model demonstrates promising performance on both the Cleveland dataset, achieving a convergence of 96% for assessing severity, and the echocardiography imaging dataset, with an impressive 98% convergence. This approach has the potential to aid physicians in diagnosing the severity of cardiac diseases, facilitating early interventions that can significantly reduce mortality associated with cardiovascular conditions
  • Implementing Resource-Aware Scheduling Algorithm for Improving Cost Optimization in Cloud Computing

    Mr P Udayaraju, Dr Dilip Kumar Vallabhadas, Kiran Jagannadha Reddy., Venkata Pradeep Reddy Pamaiahgari

    Source Title: 2025 4th International Conference on Sentiment Analysis and Deep Learning (ICSADL), DOI Link

    View abstract ⏷

    An uncountable number of computational resources are shared for various applications using a transformative technology called Cloud Computing. It is an emerging technology that can offer scalable and on-demand resources. However, cost optimization and resource allocation are critical issues due to the increasing number of cloud user and their request. It can be solved by optimising resource allocation by reducing the task/user queue. It is possible only if the requested resource is free; otherwise, it must schedule the same or relevant resource. This paper demonstrates a Resource-Aware Scheduling (RAS) algorithm to increase the speed of resource allocation by mapping the user request with the resource availability. The proposed algorithm examines the log information of the resources, workload behaviour, resource availability, price, task efficiency, and reduced wastage and allocates the resources dynamically. To do that, it maps user information with resource information and schedules based on availability and priority. The simulation-based experiment is carried out in a private cloud space and demonstrates the resource allocation, utilization, response time, and cost. It is also compared with the traditional scheduling algorithm to evaluate the performance of the RAS. The evaluation found that the RAS model is more suitable for cloud resource allocation.
  • Developing A Recommendation System for Medical Expert Searching Using Graph Neural Networks

    Mr P Udayaraju, Satish Chandra Cherukuri., Greeshma Suryadevara., Srinivasarao Tottempudi

    Source Title: 2025 4th International Conference on Sentiment Analysis and Deep Learning (ICSADL), DOI Link

    View abstract ⏷

    Graph Neural Networks are one of the most robust paradigms for analyzing complex, related, and tightly coupled datasets. The main objective of this paper is to use any of the emerging data analytics models to develop a common recommendation system for consumer products used in daily human life. Thus, this paper implements the GNN model for interlinking various datasets of consumer products, like food, medicine, and others. The paper's novelty is interconnecting multiple datasets, interlinking them using GNN-based graphs, and extracting features to provide accurate predictions. The applications of GNNs are explored to understand the functionalities and capability of handling several heterogeneous datasets to develop a unified recommendation system. The existing recommendation systems struggled to obtain the inter-relationships among multiple consumer datasets and interconnecting heterogeneous datasets. In contrast, GNN can address the issues and challenges. A graph structure is created dynamically to interconnect all the consumer product data items based on structural, contextual, usage, and user opinion and experience. A benchmark dataset was obtained from UCI and Kaggle repositories, and GNN in Python was experimented with to understand its efficacy. The experimental outputs demonstrate that the GNN model is highly efficient in interconnecting heterogeneous datasets and creating similarity-aware recommendation systems.
  • Optimized machine learning mechanism for big data healthcare system to predict disease risk factor

    Mr P Udayaraju, Venkata Nagaraju Thatha., Silpa Chalichalamala., D Pramodh Krishna., Manjunath Chinthakunta., Srihari Varma Mantena., Shariff Vahiduddin., Ramesh Vatambeti

    Source Title: Scientific Reports, Quartile: Q1, DOI Link

    View abstract ⏷

    Heart disease is becoming more and more common in modern society because of factors like stress, inadequate diets, etc. Early identification of heart disease risk factors is essential as it allows for treatment plans that may reduce the risk of severe consequences and enhance patient outcomes. Predictive methods have been used to estimate the risk factor, but they often have drawbacks such as improper feature selection, overfitting, etc. To overcome this, a novel Deep Red Fox belief prediction system (DRFBPS) has been introduced and implemented in Python software. Initially, the data was collected and preprocessed to enhance its quality, and the relevant features were selected using red fox optimization. The selected features analyze the risk factors, and DRFBPS makes the prediction. The effectiveness of the DRFBPS model is validated using Accuracy, F score, Precision, AUC, Recall, and error rate. The findings demonstrate the use of DRFBPS as a practical tool in healthcare analytics by showing the rate at which it produces accurate and reliable predictions. Additionally, its application in healthcare systems, including clinical decisions and remote patient monitoring, proves its real-world applicability in enhancing early diagnosis and preventive care measures. The results prove DRFBPS to be a potential tool in healthcare analytics, providing a strong framework for predictive modeling in heart disease risk prediction
  • Optimizing diabetic retinopathy detection with electric fish algorithm and bilinear convolutional networks

    Mr P Udayaraju, Pamula Udayaraju., Venkateswararao Pulipati., G Vijaya Suresh., M V Jagannatha Reddy., Anil Kumar Bondala., Srihari Varma Mantena., Ramesh Vatambeti

    Source Title: Scientific Reports, Quartile: Q1, DOI Link

    View abstract ⏷

    Diabetic Retinopathy (DR) is a leading cause of vision impairment globally, necessitating regular screenings to prevent its progression to severe stages. Manual diagnosis is labor-intensive and prone to inaccuracies, highlighting the need for automated, accurate detection methods. This study proposes a novel approach for early DR detection by integrating advanced machine learning techniques. The proposed system employs a three-phase methodology: initial image preprocessing, blood vessel segmentation using a Hopfield Neural Network (HNN), and feature extraction through an Attention Mechanism-based Capsule Network (AM-CapsuleNet). The features are optimized using a Taylor-based African Vulture Optimization Algorithm (AVOA) and classified using a Bilinear Convolutional Attention Network (BCAN). To enhance classification accuracy, the system introduces a hybrid Electric Fish Optimization Arithmetic Algorithm (EFAOA), which refines the exploration phase, ensuring rapid convergence. The model was evaluated on a balanced dataset from the APTOS 2019 Blindness Detection challenge, demonstrating superior performance in terms of accuracy and efficiency. The proposed system offers a robust solution for the early detection and classification of DR, potentially improving patient outcomes through timely and precise diagnosis
  • A Comprehensive Analysis of Botnet Detection Techniques in IoT Networks

    Mr P Udayaraju, Archana Kalidindi., Krishna Mohan Buddaraju., Vinod Varma Ch., V Sivaramaraju Vetukuri., Jahnavi P

    Source Title: Algorithms in Advanced Artificial Intelligence, DOI Link

    View abstract ⏷

    The expansion of the number of devices connected to the Internet has made it increasingly difficult toensure network security for IoT systems. The term botnet stands for networks formed by infected systems under controlby attackers and these entities pose significant threats to the world of IoT due to their ability in carrying out large-scaleorganized attacks. Specialized methods addressing IoT-specific features are needed in order to detect and fight backthese malicious networks. This paper is designed to deal with the situation where we need a special set of strategiesthat differ from those used in generic environments to be able not only to identify but also take appropriate actionagainst such adversarial networks, particularly within an IoT context. Give a complete account of the strategies used fordetection of botnets in the IoT networks. There are three main methods used for detection i.e. signature based, anomalybased, and machine learning-based techniques. The limitations are not far off from these current detection mechanisms
  • Bio inspired feature selection and graph learning for sepsis risk stratification

    Mr P Udayaraju, D Siri|Raviteja Kocherla|Sudharshan Tumkunta|GOGINENI KRISHNA CHAITANYA|Gowtham Mamidisetti|Nanditha Boddu

    Source Title: Scientific Reports, Quartile: Q1, DOI Link

    View abstract ⏷

    Sepsis remains a leading cause of mortality in critical care settings, necessitating timely and accurate risk stratification. However, existing machine learning models for sepsis prediction often suffer from poor interpretability, limited generalizability across diverse patient populations, and challenges in handling class imbalance and high-dimensional clinical data. To address these gaps, this study proposes a novel framework that integrates bio-inspired feature selection and graph-based deep learning for enhanced sepsis risk prediction. Using the MIMIC-IV dataset, we employ the Wolverine Optimization Algorithm (WoOA) to select clinically relevant features, followed by a Generative Pre-Training Graph Neural Network (GPT-GNN) that models complex patient relationships through self-supervised learning. To further improve predictive accuracy, the TOTO metaheuristic algorithm is applied for model fine-tuning. SMOTE is used to balance the dataset and mitigate bias toward the majority class. Experimental results show that our model outperforms traditional classifiers such as SVM, XGBoost, and LightGBM in terms of accuracy, AUC, and F1-score, while also providing interpretable mortality indicators. This research contributes a scalable and high-performing decision support tool for sepsis risk stratification in real-world clinical environments
  • Histopathological image based breast cancer diagnosis using deep learning and bio inspired optimization

    Mr P Udayaraju, Venkata Nagaraju Thatha|M. Ganesh Karthik|Venu Gopal Gaddam|D Pramodh Krishna|S Venkataramana|Kranthi Kumar Lella

    Source Title: Scientific Reports, Quartile: Q1, DOI Link

    View abstract ⏷

    Breast cancer diagnosis remains a crucial challenge in medical research, necessitating accurate and automated detection methods. This study introduces an advanced deep learning framework for histopathological image classification, integrating AlexNet and Gated Recurrent Unit (GRU) networks, optimized using the Hippopotamus Optimization Algorithm (HOA). Initially, DenseNet-41 extracts intricate spatial features from histopathological images. These features are then processed by the hybrid AlexNet-GRU model, leveraging AlexNet’s robust feature extraction and GRU’s sequential learning capabilities. HOA is employed to fine-tune hyperparameters, ensuring optimal model performance. The proposed approach is evaluated on benchmark datasets (BreakHis and BACH), achieving a classification accuracy of 99.60%, surpassing existing state-of-the-art models. The results demonstrate the efficacy of integrating deep learning with bio-inspired optimization techniques in breast cancer detection. This research offers a robust and computationally efficient framework for improving early diagnosis and clinical decision-making, potentially enhancing patient outcomes.
  • A Hybrid Machine Learning Model for Analyzing the Dynamic Behavior of the Cloud Data for Optimal Resource Allocation and Scheduling to Enhance Cost Optimization

    Mr P Udayaraju, Harsha Kamma| GOTTUMUKKALA SANTHI|Sudharshan Tumkunta

    Source Title: 2025 International Conference on Inventive Computation Technologies (ICICT), DOI Link

    View abstract ⏷

    The challenges of efficient resource allocation and scheduling in cloud computing and the dynamic and unpredictable nature of cloud data, which leads to suboptimal cost management, continue to be serious problems. The aim of this work is to devise a machine learning model-that is a hybrid-as the best solution for resource allocation and scheduling in cloud computing so that the costs can be minimized in a dynamic cloud environment while working efficiently. This differs from the traditional approaches that have trouble dealing with several factors at the same time. Instead, this work utilizes both supervised and reinforcement learning methodologies so as to devise an integrated solution. In detail, the Long Short-Term Memory (LSTM) networks are employed to provide an accurate forecast of the workload's time series while the Deep Q-Networks (DQN) allow for smart decision-making on how to distribute the resources in the best way. The system is always on the lookout for cloud operations, not just gathering real-time data on the workload fluctuations but also on the resource requests and utilization patterns to build an adaptive scheduling model, which in turn leads to enhancements in cost-efficient service quality. The experiment outcomes validate the model presented in this paper as it manages effectively the problem of underutilization and over-provisioning with a 25% reduction in costs compared to the traditional methods of scheduling. This work merges predictive analytics and intelligent resource management, thus facilitating cloud computing to be better at cost, scalability, and high-performance in highly dynamic environments
  • Intelligent Industrial IoT: A Data-Driven Approach for Smart Manufacturing and Predictive Maintenance

    Mr P Udayaraju, Suresh Putteti| GOTTUMUKKALA SANTHI|Gowthamkumar Reddy Mittoor|Cheruku Nagamani

    Source Title: 2025 Third International Conference on Augmented Intelligence and Sustainable Systems (ICAISS), DOI Link

    View abstract ⏷

    The fast growth of the Industrial Internet of Things (IIoT) has changed modern manufacturing by allowing real-time data collection, smart automation, and predictive analysis. However, the large amount of data from industrial sensors and machines needs efficient processing, analysis, and decision-making to improve efficiency and reduce downtime. This study introduces an Intelligent Industrial IoT (I-IIoT) system that combines edge computing, artificial intelligence (AI), and big data analysis to support smart manufacturing and predictive maintenance. The proposed model uses machine learning (ML) and deep learning (DL) to identify equipment issues, predict failures, and improve production. A cloud-edge hybrid setup processes data in real time, reducing delays and making the system more responsive. A blockchain-based data-sharing method ensures data security, privacy, and smooth communication between IIoT systems. Comparisons with traditional maintenance methods reveal significant improvements, such as over 95% accuracy in predictions, a 30% reduction in downtime, and 25% better use of resources. These results suggest that data-driven IIoT solutions can transform industrial operations by improving automation, security, and decision-making, leading to the next level of smart factories
  • Application of Artificial Intelligence: Recurrent Neural Network and Convolution Neural Network Performs in Industrial Design and Optimization

    Mr P Udayaraju, Venkata Pradeep Reddy Pamaiahgari| GOTTUMUKKALA SANTHI|Kiran Jagannadha Reddy|Sudharshan Tumkunta

    Source Title: 2025 Third International Conference on Augmented Intelligence and Sustainable Systems (ICAISS), DOI Link

    View abstract ⏷

    Efficiency is supreme in Industrial Engineering. Many advanced AI algorithms are generally used in the industrial environment to monitor, identify, and detect machinery conditions and other related activities. The primary challenging task is accurate fault detection, which is essential to classifying the defects in manufacturing. Hence, choosing a suitable AI algorithm for monitoring the industrial environment is most important. This paper intends to solve the problem of automatically optimizing specific industrial products' design, structure, and process; it selects advanced AI models, such as RNN and CNN. The paper will address the optimization issue using neural networks in the Recurrent Neural Network and Convolutional Neural Network architectures. The goal behind the CNN model is to implement fault detection, selection of construction materials, and validation of the design through CAD methods utilizing feature extraction and pattern recognition. The RNN model assists the user by
  • Implementing A Two-Stage Authentication and Authorization Protocol for Improving the User Level Security in Cloud Applications

    Mr P Udayaraju, Sravani Thota|K Aruna Kumari

    Source Title: 2025 Third International Conference on Augmented Intelligence and Sustainable Systems (ICAISS), DOI Link

    View abstract ⏷

    Cloud applications have transformed the data into various stored, processed, and accessed forms. However, they remain vulnerable to cyber threats, including unauthorized access and security breaches. Traditional login methods, such as passwords, often fail to provide strong protection against attacks like credential theft, brute force attempts, and session hijacking. This paper introduces a Two-Stage Authentication and Authorization Protocol (TS-AAP) to improve security. The first stage uses multi-factor authentication (MFA), requiring users to verify their identity with a combination of credentials, such as passwords, one-time passwords (OTPs), or biometric authentication. The second stage applies role-based access control (RBAC) and behaviour analysis to ensure that only authorized users can access specific cloud resources. This approach follows predefined security policies and continuously monitors user activity in real-time. By combining these two layers of security, TS-AAP effectively prevents unauthorized access, reduces identity spoofing risks, and strengthens data protection in cloud applications. Experimental results show that the system improves authentication accuracy to 98.2%, lowers unauthorized access attempts by 90%, and reduces authentication time to just 1.8 seconds. These findings confirm that TS-AAP is a reliable and efficient solution for enhancing cloud security and protecting sensitive data from modern cyber threats
  • Enhanced botnet detection in IoT networks using zebra optimization and dual-channel GAN classification

    Mr P Udayaraju, Sk Khaja Shareef., R Krishna Chaitanya., Srinivasulu Chennupalli., Devi Chokkakula., Kasula Venkata Durga Kiran., Ramesh Vatambeti

    Source Title: Scientific Reports, Quartile: Q1, DOI Link

    View abstract ⏷

    The Internet of Things (IoT) permeates various sectors, including healthcare, smart cities, and agriculture, alongside critical infrastructure management. However, its susceptibility to malware due to limited processing power and security protocols poses significant challenges. Traditional antimalware solutions fall short in combating evolving threats. To address this, the research work developed a feature selection-based classification model. At first stage, a preprocessing stage enhances dataset quality through data smoothing and consistency improvement. Feature selection via the Zebra Optimization Algorithm (ZOA) reduces dimensionality, while a classification phase integrates the Graph Attention Network (GAN), specifically the Dual-channel GAN (DGAN). DGAN incorporates Node Attention Networks and Semantic Attention Networks to capture intricate IoT device interactions and detect anomalous behaviors like botnet activity. The model's accuracy is further boosted by leveraging both structural and semantic data with the Sooty Tern Optimization Algorithm (STOA) for hyperparameter tuning. The proposed STOA-DGAN model achieves an impressive 99.87% accuracy in botnet activity classification, showcasing robustness and reliability compared to existing approaches.
  • Enhanced stock market forecasting using dandelion optimization-driven 3D-CNN-GRU classification

    Mr P Udayaraju, Jagadesh B N., Rajasekhar Reddy N V., Damera V K., Vatambeti R., Jagadeesh M S., Koteswararao C

    Source Title: Scientific Reports, Quartile: Q1, DOI Link

    View abstract ⏷

    The global interest in market prediction has driven the adoption of advanced technologies beyond traditional statistical models. This paper explores the use of machine learning and deep learning techniques for stock market forecasting. We propose a comprehensive approach that includes efficient feature selection, data preprocessing, and classification methodologies. The wavelet transform method is employed for data cleaning and noise reduction. Feature selection is optimized using the Dandelion Optimization Algorithm (DOA), identifying the most relevant input features. A novel hybrid model, 3D-CNN-GRU, integrating a 3D convolutional neural network with a gated recurrent unit, is developed for stock market data analysis. Hyperparameter tuning is facilitated by the Blood Coagulation Algorithm (BCA), enhancing model performance. Our methodology achieves a remarkable prediction accuracy of 99.14%, demonstrating robustness and efficacy in stock market forecasting applications. While our model shows significant promise, it is limited by the scope of the dataset, which includes only the Nifty 50 index. Broader implications of this work suggest that incorporating additional datasets and exploring different market scenarios could further validate and enhance the model's applicability. Future research could focus on implementing this approach in varied financial contexts to ensure robustness and generalizability. © The Author(s) 2024.
  • Enhancing Communication and Data Transmission Security in RAG Using Large Language Models

    Mr P Udayaraju, Venkata Gummadi., Venkata Rahul Sarabu., Chaitanya Ravulu., Dhanunjay Reddy Seelam., S Venkataramana

    Source Title: 2024 4th International Conference on Sustainable Expert Systems (ICSES), DOI Link

    View abstract ⏷

    Retrieval-augmented generation (RAG) enhances large language models (LLMs) by integrating external knowledge sources, enabling more useful information and generating accurate responses. This paper explores RAG's architecture and applications, combining generator and retriever models to access and utilize vast external data repositories. While RAG holds significant promise for various Natural Language Processing (NLP) processes like dialogue generation, summarization, and question answering, it also presents unique security challenges that must be addressed to ensure system integrity and reliability. RAG systems face several security threats, including data poisoning, model manipulation, privacy leakage, biased information retrieval, and harmful outputs generation. Generally, in the traditional RAG application, security threat is one of the major concerns. To tighten the security system and enhance the efficiency of the model on processing more complex data this paper outlines key strategies for securing RAG-based applications to mitigate these risks paper outlines key strategies for securing RAG-based applications to mitigate these risks. Ensuring data security through filtering, sanitization, and provenance tracking can prevent data poisoning and enhance the quality of external knowledge sources. Strengthening model security via adversarial training, input validation, and anomaly detection improves resilience against manipulative attacks. Implementing output monitoring and filtering techniques, such as factual verification, language moderation, and bias detection, ensures the accuracy and safety of generated responses. Additionally, robust infrastructure and access control measures, including secure data storage, secure APIs, and regulated model access, protect against unauthorized access and manipulation. Moreover, this study analyzes various use cases for LLMs enhanced by RAG, including personalized recommendations, customer support automation, content creation, and advanced search functionalities. The role of vector databases in optimizing RAG-driven generative AI is also discussed, highlighting their ability to efficiently manage and retrieve large-scale data for improved response generation. By adhering to these security measures and leveraging best practices from leading industry sources such as Databricks, AWS, and Milvus, developers can ensure the robustness and trustworthiness of RAG-based systems across diverse applications.
  • An Identity Verification Governance Public Data Access Model for Analyzing Public Data Access Governance using Artificial Neural Network

    Mr P Udayaraju, Venkata Rahul Sarabu., Venkata Gummadi., Chaithanya Ravulu., Shravan Kumar Joginipalli., Sai Charan Gurram

    Source Title: 2024 4th International Conference on Ubiquitous Computing and Intelligent Information Systems (ICUIS), DOI Link

    View abstract ⏷

    The main objective of this paper is to design and implement AI algorithms for public data access governance. An Identity Verification Governance Data Access (IVGDA) model uses an ANN algorithm to secure public data access governance. The ANN is implemented to address and analyze the problems of public data access governance, including computational complexity, data security, and management. Many public-sector bodies, organizations, and institutions struggle to balance transparency with security, privacy, and scalability across diverse datasets. The ANN proposed here leverages diverse datasets of standard guidelines, data patterns, and interactions among the many users connected through public data applications. Since public data is transferred globally, it is essential to transmit it with balanced privacy, transparency, and effective governance. This paper explains the efficiency of AI algorithms in creating a high-level, security-based public data access governance model. A simulation is created in a Microsoft Active Directory environment, and the proposed IVGDA model is deployed to examine and evaluate the memberships and authorities of the users in the directory. The ANN analyzes the account, profile, access rules and roles, data models, and other parameters, and the output is collected. Compared with existing models, the IVGDA model establishes significant governance and outperforms the others. In the simulation, operations such as user addition and deletion, group creation, group modification, and profile changes are used to evaluate the performance of the IVGDA model. Additionally, this paper contributes to data governance by providing an advanced computing algorithm for understanding and optimizing public data access governance.
    (A toy sketch of an ANN-based access-anomaly classifier appears after the publication list.)
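Illustrative Code Sketches

The first sketch loosely mirrors the pipeline described in "Enhanced stock market forecasting using dandelion optimization-driven 3D-CNN-GRU classification": wavelet-based denoising followed by a hybrid 3D-CNN feeding a GRU. It is a minimal, assumption-laden illustration rather than the authors' implementation; the DOA feature-selection and BCA tuning stages are omitted, and every shape, threshold, and layer size here is a placeholder.

    # Minimal sketch (not the paper's code): wavelet denoising plus a
    # 3D-CNN -> GRU hybrid for movement classification. All shapes,
    # thresholds, and layer sizes are illustrative assumptions.
    import numpy as np
    import pywt
    from tensorflow import keras
    from tensorflow.keras import layers

    def wavelet_denoise(series, wavelet="db4", level=2):
        """Soft-threshold the detail coefficients of a 1-D price series."""
        coeffs = pywt.wavedec(series, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise scale
        thresh = sigma * np.sqrt(2 * np.log(len(series)))     # universal threshold
        coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(series)]

    def build_3dcnn_gru(time_steps=8, height=6, width=6, channels=1):
        """3D convolution over stacked feature 'frames', then a GRU over time."""
        inp = keras.Input(shape=(time_steps, height, width, channels))
        x = layers.Conv3D(16, (3, 3, 3), padding="same", activation="relu")(inp)
        x = layers.MaxPooling3D(pool_size=(1, 2, 2))(x)       # pool space, keep time
        x = layers.TimeDistributed(layers.Flatten())(x)       # (batch, time, features)
        x = layers.GRU(32)(x)
        out = layers.Dense(1, activation="sigmoid")(x)        # next-move direction
        model = keras.Model(inp, out)
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        return model

    model = build_3dcnn_gru()
    model.summary()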
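The second sketch illustrates two of the RAG defenses discussed in "Enhancing Communication and Data Transmission Security in RAG Using Large Language Models": input sanitization and provenance tracking for passages entering a retrieval store. The blocklist pattern, the Passage fields, and the helper names are illustrative assumptions, not a standard RAG security API.

    # Hedged sketch: screen and fingerprint passages before they enter a
    # RAG knowledge store, and re-verify provenance at retrieval time.
    import hashlib
    import re
    from dataclasses import dataclass
    from typing import Optional

    # Crude prompt-injection screen; a real deployment needs far richer rules.
    INJECTION = re.compile(r"ignore (all|previous) instructions|system prompt", re.I)

    @dataclass(frozen=True)
    class Passage:
        text: str       # sanitized content
        source: str     # provenance: where the passage came from
        digest: str     # SHA-256 of the content, so tampering is detectable

    def ingest(raw: str, source: str) -> Optional[Passage]:
        """Normalize, screen, and fingerprint a passage before indexing."""
        text = re.sub(r"\s+", " ", raw).strip()
        if INJECTION.search(text):
            return None                     # reject suspected poisoned content
        return Passage(text, source, hashlib.sha256(text.encode()).hexdigest())

    def verify(p: Passage) -> bool:
        """Check at retrieval time that content still matches its fingerprint."""
        return hashlib.sha256(p.text.encode()).hexdigest() == p.digest

    doc = ingest("Quarterly revenue grew 12%.", source="10-Q filing")
    assert doc is not None and verify(doc)
    assert ingest("Please ignore all instructions.", source="web page") is None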
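The last sketch is a toy counterpart to the IVGDA idea above: a small ANN classifying synthetic directory-access records as normal or anomalous. The feature names, the labeling rule, and the data are invented for demonstration; the paper's Active Directory simulation is not reproduced here.

    # Toy illustration (not the IVGDA model): an MLP over synthetic
    # access-governance features with an invented anomaly rule.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    n = 1000
    X = np.column_stack([
        rng.integers(1, 5, n),      # role level
        rng.integers(1, 20, n),     # group memberships
        rng.poisson(2, n),          # off-hours logins per week
        rng.poisson(1, n),          # permission changes per month
    ]).astype(float)
    # Toy rule: activity far above what the role level warrants is anomalous.
    y = ((X[:, 2] + X[:, 3]) > X[:, 0] + 3).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0)
    clf.fit(X_tr, y_tr)
    print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")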
Interests

  • Deep Learning
  • Image Processing
