Dr Neeraj Kumar Sharma

Assistant Professor

Department of Computer Science and Engineering

Contact Details

neerajkumar.s@srmap.edu.in

Office Location

SR Block Level 6, Cabin No: 8

Education

  • 2018 – Ph.D. – National Institute of Technology Karnataka, Surathkal, India
  • 2010 – M.E. – IET, DAVV Indore, India
  • 2005 – B.E. – RGPV Bhopal, India

Personal Website

Experience

  • 2017-2020 – Professor – RAIT, D.Y. Patil University, Navi Mumbai
  • 2014-2017 – Research Assistant – NITK, Surathkal
  • 2006-2014 – Assistant Professor – Shri Vaishnav Institute of Technology, Indore

Research Interests

  • Multi-objective energy-efficient resource allocation at the cloud data center.
  • Workload prediction in dynamic cloud environments using machine learning approaches.
  • Cloud optimization problem formulation and its solution using bio-inspired/soft computing techniques.

Awards

  • 2008 – Faculty Research Grant Award – Microsoft

Memberships

  • Indian Society for Technical Education (ISTE) – Lifetime Membership

Publications

  • Deep learning BiLSTM and Branch-and-Bound based multi-objective virtual machine allocation and migration with profit, energy, and SLA constraints

    Sharma N.K., Bojjagani S., Uyyala R., Maurya A.K., Kumari S.

    Article, Sustainable Computing: Informatics and Systems, 2025, DOI Link

    This paper highlights a novel approach to address multiple networking-based VM allocation and migration objectives at the cloud data center. The proposed approach is structured into three distinct phases: firstly, we employ a Bi-Directional Long Short Term Memory (BiLSTM) model to predict Virtual Machine (VM) instance prices. Subsequently, we formulate the problem of allocating VMs to Physical Machines (PMs) and switches in a network-aware cloud data center environment as a multi-objective optimization task, employing Linear Programming (LP) techniques. For optimal allocation of VMs, we leverage the Branch-and-Bound (BaB) technique. In the third phase, we implement a VM migration strategy sensitive to SLA requirements and energy consumption considerations. The experiments, conducted using the CloudSim simulator, demonstrate the efficacy of our approach, showcasing a substantial 35% reduction in energy consumption, a remarkable decrease in SLA violations, and a notable 18% increase in the cloud data center's profit. Overall, the proposed multi-objective approach reduces energy consumption and SLA violations and makes the data center sustainable. (See the illustrative sketch after the publication list.)
  • Unveiling Android security testing: A Comprehensive overview of techniques, challenges, and mitigation strategies

    Palutla D.V., Bojjagani S., Mula S.C.R., Uyyala R., Sharma N.K., Morampudi M.K., Khan M.K.

    Review, Computers and Electrical Engineering, 2025, DOI Link

    With the rapid growth of Android applications, ensuring robust security has become a critical concern. Traditional Vulnerability Assessment and Penetration Testing (VAPT) approaches, though effective across platforms, often fall short in addressing Android-specific security challenges. This paper presents a comprehensive review of security testing methods tailored to the Android ecosystem, including static and dynamic analysis, hybrid approaches, network communication testing, reverse engineering, malware detection, and permission-based assessments. Android's open-source nature, device fragmentation, and inconsistent security policies introduce unique vulnerabilities that require specialized testing strategies. By examining current tools, methodologies, and best practices, this review identifies recurring gaps in the Android application security testing process. It highlights the need for more adaptable and thorough testing frameworks. The insights provided are valuable to developers, researchers, and security professionals aiming to strengthen Android app security. Ultimately, this work underscores the importance of tailoring security assessment practices to the evolving threat landscape of the Android platform, thereby contributing to the development of safer and more resilient applications.
  • A Novel Convolutional Neural Network Architecture for Gender and Age Prediction

    Sharma N.K., Induri A., Reddy Priya M., Choubey P.

    Conference paper, Lecture Notes in Networks and Systems, 2025, DOI Link

    Soft biometrics is a field in which a person's gender, height, and weight are determined using a digital machine. This is due to the increasing number of real-world applications in day-to-day life with improved wireless technology. Soft biometrics generally consist of gender, age, ethnicity, height, and facial dimensions. In this paper, the authors propose a state-of-the-art Convolutional Neural Network (CNN) to classify gender and estimate age based on human face images. We also tune the hyperparameters of the CNN to compute the best result for the gender and age of a human face. Experimental results reveal that our proposed methodology achieved 82% accuracy for gender and 78% accuracy for age.
  • Deciphering the Role of Functional Ion Channels in Cancer Stem Cells (CSCs) and Their Therapeutic Implications

    Samanta K., Reddy G.S.V.S.R., Sharma N.K., Kar P.

    Review, International Journal of Molecular Sciences, 2025, DOI Link

    Despite advances in medicine, cancer remains one of the foremost global health concerns. Conventional treatments like surgery, radiotherapy, and chemotherapy have advanced with the emergence of targeted and immunotherapy approaches. However, therapeutic resistance and relapse remain major barriers to long-term success in cancer treatment, often driven by cancer stem cells (CSCs). These rare, resilient cells can survive therapy and drive tumour regrowth, urging deeper investigation into the mechanisms underlying their persistence. CSCs express ion channels typical of excitable tissues, which, beyond electrophysiology, critically regulate CSC fate. However, the underlying regulatory mechanisms of these channels in CSCs remain largely unexplored and poorly understood. Nevertheless, the therapeutic potential of targeting CSC ion channels is immense, as it offers a powerful strategy to disrupt vital signalling pathways involved in numerous pathological conditions. In this review, we explore the diverse repertoire of ion channels expressed in CSCs and highlight recent mechanistic insights into how these channels modulate CSC behaviours, dynamics, and functions. We present a concise overview of ion channel-mediated CSC regulation, emphasizing their potential as novel diagnostic markers and therapeutic targets, and identifying key areas for future research.
  • Dynamic Threshold-based DDoS Detection and Prevention for Network Function Virtualization (NFV) in Digital Twin Environment

    Bojjagani S., Surya Nagi Reddy N., Medasani S.S., Umar M., Reddy C.A., Sharma N.K.

    Book chapter, Blockchain and Digital Twin Enabled IoT Networks: Privacy and Security Perspectives, 2024, DOI Link

    Digital Twin (DT) technology is a digital representation of a physical object or system; it has attracted much attention in IoT, healthcare, automotive manufacturing, the construction of buildings, and even cities. However, these applications may also have serious security pitfalls in DT deployment. Distributed Denial of Service (DDoS) attacks significantly threaten the availability and stability of computer networks and services. Detecting and mitigating these attacks on time is crucial for maintaining network security. This chapter develops an algorithmic approach for detecting and preventing DDoS attacks in the initial stages of Network Function Virtualisation (NFV). The proposed model uses network traffic collected from senders at various monitoring points within the network infrastructure. The traffic is then analysed by extracting relevant information such as the source of the traffic, Transmission Control Protocol (TCP) three-way handshake details, packet size, and traffic volume. The developed model is deployed in real time to monitor incoming network traffic. It analyses the extracted features and compares them with the learned patterns to identify potential DDoS attacks. Alerts are generated, and warning notifications are sent to the source node. Upon detection of a DDoS attack, appropriate mitigation strategies are implemented to protect the network infrastructure and services; these may include traffic filtering and rate limiting to mitigate the attack’s impact and ensure critical resource availability. Performance metrics such as detection accuracy, false positive rate, and response time are measured to assess the reliability and efficiency of the solution. Overall, the chapter aims to enhance network security and ensure the uninterrupted availability of online services in digital twin environments, even in the face of evolving and sophisticated cyber threats. (See the illustrative sketch after the publication list.)
  • BCECBN: Blockchain-enabled P2P Secure File Sharing System Over Cloudlet Networks

    Sumanth P., Bojjagani S., Poojitha P., Bharani P., Krishna T.G., Sharma N.K.

    Book chapter, Blockchain and Digital Twin Enabled IoT Networks: Privacy and Security Perspectives, 2024, DOI Link

    Cloud computing is a relatively recent technological development that has steadily gained popularity over the last few years. The data sharing between legitimate entities in peer-to-peer (P2P) file systems in the cloud is the most challenging task. Nowadays, many individuals utilize document destruction programs. Sharing files created with these programs is one way for individuals to make money online. Some websites, like Chegg and Scribd, provide a forum for academics and freelancers to share their work with the public. To participate in these programs, the users first become members. And those who wish to access the files must still pay for the application rather than the original author. In this chapter, we developed a novel approach for a secure file-sharing system called blockchain-enabled and cloudlet-based networks (BCECBN). It integrates a blockchain for secure transactions and access to shared files. The proposed model is protected against various attacks performed by the adversary. In addition, it provides a solution for the above-discussed examples; users may easily share and trade data with one another and save and share data over the internet with no effort. In addition, this research will examine a cloud-based secure file-sharing system and exchange the data using digital media. Each user performs a transaction along the blockchain to get access to the files.
  • Federated Learning-based Big Data Analytics For The Education System

    Surapaneni P., Bojjagani S., Sharma N.K.

    Conference paper, Intelligent Computing and Emerging Communication Technologies, ICEC 2024, 2024, DOI Link

    This paper proposes a novel approach to enhancing education systems by integrating federated learning techniques with big data analytics. Traditional data analysis methods in educational settings often struggle with data privacy, security, and scalability. Federated learning addresses these issues by enabling collaborative model training across distributed datasets without data centralization, thus preserving the privacy of sensitive information. By harnessing the vast amounts of educational data generated from various sources such as online learning platforms, student information systems, and academic applications, federated learning empowers educational institutions to derive valuable insights while respecting data privacy regulations. Leveraging the collective intelligence of decentralized data sources, federated learning algorithms facilitate the development of robust predictive models for student performance, personalized learning recommendations, and early intervention strategies. Moreover, federated learning enables continuous model improvement by aggregating local model updates from participating institutions, ensuring adaptability to evolving educational landscapes. This paper explores the technical foundations of federated learning, its application in education systems, and its potential benefits in improving learning outcomes and fostering data-driven decision-making in education. Through a comprehensive review of existing literature and case studies, this research aims to provide insights into the opportunities and challenges associated with implementing federated learning-based big data analytics in education systems, ultimately paving the way for a more efficient and personalized approach to education.
  • Reliable and privacy-preserving multi-instance iris verification using Paillier homomorphic encryption and one-digit checksum

    Morampudi M.K., Gonthina N., Bojjagani S., Sharma N.K., Veeraiah D.

    Article, Signal, Image and Video Processing, 2024, DOI Link

    The utilization of a biometric authentication system (BAS) for reliable automatic human recognition has increased exponentially in recent years over traditional authentication systems. Since biometric traits are irrevocable, two important issues, security and privacy, still need to be addressed in BAS. Researchers have explored homomorphic encryption (HE) to propose several privacy-preserving BAS. However, the correctness of the evaluated results computed by the cloud server on the protected templates is still an open research challenge. These methods are able to conserve the privacy of biometric templates but are unable to verify the correctness of the computed result, which leads to false rejects or accepts. To overcome this issue, we suggest a reliable and privacy-preserving verifiable multi-instance iris verification system using Paillier HE and one-digit checksum (PVMIAPO). Modified local random projection is implemented on the fused iris template to produce the reduced template. Later, Paillier HE is applied on the reduced template to create the protected template. The result returned by the third-party server is verified using the one-digit checksum. The efficiency of PVMIAPO is verified by experimenting with it on the SDUMLA-HMT, IITD, and CASIA-V3-Interval iris databases. PVMIAPO satisfies the irreversibility, diversity, and revocability properties. PVMIAPO also obtains fair performance in comparison with existing methods.
  • Secure privacy-enhanced fast authentication and key management for IoMT-enabled smart healthcare systems

    Bojjagani S., Brabin D., Kumar K., Sharma N.K., Batta U.

    Article, Computing, 2024, DOI Link

    Advancements in smart healthcare systems have introduced Internet of Things (IoT) enabling technologies to improve the quality of medical services. The main idea of these healthcare systems is to provide data security, interaction between entities, efficient data transfer, and sustainability. However, privacy concerning patient information is a fundamental problem in smart healthcare systems. Many authentication and key management protocols exist in the literature for healthcare systems, but the security they ensure still needs to be improved. Even if security is achieved, it still requires fast communication and computations. To overcome this issue, we introduce a new secure privacy-enhanced fast authentication and key management scheme that effectively applies to lightweight, resource-constrained devices in healthcare systems. The proposed framework is applicable for quick authentication, efficient key management between the entities, and minimising computation and communication overheads. We verified our proposed framework with formal and informal verification using BAN logic, Scyther simulation, and the Drozer tool. The simulation and tool verification show that the proposed system is free from well-known attacks, reducing communication and computation costs compared to existing healthcare systems.
  • Mechanical element’s remaining useful life prediction using a hybrid approach of CNN and LSTM

    Sharma N.K., Bojjagani S.

    Article, Multimedia Tools and Applications, 2024, DOI Link

    For the safety and reliability of the system, Remaining Useful Life (RUL) prediction is considered in many industries. Traditional machine learning techniques struggle to provide rich feature representation and adaptive feature extraction. Deep learning techniques like Long Short-Term Memory (LSTM) have achieved excellent performance for RUL prediction. However, the LSTM network relies mainly on the last few data points, which may capture only limited contextual information. This paper proposes a hybrid combination of Convolutional Neural Network (CNN) and LSTM (CNN+LSTM) to solve this problem. The proposed hybrid model predicts how long a machine can operate without breaking down. In the proposed work, 1D horizontal and vertical signals of the mechanical bearing are first converted to 2D images using the Continuous Wavelet Transform (CWT). These 2D images are applied to the CNN for key feature extraction. Finally, these key features are applied to the LSTM deep neural network for predicting the RUL of the mechanical bearing. The PRONOSTIA dataset is utilized to demonstrate the performance of the proposed model and compare it with other state-of-the-art methods. Experimental results show that our proposed CNN+LSTM-based hybrid model achieved higher accuracy (98%) with better robustness than existing methods. (See the illustrative sketch after the publication list.)
  • Selective Weighting and Prediction Error Expansion for High-Fidelity Images

    Uyyala R., Bojjagani S., Sharma N.K., Chithaluru P., Akuri S.R.C.M.

    Article, SN Computer Science, 2024, DOI Link

    Reversible data hiding (RDH) based on prediction error expansion (PEE) needs a reliable predictor to forecast pixel values. The hidden information is inserted into the original cover image pixels using the Prediction Error (PE). A number of algorithms are available in the literature to improve the accuracy of pixel predictions for cover images, and several researchers have suggested prediction methods based on different gradient estimations. Further research on this gradient-based pixel prediction method is presented in this article. To improve the gradient estimates, we have examined a number of local contexts surrounding the current pixel, and experiments have been conducted to evaluate the effect of different neighborhood sizes on gradient estimation. Additionally, we investigate two methods for choosing prediction paths according to gradient magnitudes. To embed the data into the original pixels, a prediction-error-expansion embedding technique has been suggested. In the context of reversible data hiding, experimental results point towards better gradient-based prediction employing the proposed embedding technique. (See the illustrative sketch after the publication list.)
  • A Novel Energy Efficient Multi-Dimensional Virtual Machines Allocation and Migration at the Cloud Data Center

    Sharma N.K., Bojjagani S., Reddy Y.C.A.P., Vivekanandan M., Srinivasan J., Maurya A.K.

    Article, IEEE Access, 2023, DOI Link

    Due to the rapid utilization of cloud services, the energy consumption of cloud data centres is increasing dramatically. These cloud services are provided by Virtual Machines (VMs) through the cloud data center. Therefore, energy-aware VMs allocation and migration are essential tasks in the cloud environment. This paper proposes a Branch-and-Price based energy-efficient VMs allocation algorithm and a Multi-Dimensional Virtual Machine Migration (MDVMM) algorithm at the cloud data center. The Branch-and-Price based VMs allocation algorithm reduces energy consumption and wastage of resources by selecting the optimal number of energy-efficient PMs at the cloud data center. The proposed MDVMM algorithm saves energy consumption and avoids Service Level Agreement (SLA) violation by performing an optimal number of VMs migrations. The experimental results demonstrate that our proposed Branch-and-Price based VMs allocation with VMs migration algorithms saves more than 31% energy consumption and improves average resource utilization by 21.7% over existing state-of-the-art techniques, with a 95% confidence interval. The proposed approaches also outperform existing state-of-the-art VMs allocation and migration algorithms in terms of SLA violation, number of VM migrations, and the combined Energy SLA Violation (ESV) metric. (See the illustrative sketch after the publication list.)
  • The use of IoT-based wearable devices to ensure secure lightweight payments in FinTech applications

    Bojjagani S., Seelam N.R., Sharma N.K., Uyyala R., Akuri S.R.C.M., Maurya A.K.

    Article, Journal of King Saud University - Computer and Information Sciences, 2023, DOI Link

    Daily digital payments in Financial Technology (FinTech) are growing exponentially. There is huge demand for developing secure, lightweight cryptographic protocols for wearable IoT-based devices. The devices hold the consumer information and transit functions in a secure environment to provide authentication and confidentiality using contactless Near-Field Communication (NFC) or Bluetooth technologies. On the other hand, security breaches have been observed in various dimensions, especially in wearable payment technologies. In this paper, we develop a threat model for the proposed framework and show how to mitigate these attacks. This study adopts three-factor authentication, as biometrics is one of the user's most vital authentication mechanisms. The scheme uses the “Elliptic Curve Integrated Encryption Scheme (ECIES)”, “Elliptic Curve Digital Signature Algorithm (ECDSA)” and “Advanced Encryption Standard (AES)” to encrypt the messages between the entities to ensure higher security. The security analysis of the proposed scheme is demonstrated through the Real-or-Random oracle model (RoR) and Scyther's widely accepted model-checking tools. Finally, we present a comparative summary based on security features, communication cost, and computation overhead of existing methods, specifying that the proposed framework is secure and efficient for all kinds of remote and proximity payments, such as mini, macro, and micro-payments, using wearable devices.
  • Crowd Management System Based on Hybrid Combination of LSTM and CNN

    Sharma N.K., Krishna G.V.A., Kumar V.B., Kumar T.S.P., Rakesh C.H., Reddy R.C.

    Conference paper, Lecture Notes in Networks and Systems, 2023, DOI Link

    Automatic recognition of violence and nonviolence activities in the crowd management system is a broad area of interest in today’s scenario. In this paper, we propose a hybrid combination of the Convolution Neural Networks (CNNs), and Long Short-Term Memory (LSTM) model to recognize violence/nonviolence activities in a crowded area. In the proposed approach a stream of video is applied to a pretrained Darknet-19 network, then a CNN with LSTM network is used to extract spatial and temporal features from the video. In the end, these spatial features are applied to a fully connected layer to identify the violence/nonviolence condition. The experimental results show that 98.1% accuracy was achieved in the case of video, and 97.8% accuracy was achieved in the case of the image frame by our proposed violence/nonviolence detection model.
  • Music Generation Using Deep Learning

    Vemula D.R., Tripathi S.K., Sharma N.K., Hussain M.M., Swamy U.R., Polavarapu B.L.

    Conference paper, Lecture Notes in Electrical Engineering, 2023, DOI Link

    In this paper, we explore the usage of char-RNN, a special type of recurrent neural network (RNN), for generating music pieces and propose an approach to do so. First, we train a model using existing music data. The generative model mimics musical patterns in a way that humans enjoy; it does not replicate the training data but learns and creates patterns to generate new music that is melodious to hear. With tuning, the generated music can be beneficial for composers, film makers, and artists in their tasks, and it can also be sold by companies or individuals. In our paper, we focus on the char ABC-notation because it can reliably represent music using just a sequence of characters. We use a bidirectional long short-term memory (LSTM) network that takes music sequences as input, and observe that the proposed model has higher accuracy compared with other models.
  • Software Fault Prediction Using Deep Neural Networks

    Mohana Ramya Y., Deepthi K., Vamsai A., Juhi Sai A., Sharma N., Ramachandra Reddy B.

    Conference paper, Lecture Notes in Electrical Engineering, 2023, DOI Link

    Software failure prediction is the process of building models that software interpreters can use to detect faulty constructs early in the software development life cycle. Faults are the main source of time consumption and cost over the life cycle of applications. Early failure prediction increases device consistency and reliability and decreases the expense of software development. Machine learning techniques are valuable in detecting software bugs, and various techniques exist for finding bugs, ambiguities, and faulty software. In this paper, we conduct an exploratory review to assess the performance of popular techniques including logistic regression, decision tree, random forest, SVM, and DNN. Our experiments are performed on various datasets (jedit, Tomcat, Tomcat-1, Xalan, Xerces, and prop-6). The experimental results show that DNN produces the best accuracy among all the techniques considered.
  • Output Power Prediction of Solar Photovoltaic Panel Using Machine Learning Approach

    Tripathi A.K., Sharma N.K., Pavan J., Bojjagania S.

    Article, International Journal of Electrical and Electronics Research, 2022, DOI Link

    Solar power-based photovoltaic energy conversion could be considered one of the best sustainable sources of electric power generation. Thus, the prediction of the output power of the photovoltaic panel becomes necessary for its efficient utilization. The main aim of this paper is to predict the output power of solar photovoltaic panels using different machine learning algorithms based on various input parameters such as ambient temperature, solar radiation, panel surface temperature, relative humidity, and time of day. Three different machine learning algorithms, namely multiple regression, support vector machine regression, and Gaussian regression, were considered for the prediction of output power and compared on the basis of the results obtained. The outcomes of this study showed that the multiple linear regression algorithm provides the best performance, with a mean absolute error, mean squared error, coefficient of determination, and accuracy of 0.04505, 0.00431, 0.9981, and 0.99997 respectively, whereas support vector machine regression had the worst prediction performance. Moreover, the predicted responses are in good agreement with the actual values, indicating that the proposed machine learning algorithms are quite appropriate for predicting the output power of solar photovoltaic panels under different environmental conditions. (See the illustrative sketch after the publication list.)
  • A Novel Heart Disease Prediction Approach Using the Hybrid Combination of GA, PSO, and CNN

    Sharma N.K., Ramchandra Reddy B., Monika Chowdary M., Rani Durga Prasanna Swetha Y., Rishitha Varma B., Bharat C.

    Conference paper, Lecture Notes in Networks and Systems, 2022, DOI Link

    Heart disease is one of the foremost health problems nowadays and the deadliest human disease around the world. It has been the main reason for an enormous number of deaths in the world over the previous few decades. Therefore, there is a need to diagnose it in time to avoid serious risks. In this paper, we propose a hybrid approach to heart disease prediction by using a given range of feature vectors. Furthermore, a comparison of several classifiers for the prediction of heart disease cases with a minimum number of feature vectors is carried out. We propose two optimization algorithms, genetic algorithm (GA) and particle swarm optimization (PSO), for feature selection, and a convolutional neural network (CNN) for classification. The hybrid of GA and CNN is referred to as genetic neural network (GCNN), and the hybrid of PSO and CNN as particle neural network (PCNN). The experimental results show that the accuracy obtained by PCNN is approximately 82% and by GCNN is 75.51%.
  • A machine learning approach to software model refactoring

    Sidhu B.K., Singh K., Sharma N.

    Article, International Journal of Computers and Applications, 2022, DOI Link

    Good software quality is a consequence of good design. Model refactoring counteracts erosion of the software design at an early stage in the software development project, complying with the model-driven engineering paradigm. Traditional model refactoring approaches work at the surface level by using threshold values of model metrics as indicators of suboptimal design and carry out localized corrections. Through this paper, it is proposed that identifying design flaws at a higher level of granularity will avoid the vicious cycle of small refactoring operations and their cascaded side-effects. The notion of functional decomposition, as an anomalous design tendency and a dominant cause of design smells in object-oriented software, is introduced. It is suggested that refactoring operations targeted at signs of functional decomposition instead of atomic smells achieve substantial improvement in design within a concise quality assurance procedure. The idea is realized using a deep neural network that learns to recognize the presence of functional decomposition in UML models of object-oriented software. The presented approach uses data science methods to gain insight into multidimensional software design features and uses the experience gained to generalize subtle relationships among architectural components.
  • BAT algorithm based feature selection: Application in credit scoring

    Tripathi D., Ramachandra Reddy B., Padmanabha Reddy Y.C.A., Shukla A.K., Kumar R.K., Sharma N.K.

    Article, Journal of Intelligent and Fuzzy Systems, 2021, DOI Link

    Credit scoring plays a vital role for financial institutions in estimating the risk associated with a credit applicant who has applied for a credit product. It is estimated based on applicants' credentials and directly affects the viability of issuing institutions. However, there may be a large number of irrelevant features in the credit scoring dataset. Due to irrelevant features, credit scoring models may suffer poorer classification performance and higher complexity, so removing redundant and irrelevant features may overcome the problem of a large number of features. In this work, we emphasize the role of feature selection in enhancing the predictive performance of the credit scoring model. For feature selection, the Binary BAT optimization technique is utilized with a novel fitness function. Further, the proposed approach is aggregated with Radial Basis Function Neural Network (RBFN), Support Vector Machine (SVM), and Random Forest (RF) classifiers. The proposed approach is validated on four benchmark credit scoring datasets obtained from the UCI repository. Finally, a comprehensive analysis of the experimental results shows the comparative performance of the classification tasks with features selected by various approaches and by other state-of-the-art approaches for credit scoring.
  • Multi-Objective Energy Efficient Virtual Machines Allocation at the Cloud Data Center

    Sharma N.K., Reddy G.R.M.

    Article, IEEE Transactions on Services Computing, 2019, DOI Link

    Due to the growing demand for cloud services, allocation of energy-efficient resources (CPU, memory, storage, etc.) and resource utilization are major challenging issues of a large cloud data center. In this paper, we propose a Euclidean distance based multi-objective resource allocation in the form of virtual machines (VMs) and design the VM migration policy at the data center. Further, the allocation of VMs to Physical Machines (PMs) is carried out by our proposed hybrid approach of Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), referred to as HGAPSO. The proposed HGAPSO-based resource allocation and VM migration not only save energy and minimize resource wastage but also avoid SLA violation at the cloud data center. To check the performance of the proposed HGAPSO algorithm and VMs migration technique in terms of energy consumption, resource utilization, and SLA violation, we performed extensive experiments in both heterogeneous and homogeneous data center environments. To check the performance of the proposed HGAPSO with VM migration, we compared our proposed work with a branch-and-bound based exact algorithm. The experimental results show the superiority of the HGAPSO and VMs migration technique over the exact algorithm in terms of energy efficiency, optimal resource utilization, and SLA violation.
  • Hetero-material CPTFET with high-frequency and linearity analysis for ultra-low power applications

    Yadav D.S., Sharma D., Tirkey S., Sharma D.G., Bajpai S., Soni D., Yadav S., Aslam Md., Sharma N.

    Article, Micro and Nano Letters, 2018, DOI Link

    In this work, the authors have focused on increasing the current driving capability, speed of operation, suppression of parasitic capacitance and ambipolarity of the charge plasma tunnel field effect transistor (CPTFET). Gate dielectric and hetero-material engineering are employed in the CPTFET to obtain better drain current. Introduction of high-k dielectric increases the injection of charge carriers in the intrinsic body while a low-energy bandgap III–V material reduces the tunnelling width leading to the increased rate of band-to-band tunnelling of electrons and thus, enhancing the ON-state current of the device. Hence, the proposed device shows superior performance when operated in regime of DC and high frequency. For reducing the ambipolar conduction in the device, a widely used concept of underlapping of gate electrode is employed which reduces the leakage current in the device. Further, to determine the reliability of the device at high frequency, an analysis of linearity parameters is carried out. The proposed device is highly reliable to function at high-frequency regime. Therefore, the overall introduction of gate dielectric engineering, hetero-material engineering and underlapping of gate electrode improves the performance and characteristics of CPTFET.
  • Memory-based load balancing algorithm in structured peer-to-peer system

    Raghu G., Sharma N.K., Domanal S.G., Ram Mohana Reddy G.

    Conference paper, Advances in Intelligent Systems and Computing, 2018, DOI Link

    There are several load balancing techniques popularly used in Structured Peer-to-Peer (SPTP) systems to distribute the load among the systems. Most of these protocols concentrate on load sharing in SPTP systems, which leads to performance degradation in terms of processing delay and processing time due to poor resource utilization. The proposed work is a memory-based, sender-initiated load balancing algorithm. Further, to check the performance of the proposed load balancing algorithm, experiments were carried out in a real-time environment with different types of network topologies in a distributed setting. The proposed work performed better than existing load balancing algorithms such as Earliest Completion Load Balancing (ECLB) and First Come First Serve (FCFS) in terms of processing delay and execution time.
  • Energy efficient quality of service aware virtual machine migration in cloud computing

    Sharma N.K., Sharma P., Guddeti R.M.R.

    Conference paper, Proceedings of the 4th IEEE International Conference on Recent Advances in Information Technology, RAIT 2018, 2018, DOI Link

    This paper deals with multi-objective (network-aware, energy-efficient, and Service Level Agreement (SLA) aware) Virtual Machines (VMs) migration at the cloud data center. The proposed VMs migration technique migrates VMs from underutilized Physical Machines (PMs) to energy-efficient PMs at the cloud data center. Further, the multi-objective VMs migration technique not only reduces the power consumption of PMs and switches but also guarantees the quality of service by maintaining the SLA at the cloud data center. Our proposed VMs migration approach can find a good balance between the three conflicting objectives as compared to other algorithms. Further, the CloudSim-based experimental results demonstrate the superiority of our proposed multi-objective VMs migration technique in terms of energy efficiency and SLA violation over state-of-the-art VMs migration techniques such as Interquartile Range (IQR) and random VMs migration at the cloud data center.
  • A novel hetero-material gate-underlap electrically doped TFET for improving DC/RF and ambipolar behaviour

    Yadav S., Sharma D., Chandan B.V., Aslam M., Soni D., Sharma N.

    Article, Superlattices and Microstructures, 2018, DOI Link

    In this article, the impact of gate-underlap with hetero material (low band gap) has been investigated in terms of DC and Analog/RF parameters by proposed device named as hetero material gate-underlap electrically doped TFET (HM-GUL-ED-TFET). Gate-underlap resolves the problem of ambipolarity, gate leakage current (Ig) and slightly improves the gate to drain capacitance, but DC performance is almost unaffected. Further, the use of low band gap material (Si0.5Ge) in proposed device causes a drastic improvement in the DC as well as RF figures of merit. We have investigated the Si0.5Ge as a suitable candidate among different low band gap materials. In addition, the sensitivity of gate-underlap in terms of gate to drain inversion and parasitic capacitances has been studied for HM-GUL-ED-TFET. Further, relatively it is observed that gate-underlap is a better way than drain-underlap in the proposed structure to improve Analog/RF performances without degrading the DC parameters of device. Additionally, hetero-junction alignment analysis has been done for fabrication feasibility.
  • A new structure of electrically doped TFET for improving electronic characteristics

    Yadav S., Madhukar R., Sharma D., Aslam M., Soni D., Sharma N.

    Article, Applied Physics A: Materials Science and Processing, 2018, DOI Link

    This article put forward a novel device structure of electrically doped tunnel field effect transistor to improve DC and RF performance with suppressed ambipolarity and gate leakage. For suppressing gate leakage and ambipolarity, gate underlapping has been presented, which does not significantly affect the Analog/RF parameters of the device. Further, for improving the device performance a novel initiative of implanting a T-shaped metal layer under gate electrode at source/channel interface with high-k dielectric material has been investigated in the proposed structure. In addition, optimization of gate and electrical drain underlapping is investigated in comparative manner for proposed structure.
  • GWOTS: Grey Wolf Optimization Based Task Scheduling at the Green Cloud Data Center

    Natesha B.V., Kumar Sharma N., Domanal S., Reddy Guddeti R.M.

    Conference paper, Proceedings - 2018 14th International Conference on Semantics, Knowledge and Grids, SKG 2018, 2018, DOI Link

    Task scheduling is a key challenging issue of an Infrastructure as a Service (IaaS) based cloud data center and is a well-known NP-complete problem. As the number of users' requests increases, the load on the cloud data center also increases gradually. To manage the heavy load on the cloud data center, in this paper, we propose a multi-objective Grey Wolf Optimization (GWO) technique for task scheduling. The main objective of our proposed GWO based scheduling algorithm is to achieve optimum utilization of cloud resources for reducing both the energy consumption of the data center and the total makespan of the scheduler for the given list of tasks while providing the services as requested by the users. Our proposed scheduling algorithm is compared with non meta-heuristic algorithms (First-Come-First-Serve (FCFS) and Modified Throttle (MT)), and meta-heuristic algorithms (Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and Cat Swarm Optimization (CSO)). Experimental results demonstrate that the proposed GWO based scheduler outperforms all algorithms considered for performance evaluation in terms of makespan for the list of tasks, resource utilization and energy consumption.
  • Study of metal strip insertion and its optimization in doping less TFET

    Yadav D.S., Verma A., Sharma D., Sharma N.

    Article, Superlattices and Microstructures, 2018, DOI Link

    In this manuscript, a novel structure for dopingless tunnel field effect transistor (DL TFET) is introduced which comprises of metal strip (MS) in the oxide region near source/channel junction. Gate and drain work function engineering with hetero gate dielectric is used in this device which enhances the ON-state current, reduces the ambipolarity and also improves the RF performance. The increment in ON-state current with high subthreshold swing is achieved by the effect of MS. Steeper tunneling junction is achieved with the help of aforementioned modifications at the S/C junction. In this way, we increase the rate of tunneling at this interface and because of this, the threshold voltage of the proposed device reduces drastically. Further, study of ambipolarity suppression is done by using underlap of gate electrode near drain/channel (D/C) interface. Furthermore, temperature variation effect also incorporated in this manuscript, where study related to the threshold voltage and ON-state current is analysed by TCAD simulation. Moreover, most optimized MS length and work function are concluded in this paper for all simulations. Optimization for MS length and workfunction is analysed using TCAD simulation tool and shown in tabular form with one table showing effect of different work functions of MS on threshold voltage, ON-state current and SS, whereas another table shows the effect of MS length variations on RF parameters.
  • Multi-Objective Resources Allocation Using Improved Genetic Algorithm at Cloud Data Center

    Sharma N.K., Guddeti R.M.R.

    Conference paper, Proceedings - 2016 IEEE International Conference on Cloud Computing in Emerging Markets, CCEM 2016, 2017, DOI Link

    In this paper, a novel Improved Genetic Algorithm (IGA) is proposed to determine a near-optimal solution for multi-objective resource allocation at the green cloud data center of a smart grid. Instead of randomly generating the initial chromosomes for the crossover and mutation operations, the Modified First Decreasing (MFD) technique generates a better initial population. The proposed work saves energy consumption, minimizes resource wastage, and reduces the algorithm's computation time at the cloud data center. The CloudSim-based experimental results show that our proposed approach improves the performance of the data center in terms of energy efficiency and average resource utilization when compared to state-of-the-art VM allocation approaches, i.e., First Fit, Modified First Decreasing (MFD), and Grouping Genetic Algorithm (GGA).
  • On demand Virtual Machine allocation and migration at cloud data center using Hybrid of Cat Swarm Optimization and Genetic Algorithm

    Sharma N.K., Guddeti R.M.R.

    Conference paper, Proceedings on 5th International Conference on Eco-Friendly Computing and Communication Systems, ICECCS 2016, 2017, DOI Link

    This paper deals with energy saving at the data center using energy-aware Virtual Machine (VM) allocation and migration. The multi-objective VMs allocation using the Hybrid Genetic Cat Swarm Optimization (HGACSO) algorithm saves energy consumption and also reduces resource wastage. Further, by consolidating VMs onto a minimal number of Physical Machines (PMs) using energy-efficient VM migration, we can shut down idle PMs to enhance energy efficiency at the cloud data center. The experimental results show that our proposed HGACSO VM allocation and energy-efficient VM migration techniques achieve energy efficiency and minimize resource wastage.
  • A novel approach for multi-dimensional variable sized virtual machine allocation and migration at cloud data center

    Sharma N.K., Reddy G.R.M.

    Conference paper, 2017 9th International Conference on Communication Systems and Networks, COMSNETS 2017, 2017, DOI Link

    In this paper, we propose a branch-and-bound based exact algorithm for allocating multi-dimensional variable sized VMs at the cloud data center. Further, an energy efficient VMs migration technique is proposed to reduce the energy consumption and thus avoids the Service Level Agreement (SLA) violation at the cloud data center.
  • Novel energy efficient virtual machine allocation at data center using Genetic algorithm

    Sharma N.K., Ram Mohana Reddy G.

    Conference paper, 2015 3rd International Conference on Signal Processing, Communication and Networking, ICSCN 2015, 2015, DOI Link

    Increased resources utilization from clients in a smart computing environment poses a greater challenge in allocating optimal energy efficient resources at the data center. Allocation of these optimal resources should be carried out in such a manner that we can save the energy of data center as well as avoiding the service level agreement (SLA) violation. This paper deals with the design of an energy efficient algorithm for optimized resources allocation at data center using combined approach of Dynamic Voltage Frequency Scaling (DVFS) and Genetic algorithm (GA). The performance of the proposed energy efficient algorithm is compared with DVFS. Experimental results demonstrate that the proposed energy efficient algorithm consumes 22.4% less energy over a specified workload with 0% SLA violation.
  • A novel energy efficient resource allocation using hybrid approach of genetic DVFS with bin packing

    Sharma N.K., Reddy G.R.M.

    Conference paper, Proceedings - 2015 5th International Conference on Communication Systems and Network Technologies, CSNT 2015, 2015, DOI Link

    Increased resources utilization from several clients in a smart computing environment poses a key challenge in allocating optimal energy efficient resources at the data center. Allocation of these optimal resources should be carried out in such a manner that we can reduce the energy consumption of the data center and also avoid the service level agreement (SLA) violation. This paper deals with the development of an energy efficient algorithm for optimal resources allocation at the data center using hybrid approach of the Dynamic Voltage Frequency Scaling (DVFS), Genetic algorithm (GA) and Bin Packing techniques. The performance of the proposed hybrid approach is compared with Genetic Algorithm, DVFS with Bin Packing, DVFS without Bin Packing techniques. Experimental results demonstrate that the proposed energy efficient algorithm consumes 22.4% less energy as compared to the DVFS with Bin Packing technique over a specified workload with 0% SLA violation.
  • Adoption of knowledge management practices in software engineering organizations: A survey of software engineers’ perceptions

    Sharma N., Singh K., Goyal D.P.

    Conference paper, Proceedings - 2012 2nd International Conference on Advanced Computing and Communication Technologies, ACCT 2012, 2012, DOI Link

    Most businesses rely on the fact that their employees possess relevant knowledge and that they can apply it to the task at hand. But a serious problem exists: this knowledge is not owned by the organisation as such. Rather, the knowledge is owned and controlled by its employees. The success or failure of knowledge management (KM) systems and the various KM practices claimed to be adopted by software engineering organisations can best be judged from the software engineers' point of view, as they are the first-hand users of the KM systems and technologies.
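
Illustrative Code Sketches

The sketches below are simplified, hypothetical reconstructions of ideas described in selected publications above. They are not the authors' implementations; all parameter values, data, and helper names are assumptions made purely for illustration.

For "Deep learning BiLSTM and Branch-and-Bound based multi-objective virtual machine allocation and migration with profit, energy, and SLA constraints", a minimal sketch of the first phase only: a bidirectional LSTM that forecasts the next VM instance price from a sliding window of past prices, assuming TensorFlow/Keras and a synthetic price series.

    # Minimal BiLSTM price-prediction sketch (assumption, not the paper's model).
    import numpy as np
    import tensorflow as tf

    WINDOW = 24  # hours of price history used to predict the next price (assumed)

    def make_windows(prices, window=WINDOW):
        """Slice a 1-D price series into (window, 1) inputs and next-step targets."""
        X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
        y = np.array([prices[i + window] for i in range(len(prices) - window)])
        return X[..., np.newaxis], y

    # Synthetic spot-price-like series, purely for demonstration.
    rng = np.random.default_rng(0)
    prices = 0.05 + 0.01 * np.sin(np.arange(2000) / 24.0) + 0.002 * rng.standard_normal(2000)
    X, y = make_windows(prices)

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW, 1)),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),  # BiLSTM encoder
        tf.keras.layers.Dense(1),                                  # next price estimate
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=5, batch_size=64, validation_split=0.1, verbose=0)
    print("predicted next price:", float(model.predict(X[-1:], verbose=0)[0, 0]))

The predicted prices would feed the profit term of the multi-objective allocation phase; the allocation and migration phases themselves are not sketched here.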
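
For "Dynamic Threshold-based DDoS Detection and Prevention for Network Function Virtualization (NFV) in Digital Twin Environment", a minimal sketch of the dynamic-threshold idea over per-source packet counts per monitoring window; the smoothing factor, sensitivity, and warm-up length are assumed values.

    # Dynamic threshold over per-source traffic volume (illustrative, not the chapter's model).
    from collections import defaultdict
    import math

    ALPHA, K, WARMUP = 0.2, 3.0, 5  # smoothing factor, sensitivity, warm-up windows (assumed)

    class DynamicThresholdDetector:
        def __init__(self):
            self.mean = defaultdict(float)
            self.var = defaultdict(float)
            self.seen = defaultdict(int)

        def observe(self, src, packets_in_window):
            """Return True if this window's packet count for `src` looks like a flood."""
            m, v, n = self.mean[src], self.var[src], self.seen[src]
            threshold = m + K * math.sqrt(v) if n >= WARMUP else float("inf")
            alert = packets_in_window > threshold
            # Update the exponentially weighted mean and variance with the new observation.
            diff = packets_in_window - m
            self.mean[src] = m + ALPHA * diff
            self.var[src] = (1 - ALPHA) * (v + ALPHA * diff * diff)
            self.seen[src] = n + 1
            return alert

    detector = DynamicThresholdDetector()
    windows = [100, 110, 95, 105, 98, 102, 97, 5000]   # benign traffic, then a sudden flood
    print([detector.observe("10.0.0.7", p) for p in windows])  # only the last window alerts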
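
For "Mechanical element's remaining useful life prediction using a hybrid approach of CNN and LSTM", a minimal sketch of the CNN+LSTM idea: each sample is assumed to be a short sequence of 2-D time-frequency images (such as CWT images of the horizontal and vertical vibration signals); a small CNN encodes each image and an LSTM aggregates the sequence into an RUL estimate. Shapes, layer sizes, and the random stand-in data are assumptions.

    # Hybrid CNN + LSTM RUL sketch (assumed architecture, not the paper's exact network).
    import numpy as np
    import tensorflow as tf

    SEQ_LEN, IMG, CH = 10, 32, 2  # 10 consecutive CWT images, 32x32, two vibration channels

    cnn = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(IMG, IMG, CH)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),       # per-image feature vector
    ])

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(SEQ_LEN, IMG, IMG, CH)),
        tf.keras.layers.TimeDistributed(cnn),            # apply the CNN to every image in the sequence
        tf.keras.layers.LSTM(64),                         # temporal aggregation
        tf.keras.layers.Dense(1),                         # predicted (normalised) remaining useful life
    ])
    model.compile(optimizer="adam", loss="mae")

    # Random stand-in data; real inputs would be CWT images of PRONOSTIA bearing signals.
    X = np.random.rand(64, SEQ_LEN, IMG, IMG, CH).astype("float32")
    y = np.random.rand(64, 1).astype("float32")
    model.fit(X, y, epochs=2, batch_size=16, verbose=0)
    print(model.predict(X[:1], verbose=0))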
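
For "Selective Weighting and Prediction Error Expansion for High-Fidelity Images", a minimal sketch of classic prediction-error expansion (PEE) embedding and extraction, not the paper's selective-weighting predictor: odd-indexed samples are carriers predicted from their untouched even-indexed neighbours, and each prediction error e is expanded to 2*e + bit. The overflow handling and location map of a complete RDH scheme are omitted.

    # Basic PEE embed/extract on a 1-D signal (illustrative simplification).
    import numpy as np

    def pee_embed(cover, bits):
        stego = cover.astype(np.int64).copy()
        for k, bit in enumerate(bits):
            i = 2 * k + 1                      # carrier position
            pred = stego[i - 1]                # predictor: untouched even neighbour
            e = stego[i] - pred
            stego[i] = pred + 2 * e + bit      # expanded error carries one bit
        return stego

    def pee_extract(stego, n_bits):
        restored = stego.astype(np.int64).copy()
        bits = []
        for k in range(n_bits):
            i = 2 * k + 1
            pred = restored[i - 1]
            e_exp = restored[i] - pred
            bits.append(int(e_exp % 2))                    # hidden bit is the parity
            restored[i] = pred + (e_exp - bits[-1]) // 2   # exact recovery of the original sample
        return restored, bits

    cover = np.linspace(60, 190, 64).astype(np.int64)      # smooth demo signal (small errors)
    bits = [1, 0, 1, 1, 0, 0, 1, 0]
    stego = pee_embed(cover, bits)
    restored, extracted = pee_extract(stego, len(bits))
    print(np.array_equal(restored, cover), extracted == bits)   # True True -> fully reversible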
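
For "A Novel Energy Efficient Multi-Dimensional Virtual Machines Allocation and Migration at the Cloud Data Center", a sketch of a simple energy-aware baseline of the kind such allocators are usually compared against, not the paper's Branch-and-Price algorithm: best-fit-decreasing placement of multi-dimensional VM demands onto as few PMs as possible, with a linear idle/peak power model. PM capacities, power figures, and the VM list are assumptions.

    # Energy-aware best-fit-decreasing VM placement (baseline sketch, not Branch-and-Price).
    P_IDLE, P_PEAK = 100.0, 250.0     # assumed PM power draw at idle and full CPU load (W)
    PM_CPU, PM_RAM = 100, 256         # assumed PM capacities (CPU %, RAM GB)

    def place_vms(vms):
        """vms: list of (cpu, ram) demands. Returns a list of PMs with their assigned VMs."""
        pms = []                                              # each PM: {"cpu", "ram", "vms"}
        for cpu, ram in sorted(vms, reverse=True):            # largest CPU demand first
            fits = [p for p in pms
                    if p["cpu"] + cpu <= PM_CPU and p["ram"] + ram <= PM_RAM]
            if fits:                                          # best fit: tightest remaining CPU
                host = min(fits, key=lambda p: PM_CPU - p["cpu"] - cpu)
            else:                                             # switch on a new PM only when forced
                host = {"cpu": 0, "ram": 0, "vms": []}
                pms.append(host)
            host["cpu"] += cpu
            host["ram"] += ram
            host["vms"].append((cpu, ram))
        return pms

    def total_power(pms):
        return sum(P_IDLE + (P_PEAK - P_IDLE) * p["cpu"] / PM_CPU for p in pms)

    vms = [(30, 32), (25, 64), (50, 128), (20, 16), (45, 64), (10, 8), (35, 32)]
    pms = place_vms(vms)
    print(f"{len(pms)} active PMs, estimated power {total_power(pms):.1f} W")

Keeping the number of active PMs small is what drives the energy term; the SLA-aware migration policies in the paper are not sketched here.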
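
For "Output Power Prediction of Solar Photovoltaic Panel Using Machine Learning Approach", a sketch comparing the three regressors named in the abstract on synthetic data (the study itself uses measured panel data); the feature ranges and the toy generative rule are assumptions.

    # Comparing multiple linear regression, SVR, and Gaussian process regression (illustrative).
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVR
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

    rng = np.random.default_rng(42)
    n = 500
    # Features: ambient temperature, irradiance, panel surface temperature, humidity, hour of day.
    X = np.column_stack([
        rng.uniform(15, 40, n), rng.uniform(0, 1000, n),
        rng.uniform(20, 60, n), rng.uniform(20, 90, n), rng.uniform(6, 18, n),
    ])
    # Toy rule: output driven mostly by irradiance, slightly reduced by panel temperature.
    y = 0.2 * X[:, 1] - 0.5 * (X[:, 2] - 25) + rng.normal(0, 5, n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    models = {
        "multiple linear regression": LinearRegression(),
        "support vector regression": SVR(kernel="rbf", C=100.0),
        "gaussian process regression": GaussianProcessRegressor(),
    }
    for name, model in models.items():
        pred = model.fit(X_tr, y_tr).predict(X_te)
        print(f"{name:30s} MAE={mean_absolute_error(y_te, pred):8.3f} "
              f"MSE={mean_squared_error(y_te, pred):10.3f} R2={r2_score(y_te, pred):.4f}")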

Patents

Projects

Scholars

Doctoral Scholars

  • Mr Chiranjeevi Koganti

Interests

  • Artificial Intelligence
  • Cloud Computing
  • Distributed Computing
  • Machine Learning

Thought Leaderships

There are no Thought Leaderships associated with this faculty.

Top Achievements

Research Area

No research areas found for this faculty.

Recent Updates

No recent updates found.

Education
2005
B.E.
RGPV Bhopal
India
2010
M.E.
IET, DAVV Indore
India
2018
National Institute of Technology Karnataka, Surathkal
India
Experience
  • 2017-2020-Professor-RAIT, D.Y. Patil University, Navi Maumbai
  • 2014-2017-Research Assistant- NITK, Surathkal
  • 2006-2014-Assistant Professor-Shri Vaishnav Institute of Technology Indore
Research Interests
  • Multi-Objective energy efficient resources allocation at cloud data center.
  • Workload predication in dynamic cloud environment using machine learning approaches.
  • Cloud optimization problem formulation and its solution using bio-inspired/soft computing
Awards & Fellowships
  • 2008 – Faculty Research Grant Award – Microsoft
Memberships
  • Indian Society for Technical Education (ISTE) Life Time Membership
Publications
  • Deep learning BiLSTM and Branch-and-Bound based multi-objective virtual machine allocation and migration with profit, energy, and SLA constraints

    Sharma N.K., Bojjagani S., Uyyala R., Maurya A.K., Kumari S.

    Article, Sustainable Computing: Informatics and Systems, 2025, DOI Link

    View abstract ⏷

    This paper highlights a novel approach to address multiple networking-based VM allocation and migration objectives at the cloud data center. The proposed approach in this paper is structured into three distinct phases: firstly, we employ a Bi-Directional Long Short Term Memory (BiLSTM) model to predict Virtual Machines (VMs) instance's prices. Subsequently, we formulate the problem of allocating VMs to Physical Machines (PMs) and switches in a network-aware cloud data center environment as a multi-objective optimization task, employing Linear Programming (LP) techniques. For optimal allocation of VMs, we leverage the Branch-and-Bound (BaB) technique. In the third phase, we implement a VM migration strategy sensitive to SLA requirements and energy consumption considerations. The results, conducted using the CloudSim simulator, demonstrate the efficacy of our approach, showcasing a substantial 35% reduction in energy consumption, a remarkable decrease in SLA violations, and a notable 18% increase in the cloud data center's profit. Finally, the proposed multi-objective approach reduces energy consumption and SLA violation and makes the data center sustainable.
  • Unveiling Android security testing: A Comprehensive overview of techniques, challenges, and mitigation strategies

    Palutla D.V., Bojjagani S., Mula S.C.R., Uyyala R., Sharma N.K., Morampudi M.K., Khan M.K.

    Review, Computers and Electrical Engineering, 2025, DOI Link

    View abstract ⏷

    With the rapid growth of Android applications, ensuring robust security has become a critical concern. Traditional Vulnerability Assessment and Penetration Testing (VAPT) approaches, though effective across platforms, often fall short in addressing Android-specific security challenges. This paper presents a comprehensive review of security testing methods tailored to the Android ecosystem, including static and dynamic analysis, hybrid approaches, network communication testing, reverse engineering, malware detection, and permission-based assessments. Android's open-source nature, device fragmentation, and inconsistent security policies introduce unique vulnerabilities that require specialized testing strategies. By examining current tools, methodologies, and best practices, this review identifies recurring gaps in the Android application security testing process. It highlights the need for more adaptable and thorough testing frameworks. The insights provided are valuable to developers, researchers, and security professionals aiming to strengthen Android app security. Ultimately, this work underscores the importance of tailoring security assessment practices to the evolving threat landscape of the Android platform, thereby contributing to the development of safer and more resilient applications.
  • A Novel Convolutional Neural Network Architecture for Gender and Age Prediction

    Sharma N.K., Induri A., Reddy Priya M., Choubey P.

    Conference paper, Lecture Notes in Networks and Systems, 2025, DOI Link

    View abstract ⏷

    Soft biometrics is a field in which a person's gender, height, and weight are determined using a digital machine. This is due to the increasing number of real-world applications in day-to-day life with improved wireless technology. Soft biometrics generally consist of gender, age, ethnicity, height, and facial dimensions. In this paper, the authors propose a state-of-the-art Convolutional Neural Network (CNN) to classify gender and estimate age based on human face images. We also tune the hyperparameters of the CNN to compute the best result for the gender and age of a human face. Experimental results reveal that our proposed methodology achieved 82% accuracy for gender and 78% accuracy for age.
  • Deciphering the Role of Functional Ion Channels in Cancer Stem Cells (CSCs) and Their Therapeutic Implications

    Samanta K., Reddy G.S.V.S.R., Sharma N.K., Kar P.

    Review, International Journal of Molecular Sciences, 2025, DOI Link

    View abstract ⏷

    Despite advances in medicine, cancer remains one of the foremost global health concerns. Conventional treatments like surgery, radiotherapy, and chemotherapy have advanced with the emergence of targeted and immunotherapy approaches. However, therapeutic resistance and relapse remain major barriers to long-term success in cancer treatment, often driven by cancer stem cells (CSCs). These rare, resilient cells can survive therapy and drive tumour regrowth, urging deeper investigation into the mechanisms underlying their persistence. CSCs express ion channels typical of excitable tissues, which, beyond electrophysiology, critically regulate CSC fate. However, the underlying regulatory mechanisms of these channels in CSCs remain largely unexplored and poorly understood. Nevertheless, the therapeutic potential of targeting CSC ion channels is immense, as it offers a powerful strategy to disrupt vital signalling pathways involved in numerous pathological conditions. In this review, we explore the diverse repertoire of ion channels expressed in CSCs and highlight recent mechanistic insights into how these channels modulate CSC behaviours, dynamics, and functions. We present a concise overview of ion channel-mediated CSC regulation, emphasizing their potential as novel diagnostic markers and therapeutic targets, and identifying key areas for future research.
  • Dynamic Threshold-based DDoS Detection and Prevention for Network Function Virtualization (NFV) in Digital Twin Environment

    Bojjagani S., Surya Nagi Reddy N., Medasani S.S., Umar M., Reddy C.A., Sharma N.K.

    Book chapter, Blockchain and Digital Twin Enabled IoT Networks: Privacy and Security Perspectives, 2024, DOI Link

    View abstract ⏷

    Digital Twin (DT) technology is a digital illustration of a physical object or system; this technology has paid much attention to IoT, healthcare, automotive manufacturing, construction of buildings, and even cities. However, these applications may also have serious security pitfalls in DT deployment. Distributed Denial of Service (DDoS) attacks significantly threaten the availability and stability of computer networks and services. Detecting and mitigating these attacks on time is crucial for maintaining network security. This chapter aims to develop an algorithmic-based approach for detecting and preventing DDoS attacks in the initial stages of Network Function Virtualisation (NFV). The proposed model involves the Network traffic collected from a sender in various monitoring points within the network infrastructure. Then the traffic is analysed by extracting relevant information like from which source the traffic is coming, Transmission Control Protocol (TCP), three-way handshake details, packet size, and traffic volume. The developed model is deployed in real time to monitor incoming network traffic. It analyses the extracted features and compares them with the learned patterns to identify potential Distributed Denial of Service attacks (DDoS). Alerts and notifications are generated, and warning notifications will be given to the source node. Upon detection of a DDoS attack, appropriate mitigation strategies are implemented to protect the network infrastructure and services. These may include traffic filtering; and rate limiting to mitigate the attack’s impact and ensure critical resource availability. The performance metrics, such as detection accuracy, false positive rate, and response time, will be measured to assess the reliability and efficiency of the solution, by developing an algorithmic model that can effectively detect and mitigate Distributed Denial of Service attacks. This chapter aims to enhance network security and ensure the uninterrupted availability of online services for digital twin environments, even in the face of evolving and sophisticated cyber threats.
  • BCECBN: Blockchain-enabled P2P Secure File Sharing System Over Cloudlet Networks

    Sumanth P., Bojjagani S., Poojitha P., Bharani P., Krishna T.G., Sharma N.K.

    Book chapter, Blockchain and Digital Twin Enabled IoT Networks: Privacy and Security Perspectives, 2024, DOI Link

    View abstract ⏷

    Cloud computing is a relatively recent technological development that has steadily gained popularity over the last few years. The data sharing between legitimate entities in peer-to-peer (P2P) file systems in the cloud is the most challenging task. Nowadays, many individuals utilize document destruction programs. Sharing files created with these programs is one way for individuals to make money online. Some websites, like Chegg and Scribd, provide a forum for academics and freelancers to share their work with the public. To participate in these programs, the users first become members. And those who wish to access the files must still pay for the application rather than the original author. In this chapter, we developed a novel approach for a secure file-sharing system called blockchain-enabled and cloudlet-based networks (BCECBN). It integrates a blockchain for secure transactions and access to shared files. The proposed model is protected against various attacks performed by the adversary. In addition, it provides a solution for the above-discussed examples; users may easily share and trade data with one another and save and share data over the internet with no effort. In addition, this research will examine a cloud-based secure file-sharing system and exchange the data using digital media. Each user performs a transaction along the blockchain to get access to the files.
  • Federated Learning-based Big Data Analytics For The Education System

    Surapaneni P., Bojjagani S., Sharma N.K.

    Conference paper, Intelligent Computing and Emerging Communication Technologies, ICEC 2024, 2024, DOI Link

    View abstract ⏷

    This paper proposes a novel approach to enhancing education systems by integrating federated learning techniques with big data analytics. Traditional data analysis methods in educational settings often need help regarding data privacy, security, and scalability. Federated learning addresses these issues by enabling collaborative model training across distributed datasets without data centralization, thus preserving the privacy of sensitive information. By harnessing the vast amounts of educational data generated from various sources such as online learning platforms, student information systems, and academic applications, federated learning empowers educational institutions to derive valuable insights while respecting data privacy regulations. Leveraging the collective intelligence of decentralized data sources, federated learning algorithms facilitate the development of robust predictive models for student performance, personalized learning recommendations, and early intervention strategies. Moreover, federated learning enables continuous model improvement by aggregating local model updates from participating institutions, ensuring adaptability to evolving educational landscapes. This paper explores the technical foundations of federated learning, its application in education systems, and its potential benefits in improving learning outcomes and fostering data-driven decision-making in education. Through a comprehensive review of existing literature and case studies, this research aims to provide insights into the opportunities and challenges associated with implementing federated learning-based big data analytics in education systems, ultimately paving the way for a more efficient and personalized approach to education.
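    The paper above relies on aggregating local model updates from participating institutions. A minimal federated-averaging sketch in NumPy, assuming each institution trains a small linear model locally and the server averages weights in proportion to local dataset size; the model, learning rate, and weighting scheme are illustrative assumptions, not the paper's exact setup.

    import numpy as np

    def local_update(w, X, y, lr=0.1, epochs=5):
        """One institution: a few epochs of gradient descent on a local
        least-squares objective, starting from the current global weights."""
        w = w.copy()
        for _ in range(epochs):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= lr * grad
        return w

    def fed_avg(global_w, clients):
        """Server: weighted average of client models by local sample count."""
        total = sum(len(y) for _, y in clients)
        new_w = np.zeros_like(global_w)
        for X, y in clients:
            new_w += (len(y) / total) * local_update(global_w, X, y)
        return new_w

    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    clients = []
    for n in (40, 60, 100):                      # three institutions, different sizes
        X = rng.normal(size=(n, 2))
        clients.append((X, X @ true_w + 0.1 * rng.normal(size=n)))

    w = np.zeros(2)
    for round_ in range(20):                     # communication rounds
        w = fed_avg(w, clients)
    print(w)                                     # approaches true_w without pooling raw data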
  • Reliable and privacy-preserving multi-instance iris verification using Paillier homomorphic encryption and one-digit checksum

    Morampudi M.K., Gonthina N., Bojjagani S., Sharma N.K., Veeraiah D.

    Article, Signal, Image and Video Processing, 2024, DOI Link

    View abstract ⏷

    The utilization of a biometric authentication system (BAS) for reliable automatic human recognition has increased exponentially in recent years over traditional authentication systems. Since biometric traits are irrevocable, two important issues, security and privacy, still need to be addressed in BAS. Researchers have explored homomorphic encryption (HE) to propose several privacy-preserving BAS. However, the correctness of the results computed by the cloud server on the protected templates is still an open research challenge. These methods can preserve the privacy of biometric templates but cannot check the correctness of the computed result, which leads to false rejects or false accepts. To overcome this issue, we suggest a reliable and privacy-preserving verifiable multi-instance iris verification system using Paillier HE and a one-digit checksum (PVMIAPO). Modified local random projection is applied to the fused iris template to produce a reduced template. Paillier HE is then applied to the reduced template to create the protected template. The result returned by the third-party server is verified using the one-digit checksum. The efficiency of PVMIAPO is verified by experimenting on the SDUMLA-HMT, IITD, and CASIA-V3-Interval iris databases. PVMIAPO satisfies the irreversibility, diversity, and revocability properties, and it also obtains fair performance compared with existing methods.
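    A toy demonstration of the two ingredients named above: Paillier's additive homomorphism and a one-digit checksum check on the returned result. The 8-bit primes and the checksum rule are illustrative assumptions only; they are far too small for real security and this is not the paper's PVMIAPO construction.

    import math, random

    p, q = 101, 103                # toy primes (assumed; never use in practice)
    n = p * q
    n2 = n * n
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                      # standard choice of generator

    def encrypt(m):
        r = random.randrange(1, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        u = pow(c, lam, n2)
        l = (u - 1) // n
        return (l * pow(lam, -1, n)) % n

    def checksum(x):
        return x % 10              # one-digit checksum (illustrative rule)

    a, b = 123, 456
    ca, cb = encrypt(a), encrypt(b)
    c_sum = (ca * cb) % n2         # homomorphic addition on ciphertexts
    result = decrypt(c_sum)
    print(result, checksum(result) == checksum(a + b))   # 579 True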
  • Secure privacy-enhanced fast authentication and key management for IoMT-enabled smart healthcare systems

    Bojjagani S., Brabin D., Kumar K., Sharma N.K., Batta U.

    Article, Computing, 2024, DOI Link

    View abstract ⏷

    Advancements in smart healthcare systems have introduced the Internet of Things, enabling technologies to improve the quality of medical services. The main idea of these healthcare systems is to provide data security, interaction between entities, efficient data transfer, and sustainability. However, privacy concerning patient information is a fundamental problem in smart healthcare systems. Many authentication and key management protocols exist in the literature for healthcare systems, but the security they ensure still needs to be improved. Even where security is achieved, fast communication and computation are still required. In this paper, we introduce a new secure privacy-enhanced fast authentication and key management scheme that applies effectively to lightweight resource-constrained devices in healthcare systems. The proposed framework supports quick authentication and efficient key management between the entities while minimising computation and communication overheads. We verified the proposed framework with formal and informal verification using BAN logic, Scyther simulation, and the Drozer tool. The simulation and tool verification show that the proposed system is free from well-known attacks and reduces communication and computation costs compared to existing healthcare systems.
  • Mechanical element’s remaining useful life prediction using a hybrid approach of CNN and LSTM

    Sharma N.K., Bojjagani S.

    Article, Multimedia Tools and Applications, 2024, DOI Link

    View abstract ⏷

    For the safety and reliability of a system, Remaining Useful Life (RUL) prediction is considered in many industries. Traditional machine learning techniques fail to provide rich feature representation and adaptive feature extraction. Deep learning techniques like Long Short-Term Memory (LSTM) achieve excellent performance for RUL prediction. However, the LSTM network relies mainly on the most recent data, which may capture only limited contextual information. This paper proposes a hybrid combination of a Convolution Neural Network (CNN) and LSTM (CNN+LSTM) to solve this problem. The proposed hybrid model predicts how long a machine can operate without breaking down. In the proposed work, the 1D horizontal and vertical signals of a mechanical bearing are first converted to 2D images using the Continuous Wavelet Transform (CWT). These 2D images are fed to the CNN for key feature extraction. Finally, these key features are passed to the LSTM deep neural network to predict the RUL of the mechanical bearing. The PRONOSTIA dataset is used to demonstrate the performance of the proposed model and to compare it with other state-of-the-art methods. Experimental results show that the proposed CNN+LSTM-based hybrid model achieves higher accuracy (98%) with better robustness than existing methods.
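    A minimal PyTorch sketch of the CNN+LSTM pipeline described above, assuming sequences of 64x64 CWT scalogram images per bearing; the layer sizes and sequence length are illustrative assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    class CNNLSTMRul(nn.Module):
        """Frame-wise CNN features over a sequence of CWT images, then an LSTM
        that regresses the remaining useful life from the last time step."""

        def __init__(self, hidden=64):
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> 32 features per frame
            )
            self.lstm = nn.LSTM(32, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)                    # RUL regression output

        def forward(self, x):                 # x: (batch, time, 1, H, W)
            b, t = x.shape[:2]
            feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)    # CNN applied frame-wise
            out, _ = self.lstm(feats)
            return self.head(out[:, -1])                        # predict from last step

    model = CNNLSTMRul()
    scalograms = torch.randn(4, 20, 1, 64, 64)   # 4 bearings, 20 CWT frames each
    print(model(scalograms).shape)               # torch.Size([4, 1])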
  • Selective Weighting and Prediction Error Expansion for High-Fidelity Images

    Uyyala R., Bojjagani S., Sharma N.K., Chithaluru P., Akuri S.R.C.M.

    Article, SN Computer Science, 2024, DOI Link

    View abstract ⏷

    Reversible data hiding (RDH) based on prediction error expansion (PEE) needs a reliable predictor to forecast each pixel. The hidden information is inserted into the original cover-image pixels using the prediction error (PE). A number of algorithms are available in the literature to improve the accuracy of pixel prediction for cover images, and several researchers have suggested prediction methods based on different gradient estimations. This article presents further research on gradient-based pixel prediction. To improve the gradient estimates, we examine a number of local contexts surrounding the current pixel. Experiments have been conducted to evaluate the effect of different neighbourhood sizes on gradient estimation. Additionally, we investigate two methods for choosing prediction paths according to gradient magnitudes. To embed the data into the original pixels, a new embedding technique based on prediction error expansion is proposed. In the context of reversible data concealment, experimental results point towards better gradient-based prediction when employing the proposed prediction-error embedding technique.
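    A minimal NumPy sketch of classical prediction-error expansion for a single bit, assuming a simple left/top neighbour-average predictor; the predictor and embedding rule follow the textbook PEE formulation, not the paper's gradient-weighted scheme.

    import numpy as np

    def predict(img, i, j):
        """Simple causal predictor: average of the left and top neighbours."""
        return (int(img[i, j - 1]) + int(img[i - 1, j])) // 2

    def pee_embed_bit(img, i, j, bit):
        """Classic prediction-error expansion: e' = 2*e + bit."""
        p = predict(img, i, j)
        e = int(img[i, j]) - p
        img[i, j] = p + 2 * e + bit
        return img

    def pee_extract_bit(img, i, j):
        """Recover the bit and restore the original pixel value."""
        p = predict(img, i, j)
        e2 = int(img[i, j]) - p
        bit, e = e2 & 1, e2 >> 1
        img[i, j] = p + e
        return bit

    cover = np.array([[50, 52, 51],
                      [49, 53, 52],
                      [51, 50, 52]], dtype=np.int32)
    stego = pee_embed_bit(cover.copy(), 1, 1, 1)
    recovered = stego.copy()
    print(pee_extract_bit(recovered, 1, 1), np.array_equal(recovered, cover))  # 1 True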
  • A Novel Energy Efficient Multi-Dimensional Virtual Machines Allocation and Migration at the Cloud Data Center

    Sharma N.K., Bojjagani S., Reddy Y.C.A.P., Vivekanandan M., Srinivasan J., Maurya A.K.

    Article, IEEE Access, 2023, DOI Link

    View abstract ⏷

    Due to the rapid utilization of cloud services, the energy consumption of cloud data centres is increasing dramatically. These cloud services are provided by Virtual Machines (VMs) through the cloud data center. Therefore, energy-aware VM allocation and migration are essential tasks in the cloud environment. This paper proposes a Branch-and-Price based energy-efficient VM allocation algorithm and a Multi-Dimensional Virtual Machine Migration (MDVMM) algorithm for the cloud data center. The Branch-and-Price based VM allocation algorithm reduces energy consumption and resource wastage by selecting the optimal number of energy-efficient PMs at the cloud data center. The proposed MDVMM algorithm saves energy and avoids Service Level Agreement (SLA) violations by performing an optimal number of VM migrations. The experimental results demonstrate that the proposed Branch-and-Price based VM allocation with VM migration saves more than 31% energy consumption and improves average resource utilization by 21.7% over existing state-of-the-art techniques, with a 95% confidence interval. The proposed approaches also outperform existing state-of-the-art VM allocation and migration algorithms in terms of SLA violation, VM migrations, and the combined Energy SLA Violation (ESV) metric.
  • The use of IoT-based wearable devices to ensure secure lightweight payments in FinTech applications

    Bojjagani S., Seelam N.R., Sharma N.K., Uyyala R., Akuri S.R.C.M., Maurya A.K.

    Article, Journal of King Saud University - Computer and Information Sciences, 2023, DOI Link

    View abstract ⏷

    Daily digital payments in Financial Technology (FinTech) are growing exponentially. There is a huge demand for developing secure, lightweight cryptographic protocols for wearable IoT-based devices. The devices hold consumer information and perform transaction functions in a secure environment, providing authentication and confidentiality using contactless Near-Field Communication (NFC) or Bluetooth technologies. On the other hand, security breaches have been observed in various dimensions, especially in wearable payment technologies. In this paper, we develop a threat model for the proposed framework and show how to mitigate these attacks. This study adopts three-factor authentication, as biometrics is one of the most vital user authentication mechanisms. The scheme uses the “Elliptic Curve Integrated Encryption Scheme (ECIES)”, the “Elliptic Curve Digital Signature Algorithm (ECDSA)” and the “Advanced Encryption Standard (AES)” to encrypt the messages between the entities and ensure higher security. The security analysis of the proposed scheme is demonstrated through the Real-or-Random (RoR) oracle model and Scyther's widely accepted model-checking tools. Finally, we present a comparative summary based on security features, communication cost, and computation overhead of existing methods, showing that the proposed framework is secure and efficient for all kinds of remote and proximity payments, such as mini, macro, and micro-payments, using wearable devices.
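    A sketch of a sign-then-encrypt flow for a wearable payment message, using the pyca/cryptography package. It covers only ECDSA (P-256) signing and AES-GCM encryption; the paper's full protocol additionally uses ECIES and a three-factor authentication step, and the message format and shared session key below are assumptions.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    device_key = ec.generate_private_key(ec.SECP256R1())      # wearable's signing key
    session_key = AESGCM.generate_key(bit_length=128)          # assumed shared session key

    message = b"PAY merchant=CAFE42 amount=3.50 nonce=000017"
    signature = device_key.sign(message, ec.ECDSA(hashes.SHA256()))

    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, message + signature, None)

    # Receiver side: decrypt, split, and verify the signature.
    plaintext = AESGCM(session_key).decrypt(nonce, ciphertext, None)
    recovered_msg, recovered_sig = plaintext[:len(message)], plaintext[len(message):]
    device_key.public_key().verify(recovered_sig, recovered_msg,
                                   ec.ECDSA(hashes.SHA256()))   # raises if tampered
    print("payment message authenticated")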
  • Crowd Management System Based on Hybrid Combination of LSTM and CNN

    Sharma N.K., Krishna G.V.A., Kumar V.B., Kumar T.S.P., Rakesh C.H., Reddy R.C.

    Conference paper, Lecture Notes in Networks and Systems, 2023, DOI Link

    View abstract ⏷

    Automatic recognition of violence and nonviolence activities in the crowd management system is a broad area of interest in today’s scenario. In this paper, we propose a hybrid combination of the Convolution Neural Networks (CNNs), and Long Short-Term Memory (LSTM) model to recognize violence/nonviolence activities in a crowded area. In the proposed approach a stream of video is applied to a pretrained Darknet-19 network, then a CNN with LSTM network is used to extract spatial and temporal features from the video. In the end, these spatial features are applied to a fully connected layer to identify the violence/nonviolence condition. The experimental results show that 98.1% accuracy was achieved in the case of video, and 97.8% accuracy was achieved in the case of the image frame by our proposed violence/nonviolence detection model.
  • Music Generation Using Deep Learning

    Vemula D.R., Tripathi S.K., Sharma N.K., Hussain M.M., Swamy U.R., Polavarapu B.L.

    Conference paper, Lecture Notes in Electrical Engineering, 2023, DOI Link

    View abstract ⏷

    In this paper, we explore the use of char-RNN, a special type of recurrent neural network (RNN), for generating music pieces and propose an approach to do so. First, we train a model using existing music data. The generative model mimics musical patterns in a way that humans enjoy; it does not replicate the training data but learns patterns in order to create new music. We generate reasonable-quality music that is melodious to hear. With tuning, the generated music can be beneficial for composers, film makers, and artists in their tasks, and it can also be sold by companies or individuals. In this paper, we focus on character-level ABC notation because it reliably represents music as a simple sequence of characters. We use a bidirectional long short-term memory (LSTM) network that takes music sequences as input, and observe that the proposed model achieves higher accuracy compared with other models.
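    A minimal character-level LSTM over ABC notation, sketching the next-character modelling idea above. The tiny corpus, a unidirectional (rather than bidirectional) LSTM, and the layer sizes are illustrative assumptions, not the paper's configuration.

    import torch
    import torch.nn as nn

    text = "X:1\nT:Toy Tune\nK:C\nCDEF GABc|cBAG FEDC|\n" * 50   # tiny ABC corpus (assumed)
    chars = sorted(set(text))
    stoi = {c: i for i, c in enumerate(chars)}

    class CharLSTM(nn.Module):
        """Predict the next ABC character from the previous ones."""
        def __init__(self, vocab, hidden=128):
            super().__init__()
            self.emb = nn.Embedding(vocab, 32)
            self.lstm = nn.LSTM(32, hidden, batch_first=True)
            self.out = nn.Linear(hidden, vocab)

        def forward(self, x, state=None):
            h, state = self.lstm(self.emb(x), state)
            return self.out(h), state

    model = CharLSTM(len(chars))
    opt = torch.optim.Adam(model.parameters(), lr=3e-3)
    data = torch.tensor([stoi[c] for c in text])
    x, y = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)       # teacher forcing

    for step in range(50):                                     # brief training loop
        logits, _ = model(x)
        loss = nn.functional.cross_entropy(logits.transpose(1, 2), y)
        opt.zero_grad(); loss.backward(); opt.step()

    # Sample a short continuation character by character.
    idx, state, out = torch.tensor([[stoi["C"]]]), None, "C"
    for _ in range(40):
        logits, state = model(idx, state)
        idx = torch.multinomial(logits[0, -1].softmax(-1), 1).view(1, 1)
        out += chars[idx.item()]
    print(out)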
  • Software Fault Prediction Using Deep Neural Networks

    Mohana Ramya Y., Deepthi K., Vamsai A., Juhi Sai A., Sharma N., Ramachandra Reddy B.

    Conference paper, Lecture Notes in Electrical Engineering, 2023, DOI Link

    View abstract ⏷

    Software fault prediction is the process of building models that software engineers can use to detect faulty constructs early in the software development life cycle. Faults are a major source of time consumption and cost over the life cycle of applications. Early fault prediction increases system consistency and reliability and decreases the expense of software development. Machine learning techniques are valuable in detecting software bugs, and various such techniques exist for finding bugs, ambiguities, and faulty software. In this paper, we conduct an exploratory study to assess the performance of popular techniques including logistic regression, decision tree, random forest, SVM, and DNN. Our experiments are performed on various datasets (jedit, Tomcat, Tomcat-1, Xalan, Xerces, and prop-6). The experimental results show that DNN produces the best accuracy among all the techniques considered.
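    A short scikit-learn sketch comparing the same classifier families named above. Because the jedit/Tomcat/Xalan/Xerces/prop-6 datasets are not bundled here, a synthetic imbalanced dataset stands in for the software-metrics data.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic stand-in for a software-metrics dataset (assumed shape/imbalance).
    X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {
        "logistic regression": LogisticRegression(max_iter=1000),
        "decision tree": DecisionTreeClassifier(random_state=0),
        "random forest": RandomForestClassifier(random_state=0),
        "SVM": SVC(),
        "DNN (MLP)": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
    }
    for name, clf in models.items():
        print(f"{name:20s} accuracy = {clf.fit(X_tr, y_tr).score(X_te, y_te):.3f}")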
  • Output Power Prediction of Solar Photovoltaic Panel Using Machine Learning Approach

    Tripathi A.K., Sharma N.K., Pavan J., Bojjagania S.

    Article, International Journal of Electrical and Electronics Research, 2022, DOI Link

    View abstract ⏷

    Solar power-based photovoltaic energy conversion could be considered one of the best sustainable sources of electric power generation. Thus, prediction of the output power of the photovoltaic panel becomes necessary for its efficient utilization. The main aim of this paper is to predict the output power of solar photovoltaic panels using different machine learning algorithms based on various input parameters such as ambient temperature, solar radiation, panel surface temperature, relative humidity, and time of day. Three different machine learning algorithms, namely multiple regression, support vector machine regression, and Gaussian regression, were considered for the prediction of output power and compared on the basis of the results obtained. The outcomes of this study showed that the multiple linear regression algorithm provides better performance, with a mean absolute error, mean squared error, coefficient of determination, and accuracy of 0.04505, 0.00431, 0.9981, and 0.99997 respectively, whereas support vector machine regression had the worst prediction performance. Moreover, the predicted responses are in good agreement with the actual values, indicating that the proposed machine learning algorithms are quite appropriate for predicting the output power of solar photovoltaic panels under different environmental conditions.
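    A scikit-learn sketch comparing the three regressors named above. The measured PV dataset is not available here, so a synthetic stand-in relation between the listed input parameters and output power is assumed.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVR

    rng = np.random.default_rng(1)
    n = 500
    # Synthetic stand-in features: irradiance, ambient temp, panel temp, humidity, hour.
    X = np.column_stack([rng.uniform(0, 1000, n), rng.uniform(15, 40, n),
                         rng.uniform(20, 60, n), rng.uniform(10, 90, n),
                         rng.uniform(6, 18, n)])
    power = 0.2 * X[:, 0] - 0.5 * (X[:, 2] - 25) + rng.normal(0, 5, n)  # assumed relation

    X_tr, X_te, y_tr, y_te = train_test_split(X, power, test_size=0.3, random_state=1)
    for name, reg in [("multiple linear regression", LinearRegression()),
                      ("SVM regression", SVR()),
                      ("Gaussian process regression", GaussianProcessRegressor())]:
        pred = reg.fit(X_tr, y_tr).predict(X_te)
        print(name, round(mean_absolute_error(y_te, pred), 3),
              round(mean_squared_error(y_te, pred), 3), round(r2_score(y_te, pred), 3))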
  • A Novel Heart Disease Prediction Approach Using the Hybrid Combination of GA, PSO, and CNN

    Sharma N.K., Ramchandra Reddy B., Monika Chowdary M., Rani Durga Prasanna Swetha Y., Rishitha Varma B., Bharat C.

    Conference paper, Lecture Notes in Networks and Systems, 2022, DOI Link

    View abstract ⏷

    Heart disease is one of the foremost health problems nowadays and one of the deadliest human diseases around the world. It has been the main reason for an enormous number of deaths over the previous few decades. Therefore, it needs to be diagnosed in time to avoid serious risks. In this paper, we propose a hybrid approach to heart disease prediction using a given set of feature vectors. Furthermore, a comparison of several classifiers for the prediction of heart disease with a minimum number of feature vectors is carried out. We use two different optimization algorithms, the genetic algorithm (GA) and particle swarm optimization (PSO), for feature selection, and a convolution neural network (CNN) for classification. The hybrid of GA and CNN is known as the genetic neural network (GCNN), and the hybrid of PSO and CNN as the particle neural network (PCNN). The experimental results show that the accuracy obtained by PCNN is approximately 82% and by GCNN is 75.51%.
  • A machine learning approach to software model refactoring

    Sidhu B.K., Singh K., Sharma N.

    Article, International Journal of Computers and Applications, 2022, DOI Link

    View abstract ⏷

    Good software quality is a consequence of good design. Model refactoring counteracts erosion of the software design at an early stage in the software development project, complying with the model-driven engineering paradigm. Traditional model refactoring approaches work at the surface level by using threshold values of model metrics as indicators of suboptimal design and carry out localized corrections. This paper proposes that identifying design flaws at a higher level of granularity will save developers from the vicious cycle of small refactoring operations and their cascaded side effects. The notion of functional decomposition, an anomalous design tendency and a dominant cause of design smells in object-oriented software, is introduced. It is suggested that refactoring operations targeted at signs of functional decomposition instead of atomic smells achieve substantial improvement in design within a concise quality assurance procedure. The idea is realized using a deep neural network that learns to recognize the presence of functional decomposition in UML models of object-oriented software. The presented approach uses data science methods to gain insight into multidimensional software design features and uses the experience gained to generalize subtle relationships among architectural components.
  • BAT algorithm based feature selection: Application in credit scoring

    Tripathi D., Ramachandra Reddy B., Padmanabha Reddy Y.C.A., Shukla A.K., Kumar R.K., Sharma N.K.

    Article, Journal of Intelligent and Fuzzy Systems, 2021, DOI Link

    View abstract ⏷

    Credit scoring plays a vital role for financial institutions in estimating the risk associated with a credit applicant applying for a credit product. It is estimated based on the applicant's credentials and directly affects the viability of issuing institutions. However, there may be a large number of irrelevant features in a credit scoring dataset. Due to irrelevant features, credit scoring models may suffer poorer classification performance and higher complexity, so removing redundant and irrelevant features may overcome the problems caused by a large number of features. In this work, we emphasize the role of feature selection in enhancing the predictive performance of a credit scoring model. For feature selection, the binary BAT optimization technique is utilized with a novel fitness function. Further, the proposed approach is combined with a 'Radial Basis Function Neural Network (RBFN)', 'Support Vector Machine (SVM)' and 'Random Forest (RF)' for classification. The proposed approach is validated on four benchmark credit scoring datasets obtained from the UCI repository. Further, a comprehensive analysis of the results shows the comparative performance of the classification tasks with features selected by various approaches and by other state-of-the-art approaches for credit scoring.
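    A simplified binary BAT feature-selection sketch in the spirit of the paper above, assuming a sigmoid transfer function on velocities, a random-forest wrapper fitness with a small subset-size penalty, and synthetic data in place of the UCI credit datasets; all of these choices are illustrative assumptions, not the paper's fitness function or parameters.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=400, n_features=30, n_informative=8, random_state=0)

    def fitness(mask):
        """Assumed fitness: cross-validated accuracy, lightly penalising large subsets."""
        if not mask.any():
            return 0.0
        acc = cross_val_score(RandomForestClassifier(n_estimators=50, random_state=0),
                              X[:, mask], y, cv=3).mean()
        return acc - 0.01 * mask.sum() / mask.size

    rng = np.random.default_rng(0)
    n_bats, dim, f_max = 8, X.shape[1], 2.0
    pos = rng.random((n_bats, dim)) < 0.5          # binary positions = feature masks
    vel = np.zeros((n_bats, dim))
    fit = np.array([fitness(p) for p in pos])
    best = pos[fit.argmax()].copy()

    for _ in range(10):                             # a few BAT iterations
        freq = f_max * rng.random((n_bats, 1))      # random pulse frequencies
        vel += (pos.astype(float) - best) * freq
        prob = 1 / (1 + np.exp(-vel))               # sigmoid transfer to binary space
        new_pos = rng.random((n_bats, dim)) < prob
        for i, cand in enumerate(new_pos):
            f = fitness(cand)
            if f > fit[i]:                          # greedy acceptance (loudness omitted)
                pos[i], fit[i] = cand, f
        best = pos[fit.argmax()].copy()

    print("selected features:", np.flatnonzero(best), "fitness:", round(fit.max(), 3))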
  • Multi-Objective Energy Efficient Virtual Machines Allocation at the Cloud Data Center

    Sharma N.K., Reddy G.R.M.

    Article, IEEE Transactions on Services Computing, 2019, DOI Link

    View abstract ⏷

    Due to the growing demand for cloud services, allocation of energy-efficient resources (CPU, memory, storage, etc.) and resource utilization are major challenging issues for a large cloud data center. In this paper, we propose a Euclidean distance-based multi-objective resource allocation in the form of virtual machines (VMs) and design a VM migration policy for the data center. The allocation of VMs to Physical Machines (PMs) is carried out by our proposed hybrid approach of the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), referred to as HGAPSO. The proposed HGAPSO-based resource allocation and VM migration not only save energy consumption and minimize resource wastage but also avoid SLA violations at the cloud data center. To check the performance of the proposed HGAPSO algorithm and VM migration technique in terms of energy consumption, resource utilization, and SLA violation, we performed an extensive set of experiments in both heterogeneous and homogeneous data center environments, and compared the proposed work with a branch-and-bound based exact algorithm. The experimental results show the superiority of the HGAPSO and VM migration technique over the exact algorithm in terms of energy efficiency, optimal resource utilization, and SLA violation.
  • Hetero-material CPTFET with high-frequency and linearity analysis for ultra-low power applications

    Yadav D.S., Sharma D., Tirkey S., Sharma D.G., Bajpai S., Soni D., Yadav S., Aslam Md., Sharma N.

    Article, Micro and Nano Letters, 2018, DOI Link

    View abstract ⏷

    In this work, the authors have focused on increasing the current driving capability, speed of operation, suppression of parasitic capacitance and ambipolarity of the charge plasma tunnel field effect transistor (CPTFET). Gate dielectric and hetero-material engineering are employed in the CPTFET to obtain better drain current. Introduction of high-k dielectric increases the injection of charge carriers in the intrinsic body while a low-energy bandgap III–V material reduces the tunnelling width leading to the increased rate of band-to-band tunnelling of electrons and thus, enhancing the ON-state current of the device. Hence, the proposed device shows superior performance when operated in regime of DC and high frequency. For reducing the ambipolar conduction in the device, a widely used concept of underlapping of gate electrode is employed which reduces the leakage current in the device. Further, to determine the reliability of the device at high frequency, an analysis of linearity parameters is carried out. The proposed device is highly reliable to function at high-frequency regime. Therefore, the overall introduction of gate dielectric engineering, hetero-material engineering and underlapping of gate electrode improves the performance and characteristics of CPTFET.
  • Memory-based load balancing algorithm in structured peer-to-peer system

    Raghu G., Sharma N.K., Domanal S.G., Ram Mohana Reddy G.

    Conference paper, Advances in Intelligent Systems and Computing, 2018, DOI Link

    View abstract ⏷

    Several load balancing techniques are popularly used in Structured Peer-to-Peer (SPTP) systems to distribute the load among the systems. Most of these protocols concentrate on load sharing in SPTP systems, which leads to performance degradation in terms of processing delay and processing time due to poor resource utilization. The proposed work is a sender-initiated, memory-based load balancing algorithm. To check its performance, experiments were carried out in a real-time environment with different types of network topologies in a distributed setting. The proposed work performs better than existing load balancing algorithms such as Earliest Completion Load Balancing (ECLB) and First Come First Serve (FCFS) in terms of processing delay and execution time.
  • Energy efficient quality of service aware virtual machine migration in cloud computing

    Sharma N.K., Sharma P., Guddeti R.M.R.

    Conference paper, Proceedings of the 4th IEEE International Conference on Recent Advances in Information Technology, RAIT 2018, 2018, DOI Link

    View abstract ⏷

    This paper deals with multi-objective (network-aware, energy-efficient, and Service Level Agreement (SLA) aware) Virtual Machine (VM) migration at the cloud data center. The proposed VM migration technique migrates VMs from underutilized PMs to energy-efficient Physical Machines (PMs) at the cloud data center. Further, the multi-objective VM migration technique not only reduces the power consumption of PMs and switches but also guarantees quality of service by maintaining the SLA at the cloud data center. Our proposed VM migration approach finds a good balance between the three conflicting objectives compared to other algorithms. CloudSim-based experimental results demonstrate the superiority of the proposed multi-objective VM migration technique in terms of energy efficiency and reduced SLA violation over state-of-the-art VM migration techniques such as Interquartile Range (IQR) and random VM migration at the cloud data center.
  • A novel hetero-material gate-underlap electrically doped TFET for improving DC/RF and ambipolar behaviour

    Yadav S., Sharma D., Chandan B.V., Aslam M., Soni D., Sharma N.

    Article, Superlattices and Microstructures, 2018, DOI Link

    View abstract ⏷

    In this article, the impact of gate-underlap with hetero material (low band gap) has been investigated in terms of DC and Analog/RF parameters by proposed device named as hetero material gate-underlap electrically doped TFET (HM-GUL-ED-TFET). Gate-underlap resolves the problem of ambipolarity, gate leakage current (Ig) and slightly improves the gate to drain capacitance, but DC performance is almost unaffected. Further, the use of low band gap material (Si0.5Ge) in proposed device causes a drastic improvement in the DC as well as RF figures of merit. We have investigated the Si0.5Ge as a suitable candidate among different low band gap materials. In addition, the sensitivity of gate-underlap in terms of gate to drain inversion and parasitic capacitances has been studied for HM-GUL-ED-TFET. Further, relatively it is observed that gate-underlap is a better way than drain-underlap in the proposed structure to improve Analog/RF performances without degrading the DC parameters of device. Additionally, hetero-junction alignment analysis has been done for fabrication feasibility.
  • A new structure of electrically doped TFET for improving electronic characteristics

    Yadav S., Madhukar R., Sharma D., Aslam M., Soni D., Sharma N.

    Article, Applied Physics A: Materials Science and Processing, 2018, DOI Link

    View abstract ⏷

    This article puts forward a novel device structure of an electrically doped tunnel field effect transistor to improve DC and RF performance with suppressed ambipolarity and gate leakage. For suppressing gate leakage and ambipolarity, gate underlapping is presented, which does not significantly affect the Analog/RF parameters of the device. Further, for improving the device performance, a novel initiative of implanting a T-shaped metal layer under the gate electrode at the source/channel interface with a high-k dielectric material is investigated in the proposed structure. In addition, optimization of gate and electrical drain underlapping is investigated in a comparative manner for the proposed structure.
  • GWOTS: Grey Wolf Optimization Based Task Scheduling at the Green Cloud Data Center

    Natesha B.V., Kumar Sharma N., Domanal S., Reddy Guddeti R.M.

    Conference paper, Proceedings - 2018 14th International Conference on Semantics, Knowledge and Grids, SKG 2018, 2018, DOI Link

    View abstract ⏷

    Task scheduling is a key challenging issue of Infrastructure as a Service (IaaS) based cloud data centers and is a well-known NP-complete problem. As the number of user requests increases, the load on the cloud data center also increases. To manage the heavy load on the cloud data center, this paper proposes a multi-objective Grey Wolf Optimization (GWO) technique for task scheduling. The main objective of the proposed GWO-based scheduling algorithm is to achieve optimum utilization of cloud resources, reducing both the energy consumption of the data center and the total makespan of the scheduler for a given list of tasks while providing the services requested by users. The proposed scheduling algorithm is compared with non-meta-heuristic algorithms (First-Come-First-Serve (FCFS) and Modified Throttle (MT)) and meta-heuristic algorithms (Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Cat Swarm Optimization (CSO)). Experimental results demonstrate that the proposed GWO-based scheduler outperforms all algorithms considered for performance evaluation in terms of makespan for the list of tasks, resource utilization, and energy consumption.
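    A compact Grey Wolf Optimization sketch for task-to-VM scheduling with a makespan objective, in the spirit of the paper above. The task lengths, VM speeds, continuous-to-discrete decoding, and single-objective fitness are illustrative assumptions, not the paper's multi-objective formulation.

    import numpy as np

    rng = np.random.default_rng(0)
    n_tasks, n_vms = 40, 6
    task_len = rng.integers(100, 1000, n_tasks)        # task lengths (MI), assumed
    vm_mips = rng.integers(250, 1000, n_vms)           # VM speeds, assumed

    def makespan(position):
        """Decode a continuous wolf position into a task->VM mapping and
        return the resulting makespan (finish time of the busiest VM)."""
        assign = np.clip(position, 0, n_vms - 1e-9).astype(int)
        finish = np.zeros(n_vms)
        for t, v in enumerate(assign):
            finish[v] += task_len[t] / vm_mips[v]
        return finish.max()

    wolves = rng.uniform(0, n_vms, (20, n_tasks))       # pack of candidate schedules
    for it in range(100):
        order = np.argsort([makespan(w) for w in wolves])
        alpha, beta, delta = wolves[order[:3]]           # three best wolves lead the pack
        a = 2 - 2 * it / 100                             # exploration factor decays to 0
        for i in range(len(wolves)):
            moves = []
            for leader in (alpha, beta, delta):
                A = a * (2 * rng.random(n_tasks) - 1)
                C = 2 * rng.random(n_tasks)
                moves.append(leader - A * np.abs(C * leader - wolves[i]))
            wolves[i] = np.clip(np.mean(moves, axis=0), 0, n_vms)

    best = min(wolves, key=makespan)
    print("best makespan:", round(makespan(best), 2))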
  • Study of metal strip insertion and its optimization in doping less TFET

    Yadav D.S., Verma A., Sharma D., Sharma N.

    Article, Superlattices and Microstructures, 2018, DOI Link

    View abstract ⏷

    In this manuscript, a novel structure for dopingless tunnel field effect transistor (DL TFET) is introduced which comprises of metal strip (MS) in the oxide region near source/channel junction. Gate and drain work function engineering with hetero gate dielectric is used in this device which enhances the ON-state current, reduces the ambipolarity and also improves the RF performance. The increment in ON-state current with high subthreshold swing is achieved by the effect of MS. Steeper tunneling junction is achieved with the help of aforementioned modifications at the S/C junction. In this way, we increase the rate of tunneling at this interface and because of this, the threshold voltage of the proposed device reduces drastically. Further, study of ambipolarity suppression is done by using underlap of gate electrode near drain/channel (D/C) interface. Furthermore, temperature variation effect also incorporated in this manuscript, where study related to the threshold voltage and ON-state current is analysed by TCAD simulation. Moreover, most optimized MS length and work function are concluded in this paper for all simulations. Optimization for MS length and workfunction is analysed using TCAD simulation tool and shown in tabular form with one table showing effect of different work functions of MS on threshold voltage, ON-state current and SS, whereas another table shows the effect of MS length variations on RF parameters.
  • Multi-Objective Resources Allocation Using Improved Genetic Algorithm at Cloud Data Center

    Sharma N.K., Guddeti R.M.R.

    Conference paper, Proceedings - 2016 IEEE International Conference on Cloud Computing in Emerging Markets, CCEM 2016, 2017, DOI Link

    View abstract ⏷

    In this paper, a novel Improved Genetic Algorithm (IGA) is proposed to determine a near-optimal solution for multi-objective resource allocation at the green cloud data center of a smart grid. Instead of randomly generating the initial chromosomes for the crossover and mutation operations, the Modified First Decreasing (MFD) technique generates a better initial population. The proposed work saves energy consumption, minimizes resource wastage, and reduces the algorithm's computation time at the cloud data center. CloudSim-based experimental results show that the proposed approach improves the performance of the data center in terms of energy efficiency and average resource utilization when compared to state-of-the-art VM allocation approaches, i.e., First Fit, Modified First Decreasing (MFD), and the Grouping Genetic Algorithm (GGA).
  • On demand Virtual Machine allocation and migration at cloud data center using Hybrid of Cat Swarm Optimization and Genetic Algorithm

    Sharma N.K., Guddeti R.M.R.

    Conference paper, Proceedings on 5th International Conference on Eco-Friendly Computing and Communication Systems, ICECCS 2016, 2017, DOI Link

    View abstract ⏷

    This paper deals with energy saving at the data center using energy-aware Virtual Machine (VM) allocation and migration. The multi-objective VM allocation using the Hybrid Genetic Cat Swarm Optimization (HGACSO) algorithm saves energy consumption and also reduces resource wastage. Further, by consolidating VMs onto a minimal number of Physical Machines (PMs) using energy-efficient VM migration, idle PMs can be shut down to enhance the energy efficiency of a cloud data center. The experimental results show that the proposed HGACSO VM allocation and energy-efficient VM migration techniques achieve energy efficiency and minimize resource wastage.
  • A novel approach for multi-dimensional variable sized virtual machine allocation and migration at cloud data center

    Sharma N.K., Reddy G.R.M.

    Conference paper, 2017 9th International Conference on Communication Systems and Networks, COMSNETS 2017, 2017, DOI Link

    View abstract ⏷

    In this paper, we propose a branch-and-bound based exact algorithm for allocating multi-dimensional variable sized VMs at the cloud data center. Further, an energy efficient VMs migration technique is proposed to reduce the energy consumption and thus avoids the Service Level Agreement (SLA) violation at the cloud data center.
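    A tiny branch-and-bound sketch of the allocation idea above, reduced to a single resource dimension: place VMs onto as few active PMs as possible (a proxy for energy), pruning branches that already open at least as many PMs as the best solution found. The demands and capacity are toy values, and the real problem is multi-dimensional and variable-sized.

    vms = [3, 5, 2, 6, 4]          # VM CPU demands (single dimension, toy values)
    pm_capacity = 8                # identical PMs, toy capacity
    best = {"count": len(vms) + 1, "plan": None}

    def branch(i, loads):
        """Assign VM i to an existing or new PM; prune branches that already
        use at least as many active PMs as the best complete solution found."""
        if len(loads) >= best["count"]:
            return                                        # bound: cannot improve
        if i == len(vms):
            best["count"], best["plan"] = len(loads), list(loads)
            return
        for j in range(len(loads)):                       # branch: place on an open PM
            if loads[j] + vms[i] <= pm_capacity:
                loads[j] += vms[i]
                branch(i + 1, loads)
                loads[j] -= vms[i]
        loads.append(vms[i])                              # branch: open a new PM
        branch(i + 1, loads)
        loads.pop()

    branch(0, [])
    print("minimum active PMs:", best["count"])           # fewer active PMs -> less energy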
  • Novel energy efficient virtual machine allocation at data center using Genetic algorithm

    Sharma N.K., Ram Mohana Reddy G.

    Conference paper, 2015 3rd International Conference on Signal Processing, Communication and Networking, ICSCN 2015, 2015, DOI Link

    View abstract ⏷

    Increased resources utilization from clients in a smart computing environment poses a greater challenge in allocating optimal energy efficient resources at the data center. Allocation of these optimal resources should be carried out in such a manner that we can save the energy of data center as well as avoiding the service level agreement (SLA) violation. This paper deals with the design of an energy efficient algorithm for optimized resources allocation at data center using combined approach of Dynamic Voltage Frequency Scaling (DVFS) and Genetic algorithm (GA). The performance of the proposed energy efficient algorithm is compared with DVFS. Experimental results demonstrate that the proposed energy efficient algorithm consumes 22.4% less energy over a specified workload with 0% SLA violation.
  • A novel energy efficient resource allocation using hybrid approach of genetic DVFS with bin packing

    Sharma N.K., Reddy G.R.M.

    Conference paper, Proceedings - 2015 5th International Conference on Communication Systems and Network Technologies, CSNT 2015, 2015, DOI Link

    View abstract ⏷

    Increased resources utilization from several clients in a smart computing environment poses a key challenge in allocating optimal energy efficient resources at the data center. Allocation of these optimal resources should be carried out in such a manner that we can reduce the energy consumption of the data center and also avoid the service level agreement (SLA) violation. This paper deals with the development of an energy efficient algorithm for optimal resources allocation at the data center using hybrid approach of the Dynamic Voltage Frequency Scaling (DVFS), Genetic algorithm (GA) and Bin Packing techniques. The performance of the proposed hybrid approach is compared with Genetic Algorithm, DVFS with Bin Packing, DVFS without Bin Packing techniques. Experimental results demonstrate that the proposed energy efficient algorithm consumes 22.4% less energy as compared to the DVFS with Bin Packing technique over a specified workload with 0% SLA violation.
  • Adoption of knowledge management practices in software engineering organizations: A survey of software engineers’ perceptions

    Sharma N., Singh K., Goyal D.P.

    Conference paper, Proceedings - 2012 2nd International Conference on Advanced Computing and Communication Technologies, ACCT 2012, 2012, DOI Link

    View abstract ⏷

    Most businesses rely on the fact that their employees possess relevant knowledge and that they can apply it to the task at hand. But a serious problem exists: this knowledge is not owned by the organisation as such; rather, it is owned and controlled by its employees. The success or failure of KM systems and the various KM practices claimed to be adopted by software engineering organisations can best be judged from the software engineers' point of view, as they are the first-hand users of the KM systems and technologies.
Scholars

Doctoral Scholars

  • Mr Chiranjeevi Koganti

Interests

  • Artificial Intelligence
  • Cloud Computing
  • Distributed Computing
  • Machine Learning

  • Deciphering the Role of Functional Ion Channels in Cancer Stem Cells (CSCs) and Their Therapeutic Implications

    Samanta K., Reddy G.S.V.S.R., Sharma N.K., Kar P.

    Review, International Journal of Molecular Sciences, 2025, DOI Link

    View abstract ⏷

    Despite advances in medicine, cancer remains one of the foremost global health concerns. Conventional treatments like surgery, radiotherapy, and chemotherapy have advanced with the emergence of targeted and immunotherapy approaches. However, therapeutic resistance and relapse remain major barriers to long-term success in cancer treatment, often driven by cancer stem cells (CSCs). These rare, resilient cells can survive therapy and drive tumour regrowth, urging deeper investigation into the mechanisms underlying their persistence. CSCs express ion channels typical of excitable tissues, which, beyond electrophysiology, critically regulate CSC fate. However, the underlying regulatory mechanisms of these channels in CSCs remain largely unexplored and poorly understood. Nevertheless, the therapeutic potential of targeting CSC ion channels is immense, as it offers a powerful strategy to disrupt vital signalling pathways involved in numerous pathological conditions. In this review, we explore the diverse repertoire of ion channels expressed in CSCs and highlight recent mechanistic insights into how these channels modulate CSC behaviours, dynamics, and functions. We present a concise overview of ion channel-mediated CSC regulation, emphasizing their potential as novel diagnostic markers and therapeutic targets, and identifying key areas for future research.
  • Dynamic Threshold-based DDoS Detection and Prevention for Network Function Virtualization (NFV) in Digital Twin Environment

    Bojjagani S., Surya Nagi Reddy N., Medasani S.S., Umar M., Reddy C.A., Sharma N.K.

    Book chapter, Blockchain and Digital Twin Enabled IoT Networks: Privacy and Security Perspectives, 2024, DOI Link

    View abstract ⏷

    Digital Twin (DT) technology is a digital illustration of a physical object or system; this technology has paid much attention to IoT, healthcare, automotive manufacturing, construction of buildings, and even cities. However, these applications may also have serious security pitfalls in DT deployment. Distributed Denial of Service (DDoS) attacks significantly threaten the availability and stability of computer networks and services. Detecting and mitigating these attacks on time is crucial for maintaining network security. This chapter aims to develop an algorithmic-based approach for detecting and preventing DDoS attacks in the initial stages of Network Function Virtualisation (NFV). The proposed model involves the Network traffic collected from a sender in various monitoring points within the network infrastructure. Then the traffic is analysed by extracting relevant information like from which source the traffic is coming, Transmission Control Protocol (TCP), three-way handshake details, packet size, and traffic volume. The developed model is deployed in real time to monitor incoming network traffic. It analyses the extracted features and compares them with the learned patterns to identify potential Distributed Denial of Service attacks (DDoS). Alerts and notifications are generated, and warning notifications will be given to the source node. Upon detection of a DDoS attack, appropriate mitigation strategies are implemented to protect the network infrastructure and services. These may include traffic filtering; and rate limiting to mitigate the attack’s impact and ensure critical resource availability. The performance metrics, such as detection accuracy, false positive rate, and response time, will be measured to assess the reliability and efficiency of the solution, by developing an algorithmic model that can effectively detect and mitigate Distributed Denial of Service attacks. This chapter aims to enhance network security and ensure the uninterrupted availability of online services for digital twin environments, even in the face of evolving and sophisticated cyber threats.
  • BCECBN: Blockchain-enabled P2P Secure File Sharing System Over Cloudlet Networks

    Sumanth P., Bojjagani S., Poojitha P., Bharani P., Krishna T.G., Sharma N.K.

    Book chapter, Blockchain and Digital Twin Enabled IoT Networks: Privacy and Security Perspectives, 2024, DOI Link

    View abstract ⏷

    Cloud computing is a relatively recent technological development that has steadily gained popularity over the last few years. The data sharing between legitimate entities in peer-to-peer (P2P) file systems in the cloud is the most challenging task. Nowadays, many individuals utilize document destruction programs. Sharing files created with these programs is one way for individuals to make money online. Some websites, like Chegg and Scribd, provide a forum for academics and freelancers to share their work with the public. To participate in these programs, the users first become members. And those who wish to access the files must still pay for the application rather than the original author. In this chapter, we developed a novel approach for a secure file-sharing system called blockchain-enabled and cloudlet-based networks (BCECBN). It integrates a blockchain for secure transactions and access to shared files. The proposed model is protected against various attacks performed by the adversary. In addition, it provides a solution for the above-discussed examples; users may easily share and trade data with one another and save and share data over the internet with no effort. In addition, this research will examine a cloud-based secure file-sharing system and exchange the data using digital media. Each user performs a transaction along the blockchain to get access to the files.
  • Federated Learning-based Big Data Analytics For The Education System

    Surapaneni P., Bojjagani S., Sharma N.K.

    Conference paper, Intelligent Computing and Emerging Communication Technologies, ICEC 2024, 2024, DOI Link

    View abstract ⏷

    This paper proposes a novel approach to enhancing education systems by integrating federated learning techniques with big data analytics. Traditional data analysis methods in educational settings often need help regarding data privacy, security, and scalability. Federated learning addresses these issues by enabling collaborative model training across distributed datasets without data centralization, thus preserving the privacy of sensitive information. By harnessing the vast amounts of educational data generated from various sources such as online learning platforms, student information systems, and academic applications, federated learning empowers educational institutions to derive valuable insights while respecting data privacy regulations. Leveraging the collective intelligence of decentralized data sources, federated learning algorithms facilitate the development of robust predictive models for student performance, personalized learning recommendations, and early intervention strategies. Moreover, federated learning enables continuous model improvement by aggregating local model updates from participating institutions, ensuring adaptability to evolving educational landscapes. This paper explores the technical foundations of federated learning, its application in education systems, and its potential benefits in improving learning outcomes and fostering data-driven decision-making in education. Through a comprehensive review of existing literature and case studies, this research aims to provide insights into the opportunities and challenges associated with implementing federated learning-based big data analytics in education systems, ultimately paving the way for a more efficient and personalized approach to education.
  • Reliable and privacy-preserving multi-instance iris verification using Paillier homomorphic encryption and one-digit checksum

    Morampudi M.K., Gonthina N., Bojjagani S., Sharma N.K., Veeraiah D.

    Article, Signal, Image and Video Processing, 2024, DOI Link

    View abstract ⏷

    The utilization of a biometric authentication system (BAS) for reliable automatic human recognition has increased exponentially in recent years over traditional authentication systems. Since the biometric traits are irrevocable, two important issues such as security and privacy still need to be addressed in BAS. Researchers explore homomorphic encryption (HE) to propose several privacy-preserving BAS. However, the correctness of the evaluated results computed by the cloud server on the protected templates is still an open research challenge. These methods are able to conserve the privacy of biometric templates but unable to check the correctness of computed result results in false reject or accept. To overcome this issue, we suggest a reliable and privacy-preserving verifiable multi-instance iris verification system using Paillier HE and one-digit checksum (PVMIAPO). Modified local random projection is implemented on the fused iris template to produce the reduced template. Later, Paillier HE is applied on the reduced template to create the protected template. The result returned by the third party server is verified using the one-digit checksum. The efficiency of PVMIAPO is verified by experimenting with it on SDUMLA-HMT, IITD, and CASIA-V3-Interval iris databases. PVMIAPO gratifies the irreversibility, diversity, and revocability properties. PVMIAPO also obtains fair performance in contrast to the existing methods.
  • Secure privacy-enhanced fast authentication and key management for IoMT-enabled smart healthcare systems

    Bojjagani S., Brabin D., Kumar K., Sharma N.K., Batta U.

    Article, Computing, 2024, DOI Link

    View abstract ⏷

    The smart healthcare system advancements have introduced the Internet of Things, enabling technologies to improve the quality of medical services. The main idea of these healthcare systems is to provide data security, interaction between entities, efficient data transfer, and sustainability. However, privacy concerning patient information is a fundamental problem in smart healthcare systems. Many authentications and critical management protocols exist in the literature for healthcare systems, but ensuring security still needs to be improved. Even if security is achieved, it still requires fast communication and computations. In this paper, we have introduced a new secure privacy-enhanced fast authentication key management scheme that effectively applies to lightweight resource-constrained devices in healthcare systems to overcome the issue. The proposed framework is applicable for quick authentication, efficient key management between the entities, and minimising computation and communication overheads. We verified our proposed framework with formal and informal verification using BAN logic, Scyther simulation, and the Drozer tool. The simulation and tool verification shows that the proposed system is free from well-known attacks, reducing communication and computation costs compared to the existing healthcare systems.
  • Mechanical element’s remaining useful life prediction using a hybrid approach of CNN and LSTM

    Sharma N.K., Bojjagani S.

    Article, Multimedia Tools and Applications, 2024, DOI Link

    View abstract ⏷

    For the safety and reliability of the system, Remaining Useful Life (RUL) prediction is considered in many industries. The traditional machine learning techniques must provide more feature representation and adaptive feature extraction. Deep learning techniques like Long Short-Term Memory (LSTM) achieved an excellent performance for RUL prediction. However, the LSTM network mainly relies on the past few data, which may only capture some contextual information. This paper proposes a hybrid combination of Convolution Neural Network (CNN) and LSTM (CNN+LSTM) to solve this problem. The proposed hybrid model predicts how long a machine can operate without breaking down. In the proposed work, 1D horizontal and vertical signals of the mechanical bearing are first converted to 2D images using Continuous Wavelet Transform (CWT). These 2D images are applied to CNN for key feature extraction. Ultimately, these key features are applied to the LSTM deep neural network for predicting the RUL of a mechanical bearing. A PRONOSTIA data is utilized to demonstrate the performance of the proposed model and compare the proposed model with other state-of-the-art methods. Experimental results show that our proposed CNN+LSTM-based hybrid model achieved higher accuracy (98%) with better robustness than existing methods.
  • Selective Weighting and Prediction Error Expansion for High-Fidelity Images

    Uyyala R., Bojjagani S., Sharma N.K., Chithaluru P., Akuri S.R.C.M.

    Article, SN Computer Science, 2024, DOI Link

    View abstract ⏷

    Reversible data hiding (RDH) based on prediction error expansion (PEE) needs a reliable predictor to forecast pixel values, and the hidden information is inserted into the original cover image pixels using the prediction error (PE). A number of algorithms are available in the literature to improve the accuracy of pixel prediction for cover images, and several researchers have proposed prediction methods based on different gradient estimations. This article presents further research on gradient-based pixel prediction. To improve the gradient estimates, we examine a number of local contexts surrounding the current pixel, and experiments are conducted to evaluate the effect of different neighborhood sizes on gradient estimation. Additionally, we investigate two methods for choosing prediction paths according to gradient magnitudes. To embed the data into the original pixels, a prediction-error-expansion-based embedding technique is proposed. In the context of reversible data hiding, the experimental results indicate improved gradient-based prediction when combined with the proposed embedding technique.
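
    For readers unfamiliar with prediction error expansion, the sketch below shows the basic embed/extract arithmetic on a single pixel. The predictor here is just a fixed value standing in for the authors' gradient-based predictor, the threshold T is assumed, and pixel-range overflow handling is omitted.

```python
# Self-contained prediction-error-expansion (PEE) sketch for one pixel.

def embed(pixel, predicted, bit, T=4):
    """Expand the prediction error to carry one bit when |error| < T, else shift."""
    e = pixel - predicted
    if abs(e) < T:
        return predicted + 2 * e + bit, True            # bit embedded
    return pixel + (T if e >= T else -T), False         # shifted, no payload

def extract(marked, predicted, T=4):
    """Recover the hidden bit (if any) and restore the original pixel exactly."""
    e2 = marked - predicted
    if abs(e2) < 2 * T:
        return predicted + (e2 // 2), e2 % 2            # original pixel, embedded bit
    return marked - (T if e2 >= 2 * T else -T), None    # just undo the shift

predicted = 120                                          # e.g. mean of causal neighbours
for pixel, bit in [(122, 1), (118, 0), (135, 1)]:
    marked, embedded = embed(pixel, predicted, bit)
    restored, recovered = extract(marked, predicted)
    assert restored == pixel and (recovered == bit if embedded else recovered is None)
    print(pixel, "->", marked, "->", restored, "bit:", recovered)
```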
  • A Novel Energy Efficient Multi-Dimensional Virtual Machines Allocation and Migration at the Cloud Data Center

    Sharma N.K., Bojjagani S., Reddy Y.C.A.P., Vivekanandan M., Srinivasan J., Maurya A.K.

    Article, IEEE Access, 2023, DOI Link

    View abstract ⏷

    Due to the rapid uptake of cloud services, the energy consumption of cloud data centers is increasing dramatically. These cloud services are provided by Virtual Machines (VMs) through the cloud data center; therefore, energy-aware VM allocation and migration are essential tasks in the cloud environment. This paper proposes a Branch-and-Price based energy-efficient VM allocation algorithm and a Multi-Dimensional Virtual Machine Migration (MDVMM) algorithm for the cloud data center. The Branch-and-Price based VM allocation algorithm reduces energy consumption and resource wastage by selecting the optimal number of energy-efficient Physical Machines (PMs) at the cloud data center. The proposed MDVMM algorithm saves energy and avoids Service Level Agreement (SLA) violations by performing an optimal number of VM migrations. The experimental results demonstrate that the proposed Branch-and-Price based VM allocation with VM migration saves more than 31% energy and improves average resource utilization by 21.7% over existing state-of-the-art techniques, with a 95% confidence interval. The proposed approaches also outperform existing state-of-the-art VM allocation and migration algorithms in terms of SLA violation, VM migrations, and the combined Energy SLA Violation (ESV) metric.
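
    To make the optimization flavour of this line of work concrete, the sketch below states a deliberately simplified, single-dimension (CPU-only) VM-to-PM assignment as an integer program and solves it with PuLP's default branch-and-bound (CBC) solver. It is not the paper's Branch-and-Price formulation; the linear power model, capacities, and demands are assumptions.

```python
# Energy-aware VM placement as a small integer program (illustrative, not the paper's model).
import pulp

vms = {"vm1": 2, "vm2": 3, "vm3": 1, "vm4": 4}        # CPU-core demand per VM
pms = {"pm1": 6, "pm2": 6, "pm3": 6}                  # CPU-core capacity per PM
p_idle, p_peak = 100.0, 250.0                         # Watts for an active PM (assumed)

x = pulp.LpVariable.dicts("assign", (vms, pms), cat="Binary")   # VM i placed on PM j
y = pulp.LpVariable.dicts("active", pms, cat="Binary")          # PM j powered on

prob = pulp.LpProblem("energy_aware_vm_allocation", pulp.LpMinimize)
# Linear power model: idle cost for each active PM plus a utilisation-proportional term.
prob += pulp.lpSum(
    p_idle * y[j]
    + (p_peak - p_idle) * pulp.lpSum(vms[i] * x[i][j] for i in vms) / pms[j]
    for j in pms
)
for i in vms:                                          # every VM placed exactly once
    prob += pulp.lpSum(x[i][j] for j in pms) == 1
for j in pms:                                          # respect capacity of active PMs only
    prob += pulp.lpSum(vms[i] * x[i][j] for i in vms) <= pms[j] * y[j]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for j in pms:
    placed = [i for i in vms if pulp.value(x[i][j]) > 0.5]
    print(j, "on" if pulp.value(y[j]) > 0.5 else "off", placed)
```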
  • The use of IoT-based wearable devices to ensure secure lightweight payments in FinTech applications

    Bojjagani S., Seelam N.R., Sharma N.K., Uyyala R., Akuri S.R.C.M., Maurya A.K.

    Article, Journal of King Saud University - Computer and Information Sciences, 2023, DOI Link

    View abstract ⏷

    Daily digital payments in Financial Technology (FinTech) are growing exponentially, creating huge demand for secure, lightweight cryptographic protocols for wearable IoT-based devices. These devices hold consumer information and carry out transaction functions in a secure environment, providing authentication and confidentiality using contactless Near-Field Communication (NFC) or Bluetooth technologies. At the same time, security breaches have been observed in various dimensions, especially in wearable payment technologies. In this paper, we develop a threat model for the proposed framework and show how to mitigate the identified attacks. The scheme adopts three-factor authentication, as biometrics is one of the most vital user authentication mechanisms, and uses the Elliptic Curve Integrated Encryption Scheme (ECIES), the Elliptic Curve Digital Signature Algorithm (ECDSA), and the Advanced Encryption Standard (AES) to encrypt the messages between entities and ensure higher security. The security analysis of the proposed scheme is demonstrated through the Real-or-Random (RoR) oracle model and the widely accepted Scyther model-checking tool. Finally, we present a comparative summary of security features, communication cost, and computation overhead against existing methods, showing that the proposed framework is secure and efficient for all kinds of remote and proximity payments, such as mini, macro, and micro payments, using wearable devices.
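
    The sketch below shows only one building block named in this abstract, ECDSA signing and verification of a payment message, using the Python `cryptography` package. The message fields, curve choice, and key handling are assumptions, and the rest of the protocol (ECIES session keys, AES payload encryption, biometric factors) is omitted.

```python
# ECDSA sign/verify of a payment payload (illustrative building block only).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Wearable device keypair (in practice provisioned into a secure element).
device_key = ec.generate_private_key(ec.SECP256R1())
device_pub = device_key.public_key()

payment = b"payee=merchant42;amount=12.50;currency=INR;nonce=8f3a"   # assumed fields
signature = device_key.sign(payment, ec.ECDSA(hashes.SHA256()))

# Payment gateway verifies the signature before authorising the transaction.
try:
    device_pub.verify(signature, payment, ec.ECDSA(hashes.SHA256()))
    print("signature valid -> authorise")
except InvalidSignature:
    print("signature invalid -> reject")
```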
  • Crowd Management System Based on Hybrid Combination of LSTM and CNN

    Sharma N.K., Krishna G.V.A., Kumar V.B., Kumar T.S.P., Rakesh C.H., Reddy R.C.

    Conference paper, Lecture Notes in Networks and Systems, 2023, DOI Link

    View abstract ⏷

    Automatic recognition of violent and nonviolent activities in crowd management systems is a broad area of interest today. In this paper, we propose a hybrid combination of Convolutional Neural Networks (CNNs) and a Long Short-Term Memory (LSTM) model to recognize violent/nonviolent activities in a crowded area. In the proposed approach, a video stream is fed to a pretrained Darknet-19 network, and a CNN with an LSTM network is then used to extract spatial and temporal features from the video. Finally, these features are passed to a fully connected layer to identify the violent/nonviolent condition. The experimental results show that the proposed violence/nonviolence detection model achieves 98.1% accuracy on video and 97.8% accuracy on individual image frames.
  • Music Generation Using Deep Learning

    Vemula D.R., Tripathi S.K., Sharma N.K., Hussain M.M., Swamy U.R., Polavarapu B.L.

    Conference paper, Lecture Notes in Electrical Engineering, 2023, DOI Link

    View abstract ⏷

    In this paper, we explore the use of char-RNN, a special type of recurrent neural network (RNN), for generating music pieces and propose an approach to do so. First, we train a model on existing music data. The trained model does not replicate the training data; it learns the underlying patterns and creates new music that humans can enjoy. The generated music is of reasonable quality and melodious to hear. With tuning, the generated music can assist composers, film makers, and artists in their tasks, and it can also be sold by companies or individuals. We focus on character-level ABC notation because it reliably represents music as a simple sequence of characters. We use a bidirectional Long Short-Term Memory (LSTM) network that takes music sequences as input and observe that the proposed model achieves higher accuracy compared with other models.
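
    As an illustration of the kind of model this abstract describes, the sketch below builds a character-level bidirectional LSTM over ABC-notation text in Keras; the vocabulary size, sequence length, and layer widths are assumptions rather than the authors' configuration.

```python
# Character-level BiLSTM over ABC-notation text (illustrative sizes only).
from tensorflow.keras import layers, models

vocab_size, seq_len = 80, 100          # distinct ABC characters, input window length

model = models.Sequential([
    layers.Input(shape=(seq_len,)),
    layers.Embedding(vocab_size, 64),                 # map character ids to vectors
    layers.Bidirectional(layers.LSTM(128)),           # read the window in both directions
    layers.Dense(vocab_size, activation="softmax"),   # distribution over the next character
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()

# Generation (sketch): repeatedly feed the last seq_len characters and sample the next
# character from the softmax output until a full tune is produced.
```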
  • Software Fault Prediction Using Deep Neural Networks

    Mohana Ramya Y., Deepthi K., Vamsai A., Juhi Sai A., Sharma N., Ramachandra Reddy B.

    Conference paper, Lecture Notes in Electrical Engineering, 2023, DOI Link

    View abstract ⏷

    Software failure prediction is the process of building models that software practitioners can use to detect faulty constructs early in the software development life cycle. Faults are a major source of time consumption and cost over the life cycle of applications. Early failure prediction increases system consistency and reliability and decreases the cost of software development. Machine learning techniques are valuable for detecting software bugs, and various techniques exist for finding bugs, ambiguities, and faulty software. In this paper, we conduct an exploratory study to assess the performance of popular techniques including logistic regression, decision tree, random forest, SVM, and a deep neural network (DNN). Our experiments are performed on several datasets (jedit, Tomcat, Tomcat-1, Xalan, Xerces, and prop-6). The experimental results show that the DNN produces the best accuracy among all the techniques considered.
  • Output Power Prediction of Solar Photovoltaic Panel Using Machine Learning Approach

    Tripathi A.K., Sharma N.K., Pavan J., Bojjagania S.

    Article, International Journal of Electrical and Electronics Research, 2022, DOI Link

    View abstract ⏷

    Solar power-based photovoltaic energy conversion can be considered one of the best sustainable sources of electric power generation; thus, predicting the output power of a photovoltaic panel is necessary for its efficient utilization. The main aim of this paper is to predict the output power of solar photovoltaic panels using different machine learning algorithms based on input parameters such as ambient temperature, solar radiation, panel surface temperature, relative humidity, and time of day. Three machine learning algorithms, namely multiple linear regression, support vector machine regression, and Gaussian regression, were considered for predicting the output power and compared on the basis of their results. The outcomes of this study show that the multiple linear regression algorithm provides the best performance, with a mean absolute error, mean squared error, coefficient of determination, and accuracy of 0.04505, 0.00431, 0.9981, and 0.99997 respectively, whereas support vector machine regression had the worst prediction performance. Moreover, the predicted responses are in close agreement with the actual values, indicating that the proposed machine learning algorithms are quite appropriate for predicting the output power of solar photovoltaic panels under different environmental conditions.
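
    The sketch below shows the general comparison workflow described above using scikit-learn, interpreting "Gaussian regression" as Gaussian process regression. The synthetic data is a placeholder for the real measurements (ambient temperature, solar radiation, panel temperature, relative humidity, time of day), and all hyperparameters are library defaults.

```python
# Compare three regressors on (placeholder) PV data, reporting MAE, MSE, and R^2.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 5))                                   # 5 environmental features
y = 3.0 * X[:, 1] + 0.5 * X[:, 0] + rng.normal(0, 0.05, 500)     # proxy for output power

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "multiple linear regression": LinearRegression(),
    "support vector regression": SVR(),
    "gaussian process regression": GaussianProcessRegressor(),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: MAE={mean_absolute_error(y_te, pred):.4f} "
          f"MSE={mean_squared_error(y_te, pred):.4f} R2={r2_score(y_te, pred):.4f}")
```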
  • A Novel Heart Disease Prediction Approach Using the Hybrid Combination of GA, PSO, and CNN

    Sharma N.K., Ramchandra Reddy B., Monika Chowdary M., Rani Durga Prasanna Swetha Y., Rishitha Varma B., Bharat C.

    Conference paper, Lecture Notes in Networks and Systems, 2022, DOI Link

    View abstract ⏷

    Heart disease is one of the foremost health problems today and one of the deadliest diseases worldwide, responsible for an enormous number of deaths over the previous few decades. There is therefore a need to diagnose it within a specific time to avoid undetected risks. In this paper, we propose a hybrid approach to heart disease prediction using a given range of feature vectors, and we compare several classifiers for predicting heart disease cases with a minimum number of feature vectors. We use two different optimization algorithms, the genetic algorithm (GA) and particle swarm optimization (PSO), for feature selection, and a convolutional neural network (CNN) for classification. The hybrid of GA and CNN is referred to as the genetic neural network (GCNN), and the hybrid of PSO and CNN as the particle neural network (PCNN). The experimental results show that the accuracy obtained by PCNN is approximately 82% and by GCNN is 75.51%.
  • A machine learning approach to software model refactoring

    Sidhu B.K., Singh K., Sharma N.

    Article, International Journal of Computers and Applications, 2022, DOI Link

    View abstract ⏷

    Good software quality is a consequence of good design. Model refactoring counteracts erosion of the software design at an early stage of the software development project, in keeping with the model-driven engineering paradigm. Traditional model refactoring approaches work at the surface level, using threshold values of model metrics as indicators of suboptimal design and carrying out localized corrections. This paper proposes that identifying design flaws at a higher level of granularity avoids the vicious cycle of small refactoring operations and their cascaded side-effects. The notion of functional decomposition, as an anomalous design tendency and a dominant cause of design smells in object-oriented software, is introduced. It is suggested that refactoring operations targeted at signs of functional decomposition, instead of atomic smells, achieve substantial improvement in design within a concise quality assurance procedure. The idea is realized using a deep neural network that learns to recognize the presence of functional decomposition in UML models of object-oriented software. The presented approach uses data science methods to gain insight into multidimensional software design features and uses the experience gained to generalize subtle relationships among architectural components.
  • BAT algorithm based feature selection: Application in credit scoring

    Tripathi D., Ramachandra Reddy B., Padmanabha Reddy Y.C.A., Shukla A.K., Kumar R.K., Sharma N.K.

    Article, Journal of Intelligent and Fuzzy Systems, 2021, DOI Link

    View abstract ⏷

    Credit scoring plays a vital role for financial institutions in estimating the risk associated with an applicant applying for a credit product. It is estimated from the applicant's credentials and directly affects the viability of the issuing institution. However, a credit scoring dataset may contain a large number of irrelevant features, which can lead to poorer classification performance and higher complexity; removing redundant and irrelevant features can overcome this problem. In this work, we emphasize the role of feature selection in enhancing the predictive performance of credit scoring models. For feature selection, the Binary BAT optimization technique is utilized with a novel fitness function. The proposed approach is then combined with a Radial Basis Function Neural Network (RBFN), a Support Vector Machine (SVM), and a Random Forest (RF) for classification, and is validated on four benchmark credit scoring datasets obtained from the UCI repository. A comprehensive analysis of the experimental results compares the classification performance obtained with features selected by various approaches and with other state-of-the-art approaches for credit scoring.
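
    The sketch below illustrates only the wrapper-style fitness evaluation that metaheuristic feature selection of this kind relies on: a binary mask selects features, a classifier is cross-validated on the selected subset, and the score is penalised by the number of features kept. The penalty weight, classifier, and dataset are assumptions (not the paper's novel fitness function), and the binary BAT search loop itself is omitted.

```python
# Wrapper fitness for metaheuristic feature selection; a binary BAT search would call
# this for every candidate mask. All choices below are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)     # stand-in for a UCI credit scoring dataset

def fitness(mask, alpha=0.98):
    """Higher is better: cross-validated accuracy, penalised by subset size."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                          X[:, mask.astype(bool)], y, cv=5).mean()
    return alpha * acc + (1 - alpha) * (1 - mask.sum() / mask.size)

rng = np.random.default_rng(0)
candidate = rng.integers(0, 2, size=X.shape[1])          # one candidate binary mask
print("selected features:", int(candidate.sum()), "fitness:", round(fitness(candidate), 4))
```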
  • Multi-Objective Energy Efficient Virtual Machines Allocation at the Cloud Data Center

    Sharma N.K., Reddy G.R.M.

    Article, IEEE Transactions on Services Computing, 2019, DOI Link

    View abstract ⏷

    Due to the growing demand for cloud services, the allocation of energy-efficient resources (CPU, memory, storage, etc.) and resource utilization are major challenges for a large cloud data center. In this paper, we propose a Euclidean distance based multi-objective resource allocation in the form of virtual machines (VMs) and design a VM migration policy for the data center. The allocation of VMs to Physical Machines (PMs) is carried out by our proposed hybrid of the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), referred to as HGAPSO. The proposed HGAPSO based resource allocation and VM migration not only save energy and minimize resource wastage but also avoid SLA violations at the cloud data center. To evaluate the proposed HGAPSO algorithm and VM migration technique in terms of energy consumption, resource utilization, and SLA violation, we performed extensive experiments in both heterogeneous and homogeneous data center environments and compared the proposed work with a branch-and-bound based exact algorithm. The experimental results show the superiority of HGAPSO with VM migration over the exact algorithm in terms of energy efficiency, optimal resource utilization, and SLA violation.
  • Hetero-material CPTFET with high-frequency and linearity analysis for ultra-low power applications

    Yadav D.S., Sharma D., Tirkey S., Sharma D.G., Bajpai S., Soni D., Yadav S., Aslam Md., Sharma N.

    Article, Micro and Nano Letters, 2018, DOI Link

    View abstract ⏷

    In this work, the authors focus on increasing the current driving capability and speed of operation, and on suppressing the parasitic capacitance and ambipolarity, of the charge plasma tunnel field effect transistor (CPTFET). Gate dielectric and hetero-material engineering are employed in the CPTFET to obtain better drain current. The introduction of a high-k dielectric increases the injection of charge carriers into the intrinsic body, while a low-energy-bandgap III–V material reduces the tunnelling width, increasing the rate of band-to-band tunnelling of electrons and thus enhancing the ON-state current of the device. Hence, the proposed device shows superior performance in both the DC and high-frequency regimes. To reduce ambipolar conduction, the widely used concept of gate-electrode underlapping is employed, which reduces the leakage current in the device. Further, to determine the reliability of the device at high frequency, an analysis of linearity parameters is carried out, showing that the proposed device is highly reliable for operation in the high-frequency regime. Therefore, the overall introduction of gate dielectric engineering, hetero-material engineering, and gate-electrode underlapping improves the performance and characteristics of the CPTFET.
  • Memory-based load balancing algorithm in structured peer-to-peer system

    Raghu G., Sharma N.K., Domanal S.G., Ram Mohana Reddy G.

    Conference paper, Advances in Intelligent Systems and Computing, 2018, DOI Link

    View abstract ⏷

    Several load balancing techniques are popularly used in Structured Peer-to-Peer (SPTP) systems to distribute load among the systems. Most of these protocols concentrate on load sharing, which leads to performance degradation in terms of processing delay and processing time due to poor resource utilization. The proposed work is a sender-initiated, memory-based load balancing algorithm. To evaluate the proposed algorithm, experiments were carried out in a real-time distributed environment with different types of network topologies. The proposed work performs better than existing load balancing algorithms such as Earliest Completion Load Balancing (ECLB) and First Come First Serve (FCFS) in terms of processing delay and execution time.
  • Energy efficient quality of service aware virtual machine migration in cloud computing

    Sharma N.K., Sharma P., Guddeti R.M.R.

    Conference paper, Proceedings of the 4th IEEE International Conference on Recent Advances in Information Technology, RAIT 2018, 2018, DOI Link

    View abstract ⏷

    This paper deals with multi-objective (network-aware, energy-efficient, and Service Level Agreement (SLA) aware) Virtual Machine (VM) migration at the cloud data center. The proposed VM migration technique migrates VMs from underutilized Physical Machines (PMs) to energy-efficient PMs at the cloud data center. This multi-objective VM migration technique not only reduces the power consumption of PMs and switches but also guarantees quality of service by maintaining the SLA, and it finds a good balance between the three conflicting objectives compared with other algorithms. CloudSim-based experimental results demonstrate the superiority of the proposed multi-objective VM migration technique, in terms of energy efficiency and reduced SLA violations, over state-of-the-art VM migration techniques such as Interquartile Range (IQR) and random VM migration at the cloud data center.
  • A novel hetero-material gate-underlap electrically doped TFET for improving DC/RF and ambipolar behaviour

    Yadav S., Sharma D., Chandan B.V., Aslam M., Soni D., Sharma N.

    Article, Superlattices and Microstructures, 2018, DOI Link

    View abstract ⏷

    In this article, the impact of gate-underlap with a hetero (low band gap) material is investigated in terms of DC and analog/RF parameters for a proposed device named the hetero-material gate-underlap electrically doped TFET (HM-GUL-ED-TFET). Gate-underlap resolves the problems of ambipolarity and gate leakage current (Ig) and slightly improves the gate-to-drain capacitance, but DC performance is almost unaffected. Further, the use of a low band gap material (Si0.5Ge) in the proposed device causes a drastic improvement in the DC as well as RF figures of merit; we investigate Si0.5Ge as a suitable candidate among different low band gap materials. In addition, the sensitivity of gate-underlap in terms of gate-to-drain inversion and parasitic capacitances is studied for the HM-GUL-ED-TFET. It is also observed that gate-underlap is a better way than drain-underlap to improve analog/RF performance in the proposed structure without degrading the DC parameters of the device. Additionally, hetero-junction alignment analysis is carried out for fabrication feasibility.
  • A new structure of electrically doped TFET for improving electronic characteristics

    Yadav S., Madhukar R., Sharma D., Aslam M., Soni D., Sharma N.

    Article, Applied Physics A: Materials Science and Processing, 2018, DOI Link

    View abstract ⏷

    This article puts forward a novel electrically doped tunnel field effect transistor structure to improve DC and RF performance with suppressed ambipolarity and gate leakage. To suppress gate leakage and ambipolarity, gate underlapping is introduced, which does not significantly affect the analog/RF parameters of the device. Further, to improve device performance, a novel approach of implanting a T-shaped metal layer under the gate electrode at the source/channel interface with a high-k dielectric material is investigated in the proposed structure. In addition, the optimization of gate and electrical drain underlapping is investigated in a comparative manner for the proposed structure.
  • GWOTS: Grey Wolf Optimization Based Task Scheduling at the Green Cloud Data Center

    Natesha B.V., Kumar Sharma N., Domanal S., Reddy Guddeti R.M.

    Conference paper, Proceedings - 2018 14th International Conference on Semantics, Knowledge and Grids, SKG 2018, 2018, DOI Link

    View abstract ⏷

    Task scheduling is a key challenge of Infrastructure as a Service (IaaS) based cloud data centers and is a well-known NP-complete problem. As the number of user requests increases, the load on the cloud data center also increases gradually. To manage heavy load on the cloud data center, this paper proposes a multi-objective Grey Wolf Optimization (GWO) technique for task scheduling. The main objective of the proposed GWO based scheduling algorithm is to achieve optimum utilization of cloud resources, reducing both the energy consumption of the data center and the total makespan of the scheduler for a given list of tasks while providing the services requested by the users. The proposed scheduling algorithm is compared with non-meta-heuristic algorithms (First-Come-First-Serve (FCFS) and Modified Throttle (MT)) and meta-heuristic algorithms (Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Cat Swarm Optimization (CSO)). Experimental results demonstrate that the proposed GWO based scheduler outperforms all the algorithms considered in terms of makespan, resource utilization, and energy consumption.
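
    The sketch below illustrates the standard Grey Wolf Optimization update applied to task scheduling: each wolf is a continuous position vector decoded into a task-to-VM assignment, and the fitness is the resulting makespan. Task lengths, VM speeds, the decoding rule, and population/iteration counts are assumptions, not the paper's multi-objective setup.

```python
# Vanilla GWO minimising makespan for a task-to-VM assignment (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
task_len = rng.integers(100, 1000, size=20)          # MI per task (assumed)
vm_speed = np.array([500, 750, 1000])                # MIPS per VM (assumed)
n_vms, dim, n_wolves, n_iter = len(vm_speed), len(task_len), 15, 100

def makespan(position):
    assign = np.clip(position, 0, n_vms - 1e-9).astype(int)   # decode to VM indices
    finish = [task_len[assign == j].sum() / vm_speed[j] for j in range(n_vms)]
    return max(finish)

wolves = rng.uniform(0, n_vms, size=(n_wolves, dim))
for t in range(n_iter):
    order = np.argsort([makespan(w) for w in wolves])
    alpha, beta, delta = wolves[order[:3]]           # leaders re-selected every iteration
    a = 2 - 2 * t / n_iter                           # coefficient decreases from 2 to 0
    for i in range(n_wolves):
        new_pos = np.zeros(dim)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.uniform(size=dim), rng.uniform(size=dim)
            A, C = 2 * a * r1 - a, 2 * r2
            D = np.abs(C * leader - wolves[i])
            new_pos += (leader - A * D) / 3.0        # average of the three pulls
        wolves[i] = np.clip(new_pos, 0, n_vms)

best = wolves[np.argmin([makespan(w) for w in wolves])]
print("best makespan:", round(makespan(best), 2))
```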
  • Study of metal strip insertion and its optimization in doping less TFET

    Yadav D.S., Verma A., Sharma D., Sharma N.

    Article, Superlattices and Microstructures, 2018, DOI Link

    View abstract ⏷

    In this manuscript, a novel structure for the dopingless tunnel field effect transistor (DL TFET) is introduced, which comprises a metal strip (MS) in the oxide region near the source/channel junction. Gate and drain work-function engineering with a hetero gate dielectric is used in this device, which enhances the ON-state current, reduces the ambipolarity, and improves the RF performance. The increase in ON-state current with improved subthreshold swing is achieved by the effect of the MS, and a steeper tunneling junction is obtained through the aforementioned modifications at the source/channel junction. In this way, the rate of tunneling at this interface is increased and, because of this, the threshold voltage of the proposed device reduces drastically. Further, ambipolarity suppression is studied using underlap of the gate electrode near the drain/channel (D/C) interface. Furthermore, the effect of temperature variation is also incorporated, with the threshold voltage and ON-state current analysed by TCAD simulation. Moreover, the most optimized MS length and work function are determined for all simulations using the TCAD simulation tool and presented in tabular form: one table shows the effect of different MS work functions on threshold voltage, ON-state current, and subthreshold swing (SS), whereas another shows the effect of MS length variations on RF parameters.
  • Multi-Objective Resources Allocation Using Improved Genetic Algorithm at Cloud Data Center

    Sharma N.K., Guddeti R.M.R.

    Conference paper, Proceedings - 2016 IEEE International Conference on Cloud Computing in Emerging Markets, CCEM 2016, 2017, DOI Link

    View abstract ⏷

    In this paper, a novel Improved Genetic Algorithm (IGA) is proposed to determine a near-optimal solution for multi-objective resource allocation at the green cloud data center of a smart grid. Instead of randomly generating the initial chromosomes for the crossover and mutation operations, the Modified First Decreasing (MFD) technique is used to generate a better initial population. The proposed work saves energy consumption, minimizes resource wastage, and reduces the algorithm's computation time at the cloud data center. CloudSim-based experimental results show that the proposed approach improves the performance of the data center in terms of energy efficiency and average resource utilization when compared with state-of-the-art VM allocation approaches, i.e., First Fit, Modified First Decreasing (MFD), and the Grouping Genetic Algorithm (GGA).
  • On demand Virtual Machine allocation and migration at cloud data center using Hybrid of Cat Swarm Optimization and Genetic Algorithm

    Sharma N.K., Guddeti R.M.R.

    Conference paper, Proceedings on 5th International Conference on Eco-Friendly Computing and Communication Systems, ICECCS 2016, 2017, DOI Link

    View abstract ⏷

    This paper deals with energy saving at the data center using energy-aware Virtual Machine (VM) allocation and migration. The multi-objective VM allocation using the Hybrid Genetic Cat Swarm Optimization (HGACSO) algorithm saves energy consumption and also reduces resource wastage. Further, by consolidating VMs onto a minimal number of Physical Machines (PMs) using energy-efficient VM migration, idle PMs can be shut down to enhance the energy efficiency of the cloud data center. The experimental results show that the proposed HGACSO VM allocation and energy-efficient VM migration techniques achieve energy efficiency while minimizing resource wastage.
  • A novel approach for multi-dimensional variable sized virtual machine allocation and migration at cloud data center

    Sharma N.K., Reddy G.R.M.

    Conference paper, 2017 9th International Conference on Communication Systems and Networks, COMSNETS 2017, 2017, DOI Link

    View abstract ⏷

    In this paper, we propose a branch-and-bound based exact algorithm for allocating multi-dimensional, variable-sized VMs at the cloud data center. Further, an energy-efficient VM migration technique is proposed to reduce energy consumption and avoid Service Level Agreement (SLA) violations at the cloud data center.
  • Novel energy efficient virtual machine allocation at data center using Genetic algorithm

    Sharma N.K., Ram Mohana Reddy G.

    Conference paper, 2015 3rd International Conference on Signal Processing, Communication and Networking, ICSCN 2015, 2015, DOI Link

    View abstract ⏷

    Increased resource utilization by clients in a smart computing environment poses a great challenge in allocating optimal, energy-efficient resources at the data center. These resources should be allocated in such a manner that the energy of the data center is saved while service level agreement (SLA) violations are avoided. This paper deals with the design of an energy-efficient algorithm for optimized resource allocation at the data center using a combined approach of Dynamic Voltage Frequency Scaling (DVFS) and a Genetic Algorithm (GA). The performance of the proposed energy-efficient algorithm is compared with DVFS. Experimental results demonstrate that the proposed algorithm consumes 22.4% less energy over a specified workload with 0% SLA violation.
  • A novel energy efficient resource allocation using hybrid approach of genetic DVFS with bin packing

    Sharma N.K., Reddy G.R.M.

    Conference paper, Proceedings - 2015 5th International Conference on Communication Systems and Network Technologies, CSNT 2015, 2015, DOI Link

    View abstract ⏷

    Increased resource utilization by several clients in a smart computing environment poses a key challenge in allocating optimal, energy-efficient resources at the data center. These resources should be allocated in such a manner that the energy consumption of the data center is reduced while service level agreement (SLA) violations are avoided. This paper deals with the development of an energy-efficient algorithm for optimal resource allocation at the data center using a hybrid approach of Dynamic Voltage Frequency Scaling (DVFS), a Genetic Algorithm (GA), and Bin Packing techniques. The performance of the proposed hybrid approach is compared with the Genetic Algorithm, DVFS with Bin Packing, and DVFS without Bin Packing techniques. Experimental results demonstrate that the proposed energy-efficient algorithm consumes 22.4% less energy than the DVFS with Bin Packing technique over a specified workload with 0% SLA violation.
  • Adoption of knowledge management practices in software engineering organizations: A survey of software engineers’ perceptions

    Sharma N., Singh K., Goyal D.P.

    Conference paper, Proceedings - 2012 2nd International Conference on Advanced Computing and Communication Technologies, ACCT 2012, 2012, DOI Link

    View abstract ⏷

    Most businesses rely on the fact that their employees possess relevant knowledge and can apply it to the task at hand. However, a serious problem exists: this knowledge is not owned by the organisation as such; rather, it is owned and controlled by its employees. The success or failure of KM systems and the various KM practices claimed to be adopted by software engineering organisations can best be judged from the software engineers' point of view, as they are the first-hand users of the KM systems and technologies.
Scholars

Doctoral Scholars

  • Mr Chiranjeevi Koganti