Link Prediction in Complex Hyper-Networks Leveraging HyperCentrality
Dr Murali Krishna Enduri, Ms Yalamanchili Venkata Nandini, Jaya Lakshmi Tangirala, Mohd Zairul Mazwan Jilani
Source Title: IEEE Access, Quartile: Q1, DOI Link
Predicting the formation of new connections, or links, within complex networks has been a central challenge, traditionally addressed using graph-based models. These models, however, are limited in their ability to capture the higher-order interactions that exist in many real-world networks, such as social, biological, and technological systems. To account for these multi-node interactions, hyper-networks have emerged as a more flexible framework, where hyperedges can connect multiple nodes simultaneously. Traditional link prediction methods often treat all common neighbors equally, overlooking the fact that not all nodes contribute uniformly to the formation of future links. Each node within a network holds a distinct level of importance, which can influence the likelihood of link formation among its neighbors. To address this, we introduce a link prediction approach leveraging hypercentrality measures adapted from traditional centrality metrics such as degree, clustering coefficient, betweenness, and closeness to capture node significance and improve link prediction in hyper-networks. We propose the Link Prediction based on HyperCentrality in hyper-networks (LPHC) model, which enhances the traditional common neighbor and Jaccard coefficient measures of hyper-network frameworks by incorporating centrality scores to account for node importance. Our approach is evaluated across multiple real-world hyper-network datasets, demonstrating its superiority over traditional link prediction methods. The results show that hypercentrality-based models, particularly those utilizing hyperdegree and hyperclustering coefficients for the common neighbor and Jaccard coefficient approaches in hyper-networks, consistently outperform existing methods in terms of both F1-score and Area Under the Precision-Recall Curve (AUPR), offering a more precise understanding of potential link formations in hyper-networks. The proposed LPHC model consistently outperforms the existing HCN and HJC models across all datasets, achieving an overall improvement of 69% compared to HCN and 68% compared to HJC.
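As a concrete illustration of the weighting idea, the sketch below scores a candidate node pair by summing the hyperdegrees of its common neighbors rather than counting each neighbor once, with a Jaccard-style normalized variant. This is a minimal reading of the LPHC idea from the abstract, not the paper's exact formulation, and the tiny hypergraph is invented for the example.

```python
# Hypergraph as a list of hyperedges (sets of nodes); toy example data.
from collections import defaultdict

hyperedges = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}, {1, 5}]

# Hyperdegree: number of hyperedges a node participates in.
hyperdegree = defaultdict(int)
neighbors = defaultdict(set)
for e in hyperedges:
    for u in e:
        hyperdegree[u] += 1
        neighbors[u] |= e - {u}

def lphc_cn(u, v):
    """Common-neighbor score where each shared neighbor is weighted
    by its hyperdegree instead of counting as 1."""
    return sum(hyperdegree[z] for z in neighbors[u] & neighbors[v])

def lphc_jaccard(u, v):
    """Jaccard-style normalization of the weighted score."""
    union = neighbors[u] | neighbors[v]
    total = sum(hyperdegree[z] for z in union)
    return lphc_cn(u, v) / total if total else 0.0

print(lphc_cn(1, 4), lphc_jaccard(1, 4))
```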
Finding Influential Nodes using Mixed Centralities in Complex Networks
Source Title: 2025 17th International Conference on COMmunication Systems and NETworks (COMSNETS), DOI Link
In an era of rapidly advancing technology, social media plays a crucial role, allowing people to interact with each other, share knowledge, and shape influence. Identifying the most important influencers within complex networks is essential for effectively spreading information globally. These influencers are pivotal in various domains, such as marketing, information dissemination, and opinion formation. Various centrality measures, such as isolating centrality, local-global centrality, closeness, betweenness, and degree centrality, have been developed to identify influential nodes. These measures fall into two categories: local and global. Local measures rely solely on local information, resulting in lower accuracy, whereas global measures use global information, which increases computational complexity. To tackle these issues, we propose a novel Mixed Centrality (MC) metric based on the local average shortest path along with semi-local and isolating centrality. To evaluate the efficiency of MC, we use the SIR model to measure the effect of data dissemination. Subsequently, we use Kendall's Tau coefficient to calculate the similarity between our method and existing centrality measures on real-world datasets, such as bio-celegans and fb-pages-politicians.
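The ranking-comparison step described above takes only a few lines; the sketch below compares two standard centrality rankings with Kendall's Tau on a small built-in graph, standing in for MC and the paper's datasets.

```python
# Compare two centrality rankings with Kendall's Tau (illustrative graph).
import networkx as nx
from scipy.stats import kendalltau

G = nx.karate_club_graph()
nodes = list(G.nodes())

degree = nx.degree_centrality(G)
closeness = nx.closeness_centrality(G)

tau, p_value = kendalltau([degree[n] for n in nodes],
                          [closeness[n] for n in nodes])
print(f"Kendall tau between degree and closeness: {tau:.3f} (p={p_value:.3g})")
```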
Identifying influential nodes using semi local isolating centrality based on average shortest path
Source Title: Journal of Intelligent Information Systems, Quartile: Q1, DOI Link
In complex networks, identifying influential nodes becomes critical as these networks grow rapidly. Extensive studies have been carried out on intricate networks to comprehend diverse real-world networks, including transportation networks, Facebook networks, animal social networks, etc. Centrality measures like degree, betweenness, closeness, and clustering centralities are used to find influential nodes, but these measures have limitations when applied to large-scale networks. These centrality measures are classified into global and local centralities. Semi-local structures perform well compared to local and global centralities, but an efficient centrality for finding influential nodes remains a challenging issue in large-scale networks. To address this challenge, a Semi-Local Average Isolating Centrality (SAIC) metric is proposed that integrates semi-local and local information, along with the relative change in average shortest path, to identify important nodes in large networks. Here, we consider the extended neighborhood concept for selecting a node's nearest neighbors, along with a weighted edge policy, to find the best influential nodes using SAIC. In addition, SAIC also considers isolated nodes, which significantly impact network connectedness by maximizing the number of connected components upon removal. As a result, SAIC differentiates itself from other centrality metrics by employing a distributed approach to define semi-local structure and utilizing an efficient edge weighting policy. The analysis of SAIC has been performed on multiple real-world datasets using Kendall's Tau coefficient. Using the Susceptible-Infected-Recovered (SIR) and Independent Cascade (IC) models, the performance of SAIC has been examined to determine maximum information spread in comparison to the most recent metrics on several real-world datasets. Our proposed method SAIC performs better in terms of information spreading when compared with other existing methods, with an improvement ranging from 4.11% to 17.9%.
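For readers unfamiliar with the SIR evaluation used here and in several of the papers below, the toy simulation sketched next scores a seed set by the final outbreak size on a graph; beta and gamma are illustrative values, not the paper's settings.

```python
# Toy discrete-time SIR spreading simulation on a graph.
import random
import networkx as nx

def sir_spread(G, seeds, beta=0.1, gamma=1.0, steps=50, seed=42):
    rng = random.Random(seed)
    infected, recovered = set(seeds), set()
    for _ in range(steps):
        if not infected:
            break
        new_infected = set()
        for u in infected:
            for v in G.neighbors(u):
                if v not in infected and v not in recovered and rng.random() < beta:
                    new_infected.add(v)
        # Currently infected nodes recover with probability gamma.
        recovered |= {u for u in infected if rng.random() < gamma}
        infected = (infected | new_infected) - recovered
    return len(recovered | infected)  # final outbreak size

G = nx.karate_club_graph()
print(sir_spread(G, seeds=[0]))
```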
Blockchain and AI for Educational Data Analytics in the Modern Education System
Source Title: Blockchain and AI in Shaping the Modern Education System, DOI Link
This chapter explores the transformative potential of integrating blockchain and artificial intelligence (AI) technologies within educational data analytics. It begins by examining blockchain's capacity to enhance data security, streamline record-keeping, and ensure transparent credential verification. Concurrently, it analyzes AI's role in enabling adaptive learning, predictive modeling, and insightful data analysis to improve student outcomes and optimize educational strategies. The chapter further evaluates the synergistic benefits of combining blockchain and AI, proposing a robust framework to address prevalent challenges in the education sector, including data privacy, security, and personalized learning. By securing student records through blockchain's immutability and enhancing personalized learning experiences via AI-driven analytics, the chapter presents a comprehensive approach to modernizing educational systems. Additionally, it addresses technical challenges such as scalability and interoperability, alongside ethical considerations like data privacy, consent, and algorithmic bias. The chapter concludes with a call for collaborative efforts among educators, technologists, and policymakers to leverage these technologies, navigate their challenges, and fully realize their potential in revolutionizing education.
Evaluating Community Detection Algorithms: A Focus on Effectiveness and Efficiency
Source Title: Journal of Scientometric Research, Quartile: Q2, DOI Link
Many practical problems and applications are characterized in the form of a network. If the network becomes huge and complex, it becomes very difficult to identify the partitions and the relationships among the network's nodes. As a result, the graph is divided into communities, and several community detection methods have been proposed to identify those communities. The formation of virtual clusters or communities often occurs in networks due to the likelihood of individuals with similar choices and desires associating with one another. Detecting these communities holds significant benefits across various applications, such as identifying shared research areas in collaboration networks, detecting protein interactions in biological networks, and finding like-minded individuals for marketing and suggestions. Numerous community detection algorithms are applied in different domains. This paper gives a brief explanation of existing algorithms and approaches for community detection, such as the Louvain, Kernighan-Lin, Girvan-Newman, Label Propagation, and Leiden algorithms, and discusses various applications of community detection. We evaluate our comparison on six different datasets, namely bio-celegans, ca-netscience, usair97, web-polblogs, email-univ, and powergrid, to compare the efficiency of the methods. Modularity and conductance scores are used to assess the quality of the partitioned communities. Special emphasis is placed on comparing these community detection methods in terms of partition quality and the time taken for evaluation. Having evaluated all these algorithms, we conclude that the Louvain and Leiden algorithms provide the most effective community division in terms of structure and runtime.
Discovering Influential Nodes in Hypergraphs With MHRW Sampling and Isolating Centrality
Source Title: 2025 IEEE 14th International Conference on Communication Systems and Network Technologies (CSNT), DOI Link
Determining influential nodes is a fundamental problem in analyzing the dynamics of a complex system. When complex systems are represented as hypergraphs, conventional centrality metrics such as degree, closeness, betweenness, and harmonic centralities can be extended from graph-based models to hypergraphs, but these adaptations fail to capture the unique structural characteristics and higher-order relationships inherent in hypergraphs. To overcome these limitations, we propose Isolating Centrality (ISC), which quantifies node influence by incorporating both local connectivity patterns and the degree of structural isolation. To manage the computational complexity of analyzing large-scale hypergraphs, we employ the Metropolis-Hastings Random Walk (MHRW) sampling technique, traditionally used in graphs and extended here to hypergraphs, preserving the basic properties of the hypergraph. On this sampled hypergraph we compute ISC and compare it with traditional metrics using evaluation methods such as the SIR model and Kendall Tau correlation analysis.
Enhanced Identification of Fraud in Credit Card Transactions Applying Machine Learning Strategies
Dr Murali Krishna Enduri, Ms Anupoju Tejaswi, Bugginni Roshini, Likhitha Sri Gandham, Parimala Shivani Mandava
Source Title: 2025 IEEE 14th International Conference on Communication Systems and Network Technologies (CSNT), DOI Link
Credit card fraud, which results in millions of dollars in damages every year, is a major global financial and security concern. This analysis focuses on dealing with this critical issue by establishing a machine learning model that can identify fraudulent transactions effectively. The developed model leverages past credit card transactions to identify patterns indicative of fraud. By employing several kinds of machine learning algorithms, comprising K-Nearest Neighbors (KNN), Logistic Regression, Random Forest, Decision Trees, and the XGB Classifier, the project evaluates the performance of each approach in accurately differentiating between transactions that are fraudulent and those that are not. These models work well for preventing credit card scams as they can manage unbalanced data, identify irregularities, and adjust to intricate fraud patterns. Random Forest and XGBoost provide excellent accuracy and resilience. The principal focus is enabling early identification of fraudulent activities, ensuring that customers' money is safeguarded and unauthorized charges are prevented. This will benefit customers by ensuring their funds are restored and their accounts remain secure. The effectiveness of the models is examined using measures involving precision, recall, accuracy, F1-score, and the confusion matrix after they have been trained and tested on a dataset. This paper demonstrates how machine learning may be used to detect fraud, emphasizing how important advanced algorithms are in mitigating financial losses and enhancing the security of digital transactions.
Leveraging Seasonal Trends and Symptomatic Data for Precise Disease Prediction Using Machine Learning
Dr Murali Krishna Enduri, Dr Satish Anamalamudi, Anjana Harshitha Somayajula, Aetesh Chagantipati, Amish Gollapudi, Maganti Suryanarayana Dattatreya
Source Title: 2025 IEEE 14th International Conference on Communication Systems and Network Technologies (CSNT), DOI Link
Our analysis examines the amalgamation of seasonal and symptomatic data to boost the forecasting of diseases, emphasizing the need for proactive healthcare measures. The rising need for accurate, data-driven preventive healthcare solutions, especially in regions where seasonal variations significantly impact disease patterns, serves as the foundation for this study. We developed a comprehensive prediction framework using a range of machine learning models, including logistic regression, decision trees, Naive Bayes, K-nearest neighbors, support vector machines (SVM), and random forests. Our results show that logistic regression and SVM achieved high accuracy both with and without the use of SMOTE, demonstrating their effectiveness in handling imbalanced datasets. This technique links symptoms, seasonal variations, and disease patterns for precise categorization and actionable insights, enabling effective illness detection and preventive coordination. The findings of this study help foster the use of machine learning methods to improve preventive healthcare and illustrate the need to include contextual seasonal data in healthcare forecasts. Additionally, the research highlights how symptoms and seasonal variables work together dynamically, demonstrating the potential of adaptive models to assist with healthcare decision-making in real time.
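As a sketch of the SMOTE step mentioned above (assuming the imbalanced-learn package), the snippet below oversamples only the training split before fitting a classifier; the synthetic dataset and model choice are illustrative, not the paper's.

```python
# SMOTE on an imbalanced synthetic dataset, applied to the training split only.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample only the training data to avoid leaking into the test set.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)
print(classification_report(y_te, clf.predict(X_te)))
```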
Comparative Analysis of Machine Learning Algorithms for Conversion Predictors of Clinically Isolated Syndrome (CIS) to Multiple Sclerosis (MS)
Dr Murali Krishna Enduri, Mr Koduru Hajarathaiah, Serena Mendanha, Keerthi Reddy Gudibandi, Bharat Reddy Gudibandi
Source Title: 2025 4th International Conference on Sentiment Analysis and Deep Learning (ICSADL), DOI Link
The central nervous system is impacted by multiple sclerosis (MS), a chronic neurological condition that causes significant cognitive and physical deficits. Better disease management and prompt intervention depend on early and precise detection. Effective diagnosis of MS is hampered by a number of factors, such as small datasets, significant clinical presentation variability, difficult feature selection, model generalization problems, and the integration of multimodal data such as magnetic resonance imaging and genetic markers. Several machine learning (ML) models are assessed in this study in order to predict the development of clinically isolated syndrome (CIS) into multiple sclerosis. Using clinical and magnetic resonance data from patients with CIS at risk of developing MS, we evaluate the effectiveness of support vector machines (SVM), K-nearest neighbors (KNN), decision trees (DT), random forests (RF), logistic regression (LR), Gaussian naive Bayes (Gaussian NB), and XGBoost (XG). Evaluation is performed using performance criteria including F1 score, recall, precision, and accuracy. According to our research, Random Forest has the best prediction accuracy, which makes it a potentially useful tool for helping doctors diagnose and treat MS patients early. Notwithstanding these developments, issues like model interpretability and data scarcity still exist. In order to improve diagnostic precision, future research will concentrate on enhancing these models by integrating deep learning methods, genetic markers, and more advanced imaging modalities.
Link Prediction Based on Node Centrality Measure
Source Title: Smart Innovation, Systems and Technologies, Quartile: Q4, DOI Link
Predicting links is a crucial task for determining future connections in complex networks across different real-world domains such as information networks, social interactions, and technological networks. Link prediction methods utilize graph topological features to locate common neighborhoods, yet they overlook the importance of nodes within the network. In this context, we seek to utilize the importance of nodes in link prediction techniques. Centrality metrics measure a node's relative importance within the network and demonstrate a strong correlation with future links in complex networks. In our study, we propose a novel link prediction measure called Local-Similarity based on Summation of Degree Centrality (CLP). CLP finds similarity scores for node pairs by considering common neighbors and using the centrality scores of these common neighbors in the prediction task. To assess our approach, we compare it with existing methods like the Jaccard coefficient and Preferential Attachment, and a recent measure, Keyword Network Link Prediction based on degree centrality. We conduct experiments on four real-world datasets, and CLP shows significant improvements. On average, there is a 15% improvement in Area Under the Receiver Operating Characteristic (AUROC) compared to existing methods and a 27% improvement over the recent one. Additionally, there is an average 20% and 23% enhancement in Area Under Precision-Recall (AUPR) compared to existing and recent methods, respectively. Our experiments highlight the superior performance of the proposed CLP method.
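The scoring rule described above is straightforward to prototype; the sketch below weights each common neighbor by its degree centrality (plain common neighbors would count each as 1) and ranks non-adjacent pairs, using a built-in graph as a stand-in for the paper's datasets. Details may differ from the paper's exact definition.

```python
# CLP-style score: sum of degree-centrality values of common neighbors.
import networkx as nx

G = nx.karate_club_graph()
dc = nx.degree_centrality(G)

def clp_score(u, v):
    common = set(G.neighbors(u)) & set(G.neighbors(v))
    return sum(dc[z] for z in common)

# Rank non-adjacent pairs by CLP score (top 5 shown).
ranked = sorted(nx.non_edges(G), key=lambda p: clp_score(*p), reverse=True)
for u, v in ranked[:5]:
    print(u, v, round(clp_score(u, v), 3))
```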
Swarm Intelligence in IoT and Edge Computing
Source Title: Swarm Intelligence, Quartile: Q2, DOI Link
Swarm intelligence plays a crucial role in enhancing the performance of IoT and edge computing. Swarm intelligence, a collective decision-making paradigm inspired by natural systems, has helped to solve many existing issues, such as channel selection, routing table optimization, and scheduling operations in IoT networks. This study discusses how swarm intelligence might improve anomaly detection, energy-efficient routing, and scalable, decentralized algorithms. Together, IoT, edge computing, and swarm intelligence enable efficient data processing, improved network performance, and novel solutions to complicated issues. Swarm intelligence enhances IoT and edge computing systems, bringing new ideas and solutions for the growing environment of interconnected devices.
Empirical Analysis of Variations of Matrix Factorization in Recommender Systems
Source Title: International Journal of Advanced Computer Science and Applications, Quartile: Q3, DOI Link
Recommender systems recommend products to users. Almost all businesses utilize recommender systems to suggest their products to customers based on the customers' previous actions. The primary inputs for recommendation algorithms are user preferences, product descriptions, and user ratings of products. Content-based recommendation and collaborative filtering are examples of traditional recommendation approaches. One of the mathematical models frequently used in collaborative filtering is matrix factorization (MF). This work focuses on discussing five variants of MF, namely basic Matrix Factorization, Probabilistic MF, Non-negative MF, Singular Value Decomposition (SVD), and SVD++. We empirically evaluate these MF variants on six benchmark datasets from the domains of movies, tourism, jokes, and e-commerce. MF is the least performing and SVD is the best-performing method among the MF variants in terms of Root Mean Square Error (RMSE).
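A comparison of this kind can be sketched with the scikit-surprise package, which implements several of the listed variants; the snippet below cross-validates three of them on the MovieLens-100k benchmark as a stand-in for the paper's six datasets.

```python
# RMSE comparison of MF variants with scikit-surprise.
from surprise import SVD, SVDpp, NMF, Dataset
from surprise.model_selection import cross_validate

data = Dataset.load_builtin("ml-100k")  # downloads on first use (asks for confirmation)

for algo in (SVD(), NMF(), SVDpp()):
    results = cross_validate(algo, data, measures=["RMSE"], cv=5, verbose=False)
    print(type(algo).__name__, results["test_rmse"].mean())
```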
Deep Learning for Aerial and Satellite Image Analysis: a CNN-Based Approach
Dr Murali Krishna Enduri, Ms Anupoju Tejaswi, Sailesh Adda, Hemanth Valeti, Gangireddy Salla
Source Title: 2025 IEEE 14th International Conference on Communication Systems and Network Technologies (CSNT), DOI Link
Applications like disaster management, urban planning, and environmental monitoring rely on satellite image categorization. This project develops a machine learning pipeline using MobileNetV2, a CNN architecture, to classify high-resolution satellite images. It employs two convolutional layers (3x3 kernels) with ReLU activation, 2x2 max-pooling, a fully connected layer, and a SoftMax output for multi-class classification. Images are resized to 200x200 pixels (RGB) to balance detail and efficiency. MobileNetV2 was chosen for its low latency and high performance, using depth-wise separable convolutions and inverted residuals. The model, optimized with Adam and categorical cross-entropy, achieved 98% validation accuracy and F1-scores above 0.96 across all classes, converging in 8 epochs. The architecture balances simplicity and performance for robust feature learning and generalization. This approach highlights CNNs' ability to classify satellite images effectively. Future work could explore transformer-based models or integrate temporal satellite data to enhance analysis. This work offers a scalable, automated solution for satellite image classification.
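A hedged Keras sketch of such a transfer-learning pipeline is shown below: MobileNetV2 as a frozen feature extractor with a small classification head at the stated 200x200 input size. The class count, head layers, and dataset names are illustrative assumptions, not the paper's exact architecture.

```python
import tensorflow as tf

NUM_CLASSES = 4  # assumed class count for illustration

# MobileNetV2 backbone; arbitrary input sizes are accepted when include_top=False.
base = tf.keras.applications.MobileNetV2(
    input_shape=(200, 200, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze pretrained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=8)  # hypothetical datasets
```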
Improving Weather Forecast Accuracy Using Hybrid Machine Learning Algorithms
Dr Murali Krishna Enduri, Ms Tokala Srilatha, Naga Charitavya Madala, Sai Durga Saradhi Pranu Deepak Tallapudi, Venkata Srikari Malladi, Durga Mahesh Muthinti
Source Title: 2024 IEEE 16th International Conference on Computational Intelligence and Communication Networks (CICN), DOI Link
Weather prediction, particularly for rainfall, temperature, and relative humidity (RH%), is critical for climate-sensitive industries such as agriculture and disaster relief. This paper introduces a predictive modeling framework based on a range of machine learning (ML) methods. In addition, hybrid models such as Catboost-Logistic Regression, XGBoost-k-Nearest Neighbors (KNN), and Gaussian Process-Decision Tree have been investigated to improve prediction accuracy. Using a 10-year Gannavaram dataset, we focus on multi-label classifications (2 to 5 labels) of weather characteristics such as rainfall, temperature, and RH%. Notably, the Gaussian Process consistently predicted rainfall with 100% accuracy, while hybrid models such as Catboost-Logistic Regression and XGBoost-KNN performed well across a variety of criteria. The combination of these hybrid models and standalone ML algorithms has considerably increased the resilience of weather forecasting. Our research highlights the effectiveness of combining machine learning models to improve predictive accuracy, offering a valuable contribution to real-time weather prediction systems.
Explainable Depression Detection in Social Media Using Transformer-Based Models: A Comparative Analysis of Machine Learning
Dr Murali Krishna Enduri, Venkata Sai Laxman B., Sanat Shantanu Kulkarni, Beecha Venkata Naga Hareesh
Source Title: 2024 IEEE 16th International Conference on Computational Intelligence and Communication Networks (CICN), DOI Link
The growing use of online platforms like Twitter and Reddit has created new opportunities in the field of mental health by enabling the analysis of language patterns in user-generated content. This study explores how transformer-based models such as Bidirectional Encoder Representations from Transformers (BERT), Generative Pre-trained Transformer (GPT-2), Robustly Optimized BERT Approach (RoBERTa), and Term Frequency-Inverse Document Frequency (TF-IDF) for text feature matrices can be applied to detect depressive symptoms in social media posts. The study utilizes machine learning algorithms, including Logistic Regression (LR), Random Forest, Support Vector Machine (SVM), Decision Tree, Naive Bayes, Multi-Layer Perceptron (MLP), and XGBoost, to improve the detection of depression-related markers in textual data. By leveraging annotated datasets specifically focused on depression, these models are trained to identify depression indicators in the language used by social media users. We employ supervised learning techniques to enhance model performance across various platforms, aiming to achieve greater accuracy and generalizability.
Cryptographic Pixel Manipulation for Visual Security
Source Title: 2024 IEEE 16th International Conference on Computational Intelligence and Communication Networks (CICN), DOI Link
Data protection via encryption continues to be a key concern in the constantly changing field of digital security. This study investigates a novel method of pixel-displacement image encryption via a modified Caesar cipher algorithm. The proposed method ensures enhanced security by shifting pixel values according to a random key matrix, obscuring image content from unauthorized access. Unlike traditional Caesar cipher applications, which are often criticized for their simplicity and vulnerability, this pixel-wise encryption method leverages the power of modular arithmetic to transform grayscale image data into a format resilient to common cryptographic attacks and concerns. Since the encryption strength is largely dependent on the key's unpredictability and secrecy, key management is essential to this strategy. This technique offers a lightweight alternative suitable for specific low-resource applications where efficiency is paramount. The paper also discusses the implications of this method in the broader context of confidentiality, data integrity, and authentication, which are crucial elements in the modern digital security paradigm.
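The modular pixel-shift idea reduces to a few lines of array arithmetic; the toy below encrypts a grayscale image by adding a random key matrix modulo 256 and decrypts by subtracting it. It illustrates the concept only, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(seed=7)
image = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)     # stand-in grayscale image
key = rng.integers(0, 256, size=image.shape, dtype=np.uint8)  # secret key matrix

# Encrypt: shift each pixel by its key entry, wrapping modulo 256.
encrypted = (image.astype(int) + key) % 256
# Decrypt: subtract the same key modulo 256.
decrypted = (encrypted - key) % 256

assert np.array_equal(decrypted, image)  # round-trip check
print(encrypted)
```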
Enhancing Image Crispness: A Detailed System for Versatile Optimization
Dr Murali Krishna Enduri, Dr Satish Anamalamudi, Kalyan Kumar Doppalapudi, Geetha Siva Srinivas Gollapalli, Yaswanth Chowdary Thotakura, Shalom Raja Kasim
Source Title: 2024 International Conference on Intelligent Computing and Emerging Communication Technologies (ICEC), DOI Link
From space shots to health checkups, improving images is crucial, and improving their clarity is central to that task. Techniques such as adjusting contrast and balancing the histogram in the spatial domain may improve clarity and sharpness by directly manipulating pixel data. Meanwhile, Fourier Transform-based enhancement analyzes and modifies frequency components to make precise modifications. These methods are crucial for tasks like reducing noise in aerial photos and emphasizing microscopic health images, illustrating how readily they can be employed to improve varied image quality. Choosing between spatial or frequency-based methods in practice relies on the picture's unique properties and the desired improvements. Using spatial techniques on pixel values may improve the clarity of minor patterns in a picture, while frequency-based methods provide a unique perspective on the frequency components used to produce visuals. A solid balance between keeping critical sections clear and reducing unnecessary picture components is crucial. When improving photographs, this balance yields the greatest outcomes, demonstrating how these principles apply to real-life circumstances.
Evaluation of Asymmetric Link-Based AODV Routing Protocol for Low Power and Lossy Networks in COOJA Simulator
Source Title: 2024 International Conference on Computer, Electronics, Electrical Engineering & their Applications (IC2E3), DOI Link
Route discovery for Point-to-Point (P2P) traffic flows is critical in Low-power and Lossy Networks (LLNs), especially in home and building automation applications. Although the Routing Protocol for Low-Power and Lossy Networks (RPL), standardized by the Internet Engineering Task Force (IETF), is widely used, it struggles to efficiently handle P2P flows, leading to congestion at the Destination Oriented Directed Acyclic Graph (DODAG) root and increased packet delays. The P2P-RPL protocol aims to address this issue, but neither RPL nor P2P-RPL adequately account for asymmetric wireless links in their route computations. AODV-RPL, a protocol draft adopted by the IETF's ROLL working group, offers a reactive P2P route discovery mechanism based on Ad Hoc On-demand Distance Vector Routing (AODV), operating with RPL in its storing mode. This study assesses the performance of AODV-RPL using the Cooja simulator across various network topologies and node densities, considering both symmetric and asymmetric links. Through extensive simulations, it is demonstrated that AODV-RPL outperforms traditional RPL in Packet Delivery Ratio (PDR) and delay performance for numerous source and destination pairs. However, while AODV-RPL generally improves upon traditional RPL for P2P routes, some node pairs remain where the difference in relative hop distance is minimal.
Algorithmic Guardians: Evaluating Machine Learning for Predicting Criminal Activities
Dr Murali Krishna Enduri, Beecha Venkata Naga Hareesh, Reethu Bhargavi Sajjala, Lahari Kotapati, Rishitha Kancharla, Tadiparthy Mani Dheeraj Kumar
Source Title: 2024 IEEE 13th International Conference on Communication Systems and Network Technologies (CSNT), DOI Link
The objective of this study is to investigate the use of machine learning algorithms for the prediction of criminal behavior. Many different algorithms are analyzed and contrasted based on their performance in terms of accuracy, precision, recall, and F1 score, including Logistic Regression, K-Nearest Neighbours (KNN), Support Vector Machine (SVM), Random Forest, Naive Bayes, Decision Tree, Multi-Layer Perceptron (MLP), and XGBoost. The evaluation and projection of crime rates in any particular area or nation is of the utmost importance to the authorities in charge of governance. These results not only contribute to the identification of methods that may be used to reduce the rates of criminal activity in communities, but also to the development of practical methods that can ultimately reduce the number of unlawful activities.
MindSight: Revolutionizing Brain Tumor Diagnosis with Deep Learning
Dr Murali Krishna Enduri, Dr Satish Anamalamudi, Mr Koduru Hajarathaiah, Rushita Gandham, Keerthi Reddy Manambakam, Navyasri Nannapaneni
Source Title: 2024 IEEE 13th International Conference on Communication Systems and Network Technologies (CSNT), DOI Link
Brain tumors, characterized by abnormal cell growth, pose a substantial health challenge with non-cancerous (benign) and cancerous (malignant) categories. India witnesses the diagnosis of approximately 40,000 fresh instances of brain tumors annually. The rarity and diversity of tumor types make predicting survival rates challenging. Efficient identification of cerebral abnormalities is essential for the timely and effective management of neurological conditions. Exploring the application of deep learning, this study investigates brain tumor detection using a curated dataset of Magnetic Resonance Images (MRI). Utilizing this dataset, brain tumor detection is advanced through the application of diverse models, including EfficientNetB3, ResNet50, MobileNetV3, and VGG16. The study prioritizes dataset preprocessing, emphasizing data augmentation. Diverse brain tumor images contribute to model training, incorporating transfer learning from models pre-trained on extensive datasets for discerning intricate patterns in medical images. Efficiency evaluation considers computational resources, training time, and complexity. Quantitative metrics such as F1 score, accuracy, recall, and precision are employed to gauge model performance in classifying tumor and non-tumor regions. In the conducted study, VGG16 demonstrated the best performance compared to all other models.
Predicting Machine Learning for Early Diabetes Mellitus Prediction
Dr Murali Krishna Enduri, Ms Tokala Srilatha, Sanjana Singamsetty, Mittapally Shanmukh Nandan, Siva Sai Cherish Polu, Junga Leela Manohar
Source Title: 2024 Beyond Technology Summit on Informatics International Conference (BTS-I2C), DOI Link
Diabetes mellitus (DM) is a chronic illness characterized by hyperglycemia, caused by inadequate insulin synthesis or an abnormal insulin response. Early identification is crucial, since the Centers for Disease Control and Prevention (CDC) anticipate that by 2060 the number of Type 2 Diabetes Mellitus (T2DM) cases among those under 20 will have risen by 700%. This article offers a thorough approach to diabetes prediction using three datasets: the Pima Indian Diabetes Dataset, the Iraqi Diabetes Dataset, and a medical dataset from Kaggle. Logistic Regression, Decision Tree, Random Forest, SVM, K-Nearest Neighbours, Naive Bayes, Gradient Boosting, and several neural network designs (two-layered neural networks, LSTM, and Bi-LSTM) were among the machine learning models used, along with a voting classifier. The experimental findings demonstrate that machine learning can improve diabetes diagnosis and treatment by demonstrating robust prediction capabilities across models.
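The voting-classifier setup mentioned above can be sketched as follows with a few of the listed base models; the stand-in dataset and hyperparameters are illustrative, not the paper's.

```python
# Soft-voting ensemble over three base classifiers (illustrative setup).
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score
from sklearn.datasets import load_breast_cancer  # stand-in for a diabetes dataset

X, y = load_breast_cancer(return_X_y=True)

voter = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("nb", GaussianNB()),
    ],
    voting="soft",  # average predicted probabilities across models
)
print(cross_val_score(voter, X, y, cv=5).mean())
```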
Estimating Future Prices of Key Agricultural Commodities Using Machine Learning Models
Dr Murali Krishna Enduri, Ms Tokala Srilatha, Sashikanth Yerukala, Poojitha Madala, Vanhi Battula, Nomitha Prasanthi Bisiringi
Source Title: 2024 Beyond Technology Summit on Informatics International Conference (BTS-I2C), DOI Link
Farmers, consumers, and policymakers face difficulties as a result of the price fluctuations of basic agricultural commodities like rice, tomatoes, onions, and dals. For better market stability and well-informed decision-making, accurate price forecasts are essential. In order to assess historical market data and forecast price patterns, this study proposes a machine learning-based method that makes use of regression and time-series forecasting models. The proposed models show higher accuracy than conventional statistical techniques by capturing the intricacies of price swings caused by seasonal demand and crop yields, promoting enhanced supply chain planning and efficiency in agriculture.
Identifying Influential Nodes in Hypergraph Using Isolating Centrality
Source Title: 2024 Beyond Technology Summit on Informatics International Conference (BTS-I2C), DOI Link
Understanding processes such as information dissemination and network resilience relies on pinpointing influential nodes within complex relationships. Conventional centrality measures, which are based on shortest-path calculations, fall short in hypergraphs due to the complexity of paths involving multiple simultaneous relationships. Traditional metrics oversimplify node influence and the stability of hyperedges. The Isolating Centrality (ISC) measure is introduced in hypergraphs to address this drawback; it specifically focuses on local structures. ISC assesses how the removal of a node affects the connectivity of hyperedges, providing a more detailed approach for identifying influential nodes. We assess the superior performance of ISC over conventional centrality metrics through correlation analysis and the SIR model.
Microplastic Detection in Drinking Water: A Comparative Analysis of CNN-SVM and CNN-RF Hybrid Models
Dr Satish Anamalamudi, Dr Murali Krishna Enduri, Prashanthi Thota, Kausik Challapalli, Harsha Garikapati, Veera Manikanta Sai Adusumilli
Source Title: 2024 OITS International Conference on Information Technology (OCIT), DOI Link
The growing presence of microplastics in drinking water poses severe dangers to health and the environment, requiring enhanced detection methods. This work deals with the constraints of conventional detection methods, such as visual inspection and Raman spectroscopy, which are labour-intensive and unscalable. The essential purpose is to improve the accuracy of microplastic detection by contrasting two highly efficient hybrid machine learning models, CNN-SVM and CNN-RF. Convolutional neural networks (CNNs) extract features from images of water samples, and the method classifies them using Support Vector Machine (SVM) and Random Forest (RF) algorithms. The study assesses the models' precision, recall, F1-score, and overall accuracy. The findings show that these hybrid models greatly enhance detection abilities, resulting in a more effective and flexible solution. Practical uses involve real-time monitoring in water treatment plants, ecological evaluations of water bodies, as well as household water filtering systems, which provide vital information for compliance with regulations and public health safety. This study adds to our knowledge of the problem of microplastic pollution and indicates possible future uses in environmental monitoring and policy-making, which will help attempts to reduce the harmful effects of microplastics in drinking water.
Employing TF-IDF and Word2Vec Embeddings to Identify Multi-Class Toxicity Through Machine and Deep Learning Approaches
Dr Murali Krishna Enduri, Ms Tokala Srilatha, Sanjana Singamsetty, Harshitha Somayajula, Anu Likitha Immadisetty, Keerthi Sree Konkimalla
Source Title: 2024 OITS International Conference on Information Technology (OCIT), DOI Link
Our investigation delves into the intricate catalysts triggering toxicity within online discourse, revealing how seemingly innocuous comments can unexpectedly provoke hostile reactions. Given the profound influence of social media viewpoints on individuals, mitigating toxicity emerges as a critical imperative. To address this challenge, we present a sophisticated multi-label classification framework integrating TF-IDF and Word2Vec methodologies for robust vectorization. This framework amalgamates fundamental textual data with intricate metrics derived from prior research, facilitating nuanced monitoring of sentiment shifts, topic dynamics, and conversational context. Leveraging a diverse array of algorithms, including Logistic Regression, AdaBoost, Naive Bayes, Gradient Boosting, as well as Neural Network architectures like LSTM and Bi-LSTM, our model showcases exceptional efficacy in identifying four distinct types of toxicity: toxic, obscene, insult, and non-toxic. Importantly, our study underscores the necessity of accounting for contextual subtleties and sentiment fluctuations in online interactions, advocating for the widespread adoption of advanced natural language processing techniques to foster constructive discourse and enhance digital engagement. Furthermore, our research underscores the dynamic nature of online conversations, emphasizing the need for adaptable frameworks capable of capturing evolving patterns of toxicity.
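The TF-IDF branch of such a framework can be sketched as a one-vs-rest multi-label pipeline; the tiny corpus and label matrix below are invented for illustration.

```python
# Multi-label toxicity classification with TF-IDF features (illustrative data).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

texts = ["you are awful", "have a nice day", "what an idiot",
         "thanks for the help", "this is disgusting and rude"]
# Label columns: toxic, obscene, insult (all zeros = non-toxic).
labels = np.array([[1, 0, 1],
                   [0, 0, 0],
                   [1, 0, 1],
                   [0, 0, 0],
                   [1, 1, 0]])

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
clf.fit(texts, labels)
print(clf.predict(["you idiot"]))  # per-column binary predictions
```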
Empowering Quality of Recommendations by Integrating Matrix Factorization Approaches With Louvain Community Detection
Source Title: IEEE Access, Quartile: Q1, DOI Link
Recommendation systems play an important role in creating personalized content for consumers, improving their overall experiences across several applications. Providing the user with accurate recommendations based on their interests is the recommender system's primary goal. Collaborative filtering-based recommendation with the help of matrix factorization techniques is very useful in practice. Owing to the expanding size of datasets and their increasing complexity, an issue arises in delivering accurate recommendations to users. The efficient functioning of a recommendation system faces a scalability challenge in handling large and varying datasets. This paper introduces an innovative approach that integrates matrix factorization techniques and community detection methods to address scalability in recommendation systems. The steps involved in the proposed approach are: 1) the rating matrix is modeled as a bipartite network; 2) communities are generated from the network; 3) the rating matrices that belong to the communities are extracted and MF is applied to these matrices in parallel; 4) the predicted rating matrices belonging to the communities are merged, and root mean square error (RMSE), mean square error (MSE), and mean absolute error (MAE) are evaluated. In our paper, different matrix factorization approaches, namely basic MF, NMF, SVD++, and FANMF, are used along with the Louvain community detection method for dividing the communities. The experimental analysis is performed on five diverse datasets to enhance the quality of the recommendations. To determine the method's efficiency, the evaluation metrics RMSE, MSE, and MAE are used, and the computation time is also measured. It is observed that almost 95% of our results prove effective, yielding lower RMSE, MSE, and MAE values. Thus, the user's main aim of receiving accurate recommendations based on their experiences is satisfied.
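A structural sketch of the four-step pipeline reads as follows, using networkx's Louvain implementation and scikit-learn's NMF on a toy rating matrix; the paper applies several MF variants to much larger benchmark datasets.

```python
import numpy as np
import networkx as nx
from sklearn.decomposition import NMF

# Toy users x items rating matrix; 0 marks an unrated entry.
R = np.array([[5, 4, 0, 0],
              [4, 5, 0, 0],
              [0, 0, 5, 4],
              [0, 0, 4, 5]], dtype=float)

# 1) Model the rating matrix as a bipartite user-item graph.
G = nx.Graph()
for u in range(R.shape[0]):
    for i in range(R.shape[1]):
        if R[u, i] > 0:
            G.add_edge(f"u{u}", f"i{i}", weight=R[u, i])

# 2) Generate communities with Louvain (networkx >= 2.8).
communities = nx.community.louvain_communities(G, seed=0)

# 3) Factorize each community's sub-matrix independently (parallelizable).
R_hat = np.zeros_like(R)
for comm in communities:
    u_idx = sorted(int(n[1:]) for n in comm if n.startswith("u"))
    i_idx = sorted(int(n[1:]) for n in comm if n.startswith("i"))
    if not u_idx or not i_idx:
        continue
    sub = R[np.ix_(u_idx, i_idx)]
    model = NMF(n_components=2, init="random", random_state=0, max_iter=500)
    W = model.fit_transform(sub)
    R_hat[np.ix_(u_idx, i_idx)] = W @ model.components_

# 4) Merging is implicit in R_hat; evaluate RMSE on observed entries.
mask = R > 0
print("RMSE:", np.sqrt(np.mean((R[mask] - R_hat[mask]) ** 2)))
```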
Extending Graph-Based LP Techniques for Enhanced Insights Into Complex Hypergraph Networks
Source Title: IEEE Access, Quartile: Q1, DOI Link
Many real-world problems can be modelled in the form of complex networks. Social networks such as research collaboration networks and Facebook, biological neural networks such as human brains, biomedical networks such as drug-target interactions and protein-protein interactions, and technological networks such as telephone networks, transportation networks, and power grids are a few examples of complex networks. Any complex system with entities and interactions existing between the entities can be modelled mathematically as a graph, with nodes representing entities and edges reflecting interactions. In numerous real-world circumstances, interactions are not confined to pairs of entities. The majority of these intricate systems inherently possess hypergraph structures, characterized by interactions that extend beyond pairwise connections. Existing studies often transform complex interactions at a higher level into pairwise interactions and subsequently analyze them. This conversion frequently leads to both the loss of information and the inability to reconstruct the original hypergraph from the transformed network with pairwise interactions. One of the most essential tasks that can be performed on these graphs is Link Prediction (LP), the task of predicting future edges (links) in a graph. LP in graphs is well investigated. This article presents a novel methodology for predicting links in hypergraphs. Unlike conventional approaches that transform hypergraphs into graphs with pairwise interactions, the proposed method directly leverages the inherent structure of hypergraphs in predicting future interactions between a pair of nodes. This is motivated by the fact that hypergraphs enable the depiction of intricate higher-order relationships through hyperlinks, enhancing their representation. Their capacity to capture complex structural patterns improves predictive capabilities. Node neighborhoods within hypergraphs offer a comprehensive framework for LP, where hyperlinks simplify interactions between nodes across cliques. We propose a novel method of Link Prediction in Hypergraphs (LPH) to predict interactions within hypergraphs, maintaining their original structure without conversion to graphs, thus preserving information integrity. The proposed approach LPH extends local similarity measures like Common Neighbors, Jaccard Coefficient, Adamic-Adar, and Resource Allocation, along with a global measure, the Katz index, to hypergraphs. LPH's effectiveness is assessed on six benchmark hyper-networks, employing evaluation metrics such as Area Under the ROC Curve, Precision, and F1-score. The proposed measures of LP on hypergraphs resulted in an average enhancement of 10% in terms of Area Under the ROC Curve compared to contemporary as well as conventional measures. Additionally, there is an average improvement of 70% in precision and around 50% in F1-score. This methodology presents a promising avenue for predicting pairwise interactions within hypergraphs while retaining their inherent structural complexity as well as information integrity.
Exploring the Path: Machine Learning Approaches to Cardiovascular Risk Assessment
Dr Murali Krishna Enduri, Ms Tokala Srilatha, Nagacharitavya Madala, Sai Durga Sardhi Pranu Deepak Tallapudi, Mahitha Chimata, Venkata Srikari Malladi
Source Title: 2024 10th International Conference on Communication and Signal Processing (ICCSP), DOI Link
Cardiovascular disease refers to a variety of conditions affecting the heart and blood vessels, such as heart failure and coronary artery disease. Cardiovascular disease, a major global health concern, is frequently caused by atherosclerosis, a condition in which plaque builds up and obstructs blood flow. This study introduces a predictive modeling methodology utilizing various machine learning (ML) algorithms. Additionally, hybrid models including Random Forest-Gradient Boosting, Genetic Algorithm-Support Vector Machine (GA-SVM), AdaBoost-Support Vector Machine (AdaBoost-SVM), Logistic Regression-Principal Component Analysis (LR-PCA), and Gradient Boosting Machines-Decision Tree (GBM-DT) have been integrated into the analysis. Using two distinct datasets, our study focuses on proactive heart disease management, addressing a significant health challenge. Notably, the Random Forest-Gradient Boosting Machines (RF-GBM) hybrid model exhibited exceptional performance, achieving an impressive 93.5% accuracy for both datasets in predicting heart disease. These results highlight the effectiveness of our integrated approach in advancing predictive modeling for improved cardiovascular health management.
Isolating Centrality-Based Generalization of Traditional Centralities to Discover Vital Nodes in Complex Networks
Source Title: Arabian Journal for Science and Engineering, Quartile: Q1, DOI Link
The detection and ranking of influential nodes remains one of the key areas of research for understanding information diffusion, epidemic control, routing efficiency, and online influence in large-scale complex networks. Centrality measures have proven to be the most reliable methods in the literature for effectively capturing a node's influence. Based on the structural information incorporated, these measures can be classified as local centrality (PageRank, degree, etc.) and global centrality (betweenness, closeness, etc.) measures. Nevertheless, global measures require huge computational resources in large-scale networks, whereas local measures suffer from lower accuracy. To address these challenges, this work proposes a convex combination-based hybrid centrality method. Leveraging the proposed method, we design six novel centrality metrics, namely convex isolating betweenness centrality (CIBC), convex isolating clustering coefficient centrality (CICLC), convex isolating coreness score centrality (CICRS), convex isolating degree centrality (CIDC), convex isolating eigenvector centrality (CIEC), and convex isolating Katz centrality (CIKC). Next, we compare the effectiveness and computational efficiency of the proposed measures with traditional and recent measures under the SIR (susceptible-infected-recovered) model using real-world network datasets. Our comprehensive simulations validate the proposed convex centrality measures, showing enhanced spreading efficiency and modest improvements in time complexity.
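The convex-combination form can be written compactly; the equation below is a hedged reconstruction from the abstract's description, with ISC(v) denoting the isolating centrality of node v, C_t(v) one traditional centrality, and alpha the tuning parameter.

```latex
% Hedged reconstruction (assumed notation): blending isolating centrality
% with one traditional centrality via a convex tuning parameter; e.g.
% C_t = degree centrality yields CIDC, C_t = Katz centrality yields CIKC.
\[
  C_{\mathrm{hybrid}}(v) = \alpha \, \mathrm{ISC}(v) + (1 - \alpha) \, C_t(v),
  \qquad \alpha \in [0, 1].
\]
```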
Node Significance Analysis in Complex Networks Using Machine Learning and Centrality Measures
Source Title: IEEE Access, Quartile: Q1, DOI Link
The study addresses the limitations of traditional centrality measures in complex networks, especially in disease-spreading situations, due to their inability to fully grasp the intricate connection between a node's functional importance and structural attributes. To tackle this issue, the research introduces an innovative framework that employs machine learning techniques to evaluate the significance of nodes in transmission scenarios. This framework incorporates various centrality measures like degree, clustering coefficient, Katz, local relative change in average clustering coefficient, average Katz, and average degree (LRACC, LRAK, and LRAD) to create a feature vector for each node. These methods capture diverse topological structures of nodes and incorporate the infection rate, a critical factor in understanding propagation scenarios. To establish accurate labels for node significance, propagation tests are simulated using epidemic models (SIR and Independent Cascade models). Machine learning methods are employed to capture the complex relationship between a node's true spreadability and infection rate. The performance of the machine learning model is compared to traditional centrality methods in two scenarios. In the first scenario, training and testing data are sourced from the same network, highlighting the superior accuracy of the machine learning approach. In the second scenario, training data from one network and testing data from another are used, where LRACC, LRAK, and LRAD outperform the machine learning methods.
IoT Task Offloading in Edge Computing Using Non-Cooperative Game Theory for Healthcare Systems
Source Title: CMES - Computer Modeling in Engineering and Sciences, Quartile: Q2, DOI Link
We present a comprehensive system model for Industrial Internet of Things (IIoT) networks empowered by Non-Orthogonal Multiple Access (NOMA) and Mobile Edge Computing (MEC) technologies. The network comprises essential components such as base stations, edge servers, and numerous IIoT devices characterized by limited energy and computing capacities. The central challenge addressed is the optimization of resource allocation and task distribution while adhering to stringent queueing delay constraints and minimizing overall energy consumption. The system operates in discrete time slots and employs a quasi-static approach, with a specific focus on the complexities of task partitioning and the management of constrained resources within the IIoT context. This study makes valuable contributions to the field by enhancing the understanding of resource-efficient management and task allocation, particularly relevant in real-time industrial applications. Experimental results indicate that our proposed algorithm significantly outperforms existing approaches, reducing queue backlog by 45.32% and 17.25% compared to SMRA and ACRA while achieving a 27.31% and 74.12% improvement in Q. Moreover, the algorithm effectively balances complexity and network performance, as demonstrated when reducing the number of devices in each group (N) from 200 to 50, resulting in a 97.21% reduction in complexity with only a 7.35% increase in energy consumption. This research offers a practical solution for optimizing IIoT networks in real-time industrial settings.
Convex Isolating Clustering Centrality to Discover the Influential Nodes in Large Scale Networks
Source Title: IEEE Access, Quartile: Q1, DOI Link
Ranking influential nodes within complex networks offers invaluable insights into a wide array of phenomena ranging from disease management to information dissemination and optimal routing in real-time networking applications. Centrality measures, which quantify the importance of nodes based on network properties and relationships of nodes within the network, are instrumental in achieving this task. These measures are typically classified into local and global centralities. Global measures consider the overall structure and connectivity patterns. However, they often suffer from high computational complexity in large-scale networks. On the other hand, local measures focus on the immediate neighborhood of each node, potentially overlooking global information. To address these challenges, we propose a novel metric called Isolating Clustering Centrality (ISCL), which leverages a convex combination approach. By introducing a convex tuning parameter, ISCL enhances the applicability and adaptability of centrality measures across a wide range of real-world network applications. In this study, we assess the efficacy of the proposed measure using real-world network datasets and simulate the spreading process using susceptible-infected-removed (SIR) and independent cascade (IC) models. Our extensive results demonstrate that ISCL significantly improves spreading efficiency compared to conventional and recent centrality measures, while also maintaining better computational efficiency in large-scale complex networks.
Identifying and Ranking of Best Influential Spreaders with Extended Clustering Coefficient Local Global Centrality Method
Source Title: IEEE Access, Quartile: Q1, DOI Link
The detection and ranking of influential nodes in complex networks are crucial for various practical applications such as identifying potential drug targets in protein-to-protein interaction networks, critical devices in communication networks, key people in social networks, and transportation hubs in logistics networks. Knowledge of influential spreaders in complex networks is extremely useful for controlling the spread of information. Centrality measures are known for effectively quantifying influential-node information in large-scale complex networks. Researchers have proposed different centrality measures in the literature, including Degree, Betweenness, Closeness, and Clustering coefficient centralities. However, these measures have certain limitations when implemented over large-scale complex networks. Most of these measures can be classified as global or local structural approaches. Global-structure-based algorithms are too complex to evaluate key nodes, particularly in large-scale networks, whereas local measures overlook essential global network information. To address these challenges, an extended clustering coefficient local global centrality (ECLGC) is proposed, which combines local and global structural information to measure a node's influence in large-scale networks. The effectiveness and computational efficiency of the proposed measure are compared with existing centrality measures on real-world network datasets. The Susceptible-Infected-Recovered (SIR) model is utilized to evaluate the ability of ECLGC to capture high information dissemination compared to conventional measures. Further, we demonstrate that the proposed measure outperforms conventional measures in terms of spreading efficiency.
Navigating Social Networks: A Hypergraph Approach to Influence Optimization
Source Title: COMPLEXIS 2024- 9th International Conference on Complexity, Future Information Systems and Risk, DOI Link
We introduce a novel approach to influence optimization in social networks by leveraging the mathematical framework of hypergraphs. Traditional centrality measures often fall short in capturing the multi-dimensional nature of influence. To address this gap, we propose the Spreading Influence (SI) model, a sophisticated tool designed to quantify the propagation potential of nodes more accurately within hypergraphs. Our research embarked on a comparative analysis using the Susceptible-Infected-Recovered (SIR) model across four distinct scenarios (where the top 5, 10, 15, and 20 nodes were initially infected) on four diverse datasets: Amazon, DBLP, Email-Enron, and Cora. The SI model's performance was benchmarked against established centrality measures: Hyperdegree Centrality (HDC), Closeness Centrality (CC), Betweenness Centrality (BC), and Hyperedge Degree Centrality (HEDC). The findings underscored the SI model's consistently superior performance in predicting influence spread. In scenarios involving the top 10 nodes, the model exhibited up to 3.18% increased influence spread over HDC, 2.14% over CC, 1.04% over BC, and 1.69% over HEDC. This indicates a substantial improvement in identifying key influencers within networks.
Link Prediction in Complex Networks Using Average Centrality-Based Similarity Score
Source Title: Entropy, Quartile: Q1, DOI Link
Link prediction plays a crucial role in identifying future connections within complex networks, facilitating the analysis of network evolution across various domains such as biological networks, social networks, recommender systems, and more. Researchers have proposed various centrality measures, such as degree, clustering coefficient, betweenness, and closeness centralities, to compute similarity scores for predicting links in these networks. These centrality measures leverage both the local and global information of nodes within the network. In this study, we present a novel approach to link prediction using similarity scores based on average local and global centralities, namely Similarity based on Average Degree, Similarity based on Average Betweenness, Similarity based on Average Closeness, and Similarity based on Average Clustering Coefficient. Our approach involves determining centrality scores for each node, calculating the average centrality for the entire graph, and deriving similarity scores through common neighbors. We then apply centrality scores to these common neighbors and identify nodes with above-average centrality. To evaluate our approach, we compared the proposed measures with existing local similarity-based link prediction measures, including common neighbors, the Jaccard coefficient, Adamic-Adar, resource allocation, and preferential attachment, as well as recent measures such as the Centrality-based Parameterized Algorithm and keyword network link prediction. We conducted experiments on four real-world datasets. The proposed similarity scores based on average centralities demonstrate significant improvements. We observed an average enhancement of 24% in terms of Area Under the Receiver Operating Characteristic (AUROC) compared to existing local similarity measures, and a 31% improvement over recent measures. Furthermore, we witnessed average improvements of 49% and 51% in the Area Under Precision-Recall (AUPR) compared to existing and recent measures, respectively. Our comprehensive experiments highlight the superior performance of the proposed method.
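A hedged sketch of that scoring rule: for each candidate pair, sum the centralities of common neighbors whose centrality exceeds the graph-wide average. The function name and the use of a raw sum are assumptions for illustration, shown here with degree centrality in the spirit of Similarity based on Average Degree.

```python
# Sketch of an average-centrality similarity score: only common neighbors with
# above-average centrality contribute to a candidate pair's score.
import networkx as nx

def similarity_above_average(G, centrality):
    avg = sum(centrality.values()) / len(centrality)
    scores = {}
    for u, v in nx.non_edges(G):
        common = set(G.neighbors(u)) & set(G.neighbors(v))
        scores[(u, v)] = sum(centrality[w] for w in common if centrality[w] >= avg)
    return scores

G = nx.karate_club_graph()
sad = similarity_above_average(G, nx.degree_centrality(G))
print(sorted(sad.items(), key=lambda kv: kv[1], reverse=True)[:5])
```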
Cognitive Algorithms: Machine Learning’s Role in Alzheimer’s Early Detection
Dr Murali Krishna Enduri, Ms Tokala Srilatha, Lakshmi Sathvika Kurmala, Srujitha Devineni, Tanya Kavuru, Vijaya Vyshnavi Muvvala
Source Title: 2024 IEEE 9th International Conference for Convergence in Technology (I2CT), DOI Link
Alzheimer's disease is a neurodegenerative disorder that poses a particular global healthcare challenge. Early diagnosis is required for effective treatment. This research explores the potential of machine learning and deep learning techniques for predicting Alzheimer's disease. The datasets encompass both numerical data and structural MRI scans, including cognitive test scores and genetic markers from individuals with and without Alzheimer's. A dataset containing various MRI scans, neuroimaging data, and features was used to train and evaluate machine learning models. Numerous feature engineering and selection techniques were applied to enhance model performance. Various classification algorithms were used in the implementation to predict Alzheimer's disease. These models were rigorously evaluated using different metrics. The results indicate that ML models can effectively predict the disease based on a combination of neuroimaging features. This demonstrates the potential of ML in aiding early Alzheimer's disease diagnosis, which is important for personalized treatment. Future work may involve refining and validating these models and exploring the integration of multi-modal data sources for even more robust predictions.
Machine and Deep Learning Approaches for Crop Disease Detection: An In-Depth Analysis
Dr Murali Krishna Enduri, Ms Tokala Srilatha, Chandu H P., Gayam V., Kotipalli K D., Ramireddygari P.
Source Title: 2024 IEEE 9th International Conference for Convergence in Technology, I2CT 2024, DOI Link
Agriculture provides a livelihood for nearly two and a half billion of the world's population. It employs around 58 percent of Indians, making it the largest employment sector in India. Despite this high employment rate, India's agricultural sector has lower crop yields than the global average. This is due to many factors, such as erratic rains, excessive use of pesticides and fertilizers, and diseases. Pests and diseases cause crop losses of over Rs 290 billion per annum in India. Crop diseases can have a notable impact on crop productivity, leading to losses for farmers; this is a worldwide problem. Early detection of disease is crucial to prevent crop damage. Mostly, detection of these diseases is done manually, which is time-consuming and may not be accurate. Embracing automatic crop disease detection therefore becomes imperative for identifying diseases in their early stages efficiently. Integration of technology in agriculture helps farmers overcome various challenges. Using machine learning and deep learning to detect crop diseases can assist farmers in keeping a close eye on their crops as they grow, ensuring healthier plants and better yields. Our main objective is to employ machine learning models for crop disease detection. We use the popular PlantVillage and Plant Pathology datasets, consisting of images of different crops, and implement Random Forest, CNN, and SVM algorithms for classification. The results obtained are promising in detecting crop diseases.
A Novel Convex Combination-based Mixed Centrality Measure for Identification of Influential Nodes in Complex Networks
Source Title: IEEE Access, Quartile: Q1, DOI Link
Exploring the impact of influential nodes in complex networks yields numerous advantages, such as improving network resilience and accelerating information dissemination. While conventional centrality measures accurately quantify individual node importance, they may inadvertently overlook certain properties of influential nodes. The quest for new centrality metrics has garnered substantial research attention due to their theoretical relevance and practical applicability in real-world network scenarios. Existing research has predominantly focused on designing centrality metrics based on the local and/or global topological characteristics of nodes. Nevertheless, these metrics do not consider nodes located in the intermediary zones between the inner and outer regions of a network, resulting in reduced effectiveness when applied to large-scale network scenarios. To address these challenges, we introduce a novel convex framework to formulate the Convex Mixed Centrality (COMC) measure. This metric aims to overcome the limitations of traditional centrality metrics by incorporating insights from both local and global network dynamics, thus enhancing its ability to identify influential nodes across various network regions. To prove the efficacy of our proposed measure, we utilize the Susceptible-Infected-Recovered (SIR) and Independent Cascade (IC) models, alongside the Kendall tau metric. Extensive simulation experiments conducted on various real-world datasets demonstrate that the COMC measure outperforms conventional centrality indices in terms of spreading efficiency, all while maintaining comparable computational complexity.
Enhanced Movie Recommender system using Deep Learning Techniques
Source Title: Proceedings - 2024 3rd International Conference on Computational Modelling, Simulation and Optimization, ICCMSO 2024, DOI Link
Recommender systems filter user preferences and browsing history to provide recommendations. These recommendations are used to capture user interests for decision making, based on the user's likes and dislikes. We use deep learning techniques to enhance movie recommendations, given their ability to extract meaningful patterns from large volumes of data. This study uses Artificial Neural Networks (ANN) to learn features from user behavior and movie metadata, and Recurrent Neural Networks (RNN) to capture temporal patterns in user preferences, thereby enhancing recommendation accuracy by considering both short-term and long-term factors. Additionally, Convolutional Neural Networks (CNN) enhance model capabilities by focusing on spatial correlations in the input data. By incorporating the CNN's ability to extract hierarchical representations of structural and visual aspects into our recommendation system, we intend to improve content understanding. These techniques play a vital role in providing recommendations by enabling personalized preferences for users. The models are trained on diverse datasets using user ratings and viewing history. Model performance on these datasets shows decreased mean squared and mean absolute errors. This research shows how ANN, RNN, and CNN algorithms can provide reliable movie suggestions.
Quantifying Node Influence in Networks: Isolating-Betweenness Centrality for Improved Ranking
Dr Murali Krishna Enduri, Mondikathi Chiranjeevi, Mr Koduru Hajarathaiah, Dhuli V S., Cenkeramaddi L R
Source Title: IEEE Access, Quartile: Q1, DOI Link
In complex networks, node impact refers to an individual node's significance or influence within the structure. The impact of nodes has been studied in information transmission, pandemic prevention, and infrastructure resilience applications. Centrality measures are crucial for understanding the impact of particular nodes in the network structure. Most centrality measures, such as degree centrality, betweenness centrality, and eigenvector centrality, provide influential-node information based on network aspects such as connection patterns, communication paths, and influence propagation dynamics. However, these centrality measures struggle to capture both local and global information while balancing time complexity and spreading efficiency. This paper proposes an Isolating-Betweenness Centrality (ISBC) for quantifying node impact by incorporating the properties of Betweenness Centrality and Isolating Centrality. The proposed measure evaluates a node's impact by considering local and global structural influence. We employ the SIR and IC epidemic models to evaluate ISBC's performance compared with conventional and recent centrality measures on real-world datasets. Furthermore, we show that the proposed measure exhibits improved spreading efficiency over recent and conventional measures with moderate time complexity.
Link Prediction in Complex Networks: An Empirical Review
Source Title: Intelligent Data Engineering and Analytics, DOI Link
Any real-world system with entities and interactions between them can be modeled as a complex network. Complex networks are mathematically modeled as graphs, with nodes denoting entities and edges (links) depicting the interactions between entities. Many analytical tasks can be performed on such networks. Link prediction (LP) is one such task, which predicts missing/future links in a complex network modeled as a graph. Link prediction has potential applications in the domains of biology, ecology, physics, computer science, and many more. Link prediction algorithms can be used to predict future scientific collaborations in a collaboration network, recommend friends/connections in a social network, and anticipate future interactions in a molecular interaction network. The task of link prediction utilizes information pertaining to the graph, such as node neighborhoods and paths. The main focus of this work is to empirically evaluate the efficacy of a few neighborhood-based measures for link prediction. Complex networks are very large in size and sparse in nature, so choosing the candidate node pairs for future link prediction is one of the hardest tasks. The majority of existing methods consider all node pairs without an edge as candidates, compute a prediction score, and output the node pairs with the highest prediction scores as future links. Due to the massive size and sparse nature of complex networks, examining all node pairs results in a large number of false positives. A few existing works select only a subset of node pairs as candidates for prediction. In this study, a sample of candidate pairs for LP is chosen based on the hop distance between the nodes; see the sketch below. Five similarity-based LP measures are chosen for experimentation. The experimentation on six benchmark datasets from four domains shows that a hop distance of at most three is optimal for the prediction task.
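The hop-distance candidate selection can be sketched as follows: enumerate only non-adjacent pairs within h hops rather than all non-edges, which shrinks the candidate set sharply on sparse graphs. The function name is illustrative.

```python
# Candidate selection by hop distance: keep node pairs at distance 2..h only.
import networkx as nx

def candidates_within_hops(G, h=3):
    pairs = set()
    for u in G.nodes():
        lengths = nx.single_source_shortest_path_length(G, u, cutoff=h)
        for v, d in lengths.items():
            # d >= 2 skips the node itself (d == 0) and existing edges (d == 1).
            if d >= 2 and not G.has_edge(u, v):
                pairs.add((min(u, v), max(u, v)))
    return pairs

G = nx.karate_club_graph()
print(len(candidates_within_hops(G, h=3)), "candidate pairs vs",
      sum(1 for _ in nx.non_edges(G)), "all non-edges")
```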
Redundant Transmission Control Algorithm for Information-Centric Vehicular IoT Networks
Dr Satish Anamalamudi, Dr Murali Krishna Enduri, Abdur Rashid Sangi, Mohammed S Alkatheiri, Chettupally Anil Carie, Mohammed A Alqarni
Source Title: Computers, Materials and Continua, Quartile: Q1, DOI Link
Vehicular Ad-hoc Networks (VANETs) enable vehicles to act as mobile nodes that can fetch, share, and disseminate information about vehicle safety, emergency events, warning messages, and passenger infotainment. However, the continuous dissemination of information from vehicles and their one-hop neighbor nodes, Road Side Units (RSUs), and VANET infrastructures can lead to performance degradation of VANETs in the existing host-centric IP-based network. Therefore, Information Centric Networks (ICN) are being explored as an alternative architecture for vehicular communication to achieve robust content distribution in highly mobile, dynamic, and error-prone domains. In ICN-based Vehicular-IoT networks, consumer mobility is implicitly supported, but producer mobility may result in redundant data transmission and caching inefficiency at intermediate vehicular nodes. This paper proposes an efficient redundant transmission control algorithm based on network coding to reduce data redundancy and accelerate the efficiency of information dissemination. The proposed protocol, called Network Coding Multiple Solutions Scheduling (NCMSS), is a receiver-driven collaborative scheduling scheme between requesters and information sources that uses a global parameter, the expectation deadline, to effectively manage the transmission of encoded data packets and control the selection of information sources. Experimental results for the proposed NCMSS protocol are presented to analyze the performance of ICN-vehicular-IoT networks in terms of caching, data retrieval delay, and end-to-end application throughput. The end-to-end throughput in the proposed NCMSS is 22% higher (for 1024-byte data) than existing solutions, whereas the delay in NCMSS is reduced by 5% in comparison with existing solutions.
Acute Lymphoblastic Leukemia Blood Cells Prediction Using Deep Learning & Transfer Learning Technique
Dr Murali Krishna Enduri, Ms Tokala Srilatha, Omkar Subhash Ghongade, S Kiran Sai Reddy, Yaswanth Chowdary Gavini
Source Title: Indonesian Journal of Electrical Engineering and Informatics, Quartile: Q3, DOI Link
White blood cells called lymphocytes are the target of the blood malignancy known as acute lymphoblastic leukemia (ALL). In the domain of medical image analysis, deep learning and transfer learning methods have recently showcased significant promise, particularly in tasks such as identifying and categorizing various types of cancer. In this research work, we propose a deep learning and transfer learning-based method for predicting ALL blood cells from microscopic images. We use a pre-trained convolutional neural network (CNN) model to extract pertinent features from the microscopic images of blood cells during the feature extraction step. To accurately categorize the blood cells into leukemia and non-leukemia classes, a classification model is built using a transfer learning technique employing the extracted features. We use a publicly accessible collection of microscopic blood cell images, which contains samples from both leukemia and non-leukemia classes, to assess the suggested method. Our experimental findings show that the suggested method successfully predicts ALL blood cells with high accuracy. The method enhances early ALL detection and diagnosis, which may result in better patient treatment outcomes. Future research will concentrate on larger and more varied datasets and investigate the viability of integrating the method into clinical processes for real-time ALL prediction.
Advancements in Sentiment Analysis: A Deep Learning Approach
Dr Satish Anamalamudi, Dr Murali Krishna Enduri, Mr Koduru Hajarathaiah, Yogeshvar Reddy Kallam, Lovely Yeswanth Panchumarthi, Lavanya Parchuri
Source Title: 2023 IEEE 15th International Conference on Computational Intelligence and Communication Networks (CICN), DOI Link
Sentiment analysis, a pivotal discipline in the digital era, revolves around the nuanced task of categorizing user sentiments within textual data. This research embarks on an exhaustive exploration of diverse sentiment analysis models, comprising Convolutional Neural Networks (CNNs), Long Short-Term Memory Networks (LSTMs), Support Vector Machines (SVMs), and a Baseline Model. Through a rigorous comparative analysis of their performance across varied datasets, this study illuminates the unique strengths and limitations inherent to each model. Furthermore, the research extends beyond the realm of academic inquiry to unveil the practical applications of sentiment analysis. It underscores the profound impact of sentiment analysis in contemporary data-driven decision-making, illustrating its significance across multifaceted domains such as marketing, social media monitoring, finance, customer service, and public sentiment analysis. This investigation seeks to empower stakeholders with invaluable insights, thereby facilitating informed choices and strategies in the ever-evolving digital landscape.
Convolutional Neural Networks for Automated Glaucoma Detection: Performance and Limitations
Dr Murali Krishna Enduri, Ms Tokala Srilatha, Akash Bayyana, Jeyanand Vemulapati, Sai Hemanth Bathula, Gangula Rakesh
Source Title: 2023 14th International Conference on Computing Communication and Networking Technologies (ICCCNT), DOI Link
Glaucoma is a set of eye disorders that, if left untreated, can cause optic nerve damage, resulting in vision loss and blindness. While glaucoma is often linked with high eye pressure, it can also develop with normal or low pressure. The most common variety, primary open-angle glaucoma, is known as the silent thief of sight because it causes slow vision loss with no symptoms. Ethnicity, age, diabetes, family history, and hypertension are all risk factors. Regular eye exams are crucial for early detection. To aid in glaucoma detection, a model utilizing eye fundus images is proposed. Fundus images provide valuable information about the optic nerve's health and abnormalities. The model employs a Convolutional Neural Network (CNN) to classify fundus images and detect glaucoma. By automating the process, the proposed system aims to improve accuracy. This CNN-based model has the potential to enhance glaucoma detection, enabling prompt interventions and better patient outcomes.
Performance Evaluation of Machine Learning and Neural Network Algorithms for Wine Quality Prediction
Dr Murali Krishna Enduri, Dr Satish Anamalamudi, Ms Tokala Srilatha, Harika Kakarala, Asish Karthikeya Gogineni, Thadi Venkata Satya Murty
Source Title: 2023 14th International Conference on Computing Communication and Networking Technologies (ICCCNT), DOI Link
The assessment of wine quality is of paramount importance to both consumers and the wine industry. Recognizing its impact on customer satisfaction and business success, companies are increasingly turning to product quality certification to enhance sales in the global beverage market. Traditionally, quality testing was conducted towards the end of the manufacturing process, resulting in time-consuming and resource-intensive procedures. This approach involved the engagement of multiple human experts to evaluate wine quality, leading to high costs. Moreover, since taste perception is subjective and varies among individuals, relying solely on human specialists for assessing wine quality presents significant challenges. Our research focuses on advancing wine quality prediction by leveraging diverse characteristics of wine. We applied various feature selection techniques and explored machine learning algorithms to identify the optimal combination of parameters for accurate wine quality prediction. This approach reduces the time and costs associated with traditional quality assessment methods and provides a more standardized and consistent evaluation process. Our findings contribute to the advancement of wine industry practices, enabling businesses to make informed decisions and deliver high-quality products that meet consumer expectations.
Deep Learning Approaches for Detecting Psychological Instability: An Evaluation of Performance
Source Title: 2023 14th International Conference on Computing Communication and Networking Technologies (ICCCNT), DOI Link
These days, people all around the world put in a lot of effort to keep up with a rapidly changing world. As a result, many people have to cope with a variety of health problems, the most well-known of which are depression and stress, which can ultimately result in death or other tragic actions. These irregularities can be referred to as psychological instability, which can be treated by pursuing some form of therapy advised by medical professionals. For this research, a dataset was taken from the internet and processed using neural networks (NN); a few machine learning models were also used to check accuracy. In addition, an interface was created consisting of a variety of questions related to the dataset considered in this work. For the instability identification process, we use fuzzy techniques, since the answers to these questions are designed accordingly. Based on the answers received, the output indicates the level of instability a person is experiencing, so that the person can get the required treatment.
A Hybrid Deep Learning Framework for Efficient Sentiment Analysis
Dr Murali Krishna Enduri, Mr Koduru Hajarathaiah, Asish Karthikeya Gogineni, S Kiran Sai Reddy, Harika Kakarala, Yaswanth Chowdary Gavini, M Pavana Venkat
Source Title: International Journal of Advanced Computer Science and Applications, Quartile: Q3, DOI Link
In the era of microblogging and the rapid growth of online platforms, an exponential rise is seen in the volume of data generated by internet users across various domains. Additionally, the creation of digital or textual data is expanding significantly, because consumers respond to comments made on social media platforms regarding events or products based on their personal experiences. Sentiment analysis is usually used to accomplish this kind of classification on a large scale. It is described as the process of going through all user reviews and comments found in product reviews, events, or similar sources in order to analyze unstructured text comments. Our study examines how deep learning models like LSTM, GRU, CNN, and hybrid models (LSTM+CNN, LSTM+GRU, GRU+CNN) capture complex sentiment patterns in text data. Additionally, we study integrating BOW and TF-IDF as complementary features to improve model predictive power. Combining CNNs with RNNs consistently improves outcomes, demonstrating the synergy between convolutional and recurrent neural network architectures in recognizing nuanced emotional subtleties. In addition, TF-IDF typically outperforms BOW in enhancing the sentiment analysis accuracy of deep learning models.
Forecasting Stock Markets Trends using Machine Learning Algorithms
Dr Murali Krishna Enduri, Ms Tokala Srilatha, Lahari Kotapati, Reethu Bhargavi Sajjala, Sricharan Gudi, Beecha Venkata Naga Hareesh
Source Title: 2023 IEEE 15th International Conference on Computational Intelligence and Communication Networks (CICN), DOI Link
Prices on the stock market vary frequently as a result of many economic, political, and social factors; it is a fluid and challenging environment. Investors look for efficient tools to enhance their investing strategy and make well-informed judgements. This study investigates how stock market fluctuations are predicted using machine learning approaches and how well these approaches can identify complex patterns. Various algorithms, including Logistic Regression, K-Nearest Neighbours, Support Vector Machines, Random Forest, Decision Tree, Naive Bayes, Long Short-Term Memory, Multilayer Perceptron, and XGBoost, are evaluated and compared based on their accuracy, precision, recall, and F1 score. The Multilayer Perceptron emerges as the most accurate predictor, showcasing its ability to handle complex relationships and learn from historical data. The results of this study provide insightful information for investors who want to use machine learning predictions in their decision-making.
Ranking Popular Personalities in Social Networks Using Mixed Centrality Method
Source Title: 2023 IEEE 15th International Conference on Computational Intelligence and Communication Networks (CICN), DOI Link
In today's world, social networks play a crucial role by providing individuals with a platform to engage, exchange knowledge, and exert influence on others. Finding popular personalities in these networks is important for several purposes, including marketing, information sharing, and opinion forming. To determine influential individuals in social networks, traditional centrality metrics such as degree centrality and betweenness centrality have been extensively used. However, these measurements usually concentrate on either the local or the global topological structure, inside or outside of network communities. Specifically, many measures are limited in their capacity to capture a node's total impact on small-scale networks. To address these challenges, we design a new measure called mixed centrality (MC2), which focuses on combining the local and global structure of the network. To show the effectiveness of the proposed measure, we compare it with Degree, Betweenness, Closeness, Clustering-Coefficient, and Local and Global centrality measures. We employ the SIR (Susceptible-Infected-Recovered) model to investigate the maximum data dissemination of our centrality metric compared to traditional measures, and we carry out in-depth simulations on large real-world datasets such as fb-pages-company, facebook-combined, fb-pages-public-figure, and fb-pages-government.
Predictive Modeling for Heart Disease Detection with Machine Learning
Dr Murali Krishna Enduri, Ms Tokala Srilatha, Rushita Gandham, Keerthi Reddy Manambakam, Sai Venkat Naveen Madala, Navya Sri Nannapaneni
Source Title: 2023 IEEE 15th International Conference on Computational Intelligence and Communication Networks (CICN), DOI Link
Debilitating health symptoms brought on by heart disease reduce people's quality of life and impose serious pain, discomfort, and restrictions on daily activities. It places a heavy burden on economies, healthcare systems, and society at large. Given its influence on public health, accurate cardiac disease prediction can significantly contribute to prevention, treatment, and essential assistance for healthcare personnel facing this ailment. This study uses the most recent developments in machine learning techniques to build an accurate model for heart disease prediction. The heart disease prediction and Cleveland datasets, which combine approximately 13 important patient history variables, are used to analyze data from people with and without heart disease. XGBoost, Naive Bayes, logistic regression, decision trees, support vector machines, random forests, and k-nearest neighbors are among the machine learning techniques used in the model development for classification. By applying these machine learning techniques, we can increase the precision and effectiveness of identifying persons at risk of heart disease and enable prompt therapies. According to the findings of this study, XGBoost, decision trees, and random forests consistently produce high-accuracy predictions of heart disease.
Discovering Vital Nodes in Complex Networks Using Isolating Extended Coreness Score
Source Title: 2023 IEEE 15th International Conference on Computational Intelligence and Communication Networks (CICN), DOI Link
Identifying vital nodes involves the task of pinpointing the most essential nodes within intricate networks. This challenge holds significant implications across different domains, including areas like viral marketing and managing the spread of viruses or rumors within real-world networks. Numerous techniques have been proposed for ranking influential nodes in complex networks, spanning from node centrality to diffusion-based processes. K-shell coreness centrality is employed in network analysis to evaluate the structural significance of nodes. The process involves k-shell decomposition, which categorizes nodes into shells according to their connectivity patterns. However, these measures are based only on the coreness of the nodes themselves. We propose an extended coreness score for finding vital nodes based on the coreness of a node and its neighbors, along with the degree. The foundation of degree centrality lies in the principle that the most highly connected node is also the most central within the network. By combining the degree and isolating centralities, we propose the isolating extended coreness score. We employ the SIR (Susceptible-Infected-Recovered) model to analyze the maximum information spread achieved by the proposed measure in comparison to conventional centralities. We apply the proposed centrality measure to various real-world networks to identify vital nodes. Additionally, we compare these results with existing basic centrality measures.
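A sketch of the extended coreness idea under stated assumptions: sum a node's k-shell index, its neighbors' coreness, and its degree. The unweighted sum and the omission of the isolating component are simplifications for illustration; the paper's exact combination may differ.

```python
# Extended coreness sketch: own coreness + neighbors' coreness + degree.
import networkx as nx

def extended_coreness(G):
    core = nx.core_number(G)  # k-shell index of every node
    scores = {}
    for v in G.nodes():
        neighbor_core = sum(core[u] for u in G.neighbors(v))
        scores[v] = core[v] + neighbor_core + G.degree(v)
    return scores

G = nx.karate_club_graph()
print(sorted(extended_coreness(G).items(), key=lambda kv: kv[1], reverse=True)[:5])
```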
Unleashing the Power of SVD and Louvain Community Detection for Enhanced Recommendations
Source Title: 2023 IEEE 15th International Conference on Computational Intelligence and Communication Networks (CICN), DOI Link
Recommendation systems play a vital role in delivering personalized content to users, thereby enhancing their overall experiences across diverse applications. Collaborative filtering based recommendation systems have demonstrated success through the application of matrix factorization techniques. However, the incessant growth in dataset size and complexity presents challenges regarding the scalability of recommendation algorithms. Consequently, addressing these scalability concerns becomes imperative to ensure the seamless functioning of recommendation systems in handling increasingly large and diverse datasets. This research introduces an innovative method that seamlessly integrates matrix factorization techniques and community detection algorithms to effectively tackle the scalability issue in recommendation systems. Through numerous experiments utilizing real-world datasets, the proposed method's efficiency is thoroughly assessed. These compelling findings underscore the method's potential as a promising solution for constructing robust and scalable recommendation systems effectively. Ultimately, the overarching objective is to enhance user experiences by providing personalized and relevant content recommendations that cater to the evolving needs of modern recommendation systems. By optimizing scalability and recommendation accuracy, this innovative approach seeks to elevate the efficacy and user satisfaction of recommendation systems across various domains.
Algorithms for Finding Influential People with Mixed Centrality in Social Networks
Source Title: Arabian Journal for Science and Engineering, Quartile: Q1, DOI Link
Identifying the seed nodes in networks is an important task for understanding the dynamics of information diffusion. It has many applications, such as energy usage/consumption, rumor control, viral marketing, and opinion monitoring. When compared to other nodes, seed nodes have the potential to spread information through the majority of a network. To identify seed nodes, researchers have proposed centrality measures based on network structure. Centrality measures based on local structure include degree, semi-local, and PageRank centralities; centrality measures based on global structure include betweenness, closeness, and eigenvector centralities. Very few centrality measures exist that are based on both the local and global structure of a network. We define mixed centrality measures based on the local and global structure of the network. We propose a measure based on degree, the shortest path between vertices, and any global centrality. We generalize the definition of our mixed centrality, so that any measure defined on a network's global structure can be used. Using this mixed centrality, we identify the seed nodes of various real-world networks. We also show that this mixed centrality gives good results compared with existing basic centrality measures. We also tune different real-world parameters to study their effect on maximum influence.
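As a rough template for this generalized definition, the sketch below combines degree, an average shortest-path term, and a pluggable global centrality; the weights and exact functional form are assumptions for illustration, not the paper's formula.

```python
# Generalized mixed-centrality template with a pluggable global measure.
import networkx as nx

def mixed_centrality(G, global_measure=nx.betweenness_centrality, alpha=0.5):
    # Assumes a connected graph so average distances are well defined.
    deg = nx.degree_centrality(G)
    glob = global_measure(G)
    scores = {}
    for v in G.nodes():
        lengths = nx.single_source_shortest_path_length(G, v)
        avg_dist = sum(lengths.values()) / (len(lengths) - 1)
        # Degree discounted by average distance, blended with the global term.
        scores[v] = alpha * deg[v] / avg_dist + (1 - alpha) * glob[v]
    return scores

G = nx.karate_club_graph()
print(sorted(mixed_centrality(G).items(), key=lambda kv: kv[1], reverse=True)[:5])
```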
ICDC: Ranking Influential Nodes in Complex Networks based on Isolating and Clustering Coefficient Centrality Measures
Source Title: IEEE Access, Quartile: Q1, DOI Link
Over the past decade, there has been extensive research conducted on complex networks, primarily driven by their crucial role in understanding various real-world networks such as social networks, communication networks, transportation networks, and biological networks. Ranking influential nodes is one of the fundamental research problems in the areas of rumor spreading, disease research, viral marketing, and drug development. Influential nodes in any network are used to disseminate information as fast as possible. Centrality measures are designed to quantify a node's significance and rank the influential nodes in complex networks. However, these measures typically focus on either the local or the global topological structure within and outside network communities. In particular, many measures are limited in their ability to capture a node's overall impact on small-scale networks. To address these challenges, we develop a novel centrality measure called Isolating Clustering Distance Centrality (ICDC) by integrating the isolating and clustering coefficient centrality measures. The proposed metric gives a more thorough assessment of a node's importance by integrating local isolation and global topological influence in large-scale complex networks. We employ the SIR and ICM epidemic models to study the efficiency of ICDC against traditional centrality measures across real-world complex networks. Our experimental findings consistently highlight the superior efficacy of ICDC in terms of fast spreading and computational efficiency when compared to existing centrality measures.
Improving Skin Disease Diagnosis with Deep Learning: A Comprehensive Evaluation
Source Title: 2023 14th International Conference on Computing Communication and Networking Technologies (ICCCNT), DOI Link
Skin diseases are among the deadliest maladies, yet dermatological problems can be challenging to diagnose precisely. In the present study, deep learning is used to identify skin diseases. Such a tool would be more efficient than manual procedures, which are time-consuming and call for expert help. This paper's main goal is to provide an in-depth conceptual review of current advancements in deep learning-based skin disease identification. Although deep learning is gaining popularity, there are still many problems to be solved and much more study to be done. The present manual procedures for diagnosing skin diseases are known to be time-consuming because they rely on professional judgement.
Comparative study on sentimental analysis using machine learning techniques
Dr Murali Krishna Enduri, Dr Satish Anamalamudi, Abdur Rashid Sangi, Ramanadham Chandu Badrinath Manikanta, Kallam Yogeshvar Reddy, Panchumarthi Lovely Yeswanth, Suda Kiran Sai Reddy, Gogineni Asish Karthikeya
Source Title: Mehran University Research Journal of Engineering and Technology, DOI Link
-
Generalization of Relative Change in a Centrality Measure to Identify Vital Nodes in Complex Networks
Source Title: IEEE Access, Quartile: Q1, DOI Link
Identifying vital nodes is important in disease research, rumor spreading, viral marketing, and drug development. The vital nodes in any network are used to spread information as widely as possible. Centrality measures such as Degree centrality (D), Betweenness centrality (B), Closeness centrality (C), Katz (K), Clustering coefficient (CC), PageRank (PR), Local and Global Centrality (LGC), and Isolating Centrality (ISC) can be used to effectively quantify vital nodes. The majority of these centrality measures defined in the literature are based on a network's local and/or global structure. However, these measures are time-consuming and inefficient for large-scale networks. Also, these measures cannot study the effect of the removal of vital nodes in resource-constrained networks. To address these concerns, we propose six new centrality measures, namely GRACC, LRACC, GRAD, LRAD, GRAK, and LRAK. We develop these measures based on the relative change of the clustering coefficient, degree, and Katz centralities after the removal of a vertex. Next, we compare the proposed centrality measures with D, B, C, CC, K, PR, LGC, and ISC to demonstrate their efficiency and time complexity. We utilize the SIR (Susceptible-Infected-Recovered) and IC (Independent Cascade) models to study the maximum information spread of the proposed measures over conventional ones. We perform extensive simulations on large-scale real-world datasets and show that local centrality measures perform better than global measures in some networks in terms of time complexity and information spread. Further, we also observe that the number of cliques drastically improves the efficiency of global centrality measures.
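Under the stated construction, a relative-change score can be sketched as the normalized shift of a base statistic after deleting a vertex; here the network-average clustering coefficient stands in for the paper's exact GRACC/LRACC definitions.

```python
# Relative-change sketch: rank v by how much a base statistic shifts when v
# is removed. Quadratic in network size here; a sketch, not an optimized version.
import networkx as nx

def relative_change_scores(G, base=nx.average_clustering):
    before = base(G)
    scores = {}
    for v in G.nodes():
        H = G.copy()
        H.remove_node(v)
        after = base(H)
        scores[v] = abs(before - after) / before if before else 0.0
    return scores

G = nx.karate_club_graph()
print(sorted(relative_change_scores(G).items(), key=lambda kv: kv[1], reverse=True)[:5])
```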
Integration of E-health and Internet of Things
Source Title: Blockchain Technology Solutions for the Security of IoT-Based Healthcare Systems, DOI Link
The proliferation of healthcare-specific Internet of Things (IoT) devices opens up huge opportunities in automated healthcare management systems. Integrating the healthcare system with IoT networks is crucial due to time-critical, sensitive applications. State-of-the-art IoT networks transmit application data through nondeterministic best-effort traffic flows, where data from different nodes is scheduled on a single shared channel. On the contrary, data from healthcare systems needs to be transmitted in predetermined per-flow deterministic traffic flows to guarantee the quality of service (QoS) in terms of transmission delay and packet drops. To achieve this, the current IoT protocol stack needs to be updated with support for deterministic traffic flows to ensure guaranteed QoS in healthcare and medical applications. Hence, this chapter proposes the protocol aspects (scheduling and routing protocols) needed to integrate E-health with IoT networks to ensure predetermined traffic flows with predictable end-to-end delays.
Liver Disease Prediction and Classification using Machine Learning Techniques
Dr Satish Anamalamudi, Dr Murali Krishna Enduri, Ms Tokala Srilatha, Mr Koduru Hajarathaiah, Sai Ram Praneeth Gunda, Botla Srinivasrao, Nalluri Lakshmikanth, Nagamanohar Pathipati
Source Title: International Journal of Advanced Computer Science and Applications, Quartile: Q3, DOI Link
Liver diseases have recently become among the most lethal disorders in a number of countries. The count of patients with liver disorders has been going up because of alcohol intake, inhalation of harmful gases, and consumption of spoiled food and drugs. Liver patient datasets are being studied for the purpose of developing classification models to predict liver disorders. This dataset was used to implement prediction and classification algorithms, which in turn reduces the workload on doctors. In this work, we propose applying machine learning algorithms to screen patients for liver disorders. Chronic liver disorder is defined as a liver disorder that lasts for at least six months. As a result, we use the percentage of patients who contract the disease as both positive and negative information. We process liver disease percentages with classifiers, and the results are displayed as a confusion matrix. We propose several classification schemes that can effectively improve classification performance when a training dataset is available. Then, using a machine learning classifier, good and bad values are classified. The outputs of the proposed classification model thus show accuracy in predicting the result.
A Comparison of Neural Networks and Machine Learning Methods for Prediction of Heart Disease
Source Title: 2023 3rd International Conference on Intelligent Communication and Computational Techniques, DOI Link
Heart disease is a major cause of death and disability across the world. Heart disease mortality and morbidity rates can be greatly decreased with early detection and treatment. Hence, the development of efficient and accurate methods for early diagnosis of heart disease has become a priority in the medical field. In this study, we conducted a comparative study of existing supervised machine learning approaches for predicting heart disease diagnosis and also improved the accuracy of KNN by changing K values. We used a dataset that consists of a variety of features such as age, gender, and other important indicators for heart disease diagnosis. We then explored and evaluated traditional ML algorithms such as logistic regression, decision tree, random forest, and SVM for the predictive analysis. A number of criteria, including accuracy, precision, recall, and F1 score, were used to assess the models' performance. This study provides evidence that ML algorithms can be used to forecast the diagnosis of heart disease. Healthcare providers and medical practitioners can utilize the outcomes of this study for early detection and management of cardiac disease. Further research will aim to analyse and evaluate additional machine learning algorithms to enhance precision and performance.
Empirical Analysis of Income Prediction Using Deep Learning Techniques
Dr Murali Krishna Enduri, Ms Tokala Srilatha, Mr Koduru Hajarathaiah, Jeyanand Vemulapati, Akash Bayyana, Sai Hemanth Bathula
Source Title: 2023 IEEE International Students' Conference on Electrical, Electronics and Computer Science, DOI Link
One of the main problems with determining income for employees is that there is no single formula that can be applied to all employees. Each employee's income is based on a variety of factors such as job position, experience, and special skills. Another problem is that the cost of living in different parts of the country can vary significantly, making it difficult to accurately assign a salary to different employees. A third issue is variation in the types of benefits and perks that employers offer, which can make it difficult to accurately compare total compensation between different positions. Finally, there are often external factors that can affect the income of employees, such as the economic conditions of the area they are located in or the cost of health insurance. It is important to know what the income of employees will be to ensure that they are paid fair wages for their work and that total compensation is competitive with the market. Knowing the income of employees is also important so that the employer can budget for salaries, benefits, and other costs associated with hiring and employing staff. To address these problems, we built models using different LSTM variants. This paper presents a comparative study of various deep learning techniques for income prediction. Specifically, Long Short-Term Memory (LSTM), Stacked LSTM, Bidirectional LSTM, and Convolutional Long Short-Term Memory (Conv-LSTM) architectures are used to predict income. Simulations are performed on a socio-economic dataset containing twenty-six years of monthly income data. This study provides insights into the effectiveness of deep learning techniques for predicting income.
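As a toy illustration of the stacked-LSTM variant mentioned above, the sketch below trains on a synthetic monthly series; the window length, layer sizes, and data are placeholders, not the paper's socio-economic dataset.

```python
# Stacked-LSTM sketch: predict the next month from the previous 12 months.
import numpy as np
import tensorflow as tf

window = 12
# Synthetic stand-in for 26 years (312 months) of income data.
series = np.sin(np.linspace(0, 20, 312)) + np.linspace(0, 3, 312)
X = np.array([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32, return_sequences=True),  # first LSTM layer
    tf.keras.layers.LSTM(16),                         # stacked second layer
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)
print(model.evaluate(X, y, verbose=0))
```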
An AI fuzzy clustering-based routing protocol for vehicular image recognition in vehicular ad hoc IoT networks
Source Title: Soft Computing, Quartile: Q1, DOI Link
A vehicular ad hoc IoT network (VA-IoT) plays a key role in exchanging constrained networked vehicle information through IPv6-enabled sensor nodes. It is noteworthy that vehicular IoT is the interconnection of vehicular ad hoc networks with the support of constrained IoT devices. Routing protocols in VA-IoT are designed to route vehicular traffic in distributed environments. In addition, VA-IoT is designed to enhance road safety by reducing the number of road accidents through reliable data transmission. Routing in VA-IoT faces a uniquely dynamic topology, frequent spectrum and node handovers, and restricted versatility. Hence, it is crucial to design hybrid reactive routing protocols to ensure the network throughput and data reliability of VA-IoT networks. This paper proposes an AI-based reactive routing protocol to enhance network throughput and minimize end-to-end delay with respect to node mobility, spectrum mobility, link traffic load, and end-to-end network traffic load while transmitting vehicular images. In addition, the performance of the proposed routing protocol in terms of image transmission time is compared with existing proactive- and reactive-based routing protocols in vehicular ad hoc IoT (VA-IoT) networks.
Global Isolating Centrality Measure for Finding Vital Nodes in Complex Networks
Source Title: 2023 IEEE 12th International Conference on Communication Systems and Network Technologies (CSNT), DOI Link
Identification of influential vertices plays a very prominent role in complex networks. A fundamental task in complex networks is to determine the influential nodes whose removal crucially disrupts network cohesion. The analysis of a network's topological characteristics, such as susceptibility and resilience, can be aided by identifying influential nodes. This work uses the notion of the Maximize the Number of Connected Components problem, which helps in determining influential nodes whose removal yields the optimal number of connected components. This work includes the application of topology-based centrality measures to real-world networks. However, conventional methods fail to detect the most influential nodes that cause the network to split into the optimal number of components. To address this, our work introduces a new centrality, called global isolating centrality, computed with hops up to half the diameter, that focuses on network connectedness. The results reveal that the new centrality is better than existing centralities for certain probability values.
Aspects of effectiveness and significance: The use of machine learning methods to study CuIn1-xGaxSe2 solar cells
Dr Murali Krishna Enduri, Narendra Bandaru, Raghava Reddy Kakarla, Ch Venkata Reddy
Source Title: Solar Energy, Quartile: Q1, DOI Link
The goal of this work is to enhance the efficiency of CuIn1-xGaxSe2 (CIGS) thin film solar cells by investigating the critical factors affecting their device performance and the correlations between them. To achieve this goal, machine learning algorithms are employed to uncover the primary parameters and correlations affecting CIGS solar cell device performance. The experimental data is used to develop the datasets for machine learning analysis. The correlation studies allow for the investigation of the key factors governing device performance. The algorithms used in the study include linear regression (LR), random forest (RF), extreme gradient boosting (XG), decision tree (DT), support vector machine regressor (SVM), stochastic gradient descent regressor (SGD), and Bayesian ridge (Bayesian). The results showed that decision trees provide the most accurate predictions of CIGS solar cell efficiency, with root mean square errors of 0.11 and 1.83 and Pearson coefficients of 0.9 and 0.88 for the training and test datasets, respectively. Additionally, this research provides important insight into the necessary components and ideal device dimensions, offering helpful guidelines for subsequent experimental optimization endeavours.
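A minimal sketch of this kind of regression comparison, fitting a decision tree to synthetic device-parameter data and reporting RMSE and the Pearson coefficient; the features and data here are invented stand-ins for the paper's experimental datasets.

```python
# Decision-tree regression sketch with RMSE and Pearson evaluation.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Hypothetical features standing in for device parameters (e.g. Ga ratio,
# absorber thickness, band gap); the target mimics an efficiency percentage.
X = rng.uniform(size=(200, 3))
y = 15 + 5 * X[:, 0] - 3 * X[:, 1] ** 2 + rng.normal(scale=0.5, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5
pearson = np.corrcoef(y_te, pred)[0, 1]
print(f"RMSE={rmse:.2f}, Pearson={pearson:.2f}")
```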
A novel approach to minimize the Black Hole attacks in Vehicular IoT Networks
Source Title: ACM International Conference Proceeding Series, Quartile: Q3, DOI Link
Vehicular Ad-hoc IoT Networks (VA-IoT) have gained significant attention due to their ability to enable distributed data transmission between vehicles. However, VA-IoT networks are susceptible to various security threats, including the Black Hole attack. In a Black Hole attack, an intruder or malicious node attracts network traffic by broadcasting fake messages and drops all received packets, which can significantly impact the network's performance. To mitigate this, this paper presents a new mechanism to minimize Black Hole attacks on VANETs by combining two techniques: a trust management system and an intrusion detection system. The proposed approach involves assigning trust values to each vehicle based on its past behavior and routing packets only through trusted nodes. Additionally, an intrusion detection system is used to identify malicious nodes that violate the trust threshold and to take appropriate measures. The proposed approach outperforms existing schemes in terms of achievable end-to-end throughput and minimized network delays.
A Crop Recommendation System Based on Nutrients and Environmental Factors Using Machine Learning Models and IoT
Dr Murali Krishna Enduri, Ms Tokala Srilatha, Mr Koduru Hajarathaiah, Anishka Chauhan, Anuraag Tsunduru, Kishwar Parveen
Source Title: 2023 International Conference on Information Technology (ICIT), DOI Link
With the ever-increasing population of the world, sufficient crop production is the biggest concern for the human race. This issue is more pressing than ever as the world population has surpassed the 8 billion mark. Smart farming has become a popular option as it addresses the problem by suggesting ways to increase the quality and quantity of crop yield; it is a term associated with the practice of automating farm-related activities. This paper proposes a crop recommendation system based on machine learning algorithms for agricultural fields in India. A sensor system is also prepared to collect first-hand data from fields. These IoT sensors are used to record levels of soil moisture content, temperature, and the three most important macro-nutrients required for crop growth: Nitrogen (N), Phosphorus (P), and Potassium (K), from different fields. Additionally, other variables such as rainfall, sowing season, and soil pH value are also considered to build the proposed crop recommendation system, which recommends the best-yielding crop based on these environmental factors. Multiple machine learning algorithms, including Artificial Neural Networks (ANN), Random Forest, Logistic Regression, and K-Nearest Neighbor (KNN), are used and compared to identify the most efficient algorithm for the crop recommendation system. The proposed system aims to develop a model that can help farmers increase their crop yield and quality by providing personalized recommendations based on environmental variables.
Community-Based Matrix Factorization (CBMF) Approach for Enhancing Quality of Recommendations
Dr Murali Krishna Enduri, Ms Tokala Srilatha, Hemlata Sharma, Jaya Lakshmi Tangirala
Source Title: Entropy, Quartile: Q1, DOI Link
Matrix factorization is a long-established method employed for analyzing complex networks of user ratings and extracting valuable recommendations from them. The execution time and computational resources demanded by these algorithms pose limitations when confronted with large datasets. Community detection algorithms play a crucial role in identifying groups and communities within intricate networks. To overcome the challenge of extensive computing resources with matrix factorization techniques, we present a novel framework that utilizes the inherent community information of the rating network. Our proposed approach, named Community-Based Matrix Factorization (CBMF), has the following steps: (1) Model the rating network as a complex bipartite network. (2) Divide the network into communities. (3) Extract the rating matrices pertaining only to those communities and apply MF to these matrices in parallel. (4) Merge the predicted rating matrices belonging to the communities and evaluate the root mean square error (RMSE). In our experimentation, we use basic MF, SVD++, and FANMF for matrix factorization, and the Louvain algorithm is used for community division. The experimental evaluation on six datasets shows that the proposed CBMF enhances the quality of recommendations in each case. In the MovieLens 100K dataset, the RMSE has been reduced from 1.26 to 0.21 using SVD++ by dividing the network into 25 communities. A similar reduction in RMSE is observed for the FilmTrust, Jester, Wikilens, Good Books, and Cell Phone datasets.
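A condensed sketch of this pipeline under stated assumptions: Louvain on the bipartite rating graph, a truncated-SVD stand-in for the basic MF step, and a simple merge of per-community predictions. Function and variable names are illustrative, and a recent networkx (with louvain_communities) is assumed.

```python
# CBMF-style sketch: communities -> per-community factorization -> merge.
import numpy as np
import networkx as nx

def cbmf(ratings):  # ratings: dict {(user, item): value}
    B = nx.Graph()
    B.add_edges_from((("u", u), ("i", i)) for (u, i) in ratings)
    communities = nx.community.louvain_communities(B, seed=42)
    predictions = {}
    for com in communities:
        users = sorted(u for t, u in com if t == "u")
        items = sorted(i for t, i in com if t == "i")
        if not users or not items:
            continue
        # Rating block restricted to this community (missing entries as 0).
        R = np.array([[ratings.get((u, i), 0.0) for i in items] for u in users])
        # Rank-2 truncated SVD as a stand-in for basic matrix factorization.
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        k = min(2, len(s))
        R_hat = U[:, :k] * s[:k] @ Vt[:k, :]
        for a, u in enumerate(users):
            for b, i in enumerate(items):
                predictions[(u, i)] = R_hat[a, b]
    return predictions

ratings = {(0, "x"): 5, (0, "y"): 3, (1, "x"): 4, (2, "z"): 2, (3, "z"): 5}
print(cbmf(ratings))
```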
Find the Spreading Ability of the Influential Nodes using the IC Model in Social Networks
Source Title: 2022 14th International Conference on Computational Intelligence and Communication Networks, DOI Link
In a world of fast-growing technology, social media and social networks have reached a point where they influence a large percentage of the population in their respective areas and languages. In today's world, people are influenced by popular public figures around the world. In this research, we identify the most influential people in social networks so that information can be shared easily, which helps in different ways, such as marketing, stopping the spread of false information, and disseminating cautions or hazard warnings. This helps us spread information to large groups of people with comparatively less capital. Finding influential people can be done by finding influential nodes in social networks. Researchers around the world have proposed various ways of finding influential nodes, such as PageRank, degree centrality, betweenness centrality, and closeness centrality. Of these, some are global-structure-based and some are local-structure-based. Our idea is to apply the independent cascade model to the basic centralities to test their spreading ability. We analyze the relationship between centrality values and information spread. Finally, in this research, we discuss various centralities that help in finding influential nodes and pick the best centrality depending on the cause or situation.
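The independent cascade process referred to above can be sketched compactly: each newly activated node gets a single chance to activate each inactive neighbor with probability p, and the average final cascade size estimates spreading ability. The activation probability and seed rule below are illustrative.

```python
# Independent cascade (IC) sketch: average cascade size from a seed set.
import random
import networkx as nx

def ic_spread(G, seeds, p=0.1, trials=200):
    total = 0
    for _ in range(trials):
        active = set(seeds)
        frontier = list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in G.neighbors(u):
                    # One activation attempt per newly active neighbor.
                    if v not in active and random.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

G = nx.karate_club_graph()
top5 = [v for v, _ in sorted(nx.degree_centrality(G).items(),
                             key=lambda kv: kv[1], reverse=True)[:5]]
print(ic_spread(G, top5, p=0.1))
```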
An Empirical Study on Fake News Prediction with Machine Learning Methods
Source Title: 2022 14th International Conference on Computational Intelligence and Communication Networks, DOI Link
Due to advancements in technology and distributed networking, there is a huge amount of information available on the internet. Because of this, some users may try to post fake news through various platforms for financial gain. A common user finds it difficult to differentiate fake news from authentic news. As a result, fake news can become the main agenda against a particular individual, society, organization, or even a political party. To date, a lot of research has been done to detect fake news on the internet, but most solutions are evaluated against very few performance metrics and limited datasets. In this work, we propose to use Decision Tree, SVM, LSTM, and Naive Bayes techniques to analyse and observe their behavior on different datasets. Furthermore, we compare the approaches and demonstrate the best one through experimental analysis.
A Comparative Study on Machine Learning based Prediction of Citations of Articles
Source Title: 2022 6th International Conference on Trends in Electronics and Informatics, DOI Link
Authors can use predictions to create accurate estimations of the likely outcome of a query based on past data, whether it concerns customer churn or possible fraudulent conduct. The citation count refers to the number of times a publication has been cited. One of the most important considerations for an author when publishing an article is how to make a significant impact with the content. The impact of a paper is broad, which increases the opportunity for fresh ideas and progress. Future citation counts are useful for researchers in selecting representative literature because they are an important indicator for estimating the potential influence of published papers, and predicting them is a regression problem. Predicting and comprehending article citation counts, however, is a difficult problem both theoretically and empirically, as evidenced by decades of research. The influence of each work is predicted based on its previous citations, and the goal is to precisely anticipate the number of citations that will be received over time. The proposed research study also provides a comparative analysis of the prediction of citations for articles.
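Since the abstract frames the task as regression on a paper's previous citations, here is a minimal sketch with synthetic stand-in data (the feature layout, citations received per early year, is an assumption):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical data: rows are papers, columns are citations received in
# each of the first 5 years; the target is a later cumulative count.
rng = np.random.default_rng(0)
early = rng.poisson(lam=3.0, size=(500, 5))
target = early.sum(axis=1) * 1.5 + rng.normal(0, 2, 500)  # synthetic stand-in

X_tr, X_te, y_tr, y_te = train_test_split(early, target, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```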
Empirical Study on Citation Count Prediction of Research Articles
Source Title: Journal of Scientometric Research, Quartile: Q2, DOI Link
Citation is a measure that quantifies the impact of a researcher, a research article, and the quality of journals. Investigating the citations of articles and/or researchers is one of the important tasks in the research community, so understanding and predicting citation patterns of research articles has become popular in scientific research fields. In this work, we give a machine learning approach to predict the citations of research articles using their keywords. We study citation impact based on the keywords mentioned in articles, using a dataset of publications that appeared in the various Physical Review journals from 1985 to 2012. In this dataset, each publication is assigned PACS codes (keywords) by its authors, each representing a sub-field of physics. We investigate the impact of an article's PACS codes on its citations, performing our analysis on the first (sub-field of physics), second (sub-area of a sub-field), and third levels of the PACS codes. We observe that, compared to the first level, every pair of citation patterns at the second level is highly correlated. We also obtain a universal approximation curve for the third level that matches the average value of the first level. This curve looks like a shifted and scaled version of the Gaussian function and is right-skewed, and we can predict citations based on keywords using this universal curve.
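A sketch of fitting a shifted, scaled, right-skewed Gaussian-like curve of the kind described for the third-level PACS data; the data here is a synthetic stand-in, and `skewnorm` is one convenient parameterization, not necessarily the paper's:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import skewnorm

# Synthetic stand-in: x = binned (log) citation counts, y = fraction per bin.
x = np.linspace(0, 8, 50)
y = skewnorm.pdf(x, a=4, loc=2, scale=1.5) + np.random.default_rng(1).normal(0, 0.005, 50)

def shifted_skewed_gaussian(x, a, loc, scale, amp):
    """Shifted, scaled, right-skewed Gaussian-like curve."""
    return amp * skewnorm.pdf(x, a, loc=loc, scale=scale)

params, _ = curve_fit(shifted_skewed_gaussian, x, y, p0=[1, 1, 1, 1])
print("fitted (skew, shift, scale, amplitude):", params)
```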
Air Quality Analysis and Forecasting Using Deep Learning
Source Title: 2022 International Conference on Computational Intelligence and Sustainable Engineering Solutions, DOI Link
In today's world, people are increasingly concerned about air quality. Since air is everywhere, we cannot escape its pollutants, and to keep our health safe we need to maintain a certain air quality in our surroundings. Effective air-quality prediction is among the most active research topics of this era. This paper discusses some of the challenges faced due to the lack of data resources, the varying concentrations of air pollutants, and so on, and proposes a solution based on predictive data feature extraction for forecasting air quality. The model is based on LightGBM and predicts the PM2.5 concentration at 35 air-monitoring stations in Beijing for the upcoming 24 hours. The high-dimensional, large-scale data is collected and processed using CNN, KNN, and random forest algorithms to predict air quality. As new data is explored, the spatial information retained in the existing model can be reused, improving the predictive accuracy of the existing model and making it more efficient. Using a sliding-window mechanism, we can mine deeper high-dimensional data, increasing the amount of available information. We compare the predicted values against the actual values from the provided dataset and, by constructing a high-dimensional statistical analysis of the data, show that our model outperforms existing models and gives effective results for air-quality management.
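A minimal sketch of the sliding-window LightGBM step on a synthetic hourly PM2.5 series; the window length and hyperparameters are illustrative, and the real pipeline also uses spatial features across the 35 stations:

```python
import numpy as np
import lightgbm as lgb

# Synthetic hourly PM2.5 series for one station; the sliding window turns
# the last 24 hours into features for the next-hour target.
rng = np.random.default_rng(0)
series = 50 + 20 * np.sin(np.arange(2000) / 24) + rng.normal(0, 5, 2000)

window = 24
X = np.lib.stride_tricks.sliding_window_view(series[:-1], window)
y = series[window:]  # each window predicts the hour that follows it

split = int(0.8 * len(X))
model = lgb.LGBMRegressor(n_estimators=300, learning_rate=0.05)
model.fit(X[:split], y[:split])
rmse = np.sqrt(np.mean((model.predict(X[split:]) - y[split:]) ** 2))
print("test RMSE:", rmse)
```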
A Tool for Fake News Detection using Machine Learning Techniques
Source Title: 2022 2nd International Conference on Intelligent Technologies (CONIT), DOI Link
The web and the internet are important to a huge number of people, and these users rely on many available social media platforms for different purposes. Any user can post or spread news/messages through these online social platforms. Even though the algorithms used by social media platforms are updated meticulously, they are still not efficient enough to filter out fake news, or to make essential information viral first where it is needed, so that the information benefits the people of a specific region before the news reaches the rest of the world. One of the biggest contributors to fake news is social bots, which generate content automatically and spread it among social media users. In this work, we propose an effective approach to detect fake news / false information using machine learning techniques. We provide a tool to detect fake news using the naive Bayes technique with high accuracy, and we show results on two datasets using our tool.
Hyperspectral Image Classification with Optimized Compressed Synergic Deep Convolution Neural Network with Aquila Optimization
Dr Murali Krishna Enduri, Md Habibur Rahman., Jonnadula Harikiran., Sultan Almakdi., Mohammed Alshehri., Tatireddy Subba Reddy., Koduru Hajarathaiah., Quadri Noorulhasan Naveed
Source Title: Computational Intelligence and Neuroscience, DOI Link
Hyperspectral images (HSI) consist of many contiguous spectral bands and are often utilized for various Earth observation activities, such as surveillance, detection, and identification. Incorporating both spectral and spatial characteristics is necessary for improved classification accuracy, and deep learning has gained significant traction in the classification of hyperspectral images. This research analyzes how to accurately classify new HSI from limited labeled samples. A novel deep-learning-based categorization pipeline built on feature extraction and classification is designed for this purpose: spectral and spatial information are first extracted and then integrated to generate fused features. The classification task is completed using a compressed synergic deep convolution neural network with Aquila optimization (CSDCNN-AO), constructed using a novel optimization technique known as the Aquila Optimizer (AO). The Kennedy Space Center (KSC), Indian Pines (IP), Houston U (HU), and Salinas Scene (SS) HSI datasets are used for experimental assessment. Testing on these four HSI classification datasets demonstrates that our framework outperforms conventional techniques on common evaluation measures such as average accuracy (AA), overall accuracy (OA), and the Kappa coefficient (k). In addition, it significantly reduces training time and computational cost, resulting in enhanced training stability, maximum performance, and remarkable training accuracy.
Computing Influential Nodes Using the Nearest Neighborhood Trust Value and PageRank in Complex Networks
Source Title: Entropy, Quartile: Q1, DOI Link
Computing influential nodes receives a lot of attention from researchers working on information spreading in complex networks, with vast applications such as viral marketing, social leader creation, rumor control, and opinion monitoring. The information-spreading ability of influential nodes is greater than that of other nodes in the network. Several centrality measures have been proposed to compute the influential nodes in a complex network, such as degree, betweenness, closeness, semi-local centralities, and PageRank, defined based on the local and/or global information of nodes in the network. However, due to their high time complexity, centrality measures based on the global information of nodes have become unsuitable for large-scale networks, and very few centrality measures exist that are based on the attributes between nodes and the structure of the network. We propose the nearest neighborhood trust PageRank (NTPR), based on the structural attributes of neighbors and nearest neighbors of nodes. We define the measure using the degree ratio, the similarity between nodes, and the trust values of neighbors and nearest neighbors. We compute the influential nodes in various real-world networks using the proposed centrality method, find the maximum influence achieved by those nodes with the SIR and independent cascade methods, and compare it with the maximum influence obtained by existing basic centrality measures.
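The abstract names the ingredients (degree ratio, node similarity, neighbor trust, PageRank) without giving the exact formula, so the sketch below only illustrates one plausible combination of those ingredients; it is not the published NTPR measure:

```python
import networkx as nx

def trust_like_score(G):
    """Illustrative only: weight each neighbor's PageRank by the degree
    ratio and Jaccard similarity named in the abstract (not the exact NTPR)."""
    pr = nx.pagerank(G)
    score = {}
    for u in G:
        s = 0.0
        for v in G.neighbors(u):
            _, _, jac = next(iter(nx.jaccard_coefficient(G, [(u, v)])))
            s += (G.degree(v) / G.degree(u)) * jac * pr[v]
        score[u] = s
    return score

G = nx.karate_club_graph()
scores = trust_like_score(G)
print(sorted(scores, key=scores.get, reverse=True)[:5])  # top-5 candidate spreaders
```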
Automated Resume Screener using Natural Language Processing(NLP)
Source Title: 2022 6th International Conference on Trends in Electronics and Informatics, DOI Link
Resume screening is the process of evaluating job seekers' resumes against a specific requirement. It is used to identify a candidate's eligibility for a job by matching the requirements of the offered role with the resume information, such as educational qualifications, skill sets, and technical expertise. Resume screening is a crucial stage in candidate selection: it is where the decision is made whether or not to move a candidate to the next level of the hiring process. Traditionally, this process is performed manually, but companies often receive thousands of resumes per job posting. To reduce human involvement and errors, many new approaches have been introduced. This paper discusses one such approach that performs resume screening very efficiently, using Natural Language Processing (NLP) and an automated machine learning algorithm to screen resumes. The paper explains the end-to-end working of a Python application that efficiently screens candidates' resumes based on the organization's requirements.
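A minimal sketch of the core matching step, assuming TF-IDF plus cosine similarity between a job description and resume texts; the paper's full application also parses resume files and categories, which is omitted here:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical inputs: a job requirement and candidate resume texts.
job = "Python developer with NLP and machine learning experience"
resumes = {
    "alice": "5 years Python, built NLP pipelines with spaCy and sklearn",
    "bob": "Java backend engineer, Spring, microservices",
}

vec = TfidfVectorizer(stop_words="english")
matrix = vec.fit_transform([job] + list(resumes.values()))
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()

# Rank candidates by how well their resume matches the requirement.
for name, s in sorted(zip(resumes, scores), key=lambda t: -t[1]):
    print(f"{name}: {s:.2f}")
```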
Efficient algorithm for finding the influential nodes using local relative change of average shortest path
Source Title: Physica A: Statistical Mechanics and its Applications, Quartile: Q1, DOI Link
In complex networks, finding the influential nodes plays a crucial role from both theoretical and practical points of view, because such nodes are capable of propagating information to a large portion of the network. Investigating the dynamics of information spreading in complex networks is a hot topic with a wide range of applications, including information dissemination, information propagation, rumor control, viral marketing, and opinion monitoring. In recent years, several centrality measures have been introduced to find influential nodes in complex networks. In this work, we propose the local relative change of average shortest path (local RASP), based on the local structure of the network. The local RASP measure of a node is defined by the relative change in the average shortest path of the node's local network when the node is deleted. Our local RASP centrality produces good results compared to degree, betweenness, closeness, semi-local, PageRank, Trust-PageRank, and RASP centralities, and its computation time is lower than that of the global RASP centrality measure. It efficiently drives information diffusion within the network through the initial seed nodes it identifies.
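A hedged sketch of the idea, assuming the "local network" is a radius-2 ego network; the paper's exact definition may differ in the neighborhood radius and normalization:

```python
import networkx as nx

def avg_sp(G):
    """Average shortest-path length over connected node pairs
    (unreachable pairs are simply omitted in this sketch)."""
    lengths = [d for u in G
               for d in nx.single_source_shortest_path_length(G, u).values() if d > 0]
    return sum(lengths) / len(lengths) if lengths else 0.0

def local_rasp(G, v, radius=2):
    """Relative change of the average shortest path in v's ego network
    when v is removed; an illustrative local-RASP-style score."""
    ego = nx.ego_graph(G, v, radius=radius)  # returns a modifiable copy
    before = avg_sp(ego)
    ego.remove_node(v)
    after = avg_sp(ego)
    return (after - before) / before if before else 0.0

G = nx.karate_club_graph()
ranked = sorted(G, key=lambda v: local_rasp(G, v), reverse=True)
print(ranked[:5])  # candidate seed nodes
```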
Finding Influential Nodes in Complex Networks Using Nearest Neighborhood Trust Value
Source Title: Studies in Computational Intelligence, Quartile: Q3, DOI Link
Information spreading in complex networks is an emerging topic in many applications, such as social leader identification, rumor control, viral marketing, and opinion monitoring. Finding the influential nodes plays a pivotal role in information spreading in complex networks, because influential nodes are capable of spreading more information compared with other nodes. Many centrality measures have been proposed to identify the influential nodes in a complex network, such as degree, betweenness, closeness, semi-local centralities, and PageRank, defined based on the local and/or global information of nodes in the network. Sheng et al. [18] propose a centrality measure based on the information between nodes and the structure of the network. Inspired by this measure, we propose the nearest neighborhood trust PageRank (NTPR) based on the structural information of neighbors and nearest neighbors. We define the measure using the similarity between nodes, the degree ratio, and the trust values of neighbors and nearest neighbors. We evaluate the proposed centrality measure on various real-world networks for finding the influential nodes, and we compare the results with existing basic centrality measures.
Sentiment Analysis on Zomato Reviews
Source Title: 2021 13th International Conference on Computational Intelligence and Communication Networks (CICN), DOI Link
The impact of online reviews on restaurants has reached an unprecedented level, with vast numbers of people checking posted opinions/reviews before ordering their food deliveries. The two main concepts used in analyzing online reviews are sentiment analysis and exploratory data analysis (EDA). The goal of sentiment analysis is to determine whether given data is positive, negative, or neutral; it can help brands determine how their product is perceived by their clientele. Sentiment analysis, otherwise known as opinion mining, relies on natural language processing and machine learning algorithms to automatically determine the emotional tone behind online conversations, and it depends mainly on keywords. Our analysis is performed on review data from Zomato. Most restaurants available on the application are established ones, so we get a good picture of the restaurants of Hyderabad. Exploratory data analysis (EDA) refers to the initial analysis and findings performed on datasets, usually early in an analytical process.
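A minimal sketch of keyword-driven sentiment labeling using NLTK's VADER analyzer; the review strings are made-up stand-ins for scraped Zomato reviews, and the EDA step is omitted:

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

reviews = ["The biryani was amazing, will order again!",
           "Cold food, late delivery, very disappointing."]
for r in reviews:
    c = sia.polarity_scores(r)["compound"]  # compound score in [-1, 1]
    label = "positive" if c > 0.05 else "negative" if c < -0.05 else "neutral"
    print(label, c, r)
```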
An Empirical Study on Impact of News Articles
Source Title: 2021 13th International Conference on Computational Intelligence and Communication Networks (CICN), DOI Link
One of the major factors an author considers while publishing an article is achieving a high impact with it. The impact of an article is broad, and it creates opportunities for new ideas and development. By knowing the impact of an article, an author can increase its visibility and enhance the influence of the published research, improving the quality and standard of the article. The citation count, i.e., the number of citations an article has accumulated, often reflects this impact. This research addresses how to increase the impact of an article so that it receives more citations. Experimental results clearly show how article visibility and citations can be increased, evaluated with different performance metrics.
Application of Steganography Imaging by AES and Random Bit
Source Title: 2021 13th International Conference on Computational Intelligence and Communication Networks (CICN), DOI Link
The goal of steganography is to hide data in another medium, disguising the data so that the existence of the message is concealed. Steganography can be applied to many formats of data, including audio, video, and images, and can hide any kind of digital information through data-hiding techniques. In this work, we propose an application of steganography imaging that ensures the secure transfer of data along with integrity and confidentiality, because steganography relies on hiding messages in unsuspected multimedia data. We provide a steganography imaging application based on the Advanced Encryption Standard (AES) and a random-bit technique.
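A minimal sketch of the encrypt-then-hide idea using PyCryptodome AES and LSB embedding; the bits are written sequentially here, whereas the paper's random-bit technique would permute the embedding positions:

```python
import numpy as np
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)
cipher = AES.new(key, AES.MODE_EAX)
ciphertext, tag = cipher.encrypt_and_digest(b"secret message")
payload = cipher.nonce + tag + ciphertext  # what actually gets hidden

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in cover image
bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
flat = img.flatten()
flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | bits  # write payload into LSBs
stego = flat.reshape(img.shape)

# Recovery: read the LSBs back and decrypt (payload length is known in
# this sketch; a real tool would embed the length as a header).
recovered = np.packbits(stego.flatten()[: len(bits)] & 1).tobytes()
nonce, tag2, ct = recovered[:16], recovered[16:32], recovered[32:]
plain = AES.new(key, AES.MODE_EAX, nonce=nonce).decrypt_and_verify(ct, tag2)
print(plain)
```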
A Secure Matrix Inversion Protocol for IoT Applications in Smart Home Systems
Source Title: 2021 12th International Conference on Computing Communication and Networking Technologies, ICCCNT 2021, DOI Link
The Internet of Things (IoT) has progressed immensely in recent years, both in academia and in industry, and is widely used in Smart Home Systems (SHS) to provide a wide variety of facilities. In an IoT-enabled smart home environment, various things such as lights, home appliances, computers, and cameras are connected to the Internet, allowing the user to monitor and control them at any time and from any location. However, proper security must be ensured to maintain the quality of the SHS. In this paper, we propose novel security protocols based on the matrix inversion method to prevent the loss of user data in IoT-based SHS, with data transfers within the SHS protected by an encryption scheme. We partition large matrices over rings for encryption and decryption using the matrix inversion method, making data transfers within the SHS more secure in our proposed system.
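A Hill-cipher-style sketch of encryption and decryption via matrix inversion over the ring Z_256; the key matrix and block size are toy choices, and the paper's partitioning of larger matrices is not reproduced here:

```python
from sympy import Matrix

K = Matrix([[3, 3], [2, 5]])   # key matrix; det = 9 is coprime to 256
K_inv = K.inv_mod(256)         # decryption key: modular inverse of K

def mod256(M):
    return M.applyfunc(lambda x: x % 256)

def blocks(data, n=2):
    data += bytes(-len(data) % n)  # zero-pad to a multiple of the block size
    return [Matrix(list(data[i:i + n])) for i in range(0, len(data), n)]

msg = b"smart home sensor reading: 23C"
enc = [mod256(K * b) for b in blocks(msg)]  # ciphertext blocks
dec = b"".join(bytes([int(x) for x in mod256(K_inv * c)]) for c in enc)
print(dec.rstrip(b"\x00"))  # original message recovered
```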
Decentralized Cloud Storage using Unutilized Storage in PC
Source Title: 2021 12th International Conference on Computing Communication and Networking Technologies, ICCCNT 2021, DOI Link
Cloud storage is growing tremendously and is provided as Infrastructure as a Service (IaaS): we can access data stored in the cloud whenever and wherever we want. Cloud storage providers require substantial resources to maintain the storage servers in their data centres. Meanwhile, most of our personal computers (PCs) have a lot of unused storage space and good internet connectivity. Our idea is therefore to use PCs as storage servers by storing data in this unutilized space. We also focus on the security of the data: along with some practices followed by existing cloud storage services, we implement additional methods to protect the data in our decentralized cloud storage model.
Evolution of Physics Sub-fields
Source Title: Proceedings of the 5th International Conference on Complexity, Future Information Systems and Risk, DOI Link
-
On structural parameterizations of firefighting
Dr Murali Krishna Enduri, Das B., Kiyomi M., Misra N., Otachi Y., Reddy I V., Yoshimura S
Source Title: Theoretical Computer Science, Quartile: Q3, DOI Link
The Firefighting problem is defined as follows. At time t = 0, a fire breaks out at a vertex of a graph. At each time step t ≥ 1, a firefighter permanently defends (protects) an unburned vertex, and the fire then spreads from the burning vertices to all their undefended neighbors. This process stops when the fire cannot spread anymore. The goal is to find a sequence of vertices for the firefighter that maximizes the number of saved (non-burned) vertices. The Firefighting problem turns out to be NP-hard even when restricted to bipartite graphs or trees of maximum degree three. We study the parameterized complexity of the Firefighting problem for various structural parameterizations. All our parameters measure the distance to a graph class (in terms of vertex deletion) on which the Firefighting problem admits a polynomial-time algorithm. To begin with, we show that the problem is W[1]-hard when parameterized by the size of a modulator to graphs of diameter at most two and to split graphs. In contrast to these intractability results, we show that Firefighting is fixed-parameter tractable (FPT) when parameterized by the size of a modulator to cographs, threshold graphs, and disjoint unions of stars. We further investigate the kernelization complexity of the problem and show that it does not admit a polynomial kernel when parameterized by the size of a modulator to a disjoint union of stars, under certain complexity-theoretic assumptions.
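For intuition, a small simulation of the firefighting process with a greedy defender (protect the highest-degree threatened vertex); the paper studies exact parameterized algorithms, so this heuristic is illustrative only:

```python
import networkx as nx

def firefight_greedy(G, fire_start):
    """Simulate: at each step defend one threatened vertex, then the fire
    spreads to all remaining undefended neighbors of burning vertices."""
    burned, defended = {fire_start}, set()
    while True:
        threatened = {v for u in burned for v in G.neighbors(u)} - burned - defended
        if not threatened:
            break  # the fire cannot spread anymore
        defended.add(max(threatened, key=G.degree))  # greedy choice
        threatened = {v for u in burned for v in G.neighbors(u)} - burned - defended
        burned |= threatened
    return len(G) - len(burned)  # number of saved vertices

# A tree of maximum degree three, matching the hardness remark above.
G = nx.balanced_tree(2, 4)
print("saved:", firefight_greedy(G, 0))
```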