An Integrated ELM Based Feature Reduction Combination Detection for Gene Expression Data Analysis
Dr Sambit Kumar Mishra, Jogeswar Tripathy, Rasmita Dash, Binod Kumar Pattanayak
Source Title: SN Computer Science, Quartile: Q1, DOI Link
Globally, cancer stands as the second leading cause of mortality. Various strategies have been proposed to address this issue, with a strong emphasis on utilizing gene expression data to enhance cancer detection methods. However, challenges arise due to the high dimensionality of such data, the limited sample size relative to the number of features, and the inherent redundancy and noise in many genes. Consequently, it is advisable to employ a subset of genes rather than the entire set for classifying gene expression data. This research introduces a model that incorporates Ranked-based Filter (RF) techniques for extracting significant features and employs an Extreme Learning Machine (ELM) for data classification. The computational cost of applying RF techniques to high-dimensional data is low; however, extracting significant genes with only one or two stages of reduction is not effective, so a four-stage feature reduction strategy is applied. The reduced data are then classified using a few variants of the ELM model with different activation functions. Subsequently, a two-stage grading approach is implemented to determine the most suitable classifier for data classification. This analysis is conducted over four microarray gene expression datasets using four activation functions and seven learning-based classifiers, and it shows that the II-ELM classifier outperforms the others in terms of performance metrics and the ROC graph.
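As a rough illustration of the classification stage described above, the sketch below implements a basic ELM in numpy: a random hidden layer followed by a closed-form least-squares solution for the output weights. The toy data, feature count, and sigmoid activation are assumptions for illustration only; they are not the paper's datasets, its four-stage filter pipeline, or the II-ELM variant.

```python
import numpy as np

def elm_train(X, y_onehot, n_hidden=100, rng=np.random.default_rng(0)):
    """Basic Extreme Learning Machine: random input weights, sigmoid hidden
    layer, output weights solved in closed form via the pseudo-inverse."""
    n_features = X.shape[1]
    W = rng.normal(size=(n_features, n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoid hidden activations
    beta = np.linalg.pinv(H) @ y_onehot           # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.argmax(H @ beta, axis=1)

# Toy usage on random "reduced" gene-expression features (placeholders).
X = np.random.rand(120, 50)                       # 120 samples, 50 selected genes
y = np.random.randint(0, 2, 120)                  # binary class labels
Y = np.eye(2)[y]                                  # one-hot targets
W, b, beta = elm_train(X, Y)
print("training accuracy:", np.mean(elm_predict(X, W, b, beta) == y))
```

Other activation functions (e.g., tanh or ReLU) can be tried by swapping the sigmoid line, which mirrors the activation-function comparison mentioned in the abstract.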
A Survey on Task Scheduling in Edge-Cloud
Source Title: SN Computer Science, Quartile: Q1, DOI Link
In this modern era, cloud computing alone cannot meet today's intelligent society's data-processing needs, so edge computing has emerged. In contrast to computation in the cloud, it emphasizes proximity to the user and to the data source. Storing and processing local, small-sized data at the edge of the network is more effective. The edge paradigm, intended to be a leading computing model due to its low latency, nevertheless faces many challenges stemming from limited computational capability and resource availability. Edge computing allows edge devices to offload heavy loads and computational operations to a remote server, taking full advantage of server-side computing and storage for edge devices. However, offloading all computation-intensive operations to a remote server at the same time may overload it, leading to long processing delays for many operations and unexpectedly high power usage. Instead, spare edge resources should be utilized effectively while access to expensive cloud resources is restricted. As a result, it is important to investigate collaborative planning (scheduling) between edge servers and a cloud server based on task features, optimization objectives, and system status; such planning can help perform all computing functions efficiently and effectively. This paper analyzes and summarizes computing conditions for the edge computing context and classifies the computation of tasks into various edge-cloud computing scenarios. Finally, based on the problem structure, various collaborative planning methods for computational tasks are presented.
Container Placement Using Penalty-Based PSO in the Cloud Data Center
Source Title: Concurrency and Computation: Practice and Experience, Quartile: Q1, DOI Link
Containerization has transformed application deployment by offering a lightweight, scalable, and portable architecture for deploying container applications and their dependencies. In contemporary cloud data centers, where virtual machines (VMs) are frequently utilized to host containerized applications, the challenge of effective container placement has garnered significant attention. Container placement (CP) involves assigning a container to a VM for execution and is a nontrivial problem in the container cloud data center (CCDC): poor placement decisions can lead to decreased service performance or wastage of cloud resources, so efficient placement of containers within a virtual environment is critical for optimizing resource utilization and performance. This paper proposes a penalty-based particle swarm optimization (PB-PSO) CP algorithm that considers the makespan, cost, and load of the VMs while making CP decisions, and introduces a load-balancing penalty to prevent any VM from becoming overloaded. The algorithm addresses various CP challenges by varying container application sizes in heterogeneous cloud environments. Its primary goal is to minimize the makespan and computational cost of containers while maximizing resource utilization. We performed extensive simulation studies using the CloudSim 4.0 simulator to verify the efficacy of the proposed algorithm. During the simulation, we observed a reduction of 10% to 15% in both execution cost and makespan. Furthermore, our algorithm achieved the best cost-makespan trade-offs compared to other competing algorithms.
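The load-balancing penalty can be pictured as an extra term added to a particle's fitness whenever a VM's assigned load exceeds a threshold. The sketch below is a simplified, generic PSO over container-to-VM assignments with such a penalty; the capacity threshold, weights, rounding scheme, and random data are assumptions for illustration and not the paper's exact PB-PSO formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_containers, n_vms = 20, 5
load = rng.uniform(1, 4, n_containers)        # container resource demands
speed = rng.uniform(1, 2, n_vms)              # VM processing speeds
price = rng.uniform(0.5, 1.5, n_vms)          # per-unit-time VM cost
CAPACITY = 14.0                               # assumed per-VM load threshold

def fitness(placement):
    """Weighted makespan + cost, plus a load-balancing penalty for overloaded VMs."""
    vm_load = np.zeros(n_vms)
    for c, v in enumerate(placement):
        vm_load[v] += load[c]
    makespan = np.max(vm_load / speed)
    cost = np.sum(vm_load / speed * price)
    penalty = np.sum(np.maximum(vm_load - CAPACITY, 0.0)) * 10.0
    return makespan + 0.5 * cost + penalty

# Small PSO loop over integer placements (positions rounded to VM indices).
n_particles, iters = 15, 60
pos = rng.uniform(0, n_vms - 1, (n_particles, n_containers))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([fitness(p.round().astype(int)) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()
for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, n_vms - 1)
    f = np.array([fitness(p.round().astype(int)) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()
print("best placement:", gbest.round().astype(int), "fitness:", round(pbest_f.min(), 3))
```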
When latent features meet side information: A preference relation based graph neural network for collaborative filtering
Source Title: Expert Systems with Applications, Quartile: Q1, DOI Link
As recommender systems shift from rating-based to interaction-based models, graph neural network-based collaborative filtering models are gaining popularity due to their powerful representation of user-item interactions. However, these models may not produce good item ranking since they focus on explicit preference predictions. Further, these models do not consider side information since they only capture latent feature information of user-item interactions. This study proposes an approach to overcome these two issues by employing preference relation in the graph neural network model for collaborative filtering. Using preference relation ensures the model will generate a good ranking of items. The item side information is integrated into the model through a trainable matrix, which is crucial when the data is highly sparse. The main advantage of this approach is that the model can be generalized to any recommendation scenario where a graph neural network is used for collaborative filtering. Experimental results obtained using the recent RS datasets show that the proposed model outperformed the related baselines. © 2024 Elsevier Ltd
Designing a GSM and ARDUINO Based Reliable Home Automation System
Dr Sambit Kumar Mishra, Jogeswar Tripathy, Swarnamayee Dash, Rasmita Dash, Jashasmita Pal, Sunita Padhii
Source Title: 2024 OITS International Conference on Information Technology (OCIT), DOI Link
This paper introduces the design and prototype of a new home automation system that utilizes GSM technology as the network infrastructure to connect its components. The proposed system is composed of two primary parts: the first is the GSM module, which acts as the core of the system, managing, controlling, and monitoring the user's home. Users and system administrators can connect to the GSM module locally to access devices and manage system functions. The second part is the hardware interface module, which provides the necessary interface for relays and actuators within the home automation system. The mobile phone, originally designed for making calls and sending text messages, has evolved into a versatile device, especially with the advent of smartphones. In this study, a home automation system is developed using GSM and Arduino, allowing users to control household appliances by simply sending SMS commands through their GSM-based phones. A smartphone is not necessary: an old GSM phone can effectively be used to turn home electronic appliances on and off from any location. The proposed system offers greater scalability and flexibility compared to commercially available home automation systems.
ICEC 2024 Commentary
Source Title: 2024 International Conference on Intelligent Computing and Emerging Communication Technologies (ICEC), DOI Link
A Panoramic Review on Cutting-Edge Methods for Video Anomaly Localization
Dr Sambit Kumar Mishra, Rashmiranjan Nayak, Asish Kumar Dalai, Umesh Chandra Pati, Santos Kumar Das
Source Title: IEEE Access, Quartile: Q1, DOI Link
Video anomaly detection and localization is the process of spatiotemporally localizing the anomalous video segment corresponding to an abnormal event or activity. It is challenging due to the inherent ambiguity of anomalies, diverse environmental factors, the intricate nature of human activities, and the absence of adequate datasets. Further, the spatial localization of video anomalies (video anomaly localization) after their temporal localization (video anomaly detection) is itself a complex task. Video anomaly localization is essential for pinpointing the anomalous event or object in the spatial domain. Hence, an intelligent video surveillance system must have video anomaly detection and localization as key functionalities. However, the state of the art lacks a dedicated survey of video anomaly localization. Hence, this article comprehensively surveys the cutting-edge approaches for video anomaly localization, associated threshold selection strategies, publicly available datasets, performance evaluation criteria, and open trending research challenges with potential solution strategies.
Maximizing Resource Utilization Using Hybrid Cloud-based Task Allocation Algorithm
Source Title: 2024 IEEE 21st International Conference on Mobile Ad-Hoc and Smart Systems (MASS), DOI Link
Cloud computing operates similarly to a utility, providing users with on-demand access to various hardware and software resources, billed according to usage. These resources are primarily virtualized, with virtual machines (VMs) serving as critical components. However, task allocation within VMs presents significant challenges, as uneven distribution can lead to underloading or overloading, causing system inefficiencies and potential failures. This study addresses these issues by proposing a novel hybrid task allocation algorithm that combines the strengths of the Artificial Bee Colony (ABC) algorithm with Particle Swarm Optimization (PSO). Our approach aims to enhance resource utilization and reduce the risks of VM overload or underload. We conduct a comprehensive evaluation of the proposed hybrid algorithm against traditional ABC and PSO algorithms, focusing on their effectiveness in managing diverse task loads. The results of our empirical analysis indicate that our hybrid approach outperforms the conventional algorithms, leading to better resource utilization and more accurate task allocation. These findings have significant implications for optimizing task allocation in cloud computing environments, and we suggest potential avenues for future research to further refine these strategies.
Advanced Temporal Attention Mechanism Based 5G Traffic Prediction Model for IoT Ecosystems
Source Title: 2024 IEEE 21st International Conference on Mobile Ad-Hoc and Smart Systems (MASS), DOI Link
Traffic prediction in 5G is important for the effective deployment and operation of Internet of Things (IoT) ecosystems. It enables resource management and optimization, guaranteeing that the network can handle unpredictable traffic volumes without experiencing congestion. This helps to ensure high quality of service and low latency for applications such as autonomous automobiles and virtual reality. Predictive traffic management further enhances user experience by keeping services consistent and reliable, particularly during busy hours. There are various approaches to traffic prediction in 5G networks, each with its own advantages and disadvantages; the choice of model depends on the precision, adaptability, and computational cost the network can afford. The model proposed in this paper integrates lightweight convolution with temporal attention to deliver accurate and efficient traffic prediction for 5G networks, which may further be useful for developing IoT ecosystems.
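To make the temporal-attention idea concrete, the sketch below shows in plain numpy how a window of past traffic observations can be scored and combined into a context vector for next-step prediction. The shapes, random weights, and single attention head are assumptions for illustration; the paper's lightweight-convolution front end and trained parameters are not shown.

```python
import numpy as np

def temporal_attention(history, Wq, Wk, Wv):
    """history: (T, d) window of past traffic features.
    Returns a context vector that weights informative time steps more heavily."""
    q = history[-1] @ Wq                  # query from the most recent step
    K = history @ Wk                      # keys for every time step
    V = history @ Wv                      # values for every time step
    scores = K @ q / np.sqrt(K.shape[1])  # scaled dot-product scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # softmax over time steps
    return weights @ V                    # attention-weighted context

rng = np.random.default_rng(0)
T, d = 12, 8                              # 12 past intervals, 8 traffic features
history = rng.random((T, d))
Wq = Wk = Wv = rng.random((d, d))
context = temporal_attention(history, Wq, Wk, Wv)
print("context vector shape:", context.shape)   # fed to a small prediction head
```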
AI Based Feature Selection for Intrusion Detection Classifiers in Cloud of Things
Source Title: 2024 1st International Conference on Cognitive, Green and Ubiquitous Computing, IC-CGU 2024, DOI Link
The popularity of cloud computing can be attributed to its on-demand nature, scalability, and flexibility. However, because of its heightened vulnerability and propensity for sophisticated, widespread attacks, safeguarding this distributed environment presents difficulties, and conventional intrusion detection systems (IDS) are insufficient. The IDS for cloud environments proposed in this study makes use of ensemble feature selection and classification techniques. This approach robustly distinguishes between attacks and normal traffic by merging individual classifiers through voting. Performance measures and ROC-AUC analysis show that the new approach is significantly more accurate and produces fewer false alarms than existing approaches. For cloud intrusion detection, this method provides a statistically better option. © 2024 IEEE.
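A minimal sketch of the two ideas in the abstract, ensemble feature selection followed by a voting classifier, using scikit-learn. The synthetic traffic data, the two filter criteria, and the chosen base classifiers are placeholders for illustration, not the study's dataset or exact ensemble.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, f_classif
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Placeholder traffic data: 1000 flows, 40 features, attack vs. normal.
X, y = make_classification(n_samples=1000, n_features=40, n_informative=10, random_state=0)

# Ensemble feature selection: keep features ranked highly by both filters.
k = 15
top_mi = set(np.argsort(mutual_info_classif(X, y, random_state=0))[-k:])
top_f = set(np.argsort(f_classif(X, y)[0])[-k:])
selected = sorted(top_mi & top_f)
Xs = X[:, selected]

# Majority-voting ensemble of individual classifiers.
clf = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("dt", DecisionTreeClassifier(random_state=0)),
                ("rf", RandomForestClassifier(random_state=0))],
    voting="hard")
Xtr, Xte, ytr, yte = train_test_split(Xs, y, test_size=0.3, random_state=0)
clf.fit(Xtr, ytr)
print("selected features:", selected)
print("test accuracy:", clf.score(Xte, yte))
```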
An Ensemble Deep Learning Model for Oral Squamous Cell Carcinoma Detection Using Histopathological Image Analysis
Source Title: IEEE Access, Quartile: Q1, DOI Link
Deep learning approaches for medical image analysis are widely applied for the recognition and classification of different kinds of cancer. In this study, histopathological images of oral cells are analyzed for the automated recognition of oral squamous cell carcinoma (OSCC) using the proposed framework. The suggested model applies transfer learning and ensemble learning in two phases. In the first phase, a few convolutional neural network (CNN) models are considered through transfer learning for OSCC detection. In the second phase, an ensemble model is constructed from the best two pre-trained CNNs of the first phase. The proposed classifier is compared with leading-edge models like AlexNet, ResNet50, ResNet101, Inception Net, Xception Net, and InceptionResNetV2. Results are analyzed to demonstrate the effectiveness of the suggested framework. A three-phase comparative analysis is considered: first, various metrics including accuracy, recall, F-score, and precision are evaluated; second, a graphical analysis using loss and accuracy curves is performed; and lastly, the accuracy of the proposed classifier is compared with that of other models from the existing literature. Following the three-stage performance evaluation, the proposed ensemble classifier exhibits enhanced performance with an accuracy of 97.88%.
Enhancing Traffic Flow Through Advanced ACO Mechanism
Source Title: IEEE INFOCOM 2024 - IEEE Conference on Computer Communications Workshops, INFOCOM WKSHPS 2024, DOI Link
Severe traffic congestion is a significant challenge for urban areas, and improving sustainable urban development is critical, yet traditional traffic management systems often struggle to cope with dynamic real-time conditions due to their reliance on predetermined schedules and fixed control mechanisms. This paper advocates for the application of optimizing techniques, specifically an enhanced version of ant colony optimization (ACO), to alleviate this challenge. By effectively managing and enhancing vehicle movement, these approaches target the reduction of congestion, travel times, and costs while concurrently enhancing fuel efficiency. This approach can also be adapted to optimize the deployment and movement of drones in wireless communication networks, ensuring optimal coverage and resource utilization. Implementations, comparisons, and visualizations show how these approaches help improve traffic movement, thereby minimizing congestion-associated problems. © 2024 IEEE.
A Systematic Review on Federated Learning in Edge-Cloud Continuum
Source Title: SN Computer Science, Quartile: Q1, DOI Link
Federated learning (FL) is a cutting-edge machine learning platform that protects user privacy while enabling collaborative learning across various devices. It is particularly relevant in the current environment when massive volumes of data are generated at the edge of networks by developing technologies like social networking, cloud computing, edge computing, and the Internet of Things. FL reduces the possibility of unauthorized access by third parties by allowing data to stay on local devices, hence mitigating any privacy breaches. The integration of FL in Cloud, Edge, and hybrid Edge-Cloud settings are some of the computing paradigms that this study investigates. We highlight the salient features of FL, go over the main obstacles to its implementation and use, and make recommendations for future study directions. Furthermore, we assess how FL, by facilitating safe and cooperative data sharing among vehicles, can improve service quality in the Internet of Vehicles (IoV). Our study findings are intended to offer practical insights and suggestions that may have an impact on a variety of computing technology research topics. © The Author(s), under exclusive licence to Springer Nature Singapore Pte Ltd. 2024.
Task Offloading Technique Selection In Mobile Edge Computing
Source Title: 2024 International Conference on Advancements in Smart, Secure and Intelligent Computing (ASSIC), DOI Link
In distributed computing environments, computation offloading is a vital strategy for maximizing the performance and energy efficiency of mobile devices. Distributed deep learning-based offloading (DDLO) [10] and deep reinforcement learning for online computation offloading (DROO) [10] are two popular methods for solving the computation offloading problem. In DDLO, the data is divided into smaller pieces during offloading and distributed across the systems or devices. In DROO, an agent is trained to determine the optimum offloading choices based on the resources at hand, the network environment, and the application's performance requirements. A comparison of both approaches is presented, emphasizing their benefits and drawbacks and the situations in which one approach is more suitable than the other. Precision, effectiveness, and adaptability are just a few of the metrics we use to evaluate the performance of both techniques in a variety of workload and network configuration scenarios. Our findings indicate that while deep reinforcement learning is better able to respond to environmental changes, distributed deep learning-based offloading is more efficient in terms of computational resources.
Enhancing Edge Intelligence with Layer-wise Adaptive Precision and Randomized PCA
Dr Sambit Kumar Mishra, Sri Lakshmi Praghna Manthena, Velankani Joise Divya G C, Paavani Aashika Maddi, Nvss Mounika Tanniru
Source Title: 2024 International Conference on Advancements in Smart, Secure and Intelligent Computing (ASSIC), DOI Link
Edge intelligence is the ability of edge devices to carry out intelligent operations, such as object identification, speech recognition, or natural language processing, utilizing machine learning algorithms. The primary goal is to address the limitations of edge computing and improve its performance. The main contribution of this work is to apply randomized PCA (RPCA) to increase energy efficiency and reduce memory usage. The algorithm computes the covariance matrix of the centered data, finds the eigenvectors and eigenvalues of the covariance matrix, sorts them in descending order of the eigenvalues, chooses the first set of eigenvectors, and projects the data onto the chosen eigenvectors. This article also employs a technique known as layer-wise adaptive precision (LAP), which decreases the precision of activations in neural network layers that contribute less to output accuracy.
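The eigen-decomposition procedure spelled out in the abstract corresponds to standard PCA; a minimal numpy sketch of exactly those steps is shown below (the randomized variant and the LAP precision reduction are not reproduced here, and the data is a placeholder).

```python
import numpy as np

def pca_project(X, n_components=2):
    """Project data onto the top principal components, following the steps above:
    center, covariance, eigen-decomposition, sort, select, project."""
    Xc = X - X.mean(axis=0)                         # center the data
    cov = np.cov(Xc, rowvar=False)                  # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues / eigenvectors
    order = np.argsort(eigvals)[::-1]               # sort by descending eigenvalue
    components = eigvecs[:, order[:n_components]]   # choose leading eigenvectors
    return Xc @ components                          # project onto chosen eigenvectors

X = np.random.default_rng(0).random((200, 16))      # e.g. sensor features on an edge device
Z = pca_project(X, n_components=4)
print(Z.shape)   # (200, 4): reduced representation saves memory and energy
```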
Special issue on collaborative edge computing for secure and scalable Internet of Things
Source Title: Software - Practice and Experience, Quartile: Q1, DOI Link
A deep transfer learning model for green environment security analysis in smart city
Dr Sambit Kumar Mishra, Majed Alfayad, Madhusmita Sahu, Rasmita Dash, Mamoona Humayun, Mohammed Assiri
Source Title: Journal of King Saud University - Computer and Information Sciences, Quartile: Q1, DOI Link
Green environmental security refers to the state of human-environment interactions that includes reducing resource shortages, pollution, and biological dangers that can cause societal disorder. In IoT-enabled smart cities, thanks to the advancement of technologies, sensors and actuators collect vast quantities of data that are analyzed to extract potentially useful information. However, due to the noise and diversity of the data generated, only a small portion of the massive data collected from smart cities is used. In sustainable Land Use and Land Cover (LULC) management, environmental deterioration resulting from improper land usage in the digital ecosystem is a global issue that has garnered attention. The deep learning techniques of AI are recognized for their capacity to manage vast amounts of erroneous and unstructured data. In this paper, we propose a morphologically augmented fine-tuned DenseNet-121 (MAFDN) LULC classification model to automate the categorization of high-spatial-resolution scene images for environmental conservation. This work includes an augmentation process (i.e., erosion, dilation, blurring, and contrast enhancement operations) to extract spatial patterns and enlarge the training size of the dataset. A few state-of-the-art techniques are used for comparison to demonstrate the efficacy of the proposed approach. This facilitates green resource management and personalized provision of services.
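A minimal OpenCV sketch of the augmentation operations named in the abstract (erosion, dilation, blurring, contrast enhancement). The file path, kernel sizes, and contrast parameters are assumptions for illustration; the DenseNet-121 fine-tuning itself is not shown.

```python
import cv2
import numpy as np

def morphological_augmentations(image):
    """Return augmented variants of one scene image: erosion, dilation,
    Gaussian blurring, and contrast enhancement."""
    kernel = np.ones((3, 3), np.uint8)
    eroded = cv2.erode(image, kernel, iterations=1)
    dilated = cv2.dilate(image, kernel, iterations=1)
    blurred = cv2.GaussianBlur(image, (5, 5), 0)
    contrast = cv2.convertScaleAbs(image, alpha=1.4, beta=10)  # contrast/brightness boost
    return {"eroded": eroded, "dilated": dilated, "blurred": blurred, "contrast": contrast}

# Hypothetical usage: "scene.png" is a placeholder path, not the paper's dataset.
image = cv2.imread("scene.png")
for name, aug in morphological_augmentations(image).items():
    cv2.imwrite(f"scene_{name}.png", aug)
```

The augmented variants would then be added to the training set before fine-tuning the classification network.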
Comparative Evaluation of Optimization Techniques for Industrial Wireless Sensor Network Hello Flood Attack Mitigation
Source Title: Proceedings - 2024 3rd International Conference on Computational Modelling, Simulation and Optimization, ICCMSO 2024, DOI Link
Protecting Industrial Wireless Sensor Networks (IWSNs) means ensuring that crucial industrial processes remain stable and intact. In order to mitigate the Hello Flood Attack in IWSNs, this paper compares three optimization heuristics: the Genetic Algorithm (GA), Simulated Annealing (SA), and Particle Swarm Optimization (PSO). The GA evolves candidate mitigation solutions, SA iteratively refines the communication setup, and PSO tunes parameters to improve robustness. The study examines how well each optimization technique enhances network resilience and protects against the negative effects of Hello Flood Attacks, with a benchmark scenario included for comparison. The results offer valuable information for the development of safe, secure IWSNs by pointing out the benefits and drawbacks of these techniques. © 2024 IEEE.
Blockchain-Based Medical Report Management and Distribution System
Dr Sambit Kumar Mishra, Mr Subham Kumar Sahoo, Abhishek Guru
Source Title: 6G Enabled Fog Computing in IoT, DOI Link
Hospital operations generally involve large numbers of medical reports, which are a crucial part of those operations. By integrating pathology and other testing laboratories within the medical center, hospitals today have improved their business operations while also achieving faster and more efficient diagnoses. Many different processes are used in hospital operations, from patient admission and control to hospital cost management. This raises operational complexity and makes management more challenging, especially when combined with newly introduced services such as pathology and pharmaceutical control. In order to overcome this issue, we employ the Hyperledger concept and blockchain technology to retain the record of each individual transaction with 100% authenticity. Instead of using a centralized server, all transactions are encrypted and kept as blocks, which are then authenticated within a network of computers. Additionally, we employ the Hyperledger concept to associate and store all associated medical files for each transaction with a date stamp. This makes it possible to confirm the legitimacy of each document and identify any changes made by someone else. A patient's medical record is personal, and every patient has his or her own privacy, so the reports must be guarded against attackers who would modify medical reports, while the data is preserved without losing any content. To protect the reports, we use a blockchain method that splits the information into modules, so attackers cannot obtain the complete information. The primary goal of this project is to bring forward a secure, safe, efficient, and legitimate medical report management system.
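A toy illustration (not Hyperledger itself) of the underlying idea: storing report transactions as hash-linked, timestamped blocks so that any later modification is detectable. A production system would instead use Hyperledger Fabric chaincode; the report fields and values below are placeholders.

```python
import hashlib
import json
import time

def make_block(report, previous_hash):
    """Create a timestamped block whose hash covers the report and the previous hash."""
    block = {"report": report, "timestamp": time.time(), "previous_hash": previous_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def chain_is_valid(chain):
    """Recompute every hash and check the links; any tampering breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block({"patient": "P-001", "test": "pathology", "result": "normal"}, "0")]
chain.append(make_block({"patient": "P-001", "test": "X-ray", "result": "clear"}, chain[-1]["hash"]))
print(chain_is_valid(chain))                # True
chain[0]["report"]["result"] = "edited"     # simulate tampering with a stored report
print(chain_is_valid(chain))                # False: modification is detected
```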
Latency-Aware Resource Planning in Edge Using Fuzzy Logic
Source Title: 2023 2nd International Conference on Ambient Intelligence in Health Care (ICAIHC), DOI Link
As a potential paradigm for enabling effective and low-latency computation at the network's edge, edge computing has recently come into the spotlight. In edge computing environments, resource allocation is essential for ensuring the best possible resource utilization while still satisfying application requirements. Traditional resource allocation algorithms, however, struggle to capture the uncertainties and ambiguity associated with resource availability and application needs because of the dynamic and varied nature of edge environments. This research offers a fuzzy logic-based method for resource allocation planning in edge computing. Fuzzy logic offers a flexible and understandable framework for modeling and reasoning with imperfect and ambiguous data. By utilizing fuzzy logic, the suggested method provides a more reliable and adaptable resource allocation scheme that can successfully address the uncertainties present in edge computing. The resource allocation process incorporates fuzzy membership functions to capture the vagueness of resource availability and application requirements. Fuzzy rules map the linguistic variables representing resource availability, application demands, and performance objectives to appropriate resource allocation decisions. The fuzzy inference engine then utilizes these rules to make intelligent allocation decisions, considering the fuzzy inputs and the system's predefined objectives.
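A minimal pure-Python sketch of the fuzzy reasoning described above: triangular membership functions fuzzify resource availability and application demand, a few illustrative rules map them to an allocation level, and a weighted-average defuzzification produces the decision. The membership breakpoints and rules are assumptions, not the paper's rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(value):
    """Membership degrees for low / medium / high on a 0-100 scale."""
    return {"low": tri(value, -1, 0, 50),
            "medium": tri(value, 25, 50, 75),
            "high": tri(value, 50, 100, 101)}

def allocate(availability, demand):
    """Illustrative rules: ample availability and high demand -> allocate a larger share."""
    avail, dem = fuzzify(availability), fuzzify(demand)
    rules = [
        (min(avail["high"], dem["high"]), 90),
        (min(avail["high"], dem["low"]), 40),
        (min(avail["medium"], dem["medium"]), 60),
        (min(avail["low"], dem["high"]), 30),   # scarce edge node: allocate cautiously
        (min(avail["low"], dem["low"]), 10),
    ]
    num = sum(strength * level for strength, level in rules)
    den = sum(strength for strength, _ in rules)
    return num / den if den else 0.0

print(allocate(availability=80, demand=70))   # percentage of edge resources to grant
```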
A Hybrid Encryption Approach using DNA-Based Shift Protected Algorithm and AES for Edge-Cloud System Security
Dr Sambit Kumar Mishra, Chandan Cherukuri, Pavuluri Venkata Dheeraj, Deepak Puthal
Source Title: 2023 OITS International Conference on Information Technology (OCIT), DOI Link
Modern applications, such as smart cities, connected homes, and crisis management systems, have driven the emergence of the edge-cloud continuum, which enables data processing to occur closer to the source, reducing latency and enhancing data processing efficiency. However, due to the distributed nature of edge nodes and cloud environments, data security remains a critical concern: malicious actors may intercept or eavesdrop on communication channels between edge devices and the cloud. DNA computing, a security concept inspired by biological DNA, offers a promising solution to address these challenges. This paper proposes a DNA-based cryptographic method for secure data transfer and communication in edge-cloud computing environments. The research also examines various data security threats in the edge-cloud continuum and explores potential countermeasures.
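A toy sketch of the DNA-encoding idea, assuming a simple 2-bit-per-nucleotide mapping and a Caesar-style shift over the nucleotide alphabet; the AES layer of the proposed hybrid scheme would then encrypt the shifted DNA string and is omitted here.

```python
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {v: k for k, v in BITS_TO_BASE.items()}
ALPHABET = "ACGT"

def to_dna(data: bytes) -> str:
    """Encode bytes as a DNA string, two bits per nucleotide."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def shift_dna(dna: str, key: int) -> str:
    """Shift each nucleotide along ACGT by a key value (the 'shift protection' step)."""
    return "".join(ALPHABET[(ALPHABET.index(b) + key) % 4] for b in dna)

def from_dna(dna: str) -> bytes:
    bits = "".join(BASE_TO_BITS[b] for b in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

message = b"edge sensor reading: 42"
encoded = shift_dna(to_dna(message), key=3)
decoded = from_dna(shift_dna(encoded, key=-3))
print(encoded[:24], "...")
print(decoded == message)   # True; AES encryption of `encoded` would follow in the hybrid scheme
```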
LiDAR-based Building Damage Detection in Edge-Cloud Continuum
Dr Sambit Kumar Mishra, Mohana Lasya Sanisetty, Apsareena Zulekha Shaik, Sai Likitha Thotakura, Sai Likhita Aluru, Deepak Puthal
Source Title: 2023 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress, DOI Link
In recent years, natural disasters such as earthquakes and hurricanes have caused significant damage to buildings and infrastructure worldwide. As a result, there has been an increasing demand for efficient and accurate methods of assessing the extent of building damage to facilitate effective recovery efforts. One emerging technology that shows great promise in this area is Light Detection and Ranging (LiDAR). This paper therefore proposes a novel detection framework utilizing textural feature extraction strategies for LiDAR-based building damage detection. LiDAR, a remote sensing technology, has the ability to create detailed maps of buildings and other infrastructure, allowing for precise identification and measurement of damage caused by natural disasters. Integration of the popular edge-cloud continuum paradigm extends the cloud's capabilities to the edge of the network, enabling more effective post-disaster recovery efforts. Smart LiDAR sensors pre-process the captured data and send it to the nearest edge device for further processing. A machine learning algorithm, the K-means clustering algorithm, is used to classify buildings into damaged and undamaged classes by analyzing the extracted textural features, and the scheme can detect various types of building damage. The cloud server is utilized to store the processed maps. The integration of the Edge-Cloud Continuum (ECC) adds further value by reducing the network usage and latency of the LiDAR-based building damage detection system: ECC enables processing and analysis of data at the point of origin as well as large-scale data processing and storage in cloud-based systems. The proposed framework has shown promising results in preliminary experiments and has the potential to revolutionize post-disaster recovery efforts by providing efficient building damage maps.
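A minimal sketch of the classification step: simple textural statistics are computed per building patch and K-means separates patches into two clusters (damaged vs. undamaged). The synthetic patches and the two texture descriptors are placeholders for the paper's LiDAR-derived features.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Placeholder "patches": intact roofs are smooth, damaged ones show high local variation.
intact = rng.normal(10.0, 0.3, size=(30, 32, 32))
damaged = rng.normal(10.0, 2.5, size=(30, 32, 32))
patches = np.concatenate([intact, damaged])

def texture_features(patch):
    """Two simple textural descriptors: local roughness and height range."""
    grad_y, grad_x = np.gradient(patch)
    roughness = np.mean(np.hypot(grad_x, grad_y))
    height_range = patch.max() - patch.min()
    return [roughness, height_range]

X = np.array([texture_features(p) for p in patches])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(labels))   # roughly 30 / 30: damaged vs. undamaged
```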
Role of federated learning in edge computing: A survey
Dr Sambit Kumar Mishra, Nehal Sampath Kumar, Bhaskar Rao, Brahmendra, Lakshmana Teja
Source Title: Journal of Autonomous Intelligence, DOI Link
This paper explores various approaches to enhance federated learning (FL) through the utilization of edge computing. Three techniques, namely Edge-Fed, hybrid federated learning at edge devices, and cluster federated learning, are investigated. The Edge-Fed approach addresses the computational and communication challenges faced by mobile devices in FL by offloading calculations to edge servers. It introduces a network architecture comprising a central cloud server, an edge server, and IoT devices, enabling local aggregations and reducing global communication frequency. Edge-Fed offers benefits such as reduced computational costs, faster training, and decreased bandwidth requirements. Hybrid federated learning at edge devices aims to optimize FL in multi-access edge computing (MAEC) systems. Cluster federated learning introduces a cluster-based hierarchical aggregation system to enhance FL performance. The paper explores the applications of these techniques in various domains, including smart cities, vehicular networks, healthcare, cybersecurity, natural language processing, autonomous vehicles, and smart homes. The combination of edge computing (EC) and federated learning (FL) is a promising technique gaining popularity across many applications: EC brings cloud computing services closer to data sources, further enhancing FL, and the integration of FL and EC offers potential benefits in terms of collaborative learning.
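A minimal numpy sketch of the local-aggregation idea common to the surveyed schemes: each edge client trains locally for a round, and the (edge or cloud) server averages the client models, weighted by local data size. The linear model, data, and round counts are placeholders, not any specific scheme from the survey.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training round: plain gradient descent on linear regression."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server-side aggregation: data-size-weighted average of client models."""
    return np.average(client_weights, axis=0, weights=np.array(client_sizes, dtype=float))

# Three edge clients with local (non-shared) data drawn from the same true model.
true_w = np.array([2.0, -1.0, 0.5])
clients = []
for _ in range(3):
    X = rng.normal(size=(int(rng.integers(40, 80)), 3))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=len(X))))

global_w = np.zeros(3)
for _ in range(20):                                   # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])
print("learned weights:", np.round(global_w, 3))      # close to [2, -1, 0.5]
```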
CS-Based Energy-Efficient Service Allocation in Cloud
Dr Sambit Kumar Mishra, Mr Subham Kumar Sahoo, Abhishek Guru, Chinmaya Kumar Swain, Pramod Kumar Sethy, Bibhudatta Sahoo
Source Title: Lecture Notes in Networks and Systems, Quartile: Q4, DOI Link
Nowadays, cloud computing is growing rapidly and has developed into an adequate and adaptable paradigm for solving large-scale problems. Since the number of cloud users and their requests is increasing fast, the cloud data center may become under-loaded or over-loaded. These circumstances induce various problems, such as high response time and energy consumption. High energy consumption in the cloud data center has drastic negative impacts on the environment. The literature shows that scheduling plays a significant role in the reduction of energy consumption. In the recent decade, this problem has attracted huge interest among researchers, and several solutions have been proposed. Energy-efficient service (task) allocation under a high Customer Satisfaction (CS) constraint has become a critical problem in the cloud. In this paper, a high-CS-based energy-efficient service allocation framework has been designed, which optimizes the energy consumption as well as the CS level in the cloud. The proposed algorithm is simulated in the CloudSim simulator and compared with some standard algorithms. The simulation results are in favor of the proposed algorithm.
Predictive VM Consolidation for Latency Sensitive Tasks in Heterogeneous Cloud
Dr Sambit Kumar Mishra, Chinmaya Kumar Swain, Preeti Routray, Abdulelah Alwabel
Source Title: Lecture Notes in Networks and Systems, Quartile: Q4, DOI Link
Virtualization technology plays a crucial role in reducing cost in a cloud environment. An efficient virtual machine (VM) packing method focuses on compacting hosts so that most of their resources are used when they serve user requests. Here, our aim is to reduce the power requirements of a cloud system by minimizing the number of active hosts. We propose a predictive scheduling approach that considers the deadline of a task request and makes flexible decisions when allocating tasks to hosts. Experimental results show that the proposed approach can save around 5 to 10% of power consumption compared with standard VM packing methods in most scenarios. Even when the total power consumption remains the same as that of standard methods in some scenarios, the average number of hosts required in the cloud environment is reduced, thereby reducing the cost.
Applications of Federated Learning in Computing Technologies
Dr Sambit Kumar Mishra, Dr Tapas Kumar Mishra, Kotipalli Sindhu, Mogaparthi Surya Teja, Vutukuri Akhil, Ravella Hari Krishna, Pakalapati Praveen
Source Title: Convergence of Cloud with AI for Big Data Analytics: Foundations and Innovation, DOI Link
Federated learning is a technique that trains models across different decentralized devices holding local data samples without exchanging them; the concept is also called collaborative learning. In federated learning, clients separately train deep neural network models on their local data, and these models are combined into a global deep neural network model at the central server. Unlike traditional approaches in which all local datasets are uploaded to a single server, federated learning does not transmit the raw data to the server and does not assume that local data samples are identically distributed. Because of its security and privacy benefits, it is widely utilized in many applications such as IoT, cloud computing, edge computing, vehicular edge computing, and many more. The implementation details for protecting the privacy of locally uploaded data in federated learning are described. Since there will be trillions of edge devices, system efficiency and privacy should be taken into consideration when evaluating federated learning algorithms in computing technologies. The discussion incorporates the effectiveness, privacy, and usage of federated learning in several computing technologies. Different applications of federated learning, its privacy concerns, and its role in various fields of computing such as IoT, edge, and cloud computing are presented.
Automatic Detection of Oral Squamous Cell Carcinoma from Histopathological Images of Oral Mucosa Using Deep Convolutional Neural Network
Source Title: International Journal of Environmental Research and Public Health, Quartile: Q1, DOI Link
Worldwide, oral cancer is the sixth most common type of cancer. India is in the second position, with one of the highest numbers of oral cancer patients, contributing almost one-third of the total count. Among the several types of oral cancer, the most common and dominant one is oral squamous cell carcinoma (OSCC). The major causes of oral cancer are tobacco consumption, excessive alcohol consumption, poor oral hygiene, betel quid chewing, viral infection (namely human papillomavirus), etc. The early detection of OSCC, in its preliminary stage, gives more chances for better treatment and proper therapy. In this paper, the authors propose a convolutional neural network model for the automatic and early detection of OSCC; for experimental purposes, histopathological oral cancer images are considered. The proposed model is compared and analyzed with state-of-the-art deep learning models like VGG16, VGG19, AlexNet, ResNet50, ResNet101, MobileNet, and Inception Net. The proposed model achieved a cross-validation accuracy of 97.82%, which indicates the suitability of the proposed approach for the automatic classification of oral cancer data.
Combination of Reduction Detection Using TOPSIS for Gene Expression Data Analysis
Dr Sambit Kumar Mishra, Dr Tapas Kumar Mishra, Jogeswar Tripathy, Rasmita Dash, Binod Kumar Pattanayak, Deepak Puthal
Source Title: Big Data and Cognitive Computing, Quartile: Q1, DOI Link
In high-dimensional data analysis, Feature Selection (FS) is one of the most fundamental issues in machine learning and requires the attention of researchers. These datasets are characterized by a huge feature space, out of which only a few features are significant for analysis; thus, significant feature extraction is crucial. There are various techniques available for feature selection; among them, filter techniques are significant in this community, as they can be used with any type of learning algorithm, drastically lower the running time of optimization algorithms, and improve the performance of the model. Furthermore, the suitability of a filter approach depends on the characteristics of the dataset as well as on the machine learning model. Thus, to avoid these issues, in this research a combination of feature reduction (CFR) is considered by designing a pipeline of filter approaches for high-dimensional microarray data classification. Considering four filter approaches, sixteen combinations of pipelines are generated. The feature subset is reduced at different levels, and ultimately the significant feature set is evaluated. The pipelined filter techniques are Correlation-Based Feature Selection (CBFS), Chi-Square Test (CST), Information Gain (InG), and Relief Feature Selection (RFS), and the classification techniques are Decision Tree (DT), Logistic Regression (LR), Random Forest (RF), and k-Nearest Neighbor (k-NN). The performance of CFR depends highly on the datasets as well as on the classifiers. Thereafter, the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) method is used to rank all reduction combinations and identify the superior filter combination among them.
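A minimal numpy sketch of the TOPSIS ranking step used to pick the best filter combination: rows of the decision matrix are reduction combinations, columns are (assumed) benefit-type performance criteria, and alternatives are ranked by closeness to the ideal solution. The example scores and weights are illustrative, not the paper's results.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) given criteria weights and benefit/cost flags."""
    norm = matrix / np.sqrt((matrix ** 2).sum(axis=0))      # vector normalization
    weighted = norm * weights
    ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
    anti = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))
    d_ideal = np.sqrt(((weighted - ideal) ** 2).sum(axis=1))
    d_anti = np.sqrt(((weighted - anti) ** 2).sum(axis=1))
    closeness = d_anti / (d_ideal + d_anti)
    return closeness, np.argsort(-closeness)                # higher closeness = better rank

# Illustrative decision matrix: 4 filter combinations x 3 criteria (accuracy, F-score, recall).
scores = np.array([[0.91, 0.89, 0.90],
                   [0.94, 0.92, 0.93],
                   [0.88, 0.90, 0.87],
                   [0.93, 0.91, 0.94]])
weights = np.array([0.4, 0.3, 0.3])
closeness, ranking = topsis(scores, weights, benefit=np.array([True, True, True]))
print("closeness:", np.round(closeness, 3), "best combination index:", ranking[0])
```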
A Data Aggregation Approach Exploiting Spatial and Temporal Correlation among Sensor Data in Wireless Sensor Networks
Dr Kshira Sagar Sahoo, Dr Sambit Kumar Mishra, Lucy Dash, Noor Zaman Jhanjhi, Mohammed Baz, Mehedi Masud, Binod Kumar Pattanayak
Source Title: Electronics, Quartile: Q3, DOI Link
Wireless sensor networks (WSNs) have various applications, including zone surveillance, environmental monitoring, and event tracking, where the operation mode is long-term. WSNs are characterized by low-powered, battery-operated sensor devices with a finite source of energy. Due to the dense deployment of these devices, it is practically impossible to replace the batteries, so the finite energy should be utilized in a meaningful way to maximize the overall network lifetime. In the spatial domain, there is a high correlation among the observations of densely deployed sensors across the network topology, and consecutive observations of a node exhibit temporal correlation that depends on the nature of the physical phenomenon being sensed. These spatio-temporal correlations can be efficiently utilized to maximize energy savings. In this paper, we propose a Spatial and Temporal Correlation-based Data Redundancy Reduction (STCDRR) protocol, which eliminates redundancy at the source level and the aggregator level. The estimated performance score of the proposed algorithm is approximately 7.2, whereas the scores of existing algorithms such as KAB (K-means algorithm based on the ANOVA model and Bartlett test) and ED (Euclidean distance) are 5.2 and 0.5, respectively. This reflects that the STCDRR protocol can achieve a higher data compression rate and lower false-negative and false-positive rates. These results are valid for numeric data collected from a real dataset; the experiment does not consider non-numeric values.
Crop Recommendation System Using Support Vector Machine Considering Indian Dataset
Dr Tapas Kumar Mishra, Dr Sambit Kumar Mishra, Kanaparthi Jeevan Sai, Shreyas Peddi, Manideep Surusomayajula
Source Title: Lecture Notes in Networks and Systems, Quartile: Q4, DOI Link
For a long time, agriculture has been considered a major source of livelihood for Indians. Still, agriculture is often not profitable, and many farmers take drastic steps because they cannot survive the burden of loans, so it is one area where there is still large scope for development. In comparison with other countries, India has the highest production rate in agriculture; however, most agricultural fields remain underdeveloped due to the lack of deployment of ecosystem control technologies. Agriculture, when combined with technology, can bring the finest results. Crop yield depends on multiple climatic conditions such as air temperature, soil temperature, humidity, and soil moisture. In general, farmers depend on self-monitoring and experience for harvesting fields. Water scarcity is a major issue affecting people worldwide, and water is a vital component of crop yield; here, we consider rainfall instead of direct water measurements. Predicting the crop selection/yield in advance of harvest would help policymakers and farmers take appropriate measures for farming, marketing, and storage. Thus, in this paper we propose crop selection using machine learning techniques, namely the support vector machine (SVM) and polynomial regression. This model will help farmers know the yield of their crop before cultivating the agricultural field and thus help them make appropriate decisions. It attempts to solve the issue by building a prototype of an interactive prediction system. Accurate yield prediction requires understanding the functional relationship between yield and these parameters, because alongside advances in farming machines and technologies, useful and accurate information also plays a significant role. In this paper, we have simulated the SVM and polynomial regression techniques to predict which crop can yield better profit. Both models are simulated comprehensively on the Indian dataset, and an analytical report is presented.
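A minimal scikit-learn sketch of the SVM-based crop recommendation step, assuming a CSV with climate/soil columns and a crop label. The file path and column names are placeholders, not the Indian dataset used in the paper.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder path and column names for illustration only.
df = pd.read_csv("crop_data.csv")
features = ["air_temperature", "soil_temperature", "humidity", "soil_moisture", "rainfall"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["crop"], test_size=0.2, random_state=42)

# Scale the climatic features, then fit an RBF-kernel SVM classifier.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
print("recommended crop for first test sample:", model.predict(X_test.iloc[[0]])[0])
```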
A Smart Logistic Classification Method for Remote Sensed Image Land Cover Data
Dr Sambit Kumar Mishra, Madhusmita Sahu, Rasmita Dash, Deepak Puthal
Source Title: SN Computer Science, Quartile: Q1, DOI Link
A smart system integrates sensing, acquisition, classification, and management components to interpret and analyze a situation and generate decisions based on the available data in a predictive way. Remotely sensed images are an essential tool for evaluating and analyzing land cover dynamics, particularly forest-cover change. The remote data gathered for this purpose from different sensors are of high spatial resolution and thus suffer from high interclass and low intraclass variability issues, which degrade classification accuracy. To address this problem, this research proposes a smart logistic fusion-based supervised multi-class classification (SLFSMC) model to obtain a thematic map of different land cover types and thereby support smart actions. In the pre-processing stage of the proposed work, a pair of closing and opening morphological operations is employed to produce a fused image that exploits the contextual information of adjacent pixels; the quality of the fused image is then assessed with four fusion metrics. In the second phase, this fused image is taken as input to the proposed classifiers, and a multi-class classification model based on supervised learning generates maps for analyzing and exporting decisions in any critical climatic situation. To compare the performance of the proposed SLFSMC with a few conventional classification techniques, such as the Naïve Bayes classifier, decision tree, support vector machine, and k-nearest neighbors, a statistical tool called the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is employed. We have applied the proposed SLFSMC system to some regions of Victoria, a state of Australia, after deforestation caused by various factors.
Task Allocation in Containerized Cloud Computing Environment
Dr Sambit Kumar Mishra, Md Akram Khan, Anisha Kumari, Bibhudatta Sahoo
Source Title: 2022 International Conference on Advancements in Smart, Secure and Intelligent Computing (ASSIC), DOI Link
Containerization technology makes use of operating-system-level virtualization to package an application with its required libraries so that it runs isolated from other processes on the same host. The lightweight, easy deployment of containers has made them popular at many data centers; containers have captured much of the virtual machine market and emerged as a lightweight technology that offers better microservices support. Many organizations widely deploy container technology for handling diverse and unpredictable workloads derived from modern applications such as edge/fog computing, Big Data, and IoT, in either proprietary clusters or public and private cloud data centers. In cloud computing, as in container technology, scheduling plays a pivotal role in achieving optimum utilization of available resources. Designing an efficient scheduler is itself a challenging task; the challenges arise from various aspects such as the diversity of computing resources, maintaining fairness among numerous tenants sharing resources as per their requirements, unexpected variation in resource demands, and heterogeneity of jobs. This survey provides a multi-perspective overview of container scheduling. We organize the container scheduling problem into four categories based on the type of optimization algorithm applied: linear programming modeling, heuristic, meta-heuristic, and machine learning/artificial intelligence-based mathematical models. Previous research has considered either the placement of virtual machines on physical machines or the placement of container instances on physical machines, which leads to either underutilized or over-utilized PMs. In this paper, we combine both virtualization technologies, containers as well as VMs. The primary aim is to optimize resource utilization in terms of CPU time, and we propose a meta-heuristic algorithm named Sorted Task-Based Allocation (TBA). Simulation results show that the proposed Sorted TBA algorithm performs better than the Random and Unsorted TBA algorithms.
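A minimal sketch of a sorted task-based allocation heuristic of the kind named above: tasks are sorted by size (largest first) and each is assigned to the VM that would finish it earliest. The data model and sizes are assumptions for illustration, not the paper's CloudSim setup.

```python
import numpy as np

rng = np.random.default_rng(0)
task_lengths = rng.uniform(100, 1000, size=20)        # task sizes in million instructions
vm_mips = np.array([500.0, 750.0, 1000.0, 1250.0])    # processing capacity of each VM

def sorted_task_based_allocation(lengths, mips):
    """Assign tasks (largest first) to the VM that finishes them earliest."""
    finish = np.zeros(len(mips))                       # accumulated busy time per VM
    assignment = {}
    for t in np.argsort(lengths)[::-1]:                # tasks in decreasing order of length
        candidate_finish = finish + lengths[t] / mips
        vm = int(np.argmin(candidate_finish))          # earliest-completion VM
        finish[vm] = candidate_finish[vm]
        assignment[int(t)] = vm
    return assignment, finish.max()                    # placement and resulting makespan

assignment, makespan = sorted_task_based_allocation(task_lengths, vm_mips)
print("makespan (s):", round(makespan, 2))
```

Sorting largest-first tends to balance the finishing times better than processing tasks in arrival order, which is the intuition behind comparing Sorted TBA with the Unsorted and Random baselines.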
Analysis of Machine Learning Technologies for the Detection of Diabetic Retinopathy
Dr Sambit Kumar Mishra, Biswabijayee Chandra Sekhar Mohanty, Sonali Mishra
Source Title: Machine Learning for Healthcare Applications, DOI Link
In today's world, disease diagnosis plays a vital role in the area of medical imaging. Medical imaging is the method and procedure of making visual descriptions of the interior of a body for clinical investigation and clinical intervention, as well as visual depiction of the function of some organs or tissues. Medical imaging also deals with disease detection, and machine learning gives us a better way of detecting disease from medical images. Machine learning (ML) is an application of artificial intelligence (AI) that gives a system the capacity to learn and improve itself; it mainly focuses on the development of computer programs that can access data and use it for themselves. In this chapter we focus on detecting diabetic retinopathy using machine learning. Diabetes is a disease that results in too much sugar in the blood. There are three main types of diabetes, and diabetic retinopathy is one of the complications of the disease. Diabetic retinopathy is an eye disease brought about by the complications of diabetes, and it should be recognized early for effective treatment. As diabetes advances, the sight of a patient may begin to deteriorate and lead to diabetic retinopathy. Two groups are recognized, namely non-proliferative diabetic retinopathy and proliferative diabetic retinopathy. It should be detected as soon as possible, as it can cause permanent loss of vision, and by using ML in medical imaging we can detect it much faster and more accurately. In this chapter we analyze different ML technologies, algorithms, and models to diagnose diabetic retinopathy in an efficient manner to support the healthcare system.
Energy-Aware Task Allocation for Multi-Cloud Networks
Dr Sambit Kumar Mishra, Sonali Mishra, Ahmed Alsayat, N Z Jhanjhi, Mamoona Humayun, Kshira Sagar Sahoo, Ashish Kr Luhach
Source Title: IEEE Access, Quartile: Q1, DOI Link
In recent years, the growth rate of cloud computing technology has been increasing exponentially, mainly because of its extraordinary services with expanding computation power, the possibility of massive storage, and other services with maintained quality of service (QoS). Task allocation is one of the best ways to improve different performance parameters in the cloud, but when multiple heterogeneous clouds come into the picture, the allocation problem becomes more challenging. This research work proposes a resource-based task allocation algorithm, which is implemented and analyzed to understand the improved performance of the heterogeneous multi-cloud network. The proposed task allocation algorithm, Energy-aware Task Allocation in Multi-Cloud Networks (ETAMCN), minimizes the overall energy consumption and also reduces the makespan. The results show that the makespan largely overlaps for different tasks and does not show a significant difference. However, the average energy consumption improvement through ETAMCN is approximately 14%, 6.3%, and 2.8% as opposed to the random allocation algorithm, the Cloud Z-Score Normalization (CZSN) algorithm, and the multi-objective scheduling algorithm with fuzzy resource utilization (FR-MOS), respectively. An observation of the average SLA violation of ETAMCN for different scenarios is also performed.
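A minimal greedy sketch of energy-aware task allocation across heterogeneous clouds, assuming a simple power model (idle power plus utilization-proportional dynamic power). The VM parameters, power figures, and scoring weights are illustrative assumptions, not ETAMCN's actual model or values.

```python
import numpy as np

rng = np.random.default_rng(0)
tasks = rng.uniform(200, 2000, size=30)               # task lengths (million instructions)
vm_mips = np.array([500.0, 1000.0, 1500.0, 2000.0])   # VMs spread across multiple clouds
p_idle = np.array([70.0, 90.0, 105.0, 120.0])         # idle power (W)
p_max = np.array([120.0, 160.0, 200.0, 250.0])        # full-load power (W)

def energy_aware_allocation(tasks, mips, p_idle, p_max, alpha=0.7):
    """Greedily place each task where a normalized energy-plus-finish-time score is lowest."""
    busy = np.zeros(len(mips))                         # accumulated busy time per VM
    placement = []
    for length in tasks:
        exec_time = length / mips
        delta_energy = (p_max - p_idle) * exec_time    # dynamic energy of this task on each VM
        finish = busy + exec_time                      # completion time if placed there
        score = alpha * delta_energy / delta_energy.max() + (1 - alpha) * finish / finish.max()
        vm = int(np.argmin(score))
        busy[vm] += exec_time[vm]
        placement.append(vm)
    makespan = busy.max()
    energy = np.sum(p_idle * makespan + (p_max - p_idle) * busy)  # Joules over the schedule
    return placement, makespan, energy

placement, makespan, energy = energy_aware_allocation(tasks, vm_mips, p_idle, p_max)
print(f"makespan: {makespan:.1f} s, energy: {energy / 1000:.2f} kJ")
```

The weight alpha trades off energy against makespan, mirroring the two objectives the abstract reports for ETAMCN.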