Resource management in fog computing: Overview and mathematical foundation
Source Title: Swarm Intelligence: Theory and Applications in Fog Computing, Beyond 5G Networks, and Information Security, DOI Link
Fog computing is a distributed computing paradigm that extends the capabilities of cloud computing to the edge of the network, closer to the data source or user. Resource management in fog computing is a complex task due to the heterogeneity of devices, dynamic workloads, limited resources, energy efficiency, task offloading, load balancing, quality of service (QoS) management, security, and privacy concerns. It plays a crucial role in optimizing the performance and efficiency of fog computing systems. The chapter delves into the challenges posed by the diverse nature of devices, dynamic workloads, and distributed architecture, emphasizing the need for adaptive resource allocation strategies. It provides a systematic and mathematical approach to resource management, including the formulation of optimization problems such as the Knapsack Problem, Traveling Salesman Problem, Transportation Problem, Vehicle Routing Problem, and N-Queens Problem. Furthermore, it underscores the significance of load balancing, task offloading, and resource provisioning as adaptive strategies to dynamically allocate resources, ensuring optimal utilization without causing underutilization. It offers valuable insights into the complexities of managing resources in fog computing and provides a holistic view of the challenges, strategies, and mathematical formulations involved in resource management across various contexts.
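The knapsack formulation mentioned in the abstract maps naturally onto fog resource allocation: tasks are items with resource demands and utilities, and a fog node's capacity is the sack. A minimal sketch under that framing; the demands, utilities, and capacity below are hypothetical, not taken from the chapter.

```python
# Illustrative sketch: task-to-fog-node allocation cast as a 0/1 knapsack.
# Each task has a resource demand (weight) and a utility (value); the fog
# node has a fixed capacity. All numbers here are made up for illustration.

def knapsack(capacity, demands, utilities):
    """Return the maximum total utility of tasks fitting within capacity."""
    dp = [0] * (capacity + 1)  # dp[c] = best utility with capacity c
    for demand, utility in zip(demands, utilities):
        # Iterate capacity downwards so each task is used at most once.
        for c in range(capacity, demand - 1, -1):
            dp[c] = max(dp[c], dp[c - demand] + utility)
    return dp[capacity]

# Example: four tasks competing for 10 units of CPU on one fog node.
best = knapsack(10, demands=[5, 4, 6, 3], utilities=[10, 40, 30, 50])
```

Here the optimum admits the two tasks with demands 4 and 3 (total utility 90), illustrating why greedy per-task choices are insufficient and a systematic formulation pays off.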
Optimal deployment of multiple IoT applications on the fog computing: A metaheuristic-based approach
Dr Md Muzakkir Hussain, Dr Dinesh Reddy Vemula, Sai Sri Ram Kumar Macha, Pavan Kumar Chinta, Prajwal Katakam, Ilche Georgievski
Source Title: Swarm Intelligence: Theory and Applications in Fog Computing, Beyond 5G Networks, and Information Security, DOI Link
As IoT devices generate more massive amounts of data today than ever before, there has been a pressing need for solutions that can manage this data effectively. Cloud computing was a solution to this problem, but as new advancements were made in real-time data analysis and decision-making capabilities, the need for solutions with significantly reduced latency emerged. This requirement gave rise to fog computing, which introduces the challenge of placing application modules so that optimization objectives are maximized and latency is further minimized to meet modern requirements. In this chapter, we propose to place these application modules using the Particle Swarm Optimization (PSO) algorithm. Our work also compares the results with a few other module placement algorithms. PSO algorithms explore multiple solutions and look for efficient task allocation strategies that align with the principles of social swarm behavior. Moreover, our proposed system is designed to handle the fluctuating levels of resource availability present in dynamic environments, such as those offered by large fog computing infrastructures. We evaluate the performance of our system in terms of energy consumption, cost of execution on the cloud, and total network usage through simulations in iFogSim. We observe that application module placement using our system leads to a significant optimization of these key parameters.
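The PSO loop the chapter relies on can be sketched compactly. This is a minimal, self-contained version with a stand-in cost function (the sphere function) in place of the chapter's iFogSim placement model; the swarm size, inertia, and acceleration coefficients are illustrative defaults, not the authors' tuned values.

```python
import random

# Minimal PSO sketch. A real module-placement cost would score the latency
# and energy of a candidate module-to-node assignment; here a simple
# continuous cost stands in so the example runs on its own.

def pso(cost, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

best, best_cost = pso(lambda x: sum(v * v for v in x), dim=3)
```

Swapping the lambda for a function that decodes a particle into a module placement and returns its simulated cost recovers the structure the chapter describes.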
Evolutionary Algorithms for Edge Server Placement in Vehicular Edge Computing
Source Title: IEEE Access, Quartile: Q1, DOI Link
Vehicular Edge Computing (VEC) is a critical enabler for intelligent transportation systems (ITS). It provides low-latency and energy-efficient services by offloading computation to the network edge. Effective edge server placement is essential for optimizing system performance, particularly in dynamic vehicular environments characterized by mobility and variability. The Edge Server Placement Problem (ESPP) addresses the challenge of minimizing latency and energy consumption while ensuring scalability and adaptability in real-world scenarios. This paper proposes a framework to solve the ESPP using real-world vehicular mobility traces to simulate realistic conditions. To achieve optimal server placement, we evaluate the effectiveness of several advanced evolutionary algorithms. These include the Genetic Algorithm (GA), Non-dominated Sorting Genetic Algorithm II (NSGA-II), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and Teaching-Learning-Based Optimization (TLBO). Each algorithm is analyzed for its ability to optimize multiple objectives under varying network conditions. Our results show that ACO performs the best, producing well-distributed Pareto-optimal solutions and balancing trade-offs effectively. GA and PSO exhibit faster convergence and better energy efficiency, making them suitable for scenarios requiring rapid decisions. The proposed framework is validated through extensive simulations and compared with state-of-the-art methods. It consistently outperforms them in reducing latency and energy consumption. This study provides actionable insights into algorithm selection and deployment strategies for VEC, addressing mobility, scalability, and resource optimization challenges. The findings contribute to the development of robust, scalable VEC infrastructures, enabling the efficient implementation of next-generation ITS applications.
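The multi-objective comparison above rests on Pareto dominance: a placement is kept only if no other placement is at least as good on both latency and energy. A small sketch of extracting such a front; the candidate scores are made-up illustration data, not the paper's results.

```python
# Sketch: extracting the Pareto front from candidate server placements,
# each scored by (latency, energy), where lower is better on both axes.
# Note: the weak-dominance check below assumes distinct points.

def pareto_front(points):
    """Return the points not dominated by any other point (minimisation)."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical (latency, energy) scores for five candidate placements.
placements = [(10, 50), (12, 40), (15, 35), (11, 45), (20, 60)]
front = pareto_front(placements)
```

Here (20, 60) is dominated by (10, 50) and is dropped; the remaining four points form the trade-off curve an algorithm like NSGA-II or ACO tries to approximate.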
MATSFT: User query-based multilingual abstractive text summarization for low resource Indian languages by fine-tuning mT5
Source Title: Alexandria Engineering Journal, Quartile: Q1, DOI Link
User query-based summarization is a challenging research area of natural language processing. However, existing approaches struggle to effectively manage the intricate long-distance semantic relationships between user queries and input documents. This paper introduces a user query-based multilingual abstractive text summarization approach for Indian low-resource languages (LRLs) by fine-tuning the multilingual pre-trained text-to-text (mT5) transformer model (MATSFT). MATSFT employs a co-attention mechanism within a shared encoder-decoder architecture alongside the mT5 model to transfer knowledge across multiple low-resource languages. The co-attention mechanism captures cross-lingual dependencies, which allows the model to understand the relationships and nuances between the different languages. Most multilingual summarization datasets focus on major global languages like English, French, and Spanish. To address the challenges in the LRLs, we created an Indian language (IL) dataset, comprising seven LRLs and English, by extracting data from the BBC news website. We evaluate the performance of MATSFT using the ROUGE metric and a language-agnostic target summary evaluation metric. Experimental results show that MATSFT outperforms the monolingual transformer model, pre-trained MTM, mT5 model, NLI model, IndicBART, mBART25, and mBART50 on the IL dataset. A statistical paired t-test indicates that MATSFT achieves a significant improvement, with a p-value of 0.05, compared to other models.
Swarm Intelligence Theory and Applications in Fog Computing, Beyond 5G Networks, and Information Security
Source Title: Swarm Intelligence Theory and Applications in Fog Computing, Beyond 5G Networks, and Information Security, DOI Link
This book offers a comprehensive overview of the theory and practical applications of swarm intelligence in fog computing, beyond 5G networks, and information security. The introduction section provides a background on swarm intelligence and its applications in real-world scenarios. The subsequent chapters focus on the practical applications of swarm intelligence in fog-edge computing, beyond 5G networks, and information security. The book explores various techniques such as computation offloading, task scheduling, resource allocation, spectrum management, radio resource management, wireless caching, joint resource optimization, energy management, path planning, UAV placement, and intelligent routing. Additionally, the book discusses the applications of swarm intelligence in optimizing parameters for information transmission, data encryption, and secure transmission in edge networks, multi-cloud systems, and 6G networks. The book is suitable for researchers, academics, and professionals interested in swarm intelligence and its applications in fog computing, beyond 5G networks, and information security. The book concludes by summarizing the key takeaways from each chapter and highlighting future research directions in these areas.
Autism Spectrum Disorder Prediction Using Particle Swarm Optimization and Convolutional Neural Networks
Source Title: Lecture notes in networks and systems, Quartile: Q4, DOI Link
The integration of PSO with CNN provides a promising approach for classifying ASD using sMRI data. ASD is a behavioral disorder that affects a person's social interaction and communication throughout their lifetime. The variability and intensity of ASD symptoms, in addition to the fact that they are shared with other mental disorders, make early diagnosis difficult. A key limitation of CNNs is selecting the best parameters. To overcome this, we use PSO as an optimization approach within the CNN to choose the most relevant parameters to train the network. In the proposed approach, we initialize a swarm of particles, where each particle represents a unique configuration of CNN hyperparameters, including the number of convolutional layers, learning rates, filter sizes, and batch sizes. To evaluate the swarm in PSO, we use a fitness function, such as accuracy, to measure each particle's performance. The proposed approach for ASD prediction outperformed the other optimizers with a high convergence rate.
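The particle encoding described above, a position vector decoded into a discrete hyperparameter configuration, can be sketched as follows. The fitness here is a synthetic surrogate, since a real evaluation would train a CNN; the value grids and the surrogate's peak are assumptions for illustration, not the chapter's actual search space.

```python
import random

# Sketch of encoding CNN hyperparameters as a particle position in [0,1)^4.
# A real fitness would train the network and return validation accuracy;
# a synthetic surrogate stands in so the example is self-contained.

LAYERS = [2, 3, 4, 5]          # hypothetical choice grids
LRS = [1e-4, 1e-3, 1e-2]
FILTERS = [3, 5, 7]
BATCHES = [16, 32, 64]

def decode(position):
    """Map a continuous position in [0,1)^4 to a discrete configuration."""
    return {
        "layers": LAYERS[int(position[0] * len(LAYERS)) % len(LAYERS)],
        "lr": LRS[int(position[1] * len(LRS)) % len(LRS)],
        "filter": FILTERS[int(position[2] * len(FILTERS)) % len(FILTERS)],
        "batch": BATCHES[int(position[3] * len(BATCHES)) % len(BATCHES)],
    }

def surrogate_fitness(cfg):
    """Stand-in for validation accuracy: peaks at 3 layers and lr = 1e-3."""
    return -abs(cfg["layers"] - 3) - abs(cfg["lr"] - 1e-3) * 100

rng = random.Random(0)
swarm = [[rng.random() for _ in range(4)] for _ in range(30)]
best = max(swarm, key=lambda p: surrogate_fitness(decode(p)))
best_cfg = decode(best)
```

In the full approach, the PSO velocity and position updates would move these continuous positions between fitness evaluations; the decode step is what keeps the search space discrete.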
A class of parameter choice strategies for the finite dimensional iterated weighted Tikhonov regularization scheme
Source Title: Numerical Algorithms, Quartile: Q1, DOI Link
Recently, Reddy and Pradeep (2023) proposed a class of parameter choice strategies to choose the regularization parameter for the finite dimensional weighted Tikhonov regularization scheme. In this article, we explore the iterated weighted Tikhonov scheme in the finite dimensional context, discuss its convergence analysis, and propose a class of parameter choice strategies to choose the regularization parameter. Furthermore, we establish an optimal rate of convergence of O(δ^(j(α+1)/(j(α+1)+1))) based on the proposed strategies. The scheme's performance is illustrated through numerical experiments with an efficient finite dimensional approximation of an operator.
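The iterated Tikhonov step can be illustrated numerically. A toy sketch of the iteration x_j = (A^T A + alpha I)^(-1) (A^T b + alpha x_{j-1}) for a diagonal operator, where each component updates in closed form; the operator entries, data, and alpha below are illustrative values, not the paper's test problem.

```python
# Numerical sketch of the iterated Tikhonov iteration for a diagonal
# operator A = diag(a). For exact data b = A x_true, each component's
# error contracts by the factor alpha / (a_i^2 + alpha) per iteration.

def iterated_tikhonov(a, b, alpha, iters):
    """a: diagonal entries of A; b: data; returns the j-th iterate."""
    x = [0.0] * len(a)
    for _ in range(iters):
        x = [(ai * bi + alpha * xi) / (ai * ai + alpha)
             for ai, bi, xi in zip(a, b, x)]
    return x

a = [1.0, 0.5, 0.1]                          # decaying singular values
x_true = [1.0, 2.0, 3.0]
b = [ai * xi for ai, xi in zip(a, x_true)]   # exact (noise-free) data
x10 = iterated_tikhonov(a, b, alpha=0.01, iters=10)
```

The small singular value a = 0.1 gives the slow contraction factor 0.01/0.02 = 0.5, so its component still carries a visible error after 10 iterations, which is exactly why the choice of alpha (the paper's subject) matters.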
Optimal Deployment of Multiple IoT Applications on the Fog Computing
Source Title: Swarm Intelligence, Quartile: Q2, DOI Link
Application Aware Computation Offloading in Vehicular Fog Computing (VFC)
Source Title: Data Science Journal, Quartile: Q2, DOI Link
-
Energy efficient resource management in data centers using imitation-based optimization
Source Title: Energy Informatics, Quartile: Q2, DOI Link
Cloud computing is the paradigm for delivering streaming content, office applications, software functions, computing power, storage, and more as services over the Internet. It offers elasticity and scalability to the service consumer and profit to the provider. The success of such a paradigm has resulted in a constant increase in the provider's infrastructure, most notably data centers. Data centers are energy-intensive installations that require power for the operation of the hardware and networking devices and their cooling. To serve cloud computing needs, the data center organizes work as virtual machines placed on physical servers. The policy chosen for the placement of virtual machines over servers is critical for managing the data center resources, and the variability of workloads needs to be considered. Inefficient placement leads to resource waste, excessive power consumption, and increased communication costs. In the present work, we address the virtual machine placement problem and propose an Imitation-Based Optimization (IBO) method inspired by human imitation for dynamic placement. To understand the implications of the proposed approach, we present a comparative analysis with state-of-the-art methods. The results show that, with the proposed IBO, energy consumption decreases by an average of 7%, 10%, 11%, 28%, 17%, and 35% compared to Hybrid meta-heuristic, Extended particle swarm optimization, particle swarm optimization, Genetic Algorithm, Integer Linear Programming, and Hybrid Best-Fit, respectively. With growing workloads, the proposed approach can achieve monthly cost savings of 201.4 euros and CO2 savings of 460.92 lbs/month.
A discrete cosine transform-based intelligent image steganography scheme using quantum substitution box
Source Title: Quantum Information Processing, Quartile: Q2, DOI Link
Dealing with enormous amounts of sensitive data every day requires protecting it during communication over insecure networks. The field of steganography has long attracted a significant amount of scientific attention for protecting and communicating sensitive data. This paper presents a secure steganography scheme for hiding a gray-scale secret image in a color cover image by replacing cover image bits in the frequency domain using a modified quantum substitution box (S-box). Concealing secret bits in randomly selected channels of the cover image using the modified quantum S-box ensures enhanced security. In the proposed scheme, we first perform a discrete cosine transform (DCT) on the cover image. Then, the quantum S-box is applied to locate the DCT coefficients whose least significant bits are substituted intelligently based on the relative ordering of DCT frequencies. This relative ordering is achieved by traversing the DCT coefficients in a zigzag manner, so that less important pixels are altered more, without any major loss in image quality. The security of the proposed method is examined through key space, key sensitivity, and robustness analyses. Additionally, simulation results demonstrate that our proposed steganography scheme achieves better visual image quality in terms of the MSE, PSNR, UQI, SSIM, and RMSE parameters compared to other state-of-the-art works.
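The zigzag ordering of DCT coefficients mentioned above is mechanical enough to sketch directly, together with a plain least-significant-bit substitution on a coefficient. This sketch covers only those two generic steps; the paper's quantum S-box position selection is not reproduced here.

```python
# Sketch of the zigzag traversal that orders an 8x8 block of DCT
# coefficients from low to high frequency, plus plain LSB substitution.

def zigzag_order(n=8):
    """Return (row, col) pairs of an n x n block in zigzag order."""
    # Coefficients on the same anti-diagonal share r + c; alternate the
    # direction of travel on even and odd anti-diagonals.
    return sorted(
        ((r, c) for r in range(n) for c in range(n)),
        key=lambda rc: (rc[0] + rc[1],
                        rc[1] if (rc[0] + rc[1]) % 2 == 0 else rc[0]))

def embed_lsb(value, bit):
    """Replace the least significant bit of a non-negative coefficient."""
    return (value & ~1) | bit
```

Walking `zigzag_order()` from the end toward the start visits high-frequency (less perceptually important) coefficients first, which is where LSB changes are least visible.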
Classification of Autism Spectrum Disorder Based on Brain Image Data Using Deep Neural Networks
Dr Dinesh Reddy Vemula, Ms Polavarapu Bhagya Lakshmi, Shantanu Ghosh, Sandeep Singh Sengar
Source Title: Smart Innovation, Systems and Technologies, Quartile: Q4, DOI Link
Autism spectrum disorder (ASD) is a neuro-developmental disorder that affects 1% of children and has a lifetime effect on communication and interaction. Early prediction can address this problem by decreasing the severity. This paper presents deep learning-based transfer learning applied to resting-state fMRI images for predicting autism disorder features. We worked with a CNN and different transfer learning models, namely Inception-V3, ResNet, DenseNet, VGG16, and MobileNet. We performed extensive experiments and provide a comparative study of the different transfer learning models for predicting the classification of ASD. Results demonstrate that VGG16 achieves a high classification accuracy of 95.8%, outperforming the rest of the transfer learning models proposed in this paper with an average improvement of 4.96% in terms of accuracy.
Latency Aware – Resource Planning in Edge Using Fuzzy Logic
Source Title: 2023 2nd International Conference on Ambient Intelligence in Health Care (ICAIHC), DOI Link
As a potential paradigm for enabling effective and low-latency computation at the network's edge, edge computing has recently come into the spotlight. In edge computing environments, resource allocation is essential for ensuring the best possible resource utilization while still satisfying application requirements. Traditional resource allocation algorithms, however, struggle to effectively capture the uncertainties and ambiguity associated with resource availability and application needs because of the dynamic and varied nature of edge environments. This research offers a fuzzy logic-based method for resource allocation planning in edge computing. Fuzzy logic offers a flexible and understandable framework for modeling and reasoning with imperfect and ambiguous data. The suggested method offers a more reliable and adaptable resource allocation system that can successfully address the uncertainties present in edge computing by utilizing fuzzy logic. The resource allocation process incorporates fuzzy membership functions to capture the vagueness of resource availability and application requirements. Fuzzy rules are defined to map the linguistic variables representing resource availability, application demands, and performance objectives to appropriate resource allocation decisions. The fuzzy inference engine then utilizes these rules to make intelligent decisions regarding resource allocation, considering the fuzzy inputs and the system's predefined objectives.
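The membership-function and rule machinery described above can be sketched in a few lines. The triangular breakpoints, the two rules, and the singleton outputs (80% and 20% shares) are illustrative assumptions, not the paper's actual rule base.

```python
# Sketch of triangular fuzzy membership plus a tiny two-rule inference
# step for edge resource allocation. All breakpoints and rules are
# hypothetical; a real system would have a fuller rule base.

def tri(x, a, b, c):
    """Triangular membership: rises from a to peak b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def allocate(cpu_free, demand):
    # Fuzzify the inputs (both on 0..100 percent scales).
    low_free = tri(cpu_free, 0, 0, 50)
    high_free = tri(cpu_free, 50, 100, 101)
    high_demand = tri(demand, 50, 100, 101)
    # Rule 1: high availability AND high demand -> allocate a large share.
    # Rule 2: low availability -> allocate a small share.
    w_large = min(high_free, high_demand)
    w_small = low_free
    if w_large + w_small == 0:
        return 50.0  # fallback when no rule fires
    # Weighted-average defuzzification over singleton outputs (80%, 20%).
    return (80.0 * w_large + 20.0 * w_small) / (w_large + w_small)
```

For example, a host with only 20% CPU free gets the small allocation regardless of demand, while a host with 90% free and high demand gets the large one; the smooth memberships are what let the output interpolate between those poles.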
Post-quantum distributed ledger technology: a systematic survey
Source Title: Scientific Reports, Quartile: Q1, DOI Link
Blockchain technology finds widespread application across various fields due to its key features such as immutability, reduced costs, decentralization, and transparency. The security of blockchain relies on elements like hashing, digital signatures, and cryptography. However, the emergence of quantum computers and supporting algorithms poses a threat to blockchain security. These quantum algorithms pose a significant threat to both public-key cryptography and hash functions, compelling the redesign of blockchain architectures. This paper investigates the status quo of post-quantum, quantum-safe, or quantum-resistant cryptosystems within the framework of blockchain. This study starts with a fundamental overview of both blockchain and quantum computing, examining their reciprocal influence and evolution. Subsequently, a comprehensive literature review is conducted focusing on Post-Quantum Distributed Ledger Technology (PQDLT). This research emphasizes the practical implementation of these protocols and algorithms, providing extensive comparisons of characteristics and performance. This work will help to foster further research at the intersection of post-quantum cryptography and blockchain systems and give prospective directions for future PQDLT researchers and developers.
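One family of quantum-resistant signatures surveyed in this space is hash-based. As a concrete illustration, here is the textbook Lamport one-time signature built from SHA-256; it shows why hash functions anchor post-quantum DLT designs, though production systems use more practical variants such as XMSS or SPHINCS+. This is an educational sketch, not a scheme proposed by the paper.

```python
import hashlib
import secrets

# Lamport one-time signature over SHA-256. The secret key is 256 pairs of
# random 32-byte values; the public key is their hashes. Signing reveals
# one preimage per bit of the message digest. Each key signs ONE message.

H = lambda data: hashlib.sha256(data).digest()

def keygen():
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
    pk = [[H(sk[i][b]) for b in range(2)] for i in range(256)]
    return sk, pk

def bits(digest):
    """The 256 bits of a 32-byte digest, most significant bit first."""
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, message):
    return [sk[i][b] for i, b in enumerate(bits(H(message)))]

def verify(pk, message, sig):
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits(H(message))))

sk, pk = keygen()
sig = sign(sk, b"block header")
```

Security rests only on the preimage resistance of the hash, which Grover's algorithm weakens but does not break, unlike Shor's algorithm against the elliptic-curve signatures used in today's blockchains.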
Sentiment Analysis for Real-Time Micro Blogs using Twitter Data
Dr Dinesh Reddy Vemula, G Divya, Reshma Banu, G F Ali Ahammed, Nuthanakanti Bhaskar, Murali Kanthi
Source Title: 2023 2nd International Conference for Innovation in Technology (INOCON), DOI Link
The basic purpose of sentiment analysis is to determine how someone feels when they comment or express their feelings or emotions. Emotions are divided into three categories: positive, neutral, and negative. This analysis is widely applied to social media, where everyone expresses their opinions by clicking the like, comment, or share buttons. Using the Random Forest, SVM, and Naïve Bayes algorithms, the Twitter tweets in this study were classified as positive or negative, with F1-scores of 0.224, 0.410, and 0.702, respectively, and accuracy values of 50%, 52%, and 73%.
Deep learning image-based automated application on classification of tomato leaf disease by pre-trained deep convolutional neural networks
Source Title: Mehran University Research Journal of Engineering and Technology, DOI Link
-
Music Generation Using Deep Learning
Source Title: Lecture Notes in Electrical Engineering, Quartile: Q4, DOI Link
We explore the usage of char-RNN, a special type of recurrent neural network (RNN), in generating music pieces and propose an approach to do so. First, we train a model using existing music data. The generative model mimics musical patterns in a way that humans enjoy; it does not replicate the training data but learns its patterns to create new music. We aim to generate good-quality music that is melodious to hear. With tuning, the generated music can be beneficial to composers, film makers, and artists in their tasks, and it can also be sold by companies or individuals. In our paper, we focus on the char ABC notation because it reliably represents music as a simple sequence of characters. We use a bidirectional long short-term memory (LSTM) network that takes music sequences as input, and we observe that the proposed model achieves higher accuracy compared with other models.
Image Description Generator using Residual Neural Network and Long Short-Term Memory
Source Title: Computer Science Journal of Moldova, Quartile: Q3, DOI Link
Human beings can easily describe the scenarios and objects in a picture through vision, whereas performing the same task with a computer is complicated. Generating captions for the objects of an image helps everyone understand the scenario of the image in a better way. Automatically describing the content of an image requires the combination of computer vision and natural language processing. This task has gained huge popularity in the field of technology, and a lot of research work is being carried out. Recent works have been successful in identifying objects in an image but face many challenges in accurately generating captions for a given image by understanding the scenario. To address this challenge, we propose a model to generate captions for an image. A Residual Neural Network (ResNet) is used to extract the features from an image. These features are converted into a vector of size 2048. The caption for the image is then generated with Long Short-Term Memory (LSTM). The proposed model is evaluated on the Flickr8K dataset and obtains an accuracy of 88.4%. The experimental results indicate that our model produces appropriate captions compared to state-of-the-art models.
SONG: A Multi-Objective Evolutionary Algorithm for Delay and Energy Aware Facility Location in Vehicular Fog Networks
Source Title: Sensors, Quartile: Q1, DOI Link
With the emergence of delay- and energy-critical vehicular applications, forwarding sense-actuate data from vehicles to the cloud became practically infeasible. Therefore, a new computational model called Vehicular Fog Computing (VFC) was proposed. It offloads the computation workload from passenger devices (PDs) to transportation infrastructures such as roadside units (RSUs) and base stations (BSs), called static fog nodes. It can also exploit the underutilized computation resources of nearby vehicles that can act as vehicular fog nodes (VFNs) and provide delay- and energy-aware computing services. However, the capacity planning and dimensioning of VFC, which come under a class of facility location problems (FLPs), is a challenging issue. The complexity arises from the spatio-temporal dynamics of vehicular traffic, varying resource demand from PD applications, and the mobility of VFNs. This paper proposes a multi-objective optimization model to investigate the facility location in VFC networks. The solutions to this model generate optimal VFC topologies pertaining to an optimized trade-off (Pareto front) between service delay and energy consumption. Thus, to solve this model, we propose a hybrid Evolutionary Multi-Objective (EMO) algorithm called the Swarm Optimized Non-dominated sorting Genetic algorithm (SONG). It combines the convergence and search efficiency of two popular EMO algorithms: the Non-dominated Sorting Genetic Algorithm (NSGA-II) and Speed-constrained Particle Swarm Optimization (SMPSO). First, we solve an example problem using the SONG algorithm to illustrate the delay-energy solution frontiers and plot the corresponding layout topology. Subsequently, we evaluate the evolutionary performance of the SONG algorithm on real-world vehicular traces against three quality indicators: Hyper-Volume (HV), Inverted Generational Distance (IGD), and CPU delay gap. The empirical results show that SONG exhibits improved solution quality over the NSGA-II and SMPSO algorithms and hence can be utilized as a potential tool by service providers for the planning and design of VFC networks.
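The Hyper-Volume indicator used above has a simple closed form in two objectives: it is the area dominated by the front and bounded by a reference point. A sketch with made-up (delay, energy) values, both minimised:

```python
# Sketch of the Hyper-Volume (HV) indicator for a 2-objective front,
# minimising both objectives against a reference point. Larger HV means
# a front that is closer to the ideal point and better spread.

def hypervolume_2d(front, ref):
    """Area dominated by `front` and bounded above by `ref`."""
    pts = sorted(front)            # ascending in the first objective
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:             # only non-dominated steps add area
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

# Hypothetical (delay, energy) front and reference point.
front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
hv = hypervolume_2d(front, ref=(5.0, 5.0))
```

Indicators like this let runs of SONG, NSGA-II, and SMPSO be compared with a single scalar even though each run returns a whole set of trade-off points.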
Enhanced resource provisioning and migrating virtual machines in heterogeneous cloud data center
Source Title: Journal of Ambient Intelligence and Humanized Computing, Quartile: Q1, DOI Link
Data centers have become an indispensable part of modern computing infrastructures. It becomes necessary to manage cloud resources efficiently to reduce the ever-increasing power demands of data centers. Dynamic consolidation of virtual machines (VMs) in a data center is an effective way to map workloads onto servers so that the least resources possible are required. It is an efficient way to improve resource utilization and reduce energy consumption in cloud data centers. VM consolidation involves host overload/underload detection, VM selection, and VM placement. If a server becomes overloaded, we need techniques to select the proper virtual machines to migrate. By considering the migration overhead and service level agreement (SLA) violations, we investigate design methodologies to reduce the energy consumption of the whole data center. We propose a novel approach that optimally detects when a host is overloaded using known CPU utilization and a given state configuration. We design a VM selection policy that considers various resource utilization factors to select the VMs. In addition, we propose an improved version of the JAYA approach for VM placement that minimizes energy consumption by optimally placing the migrated VMs in a data center. We analyze the performance in terms of energy consumption, performance degradation, and migrations. Using CloudSim, we ran simulations and observed that our approach has an average improvement of 24% compared to state-of-the-art approaches in terms of power consumption.
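The consolidation pipeline above makes two per-host decisions before placement: is the host overloaded, and which VM should leave. A schematic sketch of common baseline versions of those two steps; the static 80% threshold and the smallest-sufficient-VM selection are illustrative stand-ins, not the paper's optimal detection or its selection policy.

```python
# Sketch of two consolidation decisions: flagging an overloaded host by a
# CPU-utilisation threshold, and choosing a VM to migrate. Both policies
# here are simple baselines for illustration.

OVERLOAD_THRESHOLD = 0.8   # fraction of host CPU capacity (assumed)

def is_overloaded(host_capacity, vm_loads):
    return sum(vm_loads) / host_capacity > OVERLOAD_THRESHOLD

def select_vm(host_capacity, vm_loads):
    """Pick the smallest VM whose removal clears the overload, if any;
    otherwise fall back to the largest VM."""
    total = sum(vm_loads)
    limit = OVERLOAD_THRESHOLD * host_capacity
    sufficient = [v for v in vm_loads if total - v <= limit]
    return min(sufficient) if sufficient else max(vm_loads)
```

Choosing the smallest sufficient VM keeps migration overhead low, which is the same trade-off (migration cost versus SLA violation) the abstract describes.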
A secure IoT-based micro-payment protocol for wearable devices
Source Title: Peer-to-Peer Networking and Applications, Quartile: Q1, DOI Link
Wearable devices are part of the essential cost of goods sold (COGS) in the wheel of the Internet of Things (IoT), contributing to a potential impact in the finance and banking sectors. Lightweight cryptography mechanisms are needed for IoT devices because they are resource constrained. This paper introduces a novel approach to an IoT-based micro-payment protocol in a wearable device environment. This payment model uses an elliptic curve integrated encryption scheme (ECIES) to encrypt and decrypt the messages communicated between various entities. The proposed protocol allows the customer to buy goods using a wearable device and send confidential payment information to the mobile application. The application creates a secure session between the customer, banks, and merchant. Static security analysis and informal security methods indicate that the proposed protocol withstands the various security vulnerabilities involved in mobile payments. Formal verification using Burrows-Abadi-Needham (BAN) logic confirms the correctness of the protocol's security properties. Practical simulation and validation using the Scyther and Tamarin tools confirm the absence of security attacks on our proposed framework. Finally, a performance analysis based on cryptographic features and the computational overhead of related approaches shows that the proposed micro-payment protocol for wearable devices is secure and efficient.
A Dynamic Model and Algorithm for Real-Time Traffic Management
Dr Dinesh Reddy Vemula, Bhaskar N., Teja M N V M S., Sree N L., Harshitha L., Bhargav P V
Source Title: Smart Innovation, Systems and Technologies, Quartile: Q4, DOI Link
The work presents a summary of traffic congestion, which has been a persistent problem in many cities in India. The major problems that lead to traffic congestion in India are primarily associated with one or a combination of factors such as signal failures, inadequate law enforcement, and relatively poor traffic management practices. Traffic congestion should be treated as a grave issue, as it significantly reduces freight vehicle speeds, increases wait times at checkpoints and toll plazas, causes an uncountable loss of productive man-hours spent in unnecessary journey time, and leads to physical and mental fatigue. In addition, cars waiting in traffic jams contribute 40% more pollution than those moving normally on the roads, through increased fuel wastage and therefore excessive carbon dioxide emissions, which also result in frequent repairs and replacements. To avoid such unwarranted and multi-dimensional losses to mankind, we developed a technological solution, and our experiments on real-time data show that the proposed approach is able to reduce waiting time and travelling time for users.
Non-invertible Cancellable Template for Fingerprint Biometric
Dr Dinesh Reddy Vemula, Kavati I., Kumar G K., Babu E S., Cheruku R., Gopalachari M V
Source Title: Lecture Notes in Networks and Systems, Quartile: Q4, DOI Link
We propose an approach for the generation of secure and non-invertible fingerprint templates. First, we find the points around the reference point and select n points sorted in ascending order. We then construct an n-sided polygon from the n selected points. The polygon has all its vertices connected to the reference minutia, which divides the polygon into n triangles. For each triangle, we compute the area and semi-perimeter, the angle between the two lines joining the reference minutia to the two points, and the orientation of the points. Together, these features constitute the feature vector. This feature vector is projected onto a 4D space, and a binary string is generated from it. A Discrete Fourier Transform (DFT) is then applied to the generated binary string. To achieve non-invertibility, the obtained DFT matrix is multiplied by a user key. Finally, the proposed work is evaluated on the FVC databases, and various metrics are used to check its performance.
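The per-triangle features described above (area, semi-perimeter, and the angle at the reference minutia) are straightforward geometry. A sketch with hypothetical coordinates; the 4D projection, binarization, and DFT steps are not reproduced here.

```python
import math

# Sketch of per-triangle feature extraction: given the reference minutia
# and two neighbouring points, compute the triangle's area (cross-product
# form of the shoelace formula), its semi-perimeter, and the angle at the
# reference point. Coordinates are illustrative.

def triangle_features(ref, p1, p2):
    area = abs((p1[0] - ref[0]) * (p2[1] - ref[1])
               - (p2[0] - ref[0]) * (p1[1] - ref[1])) / 2.0
    d = math.dist
    semi_perimeter = (d(ref, p1) + d(p1, p2) + d(p2, ref)) / 2.0
    angle = abs(math.atan2(p2[1] - ref[1], p2[0] - ref[0])
                - math.atan2(p1[1] - ref[1], p1[0] - ref[0]))
    return area, semi_perimeter, angle

# A 3-4-5 right triangle with the right angle at the reference minutia.
feats = triangle_features((0, 0), (3, 0), (0, 4))
```

Because these features depend only on relative geometry around the reference minutia, they are invariant to translation of the fingerprint, which is one reason such constructions suit cancellable templates.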
Analyzing Student Performance in Programming Education Using Classification Techniques
Source Title: 2022 International Conference on Advancements in Smart, Secure and Intelligent Computing, DOI Link
Programming skills play a crucial role in any computer engineering student's life, both for applying concepts to real-world problems and for securing a job at a dream company. To achieve this, students should regularly assess, analyse, and improve their programming performance. Many students suffer mental stress and depression, and some even attempt suicide, when their scores and performance fall short of expectations. By analysing their programming skills, students can enhance their scores on a regular basis, introspect, and practise deliberately for improvement. This reduces the stress, anxiety, and depression students feel about securing good academic scores and building their careers. The analysis also helps professors improve teaching and learning outcomes and raise student performance in whichever field they work. We compared different machine learning algorithms on 200 classification instances; this comparison helped us analyse the statistics of students' performance.
CybSecMLC: A Comparative Analysis on Cyber Security Intrusion Detection Using Machine Learning Classifiers
Source Title: Communications in Computer and Information Science, Quartile: Q3, DOI Link
With the rapid growth of the Internet, smartphones, and wireless-communication-based applications, new threats, vulnerabilities, and attacks have also increased. Attackers routinely use communication channels to violate security features, and the fast growth of security attacks and malicious activity causes great damage to society. Network administrators and intrusion detection systems (IDS) are often unable to identify impending network attacks. Nevertheless, many security mechanisms and tools have evolved to detect the vulnerabilities and risks involved in wireless communication, and machine learning classifiers (MLCs) are also a practical approach to detecting intrusion attacks: they separate network traffic into two classes, abnormal and normal. Many existing systems focus on in-depth analysis of specific attacks in network intrusion detection. This paper presents a comprehensive, detailed inspection of existing MLCs for identifying intrusions in wireless network traffic. Notably, we analyse the MLCs along dimensions such as feature selection and ensemble techniques. Finally, we evaluate the MLCs on the NSL-KDD dataset and summarise their effectiveness through a detailed experimental evaluation.
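The ensemble dimension mentioned above can be sketched with a majority vote: traffic is flagged as abnormal only when most classifiers agree. The classifier outputs below are dummies of our own invention; in the paper they would come from trained MLCs on NSL-KDD.

```python
# Minimal majority-vote ensemble over per-classifier label lists.
from collections import Counter

def majority_vote(predictions):
    """predictions: one label list per classifier; returns voted labels."""
    voted = []
    for sample_labels in zip(*predictions):           # labels per sample
        voted.append(Counter(sample_labels).most_common(1)[0][0])
    return voted

# Three hypothetical classifiers labelling five traffic records.
clf_a = ["normal", "abnormal", "normal", "abnormal", "normal"]
clf_b = ["normal", "abnormal", "abnormal", "abnormal", "normal"]
clf_c = ["abnormal", "abnormal", "normal", "normal", "normal"]
ensemble = majority_vote([clf_a, clf_b, clf_c])
# ensemble -> ["normal", "abnormal", "normal", "abnormal", "normal"]
```

Voting suppresses the individual classifiers' disagreements, which is why ensembles often reduce false alarms relative to any single model.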
Extended Graph Convolutional Networks for 3D Object Classification in Point Clouds
Source Title: International Journal of Advanced Computer Science and Applications, Quartile: Q3, DOI Link
Point clouds are a popular way to represent 3D data. Due to their sparsity and irregularity, learning features directly from point clouds is complex, which makes methods that consume points directly highly important. This paper focuses on interpreting point cloud inputs using graph convolutional networks (GCNs). We further extend the model to detect objects found in autonomous-driving datasets as well as miscellaneous objects in non-autonomous-driving datasets. We propose reducing the runtime of a GCN by allowing it to stochastically sample fewer input points from the point cloud and infer the larger structure from them, while preserving accuracy. Our proposed model offers improved accuracy while drastically decreasing graph-building and prediction runtime.
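The stochastic sampling idea above can be sketched in a few lines: instead of building a graph over every point, sample a subset and connect each sampled point to its k nearest sampled neighbours, so graph construction scales with the sample rather than the full cloud. Function names here are our own illustration, not the paper's implementation.

```python
# Subsample a point cloud, then build k-NN edges over the sample only.
import random

def sample_points(cloud, keep):
    """Randomly keep a fraction `keep` of the input points."""
    k = max(1, int(len(cloud) * keep))
    return random.sample(cloud, k)

def knn_edges(points, k=2):
    """Connect each sampled point to its k nearest sampled neighbours."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    edges = []
    for i, p in enumerate(points):
        nbrs = sorted((j for j in range(len(points)) if j != i),
                      key=lambda j: dist2(p, points[j]))[:k]
        edges.extend((i, j) for j in nbrs)
    return edges

cloud = [(random.random(), random.random(), random.random())
         for _ in range(1000)]
sampled = sample_points(cloud, keep=0.1)   # graph built on ~100 points
edges = knn_edges(sampled)
```

With 10% sampling, the quadratic neighbour search runs over 100 points instead of 1000, a roughly 100x reduction in graph-building work.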
Techniques for Solving Shortest Vector Problem
Source Title: International Journal of Advanced Computer Science and Applications, Quartile: Q3, DOI Link
Lattice-based cryptosystems are regarded as secure, and are believed to remain secure even against quantum computers. Lattice-based cryptography relies on problems such as the Shortest Vector Problem (SVP), an instance of the lattice problems used as the basis for secure cryptographic schemes. For more than 30 years the SVP has been at the heart of a thriving research field, and finding a new efficient algorithm has remained out of reach. The problem has a great many applications in optimization, communication theory, cryptography, and beyond. This paper introduces the SVP and related problems such as the Closest Vector Problem, and presents average-case and worst-case hardness results for the SVP. Further, this work explores efficient algorithms for solving the SVP and reports their efficiency. More precisely, the paper presents four algorithms: the Lenstra-Lenstra-Lovasz (LLL) algorithm, the Block Korkine-Zolotarev (BKZ) algorithm, a Metropolis algorithm, and a convex relaxation of SVP. Experimental results on lattices of varying size show that the Metropolis algorithm works better than the other algorithms.
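The flavour of lattice reduction behind LLL can be shown in rank 2, where Lagrange (Gauss) reduction provably finds a shortest lattice vector by repeatedly subtracting integer multiples of the shorter basis vector from the longer one. This is our own sketch of the classical 2D algorithm, not code from the paper.

```python
# Lagrange (Gauss) reduction of a rank-2 lattice basis; the returned
# first vector is a shortest nonzero vector of the lattice. LLL
# generalises exactly this size-reduce-and-swap loop to higher rank.
def lagrange_reduce(u, v):
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]
    while True:
        if dot(u, u) > dot(v, v):          # keep u the shorter vector
            u, v = v, u
        m = round(dot(u, v) / dot(u, u))   # integer projection coefficient
        if m == 0:                          # basis is reduced
            return u, v
        v = (v[0] - m * u[0], v[1] - m * u[1])  # size-reduce v against u

# A skewed basis of the integer lattice Z^2 reduces to the unit vectors.
u, v = lagrange_reduce((1, 0), (100, 1))
```

Each iteration strictly shrinks v, so the loop terminates; in rank 2 the result is optimal, whereas LLL trades that optimality for polynomial time in higher dimensions.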