A Comparative Analysis of Different Machine Learning Techniques to Predict Stock Price
Rout S., Alaparthi K., Ahamad Sharief S.A., Chakka S., Devaraj T., Deepak Raj B., Senapati D.
Conference paper, 3rd International Conference on Advancements in Smart, Secure and Intelligent Computing, ASSIC 2025, 2025, DOI Link
Stock price prediction plays an essential role in the investment landscape, allowing investors to estimate the future value of a company's shares. In recent years, more individual and retail investors have entered the stock market, so accurately forecasting stock prices has become both important and challenging. This paper explores several machine learning algorithms: Linear Regression (LR), Decision Tree Regressor (DTR), Random Forest Regressor (RFR) and Support Vector Regressor (SVR). The resulting models were evaluated with performance metrics such as Mean Absolute Error (MAE), Root Mean Squared Error (RMSE) and the coefficient of determination ($R^2$). After implementing and evaluating these models, we compared their performance using ten years of data obtained from Yahoo Finance. Our findings revealed that linear regression consistently outperformed the other algorithms, making it the most effective choice for stock price prediction. This insight underscores the importance of leveraging machine learning methods in financial forecasting and supports the growing need for reliable investment strategies.
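As a concrete illustration of the pipeline described in this abstract, the sketch below compares the four regressors on lagged daily closing prices and reports MAE, RMSE and $R^2$. It assumes a pandas DataFrame with a 'Close' column (e.g., ten years of prices downloaded from Yahoo Finance); the lag-feature setup and hyperparameters are illustrative assumptions, not the authors' exact experimental protocol.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.svm import SVR
    from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

    def compare_models(df: pd.DataFrame, n_lags: int = 5) -> pd.DataFrame:
        # Lagged closing prices as features; the next day's close is the target.
        data = pd.DataFrame({f"lag_{i}": df["Close"].shift(i) for i in range(1, n_lags + 1)})
        data["target"] = df["Close"]
        data = data.dropna()
        X, y = data.drop(columns="target").values, data["target"].values

        # Chronological split: train on the first 80%, test on the last 20%.
        split = int(0.8 * len(X))
        X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

        models = {
            "LR": LinearRegression(),
            "DTR": DecisionTreeRegressor(random_state=0),
            "RFR": RandomForestRegressor(n_estimators=100, random_state=0),
            "SVR": SVR(kernel="rbf", C=100.0),
        }
        rows = []
        for name, model in models.items():
            model.fit(X_tr, y_tr)
            pred = model.predict(X_te)
            rows.append({"model": name,
                         "MAE": mean_absolute_error(y_te, pred),
                         "RMSE": np.sqrt(mean_squared_error(y_te, pred)),
                         "R2": r2_score(y_te, pred)})
        return pd.DataFrame(rows).set_index("model")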
SHIELD: Security-Aware Scheduling for Real-Time DAGs on Heterogeneous Systems
Senapati D., Bhagat P., Karfa C., Sarkar A.
Article, ACM Transactions on Cyber-Physical Systems, 2025, DOI Link
Many control applications in real-time cyber-physical systems are represented as Directed Acyclic Graphs (DAGs) due to complex interactions among their functional components, and are executed on distributed heterogeneous platforms. Data communication between dependent task nodes running on different processing elements is often realized through message transmission over a public network and is hence susceptible to multiple security threats such as snooping, alteration, and spoofing. Several alternative security protocols, with varying security strengths and implementation overheads, are available for incorporating confidentiality, integrity, and authentication on the transmitted messages. While assigning security protocols may increase message sizes and their transmission overheads only marginally, significant computation overheads must be incurred for securing a message at its source task node and for unlocking/extracting it at the destination. The obtained security strength and the associated computation overhead vary with the set of protocols chosen for a given message from the available pool. Given lower bounds on the security demands of an application's messages, selecting appropriate protocols for each message such that the system's overall security is maximized while satisfying resource, task-precedence and deadline constraints is a challenging and computationally hard problem. In this article, we propose an efficient heuristic strategy called SHIELD for security-aware real-time scheduling of DAG-structured applications to be executed on distributed heterogeneous systems. The efficacy of the proposed scheduler is exhibited through extensive simulation-based experiments using two DAG-structured application benchmarks. Our performance evaluation results demonstrate that SHIELD significantly outperforms two greedy baseline strategies: SHIELDb in terms of solution generation times (i.e., runtimes) and SHIELDf in terms of achieved security utility. Additionally, a case study on the Traction Control application in automotive systems has been included to exhibit the applicability of SHIELD in real-world settings.
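To make the protocol-selection trade-off concrete, here is a toy Python sketch: every message must meet a minimum security strength, stronger protocols cost more lock/unlock computation time, and the schedule has a limited slack budget. The greedy upgrade loop only illustrates the problem structure; it is not the SHIELD heuristic, and all names and numbers are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Protocol:
        name: str
        strength: float   # security strength delivered
        overhead: float   # lock/unlock computation time (ms)

    @dataclass
    class Message:
        name: str
        min_strength: float   # lower bound on the security demand
        pool: list            # alternative protocols available for this message

    def select_protocols(messages, slack_budget):
        """Pick the cheapest feasible protocol per message, then spend the
        remaining slack on the upgrades with the best strength-per-overhead gain."""
        choice, used = {}, 0.0
        for m in messages:
            feasible = sorted((p for p in m.pool if p.strength >= m.min_strength),
                              key=lambda p: p.overhead)
            if not feasible:
                raise ValueError(f"{m.name}: security demand cannot be met")
            choice[m.name] = feasible[0]
            used += feasible[0].overhead
        while True:
            best = None
            for m in messages:
                cur = choice[m.name]
                for p in m.pool:
                    gain, extra = p.strength - cur.strength, p.overhead - cur.overhead
                    if gain > 0 and used + extra <= slack_budget:
                        ratio = gain / max(extra, 1e-9)
                        if best is None or ratio > best[0]:
                            best = (ratio, m, p, extra)
            if best is None:
                return choice
            _, m, p, extra = best
            choice[m.name], used = p, used + extra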
ERS: Energy-efficient Real-time DAG Scheduling on Uniform Multiprocessor Embedded Systems
Senapati D., Maurya D., Sarkar A., Karfa C.
Conference paper, Proceedings of the IEEE International Conference on VLSI Design, 2024, DOI Link
Many embedded systems nowadays, from mobiles and laptops to satellites and robotic systems, are driven by limited energy sources such as batteries. Hence, these devices are judged not only by their real-time and functional performance but also by how efficiently they manage energy. Energy minimization is one of the primary design requirements for distributed embedded systems, and the growing importance of complex applications in distributed systems introduces significant challenges in reducing energy consumption. This work addresses the problem of scheduling a real-time application abstracted as a directed acyclic graph (DAG) on a Dynamic Voltage and Frequency Scaling (DVFS) enabled uniform multiprocessor system by proposing an efficient heuristic strategy called the Energy-efficient Real-time DAG Scheduler (ERS). ERS selects an appropriate processing frequency for each task-to-processor pair in the system such that the overall energy saving is maximized while resource, task-precedence and deadline constraints are satisfied. We have evaluated the performance of the proposed framework using real-world benchmark applications. The obtained results reveal that ERS delivers better performance in terms of energy savings than state-of-the-art works such as GSPM, SSPM, and PSPM.
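The per-task frequency-selection step can be sketched under the conventional DVFS model in which dynamic power grows roughly with the cube of frequency and execution time stretches as frequency drops, so running slower saves energy as long as the deadline still holds. The model and the slack check below are illustrative assumptions, not the ERS algorithm itself.

    def pick_frequency(wcet_at_fmax, fmax, freq_levels, start_time, latest_finish):
        """Return the lowest available frequency, with its execution time and a
        normalized energy estimate, that still meets the task's latest finish time."""
        for f in sorted(freq_levels):
            exec_time = wcet_at_fmax * fmax / f            # time stretches as f drops
            if start_time + exec_time <= latest_finish:    # precedence/deadline slack
                power = (f / fmax) ** 3                    # normalized dynamic power
                return f, exec_time, power * exec_time     # lowest feasible f = least energy
        return None                                        # even fmax misses the deadline

    # Example: a 10 ms task (at 1.0 GHz) with 18 ms of slack can run at 0.6 GHz.
    print(pick_frequency(10.0, 1.0, [0.4, 0.6, 0.8, 1.0], start_time=0.0, latest_finish=18.0))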
Energy-Aware Real-Time Scheduling of Multiple Periodic DAGs on Heterogeneous Systems
Article, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2023, DOI Link
Many of today's complex cyber-physical systems (CPSs) are represented as a set of independent co-executing real-time control applications, where each such application is represented as a precedence-constrained task graph. The applications execute in infinite loops, periodically acquiring data from the environment through sensors at a particular frequency, processing it, and then producing outputs via actuators. These CPSs often execute under stringent resource constraints (such as limited energy budgets) in distributed networked environments and are often heterogeneous in order to satisfactorily meet stipulated performance specifications. This work presents a list-based energy-aware scheduler called the DVFS-enabled Periodic Multi-DAG Real-time Scheduler for heterogeneous systems (DPMRS) for a set of real-time control applications co-executing in a heterogeneous distributed environment. DPMRS introduces a novel approach for the integrated behavioral representation of a set of co-executing real-time DAG-structured applications. Each task in this integrated representation is then scheduled by determining its relative execution start time on a particular processor, which operates at an appropriately chosen frequency while the task runs on it. The overall objective of DPMRS is to minimize the aggregate energy consumed in the execution of all tasks. The efficacy of the proposed scheduler has been exhibited through extensive simulation experiments using benchmark task graphs from different application domains. Additionally, a case study on automotive control systems has been included to show the applicability of the proposed work in real-world settings.
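One common way to build an integrated representation of several periodic DAG applications, hedged here as a generic construction rather than DPMRS's exact one, is to unroll every application over the hyperperiod: an application with period T contributes H/T job instances, each carrying its own release offset and absolute deadline.

    from math import lcm

    def unroll_over_hyperperiod(apps):
        """apps: list of dicts {'name', 'period', 'nodes', 'edges'}, where nodes is a
        list of task ids and edges a list of (src, dst) pairs within one application.
        Returns the hyperperiod plus instantiated jobs and precedence edges."""
        H = lcm(*(a["period"] for a in apps))
        jobs, edges = [], []
        for a in apps:
            for k in range(H // a["period"]):
                release, tag = k * a["period"], f"{a['name']}#{k}"
                for n in a["nodes"]:
                    jobs.append({"job": f"{tag}:{n}",
                                 "release": release,
                                 "deadline": release + a["period"]})
                for (u, v) in a["edges"]:
                    edges.append((f"{tag}:{u}", f"{tag}:{v}"))
        return H, jobs, edges

    # Two toy applications with periods 10 ms and 20 ms -> hyperperiod 20 ms.
    H, jobs, edges = unroll_over_hyperperiod([
        {"name": "A", "period": 10, "nodes": ["s", "t"], "edges": [("s", "t")]},
        {"name": "B", "period": 20, "nodes": ["u"], "edges": []},
    ])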
TMDS: Temperature-aware Makespan Minimizing DAG Scheduler for Heterogeneous Distributed Systems
Senapati D., Rajesh K., Karfa C., Sarkar A.
Article, ACM Transactions on Design Automation of Electronic Systems, 2023, DOI Link
To meet application-specific performance demands, recent embedded platforms often involve intricate micro-architectural designs and very small feature sizes, leading to complex chips with multi-million gates. Such ultra-high gate densities often make these chips susceptible to sharp surges in core temperature. Temperature surges above a specific threshold may throttle processor performance, increase cooling costs, and reduce processor life expectancy. This work proposes a generic temperature management strategy that can easily be employed to adapt existing state-of-the-art task graph schedulers so that the schedules they generate never violate stipulated thermal bounds. The overall temperature-aware task graph scheduling problem has first been formally modeled as a constraint optimization formulation whose solution is shown to be prohibitively expensive in terms of computational overhead. Based on insights obtained through the formal model, a new fast and efficient heuristic algorithm called TMDS has been designed. Experimental evaluation over diverse test case scenarios shows that TMDS is able to deliver shorter schedule lengths compared to the temperature-aware versions of four prominent makespan-minimizing algorithms, namely, HEFT, PEFT, PPTS, and PSLS. Additionally, a case study with an adaptive cruise controller in automotive systems has been included to exhibit the applicability of TMDS in real-world settings.
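The kind of thermal-bound check such a scheduler must perform before committing a task to a core can be illustrated with a lumped RC heating model; this textbook approximation is only an assumption for the sketch, not a description of TMDS's internals.

    import math

    def temp_after_execution(t_start, t_steady, exec_time, rc):
        """Core temperature after running a task for exec_time, starting from
        t_start and heating toward the steady-state temperature t_steady."""
        return t_steady + (t_start - t_steady) * math.exp(-exec_time / rc)

    def placement_is_thermally_safe(t_current, t_steady, exec_time, rc, t_threshold):
        # With monotone exponential heating, the peak occurs at the end of the
        # task, so checking the final temperature suffices.
        return temp_after_execution(t_current, t_steady, exec_time, rc) <= t_threshold

    # Example: a core at 55 °C heated toward a 90 °C steady state stays under 80 °C.
    print(placement_is_thermally_safe(55.0, 90.0, exec_time=4.0, rc=10.0, t_threshold=80.0))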
Performance-Effective DAG Scheduling for Heterogeneous Distributed Systems
Conference paper, ACM International Conference Proceeding Series, 2022, DOI Link
The problem of scheduling Directed Acyclic Graphs (DAGs) in order to minimize schedule length (also known as makespan) is known to be a challenging as well as computationally hard problem. Therefore, researchers have endeavored towards the design of various heuristic solution generation techniques for both homogeneous and heterogeneous computing platforms. Traditionally, list scheduling heuristics are known to generate efficient schedules within reasonable time complexities. In this work, we first focus on a makespan-minimizing DAG scheduler for heterogeneous distributed systems. Second, we formulate an extension of this problem by considering a real-time application on a heterogeneous platform. We propose an algorithm named PRESTO that aims to minimize a generic penalty function while satisfying resource, precedence and timing constraints. This generic penalty function can be suitably tuned to various optimization problems in different application domains.
PRESTO: A Penalty-Aware Real-Time Scheduler for Task Graphs on Heterogeneous Platforms
Article, IEEE Transactions on Computers, 2022, DOI Link
Scheduling real-time applications modelled as directed acyclic graphs on heterogeneous distributed platforms is known to be a challenging as well as computationally demanding problem. This article deals with the design of an efficient scheduler for executing a real-time task graph on a distributed platform consisting of a set of fully connected heterogeneous processors. The objective of the scheduling strategy is to minimize a generic penalty function which can be readily adapted for deployment in various application domains such as real-time embedded systems, cloud/fog computing, industrial automation and the IoT, smart grids, and automotive and avionic systems. We have first encoded the problem as a constraint satisfaction problem and then developed an efficient list-based heuristic scheduling algorithm called the Penalty-aware REal-time Scheduler for Task graphs on heterOgeneous platforms (PRESTO), to generate a minimal-penalty deadline-meeting static schedule. The generic efficacy of PRESTO is exhibited through extensive simulation-based experiments using standard benchmark task graphs. The practical applicability of PRESTO in diverse scenarios has further been exhibited by using the scheme in two different real-world case studies, the first of which relates to automotive embedded systems, while the second is in the domain of fog computing.
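The central idea of a generic penalty function can be made concrete with a small plug-in interface: the same scheduler skeleton minimizes whatever weighted combination of domain-specific terms it is handed. The weighted-sum form and the terms below are illustrative choices; the article's exact penalty definition is not reproduced here.

    from typing import Callable, Dict

    def make_penalty(terms: Dict[str, Callable[[dict], float]],
                     weights: Dict[str, float]) -> Callable[[dict], float]:
        """Compose named penalty terms (each maps a candidate scheduling decision
        to a cost) into a single weighted objective the scheduler can minimize."""
        def penalty(decision: dict) -> float:
            return sum(weights[name] * fn(decision) for name, fn in terms.items())
        return penalty

    # Hypothetical instantiations for two of the domains mentioned above.
    embedded_penalty = make_penalty(
        terms={"energy": lambda d: d["energy"],                       # joules consumed
               "tardiness": lambda d: max(0.0, d["finish"] - d["deadline"])},
        weights={"energy": 1.0, "tardiness": 100.0})
    fog_penalty = make_penalty(
        terms={"monetary_cost": lambda d: d["cost"],                  # node rental cost
               "latency": lambda d: d["finish"]},
        weights={"monetary_cost": 5.0, "latency": 1.0})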
HMDS: A Makespan Minimizing DAG Scheduler for Heterogeneous Distributed Systems
Conference paper, ACM Transactions on Embedded Computing Systems, 2021, DOI Link
The problem of scheduling Directed Acyclic Graphs in order to minimize makespan (schedule length) is known to be a challenging and computationally hard problem. Therefore, researchers have endeavored towards the design of various heuristic solution generation techniques for both homogeneous and heterogeneous computing platforms. This work first presents HMDS-Bl, a list-based heuristic makespan minimization algorithm for task graphs on fully connected heterogeneous platforms. Subsequently, HMDS-Bl has been enhanced with a low-overhead depth-first branch-and-bound search, resulting in a new algorithm called HMDS. HMDS is equipped with a set of novel tunable pruning mechanisms, which allow the designer to strike a judicious balance between performance (makespan) and solution generation time, depending on the specific scenario at hand. Experimental analyses using randomly generated DAGs as well as benchmark task graphs have shown that HMDS is able to comprehensively outperform state-of-the-art algorithms such as HEFT, PEFT, and PPTS in terms of achieved makespans while incurring a bounded additional computation time overhead.
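For context on the list-scheduling family that HMDS-Bl belongs to and is compared against, the classic HEFT-style priority assigns each task an upward rank, its average cost plus the largest (communication + rank) over its successors, and schedules tasks in decreasing rank on the processor giving the earliest finish time. The sketch below shows only this baseline rank computation; HMDS's own priority rule and branch-and-bound refinement are not reproduced.

    def upward_ranks(succ, avg_cost, avg_comm):
        """succ: task -> list of successors; avg_cost: task -> mean WCET across
        processors; avg_comm: (task, successor) -> mean edge transfer time."""
        ranks = {}
        def rank(t):
            if t not in ranks:
                tail = max((avg_comm[(t, s)] + rank(s) for s in succ[t]), default=0.0)
                ranks[t] = avg_cost[t] + tail
            return ranks[t]
        for t in succ:
            rank(t)
        return ranks

    # Toy DAG: a -> b, a -> c, b -> d, c -> d.
    succ = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
    avg_cost = {"a": 4, "b": 3, "c": 5, "d": 2}
    avg_comm = {("a", "b"): 1, ("a", "c"): 2, ("b", "d"): 1, ("c", "d"): 1}
    ranks = upward_ranks(succ, avg_cost, avg_comm)
    priority_order = sorted(ranks, key=ranks.get, reverse=True)   # ['a', 'c', 'b', 'd']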
SLAQA: Quality-level Aware Scheduling of Task Graphs on Heterogeneous Distributed Systems
Roy S.K., Devaraj R., Sarkar A., Senapati D.
Article, ACM Transactions on Embedded Computing Systems, 2021, DOI Link
Continuous demands for higher performance and reliability within stringent resource budgets are driving a shift from homogeneous to heterogeneous processing platforms for the implementation of today's cyber-physical systems (CPSs). These CPSs are typically represented as Directed-acyclic Task Graphs (DTGs) due to the complex interactions between their functional components, which are often distributed in nature. In this article, we consider the problem of scheduling a real-time application modelled as a single DTG, where tasks may have multiple implementations designated as quality-levels, with higher quality-levels producing more accurate results and contributing to higher rewards/Quality-of-Service for the system. First, we introduce an optimal solution using Integer Linear Programming (ILP) for a DTG with multiple quality-levels, to be executed on a heterogeneous distributed platform. However, this ILP-based optimal solution exhibits high computational complexity and does not scale to moderately large problem sizes. Hence, we propose two low-overhead heuristic algorithms called Global Slack Aware Quality-level Allocator (G-SLAQA) and Total Slack Aware Quality-level Allocator (T-SLAQA), which produce satisfactorily efficient solutions within reasonable time. G-SLAQA, the baseline heuristic, is greedier and faster than its counterpart T-SLAQA, whose performance is at least as efficient as G-SLAQA's. The efficiency of all the proposed schemes has been extensively evaluated through simulation-based experiments using benchmark and randomly generated DTGs. Through a case study of a real-world automotive traction controller, we generate schedules using our proposed schemes to demonstrate their practical applicability.
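A deliberately simplified version of the quality-level selection ILP can be written with PuLP: each task picks exactly one quality level (binary variables), rewards add up, and a crude serialized-execution bound stands in for the full precedence and processor-assignment constraints of the article's formulation. Treat it as a sketch of the trade-off, not the paper's model.

    import pulp

    def select_quality_levels(tasks, deadline):
        """tasks: {task: [(exec_time, reward), ...]}, one tuple per quality level."""
        prob = pulp.LpProblem("quality_level_selection", pulp.LpMaximize)
        x = {(t, l): pulp.LpVariable(f"x_{t}_{l}", cat="Binary")
             for t, levels in tasks.items() for l in range(len(levels))}

        # Objective: total reward of the chosen quality levels.
        prob += pulp.lpSum(tasks[t][l][1] * x[(t, l)] for (t, l) in x)
        # Each task runs at exactly one quality level.
        for t, levels in tasks.items():
            prob += pulp.lpSum(x[(t, l)] for l in range(len(levels))) == 1
        # Simplified timing constraint: serialized execution time within the deadline.
        prob += pulp.lpSum(tasks[t][l][0] * x[(t, l)] for (t, l) in x) <= deadline

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        return {t: next(l for l in range(len(levels)) if x[(t, l)].value() > 0.5)
                for t, levels in tasks.items()}

    # Two tasks, each with a cheap low-quality and an expensive high-quality mode.
    print(select_quality_levels({"t1": [(2, 1), (5, 4)], "t2": [(3, 2), (7, 6)]}, deadline=10))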
Handling unlabeled data in gene regulatory network
Conference paper, Advances in Intelligent Systems and Computing, 2013, DOI Link
A gene is treated as a unit of heredity in a living organism; it resides on a stretch of DNA. A Gene Regulatory Network (GRN) is a network of transcription dependencies among the genes of an organism. A GRN can be inferred from microarray data by either an unsupervised or a supervised approach, and it has been observed that supervised methods yield more accurate results than unsupervised ones. Supervised methods require both positive and negative data for training, but the biological literature provides only positive examples, since biologists are unable to state that two genes do not interact. A commonly adopted solution is to treat a random subset of the unlabeled examples as negative, but random selection may degrade the performance of the classifier. It is usually expected that, when labeled data are limited, learning performance can be improved by exploiting unlabeled data. In this paper we propose a novel approach to filter reliable and strong negative data out of the unlabeled data so that a supervised model can be trained properly. We tested this method for predicting regulation in E. coli and observed better results than other unsupervised and supervised methods. The method is based on the principle of dividing the whole domain into gene clusters and then finding the most informative cluster for further classification.
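The reliable-negative selection idea can be sketched generically: cluster the unlabeled gene-pair feature vectors, treat the clusters closest to known positive interactions as unsafe, and keep members of the remaining clusters as strong negatives for supervised training. The clustering method, distance rule and classifier below are illustrative choices, not the paper's exact procedure.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def select_reliable_negatives(X_pos, X_unlabeled, n_clusters=10, keep_frac=0.5):
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_unlabeled)
        # Score each cluster by the distance from its centroid to the nearest positive.
        dists = np.array([np.min(np.linalg.norm(X_pos - c, axis=1))
                          for c in km.cluster_centers_])
        # Keep members of the clusters farthest from all positives as reliable negatives.
        far = np.argsort(dists)[::-1][:max(1, int(keep_frac * n_clusters))]
        return X_unlabeled[np.isin(km.labels_, far)]

    def train_pu_classifier(X_pos, X_unlabeled):
        X_neg = select_reliable_negatives(X_pos, X_unlabeled)
        X = np.vstack([X_pos, X_neg])
        y = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_neg))])
        return SVC(probability=True).fit(X, y)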
Zone centroid distance and standard deviation based feature matrix for Odia handwritten character recognition
Conference paper, Advances in Intelligent Systems and Computing, 2013, DOI Link
Optical character recognition (OCR) is a type of document image analysis in which a scanned digital image containing either machine-printed or handwritten script is fed into an OCR engine and translated into an editable, machine-readable digital text format. In this paper we design a novel and robust two-stage recognition system for Odia handwritten characters and prepare a feature matrix based on standard deviation and zone centroid average distance for better accuracy while training and testing the neural network. The OHCR system is based on a two-stage feed-forward back-propagation neural network (BPNN) that performs feature extraction and recognition. The Odia characters are classified into four groups according to the similarity of their shapes and features. The system uses ANNs in two stages with different parameters: the first stage classifies the characters into similar groups, and in the second stage individual characters are recognized.
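Reading the title literally, the zone-based features can be sketched as follows: the binarized character image is divided into a grid of zones, and each zone contributes the average distance of its foreground pixels from the zone centroid together with the standard deviation of those distances. The precise definitions are an interpretation of the title and abstract, so treat the details as illustrative.

    import numpy as np

    def zone_features(binary_img: np.ndarray, grid=(4, 4)) -> np.ndarray:
        """binary_img: 2-D array with 1 for foreground (ink) and 0 for background.
        Returns a flat feature vector of length 2 * grid[0] * grid[1]."""
        h, w = binary_img.shape
        zh, zw = h // grid[0], w // grid[1]
        feats = []
        for i in range(grid[0]):
            for j in range(grid[1]):
                zone = binary_img[i * zh:(i + 1) * zh, j * zw:(j + 1) * zw]
                ys, xs = np.nonzero(zone)
                if len(xs) == 0:                  # empty zone: no ink at all
                    feats.extend([0.0, 0.0])
                    continue
                cy, cx = ys.mean(), xs.mean()     # centroid of the zone's foreground
                d = np.hypot(ys - cy, xs - cx)    # pixel-to-centroid distances
                feats.extend([d.mean(), d.std()])
        return np.array(feats)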
A novel approach to text line and word segmentation on Odia printed documents
Conference paper, 2012 3rd International Conference on Computing, Communication and Networking Technologies, ICCCNT 2012, 2012, DOI Link
OCR is the electronic conversion of scanned images of handwritten, typewritten or printed text into machine-encoded text. Optical character recognition systems are available for various languages, such as English, Chinese and Arabic script, but none is commercially available for Odia script. We have taken a step towards developing an OCR system for the Odia language. OCR is popular for its application potential in banks, library automation, post offices, defense organizations and language processing. Text-line and word segmentation is one of the important steps of an OCR system, and the accuracy of word/character recognition is directly affected by the correctness of text-line and word segmentation. In this paper we propose a robust method for segmenting the individual text lines of an Odia printed document image. Each segmented text line is then the input to the word segmentation method, which produces segmented words. Both foreground and background information are used in the proposed method. We have tested our method on scanned Odia scripts as well as some multi-script documents and obtained encouraging results. The technique is based on the intensities of pixels in the document.
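The standard intensity-based baseline for this segmentation step is the projection profile: rows whose ink count exceeds a threshold form text lines, and within each line, runs of near-empty columns wider than a gap threshold separate words. The proposed method combines foreground and background information in more detail, so the sketch below is only the common starting point, with illustrative thresholds.

    import numpy as np

    def runs_above(profile, threshold):
        """Return (start, end) index pairs where profile > threshold contiguously."""
        active = profile > threshold
        runs, start = [], None
        for i, a in enumerate(active):
            if a and start is None:
                start = i
            elif not a and start is not None:
                runs.append((start, i))
                start = None
        if start is not None:
            runs.append((start, len(active)))
        return runs

    def segment_lines_and_words(binary_img, line_thresh=2, word_gap=5):
        """binary_img: 2-D array, 1 = ink. Returns [(line_row_span, [word_col_spans])]."""
        lines = []
        for (r0, r1) in runs_above(binary_img.sum(axis=1), line_thresh):
            line = binary_img[r0:r1, :]
            words = []
            for (c0, c1) in runs_above(line.sum(axis=0), 0):
                if words and c0 - words[-1][1] < word_gap:
                    words[-1] = (words[-1][0], c1)   # merge: gap too narrow for a word break
                else:
                    words.append((c0, c1))
            lines.append(((r0, r1), words))
        return lines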