Dr Ashu Abdul

Associate Professor

Department of Computer Science and Engineering

Contact Details

ashu.a@srmap.edu.in

Office Location

SR Block, Level 5, Cabin No: 12

Education

  • 2019 – Chang Gung University, Taiwan
  • 2015 – M.Tech, IIIT Bhubaneswar, India
  • 2009 – B.Tech, Shadan College of Engineering, JNTU-Hyderabad, India

Experience

  • Aug 2019 to Jan 2020, Assistant Professor | Vardhaman College of Engineering, Hyderabad
  • Feb 2013 to Dec 2014, Assistant Professor | National Institute of Science and Technology, Berhampur
  • June 2012 to Jan 2013, Assistant Professor | National Institute of Science and Technology, Berhampur

Research Interests

  • Designing and implementing a personal digital assistant that summarizes the emails and messages a user receives.
  • Implementing a music application that recommends music based on the user's geographical location, the time of day, their current emotion, and their listening history.
  • Studying and designing a medical chatbot with natural language processing for breast cancer patients.

Awards

  • GATE 2010 Qualified.
  • Sun Certified Java Programmer (SCJP) in 2008.

Memberships

  • IEEE Member

Publications

  • MATSFT: User query-based multilingual abstractive text summarization for low resource Indian languages by fine-tuning mT5

    Dr Dinesh Reddy Vemula, Dr Ashu Abdul, Dr M Krishna Siva Prasad, Phani Siginamsetty

    Source Title: Alexandria Engineering Journal, Quartile: Q1, DOI Link

    User query-based summarization is a challenging research area of natural language processing. However, the existing approaches struggle to effectively manage the intricate long-distance semantic relationships between user queries and input documents. This paper introduces a user query-based multilingual abstractive text summarization approach for the Indian low-resource languages by fine-tuning the multilingual pre-trained text-to-text (mT5) transformer model (MATSFT). The MATSFT employs a co-attention mechanism within a shared encoder–decoder architecture alongside the mT5 model to transfer knowledge across multiple low-resource languages. The co-attention captures cross-lingual dependencies, which allows the model to understand the relationships and nuances between the different languages. Most multilingual summarization datasets focus on major global languages like English, French, and Spanish. To address the challenges in the LRLs, we created an Indian language dataset, comprising seven LRLs and the English language, by extracting data from the BBC news website. We evaluate the performance of the MATSFT using the ROUGE metric and a language-agnostic target summary evaluation metric. Experimental results show that MATSFT outperforms the monolingual transformer model, pre-trained MTM, mT5 model, NLI model, IndicBART, mBART25, and mBART50 on the IL dataset. The statistical paired t-test indicates that the MATSFT achieves a significant improvement at a p-value of 0.05 compared to other models.
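
    As a rough illustration of the fine-tuning recipe this abstract describes, the sketch below conditions mT5 on a user query by prepending it to the document before computing the usual sequence-to-sequence loss. The checkpoint name, the "query:/document:" joining scheme, and the toy strings are illustrative assumptions, not the released MATSFT code.

      # Sketch: query-conditioned summarization by fine-tuning mT5 via the
      # Hugging Face transformers library (checkpoint and input format are
      # assumptions, not the paper's released setup).
      from transformers import AutoTokenizer, MT5ForConditionalGeneration

      tok = AutoTokenizer.from_pretrained("google/mt5-small")
      model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

      query = "What did the committee decide?"             # hypothetical user query
      document = "The committee met on Monday and ..."     # source article text
      target = "The committee approved the proposal."      # reference summary

      # Condition the encoder on the query by prepending it to the document.
      inputs = tok(f"query: {query} document: {document}",
                   return_tensors="pt", truncation=True, max_length=512)
      labels = tok(target, return_tensors="pt").input_ids

      loss = model(**inputs, labels=labels).loss   # standard seq2seq cross-entropy
      loss.backward()                              # one training step; optimizer omitted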
  • Evolutionary Algorithms for Edge Server Placement in Vehicular Edge Computing

    Dr Md Muzakkir Hussain, Dr Dinesh Reddy Vemula, Dr Ashu Abdul, Dr Firoj Gazi, Ms Surayya A

    Source Title: IEEE Access, Quartile: Q1, DOI Link

    Vehicular Edge Computing (VEC) is a critical enabler for intelligent transportation systems (ITS). It provides low-latency and energy-efficient services by offloading computation to the network edge. Effective edge server placement is essential for optimizing system performance, particularly in dynamic vehicular environments characterized by mobility and variability. The Edge Server Placement Problem (ESPP) addresses the challenge of minimizing latency and energy consumption while ensuring scalability and adaptability in real-world scenarios. This paper proposes a framework to solve the ESPP using real-world vehicular mobility traces to simulate realistic conditions. To achieve optimal server placement, we evaluate the effectiveness of several advanced evolutionary algorithms. These include the Genetic Algorithm (GA), Non-dominated Sorting Genetic Algorithm II (NSGA-II), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and Teaching-Learning-Based Optimization (TLBO). Each algorithm is analyzed for its ability to optimize multiple objectives under varying network conditions. Our results show that ACO performs the best, producing well-distributed Pareto-optimal solutions and balancing trade-offs effectively. GA and PSO exhibit faster convergence and better energy efficiency, making them suitable for scenarios requiring rapid decisions. The proposed framework is validated through extensive simulations and compared with state-of-the-art methods. It consistently outperforms them in reducing latency and energy consumption. This study provides actionable insights into algorithm selection and deployment strategies for VEC, addressing mobility, scalability, and resource optimization challenges. The findings contribute to the development of robust, scalable VEC infrastructures, enabling the efficient implementation of next-generation ITS applications.
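
    To make the evolutionary framing concrete, here is a minimal single-objective genetic algorithm for a toy placement instance; mean squared vehicle-to-server distance stands in for latency, and the coordinates, population size, and rates are all assumptions rather than the paper's multi-objective setup.

      # Sketch: a toy genetic algorithm for edge server placement. Fitness is
      # the mean squared distance from each vehicle to its nearest server
      # (a latency proxy); all sizes and rates are illustrative assumptions.
      import random

      random.seed(0)
      VEHICLES = [(random.random(), random.random()) for _ in range(200)]
      SITES = [(random.random(), random.random()) for _ in range(30)]
      K = 5   # number of edge servers to place

      def fitness(placement):
          return sum(min((vx - SITES[s][0]) ** 2 + (vy - SITES[s][1]) ** 2
                         for s in placement)
                     for vx, vy in VEHICLES) / len(VEHICLES)

      def crossover(a, b):
          return random.sample(list(set(a) | set(b)), K)   # recombine parent sites

      def mutate(p, rate=0.2):
          q = set(p)
          if random.random() < rate:     # swap one chosen site for a random one
              q.discard(random.choice(list(q)))
              while len(q) < K:
                  q.add(random.randrange(len(SITES)))
          return list(q)

      pop = [random.sample(range(len(SITES)), K) for _ in range(40)]
      for _ in range(100):
          pop.sort(key=fitness)          # lower fitness (latency proxy) is better
          pop = pop[:10] + [mutate(crossover(*random.sample(pop[:10], 2)))
                            for _ in range(30)]
      best = min(pop, key=fitness)
      print(sorted(best), round(fitness(best), 4))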
  • Early Childhood Autism Screening Through Facial Feature Extraction

    Dr Ashu Abdul, Mr Mekala Sanjeev Kumar, Sai Karthik Nallamothu., Nitul Dutta., George Ghinea., Beri Surya Samantha

    Source Title: 2024 Eighth International Conference on Parallel, Distributed and Grid Computing (PDGC), DOI Link

    Autism Spectrum Disorder (ASD) is a type of neurological disorder which affects a person's communication skills, social skills, thinking skills, etc. A person with autism tends to have social issues such as less interaction, less eye contact, less understanding, impaired language, and issues with verbal and non-verbal abilities. Autistic persons experience repetitive behaviour and are often hyper- or hypo-sensitive to external stimuli. This disorder is caused by developmental changes in the structure of the brain. A person with autism will have different symptoms compared to other people with autism. It is mainly caused by genetics, siblings with ASD, being born with low birth weight, or having older parents. ASD can be cured in early stages in children if the right diagnosis is followed, such as conducting medical or neurological examinations, testing the cognitive and language abilities of children, and periodic or frequent observations such as blood tests and hearing tests. A child around or below 10 years of age can be detected with autism more easily than an adult, so it is crucial to determine whether the child has the disorder at an early stage. Deep learning is one of the most rapidly advancing areas in computer science; it solves problems where machine learning fails. In this research, deep learning models, especially models based on transfer learning such as VGG16, InceptionV3, EfficientNet-B0 and B7, were used to detect autism using facial images without any need of MRI or fMRI. The highest accuracy achieved was around 85%.
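
    A minimal sketch of the transfer-learning setup outlined above: VGG16 is frozen as an ImageNet feature extractor and a small binary head is trained on facial images. The input size, head layout, and the (omitted) datasets are assumptions.

      # Sketch: frozen VGG16 backbone + small binary classification head,
      # in the spirit of the transfer-learning models named above.
      import tensorflow as tf

      base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                         input_shape=(224, 224, 3))
      base.trainable = False                    # keep ImageNet features fixed

      model = tf.keras.Sequential([
          base,
          tf.keras.layers.GlobalAveragePooling2D(),
          tf.keras.layers.Dense(128, activation="relu"),
          tf.keras.layers.Dropout(0.3),
          tf.keras.layers.Dense(1, activation="sigmoid"),  # ASD vs. non-ASD
      ])
      model.compile(optimizer="adam", loss="binary_crossentropy",
                    metrics=["accuracy"])
      # model.fit(train_ds, validation_data=val_ds, epochs=10)  # hypothetical datasets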
  • Mmrag: Multimodal Medical Retrieval Augmented Generation System

    Dr Ashu Abdul, Phani Siginamsetty, Chang Fu Kuo., Jenhui Chen., Jatindra Kumar Dash

    Source Title: SSRN, DOI Link

    Accurate interpretation of medical images is crucial for effective diagnosis and treatment planning, yet remains challenging due to data complexity, variability, and hallucination issues. To address these challenges, we introduce a Multimodal Medical Retrieval-Augmented Generation (MMRAG) approach for automating radiology report generation from chest X-rays and brain MRI scans. This approach involves fine-tuning the idefics-80B parameter model with Quantized Low-Rank Adaptation (QLoRA), enhancing efficiency in processing large-scale multimodal data by reducing model weights and improving inference speed. By converting the dataset into multimodal embeddings and creating a unified vector space for images and text, the proposed system retrieves the most relevant report from a pre-constructed vector store when presented with a new image. Using Retrieval-Augmented Generation (RAG) with the fine-tuned model, it generates comprehensive radiology reports, significantly improving the efficiency and thoroughness of automated medical report generation. The proposed MMRAG potentially reduces radiologists' workload and enhances diagnostic accuracy by integrating multimodal learning with a retrieval-augmented approach, addressing critical challenges in medical imaging, including hallucination mitigation and computational efficiency during inference. Evaluation on publicly available datasets like MIMIC-CXR and CGBrainMRI demonstrates superior performance compared to existing approaches.
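
    The retrieval half of this system can be illustrated independently of the fine-tuned generator: embed reports and images into one vector space and return the nearest report for a new image. In the sketch below the embeddings are random stand-ins for a real multimodal encoder, and the report texts are invented.

      # Sketch: nearest-report retrieval over a shared embedding space.
      # Random vectors stand in for real image/text encoder outputs.
      import numpy as np

      rng = np.random.default_rng(0)
      report_texts = ["No acute findings.",
                      "Left lower lobe opacity.",
                      "Cardiomegaly without edema."]
      report_vecs = rng.normal(size=(len(report_texts), 512))  # stand-in embeddings
      report_vecs /= np.linalg.norm(report_vecs, axis=1, keepdims=True)

      def retrieve(image_vec, k=1):
          """Return the k reports closest to the query image embedding."""
          q = image_vec / np.linalg.norm(image_vec)
          scores = report_vecs @ q               # cosine similarity
          return [report_texts[i] for i in np.argsort(scores)[::-1][:k]]

      print(retrieve(rng.normal(size=512)))
      # The retrieved report would then be added to the generator's prompt.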
  • Empowering Quality of Recommendations by Integrating Matrix Factorization Approaches With Louvain Community Detection

    Dr Ashu Abdul, Dr Murali Krishna Enduri, Dr T Jaya Lakshmi, Ms Tokala Srilatha, Jenhui Chen

    Source Title: IEEE Access, Quartile: Q1, DOI Link

    Recommendation systems play an important role in creating personalized content for consumers, improving their overall experiences across several applications. Providing the user with accurate recommendations based on their interests is the recommender system's primary goal. Collaborative filtering-based recommendation with the help of matrix factorization techniques is very useful in practice. Owing to the expanding size of datasets, and as the complexity increases, an issue arises in delivering accurate recommendations to the users. The efficient functioning of a recommendation system faces a scalability challenge in handling large and varying datasets. This paper introduces an innovative approach that integrates matrix factorization techniques and community detection methods to address scalability in recommendation systems. The steps involved in the proposed approach are: 1) the rating matrix is modeled as a bipartite network; 2) communities are generated from the network; 3) the rating matrices that belong to the communities are extracted and MF is applied to these matrices in parallel; 4) the predicted rating matrices belonging to the communities are merged and the root mean square error (RMSE), mean square error (MSE), and mean absolute error (MAE) are evaluated. In our paper, different matrix factorization approaches like basic MF, NMF, SVD++, and FANMF are taken along with the Louvain community detection method for dividing the communities. The experimental analysis is performed on five diverse datasets to enhance the quality of the recommendation. To determine the method's efficiency, the evaluation metrics RMSE, MSE, and MAE are used, and the time required for the computation is also measured. It is observed in the results that almost 95% of our results are proven effective by obtaining lower RMSE, MSE, and MAE values. Thus, the user's main aim of getting accurate recommendations based on their experiences is satisfied.
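
    A compact sketch of the four-step pipeline above: the toy ratings, latent dimension, and SGD settings are assumptions, plain SGD matrix factorization stands in for the basic MF/NMF/SVD++/FANMF variants, and the Louvain step needs a recent networkx (2.8 or later).

      # Sketch: bipartite rating graph -> Louvain communities -> per-community MF.
      import networkx as nx
      import numpy as np

      ratings = {("u1", "i1"): 5, ("u1", "i2"): 4, ("u2", "i2"): 3,
                 ("u3", "i3"): 2, ("u4", "i3"): 5, ("u4", "i4"): 4}

      G = nx.Graph()
      for (u, i), r in ratings.items():
          G.add_edge(u, i, weight=r)             # step 1: bipartite network

      communities = nx.community.louvain_communities(G, seed=0)   # step 2

      def factorize(pairs, k=2, steps=200, lr=0.05, reg=0.02):
          """Plain SGD matrix factorization on one community's ratings."""
          rng = np.random.default_rng(0)
          P = {u: rng.normal(scale=0.1, size=k) for u, _ in pairs}
          Q = {i: rng.normal(scale=0.1, size=k) for _, i in pairs}
          for _ in range(steps):
              for u, i in pairs:
                  e = ratings[(u, i)] - P[u] @ Q[i]
                  P[u] += lr * (e * Q[i] - reg * P[u])
                  Q[i] += lr * (e * P[u] - reg * Q[i])
          return P, Q

      for com in communities:                    # steps 3-4: factorize each
          pairs = [p for p in ratings if p[0] in com and p[1] in com]
          if pairs:                              # community, then evaluate
              P, Q = factorize(pairs)
              mse = np.mean([(ratings[p] - P[p[0]] @ Q[p[1]]) ** 2 for p in pairs])
              print(sorted(com), "MSE:", round(float(mse), 3))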
  • Deep learning based RAGAE-SVM for Chronic kidney disease diagnosis on internet of health things platform

    Dr Ashu Abdul, Prabhakar Kandukuri., Kuchipudi Prasanth Kumar., Velagapudi Sreenivas., G Ramesh., Venkateswarlu Gundu

    Source Title: Multimedia Tools and Applications, Quartile: Q1, DOI Link

    Chronic kidney disease (CKD) is a prominent disease that causes loss of functionality in the kidney. Doctors can now more easily gather patient health status data due to the growth of the Internet of Health Things (IoHT). The IoHT data contains a huge amount of redundant data, making it challenging to predict CKD quickly and accurately. In healthcare applications like feature-based classification, a variety of disease diagnosis systems have been used to address this problem. Current disease detection algorithms suffer from imbalanced dataset processing, low-accuracy feature learning, and high computational power requirements. Thus, deep learning-based clinical decision support systems have been developed to resolve these complexities. To remove outliers from the medical data, the data collected with IoHT devices is first pre-processed using an enhanced K-means clustering technique. The Synthetic Minority Oversampling Technique is used to balance the data because the IoHT dataset is highly imbalanced. The classifier detects anomalies in less time because of this processing step. The balanced CKD dataset is then presented to a novel classifier called the Residual Attention-Gated Autoencoder with Support Vector Machine. To improve detection accuracy, the adopted classifier can learn and extract features. The proposed method results in an increased accuracy of 99.43% within 70.1 s of computation time. The mean intersection over union and kappa coefficient of the proposed method are 98.15% and 98.1%, respectively.
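
    The preprocessing chain reads naturally as a three-stage pipeline; in the sketch below a plain RBF-kernel SVM stands in for the RAGAE-SVM classifier, synthetic data replaces the IoHT dataset, and the outlier cutoff is an assumption (requires scikit-learn and imbalanced-learn).

      # Sketch: K-means outlier filtering -> SMOTE balancing -> SVM classifier,
      # mirroring the pipeline described above on synthetic data.
      import numpy as np
      from imblearn.over_sampling import SMOTE
      from sklearn.cluster import KMeans
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=1000, weights=[0.9], random_state=0)

      # 1) Drop points far from their K-means centroid (simple outlier filter).
      km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
      dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
      keep = dist < np.percentile(dist, 95)
      X, y = X[keep], y[keep]

      # 2) Balance the minority class with SMOTE (training split only).
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
      X_tr, y_tr = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

      # 3) Classify; an RBF-kernel SVM replaces the autoencoder+SVM hybrid.
      clf = SVC(kernel="rbf").fit(X_tr, y_tr)
      print("test accuracy:", round(clf.score(X_te, y_te), 3))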
  • Improving preliminary clinical diagnosis accuracy through knowledge filtering techniques in consultation dialogues

    Dr Ashu Abdul, Phani Siginamsetty, Binghong Chen., Jenhui Chen

    Source Title: Computer Methods and Programs in Biomedicine, Quartile: Q1, DOI Link

    Symptom descriptions by ordinary people are often inaccurate or vague when seeking medical advice, which often leads to inaccurate preliminary clinical diagnoses. To address this issue, we propose a deep learning model named the knowledgeable diagnostic transformer (KDT) for the natural language processing (NLP)-based preliminary clinical diagnoses. The KDT extracts symptom-disease relation triples (h,r,t) from patient symptom descriptions by using a proposed bipartite medical knowledge graph (bMKG). To avoid too many relation triples causing the knowledge noise issue, we propose a knowledge inclusion-exclusion approach (KIA) to eliminate undesirable triples (a knowledge filtering layer). Next, we combine token embedding techniques with the transformer model to predict the diseases that patients may encounter. To train the KDT, a medical diagnosis question-answering dataset (named MDQA dataset) containing large-scale, high-quality questions (patient syndrome description) and answering (diagnosis) corpora with 2.6M entries (1.07GB in size) in Mandarin was built. We also train the KDT with the National Institutes of Health (NIH) English dataset (MedQuAD). The KDT marks a transformative approach by achieving a remarkable accuracy of 99% for different evaluation metrics when compared with the baseline transformers used for the NLP-based preliminary clinical diagnoses approaches. In essence, our study not only demonstrates the effectiveness of the KDT in enhancing diagnostic precision but also underscores its potential to revolutionize the field of preliminary clinical diagnoses. By harnessing the power of knowledge-based approaches and advanced NLP techniques, we have paved the way for more accurate and reliable diagnoses, ultimately benefiting both healthcare providers and patients. The KDT has the potential to significantly reduce misdiagnoses and improve patient outcomes, marking a pivotal advancement in the realm of medical diagnostics.
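
    The knowledge filtering idea fits in a few lines: keep only triples that appear in the medical graph and are grounded in the patient's own description, and cap how many survive. The tiny graph, grounding rule, and cap below are illustrative assumptions, not the published KIA.

      # Sketch: inclusion-exclusion filtering of (head, relation, tail) triples
      # before they reach the transformer; graph and rules are toy stand-ins.
      SYMPTOM_DISEASE_GRAPH = {                  # stand-in for the bipartite bMKG
          ("fever", "indicates", "influenza"),
          ("fever", "indicates", "malaria"),
          ("cough", "indicates", "influenza"),
      }

      def filter_triples(candidates, mentioned_symptoms, max_keep=2):
          """Keep triples grounded in the graph AND in the patient's words."""
          grounded = [t for t in candidates
                      if t in SYMPTOM_DISEASE_GRAPH and t[0] in mentioned_symptoms]
          return grounded[:max_keep]             # cap count to limit knowledge noise

      description = {"fever", "headache"}        # symptoms the patient mentioned
      candidates = [("fever", "indicates", "influenza"),
                    ("rash", "indicates", "measles")]
      print(filter_triples(candidates, description))
      # -> [('fever', 'indicates', 'influenza')]; the ungrounded triple is dropped.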
  • Node Significance Analysis in Complex Networks Using Machine Learning and Centrality Measures

    Dr Murali Krishna Enduri, Dr Ashu Abdul, Dr Satish Anamalamudi, Koduru Hajarathaiah., Jenhui Chen

    Source Title: IEEE Access, Quartile: Q1, DOI Link

    The study addresses the limitations of traditional centrality measures in complex networks, especially in disease-spreading situations, due to their inability to fully grasp the intricate connection between a node's functional importance and structural attributes. To tackle this issue, the research introduces an innovative framework that employs machine learning techniques to evaluate the significance of nodes in transmission scenarios. This framework incorporates various centrality measures like degree, clustering coefficient, Katz, local relative change in average clustering coefficient, average Katz, and average degree (LRACC, LRAK, and LRAD) to create a feature vector for each node. These methods capture diverse topological structures of nodes and incorporate the infection rate, a critical factor in understanding propagation scenarios. To establish accurate labels for node significance, propagation tests are simulated using epidemic models (SIR and Independent Cascade models). Machine learning methods are employed to capture the complex relationship between a node's true spreadability and infection rate. The performance of the machine learning model is compared to traditional centrality methods in two scenarios. In the first scenario, training and testing data are sourced from the same network, highlighting the superior accuracy of the machine learning approach. In the second scenario, training data from one network and testing data from another are used, where LRACC, LRAK, and LRAD outperform the machine learning methods.
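
    An end-to-end sketch of that framework: centrality features per node, spreadability labels from a toy independent-cascade-style simulation, and a random forest as the learner. The graph model, infection rate, trial count, and forest size are assumptions.

      # Sketch: centrality feature vectors + simulated spread labels + regressor.
      import random
      import networkx as nx
      from sklearn.ensemble import RandomForestRegressor

      random.seed(0)
      G = nx.barabasi_albert_graph(300, 3, seed=0)

      def cascade_spread(G, seed_node, beta=0.1, trials=20):
          """Average outbreak size of a simple cascade started at seed_node."""
          total = 0
          for _ in range(trials):
              infected, frontier = {seed_node}, [seed_node]
              while frontier:                    # each node gets one chance to infect
                  nxt = [v for u in frontier for v in G[u]
                         if v not in infected and random.random() < beta]
                  infected.update(nxt)
                  frontier = nxt
              total += len(infected)
          return total / trials

      deg = nx.degree_centrality(G)
      clu = nx.clustering(G)
      katz = nx.katz_centrality_numpy(G, alpha=0.01)
      X = [[deg[v], clu[v], katz[v]] for v in G]   # feature vector per node
      y = [cascade_spread(G, v) for v in G]        # "true spreadability" labels

      model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
      print("R^2 on the training network:", round(model.score(X, y), 3))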
  • Graph-based zero-shot learning for classifying natural and computer-generated image

    Dr Ashu Abdul, K Vara Prasad., B Srikanth., Lakshmikanth Paleti., K Kranthi Kumar., Sunitha Pachala

    Source Title: Multimedia Tools and Applications, Quartile: Q1, DOI Link

    Zero-shot image classification is a stimulating problem that attains the human recognition level depending upon a tiny quantity of trained images. Image classification is an essential phenomenon in the computer vision process. Therefore, the major problem was solving the classification process, and generally, the processing of entire data for the extraction process was complicated. To solve this problem, the proposed novel technique was Buffalo-based Graph Neural Zero-shot Learning (BbGZSL), which aimed to classify image types as natural or computer-generated. In the first stage, a denoising process was performed to eliminate the data noise and convert the colour image into a greyscale image. Then, a feature extraction process was performed to extract the required features based on the buffalo fitness features of the proposed model. Furthermore, the extracted features were stored using the learning memory. Finally, unseen image testing and matching were performed to classify the image. In addition, the proposed BbGZSL mechanism was implemented in the Python tool with several performance assessments. The proposed model gained 97.06% accuracy, F-score and recall, as well as 97.07% precision for the tested unseen image dataset.
  • Path Loss Prediction Using Machine Learning Models for in-vivo Wireless Nanosensor Networks in Cardiac Health Monitoring

    Dr Ashu Abdul, Dr Manjula R, Parsh Jadon., Krishna Sharma., Venkatesh Sharma.,

    Source Title: 19th International Conference on Information Assurance and Security (IAS 2023), DOI Link

  • Application Aware Computation Offloading in Vehicular Fog Computing (VFC)

    Dr Ashu Abdul, Dr Dinesh Reddy Vemula, Dr Md Muzakkir Hussain

    Source Title: Data Science Journal, Quartile: Q2, DOI Link

  • Enhanced resource provisioning and migrating virtual machines in heterogeneous cloud data center

    Dr Dinesh Reddy Vemula, Dr Ashu Abdul, Dr Md Muzakkir Hussain, Ilaiah Kavati., Sonam Maurya., Morampudi Mahesh Kumar

    Source Title: Journal of Ambient Intelligence and Humanized Computing, Quartile: Q1, DOI Link

    Data centers have become an indispensable part of modern computing infrastructures. It becomes necessary to manage cloud resources efficiently to reduce the ever-increasing power demands of data centers. Dynamic consolidation of virtual machines (VMs) in a data center is an effective way to map workloads onto servers so that the least possible resources are required. It is an efficient way to improve resource utilization and reduce energy consumption in cloud data centers. Virtual machine (VM) consolidation involves host overload/underload detection, VM selection, and VM placement. If a server becomes overloaded, we need techniques to select the proper virtual machines to migrate. By considering the migration overhead and service level agreement (SLA) violations, we investigate design methodologies to reduce the energy consumption of the whole data center. We propose a novel approach that optimally detects when a host is overloaded using known CPU utilization and a given state configuration. We design a VM selection policy that considers various resource utilization factors to select the VMs. In addition, we propose an improved version of the JAYA approach for VM placement that minimizes energy consumption by optimally placing the migrated VMs in a data center. We analyze the performance in terms of energy consumption, performance degradation, and migrations. Using CloudSim, we ran simulations and observed that our approach has an average improvement of 24% compared to state-of-the-art approaches in terms of power consumption.
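
    Two of the consolidation stages above reduce to short policies: a CPU-threshold overload check and a selection rule that evicts the VM with the least RAM to copy (a proxy for minimum migration time). The threshold and VM table are illustrative assumptions.

      # Sketch: overload detection and VM selection for consolidation.
      from dataclasses import dataclass

      @dataclass
      class VM:
          name: str
          cpu: float      # CPU share this VM uses on its host (0..1)
          ram_gb: float   # memory that would have to be copied on migration

      def host_overloaded(vms, upper=0.8):
          return sum(vm.cpu for vm in vms) > upper

      def select_vm_to_migrate(vms):
          # Migration time grows with RAM to copy, so pick the smallest-RAM VM.
          return min(vms, key=lambda vm: vm.ram_gb)

      host = [VM("web", 0.35, 4), VM("db", 0.40, 16), VM("cache", 0.15, 2)]
      if host_overloaded(host):
          victim = select_vm_to_migrate(host)
          print(f"migrate {victim.name} ({victim.ram_gb} GB) to an underloaded host")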
  • LWC: EFFICIENT LIGHTWEIGHT BLOCK CIPHERS FOR PROVIDING SECURITY TO CONSTRAINED DEVICES A SOLUTION FOR IOT DEVICES

    Dr Ashu Abdul, Garlapati Narayana

    Source Title: Journal of Theoretical and Applied Information Technology, Quartile: Q3, DOI Link

  • Abstractive Text Summarization with Fine-Tuned Transformer

    Dr Ashu Abdul, Mr Siginamsetty Venkata Phanidra Kumar

    Source Title: Lecture Notes in Electrical Engineering, Quartile: Q4, DOI Link

    We investigate an encoder–decoder model based on bidirectional transformers with a self-attention function to generate abstractive text summaries for an input document. In the proposed approach, we fine-tune the transformer by changing the activation function from the rectified linear unit (ReLU) to the parametric rectified linear unit (PReLU). By introducing the PReLU activation function, we can retain long-term dependencies from the input document by reducing the data loss, which was higher when we used the ReLU function. Apart from that, we introduce a self-attention function for keeping track of the important keywords and the out-of-vocabulary words present in the input document. We employ the CNN/DailyMail dataset, the Inshorts dataset, and customized IndiaToday data to achieve abstractive text summarization. The proposed model achieves a better ROUGE score when compared with the sequence-to-sequence model with a long short-term memory network and the traditional transformer model.
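
    The core change described here, swapping ReLU for PReLU so that small negative activations survive with a learned slope, is easiest to see on a transformer-style position-wise feed-forward block. The PyTorch sketch below uses illustrative dimensions and is not the paper's full summarization model.

      # Sketch: a position-wise feed-forward block with PReLU instead of ReLU.
      import torch
      import torch.nn as nn

      class FeedForwardPReLU(nn.Module):
          """Transformer FFN whose activation slope for negatives is learned."""
          def __init__(self, d_model=512, d_ff=2048):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Linear(d_model, d_ff),
                  nn.PReLU(),               # learnable negative slope keeps small
                                            # negative activations instead of zeroing them
                  nn.Linear(d_ff, d_model),
              )

          def forward(self, x):
              return self.net(x)

      x = torch.randn(2, 16, 512)           # (batch, tokens, d_model)
      print(FeedForwardPReLU()(x).shape)    # torch.Size([2, 16, 512])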
  • Vision-Based Facial Detection and Recognition for Attendance System Using Reinforcement Learning

    Dr Ashu Abdul, Phani Siginamsetty

    Source Title: Smart Innovation, Systems and Technologies, Quartile: Q4, DOI Link

    We propose a reinforcement learning (RL)-based attendance system (RLAS) for marking the attendance of the students present during a class using the frames captured by a video camera. The RLAS comprises an agent module and an environment module. In the agent module, we fine-tune the multi-task cascaded convolutional network (MTCNN) module and the ArcFace module to identify the students present in class. The MTCNN module consists of two neural networks, the P-Network (P-Net) and the R-Network (R-Net). In the P-Net, we add two convolutional layers for extracting the latent features from the facial images of the students. Similarly, we modify the R-Net by adding two dense layers for detecting the bounding boxes in the frames captured by the video camera. Based on the latent features obtained from the fine-tuned MTCNN, the ArcFace identifies the students present in the class. The environment module of the RLAS uses the reward function to evaluate the output generated by the agent module. If the agent module correctly identifies all the students present in the frames captured by the camera, then the reward function marks those students as present. Otherwise, the environment module back-propagates the error obtained from the reward function to the agent module. To evaluate the RLAS, we created a dataset of 120,000 different images of 2,400 students studying at our university. In our experimental evaluation, we observed that the fine-tuned MTCNN along with the ArcFace provides a transfer learning mechanism to the RLAS. Therefore, the RLAS obtains lower time complexity than the different variants of the MTCNN and CNN models.
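
    The environment's reward rule can be sketched on plain sets of student IDs: full reward when the predicted roster matches the students actually present, otherwise a negative signal. The abstract only specifies reward on exact match and error back-propagation otherwise; the mismatch-proportional shape below is an assumption.

      # Sketch: a toy reward function for the attendance environment.
      def attendance_reward(predicted, present):
          """+1.0 when every present student is correctly identified; otherwise
          a negative signal that grows with the size of the mismatch."""
          if predicted == present:
              return 1.0                         # mark attendance
          mismatches = len(predicted ^ present)  # symmetric difference
          return -mismatches / max(len(present), 1)

      print(attendance_reward({"s1", "s2"}, {"s1", "s2"}))   # 1.0 -> attendance marked
      print(attendance_reward({"s1"}, {"s1", "s2"}))         # -0.5 -> error back-propagated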
  • VIRTUES AND SHORTCOMINGS OF ARTIFICIAL INTELLIGENCE IN GRAPHIC DESIGN ARENA

    Dr Ashu Abdul, Siripurapu Phani Sindhura

    Source Title: International Journal of Advanced Research in Engineering and Technology, DOI Link

  • Extended Graph Convolutional Networks for 3D Object Classification in Point Clouds

    Dr Ashu Abdul, Dr Dinesh Reddy Vemula, Dr Sanjay Kumar, Sai Rishvanth Katragadda

    Source Title: International Journal of Advanced Computer Science and Applications, Quartile: Q3, DOI Link

    Point clouds are a popular way to represent 3D data. Due to the sparsity and irregularity of point cloud data, learning features directly from point clouds becomes complex, which lends great importance to methods that consume points directly. This paper focuses on interpreting point cloud inputs using graph convolutional networks (GCNs). Further, we extend this model to detect the objects found in autonomous driving datasets and the miscellaneous objects found in non-autonomous driving datasets. We propose to reduce the runtime of a GCN by allowing it to stochastically sample fewer input points from point clouds to infer their larger structure while preserving accuracy. Our proposed model offers improved accuracy while drastically decreasing graph-building and prediction runtime.
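
    The runtime idea, sampling fewer input points before the expensive graph construction, is sketched below with a uniform random subsample and a SciPy KD-tree kNN graph. The cloud size, keep count, and k are assumptions.

      # Sketch: stochastically thin a point cloud, then build a kNN graph
      # to feed a GCN; sizes are illustrative.
      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(0)
      cloud = rng.normal(size=(10_000, 3))       # full point cloud (N x 3)

      def sample_and_build_graph(points, keep=1024, k=8):
          """Subsample the cloud, then connect each point to its k neighbours."""
          idx = rng.choice(len(points), size=keep, replace=False)
          pts = points[idx]
          _, nbrs = cKDTree(pts).query(pts, k=k + 1)  # first hit is the point itself
          edges = [(i, j) for i in range(keep) for j in nbrs[i, 1:]]
          return pts, edges                           # node features + graph edges

      pts, edges = sample_and_build_graph(cloud)
      print(len(pts), "nodes,", len(edges), "edges")  # 1024 nodes, 8192 edges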
  • Techniques for Solving Shortest Vector Problem

    Dr Ashu Abdul, Dr M Mahesh Kumar, Dr Sriramulu Bojjagani, Dr Dinesh Reddy Vemula, P Ravi

    Source Title: International Journal of Advanced Computer Science and Applications, Quartile: Q3, DOI Link

    Lattice-based cryptosystems are regarded as secure, and are believed to be secure even against quantum computers. Lattice-based cryptography relies upon problems like the Shortest Vector Problem. The Shortest Vector Problem is an instance of the lattice problems that are used as a basis for secure cryptographic schemes. For more than 30 years now, the Shortest Vector Problem has been at the heart of a thriving research field, and finding a new efficient algorithm has turned out to be out of reach. This problem has a great many applications such as optimization, communication theory, and cryptography. This paper introduces the Shortest Vector Problem and other related problems such as the Closest Vector Problem. We present the average-case and worst-case hardness results for the Shortest Vector Problem. Further, this work explores efficient algorithms for solving the Shortest Vector Problem and presents their efficiency. More precisely, this paper presents four algorithms: the Lenstra-Lenstra-Lovász (LLL) algorithm, the Block Korkine-Zolotarev (BKZ) algorithm, a Metropolis algorithm, and a convex relaxation of SVP. The experimental results on various lattices show that the Metropolis algorithm works better than the other algorithms across varying sizes of lattices.
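
    The two-dimensional case makes the reduction idea concrete: Lagrange-Gauss reduction, which LLL generalizes to higher dimensions, provably returns a shortest nonzero vector of a 2D lattice. The basis below is an arbitrary example.

      # Sketch: Lagrange-Gauss reduction of a 2D lattice basis.
      import numpy as np

      def lagrange_gauss(b1, b2):
          """Reduce a 2D basis; b1 comes back as a shortest nonzero vector."""
          if b1 @ b1 > b2 @ b2:
              b1, b2 = b2, b1
          while True:
              # Subtract the integer multiple of b1 that best shortens b2.
              m = round((b2 @ b1) / (b1 @ b1))
              b2 = b2 - m * b1
              if b1 @ b1 <= b2 @ b2:
                  return b1, b2
              b1, b2 = b2, b1

      b1, b2 = lagrange_gauss(np.array([201, 37]), np.array([1648, 297]))
      print(b1, b2)   # reduced basis; b1 = [ 1 32] is a shortest nonzero vector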
  • Strategic AI-driven Intelligence Modelling for Identification and Mitigation of Cyberattack on Banking Systems

    Dr Ashu Abdul, Amina Baba Adam

    Source Title: International Journal of Engineering Research and Technology, DOI Link

  • Intelligent Data Compression Policy for Hadoop Performance Optimization

    Dr Ashu Abdul, Mir Wajahat Hussain., Diptendu Sinha Roy., Hemant Kumar Reddy

    Source Title: Advances in Intelligent Systems and Computing, DOI Link

    Hadoop can deal with zettabyte-scale data, but heavy disk I/O and network utilization often appear as the limitations of Hadoop. During the different job execution phases of Hadoop, the production of intermediate data is enormous, and transferring that data over the network to the "reduce" phase becomes an overhead. In this paper, we discuss an intelligent data compression policy to overcome these limitations and to improve the performance of Hadoop. The policy starts compression at an apt time, before all the map tasks in the job have completed, which reduces the data transfer time in the network. The results are evaluated by running several benchmarks, which show an improvement of about 8–15% in job execution and depict the merits of the proposed compression policy.
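
    The policy's core decision, starting compression before all map tasks finish rather than waiting for the job to end, fits in a few lines; the completion threshold and the network-pressure flag below are illustrative assumptions.

      # Sketch: decide when to start compressing intermediate map output.
      def should_compress(maps_done, maps_total, network_busy, threshold=0.5):
          """Begin compressing once enough maps have finished and the network,
          not the CPU, is the bottleneck -- instead of waiting for all maps."""
          return network_busy and maps_done / maps_total >= threshold

      print(should_compress(60, 100, network_busy=True))    # True: compress now
      print(should_compress(20, 100, network_busy=True))    # False: too early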

Patents

  • A system and method for performing multilingual multimodal summarization for multimodal input

    Dr Ashu Abdul

    Patent Application No: 202241073648, Date Filed: 19/12/2022, Date Published: 30/12/2022, Status: Published

  • A system and method for multimodal multilingual input summarization using quantum motivated processors

    Dr Ashu Abdul

    Patent Application No: 202341005519, Date Filed: 27/01/2023, Date Published: 24/02/2023, Status: Granted

  • A system and a method for prediction of the strength of concrete

    Dr Ashu Abdul

    Patent Application No: 202341087257, Date Filed: 20/12/2023, Date Published: 05/01/2024, Status: Published

  • A healthcare summarization system and a method thereof

    Dr Ashu Abdul, Dr Krishna Prasad

    Patent Application No: 202441005845, Date Filed: 29/01/2024, Date Published: 09/02/2024, Status: Published

  • A system and a method for generating trading coupons

    Dr Ashu Abdul

    Patent Application No: 202341087665, Date Filed: 21/12/2023, Date Published: 12/01/2024, Status: Published

  • A system and a method for deriving multilingual meeting minutes

    Dr Ashu Abdul

    Patent Application No: 202441001022, Date Filed: 05/01/2024, Date Published: 09/02/2024, Status: Published

  • A system and a method for healthcare data processing and decision support

    Dr Ashu Abdul

    Patent Application No: 202441076761, Date Filed: 09/10/2024, Date Published: 18/10/2024, Status: Published

  • A system and a method for automated exam evaluation and personalized learning feedback

    Dr Ashu Abdul

    Patent Application No: 202541018210, Date Filed: 01/03/2025, Date Published: 14/03/2025, Status: Published

  • A system and a method for managing api calls in a large language model

    Dr Ashu Abdul

    Patent Application No: 202441096836, Date Filed: 07/12/2024, Date Published: 13/12/2024, Status: Published

  • A system and a method for personalized e-content generation based on student performance in education

    Dr Amit Kumar Mandal, Dr M Krishna Siva Prasad, Dr Ashu Abdul

    Patent Application No: 202441003347, Date Filed: 17/01/2024, Date Published: 09/02/2024, Status: Published

  • A method for automated counting of poultry in poultry farm using deep learning techniques

    Dr Dinesh Reddy Vemula, Dr Ashu Abdul, Dr Priyanka

    Patent Application No: 202141052193, Date Filed: 15/11/2021, Date Published: 03/12/2021, Status: Published

  • System and method for generating structured queries from natural language inputs

    Dr Ashu Abdul

    Patent Application No: 202441096460, Date Filed: 06/12/2024, Date Published: 13/12/2024, Status: Published

  • A system for climate control in indoor saffron cultivation and a method thereof

    Dr Ashu Abdul

    Patent Application No: 202441103777, Date Filed: 27/12/2024, Date Published: 03/01/2025, Status: Published

  • A system and a method for automatic face detection and media capturing

    Dr Ashu Abdul

    Patent Application No: 202241032986, Date Filed: 09/06/2022, Date Published: 17/06/2022, Status: Granted

Projects

  • Multilingual Minutes of Meeting – MMoM

    Dr Dinesh Reddy Vemula, Dr Ashu Abdul

    Funding Agency: All Industrial consultancy Projects - SRM Global Holding Private Ltd, Budget Cost (INR) Lakhs: 14.75, Status: Ongoing

  • Requirement Analysis for Medical Chatbot

    Dr Dinesh Reddy Vemula, Dr Ashu Abdul

    Funding Agency: Sponsoring Agency - GPEMC, Budget Cost (INR) Lakhs: 22.08, Status: Ongoing

  • Designing the Technical Architecture for Emotional Intelligence

    Dr Dinesh Reddy Vemula, Dr Ashu Abdul

    Funding Agency: All Industrial consultancy Projects - Cheers Wisdom Pvt. Ltd., Budget Cost (INR) Lakhs: 1.226, Status: Completed

Scholars

Doctoral Scholars

  • Mr Degala Chenchupradeep
  • Mr Siginamsetty Venkata Phanidra Kumar

Interests

  • Artificial Intelligence
  • Data Science
  • Machine Learning

Top Achievements

Education
2009
B.Tech
Shadan College of Engineering, JNTU-Hyderabad India
India
2015
M.Tech
IIIT Bhubaneswar India
India
2019
Chang Gung University Taiwan
Taiwan
Experience
  • Aug 2019 to Jan 2020, Assistant Professor | Vardhaman College of Engineering, Hyderabad
  • Feb 2013 to Dec 2014, Assistant Professor | National Institute of Science and Technology, Berhampur
  • June 2012 to Jan 2013, Assistant Professor | National Institute of Science and Technology, Berhampur
Research Interests
  • To design and implement a personal digital assistant for summarizing the emails/messages received by the users.
  • To implement a music application which recommends music to the users based on their geographical location, time of the day, their current emotion and the music listening history.
  • To study and design a medical chatbot with natural language processing for the breast cancer patients.
Awards & Fellowships
  • GATE 2010 Qualified.
  • Sun Cerified Java Programmer (SCJP) in 2008.
Memberships
  • IEEE Member
Publications
  • MATSFT: User query-based multilingual abstractive text summarization for low resource Indian languages by fine-tuning mT5

    Dr Dinesh Reddy Vemula, Dr Ashu Abdul, Dr M Krishna Siva Prasad, Phani Siginamsetty

    Source Title: Alexandria Engineering Journal, Quartile: Q1, DOI Link

    View abstract ⏷

    User query-based summarization is a challenging research area of natural language processing. However, the existing approaches struggle to effectively manage the intricate long-distance semantic relationships between user queries and input documents. This paper introduces a user query-based multilingual abstractive text summarization approach for the Indian low-resource languages by fine-tuning the multilingual pre-trained text-to-text (mT5) transformer model (MATSFT). The MATSFT employs a co-attention mechanism within a shared encoder–decoder architecture alongside the mT5 model to transfer knowledge across multiple low-resource languages. The Co-attention captures cross-lingual dependencies, which allows the model to understand the relationships and nuances between the different languages. Most multilingual summarization datasets focus on major global languages like English, French, and Spanish. To address the challenges in the LRLs, we created an Indian language dataset, comprising seven LRLs and the English language, by extracting data from the BBC news website. We evaluate the performance of the MATSFT using the ROUGE metric and a language-agnostic target summary evaluation metric. Experimental results show that MATSFT outperforms the monolingual transformer model, pre-trained MTM, mT5 model, NLI model, IndicBART, mBART25, and mBART50 on the IL dataset. The statistical paired t-test indicates that the MATSFT achieves a significant improvement with a -value of 0.05 compared to other models.
  • Evolutionary Algorithms for Edge Server Placement in Vehicular Edge Computing

    Dr Md Muzakkir Hussain, Dr Dinesh Reddy Vemula, Dr Ashu Abdul, Dr Firoj Gazi, Ms Surayya A

    Source Title: IEEE Access, Quartile: Q1, DOI Link

    View abstract ⏷

    Vehicular Edge Computing (VEC) is a critical enabler for intelligent transportation systems (ITS). It provides low-latency and energy-efficient services by offloading computation to the network edge. Effective edge server placement is essential for optimizing system performance, particularly in dynamic vehicular environments characterized by mobility and variability. The Edge Server Placement Problem (ESPP) addresses the challenge of minimizing latency and energy consumption while ensuring scalability and adaptability in real-world scenarios. This paper proposes a framework to solve the ESPP using real-world vehicular mobility traces to simulate realistic conditions. To achieve optimal server placement, we evaluate the effectiveness of several advanced evolutionary algorithms. These include the Genetic Algorithm (GA), Non-dominated Sorting Genetic Algorithm II (NSGA-II), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and Teaching-Learning-Based Optimization (TLBO). Each algorithm is analyzed for its ability to optimize multiple objectives under varying network conditions. Our results show that ACO performs the best, producing well-distributed pareto-optimal solutions and balancing trade-offs effectively. GA and PSO exhibit faster convergence and better energy efficiency, making them suitable for scenarios requiring rapid decisions. The proposed framework is validated through extensive simulations and compared with state-of-the-art methods. It consistently outperforms them in reducing latency and energy consumption. This study provides actionable insights into algorithm selection and deployment strategies for VEC, addressing mobility, scalability, and resource optimization challenges. The findings contribute to the development of robust, scalable VEC infrastructures, enabling the efficient implementation of next-generation ITS applications
  • Early Childhood Autism Screening Through Facial Feature Extraction

    Dr Ashu Abdul, Mr Mekala Sanjeev Kumar, Sai Karthik Nallamothu., Nitul Dutta., George Ghinea., Beri Surya Samantha

    Source Title: 2024 Eighth International Conference on Parallel, Distributed and Grid Computing (PDGC), DOI Link

    View abstract ⏷

    Autism Spectrum Disorder (ASD) is a type of neurological disorder which affects the human’s communication skills, social skills, thinking skills etc. A person with autism tends to have social issues such as less interaction, less eye contact, less understanding, impaired language and issues with verbal and non verbal abilities. Autistic persons experience repetitive behaviour and often are hyper or hypo sensitive to external stimuli. This disorder is caused due to developmental changes in the structure of the brain. A person with autism will have different symptoms compared to other people with autism. It is mainly caused due to genetics, siblings with ASD, being born with low birth weight or having older parents. ASD can be cured in early stages in children if the right diagnosis is followed, such as conducting medical or neurological examinations, testing cognitive and language abilities of children, periodic or frequent observations such as blood tests and hearing tests. A child around or below 10 years can be detected easily with autism compared to adults. So, It is crucial to determine whether the child is having disorder or not at an early stage. Deep learning is one the most advancing areas in computer science, It solves the problems where Machine Learning fails. In this research, Deep Learning models, especially models based on Transfer Learning such as VGG16, InceptionV3, Efficient-Net-B0 and B7 were used to detect autism using facial images without any need of MRI or FMRI. The highest accuracy has been achieved around 85%
  • Mmrag: Multimodal Medical Retrieval Augmented Generation System

    Dr Ashu Abdul, Phani Siginamsetty, Chang Fu Kuo., Jenhui Chen., Jatindra Kumar Dash

    Source Title: ssrn, DOI Link

    View abstract ⏷

    Accurate interpretation of medical images is crucial for effective diagnosis and treatment planning,yet remains challenging due to data complexity, variability, and hallucination issues. To addressthese challenges, we introduce a Multimodal Medical Retrieval-Augmented Generation (MMRAG)approach for automating radiology report generation from chest x-rays and brain MRI scans. This approach involves fine-tuning the idefics-80B parameters model with Quantized Low-Rank Adaptation(QLoRA), enhancing efficiency in processing large-scale multimodal data by reducing model weightsand improving inference speed. By converting the dataset into multimodal embeddings and creatinga unified vector space for images and text, the proposed system retrieves the most relevant reportfrom a pre-constructed vector store when presented with a new image. Using Retrieval-AugmentedGeneration (RAG) with the fine-tuned model, it generates comprehensive radiology reports, significantly improving the efficiency and thoroughness of automated medical report generation. Theproposed MMRAG potentially reduces radiologists workload and enhances diagnostic accuracy byintegrating multimodal learning with retrieval-augmented approach, addressing critical challenges inmedical imaging, including hallucination mitigation and computational efficiency during inference.Evaluation on publicly available datasets like MIMIC-CXR and CGBrainMRI demonstrates superiorperformance compared to existing approaches.
  • Empowering Quality of Recommendations by Integrating Matrix Factorization Approaches With Louvain Community Detection

    Dr Ashu Abdul, Dr Murali Krishna Enduri, Dr T Jaya Lakshmi, Ms Tokala Srilatha, Jenhui Chen

    Source Title: IEEE Access, Quartile: Q1, DOI Link

    View abstract ⏷

    Recommendation systems play an important role in creating personalized content for consumers, improving their overall experiences across several applications. Providing the user with accurate recommendations based on their interests is the recommender system’s primary goal. Collaborative filtering-based recommendations with the help of matrix factorization techniques is very useful in practical uses. Owing to the expanding size of the dataset and as the complexity increases, there arises an issue in delivering accurate recommendations to the users. The efficient functioning of the recommendation system undergoes the scalability challenge in controlling large and varying datasets. This paper introduces an innovative approach by integrating matrix factorization techniques and community detection methods where the scalability in recommendation systems will be addressed. The steps involved in the proposed approach are: 1) The rating matrix is modeled as a bipartite network. 2) Communities are generated from the network. 3) Extract the rating matrices that belong to the communities and apply MF to these matrices in parallel. 4) Merge the predicted rating matrices belonging to the communities and evaluate root mean square error (RMSE), mean square error (MSE), and mean absolute error (MAE). In our paper different matrix factorization approaches like basic MF, NMF, SVD++, and FANMF are taken along with the Louvain community detection method for dividing the communities. The experimental analysis is performed on five different diverse datasets to enhance the quality of the recommendation. To determine the method’s efficiency, the evaluation metrics RMSE, MSE, and MAE are used, and the time required to evaluate the computation is also computed. It is observed in the results that almost 95% of our results are proven effective by getting lower RMSE, MSE, and MAE values. Thus, the main aim of the user will be satisfied in getting accurate recommendations based on the user experiences.
  • Deep learning based RAGAE-SVM for Chronic kidney disease diagnosis on internet of health things platform

    Dr Ashu Abdul, Prabhakar Kandukuri., Kuchipudi Prasanth Kumar., Velagapudi Sreenivas., G Ramesh., Venkateswarlu Gundu

    Source Title: Multimedia Tools and Applications, Quartile: Q1, DOI Link

    View abstract ⏷

    Chronic kidney disease (CKD) is a prominent disease that causes loss of functionality in the kidney. Doctors can now more easily gather patient health status data due to the growth of the Internet of Health Things (IoHT). The IoHT data contains a huge number of redundant data, making it challenging to predict CKD disease quickly and accurately. In healthcare applications like feature-based classification, a variety of disease diagnosis systems were used to address this problem. Current disease detection algorithms suffer from imbalanced dataset processing, low-accuracy feature learning, and high computational power requirements. Thus, deep learning-based clinical decision support systems have been developed to solve these complexities. To remove outliers from medical data, data collected with IoHT devices is first pre-processed using an enhanced K-means clustering technique. The Synthetic Minority over Sampling Technique is used to balance data because the IoHT dataset is highly imbalanced. The classifier detects anomalies in less time because of the implemented processing step. The balanced CKD dataset is then presented for use with a novel classifier called Residual Attention-Gated Autoencoder with Support Vector Machine. In order to improve accurate detection, the adopted classifier can learn and extract features. The proposed method results in an increased accuracy of 99.43% within 70.1 s of computation time. The mean intersection of the union and kappa coefficient of the proposed method is 98.15% and 98.1%, respectively.
  • Improving preliminary clinical diagnosis accuracy through knowledge filtering techniques in consultation dialogues

    Dr Ashu Abdul, Phani Siginamsetty, Binghong Chen., Jenhui Chen

    Source Title: Computer Methods and Programs in Biomedicine, Quartile: Q1, DOI Link

    View abstract ⏷

    Symptom descriptions by ordinary people are often inaccurate or vague when seeking medical advice, which often leads to inaccurate preliminary clinical diagnoses. To address this issue, we propose a deep learning model named the knowledgeable diagnostic transformer (KDT) for the natural language processing (NLP)-based preliminary clinical diagnoses. The KDT extracts symptom-disease relation triples (h,r,t) from patient symptom descriptions by using a proposed bipartite medical knowledge graph (bMKG). To avoid too many relation triples causing the knowledge noise issue, we propose a knowledge inclusion-exclusion approach (KIA) to eliminate undesirable triples (a knowledge filtering layer). Next, we combine token embedding techniques with the transformer model to predict the diseases that patients may encounter. To train the KDT, a medical diagnosis question-answering dataset (named MDQA dataset) containing large-scale, high-quality questions (patient syndrome description) and answering (diagnosis) corpora with 2.6M entries (1.07GB in size) in Mandarin was built. We also train the KDT with the National Institutes of Health (NIH) English dataset (MedQuAD). The KDT marks a transformative approach by achieving a remarkable accuracy of 99% for different evaluation metrics when compared with the baseline transformers used for the NLP-based preliminary clinical diagnoses approaches. In essence, our study not only demonstrates the effectiveness of the KDT in enhancing diagnostic precision but also underscores its potential to revolutionize the field of preliminary clinical diagnoses. By harnessing the power of knowledge-based approaches and advanced NLP techniques, we have paved the way for more accurate and reliable diagnoses, ultimately benefiting both healthcare providers and patients. The KDT has the potential to significantly reduce misdiagnoses and improve patient outcomes, marking a pivotal advancement in the realm of medical diagnostics.
  • Node Significance Analysis in Complex Networks Using Machine Learning and Centrality Measures

    Dr Murali Krishna Enduri, Dr Ashu Abdul, Dr Satish Anamalamudi, Koduru Hajarathaiah., Jenhui Chen

    Source Title: IEEE Access, Quartile: Q1, DOI Link

    View abstract ⏷

    The study addresses the limitations of traditional centrality measures in complex networks, especially in disease-spreading situations, due to their inability to fully grasp the intricate connection between a node's functional importance and structural attributes. To tackle this issue, the research introduces an innovative framework that employs machine learning techniques to evaluate the significance of nodes in transmission scenarios. This framework incorporates various centrality measures like degree, clustering coefficient, Katz, local relative change in average clustering coefficient, average Katz, and average degree (LRACC, LRAK, and LRAD) to create a feature vector for each node. These methods capture diverse topological structures of nodes and incorporate the infection rate, a critical factor in understanding propagation scenarios. To establish accurate labels for node significance, propagation tests are simulated using epidemic models (SIR and Independent Cascade models). Machine learning methods are employed to capture the complex relationship between a node's true spreadability and infection rate. The performance of the machine learning model is compared to traditional centrality methods in two scenarios. In the first scenario, training and testing data are sourced from the same network, highlighting the superior accuracy of the machine learning approach. In the second scenario, training data from one network and testing data from another are used, where LRACC, LRAK, and LRAD outperform the machine learning methods.
  • Graph-based zero-shot learning for classifying natural and computer-generated image

    Dr Ashu Abdul, K Vara Prasad., B Srikanth., Lakshmikanth Paleti., K Kranthi Kumar., Sunitha Pachala

    Source Title: Multimedia Tools and Applications, Quartile: Q1, DOI Link

    View abstract ⏷

    The zero-shot image classification is a stimulating problem that attains the human recognition level depending upon the tiny quantity of trained images. Image classification was an essential phenomenon in the computer vision process. Therefore, the major problem was solving the classification process, and generally, the processing of entire data for the extraction process was complicated. To solve this problem, a proposed novel technique was Buffalo-based Graph Neural Zero-short Learning (BbGZSL) that aimed to classify the image types as natural and computer-generated. In the first stage, the denoised process was performed to eliminate the data noise and convert the colour image into a grey scale image. Then, a feature extraction process was performed to extract the required features based on the buffalo fitness features of the proposed model. Furthermore, the extracted features were stored using the learning memory. Finally, perform the unseen image testing and matching process for classifying the image. In addition, the proposed BbGZSL mechanism was implemented in the Python tool with several performance assessments. The proposed model gained 97.06% accuracy, f-score and Recall, as well as 97.07% precision for the tested unseen image dataset.
  • Path Loss Prediction Using Machine Learning Models for in-vivo Wireless Nanosensor Networks in Cardiac Health Monitoring

    Dr Ashu Abdul, Dr Manjula R, Parsh Jadon., Krishna Sharma., Venkatesh Sharma.,

    Source Title: 19th International Conference on Information Assurance and Security (IAS 2023), DOI Link

    View abstract ⏷

    -
  • Application Aware Computation Offloading in Vehicular Fog Computing (VFC)

    Dr Ashu Abdul, Dr Dinesh Reddy Vemula, Dr Md Muzakkir Hussain

    Source Title: Data Science Journal, Quartile: Q2, DOI Link

    View abstract ⏷

    -
  • Enhanced resource provisioning and migrating virtual machines in heterogeneous cloud data center

    Dr Dinesh Reddy Vemula, Dr Ashu Abdul, Dr Md Muzakkir Hussain, Ilaiah Kavati., Sonam Maurya., Morampudi Mahesh Kumar

    Source Title: Journal of Ambient Intelligence and Humanized Computing, Quartile: Q1, DOI Link

    View abstract ⏷

    Data centers have become an indispensable part of modern computing infrastructures. It becomes necessary to manage cloud resources efficiently to reduce those ever-increasing power demands of data centers. Dynamic consolidation of virtual machines (VMs) in a data center is an effective way to map workloads onto servers in a way that requires the least resources possible. It is an efficient way to improve resources utilization and reduce energy consumption in cloud data centers. Virtual machine (VM) consolidation involves host overload/underload detection, VM selection, and VM placement. If a server becomes overloaded, we need techniques to select the proper virtual machines to migrate. By considering the migration overhead and service level of agreement (SLA) violation, we investigate design methodologies to reduce the energy consumption for the whole data center. We propose a novel approach that optimally detects when a host is overloaded using known CPU utilization and a given state configuration. We design a VM selection policy, considering various resource utilization factors to select the VMs. In addition, we propose an improved version of the JAYA approach for VM placement that minimizes the energy consumption by optimally pacing the migrated VMs in a data center. We analyze the performance in terms of energy consumption, performance degradation, and migrations. Using CloudSim, we run simulations and observed that our approach has an average improvement of 24% compared to state-of-the-art approaches in terms of power consumption.
  • LWC: EFFICIENT LIGHTWEIGHT BLOCK CIPHERS FOR PROVIDING SECURITY TO CONSTRAINED DEVICES A SOLUTION FOR IOT DEVICES

    Dr Ashu Abdul, Garlapati Narayana

    Source Title: Journal of Theoretical and Applied Information Technology, Quartile: Q3, DOI Link

    View abstract ⏷

    -
  • Abstractive Text Summarization with Fine-Tuned Transformer

    Dr Ashu Abdul, Mr Siginamsetty Venkata Phanidra Kumar

    Source Title: Lecture Notes in Electrical Engineering, Quartile: Q4, DOI Link

    View abstract ⏷

    We investigate an encoder–decoder-based on the bi- directional transformers with a self-attention function to generate abstractive text summaries for an input document. In the proposed approach, we fine-tune the transformer by changing the activation function from the rectified linear activation unit (ReLU) to the parametric rectified linear activation unit (PReLU). By introducing the PReLU activation function, we can store the long-term dependencies from the input document by reducing the data loss which was more when we used the ReLU function. Apart from that, we introduce a self-attention function for keeping track of the important keywords and the out-of-vocabulary words presented in the input document. We employ CNN/DailyMail dataset, Inshorts dataset, and customized IndiaToday data to achieve abstractive text summarization. The proposed model achieves a better ROUGE score when compared with the sequence-to-sequence with long short-term memory network and the traditional transformers model.
  • Vision-Based Facial Detection and Recognition for Attendance System Using Reinforcement Learning

    Dr Ashu Abdul, Phani Siginamsetty

    Source Title: Smart Innovation, Systems and Technologies, Quartile: Q4, DOI Link

    View abstract ⏷

    We propose a reinforcement learning (RL)-based attendance system (RLAS) for marking attendance of the students presented during a class using the frames captured by a video camera. The RLAS comprises of agent module and the environment module. In the agent module, we fine-tune the multi- task cascaded convolution network (MTCNN) module and the ArcFace module to identify the students present in class. The MTCNN module consists of two neural networks, the P-Network (P-Net) and the R-Network (R-Net). In the P-Net, we add 2 convolutional layers for extracting the latent features from the facial images of the students. Similarly, we modify the R-Net by adding two dense layers for detecting the bounding boxes from the frames captured by the video camera. Based on the latent features obtained from the fine-tuned MTCNN, the ArcFace identifies the students present in the class. The environment module of the RLAS uses the reward function to evaluate the output generated from the agent module. If the agent module correctly identifies all the students presented in the frames captured by the camera, then the reward function marks the attendance to those students. Else, the environment module back-propagates the error obtained from the reward function to the agent module. To evaluate the RLAS, we created a dataset of 1, 20, 000 different images of 2400 students studying at our university. In our experimental evaluation, we observed that the fine-tuned MTCNN along with the ArcFace provides the transfer learning mechanism to the RLAS. Therefore, the RLAS obtains less time complexity than the different variants of the MTCNN and CNN models.
  • Virtues and Shortcomings of Artificial Intelligence in Graphic Design Arena

    Dr Ashu Abdul, Siripurapu Phani Sindhura

    Source Title: International Journal of Advanced Research in Engineering and Technology, DOI Link

    View abstract ⏷

    -
  • Extended Graph Convolutional Networks for 3D Object Classification in Point Clouds

    Dr Ashu Abdul, Dr Dinesh Reddy Vemula, Dr Sanjay Kumar, Sai Rishvanth Katragadda

    Source Title: International Journal of Advanced Computer Science and Applications, Quartile: Q3, DOI Link

    View abstract ⏷

    Point clouds are a popular way to represent 3D data. Due to the sparsity and irregularity of point cloud data, learning features directly from point clouds is complex, which lends great importance to methods that directly consume points. This paper focuses on interpreting point cloud inputs using graph convolutional networks (GCNs). Further, we extend this model to detect the objects found in autonomous driving datasets and the miscellaneous objects found in non-autonomous driving datasets. We propose reducing the runtime of a GCN by allowing it to stochastically sample fewer input points from the point clouds to infer their larger structure while preserving accuracy. Our proposed model offers improved accuracy while drastically decreasing graph-building and prediction runtime.
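    A minimal sketch of the stochastic input sampling idea, assuming the point cloud is an (N, 3) NumPy array and the downstream GCN builds its graph from whatever points survive; the keep ratio is illustrative, not the paper's setting.

    ```python
    import numpy as np

    def subsample_points(points, keep_ratio=0.25, rng=None):
        """Stochastically keep a fraction of the input points so the
        downstream GCN builds a smaller graph (less build/predict time)."""
        rng = rng or np.random.default_rng()
        n_keep = max(1, int(len(points) * keep_ratio))
        idx = rng.choice(len(points), size=n_keep, replace=False)
        return points[idx]

    # e.g. subsample_points(np.random.rand(4096, 3)).shape -> (1024, 3)
    ```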
  • Techniques for Solving Shortest Vector Problem

    Dr Ashu Abdul, Dr M Mahesh Kumar, Dr Sriramulu Bojjagani, Dr Dinesh Reddy Vemula, P Ravi

    Source Title: International Journal of Advanced Computer Science and Applications, Quartile: Q3, DOI Link

    View abstract ⏷

    Lattice-based cryptosystems are regarded as secure, and are believed to remain secure even against quantum computers. Lattice-based cryptography relies on problems like the Shortest Vector Problem (SVP), an instance of the lattice problems used as a basis for secure cryptographic schemes. For more than 30 years, the Shortest Vector Problem has been at the heart of a thriving research field, and finding a new efficient algorithm has turned out to be out of reach. The problem has a great many applications in optimization, communication theory, cryptography, and more. This paper introduces the Shortest Vector Problem and related problems such as the Closest Vector Problem. We present average-case and worst-case hardness results for the Shortest Vector Problem. Further, this work explores efficient algorithms for solving the Shortest Vector Problem and presents their efficiency. More precisely, this paper presents four algorithms: the Lenstra-Lenstra-Lovasz (LLL) algorithm, the Block Korkine-Zolotarev (BKZ) algorithm, a Metropolis algorithm, and a convex relaxation of SVP. Experimental results on various lattices show that the Metropolis algorithm works better than the other algorithms across varying lattice sizes.
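    The four algorithms compared above are beyond a profile sketch, but the problem itself is easy to state; this toy enumeration, workable only for tiny lattices and small coefficient bounds, illustrates what a shortest nonzero lattice vector is.

    ```python
    import itertools
    import numpy as np

    def shortest_vector_bruteforce(basis, bound=3):
        """Enumerate integer combinations c of the basis rows with
        |c_i| <= bound and return the shortest nonzero lattice vector
        found (cost is exponential in the rank: toy sizes only)."""
        basis = np.asarray(basis, dtype=float)
        best, best_norm = None, float("inf")
        for coeffs in itertools.product(range(-bound, bound + 1),
                                        repeat=len(basis)):
            if not any(coeffs):
                continue  # skip the zero vector
            v = np.asarray(coeffs) @ basis
            norm = float(np.linalg.norm(v))
            if norm < best_norm:
                best, best_norm = v, norm
        return best, best_norm

    # e.g. shortest_vector_bruteforce([[201, 37], [1648, 297]])
    ```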
  • Strategic AI-driven Intelligence Modelling for Identification and Mitigation of Cyberattack on Banking Systems

    Dr Ashu Abdul, Amina Baba Adam

    Source Title: International Journal of Engineering Research and Technology, DOI Link

    View abstract ⏷

    -
  • Intelligent Data Compression Policy for Hadoop Performance Optimization

    Dr Ashu Abdul, Mir Wajahat Hussain, Diptendu Sinha Roy, Hemant Kumar Reddy

    Source Title: Advances in Intelligent Systems and Computing, DOI Link

    View abstract ⏷

    Hadoop can deal with zettabyte-scale data, but heavy disk I/O and network utilization often emerge as its limitations. During the different job execution phases of Hadoop, the volume of intermediate data produced is enormous, and transferring that data over the network to the “reduce” process becomes an overhead. In this paper, we discuss an intelligent data compression policy to overcome these limitations and improve the performance of Hadoop. The policy starts compression at an apt time, before all the map tasks in a job have completed, thereby reducing data transfer time in the network. The results are evaluated by running several benchmarks, which show an improvement of about 8–15% in job execution and demonstrate the merits of the proposed compression policy.
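    The "apt time" trigger is not detailed in the abstract; as a hypothetical sketch under the assumption that compression begins once a fraction of map tasks has finished, a policy check might look like this (the threshold and codec name are illustrative).

    ```python
    def should_start_compression(completed_maps, total_maps, threshold=0.7):
        """Hypothetical trigger: begin compressing intermediate map output
        once enough map tasks have finished, instead of waiting for the
        whole map phase to end, so compression overlaps the map tail."""
        return total_maps > 0 and completed_maps / total_maps >= threshold

    # e.g. if should_start_compression(done, total): enable_codec("snappy")
    # (enable_codec is a placeholder, not a real Hadoop API call)
    ```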

Scholars

Doctoral Scholars

  • Mr Degala Chenchupradeep
  • Mr Siginamsetty Venkata Phanidra Kumar

Interests

  • Artificial Intelligence
  • Data Science
  • Machine Learning
