MATSFT: User query-based multilingual abstractive text summarization for low resource Indian languages by fine-tuning mT5
Source Title: Alexandria Engineering Journal, Quartile: Q1, DOI Link
User query-based summarization is a challenging research area of natural language processing. However, existing approaches struggle to effectively manage the intricate long-distance semantic relationships between user queries and input documents. This paper introduces a user query-based multilingual abstractive text summarization approach for Indian low-resource languages (LRLs) by fine-tuning the multilingual pre-trained text-to-text (mT5) transformer model (MATSFT). MATSFT employs a co-attention mechanism within a shared encoder-decoder architecture alongside the mT5 model to transfer knowledge across multiple low-resource languages. The co-attention mechanism captures cross-lingual dependencies, which allows the model to understand the relationships and nuances between the different languages. Most multilingual summarization datasets focus on major global languages such as English, French, and Spanish. To address the challenges in LRLs, we created an Indian language (IL) dataset, comprising seven LRLs and English, by extracting data from the BBC news website. We evaluate the performance of MATSFT using the ROUGE metric and a language-agnostic target summary evaluation metric. Experimental results show that MATSFT outperforms the monolingual transformer model, the pre-trained MTM, the mT5 model, the NLI model, IndicBART, mBART25, and mBART50 on the IL dataset. A statistical paired t-test indicates that MATSFT achieves a significant improvement over the other models with a p-value ≤ 0.05.
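A minimal sketch of the fine-tuning setup described above, assuming a HuggingFace-style workflow in which the user query is prepended to the document; the checkpoint name, separator format, and hyperparameters are illustrative, and MATSFT's co-attention mechanism over the shared encoder-decoder is not reproduced here:

```python
# Illustrative only: query-conditioned abstractive summarization by fine-tuning mT5.
# MATSFT additionally uses a co-attention mechanism over a shared encoder-decoder,
# which is not reproduced here; field names and hyperparameters are assumptions.
import torch
from transformers import AutoTokenizer, MT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

def encode(query, document, summary, max_in=512, max_out=128):
    # Prepend the user query so the encoder sees the query and the document together.
    inputs = tokenizer(f"query: {query} document: {document}",
                       max_length=max_in, truncation=True,
                       padding="max_length", return_tensors="pt")
    labels = tokenizer(summary, max_length=max_out, truncation=True,
                       padding="max_length", return_tensors="pt").input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss
    return inputs, labels

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
inputs, labels = encode("election results", "<news article text>", "<reference summary>")
loss = model(**inputs, labels=labels).loss  # one fine-tuning step on one example
loss.backward()
optimizer.step()
```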
Extractive Text Summarization of Clinical Text Using Deep Learning Models
Dr M Krishna Siva Prasad, Dr Chandra Shekhar, Sai Teja K., Nithin Datta D., Geetha Sri Abhinay P.
Source Title: 2024 Second International Conference on Emerging Trends in Information Technology and Engineering (ICETITE), DOI Link
This project focused on using clinical text data from the PubMed dataset to train transformer models and deep learning models for text summarization. The primary goal was to develop a system capable of identifying and extracting meaningful information from large clinical texts, thereby improving the search for information in the medical literature. The ROUGE score, a widely accepted metric for automated summary assessment, was used to analyze the performance of the trained models. This project involved not only training and optimizing transformer and deep learning models to obtain a comprehensive summary, but also comparing their ROUGE scores to determine which model outperformed the others. This comparative analysis was necessary to determine the most effective model for extracting important insights from clinical texts. The findings have the potential to significantly impact information access in the clinical domain, providing researchers and healthcare professionals with faster access to critical information.
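A small sketch of the ROUGE-based evaluation step mentioned above, using the rouge_score package; the reference and generated summaries are placeholder strings, and the summarization models themselves are not shown:

```python
# Sketch: scoring a generated summary against a reference with ROUGE.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "The trial showed a significant reduction in tumour size."   # placeholder
generated = "The clinical trial reported reduced tumour size."           # placeholder
scores = scorer.score(reference, generated)
for name, s in scores.items():
    print(f"{name}: precision={s.precision:.3f} recall={s.recall:.3f} f1={s.fmeasure:.3f}")
```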
Blockchain-Enabled SDN in Resource Constrained Devices
Source Title: Blockchain-based Cyber Security, DOI Link
Revitalizing the single batch environment: a ‘Quest’ to achieve fairness and efficiency
Source Title: International Journal of Computers and Applications, Quartile: Q2, DOI Link
In the realm of computer systems, efficient utilization of the CPU (Central Processing Unit) has always been a paramount concern. Researchers and engineers have long sought ways to optimize process execution on the CPU, leading to the emergence of CPU scheduling as a field of study. In this research, we have analyzed single offline batch processing and investigated more sophisticated paradigms, such as time-sharing operating systems and their widely used algorithms, along with their shortcomings. Our work is directed toward two fundamental aspects of scheduling: efficiency and fairness. We propose a novel algorithm for batch processing that operates on a preemptive model, dynamically assigning priorities based on a robust ratio, employing a dynamic time slice, and utilizing periodic sorting to achieve fairness. By engineering this responsive and fair model, the proposed algorithm strikes a delicate balance between efficiency and fairness, providing an optimized solution for batch scheduling while ensuring system responsiveness.
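A hedged sketch of the kind of preemptive batch scheduler described above; the exact priority ratio and time-slice rule are specific to the paper and not reproduced, so the waiting-time-to-remaining-time ratio and half-remaining time slice below are illustrative assumptions:

```python
# Sketch of a preemptive batch scheduler with a dynamic priority ratio,
# a dynamic time slice, and periodic re-sorting. The ratio and slice rule
# below are illustrative assumptions, not the paper's exact formulation.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    remaining: int      # remaining CPU time
    waited: int = 0     # accumulated waiting time

def priority(job):
    # Hypothetical ratio: jobs that have waited long relative to the work
    # they still need rise in priority, which prevents starvation.
    return (job.waited + 1) / job.remaining

def run_batch(jobs):
    clock = 0
    while jobs:
        jobs.sort(key=priority, reverse=True)      # periodic sorting for fairness
        job = jobs[0]
        slice_ = max(1, job.remaining // 2)        # dynamic time slice (assumed rule)
        run = min(slice_, job.remaining)
        job.remaining -= run
        clock += run
        for other in jobs[1:]:
            other.waited += run                    # everyone else keeps waiting
        if job.remaining == 0:
            print(f"{job.name} finished at t={clock}")
            jobs.remove(job)

run_batch([Job("A", 6), Job("B", 3), Job("C", 9)])
```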
MMSFT: Multilingual Multimodal Summarization by Fine-tuning Transformers
Dr M Krishna Siva Prasad, Phani Siginamsetty, Ashu Abdul., Hiren Kumar Deva Sarma
Source Title: IEEE Access, Quartile: Q1, DOI Link
Multilingual multimodal (MM) summarization, involving the processing of multimodal input (MI) data across multiple languages to generate corresponding multimodal summaries (MS) using a single model, has been underexplored. MI data consists of text and associated images, while MS incorporates text alongside relevant images aligned with the MI context. In this paper, we propose an MM summarization model by fine-tuning transformers (MMSFT), focusing on low-resource languages (LRLs) such as the Indian languages. MMSFT comprises multilingual learning for encoder training, incorporating multilingual attention with a forget gate mechanism, followed by MS generation using a decoder. In the proposed approach, we use the publicly available multilingual multimodal summarization dataset (M3LS). Evaluation using ROUGE metrics and the language-agnostic target summary metric (LaTM) illustrates MMSFT's significant enhancement over existing MM summarization models such as mT5 and VG-mT5. Furthermore, MMSFT yields better or equivalent summaries compared to existing MM summarization models trained separately for each language. Human and statistical evaluations reveal MMSFT's significant improvement over existing models, with a p-value ≤ 0.05 in paired t-tests.
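A brief sketch of the paired t-test used for the significance claims above (and in the MATSFT entry); the per-document ROUGE-L scores below are invented placeholders, not results from the paper:

```python
# Paired t-test over per-document ROUGE-L scores from two systems; ttest_rel tests
# whether the mean paired difference differs significantly from zero.
from scipy.stats import ttest_rel

mmsft_rougeL    = [0.41, 0.38, 0.44, 0.36, 0.40, 0.43]   # placeholder scores
baseline_rougeL = [0.37, 0.35, 0.40, 0.33, 0.38, 0.39]   # placeholder scores

stat, p_value = ttest_rel(mmsft_rougeL, baseline_rougeL)
print(f"t = {stat:.3f}, p = {p_value:.4f}")  # p <= 0.05 would indicate a significant gain
```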
Extractive text summarization on medical insights using fine-tuned transformers
Dr M Krishna Siva Prasad, Nikitha Lingineni., Y Manisai., Manoj Pennada., Mallesh Gadde., Revanth Sai Aluri
Source Title: International Journal of Computers and Applications, Quartile: Q2, DOI Link
Text summarization is a fundamental natural language processing task that plays a crucial role in efficiently condensing large textual documents into concise and clear summaries for human comprehension. The amount of data being generated in the medical domain nowadays requires substantial application of current deep learning approaches such as transformers. The main goal of this research is to extract relevant summaries from the abstracts of research articles published on cancer, blood cancer, tinnitus, and Alzheimer's disease. As domain-related data requires special attention, our approach uses a fine-tuned transformer model to guarantee that the summaries produced are not only brief but also accurate. Moreover, as part of this research, we collected the information from PubMed and prepared the data for analysis. A comparative analysis of the Bidirectional and Auto-Regressive Transformers (BART), Text-to-Text Transfer Transformer (T5), TextRank, and LexRank models on the dataset is carried out in this study to understand the medical insights effectively. The fine-tuned transformer's performance in comparison with other models opens a new dimension for future studies.
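A minimal sketch of a TextRank-style extractive baseline of the kind compared in this study, built from TF-IDF sentence similarity and PageRank; the sentences are placeholders, and the fine-tuned BART/T5 models the paper favours are not shown:

```python
# Simplified TextRank-style extractive summarizer: TF-IDF sentence vectors,
# cosine similarity graph, PageRank to rank sentences.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer

def textrank_summary(sentences, top_k=2):
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = (tfidf @ tfidf.T).toarray()      # cosine similarity (TF-IDF rows are L2-normalised)
    graph = nx.from_numpy_array(sim)
    scores = nx.pagerank(graph)
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    return [sentences[i] for i in sorted(ranked[:top_k])]   # keep original order

abstract = [                                # placeholder sentences
    "Tinnitus is frequently comorbid with hearing loss.",
    "We analysed 120 PubMed abstracts on tinnitus treatment.",
    "Cognitive behavioural therapy showed the largest effect size.",
]
print(textrank_summary(abstract))
```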
Multiple Granularity Context Representation based Deep Learning Model for Disaster Tweet Identification
Source Title: 2024 5th International Conference on Innovative Trends in Information Technology, ICITIIT 2024, DOI Link
Twitter has evolved into a pivotal platform for information exchange, particularly during emergencies. However, amidst the vast array of data, identifying tweets relevant to damage assessment remains a significant challenge. In response, this study presents a novel approach designed to identify tweets related to damage assessment in times of crisis. The challenge lies in sifting through an immense volume of data to isolate tweets pertinent to a specific event. Recent studies suggest that employing contextual word embedding approaches, such as transformers, rather than traditional context-free methods, can enhance the accuracy of disaster detection models. This study leverages multiple-granularity context representation at the character and word levels to bolster the efficiency of deep neural network techniques in distinguishing disaster-related tweets from unrelated ones. Specifically, a weighted character representation, generated with a self-attention layer, is used to discern important information at the fine character level. Concurrently, Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks are employed in the word-level embedding to capture global context representation. The effectiveness of the proposed learning model is assessed by comparing it with existing models using accuracy, F1 score, precision, and recall as evaluation measures. The results demonstrate the effectiveness of our model compared to existing methods.
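A compact sketch of the multi-granularity idea described above: a character-level branch with self-attention merged with a word-level CNN + LSTM branch for binary disaster classification. Sequence lengths, vocabulary sizes, and layer widths are illustrative assumptions, not the paper's configuration:

```python
# Two-branch model: character self-attention + word-level CNN/LSTM, merged
# for disaster vs. non-disaster tweet classification. Dimensions are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

# Character-level branch: embedding + self-attention + pooling.
char_in = layers.Input(shape=(280,), name="char_ids")
c = layers.Embedding(input_dim=128, output_dim=32)(char_in)
c = layers.MultiHeadAttention(num_heads=2, key_dim=32)(c, c)  # weighted char representation
c = layers.GlobalMaxPooling1D()(c)

# Word-level branch: embedding + CNN + LSTM for global context.
word_in = layers.Input(shape=(60,), name="word_ids")
w = layers.Embedding(input_dim=20000, output_dim=100)(word_in)
w = layers.Conv1D(filters=64, kernel_size=3, activation="relu")(w)
w = layers.LSTM(64)(w)

merged = layers.concatenate([c, w])
out = layers.Dense(1, activation="sigmoid")(merged)  # disaster-related or not
model = Model([char_in, word_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```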
Examining the Sentiment Expressed in Tweets Related to COVID-19 and the Omicron Variant Using Deep Learning Classifiers
Dr M Krishna Siva Prasad, Sanjana Racharla., Bharadwaj Golla., Nandini Jangala., Sailesh Adda
Source Title: Lecture notes in electrical engineering, Quartile: Q4, DOI Link
This study employs advanced deep learning models, including convolutional neural networks (CNN), recurrent neural networks (RNN), hybrid architectures, bidirectional long short-term memory (BiLSTM) networks, and transformers, to analyze sentiment in COVID-19 and Omicron-related tweets. The goal is to explore the relationship between social media popularity and classification accuracy while addressing challenges associated with false information during the pandemic. The research aims to enhance accuracy in identifying misinformation, offering insights for public health, digital literacy, and crisis management. Comparative analysis of the models reveals their strengths and weaknesses, establishing a benchmark for future misinformation detection studies. While emphasizing the importance of accurate information during crises, the study acknowledges limitations such as a lack of multilingual analysis, Twitter-centric focus, and potential bias in sentiment analysis datasets. The difficulties in interpreting massive neural networks and the transformative impact of social media on information dissemination are also recognized. Results showcase accuracy metrics for different classifiers, highlighting variations in sentiment analysis performance across datasets. In conclusion, the study contributes to understanding misinformation complexities during the pandemic, providing a nuanced analysis of sentiment in social media. It establishes a foundation for future studies on misinformation detection, emphasizing the crucial role of accurate information in navigating global challenges. However, it falls short in detailing potential social and regulatory repercussions from social media restrictions.
Sentimental Analysis on Drug Reviews Using Fined Tuned Transformers
Dr M Krishna Siva Prasad, Mohith Ram Garaga., Dinesh Paleti., Gulshan Anisha Syed., Bala Ramya Gudivaka., Koushik Maddi
Source Title: 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT), DOI Link
The main goal of this work is to analyze drug reviews using sentiment analysis. Social media platforms have become part of everyday life, and people share most of their views and interests on platforms such as review sites and Twitter. Because social media is an easy way to communicate, many individuals post their opinions online, including drug-related reviews that offer useful remedies and valuable insights of interest to pharmacological companies. In this work, we focus on computing sentiment scores for drug reviews acquired from the drugs.com and drugslib.com sites. We apply preprocessing techniques to the data and then measure the accuracy of LSTM, RNN, CNN, and BERT models; on comparing the results, the LSTM gives the best accuracy among the models, at 97%.
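A minimal sketch of an LSTM review classifier of the kind reported to perform best above; the example reviews, labels, and hyperparameters are placeholders rather than the paper's setup:

```python
# Illustrative LSTM sentiment classifier for drug reviews; tiny placeholder data.
import numpy as np
from tensorflow.keras import Sequential, layers
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

reviews = ["this medicine relieved my migraine quickly",
           "severe nausea and dizziness after two doses"]
labels = np.array([1.0, 0.0])  # 1 = positive sentiment, 0 = negative

tok = Tokenizer(num_words=10000, oov_token="<oov>")
tok.fit_on_texts(reviews)
x = pad_sequences(tok.texts_to_sequences(reviews), maxlen=100)

model = Sequential([
    layers.Embedding(input_dim=10000, output_dim=64),
    layers.LSTM(64),                        # sequence model reported as best in the study
    layers.Dense(1, activation="sigmoid"),  # positive vs. negative review
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, labels, epochs=2, verbose=0)
```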
A deep dive of deep learning models to Emotion Detection using weighted emotions
Dr M Krishna Siva Prasad, Venkata Karthik Bulusu., Vudathu Yashmitha Sri., Adithi Damera., Vignesh Kode
Source Title: 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT), DOI Link
This research paper explores different types of deep learning architectures for emotion detection, with Long Short-Term Memory (LSTM) networks as the main focus. The LSTM, a recurrent neural network (RNN) variant, can capture contextual information owing to its ability to model temporal dependencies in sequences. The study also examines Bidirectional LSTMs (BiLSTMs) combined with Convolutional Neural Networks (CNNs), CNNs and RNNs stacked sequentially, standalone CNNs and RNNs, and CNNs integrated with LSTM layers. Special attention is given to the highly flexible and effective LSTM networks, which capture even subtle emotional cues as well as the contextual information essential for improving accuracy in detecting emotions.
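A short sketch of one of the stacked architectures discussed (a CNN followed by an LSTM) for multi-class emotion detection; reading "weighted emotions" as per-class weighting is an assumption, and the label set, synthetic data, and class weights are purely illustrative:

```python
# Illustrative CNN + LSTM stack for multi-class emotion detection with per-class
# weighting (an assumed reading of "weighted emotions"); data are synthetic.
import numpy as np
from tensorflow.keras import Sequential, layers

NUM_EMOTIONS = 6  # assumed label set, e.g. joy, sadness, anger, fear, love, surprise

model = Sequential([
    layers.Embedding(input_dim=20000, output_dim=128),
    layers.Conv1D(filters=64, kernel_size=5, activation="relu"),  # local n-gram features
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64),                                              # long-range dependencies
    layers.Dense(NUM_EMOTIONS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical class weights to up-weight rarer emotions during training.
class_weight = {0: 1.0, 1: 1.2, 2: 1.5, 3: 2.0, 4: 1.8, 5: 2.5}
x = np.random.randint(1, 20000, size=(32, 80))      # 32 synthetic token-id sequences
y = np.random.randint(0, NUM_EMOTIONS, size=(32,))  # synthetic emotion labels
model.fit(x, y, epochs=1, class_weight=class_weight, verbose=0)
```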
A Deep Learning Based Approach In The Prediction Of Tinnitus Disease For Large Population Data
Dr M Krishna Siva Prasad, Vamsikrishna Nallapuneni., Aswini Thindi., Hemanjali Popuri., Ashish Koka., Narasimha Batchu
Source Title: 2023 14th International Conference on Computing Communication and Networking Technologies (ICCCNT), DOI Link
Tinnitus is a frequent sensory disorder that places considerable strain on the patient. Usually, tinnitus results from disturbances in the sensory systems, such as the peripheral (and, less often, central) auditory system, the somatosensory system, or the head and neck region, or from a combination of these. It is common in people with high stress, anxiety, depression, and hearing disorders. Although there is progress in the medical domain using artificial intelligence (AI), research related to tinnitus using AI is limited. This work aims to bridge the gap by using deep-learning techniques to evaluate patient records across various parameters. The proposed research also aims to assess the severity of the disease and provide possible recommendations for tinnitus treatment. Our findings forecast how patients will react to tinnitus treatments. From the patients' electroencephalography (EEG) data, predictive EEG variables are extracted, and feature selection approaches are then used to determine the prominent features. The patients' EEG features are fed to AI algorithms for training and for forecasting treatment outcomes. The higher accuracy of the proposed AI model helps practitioners suggest the proper diagnosis for patients and also track each patient's recovery over time.
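A hedged sketch of the pipeline outlined above: selecting prominent EEG-derived features and training a classifier to forecast treatment response. The synthetic feature matrix, SelectKBest/f_classif selector, and random-forest classifier are illustrative stand-ins for the paper's unspecified AI algorithms:

```python
# Feature selection over EEG-derived variables, then a classifier that forecasts
# treatment response. All data below are synthetic; the model choice is an assumption.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))        # 200 patients x 64 EEG features (synthetic)
y = rng.integers(0, 2, size=200)      # 1 = responded to treatment, 0 = did not

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
selector = SelectKBest(f_classif, k=16).fit(X_tr, y_tr)   # keep the prominent features
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(selector.transform(X_tr), y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(selector.transform(X_te))))
```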
A Survey on Extraction of Relations using Knowledge Graphs in Various Applications
Dr M Krishna Siva Prasad, Sri Vasavi Chandu., Manogna Grandhi., Chandu Venkata Phaneendra
Source Title: 2023 IEEE Silchar Subsection Conference (SILCON), DOI Link
Over the past ten years, knowledge graphs (KGs) have rapidly become a significant field in AI. A growing number of studies focus on knowledge graphs to structurally represent the various relationships that exist between entities. A knowledge graph is defined as a directed, labelled, multi-relational graph with some type of semantics, continuing a longstanding tradition of graphs in the AI community. To provide a synthesis of this area, this paper presents a review of the KG research landscape: it first demonstrates what the main research areas are and how they relate to various communities, such as question answering, error recognition, and recommendation systems. Within these communities, the many but related foci of KG research are presented in a unified framework. Deep learning techniques are used by the majority of cutting-edge systems, primarily to target relationships between entities.
Path and information content based semantic similarity measure to estimate word-pair similarity
Dr M Krishna Siva Prasad, Chandra Mohan Dasari., Nilesh Chandra K Pikle., Snehal B Shinde
Source Title: AIP Conference Proceedings, Quartile: Q4, DOI Link
Extracting semantic features of text is important for many natural language processing applications. Measuring the semantic similarity of text can be carried out by various methods. Given two concepts or two short texts, the similarity between them can be computed by similarity measures such as corpus-based and knowledge-based measures. Corpus-based measures are application specific, so this paper focuses on measuring semantic similarity using knowledge-based measures. Existing knowledge-based measures use either information content or the path length between the concepts to evaluate similarity. Hence, this paper designs an approach that uses both information content and path length to evaluate the similarity between concepts; a thorough analysis on benchmark datasets shows that the proposed measure outperforms the existing measures.
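A small sketch of a hybrid word-pair similarity that combines a WordNet path-length component with an information-content component, in the spirit of the approach above; the equal 0.5/0.5 weighting is an illustrative choice, not the measure proposed in the paper:

```python
# Hybrid word-pair similarity: WordNet shortest path combined with Lin's
# information-content similarity. The equal weighting is illustrative.
# Requires: nltk.download("wordnet"); nltk.download("wordnet_ic"); nltk.download("omw-1.4")
from nltk.corpus import wordnet as wn
from nltk.corpus import wordnet_ic

brown_ic = wordnet_ic.ic("ic-brown.dat")  # information content from the Brown corpus

def hybrid_similarity(word1, word2, alpha=0.5):
    s1 = wn.synsets(word1, pos=wn.NOUN)[0]
    s2 = wn.synsets(word2, pos=wn.NOUN)[0]
    path = s1.path_similarity(s2)           # path-length component, in (0, 1]
    lin = s1.lin_similarity(s2, brown_ic)   # IC-based component, in [0, 1]
    return alpha * path + (1 - alpha) * lin

print(hybrid_similarity("car", "automobile"))  # near-synonyms -> high similarity
print(hybrid_similarity("car", "journey"))     # related but distinct -> lower score
```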