Dr Surochita Pal

Assistant Professor

Department of Computer Science and Engineering

Contact Details

surochita.p@srmap.edu.in

Education

2025
PhD
Indian Statistical Institute
2017
M.Sc.
Calcutta University
2014
B.Sc.
Calcutta University, Thakurpukur Vivekananda College

Experience

  • November, 2024 to June, 2025 – Visiting Scientist – Indian Statistical Institute, Kolkata
  • September, 2023 to September, 2024 – Project Linked Person – J. C. Bose National Fellowship, Indian Statistical Institute, Kolkata.
  • September, 2022 to August, 2023 – Project Linked Senior Research Fellow – Department of Science and Technology (DST), Indian Statistical Institute, Kolkata.
  • August 2019 to July, 2024 – Senior Research Fellow (SRF) – Indian Statistical Institute, Kolkata
  • August 2017 to July, 2019 – Junior Research Fellow (JRF) – Indian Statistical Institute, Kolkata

Research Interests

  • Computer Vision and Deep Learning focuses on designing and applying deep neural networks for tasks such as semantic segmentation, object detection, and image classification. The work spans convolutional and transformer-based architectures, training on high-dimensional image data, and addressing data efficiency and generalization.
  • Fusion of Multimodal Data involves integrating data from heterogeneous sources, such as text, images, and tabular records, to improve learning and inference. Methods include attention-based fusion, joint representation learning, and strategies for handling missing data and modality-specific features (see the fusion sketch after this list).
  • Language-Vision Models focus on learning joint representations of visual and textual information for tasks such as cross-modal retrieval, visual question answering, and image captioning. The research covers dual-encoder and encoder-decoder architectures, along with techniques for aligning image regions with textual concepts (see the dual-encoder sketch after this list).
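
Below is a minimal sketch of attention-based multimodal fusion, as described in the bullet on Fusion of Multimodal Data above. It combines an image embedding with tabular features through cross-attention; all module names and dimensions are illustrative assumptions, not taken from any specific publication.

    # Minimal sketch of attention-based fusion of an image embedding with
    # tabular features. Module names and dimensions are illustrative only.
    import torch
    import torch.nn as nn

    class AttentionFusion(nn.Module):
        def __init__(self, img_dim=512, tab_dim=32, fused_dim=256, num_classes=2):
            super().__init__()
            self.img_proj = nn.Linear(img_dim, fused_dim)
            self.tab_proj = nn.Linear(tab_dim, fused_dim)
            # Cross-attention: tabular features attend to the image embedding.
            self.attn = nn.MultiheadAttention(fused_dim, num_heads=4, batch_first=True)
            self.classifier = nn.Linear(fused_dim, num_classes)

        def forward(self, img_feat, tab_feat):
            q = self.tab_proj(tab_feat).unsqueeze(1)   # (B, 1, fused_dim)
            kv = self.img_proj(img_feat).unsqueeze(1)  # (B, 1, fused_dim)
            fused, _ = self.attn(q, kv, kv)            # attention-weighted fusion
            return self.classifier(fused.squeeze(1))

    logits = AttentionFusion()(torch.randn(4, 512), torch.randn(4, 32))
    print(logits.shape)  # torch.Size([4, 2])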
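
The next sketch shows a dual-encoder language-vision setup for cross-modal retrieval: images and texts are embedded separately and compared by cosine similarity, with an InfoNCE-style contrastive loss. Both encoders are stand-in linear heads over assumed backbone features, not a published model.

    # Minimal dual-encoder sketch: separate image and text embeddings compared
    # by cosine similarity. Backbone feature dimensions are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DualEncoder(nn.Module):
        def __init__(self, img_dim=2048, txt_dim=768, embed_dim=256):
            super().__init__()
            self.img_head = nn.Linear(img_dim, embed_dim)  # on top of a CNN/ViT backbone
            self.txt_head = nn.Linear(txt_dim, embed_dim)  # on top of a text encoder

        def forward(self, img_feat, txt_feat):
            img = F.normalize(self.img_head(img_feat), dim=-1)
            txt = F.normalize(self.txt_head(txt_feat), dim=-1)
            return img @ txt.t()  # (num_images, num_texts) similarity matrix

    sim = DualEncoder()(torch.randn(8, 2048), torch.randn(8, 768))
    targets = torch.arange(8)                    # matched image-text pairs
    loss = F.cross_entropy(sim / 0.07, targets)  # InfoNCE-style contrastive loss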

Awards & Fellowships

  • 2017 – Qualified in GATE
  • 2017 – Junior Research Fellowship – Indian Statistical Institute
  • 2019 – Senior Research Fellowship – Indian Statistical Institute
  • 2022 – Senior Research Fellowship – Department of Science and Technology (DST)

Memberships

  • Reviewer for IEEE Transactions on Fuzzy Systems, IEEE/ACM Transactions on Computational Biology and Bioinformatics, Computer Methods and Programs in Biomedicine, Biomedical Signal Processing and Control, Neural Networks, Information Sciences, Sadhana, and WIREs Data Mining and Knowledge Discovery.

Publications

  • Adaptive Class Learning to Screen Diabetic Disorders in Fundus Images of Eye

    Dey S., Dutta P., Bhattacharyya R., Pal S., Mitra S., Raman R.

    Conference paper, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2025, DOI Link

    The prevalence of ocular illnesses is growing globally, presenting a substantial public health challenge. Early detection and timely intervention are crucial for averting visual impairment and enhancing patient prognosis. This research introduces a new framework called Class Extension with Limited Data (CELD) to train a classifier to categorize retinal fundus images. The classifier is initially trained to identify relevant features concerning Healthy and Diabetic Retinopathy (DR) classes and later fine-tuned to adapt to the task of classifying the input images into three classes, viz. Healthy, DR and Glaucoma. This strategy allows the model to gradually enhance its classification capabilities, which is beneficial in situations where only a limited amount of labeled data is available. Perturbation methods are also used to identify the input image characteristics responsible for influencing the model's decision-making process. We achieve an overall accuracy of 91% on publicly available datasets. (A toy class-extension sketch appears after the publication list.)
  • Weighted Deformable Network for Efficient Segmentation of Lung Tumors in CT

    Pal S., Mitra S., Uma Shankar B.

    Article, IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2025, DOI Link

    The computerized delineation and prognosis of lung cancer is typically based on Computed Tomography (CT) image analysis, whereby the region of interest (ROI) is accurately demarcated and classified. Deep learning in computer vision provides a different perspective on image segmentation. Due to the increasing number of lung cancer cases and the availability of large volumes of CT scans every day, the need for automated handling becomes imperative. This requires efficient delineation and diagnosis through the design of new techniques for improved accuracy. In this article, we introduce the novel Weighted Deformable U-Net (WDU-Net) for efficient delineation of the tumor region. It incorporates Deformable Convolution (DC), which can model arbitrary geometric shapes of regions of interest. This is enhanced by a Weight Generation (WG) module that suppresses unimportant features while highlighting relevant ones. A new Focal Asymmetric Similarity (FAS) loss function helps handle class imbalance. Ablation studies and comparison with state-of-the-art models help establish the effectiveness of WDU-Net with ensemble learning, tested on five publicly available lung cancer datasets. The best results were obtained on the LIDC-IDRI lung tumor test dataset, with an average Dice score of 0.9137, a 95th-percentile Hausdorff Distance (HD95) of 5.3852, and an Area Under the Receiver Operating Characteristic (ROC) Curve (AUC) of 0.9449. (A rough deformable-convolution sketch appears after the publication list.)
  • Collective intelligent strategy for improved segmentation of COVID-19 from CT

    Pal S., Mitra S., Shankar B.U.

    Article, Expert Systems with Applications, 2024, DOI Link

    We propose a novel non-invasive tool, using deep learning and imaging, for delineating COVID-19 infection in the lungs. The Ensembling of selective Focus-based Multi-resolution Convolution network (EFMC), employing Leave-One-Patient-Out (LOPO) training, exhibits high sensitivity and precision in outlining infected regions along with an assessment of severity. The selective focus mechanism combines contextual with local information, at multiple resolutions, for accurate segmentation. Ensemble learning integrates heterogeneous decisions from different base classifiers. The superiority of EFMC, even with severe class imbalance, is established through comparison with existing state-of-the-art learning models over four publicly available COVID-19 datasets. The results point to the relevance of deep learning in providing assistive intelligence to medical practitioners when they are overburdened with patients, as during pandemics. (A toy majority-voting and Dice-score sketch appears after the publication list.)
  • A Comparative Analysis of Deep Learning Architectures for Segmentation in Lung

    Das S.P., Mitra S.

    Conference paper, 2024 IEEE Region 10 Symposium, TENSYMP 2024, 2024, DOI Link

    This study explores the application of deep learning techniques to segment lung computed tomography (CT) scans, with a focus on cases involving COVID-19 and lung tumors. Utilizing a diverse dataset encompassing a wide range of CT scans, we conduct an extensive evaluation of various state-of-the-art deep neural network architectures. Our experimental results demonstrate the high efficiency and accuracy of deep learning models in performing image segmentation tasks, achieving Dice scores of 95.12% and 82.89% on COVID-19 and lung tumor data, respectively. These findings highlight the significant potential of deep learning in medical imaging applications. Furthermore, we conduct thorough ablation studies, analyzing the performance of each network architecture. These studies provide valuable insights into the specific strengths and limitations of different deep learning approaches, facilitating the identification of the most effective methods for lung CT scan segmentation. This research not only underscores the promising capabilities of deep learning in medical image analysis but also offers a detailed understanding of how various models can be optimized to enhance performance in clinical applications.
  • Deep Ensembling with Multimodal Image Fusion for Efficient Classification of Lung Cancer

    Das S.P., Mitra S.

    Conference paper, 2024 15th International Conference on Computing Communication and Networking Technologies, ICCCNT 2024, 2024, DOI Link

    This study focuses on the classification of cancerous and healthy slices from multimodal lung images. The data used in the research comprises Computed Tomography (CT) and Positron Emission Tomography (PET) images. The proposed strategy achieves the fusion of PET and CT images by utilizing Principal Component Analysis (PCA) and an Autoencoder. Subsequently, a new ensemble-based classifier, Deep Ensembled Multimodal Fusion (DEMF), is developed, employing majority voting to classify the sample images under examination. Gradient-weighted Class Activation Mapping (Grad-CAM) is employed to visualize the classification of cancer-affected images. Given the limited sample size, a random image augmentation strategy is employed during the training phase. The DEMF network helps mitigate the challenges of scarce data in computer-aided medical image analysis. The proposed network is compared with state-of-the-art networks across three publicly available datasets and outperforms them on the metrics Accuracy, F1-score, Precision, and Recall. The investigation results highlight the effectiveness of the proposed network. (A toy PCA-based PET/CT fusion sketch appears after the publication list.)
  • A Deployment Framework for Ensuring Business Compliance Using Goal Models

    Deb N., Roy M., Pal S., Bhaumick A., Chaki N.

    Book chapter, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2020, DOI Link

    Building on earlier research that transforms a sequence-agnostic goal model into a finite state model (FSM) and checks it against temporal properties (in CTL), researchers have proposed guidelines for generating compliant finite state models altogether. The proposed guidelines provide a formal approach to prune a non-compliant FSM (generated by the Semantic Implosion Algorithm) and generate FSM alternatives that satisfy the given temporal property. This paper extends the previous work by implementing the proposed guidelines and building a deployment interface called i∗ToNuSMV 3.0. The working of the framework is demonstrated with the help of some use cases. Finally, a comparative study of the performance of the previous and current versions of the Semantic Implosion Algorithm (SIA), with respect to the size of the solution space and the execution times, is also presented.
  • Extracting business compliant finite state models from I* models

    Deb N., Chaki N., Roy M., Pal S., Bhaumick A.

    Conference paper, Advances in Intelligent Systems and Computing, 2020, DOI Link

    Goal models are primarily used to represent and analyze requirements at an early stage of software development. However, goal models are sequence agnostic and fall short for analyzing temporal properties. Limited works in the existing literature aim to bridge this gap. There are tools that transform a goal model into a finite state model (FSM). The existing works and their implementations can only check whether a given temporal property is satisfied by a goal model; they do not provide a compliant FSM that satisfies the compliance rules. This paper aims to generate a business-compliant FSM for a given goal model specification that complies with the business rules (specified in some temporal logic). We have chosen to work with tGRL (the textual modeling language for the goal-oriented requirement language) as the goal model specification language for representing i* models. The framework extends the current i*ToNuSMV ver2.02 tool by allowing the user to give a CTL property as input along with a goal model. The proposed framework generates a compliant FSM that satisfies the CTL constraint. (A toy FSM-pruning sketch appears after the publication list.)
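
Illustrative Code Sketches

The sketches below illustrate, under clearly stated assumptions, some of the techniques named in the publications above. They are minimal toy examples, not the published implementations.

The CELD paper describes training a two-class (Healthy vs. DR) fundus classifier first and then extending it to three classes. A minimal sketch of one way to widen a trained classification head follows; the ResNet-18 backbone and the copying of old class weights are assumptions, not details from the paper.

    # Illustrative class-extension sketch: pretrain a 2-class fundus classifier
    # (Healthy vs. DR), then widen the final layer to 3 classes (adding Glaucoma)
    # and fine-tune. Backbone choice and weight copying are assumptions.
    import torch
    import torch.nn as nn
    from torchvision.models import resnet18

    model = resnet18(num_classes=2)
    # ... stage 1: train on Healthy vs. Diabetic Retinopathy ...

    old_fc = model.fc
    new_fc = nn.Linear(old_fc.in_features, 3)
    with torch.no_grad():
        new_fc.weight[:2] = old_fc.weight  # reuse the learned class weights
        new_fc.bias[:2] = old_fc.bias
    model.fc = new_fc
    # ... stage 2: fine-tune on Healthy / DR / Glaucoma with limited labels ...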
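
The WDU-Net article combines deformable convolution with a weight-generation module. As a rough illustration only (not the published architecture), the block below pairs torchvision's DeformConv2d, with offsets predicted by a small convolution, with a sigmoid channel gate; all layer sizes are arbitrary assumptions.

    # Rough illustration of deformable convolution followed by a sigmoid channel
    # gate that re-weights feature maps. Not the published WDU-Net block.
    import torch
    import torch.nn as nn
    from torchvision.ops import DeformConv2d

    class DeformGateBlock(nn.Module):
        def __init__(self, in_ch, out_ch, k=3):
            super().__init__()
            self.offset = nn.Conv2d(in_ch, 2 * k * k, k, padding=k // 2)  # (dx, dy) per kernel tap
            self.deform = DeformConv2d(in_ch, out_ch, k, padding=k // 2)
            self.gate = nn.Sequential(  # channel gating in the spirit of "weight generation"
                nn.AdaptiveAvgPool2d(1), nn.Conv2d(out_ch, out_ch, 1), nn.Sigmoid()
            )

        def forward(self, x):
            y = self.deform(x, self.offset(x))  # sample on a learned, deformed grid
            return y * self.gate(y)             # suppress less relevant channels

    out = DeformGateBlock(1, 16)(torch.randn(2, 1, 64, 64))
    print(out.shape)  # torch.Size([2, 16, 64, 64])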
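
Several of the segmentation papers above evaluate with the Dice score and combine base models by ensembling. Purely as an illustration (not the EFMC or WDU-Net ensembling schemes), the snippet below applies pixel-wise majority voting to binary masks and computes a Dice overlap on toy data.

    # Pixel-wise majority voting over binary masks from several base models,
    # plus the Dice overlap score used for evaluation. Toy data only.
    import numpy as np

    def majority_vote(masks):
        """masks: (n_models, H, W) binary arrays -> (H, W) ensemble mask."""
        return (np.mean(masks, axis=0) >= 0.5).astype(np.uint8)

    def dice_score(pred, target, eps=1e-7):
        inter = np.sum(pred * target)
        return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

    rng = np.random.default_rng(0)
    preds = rng.integers(0, 2, size=(3, 64, 64))  # three base-model predictions
    gt = rng.integers(0, 2, size=(64, 64))        # ground-truth mask
    print(dice_score(majority_vote(preds), gt))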
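
The DEMF paper fuses PET and CT slices with PCA and an autoencoder before classification. A toy PCA-based fusion of two registered slices might look like the following, with weights taken from the leading eigenvector of the 2x2 covariance of the flattened inputs; the paper's exact fusion rule is not reproduced here.

    # Toy PCA-based fusion of two registered slices (stand-ins for CT and PET).
    # Weights come from the leading eigenvector of the 2x2 covariance.
    import numpy as np

    def pca_fuse(img_a, img_b):
        data = np.stack([img_a.ravel(), img_b.ravel()])  # shape (2, H*W)
        vals, vecs = np.linalg.eigh(np.cov(data))        # 2x2 covariance
        w = np.abs(vecs[:, np.argmax(vals)])             # leading eigenvector
        w = w / w.sum()                                  # normalize to fusion weights
        return w[0] * img_a + w[1] * img_b

    rng = np.random.default_rng(0)
    ct, pet = rng.random((128, 128)), rng.random((128, 128))  # stand-in slices
    fused = pca_fuse(ct, pet)
    print(fused.shape)  # (128, 128)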
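
The two goal-model papers revolve around checking finite state models against CTL properties and pruning non-compliant behaviour. As a much-simplified illustration (not the Semantic Implosion Algorithm or i*ToNuSMV), the sketch below enforces a safety property of the form AG(not forbidden) by removing forbidden states and any state whose successors have all been pruned away.

    # Toy compliance pruning for a safety property AG(not forbidden):
    # drop forbidden states, then any state left with no successors that
    # originally had some (to avoid introducing deadlocks), until stable.
    def prune_to_compliance(transitions, forbidden):
        removed = set(forbidden)
        changed = True
        while changed:
            changed = False
            for state, successors in transitions.items():
                if state in removed:
                    continue
                live = [s for s in successors if s not in removed]
                if successors and not live:  # every successor was pruned away
                    removed.add(state)
                    changed = True
        return {s: [d for d in dsts if d not in removed]
                for s, dsts in transitions.items() if s not in removed}

    fsm = {"s0": ["s1", "s2"], "s1": ["s3"], "s2": [], "s3": ["s3"]}
    print(prune_to_compliance(fsm, forbidden={"s3"}))
    # s3 is forbidden and s1 only led to s3: {'s0': ['s2'], 's2': []}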

Interests

  • Computer Vision
  • Deep Learning
  • Multimodal Data Fusion
  • Vision–Language Models
