Dr Tabiya Manzoor Beigh

Assistant Professor

Department of Computer Science and Engineering

Contact Details

tabiya.m@srmap.edu.in

Office Location

Education

  • 2025 – PhD, Pondicherry University
  • 2016 – M.Tech CSE, Kurukshetra University
  • 2013 – B-Level, National Institute of Electronics and Information Technology

Personal Website

Experience

  • June 2013 to July 2014 – Software Developer, Intelli Solutions, Karnal, Haryana, India.

Research Interests

  • Efficient summarization of surveillance videos in limited-resource settings using deep learning models.
  • Detection of abandoned objects in surveillance videos using multiple robust background subtraction models and lightweight object classifiers.
  • Predictive summarization of surveillance videos using deep-learning-based optimization techniques.
  • Medical-biomarker-based diagnosis of Alzheimer’s disease using deep learning models.

Awards & Fellowships

  • 2014 – GATE – IIT Kharagpur
  • 2018 – Jammu and Kashmir State Eligibility Test – University of Kashmir
  • 2019 – National Eligibility Test – UGC
  • 2015 – Summer Internship Programme at the School of Computing and Electrical Engineering, Indian Institute of Technology (IIT) Mandi, Himachal Pradesh
  • 2019–2024 – Maulana Azad National Fellowship – University Grants Commission
  • Best Paper Award at the 2nd International Conference on Human-centric Smart Computing 2023, University of Engineering & Management (UEM), Jaipur, July 05–06, 2023
  • Best Paper Award at the International Conference on Emerging Innovative Technologies in Engineering (ICEITE’22), Sri Manakula Vinayagar Engineering College, Madagadipet, Puducherry, July 13–15, 2022
  • Best Paper Award at the Two-Day National Seminar on “Data Handling Techniques: Application in Information Technology”, Gandhi Memorial National Post Graduate College, Ambala Cantt, Haryana, March 04–05, 2016

Memberships

Publications

  • Predictive summarization framework for resource-constrained device surveillance videos

    Beigh T.M., Venkatesan D.V.P., Arumugam J.

    Article, Pattern Recognition, 2026, DOI Link

    The high occurrence of crime in cities worldwide has detrimental effects on both the victims and the communities they belong to. Although deep learning techniques are acknowledged for their effectiveness in predicting future events from past behaviour, such approaches have yielded poor prediction accuracy on video surveillance data. To tackle this issue, a new framework called the Horned Lizard ZfNet Summarization Framework (HLZSF) is introduced in this research. Its primary processes are filtering, key frame extraction, crime event tracking, and crime prediction with classification. To continuously filter noisy features, the filtering step is carried out in the hidden layer with a min-max scaler function. The crime event tracking function models the food-hunting behaviour of the horned lizard, while feature selection from the video frames treats the lizard's skin-colour-changing behaviour as the best solution. The study is implemented in a Python environment and validated on a video surveillance database. Incorporating the horned lizard's skin-changing behaviour, conditioned on a specific object, into the deep network yields the best feature selection and prediction outcomes. The proposed HLZSF attains an accuracy of 97.87%, an F-score of 97.88%, a precision of 98.01%, and a recall of 97.8%, the best results among the compared models.
  • MRI-Based Biomarker in the Diagnosis of Alzheimer’s Disease Using Attention-UNet

    Arumugam J., Prasanna Venkatesan V., Beigh T.

    Article, SN Computer Science, 2025, DOI Link

    Dementia can occur in various forms; some types are curable and some are not. Among the non-curable forms, Alzheimer’s disease (AD) is the most prominent. There is no effective treatment to cure it, but its impact can be reduced if it is diagnosed early. Accurate classification of AD is necessary for diagnosing patients and giving them the right treatment. Specific regions of the brain serve as hotspots for AD and act as potential imaging biomarkers that can improve classification accuracy. In this study, we introduce a new attention-based ‘U’-shaped convolutional neural network to identify imaging biomarkers of AD from 3D T1-weighted MRI data, which excels at distinguishing between gray matter, white matter, and cerebrospinal fluid (CSF). Our model is an improved U-Net with an enhanced convolutional block attention module, named EnCBAMUNet. This makes it well suited to brain imaging, where differentiating between these tissues is crucial for identifying pathology such as neurodegenerative disease. The method is tested on both the ADNI and OASIS datasets. It achieves 99.8% accuracy on the binary task of control normal (CN) vs Alzheimer’s disease (AD), and 95.5% on the multiclass task of AD vs mild cognitive impairment (MCI, a transitional state between the healthy and diseased states) vs CN. Because the underlying deep learning (DL) model is black-box in nature, we visualize the brain regions responsible for AD (hippocampus, ventricles, and parts of the cortex) using three-dimensional gradient-weighted class activation mapping (3D-Grad-CAM).
  • Motion Aware Video Surveillance System (MAVSS)

    Beigh T.M., Venkatesan V.P.

    Conference paper, Smart Innovation, Systems and Technologies, 2024, DOI Link

    Surveillance cameras create enormous amounts of data, and processing such volumes requires an intelligent system that can identify objects under varied conditions. Such a system is a primary concern and can be deployed in many locations, including housing societies, parks, airports, and railway stations. Video is recorded for a specific duration, and the motion of an object plays a vital role in surveillance footage: moving objects are the primary candidates for analysis. Surveillance videos have the intrinsic property that, most of the time, the camera captures a static background without any alteration, and the video usually has no acoustic features. Any important event is marked by a change in the environment, which could signal the beginning of a normal or an abnormal event. The motion of an object or person is therefore one of the most important and examinable features that can assist in better decision making. In this paper, a Motion Aware Video Surveillance System is proposed that takes spatial and spectral features into consideration to detect and track objects of interest in videos.
  • Object-Based Key Frame Extraction in Videos

    Beigh T.M., Prasannavenkatesan V., Arumugam J.

    Conference paper, 2023 2nd International Conference on Advances in Computational Intelligence and Communication, ICACIC 2023, 2023, DOI Link

    Due to significant improvements in communication and network infrastructure, there is a tremendous exchange of visual data. Watching full videos is time-consuming for the user, so a solution is needed that extracts the meaningful parts of a video, reducing both the user's time and the computational storage. In this paper, an object-based video summarization process is proposed. Object detection is performed by training a CNN-based object detection model, You Only Look Once (YOLOv5), followed by object tracking with the Kalman filtering algorithm. A saliency score is computed for each object in a frame from various features, including motion, the centrality of the object, and other color-based features. The method is evaluated on the benchmark Open Video Project (OVP) dataset and a custom-made video dataset, achieving precision, recall, and F-score better than state-of-the-art methods while reducing computational time significantly.
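
The Pattern Recognition abstract above describes a filtering step that normalizes frame features with a min-max scaler before key frame extraction and prediction. As a minimal sketch only, not the authors' HLZSF implementation, min-max scaling of per-frame feature vectors might look like this in Python (array shapes and names are hypothetical):

    import numpy as np

    def min_max_scale(frame_features: np.ndarray, eps: float = 1e-8) -> np.ndarray:
        """Scale each feature column of a (num_frames, num_features) array into [0, 1]."""
        f_min = frame_features.min(axis=0, keepdims=True)
        f_max = frame_features.max(axis=0, keepdims=True)
        return (frame_features - f_min) / (f_max - f_min + eps)

    # Hypothetical example: 100 frames, each described by 64 appearance/motion features.
    features = np.random.rand(100, 64) * 255.0
    scaled = min_max_scale(features)
    assert scaled.min() >= 0.0 and scaled.max() <= 1.0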
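
The SN Computer Science paper above builds EnCBAMUNet around an enhanced convolutional block attention module (CBAM) inside a U-Net. The sketch below shows a standard CBAM-style stage (channel attention followed by spatial attention) attached to a plain convolution block, written in PyTorch for 2D slices for brevity even though the paper works with 3D T1-weighted MRI; it is a generic illustration under those assumptions, not the authors' enhanced module.

    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        """Re-weight channels using average- and max-pooled descriptors passed through a shared MLP."""
        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling
            mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling
            scale = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
            return x * scale

    class SpatialAttention(nn.Module):
        """Highlight informative spatial locations from channel-wise average and max maps."""
        def __init__(self, kernel_size: int = 7):
            super().__init__()
            self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            avg = x.mean(dim=1, keepdim=True)
            mx = x.amax(dim=1, keepdim=True)
            scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
            return x * scale

    class CBAMConvBlock(nn.Module):
        """A U-Net-style encoder stage: convolution, then channel and spatial attention."""
        def __init__(self, in_ch: int, out_ch: int):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            self.channel_att = ChannelAttention(out_ch)
            self.spatial_att = SpatialAttention()

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = self.conv(x)
            return self.spatial_att(self.channel_att(x))

    # Hypothetical usage: one stage applied to a batch of single-channel 128x128 slices.
    block = CBAMConvBlock(in_ch=1, out_ch=32)
    out = block(torch.randn(4, 1, 128, 128))  # -> shape (4, 32, 128, 128)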
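
The ICACIC 2023 paper above assigns each detected object a saliency score built from cues such as motion, centrality, and color, then keeps the highest-scoring frames as key frames. The sketch below illustrates that idea with a simple weighted score over generic detections; the Detection fields, weights, and threshold are hypothetical and stand in for the published pipeline, which pairs a YOLOv5 detector with Kalman-filter tracking.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        x: float        # box centre, normalised to [0, 1]
        y: float
        area: float     # box area as a fraction of the frame
        motion: float   # displacement since the previous frame, normalised

    def frame_saliency(detections: list[Detection]) -> float:
        """Sum weighted per-object cues: moving, central, large objects score higher."""
        score = 0.0
        for d in detections:
            centrality = 1.0 - (abs(d.x - 0.5) + abs(d.y - 0.5))  # 1.0 at the frame centre
            score += 0.5 * d.motion + 0.3 * centrality + 0.2 * d.area
        return score

    def select_key_frames(frames: list[list[Detection]], threshold: float = 0.6) -> list[int]:
        """Return indices of frames whose saliency exceeds a (hypothetical) threshold."""
        return [i for i, dets in enumerate(frames) if frame_saliency(dets) > threshold]

    # Hypothetical example: the second frame has a fast, central object and is kept.
    frames = [
        [Detection(x=0.9, y=0.9, area=0.02, motion=0.01)],
        [Detection(x=0.5, y=0.5, area=0.10, motion=0.80)],
        [],
    ]
    print(select_key_frames(frames))  # -> [1]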

Patents

Projects

Scholars

Interests

  • Artificial Intelligence
  • Computer Vision
  • Deep Learning
  • Machine Learning
  • Medical Imaging

