Mr Yara Srinivas

Assistant Professor

Department of Computer Science and Engineering

Contact Details

srinivas.y@srmap.edu.in

Office Location

Homi J Bhabha Block, Level-3, Cubicle No: 54.

Education

2024
Ph.D
School of Computer and Information Sciences, University of Hyderabad
India
2015
M.Tech
A.U College of Engineering, Andhra University
India
2011
B.Tech
University College of Engineering, JNTU Kakinada
India

Experience

  • 2016 – 2018 – Assistant Professor – Sphoorthy Engineering College, Nadergul, Hyderabad
  • 2024 – 2025 – Assistant Professor – K L (Deemed to be) University, Aziznagar, Hyderabad

Research Interests

  • My research interests lie in the broad domain of Computer Vision, with a specific focus on applications such as video analysis, object detection, and digital forensics.
  • I am particularly interested in leveraging deep learning techniques to develop intelligent systems capable of analysing complex visual data, identifying objects in real-time, and enhancing the reliability of forensic investigations.
  • My work emphasizes the design and implementation of robust deep learning models that can perform efficient analysis on medical images.

Awards & Fellowships

  • 2017 – UGC NET JRF – UGC, MoE

Memberships

  • IAENG

Publications

  • A Semi-supervised Centroid Base Object Tracking in Video Surveillance Using Deep Detector and Salience Estimation

    Srinivas Y., Ganivada A.

    Conference paper, Lecture Notes in Electrical Engineering, 2025

    Deep learning-based semi-supervised object tracking systems play a pivotal role in visual object tracking (VOT) due to their high accuracy. In existing object tracking algorithms, the object region is chosen manually, whereas many recent computer vision applications operate without human intervention. We therefore introduce a semi-supervised tracking algorithm that combines a deep network, the Customized Encoder-Decoder SegNet (CEDSegNet), with salience features to track an object in videos. Specifically, the deep learning model detects and extracts the object region, i.e., the region of interest (ROI), in a video frame. This ROI is used to estimate the salience of the object in subsequent frames via a log-likelihood measure. Finally, the mean shift algorithm is applied to the detected object for tracking. Qualitative results are obtained on the CDNet2024 dataset, and the experimental results showcase the effectiveness of the proposed tracking system.
  • A modified inter-frame difference method for detection of moving objects in videos

    Srinivas Y., Ganivada A.

    Article, International Journal of Information Technology (Singapore), 2025

    Many frame-difference-based methods have been developed for detecting objects in a video. However, selecting the moving pixels relevant to an object and detecting the object under different environmental conditions remains a challenging task. In this work, a modified inter-frame difference (MIFD) method for detecting moving objects in a video under various conditions is proposed. The method constructs a motion feature matrix with enhanced intensity values, termed the motion frame. These intensity values are scaled by a constant parameter (β), whose value differs across video sequences. The motion feature matrix enhances the relevance of pixels associated with an object, leading to more accurate detection. Otsu's threshold method is used in the detection process. We experimentally examine the performance of the proposed model on benchmark datasets, including changedetection.net (the CDNet2014 dataset), where its superior performance over state-of-the-art methods is demonstrated.
  • A novel ensemble deep learning framework with spatial attention and high-order pooling for COPD detection

    Cherukuvada S., Chaitanya R.K., Janardhan M., Yara S., Shareef S.K.K., Harshini M., Kocherla R.

    Article, Discover Computing, 2025

    Asthma and chronic obstructive pulmonary disease (COPD) are prevalent lung illnesses requiring accurate classification for effective treatment. Conventional diagnostic methods, though reliable, are often invasive, costly, and require specialized expertise. Recent advancements in machine learning (ML) have enhanced COPD classification using clinical, imaging, and physiological data. This study introduces SAHPN, an ensemble deep learning model that integrates a Fusion Depthwise-Separable Block (FDSB) and a Deep Residual Feature Distillation Block (DRFDB) to optimize feature extraction while reducing memory and computational demands. The classifier employs multiple convolutional branches with varied filter sizes and kernels to capture diverse feature representations. Spatial attention units refine learning by focusing on relevant details, while second-order pooling models high-level feature interactions. A concatenation fusion technique merges outputs into a multimodal representation for improved classification. To further optimize feature extraction and classification, we incorporate the Attack-Leave Optimizer (ALO), which balances guided and random searches for enhanced accuracy. The SAHPN model is evaluated on 7,194 contrast-enhanced CT (CECT) images from 78 participants (3,597 COPD and 3,597 healthy controls), while its generalizability to non-contrast CT (NCCT) images is addressed separately. Results demonstrate that SAHPN outperforms existing models in classification accuracy. The proposed model is novel in its integration of spatial attention, high-order pooling, and the Attack-Leave Optimizer within an ensemble framework, providing a robust, efficient, and scalable approach for COPD diagnosis from contrast-enhanced CT images.
  • A novel deep convolutional encoder–decoder network: application to moving object detection in videos

    Ganivada A., Yara S.

    Article, Neural Computing and Applications, 2023

    Moving object detection is one of the key applications of video surveillance. Deep convolutional neural networks have gained increasing attention in this field due to their effective feature learning ability. Their performance, however, is often affected by video characteristics such as poor illumination and inclement weather, so the network architecture must be designed carefully; in particular, the number of convolutional layers must be chosen appropriately, and determining that number is important. In this study, we propose a customized deep convolutional encoder–decoder network, termed CEDSegNet, for moving object detection in a video sequence. CEDSegNet is based on SegNet, with its encoder and decoder parts each reduced to two stages. This customization improves detection performance while reducing the number of parameters. The two encoder and decoder parts generate feature maps that preserve the fine details of object pixels in videos. CEDSegNet is tested on multiple video sequences of the CDNet 2012 dataset. The detection results are interpreted qualitatively, and performance is further evaluated using several quantitative indices. Both the qualitative and quantitative results demonstrate that CEDSegNet outperforms the state-of-the-art network models VGG16, VGG19, ResNet18 and ResNet50.
  • Detection of Moving Objects and Enhancement Using Motion Features in Various Video Sequences

    Yara S., Ganivada A.

    Conference paper, Lecture Notes in Networks and Systems, 2022

    Due to its limited performance and flexibility, the traditional frame difference method cannot precisely detect moving objects from the motion of the object region in each frame across various video sequences, so the object is not rendered accurately in the foreground; this remains a serious concern. Object detection with the three-frame-difference and five-frame-difference approaches also takes longer and loses frame information. To address these flaws, a modified inter-frame difference method (MIFD) is proposed. It detects moving objects in video under various environmental conditions with little data loss in a short period of time. MIFD involves constructing a reference frame, computing the inter-frame difference and a motion frame, and detecting moving object(s) in a frame by drawing rectangular blobs using connected components in the video sequence. The performance of the proposed algorithm is compared with previously reported results for the codebook model (CB), self-organizing background subtraction (SOBS), local binary pattern histogram (LBPH), robust background subtraction for network surveillance in H.264, GMM, ViBe, frame difference, three-frame difference, improved three-frame difference, and a combined three-frame difference and background subtraction model. The experimental results demonstrate that the proposed method outperforms the other methods in accurately detecting moving object(s) in video under challenging environmental conditions.
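The mean-shift step described in the first publication's abstract can be sketched in plain numpy: the tracking window's centre is repeatedly moved to the weighted centroid of a salience map until it stops moving. This is an illustrative sketch of the general mean-shift idea only, not the paper's implementation; the window size, tolerance, and salience map are all assumptions.

```python
import numpy as np

def mean_shift_track(weight_map, centroid, half_win=8, iters=20, tol=0.5):
    """Move a window centre to the weighted centroid of the salience values
    inside it, repeating until convergence (plain mean-shift sketch)."""
    cy, cx = centroid
    h, w = weight_map.shape
    for _ in range(iters):
        # Clip the search window to the image bounds.
        y0, y1 = max(0, int(cy) - half_win), min(h, int(cy) + half_win + 1)
        x0, x1 = max(0, int(cx) - half_win), min(w, int(cx) + half_win + 1)
        win = weight_map[y0:y1, x0:x1]
        total = win.sum()
        if total == 0:          # no salience in the window: stop
            break
        # Weighted centroid of salience inside the window.
        ys, xs = np.mgrid[y0:y1, x0:x1]
        ny = (ys * win).sum() / total
        nx = (xs * win).sum() / total
        moved = abs(ny - cy) >= tol or abs(nx - cx) >= tol
        cy, cx = ny, nx
        if not moved:           # converged
            break
    return cy, cx
```

Starting from a centroid near a salient blob, the window slides onto the blob's centre within a couple of iterations.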
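At its core, the MIFD pipeline described in the two inter-frame difference abstracts reduces to differencing consecutive frames, scaling the result by β to build the motion frame, and binarising with Otsu's threshold. A minimal numpy sketch of that core, assuming grayscale uint8 frames; the β value here is illustrative, not a value from the papers, and the reference-frame and connected-components steps are omitted.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's threshold for an 8-bit grayscale image: pick the cut that
    maximises between-class variance of the histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum_w = np.cumsum(hist)                      # class-0 pixel counts
    cum_m = np.cumsum(hist * np.arange(256))     # class-0 intensity sums
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = cum_w[t - 1], total - cum_w[t - 1]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_m[t - 1] / w0
        m1 = (cum_m[-1] - cum_m[t - 1]) / w1
        var = w0 * w1 * (m0 - m1) ** 2           # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def detect_moving(prev_frame, curr_frame, beta=1.5):
    """Inter-frame difference scaled by beta (the 'motion frame'),
    then thresholded with Otsu's method into a binary foreground mask."""
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    motion = np.clip(diff * beta, 0, 255).astype(np.uint8)
    return (motion > otsu_threshold(motion)).astype(np.uint8)
```

For two frames that differ only where an object moved, the returned mask is nonzero exactly over the moved region.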
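The Fusion Depthwise-Separable Block named in the COPD abstract builds on depthwise-separable convolution, which factorises a standard convolution into a per-channel spatial filter followed by a 1×1 pointwise channel mix. A minimal numpy sketch of that factorisation (valid padding, stride 1), shown only to illustrate the building block; it is not the paper's FDSB.

```python
import numpy as np

def depthwise_separable_conv(x, depth_k, point_w):
    """Depthwise-separable convolution sketch.
    x: (H, W, C) input; depth_k: (k, k, C), one spatial kernel per channel;
    point_w: (C, C_out), the 1x1 pointwise mixing weights."""
    h, w, c = x.shape
    k = depth_k.shape[0]
    oh, ow = h - k + 1, w - k + 1
    depth_out = np.zeros((oh, ow, c))
    for i in range(oh):
        for j in range(ow):
            patch = x[i:i + k, j:j + k, :]               # (k, k, C)
            # Depthwise step: each channel filtered independently.
            depth_out[i, j] = (patch * depth_k).sum(axis=(0, 1))
    # Pointwise step: 1x1 convolution mixes channels.
    return depth_out @ point_w
```

The factorisation is why such blocks cut memory and compute: a k×k×C depthwise pass plus a C×C_out pointwise pass replaces a full k×k×C×C_out kernel.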

Interests

  • Artificial Intelligence
  • Computer Vision
  • Deep Learning

