A smartphone authentication system based on touch gesture dynamics
Jacob S., Puthuvath V., Akarsh M., George J., Joseph J., Joy J.
Article, Concurrency and Computation: Practice and Experience, 2023, DOI Link
The authentication techniques we use today are prone to shoulder-surfing and mimicry attacks. Thus, keystroke dynamics combined with time- and motion-based typing patterns have been studied for years. In this paper, we introduce and evaluate a touch gesture-based application that authenticates a user from their typing behavior in distinct contexts such as lying, sitting, standing, walking, remaining stationary, and climbing up and down stairs, by leveraging features extracted from multiple built-in smartphone sensors. We use various attributes, including time-based features such as dwell time and flight time, and motion-based features such as accelerometer, gyroscope, and magnetometer readings. The proposed authentication model distinguishes the legitimate smartphone owner from impostors using hand gestures, touch, and keystroke dynamics. We experimented with different design alternatives, such as combinations of motion-sensor features and time-based features extracted from multiple devices. In addition, we evaluated the performance of various supervised machine learning algorithms to show how to achieve high authentication accuracy and the lowest equal error rate. A thorough evaluation shows that the system achieves 99.8% authentication accuracy with an AUC of 0.99 and an EER of 0.11%.
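As a rough illustration of the kind of pipeline this abstract describes (not the authors' implementation), the sketch below computes dwell-time and flight-time statistics from key press/release timestamps and trains a generic scikit-learn classifier; the feature layout and the synthetic data are placeholder assumptions.

```python
# Minimal sketch, not the authors' code: keystroke-timing features plus
# motion-sensor statistics fed to a supervised owner-vs-impostor classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

def keystroke_features(press_ts, release_ts):
    """Dwell time = release - press of a key; flight time = next press - previous release."""
    press_ts, release_ts = np.asarray(press_ts), np.asarray(release_ts)
    dwell = release_ts - press_ts
    flight = press_ts[1:] - release_ts[:-1]
    return np.array([dwell.mean(), dwell.std(), flight.mean(), flight.std()])

# Example timing vector for a short typing burst (timestamps in seconds).
print(keystroke_features([0.00, 0.35, 0.80], [0.12, 0.50, 0.95]))

# Synthetic example: each row = keystroke stats + a motion-sensor summary,
# label 1 for the owner, 0 for impostors.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))
y = rng.integers(0, 2, size=400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]
print("accuracy:", accuracy_score(y_te, proba > 0.5))
print("AUC:", roc_auc_score(y_te, proba))
```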
Affect sensing from smartphones through touch and motion contexts
Jacob S., Vinod P., Subramanian A., Menon V.G.
Article, Multimedia Systems, 2023, DOI Link
A person's affect state has an impact on the intellectual processes that control human behavior. Experiencing negative affect escalates mental problems, whereas experiencing positive affect states improves imaginative reasoning and thereby enhances one's behavior and discipline. Hence, this work centers on affect recognition from typing-based context data during the pandemic. In this paper, we present a novel sensing scheme that perceives one's affect state from their unique contexts. We also aim to study how affect states vary in smartphone users during the pandemic. We collected data from 52 participants over 2 months with an Android application. We exploited the Circumplex Model of Affect (CMA) to infer 25 affect states, leveraging built-in motion and touch sensors on smartphones. We conducted comprehensive experiments by developing machine learning models to predict the 25 states. Through our study, we observe that users' affect states are closely related to their typing and motion contexts. A thorough evaluation shows that the affect prediction model yields an F1-score of 0.90 utilizing diverse contexts. To the best of our knowledge, our work predicts the highest number of affect states (25 states) with better performance compared to state-of-the-art methods.
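The abstract does not spell out how the 25 CMA states are derived, so the sketch below assumes one plausible discretisation, a 5x5 valence-arousal grid, and scores a generic multi-class classifier with a macro F1; the grid, the synthetic features, and the model choice are illustrative assumptions rather than the authors' pipeline.

```python
# Illustrative sketch only: discretise the Circumplex Model of Affect into
# 25 states (assumed 5x5 valence-arousal grid) and evaluate a multi-class
# predictor on synthetic touch/motion context features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

def cma_state(valence, arousal, bins=5):
    """Map valence/arousal in [-1, 1] to one of bins*bins discrete affect states."""
    v = min(int((valence + 1) / 2 * bins), bins - 1)
    a = min(int((arousal + 1) / 2 * bins), bins - 1)
    return a * bins + v

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 8))        # synthetic touch/motion context features
va = np.tanh(X[:, :2])                # synthetic valence/arousal scores in (-1, 1)
y = np.array([cma_state(v, a) for v, a in va])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
model = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)
print("macro F1:", f1_score(y_te, model.predict(X_te), average="macro"))
```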
Context-aware gender and age recognition from smartphone sensors
Sajana T.S., Jacob S., Vinod P., Menon V.G., Shilpa P.S.
Conference paper, Proceedings of International Conference on Computing, Communication, Security and Intelligent Systems, IC3SIS 2022, 2022, DOI Link
Smartphones include multiple sensors to track a device's movement. This research investigated the capability of smartphone motion sensors to determine the user's gender and age in different contexts. A subject's context is an action they engage in, such as sitting or standing. This paper is based on the differences in behavior between male and female smartphone users, specifically how they hold and manage their devices. To build our approach, we use the MotionSense dataset, which contains accelerometer and gyroscope readings over time (attitude, gravity, acceleration, and rotationRate). In this study, we consider multiple contexts such as walking, sitting, standing, and jogging. Our method uses smartphone sensors to detect an individual's age and gender, achieving an accuracy of 99.89% when the user is seated.
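A minimal sketch of the kind of feature extraction such a study might use, assuming windowed mean and standard-deviation statistics over six accelerometer and gyroscope channels and a generic SVM; the synthetic signals and labels below stand in for the real MotionSense recordings.

```python
# Rough sketch under assumptions: windowed statistics over accelerometer and
# gyroscope channels, classified per context (here, "sitting").
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def window_features(signal, win=128):
    """Split a (n_samples, n_channels) signal into windows of per-channel mean/std."""
    n = (len(signal) // win) * win
    w = signal[:n].reshape(-1, win, signal.shape[1])
    return np.concatenate([w.mean(axis=1), w.std(axis=1)], axis=1)

rng = np.random.default_rng(2)
sitting = rng.normal(size=(128 * 200, 6))   # 6 channels: 3-axis accel + 3-axis gyro
X = window_features(sitting)
y = rng.integers(0, 2, size=len(X))         # synthetic gender labels

print("CV accuracy:", cross_val_score(SVC(), X, y, cv=5).mean())
```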
Sentiment analysis using deep learning
Shilpa P.C., Shereen R., Jacob S., Vinod P.
Conference paper, Proceedings of the 3rd International Conference on Intelligent Communication Technologies and Virtual Mobile Networks, ICICV 2021, 2021, DOI Link
Emotion recognition from text is a crucial Natural Language Processing task that can bring enormous benefits to areas such as artificial intelligence and human-computer interaction. Emotions are physiological states engendered by human reactions to events. Analyzing these emotions without facial cues and voice modulation is challenging and requires a supervised approach for proper interpretation. In spite of these challenges, it is essential to recognize human emotions as people increasingly communicate through text on social media applications such as Facebook and Twitter. In this paper, we propose a sentiment classification of a multitude of tweets. We use deep learning techniques to classify the sentiment of an expression into positive or negative emotions. The positive emotions are further classified into enthusiasm, fun, happiness, love, neutral, relief, and surprise, and the negative emotions into anger, boredom, emptiness, hate, sadness, and worry. We experimented with and evaluated the method using Recurrent Neural Networks and Long Short-Term Memory networks on three different datasets to show how to achieve high emotion classification accuracy. A thorough evaluation shows that the LSTM model achieves 88.47% accuracy for positive/negative classification, and 89.13% and 91.3% accuracy for the positive and negative subclasses, respectively.
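A compact sketch of an LSTM-based positive/negative sentiment classifier in Keras, mirroring the first classification stage described above; the architecture, hyperparameters, and the tiny hand-encoded toy corpus are assumptions, not the paper's exact setup.

```python
# Illustrative LSTM sentiment classifier (assumed architecture, toy data).
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

# Tiny integer-encoded corpus (word indices assigned by hand, 0 = padding):
# "i love this", "what a happy day", "this is awful", "i hate waiting"
X = np.array([
    [1, 2, 3, 0, 0],
    [4, 5, 6, 7, 0],
    [3, 8, 9, 0, 0],
    [1, 10, 11, 0, 0],
])
y = np.array([1, 1, 0, 0])                  # 1 = positive, 0 = negative

model = Sequential([
    Embedding(input_dim=16, output_dim=8),  # vocabulary of 16 toy word indices
    LSTM(16),                               # sequence encoder
    Dense(1, activation="sigmoid"),         # binary positive/negative output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=20, verbose=0)
print(model.predict(X, verbose=0).ravel())
```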
Image reconstruction from random samples using multiscale regression framework
Article, Neurocomputing, 2016, DOI Link
Preserving edge details is an important issue in most image reconstruction problems. In this paper, we propose a multiscale regression framework for image reconstruction from sparse random samples. A multiscale framework is used here to combine the modeling strengths of parametric and non-parametric statistical techniques in a pyramidal fashion. This algorithm is designed to preserve edge structures using an adaptive filter, where the filter coefficients are derived using locally adapted kernels that take into account both the local density of the available samples and the actual values of these samples. As such, they are automatically directed and adapted to both the given sampling geometry and the samples' radiometry. Both the upscaling and missing pixel recovery processes are made locally adaptive so that image structures are well preserved. Experimental results demonstrate that the proposed method improves on state-of-the-art algorithms in terms of both subjective and objective quality.
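As a much-simplified, single-scale illustration of locally adaptive kernel reconstruction (the paper's method is multiscale and pyramidal), the sketch below fills missing pixels by weighting neighbouring samples by both spatial distance and radiometric similarity, so the weights adapt to the sampling geometry and the sample values; parameters and the test image are arbitrary.

```python
# Simplified single-scale illustration of data-adaptive kernel filling of
# missing pixels from sparse random samples (not the paper's full method).
import numpy as np

def reconstruct(sampled, mask, radius=5, sigma_s=2.0, sigma_r=0.1):
    """sampled: image with zeros at missing pixels; mask: 1 where a sample exists."""
    h, w = sampled.shape
    out = sampled.copy()
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            if mask[i, j]:
                continue
            i0, i1 = max(i - radius, 0), min(i + radius + 1, h)
            j0, j1 = max(j - radius, 0), min(j + radius + 1, w)
            patch = sampled[i0:i1, j0:j1]
            m = mask[i0:i1, j0:j1]
            sp = spatial[i0 - i + radius:i1 - i + radius,
                         j0 - j + radius:j1 - j + radius]
            ref = patch[m > 0].mean() if m.any() else 0.0
            # Weight = sample availability * spatial kernel * radiometric kernel.
            wgt = m * sp * np.exp(-((patch - ref) ** 2) / (2 * sigma_r**2))
            if wgt.sum() > 0:
                out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

rng = np.random.default_rng(3)
img = np.tile(np.linspace(0, 1, 64), (64, 1))        # smooth test image
mask = (rng.random(img.shape) < 0.2).astype(float)   # keep ~20% of pixels
print("mean abs error:", np.abs(reconstruct(img * mask, mask) - img).mean())
```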