Enhanced Cooperative Perception through Asynchronous Vehicle-to-Infrastructure Framework with Delay Mitigation for Connected and Automated Vehicles
Saravanan N.K., Jammula V.C., Yang Y., Wishart J., Zhao J.
Article, SAE International Journal of Connected and Automated Vehicles, 2025, DOI Link
Perception is a key component of automated vehicles (AVs). However, sensors mounted on AVs often encounter blind spots due to obstructions from other vehicles, infrastructure, or objects in the surrounding area. While recent advancements in planning and control algorithms help AVs react to sudden object appearances from blind spots at low speeds and in less complex scenarios, challenges remain at high speeds and complex intersections. Vehicle-to-infrastructure (V2I) technology promises to enhance scene representation for connected and automated vehicles (CAVs) at complex intersections, providing sufficient time and distance to react to adversary vehicles violating traffic rules. Most existing methods for infrastructure-based vehicle detection and tracking rely on LIDAR, RADAR, or sensor fusion methods such as LIDAR-camera and RADAR-camera. Although LIDAR and RADAR provide accurate spatial information, the sparsity of point cloud data limits their ability to capture the detailed contours of distant objects, resulting in inaccurate 3D object detection. Furthermore, the absence of LIDAR or RADAR at every intersection increases the cost of implementing V2I technology. To address these challenges, this article proposes a V2I framework that utilizes monocular traffic cameras at road intersections to detect 3D objects. The results from the roadside unit (RSU) are then combined with the on-board perception results using an asynchronous late fusion method to enhance scene representation. Additionally, the proposed framework provides a time delay compensation module to compensate for the processing and transmission delay from the RSU. Lastly, the V2I framework is tested by simulating and validating a scenario similar to one described in an industry report by Waymo. The results show that the proposed method improves the scene representation and the CAV's perception range, giving it enough time and space to react to adversary vehicles.
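A minimal sketch of the delay compensation and asynchronous late fusion steps described above, assuming a constant-velocity motion model over the RSU-to-vehicle delay; the Detection class, its field names, and the nearest-neighbor gating rule are illustrative stand-ins, not the paper's actual interfaces or fusion logic.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x: float       # position east of intersection origin [m]
    y: float       # position north [m]
    vx: float      # velocity east [m/s]
    vy: float      # velocity north [m/s]
    stamp: float   # capture time on the sender's clock [s]

def compensate_delay(det: Detection, ego_time: float) -> Detection:
    """Propagate an RSU detection to the ego clock, assuming the vehicle
    kept constant velocity over the processing + transmission delay."""
    dt = ego_time - det.stamp  # total RSU-to-ego delay [s]
    return Detection(det.x + det.vx * dt, det.y + det.vy * dt,
                     det.vx, det.vy, ego_time)

def late_fuse(onboard, rsu, ego_time, gate=2.0):
    """Asynchronous late fusion: keep every onboard detection and add
    delay-compensated RSU detections that no onboard detection already
    explains (nearest-neighbor gating; gate is in meters)."""
    fused = list(onboard)
    for det in (compensate_delay(d, ego_time) for d in rsu):
        if all((det.x - o.x) ** 2 + (det.y - o.y) ** 2 > gate ** 2
               for o in onboard):
            fused.append(det)  # the RSU saw something the AV missed
    return fused
```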
Validation and Analysis of Driving Safety Assessment Metrics in Real-world Car-Following Scenarios with Aerial Videos
Lu D., Haines S., Jammula V.C., Rath P.K., Yu H., Yang Y., Wishart J.
Conference paper, SAE Technical Papers, 2024, DOI Link
Data-driven driving safety assessment is crucial for understanding traffic accidents caused by dangerous driving behaviors. Meanwhile, quantifying driving safety through well-defined metrics on real-world naturalistic driving data is an important step in the operational safety assessment of automated vehicles (AVs). However, the lack of flexible data acquisition methods and fine-grained datasets has hindered progress in this critical area. In response to this challenge, we propose a novel dataset for driving safety metrics analysis specifically tailored to car-following situations. Leveraging state-of-the-art Artificial Intelligence (AI) technology, we employ drones to capture high-resolution video data at 12 traffic scenes in the Phoenix metropolitan area. We then develop advanced computer vision algorithms and semantically annotated maps to extract precise vehicle trajectories and leader-follower relations among vehicles. These components, in conjunction with a set of metrics based on our prior Operational Safety Assessment (OSA) work with the Institute of Automated Mobility (IAM), allow us to conduct a detailed analysis of driving safety. Our results reveal the distribution of these metrics across various real-world car-following scenarios and characterize the impact of different parameters and thresholds in the metrics. By enabling a data-driven approach to driving safety in car-following scenarios, our work can empower traffic operators and policymakers to make informed decisions and contribute to a safer, more efficient future for road transportation systems.
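As one concrete example of the kind of analysis such leader-follower trajectories enable, the sketch below computes time headway and time-to-collision per frame; the function, its arguments, and the assumed leader length are illustrative, and the paper's actual OSA metric definitions may differ.

```python
import numpy as np

def headway_and_ttc(lead_pos, foll_pos, lead_vel, foll_vel, lead_len=4.5):
    """Per-frame time headway (THW) and time-to-collision (TTC) for one
    leader-follower pair. Positions are arc length along the lane [m],
    velocities are in m/s; lead_len approximates the leader's length."""
    lead_pos, foll_pos = np.asarray(lead_pos), np.asarray(foll_pos)
    lead_vel, foll_vel = np.asarray(lead_vel), np.asarray(foll_vel)
    gap = lead_pos - foll_pos - lead_len   # bumper-to-bumper gap [m]
    closing = foll_vel - lead_vel          # > 0 means the gap is shrinking
    thw = np.where(foll_vel > 1e-6, gap / np.maximum(foll_vel, 1e-6), np.inf)
    ttc = np.where(closing > 1e-6, gap / np.maximum(closing, 1e-6), np.inf)
    return thw, ttc
```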
Infrastructure-Based LiDAR Monitoring for Assessing Automated Driving Safety
Srinivasan A., Mahartayasa Y., Jammula V.C., Lu D., Como S., Wishart J., Yang Y., Yu H.
Conference paper, SAE Technical Papers, 2022, DOI Link
The successful deployment of automated vehicles (AVs) has recently coincided with the use of off-board sensors for assessments of operational safety. Many intersections and roadways have monocular cameras used primarily for traffic monitoring; however, monocular cameras may not be sufficient for useful AV operational safety assessments in all operational design domains (ODDs), such as low ambient light and inclement weather conditions. Additional sensor modalities such as Light Detection and Ranging (LiDAR) sensors allow a wider range of scenarios to be accommodated and may also provide improved measurements of the Operational Safety Assessment (OSA) metrics previously introduced by the Institute of Automated Mobility (IAM). Building on earlier IAM work creating an infrastructure-based sensor system to evaluate OSA metrics in real-world scenarios, this paper presents an approach for real-time localization and velocity estimation of AVs using a network of LiDAR sensors. The LiDAR data are captured by a network of three Luminar LiDAR sensors at an intersection in Anthem, AZ, while camera data are collected from the same intersection. Using the collected LiDAR data, the proposed method applies a distance-based clustering algorithm to detect 3D bounding boxes for each vehicle passing through the intersection. Subsequently, the position and velocity of each detected bounding box are tracked over time using a combination of two filters. The accuracy of both the localization and the velocity estimation is assessed by comparing the LiDAR-estimated state vectors against differential GPS position and velocity measurements from a test vehicle passing through the intersection, as well as against a camera-based algorithm applied to drone video footage. It is shown that the proposed method, taking advantage of simultaneous data capture from multiple LiDAR sensors, offers great potential for fast, accurate operational safety assessment of AVs, with an average localization error of only 10 cm between the LiDAR and real-time differential GPS position data when tracking a vehicle over 170 meters of roadway.
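The paper describes a distance-based clustering algorithm without naming a specific one; a hedged sketch of the detection step follows, using DBSCAN as a stand-in and emitting axis-aligned 3D boxes that the (unspecified) pair of tracking filters would then smooth over time. The parameter values are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def detect_boxes(points, eps=0.8, min_pts=10):
    """Cluster a merged LiDAR point cloud (N x 3 array, ground points
    already removed) into axis-aligned 3D bounding boxes. DBSCAN stands
    in for the paper's unspecified distance-based clustering."""
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(points)
    boxes = []
    for lbl in set(labels) - {-1}:          # label -1 marks noise points
        cluster = points[labels == lbl]
        boxes.append(np.concatenate([cluster.min(0), cluster.max(0)]))
    return boxes  # each box: [xmin, ymin, zmin, xmax, ymax, zmax]
```

Each box centroid would then feed a per-vehicle tracker (for example, a constant-velocity Kalman filter) to produce the position and velocity estimates the paper evaluates.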
Evaluation of Operational Safety Assessment (OSA) Metrics for Automated Vehicles Using Real-World Data
Conference paper, SAE Technical Papers, 2022, DOI Link
Assurance of the operational safety of automated vehicles (AVs) is crucial to enable commercialization and deployment on public roads. Operational safety must be quantified without ambiguity using well-defined metrics. Several efforts are underway to establish an appropriate set of metrics that can quantify the operational safety of AVs in a technology-neutral way, including the Operational Safety Assessment (OSA) metrics proposed by the Institute of Automated Mobility (IAM). The focus of this work is to compute real-world measurements of the relevant safety envelope OSA metrics in car-following scenarios. This allows for an analysis of the impact of different parameters and thresholds and for an evaluation of the individual usefulness of the safety envelope OSA metrics. The current work complements prior IAM work evaluating the safety envelope OSA metrics in car-following scenarios in simulation. Video data were collected from infrastructure-based cameras at a traffic intersection in Anthem, AZ. Pairs of vehicles that either interact with each other or influence each other's decision-making were identified. A methodology was developed using computer vision to localize the vehicles in the video data and fuse the results with a map representation to obtain vehicle-vehicle relations and the maneuvers in which the vehicles are involved. Longitudinal conflicts in car-following scenarios were filtered to compute the safety envelope OSA metrics. The resulting metric values were analyzed to identify the usefulness of the various metrics in car-following scenarios and to compare them with the observations from simulation.
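Safety envelope metrics of this kind typically compare the measured gap against a minimum safe following distance; the sketch below uses the well-known Responsibility-Sensitive Safety (RSS) longitudinal formula as one such definition, with illustrative parameter values rather than IAM-calibrated thresholds.

```python
def rss_min_safe_distance(v_rear, v_front, rho=0.75,
                          a_accel=3.5, b_rear_min=4.0, b_front_max=8.0):
    """RSS longitudinal minimum safe distance [m]: the rear vehicle may
    accelerate at a_accel for reaction time rho, then brakes at its
    minimum rate b_rear_min, while the front vehicle brakes at its
    maximum rate b_front_max. Speeds in m/s, accelerations in m/s^2."""
    v_after_rho = v_rear + rho * a_accel  # rear speed after reaction time
    d = (v_rear * rho
         + 0.5 * a_accel * rho ** 2
         + v_after_rho ** 2 / (2.0 * b_rear_min)
         - v_front ** 2 / (2.0 * b_front_max))
    return max(d, 0.0)
```

A longitudinal conflict would then be flagged whenever the measured bumper-to-bumper gap falls below this distance.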
CAROM – Vehicle Localization and Traffic Scene Reconstruction from Monocular Cameras on Road Infrastructures
Lu D., Jammula V.C., Como S., Wishart J., Chen Y., Yang Y.
Conference paper, Proceedings - IEEE International Conference on Robotics and Automation, 2021, DOI Link
Traffic monitoring cameras are powerful tools for traffic management and essential components of intelligent road infrastructure systems. In this paper, we present a vehicle localization and traffic scene reconstruction framework using these cameras, dubbed CAROM, i.e., “CARs On the Map”. CAROM processes traffic monitoring videos and converts them into anonymous data structures of vehicle type, 3D shape, position, and velocity for traffic scene reconstruction and replay. Through collaboration with a local department of transportation in the United States, we constructed a benchmarking dataset containing GPS data, roadside camera videos, and drone videos to validate the vehicle tracking results. On average, the localization error is approximately 0.8 m within 50 m of the cameras and 1.7 m within 120 m.
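A core ingredient of monocular roadside localization like CAROM's is projecting image pixels onto the road plane; a minimal sketch follows, assuming a 3x3 image-to-map homography H calibrated offline from surveyed point correspondences (the full pipeline additionally recovers vehicle type, 3D shape, and velocity).

```python
import numpy as np

def pixel_to_map(H, u, v):
    """Project an image point (u, v) onto the ground plane using a 3x3
    homography H mapping image coordinates to map coordinates [m].
    A vehicle's bottom-center pixel is a typical choice of (u, v)."""
    p = H @ np.array([u, v, 1.0])   # homogeneous image point
    return p[0] / p[2], p[1] / p[2]  # dehomogenize to map-frame (x, y)
```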
Integrated Sensing Systems for Monitoring Interrelated Physiological Parameters in Young and Aged Adults: A Pilot Study
Sprowls M., Serhan M., Chou E.-F., Lin L., Frames C., Kucherenko I., Mollaeian K., Li Y., Jammula V., Logeswaran D., Khine M., Yang Y., Lockhart T., Claussen J., Dong L., Chen J.J.-L., Ren J., Gomes C., Kim D., Wu T., Margrett J., Narasimhan B., Forzani E.
Article, International Journal of Prognostics and Health Management, 2021, DOI Link
Acute injury to aged individuals represents a significant challenge to the global healthcare community, as these injuries are frequently treated reactively due to the infeasibility of frequent hospital visits for biometric monitoring. However, a large number of these cases could potentially be prevented through passive, at-home monitoring of multiple physiological parameters related to causes common among aged adults. This research implements wearable devices, ambient “smart home” devices, and minimally invasive blood and urine analysis to test the feasibility of running multiple research-level (i.e., not yet clinically validated) methods simultaneously in a “smart system”. The system comprises measures of balance, breathing, heart rate, metabolic rate, joint flexibility, hydration, and physical performance functions, in addition to lab testing related to biological aging and mechanical cell strength. A proof-of-concept test is illustrated for two adult males of different ages, a 22-year-old and a 73-year-old, matched in body mass index (BMI). The integrated system is tested in this work, a pilot study, demonstrating functionality and age-related clinical relevance. The two subjects had physiological measurements taken in several settings during the pilot study: seated, biking, and lying down. Balance measurements indicated before/after-biking changes in sway area of 45.45% and 25.44% for the two subjects, respectively. The 22-year-old and the 73-year-old showed heart rate variabilities of 0.11 and 0.02 seconds at resting conditions, and metabolic rate changes of 277.38% and 222.23%, respectively, between the biking and seated conditions. A smart camera was used to assess biking speed; the 22- and 73-year-old subjects biked at 60 rpm and 28.5 rpm, respectively. The 22-year-old subject showed a seven-times-greater electrical resistance change from a joint flexibility sensor on the index finger compared with the 73-year-old. The 22- and 73-year-old males showed 28% and 48% increases, respectively, in urine ammonium concentration over the course of the experiment. The average lengths of the telomere DNA from the two subjects were measured to be 12.1 kb (22-year-old) and 6.9 kb (73-year-old), consistent with their biological ages. The study probed the feasibility of (1) multi-metric assessment under free-living conditions and (2) tracking of the various metrics over time.
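The abstract reports heart rate variability in seconds but does not state which estimator its devices use; the sketch below computes SDNN, one common definition whose units match the 0.11 s and 0.02 s values above, purely as a hedged illustration.

```python
import numpy as np

def sdnn(beat_times):
    """Heart rate variability as SDNN: the standard deviation of the
    beat-to-beat (RR) intervals, in seconds. Input is a sequence of
    heartbeat timestamps in seconds."""
    rr = np.diff(np.asarray(beat_times, dtype=float))  # RR intervals [s]
    return float(np.std(rr))
```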
Wearable Sensor Array Design for Spine Posture Monitoring during Exercise Incorporating Biofeedback
Article, IEEE Transactions on Biomedical Engineering, 2020, DOI Link
Physical therapy (PT) exercise is an evidence-based intervention for non-specific chronic low back pain, spinal deformities, and poor posture. Home-based PT programs aim to strengthen core muscle groups, improve mobility and flexibility, and promote proper posture. However, assessing unsupervised home-based PT outcomes is generally difficult due to the lack of reliable methods to monitor execution correctness and compliance. We propose a monitoring method consisting of a wearable sensor array that tracks three geodesic distances between two points on the surface of the shoulders and one point on the lower back. The sensor array may be built into a custom garment or a lightweight harness wirelessly linked to a pattern recognition algorithm implemented in a mobile app. We use a new type of triangular stretch sensor array design that can generate a unique signature for a correct spine therapy exercise when performed by a specific subject. We conducted a pilot test consisting of three experiments: (i) two exercise patterns simulated by a mechanical device, (ii) one PT case of a scoliosis therapy exercise including spinal flexion, extension, and rotation performed by one volunteer patient, and (iii) a set of three lower back flexibility exercises performed by six subjects. Overall, the results of correctness recognition show 70-100% sensitivity and 100% specificity. The pilot test provides key data for further development, including clinical trials. The significance of the method includes the simplicity of its design and training method, the ability to test with simulated signals, and the potential to provide real-time biofeedback.
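The abstract does not name the pattern recognition algorithm behind the exercise signature; the sketch below shows one conventional choice, dynamic time warping against a per-subject reference repetition over the three stretch-sensor channels. The function names and the calibrated threshold are illustrative assumptions.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two multichannel sequences
    (T x 3 arrays of the three geodesic distances over one repetition)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def is_correct_repetition(rep, template, threshold):
    """Flag a repetition as correct when it stays within a calibrated DTW
    distance of the subject's own reference signature."""
    return dtw_distance(rep, template) <= threshold
```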