Journals starting with saia

SAIAD20 * Safe Artificial Intelligence for Automated Driving
* Attentional Bottleneck: Towards an Interpretable Deep Driving Network
* Detection and Retrieval of Out-of-Distribution Objects in Semantic Segmentation
* Evaluating Scalable Bayesian Deep Learning Methods for Robust Computer Vision
* Explaining Autonomous Driving by Learning End-to-End Visual Attention
* Generating Socially Acceptable Perturbations for Efficient Evaluation of Autonomous Vehicles
* Improved Noise and Attack Robustness for Semantic Segmentation by Using Multi-Task Training with Self-Supervised Depth Estimation
* Leveraging combinatorial testing for safety-critical computer vision datasets
* Mind the Gap - A Benchmark for Dense Depth Prediction Beyond Lidar
* Multivariate Confidence Calibration for Object Detection
* Robust Semantic Segmentation by Redundant Networks With a Layer-Specific Loss Contribution and Majority Vote
* Self-Supervised Domain Mismatch Estimation for Autonomous Perception
* Unsupervised Temporal Consistency Metric for Video Segmentation in Highly-Automated Driving
* Using Mixture of Expert Models to Gain Insights into Semantic Segmentation
13 for SAIAD20

SAIAD21 * Safe Artificial Intelligence for Automated Driving
* Adversarial Robust Model Compression using In-Train Pruning
* Boosting Adversarial Robustness using Feature Level Stochastic Smoothing
* Detecting Anomalies in Semantic Segmentation with Prototypes
* Development Methodologies for Safety Critical Machine Learning Applications in the Automotive Domain: A Survey
* From Evaluation to Verification: Towards Task-oriented Relevance Metrics for Pedestrian Detection in Safety-critical Domains
* Improving Online Performance Prediction for Semantic Segmentation
* Out-of-distribution Detection and Generation using Soft Brownian Offset Sampling and Autoencoders
* Patch Shortcuts: Interpretable Proxy Models Efficiently Find Black-Box Vulnerabilities
* Plants Don't Walk on the Street: Common-Sense Reasoning for Reliable Semantic Segmentation
* Reevaluating the Safety Impact of Inherent Interpretability on Deep Neural Networks for Pedestrian Detection
* SafeSO: Interpretable and Explainable Deep Learning Approach for Seat Occupancy Classification in Vehicle Interior
* Simulation Driven Design and Test for Safety of AI Based Autonomous Vehicles
* Sparse Activation Maps for Interpreting 3D Object Detection
* Towards Black-Box Explainability with Gaussian Discriminant Knowledge Distillation
* An Unsupervised Temporal Consistency (TC) Loss to Improve the Performance of Semantic Segmentation Networks
15 for SAIAD21



Last update: 24-Oct-21 17:32:51
Use price@usc.edu for comments.