Journals starting with saia

SAIAD20 * *Safe Artificial Intelligence for Automated Driving
* Attentional Bottleneck: Towards an Interpretable Deep Driving Network
* Detection and Retrieval of Out-of-Distribution Objects in Semantic Segmentation
* Evaluating Scalable Bayesian Deep Learning Methods for Robust Computer Vision
* Explaining Autonomous Driving by Learning End-to-End Visual Attention
* Generating Socially Acceptable Perturbations for Efficient Evaluation of Autonomous Vehicles
* Improved Noise and Attack Robustness for Semantic Segmentation by Using Multi-Task Training with Self-Supervised Depth Estimation
* Leveraging combinatorial testing for safety-critical computer vision datasets
* Mind the Gap - A Benchmark for Dense Depth Prediction Beyond Lidar
* Multivariate Confidence Calibration for Object Detection
* Robust Semantic Segmentation by Redundant Networks With a Layer-Specific Loss Contribution and Majority Vote
* Self-Supervised Domain Mismatch Estimation for Autonomous Perception
* Unsupervised Temporal Consistency Metric for Video Segmentation in Highly-Automated Driving
* Using Mixture of Expert Models to Gain Insights into Semantic Segmentation
14 for SAIAD20

SAIAD21 * *Safe Artificial Intelligence for Automated Driving
* Adversarial Robust Model Compression using In-Train Pruning
* Boosting Adversarial Robustness using Feature Level Stochastic Smoothing
* Detecting Anomalies in Semantic Segmentation with Prototypes
* Development Methodologies for Safety Critical Machine Learning Applications in the Automotive Domain: A Survey
* From Evaluation to Verification: Towards Task-oriented Relevance Metrics for Pedestrian Detection in Safety-critical Domains
* Improving Online Performance Prediction for Semantic Segmentation
* Out-of-distribution Detection and Generation using Soft Brownian Offset Sampling and Autoencoders
* Patch Shortcuts: Interpretable Proxy Models Efficiently Find Black-Box Vulnerabilities
* Plants Don't Walk on the Street: Common-Sense Reasoning for Reliable Semantic Segmentation
* Reevaluating the Safety Impact of Inherent Interpretability on Deep Neural Networks for Pedestrian Detection
* SafeSO: Interpretable and Explainable Deep Learning Approach for Seat Occupancy Classification in Vehicle Interior
* Simulation Driven Design and Test for Safety of AI Based Autonomous Vehicles
* Sparse Activation Maps for Interpreting 3D Object Detection
* Towards Black-Box Explainability with Gaussian Discriminant Knowledge Distillation
* Unsupervised Temporal Consistency (TC) Loss to Improve the Performance of Semantic Segmentation Networks, An
16 for SAIAD21

SAIAD23 * *Safe Artificial Intelligence for Automated Driving
* Beyond AUROC and co. for evaluating out-of-distribution detection performance
* Category Differences Matter: A Broad Analysis of Inter-Category Error in Semantic Segmentation
* Coherent Concept-based Explanations in Medical Image and Its Application to Skin Lesion Diagnosis
* Interpretable Model-Agnostic Plausibility Verification for 2D Object Detectors Using Domain-Invariant Concept Bottleneck Models
* Investigating CLIP Performance for Meta-data Generation in AD Datasets
* Maximum Entropy Information Bottleneck for Uncertainty-aware Stochastic Embedding
* Novel Benchmark for Refinement of Noisy Localization Labels in Autolabeled Datasets for Object Detection, A
* Optimizing Explanations by Network Canonization and Hyperparameter Search
* Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations
* RL-CAM: Visual Explanations for Convolutional Networks using Reinforcement Learning
* Uncovering the Inner Workings of STEGO for Safe Unsupervised Semantic Segmentation
12 for SAIAD23

SAIAD24 * *Safe Artificial Intelligence for All Domains
* AdvDenoise: Fast Generation Framework of Universal and Robust Adversarial Patches Using Denoise
* Comprehensive Analysis of Factors Impacting Membership Inference, A
* Conformal Semantic Image Segmentation: Post-hoc Quantification of Predictive Uncertainty
* Exploiting CLIP Self-Consistency to Automate Image Augmentation for Safety Critical Scenarios
* Hinge-Wasserstein: Estimating Multimodal Aleatoric Uncertainty in Regression Tasks
* Investigating Calibration and Corruption Robustness of Post-hoc Pruned Perception CNNs: An Image Classification Benchmark Study
* Look, Listen, and Attack: Backdoor Attacks Against Video Action Recognition
* Penalized Inverse Probability Measure for Conformal Classification, The
* Reactive Model Correction: Mitigating Harm to Task-Relevant Features via Conditional Bias Suppression
* Reliable Trajectory Prediction and Uncertainty Quantification with Conditioned Diffusion Models
* Run-time Monitoring of 3D Object Detection in Automated Driving Systems Using Early Layer Neural Activation Patterns
* Situation Monitor: Diversity-Driven Zero-Shot Out-of-Distribution Detection using Budding Ensemble Architecture for Object Detection
* Towards Engineered Safe AI with Modular Concept Models
* Towards Weakly-Supervised Domain Adaptation for Lane Detection
* Understanding ReLU Network Robustness Through Test Set Certification Performance
* Understanding the (Extra-)Ordinary: Validating Deep Model Decisions with Prototypical Concept-based Explanations
17 for SAIAD24

Last update: 20-Jan-25 12:05:33
Use price@usc.edu for comments.