AI safety tests found to rely on 'obvious' trigger words; with easy rephrasing, models labeled 'reasonably safe' suddenly fail, with attacks succeeding up to 98% of the time. New corporate research ...
Introduction The proliferation of deepfake technology, synthetic media generated using advanced artificial intelligence techniques, has emerged as a ...
Abstract: This research evaluates a cognitive AI model for unmanned aerial vehicle (UAV) detection using adversarial machine learning (AML) techniques. We test the model using the VisDrone dataset ...
Abstract: Adversarial Machine Learning (AML), particularly model poisoning, presents a critical threat to Autonomous Vehicles (AVs) in the Internet of Vehicles (IoV) environment. To address this ...