| AI-POWERED THREAT DETECTION VERSUS AI-DRIVEN ATTACKS, FOCUS DEEPFAKE THREATS: A REVIEW |
| Article code: 1285-NAEC |
| Authors |
| Fatemeh Zahra Salimi *1, Azita Shirazipour 2. 1 Department of Computer Engineering, Islamic Azad University, West Tehran Branch, Tehran, Iran. 2 Department of Computer Engineering, Islamic Azad University, West Tehran Branch |
| Abstract |
| Over the past decade, the presence of Artificial Intelligence (AI) in cybersecurity has grown so quickly that the field is almost unrecognizable compared to its earlier form. Security teams that once relied on tools that reacted after something went wrong now lean on systems that try to anticipate trouble before it becomes visible. Much of this shift has happened because Machine Learning (ML) and Deep Learning (DL) models can pick up unusual activity far earlier than manual monitoring ever could. Sometimes the models highlight patterns that experts might overlook, or at least notice much later, which allows them to classify new or unfamiliar threats with a speed that used to be practically impossible. But this evolution has not been entirely reassuring. The same AI techniques that strengthen defensive tools have given attackers a new set of opportunities. Adversarial examples, often subtle enough that a human never notices anything off, can mislead otherwise reliable models. And then there are deepfakes, some of them so realistic that even trained analysts need a second look. Many of these deepfakes are created with advanced generative systems such as Generative Adversarial Networks (GANs), which have made fabricating believable audio or video far easier than it should be. Unsurprisingly, these synthetic materials now turn up in financial fraud, personalized misinformation efforts, and social-engineering schemes designed to manipulate trust rather than exploit code. |
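To make the abstract's point about adversarial examples concrete, the following is a minimal illustrative sketch (not from the paper) of the fast-gradient-sign idea against a toy logistic-regression "detector". The model, its weights, and the input are all invented for illustration; a real attack would target a trained DL classifier in the same spirit.

```python
import numpy as np

# Toy "detector": logistic regression with made-up fixed weights.
rng = np.random.default_rng(0)
w = rng.normal(size=100)      # hypothetical model weights
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Return the model's score P(malicious) for input x."""
    return sigmoid(w @ x + b)

# An input the toy model confidently scores as class 1 (constructed for the demo).
x = 0.05 * w

# For true label y = 1 and cross-entropy loss, the gradient of the loss
# w.r.t. the input is (p - 1) * w.
p = predict(x)
grad = (p - 1.0) * w

# FGSM-style step: nudge every feature by at most eps in the direction
# that increases the loss. The per-feature change is tiny and bounded...
eps = 0.15
x_adv = x + eps * np.sign(grad)

# ...yet it flips the model's decision.
print(f"clean score: {predict(x):.3f}")
print(f"adversarial score: {predict(x_adv):.3f}")
```

The bounded perturbation (at most `eps` per feature) is what makes such examples hard for a human to notice while still steering the model's output, which is exactly the defensive blind spot the abstract describes.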
| Keywords |
| AI, Threat Detection, Adversarial Attacks, Deepfake Detection |
| Status: Accepted |