Conference Paper · February 2024
DOI: 10.1109/ICCR61006.2024.10533010
2024 2nd International Conference on Cyber Resilience (ICCR) | 979-8-3503-9496-2/24/$31.00 ©2024 IEEE | DOI: 10.1109/ICCR61006.2024.10533010
AI-Driven Solutions for Social Engineering Attacks:
Detection, Prevention, and Response
Hussam N. Fakhouri
Data Science and Artificial Intelligence
Department, Faculty of Information
Technology, University of Petra
Amman, Jordan
Hussam.fakhouri@uop.edu.jo
Basim Alhadidi
Department of Computer Information
Systems,
Al-Balqa Applied University
Salt – Jordan
b_hadidi@bau.edu.jo
Khalil Omar
Computer Science Department.
University of Petra
Amman, Jordan
komar@uop.edu.jo
Sharif Naser Makhadmeh
Data Science and Artificial Intelligence
Department, Faculty of Information
Technology, University of Petra
Amman, Jordan
sharif.makhadmeh@uop.edu.jo
Faten Hamad
Sultan Qaboos University, Sultanate of Oman;
University of Jordan, Amman, Jordan
Niveen Z. Halalsheh
The University of Jordan, Amman, Jordan
n.halalsheh@ju.edu.jo
Abstract— With the rapid evolution of cyber threats, social
engineering attacks have become increasingly sophisticated,
leveraging human vulnerabilities to bypass traditional security
measures. While many conventional defense mechanisms have
been overwhelmed, Artificial Intelligence (AI) offers a
promising avenue to detect, prevent, and respond to these
emerging threats. This research analyzes the intricacies of
contemporary social engineering attacks, from their methods of
deployment to their recent adaptations, such as leveraging social
media and mobile apps. By contrasting prior solutions with the
potential of AI-based defenses, we highlight the key role of
machine learning in behavioral pattern recognition, Natural
Language Processing's (NLP) efficacy in identifying phishing
attempts, and predictive analytics' power to anticipate future
attack vectors. Through detailed case studies, we showcase real-world scenarios where AI mechanisms have successfully
countered social engineering ploys. The findings reveal that AI-enhanced mechanisms significantly improve the identification
and mitigation of social engineering threats. Specifically, AI-driven behavioral analytics effectively detect subtle,
manipulative cues indicative of phishing and other deceitful
tactics, considerably reducing the incidence of successful
attacks. Furthermore, predictive analytics has shown great
promise in forecasting and preemptively countering potential
cyber threats. In addition, while effective, AI tools must evolve
with the changing tactics of cyber threats; continuous learning
and updating are necessary to maintain and improve accuracy
and effectiveness.
Keywords— Social Engineering, Attacks, Artificial Intelligence (AI), machine learning

I. INTRODUCTION
Social engineering, a term now deeply embedded within
the lexicon of cybersecurity, refers to the deliberate
manipulation of individuals into releasing confidential
information or performing actions that typically result in
unauthorized access and potential breaches [1]. This form of
cyber threat capitalizes not on system vulnerabilities but on
human vulnerabilities, exploiting behavioral and
psychological tendencies. In recent years, the techniques and
methods employed by adversaries to execute social
engineering attacks have become more sophisticated,
mirroring the intricacies of our progressively digital-centric
societal framework [2]. As digital systems and networks
become more secure and resilient against direct attacks,
malefactors often find it easier to target the human element,
which remains susceptible to manipulation through tactics like
deception, intimidation, and impersonation. Contemporary
social engineering methods have thus expanded from
traditional phishing emails to more advanced schemes, such
as spear phishing, baiting, and voice impersonation,
leveraging platforms like social media, messaging apps, and
even virtual meeting platforms [3].
The evolving threat landscape necessitates a parallel
evolution in defensive strategies, thereby highlighting the
pivotal role that Artificial Intelligence (AI) is increasingly
playing in cybersecurity [4]. AI's unique capability to analyze
enormous datasets, recognize intricate behavioral patterns,
and facilitate real-time decision-making offers an arsenal of
tools invaluable for mitigating the risks associated with social
engineering attacks. For instance, the discernment of subtle
manipulative cues in textual communications—a task
daunting for human analysts—becomes feasible through
Natural Language Processing (NLP) algorithms trained on a
plethora of such examples, enabling them to issue timely alerts
or proactively block suspect communications. Given these
capabilities, this research endeavors to provide an exhaustive
analysis of the modern landscape of social engineering,
capturing its multi-faceted evolution in tandem with the
burgeoning digital transformations. Alongside, we aim to
dissect the AI's potential and actual contributions in this
domain, examining its role as a central pillar in the
formulation of defensive countermeasures that can
significantly dampen, if not entirely neutralize, the growing
threats posed by sophisticated social engineering tactics [5].
Social engineering has adapted to exploit not only existing
communication vectors but also emerging ones.
Cybercriminals are increasingly leveraging technologies like
Machine Learning (ML) to automate and fine-tune their
attacks, thereby calling for equally adaptive countermeasures.
Moreover, the blurring of personal and professional lives on
digital platforms further complicates the security landscape,
making it essential to consider solutions that are not just robust
but also adaptable to diverse contexts [6].
The urgency for enhanced countermeasures is intensified
by the increasing economic and social impact of these attacks.
According to recent estimates, the financial loss attributed to
social engineering attacks is growing exponentially, with
repercussions extending beyond monetary loss to include
damage to reputation, loss of intellectual property, and even
potential legal consequences for failure to protect user data
[7]. This amplifies the critical need for more effective, multi-faceted defenses that leverage state-of-the-art technologies
like AI.
A. Research Objective
The main aim of this research is to meticulously dissect
the terrain of contemporary social engineering attacks. This
involves delving into their evolution, the nuanced
complexities they've adopted, and the particular challenges
they pose in our increasingly digitized world. A parallel
objective is to thoroughly explore and critically evaluate the
potential of Artificial Intelligence (AI) in identifying,
mitigating, and addressing these security threats. The research
endeavors to offer a comprehensive chronicle of the evolution
of social engineering methods, focusing particularly on their
adaptability and escalating complexity in the context of the
modern digital landscape. Concurrently, it scrutinizes the
strengths and weaknesses inherent in existing cybersecurity
paradigms, particularly with regard to their effectiveness in
countering social engineering threats. Supplementing this
analysis, the research investigates the myriad roles that
artificial intelligence (AI) could fulfill in combating such
threats, accentuating its competencies in detection, deterrence,
and reactive countermeasures.
B. Research Contribution
This research promises multiple significant contributions.
Firstly, it delivers a comprehensive analysis of modern-day
social engineering attacks, bridging a much-needed gap
between theoretical constructs and tangible real-world
manifestations. This depth of analysis is poised to elevate
understanding and preparation against such threats. Secondly,
a unique framework is proposed, drawing from the insights
harvested. This framework, specifically oriented towards AI-augmented cybersecurity measures against social engineering,
can be envisioned as a blueprint. Institutions and organizations
keen on integrating AI-driven defenses could find this an
invaluable reference. A further contribution is the introduction
of evaluative metrics and tools. These have been tailored to
measure the effectiveness of AI interventions against social
engineering incursions. This toolkit, apart from its immediate
utility, paves the way for its adoption and adaptation in future
research and practical applications. The study also pioneers in
highlighting future trajectories. By elucidating potential
challenges and pitfalls, it offers a roadmap for subsequent
research endeavors, hinting at areas demanding focus. Lastly,
the research transcends mere academic discourse. Its findings
and recommendations have palpable real-world implications.
IT professionals, corporations, and even policy framers could
derive actionable insights, aiding in the refinement and
adoption of AI tools to thwart social engineering threats.
II. LITERATURE REVIEW
The realm of social engineering, while rooted in age-old
principles of manipulation and deceit, has been dynamically
evolving, especially in the context of our contemporary digital
society. To truly grasp the magnitude and nuances of this
progression, a deep dive into pertinent scholarly literature is
essential. Modern Social Engineering Techniques: At the
forefront of the digital age, malefactors have recalibrated and
refined their strategies to exploit the human element in
cybersecurity. Recent literature has identified a suite of
sophisticated techniques that have gained prominence. Spear
phishing, which targets specific individuals or groups with
personalized lures, has emerged as a particularly effective
variant of traditional phishing attacks [7]. Whaling, targeting
high-profile individuals, showcases the audacity and precision
of contemporary social engineers [8]. Furthermore, vishing
(voice phishing) and smishing (SMS phishing) utilize
telecommunication pathways, reflecting the multifaceted
attack vectors now in play [9]. Implications of Modern
Techniques: The implications of these evolved techniques
stretch beyond mere unauthorized access or data breaches.
The psychological impact on victims, the erosion of trust in
digital communication channels, and the significant financial
repercussions for organizations are profound [10]. For
instance, business email compromise (BEC) attacks, a form of
spear phishing, have led to substantial financial losses, even
crippling organizations [11]. Moreover, as social platforms
become integrated into professional ecosystems, techniques
like baiting through social media platforms underscore the
blurred lines between personal and professional digital spaces,
leading to heightened vulnerabilities [12].
The intricate dance of cybersecurity has historically leaned
on a plethora of defense mechanisms, tailored to counter the
myriad threats birthed by our transition into the digital age.
However, as this section seeks to elucidate, while these
erstwhile solutions held fort against the challenges of their
time, they exhibit marked limitations when pitted against the
nuanced, evolving strategies of contemporary social
engineering [13]. The initial fortresses against social
engineering predominantly revolved around bolstering human
defense through education and awareness campaigns [14].
Immense organizational resources were channeled into
workshops, training modules, and simulated phishing
exercises in a bid to fortify this human firewall [15].
Complementing these were technological bulwarks, ranging
from spam filters and antivirus software to signature-based
detection mechanisms, heralded as the vanguards against
phishing and malware threats [16].
Yet, the evolving threat landscape has exposed chinks in
these defenses. For instance, signature-based mechanisms,
despite their efficacy against known threats, are inherently
reactive, often leaving systems vulnerable to novel, uncharted
attacks [17]. The heavy reliance on human discernment,
despite its merits, has shown its fragility; even the most
astutely trained individual can falter amidst the barrage of
sophisticated digital threats [18]. Moreover, the adaptability,
or rather the rigidity, of traditional solutions is increasingly
evident. Adversaries, in their relentless pursuit of ingenuity,
have often outpaced static defenses like spam filters,
illustrating a stark need for these tools to evolve at par [19]. A
corollary challenge, particularly with spam filters and
antivirus platforms, is their inclination toward false positives,
inadvertently filtering legitimate communications, leading to
operational hiccups and potential lost engagements [20].
Piecing together this narrative underscores a pivotal
revelation: the traditional bastions of cybersecurity, while
foundational, are increasingly outpaced by the dynamism of
modern threats. This stark juxtaposition accentuates the
urgency for innovative, adaptive, and comprehensive
strategies in our ongoing battle against the complexities of
modern social engineering [21]. The ever-advancing realm of
cybersecurity has witnessed a continuous ebb and flow of
defenses and threats, each trying to outdo the other. Amidst
this oscillation, the rise and integration of Artificial
Intelligence (AI) into defense mechanisms signal a seminal
shift in the landscape [22]. The foray of AI into cybersecurity
can be attributed to the inherent limitations of traditional, rule-based systems. Confronted with an increasing magnitude and
sophistication of digital threats, these systems often found
themselves ill-equipped to rapidly adapt or identify patterns
within expansive data streams. AI, distinguished by its
capabilities in intricate data processing, pattern recognition,
and predictive modeling, emerged as a beacon, promising an
adaptive and anticipatory defensive posture [23].
The contributions of AI to cybersecurity are multifaceted
and profound. Machine learning, a cardinal subset of AI, has
demonstrated unprecedented prowess in analyzing
voluminous datasets, extracting insights, and discerning
potential threats with remarkable accuracy. Parallelly, the
domain of Natural Language Processing (NLP), which dwells
at the intersection of human language and computation, has
been effectively harnessed to identify and counteract more
covert threats, such as those embedded within seemingly
innocuous communications [24]. However, concerns, ranging
from AI-driven false positives to the more philosophical and
ethical debates about completely automated systems making
critical decisions sans human intervention, punctuate the
discourse [25] [26].
III. CONTEMPORARY SOCIAL ENGINEERING ATTACKS
As the digital landscape has expanded and evolved, so too
have the methods employed by adversaries to exploit its
vulnerabilities. Particularly in the sphere of social
engineering, we've witnessed a metamorphosis of tactics that
capitalize on human psychology and behavior. These
techniques, old and new, woven together, create a mosaic of
threats that challenges even the most robust cybersecurity
defenses.
A. Evolution and Adaptation
Historically, social engineering attacks had modest
beginnings, rooted in straightforward deception and
manipulation endeavors. Early instances might have
encompassed simple pretexting over telephonic conversations
or rudimentary phishing attempts via emails [27]. Over time,
as technology embedded itself deeper into everyday life and
professional environments, malefactors saw more intricate
pathways to exploit, leading to an evolution in their modus
operandi. The rise of the internet and, subsequently, social
media platforms and mobile technologies, ushered in an era of
increased complexity for these attacks. Spear phishing,
targeting specific individuals with tailored lures based on
meticulously gathered personal information, emerged as a
refined offshoot of traditional phishing [28]. Techniques such
as baiting, which entices victims with digital carrots, like free
software downloads, became more prevalent. Similarly, the
proliferation of smartphones and mobile apps saw the advent
of smishing, where deceptive text messages aim to manipulate
recipients into divulging sensitive information [29].
However, it's not just the attack vectors that have evolved;
the targets have shifted as well. High-profile individuals, top-tier executives, and critical infrastructural entities became
primary targets in what is now termed as "whaling" – attacks
that aim for the big fish, so to speak [30]. Analyzing this
trajectory, it's evident that social engineering attacks, while
retaining their core principle of manipulating human behavior,
have adapted astoundingly in tandem with technological
advancements and societal shifts. This continuous
metamorphosis, both in strategy and complexity, underscores
the criticality of remaining vigilant and ahead of the curve,
necessitating a constantly evolving defense mechanism in the
ever-changing game of cyber cat and mouse [31].
B. Threat Categories and Methods
As the cybersecurity domain grapples with the
multifarious challenges posed by social engineering, it's
imperative to understand the diverse threat categories and the
methods employed within each. By categorizing and delving
deep into these tactics, one gains a clearer perspective on the
nature of the threat landscape and can tailor defenses more
effectively.
1) Phishing: Different types and their methodologies
Phishing remains one of the most prevalent forms of social
engineering attacks. Historically, it involved casting a wide
net in the hopes of ensnaring unsuspecting victims with
generic lures, often through deceptive emails [32]. However,
its methodologies have evolved. Spear phishing, for instance,
targets specific individuals or organizations with meticulously
crafted messages, often leveraging personal information to
enhance its credibility [33]. Another variant, known as
whaling, specifically targets high-ranking officials or
executives, leveraging their access to critical data or financial
resources [34]. Each of these phishing methodologies
underscores the blend of technical subterfuge and
psychological manipulation employed by attackers.
2) Baiting, Quid Pro Quo, and associated tactics
Baiting, as the name suggests, involves dangling
something enticing to lure victims. In the digital realm, this
often translates to offering tempting downloads, like free
software or media files, which in reality are malicious
payloads [35]. Quid Pro Quo, on the other hand, operates on
the principle of exchange. Here, the attacker offers a service
or benefit in return for information or access. A classic
example is an attacker posing as an IT helpdesk personnel,
offering assistance in exchange for login credentials [36].
3) Impersonation techniques and Pretexting
Impersonation in social engineering attacks involves an
attacker assuming a trusted role or identity to deceive the
victim. This can range from pretending to be a co-worker to
mimicking trusted entities like banks or service providers [37].
Pretexting is an associated tactic where the attacker fabricates
a scenario or pretext to obtain information. For instance, they
might pose as a surveyor needing data for a "research study"
or as a tech support representative claiming to need certain
details to "resolve an issue" [38].
4) New-age vectors: Exploits through social media,
mobile platforms, etc.
The proliferation of social media platforms and the
ubiquity of mobile devices have paved the way for novel
attack vectors. Attackers are increasingly leveraging
platforms like Facebook, Twitter, and LinkedIn to gather
information and craft sophisticated attacks [39]. Mobile apps,
with their diverse permissions, present another vulnerability,
especially if users inadvertently download malicious
applications masquerading as legitimate ones. These apps can
then access sensitive data, record conversations, or even track
user movements [40].
IV. AI FUNDAMENTALS IN CYBERSECURITY
In the complex tapestry of cybersecurity, the introduction
and rapid ascension of Artificial Intelligence (AI) have
heralded a transformative phase. AI, with its plethora of tools
and methodologies, has opened up new vistas of possibilities,
promising to reshape the contours of cybersecurity measures,
making them more adaptive, predictive, and resilient [42]. At
the core of AI lies the principle of enabling machines to mimic
and replicate human-like thinking and decision-making
processes. Rather than relying on rigid, pre-defined
algorithms, AI systems learn, adapt, and evolve based on the
data they process, continually refining their operations and
predictions. This dynamic nature of AI stands in stark contrast
to traditional rule-based systems, presenting a paradigm shift
in how computational entities perceive, process, and respond
to data [43].
A critical component of AI, and perhaps its most
recognized facet, is machine learning (ML). Machine learning
can be envisioned as a subset of AI, dedicated to the
development of algorithms that allow computers to learn and
make decisions without explicit programming. In the context
of cybersecurity, ML models are trained using vast datasets,
encompassing both benign and malicious activities. Over
time, these models "learn" to discern patterns, anomalies, and
behaviors, making them exceptionally adept at detecting
threats, even those previously unseen or unknown [44]. The
intricate interplay of AI and its machine learning components
promises a dynamic and responsive cybersecurity landscape.
By harnessing the power of AI, we are not merely adding
another tool to the cybersecurity arsenal; we are
fundamentally redefining the very foundations upon which
our digital defenses are built, making them more attuned to the
ever-evolving threat landscape [45].
V. ROLE OF AI ACROSS DETECTION, PREVENTION, AND
RESPONSE SPECTRA
The integration of Artificial Intelligence (AI) into
cybersecurity heralds a new age of enhanced defense
capabilities. AI's versatility and adaptability position it as a
formidable force, capable of significantly influencing the triad
of cybersecurity: detection, prevention, and response [46].
A. Detection
At the forefront of any robust cybersecurity strategy lies
the ability to swiftly detect threats. Traditional systems, reliant
on predefined signatures, often falter in the face of novel or
evolved threats. AI, with its foundation in data-driven
decision-making, excels in this domain. Machine learning
models, once trained on vast datasets, can identify subtle
patterns and anomalies that might elude conventional systems.
This capability extends beyond mere threat recognition; it
encompasses the anticipation of potential vulnerabilities based
on historical and real-time data, ensuring that defenses are not
merely reactive but also predictive [47].
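The deviation-from-baseline detection described above can be sketched in a few lines. This is a minimal illustration under stated assumptions: the event counts, the z-score threshold, and the function name are hypothetical choices for exposition, not the systems the paper evaluates.

```python
from statistics import mean, stdev

def flag_anomalies(baseline, observed, threshold=3.0):
    """Flag observations deviating sharply from a learned baseline.

    baseline: historical per-hour event counts (the "normal" profile).
    observed: new per-hour event counts to screen.
    Returns indices of observations more than `threshold` standard
    deviations above the baseline mean.
    """
    mu, sigma = mean(baseline), stdev(baseline)
    return [i for i, x in enumerate(observed)
            if sigma > 0 and (x - mu) / sigma > threshold]

# Typical hourly login volumes, then a sudden spike at index 2.
history = [40, 42, 38, 41, 39, 43, 40, 44]
new = [41, 39, 400, 42]
print(flag_anomalies(history, new))  # [2]
```

A production system would learn a richer, continuously updated model rather than a fixed mean and deviation, but the principle is the same: the "normal" profile is induced from data, not hand-written as a rule.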
B. Prevention
While detection is pivotal, the ultimate goal of any
cybersecurity measure is the prevention of threats. AI elevates
preventive measures by continuously refining defense
mechanisms based on the threats it detects and learns from.
For instance, AI-driven systems can automatically adjust
firewall rules or filter settings in real-time, based on emerging
threat patterns. Moreover, AI can simulate potential attack
scenarios, enabling organizations to identify and patch
vulnerabilities proactively, before they can be exploited.
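The real-time adjustment of filter settings mentioned above can be illustrated with a small feedback rule. The function name, rate scaling, and bounds below are assumptions made for the sketch; an actual AI-driven system would derive the adjustment from a learned model rather than a fixed formula.

```python
def adjust_block_threshold(recent_hits, window, floor=0.5, ceil=0.99):
    """Tighten a spam/phishing block threshold under attack pressure.

    recent_hits: confirmed malicious messages seen in the last window.
    A higher observed attack rate lowers the confidence score a
    message needs before it is blocked (tighter filtering).
    """
    rate = recent_hits / window
    # Shift the threshold from `ceil` toward `floor` as pressure rises.
    new = ceil - (ceil - floor) * min(rate * 10, 1.0)
    return max(floor, min(ceil, new))

print(adjust_block_threshold(recent_hits=8, window=100))   # 0.598
print(adjust_block_threshold(recent_hits=50, window=100))  # clamped to 0.5
```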
C. Response
Even with the most advanced defenses in place, breaches
or compromises can still occur. Herein lies the third critical
dimension: response. AI's role in response mechanisms
transforms the traditionally manual and time-intensive
processes. Post a breach, AI systems can swiftly analyze the
extent of compromise, identify the intrusion's source, and
recommend or even autonomously implement containment
measures. Furthermore, AI-driven forensics tools can dissect
the breach, extracting valuable insights about the attack
vector, methodologies used, and potential future threats,
thereby continuously refining the system's knowledge base
and enhancing its response for subsequent threats.
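The automated containment recommendation described above amounts, at its simplest, to mapping detected indicators onto a response playbook. The indicator names and actions below are hypothetical examples, not a standard taxonomy; a real system would rank and contextualize actions rather than return a flat list.

```python
# Hypothetical mapping from detected indicators to containment steps.
PLAYBOOK = {
    "credential_theft": ["force password reset", "revoke active sessions"],
    "malware_dropper": ["isolate host", "block file hash at gateway"],
    "data_exfiltration": ["block destination IP",
                          "snapshot host for forensics"],
}

def recommend_containment(indicators):
    """Collect containment steps for every matched indicator."""
    steps = []
    for ind in indicators:
        steps.extend(PLAYBOOK.get(ind, []))
    return steps

print(recommend_containment(["credential_theft", "data_exfiltration"]))
```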
VI. DETECTION: AI'S PIVOTAL ROLE
In the vast domain of cybersecurity, early and accurate
detection remains the linchpin for a successful defense
strategy. While traditional methods have made significant
strides, the integration of Artificial Intelligence (AI) has
imparted an unprecedented depth and sophistication to
detection capabilities. AI, with its myriad of tools and
techniques, has ushered in a transformative era where
detection is not merely about identifying known threats but
also about proactively discerning and mitigating emerging and
unforeseen vulnerabilities.
A. Behavioral Pattern Recognition via Machine Learning
A central tenet of AI's prowess in the realm of detection
lies in its ability to recognize complex behavioral patterns, a
feat achieved primarily through machine learning (ML)
algorithms. Unlike conventional systems that typically rely on
static signatures or predefined rules, machine learning models
are trained on expansive datasets that encompass a vast
spectrum of behaviors, both benign and malicious. The
beauty of ML-driven behavioral pattern recognition is its
dynamic nature. As these models continuously process and
analyze data, they "learn" and refine their understanding of
what constitutes normal behavior for a given system or
network. Any deviation from this established norm, however
subtle, can be flagged as a potential threat. This capability is
especially crucial in detecting zero-day attacks or novel
threats that don't match any known signature but deviate from
typical behavioral patterns. Moreover, the granularity of ML's
behavioral analysis extends beyond mere system interactions.
It can discern patterns at the user level, identifying anomalies
such as an employee accessing sensitive data at odd hours or
a sudden surge in data transfer from a particular device. These
nuanced detections, which might elude traditional systems,
are made possible due to the depth and sophistication of ML
algorithms [44].
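The user-level anomaly example above, an employee accessing sensitive data at odd hours, can be sketched as learning each user's habitual hours and flagging departures from them. The class and method names are illustrative assumptions; real deployments model many more features than the hour of day.

```python
from collections import defaultdict

class AccessProfiler:
    """Learn each user's habitual working hours, then flag deviations."""

    def __init__(self):
        self.hours = defaultdict(set)  # user -> hours seen in training

    def observe(self, user, hour):
        """Record one observed access hour during the learning phase."""
        self.hours[user].add(hour)

    def is_anomalous(self, user, hour):
        """Flag access at an hour never seen for this user.

        Users with no baseline yet are not flagged, to avoid
        alerting on every new account.
        """
        seen = self.hours[user]
        return bool(seen) and hour not in seen

profiler = AccessProfiler()
for h in range(9, 18):          # user normally active 09:00-17:00
    profiler.observe("alice", h)

print(profiler.is_anomalous("alice", 3))   # 03:00 access -> True
print(profiler.is_anomalous("alice", 10))  # within habit  -> False
```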
B. Utilizing NLP for Pinpointing Phishing Endeavors
Phishing attacks, with their deceptive allure, have long
posed significant threats in the cybersecurity realm. As these
threats become increasingly sophisticated, mirroring genuine
communications to a concerning degree of accuracy, the
challenge of detection has accentuated. Within this complex
landscape, Natural Language Processing (NLP), an offshoot
of Artificial Intelligence (AI) dedicated to understanding and
interpreting human language computationally, stands out as a
potent tool for combating such deceptive maneuvers [45].
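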
The core strength of NLP lies in its capability to process
and discern nuances in language. By enabling machines to
analyze, comprehend, and generate linguistic constructs
contextually, NLP offers a novel approach to identify
anomalies often present in phishing communications. These
anomalies, whether they're syntactic discrepancies, semantic
mismatches, or stylistic deviations from standard
communication patterns, can be red flags indicating deceitful
intents. Phishing emails, despite their deceptive design, often
exhibit linguistic patterns that are slightly off-kilter from
genuine communications. Trained NLP algorithms, armed
with vast datasets comprising both legitimate correspondences
and known phishing attempts, can effectively spot these
inconsistencies. Their ability to pinpoint such disparities in
real-time offers a robust line of defense against phishing
endeavors. Moreover, the dynamic nature of NLP ensures that
it remains abreast of evolving phishing strategies. As
malicious actors refine their linguistic tactics or adapt to
emerging communication trends, continuously updated NLP
models can detect these changes, ensuring that the shield
against phishing remains both current and robust [46].
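The linguistic-inconsistency detection discussed above can be illustrated with a tiny Naive Bayes text classifier trained on labelled messages. This is a didactic sketch under stated assumptions: the four-message corpus, whitespace tokenization, and labels are invented for the example, and production NLP phishing detectors use far richer models and features.

```python
import math
from collections import Counter

def train(corpus):
    """corpus: list of (text, label) pairs, label 'phish' or 'ham'."""
    counts = {"phish": Counter(), "ham": Counter()}
    docs = Counter()
    for text, label in corpus:
        docs[label] += 1
        counts[label].update(text.lower().split())
    return counts, docs

def classify(text, counts, docs):
    """Pick the label with the highest (log) Naive Bayes score."""
    tokens = text.lower().split()
    vocab = set(counts["phish"]) | set(counts["ham"])
    best, best_score = None, -math.inf
    for label in ("phish", "ham"):
        total = sum(counts[label].values())
        score = math.log(docs[label] / sum(docs.values()))  # prior
        for t in tokens:
            # Laplace smoothing so unseen tokens don't zero the score.
            score += math.log((counts[label][t] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

corpus = [
    ("verify your account password urgently", "phish"),
    ("click this link to claim your prize", "phish"),
    ("meeting agenda attached for monday", "ham"),
    ("quarterly report draft for review", "ham"),
]
counts, docs = train(corpus)
print(classify("urgently verify your password", counts, docs))  # phish
```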
C. Employing Predictive Analytics to Forecast Potential
Attack Vectors
In the ceaseless evolution of the cybersecurity landscape,
the ability to proactively identify and mitigate potential threats
before they manifest has become paramount. Traditionally,
defenses were largely reactionary, responding to threats post-emergence. However, with the burgeoning complexity and
volume of cyberattacks, a paradigm shift towards anticipatory
defense mechanisms is crucial. In this context, predictive
analytics, underpinned by advanced algorithms and vast data-driven insights, emerges as a vanguard in forecasting potential
attack vectors. Predictive analytics involves harnessing a
myriad of data sources, both historical and real-time, to extract
patterns, correlations, and trends. By analyzing this data
through sophisticated AI-driven algorithms, it becomes
feasible to make informed predictions about future events or
potential vulnerabilities. In the realm of cybersecurity, this
translates to identifying patterns of behavior or system
interactions that might precede an attack or signal an emerging
vulnerability. For instance, an unusual surge in network traffic
to a specific server or a series of failed login attempts from a
particular geographic region might be indicative of a
forthcoming Distributed Denial of Service (DDoS) attack or a
brute-force attempt, respectively. Predictive analytics can not
only detect these precursors but also extrapolate them to
forecast the nature, magnitude, or even the likely timeframe of
the potential attack [47].
Beyond mere detection, the real strength of predictive
analytics lies in its prescriptive capabilities. By continuously
monitoring and learning from the digital ecosystem, these
analytical models can recommend proactive measures to preemptively bolster defenses or address vulnerabilities. Whether
it's adjusting firewall settings, patching software, or even
altering user access permissions, these prescriptive insights
ensure that the system remains a step ahead of potential threats
[50]. Moreover, as the cyber domain continually evolves, so
does the sophistication of predictive models. Continuous
learning, coupled with feedback loops, ensures that these
models refine their predictions, adapt to new threat
landscapes, and incorporate emerging trends, offering a
dynamic and agile approach to threat forecasting.
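The brute-force precursor example above, a series of failed logins from one source, can be sketched as a sliding-window counter that warns before the attack succeeds. The window length, warning threshold, and class name are assumptions for illustration; a predictive system would also estimate the attack's likely scale and timing from historical patterns.

```python
from collections import deque

class BruteForcePrecursor:
    """Track failed logins per source in a sliding time window."""

    def __init__(self, window=10, warn_at=5):
        self.window = window    # seconds of history to retain
        self.warn_at = warn_at  # failures that trigger a warning
        self.events = {}

    def record_failure(self, source, t):
        """Record a failed login at time t; return True if this source
        now looks like a brute-force precursor."""
        q = self.events.setdefault(source, deque())
        q.append(t)
        while q and t - q[0] > self.window:
            q.popleft()  # drop failures outside the window
        return len(q) >= self.warn_at

mon = BruteForcePrecursor(window=10, warn_at=5)
alerts = [mon.record_failure("203.0.113.7", t) for t in (0, 2, 3, 5, 6)]
print(alerts)  # warning fires only on the fifth failure
```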
VII. PREVENTION: AI'S PROACTIVE CAPABILITIES
The essence of a robust cybersecurity strategy lies not just
in its ability to detect and respond to threats but, more
crucially, in its capacity to prevent them. As digital threats
become increasingly sophisticated, the conventional
boundaries of preventive measures are being stretched,
necessitating a more advanced, proactive approach. Here,
Artificial Intelligence (AI) steps in, offering a suite of
capabilities that transform the very fabric of preventive
cybersecurity, ensuring that defenses are not just reactive but
anticipatory and resilient.
A. AI-Enhanced Training and User Awareness Campaigns
While technology plays an undeniable role in
cybersecurity, the human element remains both a critical asset
and a potential vulnerability. Historically, user awareness
campaigns and training modules have been instrumental in
fortifying this human firewall. However, with the diverse
range of threats and the dynamic nature of cyber risks,
traditional training methods may fall short in adequately
preparing users. AI's integration into training and awareness
initiatives offers a paradigm shift. Instead of generic, one-size-fits-all training modules, AI enables the creation of
personalized, adaptive training experiences. By analyzing
individual user behavior, past interactions, and even response
times to simulated threats, AI can craft training scenarios
tailored to each user's proficiency level and specific
vulnerabilities [48].
For instance, a user who often clicks on embedded links in
emails might be presented with more rigorous phishing
simulations, while another who frequently downloads
attachments might undergo training focused on malware
threats. These AI-driven simulations are not static; they evolve
based on user responses, ensuring that training remains
challenging, relevant, and engaging. Furthermore, AI-enhanced awareness campaigns can leverage real-time data to
alert users about emerging threats. Instead of periodic,
scheduled updates, users can receive just-in-time notifications
about new vulnerabilities, attack vectors, or best practices,
ensuring that they are continually updated and vigilant [49].
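The personalization logic sketched above can be reduced to selecting the next module from a user's observed risk behaviors. The behavior names, module catalogue, and profile format below are hypothetical, chosen only to mirror the examples in the text.

```python
# Illustrative sketch: pick the next training module from a user's observed
# risky behaviours, as described above. Behaviour names, modules, and the
# profile format are hypothetical assumptions.

MODULES = {
    "link_clicks": "advanced phishing simulation",
    "attachment_downloads": "malware-awareness training",
    "credential_reuse": "password-hygiene refresher",
}

def next_training(user_profile):
    """user_profile: dict mapping behaviour -> observed rate of risky actions (0..1).
    Returns the module targeting the user's weakest behaviour."""
    behaviour = max(user_profile, key=user_profile.get)
    return MODULES.get(behaviour, "general security-awareness course")

profile = {"link_clicks": 0.35, "attachment_downloads": 0.62, "credential_reuse": 0.10}
print(next_training(profile))  # targets the most frequent risky behaviour
```

A real deployment would update the profile after each simulated threat, so the chosen module evolves with the user's responses.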
B. AI-Driven Robust Multi-Factor Authentication Mechanisms
The increasing intricacy of the digital landscape and the
surging sophistication of cyberattacks have underscored the
limitations of traditional authentication methods, such as
simple password-based systems. As cyber adversaries
continue to deploy innovative methods to breach defenses,
there's a pressing need to enhance the security and resilience
of authentication processes. Multi-factor authentication
(MFA), a system that requires users to provide two or more
verification factors to gain access, has emerged as a potent
solution. However, with the advent of Artificial Intelligence
(AI), MFA has been elevated to an even higher echelon of
security, ensuring that access control is both robust and
adaptive [50].
AI-enhanced MFA doesn't merely rely on static factors
like passwords, PINs, or smart cards. Instead, it incorporates
dynamic elements, often derived from user behavior and real-time context. One of the most promising AI-driven techniques
in this realm is behavioral biometrics, which focuses on the
unique ways individuals interact with their devices—such as
typing rhythms, mouse movements, or touch dynamics. By
continuously analyzing these patterns, AI systems can discern
subtle anomalies, potentially flagging unauthorized access
attempts even if the intruder has the correct credentials [50].
Authorized licensed use limited to: UNIVERSITY OF JORDAN. Downloaded on June 12,2024 at 18:08:54 UTC from IEEE Xplore. Restrictions apply.
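A minimal sketch of the behavioral-biometrics idea: compare a session's inter-keystroke timings against a user's enrolled rhythm and flag large deviations. The timing values and the z-score cutoff of 3 are assumptions for illustration; real systems model far richer features (dwell time, digraph latencies, mouse dynamics).

```python
import statistics

# Illustrative sketch of behavioural biometrics: flag a session whose mean
# inter-keystroke interval deviates sharply from the user's enrolled profile.
# The sample timings and z-score threshold are hypothetical.

def enroll(samples_ms):
    """Build a simple timing profile from enrolment keystroke intervals (ms)."""
    return {"mean": statistics.mean(samples_ms),
            "stdev": statistics.stdev(samples_ms)}

def is_anomalous(profile, session_ms, z_threshold=3.0):
    """True if the session rhythm deviates beyond z_threshold standard deviations."""
    session_mean = statistics.mean(session_ms)
    z = abs(session_mean - profile["mean"]) / profile["stdev"]
    return z > z_threshold

profile = enroll([110, 120, 115, 125, 118, 112, 122])
# Same user: timings close to the enrolled rhythm.
print(is_anomalous(profile, [117, 119, 114, 121]))
# Intruder with valid credentials but a much faster, scripted rhythm.
print(is_anomalous(profile, [40, 42, 38, 41]))
```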
Beyond behavioral aspects, AI-driven MFA also leverages
contextual information to enhance security. For instance,
geolocation data, time of access, or even the nature of the
requested data can be used to determine the authenticity of a
request. If a user typically accesses a system from a specific
location during regular business hours, an access attempt from
a different continent at an odd hour, even with the correct
password, might be flagged as suspicious.
Furthermore, adaptive AI algorithms can adjust
authentication requirements in real time based on perceived
risk. A routine login might require just one or two factors,
while an access attempt deemed high-risk, perhaps due to an
unusual data download request, might trigger additional
authentication challenges. This ensures that security is
heightened when necessary, without compromising user
convenience during regular interactions.
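The contextual and adaptive behaviors described above can be sketched as a risk score built from contextual signals, with the number of required factors scaling with perceived risk. The signal names, weights, and factor counts are illustrative assumptions, not a standard.

```python
# Illustrative sketch of adaptive, context-aware MFA: score a login request
# from contextual signals and scale the number of required authentication
# factors with perceived risk. Weights and thresholds are hypothetical.

def risk_score(request, profile):
    score = 0
    if request["country"] != profile["usual_country"]:
        score += 2  # access from an unusual location
    if not profile["work_hours"][0] <= request["hour"] <= profile["work_hours"][1]:
        score += 1  # access at an odd hour
    if request.get("bulk_download"):
        score += 2  # unusual data download request
    return score

def required_factors(score):
    if score >= 3:
        return 3  # e.g. password + OTP + step-up challenge
    if score >= 1:
        return 2
    return 1

profile = {"usual_country": "JO", "work_hours": (8, 18)}
routine = {"country": "JO", "hour": 10}
suspicious = {"country": "BR", "hour": 3, "bulk_download": True}
print(required_factors(risk_score(routine, profile)))     # low risk
print(required_factors(risk_score(suspicious, profile)))  # high risk
```

The same structure generalizes to learned risk models: replace the hand-set weights with a classifier's output probability.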
C. Predictive Modeling for Vulnerability Assessment
In the continuous battle to safeguard digital assets and
infrastructures, understanding and anticipating vulnerabilities
has emerged as a cornerstone for effective cybersecurity.
Traditional methods of vulnerability assessment, though
invaluable, often operate reactively, identifying weaknesses
after they have become exploitable or, in unfortunate cases,
after they have been exploited. However, the incorporation of
Artificial Intelligence (AI), particularly predictive modeling,
into this domain is revolutionizing the way organizations
anticipate and mitigate potential weak points in their systems.
Predictive modeling, at its core, leverages vast amounts of
data, both historical and real-time, to forecast potential
outcomes or trends. When applied to vulnerability assessment
in cybersecurity, these models process extensive datasets
related to system configurations, past vulnerabilities, network
traffic patterns, software behaviors, and even external threat
intelligence. Through intricate algorithms, the models then
discern patterns and correlations that might hint at potential
vulnerabilities [51].
One significant advantage of AI-driven predictive
modeling in vulnerability assessment is its capability to
process and analyze vast multitudes of data at granular levels,
far beyond the capability of manual analysis. By ingesting
information from varied sources, such as system logs, patch
histories, and user activities, these models can detect subtle
anomalies or configurations that might lead to vulnerabilities
in the future. Moreover, the dynamic nature of predictive
models ensures that they continuously learn and adapt. As new
vulnerabilities are discovered and patched, or as threat
landscapes evolve, the models update their predictive
algorithms, ensuring that their assessments are always aligned
with the current threat environment. This continuous learning
process is particularly valuable in an era where software
updates are frequent, and new vulnerabilities can emerge
rapidly. Another crucial facet of AI-based predictive
vulnerability assessment is its ability to prioritize. Not all
vulnerabilities carry the same risk, and addressing them
requires resource allocation. Predictive models can evaluate
the potential impact and likelihood of a vulnerability being
exploited, providing organizations with a risk-weighted list,
ensuring that the most critical vulnerabilities are addressed
promptly [52].
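The risk-weighted prioritization described above amounts to ranking vulnerabilities by predicted exploitation likelihood times impact. The sample entries and scoring scales below are hypothetical.

```python
# Illustrative sketch of risk-weighted vulnerability prioritisation: rank by
# predicted exploitation likelihood x impact, as the paragraph above
# describes. The sample data and scoring scales are hypothetical.

def prioritise(vulns):
    """vulns: list of dicts with 'id', 'likelihood' (0..1, e.g. from a
    predictive model) and 'impact' (1..10). Returns highest risk first."""
    return sorted(vulns, key=lambda v: v["likelihood"] * v["impact"], reverse=True)

vulns = [
    {"id": "CVE-A", "likelihood": 0.9, "impact": 4},   # risk 3.6
    {"id": "CVE-B", "likelihood": 0.3, "impact": 10},  # risk 3.0
    {"id": "CVE-C", "likelihood": 0.8, "impact": 9},   # risk 7.2
]
ranked = prioritise(vulns)
print([v["id"] for v in ranked])  # CVE-C first
```

Note how the high-impact but low-likelihood CVE-B ranks last: this is exactly the resource-allocation trade-off the text describes.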
VIII. AI-ENHANCED COUNTERMEASURES
In the multifaceted landscape of cybersecurity, response
mechanisms hold paramount importance. Once a threat
penetrates the defensive perimeter, the efficiency, accuracy,
and speed of the response can determine the magnitude of
damage, potential data loss, and the subsequent impact on an
organization's reputation and operational continuity. As
digital threats grow in complexity and speed, traditional
response measures, often reliant on manual intervention, may
not suffice. Enter Artificial Intelligence (AI), which promises
to supercharge response strategies, ensuring they are agile,
precise, and timely.
A. Immediate Threat Neutralization through AI
When a cyber-threat materializes, every second counts.
Delays in detection, containment, or neutralization can
escalate the ramifications, sometimes exponentially. AI-driven response systems, imbued with machine learning and
real-time data processing capabilities, offer a transformative
approach to immediate threat neutralization. At the heart of
such AI-enhanced countermeasures is the ability to recognize
and act upon threats autonomously. Unlike traditional
systems, which might raise an alert and then await human
intervention, AI-driven systems can be programmed to take
pre-defined actions upon detecting certain threat patterns. For
instance, if a potential ransomware activity is detected, the
system can immediately isolate the affected segment of the
network, halting the spread and mitigating broader impact
[45].
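The autonomous containment pattern above can be sketched as an event handler that isolates a network segment the moment events match a pre-defined ransomware signature. The event fields, the rename-burst threshold, and the isolation call are all hypothetical placeholders for a real SDN or EDR action.

```python
# Illustrative sketch of autonomous containment: when events match a
# pre-defined ransomware-like pattern, isolate the affected network segment
# immediately rather than waiting for human triage. Event fields, the
# threshold, and the isolation call are hypothetical.

RENAME_BURST = 100  # file renames per minute treated as ransomware-like

isolated_segments = set()

def isolate(segment):
    # Placeholder for a real network-control API (e.g. an SDN or EDR action).
    isolated_segments.add(segment)

def handle_events(events):
    for event in events:
        ransomware_like = (event["type"] == "file_rename_burst"
                           and event["renames_per_min"] >= RENAME_BURST)
        if ransomware_like and event["segment"] not in isolated_segments:
            isolate(event["segment"])

handle_events([
    {"type": "file_rename_burst", "renames_per_min": 250, "segment": "finance-lan"},
    {"type": "file_rename_burst", "renames_per_min": 12, "segment": "dev-lan"},
])
print(isolated_segments)  # only the burst above threshold is contained
```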
B. Post-Attack Analysis Using Machine Learning Frameworks
The aftermath of a cyberattack, while undeniably
challenging, provides a crucial window of opportunity for
organizations to glean insights, reassess their defenses, and
bolster their preparedness for future threats. Traditional postattack analysis methods, though insightful, might not fully
capture the breadth and depth of modern cyberattacks, given
their evolving complexity. Machine learning (ML), a subset
of Artificial Intelligence (AI) characterized by its ability to
autonomously learn from data, emerges as an indispensable
tool in enhancing the granularity and efficacy of post-attack
analysis. Machine learning frameworks, tailored for
cybersecurity analysis, harness vast amounts of data generated
during and after an attack. This data, which can span system
logs, network traffic patterns, user behaviors, and more, is
processed to identify patterns, correlations, and anomalies
associated with the attack. Unlike static rule-based systems,
ML algorithms evolve their understanding based on the data,
enabling them to uncover subtle attack vectors, trace back
intruder pathways, and even identify latent vulnerabilities that
might have been exploited [52].
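As a simplified stand-in for the ML-driven pathway tracing described above, the sketch below correlates heterogeneous log events by actor and rebuilds each actor's ordered timeline. The log format and event names are assumptions for illustration; real frameworks would cluster and score these sequences rather than merely sort them.

```python
from collections import defaultdict

# Illustrative sketch of post-attack analysis: correlate log events by actor
# and reconstruct each actor's timeline to trace an intruder's pathway.
# The log tuple format and event names are hypothetical.

def build_timelines(events):
    """events: list of (timestamp, actor, action) tuples.
    Returns a dict mapping actor -> chronologically ordered actions."""
    timelines = defaultdict(list)
    for ts, actor, action in sorted(events):
        timelines[actor].append(action)
    return dict(timelines)

logs = [
    (3, "10.0.0.9", "privilege_escalation"),
    (1, "10.0.0.9", "phishing_link_click"),
    (5, "10.0.0.9", "data_exfiltration"),
    (2, "10.0.4.7", "normal_login"),
]
timelines = build_timelines(logs)
print(timelines["10.0.0.9"])
# ['phishing_link_click', 'privilege_escalation', 'data_exfiltration']
```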
Table 1: Analysis of AI in Detection of Social Engineering Attacks

Feature: Behavioral Pattern Recognition
- AI Tools/Techniques: Neural Networks; Decision Trees; Support Vector Machines; Clustering Algorithms
- Real-World Application: Anomaly detection in user behavior; detecting irregular login patterns; unusual transaction monitoring
- Effectiveness: High accuracy on known patterns; struggles with novel behaviors
- Scalability: Good with sufficient resources; may require significant computational power
- Integration Capabilities: Generally well integrated with existing SIEM systems
- User Impact: Minimal direct impact on users
- Limitations: False positives; needs extensive labeled data
- Future Prospects: Development of algorithms with lower false positives; better generalization from less data

Feature: Textual Communication Analysis
- AI Tools/Techniques: Sentiment Analysis; Contextual Embeddings (BERT, GPT-3); Syntax and Style Analysis
- Real-World Application: Phishing email detection; social media monitoring for fraudulent messages; analysis of communication for deceptive cues
- Effectiveness: Good with structured attacks; difficulty with nuanced or evolved language
- Scalability: Highly scalable with cloud-based models
- Integration Capabilities: Can be integrated into email systems and web filters
- User Impact: Minimal unless false positives block legitimate communication
- Limitations: Evolving language of attackers; context understanding
- Future Prospects: Improved adaptability to new forms of language; more robust context and anomaly detection

Feature: Predictive Threat Identification
- AI Tools/Techniques: Time Series Analysis; Clustering; Regression Models; Association Rule Learning
- Real-World Application: Forecasting attack trends; identifying emerging social engineering schemes; understanding attacker behavior over time
- Effectiveness: Varies with data and model; can be highly accurate with quality data
- Scalability: Can be scalable but requires continuous data input
- Integration Capabilities: Depends on the availability of and access to relevant data streams
- User Impact: Can improve proactive defense but may cause alert fatigue
- Limitations: Quality of predictions varies with data; may not catch novel, unseen attacks
- Future Prospects: Real-time data analysis for dynamic prediction; enhanced models for better forecasting

Feature: Semantic Analysis
- AI Tools/Techniques: NLP for semantic understanding; Knowledge Graphs; Semantic Networks
- Real-World Application: Understanding the meaning behind words in communications; detecting sophisticated phishing
- Effectiveness: Effective for well-understood attack vectors
- Scalability: Scalable with advanced NLP models and hardware
- Integration Capabilities: Can be integrated into content filtering and monitoring solutions
- User Impact: Minimal unless false negatives or positives interfere with work
- Limitations: Difficulty with polysemous words
- Future Prospects: More accurate language models; better handling of language evolution and context
IX. CONCLUSION
As the digital world continues its relentless expansion, the
intertwined trajectories of cybersecurity and Artificial
Intelligence (AI) are poised to play defining roles in shaping
the future of information protection and threat mitigation. The
evolving tapestry of this landscape suggests a dynamic
interplay of challenges and opportunities, innovations and
threats, with AI standing as both a beacon of hope and a
domain of intricate complexities. The foreseeable future is
likely to witness a surge in AI-driven proactive defense
mechanisms. Rather than merely reacting to cyber threats,
advanced AI systems will increasingly anticipate and
counteract threats even before they materialize. Leveraging
vast data streams from interconnected devices, especially with
the proliferation of the Internet of Things (IoT), AI algorithms
will offer predictive insights with unparalleled granularity,
allowing for more refined threat assessment and mitigation
strategies. Simultaneously, the very nature of cyber threats is
set to undergo transformation. With AI tools becoming more
accessible, cyber adversaries will likely employ AI-driven
strategies, leading to an arms race of sorts in the cyber domain.
This might result in more sophisticated, AI-powered malware,
intelligent phishing campaigns, or even automated hacking
attempts, necessitating even more advanced AI-driven defense
strategies.
REFERENCES
[1] A. Sood, S. Zeadally, and R. Bansal, "Cybercrime at a Scale: A Practical Study of Deployments of HTTP-Based Botnet Command and Control Panels," IEEE Communications Magazine, vol. 55, no. 7, pp. 22–28, 2017, doi: 10.1109/mcom.2017.1600969.
[2] M. Tang et al., "A simple generic attack on text captchas," in Proceedings of the 2016 Network and Distributed System Security Symposium, San Diego, California, USA, 2016, doi: 10.14722/ndss.2016.23154.
[3]
C. Thanh and I. Zelinka, "A survey on artificial intelligence in
malware as next-generation threats," MENDEL, vol. 25, no. 2, pp.
27–34, 2019, doi:10.13164/mendel.2019.2.027.
[4] K. Trieu and Y. Yang, "Artificial intelligence-based password brute
force attacks," in Proceedings of Midwest Association for Information
Systems Conference, St. Louis, Missouri, USA, 2018, pp. 13(39).
[5] T. Truong, I. Zelinka, J. Plucar, M. Čandík, and V. Šulc, "Artificial
intelligence and cybersecurity: past, presence, and future," in
Advances In Intelligent Systems And Computing, pp. 351–63, 2020,
doi:10.1007/978-981-15-0199-9_30.
[6] M. Usman, M. Jan, X. He, and J. Chen, "A survey on representation learning efforts in cybersecurity domain," ACM Computing Surveys, vol. 52, no. 6, pp. 1–28, 2020, doi: 10.1145/3331174.
[7] S. S. Chakkaravarthy, D. Sangeetha, V. M. Rathnam, K. Srinithi, and
V. Vaidehi, "Futuristic cyber-attacks," International Journal of
Knowledge-Based and Intelligent Engineering Systems, vol. 22, no.
3, pp. 195–204, 2018, doi: 10.3233/kes-180384.
[8] J. Chen, X. Luo, J. Hu, D. Ye, and D. Gong, "An Attack on Hollow
CAPTCHA Using Accurate Filling and Nonredundant Merging,"
IETE Technical Review, vol. 35, sup1, pp. 106–118, 2018,
doi:10.1080/02564602.2018.1520152.
[9] K. Chung, Z. T. Kalbarczyk, and R. K. Iyer, "Availability attacks on
computing systems through alteration of environmental control:
Smart malware approach," in Proceedings of the 10th ACM/IEEE
International Conference on Cyber-Physical Systems, Montreal,
Quebec, Canada, 2019, pp. 1-12.
[10] H. Gao, M. Tang, Y. Liu, P. Zhang, and X. Liu, "Research on the
security of Microsoft’s two-layer CAPTCHA," IEEE Transactions On
Information Forensics And Security, vol. 12, no. 7, pp. 1671-1685,
2017, doi:10.1109/tifs.2017.2682704.
[11] S. Hamadah and D. Aqel, "Cybersecurity becomes smart using artificial intelligent and machine learning approaches: An overview," ICIC Express Letters, Part B: Applications, vol. 11, no. 12, pp. 1115–1123, 2020, doi: 10.24507/icicelb.11.12.1115.
[12] B. Carter, "Impact of social inequalities and discrimination on vulnerability to crises," K4D Helpdesk Report, no. 994, pp. 1–26, 2021.
[13] H. S. Anderson, J. Woodbridge, and B. Filar, "Deepdga:
Adversarially-tuned domain generation and detection," in
Proceedings of the ACM Workshop on Artificial Intelligence and
Security, Vienna, Austria, 2016, pp. 13-21.
[14] A. Babuta, M. Oswald, and A. Janjeva, "Artificial Intelligence and
UK National Security Policy Considerations," Royal United Services
Institute Occasional Paper, 2020.
[15] A. C. Bahnsen, I. Torroledo, L. Camacho, and S. Villegas,
"DeepPhish: Simulating malicious AI," in APWG Symposium on
Electronic Crime Research, London, United Kingdom, 2018, pp. 1-8.
[16] M. Bilal, A. Gani, M. Lali, M. Marjani, and N. Malik, "Social
profiling: A review, taxonomy, and challenges," Cyberpsychology,
Behavior and Social Networking, vol. 22, no. 7, pp. 433-450, 2019,
doi: 10.1089/cyber.2018.0670.
[17] M. Brundage et al., "The malicious use of artificial intelligence:
forecasting, prevention, and mitigation," Future of Humanity
Institute, Oxford, 2018.
[18] E. Bursztein, J. Aigrain, A. Moscicki, and J. C. Mitchell, "The end is
nigh: generic solving of text-based CAPTCHAs," in 8th Usenix
Workshop on Offensive Technologies WOOT ‘14, San Diego, CA,
USA, 2014.
[19] K. Cabaj, Z. Kotulski, B. Księżopolski, and W. Mazurczyk,
"Cybersecurity: trends, issues, and challenges," EURASIP Journal On
Information Security, 2018, doi: 10.1186/s13635-018-0080-0.
[20] Ö. Aslan, S. S. Aktuğ, M. Ozkan-Okay, A. A. Yilmaz, and E. Akin, "A comprehensive review of cyber security vulnerabilities, threats, attacks, and solutions," Electronics, vol. 12, no. 6, p. 1333, 2023.
[21] W. Syafitri, Z. Shukur, U. Asma' Mokhtar, R. Sulaiman, and M. A. Ibrahim, "Social engineering attacks prevention: A systematic literature review," IEEE Access, vol. 10, pp. 39325–39343, 2022.
[22] A. Oseni, N. Moustafa, H. Janicke, P. Liu, Z. Tari, and A. Vasilakos, "Security and privacy for artificial intelligence: Opportunities and challenges," arXiv preprint arXiv:2102.04661, 2021.
[23] H. Gao et al., "Research on the security of microsoft’s two-layer
captcha," IEEE Transactions On Information Forensics And Security,
vol. 12, no. 7, pp. 1671–85, 2017, doi: 10.1109/tifs.2017.2682704.
[24] S. Hamadah and D. Aqel, "Cybersecurity becomes smart using
artificial intelligent and machine learning approaches: An overview,"
ICIC Express Letters, Part B: Applications, vol. 11, no. 12, pp. 1115–
1123, 2020, doi: 10.24507/icicelb.11.12.1115.
[25] B. Hitaj, P. Gasti, G. Ateniese, and F. Perez-Cruz, "PassGAN: A deep learning approach for password guessing," Applied Cryptography and Network Security, vol. 11464, pp. 217–237, 2019, doi: 10.1007/978-3-030-21568-2_11.
[26] M. Bilal et al., "Social profiling: A review, taxonomy, and
challenges," Cyberpsychology, Behavior and Social Networking, vol.
22, no. 7, pp. 433–50, 2019, doi: 10.1089/cyber.2018.0670.
[27] M. Brundage et al., The malicious use of artificial intelligence:
forecasting, prevention, and mitigation, Oxford: Future of Humanity
Institute, 2018.
[28] E. Bursztein et al., "The end is nigh: generic solving of text-based
CAPTCHAs," in 8th Usenix Workshop on Offensive Technologies
(WOOT ‘14), San Diego, CA, USA, 2014.
[29] K. Cabaj et al., "Cybersecurity: trends, issues, and challenges,"
EURASIP Journal On Information Security, 2018, doi:
10.1186/s13635-018-0080-0.
[30] A. Cani et al., "Towards automated malware creation," in Proceedings
of The 29th Annual ACM Symposium On Applied Computing,
Gyeongju Republic of Korea, 2014, pp. 157–60, doi:
10.1145/2554850.2555157.
[31] F. Hamad, M. Al-Fadel, and H. Fakhouri, "The effect of librarians’
digital skills on technology acceptance in academic libraries in
Jordan," Journal of Librarianship and Information Science, vol. 53,
no. 4, pp. 589-600, 2021.
[32] J. Chen et al., "An Attack on Hollow CAPTCHA Using Accurate
Filling and Nonredundant Merging," IETE Technical Review, vol. 35,
sup1, pp. 106–118, 2018, doi: 10.1080/02564602.2018.1520152.
[33] W. Xu, D. Evans, and Y. Qi, "Feature squeezing: Detecting
adversarial examples in deep neural networks," in Proceedings of the
2018 Network and Distributed System Security Symposium, San
Diego, California, USA, 2018, doi:10.14722/ndss.2018.23198.
[34] Y. Yao, B. Viswanath, J. Cryan, H. Zheng, and B. Zhao, "Automated
crowdturfing attacks and defenses in online review systems," in
Proceedings of the 2017 ACM SIGSAC Conference on Computer and
Communications Security, Dallas, Texas, USA, 2017,
doi:10.1145/3133956.3133990.
[35] G. Ye, Z. Tang, D. Fang, Z. Zhu, Y. Feng, P. Xu, X. Chen, and Z.
Wang, "Yet another text captcha solver," in Proceedings of the 2018
ACM SIGSAC Conference on Computer and Communications
Security, Toronto, Canada, 2018, doi:10.1145/3243734.3243754.
[36] N. Yu and K. Darling, "A low-cost approach to crack python
CAPTCHAs using AI-based chosen-plaintext attack," Applied
Sciences, vol. 9, no. 10, p. 2010, 2019, doi:10.3390/app9102010.
[37] X. Zhou, M. Xu, Y. Wu, and N. Zheng, "Deep model poisoning attack
on federated learning," Future Internet, vol. 13, no. 3, p. 73, 2021,
doi:10.3390/fi13030073.
[38] Y. Sawa, R. Bhakta, I. G. Harris, and C. Hadnagy, "Detection of social
engineering attacks through natural language processing of
conversations," in 2016 IEEE Tenth International Conference on
Semantic Computing (ICSC), 2016, pp. 262–265.
[39] H. N. Fakhouri, S. Alawadi, F. M. Awaysheh, I. B. Hani, M.
Alkhalaileh, and F. Hamad, "A Comprehensive Study on the Role of
Machine Learning in 5G Security: Challenges, Technologies, and
Solutions," Electronics, vol. 12, no. 22, Art. no. 4604, 2023.
[40] C. D. Manning, M. Surdeanu, J. Bauer, J. Finkel, S. J. Bethard, and
D. McClosky, "The Stanford CoreNLP natural language processing
toolkit," in Association for Computational Linguistics (ACL) System
Demonstrations, 2014, pp. 55–60.
[41] F. Mouton, L. Leenen, and H. S. Venter, "Social engineering attack
detection model: Seadm v2," in 2015 International Conference on
Cyberworlds (CW), 2015, pp. 216–223.
[42] F. Mouton, L. Leenen, and H. S. Venter, "Social engineering attack
examples, templates and scenarios," Computers and Security, vol. 59,
pp. 186–209, 2016, doi:10.1016/j.cose.2016.03.004.
[43] M. Shivamurthaiah, P. Kumar, S. Vinay, and R. Podaralla, Intelligent Computing: An Introduction to Artificial Intelligence Book. Shineeks Publishers, 2023.
[44] N. T. Nguyen, "An influence analysis of the inconsistency degree on
the quality of collective knowledge for objective case," in Asian
conference on intelligent information and database systems, 2016, pp.
23–32, Berlin: Springer, doi:10.1007/978-3-662-.
[45] J. Nicholson, L. Coventry, and P. Briggs, "Can we fight social
engineering attacks by social means? Assessing social salience as a
means to improve phish detection," in Thirteenth Symposium on
Usable Privacy and Security (SOUPS 2017), 2017, pp. 285–298,
USENIX Association.
[46] T. Peng, I. Harris, and Y. Sawa, "Detecting phishing attacks using
natural language processing and machine learning," in 2018 IEEE
12th International Conference on Semantic Computing (ICSC), 2018,
pp. 300–301.
[47] R.-E. Precup and R. C. David, "Nature-inspired optimization algorithms for fuzzy controlled servo systems," Butterworth-Heinemann, 2019.
[48] A. J. Resnik, "Journal of Marketing Research," vol. 23, no. 3, pp. 305–
306, 1986.
[49] P. M. Saadat Javad, and H. Koofigar, "Training echo state neural
network using harmony search algorithm," International Journal of
Artificial Intelligence, vol. 15, no. 1, pp. 163–179, 2017.
[50] B. H. Abed-alguni, "Island-based cuckoo search with highly
disruptive polynomial mutation," International Journal of Artificial
Intelligence, vol. 17, no. 1, pp. 57–82, 2019.
[51] M. Bezuidenhout, F. Mouton, and H. S. Venter, "Social engineering
attack detection model: Seadm," in 2010 Information Security for
South Africa, pp. 1–8, 2010.
[52] R. Bhakta and I. G. Harris, "Semantic analysis of dialogs to detect
social engineering attacks," in Proceedings of the 2015 IEEE 9th
International Conference on Semantic Computing (IEEE ICSC 2015),
pp. 424–427, 2015.