
AI's Double-Edged Sword: Insights from AbbVie's Rachel James on Cybersecurity and the Future of Threat Intelligence

Aug 22, 2025 · Cybersecurity and AI

In the ever-evolving world of cybersecurity, the emergence of AI has ignited a new arms race. This modern battlefield showcases AI as both a protective shield for defenders and a powerful weapon for those with malicious intentions. The stakes have never been higher as organizations grapple with the challenges of safeguarding their digital domains.

To provide deeper insights into this pressing issue, we spoke with Rachel James, Principal AI/ML Threat Intelligence Engineer at AbbVie, a leading global biopharmaceutical company. As cyber threats grow more complex, understanding how to navigate this landscape is paramount.

“In our tools, we’re leveraging built-in AI features provided by vendors, alongside applying Large Language Model (LLM) analysis to our detections and observations,” Rachel shares. This strategic approach allows her team to sift through a sea of security alerts, identifying patterns and spotting potential vulnerabilities before they can be exploited.
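To make the idea concrete, here is a minimal sketch of what LLM-assisted review of a detection might look like. It assumes an OpenAI-compatible API; the model name, prompt, and alert fields are illustrative choices for this article, not a description of AbbVie's actual tooling.

```python
# Illustrative sketch only: asking an LLM to summarize and triage a single alert.
# Assumes an OpenAI-compatible endpoint; the model name and alert fields are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

alert = {
    "rule": "Suspicious PowerShell encoded command",
    "host": "WS-4211",
    "details": "powershell.exe -enc SQBFAFgA... spawned by winword.exe",
}

prompt = (
    "You are a SOC analyst. Summarize the alert below in two sentences, "
    "assign a severity (low/medium/high), and note any likely MITRE ATT&CK technique.\n\n"
    f"Alert: {alert}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model would do
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # keep triage output deterministic
)

print(response.choices[0].message.content)
```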

James emphasizes the importance of using AI for similarity detection and gap analysis, hinting at plans to integrate external threat intelligence in the near future. This next step could enhance their response capabilities significantly. “We centralize our efforts using a specialized threat intelligence platform called OpenCTI,” she notes, which helps create a comprehensive view of emerging threats amidst digital chaos.
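One common way to implement the kind of similarity detection James describes is with text embeddings: encode alert descriptions as vectors and compare them. The sketch below uses the sentence-transformers library and cosine similarity; the model choice and sample alerts are assumptions made for illustration, not details from the interview.

```python
# Illustrative sketch: embedding-based similarity detection across alert descriptions.
# The model and the alert texts are assumptions, not AbbVie's actual data or stack.
from sentence_transformers import SentenceTransformer
import numpy as np

alerts = [
    "Encoded PowerShell launched from a Word document on WS-4211",
    "Office macro spawning powershell.exe with base64 arguments",
    "Repeated failed SSH logins from a single external IP",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(alerts, normalize_embeddings=True)

# With normalized vectors, cosine similarity reduces to a dot product.
similarity = embeddings @ embeddings.T
print(np.round(similarity, 2))  # the first two alerts should score as near-duplicates
```

Pairs that score highly can be grouped into a single investigation, while clusters with no matching detection logic highlight the gaps James refers to.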

The core of James's strategy is to transform disorganized information into actionable insights through AI, channeling this raw data into STIX (Structured Threat Information Expression), the standardized format used for sharing threat intelligence. Ultimately, she envisions a world where language models enhance every aspect of security operations, from managing vulnerabilities to assessing third-party risks.
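For readers unfamiliar with STIX, the sketch below shows what normalizing a single extracted observation into STIX 2.1 can look like using the python-stix2 library; the indicator values are invented for the example, and pushing the resulting bundle into a platform such as OpenCTI is one possible next step rather than a documented part of AbbVie's workflow.

```python
# Illustrative sketch: converting one extracted observation into a STIX 2.1 indicator.
# The IP address and descriptions are made up for the example.
from stix2 import Indicator, Bundle

indicator = Indicator(
    name="Suspicious C2 IP address",
    description="Flagged during LLM-assisted review of detection logs",
    pattern="[ipv4-addr:value = '198.51.100.23']",
    pattern_type="stix",
)

bundle = Bundle(indicator)  # a bundle like this can be ingested by a threat intel platform
print(bundle.serialize(pretty=True))
```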

However, harnessing the power of AI doesn't come without challenges. As a dedicated advocate for ongoing ethical awareness, James is actively involved with the OWASP Top 10 for GenAI. This initiative seeks to elucidate vulnerabilities inherent in generative AI. She cautions that while AI holds great potential, there are fundamental trade-offs organizations must consider:

  • Accepting the unpredictable risks that generative AI introduces.
  • Managing the opacity of AI decision-making processes, which can cloud understanding as models grow more intricate.
  • Accurately assessing the return on investment in AI, as the rapid pace of developments may lead to inflated expectations and underestimated efforts.

James emphasizes that a profound understanding of adversaries is essential for improving cybersecurity strategies. With her extensive experience in cyber threat intelligence, she’s well-placed to unearth insights on how threat actors use AI.

“I’ve conducted thorough research on adversaries’ interests in AI and kept tabs on their tools and tactics via open-source channels and automated dark web collections through my cybershujin GitHub,” she reveals. Her hands-on involvement with developing adversarial techniques proves invaluable.

Where is all of this headed for the cybersecurity landscape? James sees exciting parallels between the cyber threat intelligence lifecycle and the data science lifecycle, which she views as a unique opportunity for defenders. “With our datasets, we can leverage intelligence sharing remarkably well,” she asserts.

In closing, she offers a message that blends encouragement with a call to action: “Data science and AI will be a fundamental part of every cybersecurity expert’s toolkit moving forward; embrace it.” Rachel James will elaborate on these ideas during her presentation at the AI & Big Data Expo Europe in Amsterdam on September 24-25, 2025, where her talk will focus on embedding AI ethics at scale.
