
Cybersecurity in the AI Era: Alarming risks from "superintelligence" and the business challenge


Artificial intelligence (AI) is driving a seismic shift, reshaping how the global economy operates. According to research published by Microsoft and Boston Consulting Group (BCG), the labor market is set for a massive transition, with 50-55% of job roles changing fundamentally within the next two to three years. While AI will not fully replace humans, it is setting new expectations for productivity and operational speed.

However, alongside these economic advancements, the rise of this technology is inadvertently "upgrading" the capabilities of high-tech criminals. Speaking at the 2026 World Digital Summit in Geneva (Switzerland), top scientist Geoffrey Hinton—often dubbed the "godfather" of AI—issued a strong warning. He likened the current explosive development of AI to a speeding car without brakes. As organizations increasingly turn AI into their operational "brains," cybersecurity in the AI era is no longer just an IT department issue, but a vital defense line determining the survival of every business.

The dark picture when AI is "weaponized"

The fact that "superintelligence" is falling into the wrong hands is completely altering the methods and scale of data breaches. Below are the most dangerous hidden threats currently present:

1. Hyper-personalized phishing traps

In the past, phishing emails were often easy to spot thanks to basic spelling errors or awkward phrasing. Now, generative AI lets hackers craft flawless forged messages that accurately mimic the writing style of partners or superiors.

Email scams are becoming increasingly sophisticated.

Data from the security firm SlashNext shows a surge of more than 1,000% in AI-assisted malicious email campaigns over just the past 12 months. These campaigns are not limited to email: they also surround victims through messaging apps and social media, so that even professionals can easily fall into the trap.

2. Deepfake: The thief of trust

A risk that deeply worries experts is deepfake image and voice forgery. The latest assessments from Gartner suggest that this tactic will soon become one of the most common methods of financial fraud, shattering the human principle of "seeing is believing".

A prime example is the shocking 2024 incident in which a corporation in Hong Kong lost approximately $25 million. Fraudsters staged a fake online meeting, using AI to convincingly recreate the faces and voices of the entire board of directors and instruct an accounting employee to transfer the funds.

3. The rise of "AI agents" and automation scale risks

Artificial intelligence is helping hackers drastically shorten the time needed to find system vulnerabilities and to automate the distribution of malicious code. According to IBM's Cost of a Data Breach report, the average cost of handling a data breach has reached $4.45 million. More alarmingly, the emergence of automated AI agents is opening an entirely new risk paradigm.

According to Yazid Akadiri, an expert at the IT security firm Elastic, the world is moving past the phase where AI is merely a "chat" tool and into the era of "actionable AI". A clear example is the OpenClaw platform, with over 3 million users worldwide, where AI agents built on large language models (LLMs) such as OpenAI's ChatGPT or Anthropic's Claude can autonomously execute sequences of tasks. When exploited, these agents can wipe email inboxes or leak personal information in a split second.
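One common guardrail against an exploited agent is deny-by-default tool execution: the agent may only invoke actions on an explicit allowlist. The sketch below is a minimal Python illustration of the idea; the tool names and handler table are invented for the example and do not reflect any specific platform's API.

```python
# Minimal sketch (hypothetical tool names): confine an LLM agent to an
# explicit allowlist of tools, so a hijacked prompt cannot trigger
# destructive actions such as mass-deleting mail or exporting contacts.

ALLOWED_TOOLS = {"search_docs", "summarize", "draft_reply"}  # read-only actions only

def dispatch(tool_name: str, handler_table: dict, *args):
    """Run a tool only if it is explicitly allowlisted (deny by default)."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool '{tool_name}' is not allowlisted for this agent")
    return handler_table[tool_name](*args)

if __name__ == "__main__":
    handlers = {"summarize": lambda text: text[:40] + "..."}
    # An allowlisted, read-only action runs normally:
    print(dispatch("summarize", handlers, "Quarterly report: revenue grew 12 percent"))
    # A destructive action requested by a manipulated prompt is refused:
    try:
        dispatch("delete_mail", handlers)
    except PermissionError as e:
        print("Blocked:", e)
```

The key design choice is that anything not listed is refused, so a newly injected or renamed tool fails closed instead of running silently.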


4. Threats targeting AI systems and the cloud supply chain

Last week, financial regulators in the UK and the US had to issue emergency warnings about the Claude Mythos Preview AI model. Guillaume Princen, Anthropic's representative in Paris (France), confirmed that this model possesses capabilities far surpassing humans in detecting software vulnerabilities that have lain dormant for decades. This superior programming capability is so dangerous that Anthropic had to initiate Project Glasswing to seal the model and keep it under control.

Furthermore, reliance on third-party infrastructure creates an "Achilles' heel" for businesses. The incident where Rockstar Games had 78.6 million internal data records stolen on the Snowflake platform by the ShinyHunters hacker group—through a vulnerability from a data analytics provider (Anodot)—demonstrates that hackers are prioritizing attacks on the cloud supply chain rather than confronting core systems directly.

Hackers are prioritizing attacks on cloud supply chains.

Additionally, Joachim Nagel, President of the German Central Bank (Bundesbank), warned about the "herd effect". If all financial institutions rely on a few specific AI models, any bias in the input data will lead to mass erroneous decisions, threatening the stability of the entire economic system.

How should organizations and businesses act?

Facing continuous pressure from new technologies, international monetary regulators such as MAS (Singapore), HKMA (Hong Kong), ASIC, and APRA (Australia) are urging organizations to urgently rebuild their defense perimeters. To proactively protect digital assets, businesses should focus on the following core strategies:

  • Use Technology to Fight Technology: Static, signature-based antivirus alone cannot keep pace with AI-driven attacks. Organizations must use AI themselves to analyze behavior and detect network-scanning attempts early. For businesses without an in-house engineering team, a 24/7 Security Operations Center (SOC) from a professional provider such as IPSIP enables round-the-clock monitoring, intercepting threats right at the gateway at an optimized cost.

  • Strengthen Personnel Awareness: Data from the Verizon Data Breach Investigations Report (DBIR) shows that 74% of breaches involve a human element. Training staff to handle phishing emails and building cross-verification procedures for money-transfer requests (to counter deepfakes) is a critical step.

  • Strictly Control Input Data: Businesses must clearly classify which internal documents must never be uploaded to public large language model (LLM) platforms, to prevent unintended data leaks.

  • Develop Incident Response Scenarios: Establish crisis management procedures, including isolating systems, patching vulnerabilities, and recovering data to minimize downtime during an actual attack.
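The input-data control point above can be automated with a simple pre-upload screen that refuses text carrying confidentiality tags or obvious sensitive markers. The Python sketch below is a hypothetical illustration; the labels and patterns are invented for the example, and a real deployment would use the organization's own classification scheme or a dedicated DLP product.

```python
# Minimal sketch (hypothetical labels and patterns): screen text before
# it is sent to a public LLM service, blocking documents that carry a
# confidentiality tag or account-number-like digit runs.
import re

BLOCKED_PATTERNS = [
    r"\bCONFIDENTIAL\b",
    r"\bINTERNAL ONLY\b",
    r"\b\d{10,16}\b",  # long digit runs resembling account or card numbers
]

def safe_to_upload(text: str) -> bool:
    """Return False if the text matches any blocked pattern."""
    return not any(re.search(p, text, re.IGNORECASE) for p in BLOCKED_PATTERNS)

print(safe_to_upload("Draft blog post about our new office"))     # True
print(safe_to_upload("CONFIDENTIAL: M&A due-diligence summary"))  # False
```

A screen like this catches only the obvious cases; it complements, rather than replaces, the classification policy and staff training described above.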

Complacency, in a context where technology changes daily, could cost a brand its survival. Investing seriously in cybersecurity today is the most solid foundation for businesses to confidently harness the full power of artificial intelligence.

