Artificial intelligence is getting integrated into hundreds of different products and services, but entrepreneurs often overlook the necessary investment in cybersecurity to avoid a potentially very dangerous misuse of this technology.
This has been one of the key insights to get from the second day of the MetaWorld Congress, celebrated last 8 of May in Spain’s capital. Artificial intelligence promises great efficiency improvements for businesses, but also introduces new ways for hackers to cause damage, if deployed incorrectly.
Chema Alonso’s speech inaugurated the day at the Congress, as he is one of the most recognizable faces to participate in it (he is a known face in Spanish media and business spheres, currently working as Cybersecurity Advisor for Telefonica). With his presentation, you were able to understand the new tactics hackers are using to induce AIs to do what they want. They pose threats to consumers (we learned about the ways to misuse intelligent household appliances) but also for businesses that try to add AI into their services in a fast and reckless way.
AI’s potential for bad
Malicious actors can trick AIs into saying or doing things they should not do, posing a risk to your web or company if you have not thought of the necessary security measures. Prompt injection to get sensitive information disclosure, data poisoning or jailbreak are some of the tools hackers can use to disrupt your product. As many companies deploy cybersecurity not in a systemic level but as a glued-brick they add at the need, these risks amplify.

Chema Alonso talks about Sensitive Information Disclosure in the MetaWorld Congress 2025. Image credits: Enterprises&More.
Talking about the CIA triad (confidentiality, integrity and availability), Alonso mentions the possibility of accessing critic information like API keys, secrets or personal identifiable identities using AI.
In his talk, Alonso cites numerous academic reports showing the easy ways in which you can trick LLMs (large language models) into doing what they are not supposed to do. A failure to assess the novelty of the technology can turn into unprotected solutions that, although it can surprise clients, hold a serious threat for security.

MetaWorld Congress 2025 discussion panel. Image credits: Enterprises&More.
Programmers’ responsibility
Coders and developers see their responsibility arise, as the AIs they use to assist their work can provide them with bad code that leaves huge security holes in the system: hallucinations and package hallucinations make AIs provide low quality or malicious code, allowing for critical vulnerabilities to be exploited later.

Hardening Principles in Cybersecurity. Image credits: Enterprises&More.
It is vital as well to make our AI strong against Denial of Service Attacks, which can cause strong damage by limiting AI resources and, with that, making the AI more vulnerable in its responses. “The vulnerability is magnified due to the resource-intensive nature of LLMs and unpredictability of user inputs”, remembers Alonso.
Security measures for businesses
So, what to do to avoid having our AI systems hacked? Alonso remembers us of the 5 “hardening principles” to maximize security and minimize exposure: zero trust paradigm, assuming the presence of hostile actors, establishing identities, limiting access and risk-based adaptive access.
Víctor Recuero, Microsoft’s Cybersecurity Cloud Solutions Architect, we were able to deepen into the necessity of a safe way of identifying people online, to avoid intromissions into our systems. As Recuero’s presentation cited, model hijacking and wallet abuse add to the pile of threats for IT systems, and identity is one of the threat vectors that can be used by malicious agents.

Víctor Recuero, Microsoft’s Cybersecurity Cloud Solutions Architect in the MetaWorld Congress 2025. Image credits: Enterprises&More.
GenAI introduces new attack surfaces that go from the most basic (prompts and responses) to AI data, RAG data (retrieval-augmented generation), models, and plugins and skills. One very important takeaway from the presentations today: AI can be hacked not by entering its core system, but also by contaminating any of the data pools or applications that connect to it. As AI is used as a super brain to manage and analyze information from multiple sources, it’s vital that we understand that a ChatGPT bot, for example, is not safe just because we believe en OpenAI well-doing: the apps and sources of information we connect to our bot will need to be protected as well.
Use of AI in the cloud also mean threat vectors to assess, and for that, Recuero of course shows pride of Defender, Azure and the rest of cybersecurity solutions Microsoft provides to offer end to end data security.
