How AI Is Transforming Cybersecurity: A Conversation with SMG’s CISO

In an era where artificial intelligence is reshaping every aspect of technology, cybersecurity stands at the frontline of both opportunity and risk. To explore how AI is transforming the way we protect data, systems, and users across high-stakes sectors like automotive, real estate, and finance, we sat down with Mostafa Hassanin (known as Mosti), Chief Information Security Officer at SMG. In this conversation, Mosti shares how AI enhances threat detection, supports ethical practices, and strengthens our culture of security and trust, revealing why the future of cybersecurity is as much about people and mindset as it is about technology.

Armanda:
Hello and welcome! Today we have a special conversation lined up with someone whose role is more critical than ever in today’s digital world. I’m joined by our Chief Information Security Officer, Mosti, to explore how artificial intelligence is transforming cybersecurity — from defending against AI-driven attacks to ensuring the ethical use of data across high-impact sectors like automotive, real estate, and finance.

Let’s dive right in. Mosti, thank you for joining us.

The Role of AI in Cybersecurity

Mosti:
Thank you, Armanda. When I think of AI in cybersecurity, it's no longer optional — it's essential. We have to use AI if we want to stay ahead of evolving threats, especially since attackers are already using it.

AI acts as a force multiplier, allowing us to scale our ability to detect and respond to threats across a growing attack surface. It doesn’t replace human expertise — it augments it. Human oversight remains critical.

And of course, in high-stakes sectors like automotive, real estate, and finance, customer trust is everything. We handle sensitive data, so maintaining that trust through secure systems is non-negotiable.

Leveraging AI for Threat Detection

Armanda:
AI-powered threat detection systems can process vast amounts of data in real time — something we couldn’t do before. How are we leveraging this across our platforms?

Mosti:
Think of AI-powered anomaly detection as continuously analysing both system and user behaviour in real time. The models establish baselines and flag deviations.

For example: if a user always logs in from Switzerland to sell postage stamps but suddenly logs in from abroad to sell a car — that’s a deviation.
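For readers curious how such a per-user baseline might work under the hood, here is a minimal sketch. It is illustrative only — the features (country, listing category), threshold, and history requirement are invented for this example and do not reflect SMG's actual detection models.

```python
# Toy per-user baseline check: track how often a user acts in each
# (country, listing category) context and flag rare contexts as deviations.
from collections import Counter

class UserBaseline:
    """Counts how often a user has been seen in each (country, category) context."""
    def __init__(self):
        self.contexts = Counter()

    def observe(self, country, category):
        self.contexts[(country, category)] += 1

    def is_anomalous(self, country, category, min_history=10):
        total = sum(self.contexts.values())
        if total < min_history:              # too little history to judge
            return False
        seen = self.contexts[(country, category)]
        return seen / total < 0.05           # rare context -> flag as deviation

baseline = UserBaseline()
for _ in range(20):
    baseline.observe("CH", "stamps")         # user always sells stamps from Switzerland

print(baseline.is_anomalous("CH", "stamps"))  # False: matches the established baseline
print(baseline.is_anomalous("DE", "cars"))    # True: new country and new category
```

Real systems would of course use far richer features and learned models rather than a hard-coded threshold, but the principle is the same: establish a baseline, then flag deviations.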

These models are tailored to each user, business, and vertical. That’s why it’s important for customers to stay on our platforms to complete transactions. If they move to external channels like WhatsApp, we lose visibility — and with it, the ability to detect anomalies.

When something is flagged, automated playbooks kick in. These workflows pair signals with automated responses to keep both our customers and systems safe.
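Conceptually, such a playbook is a mapping from detected signals to ordered response actions. The sketch below is a hypothetical illustration — the signal names and response steps are assumptions chosen for clarity, not SMG's actual workflows.

```python
# Hypothetical playbook table: each detected signal maps to an ordered
# list of automated response steps; unknown signals fall back to review.
PLAYBOOKS = {
    "unusual_login_location": ["require_mfa", "notify_user"],
    "rapid_listing_creation": ["rate_limit_account", "queue_manual_review"],
    "payment_detail_change":  ["hold_transactions", "require_mfa"],
}

def respond(signal):
    """Return the ordered response steps for a detected signal."""
    return PLAYBOOKS.get(signal, ["queue_manual_review"])  # safe default

print(respond("unusual_login_location"))  # ['require_mfa', 'notify_user']
print(respond("never_seen_before"))       # ['queue_manual_review']
```

The safe default matters: a signal the playbook does not recognise should still reach a human rather than being silently ignored.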

Consistency Across Marketplaces

Armanda:
SMG operates multiple platforms across various verticals. How do we ensure consistency in these AI-driven security systems while adapting to each marketplace’s needs?

Mosti:
It’s a federated but standardised approach: decision-making is delegated to each marketplace, yet governed by unified, group-wide standards.

We use a multi-layered design — some sources, models, and actions are shared; others are customised. At SMG, our centralised security team oversees group-wide trust and safety, defining which standards must remain the same and which can differ.

This ensures agility and domain-specific measures without compromising security.

For example, accessing a platform is centralised, but actions like bidding on an item or buying a property differ — each carries different risks and verification steps. Everything is risk-based.
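A risk-based verification ladder of this kind can be sketched in a few lines. Everything here is an assumption for illustration — the action names, risk scores, and verification steps are invented, and a real system would derive risk dynamically rather than from a static table.

```python
# Toy risk-based verification: higher-risk actions require more steps.
RISK = {"browse_listings": 1, "bid_on_item": 3, "buy_property": 5}

def verification_steps(action):
    """Return the verification steps required before an action may proceed."""
    risk = RISK.get(action, 5)       # unknown actions are treated as maximum risk
    steps = ["password"]             # centralised login is always required
    if risk >= 3:
        steps.append("mfa")
    if risk >= 5:
        steps.append("identity_document_check")
    return steps

print(verification_steps("bid_on_item"))   # ['password', 'mfa']
print(verification_steps("buy_property"))  # ['password', 'mfa', 'identity_document_check']
```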

Armanda:
AI is powerful, but it can also cross ethical or legal boundaries. How do we ensure our models remain compliant and responsible, especially when handling personal data?

Mosti:
If anyone had a perfect answer to that, even OpenAI wouldn’t still be struggling with misuse!

But we do our best. We follow privacy-by-design principles and train our models on anonymised or pseudonymised data wherever possible.

We also train our people, conduct audits to prevent bias or regulatory breaches, and embed technical safeguards — including safe defaults, automated checks, watermarking, and explainability features that show how an AI reached its decision.

It’s not easy in practice, but we’re committed to continuous improvement.

Fostering a Culture of AI Responsibility

Armanda:
Not everyone in the company is a cybersecurity expert — myself included! How do we encourage responsible AI use across non-technical teams?

Mosti:
It starts with incentives. People need to understand why cybersecurity matters — not just at work but personally. Once employees see that secure behaviour makes their jobs easier, it becomes part of the culture.

At SMG, security and privacy have become cultural values. We offer regular training and encourage participation, and I’ve noticed a strong security mindset among our teams — people proactively ask questions, raise flags, and seek improvement.

We also foster a no-blame culture. Mistakes happen — what matters is learning from them and ensuring they’re not repeated.

And of course, we make it fun: we run competitions and offer rewards, such as recognising the top 10 participants in phishing or bug bounty programmes each quarter. It builds engagement and awareness.

Looking Ahead: The Future of AI and Cybersecurity

Armanda:
Looking to the future — what trends or challenges do you foresee in AI and cybersecurity?

Mosti:
In the next two to three years, we’ll see more AI-powered attacks — deepfakes, phishing, automated scams — all at greater scale and sophistication. Attackers already automate entire workflows: generating unique phishing links, registering domains per target, even controlling everything through Telegram bots.

That’s why defence must evolve in parallel. We can’t base security on blind trust; it has to be risk-driven and adaptive.

Regulations will also evolve. The US tends to be flexible, Europe stricter — but global alignment is still uncertain. Everyone is proceeding cautiously after the GDPR experience, where compliance came faster than understanding.

We’re moving towards Zero Trust Security — “trust nothing, verify everything.” Over time, this will evolve into adaptive security, adjusting in real time based on risks and context.
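The shift from static zero trust to adaptive security can be illustrated with a toy risk score that adjusts the access decision based on context. The signals, weights, and thresholds below are invented for this sketch and are not a real policy.

```python
# Toy adaptive-security decision: "trust nothing, verify everything",
# with the verification level adjusted to contextual risk signals.
def risk_score(signals):
    """Combine contextual signals into a risk score between 0 and 1."""
    weights = {"new_device": 0.4, "new_location": 0.3, "off_hours": 0.1}
    return min(1.0, sum(w for s, w in weights.items() if signals.get(s)))

def access_decision(signals):
    score = risk_score(signals)
    if score >= 0.7:
        return "deny"
    if score >= 0.3:
        return "step_up_verification"   # e.g. prompt for MFA
    return "allow"

print(access_decision({}))                                          # allow
print(access_decision({"new_location": True}))                      # step_up_verification
print(access_decision({"new_device": True, "new_location": True}))  # deny
```

The point is that the same request can yield different outcomes depending on context — verification tightens or relaxes in real time rather than being fixed per user.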

Ultimately, you must use AI to defend against AI. Cyberattacks will continue — so focus on resilience. I call it cyber selection: those who are hit the least and recover the fastest are the ones who survive.

At SMG, we stay agile, prepare for attacks, and treat AI as an opportunity — not a threat. Our greatest strength remains our people and our culture.

Armanda:
That’s a perfect note to end on. Thank you, Mosti, for sharing your insights — and for the fascinating discussion!

Mosti:
Thanks a lot, Armanda. It was a pleasure.
