Navigating AI Complexity: Yunus on Challenges, Governance, and the Validaitor Advantage

How fragmented AI development clouds trust in technology and how Validaitor aims to bring clarity, safety, and control.

In this interview, Karen sits down with Yunus, an expert in trustworthy AI and founder of Validaitor, to discuss the pressing challenges companies face when managing AI applications, particularly in critical infrastructure sectors. Yunus shares his perspective on the growing complexity of AI systems, the risks of decentralized AI development, the evolving regulatory landscape, and the shortage of specialized talent in the field.

Drawing from his extensive experience in AI testing and red teaming, Yunus also introduces the innovative approach of Validaitor, a platform designed to simplify AI governance and compliance. He explains how Validaitor integrates seamlessly into existing organizational processes, reduces adoption costs, and empowers companies - big and small - to navigate the complexities of AI responsibly.

This conversation sheds light on the future of AI governance and the tools needed to ensure trustworthy and responsible AI practices in an increasingly regulated world. Dive in to learn more about how Validaitor is setting itself apart in this rapidly evolving industry!

What specific challenges do companies face right now in managing AI applications in critical infrastructure?

There are four major challenges. The first one concerns the growing complexity of AI systems. As AI evolves, its complexity grows and it becomes applicable to an ever-wider range of business cases, particularly with the rise of Gen AI. Managing this complexity requires advanced tools, procedures, and processes. That's the first thing.

The second is related to decentralized AI usage, especially with Gen AI. This new risk of shadow AI is reminiscent of the shadow IT of the past: business units tend to develop their own AI solutions and their own AI processes. This makes it very challenging for organizations to put effective AI governance practices in place, with proper processes.

The third is regulatory complexity. The EU AI Act is, of course, spearheading that, but regulations are growing everywhere, in the US and in Asia. All these regulations come with their own obligations and requirements. Governing AI in a central place is the only way forward through the regulatory challenges. But it still needs proper tooling, proper automation, and proper processes to be put in place.

And the fourth is the lack of talent. The growing complexity of AI and of the regulatory landscape requires expertise in their respective fields, and many companies today lack that talent. AI testing, for example, is a very new field. Even though AI engineers are very familiar with developing AI by optimizing for performance, they are not very familiar with testing AI systems for robustness, fairness, privacy, and so on. This makes it challenging for organizations to govern their AI in alignment with trustworthy and responsible AI best practices, and with the ever-growing regulatory landscape.
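(To make the robustness point concrete: the sketch below shows, under simplifying assumptions, the kind of check an AI robustness test performs, measuring whether small input perturbations flip a model's prediction. The toy linear classifier and the perturbation budget are hypothetical, for illustration only; this is not Validaitor's actual methodology.)

```python
import numpy as np

# Toy stand-in for a trained model: a random linear classifier over
# 4 input features and 3 output classes (purely illustrative).
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 3))

def predict(x: np.ndarray) -> int:
    """Predicted class label for a single feature vector."""
    return int(np.argmax(x @ weights))

def robustness_score(x: np.ndarray, epsilon: float = 0.05, trials: int = 100) -> float:
    """Fraction of small random perturbations (within +/- epsilon per
    feature) that leave the model's prediction unchanged."""
    baseline = predict(x)
    stable = sum(
        predict(x + rng.uniform(-epsilon, epsilon, size=x.shape)) == baseline
        for _ in range(trials)
    )
    return stable / trials

sample = rng.normal(size=4)
print(f"Prediction stability under +/-0.05 input noise: {robustness_score(sample):.0%}")
```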

Interesting! I mean, you have been working on AI quality for the last seven years or so. Maybe you can tell us what sets the Validaitor team apart?

Our expertise comes from our academic backgrounds, and our team is very well versed in AI red teaming, AI testing, AI robustness, and the cybersecurity of AI. That is the foundation of what we are doing at Validaitor, and it serves as a basis for the AI governance as well. We start with the nuts and bolts of AI testing, and we build upon that. That is a different path from what you might encounter in the industry, where people come from governance backgrounds and try to integrate testing on top. We, on the contrary, come from AI testing and red teaming backgrounds, and we built governance and regulatory compliance on that very fundamental expertise, I would say.

Earlier we were talking about the challenges companies are facing, especially those with critical infrastructure. Maybe you can elaborate on how Validaitor addresses these challenges?

Yeah. First of all, governance should feel natural. Organizations are already complex; they have a lot of processes they need to maintain. If you then introduce additional processes and procedures from an AI standpoint, that just makes things more complicated. At Validaitor, we believe AI governance should fit into a company's already established procedures and processes, in the sense that everyone should keep doing their respective job. When people are tasked with AI-related work, those tasks should feel like a natural extension of their existing job descriptions.

Our second guiding principle is being an all-in-one platform, and we’re proud of that. Typically, companies need to adopt and integrate multiple tools—sometimes up to ten—just to cover the full spectrum of governance and compliance. Validaitor consolidates all of that into a single platform. We bring everything you need to govern AI responsibly and stay ahead of evolving regulations under one roof, significantly reducing adoption costs and operational complexity.

The third principle is accessibility, especially for companies that may not have deep in-house expertise. Many organizations, particularly smaller ones, lack dedicated AI governance or compliance teams. Validaitor helps fill that gap. We empower AI engineers with a suite of AI testing methods and equip governance teams with ready-to-use tools—regulatory mappings, policy templates, documentation workflows—everything aligned with today’s compliance standards.

And even for large enterprises, where internal talent already exists, automation is critical to reduce operational overhead. That’s another area where Validaitor excels. Our platform not only empowers teams but also drives efficiency—making it a strong differentiator in the market.

Can you explain what Validaitor does to someone who maybe has heard of the term cybersecurity, but nothing else?

AI adoption is accelerating rapidly, especially with the surge of interest in generative AI and AI agents. And this isn't just hype: it's becoming a concrete reality across industries and is increasingly embedded throughout organizations, for example in business processes, decision-making, and customer-facing use cases. As a result, ensuring the security of these AI systems and assets is becoming not only more challenging but also more critical.

That’s where AI cybersecurity companies like Validaitor step in. Our role is to ensure that AI systems remain reliable, operate within their intended boundaries, and are safeguarded against threats—especially adversarial attacks. We focus on preserving system reliability, uptime, and integrity.

All of these concerns fall under what we call AI cybersecurity—or more broadly, AI safety. Over the past decade, the academic and research communities have identified numerous failure modes in AI systems. What we’re seeing now is that these once-theoretical issues are becoming real-world risks that businesses need to address urgently.

At Validaitor, we’re committed to tackling this challenge head-on. Our mission goes beyond establishing governance frameworks—we’re here to ensure the safety, security, and trustworthiness of AI systems in every environment where they operate.

“Can we even trust AI anymore and shouldn't we just stop using it?” What would you reply to that?

Trust is one of the biggest challenges that AI faces today, no question about it. And yet, it's equally clear that AI brings tremendous value to organizations of all sizes, across every industry. So the trust gap isn't due to a lack of potential; it stems from real concerns, which can be viewed from two key perspectives.

First, from the user’s perspective: People are becoming increasingly aware of AI’s limitations. They’ve seen examples like hallucinations in models such as ChatGPT that make them hesitant to rely on AI systems in critical workflows. That hesitation can create resistance to adoption.

The second perspective is that of the AI developers themselves. These are the people building cutting-edge solutions, but they often lack complete visibility into the reliability and behavior of their models, especially when deployed in real-world environments. So they, too, face trust issues with their own creations.

These two sides of the trust problem ultimately lead to the underutilization of AI, and that's a missed opportunity. That's exactly where Validaitor comes in. We act as an independent third-party evaluator, a bridge between developers and users. Our platform verifies, tests, and validates AI systems to increase transparency, reliability, and confidence. By doing so, we help organizations move beyond hesitation and fully harness the power of AI.

You know, we're VCs, so we love the big vision, and I'd love to hear it. So my question is: where do you want to see yourself as a founder and CEO, and where do you want to be with Validaitor in 5 to 10 years?

That's a really good question, because the AI security and safety market is still quite new. But one thing we can say for sure is that AI adoption is going to be massive over the next five years, and to support that growth responsibly, we need proper guardrails in place when it comes to AI safety. So, where do we want to be in 5 to 10 years? Our goal is to be a major player in the AI safety and independent verification space. We believe this role is essential to making AI both scalable and trustworthy.

Thank you so much, Yunus!

Thanks for having me!