Deep Dives with Iman

Conversations on humans and machines

In collaboration with Radio4Brainport

Episode 18: Embodied Cognition, Dynamical Systems & Attractor Networks in Care Robotics

In this episode of Deep Dives with Iman, I speak with Prof. Dr. Yulia Sandamirskaya, head of the Center for Cognitive Computing in Life Sciences at Zurich University of Applied Sciences. Yulia builds robots that perceive, plan, and act in the real world using data- and energy-efficient neuromorphic computing. She explains that our brain didn’t evolve for abstract tasks; it evolved to move. Even small children show cognitive behaviors grounded in movement. [...] Like when we were little, doing simple addition, ‘seven plus two’, we imagine seven on a number line and move right two units. We place things in space.

Homepage

LinkedIn

Tags: Robotics, Yulia Sandamirskaya, Dynamical Systems, neuromorphic, Brainport, Embodied Cognition, Care, AI, Attractor Networks

Episode 17: Safe AI For Children

I had the privilege of speaking with Tara Steele, Founder and Director of the Safe AI for Children Alliance. We discussed the growing concerns around AI’s impact on children, particularly with the rise of grief bots—chatbots designed to simulate deceased people—and the lack of understanding regarding their effects.

Safe AI for Children Alliance

LinkedIn

Tags: Tara Steele, Safe AI for Children Alliance, AI, Children, Brainport, Eindhoven, Grief bots

Episode 16: Seeing Atoms, Shaping Futures: Dr. Remco Schoenmakers on Asimov, AI, Science, Purpose and Strategy

In this episode of Deep Dives with Iman, I am joined by Dr. Remco Schoenmakers, Senior Director and AI Strategy Lead at Thermo Fisher Scientific. Dr. Schoenmakers discusses his journey in AI, from his early days in astrophysics to leading AI strategy for the Electron Microscopy business. He shares insights on how AI is transforming the scientific landscape, emphasizing that it is a tool to enhance, not replace, human expertise. With his vast experience, Dr. Schoenmakers also addresses the growing influence of AI in workplaces, offering perspectives on the role of future generations of scientists.

Researchgate

LinkedIn

Tags: Eindhoven, Brainport, Science, Thermo Fisher Scientific, AI, Electron Microscope, Physics

Episode 15: AI, Art, and the Human Touch: Mei-Li Nieuwland on Creativity in the Age of Machines

How is AI shaping the future of creativity? Where does human artistry remain irreplaceable? Mei-Li is an incredible illustrator and graphic journalist with a background in AI and cultural anthropology. In our conversation, we explored AI's growing influence on the art industry, its impact on concept and environmental art, and why her niche remains largely unaffected.

Mei-Li Nieuwland

Tags: Creativity, AI, Art, Culture

Episode 14: Could Generative AI be Useful for Science and Understanding the World?

In this episode, Jakub Tomczak, a leading figure in Generative AI and former Program Chair of NeurIPS 2024, shares his insights on the transformative impact of Generative AI, not just in the field of artificial intelligence but in scientific discovery as well. Jakub explains how this technology is reshaping our understanding of generation processes and its potential to revolutionize various domains. Jakub offers a thought-provoking perspective on AI's societal implications, inviting us to be mindful. For those interested in a deeper dive into his work, Jakub's book, Deep Generative Modeling, is an essential resource on advanced AI models, including diffusion, flow, and energy-based models, GANs, and Variational Autoencoders.

Jakub Tomczak LinkedIn

Jakub Tomczak Personal Webpage

Tags: AI, Science, Generative AI

Episode 13: "Your Business Focus and Opportunity Is Trustworthiness" Ger Janssen, Philips' AI Ethics and Compliance Lead

Together with Ger Janssen, we discuss responsible AI practices, highlighting trustworthiness, fairness, and transparency in AI applications. Ger, Philips' AI Ethics and Compliance Lead, explores AI’s impact on industries, particularly healthcare. He discusses how AI can enhance patient care while addressing biases and ethical challenges. With AI's rapid rise, it’s crucial to adapt education and regulations to support effective human-AI collaboration. As Janssen underscores, AI isn’t going away; businesses must learn to leverage it responsibly. This episode offers essential insights into AI’s evolving influence on industries and society.

LinkedIn

Tags: human-AI collaboration, responsible AI, fairness, healthcare, Ethics, trustworthiness, regulation, Eindhoven, society, AI, industry

Episode 12: How can nature inspire artificial intelligence research to revolutionize energy efficiency? Dr. Federico Corradi explains.

A brief introduction to neuromorphic computing by Dr. Federico Corradi, Assistant Professor at TU Eindhoven. Specializing in energy-efficient AI inspired by the brain, Dr. Corradi leads the Neuromorphic Edge Computing Systems Lab, exploring how nature's principles can reshape AI to consume less energy. In this episode, Dr. Corradi explains how the brain’s minimal energy use inspires new AI systems, addressing the massive energy demands of large language models. In his line of research, the boundary between hardware and software is increasingly blurred. Designing algorithms and hardware together could transform AI into more sustainable and independent systems, enabling smarter edge devices without cloud reliance.

University Page

LinkedIn

Tags: AI, Edge, Technology, Eindhoven, University, Innovation, Computing, Brain, Energy, Neuromorphic

Episode 11: Robert Engels: "Agents need the same thing as you and I: they need an idea of intent and purpose of the counterpart or adversary"

Iman Mossavat engages Robert Engels, Head of AI Futures Lab at Capgemini, in a discussion on the role of context and abstraction in AI. With 36 years of experience, Engels examines why current generative AI systems, like GPT-4, excel at tasks yet fail in complex, real-world settings. Robert argues, “Two things are underperforming in the world of generative AI—abstraction and logical reasoning.” Robert underscores the need for AI to adopt a world model akin to philosophical reasoning: "Plato and Socrates understood this when they built logics—they looked at the world and tried to grasp its principles."

LinkedIn (Robert Engels)

Tags: Context, Eindhoven, Innovation, AI, Abstraction, GPT, Capgemini

Episode 10: Artin Entezarjou: "AI vs Doctors: Human Judgement Wins (for time being) in Complex Medical Contexts"

Our host Iman Mossavat welcomes Dr. Artin Entezarjou, a board-certified general medicine specialist, to discuss AI's role in healthcare. Dr. Entezarjou highlights AI's progress and challenges in handling complex clinical scenarios. “Humans can outperform AI when questions aren’t multiple choice, especially when psychosocial factors are involved,” he explains. His research comparing GPT-4 to human doctors reveals AI's limitations in understanding real-world medical complexities and patient contexts. “Clinical decisions require more than symptoms; they demand insight into patients’ preferences and circumstances,” he says. Dr. Entezarjou stresses the continued need for human judgment, adding, “We’re still the masters of recognizing when more context is needed, though this is rapidly changing.” He advocates for AI systems that are robust, trustworthy, and intuitive, emphasizing their role in supporting—not replacing—physicians. “AI can excel in specific tasks, but general judgment remains human,” he concludes.

Study

LinkedIn (Artin Entezarjou)

Tags: Eindhoven, Healthcare, Innovation, Health, Clinical, Care, AI, Medicine

Episode 9: Diederik Roijers: "What If We Optimized for Enough, Not More?"

Join Iman Mossavat as he speaks with Diederik Roijers, senior researcher at the AI Lab at Vrije Universiteit Brussel. Diederik specializes in Multi-Objective Reinforcement Learning (MORL), a framework that shifts focus from optimizing a single outcome to balancing multiple, sometimes competing, objectives. Drawing on the ancient idea of balance—akin to Yin and Yang—Diederik challenges the status quo of “maximizing more.” Instead, he advocates for pursuing outcomes that are "good enough", prioritizing practicality, ethics, and societal benefit over perfection.

💡 Key Insight: How can AI navigate trade-offs like maximizing rewards while minimizing risks? Diederik’s vision emphasizes deliberate choices in AI design, promoting transparency, maintainability, and alignment with societal values. Together, they explore how AI can serve everyone fairly—not just a select few—while balancing innovation and responsibility. Tune in for a refreshing perspective on building AI systems that prioritize balance, fairness, and shared benefit.

For more about Diederik Roijers: Google Scholar

Episode 8: Weaving Trustworthy AI with Threads of Reason: The Subtle Genius of Professor Mehdi Dastani

Join Iman Mossavat for an enlightening conversation with Professor Mehdi Dastani, Chair of the Intelligent Systems group at Utrecht University. With over 500 scientific contributions, Professor Dastani has spent decades merging computer science and philosophy in a way that challenges and inspires. His work spans formal logic, reinforcement learning, ethics, and human-centered AI, bringing fresh perspectives to the challenges facing modern technology.

Key Insight: Professor Dastani emphasizes the need for a fusion of machine learning, formal reasoning, and domain expertise to create AI systems that are safe, ethical, and aligned with human values. His interdisciplinary approach, drawing from psychology, law, and philosophy, offers innovative solutions to issues like AI bias, accountability, and safety risks.

Mehdi Dastani on Google Scholar

Episode 7: Lazy But Brilliant: How LazyDynamics is Set to Redefine Real-Time Decision Making with Reactive AI

Join Iman Mossavat for an insightful episode with special guests Albert Podusenko and İsmail Şenöz, as we dive into the innovative world of Lazy Dynamics. This cutting-edge company is reshaping how agents process and act on real-time data in unpredictable environments. Their software streamlines the development of intelligent systems capable of navigating uncertainty — whether it's an ambulance optimizing routes or legal tech helping lawyers model strategic uncertainties in complex litigation.

At the heart of their approach is RxInfer, a fast and efficient tool designed to overcome the computational limitations of traditional probabilistic programming libraries, enabling real-time decision-making. With a unique reactive message-passing system, Lazy Dynamics ensures agents can keep reasoning even if some sensors fail — a true breakthrough for dynamic, high-stakes environments. Discover how collaboration and open-source contributions are fueling their success and how these innovations are shaping the future. Tune in for an episode packed with insights into the future of AI and real-time decision-making! For more about the guests:

Episode 6: Is your AI project legally compliant, with Inge Brattinga

In this episode, Inge Brattinga, lawyer at VRF Advocaten and lecturer at Avans University, discusses how AI is reshaping hiring, human resources (HR), decision-making, and privacy. She highlights the importance of AI literacy, offering practical advice for businesses navigating upcoming regulations.

Episode 5: Evolutionary Science and AI with Indre Žliobaitė

Professor Indre Žliobaitė explores parallels between evolutionary science and AI, discussing adaptive models, concept drift, and more. She explains how principles of evolution can inform AI, providing insights into dynamics like competition and adaptation.

Episode 4: Legal Challenges of AI with Colette Cuijpers

Colette Cuijpers, Associate Professor at Tilburg Law School, discusses the urgent legal and ethical challenges AI presents, from accountability to bias. This episode highlights the balance between innovation and regulation in a rapidly evolving field.

Episode 3: Causal AI and Intelligent Systems with Alexander Molak

Alexander Molak, machine learning researcher, explains why Generative AI lacks true causal understanding. He discusses Causal AI and its potential to improve intelligent systems by moving beyond mere correlation to deeper cause-and-effect insights.

Episode 2: Systems Engineering and AI with Gerrit Muller

Gerrit Muller, a systems engineer and professor, discusses the integration of systems engineering and AI, exploring the importance of a balanced approach. He examines common challenges, such as data quality and interpretability, and the complementary role of Model-Based Systems Engineering.

Episode 1: Understanding Power Dynamics in AI with Mahault Albarracin

Mahault Albarracin, Director of Applied Research and PhD candidate in Computing, discusses how AI often prioritizes objectives set by powerful stakeholders, affecting societal power structures. This episode explores the intersection of technology, ethics, and power in AI design.

Contact