Senior Director at Microsoft’s Azure AI engineering organization, Eve Psalti : Artificial Intelligence meets Ancient Greek philosophy


 

An interview with Eve Psalti, Senior Director at Microsoft’s Azure AI engineering organization, by Vicky Evangeliou, VSN Hub Founder & member of the Advisory Panel for GRtraveller magazine

 

“As the curator for this section of the magazine, I’ve had the privilege of crafting thought-provoking questions for Mrs. Psalti, whose insights into the intersection of ancient wisdom and modern AI management promise to be enlightening. With her extensive background in the field, Mrs. Psalti brings a wealth of knowledge and experience to our discussion. Through this interview, we aim to uncover how timeless virtues can inform contemporary leadership practices in the realm of artificial intelligence, providing valuable insights for readers navigating the ever-evolving landscape of AI governance and innovation.”

Vicky Evangeliou
VSN Hub Founder & member of the Advisory Panel

Leadership and Virtue:

 

Ancient Greek philosophers like Aristotle emphasized the importance of virtuous leadership. How do you translate the concept of virtues into modern leadership practices within the realm of AI management?

 

Wisdom and ethical behavior were essential virtues in ancient Greece, and Aristotle, very rightly so, emphasized the importance of integrity and accountability in leaders.

 

In modern times, especially with the rapid evolution of AI technology, it’s essential for AI leaders to prioritize responsible AI considerations in decision-making, ensuring that AI systems and applications adhere to the appropriate standards, including addressing biases, promoting transparency, and considering the broader societal impact of AI technologies.

 

While risk taking and courage were considered virtues in ancient Greece, the emphasis was on exercising them for the greater good. Today, AI leaders should encourage innovation while being mindful of potential risks, such as by ensuring data privacy and proactively mitigating the potential negative impacts of AI.

 

Lastly, Aristotle highlighted the importance of intellectual curiosity and the pursuit of knowledge, which should remain a paramount principle for AI leaders today. Continuous learning and adaptability are essential: AI leaders must foster a culture of lifelong learning, stay informed about the latest AI advancements, and adapt their strategies to changing circumstances.

 

 

Knowledge and Wisdom:

 

Socrates valued the pursuit of knowledge and self-awareness. How does your AI management strategy prioritize continuous learning and the development of wisdom, both for the AI system and the team overseeing it?

 

AI technology has evolved rapidly over the past 2-3 years with deep learning and generative AI (large language models such as those developed by OpenAI), and it continues to change quickly as data becomes more available and computational power becomes more accessible. So, it’s important to implement ongoing training protocols for these AI systems to keep them updated with the latest data.

 

Also, in terms of self-awareness, although AI systems are not “self-aware,” we, as the humans who design and use these AI models, need to regularly evaluate and update their algorithms and processes to improve performance and ensure that the results they yield remain accurate and relevant.

Regarding continuous learning, digital assistants continuously learn from user interactions and adapt to user preferences, understand new voice commands, and improve their ability to fulfill user requests over time. Also, AI systems in healthcare, like those used for medical image analysis, can continuously learn from new medical images and diagnostic outcomes, helping them adapt to diverse patient populations and develop a form of “clinical wisdom” by recognizing subtle patterns and anomalies that contribute to more accurate diagnoses and treatment recommendations.

 

While AI systems can exhibit continuous learning and, to some extent, develop practical wisdom in specific domains, it’s important to note that the concept of true wisdom involves a level of consciousness and subjective understanding that current AI lacks. AI systems operate based on patterns, statistical correlations, and training data, and their “wisdom” is limited to the context in which they have been trained.

 

Adaptability and Change:

 

Heraclitus famously said, “Change is the only constant.” How does your AI management strategy embrace change and adaptability in the rapidly evolving field of artificial intelligence?

 

Regardless of whether you’re implementing or designing AI, it’s critical to adopt agile methodologies and implement iterative development cycles that allow continuous improvement – in AI model training itself, this kind of iterative feedback loop is known as reinforcement learning.

 

For those of us who design AI models and applications, scalability is paramount as data continues to grow in volume, user interactions evolve, and computational capabilities become more efficient.

For those who integrate AI into their infrastructure and processes, it’s important to adopt a culture of experimentation and rapid prototyping to test new ideas, use cases and adopt the best performing ones.

 

For example, advances in medical imaging and diagnostics through AI require constant adaptation to evolving techniques. As new algorithms and methodologies are developed, healthcare AI applications need to incorporate these improvements to enhance accuracy in disease detection and diagnosis.

 

Also, improvements in natural language understanding and dialogue generation techniques impact virtual assistants and chatbots. Adapting to these improvements enhances the conversational abilities, responsiveness, and overall user experience of AI-powered virtual assistants.

In either case we need to invest in continuous learning and skill development for ourselves and our teams to keep abreast of the latest developments in the AI field. Also, it’s essential to establish governance frameworks that are adaptable to changes in regulations, responsible AI standards and societal expectations.

 

By embracing these strategies, organizations and individuals can create an AI management approach that not only embraces change but actively leverages it for innovation and growth. The ability to adapt to change then becomes a competitive advantage.

 

Sophrosyne in AI Governance:

 

The concept of “sophrosyne” involved self-control and moderation. How can these principles be integrated into the governance and regulation of AI technologies to ensure responsible and ethical use?

 

Establishing clear guidelines and standards for the responsible development and use of AI technologies is absolutely critical and aligned with the concept of “sophrosyne” – these guidelines can help teams avoid undesirable practices such as bias, discrimination, and privacy violations.

 

Transparency and traceability in AI systems can provide clear explanations of how these AI models and algorithms are put together and what kind of data they use to derive their results.

 

Also, we should regularly monitor AI processes to assess and address any issues, and apply human oversight to critical decisions.

While these AI models are powerful and very capable, they are still a tool and a “co-pilot” for humans; they should not replace the critical thinking, judgment, and creativity that are unique to humans.

 

What we’ve seen work well is fostering strong collaboration between AI specialists and experts from diverse disciplines, including ethics, law, sociology, and the humanities as well as engineering, to make sure diverse voices and points of view are considered. In addition, governments need to establish AI laws and regulations that are adaptive and can change as the technology evolves.

 

Socratic Questioning in AI Design:

 

Socrates used questioning to stimulate critical thinking. How can the Socratic method be applied in the design and development of AI systems to encourage ethical considerations and responsible decision-making?

 

The Socratic method can be a great user-centric approach to the development and use of AI – encouraging AI engineers and developers to ask the right questions throughout the design process, such as about potential biases in the training data and how these AI models can impact various user groups.

 

I believe it can also bring accountability and responsibility to those who develop AI systems by integrating regular reflection sessions, where teams can openly explore and discuss ethical considerations and potential challenges, such as “who is accountable for the results of this AI model?” and how to safeguard the process for users to ensure there are no unintended consequences.

 

By leveraging the Socratic method as an educational tool, we can foster a culture of continuous learning and responsible AI awareness. Using questioning as a tool for teaching responsible AI and prompting critical thinking skills can be an effective way to engage all stakeholders whether they’re developing or using AI systems.

 

Eve Psalti is a tech and business leader with 20+ years of experience, currently Senior Director at Microsoft’s Azure AI engineering organization, responsible for scaling Generative AI solutions with customers and partners.
She was previously the Head of Strategic Platforms at Google Cloud, where she worked with F500 companies, helping them grow their businesses through digital transformation initiatives.
Prior to Google, Eve held business development, sales and marketing leadership positions at Microsoft and startups across the US and Europe, leading 200-person teams and $600M businesses.
A native of Greece, she holds a Master’s degree and several technology and business certifications from London Business School and the University of Washington. Eve currently serves on the board of WE Global Studios, a full-stack startup innovation studio supporting female entrepreneurs.
https://www.linkedin.com/in/evepsalti
https://twitter.com/evepsalti