How to use AI for work—without legal liability | Constantine Karboliotis | TEDxOshawa
TEDx Talks·2025-09-18 17:01

Risks & Liabilities
- Organizations face legal repercussions under existing laws (privacy, human rights, consumer protection, employment, copyright) when AI is improperly controlled [8][9]
- Over-reliance on AI without sufficient human judgment can lead to unforeseen consequences and liabilities [4]
- Data breaches and intellectual property exposure can occur when confidential information is fed into AI systems [6]
- Organizations are responsible for the outcomes and promises made by AI tools, even if those tools are provided by third parties [5][13]

Critical Controls
- Data quality is paramount; bad data leads to legal liability, which makes data provenance, copyright, privacy, and relevance essential considerations [10][11][12]
- Clear AI governance policies are crucial, including an inventory of AI use, vendor-management protocols, and contractual protections [12][13][14] (see the sketch after this summary)
- Continuous oversight and review of AI outputs are necessary to ensure transparency, accountability, and risk management [15]
- Education and training are essential so that people use AI properly, understand its limitations, and maintain human oversight [17][18][19]

Key Takeaways
- AI systems are powerful but literal and unpredictable, so instructions must be framed carefully [3][4]
- Organizations must actively manage AI and understand its limits to avoid becoming servants to the technology [20]
- A culture of mindfulness is needed to balance reliance on AI with human expertise [19]
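As a rough illustration of the governance controls summarized above, the sketch below shows one hypothetical way an organization might keep an inventory of AI use, recording vendor, data provenance, contractual protections, and who reviews the outputs. The `AIUseRecord` structure, its field names, and the `needs_attention` check are assumptions for illustration only; the talk does not prescribe any particular format.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical inventory record for one AI tool in use.
# Field names are illustrative, not taken from the talk.
@dataclass
class AIUseRecord:
    tool_name: str
    vendor: str
    business_use: str
    data_categories: list[str] = field(default_factory=list)  # e.g. "customer PII"
    data_provenance_documented: bool = False   # do we know where the training/input data came from?
    contractual_protections: bool = False      # does the vendor contract address liability and IP?
    human_reviewer: str | None = None          # who is accountable for checking outputs?
    last_output_review: date | None = None

def needs_attention(record: AIUseRecord) -> list[str]:
    """Flag governance gaps in a single inventory record."""
    gaps = []
    categories = " ".join(record.data_categories).lower()
    if "confidential" in categories and not record.contractual_protections:
        gaps.append("confidential data handled without contractual protections")
    if record.human_reviewer is None:
        gaps.append("no named human reviewer for AI outputs")
    if not record.data_provenance_documented:
        gaps.append("data provenance not documented")
    return gaps

if __name__ == "__main__":
    chatbot = AIUseRecord(
        tool_name="Support chatbot",
        vendor="ExampleVendor Inc.",
        business_use="Drafting customer service responses",
        data_categories=["customer PII", "confidential pricing"],
    )
    for gap in needs_attention(chatbot):
        print(f"- {gap}")
```

Even a simple register like this makes the talk's point concrete: someone in the organization stays answerable for each AI tool, rather than the tool running unreviewed.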