When AI Gets It Wrong (Quando sbaglia l'AI) | Alberta Antonucci | TEDxLink Campus University
TEDx Talks·2025-12-03 17:59

AI and Responsibility
- The core issue is not AI's errors but the lack of human governance and understanding of AI; technology should support, not replace, human judgment [19][20]
- The speaker highlights the paradox of widespread AI adoption coupled with a reluctance to accept responsibility for its outcomes [20]
- "AI hallucinations" are framed as human failures to recognize the incorrect information AI provides [21]
- Responsibility for AI use ultimately lies with the user, not with the AI itself [23]

AI in Education
- Students use tools like ChatGPT to complete assignments while teachers use detection tools to spot AI-generated content, creating a cycle of counter-measures [6][7]
- Educators struggle with how to handle AI in schools, with some even banning word processors like Word so they can monitor student work [5][7]
- The speaker argues that schools should focus on educating students' hearts and minds rather than solely on task completion [11]

AI in the Workplace and Legal Profession
- A real case at Samsung, in which employees shared confidential code with an AI tool, illustrates the risk of security breaches and the need for corporate AI policies [13][14]
- The speaker references Italian Law 132/2025, which mandates corporate AI policies, responsible AI usage, and employee training [14][18]
- The talk cites lawyers who used AI to generate legal precedents that turned out to be non-existent, resulting in sanctions from judges [16][17]