Generative AI and the End of Trust
Tuesday, May 2, 2023
12pm to 1pm PT

Location: McClatchy Hall, Room S40. McClatchy Hall is near the Oval, a short distance from Encina Hall.


Event Details:

Join the Cyber Policy Center and the Program on Democracy and the Internet on Tuesday, May 2, from 12 to 1 PM Pacific for Generative AI and the End of Trust, a conversation with Jeff Hancock, co-director of the Cyber Policy Center and director of the Stanford Social Media Lab. The session will be moderated by Nate Persily. This session is part of the Spring Seminar Series, running April through June and hosted by the Cyber Policy Center with the Program on Democracy and the Internet. Sessions are offered both in person and virtually; in-person attendance is limited to Stanford affiliates, and lunch is provided for in-person attendees. Registration is required.

Recent advances in AI have major implications for trust in human interaction. AI already plays a growing role in financial decision-making, risk assessment, and fraud detection, and the introduction of generative AI will further challenge the maintenance of trust and accountability in an increasingly AI-mediated world. In this talk, Hancock will cover recent research on how people perceive and detect AI in human communication, and how generative AI is likely to undermine trust in several important human domains.

About the Speaker

Jeff Hancock is the Harry and Norman Chandler Professor of Communication at Stanford University, founding director of the Stanford Social Media Lab, and co-director of the Stanford Cyber Policy Center. He is also a senior fellow at the Freeman Spogli Institute for International Studies (FSI). A leading expert on social media behavior and the psychology of online interaction, Professor Hancock studies the impact of social media and AI technology on social cognition, well-being, deception, and trust, and how we use and understand language. Recently, he has begun work on understanding the mental models people hold about algorithms in social media, as well as on the ethical issues associated with computational social science. He is also the founding editor of the Journal of Online Trust and Safety.
