
CHSS Launches AI Distinguished Speaker Series

Year-long series brings expert insights to campus to spark critical conversations about the future of technology, society, and education.



This year, the College of Humanities and Social Sciences launches the AI Distinguished Speaker Series, a yearlong exploration of how artificial intelligence is reshaping human systems, knowledge, and society, and of the role that the humanities and social sciences could and should play in framing the questions and guiding the regulation of these systems. Designed to be inclusive and engaging, the series will bring to campus thought leaders who not only explain emerging technologies but also invite students and the wider community into critical conversations about their impact.

The series will feature experts from multiple fields, addressing topics such as AI literacy, the truth and trust challenges of generative AI, ethical and responsible use, the impact of AI on teaching and learning, and how institutions and communities can prepare for rapid technological change. By blending expert insight with student-centered formats, the series aims to spark new ways of learning, questioning, and participating in AI’s future.

More speakers, dates, and topics—including issues such as AI and education, equity, and institutional readiness—will be announced throughout the academic year. Check back often for new events in the series!

This speaker series is co-sponsored by the College for Education and Engaged Learning.

Upcoming Speakers

Designing AI for Learning, Not Convenience: A Vision and Early Progress Toward a Performance-Based Course Model, Stephen Balfour
Wednesday, February 4, from 10 to 11:30 a.m., Presentation Hall

What if we are asking AI to do the wrong things in education? Rather than automating grading, replacing instruction, or bypassing learning, this talk explores AI as a learning interface through which students demonstrate mastery of instructor-defined knowledge and skills.

Registration to Attend

This event will be offered in person only.

Designing AI for Learning, Not Convenience: A Vision and Early Progress Toward a Performance-Based Course Model

Abstract
What if we are asking AI to do the wrong things in education? Rather than automating grading, replacing instruction, or bypassing learning, this talk explores AI as a learning interface through which students demonstrate mastery of instructor-defined knowledge and skills. Drawing on UGA Online’s emerging work, I will describe a performance-based course model in which students show what they know through sustained interaction with texts, data, arguments, and methods, guided by AI that adapts to their learning history and supports work within their zone of proximal development. Faculty retain control over standards, content, and evaluation, while AI scales practice, feedback, and coaching without diminishing intellectual standards, interpretation, or mentorship.

Speaker Bio
Dr. Stephen Balfour, Executive Director of UGA Online, began working (poorly) with the AI precursors to Generative Pre-trained Transformers (GPTs) in 1992. From 2008 to 2017, he held contracts with the US Department of Defense that provided an early AI translation assistant to its research team. Steve and his colleague, Dr. James Castle, Associate Director of UGA Online, began working with ChatGPT when it emerged in 2022. James scaled that work into the orchestration of Large Language Models (LLMs) and then made the short leap to coding their own agents in Python in a Jupyter notebook environment using OpenAI’s Application Programming Interfaces (APIs). This talk describes the trajectory of their work applying Generative AI to education.

Past Speakers

An Imperative for AI Literacy for All, Sri Yeswanth (Yash) Tadimalla, PhD
Wednesday, October 8

Dr. Tadimalla will present an interdisciplinary approach to understanding AI, its socio-technical implications, and its practical applications for all levels of education. The talk will cover the scope and technical dimensions of AI, informed and responsible interaction with Gen-AI, the socio-technical issues of ethical and responsible AI, and the social and future implications of AI.
An Imperative for AI Literacy for All

Abstract
The talk will focus on what “AI Literacy for All” means. Dr. Tadimalla will present an interdisciplinary approach to understanding AI, its socio-technical implications, and its practical applications for all levels of education. With the rapid evolution of artificial intelligence (AI), there is a need for AI literacy that goes beyond the traditional techno-centric AI education curriculum. AI literacy has been conceptualized in various ways, including public literacy, competency building for designers, conceptual understanding of AI concepts, and fluency for domain-specific upskilling. In “AI Literacy for All,” the emphasis is on a balanced approach that includes technical and socio-technical learning outcomes to enable a conceptual understanding and critical evaluation of AI technologies in real-world contexts. The talk will highlight the scope and technical dimensions of AI, informed and responsible interaction with Gen-AI, the socio-technical issues of ethical and responsible AI, and the social and future implications of AI.

Dr. Tadimalla will also show how “AI Literacy for All” learning outcomes can be adjusted for various learning contexts, including STEM and non-STEM majors, high school summer camps, the adult workforce, and public education. The talk will empower the audience to advocate for a shift in AI education that offers a more interdisciplinary and socio-technical pathway to broaden participation in AI dialogues and decision-making for all.

Speaker Bio

Sri Yeswanth (Yash) Tadimalla holds a Ph.D. in Computing and Information Systems with an interdisciplinary focus in Sociology and a Master of Science in Computer Science from UNC Charlotte. He currently serves as a Computing Innovation (CI) Fellow focused on AI Education at the Computing Research Association (CRA) in Washington, DC. At UNC Charlotte, he contributes to NSF-funded research projects within the Center for Humane AI Studies, the Center for Education Innovation Research (CEI) Lab, and the Human-Centered Computing (HCC) Lab.

Dr. Tadimalla’s research agenda examines how human identity impacts interactions with technology and learning experiences, particularly within the contexts of Artificial Intelligence (AI) and Computer Science (CS) education. His contributions include several socio-technical frameworks: the Human-AI Interaction Ecosystem Framework, the AI Identity Boundary Object model—which examines how identity shapes the creation, perception, and use of AI—and the Higher Education AI Readiness (HEAIR) Framework. His work on universal AI Literacy advocates a shift from techno-centric AI education to an interdisciplinary, socio-technical model.

As the Global Focal Point for the United Nations Major Group for Children and Youth (UNMGCY) Science-Policy Interface and President of the World Student Platform for Engineering Education and Development (SPEED), Dr. Tadimalla advocates globally for equitable access to STEM education, mental health, gender/sexual and reproductive health rights (SRHR), and sustainable human-centered technology development and deployment. He coordinates youth and civil society participation in advocacy mechanisms and policy engagements for high-level meetings within the ECOSOC cycle, including the STI Forum, Partnership Forum, Youth Forum, High-Level Political Forum (HLPF), and United Nations General Assembly (UNGA). He has presented his work as a keynote speaker and researcher at distinguished international conferences (ASEE, ACM-SIGCSE, IEEE-FIE, AAAI, UN) in more than 20 countries.

View his website or find him on LinkedIn.

The Truth of the Matter in the Age of Generative AI, Tina Eliassi-Rad, PhD
Wednesday, November 5, from 10 to 11:30 a.m., Presentation Hall

We live in the age of algorithmically infused societies where human and algorithmic behavior are intertwined. Consider generative AI tools (such as ChatGPT and Gemini). Their use across a broad spectrum of our society is undeniable. How can we shape their use to improve our society?
Watch the Recording
The Truth of the Matter in the Age of Generative AI

Abstract
We live in the age of algorithmically infused societies where human and algorithmic behavior are intertwined. Consider generative AI tools (such as ChatGPT and Gemini). Their use across a broad spectrum of our society is undeniable. How can we shape their use to improve our society? As it stands, such tools often increase epistemic instability in our society. Generative AI tools, for example, are not experts in any field and are prone to falsehoods (a.k.a. hallucinations) and adversarial attacks, yet they are treated like experts. The relationship between a person and a generative AI tool is also exaggerated by the sense of familiarity one feels when using such tools. The lack of effective oversight and accountability for this technology exacerbates these issues. How can we govern a technology that is mutating so rapidly? A digitally savvy public is an essential part of the solution.

Speaker Bio
Tina Eliassi-Rad is a Professor of Computer Science and the inaugural Joseph E. Aoun Chair at Northeastern University. She is also a core faculty member at Northeastern’s Network Science Institute. In addition, she is an external faculty member at the Santa Fe Institute and the Vermont Complex Systems Institute. Prior to joining Northeastern, Tina was an Associate Professor of Computer Science at Rutgers University, and before that she was a member of technical staff and a principal investigator at Lawrence Livermore National Laboratory. Tina earned her Ph.D. in Computer Sciences (with a minor in Mathematical Statistics) at the University of Wisconsin-Madison. Her research is at the intersection of data mining, machine learning, and network science. She has over 150 peer-reviewed publications (including a few best paper and best paper runner-up awards) and has given over 300 invited talks and 14 tutorials.

Tina’s work has been applied to personalized search on the World-Wide Web, statistical indices of large-scale scientific simulation data, fraud detection, mobile ad targeting, cyber situational awareness, drug discovery, democracy and online discourse, and ethics in machine learning. Her algorithms have been incorporated into systems used by governments and industry (e.g., IBM System G Graph Analytics), as well as open-source software (e.g., Stanford Network Analysis Project). Tina received an Outstanding Mentor Award from the U.S. Department of Energy’s Office of Science in 2010, became an ISI Foundation Fellow in 2019, was named one of the 100 Brilliant Women in AI Ethics in 2021, received Northeastern University’s Excellence in Research and Creative Activity Award in 2022, was awarded the Lagrange Prize in 2023, and was elected Fellow of the Network Science Society in 2023.

View her website or LinkedIn.