Keynote Speakers

Anthony J. Annunziata

Abstract:
A lot has happened in a year! Several high-performing pretrained AI foundation models with openly available parameters and permissive licenses have been released, including the GPT-OSS models (OpenAI), the Llama series (Meta), Gemma 3 (Google), Granite 3.3 (IBM), the Qwen 3 series (Alibaba), and Mixtral (Mistral). These models are now major pillars of the AI ecosystem and are essential for innovation and for the efficient deployment of AI in society. Concerns about the safety and trustworthiness of this technology are key challenges that the community is addressing head-on.

What’s next? We must scale intelligence outward.

Instead of concentrating knowledge and capability in AI models, we need to do the opposite: empower individuals and organizations with trusted, expert agent capabilities that they own and control. This will take significant open source advancements in data, data structures, utility-scale domain-specific models, evaluation, and emerging protocols for communication across humans, agents, and tools. We need to figure out how to incorporate AI agent capabilities seamlessly into both human-driven and software workflows. The AI Alliance and its more than 180 members are at the forefront of driving this progress. In this talk, I will highlight several of our projects that promise to scale intelligence outward.

Bio:
Dr. Anthony Annunziata is the Director of AI Open Innovation at IBM and Co-founder & Co-chair of the AI Alliance. He leads IBM’s open AI ecosystem efforts across data, models, software, and governance technologies. His major focus is the AI Alliance, a growing open research and technology consortium and open source foundation co-founded by IBM and Meta with more than 180 organizational members. Anthony has held a number of positions of responsibility working closely with IBM senior leadership, including driving the strategy and launch of IBM’s Granite series of open foundation models and creating external partnerships and a developer community around them. Earlier, he started IBM’s AI for science program and built its science developer platform to bring generative AI to scientific discovery. Before that, he created and launched IBM Quantum and led its external product, ecosystem, and customer development and growth from inception to global leadership. Anthony started his career as a physicist researching spintronics for semiconductor memories, earned his Ph.D. from Yale, and holds 105 patents.

Schedule and location:
10:30 a.m. - 12:00 p.m., October 17, McNally Amphitheatre


Doni Bloomfield

"Governing Dual-Use Training Data"

Abstract:
Scientists may soon be able to use pathogen data to train artificial intelligence models that can cause grave harm—for example, by designing novel viruses or evading screening programs for synthetic nucleic acids. This possibility raises questions about the ethics of generating, disseminating, and regulating pathogen data. It is also one instance of the broader question of how to govern data that could be used to train AI models capable of both benevolent and malicious use. I examine the economics of dual-use data and argue that past policies for overseeing biological research, such as those meant to protect clinical trial participants and genetic donors, carry key lessons in this new domain.

Bio:
Doni Bloomfield is an Associate Professor of Law at Fordham Law School. He teaches and writes in the areas of intellectual property, biosecurity, antitrust, national security law, torts, and health law. His work has been published, or is forthcoming, in Science, Washington University Law Review, Iowa Law Review, Antitrust Law Review, British Medical Journal, Journal of the American Medical Association, and elsewhere. He is a Greenwall Faculty Scholar, and previously served as a law clerk to Judge Timothy B. Dyk of the Federal Circuit and Judge Patricia A. Millett of the D.C. Circuit. Before law school, Bloomfield was a biotechnology reporter for Bloomberg News in Boston.

Schedule and location:
10:30 a.m. - 12:00 p.m., October 17, McNally Amphitheatre


Ben Brooks

"Why Open-Source Matters, and How It's At Risk"

Abstract:
Policymakers, developers, and researchers are grappling with a tsunami of AI regulatory and legislative proposals. Ben Brooks will discuss how these well-intentioned reforms can chill the open release of AI technology—and why that matters for transparency, competition, and privacy in AI. He will also examine the unintended effects of different proposals and how lawmakers and civil society can navigate recurring obstacles.

Bio:
Ben Brooks is an Affiliate at the Berkman Klein Center, Harvard. He engages decision makers around the world to promote open-source innovation in emerging rules. He served as Head of Public Policy for Stability AI, custodian of Stable Diffusion, testifying on AI regulation before federal, state, and international legislatures. Previously, Ben advocated for the safe, open, and durable regulation of emerging technologies: ridesharing at Uber, digital assets at Coinbase, and drone delivery at Google's Wing, America's first certified drone "airline". He has worked with authorities on the ground in over 25 countries, from Hanoi to Helsinki, as they navigate complex reforms in high-stakes or permission-based domains.

Schedule and location:
10:30 a.m. - 12:00 p.m., October 16, McNally Amphitheatre


Julia Stoyanovich

"Responsible AI Beyond Principles: Building a Regime of Distributed Accountability"

Abstract:
Much of the early work on Responsible AI emphasized broad principles—fairness, transparency, accountability—but too often left them abstract. This talk makes the case for moving toward a regime of distributed accountability, in which responsibility is shared across designers, deployers, regulators, professionals, and the public. I will highlight examples of lifecycle-wide, context-aware technical interventions—from responsible data practices to explainability, privacy, and beyond—that show how technical work can support and reinforce accountability. Yet such interventions, on their own, are not enough. To have real impact, they must be complemented by literacy and guardrails: literacy equips professionals and citizens to use, question, and contest AI outputs; guardrails in law and regulation ensure transparency, oversight, and recourse. Through examples from healthcare, education, and public services, I will show how these dimensions fit together. Technical tools gain meaning when embedded in broader institutional and civic practices, exposing the “knobs of responsibility” to people so they can be debated, adjusted, and shared. This is how general-purpose AI can become public-purpose AI.

Bio:
Dr. Julia Stoyanovich is an Associate Professor of Computer Science & Engineering and of Data Science, and Director of the Center for Responsible AI (https://r-ai.co) at New York University. Her mission is to make “Responsible AI” synonymous with “AI.” She pursues this goal through academic research, education, technology policy, and public engagement. Her research spans data management and AI systems, as well as the ethics and governance of AI. Julia holds a Ph.D. in Computer Science from Columbia University. She is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE) and a Senior Member of the Association for Computing Machinery (ACM).

Schedule and location:
10:30 a.m. - 12:00 p.m., October 16, McNally Amphitheatre