Symposium on Responsible AI
October 16-17, 2025
McNally Amphitheatre
Lincoln Center Campus, Fordham University
140 West 62nd Street, New York City
Sponsored by:
Purpose and Significance
The ongoing AI revolution presents unprecedented opportunities and complex challenges for higher education, industry, and society at large. Unlike traditional software systems, AI systems powered by machine learning exhibit two key characteristics:
- Dependence on vast datasets, which raises critical concerns around privacy, data rights, provenance, and bias.
- Non-deterministic outputs, which introduce new risks related to trust, safety, reliability, and security.
Given the growing application of AI across nearly every sector of society, it is essential that we deepen our understanding of both the use and governance of these technologies.
At its core, Responsible AI seeks to foster the development of trustworthy AI systems that promote discovery and informed decision-making for the common good across both cyber and physical domains. Achieving this vision requires interdisciplinary collaboration that spans the sciences, social sciences, humanities, public health, law, business, and education, as well as meaningful partnerships with industry, government, and civil society.
The Symposium on Responsible AI brings together scholars, practitioners, and professionals from across the greater New York City area and beyond to advance this collaborative mission. The event is designed to strengthen cross-institutional networks and foster dialogue around the ethical, legal, and sustainable dimensions of AI. Open to the public and held in person, the symposium will feature keynote speeches, selected paper sessions, interactive workshops, and networking opportunities aimed at building a more responsible AI ecosystem.
We have confirmed the following four keynote speakers: 1) Ben Brooks, Harvard University; 2) Julia Stoyanovich, NYU; 3) Anthony Annunziata, IBM; and 4) Doni Bloomfield, Fordham University.
Symposium Focus Areas and Topics
A) AI Regulation & Frameworks
This track explores the evolving legal and regulatory landscape surrounding AI, with an emphasis on accountability, compliance, and ethical oversight.
- Legal implications of AI applications across sectors
- AI in legal services: Opportunities, challenges, risks, and ethical considerations
- The role of AI in the courts: Evidence, reliability, case law, and due process
- Global AI governance: The EU AI Act, U.S. Executive Orders, NIST AI Risk Management Framework, and other emerging policies
- Accountability in AI systems: Legal liability, regulatory compliance, and social responsibility across the public and private sectors (higher education, industry, government, tech companies, and NGOs)
- Human-centered design and environmental considerations
- System integration and deployment challenges
B) Trustworthy AI
This track focuses on the ethical, technical, and social foundations for building AI systems that are transparent, fair, secure, and aligned with human values.
- Transparency and explainability: Addressing the "black box" nature of AI
- Embedding human values: Promoting reliable information, non-discrimination, and the common good
- Mitigating bias and advancing fairness in algorithmic decision-making and scientific discovery
- Quality management in AI development: Testing, evaluation, implementation, and supply chain integrity
- Ethics and accountability: Navigating ethical dilemmas and responsibility in cases of AI failure
- Data infrastructure and governance: Ensuring data privacy, quality, rights, and provenance
- Security and sustainability: Designing AI systems to be robust, secure, and environmentally sustainable
- Social justice in AI: Advancing equity, inclusion, and responsibility in autonomous and agentic AI systems
- Leveraging open-source tools: Creating, evaluating, and deploying AI using open and collaborative technologies
Submission Guidelines for Workshops & Paper Presentations
Workshops: Interactive demonstrations of tools, methods, and frameworks for developing and supporting Responsible AI systems, led by experts from academia, industry, government, and civil society.
Selected Paper Sessions: In-depth discussions on key concepts, case studies, best practices, and lessons learned from real-world AI applications. Each paper presentation will be allotted approximately 20 minutes.
Potential Presenters:
- University Professors and Students: Scholars in AI research and education
- Legal Scholars and Practitioners: Experts in regulation, intellectual property, privacy, and emerging AI legislation
- Philosophers and Ethicists: Thinkers examining fairness, accountability, and value systems in AI development
- Industry Leaders and Corporate Counsel: Professionals overseeing AI governance, compliance, and risk management
- Policymakers and Regulators: Key contributors to frameworks such as the EU AI Act, U.S. executive orders, and global standards
- Technologists and Data Scientists: Specialists in data infrastructure, algorithm design, safety protocols, and responsible deployment
- Civil Society Leaders and Equity Advocates: Voices addressing bias, discrimination, and the ethical impact of AI on marginalized communities
Registration
- To register as a paper presenter or workshop leader, please submit the registration form by September 15, 2025. All submissions will be reviewed by the organizing committee, which will announce its decisions by September 30, 2025.
- To register as an audience participant, please submit the registration form by October 10, 2025.
- Symposium Cost: Free of charge.
- Complimentary lunch will be provided to all registered participants.
Contact: For additional information, please contact Mr. David Heston at [email protected].