Yilu Zhou
Bio:
Dr. Yilu Zhou’s research on responsible AI emphasizes transparency, trustworthiness, and safety in real-world applications. Dr. Zhou explores how advanced models, such as multimodal large language models and deep learning methods, can be designed to align with policies, mitigate bias, and strengthen accountability. A central motivation is the protection of vulnerable populations, for example through trustworthy frameworks for more accurate app maturity ratings.
Abstract:
Mobile applications (apps) can expose children to inappropriate content such as violence, sexual themes, or drug use, making accurate maturity ratings essential. Current methods are often unreliable (e.g., developer self-reports) or costly (e.g., manual review). We propose a framework that leverages multimodal large language models (MLLMs), specifically ChatGPT-4 with chain-of-thought (CoT) reasoning, to determine app maturity levels. By guiding the model through systematic reasoning, our approach enhances consistency, safety, and fairness. Experimental results show that our method outperforms baseline models, highlighting the potential of MLLMs to provide transparent, reliable, and equitable maturity rating systems.
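For readers curious what chain-of-thought prompting of a multimodal model might look like in this setting, the sketch below is a minimal illustration only, not the implementation presented in the talk: the model name (gpt-4o), the three-step prompt wording, the rating scale, and the helper rate_app_maturity are all assumptions added for illustration.

    # Illustrative sketch of CoT-style maturity rating with a multimodal model.
    # Model name, prompt wording, and rating scale are assumptions, not the authors' setup.
    import base64
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    def rate_app_maturity(description: str, screenshot_path: str) -> str:
        """Ask a multimodal model to reason step by step before assigning a rating."""
        with open(screenshot_path, "rb") as f:
            screenshot_b64 = base64.b64encode(f.read()).decode("utf-8")

        prompt = (
            "You are reviewing a mobile app for content maturity.\n"
            "Step 1: List any violence, sexual themes, or drug references in the "
            "description or screenshot.\n"
            "Step 2: Explain how severe and how central each element is.\n"
            "Step 3: Conclude with a single rating: Everyone, Teen, or Mature.\n\n"
            f"App description:\n{description}"
        )

        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder for whichever multimodal model is used
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{screenshot_b64}"}},
                ],
            }],
        )
        return response.choices[0].message.content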