Former CIS student Anthony Edwards Jr. (a 2014 graduate of PCS-LC) was featured in the Thrillist article "How This Restaurant Directory App Is Helping Black-Owned Restaurants Around the Country."
Sofia Ongele, a Fordham University student (class of 2022) and Swift Student Challenge winner, describes on "Bloomberg Technology" her inspiration for creating ReDawn, an app for women who have experienced sexual assault.
Combining Evidence for Cross-language Information Retrieval
By: Petra Galuscakova
Time: Tuesday, March 31, 10 - 11 a.m. Eastern Time
Abstract: System combination has been extensively studied in monolingual information retrieval, but the problem is understudied in cross-language retrieval in which queries are expressed in one language, but documents are written in another. One notable characteristic of cross-language retrieval, however, is the potential for a greater diversity of system design, since translation and retrieval components both exhibit substantial design spaces. Due to the large diversity of the systems in cross-language retrieval, the potential range of combinations is orders of magnitude larger than in monolingual applications.
I show that evidence combination works well in cross-language retrieval, achieving improvements of 40% relative to the best single system. The best results are obtained using post-retrieval evidence combination, which can incorporate many diverse high-quality systems. Because hundreds of different systems can be built, the effectiveness of alternative approaches for managing this complexity is also explored. Both system clustering and expert judgment regarding diversity can help limit the combinatorial growth in time complexity that arises when selections must be made among large numbers of systems.
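As a concrete illustration of post-retrieval evidence combination, the sketch below merges ranked lists from several hypothetical retrieval systems using reciprocal rank fusion; the abstract does not specify the fusion method actually used, so RRF stands in here as one standard choice.

```python
# Post-retrieval evidence combination via reciprocal rank fusion (RRF).
# Illustrative only: the systems and document IDs are hypothetical.

def rrf_combine(ranked_lists, k=60):
    """Merge several ranked document lists into one fused ranking.

    ranked_lists: list of rankings (doc IDs, best first), one per system.
    k: smoothing constant that damps the influence of top ranks.
    """
    scores = {}
    for ranking in ranked_lists:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Three hypothetical CLIR systems retrieving for the same query:
fused = rrf_combine([
    ["d1", "d2", "d3"],
    ["d2", "d1", "d4"],
    ["d2", "d3", "d1"],
])
print(fused[0])  # d2: ranked first by most systems
```

Because RRF uses only ranks, it can combine systems whose raw scores are not comparable, which matters when the underlying translation and retrieval pipelines differ widely.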
Bio: Petra Galuscakova is a postdoctoral researcher at the University of Maryland, College Park working with Prof. Doug Oard. She is broadly interested in information and multimedia retrieval and she is presently working on cross-language information retrieval in low resource languages. She completed her Ph.D. in computational linguistics at Charles University in Prague. Her prior work has investigated methods for effective search and navigation in multimedia archives.
Interpretability by Design: New Interpretable Machine Learning Models and Methods
By: Chaofan Chen
Time: Monday, March 30, 1 - 2 p.m. Eastern Time
Abstract: As machine-learning models play increasingly important roles in many real-life scenarios, interpretability has become a key issue in whether we can trust the predictions these models make, especially when making high-stakes decisions. Lack of transparency has long been a concern for predictive models in criminal justice and in health care. There have been growing calls for building interpretable, human-understandable machine-learning models, and “opening the black box” has become a debated issue in the media. Chaofan Chen's research addresses precisely this demand for interpretability and transparency in machine-learning models. The key question driving his research is: “Can we build machine learning models that are both accurate and interpretable?”
To address this question, Chen will discuss the notion of interpretability as it relates to machine learning and present several new interpretable machine-learning models and methods he developed in his research. He will first give an overview of his research by discussing two types of model interpretability, predicate-based and case-based, and highlighting the contributions he has made. In the remaining part of the talk, he will focus on case-based interpretability for computer vision. More specifically, he will present his work in developing deep neural networks that can reason about images by saying “this looks like that,” much as humans explain to one another how to solve challenging image classification tasks. These networks learn a meaningful latent embedding space that captures a notion of visual similarity, along with a set of prototypical cases for comparison. Given a new image, they identify similar prototypical cases using distances in the latent space and make predictions according to the known class labels of those prototypical cases. Experiments on MNIST (handwritten digit recognition) and CUB-200-2011 (bird species identification) show that these case-based interpretable networks achieve accuracy comparable to their non-interpretable counterparts while providing a level of interpretability absent in attention-based interpretable deep models. Indeed, as Chen's work demonstrates, we can build machine learning models that are both accurate and interpretable by designing novel model architectures or regularization techniques.
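The case-based prediction step can be sketched in a few lines: find the prototype nearest to a new image's latent embedding and predict from its known class label. The prototypes and embedding below are made up for illustration; the actual networks learn both end to end.

```python
# "This looks like that" prediction, sketched with hypothetical 2-D
# latent vectors. Real prototype networks learn the embedding and the
# prototypes jointly during training.
import math

def predict_by_prototypes(z, prototypes):
    """z: latent vector of a new image.
    prototypes: list of (latent_vector, class_label) pairs."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # The nearest prototype in latent space determines the prediction.
    best = min(prototypes, key=lambda p: dist(z, p[0]))
    return best[1]

prototypes = [
    ([0.0, 1.0], "cardinal"),   # prototypical red-bird patch
    ([1.0, 0.0], "blue jay"),   # prototypical blue-bird patch
]
print(predict_by_prototypes([0.1, 0.9], prototypes))  # cardinal
```

The interpretability comes from the fact that each prediction can be explained by pointing at the concrete prototype it resembles, rather than at an opaque weight vector.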
Bio: Chaofan Chen attended the University of Chicago and graduated with a Bachelor of Science in Mathematics (with honors). He began doctoral studies in computer science at Duke University in 2014. He was awarded the Outstanding Ph.D. Preliminary Exam Award in 2018 and the Outstanding Research Initiation Project Award in 2017 by the Department of Computer Science at Duke University. He pursued his research in the area of interpretable machine learning under the direction of Professor Cynthia Rudin.
Dr. Thaier Hayajneh is inviting faculty and students to the following scheduled Zoom presentation.
Studying Security Tasks Through the Lens of Brain-Computer Interface
By: Muhammad Lutfor Rahman
Time: Wednesday, March 25, 10 a.m. Eastern Time (US and Canada)
Abstract: Traditional security research focuses on securing the hardware and software stack of cyber systems, with less focus on how humans weaken those systems. All kinds of preventive mechanisms from traditional research may be in vain if a user falls for a phishing attack. Attackers usually target the weakest link of the security chain, and humans are considered one of the weakest links. We have observed a surge of phishing and social engineering attacks in the past few years, and many large corporations have been penetrated through targeted/spear-phishing attacks. Hence, I explore cybersecurity through an unconventional approach: studying human brains and unfolding some of their mysteries. In this talk, I will present my recent projects on utilizing Brain-Computer Interface technology to enhance the security of web browsing and the privacy of personal devices. For the phishing study, we explore the feasibility of utilizing neural activities for automated phishing detection. With improved data preprocessing techniques and feature extraction methods, we show that it is possible to utilize differences at the neural activity level to detect phishing websites. For the access control study, we explore the feasibility of utilizing neural activities to infer the user's high-level intents while using an app. The inferred intent can then be utilized to automate authorization of access to privacy-sensitive sensors and files.
Bio: Muhammad Lutfor Rahman is a Ph.D. candidate and Associate Instructor in Computer Science and Engineering at the University of California, Riverside, where he works with Dr. Chengyu Song. His research interests lie at the intersection of cybersecurity, human factors in security, and Brain-Computer Interface (BCI). His work has been published at top-tier conferences, including CCS, ECML PKDD, and ACSAC, and has received significant media coverage from more than 600 high-profile outlets (e.g., Phys, MIT Technology Review, ZDNet) in more than twenty languages worldwide. He worked as a visiting researcher with the US Army Research Laboratory in the summers of 2018 and 2019, and spent more than five years as a software engineer at multiple companies. He gained teaching experience over four quarters as a primary instructor and four semesters as a teaching assistant. He received his MS from the University of Alabama at Birmingham in 2014 and his BS from Bangladesh University of Engineering and Technology in 2009. As an outreach activity, he has led a non-profit, Education Foundation (efcharity.org), supporting underprivileged rural students in Bangladesh since 2010; more than 20,000 students from 71 rural schools have been reached through the foundation.
Intellectual Property Security: Challenges and New Frontiers
By: Sheikh Ariful Islam
Time: Tuesday, March 24 at 10 a.m. Eastern Time (US and Canada)
Abstract: The emergence of billions of smart, connected, and deeply embedded devices has led to the Cyber-Physical System (CPS). The complexity of CPS has opened new opportunities for malicious attacks. The current globalized and decentralized Integrated Circuit (IC) business model faces several security challenges, including overbuilding, counterfeiting, piracy, and hardware trojans. Existing protection mechanisms against Intellectual Property theft face several challenges in terms of performance overhead and are vulnerable to side-channel attacks. In this talk, we will first discuss the major security issues of hardware protection. We will then describe the role of High-Level Synthesis (HLS) in hardware obfuscation early in the design cycle. Further, we will discuss DLockout, which locks out the design after a finite number of incorrect key trials, from which recovery is possible only by legal authorities. We will then present camouflaging to increase the difficulty of reverse engineering, followed by the resistance of the proposed techniques against side-channel attacks. The novelty of the proposed approaches is that no key bits need to be stored in memory. The HLS-based Register Transfer Level (RTL) obfuscation technique, which is application-agnostic, incurs area, delay, and power overheads of 2.45%, 2.65%, and 2.61%, respectively, for a 32-bit key.
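The lockout behavior described for DLockout can be illustrated with a minimal sketch; the counter logic below is a software stand-in for the actual hardware design, and all names are illustrative.

```python
# Software sketch of the DLockout idea: permit only a finite number of
# incorrect key trials, then lock out until an authorized reset.
# Not the actual hardware implementation.

class DLockout:
    def __init__(self, correct_key, max_trials=3):
        self._key = correct_key
        self._remaining = max_trials
        self.locked = False

    def try_key(self, key):
        if self.locked:
            return False            # no functionality once locked
        if key == self._key:
            return True             # design unlocked normally
        self._remaining -= 1
        if self._remaining == 0:
            self.locked = True      # recovery only via authorized reset
        return False

    def authorized_reset(self, max_trials=3):
        """Stand-in for recovery by a legal authority."""
        self._remaining = max_trials
        self.locked = False

d = DLockout(correct_key=0xCAFE, max_trials=3)
for bad_key in (1, 2, 3):
    d.try_key(bad_key)
print(d.locked)           # True: three incorrect trials lock the design
print(d.try_key(0xCAFE))  # False: even the correct key fails once locked
```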
Bio: Sheikh Ariful Islam received his B.Sc. degree in Electronics and Communication Engineering from Khulna University of Engineering and Technology, Bangladesh, in 2011. From 2011 to 2013, he served as full-time faculty at Northern University Bangladesh. He is currently working toward a Ph.D. degree in Computer Science and Engineering at the University of South Florida, Tampa, FL. His research focuses on the development of security-driven hardware synthesis tools and cyber-physical systems. He completed an internship at ON Semiconductor, Idaho, in Fall 2018. Arif received a best paper nomination at the 2018 AsianHOST Conference and is a recipient of the 2016 DAC Richard Newton Young Fellow award. He has served on the Technical Program Committee of IEEE Dependable and Secure Computing and as a reviewer for several IEEE and ACM publications. He has published nine conference proceedings and journal articles, and six further works are currently under review.
Large-Scale and Robust Software Authorship Identification with Deep Feature Learning
By: Mohammed Abuhamad
Time: Monday, March 23, 2020, 9:10 a.m. Eastern Time (US and Canada)
Abstract: Software authorship identification associates a programmer with a given piece of code based on the programmer's distinctive stylometric features. The software may be presented as original source code or as executable binaries, which can be decompiled to generate pseudo-code as a higher-level reconstruction of the binary instructions. Successful software authorship de-anonymization has both software forensics applications and privacy implications. However, the process requires efficient extraction of authorship attributes. Extracting such attributes is very challenging, owing to the variety of software code formats, from executable binaries with different toolchain provenance to source code in different programming languages. Moreover, the quality of attributes is bounded by the availability of software samples, both in the number of samples per author and in the size of each sample. To this end, this work proposes a deep learning-based approach to software authorship attribution that enables large-scale, format-independent, language-oblivious, and obfuscation-resilient software authorship identification. The proposed approach learns deep authorship attributions using a recurrent neural network and de-anonymizes programmers with an ensemble random forest classifier. Comprehensive experiments evaluate the proposed approach over the entire Google Code Jam (GCJ) dataset across all years (2008 to 2016) and over real-world code samples from 1,987 public repositories on GitHub. The results show high accuracy despite requiring a smaller number of samples per author.
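The shape of the pipeline (extract stylometric features from code, then classify by author) can be sketched with simple stand-ins: a character-trigram profile in place of the learned RNN representation, and nearest-profile matching in place of the random forest ensemble. Both substitutions are drastic simplifications of the proposed approach.

```python
# Toy authorship attribution: trigram profiles stand in for learned
# deep features; nearest-profile matching stands in for the ensemble
# random forest. Code samples and author names are invented.
from collections import Counter

def trigram_profile(code):
    """Crude stylometric feature vector: character-trigram counts."""
    return Counter(code[i:i + 3] for i in range(len(code) - 2))

def similarity(p, q):
    """Overlap between two profiles (sum of shared trigram counts)."""
    return sum(min(p[t], q[t]) for t in set(p) & set(q))

def attribute(sample, labeled_samples):
    """labeled_samples: list of (code, author). Returns the author
    whose sample is most similar in style."""
    prof = trigram_profile(sample)
    return max(labeled_samples,
               key=lambda s: similarity(prof, trigram_profile(s[0])))[1]

training = [
    ("for(int i=0;i<n;i++){sum+=a[i];}", "alice"),        # compact style
    ("for (int i = 0; i < n; ++i) { sum += a[i]; }", "bob"),  # spaced style
]
print(attribute("for(int j=0;j<m;j++){acc+=b[j];}", training))  # alice
```

Even this crude feature captures the spacing habits that distinguish the two invented authors; the talk's contribution is learning far richer features that survive compilation and obfuscation.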
Bio: Mohammed Abuhamad is a Ph.D. candidate in the Department of Computer Science at the University of Central Florida (UCF), and a Ph.D. candidate in the Department of Computer Engineering at INHA University. He received his Master's degree in Information Technology (Artificial Intelligence) from the Faculty of Information Science and Technology, National University of Malaysia. Mohammed Abuhamad is a member of the Security Analytics Research Lab at UCF, and the Information Security Research Lab at INHA. His research interests include software security, authentication, privacy, and deep learning-based applications to information security.
Spring 2020 CIS Seminar Series
The Future of Narrative
Speaker: Justus Robertson
Postdoctoral Research Associate, University of York
Date: Thursday, March 5th
Abstract: What do computer games, storytelling, artificial intelligence, and Star Trek: The Next Generation have in common? The answer is: the holodeck, an immersive virtual reality facility capable of shaping dynamic stories around the live actions of human participants. This talk presents the history and future of automated interactive narrative with a focus on the scientific and engineering challenges that stand between us and a narrative controller fit for the holodeck.
Bio: Dr. Justus Robertson received a Ph.D. in Computer Science from North Carolina State University where he studied video games, symbolic planning, cognitive psychology, and their applications to interactive storytelling. He is currently a postdoctoral research associate in the Department of Theatre, Film, Television and Interactive Media at the University of York where he is researching data-driven storytelling in applied real-world domains, like eSports.
Neuroimaging for Mental Disorders Research
Speaker: Xiaofu He, Assistant Professor, Columbia University Medical Center
Date: Wednesday, March 4
Location: JMH 302
Abstract: Mental disorders are common throughout the United States, affecting an estimated nearly one in five U.S. adults. They are the leading cause of disability in the U.S. and greatly affect Americans' lives. Although the latest neuroimaging techniques can be used to study brain structure and function, mental disorders still cannot be objectively diagnosed by neuroimaging. Moreover, the mechanism behind cause-and-effect relationships between the human brain and behavior in mental disorders is still unknown. In this talk, I will discuss how we can use neuroimaging techniques (focusing on Diffusion Tensor Imaging and fMRI) to identify brain biomarkers for mental disorders, and how we can use real-time fMRI neurofeedback to investigate the cause-and-effect relationships between brain and behavior, which can be applied to psychiatric disorders.
Bio: Dr. Xiaofu He is an Assistant Professor of Clinical Neurobiology at Columbia University Medical Center and a Research Scientist at the New York State Psychiatric Institute. He is also a faculty member at the Data Science Institute, Columbia University. Dr. He has a broad background in computer science, machine learning, neuroscience, and brain imaging. His research interests include developing brain imaging data analysis tools, exploring new diagnosis and prediction methods using machine learning (including deep learning), and investigating potential treatments using real-time fMRI/fNIRS/EEG neurofeedback, which he is currently applying to psychiatric disorders.
Invited Talk in Cybersecurity
By: Mohamed Rahouti
Tuesday, March 3 at JMH 302
Title: A Dynamic Threshold-Based Modular Framework for SYN Flood Attack Detection and Mitigation Using SDN
Abstract: Denial of Service (DoS) attacks, and in particular SYN flood attacks (half-open attacks), have proven a serious threat to Software-Defined Networking (SDN)-enabled environments. A variety of Intrusion Detection and Prevention Systems (IDPS) have been introduced for identifying and preventing such security threats, but they often incur significant performance overhead and response time. In addition to this shortcoming, previously proposed solutions rely on a static detection threshold that must be manually set prior to deployment. Those solutions are therefore inflexible for at-scale networks, where the malicious traffic rate may continuously change over time.
As the centralized control capability of SDN presents a unique opportunity for enhancing Quality of Service (QoS) and security in networks, in this talk, I will present a novel and dynamic threshold-based kernel-level intrusion detection and prevention system to address these challenges through leveraging SDN capabilities and filtering mechanisms. The proposed solution is based on a self-adjusted detection threshold that aligns with the legitimate and malicious traffic rates in order to guarantee an efficient response to threats with optimal delay.
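A self-adjusting detection threshold of the kind described can be sketched as follows; the abstract does not give the actual update rule, so an exponentially weighted moving average (EWMA) baseline stands in here as one common way to track a changing legitimate traffic rate.

```python
# Sketch of a dynamic threshold for SYN flood detection: track an EWMA
# baseline of the half-open connection rate and flag intervals whose
# observed rate far exceeds it. Parameters and rates are illustrative.

def detect_syn_flood(rates, alpha=0.2, factor=3.0):
    """rates: per-interval counts of half-open (SYN_RECV) connections.
    alpha: EWMA smoothing weight; factor: detection multiplier.
    Returns the indices of intervals flagged as attacks."""
    baseline = rates[0]
    flagged = []
    for i, rate in enumerate(rates[1:], start=1):
        if rate > factor * baseline:
            flagged.append(i)   # mitigate, e.g., install an SDN drop rule
        else:
            # Adapt the baseline only to traffic judged legitimate,
            # so an ongoing flood cannot drag the threshold upward.
            baseline = (1 - alpha) * baseline + alpha * rate
    return flagged

print(detect_syn_flood([100, 110, 120, 900, 130]))  # [3]
```

Because the baseline follows the observed legitimate rate, the effective threshold rises and falls with normal traffic instead of being fixed at deployment time, which is the flexibility the static-threshold solutions lack.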
Bio: Mohamed Rahouti received an M.S. degree in Statistics from the University of South Florida in 2016 and is currently pursuing a Ph.D. degree in Electrical Engineering there. He holds numerous academic achievements in the area of computer science and engineering. His current research focuses on computer networking, Software-Defined Networking (SDN), and network security, with applications to smart cities.
The Geometry of Functional Spaces of Neural Networks
Speaker: Matthew Trager, Postdoctoral Researcher, New York University
Date: Wednesday, Feb. 26
Location: LL 601
Abstract: The reasons behind the empirical success of neural networks are not well understood. One important characteristic of modern deep learning architectures, compared with other large-scale parametric learning models, is that the class of functions they identify is not linear but rather has a complex hierarchical structure. Furthermore, neural networks are non-identifiable models, in the sense that different parameters may yield the same function. Both of these aspects come into play significantly when optimizing an empirical risk in classification or regression tasks.
In this talk, I will present some of my recent work that studies the functional space associated with neural networks with linear, polynomial, and ReLU activations, using ideas from algebraic and differential geometry. In particular, I will emphasize the distinction between the intrinsic function space and its parameterization, in order to shed light on the impact of the architecture on the expressivity of a model and the corresponding optimization landscapes.
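Non-identifiability has a very concrete instance for ReLU networks: because relu(c*x) = c*relu(x) for any c > 0, rescaling one layer's weights by c and the next layer's by 1/c leaves the computed function unchanged. A toy one-hidden-unit check:

```python
# Two different parameter settings of a 1-hidden-unit ReLU network
# that compute exactly the same function (positive homogeneity of ReLU).

def relu(x):
    return max(0.0, x)

def net(x, w1, w2):
    """Scalar network: x -> w2 * relu(w1 * x)."""
    return w2 * relu(w1 * x)

c = 5.0
for x in (-2.0, 0.3, 1.7):
    # (w1, w2) = (2, 3) and (2c, 3/c) are distinct parameters,
    # yet they define the same input-output map.
    assert abs(net(x, 2.0, 3.0) - net(x, 2.0 * c, 3.0 / c)) < 1e-12
print("same function, different parameters")
```

This is exactly the parameterization/function-space distinction the talk emphasizes: the optimization landscape lives in parameter space, but the model's expressivity is a property of the function space.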
Bio: Matthew Trager is a post-doc at the Center for Data Science at New York University. He has a Master’s degree in mathematics from the University of Pisa and Scuola Normale Superiore, and a "Master 2" degree in Mathematics, Machine Learning and Computer Vision from École Normale Supérieure de Cachan. He completed his PhD in computer science at École Normale Supérieure of Paris, under the supervision of Jean Ponce and Martial Hebert. During his PhD, he worked on the geometry of vision. He is now interested in mathematical aspects of deep learning.
Understanding Security Threats with Data-Driven and Human-Centered Approaches
By Doowon Kim
Tuesday, February 25 at JMH 302
Abstract: Recent cyberattacks involve various actors, including diverse adversaries, each playing a subtle but prominent role. It is essential to understand these real-world actors from multiple perspectives in order to mitigate security threats and protect end-users. In this talk, I will present fundamental findings from several measurement and user studies exploring the behaviors of adversaries, as well as of benign software developers, that cause security incidents. First, I will discuss the malicious actors, the adversaries: in particular, how they abuse the Code-Signing Public Key Infrastructure (PKI) by exploiting weaknesses in the other actors (i.e., certificate authorities, publishers, and end-users). Second, I will describe why benign software developers often fail at secure development and present blueprints for improvement. Finally, I will conclude by discussing my future research directions in understanding new security threats and actors arising from emerging technologies (e.g., IoT).
Bio: Doowon Kim is a Ph.D. candidate in the Department of Computer Science at the University of Maryland, College Park. His research focuses on data-driven security and usable security. Specifically, he investigates the root causes of security threats by better understanding actors (e.g., adversary and end-users) involved, with data-driven and human-centered perspectives. Moreover, his work covers the Code-Signing PKI, the Web PKI, and the security behaviors of benign software developers. His research has resulted in a real-world impact on the Code-Signing PKI and has generated interest from media such as Ars Technica, The Register, Schneier on Security, and Threatpost. He is a recipient of the NSA Best Scientific Cybersecurity Paper Award and Ann G. Wylie Dissertation Fellowship.
Built-in Security and Resilience for Assured Autonomy: A Unified Game, Decision, and AI Approach
By Juntao Chen
Thursday, Feb. 20 at JMH 342
Abstract: Mobile autonomous systems (MAS) are increasingly important due to their wide application in mission-critical tasks such as surveillance, search, and rescue. Enabled by Internet of Things (IoT) devices, multiple heterogeneous MAS can also be integrated as a multi-layer MAS network to offer holistic services. On one hand, the networked MAS can improve interoperability between different systems. On the other hand, it creates new challenges for enhancing the real-time security and resiliency of autonomy at different scales against cyber-physical attacks. To achieve assured autonomy, I will first establish a universal meta-network model that offers a gestalt view of heterogeneous autonomous components and allows us to analyze the performance of the global MAS. Then, I will discuss meta-game-theoretic approaches that enable decentralized and interdependent decision making between different operators under adversarial attacks. I will also present AI-enabled algorithms for the online implementation of policies that yield a high level of autonomy in a dynamic environment. In the second part of the talk, I will briefly discuss how to design strategic trust mechanisms for achieving assured cloud-enabled autonomy. Finally, I will elaborate on a number of future research directions on AI and learning for human-centered cyber-physical security.
Bio: Juntao Chen is a final-year Ph.D. candidate in the Department of Electrical and Computer Engineering at the NYU Tandon School of Engineering. He received the B.Eng. degree in Electrical Engineering and Automation from Central South University, China, in 2014. He has published more than 25 research papers and a book. He is a recipient of the Ernst Weber Ph.D. Fellowship and the Dante Youla Award for Graduate Research Excellence from NYU. He is a research associate of the Laboratory for Agile and Resilient Complex Systems (LARX) and a member of the Center for Cybersecurity (CCS) at NYU. His research interests include cyber-physical systems, security and resilience, game and control theory, and artificial intelligence.
Program Analysis and Testing for Reliable Android and Wear Apps
By: Hailong Zhang, Ph.D. candidate, Ohio State University
Abstract: Due to the widespread use of Android devices and apps, it is important to develop tools and techniques to improve app quality and performance. However, traditional program analyses cannot be used directly for Android apps because of their unique characteristics.
In this talk, I will discuss program analyses specific to control-flow modeling and testing of software for regular Android and Android Wear. I will introduce effective hybrid techniques combining static control/data-flow analysis, automated test generation, and runtime monitoring for the detection of resource leaks in apps.
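The runtime-monitoring side of leak detection can be sketched as bookkeeping over acquire/release events: anything still held when the app's lifecycle ends is a leak candidate. The class and names below are illustrative stand-ins, not the actual tool's API.

```python
# Sketch of runtime resource-leak monitoring: record acquisitions and
# releases, then report resources still held at lifecycle teardown.
# Resource IDs and call sites are invented examples.

class LeakMonitor:
    def __init__(self):
        self._held = {}   # resource id -> acquisition call site

    def acquired(self, res_id, site):
        self._held[res_id] = site        # e.g., Camera.open() call site

    def released(self, res_id):
        self._held.pop(res_id, None)     # e.g., Camera.release()

    def report_on_destroy(self):
        """Resources still held at Activity.onDestroy() are likely leaks."""
        return sorted(self._held.values())

m = LeakMonitor()
m.acquired("camera0", "MainActivity.onResume:42")
m.acquired("wakelock1", "PlayerService.onStart:17")
m.released("camera0")
print(m.report_on_destroy())  # ['PlayerService.onStart:17']
```

In the hybrid techniques described, static control/data-flow analysis and automated test generation would decide where to instrument and how to drive the app so that such unmatched acquisitions are actually exercised.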
Bio: Hailong Zhang is a Ph.D. candidate in the Department of Computer Science and Engineering at the Ohio State University, where he works with Prof. Atanas Rountev. Before that, he graduated from the Beijing University of Posts and Telecommunications with Master's and Bachelor's degrees. His research interests revolve around problems related to software reliability, security, and privacy. His current focus is on foundational program analysis and testing of apps for mobile and wearable devices, and on privacy-preserving software analysis and analytics. Refreshments and coffee will be provided.