Vinitra Swamy

PhD Candidate at EPFL · Deep Learning Research · vinitra@berkeley.edu

Hello! I am an AI researcher and PhD student working on deep learning model explainability at École Polytechnique Fédérale de Lausanne (EPFL). I'm co-advised by Prof. Tanja Käser at the ML4ED Lab and Prof. Martin Jaggi at the MLO Lab.

Before moving to Switzerland, I worked for two years at Microsoft AI as a lead engineer for the Open Neural Network eXchange project.

My claim to fame (haha) is that I graduated at 20 as the youngest M.S. in Computer Science recipient in UC Berkeley's history. Since then, I've served as a machine learning lecturer for the Berkeley Division of Data Sciences and the University of Washington CSE Department.

I love people, data, and working on exciting problems at the intersection of the two:

  • explainable and interpretable AI
  • generalized learning (transfer learning, multimodal learning)
  • ML for education (autograding, knowledge tracing, scalable infrastructure)

Thank you for taking time out of your day to find out what I do with mine!

Selected Research

MultiModN (NeurIPS 2023)

Vinitra Swamy*, Malika Satayeva*, Jibril Frej, Thierry Bossy, Thijs Vogels, Martin Jaggi, Tanja Käser*, Mary-Anne Hartley*

We present MultiModN, a multimodal, modular network that fuses latent representations in a sequence of any number, combination, or type of modality while providing granular real-time predictive feedback on any number or combination of predictive tasks. MultiModN's composable pipeline is interpretable-by-design, as well as innately multi-task and robust to the fundamental issue of biased missingness. We perform four experiments on several benchmark MM datasets across 10 real-world tasks (predicting medical diagnoses, academic performance, and weather), and show that MultiModN's sequential MM fusion does not compromise performance compared with a baseline of parallel fusion.
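As a rough illustration of the sequential fusion idea, here is a minimal PyTorch sketch (not the actual MultiModN implementation; module names and dimensions are invented for the example):

    import torch
    import torch.nn as nn

    class ModalityEncoder(nn.Module):
        """Folds one modality's input into a shared latent state."""
        def __init__(self, input_dim, state_dim):
            super().__init__()
            self.update = nn.Linear(input_dim + state_dim, state_dim)

        def forward(self, state, x):
            return torch.tanh(self.update(torch.cat([state, x], dim=-1)))

    class TaskDecoder(nn.Module):
        """Reads the current state to emit a prediction for one task."""
        def __init__(self, state_dim, num_classes):
            super().__init__()
            self.head = nn.Linear(state_dim, num_classes)

        def forward(self, state):
            return self.head(state)

    # Sequential fusion: encoders fire one at a time, missing modalities
    # are simply skipped, and any decoder can read the state at any step.
    state = torch.zeros(1, 32)
    encoders = [ModalityEncoder(16, 32), ModalityEncoder(8, 32)]
    decoder = TaskDecoder(32, 2)
    inputs = [torch.randn(1, 16), None]  # second modality is missing
    for encoder, x in zip(encoders, inputs):
        if x is not None:
            state = encoder(state, x)
        print(decoder(state))  # granular predictive feedback at every step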

[Paper + Video] [Pre-Print] [Code] [Poster]

2023

Unraveling Downstream Bias from LLMs (EMNLP Findings 2023)

Thiemo Wambsganss*, Xiaotian Su*, Vinitra Swamy, Seyed Parsa Neshaei, Roman Rietsche, Tanja Käser

We investigate how bias transfers through an AI writing support pipeline via a large-scale user study with 231 students writing business case peer reviews in German. Students are divided into five groups with different levels of writing support: traditional ML suggestions, a control group with no assistance, and fine-tuned versions of GPT-2, GPT-3, and GPT-3.5. Using GenBit, WEAT, and SEAT, we evaluate gender bias at various stages of the pipeline: in model embeddings, in suggestions generated by the models, and in reviews written by students. Our results demonstrate that there is no significant difference in gender bias between the resulting peer reviews of groups with and without LLM suggestions. Our research is therefore optimistic about the use of AI writing support in the classroom, showcasing a context where bias in LLMs does not transfer to students' responses.
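For a flavor of how a WEAT-style test quantifies embedding bias, here is a self-contained toy sketch (random vectors stand in for real word embeddings; this is the standard WEAT effect size for intuition, not the study's exact evaluation code):

    import numpy as np

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    def association(w, A, B):
        # how much closer w sits to attribute set A than to attribute set B
        return (np.mean([cosine(w, a) for a in A])
                - np.mean([cosine(w, b) for b in B]))

    def weat_effect_size(X, Y, A, B):
        # Cohen's-d-style effect size over the two target word sets
        sx = [association(x, A, B) for x in X]
        sy = [association(y, A, B) for y in Y]
        return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

    rng = np.random.default_rng(0)
    X, Y, A, B = (rng.normal(size=(8, 50)) for _ in range(4))  # toy embeddings
    print(weat_effect_size(X, Y, A, B))  # near 0 for unbiased embeddings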

[Paper] [Pre-Print] [Code]

2023

Viewpoint: Future of Human-Centric XAI (Submitted to JAIR)

Vinitra Swamy, Jibril Frej, Tanja Käser

Current approaches in human-centric XAI (e.g. predictive tasks in healthcare, education, or personalized ads) tend to rely on a single explainer. This is a concerning trend given the systematic disagreement between explainability methods applied to the same points and the same underlying black-box models. We propose to shift from post-hoc explainability to designing interpretable neural network architectures, moving away from approximation techniques in human-centric and high-impact applications. We identify five needs of human-centric XAI (real-time, accurate, actionable, human-interpretable, and consistent) and propose two schemes for interpretable-by-design neural network workflows: adaptive routing for interpretable conditional computation and diagnostic benchmarks for iterative model learning. We postulate that the future of human-centric XAI is neither in explaining black boxes nor in reverting to traditional, interpretable models, but in neural networks that are intrinsically interpretable.
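To give a feel for "adaptive routing for interpretable conditional computation", here is a hypothetical PyTorch sketch (the paper argues for this design direction; this toy architecture is mine, not the paper's):

    import torch
    import torch.nn as nn

    class RoutedNet(nn.Module):
        """Each input is routed through exactly one small expert, so the
        chosen route itself doubles as a human-readable explanation."""
        def __init__(self, in_dim, n_experts, n_classes):
            super().__init__()
            self.router = nn.Linear(in_dim, n_experts)
            self.experts = nn.ModuleList(
                nn.Linear(in_dim, n_classes) for _ in range(n_experts))

        def forward(self, x):
            routes = self.router(x).argmax(dim=-1)  # hard, inspectable choice
            logits = torch.stack(
                [self.experts[int(r)](xi) for r, xi in zip(routes, x)])
            return logits, routes  # the prediction plus the "why"

    net = RoutedNet(in_dim=10, n_experts=3, n_classes=2)
    logits, routes = net(torch.randn(4, 10))
    print(routes)  # which expert (decision pathway) each sample used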

[Pre-Print]

2023

Trusting the Explainers (LAK 2023, Honorable Mention)

Vinitra Swamy, Sijia Du, Mirko Marras, and Tanja Käser

We use human experts to validate explainable AI approaches in the context of student success prediction. Our pairwise analyses cover five course pairs (nine datasets from Coursera, edX, and Courseware) that each differ in one educationally relevant aspect, as well as popular instance-based explainers. We quantitatively compare the distances between the explanations across courses and methods, then validate the explanations of LIME, SHAP, and a counterfactual-based explainer with 26 semi-structured interviews of university-level educators regarding which features they believe contribute most to student success, which explanations they trust most, and how they could transform these insights into actionable course design decisions. Our results show that, quantitatively, explainers significantly disagree with each other about what is important, and, qualitatively, experts themselves do not agree on which explanations are most trustworthy.

[Paper] [Pre-Print] [Code]

2023

Ripple: Concept-Based Interpretation for Raw Time Series (AAAI 2023)

Mohammad Asadi, Vinitra Swamy, Jibril Frej, Julien Vignoud, Mirko Marras, Tanja Käser

We present Ripple, which uses irregular multivariate time series modeling with graph neural networks to achieve comparable or better accuracy on raw time series clickstreams than on hand-crafted features. Furthermore, we extend concept activation vectors for interpretability in raw time series models. Our experimental analysis on 23 MOOCs, with millions of combined interactions over six behavioral dimensions, shows that models designed with our approach can (i) beat state-of-the-art time series baselines with no feature extraction and (ii) provide interpretable insights for personalized interventions.
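The concept activation vector idea, in miniature (a hedged sketch with synthetic activations; the variable names and toy "concept" are invented, and the real method operates on the trained time-series model's hidden states and gradients):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # hidden activations for examples with / without a behavioral concept
    concept_acts = rng.normal(loc=1.0, size=(50, 16))
    random_acts = rng.normal(loc=0.0, size=(50, 16))
    X = np.vstack([concept_acts, random_acts])
    y = np.array([1] * 50 + [0] * 50)

    # The CAV is the normal of a linear boundary in activation space.
    clf = LogisticRegression().fit(X, y)
    cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

    # Concept sensitivity: sign of the directional derivative of the model
    # output along the CAV (per-example gradients faked here with noise).
    grads = rng.normal(size=(10, 16))
    print("fraction of concept-sensitive examples:", np.mean(grads @ cav > 0))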

[Paper] [Pre-Print] [Slides] [Code]

2023

Evaluating the Explainers (EDM 2022)

Vinitra Swamy, Bahar Radmehr, Natasa Krco, Mirko Marras, and Tanja Käser

We compare five explainers for black-box neural nets (LIME, PermutationSHAP, KernelSHAP, DiCE, CEM) on the downstream task of student performance prediction for five massive open online courses. Our experiments demonstrate that the families of explainers do not agree with each other on feature importance for the same Bidirectional LSTM models with the same representative set of students. We use Principal Component Analysis, Jensen-Shannon distance, and Spearman's rank-order correlation to quantitatively cross-examine explanations across methods and courses. Our results come to the concerning conclusion that the choice of explainer contains systematic bias and is in fact paramount to the interpretation of the predictive results, even more so than the data the model is trained on.
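The kind of cross-examination used here is easy to reproduce in miniature (a sketch with made-up importance scores; the actual study compares explanations for real students, models, and courses):

    import numpy as np
    from scipy.spatial.distance import jensenshannon
    from scipy.stats import spearmanr

    # toy feature-importance scores from two explainers on the same model
    lime_scores = np.array([0.40, 0.25, 0.20, 0.10, 0.05])
    shap_scores = np.array([0.10, 0.35, 0.05, 0.30, 0.20])

    # Jensen-Shannon distance between normalized importance distributions
    p = lime_scores / lime_scores.sum()
    q = shap_scores / shap_scores.sum()
    print("JS distance:", jensenshannon(p, q))

    # Spearman's rank-order correlation between the feature rankings
    rho, pval = spearmanr(lime_scores, shap_scores)
    print("Spearman rho:", rho)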

[Paper] [Pre-Print] [Slides] [Code]

2022

Meta Transfer Learning (ACM L@S 2022)

Vinitra Swamy, Mirko Marras, and Tanja Käser

We tackle the problem of transferability across MOOCs from different domains and topics, focusing on models for early success prediction. In this paper, we present and analyze three novel strategies for creating generalizable models: 1) pre-training a model on a large set of diverse courses, 2) leveraging the pre-trained model by including meta features about courses to orient downstream tasks, and 3) fine-tuning the meta transfer learning model on previous course iterations. Our experiments on 26 MOOCs, with over 145,000 combined enrollments and millions of interactions, show that models combining interaction clickstreams and meta information have comparable or better performance than models with access to previous iterations of the course. With these models, we enable educators to warm-start their predictions for new and ongoing courses.
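A compressed sketch of the fine-tuning strategy (hypothetical PyTorch code; the layer sizes and the choice to freeze the encoder are illustrative, not the paper's exact setup):

    import torch
    import torch.nn as nn

    class EarlyPredictor(nn.Module):
        """Clickstream encoder plus course meta features -> success logit."""
        def __init__(self, feat_dim, meta_dim, hidden=32):
            super().__init__()
            self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden + meta_dim, 1)

        def forward(self, clicks, meta):
            _, (h, _) = self.encoder(clicks)
            return self.head(torch.cat([h[-1], meta], dim=-1))

    model = EarlyPredictor(feat_dim=8, meta_dim=4)
    # 1) pre-train on a large, diverse pool of courses, then
    # 2) fine-tune on the target course with the encoder frozen:
    for p in model.encoder.parameters():
        p.requires_grad = False
    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-3)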

[Paper] [Pre-Print] [Slides] [Code]

2022

Interpreting LMs Through KG Extraction (NeurIPS 2021)

Vinitra Swamy, Angelika Romanou, and Martin Jaggi

While transformer-based language models are undeniably useful, it is a challenge to quantify their performance beyond traditional accuracy metrics. In this paper, we compare BERT-based language models (DistilBERT, BERT, RoBERTa) through snapshots of acquired knowledge at sequential stages of the training process. We contribute a quantitative framework to compare language models through knowledge graph extraction and showcase a part-of-speech analysis to identify the linguistic strengths of each model variant. Using these metrics, machine learning practitioners can compare models, diagnose their models' behavioral strengths and weaknesses, and identify new targeted datasets to improve model performance.
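In spirit, the extraction step probes the masked LM for relational facts along these lines (a minimal sketch using the Hugging Face fill-mask pipeline; the prompt and relation name are illustrative, not the paper's extraction pipeline):

    from transformers import pipeline

    # Probe a masked LM: the filled token becomes the object of a
    # (subject, relation, object) knowledge-graph triple.
    fill = pipeline("fill-mask", model="distilbert-base-uncased")
    for pred in fill("Paris is the capital of [MASK].", top_k=3):
        print(("Paris", "capital_of", pred["token_str"]), pred["score"])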

Published at eXplainable AI for Debugging and Diagnosis Workshop at NeurIPS 2021.

[Paper] [Poster] [Code]

2021

ONNX: Open Neural Network eXchange (Microsoft AI)

Open Neural Network Exchange (ONNX) is an open standard for machine learning interoperability. Founded by Microsoft and Facebook, and now supported by over 30 other companies, ONNX defines a common set of operators - the building blocks of machine learning and deep learning models - and a common file format, enabling AI developers to use models across a variety of frameworks, tools, runtimes, and compilers. I gave several research talks on model operationalization and acceleration with ONNX and ONNX Runtime at Microsoft MLADS (the Machine Learning, AI, and Data Science Conference) and the UW eScience Institute.
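A minimal end-to-end example of what that interoperability buys you, using the public torch.onnx and onnxruntime APIs (the tiny linear model is a placeholder):

    import torch
    import onnxruntime as ort

    # Export a (placeholder) PyTorch model to the common ONNX format...
    model = torch.nn.Linear(4, 2).eval()
    dummy = torch.randn(1, 4)
    torch.onnx.export(model, dummy, "model.onnx",
                      input_names=["input"], output_names=["output"])

    # ...then run the same model with a different engine, ONNX Runtime.
    session = ort.InferenceSession("model.onnx",
                                   providers=["CPUExecutionProvider"])
    outputs = session.run(None, {"input": dummy.numpy()})
    print(outputs[0])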

[ONNX Model Zoo] [ONNX + Azure ML Tutorials] [MLADS Notebooks] [MLADS Slides] [UW Slides]

2020

ML for Humanitarian Data: Tag Prediction using the HXL Standard (KDD 2019)

Vinitra Swamy (Microsoft AI), Elisa Chen, Anish Vankayalapati, Abhay Aggarwal, Chloe Liu (UC Berkeley), Vani Mandava (MSR), Simon Johnson (UN)

We present a simple yet effective machine learning model that predicts tags for datasets from the United Nations Office for the Coordination of Humanitarian Affairs (UN OCHA) using the labels and attributes of the Humanitarian Exchange Language (HXL) Standard for data interoperability. This paper details the methodology used to predict the corresponding tags and attributes for a given dataset, reaching 94% accuracy for HXL header tags and 92% for descriptive attributes. Compared to previous work, our workflow provides a 14% accuracy increase and is a novel case study of using ML to enhance humanitarian data.
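A toy version of the tag-prediction task (an illustrative scikit-learn sketch; the headers, tags, and model choice here are invented for the example, while the real system was trained on UN OCHA's datasets):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # toy column headers and their HXL hashtags
    headers = ["admin region name", "number of affected people",
               "latitude", "organisation name"]
    tags = ["#adm1", "#affected", "#geo+lat", "#org"]

    clf = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
        LogisticRegression(max_iter=1000))
    clf.fit(headers, tags)
    print(clf.predict(["count of affected persons"]))  # likely "#affected"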

[Paper] [Slides] [Poster] [Code]

2019

Pedagogy, Infrastructure, and Analytics for Data Science Education at Scale (MSc Thesis)

Vinitra Swamy, David Culler

A detailed research report on autograding, analytics, and scaling JupyterHub infrastructure, highlighted in use by the thousands of students taking Data 8 at UC Berkeley. I presented this thesis as a graduate student affiliated with RISELab, after helping develop UC Berkeley data science's software infrastructure stack, including JupyterHub, autograding with OkPy, Gradescope, and authentication for thousands of students. I also collaborated with Yuvi Panda, Ryan Lovett, Chris Holdgraf, and Gunjan Baid on a JupyterCon 2017 talk detailing the infrastructure stack.

[Thesis] [Blog] [Code] [JupyterCon Slides] [JupyterCon Speaker Profile]

2018

Deep Knowledge Tracing for Student Code Progression (AIED 2018)

Vinitra Swamy, Samuel Lau, Allen Guo, Madeline Wu, Wilton Wu, Zachary Pardos, David Culler

Knowledge Tracing is a body of learning science literature that seeks to model student knowledge acquisition through students' interactions with coursework. This paper uses a recurrent neural network (LSTM) and free-form code attempts to model student knowledge in large-scale computer science classes.
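The core model in one screen (a standard deep-knowledge-tracing sketch in PyTorch; the dimensions are arbitrary and this is not the paper's exact code):

    import torch
    import torch.nn as nn

    class DKT(nn.Module):
        """LSTM over (question, correctness) encodings that predicts the
        probability a student answers each question correctly next."""
        def __init__(self, n_questions, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(2 * n_questions, hidden, batch_first=True)
            self.out = nn.Linear(hidden, n_questions)

        def forward(self, x):  # x: (batch, time, 2 * n_questions)
            h, _ = self.lstm(x)
            return torch.sigmoid(self.out(h))  # per-step mastery estimates

    model = DKT(n_questions=10)
    attempts = torch.zeros(1, 5, 20)  # one student, five one-hot attempts
    print(model(attempts).shape)      # (1, 5, 10) predicted probabilities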

[Paper] [Poster]

2018

Microsoft AI

AI Software Engineer

Worked on ONNX, an open standard for deep learning and ML framework interoperability, alongside an ecosystem of converters, containers, and inference engines.

Led the cross-company ONNX Special Interest Group (SIG) for the Model Zoo and Tutorials, collaborating with Microsoft, Intel, Facebook, IBM, NVIDIA, Red Hat, and other academic and industry partners.

Represented Microsoft AI at several conferences: WIDS 2020, Microsoft //Build 2019, KDD 2019, Microsoft Research Faculty Summit 2019, UC Berkeley AI for Social Impact Conference 2018, Women in Cloud Summit 2018, and RISECamp 2018.

2018 - 2020

Berkeley Institute for Data Science (BIDS), RISELab

Research Assistant

Worked on projects in AI + Systems with an application area of data science education. Project areas include JupyterHub architecture, custom deployments, OkPy autograding integration, Jupyter notebook extensions, and D3.js / Plotly visualizations for data science explorations of funding and enrollment data.

[BIDS] [RISELab]

2015 - 2018

IBM Research

Research Scientist Intern, Machine Learning

Worked on the CSi2 project as a machine learning research scientist intern on the Hybrid Cloud team. CSi2 is an ensemble machine learning algorithm that detects inactive VMs and suggests a course of action (i.e. termination, snapshot). It is projected to save IBM Research at least $3.2 million, with 95.12% recall and an 88% F1 score (well above industry standard), and is being integrated into the Watson Services Platform. Collaborated with Neeraj Asthana, Sai Zheng, Ivan Dell'Era, and Aman Chanana; presented an exit talk and filed two patents.

2017

LinkedIn

Software Engineering Intern

Interned at LinkedIn headquarters with the Growth Division's Search Engine Optimization (SEO) team the summer before entering UC Berkeley. Worked on full-stack testing infrastructure for the public profile pages as well as a Hadoop project; outside of assigned work, helped plan LinkedIn's DevelopHER Hackathon and worked on several market research and user experience design initiatives.

2015

Google

Intern, Made w/ Code Ambassador

Spent a summer learning computer science fundamentals and shadowing engineers through the CAPE high school internship program at Google Headquarters in Mountain View, CA. Chosen as a Google Ambassador for Computer Science following the experience. Worked with Google, Salesforce, and AT&T to introduce coding to over 15,000 girls across California with the Made w/ Code Initiative.

2011

Education

École Polytechnique Fédérale de Lausanne

PhD in Computer Science
  • President of EPFL PhDs in Computer Science (EPIC)
  • Advised by Prof. Tanja Käser at the ML4ED Lab
    and Prof. Martin Jaggi at the MLO Lab
  • EDIC Computer Science Fellowship Recipient, EPFL IC Distinguished Service Award
2020 - Current

University of California, Berkeley

Master's in Electrical Engineering and Computer Science
  • President of Computer Science Honor Society (UPE)
  • Head Graduate Student Instructor of Data 8 (Foundations of Data Science)
  • Research Assistant, Graduate Opportunity Fellow at RISELab
  • Advisor: Dean of Data Sciences, David Culler
2017 - 2018

University of California, Berkeley

Bachelor's in Computer Science
  • EECS Award of Excellence in Undergraduate Teaching and Leadership
  • UC Berkeley Alumni Leadership Scholar
  • Graduated 2 years early
2015 - 2017

Teaching Experience

Machine Learning, Data Analysis, Databases

EPFL

2020 - 2024

CSE/STAT 416: Introduction to Machine Learning

University of Washington, Seattle
  • Lecturer to 100+ upper-division undergraduate and graduate students on a practical introduction to machine learning. Modules include regression, classification, clustering, retrieval, recommender systems, and deep learning, with a focus on an intuitive understanding grounded in real-world applications.

[CSE 416 Website]

Summer 2020

Data 8: Foundations of Data Science

UC Berkeley
  • Lecturer to 250+ undergraduate students on fundamentals of statistical inference, computer programming, and inferential thinking.

[Data 8 Website] [Course Offering] [Course Materials / Code]

Summer 2018

Data 8: Foundations of Data Science

UC Berkeley
  • TA / Head Graduate Student Instructor (GSI) of Data 8 for 4 semesters, responsible for management of 1000+ undergraduates, 40 GSIs, 30 tutors, and 100+ lab assistants each semester.
  • Helped create data science curriculum material for lecture and domain-specific seminar courses.
  • In charge of developing JupyterHub infrastructure for 1500+ active users (Jupyter servers with a Docker/Kubernetes backend on cloud providers including Google Cloud, Azure, and AWS).
2016 - 2018

Organizing Team

WiML Program Chair @ ICML 2022
FATED Workshop Co-Chair @ EDM 2022

Reviewer / Program Committee

AIED Program Committee 2023, 2024
AIED 2021*, 2022* (Subreviewer for Tanja Käser)
EMNLP BlackBoxNLP 2021, 2022, 2023
EACL 2022
EDM Program Committee 2023, 2024
Journal of Educational Data Mining (JEDM) 2022
LAK 2022*, 2023* (Subreviewer for Tanja Käser)
Editor for Springer Series on Big Data Management (Educational Data Science)

Working Groups

Fairness Working Group @ EDM 2022
WiML Workshop Team @ NeurIPS 2021
Lead of the 2020 ONNX SIG for Models and Tutorials

Awards

  • EPFL IC Distinguished Service Award 2021, 2022, 2023
  • EPFL Computer Science (EDIC) Fellowship
  • UC Berkeley EECS Award of Excellence for Teaching and Leadership
  • UC Berkeley Graduate Opportunity Fellowship
  • Kairos Society Entrepreneurship Fellow, UC Berkeley
  • President of UPE, UC Berkeley Computer Science Honor Society
  • UC Berkeley Alumni Leadership Scholar
  • NASA-Conrad Foundation Spirit of Innovation Cybertechnology Finalist
  • Girl Scout Gold Award: Bridging the Digital Divide
  • Google International Trailblazer in Computer Science

Speaking Engagements

  • Fall 2023: Speaker at AWS Research Day on Personalized, Trustworthy Human-Centric Computing: AI for Education
  • Summer 2022: Speaker at Oxford ML "Un-Workshop" Series on Evaluating Explainable AI
  • Summer 2022: Opening Remarks at the FATED workshop at EDM 2022 (Durham, UK)
  • Spring 2022: Speaker at Women in Data Science (WIDS 2022) Silicon Valley: Explainable AI
  • Fall 2021: Spotlight Talk at NeurIPS Inaugural eXplainable AI for Debugging and Diagnosis Workshop
  • Fall 2021: Presenter at the Tamil Internet Conference (INFITT) on "TamilBERT: Natural Language Modeling for Tamil"
  • Spring 2021: Presenter at the EDIC Orientation for PhDs, EPFL
  • Spring 2021: UC Berkeley Data Science Alumni Panel
  • Fall 2020: Featured Guest on the Tech Gals Podcast (Episode 3)
  • Fall 2020: Speaker at the ONNX Workshop
  • Spring 2020: Speaker at Women in Data Science Conference (WIDS 2020) Silicon Valley: Interoperable AI (ONNX)
  • Spring 2020: Speaker at the Linux Foundation (LF) AI Day
  • Fall 2019: Presenter at Microsoft Bay Area AI Meetup
  • Summer 2019: Guest on the Microsoft AI Show (Channel 9)
  • Spring 2019: Speaker at Microsoft Machine Learning and Data Science Conference (MLADS) (Redmond)
  • Summer 2018: Presenter at Artificial Intelligence in Education 2018 (London)
  • Summer 2018: Speaker at UC Berkeley's Data Science Undergraduate Pedagogy and Practice Workshop (Berkeley)
  • Fall 2017: Opening Panelist at Salesforce Dreamforce Conference (SF)
  • Summer 2017: Speaker at JupyterCon (NYC)
  • Spring 2017: Presenter at Berkeley Institute for Data Science Research Showcase (Berkeley)
  • Fall 2016: Panelist at SF BusinessWeek Conference (SF)
  • Summer 2016: Conference organizing team at Algorithms for Modern Massive Data Sets (MMDS) (Berkeley)