Contents
Introduction
In this project, we examine the challenges, tools, theories, and techniques involved in writing reliable and efficient imperative Deep Learning (DL) programs. Efficiency is essential to support responsiveness with respect to ever-growing datasets, especially for DL systems. DL frameworks have traditionally embraced deferred execution-style DL code that supports symbolic, graph-based Deep Neural Network (DNN) computation. While scalable, such development tends to produce DL code that is error-prone, non-intuitive, and difficult to debug. Consequently, more natural, less error-prone imperative DL frameworks encouraging eager execution have emerged, but at the expense of run-time performance.
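The contrast between the two styles can be illustrated with a minimal, framework-agnostic Python sketch (the `Node` graph abstraction here is hypothetical, standing in for a framework's symbolic tensors rather than any real API):

```python
# Eager (imperative) style: operations execute immediately,
# so intermediate values can be inspected and debugged directly.
def eager_compute(x, y):
    s = x + y  # runs now; `s` is a concrete value
    return s * 2

# Deferred (graph) style: operations build a symbolic graph that is
# executed later, enabling whole-graph optimization but obscuring
# intermediate values during development.
class Node:
    def __init__(self, op, *args):
        self.op, self.args = op, args

    def run(self):
        vals = [a.run() if isinstance(a, Node) else a for a in self.args]
        if self.op == "add":
            return vals[0] + vals[1]
        if self.op == "mul":
            return vals[0] * vals[1]
        raise ValueError(self.op)

def build_graph(x, y):
    # Nothing is computed here; we only describe the computation.
    return Node("mul", Node("add", x, y), 2)

print(eager_compute(3, 4))      # 14, computed immediately
print(build_graph(3, 4).run())  # 14, computed only at run()
```

In the eager version, a misbehaving intermediate value can be printed or stepped through directly; in the deferred version, errors surface only when the graph finally runs, which is the debugging difficulty the paragraph above describes.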
Researchers
Exploring Challenges Associated with Migrating Imperative Deep Learning Programs to Graph Execution
Introduction
While hybrid approaches aim for the “best of both worlds,” the challenges in applying them in the real world are largely unknown. We conduct a data-driven analysis of challenges—and resultant bugs—involved in writing reliable yet performant imperative DL code by studying 250 open-source projects, consisting of 19.7 MLOC, along with 470 and 446 manually examined code patches and bug reports, respectively. The results indicate that hybridization: (i) is prone to API misuse, (ii) can result in performance degradation—the opposite of its intention, and (iii) has limited application due to execution mode incompatibility. We put forth several recommendations, best practices, and anti-patterns for effectively hybridizing imperative DL code, potentially benefiting DL practitioners, API designers, tool developers, and educators.
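One class of API misuse the study surfaces involves Python side effects inside hybridized functions: tracing executes the Python body once to record a graph, so side effects fire at trace time rather than on every call. The sketch below mimics that trace-once behavior in pure Python; the `trace_once` decorator is hypothetical and only imitates, in spirit, how a hybridization API such as TensorFlow's `tf.function` traces a function:

```python
log = []

def trace_once(fn):
    """Mimic graph tracing: the Python body executes once to record the
    computation; later calls replay the record, skipping Python code."""
    recorded = []
    def wrapper(x):
        if not recorded:
            result = fn(x)  # body (and its side effects) run only now
            recorded.append(lambda v: v * 2)  # the 'graph' keeps only the math
            return result
        return recorded[0](x)  # replay: fn's Python code is never re-run
    return wrapper

@trace_once
def double(x):
    log.append(f"called with {x}")  # side effect: fires only at trace time
    return x * 2

results = [double(1), double(2), double(3)]
print(results)  # [2, 4, 6]
print(log)      # ['called with 1'] -- the append ran once, not three times
```

A developer expecting `log` to record all three calls would observe silently missing entries, which is exactly the kind of subtle, non-crashing misuse that makes hybridized code hard to debug.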
Publications
My name and my research students’ names are boldfaced, undergraduate students are italicized, and female students are underlined:
Tatiana Castro Vélez, Raffi Khatchadourian, Mehdi Bagherzadeh, and Anita Raja. Challenges in migrating imperative Deep Learning programs to graph execution: An empirical study. In International Conference on Mining Software Repositories, MSR ’22, pages 469–481, New York, NY, USA, May 2022. IEEE/ACM, ACM. (45/138; 32.6% acceptance rate). [ bib | DOI | arXiv | video | data | slides | poster | http ]
Study Data
Our dataset is hosted on Zenodo.
Blog Posts
Presentations
Automatically Migrating Imperative Deep Learning Programs to Graph Execution
Introduction

Though hybrid approaches aim for the “best of both worlds,” using them effectively requires subtle considerations. Our key insight is that, while DL programs typically execute sequentially, hybridizing imperative DL code resembles parallelizing sequential code in traditional systems. Inspired by this, we present an automated refactoring approach that assists developers in determining which otherwise eagerly-executed imperative DL functions could be effectively and efficiently executed as graphs. The approach features novel static imperative tensor and side-effect analyses for Python. Due to its inherent dynamism, analyzing Python may be unsound; however, the conservative approach leverages a speculative (keyword-based) analysis for resolving difficult cases that informs developers of any assumptions made. The approach is: (i) implemented as a plug-in to the PyDev Eclipse IDE that integrates the WALA Ariadne analysis framework and (ii) evaluated on nineteen DL projects consisting of 132 KLOC. The results show that 326 of 766 candidate functions (42.56%) were refactorable, and an average relative speedup of 2.16× was observed on performance tests with negligible differences in model accuracy. The results indicate that the approach is useful in optimizing imperative DL code to its full potential.
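The flavor of a speculative, keyword-based analysis can be sketched with Python's standard `ast` module. This is only an illustration under assumed details: the `SIDE_EFFECT_HINTS` keyword set and the `looks_hybridizable` function are hypothetical and far simpler than the paper's actual static tensor and side-effect analyses:

```python
import ast

# Hypothetical hint set: names whose presence speculatively suggests a
# side effect, making a function an unsafe hybridization candidate.
SIDE_EFFECT_HINTS = {"print", "open", "write", "append"}

def looks_hybridizable(src):
    """Speculatively flag a function as a graph-execution candidate if no
    side-effect-suggesting names appear anywhere in its source."""
    tree = ast.parse(src)
    names = {n.id for n in ast.walk(tree) if isinstance(n, ast.Name)}
    names |= {n.attr for n in ast.walk(tree) if isinstance(n, ast.Attribute)}
    return not (names & SIDE_EFFECT_HINTS)

pure_math = "def f(x):\n    return x * x + 1\n"
logs_progress = "def g(x):\n    print('step', x)\n    return x * x + 1\n"

print(looks_hybridizable(pure_math))      # True: candidate for graph execution
print(looks_hybridizable(logs_progress))  # False: flagged as side-effecting
```

Because such keyword matching is speculative rather than sound, a real tool must surface its assumptions to the developer, as the approach described above does.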
Publications
My name and my research students’ names are boldfaced, undergraduate students are italicized, and female students are underlined:
Raffi Khatchadourian, Tatiana Castro Vélez, Mehdi Bagherzadeh, Nan Jia, and Anita Raja. Speculative automated refactoring of imperative Deep Learning programs to graph execution, May 2025. [ bib | arXiv | data ]
Raffi Khatchadourian, Tatiana Castro Vélez, Mehdi Bagherzadeh, Nan Jia, and Anita Raja. Hybridize Functions: A tool for automatically refactoring imperative Deep Learning programs to graph execution. In Artur Boronat and Gordon Fraser, editors, Fundamental Approaches to Software Engineering, FASE ’25, pages 89–100, Cham, May 2025. ETAPS, Springer Nature Switzerland. (11/31; 35% acceptance rate). EAPLS Distinguished Paper Award 🏆. [ bib | DOI | tool | slides | poster | http ]
Nan Jia, Anita Raja, and Raffi Khatchadourian. ReLESS: A framework for assessing safety in Deep Learning systems. In Workshop on Artificial Intelligence Safety at the International Joint Conference on Artificial Intelligence, AISafety ’24 at IJCAI ’24. IJCAI, August 2024. Best Paper Award 🏆 nominee. [ bib | http ]
Raffi Khatchadourian, Tatiana Castro Vélez, Mehdi Bagherzadeh, Nan Jia, and Anita Raja. Towards safe automated refactoring of imperative Deep Learning programs to graph execution. In International Conference on Automated Software Engineering, ASE ’23, pages 1800–1802. IEEE, September 2023. NIER track. (25/70; 35.7% acceptance rate). [ bib | DOI | slides | http ]
Research Prototype
Study Data
Our dataset is hosted on Zenodo.