Enhancing the Reliability and Efficiency of Imperative Deep Learning Programs

Introduction

In this project, we examine the challenges, tools, theories, and techniques involved in writing reliable and efficient imperative Deep Learning (DL) programs. Efficiency is essential to support responsiveness with respect to ever-growing datasets, especially for DL systems. DL frameworks have traditionally embraced deferred execution-style DL code that supports symbolic, graph-based Deep Neural Network (DNN) computation. While scalable, such development tends to produce DL code that is error-prone, non-intuitive, and difficult to debug. Consequently, more natural, less error-prone imperative DL frameworks encouraging eager execution have emerged, but at the expense of run-time performance.
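
To make the contrast concrete, here is a minimal sketch (assuming TensorFlow 2, which supports both styles) showing the same computation executed eagerly and then deferred to a graph:

    import tensorflow as tf

    # Eager (imperative) style: operations execute immediately, so ordinary
    # Python debugging and control flow behave as expected.
    x = tf.constant([[1.0, 2.0]])
    w = tf.Variable([[3.0], [4.0]])
    y = tf.matmul(x, w)       # runs right away
    print(y.numpy())          # [[11.]]

    # Deferred (graph) style: tf.function traces the Python code into a
    # symbolic graph the runtime can optimize, at the cost of the code no
    # longer executing line by line.
    @tf.function
    def predict(x):
        return tf.matmul(x, w)

    print(predict(x).numpy())  # same result, executed as a graph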

Exploring Challenges Associated with Migrating Imperative Deep Learning Programs to Graph Execution

Introduction

While hybrid approaches aim for the “best of both worlds,” the challenges in applying them in the real world are largely unknown. We conduct a data-driven analysis of challenges—and resultant bugs—involved in writing reliable yet performant imperative DL code by studying 250 open-source projects, consisting of 19.7 MLOC, along with 470 and 446 manually examined code patches and bug reports, respectively. The results indicate that hybridization: (i) is prone to API misuse, (ii) can result in performance degradation—the opposite of its intention, and (iii) has limited application due to execution mode incompatibility. We put forth several recommendations, best practices, and anti-patterns for effectively hybridizing imperative DL code, potentially benefiting DL practitioners, API designers, tool developers, and educators.
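
The sketch below (assuming TensorFlow's tf.function as the hybridization API) illustrates two recurring pitfalls of this kind: Python side effects that only run at tracing time, and retracing triggered by Python-valued arguments that can erase the intended speedup:

    import tensorflow as tf

    @tf.function
    def train_step(x, step):
        print("tracing")         # Python side effect: runs only while tracing,
                                 # not on every call, which often surprises users
        tf.print("step:", step)  # tf.print executes inside the graph instead
        return x * 2.0

    # Passing a Python scalar forces a retrace for every distinct value,
    # which can make the hybridized code slower than its eager original.
    for step in range(3):
        train_step(tf.constant(1.0), step)               # retraces each call

    # Passing tensors keeps a single trace and preserves the intended speedup.
    for step in range(3):
        train_step(tf.constant(1.0), tf.constant(step))  # traces once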

Publications

My name and my research students' names are boldfaced, undergraduate students' names are italicized, and female students' names are underlined:

Tatiana Castro Vélez, Raffi Khatchadourian, Mehdi Bagherzadeh, and Anita Raja. Challenges in migrating imperative Deep Learning programs to graph execution: An empirical study. In International Conference on Mining Software Repositories, MSR ’22, pages 469–481, New York, NY, USA, May 2022. IEEE/ACM, ACM. (45/138; 32.6% acceptance rate). [ bib | DOI | arXiv | video | data | slides | poster | http ]

Study Data

Our dataset is hosted on Zenodo.

Presentations

Automatically Migrating Imperative Deep Learning Programs to Graph Execution

Introduction

Though hybrid approaches aim for the “best of both worlds,” using them effectively requires subtle considerations to make code amenable to safe, accurate, and efficient graph execution. We present an automated refactoring approach that assists developers in specifying whether their otherwise eagerly executed imperative DL code could be reliably and efficiently executed as graphs while preserving semantics. The approach, based on a novel imperative tensor analysis, automatically determines when it is safe and potentially advantageous to migrate imperative DL code to graph execution. It is implemented as a PyDev Eclipse IDE plug-in that integrates the WALA Ariadne analysis framework and is evaluated on 19 Python projects consisting of 132.05 KLOC. We found that 326 of 766 candidate functions (42.56%) were refactorable and observed an average speedup of 2.16× on performance tests. The results indicate that the approach is useful in optimizing imperative DL code to its full potential.
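
As a rough illustration of the transformation involved, here is a hypothetical sketch (the model, optimizer, and training-step function are illustrative assumptions, with tf.function standing in for the hybridization API the approach targets): when the imperative tensor analysis deems an eager function safe, e.g., free of problematic Python side effects, the refactoring hybridizes it for graph execution:

    import tensorflow as tf

    model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
    optimizer = tf.keras.optimizers.SGD(0.01)

    # Hypothetical eagerly executed training step of the kind the tool analyzes.
    def train_step(images, labels):
        with tf.GradientTape() as tape:
            logits = model(images, training=True)
            loss = tf.reduce_mean(
                tf.keras.losses.sparse_categorical_crossentropy(
                    labels, logits, from_logits=True))
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        return loss

    # If the analysis finds the function safe and potentially advantageous to
    # migrate, the refactoring amounts to decorating or wrapping it with
    # tf.function so that it is traced and run as a graph.
    train_step = tf.function(train_step)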

Publications

My name and my research students' names are boldfaced, undergraduate students' names are italicized, and female students' names are underlined:

Raffi Khatchadourian, Tatiana Castro Vélez, Mehdi Bagherzadeh, Nan Jia, and Anita Raja. Hybridize Functions: A tool for automatically refactoring imperative Deep Learning programs to graph execution. In Artur Boronat and Gordon Fraser, editors, Fundamental Approaches to Software Engineering, FASE ’25, pages 89–100, Cham, May 2025. ETAPS, Springer Nature Switzerland. (11/31; 35% acceptance rate). EAPLS Distinguished Paper Award 🏆. [ bib | DOI | tool | slides | poster | http ]

Raffi Khatchadourian, Tatiana Castro Vélez, Mehdi Bagherzadeh, Nan Jia, and Anita Raja. Safe automated refactoring for efficient migration of imperative Deep Learning programs to graph execution, April 2025. [ bib | arXiv | data ]

Nan Jia, Anita Raja, and Raffi Khatchadourian. ReLESS: A framework for assessing safety in Deep Learning systems. In Workshop on Artificial Intelligence Safety at the International Joint Conference on Artificial Intelligence, AISafety ’24 at IJCAI ’24. IJCAI, August 2024. Best Paper Award 🏆 nominee. [ bib | http ]

Raffi Khatchadourian, Tatiana Castro Vélez, Mehdi Bagherzadeh, Nan Jia, and Anita Raja. Towards safe automated refactoring of imperative Deep Learning programs to graph execution. In International Conference on Automated Software Engineering, ASE ’23, pages 1800–1802. IEEE, September 2023. NIER track. (25/70; 35.7% acceptance rate). [ bib | DOI | slides | http ]

Research Prototype

Study Data

Our dataset is hosted on Zenodo.

Presentations

Awards & Nominations

  1. EAPLS Distinguished Paper Award at FASE ’25.
  2. Best Paper Award nomination at AISafety ’24.

Funding
