
Research

Introduction

Please review my vita, projects, publications, presentations, students, collaborators, background, group, or software pages for more information regarding my research. The slides below also provide an overview of my research area. Also, be sure to check out the research blog for recent paper highlights.

Software Evolution, Program Analysis, and Automated Refactoring

Once software has been initially deployed, it often undergoes various changes before retirement. For example, stakeholder requirements may change while the software is deployed. Depending on the software’s size and complexity, such a change may result in non-trivial modifications to various supporting software artifacts. Consequently, software development involves revisiting artifacts to materialize requirement changes.

Requirement changes are not the only reason artifacts are altered. Other reasons may include modifying the deployment environment, migrating to a new platform version, and restructuring the software to mirror an improved design [1]. Unfortunately, software evolution and maintenance is an expensive, time-consuming, error- and omission-prone task, especially when the software is large and complex [2]. Typically, changing one element necessitates modifying related—and often seemingly unrelated—elements scattered throughout the software system.

To address these problems, approaches have emerged to assist developers with a wide range of software evolution and maintenance tasks. Automated refactoring is one such approach. Refactorings [3][4] are program transformations that are:

  1. Source-to-source (instead of, e.g., source-to-bytecode) and
  2. Semantics-preserving, i.e., the program’s behavior is the same before and after the refactoring.

Refactoring preconditions are formulated to ensure that code exhibits these criteria following the transformation. Automated refactorings combine (front-end) compiler techniques (e.g., type inferencing), theories, algorithms, and automated software engineering to ensure that:

  1. Complex analyses underpinning the refactoring are sound,
  2. Widespread transformations are correct, and
  3. Automated transformations are minimal—resembling those that an expert human developer may have performed.

Refactorings take place for many reasons, including enhancing program structure [5]–[8], upgrading to new API versions or design patterns [9]–[15], migrating legacy code to a new platform (e.g., [16]–[18]), parallelizing sequential code [2], [19], improving energy consumption [20], eliminating code redundancy [21], making mobile applications more asynchronous [22], and others [23].

Transformations made by automated refactoring may span multiple, non-adjacent program statements or expressions. Simple find-and-replace tools are inadequate for these tasks, which require program analyses such as dataflow and type inference [2]. Semantics preservation is often difficult due to complex relationships among program entities, especially when refactoring Object-Oriented (OO) systems utilizing polymorphism or multiple dispatch. For example, erroneously altering a parameter type may inadvertently change a method overloading relationship to overriding. Dynamic features and languages further complicate semantics preservation efforts. Reflection, metaprogramming, and dynamic language (e.g., Python) features (e.g., conditional imports) make approximating run-time behavior at “development” time difficult to impossible.
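
To make the overloading hazard concrete, consider the following minimal Java sketch (the Base and Sub class names are hypothetical). Changing the parameter type of Sub.log from String to Object would turn the overload into an override, silently changing which method the call below resolves to:

```java
class Base {
    String log(Object o) { return "Base.log(Object)"; }
}

class Sub extends Base {
    // An *overload* of Base.log: the parameter type differs (String vs. Object).
    // If a refactoring changed this parameter to Object, the method would
    // instead *override* Base.log, altering dynamic dispatch.
    String log(String s) { return "Sub.log(String)"; }
}

public class DispatchDemo {
    public static void main(String[] args) {
        Base b = new Sub();
        // The static type Base selects log(Object); Sub does not override it,
        // so the Base implementation runs even though b refers to a Sub.
        System.out.println(b.log("msg")); // prints "Base.log(Object)"
    }
}
```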


Furthermore, refactorings are typically conservative due to the semantics preservation requirements. Thus, to be helpful, refactorings must also automatically discover (or mine for) as many refactoring opportunities as possible, even ones that may not be immediately evident to developers. Also, unlike compiler optimizations, since refactorings are source-to-source transformations, their results remain in the code. Consequently, transformations must be comprehensible, maintainable, and of high quality.

Big Data & Programming

Big Data has brought new challenges to the engineering of today’s large and complex software systems. Software must handle more data, more frequently, and at higher throughput. These (often competing) goals have given rise to new, highly distributed programming models and network architectures. Mainstream OO programming languages have also adapted by increasingly incorporating functional programming language constructs. Because functional programming embraces concepts such as isolation and immutability, it is well suited to handling the increased workloads and transaction frequency. However, while such constructs work well in highly distributed systems (e.g., Spark, Hadoop), their incorporation into mainstream languages poses new obstacles for developers.

Functional Features in Mainstream Object-Oriented Programs

There is a recent trend in mainstream OO programming languages, frameworks, and libraries toward incorporating more functional programming language constructs. The use of these constructs in mainstream OO languages is increasingly pervasive [24]. For instance, lambdas, streams, and MapReduce-style [25] computations are now available in languages such as Java [26] (and Android [27]), C# [28], F# [29], and Scala [30]. Through these mechanisms, native data structures like collections, as well as infinite data structures, may be declaratively processed. Streaming Application Programming Interfaces (APIs), for example—available in many mainstream OO languages—support this paradigm fusion. Other mechanisms include lambda expressions (lambdas), i.e., (typically stateless) first-class computational units facilitating deferred execution [31], [32], and option types, i.e., monadic types supporting MapReduce-style operations.
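
As a small illustration using Java’s standard option type, java.util.Optional supports MapReduce-style operations such as map() while propagating the empty case without explicit null checks:

```java
import java.util.Optional;

public class OptionDemo {
    public static void main(String[] args) {
        Optional<String> present = Optional.of("widget");
        Optional<String> absent = Optional.empty();

        // map() applies the function only if a value is present; orElse()
        // supplies a default for the empty case.
        int presentLength = present.map(String::length).orElse(0); // 6
        int absentLength = absent.map(String::length).orElse(0);   // 0

        System.out.println(presentLength + " " + absentLength);
    }
}
```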

“Functionalizing” Mainstream Object-Oriented Programs

While new software written in mainstream OO languages can benefit from using functional language-inspired features, the question remains whether legacy software may also take advantage of this new, attractive trend. Previous work [33] has examined how lambdas can be retrofitted onto existing software; however, the infrastructure that enables using lambdas in a backward-compatible fashion has yet to be thoroughly examined. Adding lambdas to existing languages necessitates that older programs written in that language can interoperate with the new code. In Java, for instance, default methods make this transition possible. An open question is whether these useful constructs can be used to improve legacy systems in other ways.

Automated Refactoring of Legacy Programs to Default Methods

Using default methods, developers can write default (instance) interface methods, which provide an implementation that interface implementers inherit if they do not provide their own [34]. Although their original motivation was to add new functionality to existing interfaces without breaking clients [35], default methods can also be used [36] as a replacement for the skeletal implementation pattern [37, Item 18]. The skeletal implementation pattern involves creating an abstract skeletal implementation class that provides a partial interface implementation. Instead of directly implementing the interface, interface implementations extend the abstract skeletal implementation class, making the interface easier to implement.
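
A minimal Java sketch of both variants may help; the Account, AbstractAccount, Savings, and Checking names are hypothetical:

```java
interface Account {
    double balance();
    boolean isOverdrawn();
}

// Skeletal implementation pattern: an abstract class provides a partial
// implementation, and implementers extend it rather than implementing
// Account directly.
abstract class AbstractAccount implements Account {
    @Override public boolean isOverdrawn() { return balance() < 0; }
}

class Savings extends AbstractAccount {
    @Override public double balance() { return 100.0; }
}

// With default methods, the shared behavior lives in the interface itself,
// so implementers keep their superclass slot free.
interface AccountV2 {
    double balance();
    default boolean isOverdrawn() { return balance() < 0; }
}

class Checking implements AccountV2 {
    @Override public double balance() { return -25.0; }
}

public class SkeletalDemo {
    public static void main(String[] args) {
        System.out.println(new Savings().isOverdrawn() + " "
                + new Checking().isOverdrawn()); // prints "false true"
    }
}
```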

Because functional programming embraces concepts such as isolation and immutability, it is ideal to handle the increased workloads and transaction frequency.

While migrating legacy code that uses the skeletal implementation pattern to use default methods instead has many advantages, e.g., foregoing the need for subclassing, having classes inherit behavior (but not state) from multiple interfaces [36], and facilitating local reasoning [6], doing so requires significant manual effort, especially in large projects. Notably, there are subtle language and semantic restrictions, e.g., interfaces cannot declare (instance) fields. The transformation requires preserving type-correctness by analyzing complex type hierarchies, resolving potential issues arising from multiple (implementation) inheritance, reconciling differences between class and interface methods, and ensuring that tie-breakers with overriding class methods—rules governing dispatch precedence between class and default methods with the same signature—do not alter semantics.
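
One such tie-breaker is the “class wins” rule: a concrete method inherited from a superclass takes precedence over a default method with the same signature. A minimal sketch (with hypothetical class names):

```java
interface Greeter {
    default String greet() { return "interface default"; }
}

class BaseGreeter {
    public String greet() { return "class method"; }
}

// MyGreeter inherits greet() from both BaseGreeter and Greeter; the
// "class wins" tie-breaker dictates that the superclass method is chosen.
class MyGreeter extends BaseGreeter implements Greeter { }

public class TieBreakerDemo {
    public static void main(String[] args) {
        System.out.println(new MyGreeter().greet()); // prints "class method"
    }
}
```

A refactoring that moves a method into an interface as a default method must therefore verify that no class in the hierarchy silently shadows it this way.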

In this effort, we have devised an efficient, fully automated, semantics-preserving refactoring approach that assists developers in taking advantage of enhanced interfaces. The approach—based on type constraints [5], [38]—works on large-scale projects with minimal intervention and features an extensive rule set covering various corner cases where default methods cannot be used. It identifies instances of the pattern and safely migrates class method implementations to interfaces as default methods. The related Pull Up Method refactoring [4], [5] safely moves methods from a subclass into a superclass. Its goal is to reduce redundant code, whereas ours includes opening classes to inheritance and allowing classes to inherit multiple interface definitions. Moreover, the approach deals with multiple inheritance, a more complicated type hierarchy involving interfaces, semantic differences due to class tie-breaking, and differences between class method headers and corresponding interface method declarations.

This work resulted in a technical track publication at ICSE ’17 [11] and an open-source Eclipse plug-in, publicly available via the Eclipse Marketplace, called Migrate Skeletal Implementation to Interface Refactoring [39]. Educators may also use the plug-in to teach students how these new constructs work by relating their functionality to previous knowledge (i.e., the original program vs. the refactored program). Although many theoretical considerations were required in formulating the approach, this research frequently results in practical artifacts with real-world applicability.

The work also involved empirical experimentation on 19 open-source subject systems of varying sizes and domains with a total of ∼2.7 million lines of code. Additionally, pull requests (patches) of the refactoring results were submitted to popular GitHub repositories. The study served as a basis for a more comprehensive investigation of best practices for default method usage in legacy systems, which was published in the main technical research track at Programming ’18 [40]. Additionally, four of our submitted patches were accepted into real-world, open-source projects, impacting the software development community. Our study indicated that the analysis cost is practical, the skeletal implementation pattern is commonly used in legacy software, and the proposed approach helps refactor method implementations into default methods despite language restrictions. It also provides insight to language designers on how this new language construct applies to existing software.

Streaming API Misuse in Mainstream Object-Oriented Programs

Using functionally-inspired language features, constructs, and (streaming) APIs in mainstream OO programs can have many benefits [41, Ch. 1], including succinct [42] and nearly effortless (syntax-wise) parallelism [43]. Streaming APIs, for example, incorporate MapReduce-like [25] operations on native data structures like collections or infinite data structures, allowing high-level manipulation of values with functional language-inspired operations, such as filter() and map(). Such operations typically accept behavioral parameters, i.e., lambdas (syntactic language mechanisms facilitating deferred execution [31], [32]), that define how to process each encountered element.

The listing below shows a “sum of even squares” computation in Java, where each even number encountered is squared and summed [24]. The lambda argument to filter() evaluates to true if and only if the element is even, while the lambda passed to mapToInt() produces each element’s square. The code in this example can execute in parallel simply by replacing stream() with parallelStream():

int sumOfInts = list.stream().filter(x -> x % 2 == 0).mapToInt(x -> x * x).sum();

Despite the advantages, combining the paradigms may result in code that behaves in ways developers do not expect. For instance, unexpected errors and suboptimal performance may be incurred, as subtle considerations are required to produce code that is correct, optimally parallelizable, efficient, reliable, and free of bugs related to nontermination, non-determinism, and mismanaged resource cleanup when using these constructs [44]. Moreover, developers may be reluctant to use these constructs to write concurrent code [45], perhaps missing opportunities where this modern technology may be beneficial [46]. Also, API developers may be restricted in writing libraries that are maximally usable and extendable due to the current language design [24].

One significant problem is that, while MapReduce is traditionally highly distributed with no shared memory, streaming APIs in mainstream OO languages execute within a single node under multiple threads sharing the same memory. This difference makes using these constructs and APIs prone to such problems as thread contention, necessitating great care in writing correct yet efficient code. Moreover, discovering and repairing these problems may involve complex interprocedural whole-program analysis, a thorough understanding of construct intricacies, and detailed knowledge of situational API replacements. Manually detecting problematic code areas [47][48] and uncovering appropriate optimizations that transform source code while preserving its original semantics (refactoring) can be daunting, especially in large and complex projects or where developers are unfamiliar with how best to use such constructs and APIs. Approximately four thousand stream questions have been posted on [49], of which ∼5% remain unanswered.
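
As a hedged sketch of one such shared-memory pitfall, mutating a non-thread-safe collection from within a parallel stream’s behavioral parameter can race, whereas expressing the same computation as a reduction is safe:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ParallelSideEffectDemo {
    public static void main(String[] args) {
        // Anti-pattern (commented out): adding to a plain ArrayList from a
        // parallel stream races, potentially dropping elements or throwing.
        // List<Integer> unsafe = new ArrayList<>();
        // IntStream.range(0, 100_000).parallel().forEach(unsafe::add);

        // Safe: let the stream perform the reduction itself; collect() is
        // designed to work correctly under parallel execution.
        List<Integer> safe = IntStream.range(0, 100_000)
                .parallel()
                .boxed()
                .collect(Collectors.toList());

        System.out.println(safe.size()); // prints 100000
    }
}
```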

In our preliminary work [46], we found 157 total streams across 11 open-source subject projects, with a maximum of 34 in a single subject; these numbers can increase over time as stream popularity rises. Also, a single stream may have many operations issued on it; we discovered an average of 4.14 operations per stream. Consequently, manually optimizing streams entails determining and compacting the insertion locations of many operations. Lastly, (manual) type hierarchy analysis may be needed to discover ways to use streams in a particular context. Permuting through operation combinations and assessing performance, for which dedicated performance tests may be absent, can be burdensome. Although the listing above shows stream processing in a single method, determining stream characteristics may involve an interprocedural analysis, as the characteristics depend on the run-time type of the collection from which the stream is derived. Also, we found that, across 19 open-source projects of varying size and domain, 144 streams are returned from methods, and 147 are used as parameters to methods. This is in stark contrast to the usual tutorials portraying stream usage.
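
The dependence on the collection’s run-time type can be observed with standard JDK APIs: whether a stream is ordered is a characteristic of its source’s Spliterator, so the same pipeline behaves differently over a List than over a HashSet (the isOrdered helper below is illustrative only):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Spliterator;

public class StreamCharacteristicsDemo {
    // In real code, the source collection may be created far from the stream
    // pipeline, which is why an interprocedural analysis may be required.
    static boolean isOrdered(Collection<Integer> source) {
        return source.stream().spliterator().hasCharacteristics(Spliterator.ORDERED);
    }

    public static void main(String[] args) {
        System.out.println(isOrdered(new ArrayList<>(List.of(1, 2, 3)))); // true
        System.out.println(isOrdered(new HashSet<>(List.of(1, 2, 3))));  // false
    }
}
```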

Safe Optimization of Streaming API Client Programs via Automated Refactoring

In our effort toward safely increasing the performance of streaming API client programs, we have formulated a fully automated, semantics-preserving refactoring approach that transforms stream code for improved performance. The approach is based on a novel data ordering and typestate analysis. The ordering analysis involves inferring when maintaining the order of a data sequence in a particular expression is necessary for semantics preservation. Typestate analysis is a program analysis that augments the type system with “state” and has traditionally been used for preventing resource errors [50], [51]. Here, it is used to identify stream usages that can benefit from “intelligent” parallelization, resulting in more efficient, semantically equivalent code. Our approach interprocedurally analyzes relationships between types. It also discovers possible side effects in arbitrarily complex lambdas to safely transform streams to execute either sequentially or in parallel, depending on which refactoring preconditions pass. Furthermore, to the best of our knowledge, it is the first automated refactoring technique to integrate typestate analysis. Integrating such complex static analyses is challenging, as these analyses commonly involve an instruction-based intermediate representation (IR), while refactorings work on Abstract Syntax Trees (ASTs) to facilitate source-to-source transformation.

Our refactoring approach was implemented as an open-source Eclipse plug-in called Optimize Streams that integrates analyses from the WALA static analysis framework [52] and the SAFE typestate analysis engine. An engineering track paper describing this implementation was published at IEEE SCAM ’18 [53] and won a Distinguished Paper Award. The evaluation involved studying the effects of our plug-in on 11 projects of varying sizes and domains with a total of ∼642 thousand lines of code. Our study has indicated that:

  1. Given its interprocedural nature, the analysis cost is reasonable, especially considering that it is fully automated, with an average running time of 0.45 minutes per candidate stream and 6.602 seconds per thousand lines of code,
  2. Despite their ease of use, parallel streams are not commonly (manually) used in modern software, motivating an automated approach, and
  3. Our approach helps refactor stream code for greater efficiency despite its conservative nature.

This work makes contributions in the following areas. Preconditions are formulated to assist developers in maximizing the efficiency of their stream code by automatically determining when it is safe and possibly advantageous to execute streams in parallel, when running streams in parallel can be counterproductive, and when ordering is unnecessarily depriving streams of optimal performance. The critical insight is that the type of the resulting reduction can be analyzed to maximize stream performance. Characteristics such as ordering can be detrimental to running MapReduce-style operations in parallel. If it can be determined that the ordering attribute of the reduction result is not essential, e.g., the resulting data is not collected into a data structure that maintains ordering, such as an array, such streams can either run in parallel as-is or be altered not to maintain ordering (unordered). Conversely, if streams already running parallel code must maintain ordering, such code should run sequentially. Also significant are the side effects that lambdas passed to MapReduce operations may produce and whether the data structures manipulated in those lambdas support simultaneous access.

Consider the following listing that contains code that can optimally run in parallel:

// collect weights over 43.2 into a set in parallel.
Set<Double> heavyWidgetWeightSet = orderedWidgets.parallelStream().
    map(Widget::getWeight).filter(w -> w > 43.2).collect(Collectors.toSet());

The reason is that:

  1. The lambdas, e.g., w -> w > 43.2, contain no heap side effects, and
  2. The data elements (weights) are gathered into a data structure (a set) that does not maintain element ordering.

Had either of these conditions been false, our refactoring would alter the above code to run sequentially, e.g., as in orderedWidgets.stream(). Our approach refactors streams for greater parallelism while maintaining original semantics. Practical stream analysis necessitates several generalizations of typestate analysis, including determining object state at arbitrary points and support for immutable object call chains. Reflection is combined with (hybrid) typestate analysis to identify initial states. To ensure real-world applicability, the approach was implemented as an Eclipse plug-in built on WALA and SAFE and was used to study 11 programs that use streams. Our technique successfully refactored 36.31% of candidate streams, and we observed an average speedup of 3.49 during performance testing. The experimentation also gives insights into how streams are used in real-world applications, which can motivate future language and API design. A publication that includes these results appeared in the technical track of ICSE ’19 [19].
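
To sketch the ordering insight with runnable code (Widget here is a hypothetical stand-in for the example above), a reduction into an order-maintaining List benefits from sequential execution, whereas a reduction into a Set permits an unordered parallel pipeline:

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class OrderingOptimizationDemo {
    // Hypothetical stand-in for the Widget class in the example above.
    record Widget(double weight) { }

    public static void main(String[] args) {
        List<Widget> orderedWidgets =
                List.of(new Widget(10.0), new Widget(50.5), new Widget(99.9));

        // The reduction result is a List, which maintains encounter order;
        // parallelizing this pipeline would pay the cost of preserving it.
        List<Double> inOrder = orderedWidgets.stream()
                .map(Widget::weight).filter(w -> w > 43.2)
                .collect(Collectors.toList());

        // The reduction result is a Set, which discards ordering, so the
        // pipeline can safely run both parallel and unordered.
        Set<Double> anyOrder = orderedWidgets.parallelStream().unordered()
                .map(Widget::weight).filter(w -> w > 43.2)
                .collect(Collectors.toSet());

        System.out.println(inOrder + " " + anyOrder);
    }
}
```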

Large-scale Empirical Study

Through a large-scale empirical study, we are investigating a broader understanding of how streaming APIs are used (and misused) in open-source, real-world software projects. To the best of our knowledge, it is the first such study to focus on streaming API integration in mainstream OO programs in a general sense. The results may be helpful to language designers, API developers, and those creating tool support. Our preliminary work in this area resulted in a technical track publication at FASE ’20 [54] that won the 2020 EAPLS Best Paper Award.

Evolution and Maintenance of Machine Learning Systems

Machine Learning (ML) systems, like stream-based programming, are also a growing area for data-intensive software applications. The data aspect of such systems introduces challenges specific to them [55], as data is considered part of the code. Long-lived ML systems, such as those used in e-commerce and other industrial settings, will require special considerations during software evolution and maintenance. Consequently, we will develop approaches and techniques to combat evolution challenges, e.g., technical debt [56], [57], specific to ML systems. The central insight is that ML models are only a tiny part of the overall system, with many subsystems supporting the learning process. Combining traditional software components with data-driven (learning) components can introduce unique subtleties in evolving the components together.

Our preliminary work in this area has involved empirically studying common refactorings and the ML-specific technical debt they combat. This study is the first of its kind and has served as an open-source, data-driven complement to the seminal work by [55] on the specific evolution challenges of ML systems. In this work, from 327 manually examined patches of 26 projects, we built a rich, hierarchical, crosscutting taxonomy of common generic and ML-specific refactorings, whether they occur in ML-related code—code specific to ML-related tasks (e.g., classifiers, feature extraction, algorithm parameters)—and the ML-specific technical debt they address. We also introduced 14 new ML-specific refactoring categories and 7 new technical debt categories. Lastly, we proposed preliminary recommendations, best practices, and anti-patterns for long-lasting ML system evolution from our statistical results, as well as an in-depth analysis. The results were published in a paper included in the main technical track of ICSE ’21 [58]. My plans include generalizing and subsequently automating—extracting underlying theories and techniques from—the common refactorings in this study.

Practical Analyses and Transformations for Scalable Imperative Deep Learning Programs

Like general ML systems, the engineering of Deep Learning (DL) systems also presents unique challenges. DL systems inherently deal with larger datasets, and their run-time performance is critical, particularly in industrial settings. Traditionally, DL systems are programmed using DL frameworks that support a declarative paradigm, emphasizing deferred execution of model code. Because the APIs mimic facets of the underlying (GPU) hardware, such programs are typically performant; however, because they do not follow an imperative, step-by-step control flow, they are notoriously tricky to write, debug, and evolve. Modern DL frameworks now support imperative (or “eager”) programming to combat these problems. Unfortunately, while easier to program, imperative DL programs suffer from run-time performance problems.

One avenue of research in this area has explored bridging the two paradigms, a technique called hybridization (e.g., in TensorFlow), which has since been incorporated into mainstream DL frameworks. Here, developers specify the parts of the code they wish to optimize, and the runtime automatically converts that code into execution graphs, i.e., the graphs that would otherwise have been built using deferred execution. This way, DL developers (data scientists) can write imperative DL code and still take advantage of advanced hardware acceleration. However, this “bridging” technology comes with its own obstacles, as developers must manually specify non-trivial metadata, i.e., which functions to hybridize and how to hybridize them. Although approaches have emerged that enable the runtime to convert DL code to graphs without metadata, they usually require custom interpreters and only support certain Python constructs, which may not be practical in industrial settings. As such, DL developers, who may not be classically trained software engineers, are left to manually tune their DL applications on the client side.

We empirically studied the challenges developers face in writing scalable, imperative DL systems by manually examining—with the aid of automated mining approaches—470 patches and 446 bug reports from 250 projects, and we built a rich hierarchical taxonomy of common hybridization challenges. We found that DL hybridization is widely used, that subtle bugs in using hybridization can cause run-time performance degradation—the opposite of its intention—and that hybridization is commonly incompatible in a given context, limiting its application. This work has been published in MSR ’22 [59], and ongoing work is being supported by an NSF CISE/CCF/SHF core research grant.

References

[1]
ISO/IEC 14764, Software engineering – software life cycle processes – maintenance. International Organizations for Standardization, Geneva, Switzerland, 2006.
[2]
D. Dig, J. Marrero, and M. D. Ernst, “Refactoring sequential java code for concurrency via concurrent libraries,” in International conference on software engineering, 2009, pp. 397–407. doi: 10.1109/icse.2009.5070539.
[3]
W. F. Opdyke, “Refactoring object-oriented frameworks,” PhD thesis, University of Illinois at Urbana-Champaign, Champaign, IL, USA, 1992.
[4]
M. Fowler, Refactoring: Improving the design of existing code. Boston, MA, USA: Addison-Wesley, 1999.
[5]
F. Tip, R. M. Fuhrer, A. Kieżun, M. D. Ernst, I. Balaban, and B. De Sutter, “Refactoring using type constraints,” ACM Transactions on Programming Languages and Systems, vol. 33, no. 3, pp. 9:1–9:47, May 2011, doi: 10.1145/1961204.1961205.
[6]
R. Khatchadourian, O. Moore, and H. Masuhara, “Towards improving interface modularity in legacy Java software through automated refactoring,” in Companion proceedings of the international conference on modularity, Mar. 2016, pp. 104–106. doi: 10.1145/2892664.2892681.
[7]
A. Donovan, A. Kieżun, M. S. Tschantz, and M. D. Ernst, “Converting Java programs to use generic libraries,” in ACM SIGPLAN international conference on object-oriented programming, systems, languages, and applications, 2004, pp. 15–34.
[8]
H. Kegel and F. Steimann, “Systematically refactoring inheritance to delegation in java,” in International conference on software engineering, 2008, pp. 431–440.
[9]
M. A. G. Gaitani, V. E. Zafeiris, N. A. Diamantidis, and E. A. Giakoumakis, “Automated refactoring to the null object design pattern,” Inf. Softw. Technol., vol. 59, no. C, pp. 33–52, Mar. 2015, doi: 10.1016/j.infsof.2014.10.010.
[10]
R. Khatchadourian, “Automated refactoring of legacy Java software to enumerated types,” Automated Software Engineering, pp. 1–31, Dec. 2016, doi: 10.1007/s10515-016-0208-8.
[11]
R. Khatchadourian and H. Masuhara, “Automated refactoring of legacy Java software to default methods,” in International conference on software engineering, May 2017, pp. 82–93. doi: 10.1109/ICSE.2017.16.
[12]
W. Tansey and E. Tilevich, “Annotation refactoring: Inferring upgrade transformations for legacy applications,” in ACM SIGPLAN international conference on object-oriented programming, systems, languages, and applications, 2008.
[13]
D. von Dincklage and A. Diwan, “Converting Java classes to use generics,” in ACM SIGPLAN international conference on object-oriented programming, systems, languages, and applications, 2004, pp. 1–14.
[14]
R. Fuhrer, F. Tip, A. Kieżun, J. Dolby, and M. Keller, “Efficiently refactoring Java applications to use generic libraries,” in European conference on object-oriented programming, Jul. 2005, pp. 71–96.
[15]
A. Kieżun, M. D. Ernst, F. Tip, and R. M. Fuhrer, “Refactoring for parameterizing Java classes,” in International conference on software engineering, 2007, pp. 437–446.
[16]
A. de Lucia, G. A. D. Lucca, A. R. Fasolino, P. Guerra, and S. Petruzzelli, “Migrating legacy systems towards object-oriented platforms,” International Conference on Software Maintenance, p. 122, 1997.
[17]
Y. Zou and K. Kontogiannis, “A framework for migrating procedural code to object-oriented platforms,” Asia-Pacific Software Engineering Conference, vol. 0, p. 390, 2001.
[18]
K. Kontogiannis, J. Martin, K. Wong, R. Gregory, H. Müller, and J. Mylopoulos, “Code migration through transformations: An experience report,” in Conference of the centre for advanced studies on collaborative research, 1998, p. 13.
[19]
R. Khatchadourian, Y. Tang, M. Bagherzadeh, and S. Ahmed, “Safe automated refactoring for intelligent parallelization of Java 8 streams,” in International conference on software engineering, May 2019, pp. 619–630. doi: 10.1109/ICSE.2019.00072.
[20]
G. Pinto, F. Soares-Neto, and F. Castor, “Refactoring for energy efficiency: A reflection on the state of the art,” in International workshop on green and sustainable software, May 2015, pp. 29–35. doi: 10.1109/GREENS.2015.12.
[21]
N. Tsantalis, D. Mazinanian, and S. Rostami, “Clone refactoring with lambda expressions,” in International conference on software engineering, 2017, pp. 60–70. doi: 10.1109/ICSE.2017.14.
[22]
Y. Lin, S. Okur, and D. Dig, “Study and refactoring of android asynchronous programming,” in International conference on automated software engineering, 2015, pp. 224–235. doi: 10.1109/ASE.2015.50.
[23]
D. Dig, “The changing landscape of refactoring research in the last decade: Keynote,” in International workshop on API usage and evolution, Jun. 2018, p. 1. doi: 10.1145/3194793.3194800.
[24]
A. Biboudis, N. Palladinos, G. Fourtounis, and Y. Smaragdakis, “Streams à la carte: Extensible pipelines with object algebras,” in European conference on object-oriented programming, 2015, vol. 37, pp. 591–613. doi: 10.4230/LIPIcs.ECOOP.2015.591.
[25]
J. Dean and S. Ghemawat, “MapReduce: Simplified data processing on large clusters,” Commun. ACM, pp. 107–113, 2008, doi: 10.1145/1327452.1327492.
[26]
Oracle, “java.util.stream (Java SE 9 & JDK 9)–classes to support functional-style operations on streams of elements, such as map-reduce transformations on collections,” 2017. http://docs.oracle.com/javase/9/docs/api/java/util/stream/package-summary.html
[27]
J. Lau, “Future of Java 8 language feature support on Android,” Android Developers Blog, Mar. 14, 2017. http://android-developers.googleblog.com/2017/03/future-of-java-8-language-feature.html (accessed Jul. 06, 2017).
[28]
Microsoft, “LINQ: .NET language integrated query,” 2018. http://msdn.microsoft.com/en-us/library/bb308959.aspx (accessed Apr. 16, 2018).
[29]
M. Shilkov, “Introducing stream processing in f#,” Nov. 29, 2016. http://mikhail.io/2016/11/introducing-stream-processing-in-fsharp (accessed Jul. 18, 2018).
[30]
É. P. F. de Lausanne, “Collections–mutable and immutable collections–scala documentation,” 2017. https://www.scala-lang.org/api/2.12.3/scala/collection/index.html (accessed Apr. 16, 2018).
[31]
Oracle Corporation, “Lambda Expressions (The Java Tutorials > Learning the Java Language > Classes and Objects),” 2015. http://docs.oracle.com/javase/tutorial/java/javaOO/lambdaexpressions.html (accessed Jul. 05, 2017).
[32]
B. Wagner, “Lambda Expressions (C# Programming Guide) | Microsoft Docs,” Mar. 03, 2017. https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/statements-expressions-operators/lambda-expressions (accessed Jul. 05, 2017).
[33]
A. Gyori, L. Franklin, D. Dig, and J. Lahoda, “Crossing the gap from imperative to functional programming through refactoring,” in ACM SIGSOFT symposium on the foundations of software engineering, 2013, pp. 543–553. doi: 10.1145/2491411.2491461.
[34]
Oracle Corporation, “Java Programming Language Enhancements.” 2018. Available: http://docs.oracle.com/javase/8/docs/technotes/guides/language/enhancements.html
[35]
Oracle Corporation, “Default Methods (The Java Tutorials > Learning the Java Language > Interfaces and Inheritance),” 2015. http://docs.oracle.com/javase/tutorial/java/IandI/defaultmethods.html (accessed Aug. 20, 2015).
[36]
B. Goetz, “Interface evolution via virtual extension methods,” Oracle Corporation, Jun. 2011. Available: http://cr.openjdk.java.net/~briangoetz/lambda/Defender%20Methods%20v4.pdf
[37]
J. Bloch, Effective Java, 2nd ed. Upper Saddle River, NJ, USA: Prentice Hall PTR, 2008.
[38]
J. Palsberg and M. I. Schwartzbach, Object-Oriented Type Systems. Chichester, UK: John Wiley & Sons Ltd., 1994.
[39]
R. Khatchadourian and H. Masuhara, “Defaultification refactoring: A tool for automatically converting Java methods to default,” in International conference on automated software engineering, Oct. 2017, pp. 984–989. doi: 10.1109/ASE.2017.8115716.
[40]
R. Khatchadourian and H. Masuhara, “Proactive empirical assessment of new language feature adoption via automated refactoring: The case of Java 8 default methods,” in International conference on the art, science, and engineering of programming, Mar. 2018, vol. 2, pp. 6:1–6:30. doi: 10.22152/programming-journal.org/2018/2/6.
[41]
R. Warburton, Java 8 lambdas: Pragmatic functional programming, 1st ed. O’Reilly Media, 2014, p. 182.
[42]
D. Mazinanian, A. Ketkar, N. Tsantalis, and D. Dig, “Understanding the use of lambda expressions in Java,” Proceedings of the ACM on Programming Languages, vol. 1, no. OOPSLA, pp. 85:1–85:31, Oct. 2017, doi: 10.1145/3133909.
[43]
Oracle Corporation, “Parallelism (The Java Tutorials > Collections > Aggregate Operations).” 2015. Available: http://docs.oracle.com/javase/tutorial/collections/streams/parallelism.html
[44]
Oracle, “Thread Interference (The Java Tutorials > Essential Classes > Concurrency),” 2017. http://docs.oracle.com/javase/tutorial/essential/concurrency/interfere.html
[45]
S. Nielebock, R. Heumüller, and F. Ortmeier, “Programmers do not favor lambda expressions for concurrent object-oriented code,” Empirical Software Engineering, May 2018, doi: 10.1007/s10664-018-9622-9.
[46]
Y. Tang, R. Khatchadourian, M. Bagherzadeh, and S. Ahmed, “Towards safe refactoring for intelligent parallelization of Java 8 streams,” in International conference on software engineering: Companion proceedings, May 2018, pp. 206–207. doi: 10.1145/3183440.3195098.
[47]
M. Tufano et al., “When and why your code starts to smell bad,” in International conference on software engineering, May 2015, vol. 1, pp. 403–414. doi: 10.1109/ICSE.2015.59.
[48]
M. Fowler, “CodeSmell,” Feb. 09, 2008. http://martinfowler.com/bliki/CodeSmell.html (accessed Jul. 09, 2018).
[49]
Stack Overflow, “Newest ’java-stream’ questions,” Feb. 2018. http://stackoverflow.com/questions/tagged/java-stream (accessed Mar. 06, 2018).
[50]
R. E. Strom and S. Yemini, “Typestate: A programming language concept for enhancing software reliability,” IEEE Transactions on Software Engineering, vol. SE-12, no. 1, pp. 157–171, Jan. 1986, doi: 10.1109/tse.1986.6312929.
[51]
S. J. Fink, E. Yahav, N. Dor, G. Ramalingam, and E. Geay, “Effective typestate verification in the presence of aliasing,” ACM Transactions on Software Engineering and Methodology, vol. 17, no. 2, pp. 9:1–9:34, May 2008, doi: 10.1145/1348250.1348255.
[52]
WALA Team, “T.J. Watson Libraries for Analysis (WALA),” Jun. 2015. http://wala.sf.net (accessed Jan. 18, 2017).
[53]
R. Khatchadourian, Y. Tang, M. Bagherzadeh, and S. Ahmed, “A tool for optimizing Java 8 stream software via automated refactoring,” in International working conference on source code analysis and manipulation, Sep. 2018. doi: 10.1109/SCAM.2018.00011.
[54]
R. Khatchadourian, Y. Tang, M. Bagherzadeh, and B. Ray, “An empirical study on the use and misuse of Java 8 streams,” in Fundamental approaches to software engineering, Apr. 2020, pp. 97–118. doi: 10.1007/978-3-030-45234-6_5.
[55]
D. Sculley et al., “Hidden technical debt in Machine Learning systems,” in Neural information processing systems, 2015, vol. 2, pp. 2503–2511.
[56]
E. Tom, A. Aurum, and R. Vidgen, “An exploration of technical debt,” Journal of Systems and Software, vol. 86, no. 6, pp. 1498–1516, 2013, doi: 10.1016/j.jss.2012.12.052.
[57]
N. Brown et al., “Managing technical debt in software-reliant systems,” in FSE/SDP workshop on future of software engineering research, 2010, pp. 47–52. doi: 10.1145/1882362.1882373.
[58]
Y. Tang, R. Khatchadourian, M. Bagherzadeh, R. Singh, A. Stewart, and A. Raja, “An empirical study of refactorings and technical debt in Machine Learning systems,” in International conference on software engineering, May 2021, pp. 238–250. doi: 10.1109/ICSE43902.2021.00033.
[59]
T. Castro Vélez, R. Khatchadourian, M. Bagherzadeh, and A. Raja, “Challenges in migrating imperative Deep Learning programs to graph execution: An empirical study,” in International conference on mining software repositories, May 2022, pp. 469–481. doi: 10.1145/3524842.3528455.