In large software systems, determining the real impact of a change can be very hard: even minor changes can produce errors in unexpected places. However, the data present in every modern software development project, such as source control history and test results, can be used to shed light on non-obvious dependencies in the software and to warn about potentially affected test cases ahead of time. We will show how well a machine learning system trained on five years of source-control and test-result data performs at alerting developers to potentially test-breaking commits.
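As a taste of the approach, the core idea can be sketched as a binary classifier over per-commit features. This is a minimal illustrative sketch, not the actual system presented in the talk; the feature names and the tiny synthetic data set are assumptions chosen for clarity.

```python
# Illustrative sketch only: train a classifier on hand-made commit
# features to flag commits that are likely to break tests.
# Features and data are synthetic assumptions, not the talk's real data.
from sklearn.ensemble import RandomForestClassifier

# Each row describes one historical commit:
# [files_changed, lines_changed, touches_core_module (0/1)]
X = [
    [1,  10, 0],
    [2,  25, 0],
    [8, 400, 1],
    [12, 900, 1],
    [3,  40, 0],
    [9, 350, 1],
]
# Label from test history: 1 if the commit broke at least one test
y = [0, 0, 1, 1, 0, 1]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X, y)

# Score an incoming commit: a large change touching the core module
risk = model.predict_proba([[10, 500, 1]])[0][1]
print(f"Estimated test-breaking risk: {risk:.2f}")
```

In a real setting, the features would be mined from the version control system (e.g. changed files, authorship, churn) and the labels from the continuous integration history, which is where the five years of project data come in.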
Target Audience: Developers, Testers, Architects, Managers, Decision Makers
Prerequisites: Basic knowledge of Machine Learning, Big Data, Software Development