INGI2252 Software Measures & Maintenance

Post on 22-Apr-2015


Description

These slides show an example of a presentation produced by students following the course LINGI2252 "Software Engineering: Measures and Maintenance", taught by Prof. Kim Mens to Master-level students in computer science and computer science engineering at the Louvain School of Engineering, UCL, Belgium. Throughout this course, students have to analyze the quality of an existing software system (and more specifically, its maintainability); understand the nature of some of the problems encountered when maintaining complex software systems; suggest appropriate solutions to improve the reusability and maintainability of a software system; measure its quality and support its evolution; and program in Smalltalk, a pure object-oriented programming language.

Transcript

INGI2252 Measures & Maintenance

An excerpt of some final presentations

by Prof. Kim Mens

partly based on a presentation by R. Capron & R. Wallez, Dec. 2012

Concept of the Course
« Software Measures and Maintenance »

• Two interleaved parts

  • Software development and maintenance

  • Software measures and metrics

• Main objective

  • How to develop « better » and more « maintainable » software systems?

    • and object-oriented software in particular

  • How to assess the quality of a software system?

    • and its maintainability in particular

• Side objective

  • Learn yet another programming language (Smalltalk)

Techniques and tools
« Software Measures and Maintenance »

• “Pure” object-oriented programming

• Integrated development environments

• Best programming practices & design heuristics

• Framework development

• Refactoring & bad smells

• Unit testing

• Software metrics and measures

• And a whole bunch of software analysis tools

Course assignment
« Software Measures and Maintenance »

• No traditional exam

• Instead, analyze the quality (maintainability) of an existing software system with techniques and tools seen in the course

• Part 1: Manual code assessment

• Based on good practices, conventions, heuristics, design principles and patterns

• Part 2: Automated analysis

• Using software metrics, visualisation and code querying tools like MOOSE, Mondrian, SOUL, ...

• Present overall results of the analysis to a « client »

Global view of the analyzed system
Using the « System Complexity » visualization

NOP: Number of Packages
NOC: Number of Classes
NOM: Number of Methods
LOC: Lines of Code
CYCLO: McCabe’s Cyclomatic complexity number
ANDC: Average Number of Derived Classes
AHH: Average Hierarchy Height
CALLS: Number of Distinct Method Invocations
FANOUT: Number of Called Classes
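To make one of these metrics concrete, here is a minimal sketch of how McCabe’s cyclomatic complexity (CYCLO) can be approximated: one plus the number of decision points in the code. This is an illustrative Python sketch, not the Smalltalk/MOOSE tooling used in the course:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe's CYCLO as 1 + number of decision points.

    Counts if/elif, loops, exception handlers, conditional
    expressions, and boolean operators as decision points.
    """
    decisions = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.For, ast.While,
                             ast.ExceptHandler, ast.IfExp)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            decisions += len(node.values) - 1  # each extra and/or adds a branch
    return 1 + decisions

sample = """
def sign(n):
    if n < 0:
        return -1
    elif n == 0:
        return 0
    return 1
"""
print(cyclomatic_complexity(sample))  # 3: one for the function, two for if/elif
```

A straight-line function scores 1; every extra branch adds one to the count.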

Summary of the system’s measures

Using the « Overview Pyramid »

Critical evaluation of thresholds used

Compare the predefined thresholds (good, bad, average) used by the tool with:

• The system being analyzed

• Some “good” reference systems
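Classifying a measured value against such thresholds boils down to a simple comparison. A sketch, with purely hypothetical boundary values (not the ones used by the tool):

```python
def classify(value: float, good_upto: float, average_upto: float) -> str:
    """Map a metric value onto the (good, average, bad) scale
    using two threshold boundaries (hypothetical values)."""
    if value <= good_upto:
        return "good"
    if value <= average_upto:
        return "average"
    return "bad"

# e.g. judging a class's FANOUT with made-up thresholds 5 and 10
print(classify(3, 5, 10))   # good
print(classify(12, 5, 10))  # bad
```

The critical question raised on the slide is where those two boundaries should sit, which is why they are compared against reference systems.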

Collection hierarchy

Pharo Kernel

Evolution of a metric
in terms of the chosen threshold

Detailed analysis of the FANOUT metric
per class of the analyzed system

Number of Comments

Number of Public Methods


Illustration of the lack of comments

Absence of accessors and mutators

Using the « Class Blueprint »

After adding accessors and mutators

Using the « Class Blueprint »

Number of statements vs. lines of code

Using a Mondrian « Scatter Plot »
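The two quantities plotted against each other can be computed roughly as follows; an illustrative Python sketch (the course uses Mondrian on Smalltalk code), counting non-blank lines versus AST statement nodes:

```python
import ast

def loc_and_statements(source: str) -> tuple:
    """Return (lines of code, number of statements) for Python source:
    non-blank lines vs. AST statement nodes."""
    loc = sum(1 for line in source.splitlines() if line.strip())
    statements = sum(isinstance(node, ast.stmt)
                     for node in ast.walk(ast.parse(source)))
    return loc, statements

print(loc_and_statements("x = 1\ny = 2\nif x:\n    y = x\n"))  # (4, 4)
print(loc_and_statements("a = 1; b = 2"))                      # (1, 2)
```

Points far off the diagonal of such a scatter plot flag unusually dense or unusually sparse code.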

Detection of code duplication
Using the « Side-by-side Code Duplication » tool
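A naive version of such duplication detection can be sketched as: hash every window of consecutive whitespace-normalized lines and report windows that occur more than once. This Python sketch is only an illustration of the idea, not the side-by-side tool itself:

```python
from collections import defaultdict

def duplicated_blocks(lines, window=3):
    """Return {chunk: [start indices]} for every block of `window`
    consecutive non-blank (normalized) lines that appears more than once."""
    seen = defaultdict(list)
    for i in range(len(lines) - window + 1):
        chunk = tuple(line.strip() for line in lines[i:i + window])
        if all(chunk):  # ignore windows containing blank lines
            seen[chunk].append(i)
    return {chunk: starts for chunk, starts in seen.items() if len(starts) > 1}

code = ["a := 1.", "b := 2.", "c := a + b.",
        "other line",
        "a := 1.", "b := 2.", "c := a + b."]
print(duplicated_blocks(code))  # the 3-line block duplicated at indices 0 and 4
```

Real clone detectors additionally normalize identifiers and literals so that renamed copies are still caught.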

Discovering dependencies between packages
Using the « Dependency Structure Matrix »
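The idea behind a Dependency Structure Matrix can be sketched with a small Python example (the package names and dependencies below are hypothetical, not taken from the analyzed system): cell [i][j] is set when package i depends on package j, and a pair of set cells mirrored across the diagonal reveals a cyclic dependency.

```python
def build_dsm(packages, dependencies):
    """Binary DSM: matrix[i][j] == 1 iff packages[i] depends on packages[j]."""
    index = {name: k for k, name in enumerate(packages)}
    matrix = [[0] * len(packages) for _ in packages]
    for src, dst in dependencies:
        matrix[index[src]][index[dst]] = 1
    return matrix

def cyclic_pairs(matrix):
    """Pairs (i, j) that depend on each other: symmetric cells across the diagonal."""
    n = len(matrix)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if matrix[i][j] and matrix[j][i]]

# hypothetical packages and dependencies, for illustration only
pkgs = ["Kernel", "Collections", "Tools"]
deps = [("Collections", "Kernel"), ("Kernel", "Collections"), ("Tools", "Kernel")]
print(cyclic_pairs(build_dsm(pkgs, deps)))  # [(0, 1)]: Kernel <-> Collections
```

Reordering the rows and columns of the matrix so that most set cells fall below the diagonal exposes the system's layering; the cells that cannot be moved below it are the cycles.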