Methodology for validating software metrics

Empirical data collected from three different application domains are then analyzed using the MOOD metrics to support this theoretical validation.
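
As an illustration of what such an analysis involves, the sketch below computes the Method Inheritance Factor (MIF), one of the MOOD metrics, from per-class method counts. The class names and counts are invented for the example and are not drawn from the three application domains mentioned above.

```python
# Minimal sketch: Method Inheritance Factor (MIF), one of the MOOD metrics.
# Per-class counts below are illustrative placeholders, not real project data.

classes = [
    # (methods newly defined in the class, methods inherited by the class)
    {"name": "Account",  "defined": 8, "inherited": 0},
    {"name": "Savings",  "defined": 3, "inherited": 8},
    {"name": "Checking", "defined": 4, "inherited": 8},
]

inherited = sum(c["inherited"] for c in classes)
available = sum(c["defined"] + c["inherited"] for c in classes)

# MIF is the system-wide fraction of available methods that are inherited.
# A zero denominator is the kind of discontinuity that must be handled
# before the metric can be aggregated across systems.
mif = inherited / available if available else 0.0
print(f"MIF = {mif:.2f}")
```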

Results show that (with appropriate changes to remove existing problematic discontinuities) the metrics could be used to provide an overall assessment of a software system, which may be helpful to managers of software development projects.

Based on experimental results, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber & Kemerer's OO metrics appear to be useful for predicting class fault-proneness during the early phases of the life-cycle; to examine this, we assessed these metrics as predictors of fault-prone classes. Measurement of software security is a long-standing challenge to the research community; at the same time, practical security metrics and measurements are essential for secure software development. To remedy these problems, a framework for comparative software defect prediction experiments is proposed and applied in a large-scale empirical comparison of 22 classifiers over 10 public-domain data sets from the NASA Metrics Data repository. Overall, an appealing degree of predictive accuracy is observed, which supports the view that metric-based classification is useful.
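
The following is a minimal sketch of such a comparative experiment, assuming synthetic module-level metrics and only three scikit-learn classifiers rather than the 22 classifiers and 10 NASA data sets of the actual comparison; the metric names and data are placeholders.

```python
# Illustrative sketch of a comparative defect prediction experiment:
# several classifiers evaluated the same way on the same data.
# Synthetic data and the choice of three classifiers are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Synthetic CK-style metrics (e.g. WMC, CBO, RFC, DIT), one row per class/module.
X = rng.poisson(lam=[10, 5, 20, 2], size=(300, 4)).astype(float)
# Synthetic fault labels, loosely correlated with size and coupling.
y = (X[:, 0] + X[:, 1] + rng.normal(scale=3, size=300) > 16).astype(int)

classifiers = {
    "naive_bayes": GaussianNB(),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

# A shared cross-validation protocol and a threshold-independent score (AUC)
# make the comparison across classifiers meaningful.
for name, clf in classifiers.items():
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"{name:>20}: mean AUC = {auc.mean():.2f}")
```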

This study is complementary to [Li & Henry, 1993], where the same suite of metrics was used to assess the frequency of maintenance changes to classes.

However, further empirical studies are needed before these results can be generalized. To remain competitive in the fast-paced world of software development, managers must optimize the use of their limited resources to deliver quality products on time and within budget.

In this paper, we present an approach (The Top Ten List) that highlights to managers the ten most susceptible sub...
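
A minimal sketch of how such a list might be assembled is shown below, assuming per-subsystem fault-proneness scores are already available from some prediction model; the subsystem names and scores are invented placeholders.

```python
# Illustrative sketch: turn per-subsystem fault-proneness scores into a
# "Top Ten List" for managers. Names and scores are invented placeholders.
from heapq import nlargest

predicted_risk = {
    "parser": 0.91, "network": 0.87, "ui": 0.34, "storage": 0.78,
    "auth": 0.66, "logging": 0.12, "scheduler": 0.59, "cache": 0.71,
    "codec": 0.83, "search": 0.45, "export": 0.28, "plugins": 0.62,
}

# Rank subsystems by predicted risk and keep the ten most susceptible ones.
top_ten = nlargest(10, predicted_risk.items(), key=lambda kv: kv[1])
for rank, (subsystem, risk) in enumerate(top_ten, start=1):
    print(f"{rank:2d}. {subsystem:<10} predicted risk = {risk:.2f}")
```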

Security is a software attribute that is hard to measure; hence, validating a security measure is even harder. Software defect prediction strives to improve software quality and testing efficiency by constructing predictive classification models from code attributes to enable timely identification of fault-prone modules.
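
As a sketch of this idea, the example below trains a classifier on synthetic code attributes and flags modules whose predicted fault probability exceeds a threshold; the attribute names, the random forest model, and the 0.5 cut-off are assumptions made for illustration, not the method of any particular study.

```python
# Illustrative sketch: a predictive model built from code attributes and used
# to flag probably fault-prone modules early. Attribute names, the model, and
# the threshold are assumptions; the data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
# Synthetic module attributes, e.g. size, complexity, churn (one row per module).
X = rng.poisson(lam=[30, 8, 4], size=(400, 3)).astype(float)
y = (0.05 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(scale=2, size=400) > 5).astype(int)

X_train, X_test, y_train, _ = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Modules whose predicted fault probability exceeds the cut-off would be routed
# to inspection or extra testing; 0.5 here is an arbitrary choice.
fault_probability = model.predict_proba(X_test)[:, 1]
flagged = np.flatnonzero(fault_probability > 0.5)
print(f"{len(flagged)} of {len(X_test)} modules flagged as likely fault-prone")
```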

There is little prior work on validating security measurements and metrics. Several classification models have been evaluated for the defect prediction task.

However, due to inconsistent findings regarding the superiority of one classifier over another and the usefulness of metric-based classification in general, more research is needed to improve convergence across studies and further advance confidence in experimental results.