Discovering Relationships between Service and Customer Satisfaction

Michael Buckley and Ram Chillarege IBM Thomas J. Watson Research Center, 1995

Abstract:

Organizations spend significant resources tracking customer satisfaction and managing service delivery. Although a great deal of effort is expended in understanding what goes on within each of these areas, little or no effort has been applied to identifying and quantifying the relationships between the two. The objective of this research is to discover and establish potential relationships between service data and customer satisfaction. This understanding will enable more effective management, which will lead to improved quality, reduced cost and increased customer satisfaction.

This study uses three years of data from an IBM operating system to measure the correlation between 15 service variables and nine customer satisfaction attributes. The results show that:

  • There is a relationship between the service data and customer satisfaction. This is the first time the existence of such a relationship has been proven and quantified.
  • Among the four key service measures that are usually tracked, the relative order of influence on customer satisfaction is: defective fixes first, followed by the number of problems, followed in turn by the number of defects and Days to Solution. The latter two were found to have little or no influence on customer satisfaction.
  • There is a return on investment of at least ten to one, for each dollar spent on quality improvement efforts in development.

Key Words: Software Quality, Customer Satisfaction, Service Process, Correlation, Empirical Analysis.


This research established that there is a relationship between several service measures and Customer Satisfaction. The results are based upon an analysis of over three years of actual data for an IBM operating system product. Fifteen service variables and nine customer satisfaction attributes were analyzed. The implication of the results is that we can improve Customer Satisfaction by controlling the relevant service measures. Although the existence of such a relationship was often questioned, and some believed that it existed, it had not been proven previously. The main findings are:

  • 1. The four service variables that are most commonly tracked, of the fifteen that were analyzed, are the number of defective fixes (PEs), the number of problems (PMRs), the number of defects (APARs), and Days to Solution. This study found that the relative ranking of these four with respect to their influence on customer satisfaction is: PE > PMR Total >> APAR Total > Days to Solution. That is, defective fixes (PEs) are the strongest driver of customer satisfaction, closely followed by the total number of problems (PMRs), while the number of APARs and Days to Solution have little or no influence on customer satisfaction. Thus, if resources are limited, the service focus should be on reducing defective fixes (PEs) and problems (PMRs), rather than on defects (APARs) or Days to Solution.
  • 2. From a causal perspective, if we consider all fifteen service variables, the three that are the strongest drivers of customer satisfaction are (a) the number of defective fixes (PEs), (b) the number of Preventive Service problems, and (c) the total number of problems (PMRs).
  • 3. From an effect viewpoint, the Overall and Performance attributes are the two customer satisfaction attributes that are the most influenced by the service data. The results show that there is little or no relationship between the service data and the Maintainability, Interoperability, and Usability attributes.
  • 4. The cost-benefit study shows that for each dollar invested in quality improvement efforts, one will save at least ten dollars in service costs. Hence, there should be a continued focus on improving the service measures, since this will reduce service costs in addition to increasing customer satisfaction.
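The ranking in finding 1 comes from comparing the strength of association between each service variable and customer satisfaction. A minimal sketch of that kind of comparison is shown below; the variable names (PE, PMR, APAR, DaysToSolution) follow the paper, but the sample values and the satisfaction scores are invented for illustration and do not reproduce the study's actual data or statistical methodology.

```python
# Illustrative sketch: rank service measures by the magnitude of their
# correlation with a customer satisfaction score. All numbers below are
# invented for illustration only.
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-period counts of each service measure ...
service = {
    "PE": [12, 9, 15, 7, 11, 5],
    "PMR": [240, 190, 300, 150, 215, 100],
    "APAR": [58, 58, 61, 61, 58, 61],
    "DaysToSolution": [15, 15, 14, 15, 14, 14],
}
# ... and a hypothetical satisfaction score for the same periods.
satisfaction = [76, 82, 70, 86, 78, 90]

# Rank the measures by |correlation| with satisfaction, strongest first.
ranking = sorted(service,
                 key=lambda k: abs(pearson(service[k], satisfaction)),
                 reverse=True)
print(ranking)  # strongest driver first
```

With this invented data, PE and PMR correlate strongly (negatively) with satisfaction while APAR and DaysToSolution do not, mirroring the qualitative PE > PMR >> APAR > Days to Solution ordering reported above.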

These findings are specific to the product analyzed here. The authors believe that similar relationships will exist between these two data sets for other products, but that the specific details will vary by product. Therefore, the methodology presented here should be applied to data from other products in order to (a) validate the findings and (b) to determine what the specific links are for other products.


We are indebted to a host of people at IBM who provided invaluable insight and guidance, along with access to data and facilities. These include: Tom Byrnes, Bill Spencer, Al Beckmann, John Yang, Bill Bleier, P. Santhanam, Ram Biyani, Jarir Chaar, and Kathy Bassin. We are especially grateful to Ken Fordyce, Art Nadas and Elliot Feit for contributing many valuable suggestions to the research.



