Performance Measurement – Software Engineering Perspective

Association of end user performance with low level and derived measures

Measuring end user perceived performance and satisfaction with the use of an information system has been addressed by several researchers (Baer, 2011) (Buyya, Yeo, Venugopal, Broberg, & Brandic, 2009) (Davis F. D., 1989) (Davis & Wiedenbeck, 2001) (Etezadi-Amoli & Farhoomand, 1996) (Fagan & Neill, 2004) (Law, Roto, Hassenzahl, Vermeeren, & Kort, 2009) (Mahmood, Burn, Gemoets, & Jacquez, 2010) (Marshall, Mills, & Olsen, 2008) (Tullis & Albert, 2010). In these publications, end user performance and end user satisfaction were identified as intrinsically interdependent: whenever end users were satisfied with an information system, the system proved to be performing well, and vice versa. These results were mostly based on surveys and interviews with end users designed to identify factors, determine performance, and evaluate information system quality. One of the challenges of this research is mapping measures to end user performance characteristics. Assuming that one way for an end user to communicate dissatisfaction with a system is to file a complaint, a survey could be performed on these complaints to identify the events where the end user was not satisfied with the system's performance. The performance logs for these events could then be investigated for evidence of which measures were in a degraded state at the reported time of each event. This would yield a non-exhaustive list of measures and states associated with moments of end user dissatisfaction.
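A minimal sketch of this complaint-to-log correlation is shown below. The file names, column names, look-back window, and the percentile-based degradation threshold are illustrative assumptions, not details from the study.

```python
# Sketch: find which measures were degraded around each complaint.
# Inputs, thresholds, and the 15-minute window are assumptions.
import pandas as pd

# Complaints: one row per end user complaint, with the reported timestamp.
complaints = pd.read_csv("complaints.csv", parse_dates=["reported_at"])

# Performance log: one row per (timestamp, measure, value) observation.
logs = pd.read_csv("performance_log.csv", parse_dates=["timestamp"])

# Hypothetical per-measure degradation threshold: the 95th percentile.
thresholds = logs.groupby("measure")["value"].quantile(0.95)

window = pd.Timedelta("15min")  # look-back window before each complaint

degraded = set()
for _, c in complaints.iterrows():
    # Log entries observed shortly before the complaint was reported.
    nearby = logs[(logs["timestamp"] >= c["reported_at"] - window)
                  & (logs["timestamp"] <= c["reported_at"])]
    # Keep the measures whose value exceeded their own threshold.
    bad = nearby[nearby["value"] > nearby["measure"].map(thresholds)]
    degraded.update(bad["measure"].unique())

# Non-exhaustive list of measures seen degraded at complaint time.
print(sorted(degraded))
```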

Mapping low level and derived measures into the Performance Measurement Framework

Measuring the performance of cloud computing-based applications using ISO quality characteristics is a complex activity for various reasons, among them the complexity of the typical cloud computing infrastructure on which an application operates. Starting from the quality concepts proposed in the ISO 25010 standard (maturity, fault tolerance, availability, recoverability, time behavior, resource utilization, and capacity), this research maps the collected measures onto the performance concepts by associating the influence of each particular measure with each concept. This is fundamentally different from Bautista's proposition, where the measures are manually associated with the performance concepts and the formulae are built depending on the selected context. In the present research, the combination of particular measures is only relevant for a particular moment in time; for another observation, different measures can fulfill the same concept. This is explained in detail in Chapter 4.
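The sketch below illustrates this dynamic association. The concept names come from ISO 25010; the candidate measure names and the availability rule are assumptions chosen for illustration.

```python
# Sketch: per-observation mapping of collected measures onto ISO 25010
# performance concepts. Measure names are hypothetical.
from typing import Dict, List

# Each concept can be fulfilled by several candidate measures.
CONCEPT_CANDIDATES: Dict[str, List[str]] = {
    "time_behavior": ["response_time_ms", "queue_wait_ms"],
    "resource_utilization": ["cpu_pct", "mem_pct", "disk_io_pct"],
    "capacity": ["active_sessions", "requests_per_sec"],
    "availability": ["uptime_pct", "heartbeat_ok"],
}

def map_observation(observation: Dict[str, float]) -> Dict[str, Dict[str, float]]:
    """For one moment in time, associate each concept with whichever of
    its candidate measures were actually collected in this observation."""
    return {
        concept: {m: observation[m] for m in candidates if m in observation}
        for concept, candidates in CONCEPT_CANDIDATES.items()
    }

# Two observations: different measures fulfill the same concept.
obs1 = {"response_time_ms": 320.0, "cpu_pct": 88.0}
obs2 = {"queue_wait_ms": 45.0, "mem_pct": 71.0, "uptime_pct": 99.9}
print(map_observation(obs1))
print(map_observation(obs2))
```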

Validation of the quality measures using a validation method

Jacquet and Abran (Jacquet & Abran, 1998) propose a validation framework for software quality measures which addresses three main validation issues: 1) the validation of the design of the measurement method; 2) the application of the measurement method; and 3) the predictive system. This measurement validation framework is based on a measurement model which is detailed in Figure 2.5 and presented later in this thesis. For this research, we use the results of sub-steps 1.4.3.1 and 1.4.3.2 with this model and conduct three experiments: 1) the validation of the representation theorems; 2) the application of different numerical values to these rules in order to simulate the response of the theorem; and 3) the proposition of a quality model.
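As a hedged illustration of experiment 2, the sketch below applies simulated numerical values to one representation rule and counts violations. The rule (an empirically slower system must receive a larger response-time value), the derived measure, and the sample distributions are assumptions for illustration, not the thesis's actual rules.

```python
# Sketch: simulate the response of a representation rule with numbers.
import random

def response_time_measure(samples):
    """Derived measure under test: mean observed response time."""
    return sum(samples) / len(samples)

def representation_holds(slower, faster):
    """Representation condition: the empirically slower system must be
    assigned a larger measure value than the faster one."""
    return response_time_measure(slower) > response_time_measure(faster)

random.seed(42)
violations = 0
for _ in range(1000):
    # Simulate a "slower" and a "faster" system with overlapping noise.
    slower = [random.gauss(300, 30) for _ in range(50)]
    faster = [random.gauss(200, 30) for _ in range(50)]
    if not representation_holds(slower, faster):
        violations += 1

print(f"Violations of the representation condition: {violations}/1000")
```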

Laboratory experiment for end user performance modeling

This sub-step will consider the measures collected during sub-step 1.4.3.1 and the validated mapping to the measurement framework from sub-steps 1.4.3.3 and 1.4.3.4 to manually create an end user performance model for the experimental case study. The objective is to gather the information needed to build an automated solution able to respond to the information needs of decision makers in a timely manner. This experiment will also attempt to represent the end user performance perspective graphically, facilitating the interpretation of results, and will determine whether the log data is sufficient for modeling end user performance perspectives.
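One possible form of that graphical representation is sketched below: a single performance measure plotted over time with complaint events overlaid. The data files, column names, and chosen measure are assumptions carried over from the earlier sketch.

```python
# Sketch: one measure over time, with complaints as vertical markers.
import matplotlib.pyplot as plt
import pandas as pd

logs = pd.read_csv("performance_log.csv", parse_dates=["timestamp"])
complaints = pd.read_csv("complaints.csv", parse_dates=["reported_at"])

# Hypothetical measure selected for display.
series = logs[logs["measure"] == "response_time_ms"]

fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(series["timestamp"], series["value"], label="response_time_ms")
for t in complaints["reported_at"]:
    ax.axvline(t, color="red", alpha=0.3)  # one line per complaint
ax.set_xlabel("time")
ax.set_ylabel("response time (ms)")
ax.legend()
plt.tight_layout()
plt.show()
```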

Expanded laboratory experimentation

Leveraging the outcomes of sub-step 1.4.3.4, this step will expand the initial population to a larger infrastructure of servers and desktops, targeting approximately 500 servers and 30,000 end users in North America. The objective is to verify the reproducibility and generalizability of the earlier findings. If the log data is found to be insufficient in the previous sub-step, a feedback mechanism will be proposed during this sub-step to gather further information about the user's perspective under different information system performance scenarios, such as evidence of degradation, evidence of good performance, a lack of end user complaints, or an increase in end user complaints.
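A possible shape for the records such a feedback mechanism might collect is sketched below. The scenario labels mirror the four situations listed above; every field name and the satisfaction scale are assumptions, not a specification from the thesis.

```python
# Sketch: a hypothetical feedback record for the proposed mechanism.
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class Scenario(Enum):
    EVIDENCE_OF_DEGRADATION = "evidence_of_degradation"
    EVIDENCE_OF_GOOD_PERFORMANCE = "evidence_of_good_performance"
    LACK_OF_COMPLAINTS = "lack_of_complaints"
    INCREASED_COMPLAINTS = "increased_complaints"

@dataclass
class FeedbackRecord:
    user_id: str
    collected_at: datetime
    scenario: Scenario
    satisfaction: int   # assumed scale: 1 (very dissatisfied) .. 5 (very satisfied)
    comment: str = ""

# Example record pairing a user's response with the observed scenario.
record = FeedbackRecord("user-0042", datetime.now(),
                        Scenario.EVIDENCE_OF_DEGRADATION, 2, "App felt slow.")
print(record)
```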

Performance Measurement – Software Engineering Perspective

This section presents the ISO 25000 family of standards, the ISO 15939 standard, the subject of metrics validation, and the difficulties of applying such standards in organizations. The objective is to document the completeness of the contemporary ISO 25000 standard as the confluence of previous standards, the coverage of the ISO 15939 measurement process, and the caveats involved in the selection, election, and evaluation of metrics. Finally, an evaluation of the performance measurement process is carried out to demonstrate the typical efforts and challenges involved in applying such standards in an organization.

What is quality for a software product? Many authors define and debate quality: (Shewhart, 2015), (Deming, 2000), (Feigenbaum, 1991), (Juran & De Feo, 2010) and others have contributed to the creation of a broad definition, reflected in ISO 9001, where quality is the characteristic of a product or service that defines it as satisfactory to its intentions. Measuring quality then requires validated and widely accepted measurement models such as ISO/IEC 9126 (ISO/IEC, 2003) and its superseding ISO/IEC 25000 series of standards (ISO/IEC, 2005), named SQuaRE (Systems and Software Engineering – Systems and Software Quality Requirements and Evaluation). SQuaRE aims to harmonize many other software quality standards, such as ISO/IEC 9126, 14598 and 15939, complementing them and addressing the gaps between them. SQuaRE comprises several groups of documents for different audiences: Quality Management (ISO/IEC 2500n), Quality Model (ISO/IEC 2501n), Quality Measurement (ISO/IEC 2502n), Quality Requirements (ISO/IEC 2503n), Quality Evaluation (ISO/IEC 2504n), and the Extensions (ISO/IEC 25050 – 25099). The five groupings and their 14 documents are listed in the next section (section 2.1.1.1).


Table of Contents

RÉSUMÉ
ABSTRACT
LIST OF TABLES
LIST OF FIGURES
LIST OF ALGORITHMS
TABLE OF ABBREVIATIONS
INTRODUCTION
CHAPTER 1 Research Introduction
1.1 Motivation
1.2 Problem definition
1.3 Research question
1.4 Methodology
1.4.1 Definition of the research
1.4.2 Planning
1.4.3 Development of theory and experimentation
1.4.4 Interpretation of the results
1.5 Chapter conclusion
CHAPTER 2 Literature review
2.1 Performance management
2.1.1 Performance Measurement – Software Engineering Perspective
2.1.2 Performance Measurement – Business perspective
2.2 Cloud computing
2.2.1 Definition
2.2.2 Service and deployment models
2.2.3 Advantages and disadvantages of cloud computing technology
2.2.4 Section conclusion
2.3 Analysis of the previous research
2.3.1 End user performance perspective
2.3.2 System measurement process
2.3.3 Big Data and Machine learning
2.3.4 Section conclusion
2.4 Chapter conclusion
CHAPTER 3 Research problematic
3.1 Research Problematic
3.2 Originality of the research
3.3 Planned solution and validation method for the research problem
3.4 Chapter Conclusion
CHAPTER 4 Experiment
4.1 Introduction
4.2 Association of end user performance perspective with low level and derived measures
4.2.1 Experiment description
4.2.2 Data Analysis
4.2.3 Experiment conclusion
4.3 Mapping performance measures for CCA, platform and software engineering concepts
4.4 Validation of quality measures for representing performance from an end user perspective on CCA
4.4.1 Validation description
4.4.2 Data analysis
4.4.3 Validation conclusion
4.5 Laboratory experiment for end user performance modeling
4.5.1 Description
4.5.2 Setup
4.5.3 Data preparation
4.5.4 Analysis
4.5.5 Experiment conclusion
4.6 Extension of Bautista’s performance measurement model
4.6.1 Setup
4.6.2 Data preparation
4.6.3 Feature Extraction
4.6.4 Correlation analysis
4.6.5 Anomaly detection
4.6.6 Application of the model
4.6.7 Discussion
4.6.8 End user feedback and anomaly forecasting
4.6.9 Experiment conclusion
4.7 Chapter conclusion
CHAPTER 5 Proposition of a model for end user performance perspective for cloud computing systems using data center logs from Big Data technology
CHAPTER 6 Conclusion
ANNEX I RESEARCH CONTRIBUTIONS
ANNEX II COMPLETE LIST OF IDENTIFIED MEASURES
ANNEX III ANOMALY DETECTION (SCREENS, UNTRAINED, TRAINED BAYES)
BIBLIOGRAPHY
