Lincke, Rüdiger
Publications (10 of 13)
Wingkvist, A., Ericsson, M., Lincke, R. & Löwe, W. (2010). A Metrics-Based Approach to Technical Documentation Quality. In: Proceedings of the 7th International Conference on Quality of Information and Communications Technology. Paper presented at the 7th International Conference on Quality of Information and Communications Technology, Porto, 29 September-2 October, 2010 (pp. 476-481). IEEE
A Metrics-Based Approach to Technical Documentation Quality
2010 (English). In: Proceedings of the 7th International Conference on Quality of Information and Communications Technology, IEEE, 2010, p. 476-481. Conference paper, Published paper (Refereed)
Abstract [en]

Technical documentation is now fully taking the step from stale printed booklets (or electronic versions of these) to interactive and online versions. This provides opportunities to reconsider how we define and assess the quality of technical documentation. This paper suggests an approach based on the Goal-Question-Metric paradigm: predefined quality goals are continuously assessed and visualized by the use of metrics. To test this approach, we perform two experiments. We adopt well-known software analysis techniques, e.g., clone detection and test coverage analysis, and assess the quality of two real-world documentations, that of a mobile phone and of (parts of) a warship. The experiments show that quality issues can be identified and that the approach is promising.
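The clone detection mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' tooling: the function names, the shingle size k, and the sample manual pages are all made up for illustration.

```python
# Minimal clone-detection sketch for documentation text (illustrative only;
# not the tooling used in the paper). Pages are compared by the overlap of
# their k-word windows ("shingles"); the shingle size k=5 is an assumption.

def shingles(text, k=5):
    """Return the set of overlapping k-word windows in the text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def clone_ratio(doc_a, doc_b, k=5):
    """Jaccard similarity of the two pages' shingle sets (0 = disjoint, 1 = identical)."""
    a, b = shingles(doc_a, k), shingles(doc_b, k)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# Two made-up manual pages that differ in a single word.
page1 = "press the power button to turn the phone on and wait for the start screen"
page2 = "press the power button to turn the phone on and wait for the home screen"
print(round(clone_ratio(page1, page2), 2))  # prints 0.69, a likely near-clone
```

A threshold on this ratio over all page pairs yields a simple redundancy estimate of the kind such documentation analyses report.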

Place, publisher, year, edition, pages
IEEE, 2010
Keywords
Information quality, Quality assurance, Software metrics, Documentation
National Category
Software Engineering
Research subject
Computer and Information Sciences, Computer Science
Identifiers
urn:nbn:se:lnu:diva-6951 (URN), 10.1109/QUATIC.2010.88 (DOI), 2-s2.0-78751549202 (Scopus ID), 978-1-4244-8539-0 (ISBN)
Conference
7th International Conference on Quality of Information and Communications Technology, Porto, 29 September-2 October, 2010
Available from: 2010-08-03. Created: 2010-08-03. Last updated: 2019-01-17. Bibliographically approved
Wingkvist, A., Löwe, W., Ericsson, M. & Lincke, R. (2010). Analysis and Visualization of Information Quality of Technical Documentation. In: Castro Neto, M (Ed.), Proceedings of the 4th European Conference on Information Management and Evaluation. Paper presented at the 4th European Conference on Information Management and Evaluation, Lisbon, Sep. 9-10, 2010 (pp. 388-396). Academic Publishing International
Analysis and Visualization of Information Quality of Technical Documentation
2010 (English). In: Proceedings of the 4th European Conference on Information Management and Evaluation / [ed] Castro Neto, M, Academic Publishing International, 2010, p. 388-396. Conference paper, Published paper (Refereed)
Abstract [en]

Technical documentation has moved from printed booklets to online versions that need to be updated continuously to match product development and user demands. There is an imminent need to ensure the quality of technical documentation, i.e., the information that accompanies a product.

Moving from printed material to online versions also allows documentation to become active, to integrate interactive content, which blurs the boundaries between information and software. In order to assess the quality of technical documentation, we adopt analyses and visualizations known from quality assessment of software. The analyses assess text copies, usage, structural properties, and the conformance of information to meta-information. The analysis results are visualized using a range of abstractions to aid in identifying and communicating quality issues to different stakeholders.

In a case study, we assessed the quality of real-world technical documentation from a Swedish mobile phone vendor, a Japanese camera vendor, and a Swedish warship producer. The study showed that our analyses and visualizations are applicable and can identify quality issues. For example, we tested an unclassified subset of the warship's technical documentation and found that 49% of it was redundant information.

The case study was conducted at a Swedish company that is in charge of creating and maintaining technical documentation. While our approach is limited to analyses that can be performed automatically, the company acknowledges that it has great potential and that our results proved helpful.

Place, publisher, year, edition, pages
Academic Publishing International, 2010
Keywords
Information Quality, Software Analysis, Software Visualization, Technical Documentation
National Category
Software Engineering
Research subject
Computer and Information Sciences, Computer Science
Identifiers
urn:nbn:se:lnu:diva-6949 (URN), 978-1-906638-72-6 (ISBN)
Conference
4th European Conference on Information Management and Evaluation, Lisbon, Sep. 9-10, 2010
Available from: 2010-08-03. Created: 2010-08-03. Last updated: 2018-01-12. Bibliographically approved
Wingkvist, A., Ericsson, M., Löwe, W. & Lincke, R. (2010). Incorporating Information Quality in Software Development. In: Proceedings of the 33rd Information Systems Research Seminar in Scandinavia. Paper presented at the 33rd Information Systems Research Seminar in Scandinavia.
Incorporating Information Quality in Software Development
2010 (English). In: Proceedings of the 33rd Information Systems Research Seminar in Scandinavia, 2010. Conference paper, Published paper (Refereed)
Abstract [en]

The usefulness and value of an information system is directly related to its perceived quality. Quality is a multidimensional concept that includes an object of interest, the viewpoint on that object, and the qualities attributed to the object. This suggests that there is no universal standard in systems development; quality is rather defined by how well the information system meets the purpose and the goals of the organization it is used within. It is important that the people involved in a particular systems development project have an agreed understanding of what striving for quality means. This agreed understanding should include how to assign appropriate quality characteristics to both the technical and social aspects of a system, as well as how to assess and interpret them. The purpose of this paper is twofold: first, we emphasize that any definition of quality should be specific to a system and include both the social and technical aspects of that system. Second, we extend methods used to define and assess quality to include social and technical aspects that extend beyond software. Our work is particularly focused on information quality.

Keywords
Systems development, Socio-technical systems, Information quality, Quality models
National Category
Software Engineering
Research subject
Computer and Information Sciences, Computer Science
Identifiers
urn:nbn:se:lnu:diva-6948 (URN)
Conference
33rd Information Systems Research Seminar in Scandinavia
Available from: 2010-08-03. Created: 2010-08-03. Last updated: 2018-01-12. Bibliographically approved
Wingkvist, A., Ericsson, M., Löwe, W. & Lincke, R. (2010). Information Quality Testing. In: Proceedings of the 9th International Conference on Perspectives in Business Informatics Research. Paper presented at the 9th International Conference on Perspectives in Business Informatics Research (pp. 14-26). Springer
Information Quality Testing
2010 (English). In: Proceedings of the 9th International Conference on Perspectives in Business Informatics Research, Springer, 2010, p. 14-26. Conference paper, Published paper (Refereed)
Abstract [en]

When a new system, such as a knowledge management system or a content management system, is put into production, both the software and hardware are systematically and thoroughly tested, while the main purpose of the system, the information, often lacks systematic testing. In this paper we study how to extend testing approaches from software and hardware development to information engineering. We define an information quality testing procedure based on test cases, and provide tools to support testing as well as the analysis and visualization of data collected during the testing. Further, we present a feasibility study in which we applied information quality testing to assess information in a documentation system. The results show promise and have been well received by the companies that participated in the feasibility study.

Place, publisher, year, edition, pages
Springer, 2010
Keywords
Information quality, Testing, Quality assessment
National Category
Software Engineering
Research subject
Computer and Information Sciences, Computer Science
Identifiers
urn:nbn:se:lnu:diva-6950 (URN), 10.1007/978-3-642-16101-8_2 (DOI), 2-s2.0-78349264601 (Scopus ID), 978-3-642-16100-1 (ISBN)
Conference
9th International Conference on Perspectives in Business Informatics Research
Available from: 2010-08-03. Created: 2010-08-03. Last updated: 2018-01-12. Bibliographically approved
Lincke, R., Gutzmann, T. & Löwe, W. (2010). Software Quality Prediction Models Compared. In: Software Quality Prediction Models Compared. Paper presented at the 10th International Conference on Quality Software.
Software Quality Prediction Models Compared
2010 (English). In: Software Quality Prediction Models Compared, 2010. Conference paper, Published paper (Refereed)
Abstract [en]

Numerous empirical studies confirm that many software metrics aggregated in software quality prediction models are valid predictors for qualities of general interest, such as maintainability and correctness. Yet even these general quality models differ quite a bit, which raises the question: do the differences matter? The goal of our study is to answer this question for a selection of quality models that have previously been published in empirical studies. We compare these quality models statistically by applying them to the same set of software systems, i.e., to altogether 328 versions of 11 open-source software systems. Finally, we draw conclusions from quality assessment using the different quality models, i.e., we calculate a quality trend, and compare these conclusions statistically. We identify significant differences among the quality models. Hence, the selection of the quality model influences the metrics-based quality assessment of software.
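As a rough illustration of what "calculating a quality trend" can mean, the sketch below fits a least-squares slope to the per-version scores of two quality models. The models, their weights, and the metric values are hypothetical, not those from the study; the point is only that two plausible models can disagree on whether quality is improving.

```python
# Illustrative sketch (all data and model weights are made up): two
# metric-based quality models applied to successive versions of one system,
# with the "quality trend" taken as a least-squares slope over versions.

def slope(ys):
    """Least-squares slope of ys against the version index 0..n-1."""
    n = len(ys)
    mx, my = (n - 1) / 2, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in enumerate(ys))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

# Made-up per-version averages of two class metrics: WMC and coupling (CBO).
wmc = [12, 11, 11, 10, 9]
cbo = [4, 5, 6, 7, 9]

# Two hypothetical quality scores (lower = more maintainable).
model_a = [w for w in wmc]                       # weights WMC only
model_b = [0.5 * w + 2.0 * c for w, c in zip(wmc, cbo)]

# The two models disagree on whether quality improves over versions.
print(slope(model_a) < 0, slope(model_b) < 0)  # prints: True False
```

Here model A sees an improving trend (falling complexity) while model B, which also weights coupling, sees a deteriorating one, mirroring the paper's finding that the choice of model matters.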

National Category
Computer Sciences
Research subject
Computer and Information Sciences, Computer Science
Identifiers
urn:nbn:se:lnu:diva-6038 (URN)
Conference
The 10th International Conference on Quality Software
Available from: 2010-06-10. Created: 2010-06-10. Last updated: 2018-01-12. Bibliographically approved
Barkmann, H., Lincke, R. & Löwe, W. (2009). Quantitative Evaluation of Software Quality Metrics in Open-Source Projects. In: Proceedings of The 2009 IEEE International Workshop on Quantitative Evaluation of large-scale Systems and Technologies (QuEST09). IEEE
Quantitative Evaluation of Software Quality Metrics in Open-Source Projects
2009 (English). In: Proceedings of The 2009 IEEE International Workshop on Quantitative Evaluation of large-scale Systems and Technologies (QuEST09), IEEE, 2009. Conference paper, Published paper (Refereed)
Abstract [en]

The validation of software quality metrics lacks statistical significance. One reason for this is that the data collection requires considerable effort. To help solve this problem, we developed tools for metrics analysis of a large number of software projects (146 projects with ca. 70,000 classes and interfaces and over 11 million lines of code). Moreover, validation of software quality metrics should focus on relevant metrics, i.e., correlated metrics need not be validated independently. On this statistical basis, we identify correlations between several metrics from well-known object-oriented metrics suites. In addition, we present early results on typical metrics values and possible thresholds.
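The correlation check described above can be sketched as follows. The metric values are invented, and Spearman's rho (without tie handling) stands in for whatever statistic the authors actually used; it is the kind of rank correlation commonly applied to skewed metric distributions.

```python
# Illustrative sketch (not the authors' tooling): Spearman rank correlation
# between two class-level metrics, the kind of statistic used to decide
# whether two correlated metrics need separate validation. Data are made up.

def ranks(values):
    """Rank values from 1..n (no tie handling, kept simple on purpose)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Spearman's rho via the rank-difference formula (assumes no ties)."""
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(xs), ranks(ys)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical per-class measurements: WMC and LOC for six classes.
wmc = [3, 10, 7, 25, 12, 5]
loc = [40, 180, 90, 600, 220, 70]
print(spearman(wmc, loc))  # prints 1.0: perfectly rank-correlated here
```

A rho close to 1 for a metric pair suggests, in the spirit of the abstract, that validating one of the two metrics is enough.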

Place, publisher, year, edition, pages
IEEE, 2009
Research subject
Computer and Information Sciences, Computer Science
Identifiers
urn:nbn:se:vxu:diva-3954 (URN)
Available from: 2009-01-07. Created: 2009-01-07. Last updated: 2019-01-17. Bibliographically approved
Lincke, R. (2009). Validation of a Standard- and Metric-Based Software Quality Model. (Doctoral dissertation). Växjö: Växjö University Press
Validation of a Standard- and Metric-Based Software Quality Model
2009 (English). Doctoral thesis, monograph (Other academic)
Abstract [en]

The automatic assessment of software quality using standard- and metric-based Software Quality Models has numerous advantages. For example, it is less error-prone, less time-consuming, and more neutral with respect to human judgment than manual approaches. Yet, to rely on the results, the Software Quality Models need to be validated, i.e., experiments should show that the measured quality and the experienced quality correlate. Such validations are rare, and the few that exist are hard to generalize since they are ambiguously defined.

In detail, to be able to generalize from a validation of a standard- and metric-based Software Quality Model, the following prerequisites need to be fulfilled: First, the definitions of the software quality metrics need to be unambiguous. Second, a formal meta-model for the software assessed, and a mapping from the (language-specific) software elements to the elements of that formal meta-model, need to be defined. Finally, the definition of the Software Quality Model needs to be exact. These prerequisites for generalization are not fulfilled in practice. This is shown in the first part of this thesis, based first on practical observations and then on experiments.

In order to address the above issues, we define a common meta-model as well as a mapping from language-specific software elements to meta-model elements. This meta-model is then the basis for metrics definitions; we redefine about 25 well-known object-oriented metrics using our meta-model. Using these metrics, we introduce a Software Quality Model based on ISO 9126. It quantitatively defines the relations between the metrics and the quality factors and criteria. Fulfilling the prerequisite of unambiguous experiment definition, as described in the second part of this thesis, allows us to validate the Software Quality Model in experiments and to generalize from the results.

In the third part, we describe the main experiment: Using multicollinearity analysis, we show (i) that some metrics suggested in the literature are linearly dependent. Using univariate regression analysis, we show (ii) that a Software Quality Model for maintainability based on the linearly independent metrics correlates with the number of revisions in successful open-source projects: later revisions have better maintainability than earlier ones. Using multivariate regression analysis, we show (iii) how these metrics correlate with the number of iterations in the projects. For (i), we analyze 157 different software projects with over 70,000 classes. For (ii) and (iii), we analyze over 300 versions of 11 well-known open-source projects. As a side effect, we extract the distributions and threshold values of the well-known object-oriented metrics defined on classes.

Besides, we created tools and processes for monitoring software quality which may be used in a number of scenarios. In the development process, they allow early corrective actions. During maintenance, they allow a precise assessment of the required change and redevelopment efforts. For project management, they make it possible to check whether subcontractors or outsourced software developers meet the agreed quality goals. We show the feasibility of our tools and processes in a number of such practical cases.

Place, publisher, year, edition, pages
Växjö: Växjö University Press, 2009. p. 216
Series
Acta Wexionensia, ISSN 1404-4307 ; 186
Keywords
maintainability; software quality metric; software quality model; validation; regression; correlation
National Category
Computer Sciences
Research subject
Computer and Information Sciences Computer Science
Identifiers
urn:nbn:se:vxu:diva-5846 (URN), 978-91-7636-679-0 (ISBN)
Public defence
Weber, K-Building, Växjö University (English)
Projects
KK-stiftelsen (the Knowledge Foundation), Sweden: project "Validering av mätningsbaserad kvalitetskontroll" (Validation of measurement-based quality control), reg. no. 2005/0218.
Available from: 2009-09-21. Created: 2009-09-20. Last updated: 2018-01-13. Bibliographically approved
Lincke, R., Lundberg, J. & Löwe, W. (2008). Comparing Software Metric Tools. In: Compilation Proceedings of the 2008 International Symposium on Software Testing and Analysis and Co-Located Workshops. ACM
Comparing Software Metric Tools
2008 (English). In: Compilation Proceedings of the 2008 International Symposium on Software Testing and Analysis and Co-Located Workshops, ACM, 2008. Conference paper, Published paper (Refereed)
Abstract [en]

This paper shows that existing software metric tools interpret and implement the definitions of object-oriented software metrics differently. This yields tool-dependent metrics results and even affects the results of analyses based on them. In short, the metrics-based assessment of a software system, and the measures taken to improve its design, differ considerably from tool to tool. To support our case, we conducted an experiment with a number of commercial and free metrics tools. We calculated metrics values using the same set of standard metrics for three software systems of different sizes. Measurements show that, for the same software system and metrics, the metrics values are tool dependent. We also defined a (simple) software quality model for "maintainability" based on the selected metrics. It defines a ranking of the classes that are most critical with respect to maintainability. Measurements show that even this ranking of classes in a software system is metrics-tool dependent.

Place, publisher, year, edition, pages
ACM, 2008
Research subject
Computer and Information Sciences, Computer Science
Identifiers
urn:nbn:se:vxu:diva-3950 (URN), 10.1145/1390630.1390648 (DOI), 978-1-60558-050-0 (ISBN)
Available from: 2009-01-07. Created: 2009-01-07. Last updated: 2018-05-17. Bibliographically approved
Strein, D., Lincke, R., Lundberg, J. & Löwe, W. (2007). An Extensible Meta-Model for Program Analysis. IEEE Transactions on Software Engineering (TSE), 33(9), 592-607
An Extensible Meta-Model for Program Analysis
2007 (English). In: IEEE Transactions on Software Engineering (TSE), Vol. 33, no 9, p. 592-607. Article in journal (Refereed), Published
Abstract [en]

Software maintenance tools for program analysis and refactoring rely on a meta-model capturing the relevant properties of programs. However, what is considered relevant may change when the tools are extended with new analyses and refactorings, and new programming languages. This paper proposes a language-independent meta-model and an architecture to construct instances thereof, which is extensible for new analyses, refactorings, and new front-ends of programming languages. Due to the loose coupling between analysis, refactoring, and front-end components, new components can be added independently and reuse existing ones. Two maintenance tools implementing the meta-model and the architecture, VIZZANALYZER and XDEVELOP, serve as a proof of concept.

Place, publisher, year, edition, pages
IEEE Computer Society, 2007
Keywords
Programming environments, program analysis, metamodels
National Category
Computer Sciences
Research subject
Computer and Information Sciences, Computer Science
Identifiers
urn:nbn:se:vxu:diva-4795 (URN), 10.1109/TSE.2007.70710 (DOI)
Available from: 2007-11-30. Created: 2007-11-30. Last updated: 2018-05-17. Bibliographically approved
Lincke, R. (2007). Validation of a Standard- and Metric-Based Software Quality Model: Creating the Prerequisites for Experimentation. (Licentiate dissertation). Reports from MSI, Växjö
Validation of a Standard- and Metric-Based Software Quality Model: Creating the Prerequisites for Experimentation
2007 (English). Licentiate thesis, monograph (Other academic)
Abstract [en]

Our long term research goal is to validate a standard- and metric-based software quality model. Today, ambiguous metric definitions lead to incomparable implementation variants in tools. Therefore, large scale empirical experiments, spanning different independent research groups, evaluating software quality models are not possible, since the generalization of the results of individual software measurements is impossible.

We propose a public and extensible knowledge base, a compendium, for the unambiguous definition of software quality metrics and their connection to a standard-based software quality model. This constitutes the basis for the measurement and prediction of software quality in current and legacy systems.

This compendium is published as the specification for the metrics implementation. In order to provide well-defined metrics, an unambiguous description framework for software metrics had to be developed. It includes formal definitions based on an extensible, language-independent meta-model and language mapping definitions. We formalize an existing and well-defined meta-model for reverse engineering. Then, we define software quality metrics based on this meta-model, as well as language mappings for Java and UML/XMI. A full set of metric definitions, together with a quality model, is provided in the compendium. Since we want our compendium to be a "living document", we need to maintain the provided knowledge base, the meta-models, language mappings, and metrics. Therefore, we design a maintenance process assuring the safe evolution of the common meta-model. This process is based on theorems controlling changes to the common meta-model, as well as abstraction mechanisms in the form of views, hiding modifications to the common meta-model from existing metric definitions. We show feasibility by implementing our definition framework and the metrics specified in the compendium in a well-known reverse engineering tool, the VizzAnalyzer. We evaluate our efforts in two industrial case studies supporting our claims and solution.

We suggest that our description framework eliminates existing ambiguities in software metric definitions, simplifying the implementation of metrics in their original sense and permitting the definition of standardized metrics. The compendium supports the exchange of experiments and the evaluation of metrics. The maintenance process allows us to extend and modify definitions, e.g., to add new programming languages, in a controlled and safe way. With the tools developed, it is possible to conduct the experiments we envision and, as a next step, to evaluate the suitability of software quality metrics.

Place, publisher, year, edition, pages
Reports from MSI, Växjö, 2007. p. 256
Series
Reports from MSI, ISSN 1650-2647 ; 07046
Keywords
Software Quality, Metrics, Quality Model, Meta-model, ISO 9126
National Category
Computer Sciences
Research subject
Computer and Information Sciences, Computer Science
Identifiers
urn:nbn:se:vxu:diva-4797 (URN)
Available from: 2007-11-30. Created: 2007-11-30. Last updated: 2018-01-13. Bibliographically approved