Publications (10 of 18)
Hönel, S., Picha, P., Ericsson, M., Brada, P., Löwe, W. & Wingkvist, A. (2024). Activity-Based Detection of (Anti-)Patterns: An Embedded Case Study of the Fire Drill. e-Informatica Software Engineering Journal, 18(1), Article ID 240106.
2024 (English). In: e-Informatica Software Engineering Journal, ISSN 1897-7979, E-ISSN 2084-4840, Vol. 18, no 1, article id 240106. Article in journal (Refereed). Published.
Abstract [en]

Background: Nowadays, expensive, error-prone, expert-based evaluations are needed to identify and assess software process anti-patterns. Process artifacts cannot be automatically used to quantitatively analyze and train prediction models without exact ground truth. Aim: Develop a replicable methodology for organizational learning from process (anti-)patterns, demonstrating the mining of reliable ground truth and the exploitation of process artifacts. Method: We conduct an embedded case study to find manifestations of the Fire Drill anti-pattern in n = 15 projects. To ensure quality, three human experts reach agreement. Their evaluation and the process's artifacts are utilized to establish a quantitative understanding and to train a prediction model. Results: Qualitative review shows many project issues. (i) Expert assessments consistently provide credible ground truth. (ii) Phenomenological descriptions of the Fire Drill match project activity over time (for example, development activity). (iii) Regression models trained on ≈ 12–25 examples are sufficiently stable. Conclusion: The approach is independent of the data source (source code or issue-tracking). It allows leveraging process artifacts for establishing additional knowledge about the phenomenon and for training robust predictive models. The results indicate the aptness of the methodology for identifying instances of the Fire Drill and similar anti-patterns modeled using activities. Such identification could be used in post mortem process analysis, supporting organizational learning for improving processes.

Place, publisher, year, edition, pages
Wroclaw University of Science and Technology, 2024
Keywords
anti-patterns, Fire Drill, case study
National Category
Software Engineering Computer Sciences
Research subject
Computer Science, Software Technology
Identifiers
urn:nbn:se:lnu:diva-128700 (URN)10.37190/e-inf240106 (DOI)2-s2.0-85188276370 (Scopus ID)
Available from: 2024-04-09. Created: 2024-04-09. Last updated: 2024-05-16. Bibliographically approved.
Hönel, S., Pícha, P., Brada, P., Rychtarova, L. & Danek, J. (2023). Detection of the Fire Drill anti-pattern: 15 real-world projects with ground truth, issue-tracking data, source code density, models and code.
2023 (English). Data set.
Abstract [en]

This package contains items for 9 real-world software projects. The data is supposed to aid the detection of the presence of the Fire Drill anti-pattern. We include data, ground truth, code, and notebooks. The data supports two distinct methods of detecting the AP: a) through issue-tracking data, and b) through the underlying source code. Therefore, this package includes the following:

Original data:

  • For each project, its original artifacts (e.g., wikis, meeting minutes, mentor's notes, etc.)
  • Evaluation of raters' notes by the assessor

Fire Drill in issue-tracking data:

  • Ground truth for whether, and how strongly, each project exhibits the Fire Drill AP, on a scale of [0, 10]. This was determined by two individual raters, who then reached a consensus.
  • Coefficients for indicators for the first method, per project.
  • Detailed issue-tracking data for each project: what occurred and when.
  • Time logs for each project.

Fire Drill in source-code data:

  • Four technical reports that document the developed method for translating a description into a detectable pattern, and for using that pattern to detect the AP's presence and to score it (similar to the rating). Also included is a report on how activities were assigned to individual commits.
  • Source code density data (metrics) for each commit in each of the nine projects as a separate dataset.
  • Code: a snapshot of the repository that holds all code, models, notebooks, and pre-computed results, for utmost reproducibility (the code is written in R).
Keywords
Anti-patterns, Fire Drill, ground truth, pattern detection, source code density, ALM data
National Category
Computer Sciences Software Engineering
Research subject
Computer and Information Sciences Computer Science, Computer Science; Statistics/Econometrics
Identifiers
urn:nbn:se:lnu:diva-102945 (URN)10.5281/zenodo.4734053 (DOI)
Available from: 2021-05-04. Created: 2021-05-04. Last updated: 2024-01-10. Bibliographically approved.
Hönel, S. (2023). Exploiting Relations, Sojourn-Times, and Joint Conditional Probabilities for Automated Commit Classification. In: Hans-Georg Fill, Francisco José Domínguez-Mayo, Marten van Sinderen, and Leszek A. Maciaszek (Eds.), Proceedings of the 18th International Conference on Software Technologies, July 10-12, 2023, Rome, Italy. Paper presented at the 18th International Conference on Software Technologies - ICSOFT 2023, Rome, Italy, July 10–12, 2023 (pp. 323-331). SciTePress
2023 (English). In: Proceedings of the 18th International Conference on Software Technologies, July 10-12, 2023, Rome, Italy / [ed] Hans-Georg Fill, Francisco José Domínguez-Mayo, Marten van Sinderen, and Leszek A. Maciaszek, SciTePress, 2023, p. 323-331. Conference paper, Published paper (Refereed).
Abstract [en]

The automatic classification of commits can be exploited for numerous applications, such as fault prediction or determining maintenance activities. Additional properties, such as parent-child relations or sojourn-times between commits, were not previously considered for this task. However, such data cannot be leveraged well by traditional machine learning models, such as Random Forests. Suitable models are, e.g., Conditional Random Fields or recurrent neural networks. We reason about the Markovian nature of the problem and propose models to address it. The first model is a generalized dependent mixture model, facilitating the Forward algorithm for 1st- and 2nd-order processes using maximum likelihood estimation. We then propose a second, non-parametric model that uses Bayesian segmentation and kernel density estimation, and which can be effortlessly adapted to work with nth-order processes. Using an existing dataset of labeled commits as ground truth, we extend it with relations between, and sojourn-times of, commits, by first re-engineering the labeling rules and reaching high agreement between labelers. We show the strengths and weaknesses of either kind of model and demonstrate their ability to outperform the state of the art in automated commit classification.
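The Markovian intuition behind this abstract can be sketched with a toy example: under a first-order assumption, the next commit's maintenance activity depends only on the current one, and the transition probabilities can be estimated from label counts via maximum likelihood. The labels and sequence below are illustrative assumptions, not the paper's dataset or its actual models.

```python
from collections import Counter, defaultdict

def transition_probabilities(labels):
    """Estimate first-order transition probabilities between
    consecutive commit labels by maximum likelihood (relative
    counts of observed label pairs)."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(labels, labels[1:]):
        counts[prev][nxt] += 1
    return {
        prev: {nxt: n / sum(c.values()) for nxt, n in c.items()}
        for prev, c in counts.items()
    }

# Toy sequence of maintenance activities:
# a = adaptive, c = corrective, p = perfective
seq = ["a", "a", "c", "a", "c", "c", "p", "a"]
probs = transition_probabilities(seq)
```

The paper's actual models (a dependent mixture model using the Forward algorithm, and a non-parametric Bayesian variant) build on this kind of conditional structure but are considerably richer, e.g., by also conditioning on sojourn-times.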

Place, publisher, year, edition, pages
SciTePress, 2023
Series
ICSOFT, ISSN 2184-2833
Keywords
Software Maintenance, Repository Mining, Maintenance Activities
National Category
Software Engineering
Research subject
Computer Science, Software Technology
Identifiers
urn:nbn:se:lnu:diva-124879 (URN)10.5220/0012077300003538 (DOI)9789897586651 (ISBN)
Conference
18th International Conference on Software Technologies - ICSOFT 2023, Rome, Italy, July 10–12, 2023
Available from: 2023-09-25. Created: 2023-09-25. Last updated: 2024-05-06. Bibliographically approved.
Hönel, S., Ericsson, M., Löwe, W. & Wingkvist, A. (2023). Metrics As Scores: A Tool- and Analysis Suite and Interactive Application for Exploring Context-Dependent Distributions. Journal of Open Source Software, 8(88), Article ID 4913.
2023 (English). In: Journal of Open Source Software, E-ISSN 2475-9066, Vol. 8, no 88, article id 4913. Article in journal (Refereed). Published.
Abstract [en]

Metrics As Scores can be thought of as an interactive, multiple analysis of variance (abbr. "ANOVA," Chambers et al., 2017). An ANOVA might be used to estimate the goodness-of-fit of a statistical model. Beyond ANOVA, which is used to analyze the differences among hypothesized group means for a single quantity (feature), Metrics As Scores seeks to answer the question of whether a sample of a certain feature is more or less common across groups. This approach to data visualization and exploration has been used previously (e.g., Jiang et al., 2022). Beyond this, Metrics As Scores can determine what might constitute a good/bad, acceptable/alarming, or common/extreme value, and how distant the sample is from that value, for each group. This is expressed in terms of a percentile (a standardized scale of [0, 1]), which we call a score. Considering all available features among the existing groups furthermore allows the user to assess how different the groups are from each other, or whether they are indistinguishable from one another. The name Metrics As Scores was derived from its initial application: examining differences of software metrics across application domains (Hönel et al., 2022). A software metric is an aggregation of one or more raw features according to some well-defined standard, method, or calculation. In software processes, such aggregations are often counts of events or certain properties (Florac & Carleton, 1999). However, without the aggregation that is done in a quality model, raw data (samples) and software metrics are rarely of great value to analysts and decision-makers. This is because quality models are conceived to establish a connection between software metrics and certain quality goals (Kaner & Bond, 2004). It is, therefore, difficult to answer the question "is my metric value good?".

With Metrics As Scores we present an approach that, given some ideal value and a sample of sufficiently many relevant values, can transform any sample into a score. While previous work attempted to derive such ideal values for software metrics from, e.g., experience or surveys (Benlarbi et al., 2000), benchmarks (Alves et al., 2010), or by setting practical values (Grady, 1992), with Metrics As Scores we suggest additionally deriving ideal values in non-parametric, statistical ways. To do so, data first needs to be captured in a relevant context (group). A feature value might be good in one context, while it is less so in another. Therefore, we suggest generalizing and contextualizing the approach taken by Ulan et al. (2021), in which a score is defined to always have a range of [0, 1] and linear behavior. This means that scores can now also be compared, and that a fixed increment in any score is equally valuable among scores; this is otherwise not the case for raw features. Metrics As Scores consists of a tool- and analysis suite and an interactive application that allows researchers to explore and understand differences in scores across groups. The operationalization of features as scores lies in gathering values that are context-specific (group-typical), determining an ideal value non-parametrically or by user preference, and then transforming the observed values into distances. Metrics As Scores enables this procedure by unifying the way of obtaining probability densities/masses and conducting appropriate statistical tests. More than 120 different parametric distributions (approx. 20 of which are discrete) are fitted through a common interface. Those distributions are part of the scipy package for the Python programming language, which Metrics As Scores makes extensive use of (Virtanen et al., 2020). While fitting continuous distributions is straightforward using maximum likelihood estimation, many discrete distributions have integral parameters.

For these, Metrics As Scores solves a mixed-variable global optimization problem using a genetic algorithm in pymoo (Blank & Deb, 2020). In addition, empirical distributions (continuous and discrete) and smooth approximate kernel density estimates are available. Applicable statistical tests for assessing the goodness-of-fit are performed automatically. These tests are used to select some best-fitting random variable in the interactive web application. As an application written in Python, Metrics As Scores is made available as a package that is installable from the Python Package Index (PyPI): pip install metrics-as-scores. As such, the application can be used in a stand-alone manner and does not require additional packages, such as a web server or third-party libraries.
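As a rough illustration of the score idea described in this abstract, the sketch below turns a raw feature value into a percentile-style score in [0, 1], using a plain empirical distribution of distances from an ideal value. The actual tool fits parametric distributions via scipy and runs goodness-of-fit tests; the function name, toy context data, and ideal value here are illustrative assumptions.

```python
import numpy as np

def score(value, context_sample, ideal):
    """Transform a raw feature value into a score in [0, 1]:
    the fraction of context observations lying at least as far
    from the ideal value as the given one. Higher score means
    the value is closer to the ideal than most of its context."""
    dists = np.abs(np.asarray(context_sample, dtype=float) - ideal)
    d = abs(value - ideal)
    return float(np.mean(dists >= d))

# Toy context: complexity-like feature values from one group/domain
ctx = [1, 2, 2, 3, 4, 5, 8, 13]
print(score(2, ctx, ideal=1))   # near the ideal -> high score
print(score(13, ctx, ideal=1))  # extreme value -> low score
```

Because every feature is mapped onto the same [0, 1] distance-percentile scale, scores of otherwise incomparable metrics become directly comparable across groups, which is the point of the contextualization.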

Place, publisher, year, edition, pages
Open Journals, 2023
Keywords
Metrics, Visualization, Conditional Distributions
National Category
Probability Theory and Statistics Software Engineering
Research subject
Statistics/Econometrics; Computer Science, Information and software visualization
Identifiers
urn:nbn:se:lnu:diva-124881 (URN)10.21105/joss.04913 (DOI)
Available from: 2023-09-25. Created: 2023-09-25. Last updated: 2024-05-06. Bibliographically approved.
Hönel, S. (2023). Quantifying Process Quality: The Role of Effective Organizational Learning in Software Evolution. (Doctoral dissertation). Växjö: Linnaeus University Press
2023 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Real-world software applications must constantly evolve to remain relevant. This evolution occurs when developing new applications or adapting existing ones to meet new requirements, make corrections, or incorporate future functionality. Traditional methods of software quality control involve software quality models and continuous code inspection tools. These measures focus on directly assessing the quality of the software. However, there is a strong correlation and causation between the quality of the development process and the resulting software product. Therefore, improving the development process indirectly improves the software product, too. To achieve this, effective learning from past processes is necessary, often embraced through post mortem organizational learning. While qualitative evaluation of large artifacts is common, smaller quantitative changes captured by application lifecycle management are often overlooked. In addition to software metrics, these smaller changes can reveal complex phenomena related to project culture and management. Leveraging these changes can help detect and address such complex issues.

Software evolution was previously measured by the size of changes, but the lack of consensus on a reliable and versatile quantification method prevents its use as a dependable metric. Different size classifications fail to reliably describe the nature of evolution. While application lifecycle management data is rich, identifying which artifacts can model detrimental managerial practices remains uncertain. Approaches such as simulation modeling, discrete-events simulation, or Bayesian networks have only limited ability to exploit continuous-time process models of such phenomena. Even worse, the accessibility of, and mechanistic insight into, such gray- or black-box models are typically very low. To address these challenges, we suggest leveraging objectively captured digital artifacts from application lifecycle management, combined with qualitative analysis, for efficient organizational learning. A new language-independent metric is proposed to robustly capture the size of changes, significantly improving the accuracy of determining the nature of changes. The classified changes are then used to explore, visualize, and suggest maintenance activities, enabling solid prediction of the presence and severity of malpractices, even with limited data. Finally, parts of the automatic quantitative analysis are made accessible, potentially replacing parts of the expert-based qualitative analysis.

Place, publisher, year, edition, pages
Växjö: Linnaeus University Press, 2023
Series
Linnaeus University Dissertations ; 504
Keywords
Software Size, Software Metrics, Commit Classification, Maintenance Activities, Software Quality, Process Quality, Project Management, Organizational Learning, Machine Learning, Visualization, Optimization
National Category
Computer and Information Sciences Software Engineering Mathematical Analysis Probability Theory and Statistics
Research subject
Computer Science, Software Technology; Computer Science, Information and software visualization; Computer and Information Sciences Computer Science, Computer Science; Statistics/Econometrics
Identifiers
urn:nbn:se:lnu:diva-124916 (URN)10.15626/LUD.504.2023 (DOI)9789180820738 (ISBN)9789180820745 (ISBN)
Public defence
2023-09-29, House D, D1136A, 351 95 Växjö, Växjö, 13:00 (English)
Opponent
Supervisors
Available from: 2023-09-28. Created: 2023-09-27. Last updated: 2024-05-06. Bibliographically approved.
Hönel, S. (2023). Technical Reports Compilation: Detecting the Fire Drill Anti-pattern Using Source Code and Issue-Tracking Data.
2023 (English). Report (Other academic).
Abstract [en]

Detecting the presence of project management anti-patterns (AP) currently requires experts on the matter and is an expensive endeavor. Worse, experts may introduce their individual subjectivity or bias. Using the Fire Drill AP, we first introduce a novel way to translate descriptions into detectable APs composed of arbitrary metrics and events, such as logged time or maintenance activities, mined from the underlying source code or issue-tracking data, thus making the description objective, as it becomes data-based. Second, we demonstrate a novel method to quantify and score the deviations of real-world projects from data-based AP descriptions. Using fifteen real-world projects that exhibit a Fire Drill to some degree, we show how to further enhance the translated AP. The ground truth in these projects was elicited from two individual experts, who then reached a consensus. We introduce a novel method called automatic calibration, which optimizes a pattern such that only those necessary and important scores remain that suffice to confidently detect the degree to which the AP is present. Without automatic calibration, the proposed patterns show only weak potential for detecting the presence of the AP; enriching the AP with data from real-world projects significantly improves this potential. We also introduce a no-pattern approach that exploits the ground truth for establishing a new, quantitative understanding of the phenomenon, as well as for finding gray-/black-box predictive models. We conclude that presence detection and severity assessment of the Fire Drill anti-pattern, as well as of some of its related and similar patterns, is certainly possible using some of the presented approaches.

Publisher
p. 338
National Category
Computer Sciences Software Engineering Probability Theory and Statistics
Research subject
Computer and Information Sciences Computer Science, Computer Science; Computer Science, Software Technology; Natural Science, Mathematics; Statistics/Econometrics
Identifiers
urn:nbn:se:lnu:diva-105772 (URN)10.48550/arXiv.2104.15090 (DOI)
Available from: 2021-07-07. Created: 2021-07-07. Last updated: 2023-09-28. Bibliographically approved.
Hönel, S., Ericsson, M., Löwe, W. & Wingkvist, A. (2022). Contextual Operationalization of Metrics as Scores: Is My Metric Value Good?. In: Proceedings of the 2022 IEEE 22nd International Conference on Software Quality, Reliability and Security (QRS). Paper presented at the 2022 IEEE 22nd International Conference on Software Quality, Reliability and Security (QRS), Guangzhou, China, 5-9 Dec. 2022 (pp. 333-343). IEEE
2022 (English). In: Proceedings of the 2022 IEEE 22nd International Conference on Software Quality, Reliability and Security (QRS), IEEE, 2022, p. 333-343. Conference paper, Published paper (Refereed).
Abstract [en]

Software quality models aggregate metrics to indicate quality. Most metrics reflect counts derived from events or attributes that cannot directly be associated with quality. Worse, what constitutes a desirable value for a metric may vary across contexts. We demonstrate an approach to transforming arbitrary metrics into absolute quality scores by leveraging metrics captured from similar contexts. In contrast to metrics, scores represent freestanding quality properties that are also comparable. We provide a web-based tool for obtaining contextualized scores for metrics captured from one's own software. Our results indicate that significant differences among various metrics and contexts exist. The suggested approach works with arbitrary contexts. Given sufficient contextual information, it allows for answering the question of whether a metric value is good/bad or common/extreme.

Place, publisher, year, edition, pages
IEEE, 2022
Series
IEEE International Conference on Software Quality, Reliability and Security (QRS), ISSN 2693-9185, E-ISSN 2693-9177
Keywords
Software quality, Metrics, Scores, Software Domains, Measurement, Aggregates, Software reliability, Security, software metrics, absolute quality scores, arbitrary metrics, contextual operationalization, contextualized scores, quality properties, software quality models, Web-based tool
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Research subject
Computer and Information Sciences Computer Science; Computer Science, Software Technology; Computer Science, Information and software visualization
Identifiers
urn:nbn:se:lnu:diva-120165 (URN)10.1109/QRS57517.2022.00042 (DOI)2-s2.0-85151404427 (Scopus ID)9781665477048 (ISBN)
Conference
2022 IEEE 22nd International Conference on Software Quality, Reliability and Security (QRS), Guangzhou, China, 5-9 Dec. 2022
Available from: 2023-04-12. Created: 2023-04-12. Last updated: 2023-09-27. Bibliographically approved.
Picha, P., Hönel, S., Brada, P., Ericsson, M., Löwe, W., Wingkvist, A. & Danek, J. (2022). Process anti-pattern detection: a case study. In: Proceedings of the 27th European Conference on Pattern Languages of Programs, EuroPLop 2022, Irsee, Germany, July 6-10, 2022. Paper presented at EuroPLop '22: the 27th European Conference on Pattern Languages of Programs, Irsee, Germany, July 6-10, 2022 (pp. 1-18). ACM Publications, Article ID 5.
2022 (English). In: Proceedings of the 27th European Conference on Pattern Languages of Programs, EuroPLop 2022, Irsee, Germany, July 6-10, 2022, ACM Publications, 2022, p. 1-18, article id 5. Conference paper, Published paper (Refereed).
Abstract [en]

Anti-patterns are harmful phenomena that repeatedly occur, e.g., in software development projects. Though widely recognized and well-known, their descriptions are traditionally not fit for automated detection, which is usually performed by manual audits or on business process models. Both options are time-, effort-, and expertise-heavy, and prone to biases and/or omissions. Meanwhile, collaborative software projects produce much data as a natural side product, capturing their status and day-to-day history. Long-term, our research aims at deriving models for the automated detection of process and project management anti-patterns, applicable to project data. Here, we present a general approach for studies investigating occurrences of these types of anti-patterns in projects and discuss the entire process of such studies in detail, starting from the anti-pattern descriptions in the literature. We demonstrate and verify our approach with Fire Drill anti-pattern detection as a case study, applying it to data from 15 student projects. The results of our study suggest that reliable detection of at least some process and project management anti-patterns in project data is possible: compared to the ground truth gathered from independent data, our automated detection assessed 13 of the 15 projects accurately for Fire Drill presence. The overall approach can be similarly applied to detecting patterns and other phenomena that manifest in Application Lifecycle Management data.

Place, publisher, year, edition, pages
ACM Publications, 2022
Keywords
Pattern detection, Project management anti-patterns, Software process anti-patterns, ALM tools, Fire Drill
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Research subject
Computer and Information Sciences Computer Science, Computer Science
Identifiers
urn:nbn:se:lnu:diva-120164 (URN)10.1145/3551902.3551965 (DOI)2-s2.0-85148442751 (Scopus ID)9781450395946 (ISBN)
Conference
EuroPLop '22: Proceedings of the 27th European Conference on Pattern Languages of Programs, Irsee, Germany, July 6-10, 2022
Available from: 2023-04-12. Created: 2023-04-12. Last updated: 2023-09-27. Bibliographically approved.
Hönel, S. (2020). Efficient Automatic Change Detection in Software Maintenance and Evolutionary Processes. (Licentiate dissertation). Växjö: Faculty of Technology, Linnaeus University
2020 (English). Licentiate thesis, comprehensive summary (Other academic).
Abstract [en]

Software maintenance is such an integral part of the software evolutionary process that it consumes much of the total resources available. Some estimate the costs of maintenance to be up to 100 times the cost of developing the software. Software that is not maintained builds up technical debt, and if no countermeasures are undertaken, the cost of not paying off that debt in time will eventually outweigh the value of the software. Software must adapt to changes in its environment and to new and changed requirements. It must further receive corrections for emerging faults and vulnerabilities. Constant maintenance can prepare software for accommodating future changes.

While there may be plenty of rationale for future changes, the reasons behind historical changes may no longer be accessible. Understanding change in software evolution provides valuable insights into, e.g., the quality of a project or aspects of the underlying development process. These are worth exploiting for, e.g., fault prediction, managing the composition of the development team, or effort estimation models. The size of software is a metric often used in such models, yet it is not well-defined. In this thesis, we seek to establish a robust, versatile, and computationally cheap metric that quantifies the size of changes made during maintenance. We operationalize this new metric and exploit it for automated and efficient commit classification.

Our results show that the density of a commit, that is, the ratio between its net and gross size, is a metric that can replace other, more expensive metrics in existing classification models. Models using this metric represent the current state of the art in automatic commit classification. The density provides a more fine-grained and detailed insight into the types of maintenance activities in a software project.
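The density metric described above can be sketched in a few lines; note that the exact net-size computation in the actual tooling (e.g., how comments, whitespace, and clones are discounted) is more involved, and the empty-commit convention below is an assumption for illustration.

```python
def commit_density(net_lines, gross_lines):
    """Source code density of a commit: the ratio of its net size
    (lines that remain after discounting, e.g., comments, whitespace,
    and cloned code) to its gross size (all changed lines).
    The result lies in [0, 1]."""
    if gross_lines == 0:
        return 1.0  # convention for empty commits (illustrative assumption)
    return net_lines / gross_lines

# A commit touching 120 lines of which only 30 are net changes has a
# low density, hinting at, e.g., cosmetic or generated changes.
print(commit_density(30, 120))  # 0.25
```

Because the density is a single cheap ratio per commit, it can stand in for more expensive feature sets in classification models, which is the substitution the paragraph above describes.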

Additional properties of commits, such as their relations or intermediate sojourn-times, have not previously been exploited for improved classification of changes. We reason about their potential, and suggest and implement dependent mixture and Bayesian models that exploit joint conditional densities. Each of these models has its own trade-offs with regard to computational cost, complexity, and prediction accuracy. Such models can outperform well-established classifiers, such as Gradient Boosting Machines.

All of our empirical evaluations comprise large datasets, software, and experiments, all of which we have published alongside the results as open access. We have reused, extended, and created datasets, and released software packages for the change detection and the Bayesian models used in all of the studies conducted.

Place, publisher, year, edition, pages
Växjö: Faculty of Technology, Linnaeus University, 2020. p. 37
Keywords
Software Maintenance, Software Evolution, Effort Estimation, Commit Classification
National Category
Software Engineering
Research subject
Computer and Information Sciences Computer Science, Computer Science; Computer Science, Software Technology
Identifiers
urn:nbn:se:lnu:diva-94733 (URN)
Presentation
2020-06-05, D1173, PG Vejdes Väg 29, Växjö, 10:00 (English)
Opponent
Supervisors
Available from: 2020-08-11. Created: 2020-05-13. Last updated: 2024-05-06. Bibliographically approved.
Hönel, S. (2020). Git Density: Analyze git repositories to extract the Source Code Density and other Commit Properties.
2020 (English). Other (Other academic).
Abstract [en]

Git Density (git-density) is a tool to analyze git-repositories with the goal of detecting the source code density. It was developed during the research phase of the short technical paper and poster "A changeset-based approach to assess source code density and developer efficacy" and has since been extended to support thorough analyses and insights.

Keywords
git, source code density, git-hours, software metrics
National Category
Computer Sciences Computer and Information Sciences
Research subject
Computer Science, Software Technology
Identifiers
urn:nbn:se:lnu:diva-98140 (URN)10.5281/zenodo.2565238 (DOI)
Available from: 2020-09-23. Created: 2020-09-23. Last updated: 2023-09-28. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0001-7937-1645
