lnu.se Publications
1 - 50 of 110
  • 1.
    Abbas, Nadeem
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Andersson, Jesper
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Autonomic Software Product Lines (ASPL). 2010. In: ECSA '10 Proceedings of the Fourth European Conference on Software Architecture: Companion Volume / [ed] Carlos E. Cuesta, ACM Press, 2010, p. 324-331. Conference paper (Refereed).
    Abstract [en]

    We describe ongoing work on a variability mechanism for Autonomic Software Product Lines (ASPL). Autonomic software product lines have self-management characteristics that make product line instances more resilient to context changes and to some aspects of product line evolution. Instances sense the context, then select and bind the best component variants to variation points at run-time. The variability mechanism we describe combines profile-guided dispatch with off-line and on-line training processes. Together they form a simple yet powerful variability mechanism that continuously learns which variants to bind given the current context and system goals.
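    A minimal sketch of the profile-guided dispatch idea described above, assuming two competing component variants and input size as the only context attribute: an offline phase profiles each variant per context, and the runtime dispatcher binds the recorded winner. All names (train, dispatch, VARIANTS, CONTEXTS) are illustrative, not the authors' implementation.

        import random
        import time

        def insertion_sort(xs):
            xs = list(xs)
            for i in range(1, len(xs)):
                j = i
                while j > 0 and xs[j - 1] > xs[j]:
                    xs[j - 1], xs[j] = xs[j], xs[j - 1]
                    j -= 1
            return xs

        VARIANTS = [insertion_sort, sorted]   # competing component variants
        CONTEXTS = [16, 4096]                 # context attribute: input size

        def timed(variant, data):
            t0 = time.perf_counter()
            variant(data)
            return time.perf_counter() - t0

        def train():
            # Offline phase: profile every variant per context, remember the winner.
            table = {}
            for n in CONTEXTS:
                data = [random.random() for _ in range(n)]
                table[n] = min(VARIANTS, key=lambda v: timed(v, data))
            return table

        def dispatch(table, n):
            # Run-time phase: bind the best-known variant for the current context.
            return table.get(n, VARIANTS[-1])

        table = train()
        print(dispatch(table, 4096)([3, 1, 2]))   # -> [1, 2, 3]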

  • 2.
    Abbas, Nadeem
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Andersson, Jesper
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Towards Autonomic Software Product Lines (ASPL) - A Technical Report. 2011. Report (Other academic).
    Abstract [en]

    This report describes work in progress to develop Autonomic Software Product Lines (ASPL). ASPL is a dynamic software product line approach with a novel variability handling mechanism that enables traditional software product lines to adapt themselves at runtime in response to changes in their context, requirements and business goals. The ASPL variability mechanism is composed of three key activities: 1) context profiling, 2) context-aware composition, and 3) online learning. Context profiling is an offline activity that prepares a knowledge base for context-aware composition. Context-aware composition uses the knowledge base to derive a new product or adapt an existing product based on a product line's context attributes and goals. Online learning optimizes the knowledge base to remove errors and suboptimal information and to incorporate new knowledge. The three activities together form a simple yet powerful variability handling mechanism that learns and adapts a system at runtime in response to changes in system context and goals. We evaluated the ASPL variability mechanism on three small-scale software product lines with promising results. The ASPL approach is, however, still at an initial stage and requires improved development support and more rigorous evaluation.

  • 3.
    Ambrosius, Robin
    et al.
    Dezember IT GmbH, Germany.
    Ericsson, Morgan
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Löwe, Welf
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Wingkvist, Anna
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Interviews Aided with Machine Learning. 2018. In: Perspectives in Business Informatics Research. BIR 2018: 17th International Conference, BIR 2018, Stockholm, Sweden, September 24-26, 2018, Proceedings / [ed] Zdravkovic J., Grabis J., Nurcan S., Stirna J., Springer, 2018, Vol. 330, p. 202-216. Conference paper (Refereed).
    Abstract [en]

    We have designed and implemented a Computer Aided Personal Interview (CAPI) system that learns from expert interviews and can support less experienced interviewers, for example by suggesting questions to ask or skip. We were particularly interested in streamlining the due diligence process when estimating the value of software startups. For our design we evaluated several machine learning algorithms and their trade-offs, and in a small case study we evaluated their implementation and performance. We find that while there is room for improvement, the system can learn and recommend questions. The CAPI system can in principle be applied to any domain in which long interview sessions should be shortened without sacrificing the quality of the assessment.

  • 4.
    Andersson, Jesper
    et al.
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Computer Science.
    Ericsson, Morgan
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Computer Science.
    Kessler, Christoph
    Löwe, Welf
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Computer Science.
    Profile-guided Composition. 2008. In: 7th International Symposium on Software Composition, Springer, 2008, p. 157-164. Conference paper (Refereed).
  • 5.
    Andersson, Jesper
    et al.
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Computer Science.
    Ericsson, Morgan
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Computer Science.
    Löwe, Welf
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Computer Science.
    Automatic Rule Derivation for Adaptive Architectures. 2008. In: 8th IEEE/IFIP Working Conference on Software Architecture, IEEE, 2008, p. 323-326. Conference paper (Refereed).
    Abstract [en]

    This paper discusses ongoing work on automatic adaptation rule derivation in adaptive architectures. Adaptation is rule-action based, but deriving rules that meet the adaptation goals is tedious and error-prone. We present an approach that uses model-driven derivation and training to automatically derive adaptation rules, and exemplify it in an environment for scientific computing.

  • 6.
    Andersson, Jesper
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Heberle, Andreas
    Kirchner, Jens
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Service Level Achievements: Distributed Knowledge for Optimal Service Selection. 2011. In: Proceedings - 9th IEEE European Conference on Web Services, ECOWS 2011 / [ed] Gianluigi Zavattaro, Ulf Schreier, and Cesare Pautasso, IEEE, 2011, p. 125-132. Conference paper (Refereed).
    Abstract [en]

    In a service-oriented setting, where services are composed to provide end-user functionality, it is a challenge to find the service components with best-fit functionality and quality. A decision based on information mainly provided by service providers is inadequate, as it cannot be trusted in general. In this paper, we discuss service compositions in an open market scenario where automated best-fit service selection and composition are based on Service Level Achievements instead. Continuous monitoring updates the actual Service Level Achievements, which can lead to dynamically changing compositions. Measurements of real-life services exemplify the approach.

  • 7.
    Barkmann, Henrike
    et al.
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Computer Science.
    Lincke, Rüdiger
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Computer Science.
    Löwe, Welf
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Computer Science.
    Quantitative Evaluation of Software Quality Metrics in Open-Source Projects. 2009. In: Proceedings of The 2009 IEEE International Workshop on Quantitative Evaluation of large-scale Systems and Technologies (QuEST09), IEEE, 2009. Conference paper (Refereed).
    Abstract [en]

    The validation of software quality metrics lacks statistical significance. One reason for this is that data collection requires considerable effort. To help solve this problem, we developed tools for metrics analysis of a large number of software projects (146 projects with ca. 70,000 classes and interfaces and over 11 million lines of code). Moreover, validation of software quality metrics should focus on relevant metrics, i.e., correlated metrics need not be validated independently. Based on our statistical basis, we identify correlations between several metrics from well-known object-oriented metrics suites. In addition, we present early results on typical metrics values and possible thresholds.

  • 8.
    Binder, Walter
    et al.
    University of Lugano.
    Bodden, Eric
    Technische Universität Darmstadt.
    Löwe, Welf
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Software Composition: 12th International Conference, SC 2013, Budapest, Hungary, June 19, 2013. 2013. Conference proceedings (editor) (Refereed).
  • 9.
    Bravo, Giangiacomo
    et al.
    Linnaeus University, Faculty of Social Sciences, Department of Social Studies.
    Laitinen, Mikko
    Linnaeus University, Faculty of Arts and Humanities, Department of Languages.
    Levin, Magnus
    Linnaeus University, Faculty of Arts and Humanities, Department of Languages.
    Löwe, Welf
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Petersson, Göran
    Linnaeus University, Faculty of Health and Life Sciences, Department of Medicine and Optometry.
    Big Data in Cross-Disciplinary Research: J.UCS Focused Topic. 2017. In: Journal of Universal Computer Science (Online), ISSN 0948-695X, E-ISSN 0948-6968, Vol. 23, no 11, p. 1035-1037. Article in journal (Other academic).
  • 10.
    Danylenko, Antonina
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Kessler, Christoph
    Linköping University, Department for Computer and Information Science.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Comparing Machine Learning Approaches for Context-Aware Composition. 2011. In: Software Composition: 10th International Conference, SC 2011, Zurich, Switzerland, June 30 - July 1, 2011, Proceedings / [ed] Sven Apel, Ethan Jackson, Berlin: Springer, 2011, Vol. 6708, p. 18-33. Chapter in book (Refereed).
    Abstract [en]

    Context-Aware Composition automatically selects optimal variants of algorithms, data structures, and schedules at runtime using generalized dynamic Dispatch Tables. These tables grow exponentially with the number of significant context attributes. To make Context-Aware Composition scale, we suggest four alternative implementations of Dispatch Tables, all well known in the field of machine learning: Decision Trees, Decision Diagrams, Naive Bayes and Support Vector Machine classifiers. We assess their decision overhead and memory consumption theoretically and practically in a number of experiments on different hardware platforms. Decision Diagrams turn out to be more compact than Dispatch Tables, almost as accurate, and faster in decision making. Using Decision Diagrams in Context-Aware Composition therefore leads to better scalability, i.e., Context-Aware Composition can be applied at more program points and regard more context attributes than before.
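    To illustrate the scaling argument (an editorial sketch, not the paper's implementation): a Dispatch Table needs one entry per context tuple, while a Decision-Diagram-like structure shares identical subtrees. The context attributes and variant names below are made up.

        from itertools import product

        # Dispatch Table: one entry per combination of three binary context
        # attributes (2**3 entries; grows exponentially with attribute count).
        table = {ctx: "parallel" if ctx[0] else "sequential"
                 for ctx in product([0, 1], repeat=3)}

        # Decision-Diagram-like compression: here only the first attribute
        # matters, so all subtrees collapse into one shared node per outcome.
        diagram = {0: "sequential", 1: "parallel"}

        def decide(ctx):
            return diagram[ctx[0]]

        # Same decisions from 2 entries instead of 8.
        assert all(decide(ctx) == variant for ctx, variant in table.items())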

  • 11.
    Danylenko, Antonina
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Lundberg, Jonas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Decisions: Algebra and Implementation. 2011. In: Machine Learning and Data Mining in Pattern Recognition: 7th International Conference on Machine Learning and Data Mining in Pattern Recognition, MLDM 2011, New York, NY, USA, August/September 2011, Proceedings / [ed] Perner, Petra, Berlin, Heidelberg: Springer, 2011, Vol. 6871, p. 31-45. Chapter in book (Refereed).
    Abstract [en]

    This paper presents a generalized theory for capturing and manipulating classification information. We define a decision algebra which models decision-based classifiers as higher-order decision functions, abstracting from implementations using decision trees (or similar), decision rules, and decision tables. As a proof of the decision algebra concept, we compare decision trees with decision graphs, another instantiation of the proposed theoretical framework, which implement the decision algebra operations efficiently and capture classification information in a non-redundant way. Compared to classical decision tree implementations, decision graphs gain learning and classification speed-ups of up to 20% without accuracy loss and reduce memory consumption by 44%. This is confirmed by experiments.

  • 12.
    Danylenko, Antonina
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Lundberg, Jonas
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Löwe, Welf
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Decisions: Algebra, Implementation, and First Experiments. 2014. In: Journal of Universal Computer Science (Online), ISSN 0948-695X, E-ISSN 0948-6968, Vol. 20, no 9, p. 1174-1231. Article in journal (Refereed).
    Abstract [en]

    Classification is a constitutive part of many different fields of Computer Science. There exist several approaches that capture and manipulate classification information in order to construct a specific classification model. These approaches are often tightly coupled to certain learning strategies, to special data structures for capturing the models, and to how common problems, e.g. fragmentation, replication and model overfitting, are addressed. In order to unify these different classification approaches, we define a Decision Algebra which defines models for classification as higher-order decision functions, abstracting from their implementations using decision trees (or similar), decision rules, decision tables, etc. Decision Algebra defines operations for learning, applying, storing, merging, approximating, and manipulating models for classification, along with some general algebraic laws, regardless of the implementation used. The Decision Algebra abstraction has several advantages. First, several useful Decision Algebra operations (e.g., learning and deciding) can be derived based on the implementation of a few core operations (including merging and approximating). Second, applications using classification can be defined regardless of the different approaches. Third, certain properties of Decision Algebra operations can be proved regardless of the actual implementation. For instance, we show that the merger of a series of probably accurate decision functions is even more accurate, which can be exploited for efficient and general online learning. As a proof of the Decision Algebra concept, we compare decision trees with decision graphs, an efficient implementation of the Decision Algebra core operations which captures classification models in a non-redundant way. Compared to classical decision tree implementations, decision graphs are 20% faster in learning and classification without accuracy loss and reduce memory consumption by 44%. This is the result of experiments on a number of standard benchmark data sets comparing the accuracy, access time, and size of decision graphs and trees as constructed by the standard C4.5 algorithm. Finally, in order to test our hypothesis about increased accuracy when merging decision functions, we merged a series of decision graphs constructed over the data sets. The result shows that at each step the accuracy of the merged decision graph increases, with a final accuracy growth of up to 16%.
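    The merge property can be pictured with a simple majority vote over decision functions. This is only an editorial sketch of the intuition, not the paper's Decision Algebra operations; all names are illustrative.

        from collections import Counter

        def merge(classifiers):
            # Return a decision function that takes the majority vote of its parts.
            def merged(x):
                return Counter(c(x) for c in classifiers).most_common(1)[0][0]
            return merged

        # Two correct rules and one noisy rule for even/odd classification;
        # the merged decision function recovers the correct answer.
        good1 = lambda x: "even" if x % 2 == 0 else "odd"
        good2 = lambda x: "even" if x % 2 == 0 else "odd"
        noisy = lambda x: "odd"

        vote = merge([good1, good2, noisy])
        assert vote(4) == "even" and vote(3) == "odd"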

  • 13.
    Danylenko, Antonina
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Adaptation of Legacy Codes to Context-Aware Composition Using Aspect-Oriented Programming. 2012. In: Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349, Vol. 7306, p. 68-85. Article in journal (Refereed).
    Abstract [en]

    The context-aware composition approach (CAC) has been shown to improve the performance of object-oriented applications on modern multi-core hardware by selecting between different (sequential and parallel) component variants in different (call and hardware) contexts. However, introducing CAC in legacy applications can be time-consuming and requires considerable effort for changing and adapting the existing code. We observe that CAC concerns, like offline component variant profiling and runtime selection of the champion variant, can be separated from the legacy application code. We suggest separating and reusing these CAC concerns when introducing CAC to different legacy applications.

    To automate this process, we propose an approach based on Aspect-Oriented Programming (AOP) and Reflective Programming. We show that manual adaptation to CAC requires more programming than the AOP-based approach, almost three times as much in our experiments. Moreover, the AOP-based approach speeds up the execution time of the legacy code, in our experiments by factors of up to 2.3 and 3.4 on multi-core machines with two and eight cores, respectively. The AOP-based approach only introduces a small runtime overhead compared to the manually optimized CAC approach; for different problems, this overhead is about 2-9% of the manual adaptation approach. These results suggest that AOP-based adaptation can effectively adapt legacy applications to CAC, making them run efficiently even on multi-core machines.
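    As an editorial sketch of the separation of concerns (using a Python decorator in place of AOP advice, with invented names throughout), the CAC concern wraps a legacy function without modifying its code:

        def context_aware(variants, context_of):
            # "Advice" that wraps a legacy function with CAC variant selection.
            def advice(legacy):
                def wrapper(xs):
                    best = variants.get(context_of(xs), legacy)
                    return best(xs)
                return wrapper
            return advice

        @context_aware({"large": sorted}, lambda xs: "large" if len(xs) > 64 else "small")
        def legacy_sort(xs):
            # The unchanged legacy implementation (a simple insertion sort).
            out = []
            for x in xs:
                i = 0
                while i < len(out) and out[i] < x:
                    i += 1
                out.insert(i, x)
            return out

        assert legacy_sort([3, 1, 2]) == [1, 2, 3]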

  • 14.
    Danylenko, Antonina
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Context-Aware Recommender Systems for Non-functional Requirements. 2012. In: Third International Workshop on Recommendation Systems for Software Engineering (RSSE 2012), 2012, p. 80-84. Conference paper (Refereed).
    Abstract [en]

    For large software projects, system designers have to adhere to a significant number of functional and non-functional requirements, which makes software development a complex engineering task. If these requirements change during the development process, complexity increases even further. In this paper, we suggest recommendation systems based on context-aware composition that enable a system designer to postpone and automate decisions regarding efficiency non-functional requirements, such as performance, and to focus on the design of the core functionality of the system instead.

    Context-aware composition suggests the optimal component variants of a system for different static contexts (e.g., software and hardware environment) or even different dynamic contexts (e.g., actual parameters and resource utilization). Thus, an efficiency non-functional requirement can be automatically optimized statically or dynamically by providing possible component variants. Such a recommender system reduces the time and effort spent on manually developing optimal applications that adapt to different (static or dynamic) contexts and even changes thereof.

  • 15.
    Danylenko, Antonina
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Löwe, Welf
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Merging Classifiers of Different Classification Approaches. 2014. In: 2014 IEEE International Conference on Data Mining Workshop (ICDMW), IEEE Press, 2014, p. 706-715. Conference paper (Refereed).
    Abstract [en]

    Classification approaches, e.g. decision trees or Naive Bayesian classifiers, are often tightly coupled to learning strategies, special data structures, the type of information captured, and to how common problems, e.g. overfitting, are addressed. This prevents a simple combination of classifiers of different classification approaches learned over different data sets. Many different methods of combining classification models have been proposed. However, most of them are based on a combination of the actual results of classification rather than producing a new, possibly more accurate, classifier capturing the combined classification information. In this paper we propose a new general approach to combining different classification models based on the concept of Decision Algebra, which provides a unified formalization of classification approaches as higher-order decision functions. It defines a general combining operation, referred to as the merge operation, abstracting from implementation details of different classifiers. We show that the combination of a series of probably accurate decision functions (regardless of the actual implementation) is even more accurate. This can be exploited, e.g., for distributed learning and for efficient general online learning. We support our results by combining a series of decision graphs and Naive Bayesian classifiers learned from random samples of the data sets. The results show that at each step the accuracy of the combined classifier increases, with a total accuracy growth of up to 17%.

  • 16.
    Danylenko, Antonina
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Zimmermann, Wolf
    Martin-Luther-Universität Halle Wittenberg, Institut für Informatik.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Decision Algebra: Parameterized Specification of Decision Models. 2012. In: WADT 2012: 21st International Workshop on Algebraic Development Techniques, 7-10 June, 2012, Salamanca, Spain; Technical report TR-08/12 / [ed] Narsio Martí-Oliet, Miguel Palomino, 2012, p. 40-43. Conference paper (Refereed).
  • 17.
    Danylenko, Oleg
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Scheduling Stream Processing on Multi-Core Processors. 2011. In: Fourth Swedish Workshop on Multicore Computing / [ed] Christoph Kessler, Linköping, Sweden: Linköping University, 2011, p. 136. Conference paper (Refereed).
    Abstract [en]

    Stream processing applications map continuous sequences of input data blocks to continuous sequences of output data blocks. They have demands on the throughput of blocks or the response time for each data block. We present theoretical bounds on the degree of parallelism of such applications, their throughput, and their response time. Based on these bounds, we develop scheduling heuristics for different optimization and constraint problems of stream processing applications involving throughput and response time. These results direct the manual implementation of efficient stream processing applications and will, in the long run, help to generate them automatically.

  • 18.
    Danylenko, Oleg
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Rydström, Sara
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Comparing Implementation Platforms for Real-Time Stream Processing Systems on Multi-Core Hardware. 2011. In: Proceedings of the 23rd IASTED International Conference: Parallel and Distributed Computing and Systems / [ed] T. Gonzalez, Calgary, AB, Canada: ACTA Press, 2011, p. 235-243. Chapter in book (Refereed).
    Abstract [en]

    Today there exist many programming models and platforms for implementing real-time stream processing systems. A decision in favor of the wrong technology might lead to increased development time and costs. It is, therefore, necessary to decide which alternatives further efforts should concentrate on and which may be disregarded. Such decisions cannot be based solely on analytical comparisons; the present experiment seeks to complement analytical with empirical results.

    More specifically, the paper discusses the results of comparing the programmability and performance of one and the same real-world real-time stream processing system implemented on three alternative platforms: C++ as a general-purpose programming language, IBM InfoSphere Streams as a dedicated stream processing platform, and MatLab as the technical computing system preferred in the application domain. As a result, the MatLab-based implementation was the easiest, the C++-based implementation outperformed the others in response time, while InfoSphere Streams led to the highest data throughput. Altogether, the results give a picture of the advantages and disadvantages of each technology for our real-time stream processing system. More empirical studies ought to provide similar empirical knowledge to help decide which technology to use for solving particular stream processing problems.

  • 19.
    Dressler, Danny
    et al.
    AIMO AB, Sweden.
    Liapota, Pavlo
    Softwerk AB, Sweden.
    Löwe, Welf
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Data Driven Human Movement Assessment. 2019. In: Intelligent Decision Technologies 2019: Proceedings of the 11th KES International Conference on Intelligent Decision Technologies (KES-IDT 2019), Volume 2 / [ed] Ireneusz Czarnowski; Robert Howlett; Lakhmi C. Jain, Springer, 2019, p. 317-327. Conference paper (Refereed).
    Abstract [en]

    Quality assessment of human movements has many applications in the diagnosis and therapy of musculoskeletal insufficiencies and in high-performance sport. We suggest five purely data-driven assessment methods for arbitrary human movements using inexpensive 3D sensor technology. We evaluate their accuracy by comparing them against a validated digitalization of a standardized human-expert-based assessment method for deep squats. We recommend the data-driven method that shows high agreement with this baseline method, requires little expertise in the human movement, and requires no expertise in the assessment method itself. It allows for an effective and efficient, automatic and quantitative assessment of arbitrary human movements.

  • 20.
    Dressler, Danny
    et al.
    AIMO AB, Sweden.
    Liapota, Pavlo
    Softwerk AB, Sweden.
    Löwe, Welf
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Towards an automated assessment of musculoskeletal insufficiencies. 2019. In: Intelligent Decision Technologies 2019: Proceedings of the 11th KES International Conference on Intelligent Decision Technologies (KES-IDT 2019), Volume 1 / [ed] Ireneusz Czarnowski; Robert Howlett; Lakhmi C. Jain, Springer, 2019, p. 251-261. Conference paper (Refereed).
    Abstract [en]

    The paper suggests a quantitative assessment of human movements using inexpensive 3D sensor technology and evaluates its accuracy by comparing it with human expert assessments. The two assessment methods show a high agreement. To achieve this, a novel sequence alignment algorithm was developed that works for arbitrary time series.

  • 21.
    Edvinsson, Marcus
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Lundberg, Jonas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Parallel Data-Flow Analysis for Multi-Core Machines. 2011. Conference paper (Refereed).
    Abstract [en]

    Static program analysis supporting software development is often part of edit-compile cycles, and precise program analysis is time-consuming. Points-to analysis is a data-flow-based static program analysis used to find object references in programs. Its applications include test case generation, compiler optimizations, program understanding, etc. Recent increases in the processing power of desktop computers come mainly from multiple cores, and parallel algorithms are vital for the simultaneous use of these cores. An efficient parallel points-to analysis requires sufficient work for each processing unit.

    The present paper presents a parallel points-to analysis of object-oriented programs. It exploits the facts that (1) different target methods of polymorphic calls and (2) independent control-flow branches can be analyzed in parallel. Carefully selected thresholds guarantee that each parallel thread has sufficient work to do and that only little work is redundant with other threads. Our experiments show that this approach achieves a maximum speed-up of 4.5.
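    An editorial sketch of the thresholding idea, under the simplifying assumption that the work per call target can be estimated up front; the data structures and the THRESHOLD value are made up, not taken from the paper.

        from concurrent.futures import ThreadPoolExecutor

        THRESHOLD = 100  # illustrative minimum estimated work per parallel task

        def analyze_target(target):
            # Stand-in for points-to propagation through one call target.
            return sum(range(target["size"]))

        def analyze_polymorphic_call(targets):
            big = [t for t in targets if t["size"] >= THRESHOLD]
            small = [t for t in targets if t["size"] < THRESHOLD]
            with ThreadPoolExecutor(max_workers=8) as pool:
                futures = [pool.submit(analyze_target, t) for t in big]
                results = [analyze_target(t) for t in small]   # too small: analyze inline
                results += [f.result() for f in futures]
            return results

        print(analyze_polymorphic_call([{"size": 10}, {"size": 5000}]))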

  • 22.
    Edvinsson, Marcus
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Lundberg, Jonas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Parallel Points-to Analysis for Multi-Core Machines. 2011. In: 6th International Conference on High-Performance and Embedded Architectures and Compilers - HIPEAC 2011, 2011. Conference paper (Refereed).
  • 23.
    Edvinsson, Marcus
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Lundberg, Jonas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Parallel Reachability and Escape Analysis. 2010. In: 2010 10th IEEE Working Conference on Source Code Analysis and Manipulation (SCAM), IEEE Press, 2010, p. 125-134. Conference paper (Refereed).
    Abstract [en]

    Static program analysis usually consists of a number of steps, each producing partial results. For example, the points-to analysis step, calculating object references in a program, usually just provides the input for larger client analyses like reachability and escape analyses. All these analyses are computationally intense, and it is therefore vital to create parallel approaches that make use of the processing power provided by the multiple cores of modern desktop computers.

    The present paper presents two parallel approaches to increase the efficiency of reachability analysis and escape analysis, based on a parallel points-to analysis. The experiments show that the two parallel approaches achieve a speed-up of 1.5 for reachability analysis and 3.8 for escape analysis on 8 cores for a benchmark suite of Java programs.


  • 24.
    Edvinsson, Marcus
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    A Multi-Threaded Approach for Data-Flow Analysis. 2010. Conference paper (Refereed).
    Abstract [en]

    Program analysis supporting software development is often part of edit-compile cycles, and precise program analysis is time-consuming. With the availability of parallel processing power on desktop computers, parallelization is a way to speed up program analysis. This requires a parallel data-flow analysis with sufficient work for each processing unit. The present paper suggests such an approach for object-oriented programs, analyzing the target methods of polymorphic calls in parallel. With carefully selected thresholds guaranteeing sufficient work for the parallel threads and only little redundancy between them, this approach achieves a maximum speed-up of 5 (average 1.78) on 8 cores for the benchmark programs.

  • 25.
    Edvinsson, Marcus
    et al.
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Computer Science.
    Löwe, Welf
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Computer Science.
    A Parallel Approach for Solving Data Flow Analysis Problems. 2008. In: The 20th IASTED International Conference on Parallel and Distributed Computing and Systems: PDCS 2008, Acta Press, Calgary, 2008. Conference paper (Refereed).
    Abstract [en]

    Program analysis supporting software development is often part of edit-compile cycles, and precise program analysis is time-consuming. With the availability of parallel processing power on desktop computers, parallelization is a way to speed up program analysis. This paper introduces a parallelization schema for program analysis that can be translated to parallel machines using standard scheduling techniques. First benchmarks analyzing a number of Java programs indicate that the schema scales well for up to 8 processors, but not very well for 128 processors. These results are a first step towards more precise program analysis in Integrated Development Environments utilizing the computational power of today's desktop computers.

  • 26.
    Ericsson, Morgan
    et al.
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Computer Science.
    Löwe, Welf
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Computer Science.
    Kessler, Christoph
    Andersson, Jesper
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Computer Science.
    Composition and Optimization. 2008. In: Proc. Int. Workshop on Component-Based High Performance Computing, 2008. Conference paper (Refereed).
  • 27.
    Ericsson, Morgan
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Löwe, Welf
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Olsson, Tobias
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Toll, Daniel
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Wingkvist, Anna
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    A Study of the Effect of Data Normalization on Software and Information Quality Assessment. 2013. In: Software Engineering Conference (APSEC, 2013 20th Asia-Pacific), IEEE Press, 2013, p. 55-60. Conference paper (Refereed).
    Abstract [en]

    Indirect metrics in quality models define weighted integrations of direct metrics to provide higher-level quality indicators. This paper presents a case study that investigates to what degree quality models depend on statistical assumptions about the distribution of direct metrics values when these are integrated and aggregated. We vary the normalization used by the quality assessment efforts of three companies, while keeping quality models, metrics, metrics implementation and, hence, metrics values constant. We find that normalization has a considerable impact on the ranking of an artifact (such as a class). We also investigate how normalization affects quality trends and find that it has a considerable effect on them as well. Based on these findings, we find it questionable to continue to aggregate different metrics in a quality model as we do today.

  • 28.
    Ericsson, Morgan
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Löwe, Welf
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Wingkvist, Anna
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Probabilistic Quality Models to Improve Communication and Actionability. 2015. In: 2015 30th IEEE/ACM International Conference on Automated Software Engineering Workshop (ASEW), IEEE Press, 2015, p. 1-4. Conference paper (Refereed).
    Abstract [en]

    We need to aggregate metric values to make quality assessment actionable. However, due to properties of metric values, i.e., unknown distributions and different measurement scales, they are difficult to aggregate. We present and evaluate a method to aggregate metric values based on observed numerical distributions that are converted into cumulative distribution functions. We use these to determine the probability of each metric value per file, and aggregate these probabilities. Our limited study suggests that the method improves correctness, communication, and the ability to take action. However, more evaluation is required.
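    The aggregation idea can be sketched with empirical cumulative distribution functions. The metrics, values, and the mean-based aggregation below are illustrative assumptions, not the paper's exact model.

        from bisect import bisect_right

        def ecdf(sample):
            # Empirical CDF: maps a raw value to its probability rank in [0, 1].
            xs = sorted(sample)
            return lambda v: bisect_right(xs, v) / len(xs)

        # Two metrics on incomparable scales, measured over the same five files.
        loc = [120, 80, 4000, 150, 90]   # lines of code
        cc = [3, 2, 45, 4, 2]            # cyclomatic complexity
        p_loc, p_cc = ecdf(loc), ecdf(cc)

        # Aggregate per file as the mean of the metric probabilities.
        scores = [(p_loc(a) + p_cc(b)) / 2 for a, b in zip(loc, cc)]
        print(max(range(len(scores)), key=scores.__getitem__))  # file 2 stands out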

  • 29.
    Ericsson, Morgan
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Wingkvist, Anna
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    A Software Infrastructure For Information Quality Assessment. 2011. In: Proceedings of the International Conference on Information Quality, 2011. Conference paper (Refereed).
    Abstract [en]

    Information quality assessment of technical documentation is nowadays an integral part of the quality management of products and services. Documentation is usually assessed using questionnaires, checklists, and reviews; consequently, the work is cumbersome, costly and prone to errors. Acknowledging the fact that only humans can assess certain quality aspects, we suggest complementing these with automatic quality assessment using a software infrastructure that (i) reads information from documentations, (ii) performs analyses on this information, and (iii) visualizes the results to help stakeholders understand quality issues. We introduce the software infrastructure's architecture and implementation, and its adaptation to different formats of documentation and types of analyses, along with a number of real-world cases exemplifying the feasibility and benefit of our approach. Altogether, our approach contributes to more efficient and automatic information quality assessments.

  • 30.
    Ericsson, Morgan
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Wingkvist, Anna
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    The Design and Implementation of a Software Infrastructure for IQ Assessment. 2012. In: International Journal of Information Quality, ISSN 1751-0457, Vol. 3, no 1, p. 49-70. Article in journal (Refereed).
    Abstract [en]

    Information quality assessment of technical documentation is an integral part of the quality management of products and services. Technical documentation is usually assessed using questionnaires, checklists, and reviews. This is cumbersome, costly and prone to errors. Acknowledging the fact that only people can assess certain quality aspects, we suggest complementing these with software-supported automatic quality assessment. One problem is the many different encodings and representations of documentation, e.g., various XML dialects and XML Schemas/DTDs. We present a system, a software infrastructure, where abstraction and meta-modelling are used to define reusable analyses and visualisations that are independent of specific encodings and representations. We show how this system is implemented and how it: 1) reads information from documentations; 2) performs analyses on this information; 3) visualises the results to help stakeholders understand quality issues. We introduce the system, its architecture and implementation, and its adaptation to different formats of documentation and types of analyses, along with a number of real-world cases exemplifying the feasibility and benefits of our approach. Altogether, our approach contributes to more efficient information quality assessments.


  • 31.
    Ericsson, Morgan
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Wingkvist, Anna
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Visualization of Text Clones in Technical Documentation. 2012. In: Proceedings of SIGRAD 2012: Interactive Visual Analysis of Data / [ed] Andreas Kerren and Stefan Seipel, 2012, p. 79-82. Conference paper (Refereed).
    Abstract [en]

    We present an initial study of how text clones can be detected and visualized in technical documentation, i.e., semi-structured text that describes a product, software, or service. The goal of the visualizations is to support human experts in assessing and prioritizing the clones, since certain clones can be intentional or harmless. We study some existing visualizations designed for source code and provide an initial and limited adaptation of these. A major difficulty in this adaptation is managing semi-structured technical documentation as opposed to structured source code.

  • 32.
    Golub, Koraljka
    et al.
    Linnaeus University, Faculty of Arts and Humanities, Department of Cultural Sciences.
    Hansson, Joacim
    Linnaeus University, Faculty of Arts and Humanities, Department of Cultural Sciences.
    Löwe, Welf
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Milrad, Marcelo
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    LNU as a Unique iSchool. 2016. Other (Other academic).
  • 33.
    Golub, Koraljka
    et al.
    Linnaeus University, Faculty of Arts and Humanities, Department of Cultural Sciences.
    Löwe, Welf
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Overview of Centre for Data Intensive Sciences in Applications at Linnaeus University: invited talk. 2017. In: A Calculus of Culture: Circumventing the Black Box of Culture Analytics, Guangxi University, China, March 21-23, 2017. Conference paper (Other academic).
  • 34.
    Gutzmann, Tobias
    et al.
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    Khairova, Antonina
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    Lundberg, Jonas
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    Löwe, Welf
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    Towards Comparing and Combining Points-to Analyses. 2009. In: Ninth IEEE International Working Conference on Source Code Analysis and Manipulation (SCAM 2009), IEEE, 2009, p. 45-54. Conference paper (Refereed).
    Abstract [en]

    Points-to information is the basis for many analyses and transformations, e.g., for program understanding and optimization. To justify new analysis techniques, they need to be compared to the state of the art regarding their accuracy and efficiency. Usually, benchmark suites are used to experimentally compare the different techniques. In this paper, we show that the accuracy of two analyses can only be compared in restricted cases, as there is no benchmark suite with exact points-to information, i.e., no Gold Standard, and it is hard to construct one for realistic programs. We discuss the challenges and possible traps that may arise when comparing different points-to analyses directly with each other, and with over- and under-approximations of a Gold Standard. Moreover, we discuss how different points-to analyses can be combined into a more precise one. We complement the paper with experiments comparing and combining different static and dynamic points-to analyses.
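    One way to picture the combination of analyses (an editorial sketch with invented points-to sets, not the paper's experiments): intersecting two sound over-approximations stays sound and is at least as precise as either input.

        # Two sound over-approximations of the points-to sets of variables x and y.
        a1 = {"x": {"o1", "o2", "o3"}, "y": {"o4"}}
        a2 = {"x": {"o2", "o3"}, "y": {"o4", "o5"}}

        # Their pointwise intersection is still sound and at least as precise.
        combined = {v: a1[v] & a2[v] for v in a1.keys() & a2.keys()}
        assert combined == {"x": {"o2", "o3"}, "y": {"o4"}}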

  • 35.
    Gutzmann, Tobias
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Lundberg, Jonas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Collections Frameworks for Points-to Analysis. 2012. In: IEEE 12th International Working Conference on Source Code Analysis and Manipulation (SCAM) 2012, IEEE, 2012, p. 4-13. Conference paper (Refereed).
    Abstract [en]

    Points-to information is the basis for many analyses and transformations, e.g., for program understanding and optimization. Collections frameworks are part of most modern programming languages' infrastructures and are used by many applications. The richness of features and the inherent structure of collection classes affect both the performance and the precision of points-to analysis negatively.

    In this paper, we discuss how to replace original collections frameworks with versions specialized for points-to analysis. We implement such a replacement for the Java Collections Framework and demonstrate its benefits for points-to analysis by applying it to three different points-to analysis implementations. In experiments, context-sensitive points-to analyses require, on average, 16-24% less time while at the same time being more precise. Context-insensitive analysis in conjunction with inlining also benefits in both precision and analysis cost.

  • 36.
    Gutzmann, Tobias
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Lundberg, Jonas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Feedback-driven Points-to Analysis. 2010. Report (Other academic).
    Abstract [en]

    Points-to analysis is a static program analysis that extracts reference information from a given input program. Its accuracy is limited due to abstractions that any such analysis needs to make. Further, the exact analysis results are unknown, i.e., no so-called Gold Standard exists for points-to analysis. This hinders the assessment of new ideas in points-to analysis, as results can be compared only relative to results obtained by other inaccurate analyses.

    In this paper, we present feedback-driven points-to analysis. We suggest performing (any classical) points-to analysis with the points-to results at certain program points guarded by a-priori upper bounds. Such upper bounds can come from other points-to analyses (this is of interest when different approaches are not strictly ordered in terms of accuracy) and from human insight, i.e., manual proofs that certain points-to relations are infeasible for every program run. This gives us a tool at hand to compute very accurate points-to analyses and, ultimately, to manually create a Gold Standard.

  • 37.
    Gutzmann, Tobias
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Lundberg, Jonas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Feedback-driven Points-to Analysis. 2011. In: 26th Symposium On Applied Computing (SAC 2011), TaiChung, March 21-24, 2011, 2011. Conference paper (Refereed).
    Abstract [en]

    In this paper, we present feedback-driven points-to analysis, where any classical points-to analysis has its points-to results at certain program points guarded by a-priori upper bounds. Such upper bounds can come from other points-to analyses (this is of interest when different approaches are not strictly ordered in terms of accuracy) and from human insight, i.e., manual proofs that certain points-to relations are infeasible for every program run.

  • 38.
    Gutzmann, Tobias
    et al.
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Computer Science.
    Lundberg, Jonas
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Computer Science.
    Löwe, Welf
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Computer Science.
    Towards Path-Sensitive Points-to Analysis. 2007. In: Seventh IEEE International Working Conference on Source Code Analysis and Manipulation (SCAM), IEEE Computer Society, Los Alamitos, CA, USA, 2007, p. 59-68. Conference paper (Refereed).
    Abstract [en]

    Points-to analysis is a static program analysis aiming at analyzing the reference structure of dynamically allocated objects at compile-time. It constitutes the basis for many analyses and optimizations in software engineering and compiler construction. Sparse program representations, such as Whole Program Points-to Graph (WPP2G) and Points-to SSA (P2SSA), represent only dataflow that is directly relevant for points-to analysis. They have proved to be practical in terms of analysis precision and efficiency. However, intra-procedural control flow information is removed from these representations, which sacrifices analysis precision to improve analysis performance.

    We show an approach for keeping control-flow-related information even in sparse program representations by representing control flow effects as operations on the data transferred, i.e., as dataflow information. These operations affect distinct paths of the program differently, thus yielding a certain degree of path-sensitivity. Our approach works with both WPP2G and P2SSA representations.

    We apply the approach to P2SSA-based and flow-sensitive points-to analysis and evaluate a context-insensitive and a context-sensitive variant. We assess our approach using abstract precision metrics. Moreover, we investigate the precision improvements and performance penalties when used as an input to three source-code-level analyses: dead code, cast safety, and null pointer analysis.

  • 39.
    Gutzmann, Tobias
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Custom-made Instrumentation Based on Static Analysis. 2011. In: WODA 2011: Ninth International Workshop on Dynamic Analysis, 2011, p. 18-23. Conference paper (Refereed).
    Abstract [en]

    Many dynamic analysis tools capture the occurrences of events at runtime. The longer programs are monitored, the more accurate the data they provide to the user. The runtime overhead must then be kept as low as possible, because it decreases the user's productivity. Runtime performance overhead occurs due to identifying events and storing them in a result data structure. We address the latter issue by generating custom-made instrumentation code for each program. By using static analysis to gain a priori knowledge about which events of interest can occur and where they can occur, tailored code for storing those events can be generated for each program. We evaluate our idea by comparing the runtime overhead of a general-purpose dynamic analysis tool that captures points-to information for Java programs with approaches based on custom-made instrumentation code. Experiments suggest highly reduced performance overhead for the latter.
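    A small editorial sketch of the idea: if static analysis proves which events can occur at a probe point, the generated recorder can use a dense, fixed-size structure instead of a growable general-purpose store. All names are hypothetical, not from the paper.

        def make_recorder(possible_events):
            # Generate a recorder specialized to the statically known event set:
            # a dense counter array sized by that set.
            index = {e: i for i, e in enumerate(sorted(possible_events))}
            counts = [0] * len(index)
            def record(event):
                counts[index[event]] += 1
            return record, counts

        # Suppose static analysis proved only these two events can occur here.
        record, counts = make_recorder({"alloc@17", "alloc@42"})
        record("alloc@17")
        assert counts == [1, 0]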

  • 40.
    Gutzmann, Tobias
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Reducing the Performance Overhead of Dynamic Analysis through Custom-made Agents. 2010. In: 5th International Workshop on Program Comprehension through Dynamic Analysis (PCODA 2010), 2010, p. 1-6. Conference paper (Refereed).
    Abstract [en]

    The usefulness of dynamic analysis tools for program comprehension increases with the amount of time a given analyzed program is monitored. Thus, monitoring the analyzed program in a production environment should give the best picture of how the program actually works. However, high performance overhead is especially undesirable in such a setting, because it decreases the user's productivity. Performance overhead occurs due to recognizing events that are of interest to the agent monitoring the program run and storing those events in data structures. We propose to address this issue by creating a custom-made agent for each program. By using static analysis to gain a priori knowledge about which events of interest can occur and where they can occur, tailored code for recognizing and storing those events can be generated for each program. We evaluate our idea by comparing a "general purpose" dynamic agent that captures fine-grained points-to information with custom-made agents that do the same job for specific programs. The latter show highly reduced performance overhead in practice. We also investigate how the precision of the static analysis affects the performance of the custom-made agents.

  • 41.
    Hagelbäck, Johan
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Liapota, Pavlo
    Softwerk AB.
    Lincke, Alisa
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Löwe, Welf
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Variants of Dynamic Time Warping and their Performance in Human Movement Assessment2019In: 21st International Conference on Artificial Intelligence (ICAI'19: July 29 - August 1, 2019, Las Vegas, USA), CSREA Press, 2019Conference paper (Refereed)
    Abstract [en]

    The advent of commodity 3D sensor technology enabled, amongst other things, the efficient and effective assessment of human movements. Statistical and machine learning approaches map recorded movement instances to expert scores to train models for the automated assessment of new movements. However, there are many variations in selecting the approaches and setting the parameters for achieving good performance, i.e., high scoring accuracy and low response time. The present paper researches the design space and the impact of sequence alignment on accuracy and response time. More specifically, we introduce variants of Dynamic Time Warping (DTW) for aligning the phases of slow and fast movement instances and assess their effect on scoring accuracy and response time. Results show that an automated stripping of leading and trailing frames not belonging to the movement (using one DTW variant), followed by an alignment of selected frames in the movements (based on another DTW variant), outperforms the original DTW and other suggested variants thereof. Since these results are independent of the selected learning approach and do not rely on the movement specifics, they can help improve the performance of automated human movement assessment in general.
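
    The paper's specific DTW variants are not reproduced here, but the stripping idea can be illustrated with standard subsequence ("open-begin/open-end") DTW, which aligns a movement against the best-fitting window of a longer recording and thereby ignores leading and trailing frames. A sketch in Python, with frame sequences as NumPy arrays of shape (frames, features):

        import numpy as np

        def dtw(a, b):
            """Classic DTW distance between two frame sequences."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = np.linalg.norm(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        def subsequence_dtw(query, stream):
            """Open-begin/open-end variant: the query may start and end
            anywhere in the stream, stripping irrelevant frames."""
            n, m = len(query), len(stream)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, :] = 0.0                             # free start column
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = np.linalg.norm(query[i - 1] - stream[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n].min()                         # free end column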

  • 42.
    Hagelbäck, Johan
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Lincke, Alisa
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Löwe, Welf
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Rall, Eduard
    AIMO AB.
    On the Agreement of Commodity 3D Cameras2019In: 23rd International Conference on Image Processing, Computer Vision, & Pattern Recognition (IPCV'19: July 29 - August 1, 2019, USA), CSREA Press, 2019Conference paper (Refereed)
    Abstract [en]

    The advent of commodity 3D sensor technology has, amongst other things, enabled the efficient and effective assessment of human movements. Machine learning approaches do not rely on manual definitions of gold standards for each new movement. However, to train models for the automated assessment of a new movement, they still need a lot of data that map recorded movements to expert judgments. As camera technology changes, this training needs to be repeated if a new camera does not agree with the old one. The present paper presents an inexpensive method to check the agreement of cameras, which, in turn, would allow for a safe reuse of trained models regardless of the cameras. We apply the method to the Kinect, Astra Mini, and Real Sense cameras. The results show that these cameras do not agree and that the models cannot be reused without an unacceptable decay in accuracy. However, the suggested method works independently of movements and cameras and could potentially save effort when integrating new cameras into an existing assessment environment.
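
    The paper's exact agreement measure is not reproduced in the abstract; one plausible sketch compares two synchronized skeleton streams joint by joint and reports the mean deviation, which could be checked against a tolerance before reusing trained models (the function, data shapes, and threshold are all assumptions):

        import numpy as np

        def camera_agreement(skel_a, skel_b):
            """Mean per-joint Euclidean deviation between two synchronized
            skeleton streams of shape (frames, joints, 3)."""
            assert skel_a.shape == skel_b.shape
            return float(np.linalg.norm(skel_a - skel_b, axis=2).mean())

        # Hypothetical use: reuse models only if the cameras agree closely.
        # if camera_agreement(kinect_stream, astra_stream) > TOLERANCE:
        #     retrain_models()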

  • 43.
    Heberle, Andreas
    et al.
    Karlsruhe University of Applied Sciences, Germany.
    Löwe, Welf
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). Softwerk AB, Växjö.
    Gustafsson, Anders
    Södra Skog, Växjö.
    Vorrei, Orjan
    Södra Skog, Växjö.
    Digitalization Canvas - Towards Identifying Digitalization Use Cases and Projects2017In: Journal of universal computer science (Online), ISSN 0948-695X, E-ISSN 0948-6968, Vol. 23, no 11, p. 1070-1097Article in journal (Refereed)
    Abstract [en]

    Nowadays, many companies are running digitalization initiatives or are planning to do so. There exist various models to evaluate the digitalization potential of a company and to define its maturity level in exploiting digitalization technologies summarized under buzzwords such as Big Data, Artificial Intelligence (AI), Deep Learning, and the Industrial Internet of Things (IIoT). While platforms, protocols, patterns, technical implementations, and standards are in place to adopt these technologies, small- to medium-sized enterprises (SMEs) still struggle with digitalization, because it is hard to identify the most beneficial projects given manageable costs, limited resources, and restricted know-how. In the present paper, we describe a real-life project where digitalization use cases have been identified, evaluated, and prioritized with respect to benefits and costs. This effort led to a portfolio of projects, some with quick and easy wins and others with mid- to long-term benefits. From our experiences, we extracted a general approach that could be useful for other SMEs to identify concrete digitalization activities and to define projects implementing their digital transformation. The results are summarized in a Digitalization Canvas.

  • 44.
    Hedenborg, Mathias
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Lundberg, Jonas
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Löwe, Welf
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Trapp, Martin
    Senacor Technologies AG, Germany.
    Approximating Context-Sensitive Program Information2015In: Proceedings Kolloquium Programmiersprachen (KPS 2015) / [ed] Jens Knoop, 2015Conference paper (Other academic)
    Abstract [en]

    Static program analysis is in general more precise if it is sensitive to execution contexts (execution paths). In this paper we propose χ-terms as a means to capture and manipulate context-sensitive program information in a data-flow analysis. We introduce finite k-approximation and loop approximation that limit the size of the context-sensitive information. These approximated χ-terms form a lattice with finite depth, thus guaranteeing that every data-flow analysis reaches a fixed point.
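
    The χ-term formalism itself is beyond the abstract, but the k-approximation idea can be illustrated with k-limited contexts: truncating each context to its k most recent elements makes the abstract domain finite, which bounds the lattice depth and thus forces the fixed-point iteration to terminate. A toy sketch (the representation is an assumption, not the paper's):

        def approximate(context, k):
            """k-approximation: keep only the k most recent context elements,
            so only finitely many distinct contexts can ever arise."""
            return tuple(context[-k:])

        # Contexts that would grow without bound in loops or recursion collapse:
        # approximate(("a", "b", "c", "d"), 2) == ("c", "d")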

  • 45.
    Hönel, Sebastian
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Ericsson, Morgan
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Löwe, Welf
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Wingkvist, Anna
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    A changeset-based approach to assess source code density and developer efficacy2018In: ICSE '18 Proceedings of the 40th International Conference on Software Engineering: Companion Proceeedings, IEEE , 2018, p. 220-221Conference paper (Refereed)
    Abstract [en]

    The productivity of a (team of) developer(s) can be expressed as a ratio between effort and delivered functionality. Several different estimation models have been proposed. These are based on statistical analysis of real development projects; their accuracy depends on the number and the precision of data points. We propose a data-driven method to automate the generation of precise data points. Functionality is proportional to code size, and Lines of Code (LoC) is a fundamental metric of code size. However, code size and LoC are not well defined, as they could include or exclude lines that do not affect the delivered functionality. We present a new approach to measure the density of code in software repositories. We demonstrate how the accuracy of development time spent in relation to delivered code can be improved when basing it on net- instead of gross-size measurements. We validated our tool by studying ca. 1,650 open-source software projects.
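
    A rough sketch of the density idea: gross size counts all delivered lines, net size discards lines that cannot affect functionality (here, only blanks and single-line comments, which is deliberately simplistic), and density is their ratio:

        def code_density(lines):
            """Net/gross ratio of a changeset's lines (Python-style comments)."""
            gross = len(lines)
            net = sum(1 for line in lines
                      if line.strip() and not line.strip().startswith("#"))
            return net / gross if gross else 0.0

        # code_density(["x = 1", "", "# a comment", "y = 2"]) == 0.5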

  • 46.
    Hönel, Sebastian
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Ericsson, Morgan
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Löwe, Welf
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Wingkvist, Anna
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Importance and Aptitude of Source code Density for Commit Classification into Maintenance Activities2019In: QRS 2019 Proceedings / [ed] Dr. David Shepherd, 2019Conference paper (Refereed)
    Abstract [en]

    Commit classification, the automatic classification of the purpose of changes to software, can support the understanding and quality improvement of software and its development process. We introduce code density of a commit, a measure of the net size of a commit, as a novel feature and study how well it is suited to determine the purpose of a change. We also compare the accuracy of code-density-based classifications with existing size-based classifications. By applying standard classification models, we demonstrate the significance of code density for the accuracy of commit classification. We achieve up to 89% accuracy and a Kappa of 0.82 for the cross-project commit classification where the model is trained on one project and applied to other projects. Such highly accurate classification of the purpose of software changes helps to improve the confidence in software (process) quality analyses exploiting this classification information.
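
    A sketch of such a classification setup with scikit-learn; the two features (density and net size) and the example commits are placeholders, not the paper's feature set or data:

        from sklearn.ensemble import RandomForestClassifier

        # Hypothetical training data: per commit, [code density, net size]
        # and a maintenance-activity label.
        X = [[0.9, 120], [0.2, 15], [0.7, 60], [0.1, 8]]
        y = ["perfective", "corrective", "adaptive", "corrective"]

        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
        print(clf.predict([[0.8, 90]]))   # classify an unseen commit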

  • 47. Kessler, Christoph
    et al.
    Löwe, Welf
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Datavetenskap.
    A Framework for Performance-aware Composition of Explicitly Parallel Components2007In: Parallel Computing: Architectures, Algorithms and Applications: Book of Abstracts of the International Conference ParCo 2007, John von Neumann-Institut für Computing (NIC), Jülich , 2007, p. 57-Conference paper (Refereed)
  • 48. Kessler, Christoph
    et al.
    Löwe, Welf
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Computer Science.
    A Framework for Performance-aware Composition of Explicitly Parallel Components2008In: Parallel Computing: Architectures, Algorithms and Applications: Proceedings of the International Conference ParCo 2007, IOS Press , 2008Conference paper (Refereed)
  • 49.
    Kessler, Christoph
    et al.
    Linköping University, IDA.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Context-Aware Composition of Parallel Components2011Conference paper (Refereed)
    Abstract [en]

    We describe the principles of a novel framework for performance-aware composition of sequential and explicitly parallel software components with implementation variants. Context-aware composition dynamically selects, for each call of a performance-aware component, the expected best implementation variant, processor allocation, and schedule for the actual problem size and the processors available. The selection functions are pre-computed statically using machine learning based on profiling data.

  • 50.
    Kessler, Christoph
    et al.
    Linköping University, IDA.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Optimized composition of performance-aware parallel components2012In: Concurrency and Computation, ISSN 1532-0626, E-ISSN 1532-0634, Vol. 24, no 5, p. 481-498Article in journal (Refereed)
    Abstract [en]

    We describe the principles of a novel framework for performance-aware composition of sequential and explicitly parallel software components with implementation variants. Automatic composition results in a table-driven implementation that, for each parallel call of a performance-aware component, looks up the expected best implementation variant, processor allocation, and schedule given the current problem and processor group sizes. The dispatch tables are computed off-line at component deployment time by an interleaved dynamic programming algorithm from time-prediction meta-code provided by the component supplier.
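
    A minimal sketch of the table-driven dispatch, with invented size buckets, variant names, and table entries; the real tables are computed off-line by the dynamic programming algorithm mentioned above:

        import bisect

        SIZE_BUCKETS = [64, 1024, 16384]      # upper bounds of size buckets
        TABLE = {                             # (bucket, processors) -> variant
            (0, 1): "sequential",  (0, 4): "sequential",
            (1, 1): "sequential",  (1, 4): "parallel_shared",
            (2, 1): "blocked",     (2, 4): "parallel_distributed",
        }

        def dispatch(problem_size, processors):
            """Look up the expected best variant for the current call."""
            bucket = min(bisect.bisect_left(SIZE_BUCKETS, problem_size),
                         len(SIZE_BUCKETS) - 1)
            return TABLE[(bucket, processors)]

        # dispatch(5000, 4) -> "parallel_distributed"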
