lnu.se Publications
1 - 50 of 139
  • 1.
    Abbas, Nadeem
    et al.
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Andersson, Jesper
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Löwe, Welf
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Autonomic Software Product Lines (ASPL) (2010). In: ECSA '10 Proceedings of the Fourth European Conference on Software Architecture: Companion Volume / [ed] Carlos E. Cuesta, ACM Press, 2010, pp. 324-331. Conference paper (Refereed)
    Abstract [en]

    We describe ongoing work on a variability mechanism for Autonomic Software Product Lines (ASPL). Autonomic software product lines have self-management characteristics that make product-line instances more resilient to context changes and to some aspects of product-line evolution. Instances sense the context and select and bind the best component variants to variation points at run time. The variability mechanism we describe combines profile-guided dispatch with off-line and on-line training processes. Together they form a simple yet powerful variability mechanism that continuously learns which variants to bind given the current context and system goals.
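The mechanism sketched in this abstract — offline training feeding a profile-guided dispatch that binds the best variant per context at run time — can be illustrated with a minimal sketch. All names and the timing-based profiling policy below are hypothetical, not taken from the paper:

```python
# Hypothetical sketch of profile-guided dispatch: an offline training
# phase records the fastest variant per context; at run time the
# variation point binds whichever variant the profile recommends.
import time

def sort_small(xs):   # stand-in for a variant tuned for short inputs
    return sorted(xs)

def sort_large(xs):   # stand-in for a variant tuned for long inputs
    return sorted(xs)

VARIANTS = {"small": sort_small, "large": sort_large}

def profile_offline(training_contexts):
    """Offline training: measure each variant per context, keep the fastest."""
    table = {}
    for ctx, sample in training_contexts.items():
        timings = {}
        for name, variant in VARIANTS.items():
            t0 = time.perf_counter()
            variant(sample)
            timings[name] = time.perf_counter() - t0
        table[ctx] = min(timings, key=timings.get)
    return table

def dispatch(table, ctx, xs):
    """Variation point: bind the profiled best variant for this context."""
    return VARIANTS[table[ctx]](xs)

table = profile_offline({"tiny": [3, 1, 2], "big": list(range(1000, 0, -1))})
print(dispatch(table, "tiny", [9, 4, 7]))  # → [4, 7, 9]
```

Online training, as described in the abstract, would additionally update `table` from measurements taken during production runs.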

  • 2.
    Abbas, Nadeem
    et al.
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Andersson, Jesper
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Löwe, Welf
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Towards Autonomic Software Product Lines (ASPL) - A Technical Report (2011). Report (Other academic)
    Abstract [en]

    This report describes work in progress to develop Autonomic Software Product Lines (ASPL). ASPL is a dynamic software product line approach with a novel variability handling mechanism that enables traditional software product lines to adapt themselves at runtime in response to changes in their context, requirements, and business goals. The ASPL variability mechanism is composed of three key activities: 1) context profiling, 2) context-aware composition, and 3) online learning. Context profiling is an offline activity that prepares a knowledge base for context-aware composition. Context-aware composition uses the knowledge base to derive a new product or adapt an existing product based on a product line's context attributes and goals. Online learning optimizes the knowledge base to remove errors and suboptimal information and to incorporate new knowledge. Together, the three activities form a simple yet powerful variability handling mechanism that learns and adapts a system at runtime in response to changes in system context and goals. We evaluated the ASPL variability mechanism on three small-scale software product lines with promising results. The ASPL approach is, however, still at an initial stage and requires improved development support and more rigorous evaluation.

    Download full text (pdf)
    fulltext
  • 3.
    Ambrosius, Robin
    et al.
    Dezember IT GmbH, Germany.
    Ericsson, Morgan
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM).
    Löwe, Welf
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM).
    Wingkvist, Anna
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM).
    Interviews Aided with Machine Learning (2018). In: Perspectives in Business Informatics Research. BIR 2018: 17th International Conference, BIR 2018, Stockholm, Sweden, September 24-26, 2018, Proceedings / [ed] Zdravkovic J., Grabis J., Nurcan S., Stirna J., Springer, 2018, Vol. 330, pp. 202-216. Conference paper (Refereed)
    Abstract [en]

    We have designed and implemented a Computer Aided Personal Interview (CAPI) system that learns from expert interviews and can support less experienced interviewers by, for example, suggesting questions to ask or skip. We were particularly interested in streamlining the due diligence process when estimating the value of software startups. For our design we evaluated several machine learning algorithms and their trade-offs, and in a small case study we evaluated their implementation and performance. We find that while there is room for improvement, the system can learn and recommend questions. The CAPI system can in principle be applied to any domain in which long interview sessions should be shortened without sacrificing the quality of the assessment.

  • 4.
    Andersson, Jesper
    et al.
    Växjö universitet, Fakulteten för matematik/naturvetenskap/teknik, Matematiska och systemtekniska institutionen. Computer Science.
    Ericsson, Morgan
    Växjö universitet, Fakulteten för matematik/naturvetenskap/teknik, Matematiska och systemtekniska institutionen. Computer Science.
    Kessler, Christoph
    Löwe, Welf
    Växjö universitet, Fakulteten för matematik/naturvetenskap/teknik, Matematiska och systemtekniska institutionen. Computer Science.
    Profile-guided Composition (2008). In: 7th International Symposium on Software Composition, Springer, 2008, pp. 157-164. Conference paper (Refereed)
  • 5.
    Andersson, Jesper
    et al.
    Växjö universitet, Fakulteten för matematik/naturvetenskap/teknik, Matematiska och systemtekniska institutionen. Computer Science.
    Ericsson, Morgan
    Växjö universitet, Fakulteten för matematik/naturvetenskap/teknik, Matematiska och systemtekniska institutionen. Computer Science.
    Löwe, Welf
    Växjö universitet, Fakulteten för matematik/naturvetenskap/teknik, Matematiska och systemtekniska institutionen. Computer Science.
    Automatic Rule Derivation for Adaptive Architectures (2008). In: 8th IEEE/IFIP Working Conference on Software Architecture, IEEE, 2008, pp. 323-326. Conference paper (Refereed)
    Abstract [en]

    This paper discusses ongoing work in adaptive architectures concerning automatic adaptation rule derivation. Adaptation is rule-action based, but deriving rules that meet the adaptation goals is tedious and error-prone. We present an approach that uses model-driven derivation and training to derive adaptation rules automatically, and exemplify it in an environment for scientific computing.

  • 6.
    Andersson, Jesper
    et al.
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Heberle, Andreas
    Kirchner, Jens
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Löwe, Welf
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Service Level Achievements: Distributed Knowledge for Optimal Service Selection (2011). In: Proceedings - 9th IEEE European Conference on Web Services, ECOWS 2011 / [ed] Gianluigi Zavattaro, Ulf Schreier, and Cesare Pautasso, IEEE, 2011, pp. 125-132. Conference paper (Refereed)
    Abstract [en]

    In a service-oriented setting, where services are composed to provide end-user functionality, it is a challenge to find the service components with best-fit functionality and quality. A decision based on information mainly provided by service providers is inadequate, as it cannot be trusted in general. In this paper, we discuss service compositions in an open market scenario where automated best-fit service selection and composition are instead based on Service Level Achievements. Continuous monitoring updates the actual Service Level Achievements, which can lead to dynamically changing compositions. Measurements of real-life services exemplify the approach.

  • 7.
    Backåberg, Sofia
    et al.
    Linnéuniversitetet, Fakulteten för Hälso- och livsvetenskap (FHL), Institutionen för hälso- och vårdvetenskap (HV). University of Calgary, Canada.
    Hellström, Amanda
    Linnéuniversitetet, Fakulteten för Hälso- och livsvetenskap (FHL), Institutionen för hälso- och vårdvetenskap (HV). Linnéuniversitetet, Kunskapsmiljöer Linné, Hållbar hälsa.
    Fagerström, Cecilia
    Linnéuniversitetet, Fakulteten för Hälso- och livsvetenskap (FHL), Institutionen för hälso- och vårdvetenskap (HV). Linnéuniversitetet, Kunskapsmiljöer Linné, Hållbar hälsa. Region Kalmar County, Sweden.
    Halling, Anders
    Lund University, Sweden.
    Lincke, Alisa
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM).
    Löwe, Welf
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM).
    Ekstedt, Mirjam
    Linnéuniversitetet, Fakulteten för Hälso- och livsvetenskap (FHL), Institutionen för hälso- och vårdvetenskap (HV). Karolinska Institutet, Sweden.
    Evaluation of the Skeleton Avatar Technique for Assessment of Mobility and Balance Among Older Adults (2020). In: Frontiers of Computer Science, ISSN 2095-2228, E-ISSN 2095-2236, Vol. 2, article id 601271. Journal article (Refereed)
    Abstract [en]

    Background: Mobility and balance are essential for older adults’ well-being and independence and for the ability to maintain physical activity. Early identification of functional impairment may enable early risk-of-fall assessments and preventive measures. There is a need for new solutions that assess functional ability in easy, efficient, and accurate ways and that can be used clinically, frequently, and repetitively. Therefore, we need to understand how functional tests and expert assessments (EAs) correlate with new techniques.

    Objective: To explore whether the skeleton avatar technique (SAT) can predict the results of functional tests (FTs) of mobility and balance: Timed Up and Go (TUG), the 30-s chair stand test (30sCST), the 4-stage balance test (4SBT), and EA scoring of movement quality.

    Methods: Fifty-four older adults (65+ years) were recruited through pensioners’ associations. The test procedure contained three standardized FTs: TUG, 30sCST, and 4SBT. The test performances were recorded using a three-dimensional SAT camera. EA scoring was performed based on the video recordings of the 30sCST. Functional ability scores were aggregated from balance and mobility scores. Probability theory-based statistical analyses were used on the data to aggregate sets of individual variables into scores, with correlation analysis used to assess the dependency between variables and between scores. Machine learning techniques were used to assess the appropriateness of easily observable variables/scores as predictors of the other variables included.

    Results: The results indicate that SAT data of the fourth 4SBT stage could be used to predict the aggregated results of all stages of 4SBT (with 7.82% mean absolute error), the results of the 30sCST (11.0%), the TUG test (8.03%), and the EA of the sit-to-stand movement (8.79%). There is a moderate (significant) correlation between the 30sCST and the 4SBT (0.31, p = 0.03), but not between the EA and the 30sCST.

    Conclusion: SAT can predict the results of the 4SBT, the 30sCST (moderate accuracy), and the TUG test and might add important qualitative information to the assessment of movement performance in active older adults. SAT might in the future provide the means for a simple, easy, and accessible assessment of functional ability among older adults.

    Download full text (pdf)
    Skeleton Avatar Technique in Assessment of Mobility and Balance
  • 8.
    Barkmann, Henrike
    et al.
    Växjö universitet, Fakulteten för matematik/naturvetenskap/teknik, Matematiska och systemtekniska institutionen. Datalogi.
    Lincke, Rüdiger
    Växjö universitet, Fakulteten för matematik/naturvetenskap/teknik, Matematiska och systemtekniska institutionen. Datalogi.
    Löwe, Welf
    Växjö universitet, Fakulteten för matematik/naturvetenskap/teknik, Matematiska och systemtekniska institutionen. Datalogi.
    Quantitative Evaluation of Software Quality Metrics in Open-Source Projects (2009). In: Proceedings of The 2009 IEEE International Workshop on Quantitative Evaluation of large-scale Systems and Technologies (QuEST09), IEEE, 2009. Conference paper (Refereed)
    Abstract [en]

    The validation of software quality metrics lacks statistical significance. One reason for this is that the data collection requires considerable effort. To help solve this problem, we developed tools for metrics analysis of a large number of software projects (146 projects with ca. 70,000 classes and interfaces and over 11 million lines of code). Moreover, validation of software quality metrics should focus on relevant metrics, i.e., correlated metrics need not be validated independently. Based on our statistical basis, we identify correlations between several metrics from well-known object-oriented metrics suites. In addition, we present early results on typical metrics values and possible thresholds.
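The correlation analysis mentioned in the abstract can be sketched with a plain Pearson correlation between two metric series: a strong correlation means validating one metric largely covers the other. The metric values below are invented for illustration, not data from the study:

```python
# Hypothetical sketch: Pearson's r between two per-class size metrics.
# A value near 1 suggests the metrics carry redundant information.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Made-up values for two size-related metrics (e.g. lines of code and
# number of methods per class); not from the paper:
loc     = [120, 300, 45, 500, 80]
methods = [10,  25,  4,  40,  7]
print(round(pearson(loc, methods), 3))
```

For strongly correlated pairs like these, a validation study would pick one representative metric instead of validating both independently.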

  • 9.
    Binder, Walter
    et al.
    University of Lugano.
    Bodden, Eric
    Technische Universität Darmstadt.
    Löwe, Welf
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap (DV).
    Software Composition: 12th International Conference, SC 2013, Budapest, Hungary, June 19, 2013 (2013). Proceedings (editorship) (Refereed)
  • 10.
    Björnberg, Dag
    et al.
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM). Softwerk AB, Sweden.
    Ericsson, Morgan
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM).
    Lindeberg, Johan
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för skog och träteknik (SOT).
    Löwe, Welf
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM).
    Nordqvist, Jonas
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för matematik (MA).
    Image generation of log ends and patches of log ends with controlled properties using generative adversarial networks (2024). In: Signal, Image and Video Processing, ISSN 1863-1703, E-ISSN 1863-1711, Vol. 18, pp. 6481-6489. Journal article (Refereed)
    Abstract [en]

    The appearance of the log cross-section provides important information when assessing the quality of the log, where properties to consider include pith location and density of annual rings. This makes tasks like pith location estimation and annual ring detection of great interest. However, creating labeled training data for these tasks can be time-consuming and subject to misjudgments. For this reason, we aim to create generated training data with controlled properties of pith location and number of annual rings. We propose a two-step generator based on generative adversarial networks in which we can completely avoid manual labeling, not only when generating training data but also during training of the generator itself. This opens up the possibility of training the generator on other types of log end data without the need to manually label new training data. The same method is used to create two generated training datasets: one of entire log ends and one of patches of log ends. To evaluate how the generated data compares to real data, we train two deep learning models to perform pith location estimation and ring counting, respectively. The models are trained separately on real and generated data and evaluated on real data only. The results show that the performance of both pith location estimation and ring counting can be improved by replacing real training data with larger sets of generated training data.

    Download full text (pdf)
    fulltext
  • 11.
    Björneld, Olof
    et al.
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM). Region Kalmar, Sweden.
    Carlsson, Martin
    Linnéuniversitetet, Fakulteten för Hälso- och livsvetenskap (FHL), Institutionen för medicin och optometri (MEO). Region Kalmar län.
    Löwe, Welf
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM).
    Case study: Feature engineering inspired by domain experts on real world medical data (2023). In: Intelligence-Based Medicine, ISSN 2666-5212, Vol. 8, article id 100110. Journal article (Refereed)
    Abstract [en]

    Performing data mining projects for knowledge discovery based on health data produced in daily health care and stored in electronic health records (EHR) can be time-consuming. This study exemplifies that the involvement of a data scientist improves classification performance. We performed a case study comprising two real-world medical research projects, comparing feature engineering and knowledge discovery based on classification performance. Project (P1) comprised 82,742 patients with the research question “Can we predict patient falls by use of EHR data?”, and the second project (P2) included 23,396 patients with the focus on “Negative side effects of antiepileptic drug consumption on bone structure”.

    The study yielded three salient results. (i) It is valuable for medical researchers to involve a data scientist when medical research based on real-world medical data is performed. The findings were justified with an analysis of classification metrics when iteratively engineered features were used. The features were generated by domain experts and computer scientists in collaboration with medical researchers. We gave this process the name domain knowledge-driven feature engineering (KDFE).

    To evaluate the classification performance, the metric area under the receiver operating characteristic curve (AUROC) was used. (ii) KDFE benefits domain experts in quantitative terms. When KDFE was compared to baseline, the average classification performance measured by AUROC for the engineered features rose for P1 from 0.62 to 0.82 and for P2 from 0.61 to 0.89 (p-values << 0.001). (iii) The engineered features were represented in a systematic structure, which is the foundation of a theoretical model for automated KDFE (aKDFE).

    To our knowledge, this is the first study that proves, via quantitative measures, that KDFE adds value to real-world medical research. However, the method is not limited to the medical domain. Other areas with similar data properties should also benefit from KDFE.

    Download full text (pdf)
    fulltext
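As background for the AUROC figures quoted in the abstract above, here is a minimal sketch of how AUROC can be computed from classifier scores and labels. The features and numbers are invented for illustration and are not from the study:

```python
# Hypothetical sketch of the evaluation metric used above: AUROC is
# the probability that a classifier scores a random positive case
# higher than a random negative one (ties count half).
def auroc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A feature that separates the classes better raises the AUROC,
# which is the sense in which engineered features "rose from 0.62 to 0.82".
labels = [1, 1, 1, 0, 0, 0]
weak_feature   = [0.6, 0.4, 0.55, 0.5, 0.45, 0.3]
strong_feature = [0.9, 0.8, 0.7,  0.4, 0.3,  0.2]
print(auroc(weak_feature, labels))    # modest separation
print(auroc(strong_feature, labels))  # 1.0: perfect ranking
```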
  • 12.
    Bravo, Giangiacomo
    et al.
    Linnéuniversitetet, Fakulteten för samhällsvetenskap (FSV), Institutionen för samhällsstudier (SS).
    Laitinen, Mikko
    Linnéuniversitetet, Fakulteten för konst och humaniora (FKH), Institutionen för språk (SPR).
    Levin, Magnus
    Linnéuniversitetet, Fakulteten för konst och humaniora (FKH), Institutionen för språk (SPR).
    Löwe, Welf
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM), Institutionen för datavetenskap (DV).
    Petersson, Göran
    Linnéuniversitetet, Fakulteten för Hälso- och livsvetenskap (FHL), Institutionen för medicin och optometri (MEO).
    Big Data in Cross-Disciplinary Research: J.UCS Focused Topic (2017). In: Journal of universal computer science (Online), ISSN 0948-695X, E-ISSN 0948-6968, Vol. 23, no. 11, pp. 1035-1037. Journal article (Other academic)
  • 13.
    Danylenko, Antonina
    et al.
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Kessler, Christoph
    Linköping University, Department of Computer and Information Science.
    Löwe, Welf
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Comparing Machine Learning Approaches for Context-Aware Composition (2011). In: Software Composition: 10th International Conference, SC 2011, Zurich, Switzerland, June 30 - July 1, 2011, Proceedings / [ed] Sven Apel, Ethan Jackson, Berlin: Springer, 2011, Vol. 6708, pp. 18-33. Chapter in book, part of anthology (Refereed)
    Abstract [en]

    Context-Aware Composition allows optimal variants of algorithms, data structures, and schedules to be selected automatically at runtime using generalized dynamic Dispatch Tables. These tables grow exponentially with the number of significant context attributes. To make Context-Aware Composition scale, we suggest four alternative implementations of Dispatch Tables, all well known in the field of machine learning: Decision Trees, Decision Diagrams, Naive Bayes, and Support Vector Machine classifiers. We assess their decision overhead and memory consumption theoretically and practically in a number of experiments on different hardware platforms. Decision Diagrams turn out to be more compact than Dispatch Tables, almost as accurate, and faster in decision making. Using Decision Diagrams in Context-Aware Composition leads to better scalability, i.e., Context-Aware Composition can be applied at more program points and regard more context attributes than before.
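The scalability problem named in the abstract — a Dispatch Table needs one cell per combination of context-attribute values, so it grows exponentially, while a structure that ignores irrelevant attributes can stay compact — can be sketched as follows. The context attributes and the variant-selection rule are toy assumptions, not from the paper:

```python
# Hypothetical illustration: full dispatch table vs a compressed
# representation that drops an attribute the decision never uses
# (the compression idea behind decision trees/diagrams).
from itertools import product

def best_variant(cores, input_size, cache_warm):
    # Toy ground truth: only two of the three attributes matter.
    return "parallel" if cores > 1 and input_size == "large" else "sequential"

attrs = {"cores": [1, 2, 4, 8],
         "input_size": ["small", "large"],
         "cache_warm": [False, True]}

# Full dispatch table: one cell per context combination (4 * 2 * 2).
table = {ctx: best_variant(*ctx) for ctx in product(*attrs.values())}
print(len(table))  # 16 cells

# Compressed form: merge cells that agree once the irrelevant
# attribute (cache_warm) is dropped -- same decisions, half the cells.
compressed = {(c, s): best_variant(c, s, True)
              for c in attrs["cores"] for s in attrs["input_size"]}
print(len(compressed))  # 8 cells
```

With more attributes the gap widens exponentially, which is why the compact representations scale to more program points.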

  • 14.
    Danylenko, Antonina
    et al.
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Lundberg, Jonas
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Löwe, Welf
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Decisions: Algebra and Implementation (2011). In: Machine Learning and Data Mining in Pattern Recognition: 7th International Conference on Machine Learning and Data Mining in Pattern Recognition, MLDM 2011, New York, NY, USA, August/September 2011, Proceedings / [ed] Perner, Petra, Berlin, Heidelberg: Springer, 2011, Vol. 6871, pp. 31-45. Chapter in book, part of anthology (Refereed)
    Abstract [en]

    This paper presents a generalized theory for capturing and manipulating classification information. We define a decision algebra which models decision-based classifiers as higher-order decision functions, abstracting from implementations using decision trees (or similar), decision rules, and decision tables. As a proof of the decision algebra concept, we compare decision trees with decision graphs, another instantiation of the proposed theoretical framework, which implement the decision algebra operations efficiently and capture classification information in a non-redundant way. Compared to classical decision tree implementations, decision graphs gain up to 20% in learning and classification speed without accuracy loss and reduce memory consumption by 44%. This is confirmed by experiments.

  • 15.
    Danylenko, Antonina
    et al.
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap (DV).
    Lundberg, Jonas
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap (DV).
    Löwe, Welf
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap (DV).
    Decisions: Algebra, Implementation, and First Experiments (2014). In: Journal of universal computer science (Online), ISSN 0948-695X, E-ISSN 0948-6968, Vol. 20, no. 9, pp. 1174-1231. Journal article (Refereed)
    Abstract [en]

    Classification is a constitutive part of many different fields of Computer Science. There exist several approaches that capture and manipulate classification information in order to construct a specific classification model. These approaches are often tightly coupled to certain learning strategies, special data structures for capturing the models, and to how common problems, e.g. fragmentation, replication and model overfitting, are addressed. In order to unify these different classification approaches, we define a Decision Algebra which defines models for classification as higher-order decision functions, abstracting from their implementations using decision trees (or similar), decision rules, decision tables, etc. Decision Algebra defines operations for learning, applying, storing, merging, approximating, and manipulating models for classification, along with some general algebraic laws regardless of the implementation used. The Decision Algebra abstraction has several advantages. First, several useful Decision Algebra operations (e.g., learning and deciding) can be derived based on the implementation of a few core operations (including merging and approximating). Second, applications using classification can be defined regardless of the different approaches. Third, certain properties of Decision Algebra operations can be proved regardless of the actual implementation. For instance, we show that the merger of a series of probably accurate decision functions is even more accurate, which can be exploited for efficient and general online learning. As a proof of the Decision Algebra concept, we compare decision trees with decision graphs, an efficient implementation of the Decision Algebra core operations, which capture classification models in a non-redundant way. Compared to classical decision tree implementations, decision graphs are 20% faster in learning and classification without accuracy loss and reduce memory consumption by 44%. This is the result of experiments on a number of standard benchmark data sets comparing accuracy, access time, and size of decision graphs and trees as constructed by the standard C4.5 algorithm. Finally, in order to test our hypothesis about increased accuracy when merging decision functions, we merged a series of decision graphs constructed over the data sets. The result shows that on each step the accuracy of the merged decision graph increases, with a final accuracy growth of up to 16%.

  • 16.
    Danylenko, Antonina
    et al.
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Löwe, Welf
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Adaptation of Legacy Codes to Context-Aware Composition Using Aspect-Oriented Programming (2012). In: Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349, Vol. 7306, pp. 68-85. Journal article (Refereed)
    Abstract [en]

    The context-aware composition (CAC) approach has been shown to improve the performance of object-oriented applications on modern multi-core hardware by selecting between different (sequential and parallel) component variants in different (call and hardware) contexts. However, introducing CAC in legacy applications can be time-consuming and requires considerable effort for changing and adapting the existing code. We observe that CAC concerns, like offline component variant profiling and runtime selection of the champion variant, can be separated from the legacy application code. We suggest separating and reusing these CAC concerns when introducing CAC to different legacy applications.

    To automate this process, we propose an approach based on Aspect-Oriented Programming (AOP) and Reflective Programming. Manual adaptation to CAC requires more programming than the AOP-based approach, almost three times as much in our experiments. Moreover, the AOP-based approach speeds up the execution time of the legacy code, in our experiments by factors of up to 2.3 and 3.4 on multi-core machines with two and eight cores, respectively. The AOP-based approach introduces only a small runtime overhead compared to the manually optimized CAC approach; for different problems, this overhead is about 2-9% relative to the manual adaptation approach. These results suggest that AOP-based adaptation can effectively adapt legacy applications to CAC, which makes them run efficiently even on multi-core machines.

  • 17.
    Danylenko, Antonina
    et al.
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Löwe, Welf
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Context-Aware Recommender Systems for Non-functional Requirements (2012). In: Third International Workshop on Recommendation Systems for Software Engineering (RSSE 2012), 2012, pp. 80-84. Conference paper (Refereed)
    Abstract [en]

    For large software projects, system designers have to adhere to a significant number of functional and non-functional requirements, which makes software development a complex engineering task. If these requirements change during the development process, complexity increases even further. In this paper, we suggest recommendation systems based on context-aware composition that enable a system designer to postpone and automate decisions regarding efficiency-related non-functional requirements, such as performance, and to focus on the design of the core functionality of the system instead.

    Context-aware composition suggests the optimal component variants of a system for different static contexts (e.g., software and hardware environment) or even different dynamic contexts (e.g., actual parameters and resource utilization). Thus, an efficiency-related non-functional requirement can be automatically optimized, statically or dynamically, by providing possible component variants. Such a recommender system reduces the time and effort spent on manually developing optimal applications that adapt to different (static or dynamic) contexts and even to changes thereof.

    Download full text (pdf)
    fulltext
  • 18.
    Danylenko, Antonina
    et al.
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap (DV).
    Löwe, Welf
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap (DV).
    Merging Classifiers of Different Classification Approaches (2014). In: 2014 IEEE International Conference on Data Mining Workshop (ICDMW), IEEE Press, 2014, pp. 706-715. Conference paper (Refereed)
    Abstract [en]

    Classification approaches, e.g. decision trees or Naive Bayesian classifiers, are often tightly coupled to learning strategies, special data structures, the type of information captured, and to how common problems, e.g. overfitting, are addressed. This prevents a simple combination of classifiers of different classification approaches learned over different data sets. Many different methods of combining classification models have been proposed. However, most of them are based on a combination of the actual results of classification rather than producing a new, possibly more accurate, classifier capturing the combined classification information. In this paper we propose a new general approach to combining different classification models based on the concept of Decision Algebra, which provides a unified formalization of classification approaches as higher-order decision functions. It defines a general combining operation, referred to as the merge operation, abstracting from implementation details of different classifiers. We show that the combination of a series of probably accurate decision functions (regardless of the actual implementation) is even more accurate. This can be exploited, e.g., for distributed learning and for efficient general online learning. We support our results by combining a series of decision graphs and Naive Bayesian classifiers learned from random samples of the data sets. The result shows that on each step the accuracy of the combined classifier increases, with a total accuracy growth of up to 17%.
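The core claim above — that merging a series of probably accurate decision functions yields a more accurate combined classifier — is, in spirit, a majority-vote effect. A toy sketch with synthetic noisy classifiers (not the paper's decision graphs or Naive Bayes models):

```python
# Hypothetical sketch of the merge idea: several decision functions,
# each right most of the time, are merged by majority vote; the
# merged classifier is more accurate than any single one.
import random

random.seed(0)

def make_noisy_classifier(error_rate):
    """A decision function that is correct with prob. 1 - error_rate."""
    def classify(x):
        truth = x >= 0
        return truth if random.random() > error_rate else not truth
    return classify

def merge(classifiers):
    """Majority-vote merge, abstracting from each implementation."""
    def merged(x):
        votes = sum(c(x) for c in classifiers)
        return votes * 2 > len(classifiers)
    return merged

data = [random.uniform(-1, 1) for _ in range(2000)]
single = make_noisy_classifier(0.3)
combined = merge([make_noisy_classifier(0.3) for _ in range(15)])

acc = lambda clf: sum(clf(x) == (x >= 0) for x in data) / len(data)
print(acc(single) < acc(combined))  # the merged classifier wins
```

With 15 voters each ~70% accurate, the majority is right roughly 95% of the time, mirroring the stepwise accuracy growth reported in the abstract.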

  • 19.
    Danylenko, Antonina
    et al.
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Zimmermann, Wolf
    Martin-Luther-Universität Halle Wittenberg, Institut für Informatik.
    Löwe, Welf
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
Decision Algebra: Parameterized Specification of Decision Models (2012). In: WADT 2012: 21st International Workshop on Algebraic Development Techniques, 7-10 June, 2012, Salamanca, Spain; Technical report TR-08/12 / [ed] Narciso Martí-Oliet, Miguel Palomino, 2012, pp. 40-43. Conference paper (Refereed)
  • 20.
    Danylenko, Oleg
    et al.
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Löwe, Welf
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
Scheduling Stream Processing on Multi-Core Processors (2011). In: Fourth Swedish Workshop on Multicore Computing / [ed] Christoph Kessler, Linköping, Sweden: Linköping University, 2011, pp. 136-. Conference paper (Refereed)
    Abstract [en]

Stream processing applications map continuous sequences of input data blocks to continuous sequences of output data blocks. They have demands on the throughput of blocks or the response time for each data block. We present theoretical bounds on the degree of parallelism of such applications, their throughput, and their response time. Based on these bounds, we develop scheduling heuristics for different optimization and constraint problems of stream processing applications involving throughput and response time. These results direct the manual implementation of efficient stream processing applications and will, in the long run, help generate them automatically.

  • 21.
    Danylenko, Oleg
    et al.
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Löwe, Welf
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Rydström, Sara
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
Comparing Implementation Platforms for Real-Time Stream Processing Systems on Multi-Core Hardware (2011). In: Proceedings of the 23rd IASTED International Conference: Parallel and Distributed Computing and Systems / [ed] T. Gonzalez, Calgary, AB, Canada: ACTA Press, 2011, pp. 235-243. Book chapter (Refereed)
    Abstract [en]

Today there exist many programming models and platforms for implementing real-time stream processing systems. A decision in favor of the wrong technology might lead to increased development time and costs. It is, therefore, necessary to decide which alternatives further efforts should concentrate on and which may be discarded. Such decisions cannot be based solely on analytical comparisons; the present experiment seeks to complement analytical with empirical results.

More specifically, the paper discusses the results of comparing the programmability and performance of one and the same real-world real-time stream processing system implemented on three different implementation platforms: C++ as a general-purpose programming language, IBM InfoSphere Streams as a dedicated stream processing platform, and MatLab as the technical computing system preferred in the application domain. As a result, the system implementation based on MatLab was easiest, the C++-based implementation outperformed the others in response time, while InfoSphere Streams led to the highest data throughput. Altogether, the results give a picture of the advantages and disadvantages of each technology for our real-time stream processing system. More empirical studies ought to provide similar empirical knowledge to help decide which technology to use for solving particular stream processing problems.

  • 22.
    Dressler, Danny
    et al.
    AIMO AB, Sweden.
    Liapota, Pavlo
    Softwerk AB, Sweden.
    Löwe, Welf
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM).
Data Driven Human Movement Assessment (2019). In: Intelligent Decision Technologies 2019: Proceedings of the 11th KES International Conference on Intelligent Decision Technologies (KES-IDT 2019), Volume 2 / [ed] Ireneusz Czarnowski; Robert Howlett; Lakhmi C. Jain, Springer, 2019, pp. 317-327. Conference paper (Refereed)
    Abstract [en]

Quality assessment of human movements has many applications in the diagnosis and therapy of musculoskeletal insufficiencies and in high-performance sports. We suggest five purely data-driven assessment methods for arbitrary human movements using inexpensive 3D sensor technology. We evaluate their accuracy by comparing them against a validated digitalization of a standardized human-expert-based assessment method for deep squats. We recommend the data-driven method that shows high agreement with this baseline method, requires little expertise in the human movement, and requires no expertise in the assessment method itself. It allows for an effective and efficient, automatic and quantitative assessment of arbitrary human movements.

  • 23.
    Dressler, Danny
    et al.
    AIMO AB, Sweden.
    Liapota, Pavlo
    Softwerk AB, Sweden.
    Löwe, Welf
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM).
Towards an automated assessment of musculoskeletal insufficiencies (2019). In: Intelligent Decision Technologies 2019: Proceedings of the 11th KES International Conference on Intelligent Decision Technologies (KES-IDT 2019), Volume 1 / [ed] Ireneusz Czarnowski; Robert Howlett; Lakhmi C. Jain, Springer, 2019, pp. 251-261. Conference paper (Refereed)
    Abstract [en]

    The paper suggests a quantitative assessment of human movements using inexpensive 3D sensor technology and evaluates its accuracy by comparing it with human expert assessments. The two assessment methods show a high agreement. To achieve this, a novel sequence alignment algorithm was developed that works for arbitrary time series.

  • 24.
    Edvinsson, Marcus
    et al.
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Lundberg, Jonas
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Löwe, Welf
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
Parallel Data-Flow Analysis for Multi-Core Machines (2011). Conference paper (Refereed)
    Abstract [en]

Static program analysis supporting software development is often part of edit-compile cycles, and precise program analysis is time consuming. Points-to analysis is a data-flow-based static program analysis used to find object references in programs. Its applications include test case generation, compiler optimizations, and program understanding. Recent increases in the processing power of desktop computers come mainly from multiple cores. Parallel algorithms are vital for the simultaneous use of multiple cores. An efficient parallel points-to analysis requires sufficient work for each processing unit.

    The present paper presents a parallel points-to analysis of object-oriented programs. It exploits that (1) different target methods of polymorphic calls and (2) independent control-flow branches can be analyzed in parallel. Carefully selected thresholds guarantee that each parallel thread has sufficient work to do and that only little work is redundant with other threads. Our experiments show that this approach achieves a maximum speed-up of 4.5.
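The role of the thresholds can be sketched as follows (a hypothetical illustration, not the paper's implementation; `WORK_THRESHOLD`, `analyze`, and `estimate_work` are made-up names): target methods of a polymorphic call whose estimated analysis work exceeds a threshold are analyzed in parallel, while the rest are handled sequentially to avoid task overhead.

```python
from concurrent.futures import ThreadPoolExecutor

WORK_THRESHOLD = 100   # minimum estimated work to justify a parallel task

def analyze_targets(targets, analyze, estimate_work):
    """Analyze target methods of a polymorphic call, in parallel when worthwhile."""
    heavy = [t for t in targets if estimate_work(t) >= WORK_THRESHOLD]
    light = [t for t in targets if estimate_work(t) < WORK_THRESHOLD]
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(analyze, heavy))   # enough work: parallel
    results += [analyze(t) for t in light]         # too little work: sequential
    return results
```

Without such a threshold, thread start-up and synchronization overhead can outweigh the analysis work itself, which is why carefully selected thresholds matter for the reported speed-up.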

  • 25.
    Edvinsson, Marcus
    et al.
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Lundberg, Jonas
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Löwe, Welf
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
Parallel Points-to Analysis for Multi-Core Machines (2011). In: 6th International Conference on High-Performance and Embedded Architectures and Compilers - HIPEAC 2011, 2011. Conference paper (Refereed)
  • 26.
    Edvinsson, Marcus
    et al.
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Lundberg, Jonas
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Löwe, Welf
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
Parallel Reachability and Escape Analysis (2010). In: 2010 10th IEEE Working Conference on Source Code Analysis and Manipulation (SCAM), IEEE Press, 2010, pp. 125-134. Conference paper (Refereed)
    Abstract [en]

    Static program analysis usually consists of a number of steps, each producing partial results. For example, the points-to analysis step, calculating object references in a program, usually just provides the input for larger client analyses like reachability and escape analyses. All these analyses are computationally intense and it is therefore vital to create parallel approaches that make use of the processing power that comes from multiple cores in modern desktop computers.

The present paper presents two parallel approaches to increase the efficiency of reachability analysis and escape analysis, based on a parallel points-to analysis. The experiments show that the two parallel approaches achieve a speed-up of 1.5 for reachability analysis and 3.8 for escape analysis on 8 cores for a benchmark suite of Java programs.

  • 27.
    Edvinsson, Marcus
    et al.
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Löwe, Welf
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
A Multi-Threaded Approach for Data-Flow Analysis (2010). Conference paper (Refereed)
    Abstract [en]

    Program analysis supporting software development is often part of edit-compile-cycles, and precise program analysis is time consuming. With the availability of parallel processing power on desktop computers, parallelization is a way to speed up program analysis. This requires a parallel data-flow analysis with sufficient work for each processing unit. The present paper suggests such an approach for object-oriented programs analyzing the target methods of polymorphic calls in parallel. With carefully selected thresholds guaranteeing sufficient work for the parallel threads and only little redundancy between them, this approach achieves a maximum speed-up of 5 (average 1.78) on 8 cores for the benchmark programs.

  • 28.
    Edvinsson, Marcus
    et al.
    Växjö universitet, Fakulteten för matematik/naturvetenskap/teknik, Matematiska och systemtekniska institutionen. Datalogi.
    Löwe, Welf
    Växjö universitet, Fakulteten för matematik/naturvetenskap/teknik, Matematiska och systemtekniska institutionen. Datalogi.
A Parallel Approach for Solving Data Flow Analysis Problems (2008). In: The 20th IASTED International Conference on Parallel and Distributed Computing and Systems: PDCS 2008, Acta Press, Calgary, 2008. Conference paper (Refereed)
    Abstract [en]

Program analysis supporting software development is often part of edit-compile cycles, and precise program analysis is time consuming. With the availability of parallel processing power on desktop computers, parallelization is a way to speed up program analysis. This paper introduces a parallelization schema for program analysis that can be translated to parallel machines using standard scheduling techniques. First benchmarks analyzing a number of Java programs indicate that the schema scales well for up to 8 processors, but not very well for 128 processors. These results are a first step towards more precise program analysis in Integrated Development Environments utilizing the computational power of today’s custom computers.

  • 29.
    Ericsson, Morgan
    et al.
    Växjö universitet, Fakulteten för matematik/naturvetenskap/teknik, Matematiska och systemtekniska institutionen. Computer Science.
    Löwe, Welf
    Växjö universitet, Fakulteten för matematik/naturvetenskap/teknik, Matematiska och systemtekniska institutionen. Computer Science.
Kessler, Christoph
    Andersson, Jesper
    Växjö universitet, Fakulteten för matematik/naturvetenskap/teknik, Matematiska och systemtekniska institutionen. Computer Science.
Composition and Optimization (2008). In: Proc. Int. Workshop on Component-Based High Performance Computing, 2008. Conference paper (Refereed)
  • 30.
    Ericsson, Morgan
    et al.
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap (DV).
    Löwe, Welf
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap (DV).
    Olsson, Tobias
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap (DV).
    Toll, Daniel
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap (DV).
    Wingkvist, Anna
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap (DV).
A Study of the Effect of Data Normalization on Software and Information Quality Assessment (2013). In: Software Engineering Conference (APSEC, 2013 20th Asia-Pacific), IEEE Press, 2013, pp. 55-60. Conference paper (Refereed)
    Abstract [en]

    Indirect metrics in quality models define weighted integrations of direct metrics to provide higher-level quality indicators. This paper presents a case study that investigates to what degree quality models depend on statistical assumptions about the distribution of direct metrics values when these are integrated and aggregated. We vary the normalization used by the quality assessment efforts of three companies, while keeping quality models, metrics, metrics implementation and, hence, metrics values constant. We find that normalization has a considerable impact on the ranking of an artifact (such as a class). We also investigate how normalization affects the quality trend and find that normalizations have a considerable effect on quality trends. Based on these findings, we find it questionable to continue to aggregate different metrics in a quality model as we do today.

  • 31.
    Ericsson, Morgan
    et al.
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap (DV).
    Löwe, Welf
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap (DV).
    Wingkvist, Anna
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap (DV).
Probabilistic Quality Models to Improve Communication and Actionability (2015). In: 2015 30th IEEE/ACM International Conference on Automated Software Engineering Workshop (ASEW), IEEE Press, 2015, pp. 1-4. Conference paper (Refereed)
    Abstract [en]

We need to aggregate metric values to make quality assessment actionable. However, due to properties of metric values, i.e., unknown distributions and different measurement scales, they are difficult to aggregate. We present and evaluate a method to aggregate metric values based on observed numerical distributions that are converted into cumulative distribution functions. We use these to determine the probability of each metric and file, and aggregate these. Our limited study suggests that the method improves correctness, communication, and the ability to take action. However, more evaluation is required.
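The general percentile idea behind such an aggregation can be sketched as follows (a hedged reconstruction of the approach, not the authors' exact method; all names are hypothetical): each raw metric value is replaced by its empirical cumulative probability, making metrics on different scales comparable before they are averaged per file.

```python
from bisect import bisect_right

def ecdf(values):
    """Return a function mapping a value to its empirical cumulative probability."""
    ordered = sorted(values)
    return lambda v: bisect_right(ordered, v) / len(ordered)

def aggregate(files):
    """files: {name: {metric: value}} -> {name: mean percentile score}."""
    metrics = {m for f in files.values() for m in f}
    cdfs = {m: ecdf([f[m] for f in files.values()]) for m in metrics}
    return {name: sum(cdfs[m](f[m]) for m in f) / len(f)
            for name, f in files.items()}

scores = aggregate({
    "A.java": {"loc": 100, "complexity": 2},
    "B.java": {"loc": 500, "complexity": 10},
    "C.java": {"loc": 50,  "complexity": 4},
})
# B.java sits at the top of both metric distributions, so it gets the
# highest aggregated score regardless of the metrics' original scales.
```

The design point is that no distributional assumption (e.g., normality) is needed; only the observed values themselves define the scale.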

  • 32.
    Ericsson, Morgan
    et al.
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Wingkvist, Anna
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Löwe, Welf
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
A Software Infrastructure For Information Quality Assessment (2011). In: Proceedings of the International Conference on Information Quality, 2011. Conference paper (Refereed)
    Abstract [en]

Information quality assessment of technical documentation is nowadays an integral part of the quality management of products and services. Documentation is usually assessed using questionnaires, checklists, and reviews, which is cumbersome, costly, and error-prone work. Acknowledging the fact that only humans can assess certain quality aspects, we suggest complementing these with automatic quality assessment using a software infrastructure that (i) reads information from documentation, (ii) performs analyses on this information, and (iii) visualizes the results to help stakeholders understand quality issues. We introduce the software infrastructure’s architecture and implementation and its adaptation to different documentation formats and types of analyses, along with a number of real-world cases exemplifying the feasibility and benefit of our approach. Altogether, our approach contributes to more efficient and automatic information quality assessments.

  • 33.
    Ericsson, Morgan
    et al.
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Wingkvist, Anna
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Löwe, Welf
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
The Design and Implementation of a Software Infrastructure for IQ Assessment (2012). In: International Journal of Information Quality, ISSN 1751-0457, Vol. 3, No. 1, pp. 49-70. Journal article (Refereed)
    Abstract [en]

Information quality assessment of technical documentation is an integral part of the quality management of products and services. Technical documentation is usually assessed using questionnaires, checklists, and reviews. This is cumbersome, costly, and prone to errors. Acknowledging the fact that only people can assess certain quality aspects, we suggest complementing these with software-supported automatic quality assessment. The many different encodings and representations of documentation, e.g., various XML dialects and XML Schemas/DTDs, are one problem. We present a system, a software infrastructure, where abstraction and meta modelling are used to define reusable analyses and visualisations that are independent of specific encodings and representations. We show how this system is implemented and how it: 1) reads information from documentation; 2) performs analyses on this information; 3) visualises the results to help stakeholders understand quality issues. We introduce the system, its architecture and implementation, and its adaptation to different documentation formats and types of analyses, along with a number of real-world cases exemplifying the feasibility and benefits of our approach. Altogether, our approach contributes to more efficient information quality assessments.

  • 34.
    Ericsson, Morgan
    et al.
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Wingkvist, Anna
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Löwe, Welf
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
Visualization of Text Clones in Technical Documentation (2012). In: Proceedings of SIGRAD 2012: Interactive Visual Analysis of Data / [ed] Andreas Kerren and Stefan Seipel, 2012, pp. 79-82. Conference paper (Refereed)
    Abstract [en]

We present an initial study of how text clones can be detected and visualized in technical documentation, i.e., semi-structured text that describes a product, software, or service. The goal of the visualizations is to support human experts in assessing and prioritizing the clones, since certain clones can be intentional or harmless. We study some existing visualizations designed for source code and provide an initial and limited adaptation of these. A major difficulty in this adaptation is managing semi-structured technical documentation compared to structured source code.

  • 35.
    Gauss, Joela F.
    et al.
    Karlsruhe University of Applied Sciences, Germany.
    Brandin, Christoph
    AIMO GmbH, Germany.
    Heberle, Andreas
    Karlsruhe University of Applied Sciences, Germany.
    Löwe, Welf
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM).
Smoothing Skeleton Avatar Visualizations Using Signal Processing Technology (2021). In: SN Computer Science, ISSN 2662-995X, Vol. 2, No. 6, article id 429. Journal article (Refereed)
    Abstract [en]

Movements of a person can be recorded with a mobile camera and visualized as sequences of stick figures for assessments in health and elderly care, physiotherapy, and sports. However, since the visualizations flicker due to noisy input data, the visualizations themselves, and even whole assessment applications, are not trusted in general. The present paper evaluates different filters for smoothing the movement visualizations while keeping their validity for a visual physio-therapeutic assessment. It evaluates variants of moving average, high-pass, and Kalman filters with different parameters. Moreover, it presents a framework for the quantitative evaluation of smoothness and validity. As these two criteria are conflicting, the framework also allows weighting them differently and automatically finding the correspondingly best-fitting filter and its parameters. Different filters can be recommended for different weightings of smoothness and validity. The evaluation framework is applicable in more general contexts and with more filters than the three assessed here. However, as a practical result of this work, a suitable filter for stick figure visualizations could be selected for a mobile application for assessing movement quality and is now used in that app. The application is more trustworthy and is used by medical and sports experts and end customers alike.
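As a minimal illustration of one of the evaluated filter families, a moving-average smoother over a single joint coordinate might look like this (the window size here is a hypothetical choice, not the paper's tuned parameter):

```python
def moving_average(signal, window=3):
    """Smooth a 1-D joint-coordinate sequence; shrinks the window at the borders."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

noisy_x = [0.0, 1.0, 0.0, 1.0, 0.0]   # a flickering x-coordinate of one joint
smoothed = moving_average(noisy_x)    # flicker amplitude is clearly reduced
```

Larger windows yield smoother but laggier motion, which is exactly the smoothness-versus-validity trade-off the paper's framework weighs.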

  • 36.
    Ghayvat, Hemant
    et al.
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM).
    Awais, Muhammad
    Univ Texas MD Anderson Canc Ctr, USA.
    Geddam, Rebakah
    Nirma Univ, India.
    Tiwari, Prayag
    Halmstad University, Sweden.
    Löwe, Welf
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM).
Revolutionizing healthcare: IoMT-enabled digital enhancement via multimodal ADL data fusion (2024). In: Information Fusion, ISSN 1566-2535, E-ISSN 1872-6305, Vol. 111, article id 102518. Journal article (Refereed)
    Abstract [en]

The present research develops a framework to refine the classification of an individual's activities and to recognize the wellness associated with their routine. The framework improves the accuracy of the classification of a person's routine activities, of the activation time data of sensors fixed on objects linked with those activities, and of the fit of an incessant activity pattern with the routine activities. Existing techniques need continuous monitoring and are non-adaptive to a person's persistent habitual variations or individualities. The research applies Internet of Medical Things (IoMT)-based sensor information fusion to a novel multimodal data analytics approach for Activities of Daily Living (ADL) pattern development, behavioral pattern generation, and anomaly recognition. The novel multimodal data analytics approach, named AiCareLiving, is an IoMT- and artificial intelligence (AI)-enabled approach. The work describes activity data using an individual's activities within a specified area before evaluating the activity data to detect the existence of an anomaly by identifying the deviation of the activity data from the activity profile, which indicates the anticipated behavior and activity of the person. This wellness information would be shared with caregivers, related healthcare professionals, care providers, and municipalities through a secured healthcare information exchange protocol and IoMT. The AiCareLiving framework aims for the fewest false positives in anomaly detection and forecasting, with a precision close to the 95% confidence level.

  • 37.
    Golub, Koraljka
    et al.
    Linnéuniversitetet, Fakulteten för konst och humaniora (FKH), Institutionen för kulturvetenskaper (KV).
    Hansson, Joacim
    Linnéuniversitetet, Fakulteten för konst och humaniora (FKH), Institutionen för kulturvetenskaper (KV).
    Löwe, Welf
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap (DV).
    Milrad, Marcelo
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för medieteknik (ME).
LNU as a Unique iSchool (2016). Other (Other academic)
  • 38.
    Golub, Koraljka
    et al.
    Linnéuniversitetet, Fakulteten för konst och humaniora (FKH), Institutionen för kulturvetenskaper (KV).
    Löwe, Welf
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap (DV).
Overview of Centre for Data Intensive Sciences in Applications at Linnaeus University: invited talk (2017). In: A Calculus of Culture: Circumventing the Black Box of Culture Analytics, Guangxi University, China, March 21-23, 2017. Conference paper (Other academic)
  • 39.
    Gundermann, Niels
    et al.
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM). data experts GmbH, Germany;University of Applied Science, Germany.
    Löwe, Welf
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM). Department of Computer Science and Media Technology, Faculty of Technology, Linnaeus University, 35195 Växjö, Sweden.
    Fransson, Johan
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för skog och träteknik (SOT).
    Olofsson, Erika
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för skog och träteknik (SOT).
    Wehrenpfennig, Andreas
    University of Applied Science, Germany.
Object Identification in Land Parcels Using a Machine Learning Approach (2024). In: Remote Sensing, E-ISSN 2072-4292, Vol. 16, No. 7, article id 1143. Journal article (Refereed)
    Abstract [en]

This paper introduces an AI-based approach to detect human-made objects, and changes in these, on land parcels. To this end, we used binary image classification performed by a convolutional neural network. Binary classification requires the selection of a decision boundary, and we provide a deterministic method for this selection. Furthermore, we varied different parameters to improve the performance of our approach, leading to a true positive rate of 91.3% and a true negative rate of 63.0%. A specific application of our work supports the administration of agricultural land parcels eligible for subsidies. As a result of our findings, authorities could reduce the effort involved in the detection of human-made changes by approximately 50%.
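Decision-boundary selection for a binary classifier can be sketched like this (a hypothetical criterion, balanced accuracy on a validation set; the paper's deterministic method may use a different criterion, and all names are made up):

```python
def pick_threshold(probs, labels):
    """Deterministically pick the probability threshold maximizing balanced
    accuracy; assumes both classes occur in `labels` (given as 0/1)."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_score = 0.5, -1.0
    for t in sorted(set(probs)):
        tp = sum(1 for p, y in zip(probs, labels) if p >= t and y == 1)
        tn = sum(1 for p, y in zip(probs, labels) if p < t and y == 0)
        score = tp / pos / 2 + tn / neg / 2   # balanced accuracy
        if score > best_score:
            best_t, best_score = t, score
    return best_t

# Validation-set probabilities from a (hypothetical) CNN and the true labels.
threshold = pick_threshold([0.1, 0.4, 0.6, 0.9], [0, 0, 1, 1])
print(threshold)   # 0.6
```

Making the selection deterministic means the same validation data always yields the same boundary, which keeps reported true positive and true negative rates reproducible.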

  • 40.
    Gutzmann, Tobias
    et al.
    Växjö universitet, Fakulteten för matematik/naturvetenskap/teknik, Matematiska och systemtekniska institutionen.
    Khairova, Antonina
    Växjö universitet, Fakulteten för matematik/naturvetenskap/teknik, Matematiska och systemtekniska institutionen.
    Lundberg, Jonas
    Växjö universitet, Fakulteten för matematik/naturvetenskap/teknik, Matematiska och systemtekniska institutionen.
    Löwe, Welf
    Växjö universitet, Fakulteten för matematik/naturvetenskap/teknik, Matematiska och systemtekniska institutionen.
Towards Comparing and Combining Points-to Analyses (2009). In: Ninth IEEE International Working Conference on Source Code Analysis and Manipulation (SCAM 2009), IEEE, 2009, pp. 45-54. Conference paper (Refereed)
    Abstract [en]

Points-to information is the basis for many analyses and transformations, e.g., for program understanding and optimization. To justify new analysis techniques, they need to be compared to the state of the art regarding their accuracy and efficiency. Usually, benchmark suites are used to experimentally compare the different techniques. In this paper, we show that the accuracy of two analyses can only be compared in restricted cases, as there is no benchmark suite with exact points-to information, no Gold Standard, and it is hard to construct one for realistic programs. We discuss the challenges and possible traps that may arise when comparing different points-to analyses directly with each other, and with over- and under-approximations of a Gold Standard. Moreover, we discuss how different points-to analyses can be combined into a more precise one. We complement the paper with experiments comparing and combining different static and dynamic points-to analyses.

  • 41.
    Gutzmann, Tobias
    et al.
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Lundberg, Jonas
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Löwe, Welf
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
Collections Frameworks for Points-to Analysis (2012). In: IEEE 12th International Working Conference on Source Code Analysis and Manipulation (SCAM) 2012, IEEE, 2012, pp. 4-13. Conference paper (Refereed)
    Abstract [en]

Points-to information is the basis for many analyses and transformations, e.g., for program understanding and optimization. Collections frameworks are part of most modern programming languages’ infrastructures and are used by many applications. The richness of features and the inherent structure of collection classes affect both the performance and the precision of points-to analysis negatively.

In this paper, we discuss how to replace original collections frameworks with versions specialized for points-to analysis. We implement such a replacement for the Java Collections Framework and support its benefits for points-to analysis by applying it to three different points-to analysis implementations. In experiments, context-sensitive points-to analyses require, on average, 16-24% less time while at the same time being more precise. Context-insensitive analysis in conjunction with inlining also benefits in both precision and analysis cost.

  • 42.
    Gutzmann, Tobias
    et al.
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Lundberg, Jonas
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Löwe, Welf
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Feedback-driven Points-to Analysis (2010). Report (Other academic)
    Abstract [en]

    Points-to analysis is a static program analysis that extracts reference information from a given input program. Its accuracy is limited due to abstractions that any such analysis needs to make. Further, the exact analysis results are unknown, i.e., no so-called Gold Standard exists for points-to analysis. This hinders the assessment of new ideas in points-to analysis, as results can be compared only relative to results obtained by other inaccurate analyses.

    In this paper, we present feedback-driven points-to analysis. We suggest performing (any classical) points-to analysis with the points-to results at certain program points guarded by a-priori upper bounds. Such upper bounds can come from other points-to analyses – this is of interest when different approaches are not strictly ordered in terms of accuracy – and from human insight, i.e., manual proofs that certain points-to relations are infeasible for every program run. This gives us a tool to compute very accurate points-to analysis results and, ultimately, to manually create a Gold Standard.
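A minimal sketch of the guarding idea (all variable and object names below are invented): a classical fixed-point analysis clips every merged points-to set against a trusted a-priori upper bound, so knowledge from another analysis, or from a manual proof, directly sharpens the computed solution.

```python
def guarded_union(current, incoming, upper_bound):
    """Merge points-to sets as usual, then clip by the trusted bound."""
    return (current | incoming) & upper_bound

pts_x = {"o1"}              # current solution for variable x
incoming = {"o2", "o3"}     # flow from an assignment, e.g., x = y
bound_x = {"o1", "o2"}      # externally proved: x can never point to o3
pts_x = guarded_union(pts_x, incoming, bound_x)
# the infeasible object o3 never enters the solution
```

Because the bound is applied at every update, the imprecision cannot propagate onwards to other program points either.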

  • 43.
    Gutzmann, Tobias
    et al.
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Lundberg, Jonas
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Löwe, Welf
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Feedback-driven Points-to Analysis (2011). In: 26th Symposium On Applied Computing (SAC 2011), TaiChung, March 21-24, 2011, 2011. Conference paper (Refereed)
    Abstract [en]

    In this paper, we present feedback-driven points-to analysis where any classical points-to analysis has its points-to results at certain program points guarded by a-priori upper bounds. Such upper bounds can come from other points-to analyses – this is of interest when different approaches are not strictly ordered in terms of accuracy – and from human insight, i.e., manual proofs that certain points-to relations are infeasible for every program run.

  • 44.
    Gutzmann, Tobias
    et al.
    Växjö universitet, Fakulteten för matematik/naturvetenskap/teknik, Matematiska och systemtekniska institutionen. Datalogi.
    Lundberg, Jonas
    Växjö universitet, Fakulteten för matematik/naturvetenskap/teknik, Matematiska och systemtekniska institutionen. Datalogi.
    Löwe, Welf
    Växjö universitet, Fakulteten för matematik/naturvetenskap/teknik, Matematiska och systemtekniska institutionen. Datalogi.
    Towards Path-Sensitive Points-to Analysis (2007). In: Seventh IEEE International Working Conference on Source Code Analysis and Manipulation (SCAM), IEEE Computer Society, Los Alamitos, CA, USA, 2007, pp. 59-68. Conference paper (Refereed)
    Abstract [en]

    Points-to analysis is a static program analysis aiming at analyzing the reference structure of dynamically allocated objects at compile-time. It constitutes the basis for many analyses and optimizations in software engineering and compiler construction. Sparse program representations, such as Whole Program Points-to Graph (WPP2G) and Points-to SSA (P2SSA), represent only dataflow that is directly relevant for points-to analysis. They have proved to be practical in terms of analysis precision and efficiency. However, intra-procedural control flow information is removed from these representations, which sacrifices analysis precision to improve analysis performance.

    We show an approach for keeping control-flow-related information even in sparse program representations by representing control flow effects as operations on the data transferred, i.e., as dataflow information. These operations affect distinct paths of the program differently, thus yielding a certain degree of path-sensitivity. Our approach works with both WPP2G and P2SSA representations.

    We apply the approach to P2SSA-based and flow-sensitive points-to analysis and evaluate a context-insensitive and a context-sensitive variant. We assess our approach using abstract precision metrics. Moreover, we investigate the precision improvements and performance penalties when used as an input to three source-code-level analyses: dead code, cast safety, and null pointer analysis.
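One way to picture the encoding (a sketch with invented names, not the paper's actual implementation): a branch condition such as a type test becomes a filter operation on the flowing points-to set, so the two successor paths see different sets even though no explicit control-flow edges remain in the sparse representation.

```python
def type_filter(pts, type_of, wanted):
    """Dataflow operation modeling an `instanceof`-style branch test."""
    return {o for o in pts if type_of[o] == wanted}

type_of = {"o1": "A", "o2": "B"}   # abstract objects and their types
pts_x = {"o1", "o2"}

then_path = type_filter(pts_x, type_of, "A")  # path where the test holds
else_path = pts_x - then_path                 # path where the test fails
```

The same pattern extends to null tests and cast checks, which is why such filters help downstream cast-safety and null-pointer analyses.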

  • 45.
    Gutzmann, Tobias
    et al.
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Löwe, Welf
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Custom-made Instrumentation Based on Static Analysis (2011). In: WODA 2011: Ninth International Workshop on Dynamic Analysis, 2011, pp. 18-23. Conference paper (Refereed)
    Abstract [en]

    Many dynamic analysis tools capture the occurrences of events at runtime. The longer programs are monitored, the more accurate the data they provide to the user. Hence, the runtime overhead must be kept as low as possible, because it decreases the user's productivity. Runtime performance overhead occurs due to identifying events and storing them in a result data structure. We address the latter issue by generating custom-made instrumentation code for each program. By using static analysis to gain a priori knowledge about which events of interest can occur and where they can occur, tailored code for storing those events can be generated for each program. We evaluate our idea by comparing the runtime overhead of a general-purpose dynamic analysis tool that captures points-to information for Java programs with approaches based on custom-made instrumentation code. Experiments suggest highly reduced performance overhead for the latter.
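The storage idea can be sketched as follows (the event sites and type names are invented stand-ins): if static analysis enumerates the possible (site, type) events up front, the generated instrumentation can record them in a pre-sized flat array rather than a general-purpose hash map; in actually generated code, the array index would even be inlined as a compile-time constant.

```python
# events that static analysis proved possible for this particular program
possible_events = [("site1", "A"), ("site1", "B"), ("site2", "A")]
index = {e: i for i, e in enumerate(possible_events)}
counts = [0] * len(possible_events)   # fixed-size storage, never resized

def record(site, typ):
    # in generated code, index[(site, typ)] would be a constant
    counts[index[(site, typ)]] += 1

record("site1", "A")
record("site2", "A")
record("site1", "A")
```

The fixed layout avoids hashing and dynamic growth on the hot path, which is where a general-purpose tool spends much of its overhead.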

  • 46.
    Gutzmann, Tobias
    et al.
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Löwe, Welf
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Reducing the Performance Overhead of Dynamic Analysis through Custom-made Agents (2010). In: 5th International Workshop on Program Comprehension through Dynamic Analysis (PCODA 2010), 2010, pp. 1-6. Conference paper (Refereed)
    Abstract [en]

    The usefulness of dynamic analysis tools for program comprehension increases with the amount of time a given analyzed program is monitored. Thus, monitoring the analyzed program in a production environment should give the best picture of how the program actually works. However, high performance overhead is especially undesirable in such a setting, because it decreases the user’s productivity. Performance overhead occurs due to recognizing events that are of interest to the agent monitoring the program run and storing those events in data structures. We propose to address this issue by creating a custom-made agent for each program. By using static analysis to get a priori knowledge about which events of interest can occur and where they can occur, tailored code for recognizing and storing those events can be generated for each program. We evaluate our idea by comparing a "general purpose" dynamic agent that captures fine-grained points-to information with custom-made agents that do the same job for specific programs. The latter show highly reduced performance overhead in practice. We also investigate how the precision of the static analysis affects the performance of the custom-made agents.

  • 47.
    Hagelbäck, Johan
    et al.
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM).
    Liapota, Pavlo
    Softwerk, Sweden.
    Lincke, Alisa
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM).
    Löwe, Welf
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM).
    The performance of some machine learning approaches in human movement assessment (2019). In: 13th Multi Conference on Computer Science and Information Systems (MCCSIS) / [ed] Mário Macedo, L. Rodrigues, Porto, Portugal: IADIS Press, 2019, pp. 35-42. Conference paper (Refereed)
    Abstract [en]

    The advent of commodity 3D sensor technology enabled, amongst other things, the efficient and effective assessment of human movements. Statistical and machine learning approaches map recorded movement instances to expert scores to train models for the automated assessment of new movements. However, there are many variations in selecting the approaches and setting the parameters for achieving high performance, i.e., high accuracy and low response time. The present paper researches the design space and the impact of statistical and machine learning approaches on accuracy and response time in human movement assessment. Results show that a random forest regression approach outperforms linear regression, support vector regression, and neural network approaches. Since the results do not rely on the movement specifics, they can help improve the performance of automated human movement assessment in general.
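The comparison protocol implied by the abstract can be sketched as follows (the scores and predictions below are invented stand-ins, not the paper's data): each candidate regressor is ranked by its error against expert scores on held-out movement instances.

```python
def mean_absolute_error(y_true, y_pred):
    """Average absolute deviation from the expert scores."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

expert_scores = [3.0, 4.0, 2.0, 5.0]   # held-out instances, expert-scored
model_a_pred  = [2.8, 4.1, 2.2, 4.9]   # e.g., a random-forest-style model
model_b_pred  = [2.0, 4.5, 3.0, 4.0]   # e.g., a linear model

mae_a = mean_absolute_error(expert_scores, model_a_pred)   # 0.15
mae_b = mean_absolute_error(expert_scores, model_b_pred)   # 0.875
best = "A" if mae_a < mae_b else "B"
```

In practice one would also repeat the split (cross-validation) and measure prediction time per instance to cover the response-time dimension.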

  • 48.
    Hagelbäck, Johan
    et al.
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM).
    Liapota, Pavlo
    Softwerk, Sweden.
    Lincke, Alisa
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM).
    Löwe, Welf
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM).
    Variants of Dynamic Time Warping and their Performance in Human Movement Assessment (2019). In: 21st International Conference on Artificial Intelligence (ICAI'19: July 29 - August 1, 2019, Las Vegas, USA), CSREA Press, 2019, pp. 9-15. Conference paper (Refereed)
    Abstract [en]

    The advent of commodity 3D sensor technology enabled, amongst other things, the efficient and effective assessment of human movements. Statistical and machine learning approaches map recorded movement instances to expert scores to train models for the automated assessment of new movements. However, there are many variations in selecting the approaches and setting the parameters for achieving good performance, i.e., high scoring accuracy and low response time. The present paper researches the design space and the impact of sequence alignment on accuracy and response time. More specifically, we introduce variants of Dynamic Time Warping (DTW) for aligning the phases of slow and fast movement instances and assess their effect on scoring accuracy and response time. Results show that automated stripping of leading and trailing frames not belonging to the movement (using one DTW variant), followed by an alignment of selected frames in the movements (based on another DTW variant), outperforms the original DTW and other suggested variants. Since these results are independent of the selected learning approach and do not rely on the movement specifics, they can help improve the performance of automated human movement assessment in general.
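For orientation, the original DTW that the variants above build on can be stated in a few lines (an illustrative textbook sketch over 1D frames; real movement frames would be joint-position vectors with a suitable distance):

```python
def dtw(a, b, dist=lambda x, y: abs(x - y)):
    """Classic dynamic-programming DTW: minimal total cost of aligning
    sequence a with sequence b under the given frame distance."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(a[i - 1], b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # match frames
    return cost[n][m]

slow = [0, 1, 2, 3, 4]   # slow movement instance (more frames)
fast = [0, 2, 4]         # same movement performed faster
alignment_cost = dtw(slow, fast)
```

The paper's variants modify what is fed into this recurrence, e.g., by first stripping leading and trailing non-movement frames and then aligning only selected frames.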

  • 49.
    Hagelbäck, Johan
    et al.
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM).
    Lincke, Alisa
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM).
    Löwe, Welf
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM).
    Rall, Eduard
    AIMO AB, Sweden.
    On the Agreement of Commodity 3D Cameras (2019). In: Proceedings of the 2019 International Conference on Image Processing, Computer Vision, & Pattern Recognition / [ed] Hamid R. Arabnia, Leonidas Deligiannidis, Fernando G. Tinetti, CSREA Press, 2019, pp. 36-42. Conference paper (Refereed)
    Abstract [en]

    The advent of commodity 3D sensor technology has, amongst other things, enabled the efficient and effective assessment of human movements. Machine learning approaches do not rely on manual definitions of gold standards for each new movement. However, to train models for the automated assessment of a new movement, they still need a lot of data that maps recorded movements to expert judgments. As camera technology changes, this training needs to be repeated if a new camera does not agree with the old one. The present paper presents an inexpensive method to check the agreement of cameras, which, in turn, would allow for a safe reuse of trained models regardless of the camera. We apply the method to the Kinect, Astra Mini, and RealSense cameras. The results show that these cameras do not agree and that the models cannot be reused without an unacceptable decay in accuracy. However, the suggested method works independently of movements and cameras and could potentially save effort when integrating new cameras into an existing assessment environment.
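An agreement check in this spirit might be sketched as follows (the tolerance, frames, and joint coordinates are invented stand-ins, not the paper's method or data): both cameras record the same movement, and if the mean per-joint distance between their skeleton frames exceeds a tolerance, models trained on one camera should not be reused with the other.

```python
def mean_joint_distance(frames_a, frames_b):
    """Average Euclidean distance between corresponding joints of two
    synchronized skeleton recordings of the same movement."""
    total, count = 0.0, 0
    for fa, fb in zip(frames_a, frames_b):
        for (xa, ya, za), (xb, yb, zb) in zip(fa, fb):
            total += ((xa - xb) ** 2 + (ya - yb) ** 2 + (za - zb) ** 2) ** 0.5
            count += 1
    return total / count

cam_a = [[(0.0, 1.0, 2.0), (0.5, 1.0, 2.0)]]   # one frame, two joints
cam_b = [[(0.0, 1.0, 2.1), (0.5, 1.1, 2.0)]]
tolerance = 0.05                                # stand-in threshold
agree = mean_joint_distance(cam_a, cam_b) < tolerance
```

Here the mean distance is 0.1, so under this stand-in tolerance the two cameras would be flagged as disagreeing, matching the paper's overall finding for the tested devices.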

  • 50.
    Heberle, Andreas
    et al.
    Karlsruhe University of Applied Sciences, Germany.
    Löwe, Welf
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM). Softwerk AB, Växjö.
    Gustafsson, Anders
    Södra Skog, Växjö.
    Vorrei, Orjan
    Södra Skog, Växjö.
    Digitalization Canvas - Towards Identifying Digitalization Use Cases and Projects (2017). In: Journal of Universal Computer Science (Online), ISSN 0948-695X, E-ISSN 0948-6968, Vol. 23, no. 11, pp. 1070-1097. Journal article (Refereed)
    Abstract [en]

    Nowadays, many companies are running digitalization initiatives or are planning to do so. Various models exist to evaluate the digitalization potential of a company and to define its maturity level in exploiting digitalization technologies summarized under buzzwords such as Big Data, Artificial Intelligence (AI), Deep Learning, and the Industrial Internet of Things (IIoT). While platforms, protocols, patterns, technical implementations, and standards are in place to adopt these technologies, small- to medium-sized enterprises (SMEs) still struggle with digitalization. This is because it is hard to identify the most beneficial projects with manageable cost, limited resources, and restricted know-how. In the present paper, we describe a real-life project where digitalization use cases have been identified, evaluated, and prioritized with respect to benefits and costs. This effort led to a portfolio of projects, some with quick and easy wins and others with mid- to long-term benefits. From our experiences, we extracted a general approach that could be useful for other SMEs to identify concrete digitalization activities and to define projects implementing their digital transformation. The results are summarized in a Digitalization Canvas.
