lnu.se Publications
1 - 36 of 36
  • 1.
    Alissandrakis, Aris
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Reski, Nico
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Laitinen, Mikko
    University of Eastern Finland, Finland.
    Tyrkkö, Jukka
    Linnaeus University, Faculty of Arts and Humanities, Department of Languages.
    Levin, Magnus
    Linnaeus University, Faculty of Arts and Humanities, Department of Languages.
    Lundberg, Jonas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Visualizing dynamic text corpora using Virtual Reality (2018). In: ICAME 39: Tampere, 30 May – 3 June, 2018: Corpus Linguistics and Changing Society: Book of Abstracts, Tampere: University of Tampere, 2018, p. 205. Conference paper (Refereed)
    Abstract [en]

    In recent years, data visualization has become a major area in Digital Humanities research, and the same holds true also in linguistics. The rapidly increasing size of corpora, the emergence of dynamic real-time streams, and the availability of complex and enriched metadata have made it increasingly important to facilitate new and innovative approaches to presenting and exploring primary data. This demonstration showcases the uses of Virtual Reality (VR) in the visualization of geospatial linguistic data using data from the Nordic Tweet Stream (NTS) project (see Laitinen et al 2017). The NTS data for this demonstration comprises a full year of geotagged tweets (12,443,696 tweets from 273,648 user accounts) posted within the Nordic region (Denmark, Finland, Iceland, Norway, and Sweden). The dataset includes over 50 metadata parameters in addition to the tweets themselves.

    We demonstrate the potential of using VR to efficiently find meaningful patterns in vast streams of data. The VR environment allows an easy overview of any of the features (textual or metadata) in a text corpus. Our focus will be on the language identification data, which provides a previously unexplored perspective into the use of English and other non-indigenous languages in the Nordic countries alongside the native languages of the region.

    Our VR prototype utilizes the HTC Vive headset for a room-scale VR scenario, and it is being developed using the Unity3D game development engine. Each node in the VR space is displayed as a stacked cuboid, the equivalent of a bar chart in a three-dimensional space, summarizing all tweets at one geographic location for a given point in time (see: https://tinyurl.com/nts-vr). Each stacked cuboid represents information of the three most frequently used languages, appropriately color coded, enabling the user to get an overview of the language distribution at each location. The VR prototype further encourages users to move between different locations and inspect points of interest in more detail (overall location-related information, a detailed list of all languages detected, the most frequently used hashtags). An underlying map outlines country borders and facilitates orientation. In addition to spatial movement through the Nordic areas, the VR system provides an interface to explore the Twitter data based on time (days, weeks, months, or time of predefined special events), which enables users to explore data over time (see: https://tinyurl.com/nts-vr-time).

    In addition to demonstrating how the VR methods aid data visualization and exploration, we will also briefly discuss the pedagogical implications of using VR to showcase linguistic diversity.
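
    The abstract above describes what each stacked cuboid encodes (the three most frequent languages at one location for a given point in time) but includes no code. As a rough, hypothetical illustration only, the Python sketch below aggregates geotagged tweets into per-cell, per-day top-3 language counts; the field names (lat, lon, lang, timestamp) and the grid resolution are assumptions, not the actual NTS schema or the Unity3D implementation.

    from collections import Counter, defaultdict
    from datetime import datetime

    def aggregate_cuboids(tweets, top_n=3):
        """Group tweets by (rounded location, day) and keep the top-N languages.

        Each tweet is assumed to be a dict with 'lat', 'lon', 'lang' and an
        ISO 'timestamp' field; the real NTS metadata is much richer.
        """
        buckets = defaultdict(Counter)
        for tweet in tweets:
            # Round coordinates to one decimal (~10 km) to form a grid cell.
            cell = (round(tweet["lat"], 1), round(tweet["lon"], 1))
            day = datetime.fromisoformat(tweet["timestamp"]).date()
            buckets[(cell, day)][tweet["lang"]] += 1
        # One record per cuboid: its location, day, and top-N language counts.
        return {key: counts.most_common(top_n) for key, counts in buckets.items()}

    if __name__ == "__main__":
        sample = [
            {"lat": 59.33, "lon": 18.07, "lang": "sv", "timestamp": "2018-05-30T21:10:00"},
            {"lat": 59.31, "lon": 18.05, "lang": "en", "timestamp": "2018-05-30T22:15:00"},
            {"lat": 60.17, "lon": 24.94, "lang": "fi", "timestamp": "2018-05-31T09:00:00"},
        ]
        for (cell, day), langs in aggregate_cuboids(sample).items():
            print(cell, day, langs)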

  • 2.
    Alissandrakis, Aris
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Reski, Nico
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Laitinen, Mikko
    University of Eastern Finland, Finland.
    Tyrkkö, Jukka
    Linnaeus University, Faculty of Arts and Humanities, Department of Languages.
    Lundberg, Jonas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Levin, Magnus
    Linnaeus University, Faculty of Arts and Humanities, Department of Languages.
    Visualizing rich corpus data using virtual reality (2019). In: Studies in Variation, Contacts and Change in English, E-ISSN 1797-4453, Vol. 20. Article in journal (Refereed)
    Abstract [en]

    We demonstrate an approach that utilizes immersive virtual reality (VR) to explore and interact with corpus linguistics data. Our case study focuses on the language identification parameter in the Nordic Tweet Stream corpus, a dynamic corpus of Twitter data where each tweet originated within the Nordic countries. We demonstrate how VR can provide previously unexplored perspectives into the use of English and other non-indigenous languages in the Nordic countries alongside the native languages of the region and showcase its geospatial variation. We utilize a head-mounted display (HMD) for a room-scale VR scenario that allows 3D interaction by using hand gestures. In addition to spatial movement through the Nordic areas, the interface enables exploration of the Twitter data based on time (days, weeks, months, or time of predefined special events), making it particularly useful for diachronic investigations.

    In addition to demonstrating how the VR methods aid data visualization and exploration, we briefly discuss the pedagogical implications of using VR to showcase linguistic diversity. Our empirical results detail students’ reactions to working in this environment. The discussion part examines the benefits, prospects and limitations of using VR in visualizing corpus data.

  • 3.
    Danylenko, Antonina
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Lundberg, Jonas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Decisions: Algebra and Implementation (2011). In: Machine Learning and Data Mining in Pattern Recognition: 7th International Conference on Machine Learning and Data Mining in Pattern Recognition, MLDM 2011, New York, NY, USA, August/September 2011, Proceedings / [ed] Perner, Petra, Berlin, Heidelberg: Springer, 2011, Vol. 6871, p. 31-45. Chapter in book (Refereed)
    Abstract [en]

    This paper presents a generalized theory for capturing and manipulating classification information. We define decision algebra, which models decision-based classifiers as higher-order decision functions, abstracting from implementations using decision trees (or similar), decision rules, and decision tables. As a proof of the decision algebra concept, we compare decision trees with decision graphs, yet another instantiation of the proposed theoretical framework, which implement the decision algebra operations efficiently and capture classification information in a non-redundant way. Compared to classical decision tree implementations, decision graphs speed up learning and classification by up to 20% without accuracy loss and reduce memory consumption by 44%. This is confirmed by experiments.

  • 4.
    Danylenko, Antonina
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Lundberg, Jonas
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Löwe, Welf
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Decisions: Algebra, Implementation, and First Experiments (2014). In: Journal of Universal Computer Science (Online), ISSN 0948-695X, E-ISSN 0948-6968, Vol. 20, no 9, p. 1174-1231. Article in journal (Refereed)
    Abstract [en]

    Classification is a constitutive part in many different fields of Computer Science. There exist several approaches that capture and manipulate classification information in order to construct a specific classification model. These approaches are often tightly coupled to certain learning strategies, special data structures for capturing the models, and to how common problems, e.g. fragmentation, replication and model overfitting, are addressed. In order to unify these different classification approaches, we define a Decision Algebra which defines models for classification as higher order decision functions abstracting from their implementations using decision trees (or similar), decision rules, decision tables, etc. Decision Algebra defines operations for learning, applying, storing, merging, approximating, and manipulating models for classification, along with some general algebraic laws regardless of the implementation used. The Decision Algebra abstraction has several advantages. First, several useful Decision Algebra operations (e.g., learning and deciding) can be derived based on the implementation of a few core operations (including merging and approximating). Second, applications using classification can be defined regardless of the different approaches. Third, certain properties of Decision Algebra operations can be proved regardless of the actual implementation. For instance, we show that the merger of a series of probably accurate decision functions is even more accurate, which can be exploited for efficient and general online learning. As a proof of the Decision Algebra concept, we compare decision trees with decision graphs, an efficient implementation of the Decision Algebra core operations, which capture classification models in a non-redundant way. Compared to classical decision tree implementations, decision graphs are 20% faster in learning and classification without accuracy loss and reduce memory consumption by 44%. This is the result of experiments on a number of standard benchmark data sets comparing accuracy, access time, and size of decision graphs and trees as constructed by the standard C4.5 algorithm. Finally, in order to test our hypothesis about increased accuracy when merging decision functions, we merged a series of decision graphs constructed over the data sets. The result shows that on each step the accuracy of the merged decision graph increases with the final accuracy growth of up to 16%.
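
    As a rough illustration of the non-redundancy idea in the abstract (decision graphs store structurally equal subgraphs only once, while decision trees duplicate them), the Python toy below interns decision nodes in a canonical table. It is only a sketch of the concept, not the authors' Decision Algebra implementation, and every name in it is invented.

    class DecisionGraph:
        """Toy decision-graph builder that shares structurally equal subgraphs."""

        def __init__(self):
            self._nodes = {}  # canonical node table: structure -> node id

        def leaf(self, label):
            return self._intern(("leaf", label))

        def node(self, attribute, children):
            # A test whose branches all lead to the same subgraph is redundant.
            if len(set(children)) == 1:
                return children[0]
            return self._intern(("node", attribute, tuple(children)))

        def _intern(self, structure):
            # Hash-consing: identical structures map to the same node id, so
            # no subgraph is ever stored twice.
            return self._nodes.setdefault(structure, len(self._nodes))

    if __name__ == "__main__":
        g = DecisionGraph()
        yes, no = g.leaf("yes"), g.leaf("no")
        sub = g.node("humidity", [yes, no])
        root = g.node("outlook", [sub, yes, sub])   # 'sub' is shared, not copied
        print("distinct nodes:", len(g._nodes))     # 4; a tree would duplicate 'sub'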

  • 5.
    Edvinsson, Marcus
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Lundberg, Jonas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Parallel Data-Flow Analysis for Multi-Core Machines (2011). Conference paper (Refereed)
    Abstract [en]

    Static program analysis supporting software development is often part of edit-compile cycles, and precise program analysis is time-consuming. Points-to analysis is a data-flow-based static program analysis used to find object references in programs. Its applications include test case generation, compiler optimization, and program understanding. Recent increases in the processing power of desktop computers come mainly from multiple cores. Parallel algorithms are vital for simultaneous use of multiple cores. An efficient parallel points-to analysis requires sufficient work for each processing unit.

    The present paper presents a parallel points-to analysis of object-oriented programs. It exploits that (1) different target methods of polymorphic calls and (2) independent control-flow branches can be analyzed in parallel. Carefully selected thresholds guarantee that each parallel thread has sufficient work to do and that only little work is redundant with other threads. Our experiments show that this approach achieves a maximum speed-up of 4.5.

  • 6.
    Edvinsson, Marcus
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Lundberg, Jonas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Parallel Points-to Analysis for Multi-Core Machines (2011). In: 6th International Conference on High-Performance and Embedded Architectures and Compilers - HIPEAC 2011, 2011. Conference paper (Refereed)
  • 7.
    Edvinsson, Marcus
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Lundberg, Jonas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Parallel Reachability and Escape Analysis (2010). In: 2010 10th IEEE Working Conference on Source Code Analysis and Manipulation (SCAM), IEEE Press, 2010, p. 125-134. Conference paper (Refereed)
    Abstract [en]

    Static program analysis usually consists of a number of steps, each producing partial results. For example, the points-to analysis step, calculating object references in a program, usually just provides the input for larger client analyses like reachability and escape analyses. All these analyses are computationally intensive, and it is therefore vital to create parallel approaches that make use of the processing power that comes from multiple cores in modern desktop computers.

    The present paper presents two parallel approaches to increase the efficiency of reachability analysis and escape analysis, based on a parallel points-to analysis. The experiments show that the two parallel approaches achieve a speed-up of 1.5 for reachability analysis and 3.8 for escape analysis on 8 cores for a benchmark suite of Java programs.

  • 8.
    Gutzmann, Tobias
    et al.
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    Khairova, Antonina
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    Lundberg, Jonas
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    Löwe, Welf
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    Towards Comparing and Combining Points-to Analyses (2009). In: Ninth IEEE International Working Conference on Source Code Analysis and Manipulation (SCAM 2009), IEEE, 2009, p. 45-54. Conference paper (Refereed)
    Abstract [en]

    Points-to information is the basis for many analyses and transformations, e.g., for program understanding and optimization. To justify new analysis techniques, they need to be compared to the state of the art regarding their accuracy and efficiency. Usually, benchmark suites are used to experimentally compare the different techniques. In this paper, we show that the accuracy of two analyses can only be compared in restricted cases, as there is no benchmark suite with exact Points-to information, no Gold Standard, and it is hard to construct one for realistic programs. We discuss the challenges and possible traps that may arise when comparing different Points-to analyses directly with each other, and with over- and under-approximations of a Gold Standard. Moreover, we discuss how different Points-to analyses can be combined to a more precise one. We complement the paper with experiments comparing and combining different static and dynamic Points-to analyses.

  • 9.
    Gutzmann, Tobias
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Lundberg, Jonas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Collections Frameworks for Points-to Analysis (2012). In: IEEE 12th International Working Conference on Source Code Analysis and Manipulation (SCAM) 2012, IEEE, 2012, p. 4-13. Conference paper (Refereed)
    Abstract [en]

    Points-to information is the basis for many analyses and transformations, e.g., for program understanding and optimization. Collections frameworks are part of most modern programming languages' infrastructures and are used by many applications. The richness of features and the inherent structure of collection classes affect both performance and precision of points-to analysis negatively.

    In this paper, we discuss how to replace original collections frameworks with versions specialized for points-to analysis. We implement such a replacement for the Java Collections Framework and support its benefits for points-to analysis by applying it to three different points-to analysis implementations. In experiments, context-sensitive points-to analyses require, on average, 16-24% less time while at the same time being more precise. Context-insensitive analysis in conjunction with inlining also benefits in both precision and analysis cost.

  • 10.
    Gutzmann, Tobias
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Lundberg, Jonas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Feedback-driven Points-to Analysis (2010). Report (Other academic)
    Abstract [en]

    Points-to analysis is a static program analysis that extracts reference information from a given input program. Its accuracy is limited due to abstractions that any such analysis needs to make. Further, the exact analysis results are unknown, i.e., no so-called Gold Standard exists for points-to analysis. This hinders the assessment of new ideas to points-to analysis, as results can be compared only relative to results obtained by other inaccurate analyses.

    In this paper, we present feedback-driven points-to analysis. We suggest performing (any classical) points-to analysis with the points-to results at certain program points guarded by a-priori upper bounds. Such upper bounds can come from other points-to analyses – this is of interest when different approaches are not strictly ordered in terms of accuracy – and from human insight, i.e., manual proofs that certain points-to relations are infeasible for every program run. This gives us a tool at hand to compute very accurate points-to analysis and, ultimately, to manually create a Gold Standard.
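
    The guarding idea can be sketched as a fixed-point loop in which the points-to sets at selected program points are intersected with their a-priori upper bounds after every transfer step. The Python below is a hypothetical miniature, not the authors' implementation; the transfer function, the program-point names, and the abstract objects are invented for the example.

    def feedback_driven_solve(transfer, initial, upper_bounds, max_iters=1000):
        """Fixed-point loop with selected program points guarded by upper bounds.

        'transfer' maps the current solution (program point -> set of abstract
        objects) to an updated one; 'upper_bounds' maps guarded points to the
        largest points-to set they may have (e.g. taken from another analysis
        or from a manual proof).
        """
        solution = {p: set(objs) for p, objs in initial.items()}
        for _ in range(max_iters):
            updated = transfer(solution)
            # Clamp guarded program points to their a-priori upper bound.
            for point, bound in upper_bounds.items():
                if point in updated:
                    updated[point] &= bound
            if updated == solution:
                return solution            # fixed point reached
            solution = updated
        raise RuntimeError("no fixed point within the iteration budget")

    if __name__ == "__main__":
        def transfer(sol):
            new = {p: set(objs) for p, objs in sol.items()}
            new["x"] |= {"o1", "o2"}       # the analysis says x may point to o1, o2
            new["y"] |= new["x"]           # y = x
            return new

        result = feedback_driven_solve(
            transfer,
            initial={"x": set(), "y": set()},
            upper_bounds={"y": {"o1"}},    # external knowledge: y never points to o2
        )
        print(result)                      # x keeps o1 and o2; y is clamped to o1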

  • 11.
    Gutzmann, Tobias
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Lundberg, Jonas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Feedback-driven Points-to Analysis (2011). In: 26th Symposium On Applied Computing (SAC 2011), TaiChung, March 21-24, 2011, 2011. Conference paper (Refereed)
    Abstract [en]

    In this paper, we present feedback-driven points-to analysis where any classical points-to analysis has its points-to results at certain program points guarded by a-priori upper bounds. Such upper bounds can come from other points-to analyses – this is of interest when different approaches are not strictly ordered in terms of accuracy – and from human insight, i.e., manual proofs that certain points-to relations are infeasible for every program run.

  • 12.
    Gutzmann, Tobias
    et al.
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Datalogi.
    Lundberg, Jonas
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Datalogi.
    Löwe, Welf
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Datalogi.
    Towards Path-Sensitive Points-to Analysis (2007). In: Seventh IEEE International Working Conference on Source Code Analysis and Manipulation (SCAM), IEEE Computer Society, Los Alamitos, CA, USA, 2007, p. 59-68. Conference paper (Refereed)
    Abstract [en]

    Points-to analysis is a static program analysis aiming at analyzing the reference structure of dynamically allocated objects at compile-time. It constitutes the basis for many analyses and optimizations in software engineering and compiler construction. Sparse program representations, such as Whole Program Points-to Graph (WPP2G) and Points-to SSA (P2SSA), represent only dataflow that is directly relevant for points-to analysis. They have proved to be practical in terms of analysis precision and efficiency. However, intra-procedural control flow information is removed from these representations, which sacrifices analysis precision to improve analysis performance.

    We show an approach for keeping control-flow-related information even in sparse program representations by representing control-flow effects as operations on the data transferred, i.e., as dataflow information. These operations affect distinct paths of the program differently, thus yielding a certain degree of path-sensitivity. Our approach works with both WPP2G and P2SSA representations.

    We apply the approach to P2SSA-based and flow-sensitive points-to analysis and evaluate a context-insensitive and a context-sensitive variant. We assess our approach using abstract precision metrics. Moreover, we investigate the precision improvements and performance penalties when used as an input to three source-code-level analyses: dead code, cast safety, and null pointer analysis.

  • 13.
    Hedenborg, Mathias
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Lundberg, Jonas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Löwe, Welf
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Memory efficient context-sensitive program analysis (2021). In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 177, article id 110952. Article in journal (Refereed)
    Abstract [en]

    Static program analysis is in general more precise if it is sensitive to execution contexts (execution paths). But then it is also more expensive in terms of memory consumption. For languages with conditions and iterations, the number of contexts grows exponentially with the program size. This problem is not just a theoretical issue. Several papers evaluating inter-procedural context-sensitive data-flow analysis report severe memory problems, and the path-explosion problem is a major issue in program verification and model checking.

    In this paper we propose χ-terms as a means to capture and manipulate context-sensitive program information in a data-flow analysis. χ-terms are implemented as directed acyclic graphs without any redundant subgraphs.

    To show the efficiency of our approach we run experiments comparing the memory usage of χ-terms with four alternative data structures. Our experiments show that χ-terms clearly outperform all the alternatives in terms of memory efficiency.

  • 14.
    Hedenborg, Mathias
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Lundberg, Jonas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Löwe, Welf
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Trapp, Martin
    Senacor Technologies AG, Germany.
    A Framework for Memory Efficient Context-Sensitive Program Analysis (2022). In: Theory of Computing Systems, ISSN 1432-4350, E-ISSN 1433-0490, Vol. 66, p. 911-956. Article in journal (Refereed)
    Abstract [en]

    Static program analysis is in general more precise if it is sensitive to execution contexts (execution paths). But then it is also more expensive in terms of memory consumption. For languages with conditions and iterations, the number of contexts grows exponentially with the program size. This problem is not just a theoretical issue. Several papers evaluating inter-procedural context-sensitive data-flow analysis report severe memory problems, and the path-explosion problem is a major issue in program verification and model checking.

    In this paper we propose χ-terms as a means to capture and manipulate context-sensitive program information in a data-flow analysis. χ-terms are implemented as directed acyclic graphs without any redundant subgraphs. We introduce the k-approximation and the l-loop-approximation that limit the size of the context-sensitive information at the cost of analysis precision. We prove that every context-insensitive data-flow analysis has a corresponding k, l-approximated context-sensitive analysis, and that these analyses are sound and guaranteed to reach a fixed point.

    We also present detailed algorithms outlining a compact, redundancy-free, and DAG-based implementation of χ-terms.
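
    To give a flavour of how branch-dependent information can be kept apart and then approximated, the Python toy below builds chi(branch, value-if-true, value-if-false) nodes, interns them so equal subterms are stored only once, and joins the two alternatives once a nesting-depth bound k is exceeded. This is only a sketch of the idea described in the abstract; the representation, the join operator, and all names are assumptions, not the paper's data structures or algorithms.

    class ChiTermFactory:
        """Toy χ-term builder with interning and a simple k-approximation."""

        def __init__(self, join):
            self.join = join          # the analysis lattice's join operator
            self._table = {}          # interning table for chi nodes

        def value(self, v):
            return ("val", v)

        def chi(self, branch, left, right, depth, k):
            if left == right:
                return left           # no branch dependence, collapse the node
            if depth >= k:
                # k-approximation: beyond nesting depth k, give up the path
                # distinction and join the two alternatives instead.
                return self.value(self.join(self._flatten(left), self._flatten(right)))
            key = ("chi", branch, left, right)
            return self._table.setdefault(key, key)   # equal subterms stored once

        def _flatten(self, term):
            if term[0] == "val":
                return term[1]
            _, _, left, right = term
            return self.join(self._flatten(left), self._flatten(right))

    if __name__ == "__main__":
        f = ChiTermFactory(join=lambda a, b: a | b)   # sets as lattice values
        then_val = f.value(frozenset({"o1"}))
        else_val = f.value(frozenset({"o2"}))
        print(f.chi("b1", then_val, else_val, depth=0, k=2))  # branches kept apart
        print(f.chi("b1", then_val, else_val, depth=2, k=2))  # joined into one value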

  • 15.
    Hedenborg, Mathias
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Lundberg, Jonas
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Löwe, Welf
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Trapp, Martin
    Senacor Technologies AG, Germany.
    Approximating Context-Sensitive Program Information (2015). In: Proceedings Kolloquium Programmiersprachen (KPS 2015) / [ed] Jens Knoop, 2015. Conference paper (Other academic)
    Abstract [en]

    Static program analysis is in general more precise if it is sensitive to execution contexts (execution paths). In this paper we propose χ-terms as a means to capture and manipulate context-sensitive program information in a data-flow analysis. We introduce finite k-approximation and loop approximation that limit the size of the context-sensitive information. These approximated χ-terms form a lattice with a finite depth, thus guaranteeing that every data-flow analysis reaches a fixed point.

  • 16.
    Iftikhar, Muhammad Usman
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Lundberg, Jonas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Weyns, Danny
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science. Katholieke Univ Leuven, Belgium.
    A Model Interpreter for Timed Automata (2016). In: Leveraging Applications of Formal Methods, Verification and Validation: Foundational Techniques, PT I, Springer, 2016, p. 243-258. Conference paper (Refereed)
    Abstract [en]

    In the model-centric approach to model-driven development, the models used are sufficiently detailed to be executed. Being able to execute the model directly, without any intermediate model-to-code translation, has a number of advantages. The model is always up-to-date and runtime updates of the model are possible. This paper presents a model interpreter for timed automata, a formalism often used for modeling and verification of real-time systems. The model interpreter supports real-time system features like simultaneous execution, system wide signals, a ticking clock, and time constraints. Many existing formal representations can be verified, and many existing DSMLs can be executed. It is the combination of being both verifiable and executable that makes our approach rather unique.
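
    As a loose illustration of the interpreter features listed above (simultaneous execution, system-wide signals, a ticking clock, and time constraints), the sketch below steps a network of toy timed automata. It is a hypothetical miniature in Python, not the model interpreter described in the paper, and its class and field names are invented.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Transition:
        source: str
        target: str
        signal: Optional[str] = None   # system-wide signal required to fire, if any
        min_clock: float = 0.0         # simple time constraint: local clock >= min_clock

    @dataclass
    class Automaton:
        name: str
        location: str
        transitions: List[Transition]
        clock: float = 0.0

    class Interpreter:
        """Tiny interpreter for a network of timed automata: all automata share
        a ticking clock, may fire enabled transitions each step, and can react
        to system-wide signals. Illustrative toy only, not the paper's tool."""

        def __init__(self, automata):
            self.automata = automata
            self.signals = set()

        def emit(self, signal):
            self.signals.add(signal)

        def tick(self, dt=1.0):
            # Advance every local clock by the same amount (simultaneous execution).
            for a in self.automata:
                a.clock += dt
            # Fire at most one enabled transition per automaton.
            for a in self.automata:
                for t in a.transitions:
                    if (t.source == a.location and a.clock >= t.min_clock
                            and (t.signal is None or t.signal in self.signals)):
                        a.location, a.clock = t.target, 0.0  # reset clock on firing
                        break
            self.signals.clear()  # signals are visible for one step only

    if __name__ == "__main__":
        lamp = Automaton("lamp", "off", [Transition("off", "on", signal="press"),
                                         Transition("on", "off", min_clock=3.0)])
        system = Interpreter([lamp])
        system.emit("press"); system.tick()
        print(lamp.location)                  # on
        system.tick(); system.tick(); system.tick()
        print(lamp.location)                  # off again after 3 time units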

  • 17.
    Khairova, Antonina
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Lundberg, Jonas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Decision algebras. Capturing and manipulating decision information: Doctoral Forum poster (2010). Other (Other academic)
  • 18.
    Laitinen, Mikko
    et al.
    Linnaeus University, Faculty of Arts and Humanities, Department of Languages. University of Eastern Finland, Finland.
    Fatemi, Masoud
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). University of Eastern Finland, Finland.
    Lundberg, Jonas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Size matters: digital social networks and language change (2020). In: Frontiers in Artificial Intelligence, E-ISSN 2624-8212, Vol. 3, p. 1-15, article id 46. Article in journal (Refereed)
    Abstract [en]

    Social networks play a role in language variation and change, and social network theory has offered a powerful tool in modeling innovation diffusion. Networks are characterized by ties of varying strength, which influence how novel information is accessed. It is widely held that weak ties promote change, whereas strong ties lead to norm-enforcing communities that resist change. However, the model is primarily suited to investigate small ego networks, and its predictive power remains to be tested in large digital networks of mobile individuals. This article revisits the social network model in sociolinguistics and investigates network size as a crucial component in the theory. We specifically concentrate on whether the distinction between weak and strong ties levels out in large networks of over 100 nodes. The article presents two computational methods that can handle large and messy social media data and render them usable for analyzing networks, thus expanding the empirical and methodological basis from small-scale ethnographic observations. The first method aims to uncover broad quantitative patterns in data and utilizes a cohort-based approach to network size. The second is an algorithm-based approach that uses mutual interaction parameters on Twitter. Our results gained from both methods suggest that network size plays a role, and that the distinction between weak ties and slightly stronger ties levels out once the network size grows beyond roughly 120 nodes. This finding closely resembles findings in other fields of social network research and calls for new research avenues in computational sociolinguistics.
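
    The second, algorithm-based method is described as using mutual interaction parameters on Twitter. A minimal, hypothetical sketch of that general idea is shown below: reciprocal mentions above an invented threshold mark a tie as stronger, and ego network size is the number of distinct alters. The field names and the threshold are assumptions, not the article's actual operationalization.

    from collections import Counter

    def tie_strengths(mentions, threshold=2):
        """Classify ties in a mention network as 'weak' or 'stronger'.

        'mentions' is an iterable of (sender, receiver) pairs; a tie counts as
        stronger once both directions occur at least 'threshold' times.
        """
        directed = Counter(mentions)
        ties = {}
        for (a, b), n_ab in directed.items():
            pair = tuple(sorted((a, b)))
            ties[pair] = "stronger" if min(n_ab, directed[(b, a)]) >= threshold else "weak"
        return ties

    def network_size(ties, user):
        """Ego network size: the number of distinct alters a user is tied to."""
        return sum(1 for pair in ties if user in pair)

    if __name__ == "__main__":
        data = [("a", "b"), ("b", "a"), ("a", "b"), ("b", "a"), ("a", "c"), ("c", "d")]
        ties = tie_strengths(data)
        print(ties)                      # ('a','b') stronger; ('a','c'), ('c','d') weak
        print(network_size(ties, "a"))   # 2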

  • 19.
    Laitinen, Mikko
    et al.
    Linnaeus University, Faculty of Arts and Humanities, Department of Languages. University of Eastern Finland, Finland.
    Lundberg, Jonas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). University of Eastern Finland, Finland.
    ELF, language change and social networks: Evidence from real-time social media data (2020). In: Language Change: The Impact of English as a Lingua Franca / [ed] Anna Mauranen, Svetlana Vetchinnikova, Cambridge: Cambridge University Press, 2020, p. 179-204. Chapter in book (Refereed)
    Abstract [en]

    This article extends ELF studies towards variationist and computational sociolinguistics. It uses social network theory to explore how ELF is embedded in the social structures in which it is used and explores the size and nature of social networks in ELF. The empirical part investigates if multilingual and often mobile ELF users have larger networks and more weak ties than others, and if they therefore could be more likely to act as innovators or early adopters of change than the other speaker groups. Our empirical material consists of real-time social media data from Twitter. The results show that, statistically speaking, social embedding of ELF creates conditions that favor change. ELF users have larger networks and more weak ties than the other groups examined here. With regard to methods, social embedding needs to be taken into account in future studies, and we illustrate that variationist and computational sociolinguistics offers a useful theoretical and methodological toolbox for this task.

  • 20.
    Laitinen, Mikko
    et al.
    University of Eastern Finland, Finland.
    Lundberg, Jonas
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Levin, Magnus
    Linnaeus University, Faculty of Arts and Humanities, Department of Languages.
    Lakaw, Alexander
    University of Eastern Finland, Finland.
    Revisiting weak ties: Using present-day social media data in variationist studies (2017). In: Exploring Future Paths for Historical Sociolinguistics / [ed] Tanja Säily, Minna Palander-Collin, Arja Nurmi, Anita Auer, Amsterdam: John Benjamins Publishing Company, 2017, p. 303-325. Chapter in book (Refereed)
    Abstract [en]

    This article makes use of big and rich present-day data to revisit the social network model in sociolinguistics. This model predicts that mobile individuals with ties outside a home community and subsequent loose-knit networks tend to promote the diffusion of linguistic innovations. The model has been applied to a range of small ethnographic networks. We use a database of nearly 200,000 informants who send micro-blog messages in Twitter. We operationalize networks using two ratio variables; one of them is a truly weak tie and the other one a slightly stronger one. The results show that there is a straightforward increase of innovative behavior in the truly weak tie network, but the data indicate that innovations also spread under conditions of stronger networks, given that the network size is large enough. On the methodological level, our approach opens up new horizons in using big and often freely available data in sociolinguistics, both past and present.

  • 21.
    Laitinen, Mikko
    et al.
    Linnaeus University, Faculty of Arts and Humanities, Department of Languages. Univ Eastern Finland, Finland.
    Lundberg, Jonas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Levin, Magnus
    Linnaeus University, Faculty of Arts and Humanities, Department of Languages.
    Lakaw, Alexander
    Linnaeus University, Faculty of Arts and Humanities, Department of Languages.
    Utilizing Multilingual Language Data in (Nearly) Real Time: The Case of the Nordic Tweet Stream (2017). In: Journal of Universal Computer Science (Online), ISSN 0948-695X, E-ISSN 0948-6968, Vol. 23, no 11, p. 1038-1056. Article in journal (Refereed)
    Abstract [en]

    This paper presents the Nordic Tweet Stream, a cross-disciplinary digital humanities project that downloads Twitter messages from Denmark, Finland, Iceland, Norway and Sweden. The paper first introduces some of the technical aspects in creating a real-time monitor corpus that grows every day, and then two case studies illustrate how the corpus could be used as empirical evidence in studies focusing on the global spread of English. Our approach in the case studies is sociolinguistic, and we are interested in how widespread multilingualism which involves English is in the region, and what happens to ongoing grammatical change in digital environments. The results are based on 6.6 million tweets collected during the first four months of data streaming. They show that English was the most frequently used language, accounting for almost a third. This indicates that Nordic Twitter users choose English as a means of reaching wider audiences. The preference for English is the strongest in Denmark and the weakest in Finland. Tweeting mostly occurs late in the evening, and high-profile media events such as the Eurovision Song Contest produce considerable peaks in Twitter activity. The prevalent use of informal features such as univerbated verb forms (e.g., gotta for (HAVE) got to) supports previous findings of the speech-like nature of written Twitter data, but the results indicate that tweeters are pushing the limits even further.

  • 22.
    Laitinen, Mikko
    et al.
    University of Eastern Finland, Finland.
    Lundberg, Jonas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Levin, Magnus
    Linnaeus University, Faculty of Arts and Humanities, Department of Languages.
    Martins, Rafael Messias
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    The Nordic Tweet Stream: A Dynamic Real-Time Monitor Corpus of Big and Rich Language Data (2018). In: DHN 2018 Digital Humanities in the Nordic Countries 3rd Conference: Proceedings of the Digital Humanities in the Nordic Countries 3rd Conference, Helsinki, Finland, March 7-9, 2018 / [ed] Eetu Mäkelä, Mikko Tolonen, Jouni Tuominen, CEUR-WS.org, 2018, p. 349-362. Conference paper (Refereed)
    Abstract [en]

    This article presents the Nordic Tweet Stream (NTS), a cross-disciplinary corpus project of computer scientists and a group of sociolinguists interested in language variability and in the global spread of English. Our research integrates two types of empirical data: We not only rely on traditional structured corpus data but also use unstructured data sources that are often big and rich in metadata, such as Twitter streams. The NTS downloads tweets and associated metadata from Denmark, Finland, Iceland, Norway and Sweden. We first introduce some technical aspects in creating a dynamic real-time monitor corpus, and the following case study illustrates how the corpus could be used as empirical evidence in sociolinguistic studies focusing on the global spread of English to multilingual settings. The results show that English is the most frequently used language, accounting for almost a third. These results can be used to assess how widespread English use is in the Nordic region and offer a big data perspective that complements previous small-scale studies. The future objectives include annotating the material, making it available for the scholarly community, and expanding the geographic scope of the data stream outside the Nordic region.

  • 23.
    Lincke, Alisa
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Lundberg, Jenny
    Thunander, Maria
    Lund University , Sweden.
    Milrad, Marcelo
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Lundberg, Jonas
    Jusufi, Ilir
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Diabetes Information in Social Media (2018). In: Proceedings of the 11th International Symposium on Visual Information Communication and Interaction (VINCI '18) / [ed] Karsten Klein, Yi-Na Li, and Andreas Kerren, ACM Publications, 2018, p. 104-105. Conference paper (Refereed)
    Abstract [en]

    Social media platforms have created new ways for people to communicate and express themselves. Thus, it is important to explore how e-health-related information is generated and disseminated in these platforms. The aim of our current efforts is to investigate the content and flow of information when people in Sweden use Twitter to talk about diabetes-related issues. To achieve our goals, we have used data mining and visualization techniques in order to explore, analyze and cluster Twitter data we have collected during a period of 10 months. Our initial results indicate that patients use Twitter to share diabetes-related information and to communicate about their disease as an alternative way that complements the traditional channels used by health care professionals.

  • 24.
    Lincke, Rüdiger
    et al.
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Datalogi.
    Lundberg, Jonas
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Datalogi.
    Löwe, Welf
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Datalogi.
    Comparing Software Metric Tools (2008). In: Compilation Proceedings of the 2008 International Symposium on Software Testing and Analysis and Co-Located Workshops, ACM, 2008. Conference paper (Refereed)
    Abstract [en]

    This paper shows that existing software metric tools interpret and implement the definitions of object-oriented software metrics differently. This delivers tool-dependent metrics results and even has implications for the results of analyses based on these metrics results. In short, the metrics-based assessment of a software system and the measures taken to improve its design differ considerably from tool to tool. To support our case, we conducted an experiment with a number of commercial and free metrics tools. We calculated metrics values using the same set of standard metrics for three software systems of different sizes. Measurements show that, for the same software system and metrics, the metrics values are tool dependent. We also defined a (simple) software quality model for "maintainability" based on the metrics selected. It defines a ranking of the classes that are most critical with respect to maintainability. Measurements show that even the ranking of classes in a software system is metrics-tool dependent.

  • 25.
    Lundberg, Jonas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Fast and Precise Points-to Analysis (2014). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Many software engineering applications require points-to analysis. These client applications range from optimizing compilers to integrated program development environments (IDEs) and from testing environments to reverse-engineering tools. The software engineering applications are often user-interactive, or used in an edit-compile cycle, and need the points-to analysis to be fast and precise.

    In this compilation thesis, we present a new context- and flow-sensitive approach to points-to analysis that is both fast and precise. This is accomplished by a new SSA-based flow-sensitive dataflow algorithm (Paper 1) and a new context-sensitive analysis (Paper 2). Compared to other well-known analysis approaches, our approach is faster in practice: on average, twice as fast as the call string approach and an order of magnitude faster than the object-sensitive technique. In fact, it proves to be only marginally slower than a context-insensitive baseline analysis. At the same time, it provides higher precision than the call string technique and is similar in precision to the object-sensitive technique. We confirm these statements with experiments in Paper 2.

    Paper 3 is a systematic comparison of ten different variants of context-sensitive points-to analysis using different call-depths k >= 1 for separating the contexts. Previous works indicate that analyses with a call-depth k = 1 only provide slightly better precision than context-insensitive analysis, and they find no substantial precision improvement when using the more expensive analyses with call-depth k > 1. The hypothesis in Paper 3 is that substantial differences between the context-sensitive approaches show if (and only if) the precision is measured by more fine-grained metrics focusing on individual objects (rather than methods and classes) and references between them. These metrics are justified by the many applications requiring such detailed object reference information.

    The main results in Paper 3 show that the differences between different context-sensitive analysis techniques are substantial; also the differences between the context-insensitive and the context-sensitive analyses with call-depth k = 1 are substantial. The major surprise was that increasing the call-depth k > 1 did not lead to any substantial precision improvements. This is a negative result since it indicates that, in practice, we cannot get a more precise points-to analysis by increasing the call-depth. Further investigations show that substantial precision improvements can be detected for k > 1, but they occur at such a low detail level that they are unlikely to be of any practical use.

  • 26.
    Lundberg, Jonas
    et al.
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    Gutzmann, Tobias
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    Edvinsson, Marcus
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    Löwe, Welf
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    Fast and Precise Points-to Analysis (2009). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 51, no 10, p. 1428-1439. Article in journal (Refereed)
    Abstract [en]

    Many software engineering applications require points-to analysis. These client applications range from optimizing compilers to integrated program development environments (IDEs) and from testing environments to reverse-engineering tools. Moreover, software engineering applications used in an edit-compile cycle need points-to analysis to be fast and precise.

    In this article, we present a new context- and flow-sensitive approach to points-to analysis where calling contexts are distinguished by the points-to sets analyzed for their call target expressions. Compared to other well-known context-sensitive techniques, it is faster in practice: on average, twice as fast as the call string approach and an order of magnitude faster than the object-sensitive technique. In fact, it proves to be only marginally slower than a context-insensitive baseline analysis. At the same time, it provides higher precision than the call string technique and is similar in precision to the object-sensitive technique. We confirm these statements with experiments using a number of abstract precision metrics and a concrete client application: escape analysis.
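
    The context abstraction described above (calling contexts distinguished by the points-to set of the call target expression) can be illustrated with a toy memoization sketch. The Python below is not the authors' analysis; the function names and the dummy method summary are assumptions made for the example.

    def analyze_call(method, receiver_pts, cache, analyze_body):
        """Memoize a method analysis per calling context, where the context is
        the points-to set computed for the call's target (receiver) expression.

        Two call sites share one analysis result whenever their receivers may
        point to the same set of abstract objects, regardless of the call
        string that led there.
        """
        context = (method, frozenset(receiver_pts))    # this-sensitive context key
        if context not in cache:
            cache[context] = analyze_body(method, frozenset(receiver_pts))
        return cache[context]

    if __name__ == "__main__":
        analyzed = []

        def analyze_body(method, receiver_pts):
            analyzed.append((method, receiver_pts))
            return {"return": receiver_pts}            # dummy method summary

        cache = {}
        analyze_call("toString", {"o1"}, cache, analyze_body)         # analyzed
        analyze_call("toString", {"o1"}, cache, analyze_body)         # cache hit
        analyze_call("toString", {"o1", "o2"}, cache, analyze_body)   # new context
        print(len(analyzed))                           # 2 analysis runs, not 3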

  • 27.
    Lundberg, Jonas
    et al.
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Datalogi.
    Gutzmann, Tobias
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Datalogi.
    Löwe, Welf
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Datalogi.
    Fast and Precise Points-to Analysis (2008). In: Eighth IEEE International Working Conference on Source Code Analysis and Manipulation (SCAM 2008), IEEE Computer Society, Los Alamitos, CA, USA, 2008, p. 133-142. Conference paper (Refereed)
    Abstract [en]

    Many software engineering applications require points-to analysis. Client applications range from optimizing compilers to program development and testing environments to reverse-engineering tools. In this paper, we present a new context-sensitive approach to points-to analysis where calling contexts are distinguished by the points-to sets analyzed for their target expressions. Compared to other well-known context-sensitive techniques, it is faster - twice as fast as the call string approach and by an order of magnitude faster than the object-sensitive technique - and requires less memory. At the same time, it provides higher precision than the call string technique and is similar in precision to the object-sensitive technique. These statements are confirmed by experiments.

  • 28.
    Lundberg, Jonas
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Hedenborg, Mathias
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    SSA-Based Simulated Execution (2012). In: Patterns, Programming and Everything / [ed] Karin K. Breitman and R. Nigel Horspool, London: Springer, 2012, p. 75-90. Chapter in book (Refereed)
    Abstract [en]

    Most scalable approaches to inter-procedural dataflow analysis do not take into account the order in which fields are accessed, and methods are executed, at run-time. That is, they have no inter-procedural flow-sensitivity. In this chapter we present an approach to dataflow analysis named Simulated Execution. It is flow-sensitive in the sense that a memory accessing operation (call or field access) will never be affected by another memory access that is executed thereafter in all runs of a program. This makes Simulated Execution strictly more precise than the most frequently used flow-insensitive approaches. We also outline a proof of correctness using abstract interpretation. Finally, although we present Simulated Execution as a dataflow algorithm applied to context-insensitive Points-to Analysis, it can be applied to any inter-procedural dataflow problem and in a context-sensitive manner.

  • 29.
    Lundberg, Jonas
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Laitinen, Mikko
    Linnaeus University, Faculty of Arts and Humanities, Department of Languages. University of Eastern Finland, Finland.
    Twitter trolls: A linguistic profile of anti-democratic discourse (2020). In: Language Sciences (Oxford), ISSN 0388-0001, E-ISSN 1873-5746, Vol. 79, p. 1-14, article id 101268. Article in journal (Refereed)
    Abstract [en]

    This article focuses on anti-democratic discourse and investigates the linguistic profile of Twitter trolls. The troll data consist of some 3.5 million messages in English obtained through Twitter in late 2018. These data originate from potentially state-backed information operations aimed at sowing discord in Western societies. The baseline data, against which the troll data are compared, contain circa 4.4 million messages in English drawn from the Nordic Tweet Stream corpus. A machine learning application that enables us to select genuine personal messages in this corpus is used to prune the data. The empirical part investigates frequency-based characteristics of the two datasets. We utilize a set of automatically-extracted word-list information and the observed frequencies of personal pronouns. Our empirical findings show considerable quantitative differences so that the troll data are shorter, make use of a smaller number of lexical types and tokens, and resemble more formal registers, while the personal messages are more spoken-like. The results could be used to improve automated detection systems whose purpose is to identify troll accounts.
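
    As an illustration of the kind of frequency-based characteristics mentioned above (message length, lexical types and tokens, personal pronouns), the sketch below computes a minimal profile for a collection of messages. The feature set, the pronoun list, and the example messages are invented; the article's word-list-based features are considerably richer.

    import re

    PRONOUNS = {"i", "me", "my", "you", "your", "we", "us", "our", "he", "she", "they"}

    def profile(messages):
        """Average tokens per message, type/token ratio, and pronoun rate."""
        tokens = [w for msg in messages for w in re.findall(r"[a-z']+", msg.lower())]
        if not tokens:
            return {}
        return {
            "avg_tokens_per_message": len(tokens) / len(messages),
            "type_token_ratio": len(set(tokens)) / len(tokens),
            "pronoun_rate": sum(w in PRONOUNS for w in tokens) / len(tokens),
        }

    if __name__ == "__main__":
        troll_like = ["Breaking: officials announce new policy measures",
                      "Officials announce new policy measures today"]
        personal = ["i think we should just go, you know?",
                    "my dog ate my homework and i'm not even mad"]
        print("troll-like:", profile(troll_like))
        print("personal:  ", profile(personal))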

  • 30.
    Lundberg, Jonas
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Points-to Analysis: A Fine-Grained Evaluation (2012). In: Journal of Universal Computer Science (Online), ISSN 0948-695X, E-ISSN 0948-6968, Vol. 18, no 20, p. 2851-2878. Article in journal (Refereed)
    Abstract [en]

    Points-to analysis is a static program analysis that extracts reference information from programs, e.g., possible targets of a call and possible objects referenced by a field. Previous works evaluating different approaches to context-sensitive Points-to analyses use coarse-grained precision metrics focusing on references between source code entities like methods and classes. Two typical examples of such metrics are the number of nodes and edges in a call-graph. These works indicate that context-sensitive analysis with a call-depth k = 1 only provides slightly better precision than context-insensitive analysis. Moreover, these works could not find a substantial precision improvement when using the more expensive analyses with call-depth k > 1. The hypothesis in the present paper is that substantial differences between the context-sensitive approaches show if (and only if) the precision is measured by more fine-grained metrics focusing on individual objects (rather than methods and classes) and references between them. These metrics are justified by the many applications requiring such detailed object reference information. In order to experimentally validate our hypothesis we make a systematic comparison of ten different variants of context-sensitive Points-to analysis using different call-depths k >= 1 for separating the contexts. For the comparison we use a metric suite containing four different metrics that all focus on individual objects and references between them. The main results show that the differences between different context-sensitive analysis techniques are substantial, also the differences between the context-insensitive and the context-sensitive analyses with call-depth k = 1 are substantial. The major surprise was that increasing the call-depth k > 1 did not lead to any substantial precision improvements. This is a negative result since it indicates that, in practice, we cannot get a more precise Points-to analysis by increasing the call-depth. Further investigations show that substantial precision improvements can be detected for k > 1, but they occur at such a low detail level that they are unlikely to be of any practical use.

  • 31.
    Lundberg, Jonas
    et al.
    Nordqvist, Jonas
    Linnaeus University, Faculty of Technology, Department of Mathematics.
    Laitinen, Mikko
    University of Eastern Finland, Finland.
    Towards a language independent Twitter bot detector (2019). In: Proceedings of 4th Conference of The Association Digital Humanities in the Nordic Countries: Copenhagen, March 6-8 2019 / [ed] Navarretta Costanza et al., Copenhagen: University of Copenhagen, 2019, Vol. 2364, p. 308-319. Conference paper (Refereed)
    Abstract [en]

    This article describes our work in developing an application that recognizes automatically generated tweets. The objective of this machine learning application is to increase data accuracy in sociolinguistic studies that utilize Twitter by reducing skewed sampling and inaccuracies in linguistic data. Most previous machine learning attempts to exclude bot material have been language dependent since they make use of monolingual Twitter text in their training phase. In this paper, we present a language independent approach which classifies each single tweet to be either autogenerated (AGT) or human-generated (HGT). We define an AGT as a tweet where all or parts of the natural language content is generated automatically by a bot or other type of program. In other words, while AGT/HGT refer to an individual message, the term bot refers to non-personal and automated accounts that post content to online social networks. Our approach classifies a tweet using only metadata that comes with every tweet, and we utilize those metadata parameters that are both language and country independent. The empirical part shows good success rates. Using a bilingual training set of Finnish and Swedish tweets, we correctly classified about 98.2% of all tweets in a test set using a third language (English).
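
    The trained classifier itself is not reproduced here, but the abstract's key point (only language- and country-independent tweet metadata is used, never the text) can be illustrated with a hypothetical feature-extraction sketch. The fields below exist in the public Twitter API object, but which features the paper actually uses, and the thresholds chosen here, are assumptions.

    def agt_features(tweet):
        """Language- and country-independent metadata features for one tweet."""
        return [
            # Posting client: official apps vs. third-party/automation tools.
            0 if "Twitter" in tweet.get("source", "") else 1,
            len(tweet.get("entities", {}).get("urls", [])),      # attached links
            len(tweet.get("entities", {}).get("hashtags", [])),  # attached hashtags
            int(tweet.get("user", {}).get("statuses_count", 0) > 50000),  # high-volume account
            int(tweet.get("retweeted", False)),
        ]

    if __name__ == "__main__":
        tweet = {
            "source": "weather-station-bot",
            "entities": {"urls": [{"url": "https://example.org"}], "hashtags": []},
            "user": {"statuses_count": 120000},
            "retweeted": False,
        }
        # Vectors like this would feed any standard classifier; the paper trains
        # on labelled Finnish and Swedish tweets and tests on English ones.
        print(agt_features(tweet))   # [1, 1, 0, 1, 0]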

  • 32.
    Löwe, Welf
    et al.
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    Lundberg, Jonas
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    A Scalable Flow-Sensitive Points-to Analysis (2006). Report (Other academic)
  • 33.
    Löwe, Welf
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Lundberg, Jonas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Towards Parallelizing Object-Oriented Programs Automatically (2012). In: Applications, Tools and Techniques on the Road to Exascale Computing: Proceedings of the International Conference on Parallel Computing (ParCo) 2011 / [ed] De Bosschere, K., D'Hollander, E.H., Joubert, G.R., Padua, D., Peters, F., IOS Press, 2012, p. 91-98. Conference paper (Refereed)
    Abstract [en]

    We describe an approach to parallelize sequential object-oriented general-purpose programs automatically, adapting well-known analysis and transformation techniques combined with context-aware composition. First experiments demonstrate the potential speed-up. This approach allows sequential object-oriented programs to benefit from modern hardware developments without bearing unacceptable re-engineering costs.

  • 34.
    Schordan, Markus
    et al.
    Lawrence Livermore National Laboratory, USA.
    Beyer, Dirk
    LMU Munich, Germany.
    Lundberg, Jonas
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Evaluation and Reproducibility of Program Analysis and Verification (Track Introduction) (2016). In: Leveraging Applications of Formal Methods, Verification and Validation: Foundational Techniques, PT I, Springer, 2016, p. 191-194. Conference paper (Refereed)
    Abstract [en]

    Manual inspection of complex software is costly and error-prone. Techniques and tools that do not require manual inspection are therefore urgently needed as our software systems grow at a rapid rate. This track is concerned with the methods of comparative evaluation of program analyses and the tools that implement them. It also addresses the question of how program properties that have been verified can be represented such that they remain reproducible and reusable as intermediate results for other analyses and verification phases. In particular, it is of interest how different tools can be combined to achieve better results than with only one of those tools alone.

  • 35.
    Strein, Dennis
    et al.
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Datalogi.
    Lincke, Rüdiger
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Datalogi.
    Lundberg, Jonas
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Datalogi.
    Löwe, Welf
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Datalogi.
    An Extensible Meta-Model for Program Analysis (2007). In: IEEE Transactions on Software Engineering (TSE), Vol. 33, no 9, p. 592-607. Article in journal (Refereed)
    Abstract [en]

    Software maintenance tools for program analysis and refactoring rely on a meta-model capturing the relevant properties of programs. However, what is considered relevant may change when the tools are extended with new analyses and refactorings, and new programming languages. This paper proposes a language independent meta-model and an architecture to construct instances thereof, which is extensible for new analyses, refactorings, and new front-ends of programming languages. Due to the loose coupling between analysis-, refactoring-, and front-end components, new components can be added independently and reuse existing ones. Two maintenance tools implementing the meta-model and the architecture, VIZZANALYZER and XDEVELOP, serve as a proof of concept.

  • 36.
    Trapp, Martin
    et al.
    Senacor Technologies AG, Germany.
    Hedenborg, Mathias
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Lundberg, Jonas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Löwe, Welf
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Capturing and Manipulating Context-sensitive Program Information (2015). In: Software Engineering Workshops 2015: Gemeinsamer Tagungsband der Workshops der Tagung Software Engineering 2015, Dresden, 17.-18. März 2015 / [ed] Wolf Zimmermann, Martin-Luther-Universität Halle-Wittenberg, CEUR-WS.org, 2015, Vol. 1337, p. 154-163. Conference paper (Refereed)
    Abstract [en]

    Designers of context-sensitive program analyses need to take special care of the memory consumption of the analysis results. In general, they need to sacrifice accuracy to cope with restricted memory resources. We introduce χ-terms as a general data structure to capture and manipulate context-sensitive analysis results. A χ-term is a compact representation of an arbitrary forward program analysis result, distinguishing the effects of different control-flow paths. While χ-terms can be represented by trees, we propose a memory-efficient representation generalizing ordered binary decision diagrams (OBDDs).
