lnu.se Publications
Search result: 1 - 50 of 1408
  • 1.
    Abbas, Nadeem
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Designing Self-Adaptive Software Systems with Reuse (2018). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Modern software systems are increasingly more connected, pervasive, and dynamic; as such, they are subject to more runtime variations than legacy systems. Runtime variations affect system properties, such as performance and availability. The variations are difficult to anticipate and thus mitigate in the system design.

    Self-adaptive software systems were proposed as a solution to monitor and adapt systems in response to runtime variations. Research has established a vast body of knowledge on engineering self-adaptive systems. However, there is a lack of systematic process support that leverages such engineering knowledge and provides for systematic reuse for self-adaptive systems development. 

    This thesis proposes the Autonomic Software Product Lines (ASPL), which is a strategy for developing self-adaptive software systems with systematic reuse. The strategy exploits the separation of a managed and a managing subsystem and describes three steps that transform and integrate a domain-independent managing system platform into a domain-specific software product line for self-adaptive software systems.

    Applying the ASPL strategy is, however, not straightforward as it involves challenges related to variability and uncertainty. We analyzed variability and uncertainty to understand their causes and effects. Based on the results, we developed the Autonomic Software Product Lines engineering (ASPLe) methodology, which provides process support for the ASPL strategy. The ASPLe has three processes: 1) ASPL Domain Engineering, 2) Specialization, and 3) Integration. Each process maps to one of the steps in the ASPL strategy and defines roles, work-products, activities, and workflows for requirements, design, implementation, and testing. The focus of this thesis is on requirements and design.

    We validate the ASPLe through demonstration and evaluation. We developed three demonstrator product lines using the ASPLe. We also conducted an extensive case study to evaluate key design activities in the ASPLe with experiments, questionnaires, and interviews. The results show a statistically significant increase in quality and reuse levels for self-adaptive software systems designed using the ASPLe compared to current engineering practices.

    Download full text (pdf)
    Doctoral Thesis (Comprehensive Summary)
    Download (jpg)
    Front Page
  • 2.
    Abbas, Nadeem
    Umeå universitet.
    Properties of "Good" Java Examples (2010). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits, Student thesis
    Abstract [en]

    Example programs are well known as an important tool to learn computer programming. Realizing the significance of example programs, this study has been conducted with a goal to measure and evaluate the quality of examples used in academia. We make a distinction between good and bad examples, as badly designed examples may prove harmful for novice learners. In general, students differ from expert programmers in their approach to reading and comprehending a program. How students understand example programs is explored in the light of classical theories and models of program comprehension. Key factors that impact program quality and comprehension are identified. To evaluate as well as improve the quality of examples, a set of quality attributes is proposed. The relationship between program complexity and quality is examined. We rate readability as a prime quality attribute and hypothesize that example programs with low readability are difficult to understand. The Software Reading Ease Score (SRES), a program readability metric proposed by Börstler et al., is implemented to provide a readability measurement tool. SRES is based on lexical tokens and is easy to compute using static code analysis techniques. To validate the SRES metric, results are statistically analyzed in correlation with existing, well-acknowledged software metrics.
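
    As a rough illustration of what a lexical-token-based readability measure can look like, the Python sketch below scores a Java snippet from its average tokens per line and average token length. The token pattern and weights are invented for illustration; the actual SRES formula by Börstler et al. is not given in the abstract and is not reproduced here.

    # Hypothetical token-based readability sketch for Java source. The regex
    # and weights are illustrative only and do NOT reproduce the SRES metric.
    import re

    TOKEN_RE = re.compile(r"[A-Za-z_][A-Za-z0-9_]*|\d+|\S")

    def readability_score(java_source: str) -> float:
        lines = [ln for ln in java_source.splitlines() if ln.strip()]
        tokens = TOKEN_RE.findall(java_source)
        if not lines or not tokens:
            return 0.0
        avg_tokens_per_line = len(tokens) / len(lines)
        avg_token_length = sum(len(t) for t in tokens) / len(tokens)
        # Flesch-like shape: higher score = easier to read (weights are made up).
        return 100.0 - 2.0 * avg_tokens_per_line - 5.0 * avg_token_length

    example = """
    public int add(int a, int b) {
        return a + b;
    }
    """
    print(round(readability_score(example), 1))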

  • 3.
    Abbas, Nadeem
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Towards autonomic software product lines (2011). In: SPLC '11 Proceedings of the 15th International Software Product Line Conference, Volume 2, ACM Press, 2011, p. 44:1-44:8. Conference paper (Refereed)
    Abstract [en]

    We envision an Autonomic Software Product Line (ASPL). The ASPL is a dynamic software product line that supports self-adaptable products. We plan to use a reflective architecture to model and develop the ASPL. To evaluate the approach, we have implemented three autonomic product lines which show promising results. The ASPL approach is at an initial stage and requires additional work. We plan to exploit online learning to realize more dynamic software product lines to cope with the problem of product line evolution. We propose online knowledge sharing among products in a product line to achieve continuous improvement of quality in product line products.

  • 4.
    Abbas, Nadeem
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Andersson, Jesper
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    ASPLe: a methodology to develop self-adaptive software systems with reuse (2017). Report (Other academic)
    Abstract [en]

    Advances in computing technologies are pushing software systems and their operating environments to become more dynamic and complex. The growing complexity of software systems coupled with uncertainties induced by runtime variations leads to challenges in software analysis and design. Self-Adaptive Software Systems (SASS) have been proposed as a solution to address design time complexity and uncertainty by adapting software systems at runtime. A vast body of knowledge on engineering self-adaptive software systems has been established. However, to the best of our knowledge, little or no work has considered systematic reuse of this knowledge. To that end, this study contributes an Autonomic Software Product Lines engineering (ASPLe) methodology. The ASPLe is based on a multi-product lines strategy which leverages systematic reuse through separation of application and adaptation logic. It provides developers with repeatable process support to design and develop self-adaptive software systems with reuse across several application domains. The methodology is composed of three core processes, and each process is organized for requirements, design, implementation, and testing activities. To exemplify and demonstrate the use of the ASPLe methodology, three application domains are used as running examples throughout the report.

    Download full text (pdf)
    ASPLe2017
  • 5.
    Abbas, Nadeem
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Andersson, Jesper
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Harnessing Variability in Product-lines of Self-adaptive Software Systems (2015). In: Proceedings of the 19th International Conference on Software Product Line: SPLC '15, ACM Press, 2015, p. 191-200. Conference paper (Refereed)
    Abstract [en]

    This work studies systematic reuse in the context of self-adaptive software systems. In our work, we realized that managing variability for such platforms is different compared to traditional platforms, primarily due to run-time variability and system uncertainties. Motivated by the fact that recent trends show that self-adaptation will be used more often in future system generations, and that neither the state of practice nor research in software reuse provides sufficient support, we have investigated the problems and possible resolutions in this context. We have analyzed variability for these systems through a systematic reuse prism and identified a research gap in variability management. The analysis divides variability handling into four activities: (1) identify variability, (2) constrain variability, (3) implement variability, and (4) manage variability. Based on the findings we envision a reuse framework for the specific domain and present an example framework that addresses some of the identified challenges. We argue that it provides basic support for engineering self-adaptive software systems with systematic reuse. We discuss some important avenues of research for achieving the vision.

  • 6.
    Abbas, Nadeem
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Andersson, Jesper
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Autonomic Software Product Lines (ASPL) (2010). In: ECSA '10 Proceedings of the Fourth European Conference on Software Architecture: Companion Volume / [ed] Carlos E. Cuesta, ACM Press, 2010, p. 324-331. Conference paper (Refereed)
    Abstract [en]

    We describe ongoing work on a variability mechanism for Autonomic Software Product Lines (ASPL). Autonomic software product lines have self-management characteristics that make product line instances more resilient to context changes and to some aspects of product line evolution. Instances sense the context and select and bind the best component variants to variation points at run-time. The variability mechanism we describe is composed of profile-guided dispatch based on off-line and on-line training processes. Together they form a simple yet powerful variability mechanism that continuously learns which variants to bind given the current context and system goals.

  • 7.
    Abbas, Nadeem
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Andersson, Jesper
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Löwe, Welf
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Towards Autonomic Software Product Lines (ASPL) - A Technical Report (2011). Report (Other academic)
    Abstract [en]

    This report describes work in progress to develop Autonomic Software Product Lines (ASPL). The ASPL is a dynamic software product line approach with a novel variability handling mechanism that enables traditional software product lines to adapt themselves at runtime in response to changes in their context, requirements and business goals. The ASPL variability mechanism is composed of three key activities: 1) context profiling, 2) context-aware composition, and 3) online learning. Context profiling is an offline activity that prepares a knowledge base for context-aware composition. Context-aware composition uses the knowledge base to derive a new product or adapt an existing product based on a product line's context attributes and goals. Online learning optimizes the knowledge base to remove errors and suboptimal information and to incorporate new knowledge. The three activities together form a simple yet powerful variability handling mechanism that learns and adapts a system at runtime in response to changes in system context and goals. We evaluated the ASPL variability mechanism on three small-scale software product lines and got promising results. The ASPL approach is, however, still at an initial stage and requires improved development support and more rigorous evaluation.

    Download full text (pdf)
    fulltext
  • 8.
    Abbas, Nadeem
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Andersson, Jesper
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Weyns, Danny
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Knowledge evolution in autonomic software product lines (2011). In: SPLC '11 Proceedings of the 15th International Software Product Line Conference, Volume 2, New York, NY, USA: ACM Press, 2011, p. 36:1-36:8. Conference paper (Refereed)
    Abstract [en]

    We describe ongoing work in knowledge evolution management for autonomic software product lines. We explore how an autonomic product line may benefit from new knowledge originating from different source activities and artifacts at run time. The motivation for sharing run-time knowledge is that products may self-optimize at run time and thus improve quality faster compared to traditional software product line evolution. We propose two mechanisms that support knowledge evolution in product lines: online learning and knowledge sharing. We describe two basic scenarios for runtime knowledge evolution that involve these mechanisms. We evaluate online learning and knowledge sharing in a small product line setting that shows promising results.

  • 9.
    Abbas, Nadeem
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Andersson, Jesper
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Weyns, Danny
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Modeling Variability in Product Lines Using Domain Quality Attribute Scenarios (2012). In: Proceedings of the WICSA/ECSA 2012 Companion Volume, ACM Press, 2012, p. 135-142. Conference paper (Refereed)
    Abstract [en]

    The concept of variability is fundamental in software product lines and a successful implementation of a product line largely depends on how well domain requirements and their variability are specified, managed, and realized. While developing an educational software product line, we identified a lack of support to specify variability in quality concerns. To address this problem we propose an approach to model variability in quality concerns, which is an extension of quality attribute scenarios. In particular, we propose domain quality attribute scenarios, which extend standard quality attribute scenarios with additional information to support specification of variability and deriving product specific scenarios. We demonstrate the approach with scenarios for robustness and upgradability requirements in the educational software product line.
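
    A domain quality attribute scenario can be pictured as a standard six-part quality attribute scenario plus explicit variation points from which product-specific scenarios are derived. The Python sketch below is a minimal illustration of that idea only; the field names, the variation-point encoding, and the derive step are assumptions, not the paper's notation.

    # Minimal sketch (not the paper's notation): a quality attribute scenario
    # extended with variation points, from which product scenarios are derived.
    from dataclasses import dataclass, field

    @dataclass
    class QualityAttributeScenario:
        source: str
        stimulus: str
        artifact: str
        environment: str
        response: str
        response_measure: str

    @dataclass
    class DomainQualityAttributeScenario(QualityAttributeScenario):
        # Variation points: scenario element -> allowed alternatives (illustrative).
        variation_points: dict = field(default_factory=dict)

        def derive(self, choices: dict) -> QualityAttributeScenario:
            """Bind variation points to produce a product-specific scenario."""
            for name, alternatives in self.variation_points.items():
                if name in choices and choices[name] not in alternatives:
                    raise ValueError(f"{choices[name]!r} not allowed for {name}")
            values = {
                name: choices.get(name, getattr(self, name))
                for name in ("source", "stimulus", "artifact", "environment",
                             "response", "response_measure")
            }
            return QualityAttributeScenario(**values)

    domain_scenario = DomainQualityAttributeScenario(
        source="end user", stimulus="invalid input", artifact="grading service",
        environment="normal operation", response="reject and log",
        response_measure="<VARIATION POINT>",
        variation_points={"response_measure": ["99% handled", "99.9% handled"]},
    )
    product_scenario = domain_scenario.derive({"response_measure": "99.9% handled"})
    print(product_scenario.response_measure)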

  • 10.
    Abbas, Nadeem
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Awais, Mian Muhammad
    Lahore University of Management Sciences, Pakistan.
    Kurti, Arianit
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Smart Forest Observatories Network: A MAPE-K Architecture Based Approach for Detecting and Monitoring Forest Damage (2023). In: Proceedings of the Conference Digital solutions for detecting and monitoring forest damage: Växjö, Sweden, March 28-29, 2023, 2023. Conference paper (Other academic)
    Abstract [en]

    Forests are essential for life, providing various ecological, social, and economic benefits worldwide. However, one of the main challenges faced by the world is forest damage caused by biotic and abiotic factors. In any case, forest damage threatens the environment, biodiversity, and ecosystems. Climate change and anthropogenic activities, such as illegal logging and industrial waste, are among the principal factors contributing to forest damage. To achieve the United Nations' Sustainable Development Goals (SDGs) related to forests and climate change, it is essential to detect and analyze forest damage and to take appropriate measures to prevent or reduce it. To that end, we envision establishing a Smart Forest Observatories (SFOs) network, which can be either a local area network or a wide area network involving remote forests. The basic idea is to use the Monitor, Analyze, Plan, Execute, and Knowledge (MAPE-K) architecture from the autonomic computing and self-adaptive software systems domain to design and develop the SFOs network. The SFOs are planned to collect, analyze, and share the collected data and analysis results using state-of-the-art methods. The principal objective of the SFOs network is to provide accurate and real-time data to policymakers and forest managers, enabling them to develop effective policies and management strategies for global forest conservation that help achieve the SDGs related to forests and climate change.
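
    For readers unfamiliar with MAPE-K, the sketch below shows a generic monitor-analyze-plan-execute loop over a shared knowledge base, using made-up damage-index readings. It illustrates the architectural pattern only; the SFO network is only envisioned in the abstract, so nothing here is its actual implementation.

    # Generic MAPE-K loop sketch with hypothetical sensor readings.
    knowledge = {"damage_threshold": 0.7, "alerts": []}

    def monitor():
        # Placeholder: a real observatory would read sensor/remote-sensing data.
        return [{"plot": "A1", "damage_index": 0.82},
                {"plot": "B4", "damage_index": 0.35}]

    def analyze(readings):
        return [r for r in readings if r["damage_index"] > knowledge["damage_threshold"]]

    def plan(anomalies):
        return [{"action": "notify_forest_manager", "plot": a["plot"]} for a in anomalies]

    def execute(actions):
        for action in actions:
            knowledge["alerts"].append(action)   # update the shared knowledge base
            print("executing", action)

    def mape_k_iteration():
        execute(plan(analyze(monitor())))

    mape_k_iteration()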

  • 11.
    Abdeljaber, Osama
    et al.
    Qatar University, Qatar.
    Avci, Onur
    Qatar University, Qatar.
    Kiranyaz, Serkan
    Qatar University, Qatar.
    Boashash, Boualem
    Qatar University, Qatar; The University of Queensland, Herston, Australia.
    Sodano, Henry
    University of Michigan, USA.
    Inman, Daniel
    University of Michigan, USA.
    1-D CNNs for structural damage detection: verification on a structural health monitoring benchmark data (2018). In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 275, p. 1308-1317. Article in journal (Refereed)
    Abstract [en]

    Structural damage detection has been an interdisciplinary area of interest for various engineering fields. While the available damage detection methods have been in the process of adapting machine learning concepts, most machine learning based methods extract “hand-crafted” features which are fixed and manually selected in advance. Their performance varies significantly among various patterns of data depending on the particular structure under analysis. Convolutional neural networks (CNNs), on the other hand, can fuse and simultaneously optimize the two major components of an assessment task (feature extraction and classification) into a single learning block during the training phase. This ability not only provides an improved classification performance but also yields a superior computational efficiency. 1D CNNs have recently achieved state-of-the-art performance in vibration-based structural damage detection; however, it has been reported that training the CNNs requires a significant amount of measurements, especially in large structures. In order to overcome this limitation, this paper presents an enhanced CNN-based approach that requires only two measurement sets regardless of the size of the structure. This approach is verified using the experimental data of the Phase II benchmark problem of structural health monitoring introduced by the IASC-ASCE Structural Health Monitoring Task Group. As a result, it is shown that the enhanced CNN-based approach successfully estimated the actual amount of damage for the nine damage scenarios of the benchmark study.
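
    The sketch below shows a generic 1-D CNN that maps fixed-length vibration windows to a damaged/undamaged decision, written in PyTorch. The layer sizes and window length are illustrative assumptions; the paper's actual architecture and training procedure are not reproduced.

    # Generic 1-D CNN for classifying vibration windows (illustrative sizes only).
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv1d(in_channels=1, out_channels=16, kernel_size=9),  # raw acceleration signal
        nn.ReLU(),
        nn.MaxPool1d(4),
        nn.Conv1d(16, 32, kernel_size=9),
        nn.ReLU(),
        nn.AdaptiveAvgPool1d(1),        # collapse the time axis
        nn.Flatten(),
        nn.Linear(32, 2),               # two classes: undamaged / damaged
    )

    window = torch.randn(8, 1, 1024)    # batch of 8 one-channel signal windows
    logits = model(window)
    print(logits.shape)                 # torch.Size([8, 2])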

  • 12.
    Abdilrahim, Ahmad
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Alhawi, Caesar
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Studying the Relation Between Change- and Fault-proneness: Are Change-prone Classes More Fault-prone, and Vice-versa? (2020). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits, Student thesis
    Abstract [en]

    Software is the heartbeat of modern technology. To keep up with new demands and expanding requirements, changes are constantly introduced to the software; changes can also be made to fix an existing fault/defect. However, these changes might also cause further faults/defects in the software. This study aims to investigate the possible correlation between change-proneness and fault-proneness in object-oriented systems. Forty releases of five different open-source systems (Beam, Camel, Ignite, Jenkins, and JMeter) are analysed to quantify change- and fault-proneness, and statistical evidence is presented to answer the following: (1) Is there a relationship between change-proneness and fault-proneness for classes in object-oriented systems? (2) Is there a relationship between size and fault-proneness for classes in object-oriented systems? and (3) Is there a relationship between size and change-proneness for classes in object-oriented systems? Using the Wilcoxon rank-sum test, the results show that: (1) there is a correlation between change- and fault-proneness at a statistically significant level and (2) a correlation also exists between class size and its change- and fault-proneness at a statistically significant level.

    Download full text (pdf)
    fulltext
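
    The kind of test named in the abstract can be illustrated with SciPy's Wilcoxon rank-sum implementation; the fault counts below are made up, and the grouping into change-prone and other classes is an assumption for the example.

    # Comparing fault counts of change-prone vs. other classes (synthetic data).
    from scipy.stats import ranksums

    faults_in_change_prone_classes = [5, 7, 3, 9, 6, 8, 4]
    faults_in_other_classes = [1, 0, 2, 1, 3, 0, 2]

    statistic, p_value = ranksums(faults_in_change_prone_classes,
                                  faults_in_other_classes)
    print(f"W = {statistic:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Fault-proneness differs significantly between the two groups.")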
  • 13.
    Abdulin, Ruslan
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Applying Machine Learning to Detect Historical Remains in Swedish Forestry Using LIDAR Data (2021). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits, Student thesis
    Abstract [en]

    Historical remains in Swedish forests are at risk of being damaged by heavy machinery during regular soil preparation, scarification, and regeneration activities. The reason for this is that the exact locations of these remains are often unknown or their records are inaccurate. Some of the most vulnerable historical remains are the traces left after years of charcoal production. In this thesis, we design and implement a computer vision artificial intelligence model capable of identifying these traces using two accessible visualizations of Light Detection and Ranging (LIDAR) data. The model we used was the ResNet34 Convolutional Neural Network pre-trained on the ImageNet dataset. The model took advantage of the image segmentation approach and required only a small number of annotations distributed on original images for training. During data preparation, the original images were heavily augmented, which bolstered the training dataset. Results showed that the model can detect charcoal burner sites and mark them on both types of LIDAR visualizations. Being implemented on modern frameworks and featuring state-of-the-art machine learning techniques, the model may reduce the costs of surveys of this type of historical remains and thereby help save cultural heritage.

    Download full text (pdf)
    fulltext
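
    A ResNet34 encoder with a small segmentation head, sketched below with PyTorch/torchvision, shows the general shape of such a model: an ImageNet-pretrained backbone producing a per-pixel mask for charcoal burner sites. The decoder head, input size, and single-channel output are assumptions, not the thesis's actual architecture.

    # ResNet34-based segmentation sketch (illustrative decoder, not the thesis's).
    import torch
    import torch.nn as nn
    from torchvision import models

    # ImageNet pre-training; requires torchvision >= 0.13 and a weight download
    # (use weights=None to build the same architecture without downloading).
    encoder = models.resnet34(weights="IMAGENET1K_V1")
    backbone = nn.Sequential(*list(encoder.children())[:-2])   # drop avgpool + fc

    head = nn.Sequential(
        nn.Conv2d(512, 64, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Conv2d(64, 1, kernel_size=1),          # 1 channel: "charcoal site" mask
        nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
    )

    model = nn.Sequential(backbone, head)

    lidar_visualization = torch.randn(1, 3, 256, 256)   # e.g. a hillshade rendered as RGB
    mask_logits = model(lidar_visualization)
    print(mask_logits.shape)                             # torch.Size([1, 1, 256, 256])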
  • 14.
    Abdullahi, Abdille
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    Component-based Software development (2008). Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits, Student thesis
    Abstract [en]

    Component-based software development is a promising way to improve quality and time to market and to handle the increasing complexity of software management. However, component-based development is still a process with many problems; it is not well defined from either a theoretical or a practical point of view. This thesis gives a brief overview of component-based software development, starting with a brief historical evolution followed by a general explanation of the method. A detailed discussion of the underlying principles, such as components, component frameworks and component system architecture, is then presented. Some real-world component standards such as the .NET framework, CORBA CCM and EJB are presented in detail. Finally, a simple file-sharing program based on Apache's Avalon framework and another one based on the .NET framework are developed as a case study.

    Download full text (pdf)
    FULLTEXT01
  • 15.
    Abrahamsson, Claes
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    Videokompression och användarnytta (2007). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits, Student thesis
    Abstract [sv]

    Digital video is an area that has expanded strongly in recent years. Thanks to advanced video compression methods, an entire feature film can now fit on a mobile phone or a handheld video player. But how does video compression work, and how hard can a given film sequence be compressed while maintaining user utility? This thesis reviews the fundamentals of video compression, and its relation to user utility is examined through an experiment in which three video sequences, compressed with three different codecs at three different bit rates, are tested on three different experiment groups. The conclusion of the study is that it is difficult to find an exact breaking point at which users find the video quality unacceptable. However, the highest bit rate (256 kbit/s) generates high user utility, while the lowest bit rate generates low user utility. The intermediate rate of 128 kbit/s barely passes according to the test subjects.

    Download full text (pdf)
    FULLTEXT01
  • 16.
    Abrahamsson, Jimmy
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Nätverksoptimering: Bästa möjliga tillgänglighet till lägsta möjliga länkkostnad (2012). Independent thesis Basic level (university diploma), 5 credits / 7,5 HE credits, Student thesis
    Abstract [sv]

    This thesis deals with network optimization, focusing on achieving the best possible availability at the lowest possible cost in terms of links. The study develops a theoretically grounded hypothesis about a method for solving this problem, and a test is carried out in which the method is applied to a case object to confirm whether the method appears to work or not.

    The result is a method that analyzes availability and link redundancy and computes the number of independent paths between nodes in a network. By comparing results from before a change with results from after the change, the method highlights the changes that bring improvements, whereby a network can be optimized towards the best possible availability at the lowest possible cost in terms of links.

    Download full text (pdf)
    Nätverksoptimering
  • 17.
    AbuHemeida, Dalya
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Alsaid, Mustafa
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Estimating the energy consumption of Java Programs: Collections & Sorting algorithms (2023). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits, Student thesis
    Abstract [en]

    Java applications consume energy, which has become a controversial topic since it limits the number of machines and increases the cost of data centers. This paper investigates the potential relationship between energy consumption and some quality attributes of Java Collections and sorting algorithms, in order to raise awareness about using energy-efficient programs. In addition, it points developers to the most and least efficient Java Collections and sorting algorithms in terms of energy consumption, memory, and CPU usage. This was achieved by conducting a controlled experiment measuring these quantities. The collected data was then subjected to statistical and efficiency analysis to answer the research questions.

    Download full text (pdf)
    Degree project
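
    Purely to illustrate the kind of measurement harness such an experiment needs (the thesis itself targets Java, and energy readings require platform counters such as RAPL that are not shown here), the Python sketch below compares the runtime and peak allocation of two sorting approaches on the same input.

    # Illustrative measurement harness (Python, not the thesis's Java setup).
    import random
    import time
    import tracemalloc

    def measure(label, func, data):
        tracemalloc.start()
        start = time.perf_counter()
        func(list(data))                       # copy so each run sees the same input
        elapsed = time.perf_counter() - start
        _, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        print(f"{label:12s} {elapsed*1000:8.1f} ms   peak {peak/1024:8.1f} KiB")

    def builtin_sort(xs):
        xs.sort()                              # Timsort, in place

    def selection_sort(xs):                    # deliberately naive baseline
        for i in range(len(xs)):
            j = min(range(i, len(xs)), key=xs.__getitem__)
            xs[i], xs[j] = xs[j], xs[i]

    data = [random.random() for _ in range(5000)]
    measure("built-in", builtin_sort, data)
    measure("selection", selection_sort, data)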
  • 18.
    Adegoke, Adekunle
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Osimosu, Emmanuel
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Service Availability in Cloud Computing: Threats and Best Practices (2013). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits, Student thesis
    Abstract [en]

    Cloud computing provides access to on-demand computing resources and storage space, whereby applications and data are hosted with data centers managed by third parties, on a pay-per-use price model. This allows organizations to focus on core business goals instead of managing in-house IT infrastructure.                    

    However, as more business critical applications and data are moved to the cloud, service availability is becoming a growing concern. A number of recent cloud service disruptions have questioned the reliability of cloud environments to host business critical applications and data. The impact of these disruptions varies, but, in most cases, there are financial losses and damaged reputation among consumers.        

    This thesis aims to investigate the threats to service availability in cloud computing and to provide some best practices to mitigate some of these threats. As a result, we identified eight categories of threats. They include, in no particular order: power outage, hardware failure, cyber-attack, configuration error, software bug, human error, administrative or legal dispute and network dependency. A number of systematic mitigation techniques to ensure constant availability of service by cloud providers were identified. In addition, practices that can be applied by cloud customers and users of cloud services, to improve service availability were presented.

    Download full text (pdf)
    fulltext
  • 19.
    Aghaee, Saeed
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    Random Stream Cipher (2007). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits, Student thesis
    Abstract [en]

    Stream ciphers are an important class of symmetric encryption methods. Their basic idea comes from the One-Time-Pad cipher, which applies the XOR operator to the plain text and the key to generate the cipher text. The present work brings a new idea to symmetric encryption, which inherits the stream key generation idea from synchronous stream ciphers but uses division instead of XORing. Using division to combine the plain text with the stream key gives this method numerous abilities, the most important one being the use of random factors to produce the ciphers.

    Download full text (pdf)
    FULLTEXT01
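
    The abstract contrasts the proposed division-based combiner with the classic XOR construction it builds on. The division-based scheme itself is not specified in the abstract, so the toy sketch below only shows the XOR baseline with a hash-derived keystream; it is for illustration only and not meant for real use.

    # Toy XOR-based synchronous stream cipher (the classic baseline only).
    import hashlib

    def keystream(key: bytes, length: int) -> bytes:
        """Derive a pseudo-random keystream by hashing key || counter."""
        out = b""
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:length]

    def xor_cipher(data: bytes, key: bytes) -> bytes:
        ks = keystream(key, len(data))
        return bytes(d ^ k for d, k in zip(data, ks))

    msg = b"stream ciphers combine plaintext with a keystream"
    key = b"shared secret"
    ct = xor_cipher(msg, key)
    assert xor_cipher(ct, key) == msg     # XOR is its own inverse
    print(ct.hex())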
  • 20.
    Agne, Arvid
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Provisioning, Configuration and Monitoring of Single-board Computer Clusters (2020). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits, Student thesis
    Abstract [en]

    Single-board computers as hardware for container orchestration have been a growing subject. Previous studies have investigated their potential for running production-grade technologies in various environments where low-resource, cheap, and flexible clusters may be of use. This report investigates the application of methods and processes prevalent in cluster, container orchestration, and cloud-native environments. The motivation is that if single-board computers are able to run clusters to a satisfactory degree, they should also be able to fulfill the methods and processes which permeate the same cloud-native technologies. The subject is investigated by creating different criteria for each method and process, which then act as an evaluation basis for an experiment in which a single-board computer cluster is built, provisioned, configured, and monitored. In summary, the investigation has been successful, instilling more confidence in single-board computer clusters and their ability to implement cluster-related methodologies and processes.

    Download full text (pdf)
    fulltext
  • 21.
    Ahlgren, Hannes
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    Graph visualization with OpenGL (2005). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits, Student thesis
    Abstract [en]

    Vizz3D is a 3D graphics code analysis tool, developed at Växjö University, that can optionally use Java3D or OpenGL. Initially, however, Java3D was the only programming interface used and no other version was considered, so the application's structure was built with the Java3D way of thought in mind. Code visualization with 3D graphics can be a demanding task for the computer's processor and its graphics hardware, and Java3D is known to be somewhat inefficient, so an OpenGL version was introduced.

    This thesis reflects on the work of restructuring the application's code to fit both versions within Vizz3D in a structured and object-oriented way. The thesis shows the efforts needed to make an existing, ever-evolving tool easily extendible to other APIs. Additional aspects of OpenGL-specific implementations are discussed throughout the thesis.

    Download full text (pdf)
    FULLTEXT01
  • 22. Ahlgren, Per
    et al.
    Grönqvist, Leif
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    Evaluation of retrieval effectiveness with incomplete relevance data: theoretical and experimental comparison of three measures (2007). In: International Journal of Information Processing & Management, Vol. In press, p. 14-. Article in journal (Refereed)
    Abstract [en]

    The main reason for this work is to find an appropriate way to include multi-word units in a latent semantic vector model. This would be of great use, since these models are normally defined in terms of words, which makes it impossible to search for many types of multi-word units when the model is used in information retrieval tasks. The paper presents a Swedish evaluation set based on synonym tests and an evaluation of vector models trained with different corpora and parameter settings, including a rather naive way to add bi- and trigrams to the models. The best results in the evaluation are actually obtained with both bi- and trigrams added. Our hope is that the results in a forthcoming evaluation in the document retrieval context, an important application for these models, will be at least as good with the bi- and trigrams added as without.
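
    The general idea of adding bi- and trigrams to a latent semantic vector model can be sketched with scikit-learn as below; the toy English corpus and the two-component SVD are placeholders, not the paper's Swedish evaluation set or parameter settings.

    # Latent semantic vectors over a term-document matrix that includes n-grams.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    corpus = [
        "information retrieval with vector space models",
        "latent semantic analysis of document collections",
        "multi word units improve information retrieval",
        "synonym tests evaluate semantic vector models",
    ]

    # ngram_range=(1, 3) adds bigrams and trigrams alongside single words.
    vectorizer = CountVectorizer(ngram_range=(1, 3))
    term_doc = vectorizer.fit_transform(corpus)

    lsa = TruncatedSVD(n_components=2, random_state=0)
    doc_vectors = lsa.fit_transform(term_doc)

    # Similarity of the first document to the others in the latent space.
    print(cosine_similarity(doc_vectors[:1], doc_vectors[1:]))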

  • 23. Ahlgren, Per
    et al.
    Grönqvist, Leif
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    Measuring retrieval effectiveness with incomplete relevance data (2006). In: Current Research in Information Sciences and Technologies: Multidisciplinary approaches to global information systems (InSciT2006 proceedings), Open Institute of Knowledge, 2006, p. 74-78. Conference paper (Refereed)
  • 24. Ahlgren, Per
    et al.
    Grönqvist, Leif
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    Retrieval evaluation with incomplete relevance data: A comparative study of three measures (poster abstract) (2006). In: Proceedings of the 15th ACM International Conference on Information and Knowledge Management (CIKM), ACM Press, New York, NY, USA, 2006, p. 872-873. Conference paper (Refereed)
  • 25.
    Ahlin, Daniel
    et al.
    Växjö University.
    Jartelius, Martin
    Växjö University.
    Tingdahl, Johanna
    Växjö University.
    Proxyserver för passiv informationssökning (2005). Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE credits, Student thesis
    Abstract [en]

    In today's society the average person is flooded with information from everywhere. This is particularly the case when using the Internet; consider for a moment the fact that a decent search engine at this moment scans 8 058 million homepages. For a user who repeatedly comes back to the same site, the case is often that they know what they are looking for. The problem is to isolate the important information from all the other information embedding it.

    We would like to state that we have found one possible solution to this problem, where the user himself can define what information he is looking for at a specific server and then scan the server when visiting it with his browser. The information is then saved and made easily accessible to the user, independent of what system he is using.

    Our solution is based on a proxy server, through which the user makes his connections. The server is configurable as to what information to scan for and where, as well as in what format the data should be saved. Our method with an independent proxy server is not as efficient as including this support in a browser, but it is enough to give proof of the concept. For high-speed connections to a server on the same network as the user, it might be possible to notice that the proxy slows down the connection, but it is a matter of fractions of a second, and surfing under normal conditions the user is very unlikely to be bothered by the proxy. The actual loss in performance is the time required to make a second TCP connection for each call, as well as a slight loss of efficiency due to Java's thread synchronization.

    Download full text (pdf)
    FULLTEXT01
  • 26.
    Ahmad, Khurram
    et al.
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    Azeem, Muhammad
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    Gold Standard Website (2009). Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits, Student thesis
    Abstract [en]

     

    The aim of this thesis is to design a web-based system which provides a comparison between two Java files on the basis of points-to information (P2I). The user uploads Java files and an analysis of the Java files called points-to analysis (P2A). The system stores the files in the file system for later reference and download. The system extracts the information, called P2I, from the P2A and stores it in the database.

    The database should be flexible enough to accommodate changes in the P2A file, and the system should be able to extract the P2I and store it in the database with minimal support from a system administrator.

     

    Download full text (pdf)
    FULLTEXT01
  • 27.
    Ahmed, Tauheed
    et al.
    Mahindra Univ, India.
    Samima, Shabnam
    Mahindra Univ, India.
    Zuhair, Mohd
    Nirma Univ, India.
    Ghayvat, Hemant
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). Technol Univ Denmark, Denmark.
    Khan, Muhammad Ahmed
    Stanford Univ, USA.
    Kumar, Neeraj
    Thapar Univ, India;Univ Petr & Energy Studies, India;Lebanese Amer Univ, Lebanon;King Abdulaziz Univ, Saudi Arabia.
    FIMBISAE: A Multimodal Biometric Secured Data Access Framework for Internet of Medical Things Ecosystem (2023). In: IEEE Internet of Things Journal, ISSN 2327-4662, Vol. 10, no 7, p. 6259-6270. Article in journal (Refereed)
    Abstract [en]

    Information from the Internet of Medical Things (IoMT) domain demands building safeguards against illegitimate access and identification. Existing user identification schemes suffer from challenges in detecting impersonation attacks which leave systems vulnerable and susceptible to misuse. Significant advancement has been achieved in the domain of biometrics and health informatics. This can take a step ahead with the usage of multimodal biometrics for the identification of healthcare system users. With this aim, the proposed work explores the fingerprint and iris modality to develop a multimodal biometric data identification and access control system for the healthcare ecosystem. In the proposed approach, minutiae-based fingerprint features and a combination of local and global iris features are considered for identification. Further, an index space based on the dimension of the feature vector is created, which gives a 1-D embedding of the high-dimensional feature set. Next, to minimize the impact of false rejection, the approach considers the possible deviation in each element of the feature vector and then stores the data in possible locations using the predefined threshold. Besides, to reduce the false acceptance rate, linking of the modalities has been done for every individual data. The modality linking thus helps in carrying out an efficient search of the queried data, thereby minimizing the false acceptance and rejection rate. Experiments on a chimeric iris and fingerprint bimodal database resulted in an average of 95% reduction in the search space at a hit rate of 98%. The results suggest that the proposed indexing scheme has the potential to substantially reduce the response time without compromising the accuracy of identification.

  • 28.
    Ahmic, Enida
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Beganovic, Alen
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Slip Detection For Robotic Lawn Mowers Using Loop Signals (2022). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits, Student thesis
    Abstract [en]

    Husqvarna AB is one of the leading producers of outdoor products such as autonomous lawn mowers. One important feature of these products is the ability to quickly respond to environmental factors such as slippery areas. A reliable slip detector is needed for this, and many different technologies exist for detecting slip events. A common technique is to check the wheel motor current, which clearly deviates when the lawn mower is subjected to slipping. The on-board sensors open up an alternative solution which utilizes the loop sensors as the main slip detector. This thesis covers the construction of a slip detection prototype which is based on the loop sensors. In the end, Husqvarna AB was provided with a new alternative solution, which was compared to the existing solution. It proved to be a reliable slip detector for manually induced slipping indoors; outdoor performance was not investigated. Ultimately, the implemented prototype outperformed the existing solution in the intended environment of indoor testing.

    Download full text (pdf)
    fulltext
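
    A common baseline for this kind of signal-based detection is to flag samples that deviate from a rolling mean by several standard deviations. The sketch below shows that baseline on a synthetic signal; it is not the loop-signal prototype developed in the thesis, and the window and threshold values are arbitrary.

    # Baseline deviation detector on a 1-D sensor signal (synthetic data).
    from collections import deque
    from statistics import mean, stdev

    def detect_slips(signal, window=20, k=3.0):
        history = deque(maxlen=window)
        events = []
        for i, sample in enumerate(signal):
            if len(history) == window:
                mu, sigma = mean(history), stdev(history)
                if sigma > 0 and abs(sample - mu) > k * sigma:
                    events.append(i)
            history.append(sample)
        return events

    # Steady signal around 1.0 with a slip-like spike near index 60.
    signal = [1.0 + 0.02 * ((i * 7) % 5 - 2) for i in range(100)]
    signal[60:63] = [1.6, 1.7, 1.5]
    print(detect_slips(signal))   # indices around 60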
  • 29.
    Aidemark, Jan
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    Investigating Concepts for an Interconnected Socio/Technical KM Planning Approach (2009). In: Proceedings of the 10th European conference on Knowledge Management: ECKM 2009 - Università Degli Studi Di Padova, Vicenza, Italy, 3-4 September 2009 / [ed] Bolisani, E. and Scarso, E., Reading, England: Academic Publishing Ltd, 2009, p. 1-9. Conference paper (Refereed)
  • 30.
    Akhtar, Naeem
    et al.
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    Alzghoul, Ahmad
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    Time performance comparison in determining the weak parts in wooden logs (2009). Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits, Student thesis
    Abstract [en]

    The steadily increasing demand for wood forces sawmills to increase the efficiency and effectiveness of their equipment. Weak parts and twist in wooden logs have been documented as the most common and crucial defects in sawn lumber.

    In this thesis we implement a program which is able to determine the weak parts in wooden logs. The implementation is in two languages, C++ and Matlab. Parts of the program are implemented sometimes in C++ and sometimes in Matlab, and therefore different designs are tested. The aim of this thesis is to check whether these designs meet the real-time bound of 10 m/s.

    The results show that there is a huge difference in time performance between the different designs. Therefore, different discretization levels were used in order to meet the deadline of 10 m/s. We found that in order to get better speed one should compute the matrix for the function F and the Jacobian function J using C++, not Matlab. We also found that calling functions from one language to the other adds extra time.

    Download full text (pdf)
    FULLTEXT01
  • 31.
    Akkaya, Deniz
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Thalgott, Fabien
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Honeypots in network security (2010). Independent thesis Basic level (degree of Bachelor), 15 credits / 22,5 HE credits, Student thesis
    Abstract [en]

    Day by day, more and more people are using the internet all over the world. It is becoming a part of everyone's life. People check their e-mails, surf the internet, purchase goods, play online games, pay bills on the internet, etc. However, while performing all these things, how many people know about security? Do they know the risk of being attacked or infected by malicious software? Some malicious software even spreads over the network to create more threats through users. How many users are aware that their computers may be used as zombie computers to target other victim systems? As technology grows rapidly, newer attacks are appearing. Security is a key point to overcome all these problems. In this thesis, we build a real-life scenario using honeypots. A honeypot is a well-designed system that attracts hackers into it. By luring the hacker into the system, it is possible to monitor the processes that are started and running on the system by the hacker. In other words, a honeypot is a trap machine which looks like a real system in order to attract the attacker. The aim of the honeypot is analyzing, understanding, watching and tracking the hacker's behaviour in order to create more secure systems. Honeypots are a great way to improve network security administrators' knowledge and learn how to get information from a victim system using forensic tools. Honeypots are also very useful for keeping track of new technology attacks and future threats.

    Download full text (pdf)
    FULLTEXT01
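
    A minimal low-interaction honeypot can be as simple as a listener that logs every connection attempt, as sketched below; real honeypots such as those studied in the thesis emulate full services, so this only conveys the basic idea (the port and banner are arbitrary).

    # Minimal low-interaction honeypot sketch: log connection attempts and close.
    import datetime
    import socket

    HOST, PORT = "0.0.0.0", 2222   # arbitrary fake SSH-like port

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen()
        print(f"honeypot listening on {HOST}:{PORT}")
        while True:
            conn, addr = srv.accept()
            with conn:
                stamp = datetime.datetime.now().isoformat(timespec="seconds")
                print(f"{stamp} connection attempt from {addr[0]}:{addr[1]}")
                conn.sendall(b"SSH-2.0-OpenSSH_7.4\r\n")   # fake banner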
  • 32.
    AL Jorani, Salam
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Performance assessment of Apache Spark applications (2019). Independent thesis Basic level (degree of Bachelor), 180 HE credits, Student thesis
    Abstract [en]

    This thesis addresses the challenges of large software and data-intensive systems. We discuss a Big Data software system that consists of quite a bit of Linux configuration, some Scala coding, and a set of frameworks that work together to achieve smooth performance of the system. Moreover, the thesis focuses on the Apache Spark framework and the challenge of measuring the lazy evaluation of Spark's transformation operations. Investigating these challenges is essential for performance engineers to increase their ability to study how the system behaves and to take decisions in early design iterations. Thus, we made experiments and measurements to achieve this goal. In addition, after analyzing the results we created a formula that will be useful for engineers to predict the performance of the system in production.

    Download full text (pdf)
    fulltext
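
    The measurement difficulty mentioned in the abstract comes from Spark's lazy evaluation: timing a transformation by itself measures almost nothing, because work only happens when an action forces it. The PySpark sketch below illustrates this; it is a generic demonstration (assuming a local PySpark installation), not the thesis's experimental setup.

    # Lazy evaluation demo: the map() call is cheap, the action pays the cost.
    import time
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.master("local[*]").appName("lazy-demo").getOrCreate()
    sc = spark.sparkContext

    rdd = sc.parallelize(range(5_000_000))

    t0 = time.perf_counter()
    mapped = rdd.map(lambda x: x * x)          # transformation: only builds a plan
    t1 = time.perf_counter()
    total = mapped.sum()                        # action: triggers the computation
    t2 = time.perf_counter()

    print(f"transformation declared in {t1 - t0:.4f} s")
    print(f"action executed in        {t2 - t1:.4f} s (result {total})")
    spark.stop()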
  • 33.
    Alam Khan, Fakhri
    et al.
    University of Vienna.
    Han, Yuzhang
    University of Vienna.
    Pllana, Sabri
    University of Vienna.
    Brezany, Peter
    University of Vienna.
    An Ant-Colony-Optimization Based Approach for Determination of Parameter Significance of Scientific Workflows (2010). In: 24th IEEE International Conference on Advanced Information Networking and Applications, IEEE, 2010, p. 1241-1248. Conference paper (Refereed)
    Abstract [en]

    In the process of a scientific experiment a workflow is executed multiple times using various values of the parameters of activities. For real-world workflows that may contain hundreds of activities, each having several parameters, it is practically not feasible to conduct a parameter sensitivity study by simply following a "brute-force approach" (that is, experimental evaluation of all possible cases). We believe that a heuristic-guided approach makes it possible to find a near-optimal solution using a reasonable amount of resources, without the need to evaluate all possibilities. In this paper we present a novel methodology for the determination of parameter significance of scientific workflows that is based on Ant Colony Optimization (ACO). We refer to our methodology, which is a customization of ACO for Parameter Significance determination, as ACO4PS. We use ACO4PS to identify (1) which parameter strongly affects the overall result of the workflow and (2) for which combination of parameter values we obtain the expected result. ACO4PS generates a list of all workflow parameters sorted by significance and is also capable of generating a subset of significant parameters. We empirically evaluate our methodology using a real-world scientific workflow that deals with Non-Invasive Glucose Measurement.
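
    ACO4PS itself is not specified in the abstract; the sketch below only conveys the general ant-colony flavour of sampling discrete parameter values with pheromone reinforcement and then reading a crude significance score off the pheromone trails. The toy workflow, evaporation rate, and deposit rule are all invented for illustration.

    # Generic ant-colony-style search over discrete parameter values.
    import random

    random.seed(0)

    # Toy "workflow": quality depends strongly on 'threshold', weakly on 'window'.
    def run_workflow(params):
        return -abs(params["threshold"] - 0.7) * 10 - abs(params["window"] - 5) * 0.1

    domains = {"threshold": [0.1, 0.3, 0.5, 0.7, 0.9], "window": [1, 5, 10, 20]}
    pheromone = {p: {v: 1.0 for v in vals} for p, vals in domains.items()}

    def choose(p):
        values, weights = zip(*pheromone[p].items())
        return random.choices(values, weights=weights)[0]

    for _ in range(200):                                  # colony iterations
        params = {p: choose(p) for p in domains}
        quality = run_workflow(params)
        for p, v in params.items():                       # evaporate, then deposit
            for val in pheromone[p]:
                pheromone[p][val] *= 0.98
            pheromone[p][v] += max(0.0, 10.0 + quality)   # better runs deposit more

    # A parameter whose pheromone concentrates on few values influenced the
    # outcome more strongly; use that concentration as a crude significance score.
    for p, trail in pheromone.items():
        significance = max(trail.values()) / sum(trail.values())
        print(p, round(significance, 2))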

  • 34.
    Alam Khan, Fakhri
    et al.
    University of Vienna.
    Han, Yuzhang
    University of Vienna.
    Pllana, Sabri
    University of Vienna.
    Brezany, Peter
    University of Vienna.
    Estimation of Parameters Sensitivity for Scientific Workflows (2009). In: 2009 International Conference on Parallel Processing Workshops, IEEE, 2009, p. 457-462. Conference paper (Refereed)
    Abstract [en]

    Usually workflow activities in the scientific domain depend on a collection of parameters. These parameters determine the output of the activity, and consequently the output of the whole workflow. In the scientific domain, workflows have an exploratory nature and are used to understand a scientific phenomenon or answer scientific questions. In the process of a scientific experiment a workflow is executed multiple times using various values of the parameters of activities. It is relevant to identify (1) which parameter strongly affects the overall result of the workflow and (2) for which combination of parameter values we obtain the expected result. Foreseeing these issues, in this paper we present our methodology to estimate the significance of all scientific workflow parameters as well as to identify the most significant parameter of the workflow. The estimation of parameter significance enables the scientist to fine-tune and optimize their results efficiently. Furthermore, we empirically validate our methodology on the Non-Invasive Glucose Measurement (NIGM) workflow and discuss our results. The NIGM workflow uses a neural network model to calculate the glucose level in patient blood. The neural network model has a set of parameters which affect the result of the workflow significantly, but unfortunately the significance of these parameters is commonly unknown to the user. We present our approach for estimating and quantifying the impact significance of neural network parameters.

  • 35.
    Alam Khan, Fakhri
    et al.
    University of Vienna.
    Han, Yuzhang
    University of Vienna.
    Pllana, Sabri
    University of Vienna.
    Brezany, Peter
    University of Vienna.
    Provenance Support for Grid-Enabled Scientific Workflows (2008). In: Fourth International Conference on Semantics, Knowledge and Grid, IEEE Computer Society, 2008, p. 173-180. Conference paper (Refereed)
    Abstract [en]

    The Grid is evolving, and new concepts like the Semantic Grid and the Knowledge Grid are rapidly emerging, where humans and distributed machines share, exchange, and manage data and resources intelligently. Computational scientists typically use workflows to describe and manage scientific discovery processes. However, the credibility of the obtained results in the scientific community is questionable if the computational experiment is not reproducible. This issue is addressed in the research reported in this paper via the development of a workflow provenance system for Grid-enabled scientific workflows. Workflow provenance collects data on workflow activities, data flow and workflow clients. Provenance information can be used to trace and test workflows and the data produced. Our approach supports reproducibility (i.e., re-enactment of a workflow by an independent user) and dataflow visualization (i.e., visualization of statistical characteristics of input/output data). We illustrate our approach on the Non-Invasive Glucose Measurement (NIGM) application.

  • 36. Albrecht, Mario
    et al.
    Kerren, Andreas
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    Klein, Karsten
    Kohlbacher, Oliver
    Mutzel, Petra
    Paul, Wolfgang
    Schreiber, Falk
    Wybrow, Michael
    On Open Problems in Biological Network Visualization (2009). In: Graph Drawing: 17th International Symposium, GD 2009, Chicago, IL, USA, September 22-25, 2009. Revised Papers / [ed] David Eppstein and Emden R. Gansner, Berlin Heidelberg New York: Springer, 2009, p. 256-267. Chapter in book (Refereed)
    Abstract [en]

    Much of the data generated and analyzed in the life sciences can be interpreted and represented by networks or graphs. Network analysis and visualization methods help in investigating them, and many universal as well as special-purpose tools and libraries are available for this task. However, the two fields of graph drawing and network biology are still largely disconnected. Hence, visualization of biological networks typically does not apply state-of-the-art graph drawing techniques, and graph drawing tools do not respect the drawing conventions of the life science community.

    In this paper, we analyze some of the major problems arising in biological network visualization. We characterize these problems and formulate a series of open graph drawing problems. These use cases illustrate the need for efficient algorithms to present, explore, evaluate, and compare biological network data. For each use case, problems are discussed and possible solutions suggested.

  • 37.
    Aleksikj, Stefan
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Visualization of Quantified Self data from Spotify using avatars (2018). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits, Student thesis
    Abstract [en]

    The increased interest in self-tracking through the use of technology has given birth to the Quantified Self movement. The movement empowers users to gain self-knowledge from their own data. The overall idea is fairly recent and as such it provides a vast space for exploration and research. This project contributes to the Quantified Self movement by proposing a concept for visualization of personal data using an avatar. The overall work finds inspiration in Chernoff faces visualization and uses parts of that presentation method within the project design.

    This thesis presents a visualization approach for Quantified Self data using avatars and tests the proposed concept through a user study with two iterations. The manuscript provides a detailed overview of the design process, the questionnaire for the data mapping, the implementation of the avatars, the two user studies, and the analysis of the results. The avatars are evaluated using Spotify data. The implementation offers a visualization library that can be reused outside the scope of this thesis.

    The project managed to deliver an avatar that presents personal data through the use of facial expressions. The results show that the users can understand the proposed mapping of data. Some of the users were not able to gain meaningful insights from the overall use of the avatar, but the study gives directions for further improvements of the concept. 
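
    As a rough illustration of the Chernoff-style mapping this kind of avatar builds on, the Python sketch below (a simplified assumption, not the thesis implementation) normalizes a listening metric into the range 0 to 1 and maps it onto a facial-feature parameter such as mouth curvature.

        def normalize(value, lo, hi):
            """Clamp and scale a raw metric into the range [0, 1]."""
            if hi == lo:
                return 0.0
            return max(0.0, min(1.0, (value - lo) / (hi - lo)))

        def to_feature(value, lo, hi, feature_min, feature_max):
            """Map a raw metric onto a facial-feature parameter (Chernoff-style)."""
            t = normalize(value, lo, hi)
            return feature_min + t * (feature_max - feature_min)

        # Hypothetical example: map daily minutes listened (0-300)
        # onto mouth curvature (-1 = frown, +1 = smile)
        print(to_feature(value=220, lo=0, hi=300, feature_min=-1.0, feature_max=1.0))  # ~0.47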

    Download full text (pdf)
    fulltext
  • 38.
    Ali, Amjad
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Migration from Internet Protocol Version 4 To Internet Protocol Version 62014Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    IPv4 has played its big role in spreading the Internet and Internet-based applications for more than 20 years. Now it will hand over the stage to its more powerful successor, IPv6. IP is an important component of the TCP/IP protocol suite, and the Internet is built on it.

    IPv6 is a new-generation protocol suite proposed by the Internet Engineering Task Force (IETF) that uses 128-bit addresses instead of IPv4's 32-bit addresses. Moving to the next generation of the Internet Protocol is necessary to solve many problems in the current generation.

    Unfortunately, IPv4 and IPv6 are incompatible with each other, so smooth transition mechanisms are required during the migration from IPv4 to IPv6 networks. This paper aims to address this by presenting the design and implementation of IPv4-to-IPv6 transition scenarios, and it illustrates the IPv4-to-IPv6 transition mechanisms along with how to execute IPv6 commands.
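
    For readers unfamiliar with the difference in address size, the short Python sketch below (illustrative only, not part of the thesis) uses the standard ipaddress module to contrast a 32-bit IPv4 address with a 128-bit IPv6 address.

        import ipaddress

        v4 = ipaddress.ip_address("192.0.2.1")      # documentation-range IPv4 address
        v6 = ipaddress.ip_address("2001:db8::1")    # documentation-range IPv6 address

        print(v4.version, v4.max_prefixlen)         # 4 32  -> 32-bit address space
        print(v6.version, v6.max_prefixlen)         # 6 128 -> 128-bit address space

        # A dual-stack host simply holds both kinds of addresses during the transition period.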

    Download full text (pdf)
    fulltext
  • 39.
    Ali, Shaukat
    et al.
    Simula Research Laboratory, Norway.
    Damiani, F.
    University of Turin, Italy.
    Dustdar, Schahram
    TU Wien, Austria.
    Sanseverino, Marialuisa
    University of Turin, Italy.
    Viroli, Mirko
    University of Bologna, Italy.
    Weyns, Danny
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). Katholieke Universiteit Leuven, Belgium.
    Big data from the cloud to the edge: The aggregate computing solution2019In: PervasiveHealth: Pervasive Computing Technologies for Healthcare, ACM Publications, 2019, p. 177-182Conference paper (Refereed)
    Abstract [en]

    We advocate a novel concept of dependable intelligent edge systems (DIES), i.e., edge systems that ensure a high degree of dependability (e.g., security, safety, and robustness) and autonomy because of their applications in critical domains. Building DIES entails a paradigm shift in architectures for acquiring, storing, and processing potentially large amounts of complex data: data management is placed at the edge, between the data sources and local processing entities, with loose coupling to storage and processing services located in the cloud. As such, the literal definition of edge and intelligence is adopted, i.e., the ability to acquire and apply knowledge and skills is shifted towards the edge of the network, outside the cloud infrastructure. This paradigm shift offers flexibility, auto-configuration, and auto-diagnosis, but also introduces novel challenges. © 2019 ACM.

  • 40.
    Ali, Subhan
    et al.
    Norwegian University of Science & Technology, Norway.
    Akhlaq, Filza
    Sukkur IBA University, Pakistan.
    Imran, Ali Shariq
    Norwegian University of Science & Technology, Norway.
    Kastrati, Zenun
    Linnaeus University, Faculty of Technology, Department of Informatics.
    Daudpota, Sher Muhammad
    Sukkur IBA University, Pakistan.
    Moosa, Muhammad
    Norwegian University of Science & Technology, Norway.
    The enlightening role of explainable artificial intelligence in medical & healthcare domains: A systematic literature review2023In: Computers in Biology and Medicine, ISSN 0010-4825, E-ISSN 1879-0534, Vol. 166, article id 107555Article in journal (Refereed)
    Abstract [en]

    In the medical and healthcare domains, the interpretability and explainability of machine learning and artificial intelligence systems are crucial for building trust in their results. Errors caused by these systems, such as incorrect diagnoses or treatments, can have severe and even life-threatening consequences for patients. To address this issue, Explainable Artificial Intelligence (XAI) has emerged as a popular area of research, focused on understanding the black-box nature of complex and hard-to-interpret machine learning models. While humans can increase the accuracy of these models through technical expertise, understanding how these models actually function during training can be difficult or even impossible. XAI algorithms such as Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) can provide explanations for these models, improving trust in their predictions by exposing feature importance and increasing confidence in the systems. Many articles have been published that propose solutions to medical problems by using machine learning models alongside XAI algorithms to provide interpretability and explainability. In our study, we identified 454 articles published from 2018 to 2022 and analyzed 93 of them to explore the use of these techniques in the medical domain.
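
    As a minimal illustration of the kind of XAI workflow surveyed in the review (a sketch assuming scikit-learn and the shap package are installed, and not tied to any specific reviewed study), the snippet below fits a tree-based model on synthetic data and computes SHAP values as a measure of feature importance.

        import numpy as np
        import shap
        from sklearn.ensemble import RandomForestRegressor

        # Toy, synthetic data: 200 samples, 4 features (purely illustrative)
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 4))
        y = X[:, 0] + 0.5 * X[:, 2] + 0.1 * rng.normal(size=200)  # driven by features 0 and 2

        model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

        # TreeExplainer attributes each prediction to per-feature SHAP contributions
        explainer = shap.TreeExplainer(model)
        shap_values = explainer.shap_values(X)        # shape: (n_samples, n_features)

        # Global importance: mean absolute SHAP value per feature
        print(np.abs(shap_values).mean(axis=0))       # features 0 and 2 should dominate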

    Download full text (pdf)
    fulltext
  • 41.
    Aljadri, Sinan
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Chatbot: A qualitative study of users' experience of Chatbots2021Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    The aim of the present study has been to examine users' experience of chatbots from a business perspective and a consumer perspective. The study has also focused on highlighting what limitations a chatbot can have and possible improvements for future development. The study is based on a qualitative research method with semi-structured interviews that have been analyzed using a thematic analysis. The interview material has been analyzed in light of previous research and various theoretical perspectives such as Artificial Intelligence (AI) and Natural Language Processing (NLP). The results show that the experience of chatbots can differ between the businesses that offer them, which are more positive, and the consumers who use them for customer service. Limitations and suggestions for improvements around chatbots are also a consistent result of the study.

    Download full text (pdf)
    fulltext
  • 42.
    Alkhars, Abeer
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Mahmoud, Wasan
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Cross-Platform Desktop Development (JavaFX vs. Electron)2017Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Today, there are many technologies available for developing cross-platform desktop apps. JavaFX is a software platform based on the Java language and has a set of features that contribute to its success. Electron, on the other hand, is a newer framework that allows developers to employ web technologies (JavaScript, HTML, and CSS) to create cross-platform desktop applications. This thesis describes and compares these two frameworks. The purpose of this report is to provide guidance in choosing the right technique for a particular cross-platform desktop application. Simple cross-platform desktop applications have been developed to compare both approaches and to identify their advantages and disadvantages. The results show that both apps satisfied the functional and non-functional requirements. Each framework's architecture has its own advantages for building particular kinds of apps. Both frameworks have rich APIs as well as rich GUI components for building desktop apps. Electron has good documentation and community help, but it cannot be compared to JavaFX. The Electron app gives faster execution time and less memory usage than the JavaFX app. However, the implementation of OOP concepts in Electron using JavaScript raises some concerns in terms of encapsulation and inheritance.

    Download full text (pdf)
    fulltext
  • 43.
    Alkhateeb, Firas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    The Status Of Web Security In Sweden2022Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Receiving incorrect website content has become more common in recent years, which reflects the state of web security on the Internet. Government and other professional organisations' websites, in particular, should meet the best security requirements and follow security recommendations. This research studies websites located in the SE zone; the total number of investigated websites is 1166. The testing was done in two ways: the first uses a Dutch website testing tool called Internet.nl, and the second uses a tool developed as part of the research. The investigation focuses on Swedish websites and nine security extensions. These extensions prevent man-in-the-middle (MITM) attacks, downgrade attacks, cross-site scripting (XSS), and clickjacking, and ensure that the correct information is obtained when a client requests a website. The paper evaluates security between 2014 and 2022, examining which types of security measures are taken and which sector has the best security awareness. The use of security headers increased in 2022; the total use of the tested security standards in the SE zone is around 50%, and banks have the best security awareness.
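
    The kind of header check performed in such a survey can be approximated in a few lines of Python (a hypothetical sketch using the requests library, not the tool developed in the thesis): fetch a site and report which common security response headers are present.

        import requests

        # A few of the response headers commonly checked in surveys of this kind
        SECURITY_HEADERS = [
            "Strict-Transport-Security",   # mitigates MITM and downgrade attacks
            "Content-Security-Policy",     # mitigates XSS
            "X-Frame-Options",             # mitigates clickjacking
            "X-Content-Type-Options",      # prevents MIME sniffing
            "Referrer-Policy",
        ]

        def check_headers(url):
            """Return which security headers a site sends (illustrative check only)."""
            response = requests.get(url, timeout=10)
            return {name: response.headers.get(name) for name in SECURITY_HEADERS}

        # Hypothetical usage:
        # for name, value in check_headers("https://example.se").items():
        #     print(name, "present" if value else "MISSING")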

    Download full text (pdf)
    The status of web security in Sweden - Cyber security
  • 44.
    Alklid, Jonathan
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Time to Strike: Intelligent Detection of Receptive Clients: Predicting a Contractual Expiration using Time Series Forecasting2020Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    In recent years, with the advances in Machine Learning and Artificial Intelligence, the demand for ever smarter automation solutions can seem insatiable. One such demand was identified by Fortnox AB, and is undoubtedly shared by many other industries dealing with contractual services: an intelligent solution capable of predicting the expiration date of a contractual period. As there was no clear evidence suggesting that Machine Learning models were capable of learning the patterns necessary to predict a contract's expiration, it was deemed desirable to determine the feasibility of the approach while also investigating whether it would perform better than a commonplace rule-based solution, something that Fortnox had already investigated in the past. To do this, two different solutions capable of predicting a contractual expiration were implemented. The first was a rule-based solution used as a baseline for comparison, and the second was a Machine Learning-based solution that featured a Decision Tree classifier as well as Neural Network models. The results suggest that Machine Learning models are indeed capable of learning and recognizing patterns relevant to the problem, with an average accuracy generally on the high end. Unfortunately, due to a lack of available data for testing and training, the results were too inconclusive to make a reliable assessment of overall accuracy beyond the learning capability. The conclusion of the study is that Machine Learning-based solutions show promising results, but with the caveat that the results should likely be seen as indicative of overall viability rather than representative of actual performance.
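
    To give a flavour of what such a Machine Learning-based solution can look like (a simplified sketch on synthetic data; the actual features and models used at Fortnox are not described here), the snippet below trains a scikit-learn decision tree to predict whether a contract is about to expire from a few hypothetical usage features.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        # Hypothetical features: [months since signing, invoices last quarter, support tickets]
        rng = np.random.default_rng(1)
        X = np.column_stack([
            rng.integers(1, 36, size=500),
            rng.integers(0, 12, size=500),
            rng.integers(0, 5, size=500),
        ])
        # Hypothetical label: older contracts with low recent activity tend to expire
        y = ((X[:, 0] > 20) & (X[:, 1] < 4)).astype(int)

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
        model = DecisionTreeClassifier(max_depth=4, random_state=1).fit(X_train, y_train)
        print(f"test accuracy: {model.score(X_test, y_test):.2f}")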

    Download full text (pdf)
    fulltext
  • 45. Allwood, Jens
    et al.
    Henrichsen, Peter Juel
    Grönqvist, Leif
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    Ahlsén, Elisabeth
    Gunnarsson, Magnus
    Transliteration between Spoken Language Corpora: Moving between Danish BySoc and Swedish GSLC2005In: Nordic Journal of Linguistics, Vol. 28, no 1, p. 5-36Article in journal (Refereed)
  • 46.
    Al-Saydali, Josef
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Al-Saydali, Mahdi
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Performance comparison between Apache and NGINX under slow rate DoS attacks2021Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    One of the novel threats to the Internet is the slow HTTP Denial of Service (DoS) attack at the application level, targeting web server software. A slow HTTP attack can have a high impact on web server availability for normal users, and it is cheap to mount compared to other types of attacks, which makes it one of the most feasible attacks against web servers. This project investigates the impact of slow HTTP attacks on the Apache and NGINX servers comparatively, and reviews the available configurations for mitigating such attacks. The performance of the Apache and NGINX servers under slow HTTP attack has been compared, as these two are the most widely used web server software globally. Identifying the most resilient web server software against this attack, and knowing the suitable configurations to defeat it, play a key role in securing web servers from one of the major threats on the Internet. Comparing the results of the experiments conducted on the two web servers shows that NGINX performs better than the Apache server under a slow-rate DoS attack without any configured defense mechanism. However, when defense mechanisms were applied to both servers, the Apache server behaved similarly to NGINX and successfully defeated the slow-rate DoS attack.
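
    The mechanism behind a slow-rate attack, and the request-read timeouts that mitigate it, can be illustrated with a small Python probe (a hypothetical sketch, not the test setup used in the project, and intended only for servers you own): it opens a connection, sends an incomplete request, and measures how long the server keeps the connection open before giving up.

        import socket
        import time

        def incomplete_request_timeout(host, port=80, max_wait=120):
            """Measure how long a server tolerates one incomplete HTTP request.
            Run only against servers you own; this mimics a single slow client."""
            start = time.time()
            with socket.create_connection((host, port), timeout=max_wait) as sock:
                # Send the request line and one header, but never the terminating blank line
                sock.sendall(b"GET / HTTP/1.1\r\nHost: " + host.encode() + b"\r\n")
                sock.settimeout(max_wait)
                try:
                    sock.recv(1024)   # returns when the server responds or closes the connection
                except socket.timeout:
                    return max_wait   # server waited at least max_wait seconds
            return time.time() - start

        # print(incomplete_request_timeout("localhost"))

    Keeping this tolerance short, for example via Apache's mod_reqtimeout or NGINX's client_header_timeout directive, is a common mitigation for this class of attack.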

    Download full text (pdf)
    fulltext
  • 47.
    Amatya, Suyesh
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Cross-Platform Mobile Development: An Alternative to Native Mobile Development2013Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Mobile devices and mobile computing have made tremendous advances and become ubiquitous in the last few years. As a result, the landscape has become seriously fragmented, which brings many challenges to the mobile development process. While the native approach to mobile development is still the predominant way to develop for a particular mobile platform, there has recently been a shift towards cross-platform mobile development as well.

    In this thesis, a survey of the literature has been performed to see the trends in cross-platform mobile development over the last few years. Based on the results of the survey, it is argued that the web-based approach, and in particular the hybrid approach, serves cross-platform development best. Using the hybrid approach, a prototype application has also been developed and built into native applications for different platforms. This has given a better insight into the domain of cross-platform mobile development and its main advantage: the unification of the development and testing process.

    The results of this work indicate that even though cross-platform tools are not fully mature, they show great potential and reduce the cost associated with developing native mobile applications. Cross-platform mobile development is equally suitable for rapid development of high-fidelity prototypes of mobile applications and for fairly complex, resource-intensive mobile applications in its own right. As upcoming trends and the evolution of HTML5 continue to redefine the web, allowing its growth as a software platform, great opportunities remain for cross-platform mobile development, which hence provides an attractive alternative to native mobile development.

    Download full text (pdf)
    Cross-Platform Mobile Development: An Alternative to Native Mobile Development - Suyesh Amatya
  • 48.
    Amberman, Madelene
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Johansson, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Tekniska krav från GDPR: Vad som krävs av en applikation som hanterar och samlar in samtycken för att tekniskt uppfylla kraven från GDPR2020Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Privacy policies within the EU are regulated by the General Data Protection Regulation (GDPR), which exists to protect a person's integrity while maintaining the free flow of data between its member states. Companies and organizations today spend considerable resources on the manual handling of personal data, resources that could be used for more value-adding activities if this process had a more automated flow. We therefore conduct a case study at Meriworks, whose system Imagevault handles images and consents in a manual process. By interviewing some of the organizations that use this system and through a literature study of the GDPR, we compiled the requirements for an application that handles consents and personal data, from both a technical and a user perspective. Using these requirements, we created a technical solution proposal that is meant to replace the current manual process in our case study. Our list of technical requirements derived from the GDPR is generic and can be used as guidelines for cases beyond this paper.
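
    As one way to picture the kind of consent record such an application needs to store (a hypothetical Python sketch, not the solution proposed for Imagevault), the GDPR's demand that consent be demonstrable and as easy to withdraw as to give translates roughly into a structure like this.

        from dataclasses import dataclass, field
        from datetime import datetime, timezone
        from typing import Optional

        @dataclass
        class ConsentRecord:
            """Minimal consent record: who consented, to what, when, and whether it was withdrawn."""
            subject_id: str                  # pseudonymous identifier of the data subject
            purpose: str                     # the specific purpose consent was given for
            given_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
            withdrawn_at: Optional[datetime] = None

            def withdraw(self):
                """Withdrawal must be as easy as giving consent (GDPR, Article 7)."""
                self.withdrawn_at = datetime.now(timezone.utc)

            @property
            def is_active(self):
                return self.withdrawn_at is None

        # Hypothetical usage
        consent = ConsentRecord(subject_id="subject-123", purpose="publish photo on website")
        consent.withdraw()
        print(consent.is_active)   # False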

    Download full text (pdf)
    Tekniska krav från GDPR
  • 49.
    Anantaprayoon, Amata
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Project X: All-in-one WAF testing tool2020Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    A Web Application Firewall (WAF) is used to protect web applications (web apps). One of the advantages of having a WAF is that it can detect possible attacks even if no validation is implemented in the web app. But how can a WAF protect the web app if the WAF itself is vulnerable? In general, four methods are used to test a WAF: fuzzing, payload execution, bypassing, and footprinting. There are several open-source WAF testing tools, but each appears to offer only one or two of these testing methods. That means a tester needs multiple tools, and must learn how each of them works, to test a WAF using all methods. This project aims to solve this difficulty by developing a WAF testing tool called ProjectX that offers all four testing methods. ProjectX has been tested in a testing environment, and the results show that it fulfilled its requirements. Moreover, ProjectX is available on GitHub for any developer who wants to improve it or add more functionality.
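
    As an illustration of what a single payload-execution check might look like (a hypothetical sketch using the requests library, unrelated to ProjectX's actual implementation), the snippet below sends a classic XSS probe to a lab target and infers from the response status whether the WAF blocked it.

        import requests

        def waf_blocks_payload(base_url, payload="<script>alert(1)</script>"):
            """Send one test payload to a target you are authorized to test.
            Many WAFs answer blocked requests with 403 or 406 instead of 200."""
            response = requests.get(base_url, params={"q": payload}, timeout=10)
            return response.status_code in (403, 406)

        # Hypothetical usage against a local lab environment:
        # print(waf_blocks_payload("http://localhost:8080/search"))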

    Download full text (pdf)
    ProjectX-all-in-one-waftestingtool
  • 50.
    Anderson, Max
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Anderson, Benjamin
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    An Analysis of Data Compression Algorithms in the Context of Ultrasonic Bat Bioacoustics2022Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Audio data compression seeks to reduce the size of sound files, making them easier to store and transfer, and is thus a highly valued tool for those working with large sets of audio data. For example, some biologists work with audio recordings of bats, which are well known for their frequent use of ultrasonic echolocation, and so these biologists can accrue massive amounts of high-frequency audio data. However, as many methods of audio compression are designed to specialize in the more common range of frequencies, they are not able to sufficiently compress bat audio, and many bat biologists instead work without compressing their data at all. This paper investigates the desiderata of a data compression method in the context of bat biology, experimentally compares several modern data compression algorithms, and discusses their pros and cons in terms of their potential use across various relevant contexts. The paper concludes by suggesting the algorithm Monkey's Audio for machines able to handle its higher resource demands. Otherwise, FLAC and WavPack yield similar size reduction rates at significantly faster speeds while being less resource-intensive. Of note is the interesting result produced by the algorithm 7-Zip PPMd Solid, which achieved consistently outstanding results within a single dataset but whose generalizability has yet to be determined.
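
    The benchmark methodology can be illustrated with a small Python sketch (a generic example using the standard library's lzma codec as a stand-in; the thesis itself benchmarks dedicated codecs such as FLAC, WavPack and Monkey's Audio): it measures the compression ratio and the time taken for one recording.

        import lzma
        import time
        from pathlib import Path

        def benchmark(path):
            """Measure compression ratio and time for one file (generic stand-in codec)."""
            raw = Path(path).read_bytes()
            start = time.perf_counter()
            compressed = lzma.compress(raw, preset=6)
            elapsed = time.perf_counter() - start
            return {
                "original_bytes": len(raw),
                "compressed_bytes": len(compressed),
                "ratio": len(compressed) / len(raw),   # lower is better
                "seconds": elapsed,
            }

        # Hypothetical usage on an ultrasonic recording:
        # print(benchmark("bat_recording_384kHz.wav"))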

    Download full text (pdf)
    fulltext