lnu.se Publications
251 - 300 of 485
  • 251.
    Kucher, Kostiantyn
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Weyns, Danny
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    A Self-Adaptive Software System to Support Elderly Care (2013). In: Proceedings of Modern Information Technology, MIT, 2013, 2013. Conference paper (Refereed)
    Download full text (pdf)
    fulltext
  • 252.
    Kurti, Arianit
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Dalipi, Fisnik
    Linnaeus University, Faculty of Technology, Department of Computer Science. Linnaeus University.
    Bridging the Gap between Academia and Industry: Lessons Learned from a Graduate IT Professional Development Program (2017). In: Abstract Book: 2nd Annual International Conference on Engineering Education & Teaching, 5-8 June 2017, Athens, Greece / [ed] Gregory T. Papanikos, Athens, 2017, p. 27-27. Conference paper (Other academic)
    Abstract [en]

    The rapid advance of technology constantly brings new demands for new skills and expertise among professionals in the IT industry. There is a constant need for people who have an in-depth understanding of, and know how to develop, innovative services using these new technologies. In these settings, the real challenge is to find the right people with the right education in an industry where yesterday's in-thing may be out of date tomorrow. To add to this challenge, universities are still “increasingly stove-piped in highly specialized disciplinary fields” (Hurlburt et al., 2010), and professionals lack flexible ways to develop their competences. All this points to the great challenges universities face in aligning academic development within degree curricula with the specific requirements of industry (Falcone et al. 2014). In this research effort we report our experiences from an ongoing Graduate Professional Development Program in which we address these challenges through a co-creation process with the IT industry based on open innovation. Through this model we bring together research expertise, academic experience, and experts from industry in a collaborative process for developing courses that suit the current needs of IT professionals. As an outcome of this process, the course content is tailor-made, as is everything connected to it: bite-size modules, adjustable pace, open and online educational resources, and a flipped-classroom approach to teaching. So far we have developed and delivered five courses that have been very well received by IT professionals. In this paper we aim to provide insights into approaches for facilitating continuous competence development for IT professionals within the regular educational offering of a university.

  • 253.
    Källén, Patrik
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Metsi, Simon
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Robust kommunikation med Raspberry Pi (2015). Independent thesis Basic level (university diploma), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Softhouse has developed prototypes in order to collect data from products and control them with the Raspberry Pi. Companies have previously sent employees to manually collect data in the field, which is very inefficient and expensive. In order to use the Raspberry Pi in other projects and strengthen their current systems, a communications protocol is needed to safely transmit data to a central server. One important aspect is that data collected on the Raspberry Pi must not be lost for unexpected reasons such as a power outage. The capacity of the Raspberry Pi also needed to be reviewed in order to know whether it could run for several years. As the basis we used TLS 1.2 with AES encryption over a TCP connection to strengthen safety. Parts of the data are read from the Raspberry Pi, transmitted to the server, and removed once an 'ok' is received from the server. This stops data from getting lost during unexpected events. Tests were run on the Raspberry Pi to see if it could run out in the field; for example, the hard drive and the temperature of the Raspberry Pi were tested.

    Download full text (pdf)
    fulltext
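
    A minimal Python sketch of the send-acknowledge-delete scheme the abstract describes, not the thesis' actual code; the host, port, acknowledgement token, and on-disk queue layout are all assumptions:

        import os
        import socket
        import ssl

        SERVER_HOST = "example.com"            # assumed server address
        SERVER_PORT = 8443                     # assumed port
        QUEUE_DIR = "/var/spool/sensor-queue"  # assumed on-disk queue of unsent readings

        context = ssl.create_default_context()  # TLS with certificate verification

        with socket.create_connection((SERVER_HOST, SERVER_PORT)) as raw:
            with context.wrap_socket(raw, server_hostname=SERVER_HOST) as tls:
                for name in sorted(os.listdir(QUEUE_DIR)):
                    path = os.path.join(QUEUE_DIR, name)
                    with open(path, "rb") as f:
                        tls.sendall(f.read())
                    # Delete the local copy only after the server acknowledges,
                    # so a power outage never loses data that was not yet stored.
                    if tls.recv(2) == b"ok":   # assumed acknowledgement token
                        os.remove(path)
                    else:
                        break  # keep the file and retry on the next run

    Keeping unsent readings on disk until the acknowledgement arrives is what makes the scheme robust to a power outage on the device.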
  • 254.
    Ladan, Zlatko
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Comparing performance between plain JavaScript and popular JavaScript frameworks (2015). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    JavaScript is used on the web together with HTML and CSS, in many cases through frameworks such as jQuery and Backbone.js. This project compares the speed and memory allocation of plain JavaScript and its two most used frameworks. JavaScript is not very fast, and it has missing features or features that differ from browser to browser; frameworks solve this problem, but at the cost of speed and memory allocation. The aim is therefore to find out how well plain JavaScript and the two frameworks jQuery and Backbone.js perform on Google Chrome Canary. The results varied (mostly) between the implementations and show that a to-do application is a good enough example to use when comparing heap allocation and the CPU time of methods. The results were compared using their mean values and ANOVA. Plain JavaScript was the fastest, but that alone may not be enough for a developer to stop using frameworks entirely. Based on the results of this project, a developer can choose to create a custom framework or use an existing one.

    Download full text (pdf)
    fulltext
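
    The mean-value and ANOVA comparison mentioned above can be illustrated with a short sketch; this assumes SciPy, and the timing samples below are invented:

        from scipy.stats import f_oneway

        # ms per operation for three implementations (invented sample data)
        plain_js = [12.1, 11.8, 12.4, 12.0, 11.9]
        jquery   = [15.3, 15.1, 15.8, 15.5, 15.2]
        backbone = [14.2, 14.6, 14.1, 14.4, 14.3]

        f_stat, p_value = f_oneway(plain_js, jquery, backbone)
        print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
        # A small p-value indicates the mean run times differ between implementations.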
  • 255.
    Lagerkrants, Eleonor
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Holmström, Jesper
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Using machine learning to classify news articles (2016). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    In today’s society a large portion of the world’s population get their news on electronic devices. This opens up the possibility to enhance their reading experience by personalizing news for readers based on their previous preferences. We have conducted an experiment to find out how accurately a Naïve Bayes classifier can select articles that a user might find interesting. Our experiments were done on two users who read and classified 200 articles as interesting or not interesting. Those articles were divided into four datasets with the sizes 50, 100, 150 and 200. We used a Naïve Bayes classifier with 16 different settings configurations to classify the articles into two categories. From these experiments we could find several settings configurations that showed good results, and one was chosen as a good general setting for this kind of problem. We found that for datasets larger than 50 there was no significant increase in classification confidence.

    Download full text (pdf)
    fulltext
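
    As a rough illustration of this setup (not the thesis' own code), a bag-of-words Naïve Bayes classifier can be built with scikit-learn; the toy articles and labels below are invented:

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB

        articles = [
            "local team wins championship final",
            "new framework released for web developers",
            "stock markets fall after rate decision",
            "open source compiler gets major update",
        ]
        labels = ["not interesting", "interesting", "not interesting", "interesting"]

        vectorizer = CountVectorizer()   # bag-of-words features
        X = vectorizer.fit_transform(articles)
        classifier = MultinomialNB().fit(X, labels)

        new_article = ["browser engine update improves javascript speed"]
        print(classifier.predict(vectorizer.transform(new_article)))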
  • 256.
    Laitinen, Mikko
    et al.
    University of Eastern Finland, Finland.
    Lundberg, Jonas
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Levin, Magnus
    Linnaeus University, Faculty of Arts and Humanities, Department of Languages.
    Lakaw, Alexander
    University of Eastern Finland, Finland.
    Revisiting weak ties: Using present-day social media data in variationist studies (2017). In: Exploring Future Paths for Historical Sociolinguistics / [ed] Tanja Säily, Minna Palander-Collin, Arja Nurmi, Anita Auer, Amsterdam: John Benjamins Publishing Company, 2017, p. 303-325. Chapter in book (Refereed)
    Abstract [en]

    This article makes use of big and rich present-day data to revisit the social network model in sociolinguistics. This model predicts that mobile individuals with ties outside a home community, and thus loose-knit networks, tend to promote the diffusion of linguistic innovations. The model has been applied to a range of small ethnographic networks. We use a database of nearly 200,000 informants who send microblog messages on Twitter. We operationalize networks using two ratio variables: one of them captures a truly weak tie and the other a slightly stronger one. The results show a straightforward increase in innovative behavior in the truly weak-tie network, but the data indicate that innovations also spread under conditions of stronger networks, given that the network size is large enough. On the methodological level, our approach opens up new horizons in using big and often freely available data in sociolinguistics, both past and present.

  • 257.
    Laitinen, Mikko
    et al.
    Linnaeus University, Faculty of Arts and Humanities, Department of Languages. Univ Eastern Finland, Finland.
    Lundberg, Jonas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Levin, Magnus
    Linnaeus University, Faculty of Arts and Humanities, Department of Languages.
    Lakaw, Alexander
    Linnaeus University, Faculty of Arts and Humanities, Department of Languages.
    Utilizing Multilingual Language Data in (Nearly) Real Time: The Case of the Nordic Tweet Stream (2017). In: Journal of universal computer science (Online), ISSN 0948-695X, E-ISSN 0948-6968, Vol. 23, no 11, p. 1038-1056. Article in journal (Refereed)
    Abstract [en]

    This paper presents the Nordic Tweet Stream, a cross-disciplinary digital humanities project that downloads Twitter messages from Denmark, Finland, Iceland, Norway and Sweden. The paper first introduces some of the technical aspects in creating a real-time monitor corpus that grows every day, and then two case studies illustrate how the corpus could be used as empirical evidence in studies focusing on the global spread of English. Our approach in the case studies is sociolinguistic, and we are interested in how widespread multilingualism which involves English is in the region, and what happens to ongoing grammatical change in digital environments. The results are based on 6.6 million tweets collected during the first four months of data streaming. They show that English was the most frequently used language, accounting for almost a third. This indicates that Nordic Twitter users choose English as a means of reaching wider audiences. The preference for English is the strongest in Denmark and the weakest in Finland. Tweeting mostly occurs late in the evening, and high-profile media events such as the Eurovision Song Contest produce considerable peaks in Twitter activity. The prevalent use of informal features such as univerbated verb forms (e.g., gotta for (HAVE) got to) supports previous findings of the speech-like nature of written Twitter data, but the results indicate that tweeters are pushing the limits even further.

  • 258.
    Landbris, Johan
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    A Non-functional evaluation of NoSQL Database Management Systems (2015). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    NoSQL is basically a family name for all database management systems (DBMS) that are not relational DBMS. The fast growth of social networks has led to a huge amount of unstructured data that NoSQL DBMS are supposed to handle better than relational DBMS. Most comparisons performed are between relational DBMS and NoSQL DBMS. In this paper, the comparison is instead between the non-functional properties of different types of NoSQL DBMS. Three of the most common NoSQL types are document stores, key-value stores and column stores, and the most used DBMS of those types are MongoDB, Redis and Apache Cassandra. After working with the databases and performing YCSB benchmarking, the conclusion is that if the database should handle an enormous amount of data, Cassandra is most probably the best choice. If speed is the most important property and all data fits within memory, Redis is probably the best suited database. If the database needs to be flexible and versatile, MongoDB is probably the best choice.

    Download full text (pdf)
    fulltext
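
    A generic micro-benchmark harness in the spirit of YCSB-style measurement, sketched under the assumption that `op` stands in for any client read or write call (the dummy in-memory store below is a placeholder):

        import time
        import statistics

        def benchmark(op, n=1000):
            latencies = []
            for _ in range(n):
                start = time.perf_counter()
                op()
                latencies.append((time.perf_counter() - start) * 1000.0)  # ms
            latencies.sort()
            return {
                "mean_ms": statistics.mean(latencies),
                "p95_ms": latencies[int(0.95 * n) - 1],
            }

        # Dummy operation standing in for a real MongoDB/Redis/Cassandra call:
        store = {}
        print(benchmark(lambda: store.__setitem__("key", "value")))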
  • 259.
    Ledinov, Dmytro
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    UpTime 4 - Health Monitoring Component (2013). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Download full text (pdf)
    dl222cs - thesis report
  • 260.
    Legaspi Ramos, Xurxo
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Scraping Dynamic Websites for Economical Data: A Framework Approach (2016). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    The Internet is a source of live data that is constantly updating with data from almost any field we can imagine. Tools that can automatically detect these updates and select the information we are interested in are becoming of utmost importance nowadays. That is the reason why, through this thesis, we focus on some economic websites, studying their structures and identifying a common type of website in this field: dynamic websites. Even though there are many tools that allow extracting information from the Internet, not many tackle this kind of website. For this reason we study and implement some tools that allow developers to address these pages from a different perspective.

    Download full text (pdf)
    Scraping Dynamic Websites for Economical Data by Xurxo Legaspi
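
    One common way to scrape dynamic pages, sketched here with Selenium under the assumption of a Chrome driver; the URL and CSS selector are placeholders, and the thesis' own framework may work differently:

        from selenium import webdriver
        from selenium.webdriver.common.by import By

        driver = webdriver.Chrome()
        driver.implicitly_wait(10)  # wait up to 10 s for dynamically rendered elements
        try:
            driver.get("https://example.com/markets")   # placeholder URL
            # Unlike a static HTTP fetch, the driver executes the page's JavaScript,
            # so content rendered after load is present in the DOM.
            rows = driver.find_elements(By.CSS_SELECTOR, "table.quotes tr")
            for row in rows:
                print(row.text)
        finally:
            driver.quit()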
  • 261.
    Lindmark, Anton
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Hall, Fredrik
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Säkerhet i CAN-bussen: Riskerna som medföljer Internet of Vehicles (2017). Independent thesis Basic level (university diploma), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    In this report, the security of integrated systems within vehicles is evaluated. Are the CAN bus and the devices that communicate with it secure? What are the weaknesses in security when modern technologies, for example built-in media systems, are implemented in vehicles and connected to the CAN bus? If a vehicle is attacked, the risk that the attacker succeeds is quite large; there are several risks and security holes in this modern technology. We have researched how easy it is to retrieve information from the vehicle and what can be done with this information, using both other scientific reports and a physical examination with an application that we developed. By reading over Bluetooth from the OBD2 connector, information such as the signals to unlock the vehicle or press the gas pedal can be read from the vehicle. Certain information is hidden from the normal user, such as a press of the gas pedal, and must be obtained by, for example, reading hidden packets. This can be done by tracing packets through various applications, such as Wireshark. Had this information been easy to access, it could be used in a malicious way: should the command to press the gas pedal be controllable wirelessly, this could create major and dangerous problems. The report investigates how to proceed and in what ways a vehicle can be attacked. An application was developed to investigate what information can be extracted relatively easily, such as the speed or the engine's revolutions per minute. Using an OBD2 device, the application communicates with the vehicle, retrieves information about a trip from start to stop, and reports it. Information is displayed to the user both while the vehicle is traveling and as a summary of the entire trip. The application can be used to save one's journeys, for example to report trips to an employer; trips are stored both in a database and locally on the phone, with the possibility of uploading to a web server. The application saves all selected information about a trip and can be customized with parameters depending on one's needs. It can also be used for monitoring one's driving, for example to detect abnormal values during a trip, such as far too many revolutions per minute.

    Download full text (pdf)
    fulltext
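
    Reading a value such as engine RPM over an ELM327-style OBD2 adapter can be sketched as follows; this assumes pyserial, a Bluetooth serial device path, and the standard OBD-II PID 0x0C, and is not the application developed in the thesis:

        import serial

        port = serial.Serial("/dev/rfcomm0", baudrate=38400, timeout=2)  # assumed port

        def query(command):
            port.write((command + "\r").encode("ascii"))
            return port.read_until(b">").decode("ascii", errors="replace")

        print(query("ATZ"))   # reset the adapter
        raw = query("010C")   # mode 01, PID 0x0C: engine RPM
        # A typical reply contains "41 0C A B"; RPM = ((A * 256) + B) / 4.
        # Exact framing varies by adapter, so we locate the "41" response token.
        tokens = raw.split()
        if "41" in tokens:
            i = tokens.index("41")
            a, b = int(tokens[i + 2], 16), int(tokens[i + 3], 16)
            print("RPM:", (a * 256 + b) / 4)
        port.close()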
  • 262.
    Liu, Jiayi
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Visualization of Relationships in Clustered Text Data (2013). Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Large amounts of text data exist. They are often extracted into topical tags for indexing, and they are connected by relationships such as sharing the same authors or being created in the same period. Together, this yields a big text network. In this thesis, a visualization tool to reveal information from big text data in such a network is introduced. The data are grouped with a clustering algorithm, and the groups, shown with tag clouds, give an overview of the dataset. Edge halo, a new approach for bundling edges, represents the relationships of text data within and between the groups. An application prototype was developed to visualize clustered text data with their relationships and give an overview of the network in one view.

    Download full text (pdf)
    fulltext
  • 263.
    Liv, Jakob
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Nygren, Fredrik
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Lastbalanseringskluster: En studie om operativsystemets påverkan på lastbalanseraren (2014). Independent thesis Basic level (university diploma), 5 credits / 7,5 HE credits. Student thesis
    Abstract [en]

    This report contains a study of an operating system's impact on the load balancer HAProxy. The study was performed in an experimental environment with four virtual clients for testing, one load balancer, and three web server nodes connected to the load balancer. The operating system was the main point of the study: the load on the load balancer's hardware, the response time, the number of connections, and the maximum number of connections per second were examined. The operating systems tested were Ubuntu 10.04, CentOS 6.5, FreeBSD 9.1 and OpenBSD 5.5. The results show that the load on the hardware and the response time are almost identical on all operating systems, with the exception of OpenBSD, where the conditions needed to run the hardware tests could not be achieved. FreeBSD was the operating system that managed the highest number of connections, along with CentOS; Ubuntu turned out to be more limited and OpenBSD was very limited. FreeBSD also managed the highest number of connections per second, followed by Ubuntu, CentOS and finally OpenBSD, which turned out to be the worst performer.

    Download full text (pdf)
    fulltext
  • 264.
    Luckert, Michael
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Schaefer-Kehnert, Moritz
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Using Machine Learning Methods for Evaluating the Quality of Technical Documents (2016). Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In an increasingly networked world, the availability of high-quality translations is critical for success in the face of growing international competition. Large international companies as well as medium-sized companies are required to provide well-translated, high-quality technical documentation for their customers, not only to be successful in the market but also to meet legal regulations and avoid lawsuits. This thesis therefore focuses on the evaluation of translation quality, specifically concerning technical documentation, and answers two central questions:

    • How can the translation quality of technical documents be evaluated, given the original document is available?
    • How can the translation quality of technical documents be evaluated, given the original document is not available?

    These questions are answered using state-of-the-art machine learning algorithms and translation evaluation metrics in the context of a knowledge discovery process. The evaluations are done on a sentence level and recombined on a document level by binarily classifying sentences as automated translation or professional translation. The research is based on a database containing 22,327 sentences and 32 translation evaluation attributes, which are used to optimize five different machine learning approaches. An optimization process consisting of 795,000 evaluations shows a prediction accuracy of up to 72.24% for the binary classification. Based on the developed sentence-based classification systems, documents are classified by recombining the affiliated sentences, and a framework for rating document quality is introduced. The approach thus successfully creates a classification and evaluation system.

    Download full text (pdf)
    fulltext
  • 265.
    Luckert, Michael
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Schaefer-Kehnert, Moritz
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Löwe, Welf
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Ericsson, Morgan
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Wingkvist, Anna
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    A Classifier to Determine Whether a Document is Professionally or Machine Translated (2016). In: Perspectives in Business Informatics Research, BIR 2016, Springer, 2016, p. 339-353. Conference paper (Refereed)
    Abstract [en]

    In an increasingly networked world, the availability of high quality translations is critical for success, especially in the context of international competition. International companies need to provide well translated, high quality technical documentation not only to be successful in the market but also to meet legal regulations. We seek to evaluate translation quality, specifically concerning technical documentation, and formulate a method to evaluate the translation quality of technical documents both when we do have access to the original documents and when we do not. We rely on state-of-the-art machine learning algorithms and translation evaluation metrics in the context of a knowledge discovery process. Our evaluation is performed on a sentence level, where each sentence is classified as either professionally translated or machine translated. The results for each sentence are then combined to evaluate the full document. The research is based on a database that contains 22,327 sentences and 32 translation evaluation attributes, which are used to optimize the decision trees that evaluate translation quality. Our method achieves an accuracy of 70.48% on sentence level for texts in the database and can accurately classify documents with at least 100 sentences.

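    The sentence-level classification with document-level recombination can be sketched with scikit-learn; the two-attribute feature vectors below stand in for the paper's 32 translation evaluation attributes, and the data are invented:

        from collections import Counter
        from sklearn.tree import DecisionTreeClassifier

        # Toy training data: each row = evaluation attributes for one sentence.
        X_train = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.7], [0.1, 0.9]]
        y_train = ["professional", "professional", "machine", "machine"]

        tree = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

        # A document is labeled by majority vote over its sentence predictions.
        document_sentences = [[0.85, 0.15], [0.3, 0.6], [0.9, 0.05]]
        votes = Counter(tree.predict(document_sentences))
        print("document label:", votes.most_common(1)[0][0])
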
  • 266.
    Lundberg, Jonas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Fast and Precise Points-to Analysis (2014). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Many software engineering applications require points-to analysis. These client applications range from optimizing compilers to integrated program development environments (IDEs) and from testing environments to reverse-engineering tools. The software engineering applications are often user-interactive, or used in an edit-compile cycle, and need the points-to analysis to be fast and precise.

    In this compilation thesis, we present a new context- and flow-sensitive approach to points-to analysis that is both fast and precise. This is accomplished by a new SSA-based flow-sensitive dataflow algorithm (Paper 1) and a new context-sensitive analysis (Paper 2). Compared to other well-known analysis approaches, our approach is faster in practice: on average, twice as fast as the call string approach and an order of magnitude faster than the object-sensitive technique. In fact, it is only marginally slower than a context-insensitive baseline analysis. At the same time, it provides higher precision than the call string technique and is similar in precision to the object-sensitive technique. We confirm these statements with experiments in Paper 2.

    Paper 3 is a systematic comparison of ten different variants of context-sensitive points-to analysis using different call-depths for separating the contexts. Previous works indicate that analyses with a low call-depth provide only slightly better precision than context-insensitive analysis, and they find no substantial precision improvement when using a more expensive analysis with a higher call-depth. The hypothesis in Paper 3 is that substantial differences between the context-sensitive approaches show up if (and only if) precision is measured by more fine-grained metrics focusing on individual objects (rather than methods and classes) and the references between them. These metrics are justified by the many applications requiring such detailed object reference information.

    The main results in Paper 3 show that the differences between the context-sensitive analysis techniques are substantial, and that the differences between the context-insensitive analysis and the context-sensitive analyses with a low call-depth are also substantial. The major surprise was that increasing the call-depth did not lead to any substantial precision improvements. This is a negative result, since it indicates that, in practice, we cannot get a more precise points-to analysis by increasing the call-depth. Further investigations show that substantial precision improvements can be detected at higher call-depths, but they occur at such a low detail level that they are unlikely to be of any practical use.

  • 267.
    Lundberg, Sebastian
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Sönnerfors, Peter
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Integritetsstudie av Ceph: En mjukvarubaserad lagringsplattform (2015). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Modern organizations typically operate an expensive IT-based infrastructure with high-performance functionality and security to ensure that daily operations progress. Software-based storage platforms are an option that can be used to reduce these costs: by imitating advanced technology that already exists in traditional storage platforms, the same functionality can be implemented as software on commodity hardware. Ceph is one such alternative in use today. Because we consider software-based storage solutions largely untested, a series of pre-built tests was conducted to examine whether Ceph can guarantee data integrity. We observed a lack of functionality for maintaining data integrity after Ceph completed its recovery process while the storage cluster experienced high utilization.

    Download full text (pdf)
    fulltext
  • 268.
    Luu, Magnus
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Karlsson, Johan
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Server Message Block: En undersökning av potentiella prestandavinster mellan SMB 2.1 och SMB 3.0 i ett befintligt nätverk (2013). Independent thesis Basic level (university diploma), 5 credits / 7,5 HE credits. Student thesis
    Abstract [en]

    The dissertation addresses the comparison of Server Message Block (SMB) 3.0 and its predecessor SMB 2.1 on an existing network. The comparison was performed in four laboratory environments covering four operating systems: Windows Server 2008 R2, Windows Server 2012, Windows 7 and Windows 8. A total of four tests were performed: a feasibility study, Test 1, Test 2 and Test 3. The feasibility study was conducted to test the network performance between two computers, while the other tests put SMB 2.1 and 3.0 to the test.

    In Test 1, SMB 3.0 was considered to underperform compared to SMB 2.1. To confirm the results, Jose Barreto at Microsoft was contacted; Barreto stated that the two software applications Windows Defender and Windows Firewall could cause interference and performance reductions. Test 2 and Test 3 were therefore performed with the previously mentioned software disabled. The results remained varied, which may well be due to other factors such as outdated network card drivers.

    Download full text (pdf)
    fulltext
  • 269.
    Magnusson, Erik
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Grenmyr, David
    An Investigation of Data Flow Patterns Impact on Maintainability When Implementing Additional Functionality (2016). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    JavaScript is breaking ground with the wave of new client-side frameworks. However, there are some key differences between them. One major distinction is the data flow pattern they apply. As of now, there are two predominant patterns used in client-side frameworks: the two-way data flow pattern and the unidirectional data flow pattern.

    In this research, an empirical experiment was conducted to test the data flow patterns' impact on maintainability. The scope of maintainability in this research is defined by a set of metrics: number of lines of code, number of files, and number of dependencies. From the analysis of the results, no conclusion could be drawn that the data flow pattern affects maintainability using this research method.

    Download full text (pdf)
    fulltext
  • 270.
    Magnusson, Ludvig
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Rovala, Johan
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    AI Approaches for Classification and Attribute Extraction in Text (2017). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    As the amount of data online grows, the urge to use this data for different applications grows as well. Machine learning can be used with the intent to reconstruct and validate the data you are interested in. Although the problem is very domain-specific, this report attempts to shed some light on what we call strategies for classification, which in broad terms means a set of steps in a process whose end goal is to have classified some part of the original data. As a result, we hope to bring clarity to the classification process both in detail and from a broader perspective. The report investigates two classification objectives: one that depends on many variables found in the input data, and one that is more literal and depends only on one or two variables. Specifically, the data we classify are sales objects. Each sales object has a text describing the object and a related image. We attempt to place these sales objects into the correct product category, and we also try to derive the year of creation and dimensions such as height and width. Different approaches are presented in the aforementioned strategies in order to classify such attributes. The results showed that for broader attributes such as the product category, supervised learning is indeed an appropriate approach, while the same cannot be said for narrower attributes, which instead had to rely on entity recognition. Experiments on image analytics in conjunction with supervised learning proved image analytics to be a good addition when a higher precision score is required.

    Download full text (pdf)
    fulltext
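
    The narrower, entity-recognition-style attribute extraction can be approximated with simple rules; a hedged sketch with invented patterns and example text:

        import re

        text = "Oil painting from 1923, gilt frame, 60 x 40 cm."

        year_match = re.search(r"\b(1[6-9]\d{2}|20[0-2]\d)\b", text)
        dims_match = re.search(r"(\d+)\s*[x×]\s*(\d+)\s*cm", text, re.IGNORECASE)

        attributes = {}
        if year_match:
            attributes["year"] = int(year_match.group(1))
        if dims_match:
            attributes["height_cm"] = int(dims_match.group(1))
            attributes["width_cm"] = int(dims_match.group(2))
        print(attributes)  # {'year': 1923, 'height_cm': 60, 'width_cm': 40}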
  • 271.
    Magnusson, Per
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Responsiv hantering av bilder: Vad är en bästa lösning för responsiv hantering av bilder på olika enheter? (2014). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    With responsive web design, a design is created that dynamically adapts to various devices with different screen widths, without the need to create a custom layout for each specific device. In recent years the use of mobile devices to surf the Web has increased, and the demands placed on mobile surfing grow with it. A problem that arises with mobile devices is that, despite their small screens, they use the same high-resolution images created for larger screens. This means that, despite resizing images to responsively adjust them to the device, an unnecessarily large image is downloaded. Due to poor connection speeds, this may result in a slow download, which in turn gives the user a slow browsing experience, and it may also consume an excessive amount of bandwidth. This paper investigates and evaluates different frameworks and solutions for responsive handling of images to find out what the best solution to this problem is. What the best solution is turns out to depend largely on the basic conditions that prevail and on the requirements initially placed on the application in which the solution is to be implemented.

    Download full text (pdf)
    responsiv_hantering_av_bilder.pdf
  • 272.
    Mahdavi-Hezavehi, Sara
    Linnaeus University, Faculty of Technology, Department of Computer Science. Univ Groningen, Netherlands.
    Handling Multiple Quality Attributes Trade-off in Architecture-based Self-adaptive Systems (2016). In: Proceedings of the 10th European Conference on Software Architecture Workshops (ECSA-W), Association for Computing Machinery (ACM), 2016. Conference paper (Refereed)
    Abstract [en]

    Self-adaptive systems are capable of autonomously making runtime decisions in order to deal with uncertain circumstances. In architecture-based self-adaptive (ABSA) systems, the feedback loop uses self-reflecting models to perform decision making and ultimately apply adaptation to the system. One aspect of this decision-making mechanism is handling trade-offs between the system's quality attributes: an ABSA system is required to address the potential impacts of adaptation on multiple quality attributes and select the adaptation option that best satisfies the quality attributes of the system. In this PhD project, we study and propose an architecture-based solution that uses runtime knowledge of the system and its environment to handle quality-attribute trade-offs and the decision-making mechanism in the presence of uncertainty about the system's quality goals. For validation, we will a) create and set up case studies in various domains, and b) use exemplars to benchmark our proposed method against existing approaches.

  • 273.
    Mahdavi-Hezavehi, Sara
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science. University of Groningen, Netherlands.
    Avgeriou, Paris
    University of Groningen, Netherlands.
    Weyns, Danny
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    A Classification Framework of Uncertainty in Architecture-Based Self-Adaptive Systems with Multiple Quality Requirements (2017). In: Managing Trade-offs in Adaptable Software Architectures / [ed] Ivan Mistrík, Nour Ali, John Grundy, Rick Kazman, Bradley Schmerl, Elsevier, 2017, 1, p. 45-77. Chapter in book (Refereed)
    Abstract [en]

    Context: The underlying uncertainty in self-adaptive systems aggravates the complexity of selecting the best adaptation alternative and handling requirements trade-offs. To efficiently tackle uncertainty, it is necessary to have a comprehensive overview of the different types of uncertainty and their specifications. Objective: In this paper we aim at (a) reviewing the state of the art of architecture-based approaches tackling uncertainty in self-adaptive systems with multiple quality requirements, (b) proposing a classification framework for this domain, and (c) classifying the current approaches according to this framework. Method: We conducted a systematic literature review by performing an automatic search on twenty-seven selected venues and books in the domain of self-adaptive systems. Results: We propose a classification framework for uncertainty and its sources in the domain of architecture-based self-adaptive systems with multiple quality requirements. We map the 51 identified primary studies into the framework and present the classified results. Conclusions: Our results help researchers to understand the current state of research regarding uncertainty in architecture-based self-adaptive systems with multiple concerns, and to identify areas for improvement in the future. © 2017 Elsevier Inc. All rights reserved.

  • 274.
    Mahdavi-Hezavehi, Sara
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science. Univ Groningen, Netherlands ; Room 576 Bernoulliborg, Netherlands.
    Durelli, Vinicius H. S.
    Univ Groningen, Netherlands ; Room 576 Bernoulliborg, Netherlands ; Univ Sao Paulo, Brazil.
    Weyns, Danny
    Linnaeus University, Faculty of Technology, Department of Computer Science. Katholieke Univ Leuven, Belgium.
    Avgeriou, Paris
    Univ Groningen, Netherlands ; Room 576 Bernoulliborg, Netherlands.
    A systematic literature review on methods that handle multiple quality attributes in architecture-based self-adaptive systems (2017). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 90, p. 1-26. Article, review/survey (Refereed)
    Abstract [en]

    Context: Handling multiple quality attributes (QAs) in the domain of self-adaptive systems is an understudied research area. One well-known approach to engineer adaptive software systems and fulfill QAs of the system is architecture-based self-adaptation. In order to develop models that capture the required knowledge of the QAs of interest, and to investigate how these models can be employed at runtime to handle multiple quality attributes, we need to first examine current architecture-based self-adaptive methods. Objective: In this paper we review the state of the art of architecture-based methods for handling multiple QAs in self-adaptive systems. We also provide a descriptive analysis of the data collected from the literature. Method: We conducted a systematic literature review by performing an automatic search on 28 selected venues and books in the domain of self-adaptive systems. As a result, we selected 54 primary studies which we used for data extraction and analysis. Results: Performance and cost are the most frequently addressed set of QAs. Current self-adaptive systems dealing with multiple QAs mostly belong to the domain of robotics and the web-based systems paradigm. The most widely used mechanisms/models to measure and quantify sets of QAs are QA data variables. After QA data variables, utility functions and Markov chain models are the most common models; they are also used for the decision-making process and the selection of the best solution in the presence of many alternatives. The most widely used tools to deal with multiple QAs are PRISM and IBM's autonomic computing toolkit. KLAPER is the only language that has been specifically developed to deal with quality property analysis. Conclusions: Our results help researchers to understand the current state of research regarding architecture-based methods for handling multiple QAs in self-adaptive systems, and to identify areas for improvement in the future. To summarize, further research is required to improve existing methods performing trade-off analysis and preemption; in particular, new methods may be proposed that make use of models to handle multiple QAs and to enhance and facilitate the trade-off analysis and decision-making mechanism at runtime. (C) 2017 Published by Elsevier B.V.

  • 275.
    Malmborg, Rasmus
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Ödalen Frank, Leonard
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Jämförelse mellan populära molnlagringstjänster: Ur ett hastighetsperspektiv (2014). Independent thesis Basic level (university diploma), 5 credits / 7,5 HE credits. Student thesis
    Abstract [en]

    Cloud storage services have seen increased usage and form an emerging market. This paper focuses on examining various cloud storage services from a speed perspective. When small files are exchanged between client and server, the speed of the service is of little importance; for larger transfers, however, the speed of the service plays a more important role.

    Regular speed measurements have been carried out against the most popular cloud storage services. The tests have been performed from Sweden and USA. The tests have been carried out over several days and at different times of day, to determine if speed differences exist.

    The results show that there are significant differences in speed between Sweden and the United States. In Sweden, Mega and Google Drive had the highest average speed. Within the United States, Google Drive had the highest average speed, but the variability between the services was not as great as in Sweden. In the results between different time periods, it was difficult to discern a pattern, with the exception of Google Drive in Sweden, which consistently worked best during the night/morning; Mega also worked best during the night.

    Download full text (pdf)
    fulltext
  • 276.
    Malyutin, Oleksandr
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    A System of Automated Web Service Selection (2016). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In the modern world, service-oriented applications are becoming more and more popular from year to year. To remain competitive, these Web services must provide a high level of quality. From another perspective, the end user is interested in getting the service that fits the user's requirements best: for limited resources, get the service with the best available quality. In this work, a model for automated service selection is presented to solve this problem. The main focus of this work was to provide high accuracy of this model when predicting a Web service's response time. Therefore, several machine learning algorithms were selected and used in the model, several experiments were conducted, and their results were evaluated and analysed in order to select the one machine learning algorithm that coped best with the defined task. This machine learning algorithm was used in the final version of the model.

    As a result, the selection model was implemented, and its accuracy was around 80% when selecting a single Web service as the best from the list of available ones. Moreover, a strategy for measuring accuracy was also developed, the main idea of which is the following: not one but several Web services, whose difference in response time does not exceed a boundary value, can be considered optimal. According to this strategy, the maximum accuracy of selecting the best Web service was about 89%. In addition, a strategy for selecting the best Web service from the end-user side was developed to evaluate the performance of the implemented model.

    Finally, it should also be mentioned that the input data for the experiments was generated with the help of a dedicated tool, which allowed not only generating different input datasets without huge time consumption but also using input data of different types (linear, periodic) for the experiments.

    Download full text (pdf)
    fulltext
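
    The boundary-value selection strategy described above reduces to a few lines; the predicted response times and tolerance below are invented:

        # Pick the service with the lowest predicted response time, but treat
        # every service within BOUNDARY_MS of the best as equally acceptable.
        predicted_ms = {"serviceA": 120.0, "serviceB": 131.0, "serviceC": 210.0}
        BOUNDARY_MS = 15.0  # assumed tolerance

        best = min(predicted_ms, key=predicted_ms.get)
        acceptable = {
            name for name, ms in predicted_ms.items()
            if ms - predicted_ms[best] <= BOUNDARY_MS
        }
        print(best)        # 'serviceA'
        print(acceptable)  # {'serviceA', 'serviceB'} -- both count as optimal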
  • 277.
    Marchal, Jakob
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Andreasen, Mathias
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Autopositionering för röntgensystem (2014). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [sv]

    X-ray imaging is a field where the question arises whether the process could be automated to make it simpler for nurses. This would increase the number of patients who can be X-rayed, since the procedure would be faster. With the help of computer vision and a servo-controlled X-ray camera, parts of this vision can be realized by letting the X-ray camera adjust itself to a patient and move to a selected body part.

    Here, the open-source library OpenCV is examined and tested. A prototype of an automatic system is developed with the purpose of testing OpenCV's functionality and answering a number of questions:

    • How can X-ray imaging be automated using open-source software libraries focused on image processing?
    • What advantages and disadvantages can the use of a computer vision library have?
    • Can an automatic solution that could become a commercial product be developed with today's technology?
    Download full text (pdf)
    fulltext
  • 278.
    Marinis Artelaris, Spyridon
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Performance evaluation of routing protocols for Wireless Mesh Networks (2016). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Wireless mesh networks provide an organisation or a community with the means to extend or create a network independent of infrastructure. However, the network's dynamic topology, along with the fact that devices in the network might be mobile and move randomly, brings to light various kinds of problems in the network, the most common being routing. In this report, the routing problem is examined in terms of throughput, routing overhead, end-to-end delay, and packet delivery ratio for two chosen algorithms, namely the Dynamic MANET On-demand (DYMO) protocol and the Better Approach To Mobile Adhoc Networking (B.A.T.M.A.N.). Furthermore, this thesis also examines a Transmission Control Protocol (TCP) connection and compares several TCP congestion control mechanisms, two of which, TCP-Illinois and TCP-FIT, were implemented, to address the effects that different TCP congestion mechanisms have on an ad-hoc network when reliable connections are needed. The results show that DYMO is more stable, performs well overall, and has the lowest routing overhead. However, in a situation with limited or no mobility (in high mobility they perform poorly), proactive protocols like B.A.T.M.A.N. are worthy alternatives, provided the extra routing overhead in the network traffic does not cause any problems. Furthermore, regarding the TCP results, it was observed that TCP congestion algorithms designed specifically for wireless networks do offer better performance and should be considered when designing an ad-hoc network.

    Download full text (pdf)
    fulltext
  • 279.
    Martins, Rafael Messias
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Kruiger, Johannes F.
    University of Groningen, The Netherlands ; École Nationale de l’Aviation Civile, France.
    Minghim, Rosane
    University of São Paulo, Brazil.
    Telea, Alexandru C.
    University of Groningen, The Netherlands.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    MVN-Reduce: Dimensionality Reduction for the Visual Analysis of Multivariate Networks (2017). In: EuroVis 2017 - Short Papers / [ed] Barbora Kozlikova and Tobias Schreck and Thomas Wischgoll, Eurographics - European Association for Computer Graphics, 2017, p. 13-17. Conference paper (Refereed)
    Abstract [en]

    The analysis of Multivariate Networks (MVNs) can be approached from two different perspectives: a multidimensional one, consisting of the nodes and their multiple attributes, or a relational one, consisting of the network’s topology of edges. In order to be comprehensive, a visual representation of an MVN must be able to accommodate both. In this paper, we propose a novel approach for the visualization of MVNs that works by combining these two perspectives into a single unified model, which is used as input to a dimensionality reduction method. The resulting 2D embedding takes into consideration both attribute- and edge-based similarities, with a user-controlled trade-off. We demonstrate our approach by exploring two real-world data sets: a co-authorship network and an open-source software development project. The results point out that our method is able to bring forward features of MVNs that could not be easily perceived from the investigation of the individual perspectives only.

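    The combined model can be sketched as a blend of attribute distances and graph distances fed to a precomputed-distance embedding; MDS is used here only as a stand-in for the paper's dimensionality reduction method, and the data are random:

        import numpy as np
        from scipy.spatial.distance import squareform, pdist
        from scipy.sparse.csgraph import shortest_path
        from sklearn.manifold import MDS

        attrs = np.random.rand(6, 4)            # 6 nodes, 4 attributes each
        adjacency = np.zeros((6, 6))
        for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]:
            adjacency[i, j] = adjacency[j, i] = 1.0

        d_attr = squareform(pdist(attrs))       # attribute-based distances
        d_graph = shortest_path(adjacency, method="D", unweighted=True)

        alpha = 0.5                             # user-controlled trade-off
        combined = (alpha * d_attr / d_attr.max()
                    + (1 - alpha) * d_graph / d_graph.max())

        embedding = MDS(n_components=2, dissimilarity="precomputed",
                        random_state=0).fit_transform(combined)
        print(embedding.shape)                  # (6, 2)
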
  • 280.
    Martins, Rafael Messias
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Simaki, Vasiliki
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science. Lund University.
    Kucher, Kostiantyn
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Paradis, Carita
    Lund University.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    StanceXplore: Visualization for the Interactive Exploration of Stance in Social Media (2017). Conference paper (Refereed)
    Abstract [en]

    The use of interactive visualization techniques in Digital Humanities research can be a useful addition when traditional automated machine learning techniques face difficulties, as is often the case with the exploration of large volumes of dynamic—and in many cases, noisy and conflicting—textual data from social media. Recently, the field of stance analysis has been moving from a predominantly binary approach—either pro or con—to a multifaceted one, where each unit of text may be classified as one (or more) of multiple possible stance categories. This change adds more layers of complexity to an already hard problem, but also opens up new opportunities for obtaining richer and more relevant results from the analysis of stancetaking in social media. In this paper we propose StanceXplore, a new visualization for the interactive exploration of stance in social media. Our goal is to offer DH researchers the chance to explore stance-classified text corpora from different perspectives at the same time, using coordinated multiple views including user-defined topics, content similarity and dissimilarity, and geographical and temporal distribution. As a case study, we explore the activity of Twitter users in Sweden, analyzing their behavior in terms of topics discussed and the stances taken. Each textual unit (tweet) is labeled with one of eleven stance categories from a cognitive-functional stance framework based on recent work. We illustrate how StanceXplore can be used effectively to investigate multidimensional patterns and trends in stance-taking related to cultural events, their geographical distribution, and the confidence of the stance classifier. 

  • 281.
    Maryokhin, Tymur
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Data dissemination in large-cardinality social graphs (2015). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Near real-time event streams are a key feature in many popular social media applications. These types of applications allow users to selectively follow event streams to receive a curated list of real-time events from various sources. Due to the emphasis on recency, relevance, personalization of content, and the highly variable cardinality of social subgraphs, it is extremely difficult to implement feed following at the scale of major social media applications. This leads to multiple architectural approaches, but no consensus has been reached as to what is considered to be an idiomatic solution. As of today, there are various theoretical approaches exploiting the dynamic nature of social graphs, but not all of them have been applied in practice. In this paper, large-cardinality graphs are placed in the context of existing research to highlight the exceptional data management challenges that are posed for large-scale real-time social media applications. This work outlines the key characteristics of data dissemination in large-cardinality social graphs, and overviews existing research and state-of-the-art approaches in industry, with the goal of stimulating further research in this direction.

    Download full text (pdf)
    report
    Download full text (pdf)
    research protocol
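
    The two classic feed-following strategies discussed in this literature, fan-out-on-write (push) and fan-out-on-read (pull), contrast as follows in a toy sketch with invented data structures:

        from collections import defaultdict

        followers = {"celebrity": ["alice", "bob"], "alice": ["bob"]}
        inboxes = defaultdict(list)    # materialized timelines (push model)
        outboxes = defaultdict(list)   # per-author event logs (pull model)

        def post_push(author, event):
            # Push: write into every follower's inbox at post time.
            # Cheap reads, but expensive for high-cardinality author nodes.
            for f in followers.get(author, []):
                inboxes[f].append((author, event))

        def post_pull(author, event):
            outboxes[author].append(event)

        def read_pull(user, following):
            # Pull: assemble the feed at read time from each followed outbox.
            return [(a, e) for a in following for e in outboxes[a]]

        post_push("celebrity", "tweet 1")
        post_pull("alice", "tweet 2")
        print(inboxes["bob"])               # [('celebrity', 'tweet 1')]
        print(read_pull("bob", ["alice"]))  # [('alice', 'tweet 2')]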
  • 282.
    Maushagen, Jan
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Visual Analysis of Publication Networks (2013). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    This thesis documents the development of a web application addressing the problem of visualizing co-authorship networks. The visualization encompasses several views, each of which shows different aspects of the data, which is loaded from Academic Archive Online (DiVA), the library system that holds all publications released at Linnaeus University. To detect relationships among authors, a new interactive layout for node-link diagrams was developed which shows publications, authors, and corresponding organizations (faculties, departments) in a radial manner. This network view is connected to another view showing the attributes (year, type) of the publications. In development, particular emphasis was placed on rich support for user interaction in order to equip the user with a tool that allows graphical and explorative analysis of the underlying data.

    Download full text (pdf)
    Maushagen Thesis
  • 283.
    Melander, Mikael
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Synchronization and data merging between iOS, server and database: Solution for setup of synchronized offline capable crud functionality between iOS client and server2018Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Mobile applications are a rapidly growing market and offer companies the opportunity to vastly improve their productivity simply by being able to work anywhere. The enterprise industry does not always have the same requirements for a product as the consumer market. In this report, the idea is to build a solution that takes these requirements into consideration in order to create a starting point for a more sustainable solution: one that is self-hosted in order to keep data away from third parties, supports offline capabilities so that productivity is not interrupted, handles data merging so that no data corruption occurs, and is reusable in different situations to make it as cost- and time-efficient as possible. Solving these problems is of utmost importance for making the solution work in real life. To this end, research on the subject and on different ways to approach the problems was conducted, covering different technologies, frameworks and architectures. Based on those results, a complete working solution was implemented, proving that the above problems could be solved, with some limitations, by an open-source solution that keeps data away from big third-party services without sacrificing productivity.
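
    A common starting point for the data-merging problem described here is per-field last-write-wins reconciliation between the offline client copy and the server copy; a minimal sketch (field-level timestamps and the record shape are assumptions for illustration, not the thesis's actual design):

        def merge(server, client):
            """Per-field last-write-wins merge of two versions of the same record.
            Each record maps field -> (value, timestamp); the higher timestamp wins."""
            merged = dict(server)
            for field, (value, ts) in client.items():
                if field not in merged or ts > merged[field][1]:
                    merged[field] = (value, ts)
            return merged

        server = {"title": ("Report", 10), "status": ("draft", 12)}
        client = {"title": ("Final report", 15), "status": ("draft", 9)}
        print(merge(server, client))  # client's newer title wins, server's status kept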

    Download full text (pdf)
    fulltext
  • 284.
    Memeti, Suejb
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Programming and Optimization of Big-Data Applications on Heterogeneous Computing Systems2018Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The next-generation sequencing instruments enable biological researchers to generate voluminous amounts of data. In the near future, it is projected that genomics will be the largest source of big-data. A major challenge of big data is the efficient analysis of very large data-sets. Modern heterogeneous parallel computing systems, which comprise multiple CPUs, GPUs, and Intel Xeon Phis, can cope with the requirements of big-data analysis applications. However, utilizing these resources to their highest possible extent demands advanced knowledge of various hardware architectures and programming frameworks. Furthermore, optimized software execution on such systems demands consideration of many compile-time and run-time system parameters.

    In this thesis, we study and develop parallel pattern matching algorithms for heterogeneous computing systems. We apply our pattern matching algorithm for DNA sequence analysis. Experimental evaluation results show that our parallel algorithm can achieve more than 50x speedup when executed on host CPUs and more than 30x when executed on Intel Xeon Phi compared to the sequential version executed on the CPU.

    Thereafter, we combine machine learning and search-based meta-heuristics to determine near-optimal parameter configurations of parallel matching algorithms for efficient execution on heterogeneous computing systems. We use our approach to distribute the workload of the DNA sequence analysis application across the available host CPUs and accelerating devices and to determine the system configuration parameters of a heterogeneous system that comprise Intel Xeon CPUs and Xeon Phi accelerator. Experimental results show that the execution that uses the resources of both host CPUs and accelerating device outperforms the host-only and the device-only executions.

    Furthermore, we propose programming abstractions, a source-to-source compiler, and a run-time system for heterogeneous stream computing. Given a source code annotated with compiler directives, the source-to-source compiler can generate device-specific code. The run-time system can automatically distribute the workload across the available host CPUs and accelerating devices. Experimental results show that our solution significantly reduces the programming effort and the generated code delivers better performance than the CPUs-only or GPUs-only executions.
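
    The directive-driven code generation described in the last paragraph can be illustrated with a toy source-to-source pass; the '// #pragma offload' marker and the dispatch stub below are invented for illustration and are not the thesis's actual annotation syntax:

        def generate_device_code(source: str, device: str) -> str:
            """Toy source-to-source pass: a line preceded by the (hypothetical)
            '// #pragma offload' marker is wrapped in a device dispatch stub."""
            out, offload_next = [], False
            for line in source.splitlines():
                if line.strip() == "// #pragma offload":
                    offload_next = True
                elif offload_next:
                    out.append(f"dispatch_to_{device}({{ {line.strip()} }});")
                    offload_next = False
                else:
                    out.append(line)
            return "\n".join(out)

        print(generate_device_code("a = 1;\n// #pragma offload\nheavy_kernel();", "gpu"))
        # a = 1;
        # dispatch_to_gpu({ heavy_kernel(); });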

    Download full text (pdf)
    Comprehensive summary
    Download (jpg)
    Presentationsbild
  • 285.
    Memeti, Suejb
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Li, Lu
    Linköping University.
    Pllana, Sabri
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Kołodziej, Joanna
    Cracow University of Technology, Poland.
    Kessler, Christoph
    Linköping University.
    Benchmarking OpenCL, OpenACC, OpenMP, and CUDA: Programming Productivity, Performance, and Energy Consumption2017In: ProceedingARMS-CC '17 Proceedings of the 2017 Workshop on Adaptive Resource Management and Scheduling for Cloud Computing, New York, NY, USA: Association for Computing Machinery (ACM), 2017, p. 1-6Conference paper (Refereed)
    Abstract [en]

    Many modern parallel computing systems are heterogeneous at their node level. Such nodes may comprise general-purpose CPUs and accelerators (such as GPUs or the Intel Xeon Phi) that provide high performance with suitable energy-consumption characteristics. However, exploiting the available performance of heterogeneous architectures may be challenging. There are various parallel programming frameworks (such as OpenMP, OpenCL, OpenACC, and CUDA), and selecting the one that is suitable for a target context is not straightforward. In this paper, we study empirically the characteristics of OpenMP, OpenACC, OpenCL, and CUDA with respect to programming productivity, performance, and energy. To evaluate programming productivity we use our homegrown tool CodeStat, which enables us to determine the percentage of code lines required to parallelize the code using a specific framework. We use our tools MeterPU and x-MeterPU to evaluate the energy consumption and the performance. Experiments are conducted using the industry-standard SPEC benchmark suite and the Rodinia benchmark suite for accelerated computing, on heterogeneous systems that combine Intel Xeon E5 processors with a GPU accelerator or an Intel Xeon Phi co-processor.
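
    CodeStat, MeterPU and x-MeterPU are the authors' own tools and their internals are not described here; the sketch below only illustrates the kind of line-fraction metric the paper reports, with a made-up marker table rather than CodeStat's actual detection logic:

        # Toy illustration of a "percentage of framework-specific lines" metric.
        MARKERS = {
            "OpenMP": ["#pragma omp"],
            "OpenACC": ["#pragma acc"],
            "CUDA": ["__global__", "__device__", "cudaMalloc", "<<<"],
            "OpenCL": ["clEnqueue", "clCreate", "clBuild"],
        }

        def framework_line_fraction(source: str, framework: str) -> float:
            lines = [l for l in source.splitlines() if l.strip()]
            hits = sum(any(m in l for m in MARKERS[framework]) for l in lines)
            return 100.0 * hits / max(len(lines), 1)

        code = "#pragma omp parallel for\nfor (int i = 0; i < n; i++)\n    y[i] = a * x[i];"
        print(f"{framework_line_fraction(code, 'OpenMP'):.1f}% OpenMP-specific lines")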

  • 286.
    Memeti, Suejb
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Pllana, Sabri
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Accelerating DNA Sequence Analysis using Intel(R) Xeon Phi(TM)2015In: 2015 IEEE TRUSTCOM/BIGDATASE/ISPA, IEEE Press, 2015, Vol. 3, p. 222-227Conference paper (Refereed)
    Abstract [en]

    Genetic information is increasing exponentially, doubling every 18 months. Analyzing this information within a reasonable amount of time requires parallel computing resources. While considerable research has addressed DNA analysis using GPUs, so far not much attention has been paid to the Intel Xeon Phi coprocessor. In this paper we present an algorithm for large-scale DNA analysis that exploits the thread-level and SIMD parallelism of the Intel Xeon Phi. We evaluate our approach for various numbers of cores and thread allocation affinities in the context of real-world DNA sequences of mouse, cat, dog, chicken, human and turkey. The experimental results on the Intel Xeon Phi show speed-ups of up to 10× compared to a sequential implementation running on an Intel Xeon E5 processor.

    Download full text (pdf)
    fulltext
  • 287.
    Memeti, Suejb
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Pllana, Sabri
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Analyzing large-scale DNA Sequences on Multi-core Architectures2015In: Proceedings: IEEE 18th International Conferenceon Computational Science and Engineering, CSE 2015 / [ed] Plessl, C; ElBaz, D; Cong, G; Cardoso, JMP; Veiga, L; Rauber, T, IEEE Press, 2015, p. 208-215Conference paper (Refereed)
    Abstract [en]

    Rapid analysis of DNA sequences is important for preventing the evolution of different viruses and bacteria at an early phase, for early diagnosis of genetic predispositions to certain diseases (cancer, cardiovascular diseases), and for DNA forensics. However, real-world DNA sequences may comprise several gigabytes, and the process of DNA analysis demands adequate computational resources to be completed within a reasonable time. In this paper we present a scalable approach for parallel DNA analysis that is based on Finite Automata and is suitable for analysing very large DNA segments. We evaluate our approach for real-world DNA segments of mouse (2.7GB), cat (2.4GB), dog (2.4GB), chicken (1GB), human (3.2GB) and turkey (0.2GB). Experimental results on a dual-socket shared-memory system with 24 physical cores show speedups of up to 17.6x. Our approach is up to 3x faster than a pattern-based parallel approach that uses the RE2 library.
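
    The general shape of finite-automata-based parallel matching can be sketched as a pattern DFA scanned over overlapping chunks, so that matches straddling a chunk boundary are still counted exactly once; this is a simplified illustration of the technique, not the paper's implementation:

        from concurrent.futures import ThreadPoolExecutor

        def build_dfa(pattern, alphabet="ACGT"):
            """KMP-style DFA: dfa[state][ch] = next state; reaching len(pattern) is a match."""
            m = len(pattern)
            dfa = [dict() for _ in range(m + 1)]
            dfa[0][pattern[0]] = 1
            x = 0  # restart state
            for s in range(1, m):
                for c in alphabet:
                    dfa[s][c] = dfa[x].get(c, 0)
                dfa[s][pattern[s]] = s + 1
                x = dfa[x].get(pattern[s], 0)
            for c in alphabet:
                dfa[m][c] = dfa[x].get(c, 0)
            return dfa

        def count_in_chunk(dfa, m, text):
            state, hits = 0, 0
            for c in text:
                state = dfa[state].get(c, 0)
                if state == m:
                    hits += 1
            return hits

        def parallel_count(pattern, text, workers=4):
            """Chunks overlap by len(pattern)-1 characters so each match is seen once."""
            dfa, m = build_dfa(pattern), len(pattern)
            size = max(len(text) // workers, m)
            chunks = [text[i:i + size + m - 1] for i in range(0, len(text), size)]
            with ThreadPoolExecutor(workers) as ex:
                return sum(ex.map(lambda t: count_in_chunk(dfa, m, t), chunks))

        print(parallel_count("ACGT", "ACGTACGTAACGT"))  # 3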

    Download full text (pdf)
    fulltext
  • 288.
    Memeti, Suejb
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Pllana, Sabri
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Combinatorial optimization of DNA sequence analysis on heterogeneous systems2017In: Concurrency and Computation, ISSN 1532-0626, E-ISSN 1532-0634, Vol. 29, no 7, article id e4037Article in journal (Refereed)
    Abstract [en]

    Analysis of DNA sequences is a data- and compute-intensive problem and therefore requires suitable parallel computing resources and algorithms. In this paper, we describe our parallel algorithm for DNA sequence analysis that determines how many times a pattern appears in the DNA sequence. The algorithm is engineered for heterogeneous platforms that comprise a host with multi-core processors and one or more many-core devices. For combinatorial optimization, we use the simulated annealing algorithm. The optimization goal is to determine the number of threads, thread affinities, and DNA sequence fractions for host and device, such that the overall execution time of DNA sequence analysis is minimized. We evaluate our approach experimentally using real-world DNA sequences of various organisms on a heterogeneous platform that comprises two Intel Xeon E5 processors and an Intel Xeon Phi 7120P co-processing device. By running only about 5% of the possible experiments, our optimization method finds a near-optimal system configuration for DNA sequence analysis that yields average speedups of 1.6× and 2× compared with host-only and device-only execution, respectively.
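
    Simulated annealing over the configuration space can be sketched as follows; the configuration fields and the cost function are placeholders (in the paper the cost would be the measured execution time of a DNA analysis run):

        import math, random

        def simulated_annealing(cost, initial, neighbor, t0=1.0, cooling=0.95, steps=200):
            """Generic SA: accept worse configurations with probability exp(-delta/t)."""
            current, best = initial, initial
            t = t0
            for _ in range(steps):
                candidate = neighbor(current)
                delta = cost(candidate) - cost(current)
                if delta < 0 or random.random() < math.exp(-delta / t):
                    current = candidate
                if cost(current) < cost(best):
                    best = current
                t *= cooling
            return best

        # Hypothetical configuration: (host threads, fraction of the sequence on the device).
        def neighbor(cfg):
            threads, frac = cfg
            return (max(1, threads + random.choice((-4, 4))),
                    min(1.0, max(0.0, frac + random.uniform(-0.1, 0.1))))

        # Stand-in cost model; in practice this would be a measured runtime.
        cost = lambda cfg: abs(cfg[0] - 24) / 24 + abs(cfg[1] - 0.3)

        print(simulated_annealing(cost, (8, 0.5), neighbor))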

  • 289.
    Memeti, Suejb
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Pllana, Sabri
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Combinatorial Optimization of Work Distribution on Heterogeneous Systems2016In: Proceedings of 45th International Conference on Parallel Processing Workshops (ICPPW 2016), IEEE Press, 2016, p. 151-160Conference paper (Refereed)
    Abstract [en]

    We describe an approach that uses combinatorial optimization and machine learning to share the work between the host and device of heterogeneous computing systems such that the overall application execution time is minimized. We propose to use combinatorial optimization to search for the optimal system configuration in the given parameter space (such as the number of threads, thread affinity, and the work distribution between host and device). For each system configuration that is suggested by combinatorial optimization, we use machine learning to evaluate the system performance. We evaluate our approach experimentally using a heterogeneous platform that comprises two 12-core Intel Xeon E5 CPUs and an Intel Xeon Phi 7120P co-processor with 61 cores. Using our approach we are able to find a near-optimal system configuration by performing only about 5% of all possible experiments.
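
    The role machine learning plays here is that of a surrogate model: instead of running the program for every candidate configuration, a model trained on a few measured runs predicts the execution time. A minimal k-nearest-neighbour surrogate (the training data, distance metric and configuration encoding are invented for illustration, not the paper's model):

        def knn_predict(samples, query, k=3):
            """samples: list of (config_vector, measured_runtime); predict by averaging
            the runtimes of the k configurations closest to the query."""
            dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
            nearest = sorted(samples, key=lambda s: dist(s[0], query))[:k]
            return sum(t for _, t in nearest) / k

        # (threads, device workload fraction) -> measured seconds, hypothetical numbers
        measured = [((8, 0.0), 40.1), ((16, 0.2), 28.5), ((24, 0.3), 21.7),
                    ((24, 0.6), 25.9), ((48, 0.5), 30.2)]
        print(knn_predict(measured, (24, 0.4)))  # estimate without running the program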

  • 290.
    Memeti, Suejb
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Pllana, Sabri
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    PaREM: a Novel Approach for Parallel Regular Expression Matching2014In: 2014 IEEE 17th International Conference on Computational Science and Engineering (CSE), IEEE Press, 2014, p. 690-697Conference paper (Refereed)
    Abstract [en]

    Regular expression matching is essential for many applications, such as finding patterns in text, exploring substrings in large DNA sequences, or lexical analysis. However, sequential regular expression matching may be time-prohibitive for large problem sizes. In this paper, we describe a novel algorithm for parallel regular expression matching via deterministic finite automata. Furthermore, we present our tool PaREM, which accepts regular expressions and finite automata as input and automatically generates the corresponding code for our algorithm, amenable to parallel execution on shared-memory systems. We evaluate our parallel algorithm empirically by comparing it with a commonly used algorithm for sequential regular expression matching. Experiments on a dual-socket shared-memory system with 24 physical cores show speed-ups of up to 21× for 48 threads.
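
    The core idea behind parallelizing DFA matching is speculative execution: each chunk of the input is processed from every possible start state, yielding a per-chunk state summary, and the summaries are then composed sequentially once the true incoming state of each chunk is known. A compact sketch of that idea (the toy DFA and chunking are illustrative; PaREM's generated code is far more optimized):

        from concurrent.futures import ThreadPoolExecutor

        def run_chunk(dfa, accepting, chunk, start):
            """Run one chunk from a given start state; return (end state, #accept hits)."""
            state, hits = start, 0
            for ch in chunk:
                state = dfa[state].get(ch, 0)
                hits += state in accepting
            return state, hits

        def parallel_match(dfa, accepting, text, chunks=4):
            size = max(1, len(text) // chunks)
            parts = [text[i:i + size] for i in range(0, len(text), size)]
            states = range(len(dfa))
            with ThreadPoolExecutor() as ex:
                # Speculation: summarize every chunk for every possible start state.
                tables = list(ex.map(
                    lambda p: {s: run_chunk(dfa, accepting, p, s) for s in states}, parts))
            # Sequential composition: thread the real state through the summaries.
            state, total = 0, 0
            for table in tables:
                state, hits = table[state]
                total += hits
            return total

        # Toy DFA over {a, b}: state 2 (accepting) is reached whenever "ab" just ended.
        dfa = [{"a": 1, "b": 0}, {"a": 1, "b": 2}, {"a": 1, "b": 0}]
        print(parallel_match(dfa, {2}, "abaabab"))  # 3 occurrences of "ab"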

  • 291.
    Memeti, Suejb
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Pllana, Sabri
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    The Potential of Intel Xeon Phi for DNA Sequence Analysis2015In: ACACES 2015: Advanced Computer Architecture and Compilation for High-Performance and Embedded Systems, 2015, p. 263-266Conference paper (Other academic)
    Abstract [en]

    Genetic information is increasing exponentially, doubling every 18 months. Analyzing this information within a reasonable amount of time requires parallel computing resources. While considerable research has addressed DNA analysis using GPUs, so far not much attention has been paid to the Intel Xeon Phi coprocessor. In this paper we present an algorithm for large-scale DNA analysis that exploits the thread-level and the SIMD parallelism of the Intel Xeon Phi coprocessor. We evaluate our approach for various numbers of cores and thread allocation affinities in the context of real-world DNA sequences of mouse, cat, dog, chicken, human and turkey. The experimental results on Intel Xeon Phi show speed-ups of up to 10× compared to a sequential implementation running on an Intel Xeon processor E5.

    Download full text (pdf)
    poster
  • 292.
    Memeti, Suejb
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Pllana, Sabri
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Work Distribution of Data-Parallel Applications on Heterogeneous Systems2016In: High Performance Computing: ISC High Performance 2016 International Workshops, ExaComm, E-MuCoCoS, HPC-IODC, IXPUG, IWOPH, P^3MA, VHPC, WOPSSS, Frankfurt, Germany, June 19–23, 2016,  Revised Selected Papers / [ed] Michela Taufer, Bernd Mohr, Julian M. Kunkel, Springer, 2016, p. 69-81Chapter in book (Refereed)
    Abstract [en]

    Heterogeneous computing systems offer high peak performance and energy efficiency, and utilizing this potential is essential to achieve extreme-scale performance. However, optimal sharing of the work among processing elements in heterogeneous systems is not straightforward. In this paper, we propose an approach that uses combinatorial optimization to search for the optimal system configuration in a given parameter space. The optimization goal is to determine the number of threads, thread affinities, and workload partitioning such that the overall execution time is minimized. For combinatorial optimization we use Simulated Annealing. We evaluate our approach with a DNA sequence analysis application on a heterogeneous platform that comprises two Intel Xeon E5 processors and an Intel Xeon Phi 7120P co-processor. The obtained results demonstrate that the near-optimal system configuration determined by our simulated-annealing-based algorithm improves application performance.

  • 293.
    Memeti, Suejb
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Pllana, Sabri
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Kołodziej, Joanna
    Cracow University of Technology, Poland.
    Optimal Worksharing of DNA Sequence Analysis on Accelerated Platforms2016In: Resource Management for Big Data Platforms: Algorithms, Modelling, and High-Performance Computing Techniques, Springer, 2016, p. 279-309Chapter in book (Refereed)
    Abstract [en]

    In this chapter, we describe an optimized approach for DNA sequence analysis on a heterogeneous platform that is accelerated with the Intel Xeon Phi. Such platforms commonly comprise one or two general purpose CPUs and one (or more) Xeon Phi coprocessors. Our parallel DNA sequence analysis algorithm is based on Finite Automata and finds patterns in large-scale DNA sequences. To determine the optimal worksharing (that is, DNA sequence fractions for the host and accelerating device) we propose a solution that combines combinatorial optimization and machine learning. The objective function that we aim to minimize is the execution time of the DNA sequence analysis. We use combinatorial optimization to efficiently explore the system configuration space and determine with machine learning the near-optimal system configuration for execution of the DNA sequence analysis. We evaluate our approach empirically using real-world DNA segments of various organisms. For experimentation, we use an accelerated platform that comprises two 12-core Intel Xeon E5 CPUs and an Intel Xeon Phi 7120P accelerator with 61 cores.

  • 294.
    Meyer, Seva
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Understanding Software Adaptation and Evolution2015Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Software maintenance is a significant part of a software system's lifetime. A software system's lifetime incorporates many processes, including software adaptation and software evolution. These processes collide with one another and create confusion, as the boundaries that separate them are often difficult to distinguish. Knowing what exactly these concepts denote and how they are related can bring simplicity to future development of adaptive systems. The following document presents a systematic literature review which aims to outline the similarities and differences between adaptation and evolution and to further explain how they are related. The results of the study show that adaptation and evolution have become more entwined with the growth of interest in self-managing, dynamic software.

    Download full text (pdf)
    fulltext
  • 295.
    Mohammadnezhad, Mahdi
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Evaluating Stream Protocol for a Data Stream Center2016Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Linnaeus University aims to implement a Data Stream Centre that streams accumulated data from newspaper websites and articles, giving the university's scientists faster and easier access to this data. The project consists of multiple parts; the part we are responsible for is first nominating some text streaming protocols based on the criteria that are important for Linnaeus University and then evaluating them. These protocols are responsible for transferring the text stream from the robots (which read articles from the websites) to the data stream centre, and from there to the scientists. Some KPIs (Key Performance Indicators) are defined and the protocols are evaluated based on those KPIs. In this study we address the evaluation of network streaming protocols by first studying the protocols' specifications and nominating four protocols: TCP, HTTP/1.1, Server-Sent Events and WebSocket. Then, a fake robot and server are implemented with each protocol to simulate the functionality of the real robots, servers and scientists in the LNU data stream centre project. The evaluation is then carried out in this simulated environment using RawCAP, Wireshark and Message Analyzer. The results of this study indicate that the best-suited protocols for transferring text stream data from robot to data stream centre and from data stream centre to scientist are TCP and Server-Sent Events, respectively. In the concluding part, other protocols are also suggested in order of priority.
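
    Of the evaluated protocols, Server-Sent Events is the simplest to demonstrate: it is plain HTTP with a text/event-stream body in which each event is a "data:" line terminated by a blank line. A minimal Python server (the port and payloads are made up for illustration; this is not the thesis's test harness):

        import time
        from http.server import BaseHTTPRequestHandler, HTTPServer

        class SSEHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                # SSE is ordinary HTTP; the content type and framing do all the work.
                self.send_response(200)
                self.send_header("Content-Type", "text/event-stream")
                self.send_header("Cache-Control", "no-cache")
                self.end_headers()
                for i in range(3):  # stream a few demo events, one per second
                    self.wfile.write(f"data: article {i}\n\n".encode())
                    self.wfile.flush()
                    time.sleep(1)

        if __name__ == "__main__":
            HTTPServer(("localhost", 8000), SSEHandler).serve_forever()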

    Download full text (pdf)
    Thesis_Fulltxt
  • 296.
    Mousavi, Seyedamirhossein
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Maintainability Evaluation of Single Page Application Frameworks: Angular2 vs. React2017Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Web applications are subject to intense market forces, fast delivery, and rapid requirement and code change. These factors make maintainability a significant concern in any software development, and especially in web application development. In this report we develop a functionally equivalent prototype of an existing Angular app using ReactJS, and afterwards compare the maintainability of the two as defined by ISO/IEC 25010. The maintainability comparison is made by calculating the maintainability index for each of the applications using the Plato analysis tool.

    The results do not show a significant difference in the calculated values for the final products. Source code analysis shows that changes in data flow require more modification in the Angular app, but with the object-oriented approach provided by Angular we can have smaller chunks of code, and thus higher maintainability per file and a better average value.

    We conclude that, given the lack of research and models in this area, MI is a consistent measurement model and Plato is a suitable tool for analysis. Maintainability is highly bound to the implementation, but the functionality that Angular provides as a bundle is more appropriate for large enterprises and complex products, whereas React works better for smaller products.
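
    For reference, the maintainability index reported by tools such as Plato derives from the classic Halstead/McCabe formulation; the commonly cited form is given below, with the caveat that coefficients and averaging vary slightly between tools and Plato's exact variant may differ:

        MI = 171 - 5.2 ln(V) - 0.23 G - 16.2 ln(LOC)

    where V is the Halstead volume, G the cyclomatic complexity, and LOC the lines of code; the result is often normalized to the 0-100 range as max(0, 100 * MI / 171).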

    Download full text (pdf)
    fulltext
  • 297.
    Mubark, Athmar
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Computer Science Optimization Of Reverse auction: Reverse Auction2017Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Many people are still confused about and misunderstand the differences between auction types. In fact, there are only two major types of auction: the forward auction and the reverse auction [22]. In a forward auction a single seller offers an item for sale, with many competing buyers driving the price upward; in a reverse auction, a single buyer wants to purchase a service or an item from many sellers, who drive the price downward. There are many differences between these types of auction, including the progress of the auction, the winner selection criterion, and other factors.

    The reverse auction is nowadays one of the most preferred types of online auction. It is rapidly gaining popularity because it represents the buyer's side and helps the buyer drive prices down, in contrast to the forward or traditional auction.

    The aim of this study is to identify the most common types of reverse auction, compare them to one another to determine when each should be used by a buyer, and propose the most efficient implementation model for some of the types.

    The results of this study are a written report and a small demonstrator model showing how to implement an English auction and a second-price sealed-bid auction.
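
    For the second-price sealed-bid case in a reverse auction, the winner is the seller with the lowest ask, but the price paid is the second-lowest ask; a minimal sketch of the winner-selection rule (names and numbers are illustrative, not the thesis's demonstrator):

        def reverse_vickrey(bids):
            """bids: {seller: ask price}. The lowest ask wins but is paid the
            second-lowest ask, which keeps truthful bidding the dominant strategy."""
            if len(bids) < 2:
                raise ValueError("need at least two sellers")
            ranked = sorted(bids.items(), key=lambda kv: kv[1])
            (winner, _), (_, second_price) = ranked[0], ranked[1]
            return winner, second_price

        print(reverse_vickrey({"alpha": 120, "beta": 95, "gamma": 110}))  # ('beta', 110)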

    Download full text (pdf)
    fulltext
  • 298.
    Muccini, Henry
    et al.
    Univ Aquila, Italy.
    Sharaf, Mohammad
    Univ Aquila, Italy.
    Weyns, Danny
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science. Katholieke Univ Leuven, Belgium.
    Self-Adaptation for Cyber-Physical Systems: A Systematic Literature Review2016In: PROCEEDINGS OF 2016 IEEE/ACM 11TH INTERNATIONAL SYMPOSIUM ON SOFTWARE ENGINEERING FOR ADAPTIVE AND SELF-MANAGING SYSTEMS (SEAMS), IEEE, 2016, p. 75-81Conference paper (Refereed)
    Abstract [en]

    Context: Cyber-physical systems (CPS) seamlessly integrate computational and physical components. Adaptability, realized through feedback loops, is a key requirement to deal with uncertain operating conditions in CPS. Objective: We aim at assessing state-of-the-art approaches to handling self-adaptation in CPS at the architectural level. Method: We conducted a systematic literature review by searching four major scientific databases, resulting in 1103 candidate studies and eventually retaining 42 primary studies for data collection after applying inclusion and exclusion criteria. Results: The primary concerns of adaptation in CPS are performance, flexibility, and reliability. 64% of the studies apply adaptation at the application layer and 24% at the middleware layer. MAPE (Monitor-Analyze-Plan-Execute) is the dominant adaptation mechanism (60%), followed by agents and self-organization (both 29%). Remarkably, 36% of the studies combine different mechanisms to realize adaptation; 17% combine MAPE with agents. The dominating application domain is energy (24%). Conclusions: Our findings show that adaptation in CPS is a cross-layer concern, where solutions combine different adaptation mechanisms within and across layers. This raises challenges for future research both in the field of CPS and in self-adaptation, including: how to map concerns to layers and adaptation mechanisms, how to coordinate adaptation mechanisms within and across layers, and how to ensure system-wide consistency of adaptation.
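
    The MAPE mechanism identified above as dominant is a feedback loop over shared knowledge; a skeletal version (the sensor, effector and scaling actions are placeholders, not any surveyed system's design):

        class MapeLoop:
            """Minimal Monitor-Analyze-Plan-Execute loop over shared knowledge."""
            def __init__(self, sensor, effector, target):
                self.sensor, self.effector = sensor, effector
                self.knowledge = {"target": target}

            def monitor(self):
                self.knowledge["reading"] = self.sensor()

            def analyze(self):
                self.knowledge["drift"] = self.knowledge["reading"] - self.knowledge["target"]

            def plan(self):
                drift = self.knowledge["drift"]
                self.knowledge["action"] = (
                    "scale_down" if drift > 0 else "scale_up" if drift < 0 else None)

            def execute(self):
                if self.knowledge["action"]:
                    self.effector(self.knowledge["action"])

            def tick(self):
                self.monitor(); self.analyze(); self.plan(); self.execute()

        loop = MapeLoop(sensor=lambda: 0.9, effector=print, target=0.7)
        loop.tick()  # prints "scale_down": the monitored load is above target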

  • 299.
    Musil, Juergen
    et al.
    Vienna University of Technology, Vienna.
    Musil, Angelika
    Vienna University of Technology, Vienna.
    Weyns, Danny
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Biffl, Stefan
    Vienna University of Technology, Vienna.
    An Architecture Framework for Collective Intelligence Systems2015In: Proceedings: 12th Working IEEE/IFIP Conferenceon Software Architecture, WICSA 2015 / [ed] Len Bass, Patricia Lago, Philippe Kruchten, IEEE, 2015, p. 21-30Conference paper (Refereed)
    Abstract [en]

    Collective intelligence systems (CIS), such as wikis, social networks and content sharing platforms, have dramatically improved knowledge creation and sharing at the level of society. There is a trend to exploit the stigmergic mechanisms of CIS also at the organization/corporate level. However, despite the wide adoption of CIS, there is a lack of consolidated systematic knowledge of the architectural principles and practices that underlie CIS. Software architects lack guidance to design CIS for the application context of individual organizations. To address these challenges, we contribute an architecture framework for CIS, aligned with ISO/IEC/IEEE 42010. The CIS-AF framework provides guidance for architects to describe key CIS elements and systematically model a CIS that is well suited to an organization's context and goals. The framework is grounded in an in-depth analysis of existing CIS, workshops and interviews with key stakeholders, and experiences from developing a prototypical CIS. We evaluated the architecture framework in two industrial cases where CIS were designed and implemented using the framework. Results show that the framework effectively supports stakeholders by providing a shared vocabulary of CIS concepts, guiding them to systematically apply the stigmergic principles of CIS, and helping them kick-start CIS in their organizations.

    Download full text (pdf)
    fulltext
  • 300.
    Månsson, Anton
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Webbsystem säkerhet: Ur ett API och webbapplikations perspektiv [Web system security: from an API and web application perspective]2017Independent thesis Basic level (university diploma), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Web applications and APIs have become more popular every year, and security risks have increased. Along with more security risks and the large amount of sensitive information shared on web applications today, the problem grows. I therefore wanted to explore security deficiencies further, to increase my own knowledge and that of others in the field. To do that, a web application was developed and a survey was made of the security threats that exist today and the solutions to them. Some of the solutions encountered during the investigation were then implemented and tested in the web application. The result showed some general solutions, such as validation, which addressed a number of threats. The investigation also showed that security is not black and white: it is possible to implement countermeasures, but attackers can still find ways to attack systems.
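
    Validation as a cross-cutting defence, as the thesis concludes, typically combines input whitelisting with output escaping; a small sketch of both (the username pattern and comment rendering are illustrative, not the thesis's implementation):

        import html
        import re

        USERNAME = re.compile(r"[A-Za-z0-9_]{3,20}")  # whitelist: letters, digits, underscore

        def validate_username(raw: str) -> str:
            """Reject anything outside the whitelist instead of trying to clean it up."""
            if not USERNAME.fullmatch(raw):
                raise ValueError("invalid username")
            return raw

        def render_comment(comment: str) -> str:
            """Escape on output so stored text can never execute as markup (anti-XSS)."""
            return f"<p>{html.escape(comment)}</p>"

        print(render_comment('<script>alert(1)</script>'))
        # <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>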

    Download full text (pdf)
    fulltext