While the HPC community is working towards the development of the first Exaflop computer (expected around 2020), and although the Petaflop milestone was reached in 2008, still only a few HPC applications are able to fully exploit the capabilities of Petaflop systems. In this paper we argue that efforts to prepare HPC applications for Exascale should start before such systems become available. We identify challenges that need to be addressed and recommend solutions in key areas of interest, including formal modeling, static analysis and optimization, runtime analysis and optimization, and autonomic computing. Furthermore, we outline a conceptual framework for porting HPC applications to future Exascale computing systems and propose steps for its implementation.
The domain of Information Technology has been discussed with a focus on the security of information in web applications. The main purpose of the paper is to pinpoint and explain the main attacks on web applications. In the study I have used a real-world web application to demonstrate different types of attacks and the ways of preventing them. Cyber criminals are using certain tactics to gather sensitive information through web applications, so it is important to study this domain of IT. An experiment has been conducted to demonstrate the concept, and the achieved outcomes have been explained. It has been concluded that most web application vulnerabilities stem from bad design, according to the Microsoft Developer Network (MSDN) Design Guidelines for Secure Web Applications, and that most of the threats can be prevented by considering the basics of web application security while designing the application.
The development of and interest in Industry 4.0, together with the rapid development of Cyber Physical Systems, has created magnificent opportunities to develop maintenance to a totally new level. The Maintenance 4.0 vision considers massive exploitation of information regarding factories and machines to improve maintenance efficiency and efficacy, for example by facilitating the logistics of spare parts; on the other hand, this creates logistics issues for the data itself, which only exacerbate the data management issues that emerge when distributed maintenance platforms scale up. In fact, factories can be delocalized with respect to the data centers to which data has to be transferred to be processed. Moreover, any transaction needs communication, be it related to the purchase of spare parts, sales contracts, or decision making in general, and it has to be verified by remote parties. Keeping in mind the current average level of Overall Equipment Efficiency (50%), i.e. that there is a hidden factory behind every factory, the potential is huge. It is expected that most of this potential can be realised based on the use of the above-named technologies, relying on a new approach called blockchain technology, the latter aimed at facilitating data and transaction management. Blockchain supports logistics through a distributed ledger that records transactions in a verifiable and permanent way, thus removing the need for multiple remote parties to verify and store every transaction made, in agreement with the first “r” of maintenance (reduce, repair, reuse, recycle). Keeping in mind the total industrial influence on the consumption of natural resources, such as energy, the new technology advancements can allow for dramatic savings and can deliver important contributions to the green economy that Europe aims for. The paper introduces the novel technologies that can support sustainability of manufacturing and industry at large, and proposes an architecture to bind together said technologies to realise the vision of Maintenance 4.0.
The purpose of the thesis is to develop a new visualization method for Gene Ontologies and hierarchical clustering. These are both important tools in biology and medicine to study high-throughput data such as transcriptomics and metabolomics data. Enrichment of ontology terms in the data is used to identify statistically overrepresented ontology terms that give insight into relevant biological processes or functional modules. Hierarchical clustering is a standard method to analyze and visualize data in order to find relatively homogeneous clusters of experimental data points. Both methods support the analysis of the same data set, but are usually considered independently. However, a combined view, such as visualizing a large data set in the context of an ontology while taking a clustering of the data into account, is often desirable. The result of the current work is a user-friendly program that combines two different views for analysing Gene Ontology and clustering simultaneously. To make explorations of such big data possible, we developed a new visualization approach.
With the recent digitalization trends in industry, wireless sensors are gaining growing interest, in particular because they can be installed in locations that are inaccessible for wired sensors. Although great success has already been achieved in this area, energy limitation remains a major obstacle for further advances. As such, it is important to sample at a rate sufficient to capture important information without excessive energy consumption, and one way to achieve this is adaptive sampling. As software plays an important role in adaptive sampling techniques, a reference framework for software architecture is important to facilitate their design, modeling, and implementation. This study proposes a software architecture, named Rainbow, as the reference architecture and also develops an algorithm for adaptive sampling. The algorithm was implemented in the Rainbow architecture and tested using two datasets; the results show the proper operation of the architecture as well as the algorithm. In conclusion, the Rainbow software architecture has the potential to be used as a framework for adaptive sampling algorithms, and the developed algorithm allows adaptive sampling based on changes in the signal.
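As an illustration of change-driven adaptive sampling of the kind described above, the following minimal Python sketch halves the sampling interval when the signal changes quickly and backs off when it is stable. It is not the thesis algorithm; the function read_sensor() and all threshold and interval values are hypothetical placeholders.

import time

MIN_INTERVAL = 1.0      # seconds, fastest sampling rate
MAX_INTERVAL = 60.0     # seconds, slowest sampling rate
CHANGE_THRESHOLD = 0.5  # signal change that triggers faster sampling

def read_sensor():
    # Placeholder for the actual sensor driver call.
    raise NotImplementedError

def adaptive_sampling_loop():
    interval = MAX_INTERVAL
    previous = read_sensor()
    while True:
        time.sleep(interval)
        current = read_sensor()
        if abs(current - previous) > CHANGE_THRESHOLD:
            # Signal is changing: sample faster to catch the transient.
            interval = max(MIN_INTERVAL, interval / 2)
        else:
            # Signal is stable: back off to save energy.
            interval = min(MAX_INTERVAL, interval * 2)
        previous = current

Doubling or halving the interval is only one possible policy; the trade-off between reaction time and energy use depends on the application.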
This study compares the performance of three different open-source firmwares: DD-WRT, Open-WRT, and Tomato Firmware for the MIPS architecture. The tests follow two RFCs that describe how a performance analysis of a network should be carried out.
The summarized results point to one winner that performed generally better across all tests: Tomato Firmware. These results come from three different tests: a throughput test, a response time test, and a test with concurrent sessions.
The study also shows that performance is, generally speaking, very even across all firmwares in the different tests. An important aspect is that there is no clearly superior winner, which is due to, for example, how consistent the results have been. This also connects to a possible conclusion that the firmwares perform differently well depending on the type of task at hand.
As future work, a performance and functional analysis of the similar tools that each firmware includes is recommended. A study of the user interface of each firmware would also be of interest.
The number of Internet of Things (IoT) devices is rising, and Wireless Fidelity (Wi-Fi) networks are still widely used in IoT networks. Security protocols such as Wi-Fi Protected Access 2 (WPA2) are still in use in most Wi-Fi networks, but Wi-Fi Protected Access 3 (WPA3) is making its way as the new security standard. These security protocols are crucial in Wi-Fi networks with energy- and memory-constrained devices because of adversaries that could breach the confidentiality, integrity, and availability of networks through various attacks. Many research papers exist on single Wi-Fi attacks and on the strengths and weaknesses of security protocols and Wi-Fi standards. This thesis aims to provide a detailed overview of Wi-Fi attacks and corresponding mitigation techniques against IoT Wi-Fi networks in a comprehensive taxonomy. In addition, tools are mentioned for each Wi-Fi attack that allow, e.g., professionals or network administrators to test the chosen Wi-Fi attacks against their IoT networks. Four attack categories were defined: Man-in-the-Middle (MitM), Key-recovery, Traffic Decryption, and Denial of Service (DoS) attacks. A set of Wi-Fi attack features was defined and described. The features included the security protocol and security mode, the layer (physical or data-link) that an attack targets, and the network component interaction required for a Wi-Fi attack to execute successfully. In total, 20 Wi-Fi attacks were selected, based on a set of criteria, for their relevance to IoT in Wi-Fi networks. Additionally, each Wi-Fi attack includes a description of the possible consequences an adversary can achieve, such as eavesdropping, data theft, key recovery, and many more. Flow charts were also added to give the reader a visual perspective on how an attack works. As a result, tables were created for each relevant security protocol and the Open Systems Interconnection (OSI) layers to create an overview of mitigations and available tools for each attack. Furthermore, it is discussed how WPA3 solves some shortcomings of WPA2 but has vulnerabilities of its own that lie in the design of the 4-way and Dragonfly handshakes themselves. In conclusion, development and proper vulnerability tests of the Wi-Fi standards and security protocols have to be conducted to improve them and reduce the possibility of current and upcoming vulnerabilities.
We present a machine learning based method for noise classification using a low-power and inexpensive IoT unit. We use Mel-frequency cepstral coefficients for audio feature extraction and supervised classification algorithms (that is, support vector machine and k-nearest neighbors) for noise classification. We evaluate our approach experimentally with a dataset of about 3000 sound samples grouped in eight sound classes (such as car horn, jackhammer, or street music). We explore the parameter space of the support vector machine and k-nearest neighbors algorithms to estimate the optimal parameter values for classification of sound samples in the dataset under study. We achieve a noise classification accuracy in the range 85% -- 100%. Training and testing of our k-nearest neighbors (k = 1) implementation on a Raspberry Pi Zero W takes less than a second for a dataset with features of more than 3000 sound samples.
Noise is any undesired environmental sound. A sound at the same dB level may be perceived as annoying noise or as pleasant music. Therefore, it is necessary to go beyond the state-of-the-art approaches that measure only the dB level and also identify the type of noise. In this paper, we present a machine learning based method for urban noise identification using an inexpensive IoT unit. We use Mel-frequency cepstral coefficients for audio feature extraction and supervised classification algorithms (that is, support vector machine, k-nearest neighbors, bootstrap aggregation, and random forest) for noise classification. We evaluate our approach experimentally with a data-set of about 3000 sound samples grouped in eight sound classes (such as car horn, jackhammer, or street music). We explore the parameter space of the four algorithms to estimate the optimal parameter values for classification of sound samples in the data-set under study. We achieve a noise classification accuracy in the range 88% - 94%.
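The MFCC-plus-classifier pipeline used in the two noise classification studies above can be illustrated with the short Python sketch below. The choice of librosa for feature extraction and scikit-learn for the k-nearest neighbors classifier is an assumption for illustration; the papers do not name specific libraries, and the feature summarization (mean MFCC vector per clip) is a simplification.

import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def extract_mfcc(path, n_mfcc=13):
    # Load an audio clip and summarize it as the mean MFCC vector over time.
    signal, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

def train_noise_classifier(paths, labels):
    # paths: audio file paths; labels: sound classes such as "car_horn" or "street_music".
    X = np.array([extract_mfcc(p) for p in paths])
    X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2, random_state=0)
    clf = KNeighborsClassifier(n_neighbors=1)  # k = 1, as in the Raspberry Pi Zero W experiment
    clf.fit(X_train, y_train)
    return clf, clf.score(X_test, y_test)

Swapping KNeighborsClassifier for an SVM, bagging, or random forest classifier from scikit-learn reproduces the other algorithm variants mentioned in the abstracts.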
Security testing is a widely applied measure to evaluate and improve software security by identifying vulnerabilities and ensuring security requirements related to properties like confidentiality, integrity, and availability. A confidentiality policy guarantees that attackers will not be able to expose secret information: in the context of software programs, the output that attackers observe will not carry any information about the confidential input. Integrity is the dual of confidentiality, i.e., unauthorized and untrusted data provided to the system will not affect or modify the system’s data. Availability means that systems must be available within a reasonable time. Information flow control is a mechanism to enforce confidentiality and integrity. An accurate security assessment is critical in an age when the open nature of modern software-based systems makes them vulnerable to exploitation. Security testing that verifies and validates software systems is prone to false positives, false negatives, and other such errors, requiring more resilient tools that provide an efficient way to evaluate the threats and vulnerabilities of a given system. The newly developed tool Reax therefore controls information flow in Java programs by synthesizing conditions under which a method or an application is secure. Reax is a command-line application, which makes it hard for developers to use. The primary goal of this project is to integrate Reax into Java IDEs by introducing a plugin that performs an advanced analysis of security flaws. Specifically, the graphical plugin detects and reacts directly to security flaws within the Standard Widget Toolkit (SWT) environment. As a second important goal, the project proposes a new algorithm to find the root cause of security violations through a graphical interface. As a result, developers will be able to detect security violations and fix their code during the implementation phase, reducing costs.
A major challenge in modelling and simulation is the need to combine expertise in both software technologies and a given scientific domain. When High-Performance Computing (HPC) is required to solve a scientific problem, software development becomes a problematic issue. Considering the complexity of the software for HPC, it is useful to identify programming languages that can be used to alleviate this issue. Because the existing literature on the topic of HPC is very dispersed, we performed a Systematic Mapping Study (SMS) in the context of the European COST Action cHiPSet. This literature study maps characteristics of various programming languages for data-intensive HPC applications, including category, typical user profiles, effectiveness, and type of articles. We organised the SMS in two phases. In the first phase, relevant articles were identified employing an automated keyword-based search in eight digital libraries. This led to an initial sample of 420 papers, which was then narrowed down in a second phase by human inspection of article abstracts, titles and keywords to 152 relevant articles published in the period 2006–2018. The analysis of these articles enabled us to identify 26 programming languages referred to in 33 of the relevant articles. We compared the outcome of the mapping study with the results of our questionnaire-based survey that involved 57 HPC experts. The mapping study and the survey revealed that the desired features of programming languages for data-intensive HPC applications are portability, performance and usability. Furthermore, we observed that the majority of the programming languages used in the context of data-intensive HPC applications are text-based general-purpose programming languages. Typically these have a steep learning curve, which makes them difficult to adopt. We believe that the outcome of this study will inspire future research and development in programming languages for data-intensive HPC applications.
Mobile devices and mobile computing have made tremendous advances and become ubiquitous in the last few years. As a result, the landscape has become seriously fragmented, which brings many challenges for the mobile development process. Whilst the native approach to mobile development is still the predominant way to develop for a particular mobile platform, there has recently been a shift towards cross-platform mobile development as well. In this paper, we have performed a survey of the literature to identify the trends in cross-platform mobile development over the last few years. Based on the results of the survey, we argue that the web-based approach, and in particular the hybrid approach, to mobile development serves cross-platform development best. The results of this work indicate that even though cross-platform tools are not fully mature, they show great potential. Thus we consider that cross-platform development offers great opportunities for rapid development of high-fidelity prototypes of mobile applications.
Startups need investments in order to scale their business. The value of such startups, especially software-based startups, is difficult to evaluate because there is no physical value that can be assessed. The company DueDive has built experience in due diligence by conducting many interviews in this area, which form the basis for the due diligence. These interviews are time consuming and require a lot of domain knowledge, which makes them very expensive. This thesis evaluated different machine learning algorithms to integrate into software that supports this interview process. The goal is to shorten the interview duration and to lower the know-how required of the interviewer by using suggestions from the AI. The software uses completed interview sessions to provide enhanced suggestions through artificial intelligence. The proposed solution uses basket analysis and imputation to analyze the collected data. The result is a topic-independent software system that is used to administer and carry out interviews with the help of AI. The results are validated and evaluated in a case study using a generic, self-defined interview.
The digitization of industry has drastically changed the competitive landscape by requiring a higher degree of specialization and shorter time to delivery, which affect the design properties a software platform should satisfy. The platform architecture must sustain continuous and rapid change to the organizational architecture, which in turn is affected by external forces: i.e., forces drive the velocity of change. In this paper, we explore the effects of digitization, characterize internal and external forces that impact business strategies and trigger the continuous realignment of the platform, and outline a research agenda to mitigate the effects.
Designing an application from top to bottom is a challenge for any software architect. Designing an application to be deployed in the cloud adds extra complexity and a variety of questions to the task. One of these questions is how to deploy the application. The most popular choices at this time are either Docker containers or serverless functions. This report presents a comparison between the two deployment methods based on cost and performance. The comparison did not yield a conclusive winner, but it did offer some key pointers to help with the decision. Docker containers offer a standardized deployment method for a low price and with good performance. Before choosing Docker, the intended market needs to be evaluated, given that the price increases with each region Docker needs to serve. Serverless functions offer auto-scaling and easy global deployments but suffer from higher complexity, slower performance, and an uncertain monthly price tag.
This study and thesis has focused on making visible the interaction effect that arises when the interface of an information system is stripped down. The case study in this thesis is based on a web-based writing tool called Writer. The purpose of the study has been to clarify the interaction effect of personal action in an interactive usage situation and how this in turn affects the user experience. The theoretical area and framework of the study is Actability (Handlingsbarhet), with a focus on the interactive action level in connection with the basic interaction loop. The basic interaction loop consists of four steps in which a user asks, acts, receives an answer, and then evaluates. In connection with the theory, five interaction criteria have been incorporated; these are: Clear action repertoire, Known & comprehensible vocabulary, Action transparency, Clear feedback, and Changeability. These five criteria have formed the thematization of the empirical work.
Through both purposive sampling and snowball sampling, six journalists/writers were involved in the data collection. In the method work, user tests of the writing tool were conducted with tasks designed in accordance with the five interaction criteria. After this user test, all informants evaluated the experience and made thirteen assessments based on a user-experience questionnaire asking about concepts such as easy to learn/not easy to learn and efficient/inefficient. After these two steps, semi-structured interviews were carried out with fifteen prepared questions, also based on the five interaction criteria. To analyze the collected data, content analysis was applied, resulting in 25 categories with associated descriptions and quotes. Based on the content analysis and its results, the categories were analyzed against the D.EU.PS model and its 18 classes based on their definitions. This in turn resulted in ten interaction effects based on seven classes. The ten effects are: Security, Previous references, Terminology, Recognition, Understanding of sequence/action, Perception, Simplicity, Efficiency, Accessibility, and Grouping (proximity). From the perspective of personal action, these effects are the result of a stripped-down writing tool that contributes to a good user experience.
The security of critical infrastructures is of paramount importance nowadays due to the growing complexity of components and applications. This paper collects the contributions to the industry dissemination session within the 14th International Conference on Critical Information Infrastructures Security (CRITIS 2019). As such, it provides an overview of recent practical experience reports in the field of critical infrastructure protection (CIP), involving major industry players. The set of cases reported in this paper includes the usage of serious gaming for training infrastructure operators, integrated safety and security management in the chemical/process industry, risks related to the cyber-economy for energy suppliers, smart troubleshooting in the Internet of Things (IoT), as well as intrusion detection in power distribution Supervisory Control And Data Acquisition (SCADA). The session has been organized to stimulate an open scientific discussion about industry challenges, open issues and future opportunities in CIP research.
Current trends in the market of Web-enabled devices are moving the focus from desktop web pages to pages optimised for a range of other devices such as smartphones or tablets. Within this thesis an approach is introduced that is able to adapt and automatically transform web pages, and even the web application's logic flow, into a new kind of representation for a specific target group. To this end, a general process is defined that describes the various phases that have to be carried out to transform or repackage a website. It serves as the basis for the solution that was built as part of this thesis and incorporates state-of-the-art concepts and methods from various fields of Web Science. The implemented artefacts demonstrate what an appropriate architecture looks like and what additional possibilities open up.
This bachelor thesis is a study conducted to examine game mechanics in a selected training application. The purpose of the study is to create an increased understanding of why people experience motivation-enhancing effects with the help of the selected training application. This is done by studying users of the practical example Nike+ to find out which functions they consider to have given them motivation-enhancing effects, and thereby identify which game mechanics these involve.
The study uses a qualitative approach and collects data through interviews with users of the study example Nike+. The interview questions are based on theory from the area of Gamification as well as previous research on the subject.
The results show that several different game mechanics are implemented in Nike+ that have a positive impact on the users. The results support that the game mechanics the informants were positive towards have helped make exercise a more enjoyable form of training and have increased the informants' training frequency.
This study investigates the creation of a web platform for managing food waste, with an emphasis on web design principles, client-server architecture, database storage options, and data visualization methods. The problem investigated involves managing and visualizing food waste data efficiently to facilitate decision-making and waste reduction. The goal is to address the critical worldwide issue of food waste and the necessity for effective IT solutions to combat it. To address the problem of inefficient food waste data management and visualization, research was conducted on various technical aspects, followed by the implementation of frontend and backend frameworks. This research resulted in the development of a user-friendly interface, comprehensive data visualization capabilities, and robust database management.
Industrial communication networks are common in a number of manufacturing organisations. The high availability of these networks is crucial for smooth plant operations. Therefore, local and remote diagnostics of these networks is of primary importance in determining issues relating to plant reliability and availability. Condition Monitoring (CM) techniques, when connected to a network, provide a diagnostic system for remote monitoring of manufacturing equipment. The system monitors the health of the network and the equipment and is therefore able to predict performance. However, this leads to the collection, storage and analysis of large amounts of data, which must provide value. These large data sets are commonly referred to as Big Data. This paper presents a general concept of the use of condition monitoring and big data systems to show how they complement each other to provide valuable data to enhance manufacturing competitiveness.
Maintenance is crucial to manufacturing operations. In many organisations, the production equipment represents the majority of invested capital, and deterioration of these facilities and equipment increases production costs and reduces product quality. Over recent years the importance of maintenance, and therefore maintenance management, within manufacturing organisations has grown. The maintenance function has become an increasingly important and complex activity, particularly as automation increases. The opportunity exists for many organisations to benefit substantially through improvements to their competitiveness and profitability by adopting a new approach to maintenance management. Several tools and technologies including Condition Based Maintenance (CBM), Reliability Centred Maintenance (RCM) and, more recently, e-maintenance have developed under the heading of Advanced Maintenance Strategies. However, the adoption of advanced maintenance strategies and their potential benefits are usually demonstrated in large organisations. Unfortunately, the majority of organisations are constrained by a lack of knowledge and understanding of the requirements which need to be in place before adopting an advanced maintenance strategy. These are usually classified as Small and Medium Sized Enterprises (SMEs). The research strategy is based on ‘empirical iterations’ using survey secondary data, information from experts’ interviews and multiple case studies. The results show that there is a set of recommendations which strongly influence the implementation of an Advanced Maintenance Strategy (AMS) within a Small to Medium Enterprise (SME). Organisations require a structured and integrative approach in order to take advantage of a new approach to maintenance management. This paper proposes recommendations for integrating an AMS into the organisation and provides evidence of a successful implementation.
Despite the fact that some practitioners and researchers report success stories on Software Product Line (SPL) adaptation, the evolution of SPLs remains challenging. In our research we study a specific aspect of SPL adaptation, namely the updating of deployed products. Our particular focus is on the correct execution of updates and minimal interruption of services during the updates. The update process has two stages. First, the products affected by the evolution must be identified. We call this stage SPL-wide change impact analysis. In the second stage, each of the affected products has to be updated. In our previous work we have addressed the second stage of the update process. In this paper we report on our early results for the first stage: change impact analysis. We discuss how existing variability models can be employed to support automated identification of the products that require an update. The discussion is illustrated with examples from an educational SPL that we are developing at K.U. Leuven.
This thesis introduces the idea of visualizing complex data using a timeline for problem solving and analysis of a huge database. The database contains information about vehicles, which continuously send information about driving behavior, current location, driver activities, etc. Data complexity can be addressed by data visualization, where the user can see this complex data in the abstract form of a timeline visualization. Visualizing complex data using a timeline might help to monitor and track different time-dependent activities. We developed a web application to monitor and track monthly, weekly, and daily activities, which helps in decision making and understanding complex data.
Numerous recommendation approaches are in use today. However, comparing their effectiveness is a challenging task because evaluation results are rarely reproducible. In this article, we examine the challenge of reproducibility in recommender-system research. We conduct experiments using Plista’s news recommender system, and Docear’s research-paper recommender system. The experiments show that there are large discrepancies in the effectiveness of identical recommendation approaches in only slightly different scenarios, as well as large discrepancies for slightly different approaches in identical scenarios. For example, in one news-recommendation scenario, the performance of a content-based filtering approach was twice as high as the second-best approach, while in another scenario the same content-based filtering approach was the worst performing approach. We found several determinants that may contribute to the large discrepancies observed in recommendation effectiveness. Determinants we examined include user characteristics (gender and age), datasets, weighting schemes, the time at which recommendations were shown, and user-model size. Some of the determinants have interdependencies. For instance, the optimal size of an algorithm’s user model depended on users’ age. Since minor variations in approaches and scenarios can lead to significant changes in a recommendation approach’s performance, ensuring reproducibility of experimental results is difficult. We discuss these findings and conclude that to ensure reproducibility, the recommender-system community needs to (1) survey other research fields and learn from them, (2) find a common understanding of reproducibility, (3) identify and understand the determinants that affect reproducibility, (4) conduct more comprehensive experiments, (5) modernize publication practices, (6) foster the development and use of recommendation frameworks, and (7) establish best-practice guidelines for recommender-systems research.
Every day most people use applications and services that utilise machine learning in some way, without even knowing it. Examples of such applications and services are Google’s search engine, Netflix’s recommendations, and Spotify’s music tips. For machine learning to work it needs data, and often a large amount of it. Roughly 2.5 quintillion bytes of data are created every day in the modern information society. This huge amount of data can be utilised to make applications and systems smarter and automated. Time logging systems today are usually not smart, since users of these systems still must enter data manually. This bachelor thesis explores the possibility of applying machine learning to task logging systems to make them smarter and automated. The machine learning algorithm used to predict the user’s task is multiclass logistic regression, which is a categorical classifier. When a small amount of training data was used in the machine learning process, the task predictions had a success rate of about 91%.
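As a minimal illustration of multiclass logistic regression for task prediction, the Python sketch below trains a classifier on a handful of hypothetical log entries. The feature set (weekday, hour, project id) and the task labels are invented for illustration; the thesis does not list its features here, and scikit-learn is an assumed toolchain.

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical training data: [weekday, hour, project_id] -> task label
X = [[0, 9, 1], [0, 13, 2], [1, 9, 1], [1, 14, 3], [2, 10, 1], [2, 15, 2]]
y = ["standup", "development", "standup", "testing", "standup", "development"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
clf = LogisticRegression(max_iter=1000)  # handles multiple classes via a multinomial model
clf.fit(X_train, y_train)
print(clf.predict(X_test))  # predicted task labels for unseen log entries

In a real task logging system the training set would be the user's historical entries, and the predicted label would be offered as a pre-filled suggestion rather than logged automatically.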
IT (information technology) is widely used today to distribute and communicate information internally and externally. A good tool for information management is the organization's intranet. This work concerns usability in small SharePoint projects. Usability here covers the factors user-friendliness, customization, and user acceptance. Small SharePoint projects (under 300 hours) mean that fewer resources are available for applying usability.
The purpose of this work is to investigate SharePoint developers' opinions of and approaches to usability, and how usability can best be achieved in small projects with limited resources. A quantitative method was used to carry out a case study of three developers at a consulting company. The interviews were semi-structured.
The research concludes that user acceptance is decisive for an intranet to be perceived as usable by its users. User-friendliness in standard SharePoint is already relatively well defined and established. Customizing a system instead of using its built-in standards is often costly and resource-intensive. Developers should therefore focus on training, user participation, communication, and other factors that can increase acceptance among end users.
Adoption of location-based services (LBS) was for a long time below expectations, and most studies attribute this to the privacy concerns of users. However, many new LBS applications, particularly entertainment applications, are currently among the most downloaded applications for smartphones. Therefore, this research aims to find out whether privacy concerns still matter to users and to explore the role of privacy in the adoption of LBS entertainment applications. The adopted methodology is qualitative research, and data are collected through interviews and additional information from the smartphones of participants. Ten individuals among university students at Linnaeus University in Sweden were selected for this research, and this sample was chosen based on their experience with two selected LBS entertainment applications, Pokémon Go and Tinder. As a result, six themes have been recognized to answer the research questions. Low privacy concerns about location information, especially in entertainment applications, with a negligible effect on adoption, have been identified. However, the author of this research suggests that developers of LBS entertainment applications should take care to retain their credibility, because it can have an impact on the adoption of their LBS services.
This report aims to describe our bachelor degree project in computer engineering at Linnaeus University in Växjö. The project has been carried out on behalf of Danfoss, who approached us with an interest in making it possible to use Raspberry Pi as an internal and external research platform, compatible with their development environment PLUS+1 GUIDE. We were therefore given the task to develop support for Raspberry Pi in PLUS+1 GUIDE. This would enable use of PLUS+1 GUIDE software without the use of Danfoss hardware. This report describes the implementation of a Raspberry Pi support library which had to be designed to be compatible with the PLUS+1 GUIDE software. It also describes the creation of a custom Linux distribution using the Yocto Project, a comparison between existing solutions and a usability test on the PLUS+1 GUIDE software using the developed Raspberry Pi support library. The result of this work is fully functioning support for Raspberry Pi packaged as a plugin that, when installed in PLUS+1 GUIDE, allows creation of applications for this platform in the same manner as for their other control systems.
Inspection rounds are a major part of the work routine for operating personnel in the engine room of a ship. They are part of the preventive maintenance applied on board. Today, the most widespread method involves a ready-made round list that is printed on paper and then used by the operating personnel during the round. Based on this, a new method for inspection rounds at sea was investigated. The purpose of the work was to examine how a digital round is perceived by the engine crew. By letting the crew try a digital round created from their current round lists and then answer questions in a survey, an understanding could be formed of whether a digital round can be practically applied at sea, whether it can save working time, and whether the crew believes the method has a future. The results show that most of the test group see a future for a digital round and that it can facilitate the work if the method is developed further. With this digital round tool, however, the round took longer.
The Industrial Internet of Things (IIoT) lays out a new paradigm for the concept of Industry 4.0 and paves the way for a new industrial era. Nowadays smart machines and smart factories use machine learning/deep learning based models to incorporate intelligence. However, storing the data and communicating it to the cloud and end devices leads to issues in preserving privacy. In order to address this issue, Federated Learning (FL) technology is nowadays implemented in IIoT by researchers to provide safe, accurate, robust and unbiased models. Integrating FL in IIoT ensures that no local sensitive data is exchanged, as the distribution of learning models over the edge devices has become more common with FL. Therefore, only the encrypted notifications and parameters are communicated to the central server. In this paper, we provide a thorough overview of integrating FL with IIoT in terms of privacy, resource and data management. The survey starts by articulating IIoT characteristics and the fundamentals of distributed machine learning and FL. The motivation behind integrating IIoT and FL for achieving data privacy preservation and on-device learning is summarized. Then we discuss the potential of using machine learning (ML), deep learning (DL) and blockchain techniques for FL in secure IIoT. Further, we analyze and summarize several ways to handle heterogeneous and huge data. Comprehensive background on data and resource management is then presented, followed by applications of IIoT with FL in the automotive, robotics, agriculture, energy, and healthcare industries. Finally, we shed light on challenges, some possible solutions and potential directions for future research.
As the number of Internet of Things (IoT) devices in daily use increases, the inadequacy of cloud computing to provide necessary IoT-related features, such as low latency, geographic distribution and location awareness, is becoming more evident. Fog computing is introduced as a new computing paradigm in order to solve this problem by extending the cloud's storage and computing resources to the network edge. However, the introduction of this new paradigm is also confronted by various security threats and challenges, since the security practices that are implemented in cloud computing cannot be applied directly to this new architectural paradigm. To this end, various papers have been published in the context of fog computing security, in an effort to establish the best security practices towards the standardization of fog computing. In this thesis, we perform a systematic literature review of current research in order to provide a classification of the various security threats and challenges in fog computing. Furthermore, we present the solutions that have been proposed so far and which security challenges they address. Finally, we attempt to distinguish common aspects between the various proposals, evaluate current research on the subject and suggest directions for future research.
Many important scientific and engineering problems may be solved by combining multiple applications in the form of a Grid workflow. We consider that for the wide acceptance of Grid technology it is important that the user has the possibility to express requirements on Quality of Service (QoS) at workflow specification time. However, most of the existing workflow languages lack constructs for QoS specification. In this paper we present an approach for high level workflow specification that considers a comprehensive set of QoS requirements. Besides performance related QoS, it includes economical, legal and security aspects. For instance, for security or legal reasons the user may express the location affinity regarding Grid resources on which certain workflow tasks may be executed. Our QoS-aware workflow system provides support for the whole workflow life cycle from specification to execution. Workflow is specified graphically, in an intuitive manner, based on a standard visual modeling language. A set of QoS-aware service-oriented components is provided for workflow planning to support automatic constraint-based service negotiation and workflow optimization. For reducing the complexity of workflow planning, we introduce a QoS-aware workflow reduction technique. We illustrate our approach with a real-world workflow for maxillo facial surgery simulation.
The intensifying value of learning, competence, and knowledge motivates decisions to implement knowledge management systems (KMS) to capitalize on the potential benefits of facilitating knowledge sharing, collection, storage, and dissemination on a global scale. However, these systems frequently remain underutilized, and organizations encounter obstacles to achieving their intended outcomes. The case company experienced practical problems with a newly implemented KMS: the system was largely unused for a specific process. Therefore, this case study investigates the factors affecting KMS adoption and utilization for the technical training process by capturing the perspectives of the intended system users and management. A combination of KMS success factors and the Theory of Affordances was applied to generate knowledge about how these factors affected usage of the KMS. It was found that Management Involvement, Organizational Culture and Structure, Employee Commitment, Perceived Benefits, System Complexity, and Compatibility and Conformity influenced the users' KMS utilization outcomes. A conceptual framework was developed to show how these factors affected individuals' affordance processes.
An important issue to be addressed in transit security, in particular for driverless metros, is the assurance that a vehicle is empty before it returns to the depot. Customer specifications in recent tenders require that an automatic empty vehicle detector is provided. That improves system security since it prevents voluntary (e.g. thieves or graffiti makers) or involuntary (e.g. drunk or unconscious people) access of unauthorized people to the depot and possibly to other restricted areas. Without automatic systems, a manual inspection of the vehicle has to be performed, requiring considerable personnel effort and being prone to failure. To address the issue, we have developed a reliable empty vehicle detection system using video content analytics techniques and standard on-board cameras. The system can automatically check whether the vehicles have been cleared of passengers, thus supporting the security staff and central control operators in providing a higher level of security.
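One common building block of such video content analytics is foreground detection against a learned background model of the empty carriage. The OpenCV sketch below illustrates that idea only; it is not the system described above, and the camera source, thresholds and region sizes are hypothetical.

import cv2

def carriage_appears_empty(video_source=0, frames_to_check=100, min_foreground_area=2000):
    # Learn a background model and flag any sufficiently large foreground region
    # (e.g. a remaining passenger) in the on-board camera view.
    capture = cv2.VideoCapture(video_source)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    occupied = False
    for _ in range(frames_to_check):
        ok, frame = capture.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if any(cv2.contourArea(c) > min_foreground_area for c in contours):
            occupied = True  # a large moving region suggests the vehicle is not empty
            break
    capture.release()
    return not occupied

A production system would additionally need per-camera calibration, handling of lighting changes in depots and tunnels, and aggregation of decisions across all on-board cameras.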
This report addresses the area of data privacy, with a particular focus on detecting and anonymizing personally identifiable information (PII) in unstructured text. As digital data accumulation increases and the use of large datasets propels AI advancements, it has become crucial to protect sensitive information from closed-source programs whose data handling practices remain opaque. This paper explores efficient methods for identifying and anonymizing sensitive data, a key component in maintaining privacy and complying with the General Data Protection Regulation (GDPR). The study evaluates existing large language models (LLMs) and ways to prompt these models for specific purposes. It assesses their performance in accurately identifying various types of PII and replacing them with dummy data, while ensuring that the utility of the text is maintained. This contribution is significant in providing a solution adaptable to the Swedish language and format, thereby helping organizations effectively manage their own and their customers' data.
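A minimal sketch of the prompting approach is shown below. The prompt wording and the call_llm() placeholder are hypothetical; the report evaluates existing LLMs, but its exact prompts and model endpoints are not reproduced here.

PROMPT_TEMPLATE = (
    "Identify all personally identifiable information (names, personal numbers, addresses, "
    "phone numbers, e-mail addresses) in the following Swedish text and replace each item "
    "with a realistic dummy value, keeping the rest of the text unchanged:\n\n{text}"
)

def call_llm(prompt: str) -> str:
    # Placeholder: send the prompt to the chosen large language model and return its completion.
    raise NotImplementedError

def anonymize(text: str) -> str:
    return call_llm(PROMPT_TEMPLATE.format(text=text))

Evaluating such a pipeline typically means comparing the detected PII spans against a manually annotated reference and checking that the replaced text still reads naturally.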
This paper is about the current state of affairs when it comes to Information and Communication Technologies (ICTs) in Condition Monitoring (CM) and Maintenance. In addition, there are significant efforts to standardise the systems for CM and Maintenance under discussion. ICTs such as Web technologies are presented, since they are an integral part of the e-maintenance approach. The author goes through the latest developments in the area of Web technologies and the studies conducted by the World Wide Web Consortium (W3C). The paper emphasises the Semantic Web and its standardizing practices, and Web 2.0 and its social media technologies, which to date are absent from e-maintenance applications. It discusses applications in the domain of interest and how the latest ICTs, especially the developments in Web technologies and cloud computing, can make an impact and might affect future e-maintenance applications.
The current paper briefly reviews the eHealth domain, especially fall detection and prevention features, in connection with developments in ICTs. Timely data signals providing identification of a probable fall at an early stage, as well as its specifics, can prevent serious injuries. This is crucial for elderly people living at home alone, since it can affect their independent living. Therefore, the specific and contextual characteristics of several related factors are essential to understand in order to be able to diminish or remove the risk of falls among the elderly at risk. The current paper presents research in progress and its results in the FRONT-VL project, part of Celtic-Plus. The paper highlights essential factors to consider when developing and implementing a semantic database model for purposes such as fall prevention.
The objective of this paper is to provide a profound insight into important characteristics with regard to the integration of mobile technology into the area of industrial maintenance. The aspects highlighted in the paper cover, for instance, acceptance models and best practices, as well as the financial impact of mobile technologies in the domain of interest. Moreover, the paper pinpoints some important characteristics that impede the full integration of these technologies into this area. In addition, the economic benefits and the current situation with regard to mobile integration are highlighted, and relevant literature and theories are analysed and discussed. Furthermore, the industrial integration of mobile technologies is outlined. The results indicate that there is a tendency to place an emphasis on the technical aspects of mobile devices rather than on understanding the organisational context, which affects the integration of mobile technologies into this domain.
The work provides a thorough understanding of best practices and aspects to consider for the successful integration of mobile technologies in this area. This is because the use of mobile devices enables maintenance staff to gain access to information and services pertaining to the task in hand in real time, as long as some form of network access is provided. Users of these devices thus become mobile actors who dynamically interact with the physical environment in the workplace and support information systems, leading to a faster response to events and improved organizational performance.
The development of an e-maintenance system, i.e. a mobile maintenance decision support system based on web and mobile technologies, is reported. The problem of the lack of experts to troubleshoot a fault in a machine is a long-standing one. It has led to the application of artificial intelligence and, later, distributed artificial intelligence for machine condition monitoring and diagnosis. Recently, web technology, along with wireless communication, has emerged as a potential tool in maintenance, facilitating acquisition of the desired information by the relevant personnel at any time, wherever they may be. It has been found that with the emergence of the new Information and Communications Technology (ICT), new concepts have started to appear, such as e-manufacturing, e-business and e-maintenance. The paper begins by showing the web and mobile architecture and then the ICT tools that are used for communication among the different layers of the system and the client machines. This is followed by a demonstration of the use of the system with a simulated faulty bearing signal. In addition, it is explained that a CMMS, i.e. a mobile work management system, has been tested with successful results. Finally, it is shown how a mobile emulator was used to perfect the system for different requirements and how this was then tested on a personal digital assistant (PDA).
Product lifecycle simulation (PLCS) has been given ever more attention as manufacturers compete on the quality and lifecycle costs of their products. In particular, the need of companies to gain a strong position in providing services for their products, and thus to make themselves less vulnerable to changes in the market, has led to high interest in PLCS. A short summary of the current status of PLCS is presented, especially related to the poor integration of data in product lifecycle management systems and in PLCS. The potential of applying semantic data management to solve these problems is thoroughly discussed in the light of recent developments. A basic roadmap for how the above-described problems could be tackled with open software solutions is presented. Finally, this paper reviews emergent Web technologies such as the Semantic Web framework and Web services.
The emergence of new Information and Communication Technologies, such as the Internet of Things, big data and data analytics, provides opportunities as well as challenges for the domain of interest, and this paper discusses their importance in condition monitoring and maintenance. In addition, the Open System Architecture for Condition-Based Maintenance (OSA-CBM) and Predictive Health Monitoring methods are reviewed. Thereafter, the paper uses bearing fault data from a simulation model with the aim of producing vibration signals in which different parameters of the model can be controlled. In connection with this, a prototype was developed and tested on simulated rolling element bearing fault signals with appropriate fault diagnostics and analytics. The prototype was developed taking into consideration recommended standards (e.g., the OSA-CBM). In addition, the authors discuss the possibilities of incorporating the developed prototype into the Arrowhead framework, which would bring possibilities to analyze geographically dispersed equipment, in this case especially its rolling element bearings; to support servitization of Predictive Health Monitoring methods and large-scale interoperability; and to facilitate the appearance of novel actors in the area and thus competition.
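A common way to produce such controllable simulated bearing fault vibration signals is to superimpose periodic impacts, each exciting an exponentially decaying structural resonance, on background noise. The Python sketch below illustrates that idea only; all parameter values are hypothetical, and it does not reproduce the paper's simulation model.

import numpy as np

def bearing_fault_signal(fs=20000, duration=1.0, fault_freq=100.0,
                         resonance_freq=3000.0, damping=800.0, noise_std=0.05):
    # fs: sampling rate [Hz]; fault_freq: impact repetition rate [Hz];
    # resonance_freq and damping shape the decaying oscillation excited by each impact.
    t = np.arange(0, duration, 1.0 / fs)
    signal = np.random.normal(0.0, noise_std, t.size)  # background noise
    for t0 in np.arange(0, duration, 1.0 / fault_freq):
        idx = t >= t0
        tau = t[idx] - t0
        signal[idx] += np.exp(-damping * tau) * np.sin(2 * np.pi * resonance_freq * tau)
    return t, signal

t, x = bearing_fault_signal()
# An envelope spectrum of x would show the fault frequency (here 100 Hz) and its harmonics,
# which is what a fault diagnostic prototype would look for.

Varying fault_freq, damping or noise_std mimics different fault types, bearing geometries and operating conditions, which is the kind of parameter control the simulation model is used for.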
The paper reviews performance measurement in the domain of interest. Important data in asset management are further discussed. The importance and characteristics of today's ICT capabilities are also mentioned in the paper. The role of new concepts such as big data and data mining analytical technologies in managing performance measurements in asset management is discussed in detail. The authors consequently suggest the use of a modified Balanced Scorecard methodology highlighting both quantitative and qualitative aspects, which is crucial for optimal use of the big data approach and technologies.
Due to the increasingly fast technological advancement of the ICT world, 'hi-tech' industries feel a growing need to open up to the external world of research and innovation. The usage of external innovation sources allows overcoming the limits of internal resources in terms of capacities, skills and creativity. With respect to the traditional concept of "Closed Innovation", that is, innovation constrained within internal R&D departments, the "Open Innovation" paradigm leverages tools that enable importing external resources, thus boosting the quality and quantity of innovative technological solutions. In this paper the basic concepts and the possible "Open Innovation" applications will be presented, starting from the introduction of the paradigm as coined by Henry William Chesbrough in 2003. Furthermore, methodologies and computer tools will be described that are widely adopted to apply the paradigm in industrial settings, as well as the possible barriers to its implementation. Lastly, given the importance of universities, research centers and other companies as external sources for Open Innovation, some pointers will be provided on the selection process of technology innovation partners.
Contemporary application domains make the vision of applications built as a dynamic and opportunistic assembly of autonomous and independent resources more and more appealing. However, the adoption of such a paradigm is challenged by: (i) the openness and scalability needs of the operating environment, which rule out approaches based on centralized architectures, and (ii) the increasing concern for sustainability issues, which makes the goal of reducing the application energy footprint particularly relevant, in addition to QoS constraints. In this context, we contribute by proposing a decentralized architecture to build a fully functional assembly of distributed services that is able to optimize its energy consumption while also paying attention to issues concerning the delivered quality of service. We suggest suitable indexes to measure the energy efficiency of the resulting assembly from different perspectives, and present the results of extensive simulation experiments to assess the effectiveness of our approach.