Medicine’s ability to quickly respond to challenges raises questions from researchers, practitioners, and society as a whole. Our task in this study was to identify key and atypical current factors influencing the development of medicine and to predict the development of medicine in the short, medium, and long term. To implement our study, we selected 22 medical experts and applied the three-level Delphi method. The current trends caused by COVID-19 have a short-term impact, but they will launch other drivers that will transform the healthcare industry. Well-being technologies, data-informed personalization, and climate change will become key drivers for the development of medicine over the period of 1–50 years. Expert opinion is divided about the future of mass availability of advanced medical treatment and sustainable development of healthcare.
This qualitative study aims to understand sustainability’s role in responsible consumers’ Online Customer Experiences (OCEs). In this study, we focus on female fashion shoppers and study three dimensions of their OCE: cognitive, affective, and social. Although online shopping and responsible consumer behaviour have increased tremendously, sustainability’s role in OCE has not been studied before from the customer’s perspective. The data consists of nine semi-structured interviews with Finnish female self-proclaimed responsible consumers and is analyzed with qualitative content analysis. The findings show that sustainability issues are present in all OCE dimensions, which are also all interconnected. In short, we find that OCE’s cognitive dimension includes customers’ evaluation of the online store’s social and environmental sustainability as well as the product’s sustainability, necessity, and longevity. The affective dimension of OCE includes a wide range of feelings arising from perceived sustainability and one’s consumption choices. The social dimension includes one’s self-presentation, social channels, and the socio-technical implementation of online stores and their social features. The findings are beneficial for online store providers and academics interested in studying sustainability and OCE from the information systems perspective.
The purpose of this Master's thesis is to examine the characteristics of different broadband technologies and to compare them against the perspectives of users and network builders. How do users respond to the development of the technology, and what are they willing to pay for it? Are their views aligned with those of the builders, or is the technology ahead of the users? This thesis sought answers to these questions by interviewing both parties and comparing their responses. Broadband connections are available to almost everyone today. Most connections are implemented with copper technologies, of which ADSL is the most common alongside cable-TV solutions. In sparsely populated or otherwise hard-to-reach areas, broadband connections are implemented with wireless solutions using WiMAX or @450 technologies. The criterion for a broadband connection used to be a speed of 256 kbit/s, but today the users' average has risen to 2 Mbit/s. The speeds affect which applications can be used. Today, a wide variety of applications and communication methods are available over the Internet. Requirements for broadband connections vary; some demand real-time performance and high speed, while others are content with less. Common to all connections, however, is that their usage requirements have grown continuously. The future has been charted from the perspective of the technologies' potential development, as well as how broadband technologies are advancing elsewhere in the world. National needs and resources add their own nuance to this development.
Embedded systems, as opposed to traditional computers, bring an incredible diversity. The number of devices manufactured is constantly increasing, and each runs dedicated software, commonly known as firmware. Full firmware images are often delivered as multiple releases, correcting bugs and vulnerabilities or adding new features. Unfortunately, there is no centralized or standardized firmware distribution mechanism. It is therefore difficult to track which vendor or device a firmware package belongs to, or to identify which firmware version is used in deployed embedded devices. At the same time, discovering devices that run vulnerable firmware packages on public and private networks is crucial to the security of those networks. In this paper, we address these problems with two different, yet complementary approaches: firmware classification and embedded web interface fingerprinting. We use supervised machine learning on a database subset of real-world firmware files. For this, we first tell firmware images apart from other kinds of files, and then we classify firmware images per vendor or device type. Next, we fingerprint embedded web interfaces of both physical and emulated devices. This allows recognition of web-enabled devices connected to the network. In some cases, this complementary approach makes it possible to logically link web-enabled online devices with the corresponding firmware package running on them. Finally, we test the firmware classification approach on 215 images with an accuracy of 93.5%, and the device fingerprinting approach on 31 web interfaces with 89.4% accuracy.
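The abstract does not state which features its firmware classifier uses; purely as a hedged sketch, one feature commonly used when telling firmware images apart from other file types is byte-level Shannon entropy, since compressed or encrypted firmware sections look near-random while text and padding do not:

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0).

    Compressed/encrypted firmware sections tend toward 8.0, while
    plain text or zero padding scores much lower.
    """
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Uniformly distributed bytes vs. constant padding:
assert byte_entropy(bytes(range(256))) == 8.0
assert byte_entropy(b"\x00" * 1024) == 0.0
```

In a full pipeline, such per-block entropy values (alongside sizes, magic bytes, and string statistics) would form the feature vector fed to a supervised classifier.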
To provide a diverse comprehension of teachers' TPACK (Technological, Pedagogical, and Content Knowledge) and how TPACK is reflected in practice, this study examined teacher educators' (TEs') conceptions of technology integration. Specifically, the main objective of the study was to investigate the factors influencing Nigerian teacher educators' technology integration, using a self-completion survey administered to Nigerian teacher educators from three schools in the southern region of Nigeria. We utilized the partial least squares structural equation modeling (PLS-SEM) approach for the data analysis. Two frameworks—TPACK and the Second Information Technology in Education Study (SITES)—guided the scale development. The results indicated that three constructs (perceived technological knowledge, teachers' knowledge [excluding technology], and perceived knowledge for integrating technology) directly influenced the TEs' technology integration, while two others (information and communication technology [ICT] pedagogical practices and perceived effect on students) did not. Among the teachers' characteristics, teaching experience and class size were found to be statistically associated with their technology integration. The results of this study are beneficial for developing professional training that helps teachers integrate technology, specifically by developing their ICT pedagogical practices. Through such training, teachers could learn how to align their perceived effect of teaching with technology.
Today, predicting the number of tugs required to assist in a towing operation many days in advance is difficult. Towing operations, being a complicated process, are prone to human errors and conflicts, which can have severe financial consequences for all parties involved. In this thesis, a method for extracting port tugboat operations for incoming and outgoing vessels is proposed. Using the obtained tugboat operations dataset, a machine learning model is built to predict the number of tugs required to assist in a towing operation. The data used is a year of historical Baltic Sea AIS data and weather data from stations near the two analysis ports. From a performance standpoint, the proposed approach was a success: the method for extracting towing operations detected the vast majority of towing operations within the analysis area. The obtained tugboat operations dataset was then used during the model construction phase. The obtained models are port-specific: one achieved an overall accuracy of 87.0%, while the other achieved an accuracy of 91.5%. The results demonstrated that it is possible to develop a viable predictive tool for tugboat operations. When deployed, the proposed method will enable port and tugboat operators to make faster and more efficient decisions, resulting in increased operational efficiency in the port area.
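The abstract does not detail its extraction rules; as a hedged illustration of one building block such a method needs, the following computes great-circle distance between a tug's and a vessel's AIS positions and applies a proximity test (the 500 m threshold and function names are assumptions for illustration only):

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two AIS positions (degrees)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def tug_is_assisting(tug_pos, vessel_pos, threshold_m=500):
    """Crude proximity test: is the tug within threshold_m of the vessel?"""
    return haversine_m(*tug_pos, *vessel_pos) <= threshold_m
```

A real extraction method would combine such proximity checks over time with speed and heading information before labeling a track segment as a towing operation.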
Being able to critically evaluate the reliability of sources is an important skill, and a part of digital literacy competences and internet reading. Conducting research on the matter and teaching the skills related to internet reading in a digital environment require dedicated software designed for these purposes. This thesis is associated with a project where one of the objectives is to design and implement a new web application, StudentNet, where pupils can practise and improve their skills in internet reading by completing assignments. Internet reading is a goal-directed activity where the reader a) searches the Internet for information to answer set questions, b) evaluates the reliability of the found information, and c) composes a synthesis based on several online sources. The basis of StudentNet is a web application called Neurone, whose main purpose is research and measurement rather than education, and which provides a linear user flow. In contrast, StudentNet should provide a nonlinear model for the application’s user flow. This thesis studies how this nonlinear model works in practice, especially when the users are children, and what specific things should be considered when developing software for educational purposes. Several practical experiments were organised during the project. In these experiments, each participant completed an assignment in StudentNet and answered a set of statements in an end questionnaire. In addition, log data of users’ interactions in the application was recorded. The data collected from the questionnaire was analysed and utilised to identify what could be improved and to gather ideas for developing new features as well as improving existing ones. In addition, the gathered feedback was used to answer the research question concerning the application’s nonlinear model.
It was found that the model works very well for most pupils, but some individuals would need additional support in the application to ensure a good user experience. The results also indicate that StudentNet was perceived as useful and has good potential for further development and for use as a complementary learning tool.
In recent years, breast cancer has accounted for the largest proportion of newly diagnosed cancers in women. Early diagnosis of breast cancer can improve treatment outcomes and reduce mortality. Mammography, being convenient and reliable, is the most commonly used method for breast cancer screening. However, manual examinations are limited by the cost and experience of radiologists, which introduces a high false-positive rate and misdiagnoses. Therefore, a high-performance computer-aided diagnosis (CAD) system is significant for lesion detection and cancer diagnosis. Traditional CAD systems for cancer diagnosis require a large number of manually selected features and retain a high false-positive rate. Methods based on deep learning can automatically extract image features through the network, but their performance is limited by multicenter data biases, the complexity of lesion features, and the high cost of annotations. It is therefore necessary to propose a CAD system, optimized for the above problems, that improves lesion detection and cancer diagnosis. This thesis aims to utilize deep learning methods to improve CAD performance and the effectiveness of lesion detection and cancer diagnosis. Starting from the detection of multiple lesion types using deep learning methods, based on full consideration of the characteristics of mammography, this thesis explores a microcalcification detection method based on multiscale feature fusion and a mass detection method based on multi-view enhancing. Then, a classification method based on multi-instance learning is developed, which integrates the detection results from the above methods to realize precise lesion detection and cancer diagnosis in mammography.
For the detection of microcalcifications, a microcalcification detection network named MCDNet is proposed to overcome the problems of multicenter data biases, the low resolution of network inputs, and scale differences between microcalcifications. In MCDNet, Adaptive Image Adjustment mitigates the impact of multicenter biases and maximizes the effective input pixels. The proposed pyramid network with shortcut connections then ensures that the feature maps used for detection contain more precise localization and classification information about multiscale objects. Within this structure, a trainable Weighted Feature Fusion is proposed to improve detection performance for objects at both scales by learning the contribution of feature maps at different stages. The experiments show that MCDNet outperforms other methods in robustness and precision. At an average of one false positive per image, the recall rates for benign and malignant microcalcifications are 96.8% and 98.9%, respectively. MCDNet can effectively help radiologists detect microcalcifications in clinical applications. For the detection of breast masses, a weakly supervised multi-view enhancing mass detection network named MVMDNet is proposed to address the lack of lesion-level labels. MVMDNet can be trained on an image-level labeled dataset and extracts extra localization information by exploring the geometric relations between multi-view mammograms. In Multi-view Enhancing, Spatial Correlation Attention is proposed to extract corresponding location information between different views, while the Sigmoid Weighted Fusion module fuses diagnostic and auxiliary features to improve localization precision. A CAM-based Detection module is proposed to provide mass detections from the classification labels.
The results of experiments on both an in-house dataset and a public dataset, 0.92@0.52 and 0.96@0.77 (recall rate @ average number of false positives per image), demonstrate that MVMDNet achieves state-of-the-art performance among weakly supervised methods and has robust generalization ability to alleviate multicenter biases. For cancer diagnosis, a breast cancer classification network named CancerDNet, based on multi-instance learning, is proposed. CancerDNet addresses the complexity of lesion features in whole-image classification by utilizing the lesion detection results from the previous chapters. Whole Case Bag Learning is proposed to combine the features extracted from the four views, working like a radiologist to classify each case. Low-capacity Instance Learning and High-capacity Instance Learning successfully integrate the detections of multiple lesion types into CancerDNet, so that the model can fully consider lesions with complex features in the classification task. CancerDNet achieves AUCs of 0.907 and 0.925 on the in-house and public datasets, respectively, which is better than current methods. The results show that CancerDNet achieves high-performance cancer diagnosis. Across the above three parts, this thesis fully considers the characteristics of mammograms and proposes deep learning methods for lesion detection and cancer diagnosis. The results of experiments on in-house and public datasets show that the proposed methods achieve the state of the art in microcalcification detection, mass detection, and case-level cancer classification, and have strong multicenter generalization ability. The results also prove that the proposed methods can effectively assist radiologists in making diagnoses while saving labor costs.
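The operating points above are reported as recall at a fixed average number of false positives per image (FPPI), the usual FROC-style metric for lesion detection. As a hedged sketch (the data layout and names are illustrative, not the thesis's evaluation code), such a point can be read off a list of scored detections like this:

```python
def recall_at_fppi(detections, num_images, num_lesions, target_fppi=1.0):
    """Recall at the lowest score threshold where the false-positive
    rate is still at most target_fppi per image.

    detections: list of (score, is_true_positive) pairs, one per detection.
    """
    # Sweep thresholds from the most to the least confident detection.
    dets = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    best_recall = 0.0
    for score, is_tp in dets:
        if is_tp:
            tp += 1
        else:
            fp += 1
        if fp / num_images <= target_fppi:
            best_recall = tp / num_lesions
    return best_recall
```

For example, with two images, two lesions, and detections `[(0.9, True), (0.8, False), (0.7, True), (0.6, False)]`, the recall at 1.0 FPPI is 1.0, since both lesions are found before false positives exceed one per image.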
In our increasingly digitized world, the value of data is clear and proven, and many solutions and businesses have been developed to harness it. In particular, personal data (such as health-related data) is highly valuable, but it is also sensitive and could harm its owners if misused. In this context, data marketplaces could enhance the circulation of data and enable new businesses and solutions. However, in the case of personal data, marketplaces would necessarily have to comply with existing regulations, and they would also need to make users' privacy protection a priority. In particular, privacy protection has been only partially accomplished by existing data markets, as they themselves can gather information about the individuals connected with the datasets they handle. This thesis presents an architecture proposal for KRAKEN, a new data market that provides privacy guarantees at every step of the data exchange and analytics pipeline. This is accomplished through the use of multi-party computation, blockchain, and self-sovereign identity technologies. In addition, the thesis presents a privacy analysis of the entire system. The analysis indicated that KRAKEN is safe from possible data disclosures to buyers. On the other hand, some potential threats regarding the disclosure of data to the data market itself were identified, although these pose a low-priority risk given their rare chance of occurrence. Moreover, the author elaborates remarks on the decentralisation of the architecture and possible improvements to increase its security. These improvements are accompanied by the solutions identified in the accompanying paper, which proposes the adoption of a trust measure for the MPC nodes. The work on the paper and the thesis contributed to the personal growth of the author, specifically improving his knowledge of cryptography through learning new schemes such as group signatures, zero-knowledge proofs of knowledge, and multi-party computation.
He also improved his skills in writing academic papers and in working in a team of researchers leading a research area.
Cloud storage has become one of the most efficient and economical ways to store data over the web. Although most organizations have adopted cloud storage, there are numerous privacy and security concerns about cloud storage and collaboration. Furthermore, adopting public cloud storage may be costly for many enterprises. An open-source cloud storage solution for cloud file sharing is a possible alternative in this instance. Despite widespread awareness, there is limited information on system architecture, security measures, and overall throughput consequences to guide the selection of open-source cloud storage solutions. No comprehensive comparisons are available that evaluate open-source cloud storage solutions (specifically ownCloud, Nextcloud, and Seafile) and analyze the impact of platform selection. This thesis presents the concept of cloud storage and a comprehensive account of the three popular open-source solutions' features, architecture, security features, vulnerabilities, and other angles in detail. The goal of the study is to compare these cloud solutions so that users may better understand the various open-source cloud storage options and make more knowledgeable selections. The author has focused on four attributes: the features, architecture, security, and vulnerabilities of the three cloud storage solutions ("ownCloud," "Nextcloud," and "Seafile"), since most of the critical issues fall into one of these classifications. The findings show that, while the three services take slightly different approaches to confidentiality, integrity, and availability, they all achieve the same purpose. As a result of this research, the user will have a better understanding of the relevant factors and will be able to make a more informed decision on cloud storage options.
The GDPR is the current data protection regulation in Europe. A significant market demand has been created ever since the GDPR came into force, mostly because its reach extends beyond European borders whenever the data processed belongs to European citizens. The number of companies that require some type of regulatory or standards compliance is ever-increasing, and the need for cyber security and privacy specialists has never been greater. Moreover, the GDPR has inspired a series of similar regulations all over the world. This further increases market demand and makes the work of companies operating internationally more complicated and difficult to scale. The purpose of this thesis is to help consultancy companies automate their work by using semantic structures known as ontologies, thereby increasing productivity and reducing costs. Ontologies can store data and their semantics (meaning) in a machine-readable format. In this thesis, an ontology has been designed to help consultants generate the checklists (or runbooks) they are required to deliver to their clients. The ontology is designed to handle concepts such as security measures, company information, company architecture, data sensitivity, privacy mechanisms, the distinction between technical and organisational measures, and even conditionality. The ontology was evaluated using a litmus test composed of a collection of competency questions, which were collected based on the use cases of the ontology. These questions were then translated into SPARQL queries and run against a test ontology. The ontology successfully passed the given litmus test. Thus, it can be concluded that the implemented functionality matches the proposed design.
The recent rise of IoT devices in commercial and industrial spaces has created a demand for energy-efficient and reliable communication solutions. The communication solutions used on IoT devices vary depending on the application. Wireless Low Power Wide Area Network (LPWAN) technologies have proven benefits, including long-range, low-power, and low-cost communication alternatives for IoT devices. These benefits come at the cost of limitations, such as lower data rates. At the same time, the demand for faster, cheaper, and more reliable software deployment is becoming more critical than ever before. This thesis aims to find a way of having an automated process where software can be remotely deployed to LoRa nodes and to investigate whether it is possible to implement a DevOps pipeline with both Continuous Integration (CI) and Continuous Deployment (CD) over LoRaWAN. For this thesis, an IoT LoRaWAN edge computing application was chosen to determine how to design and implement a CI/CD pipeline that ensures dependable and continuous software deployment to the LoRaWAN nodes. Designing and implementing a Continuous Deployment pipeline for this IoT application was made possible by integrating DevOps tools such as GitHub and a TeamCity automation server. Additionally, a series of scripts was designed and developed for this case, including automated tests, integration with cloud services, and file fragmentation and defragmentation tools. For software deployment and verification on the LoRaWAN network, a program was designed to communicate with the LoRaWAN network server over the WebSocket communication protocol. The implementation of DevOps in LoRaWAN applications is affected by the limitations of the LoRaWAN protocol. This thesis argues that these limitations can be eliminated using modular software and file fragmentation techniques. The implementation presented in this work can be extended to various time-critical use cases.
The solution presented in this thesis also opens the door to combining LoRaWAN with other LPWAN technologies, like NB-IoT, that can be activated on demand.
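Because a LoRaWAN payload is limited to roughly 51 to 222 bytes depending on the data rate, a software image must be split into numbered fragments and reassembled on the node, as the fragmentation and defragmentation tools above do. A minimal sketch of the idea (the chunk size and header layout here are assumptions, not the thesis's actual tools):

```python
def fragment(data: bytes, chunk_size: int = 200):
    """Split a binary image into (index, chunk) fragments, each small
    enough to fit in a single LoRaWAN downlink payload."""
    return [(i, data[offset:offset + chunk_size])
            for i, offset in enumerate(range(0, len(data), chunk_size))]

def defragment(fragments):
    """Reassemble fragments, which may have arrived out of order,
    by sorting on the fragment index."""
    return b"".join(chunk for _, chunk in sorted(fragments))

firmware = bytes(range(256)) * 4           # 1024-byte dummy image
frags = fragment(firmware)
assert len(frags) == 6                     # ceil(1024 / 200) fragments
assert defragment(reversed(frags)) == firmware
```

A production scheme would additionally carry a session id, a total-fragment count, and an integrity check (e.g. a CRC or hash) so the node can detect missing or corrupted fragments before flashing.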
Impulse Radio Ultra-Wideband (IR-UWB) is a wireless carrier communication technology that uses nanosecond non-sinusoidal narrow pulses to transmit data. The IR-UWB signal therefore has a high resolution in the time domain and is suitable for high-precision positioning or sensing systems in IIoT scenarios. This thesis designs and implements a high-precision positioning system and a contactless sensing system based on the high temporal resolution of IR-UWB technology. The feasibility of the two applications in the IIoT is evaluated, providing a reference for human-machine-thing positioning and human-machine interaction sensing technology in large smart factories. By analyzing the positioning algorithms commonly used in IR-UWB systems, this thesis designs an IR-UWB relative positioning system based on the time-of-flight algorithm. The system uses IR-UWB transceiver modules to obtain distance data and calculates the relative position between two individuals through the proposed relative positioning algorithm. An improved algorithm is proposed to simplify the system hardware, reducing the three serial port modules used in the positioning system to one. Based on the time-of-flight algorithm, this thesis also implements a contactless gesture sensing system with IR-UWB. The IR-UWB signal is sparsified by downsampling, and the feature information of the signal is then obtained by level-crossing sampling. Finally, a spiking neural network is used as the recognition algorithm to classify hand gestures.
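In two-way ranging with the time-of-flight algorithm, the distance follows from the initiator's measured round-trip time minus the responder's known reply delay. A minimal sketch of that calculation (variable names are illustrative; the thesis's own ranging protocol is not shown here):

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def twr_distance(t_round: float, t_reply: float) -> float:
    """Single-sided two-way ranging distance in metres.

    t_round: initiator's measured round-trip time (s)
    t_reply: responder's known turnaround delay (s)
    time of flight = (t_round - t_reply) / 2
    """
    tof = (t_round - t_reply) / 2
    return SPEED_OF_LIGHT * tof

# A 10 m separation corresponds to a one-way flight time of ~33.36 ns.
tof_10m = 10 / SPEED_OF_LIGHT
d = twr_distance(t_round=2 * tof_10m + 1e-6, t_reply=1e-6)
assert abs(d - 10.0) < 1e-6
```

Since light travels about 30 cm per nanosecond, nanosecond-scale pulse resolution is what makes decimetre-level positioning feasible; real deployments also correct for clock drift between the two transceivers.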
It is more and more common to receive emails asking for credentials. They usually claim that there is some kind of issue that must be solved by accessing the involved service through the link inside the message text. These emails are often malicious, designed to steal users' or employees' credentials and gain access to personal or corporate areas. This scenario is commonly known as phishing, and it is nowadays the most common cause of corporate data breaches. The attacker tries to exploit human vulnerabilities like fear, concern, or carelessness to obtain what would be difficult to achieve otherwise. Even if such attempts are easy to recognize from an expert's point of view, automating their detection is not so simple, because various techniques exist to elude systematic checks. Nevertheless, Würth Phoenix wants to improve its cyber defense against any possible threat, and hence assigned me the task of working on phishing email detection. This thesis presents a novel program that can analyze all emails delivered to a specifically set-up email server without any filtering of incoming traffic, which is then called a "spam-trap-box." Additionally, the server is configured with accounts registered for domains owned by failed companies that used to operate in the same industry as Würth Phoenix customers. This way it is more probable to capture traffic similar to that of a real-case scenario. The innovative part of the implemented analysis is the use of Open Source Intelligence (OSINT) to compare the most relevant parts of an email with evidence of other phishing attempts indexed on the web, generally known as Indicators of Compromise (IoCs). After the inspection, if an email is categorized as malicious, new IoCs are created to feed the Würth Phoenix Security Operation Center (SOC), the service responsible for protecting their customers against cyber threats.
The new indicators include more information than the ones used during the analysis, and the findings are specific to clients' businesses, so the SOC has more details to use while analyzing their email traffic.
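A hedged sketch of the IoC-matching step such an analysis relies on: extracting URLs from a message body and checking their hostnames against a set of known indicators. The indicator set, field choices, and regex below are simplifying assumptions for illustration, not Würth Phoenix's implementation:

```python
import re
from urllib.parse import urlparse

# Hypothetical IoC set; in practice this would be fed by OSINT sources.
KNOWN_BAD_DOMAINS = {"login-verify-account.example", "secure-update.example"}

URL_RE = re.compile(r"https?://[^\s\"'<>]+")

def suspicious_urls(body: str) -> list:
    """Return URLs in the email body whose host matches a known IoC."""
    hits = []
    for url in URL_RE.findall(body):
        host = urlparse(url).hostname or ""
        if host.lower() in KNOWN_BAD_DOMAINS:
            hits.append(url)
    return hits

body = ("Your account is locked. Restore access at "
        "https://login-verify-account.example/reset now.")
assert suspicious_urls(body) == ["https://login-verify-account.example/reset"]
```

A full pipeline would apply the same comparison to sender domains, attachment hashes, and message fingerprints, and emit new IoCs for each confirmed match.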
Cloud computing has introduced numerous ways to build software systems in the cloud environment. The complexity of today’s system architectures requires architecture evaluation in the design phase of the system, in the implementation phase, and in the maintenance phase. There are many different architecture evaluation models. This thesis discusses three of them: the architecture tradeoff analysis method, the cost-benefit analysis method, and the AWS Well-Architected framework. The AWS Well-Architected framework is evaluated in depth by performing an architectural evaluation of the case study software, Lixani 5. This thesis introduces and compares the options for cloud architecture evaluation through a literature review, a case study, and interviews with experts. The thesis begins with an introduction to cloud computing, cloud architecture models, and architecture evaluation methods. An architecture evaluation of the case study software is then carried out. The thesis also contains interviews with experts, producing knowledge on how system architecture is evaluated in the field. The research methods used in the thesis are literature review, case study, and expert interviews, and with these methods the thesis describes and assesses the architecture evaluation models. In addition, this thesis introduces and discusses the case study software, Lixani 5, and its architectural decisions. Based on the research in this thesis, all three studied software architecture evaluation models are suitable options for reviewing software architecture. All models had positive and negative aspects, and none was seen as superior to the others. Based on the expert interviews, there are also multiple other efficient ways to evaluate system architecture beyond the models discussed in the thesis, including a technology audit template and a proof-of-concept culture.
Software-as-a-Service is a popular software delivery model that provides subscription-based services for customers. In this thesis, we identify key aspects of implementing a maintainable and secure tenancy model by analyzing research literature and focusing on a case study. We also study whether it is beneficial to change a single-tenant implementation to a multi-tenant implementation in terms of maintainability and security. We research common tenancy models and security issues in SaaS products. Based on these, we set out to analyze a case study product, identifying potential problems in its single-tenant implementation. We then decide on changing said model and show the process of implementing a new hybrid model. Finally, we present validation methods for measuring the effectiveness of such an implementation. We identified data security and isolation, efficiency and performance, administrative manageability, scalability, and profitability to be the most important quality aspects to consider when choosing a maintainable and secure tenancy model. We also find that it is beneficial to change from a single-tenant implementation to a multi-tenant implementation in terms of these aspects.
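One common multi-tenant pattern behind the data security and isolation aspect discussed above is a shared schema with a tenant discriminator column, where every query is scoped by tenant id. A minimal sketch (the schema and names are illustrative, not the case-study product's):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (tenant_id TEXT, title TEXT)")
conn.executemany(
    "INSERT INTO documents VALUES (?, ?)",
    [("acme", "Q1 report"), ("acme", "Roadmap"), ("globex", "Budget")],
)

def documents_for(tenant_id: str) -> list:
    """Every read is scoped by tenant_id, so one tenant can never see
    another's rows: the isolation property a tenancy model must keep."""
    rows = conn.execute(
        "SELECT title FROM documents WHERE tenant_id = ? ORDER BY title",
        (tenant_id,),
    )
    return [title for (title,) in rows]

assert documents_for("acme") == ["Q1 report", "Roadmap"]
assert documents_for("globex") == ["Budget"]
```

Hybrid models typically combine this shared-schema scoping for small tenants with separate schemas or databases for tenants needing stronger isolation.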
The IoT industry is growing at an accelerating pace, and drones are one of the most significant emerging technologies of recent years and one of the fastest growing and developing IoT fields. Since drones are remotely controlled unmanned aerial vehicles, protecting the communication and verifying the sender of the information are especially critical, so that an attacker cannot hijack the drone. For this reason, efforts have been made to develop effective authentication schemes for drones, so that an attacker cannot access a drone's data or misuse it. The aim of this thesis is to survey the authentication schemes used in IoT devices, and especially in drones, and to improve an existing authentication scheme. The literature review surveys what drones are in general and what they can be used for. In addition, identity and access management, the functionality and features of blockchains, and related work on IoT authentication schemes utilizing blockchain are introduced. In the case study, a blockchain-based access control and authentication system for the Internet of Drones (IoD) network was designed, developed, and analyzed. In this system, a smart contract manages the whitelist of drones that can register with the IoD network's blockchain-based authentication system. The smart contract was written in Solidity and tested in a simulation environment using dynamic and static analysis. In addition, a security analysis was performed on the developed access control and authentication system, showing that the solution achieves the security goals of confidentiality, integrity, and availability.
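The actual contract is written in Solidity; purely to illustrate the whitelist logic such a contract manages (owner-only updates, registration gated on the whitelist), here is a simplified Python model, with all names being assumptions for illustration:

```python
class DroneWhitelist:
    """Toy model of a whitelist smart contract: only the owner may add
    drone identifiers, and only whitelisted drones can register with
    the IoD network's authentication system."""

    def __init__(self, owner: str):
        self.owner = owner
        self._whitelist = set()

    def add_drone(self, caller: str, drone_id: str) -> None:
        # Mimics a Solidity onlyOwner modifier guarding state changes.
        if caller != self.owner:
            raise PermissionError("only the contract owner may modify the whitelist")
        self._whitelist.add(drone_id)

    def can_register(self, drone_id: str) -> bool:
        return drone_id in self._whitelist

contract = DroneWhitelist(owner="0xFleetOperator")
contract.add_drone("0xFleetOperator", "drone-42")
assert contract.can_register("drone-42")
assert not contract.can_register("drone-99")
```

On-chain, the same checks would revert the transaction rather than raise an exception, and the whitelist would live in contract storage visible to all network participants.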
According to the American Psychological Association (APA), more than 9 in 10 (94 percent of) adults believe that stress can contribute to the development of major health problems, such as heart disease, depression, and obesity. Due to the subjective nature of stress and anxiety, it has been difficult to measure these psychological issues accurately by relying on objective means alone. In recent years, researchers have increasingly utilized computer vision techniques and machine learning algorithms to develop scalable and accessible solutions for remote mental health monitoring via web and mobile applications. To further enhance accuracy in the field of digital health and precision diagnostics, there is a need for personalized machine-learning approaches that recognize mental states based on individual characteristics, rather than relying solely on general-purpose solutions. This thesis focuses on experiments aimed at recognizing and assessing levels of stress and anxiety in participants. In the initial phase of the study, a broadly applicable mobile application (compatible with both Android and iOS), which we call STAND, is introduced. The application serves the purpose of Ecological Momentary Assessment (EMA). Participants receive daily notifications through this smartphone-based app, which redirect them to a screen consisting of three components: a question prompting participants to indicate their current levels of stress and anxiety, a rating scale from 1 to 10 for quantifying their response, and the ability to capture a selfie. The responses to the stress and anxiety questions, along with the corresponding selfie photographs, are then analyzed on an individual basis.
This analysis explores the relationships between self-reported stress and anxiety levels and potential facial expressions indicative of stress and anxiety, eye features such as pupil-size variation and eye closure, and specific action units (AUs) observed in the frames over time. In addition to its primary functions, the mobile app also gathers sensor data, including accelerometer and gyroscope readings, on a daily basis; this data holds potential for further analysis related to stress and anxiety. Furthermore, apart from capturing selfie photographs, participants have the option to upload video recordings of themselves while playing two neuropsychological games. These videos are then analyzed to extract features that can be used for binary classification of stress and anxiety (i.e., stress and anxiety recognition). The participants selected for this phase are students aged between 18 and 38 who have received recent clinical diagnoses indicating specific stress and anxiety levels. To enhance user engagement in the intervention, gamified elements, an emerging means of influencing user behavior and lifestyle, have been utilized. Incorporating gamified elements into non-game contexts (e.g., health-related ones) has gained overwhelming popularity in recent years, making interventions more delightful, engaging, and motivating. In the subsequent phase of this research, we conducted an AI experiment employing a personalized machine learning approach to perform emotion recognition on an established dataset called Emognition. This experiment served as a simulation of the future analysis that will be conducted as part of a more comprehensive study focusing on stress and anxiety recognition.
The outcomes of the emotion recognition experiment highlight the effectiveness of personalized machine learning techniques and bear significance for future diagnostic endeavors. For training, we selected three models: KNN, Random Forest, and MLP. The preliminary accuracy results for these models were 93%, 95%, and 87%, respectively.
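The essence of the personalized approach, training one model per participant rather than a single general-purpose model, can be sketched as follows. A tiny pure-Python k-NN stands in for the KNN/Random Forest/MLP models used in the thesis, and the two-feature samples below are synthetic placeholders, not Emognition data.

```python
# Sketch of personalized classification: one model (here, one sample store)
# per participant, trained only on that participant's own labelled samples.
from collections import Counter

def knn_predict(samples, x, k=3):
    """samples: list of (feature_vector, label); returns the majority label
    among the k nearest neighbours of x (squared Euclidean distance)."""
    nearest = sorted(
        samples, key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], x))
    )[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Synthetic per-participant training data (features and labels are made up).
per_participant = {
    "p1": [((1.0, 1.0), "calm"), ((1.2, 0.9), "calm"), ((0.8, 1.1), "calm"),
           ((5.0, 5.1), "stressed"), ((5.2, 4.8), "stressed")],
    "p2": [((0.0, 9.0), "stressed"), ((0.2, 8.8), "stressed"),
           ((0.1, 9.2), "stressed"), ((9.0, 0.1), "calm"), ((8.7, 0.3), "calm")],
}

def predict_for(participant, features):
    # Each prediction consults only that participant's model.
    return knn_predict(per_participant[participant], features)

print(predict_for("p1", (5.1, 5.0)))  # stressed
print(predict_for("p2", (8.9, 0.2)))  # calm
```

The point of personalization is visible in the data itself: the same feature region can mean "stressed" for one participant and "calm" for another, which a single pooled model would average away.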
The aim of this thesis is to understand and reduce the repeated development of software artefacts, mainly in terms of the software components produced in the organization of the Natural Resources Institute Finland. The thesis consists of a literature review on code reuse and the software product line method of building software. In the case study, a visual model is added to the Jira environment of the Natural Resources Institute Finland's DIGI unit. The project workers associated with each project are interviewed both before and after the visual model is added. Through the interview results, the study identifies improvements to the understandability and presentability of projects, suggesting that the addition of a graphical model for epics at a high level of abstraction helps project workers increase information sharing within an organization. More research is, however, needed to understand and enable the technical impact of the model.
The rise of Light Detection and Ranging (LiDAR) sensors has profoundly impacted industries ranging from automotive to urban planning. As these sensors become increasingly affordable and compact, their applications are diversifying, driving precision and innovation. This thesis examines LiDAR's advancements in autonomous robotic systems, focusing on its role in simultaneous localization and mapping (SLAM) methodologies and on LiDAR-as-a-camera tracking of Unmanned Aerial Vehicles (UAVs). Our contributions span two primary domains: the Multi-Modal LiDAR SLAM Benchmark and LiDAR-as-a-camera UAV tracking. In the former, we have expanded our previous multi-modal LiDAR dataset with additional data sequences from various scenarios. In contrast to the previous dataset, we employ different ground-truth-generation approaches: we propose a new multi-modal, multi-LiDAR, SLAM-assisted and ICP-based sensor fusion method for generating ground truth maps, and we supplement our data with new open-road sequences with GNSS-RTK. This enriched dataset, supported by high-resolution LiDAR, provides detailed insights through an evaluation of ten configurations pairing diverse LiDAR sensors with state-of-the-art SLAM algorithms. In the latter contribution, we leverage a custom YOLOv5 model trained on panoramic low-resolution images formed from LiDAR reflectivity (LiDAR-as-a-camera) to detect UAVs, demonstrating the superiority of this approach over point-cloud-only or image-only methods. Additionally, we evaluated the real-time performance of our approach on the Nvidia Jetson Nano, a popular mobile computing platform. Overall, our research underscores the transformative potential of integrating advanced LiDAR sensors with autonomous robotics. By bridging the gaps between different technological approaches, we pave the way for more versatile and efficient applications in the future.
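The "LiDAR-as-a-camera" idea rests on a simple geometric step that the abstract leaves implicit: each 3D return is projected by its azimuth and elevation into a panoramic image, so per-point reflectivity becomes a pixel intensity that a 2D detector such as YOLOv5 can consume. The following sketch is our own illustration of that projection; the image size and vertical field of view are assumed values, not the thesis's sensor parameters.

```python
# Illustrative spherical projection of a LiDAR point into a panoramic image
# (the geometric basis of the LiDAR-as-a-camera representation).
import math

WIDTH, HEIGHT = 1024, 128          # panoramic image size (assumed)
FOV_UP, FOV_DOWN = 22.5, -22.5     # vertical field of view in degrees (assumed)

def project(x, y, z):
    """Map a 3D LiDAR point to a (row, col) pixel in the panoramic image."""
    azimuth = math.atan2(y, x)                                  # horizontal angle, [-pi, pi]
    elevation = math.degrees(math.atan2(z, math.hypot(x, y)))   # vertical angle, degrees
    col = int((0.5 * (1.0 - azimuth / math.pi)) * WIDTH) % WIDTH
    row = int((FOV_UP - elevation) / (FOV_UP - FOV_DOWN) * (HEIGHT - 1))
    return max(0, min(HEIGHT - 1, row)), col

# A point straight ahead on the sensor's horizontal plane lands mid-image.
print(project(10.0, 0.0, 0.0))
```

In practice, each projected pixel would store the point's reflectivity (and possibly range or near-infrared intensity), yielding the low-resolution panoramic images on which the UAV detector is trained.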
Interactions between a user and an information system follow an inescapable architectural pattern: user data is integrated into requests whose analysis is carried out by an interpreter that drives the system's activity. Attacks targeting this architecture, known as injection attacks, are very frequent and particularly severe. Their detection is most often based only on the syntax of the user data (e.g., the presence of keywords or substrings typical of attacks), with limited knowledge of its semantics (i.e., the effects of the query on the information system). The automatic extraction of these semantics is therefore a major challenge, as it would significantly improve the performance of Intrusion Detection Systems (IDS). By leveraging recent advances in Natural Language Processing (NLP), it appears feasible to automatically and transparently infer the semantics of user inputs. This Master's thesis provides a framework centred on the instrumentation of parsers. We focused on parsers because of their pivotal role as the first layer of interaction with user inputs and their responsibility for the operations performed on an information system. Our research findings indicate that an intrusion detection system can be constructed based on this framework. Moreover, the focus on parser technologies demonstrates the potential for dynamically preventing the processing of malicious input (i.e., creating Intrusion Prevention Systems).
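The intuition behind parser-level detection, as opposed to keyword matching, can be shown with a toy example of our own (not the thesis's framework): benign input stays confined to a single string literal, whereas an injection changes the token structure the parser sees, introducing new keywords and operators. The tokenizer, template, and function names below are all illustrative assumptions.

```python
# Toy parser-level injection check: compare the token-kind sequence the query
# parser sees against the structure expected from the query template.
import re

# A minimal SQL-like tokenizer: quoted literals, words, and punctuation.
TOKEN = re.compile(r"'(?:[^']|'')*'|\w+|[^\s\w]")

def token_kinds(query):
    kinds = []
    for tok in TOKEN.findall(query):
        if tok.startswith("'"):
            kinds.append("LITERAL")
        elif tok.isidentifier():
            kinds.append("WORD")
        else:
            kinds.append("PUNCT")
    return kinds

TEMPLATE = "SELECT name FROM users WHERE id = '{}'"

def is_injection(template, user_input):
    """Flag input that alters the query's token structure: benign input fills
    exactly one literal, while an injection spawns extra tokens."""
    expected = token_kinds(template.format("x"))
    observed = token_kinds(template.format(user_input))
    return observed != expected

print(is_injection(TEMPLATE, "alice"))          # input stays one literal
print(is_injection(TEMPLATE, "a' OR '1'='1"))   # input escapes the literal
```

A syntax-only filter would have to enumerate suspicious substrings; the structural check instead captures a fragment of the input's semantics, namely whether it changes what the interpreter will execute, which is the direction the thesis's parser instrumentation generalizes.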