Short-term outreach interventions are conducted to raise young students’ awareness of the computer science (CS) field. Typically, these interventions are targeted at K–12 students, attempting to encourage them to study CS in higher education. This study is based on a series of extra-curricular outreach events that introduced students to the discipline of computing, nurturing creative computational thinking through problem solving and game programming. To assess the long-term impact of this campaign, the participants were contacted and interviewed two to five years after they had attended an outreach event. We studied how participating in the outreach program affected the students’ perceptions of CS as a field and, more importantly, how it affected their educational choices. We found that the outreach program generally had a positive effect on the students’ educational choices. The most prominent finding was that students who already possessed a “maintained situational interest” in CS found that the event strengthened their confidence in studying CS. However, for many students, attending the program did not affect their educational choices, although it did change their perceptions of CS. Our results emphasize the need to provide continuing possibilities for interested students to experiment with computing-related activities and hence maintain their emerging individual interests.
Amazon was the world's top Research and Development (R&D) firm in 2017. Its R&D investment was double that of 2015, five times that of 2012, and ten times that of 2011. Such a rapid and notable increase in R&D investment has raised the question of a new R&D definition and focus in the digital economy, which Amazon insists includes both “routine or periodic alterations” (traditionally classified as non-R&D) and “significant improvement” (classified as R&D). Using an empirical analysis of Amazon's R&D model as a system, this paper attempts to provide a convincing answer to this question. It has been identified that Amazon, which treats R&D as a culture, has been promoting companywide experimentation to make customers obsessed with making purchase decisions. This obsession has enabled Amazon to deploy an architecture for participation that makes the most of digital technologies by harnessing the power of users. Such user-driven innovation has accelerated a dramatic advancement of the Internet that, in turn, has accelerated the co-emergence of soft innovation resources in the marketplace. This emergence has activated a self-propagating function that has induced functionality development, leading to supra-functionality beyond an economic value that satisfies a shift in customers’ preferences. While this system depends on the assimilation capacity of soft innovation resources, Amazon has developed a high level of capacity supported by a rapid and notable increase in R&D investment. The above efforts function in a virtuous cycle leading to the transformation of “routine or periodic alterations” into “significant improvement.” These findings give rise to insightful suggestions regarding a new concept of R&D in neo open innovation in the digital economy.
We formulate and solve a real-world shape design optimization problem of an air intake ventilation system in a tractor cabin by using a preference-based surrogate-assisted evolutionary multiobjective optimization algorithm. We are motivated by practical applicability and focus on two main challenges faced by practitioners in industry: 1) meaningful formulation of the optimization problem reflecting the needs of a decision maker and 2) finding a desirable solution based on a decision maker’s preferences when solving a problem with computationally expensive function evaluations. For the first challenge, we describe the procedure of modelling a component in the air intake ventilation system with commercial simulation tools. The problem to be solved involves time-consuming computational fluid dynamics simulations. Therefore, for the second challenge, we extend a recently proposed Kriging-assisted evolutionary algorithm, K-RVEA, to incorporate a decision maker’s preferences. Our numerical results indicate efficient use of the available computing resources, and the solutions obtained reflect the decision maker’s preferences well. In fact, two of the solutions dominate the baseline design (the design provided by the decision maker before the optimization process). The decision maker was satisfied with the results and eventually selected one of them as the final solution.
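To make the surrogate-assisted, preference-based idea concrete, the following is a minimal sketch (not the K-RVEA algorithm itself): a plain RBF interpolant stands in for the Kriging surrogate, and the decision maker's preferences enter as a reference point used to scalarize the predicted objectives. All function names, the toy objectives, and the reference point are illustrative assumptions.

```python
import numpy as np

def rbf_surrogate(X, y, gamma=1.0):
    """Fit an RBF interpolant to expensive evaluations (X, y)."""
    K = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    w = np.linalg.solve(K + 1e-8 * np.eye(len(X)), y)
    def predict(Xq):
        Kq = np.exp(-gamma * ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1))
        return Kq @ w
    return predict

def preferred_candidate(surrogates, candidates, ref_point):
    """Pick the candidate whose predicted objective vector is closest to the
    decision maker's reference point (a simplified achievement scalarizing)."""
    F = np.column_stack([s(candidates) for s in surrogates])
    idx = int(np.argmin(np.max(F - ref_point, axis=1)))
    return candidates[idx], F[idx]

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))      # the "expensively" evaluated designs
f1 = (X ** 2).sum(axis=1)                 # toy objective 1
f2 = ((X - 0.5) ** 2).sum(axis=1)         # toy objective 2
s1, s2 = rbf_surrogate(X, f1), rbf_surrogate(X, f2)
cands = rng.uniform(-1, 1, size=(200, 2)) # candidate designs screened cheaply
ref = np.array([0.2, 0.2])                # decision maker's aspiration levels
x_best, f_best = preferred_candidate([s1, s2], cands, ref)
```

The key point is that only the 30 sampled designs require real simulations; the 200 candidates are screened on the surrogate, which is what makes the approach viable for expensive CFD evaluations.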
Background. Stability of spatial components is frequently used as a post-hoc selection criterion for choosing the dimensionality of an independent component analysis (ICA) of functional magnetic resonance imaging (fMRI) data. Although the stability of the ICA temporal courses differs from that of the spatial components, temporal stability has not been considered during dimensionality decisions. New method. The current study aims to (1) develop an algorithm to incorporate temporal course stability into dimensionality selection and (2) test the impact of the temporal courses on the stability of the ICA decomposition of fMRI data via tensor clustering. Resting-state fMRI data were analyzed with two popular ICA algorithms, InfomaxICA and FastICA, using our new method, and the results were compared with model order selection based on spatial or temporal criteria alone. Results. Hierarchical clustering indicated that the stability of the ICA decomposition incorporating spatiotemporal tensor information performed similarly to current best practice. However, we found that component spatiotemporal stability and convergence of the model varied significantly with model order. Considering both may lead to methodological improvements for determining the ICA model order. Selected components were also significantly associated with relevant behavioral variables. Comparison with existing method. The Kullback–Leibler information criterion algorithm suggests that the optimal model order for group ICA is 40, compared with an optimal model order of 20 for the proposed method. Conclusion. The current study sheds new light on the importance of temporal course variability in ICA of fMRI data.
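The notion of component stability across repeated ICA decompositions can be sketched as follows: components from different runs are matched by maximum absolute correlation, and the mean matched correlation serves as a stability index. This illustrates only the stability criterion, not the paper's tensor-clustering method; the synthetic "runs" below are assumed stand-ins for real ICA outputs.

```python
import numpy as np

def component_stability(runs):
    """runs: list of (n_components, n_voxels) arrays from repeated ICA runs.
    Returns the mean best-match |correlation| against the first run."""
    ref = runs[0]
    scores = []
    for other in runs[1:]:
        # correlation of every component in `ref` with every one in `other`
        C = np.corrcoef(np.vstack([ref, other]))[:len(ref), len(ref):]
        scores.append(np.abs(C).max(axis=1).mean())  # greedy best match
    return float(np.mean(scores))

rng = np.random.default_rng(1)
base = rng.standard_normal((5, 1000))
# two "runs" recovering the same sources up to sign flips and small noise
run_a = base * np.array([1, -1, 1, 1, -1])[:, None]
run_b = base + 0.05 * rng.standard_normal(base.shape)
stable = component_stability([base, run_a, run_b])      # near 1
shuffled = component_stability([base, rng.standard_normal(base.shape)])  # near 0
```

Taking the absolute value handles the sign indeterminacy of ICA; a real spatiotemporal version would apply the same matching jointly to spatial maps and time courses.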
We deduce a posteriori error estimates of functional type for the stationary Stokes problem with slip and leak boundary conditions. The derived error majorants do not contain mesh dependent constants and are valid for a wide class of energy admissible approximations that satisfy the Dirichlet boundary condition on a part of the boundary. Different forms of error majorants contain global constants associated with Poincaré type inequalities or the stability (LBB) condition for the Stokes problem or constants associated with subdomains (if a domain decomposition is applied). It is proved that the majorants are guaranteed and vanish if and only if the functions entering them coincide with the respective exact solutions.
With the advent of the mobile industry, we face new security challenges. The ARM architecture is deployed in most mobile phones and in homeland security, IoT, automotive, and other industries, and it provides a hypervisor API (via its virtualization extension technology). Researching the applicability of this virtualization technology for security on this platform is an interesting endeavor. The hypervisor API is an optional addition on some ARMv7-A processors and is available on any ARMv8-A processor. Some ARM platforms also offer TrustZone, a separate exception level designed for trusted computing. However, TrustZone may not be available to engineers, as some vendors lock it. We present a method of applying thin hypervisor technology as a generic security solution for the most common operating system on the ARM architecture. Furthermore, we discuss implementation alternatives and differences, especially in comparison with the Intel architecture and with TrustZone-based approaches. We provide performance benchmarks for using hypervisors for reverse engineering protection.
Non-orthogonal multiple access (NOMA) holds the promise to be a key enabler of 5G communication. However, the existing design of NOMA systems must be optimized to achieve the maximum rate while using minimum transmit power. To do so, this paper provides a novel technique based on multi-objective optimization to efficiently allocate resources in multi-user NOMA systems supporting downlink transmission. Specifically, our optimization technique jointly improves spectrum and energy efficiency while satisfying the constraints on users’ quality-of-service (QoS) requirements, the transmit power budget, and successive interference cancellation. We first formulate a joint problem for spectrum and energy optimization and then employ a dual decomposition technique to obtain an efficient solution. For the sake of comparison, a low-complexity single-objective NOMA optimization scheme is also provided as a benchmark scheme. The simulation results show that the proposed joint approach not only performs better than the traditional benchmark NOMA scheme but also significantly outperforms its counterpart orthogonal multiple access (OMA) schemes in terms of both energy and spectral efficiency.
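The dual decomposition idea can be sketched on a simpler relative of this problem: allocating a total power budget across channels to maximize sum rate. A dual variable (price) on the budget constraint is updated by subgradient ascent, and each per-channel subproblem then has a closed-form water-filling solution. The channel gains and budget below are illustrative values, not from the paper, and the single-constraint setting is a deliberate simplification.

```python
import numpy as np

def water_fill(gains, price):
    """Per-channel optimum of log(1 + g*p) - price*p (closed form)."""
    return np.maximum(1.0 / price - 1.0 / gains, 0.0)

def dual_power_allocation(gains, p_budget, steps=200, lr=0.05):
    lam = 1.0                                  # dual variable on the budget
    for _ in range(steps):
        p = water_fill(gains, lam)             # solve decoupled subproblems
        # subgradient step: raise the price if the budget is exceeded
        lam = max(lam + lr * (p.sum() - p_budget), 1e-6)
    return p, lam

gains = np.array([4.0, 2.0, 1.0, 0.5])         # illustrative channel gains
p, lam = dual_power_allocation(gains, p_budget=4.0)
```

At convergence the allocations use the whole budget, stronger channels receive more power, and very weak channels may receive none — the classic water-filling profile that dual decomposition recovers without solving the coupled problem directly.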
We study the splitting dynamics of giant vortices in dilute Bose-Einstein condensates by numerically integrating the three-dimensional Gross-Pitaevskii equation in time. By taking advantage of tetrahedral tiling in the spatial discretization, we decrease the error and increase the reliability of the numerical method. An extensive survey of vortex splitting patterns is presented for different aspect ratios of the harmonic trapping potential. The discrete rotational symmetries of the splitting patterns that emerge in the time evolution are in good agreement with predictions obtained by solving the prevailing dynamical instabilities from the Bogoliubov equations. Furthermore, we observe intertwining of the split vortices in prolate condensates and a split-and-revival phenomenon in a spherical condensate.
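Numerical time integration of the Gross-Pitaevskii equation is commonly done with split-step methods, which can be sketched in one dimension as follows (the paper works in 3D with tetrahedral tiling; the grid size, harmonic trap, and interaction strength here are illustrative assumptions, and the splitting scheme shown is the generic Strang/Fourier one, not necessarily the authors' discretization).

```python
import numpy as np

n, L, dt, g = 256, 16.0, 1e-3, 1.0            # grid, box, step, interaction
dx = L / n
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)       # Fourier wavenumbers
V = 0.5 * x ** 2                              # harmonic trapping potential
psi = np.exp(-x ** 2 / 2).astype(complex)     # initial Gaussian state
psi /= np.sqrt((np.abs(psi) ** 2).sum() * dx) # normalize to unit norm

half_kinetic = np.exp(-1j * (dt / 2) * k ** 2 / 2)
for _ in range(500):
    # Strang splitting: half kinetic step in Fourier space,
    # full potential + nonlinear step in real space, half kinetic step
    psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))
    psi *= np.exp(-1j * dt * (V + g * np.abs(psi) ** 2))
    psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))

norm = float((np.abs(psi) ** 2).sum() * dx)   # conserved by the scheme
```

Because every substep multiplies by a unit-modulus phase, the particle number (norm) is conserved to machine precision, which is one standard reliability check for such integrators.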
Social-anxiety disorder involves a fear of embarrassing oneself in the presence of others. Taijin-kyofusho (TKS), a subtype common in East Asia, additionally includes a fear of embarrassing others. TKS individuals are hypersensitive to others' feelings and worry that their physical or behavioral defects humiliate others. To explore the underlying neurocognitive mechanisms, we compared TKS ratings with questionnaire-based empathic disposition, cognitive flexibility (set-shifting), and empathy-associated brain activity in 23 Japanese adults. During 3-tesla functional MRI, subjects watched video clips of badly singing people who expressed either authentic embarrassment (EMBAR) or hubristic pride (PRIDE). We expected the EMBAR singers to embarrass the viewers via emotion-sharing involving affective empathy (affEMP), and the PRIDE singers to embarrass via perspective-taking involving cognitive empathy (cogEMP). During affEMP (EMBAR > PRIDE), TKS scores correlated positively with dispositional affEMP (personal-distress dimension) and with amygdala activity. During cogEMP (EMBAR < PRIDE), TKS scores correlated negatively with cognitive flexibility and with activity of the posterior superior temporal sulcus/temporoparietal junction (pSTS/TPJ). Intersubject correlation analysis implied stronger involvement of the anterior insula, inferior frontal gyrus, and premotor cortex during affEMP than cogEMP and stronger involvement of the medial prefrontal cortex, posterior cingulate cortex, and pSTS/TPJ during cogEMP than affEMP. During cogEMP, the whole-brain functional connectivity was weaker the higher the TKS scores. The observed imbalance between affEMP and cogEMP, and the disruption of functional brain connectivity, likely deteriorate cognitive processing during embarrassing situations in persons who suffer from other-oriented social anxiety dominated by empathic embarrassment.
In this work, a mixed-integer binary non-linear two-echelon inventory problem is formulated for a vendor-buyer supply chain network in which lead times are constant and the demands of the buyers follow a normal distribution. In this formulation, the problem is a combination of (r, Q) and periodic review policies, under which an order of size Q is placed by a buyer in each fixed period once their on-hand inventory reaches the reorder point r in that period. The constraints are the vendors’ warehouse spaces, production restrictions, and the total budget. The aim is to find the optimal order quantities that the buyers place with each vendor in each period, alongside the optimal assignment of the vendors among the buyers, such that the total supply chain cost is minimized. Due to the complexity of the problem, a Modified Genetic Algorithm (MGA) and a Particle Swarm Optimization (PSO) algorithm are used to find optimal and near-optimal solutions. In order to assess the quality of the solutions obtained by the algorithms, a mixed-integer nonlinear program (MINLP) formulation of the problem is coded in GAMS. The Taguchi design-of-experiments approach is utilized to adjust the parameters of the algorithms. Finally, a wide range of numerical illustrations is generated and solved to evaluate the performance of the algorithms. The results show that the MGA outperforms the PSO in terms of the fitness function in most of the problems and is also faster than the PSO in terms of CPU time in all the numerical examples.
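The combined (r, Q)/periodic-review policy described above can be sketched with a small single-buyer simulation: once per review period, an order of size Q is placed whenever on-hand inventory has fallen to the reorder point r or below. The demand parameters are illustrative assumptions, and the constant lead time is folded into the period for brevity.

```python
import random

def simulate_buyer(r, Q, periods, mean_demand, sd_demand, seed=0):
    """Simulate one buyer under a periodic-review (r, Q) policy."""
    rng = random.Random(seed)
    on_hand = Q                         # start with one batch on hand
    orders = 0
    stockouts = 0                       # total unmet demand units
    for _ in range(periods):
        demand = max(0, round(rng.gauss(mean_demand, sd_demand)))
        on_hand -= demand
        if on_hand < 0:
            stockouts += -on_hand
            on_hand = 0
        if on_hand <= r:                # periodic review: check once per period
            on_hand += Q                # replenishment of fixed size Q
            orders += 1
    return orders, stockouts

orders, stockouts = simulate_buyer(r=20, Q=50, periods=100,
                                   mean_demand=10, sd_demand=3)
```

In the paper's setting, a metaheuristic searches over the order quantities and vendor assignments while a simulation or analytic model like this evaluates each candidate's cost against the warehouse, production, and budget constraints.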
Purpose: The purpose of the present study was to investigate the impact of sport experience on response inhibition and response re-engagement in expert badminton athletes during the stop-signal task and the change-signal task. Methods: A total of 19 badminton athletes and 20 nonathletes performed both the stop-signal task and the change-signal task. Reaction times (RTs) and event-related potentials were recorded and analyzed. Results: Behavioral results indicated that badminton athletes responded faster than nonathletes to go stimuli and to change signals, with faster change RTs and change-signal RTs, which take the variable stimulus onset time into account. During successful change trials in the change-signal task, the amplitudes of the event-related potential components N2 and P3 were smaller for badminton athletes than for nonathletes. Moreover, change-signal RTs and N2 amplitudes, as well as change RTs and P3 amplitudes, were significantly correlated in badminton athletes. A significant correlation was also found between the amplitude of the event-related potential component N1 and response accuracy to change signals in badminton athletes. Conclusion: Moderation of brain cortical activity in badminton athletes was more associated with their ability to rapidly inhibit a planned movement and re-engage with a new movement compared with nonathletes. The superior inhibitory control and more efficient neural mechanisms in badminton athletes compared with nonathletes might be a result of the athletes’ professional training experience.
Cooperative vehicular networks will play a vital role in the coming years to implement various intelligent transportation related applications. Both vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications will be needed to reliably disseminate information in a vehicular network. In this regard, a roadside unit (RSU) equipped with multiple antennas can improve the network capacity. While the traditional approaches assume antennas to experience independent fading, we consider a more practical uplink scenario where antennas at the RSU experience correlated fading. In particular, we evaluate the packet error probability for two renowned antenna correlation models, i.e., constant correlation (CC) and exponential correlation (EC). We also consider intermediate cooperative vehicles for reliable communication between the source vehicle and the RSU. Here, we derive closed-form expressions for the packet error probability, which help to quantify the performance variations due to the fading parameter, the correlation coefficients, and the number of intermediate helper vehicles. To evaluate the optimal transmit power in this network scenario, we formulate a Stackelberg game wherein the source vehicle is treated as a buyer and the helper vehicles as the sellers. The optimal solutions for the asking price and the transmit power are derived, which maximize the utility functions of the helper vehicles and the source vehicle, respectively. We verify our mathematical derivations by extensive simulations in MATLAB.
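The Stackelberg structure can be sketched with a single seller and buyer: the helper (seller) posts a price per unit of relay power, the source (buyer) best-responds with the power it purchases, and the seller picks the price that maximizes its profit over the buyer's response. The log-utility form, channel gain, and cost below are simple illustrative choices, not the paper's exact utility functions.

```python
import numpy as np

def buyer_power(price, gain=2.0, p_max=4.0):
    """Buyer maximizes log(1 + gain*p) - price*p; the first-order condition
    gives the closed-form best response p = 1/price - 1/gain (clipped)."""
    return float(np.clip(1.0 / price - 1.0 / gain, 0.0, p_max))

def seller_best_price(cost=0.1, gain=2.0):
    """Leader step: grid-search the price anticipating the buyer's response."""
    prices = np.linspace(cost + 1e-3, 5.0, 5000)
    revenue = [(c - cost) * buyer_power(c, gain) for c in prices]
    return float(prices[int(np.argmax(revenue))])

price = seller_best_price()        # helper's optimal asking price
p_bought = buyer_power(price)      # source's power purchase at that price
```

For this toy utility the analytic leader optimum is price = sqrt(cost / (1 - cost/gain) ... ) type expression; here sqrt(0.2) ≈ 0.447, which the grid search recovers, with the buyer purchasing about 1.74 units of power.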
This paper investigates energy efficiency (EE) optimization in downlink multi-cell massive multiple-input multiple-output (MIMO) systems. In our research, the statistical channel state information (CSI) is exploited to reduce the signaling overhead. To maximize the minimum EE among the neighbouring cells, we design the transmit covariance matrices for each base station (BS). Specifically, optimization schemes for this max-min EE problem are developed in centralized and distributed ways, respectively. To obtain the transmit covariance matrices, we first derive the closed-form optimal transmit eigenmatrices for the BS in each cell, and convert the original transmit covariance matrix design problem into a power allocation one. Then, to lower the computational complexity, we utilize an asymptotic approximation expression for the problem objective. Moreover, for the power allocation design, we adopt the minorization maximization method to address the non-convexity of the ergodic rate, and use Dinkelbach’s transform to convert the max-min fractional problem into a series of convex optimization subproblems. To tackle the transformed subproblems, we propose a centralized iterative water-filling scheme. To reduce the backhaul burden, we further develop a distributed algorithm for the power allocation problem, which requires limited inter-cell information sharing. Finally, the performance of the proposed algorithms is demonstrated by extensive numerical results.
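Dinkelbach's transform, used above to handle the fractional EE objective, can be sketched on a single-link toy problem: maximizing rate(p) / (p + Pc) is reduced to a sequence of subtractive problems rate(p) - t·(p + Pc), with t updated to the current ratio. The channel gain, circuit power Pc, and grid-based inner solver are illustrative simplifications of the multi-cell max-min setting.

```python
import numpy as np

def rate(p, gain=10.0):
    """Toy achievable rate (bits/s/Hz) at transmit power p."""
    return np.log2(1.0 + gain * p)

def dinkelbach_ee(p_max, Pc=1.0, iters=30):
    """Maximize rate(p) / (p + Pc) via Dinkelbach's parametric iterations."""
    t = 0.0
    grid = np.linspace(1e-6, p_max, 2000)
    for _ in range(iters):
        # inner subtractive problem, solved on a grid for simplicity
        obj = rate(grid) - t * (grid + Pc)
        p = float(grid[int(np.argmax(obj))])
        t = float(rate(p) / (p + Pc))     # update the EE parameter
    return p, t

p_star, ee_star = dinkelbach_ee(p_max=5.0)
```

The iterates converge to the power level where the marginal rate gain no longer justifies the extra power, i.e., the globally optimal EE ratio of this quasi-concave problem.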
Background. Athletic performance is affected by emotional state. Athletes may underperform in competition due to poor emotion regulation. Movement speed plays an important role in many competition events. Flexible control of movement speed is critical for effective athletic performance. Although behavioral evidence showed that negative emotion can influence movement speed, the nature of the relationship remains controversial. Thus, the present study investigated how negative emotion affects movement speed and the neural mechanism underlying the interaction between emotion processing and movement control. Methods. The present study combined electroencephalography (EEG) technology with a cued-action task to investigate the effect of negative emotion on movement speed. In total, 21 undergraduate students were recruited for this study. Participants were asked to perform six consecutive action tasks after viewing an emotional picture. Pictures were presented in two blocks (one negative and one neutral). After the participants completed a set of tasks (neutral or negative), they were asked to complete a 9-point self-assessment manikin scale. Participants underwent EEG while performing the tasks. Results. At the behavioral level, there was a significant main effect of emotional valence on movement speed, with participants exhibiting significantly slower movements in the negative emotional condition than in the neutral condition. EEG data showed increased theta oscillation and larger P1 amplitude in response to negative than to neutral images, suggesting that more cognitive resources were required to process negative images than neutral ones. EEG data also showed a larger late CNV area in the neutral condition than in the negative condition, suggesting a significant decrease in brain activation during the action tasks in the negative emotional condition relative to the neutral condition. The early CNV, in contrast, did not reveal a significant main effect of emotional valence. Conclusion.
The present results indicate that negative emotion can slow movement, largely because negative emotional processing consumes more resources than non-emotional processing; this interference effect occurred mainly in the late movement-preparation phase.
Concept-based image search is an emerging search paradigm that utilizes a set of concepts as intermediate semantic descriptors of images to bridge the semantic gap. Typically, a user query is rather complex and cannot be well described using a single concept. However, it is less effective to tackle such complex queries by simply aggregating the individual search results for the constituent concepts. In this paper, we propose to introduce learning to rank techniques to concept-based image search for complex queries. With freely available social tagged images, we first build concept detectors by jointly leveraging the heterogeneous visual features. Then, to formulate the image relevance, we explicitly model the individual weight of each constituent concept in a complex query. The dependence among constituent concepts, as well as the relatedness between query and non-query concepts, are also considered through modeling the pairwise concept correlations in a factorized manner. Finally, we train our model to directly optimize the image ranking performance for complex queries under a pairwise learning to rank framework. Extensive experiments on two benchmark datasets verify the promise of our approach.
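The pairwise learning-to-rank principle can be sketched as follows: for training pairs where image i is more relevant than image j for a query, a linear scoring model is updated with a pairwise hinge loss so that i outranks j by a margin. The synthetic features and relevance scores stand in for the concept-score representations and factorized correlation model of the paper.

```python
import numpy as np

def train_pairwise(pairs, X, dim, epochs=50, lr=0.1):
    """Pairwise hinge-loss (RankSVM-style) training of a linear scorer."""
    w = np.zeros(dim)
    for _ in range(epochs):
        for i, j in pairs:                    # i should outrank j
            margin = w @ X[i] - w @ X[j]
            if margin < 1.0:                  # hinge violated: push pair apart
                w += lr * (X[i] - X[j])
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))              # stand-in image feature vectors
true_w = np.array([1.0, -0.5, 0.0, 2.0, 0.3]) # hidden relevance model
scores = X @ true_w
# preference pairs with a clear relevance gap
pairs = [(i, j) for i in range(20) for j in range(20) if scores[i] > scores[j] + 0.5]
w = train_pairwise(pairs, X, dim=5)
learned = X @ w                               # learned ranking scores
```

Optimizing over pairs rather than pointwise relevance labels is what lets the model target ranking quality directly, which is the rationale the abstract gives for the pairwise framework.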
Image annotation and large annotated datasets are crucial parts within the Computer Vision and Artificial Intelligence fields. At the same time, it is well-known and acknowledged by the research community that the image annotation process is challenging, time-consuming and hard to scale. Therefore, researchers and practitioners are always seeking ways to make annotation easier, faster, and higher in quality. Even though several widely used tools exist and the tool landscape has evolved considerably, most of the tools still require intricate technical setups and high levels of technical savviness from their operators and crowdsource contributors. In order to address such challenges, we develop and present BRIMA – a flexible and open-source browser extension that allows BRowser-only IMage Annotation at considerably lower overheads. Once added to the browser, it instantly allows the user to annotate images easily and efficiently directly from the browser without any installation or setup on the client-side. It also features cross-browser and cross-platform functionality, thus presenting itself as a neat tool for researchers within the Computer Vision, Artificial Intelligence, and privacy-related fields.
Structured Query Language (SQL) skills are crucial in software engineering and computer science. However, teaching SQL effectively requires both pedagogical skill and considerable knowledge of the language. Educators and scholars have proposed numerous considerations for the betterment of SQL education, yet these considerations may be too numerous and scattered among different fora for educators to find and internalize, as no systematic mappings or literature reviews regarding SQL education have been conducted. The two main goals of this mapping study are to provide an overview of educational SQL research topics, research types and publication fora, and to collect and propagate SQL teaching practices for educators to utilize. Additionally, we present a short future research agenda based on insights from the mapping process. We conducted a systematic mapping study complemented by snowballing techniques to identify applicable primary studies. We classified the primary studies according to research type, and utilized directed content analysis to classify the primary studies by their topic. Out of our selected 89 primary studies, we identified six recurring topics: (i) student errors in query formulation; (ii) characteristics and presentation of the exercise database; (iii) specific and (iv) non-specific teaching approach suggestions; (v) patterns and visualization; and (vi) easing teacher workload. We list 66 teaching approaches the primary studies argued for (and in some cases against). For researchers, we provide a systematic map of educational SQL research and a future research agenda. For educators, we present an aggregated body of knowledge on teaching practices in SQL education over a time frame of 30 years. In conclusion, we suggest that replication studies, studies on advanced SQL concepts, and studies on aspects other than data retrieval are needed to further educational SQL research.
[Introduction] Summary submission is electronic only. The submission process consists of entering the paper title, author(s) and affiliation(s), and an abstract no longer than 35 words. Authors are prompted to state their preference for presentation type (oral presentation, poster or data workshop poster) and for session. Details for the submission process will be provided later on. The final category of all papers will be determined by the Technical Program Committee, which is responsible for selecting final papers from initial submissions. Papers accepted for oral or poster presentation at the technical program will be eligible for publication in the IEEE Transactions on Nuclear Science. Selection for this issue will be based on a separate submission of a complete paper. These papers will be subject to the standard full peer review given all papers submitted to the IEEE Transactions on Nuclear Science. Further information will be sent to prospective authors upon acceptance of their RADECS summary. [Continues in the article]
Passwords are the most frequently used authentication mechanism. However, due to increased password numbers, there has been an increase in insecure password behaviors (e.g., password reuse). Therefore, new and innovative ways are needed to increase password memorability and security. Typically, when users create a new account, they are asked to input their password once in order to access the system and twice to verify it. But what if users were asked to input their passwords three or four times when they create new accounts? In this study, three groups of participants were asked to verify their passwords once (control group), twice, or three times (two experimental groups). Psychological literature suggests that applying repetition in learning to the password process has significant effects on password memorability. However, previous password research has found a tradeoff between password security and memorability, and more recently, user convenience. Our results suggest that verifying passwords three times can increase password memorability from 42% (verifying passwords just once, as with current practices) to 70%. Even increasing the verification to just two times can increase password memorability by 17%. However, we found that increasing the number of verifications did not equate to a decrease in user convenience. This means that small changes to the password verification stage can have significant effects on password memorability while not necessarily inconveniencing the user. These results could ultimately have a positive effect on password security and on the consequences of forgetting passwords.
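As a back-of-the-envelope check on a recall difference like the one reported (42% with one verification vs. 70% with three), a two-proportion z-test can be computed as below. The group size n = 50 is an assumed value for illustration only, not the study's actual sample size.

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """Two-proportion z statistic with a pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# recall proportions from the abstract, hypothetical group sizes of 50
z = two_prop_z(0.42, 50, 0.70, 50)
```

With 50 participants per group the statistic is about 2.82, comfortably beyond the conventional 1.96 threshold, so an effect of this size would be statistically reliable even at modest sample sizes.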
Amazon demonstrated a conspicuous increase in R&D and became the world's top R&D firm in 2017 with a skyrocketing increase in market capitalization, bringing it close to being the world's biggest company. Such a remarkable accomplishment can be attributed to Amazon's institutional systems, which orchestrate techno-financing systems that fuse a unique R&D transformation system and a sophisticated financing system centered on the cash conversion cycle (CCC). These institutional systems support and endorse aggressive investment in R&D that incorporates the characteristics of uncertainty, a long lead time, and successive inflows of very large amounts of funding without interruption. While some of this investment can be endorsed by Amazon's positive business results in terms of a sustained increase in sales and free cash flow, such a large amount of aggressive investment goes beyond what these results alone can endorse. In addition to actual economic performance, investors have been betting on a high level of risky investment with the expectation of Amazon's future success by trusting its R&D-inducing institutional systems. While the former can be considered to be a general reaction to a producer surplus, the latter can be postulated as an investor surplus in which investors bet on overly optimistic future prospects instead of actual accomplishments. This is similar to a consumer surplus in which consumers pay more than the actual market price for attractive goods and services. By introducing a concept of gross market value consisting of a producer surplus and an investor surplus, this paper attempts to elucidate the institutional systems that enable Amazon to invest a very large amount of financing resources in aggressive R&D.
An intensive empirical analysis focusing on the development trajectory of Amazon's techno-financing system over the last two decades was conducted, together with comparative analyses of the performance of the big four online service companies, Google, Apple, Facebook, and Amazon (GAFA). It was identified that among GAFA, Amazon demonstrated the highest dependence on an investor surplus, which suggests that investors are betting on the continuation of Amazon's solid growth by means of its aggressive investment in R&D, supported and endorsed by its institutional systems. This idea is supported by the high elasticity of its investor surplus to R&D investment. It is noteworthy that investors include not only shareholders but also broad stakeholders centered on users, and that they expect not only economic value but also supra-functionality beyond such value. A broadly applicable practical approach for measuring an investor surplus and an insightful suggestion highlighting the significance of an investor surplus toward stakeholder capitalism are thus provided.
Literature on global employability signifies “enabling” learning environments where students encounter ill-formed and open-ended problems and are required to adapt and be creative. Varying forms of “projects,” co-located and distributed, have populated computing curricula for decades and are generally deemed an answer to this call. We performed a qualitative study to describe how project course students are able to capitalize on the promise of enabling learning environments. This critical perspective was motivated by the circumstance of the present-day education systems being heavily regulated for the precipitated production of human capital. The students involved in our study described education system-imposed and group-imposed narratives of narrowed opportunities, as well as many self-related challenges. However, students welcomed autonomy as an enjoyable condition and linked it with motivation. Whole-group commitment and self-related attributes such as taking care of one’s own learning appeared as important conditions. The results highlight targets for interventions that can counteract constraining study conditions and continue the march of projects as a means to foster complex learning for the benefit of students and professionalism in global software engineering.
A deficiency of correctly implemented and robust defences leaves Internet of Things devices vulnerable to cyber threats, such as adversarial attacks. A perpetrator can utilize adversarial examples when attacking Machine Learning models used in a cloud data platform service. Adversarial examples are malicious inputs to ML models that produce erroneous model outputs while appearing to be unmodified. This kind of attack can fool the classifier and can prevent ML models from generalizing well and from learning high-level representations; instead, the ML model learns superficial dataset regularities. This study focuses on investigating, detecting, and preventing adversarial attacks towards a cloud data platform in the cyber-physical context.
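One common way such "malicious inputs" are generated is the fast gradient sign method (FGSM), sketched below against a simple linear (logistic-regression) classifier. The weights, the input, and the perturbation budget are synthetic stand-ins; the study itself concerns attacks on models in a cloud data platform, not this toy model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, w, b, y, eps):
    """Perturb x by eps in the sign of the loss gradient w.r.t. the input."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w              # gradient of log-loss w.r.t. input x
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.standard_normal(10)           # toy model weights
b = 0.0
x = w / np.linalg.norm(w)             # an input the model classifies confidently
clean_prob = float(sigmoid(w @ x + b))           # P(class 1) on clean input
x_adv = fgsm(x, w, b, y=1.0, eps=0.3)            # small, sign-aligned change
adv_prob = float(sigmoid(w @ x_adv + b))         # confidence after the attack
```

Because every coordinate is nudged against the model's weight vector, the perturbation is small per feature yet reliably lowers the model's confidence — the property that makes adversarial examples "appear unmodified" while changing the output.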
Recommendation systems play an important role in everyday life, as they assist in the reliable selection of common utilities. Code recommendation systems are used by code repositories (e.g., GitHub and SourceForge) to recommend the most appropriate code to users. Several factors can negatively impact the performance of code recommendation systems (CRS). This study aims to empirically explore the challenges that can have a critical impact on the performance of CRS. Using a systematic literature review and a questionnaire survey, 19 challenges were identified. The identified challenges were then prioritized using fuzzy-AHP analysis. The identification of the challenges, their categorization, and the fuzzy-AHP analysis yield a prioritization-based taxonomy of the explored challenges. The study findings will assist industry practitioners and academic researchers in improving CRS and in developing new techniques for that purpose.
Immersive virtual reality applications aim at providing an all-encompassing spatial experience where a user can feel like being in another world or dimension. The systems are inherently designed for individual use as the devices disconnect the user from the physical environment. However, the applications are seldom used alone. Specifically, when used for sales and marketing, the user often needs help from other people but also benefits from social interaction as a part of the experience. Design research methodology is applied to three iterative development versions of a virtual-reality application. The focus of the evaluation of the artifacts is in the social use emphasizing three sociability factors: shared knowledge, mutual trust and influence. According to the findings, the users prefer personal social interaction as a part of the experience. Thus, the social aspect should be emphasized in the service design.
We examined the sustainability of the KiVa antibullying program in Finland from its nationwide roll‐out in 2009 to 2016. Using latent class analyses, we identified four different patterns of implementation. The persistent schools (43%) maintained a high likelihood of participation throughout the study period. The awakened (14%) had a decreasing trend during the first years but then increased their likelihood of program participation. The tail‐offs (20%) decreased in the likelihood of participating after the third year, and the drop‐offs (23%) already after the first year. The findings suggest that many schools need support during the initial years to launch and maintain the implementation of evidence‐based programs; yet a large proportion of schools manage to sustain the program implementation for several years. The logistic regression analyses showed that large schools were more likely to persist than small schools. A lower initial level of victimization was also related to the sustainability of the program. Finally, persistent program participation was predicted by several school‐level actions during the initial years of implementing the program. These results imply that the sustainability of evidence‐based programs could be enhanced by supporting and guiding schools when setting up the program during the initial implementation.
In cognitive radio networks (CRN), secondary users (SUs) are required to detect the presence of the licensed users, known as primary users (PUs), and to find spectrum holes for opportunistic spectrum access without causing harmful interference to PUs. However, due to complicated data processing, non-real-time information exchange and limited memory, SUs often suffer from imperfect sensing and unreliable spectrum access. Cloud computing can solve this problem by allowing the data to be stored and processed in a shared environment. Furthermore, the information from a massive number of SUs allows for more comprehensive information exchanges to assist the resource allocation and interference management at the cloud center while relieving the stringent capacity demands in fronthaul links. Moreover, spectrum resources should be made available to more users, especially when the spectrum is underutilized but occupies a large band. Hence, cloud-based CRN can generate massive sensing samples that will benefit the applications of big data algorithms. The approaches to spectrum sensing and spectrum management can be greatly improved with decision-making capabilities of spectral big data.
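A minimal sketch of the energy-detection style of spectrum sensing discussed above: an SU declares the channel occupied when the average energy of its samples exceeds a threshold calibrated from the noise floor. The sample counts, noise level, and threshold here are invented for illustration.

```python
import random

random.seed(0)
N = 500
noise_sigma = 1.0

def sense(samples, threshold):
    """Energy detector: average sample energy vs. a fixed threshold."""
    energy = sum(s * s for s in samples) / len(samples)
    return energy > threshold          # True -> PU assumed present

# Threshold a little above the expected noise energy (sigma^2).
threshold = 1.3 * noise_sigma ** 2

noise_only = [random.gauss(0, noise_sigma) for _ in range(N)]
pu_signal = [random.gauss(0, noise_sigma) + random.choice([-1, 1])
             for _ in range(N)]        # noise plus a unit-power PU signal

print(sense(noise_only, threshold))    # spectrum hole: channel is free
print(sense(pu_signal, threshold))     # PU present: SU must stay off
```

In the cloud-based setting the abstract describes, many such per-SU decisions (or the raw energies) would be aggregated centrally, which is what turns sensing into a big-data problem.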
Life‑long learning is currently being embraced as a central process that could disrupt traditional educational paths. Apparently, the (ideal) type of learning often promoted is deep and meaningful learning, though it is not always required to be so. Deep learning goes beyond superficial assimilation of unlinked facts; it aims at developing deep disciplinary understanding, transformative knowledge, personal meaning, emotional intelligence, critical thinking, creativity, and metacognitive skills. Meaningful learning occurs when learning is active, constructive, intentional, authentic, and cooperative. Technology-enhanced teaching and learning methods should prove their potential to transform life‑long learning provision and facilitate the achievement of deep and meaningful learning. In the context of distance education in life‑long learning, one important challenge is the design of versatile quality assurance strategies for e‑training. Based on experiences from distance life‑long learning programmes at the University of Patras’ Educational Center for Life‑Long Learning (KEDIVIM), the authors present how the principles and attributes of deep and meaningful learning can be combined with project management in practice and incorporated into an e‑Learning quality strategy. We present i) the methods used to assess the quality of the e‑Learning programmes, ii) key findings of the evaluation process, and iii) first research evaluation results on the quality of learning. This research study on learning process quality was conducted using an online questionnaire, which aimed at estimating the level of participants’ satisfaction while using interactive learning methods such as collaborative learning. Some results of the evaluation indicate that the e‑Learning quality strategy led to e‑Learning programmes that used active learning methods to achieve a high level of learner satisfaction towards deep and meaningful learning.
The paper is concerned with an elliptic variational inequality associated with a free boundary obstacle problem for the biharmonic operator. We study the bounds of the difference between the exact solution (minimizer) of the corresponding variational problem and any function (approximation) from the energy class satisfying the prescribed boundary conditions and the restrictions stipulated by the obstacle. Using the general theory developed for a wide class of convex variational problems we deduce the error identity. One part of this identity characterizes the deviation of the function (approximation) from the exact solution, whereas the other is a fully computed value (it depends only on the data of the problem and known functions). In real life computations, this identity can be used to control the accuracy of approximate solutions. The measure of deviation from the exact solution used in the error identity contains terms of different nature. Two of them are the norms of the difference between the exact solutions (of the direct and dual variational problems) and corresponding approximations. Two others are not representable as norms. These are nonlinear measures vanishing if the coincidence set defined by means of an approximate solution satisfies certain conditions (for example, coincides with the exact coincidence set). The error identity is true for any admissible (conforming) approximations of the direct variable, but it imposes some restrictions on the dual variable. We show that these restrictions can be removed, but in this case the identity is replaced by an inequality. For any approximations of the direct and dual variational problems, the latter gives an explicitly computable majorant of the deviation from the exact solution. Several examples illustrating the established identities and inequalities are presented.
The purpose of this research was to evaluate the performances of female middle- and long-distance runners before and after the implementation of a new antidoping strategy (the Athlete Biological Passport [ABP]) in a country accused of systematic doping. A retrospective analysis of the results of Russian National Championships from 2008 to 2017 was performed. The 8 best female performances for the 800-m, 1500-m, 3000-m steeplechase, 5000-m, and 10,000-m events from the semifinals and finals were analyzed. The yearly number of athletes fulfilling standard qualifications for international competitions was also evaluated. Overall, numbers of athletes banned for doping in 2008–2017 were calculated. As a result, 4 events (800, 1500, 5000 [all P < .001], and 10,000 m [P < .01]) out of 5 showed statistically significant deterioration in the performances when comparing before and after the introduction of the ABP. The 3000-m steeplechase was the only event that did not show statistically significant change. The highest relative decrease in the number of runners who met standard qualification for international competition was for the 5000-m event (46%), followed by 1500-m (42%), 800-m (38%), 10,000-m (17%), and 3000-m steeplechase (1%). In conclusion, implementation of the ABP was followed by a significant reduction in the performance of female runners in a country accused of systematic doping. It can be reasonably speculated that more stringent antidoping testing, more specifically the introduction of the ABP, is a key reason for this reduction.
Many real-life problems can be modelled as multiobjective optimization problems. Such problems often consist of multiple conflicting objectives to be optimized simultaneously. Multiple optimal solutions exist for these problems, and no single solution can be said to be the best without preferences given by a domain expert. Preferences can be used to find satisfying solutions: optimal solutions that best match the expert’s preferences. To model the preferences of the expert and aid them in finding satisfying solutions, a novel method is proposed. The method utilizes machine learning to adaptively train a belief-rule-based system to learn a domain expert’s preferences, using preference information gathered during an interactive process. Belief-rule-based systems are explainable generalized expert systems, which have not previously been used in the manner described in this paper to model the preferences of a domain expert for a multiobjective optimization problem. In the case study conducted, the satisfying solutions found using the learned preferences are concluded to be compatible with the preferences of the expert, which supports the proposed method’s viability as a decision-making support tool.
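The Pareto optimality notion underlying such methods can be sketched as a simple non-dominance filter. The points below are hypothetical, and minimization is assumed in both objectives.

```python
# A point dominates another if it is no worse in every objective and
# strictly better in at least one (minimization).
def dominates(a, b):
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep only points that no other point dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

points = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(pareto_front(points))   # -> [(1, 5), (2, 3), (4, 1)]
```

The three surviving points are mutually incomparable trade-offs; choosing among them is exactly where the expert's preferences enter.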
This work-in-progress research investigates teacher-student communication via Learning Management Systems (LMS) in highly populated courses. An LMS called TIM (The Interactive Material) includes a specific commenting technology that attempts to make teacher-student dialog effortless. The research goal is to explore students’ willingness to use the technology and to identify patterns of usage. To these ends, a survey with both Likert and open-ended questions was issued to CS1 and CS2 students. A favorable student evaluation was observed, while several critical viewpoints that inform technology development were also revealed. We noticed that besides appreciating the possibility of making comments, many students benefited from peripheral participation without being active in commenting themselves. Informal communication appeared to be preferred, and the commenting technology was considered the second-best channel in this regard, after face-to-face interaction. The results are discussed in the light of Transactional Distance Theory and related literature to inform basic research.
This work-in-progress paper in the research category reports preliminary findings on how students taking introductory computing courses develop identity, from the perspective of study difficulties. The motivation was that students identified a lack of meaning and prospects (cf. identity) as a study difficulty in a previous qualitative study. The present study further explores this finding by issuing both an identity development scale and a self-efficacy scale to a larger first-year student cohort. The aim is to characterize the study cohort by the aspects included in the identity development scale, and thereby increase understanding of students’ challenges. Moreover, a correlation analysis between identity development and self-efficacy was performed to explore whether, for instance, low self-efficacy was related to a still loose identity choice. We also examined the effect of age. The main observations included that many students showed ruminative exploration of identity, which was negatively associated with self-efficacy. Altogether, a rather high number of negative and neutral responses with respect to the identity choice was observed. An initial look at the self-efficacy distributions suggests that students related to challenge positively, while a large number of neutral answers was found with respect to the dimension of Effort. This might be indicative of uncertainty about doing the work. Regarding the effect of age, younger students were observed to worry more about the future than older students.
Defining and finding robust efficient solutions to uncertain multiobjective optimization problems has been an issue of growing interest recently. Different concepts have been published defining what a “robust efficient” solution is. Each of these concepts leads to a different set of solutions, but it is difficult to visualize and understand the differences between these sets. In this paper we develop an approach for comparing such sets of robust efficient solutions, namely we analyze their outcomes under the nominal scenario and in the worst case using the upper set-less order from set-valued optimization. Analyzing the set of nominal efficient solutions, the set of minmax robust efficient solutions and different sets of lightly robust efficient solutions gives insight into robustness and nominal objective function values of these sets of solutions. Among others we can formally prove that lightly robust efficient solutions are good compromises between nominal efficient solutions and minmax robust efficient solutions. In addition, we also propose a measure to quantify the price of robustness of a single solution. Based on the measure, we propose two strategies which can be used to support a decision maker to find solutions to a multiobjective optimization problem under uncertainty. All our results are illustrated by examples.
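The trade-off the paper formalizes for whole solution sets can be glimpsed for single solutions with a toy scenario table. The outcome values below are invented; minimization is assumed, and scenario 0 is the nominal one.

```python
# Outcome of each solution under three scenarios (hypothetical numbers).
outcomes = {
    "nominal-optimal": [1.0, 9.0, 8.0],
    "minmax-robust":   [4.0, 5.0, 5.0],
    "lightly-robust":  [2.0, 6.0, 6.0],
}

for name, vals in outcomes.items():
    # Nominal value vs. worst case over all scenarios.
    print(f"{name:16s} nominal={vals[0]:.1f} worst-case={max(vals):.1f}")

# The lightly robust solution trades a small nominal loss (2.0 vs 1.0)
# for a much better worst case (6.0 vs 9.0): the compromise character
# the paper establishes formally for sets of solutions.
```

The "price of robustness" measure mentioned above would quantify exactly this kind of nominal-value sacrifice per solution.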
We present H-KPP, hypervisor-based protection for kernel code and data structures. H-KPP prevents the execution of unauthorized code in kernel mode. In addition, H-KPP protects certain object fields from malicious modifications. H-KPP can protect modern kernels equipped with BPF facilities and loadable kernel modules. H-KPP does not require modifying or recompiling the kernel. Unlike many other systems, H-KPP is based on a thin hypervisor and includes a novel SLAT switching mechanism, which allows H-KPP to achieve very low (≈6%) performance overhead compared to baseline Linux.
In this paper, we investigate a resource allocation and computation offloading problem in a heterogeneous mobile edge computing (MEC) system. In the considered system, a wireless power transfer (WPT) base station (BS) with an MEC server is able to deliver wireless energy to the mobile devices (MDs), and the MDs can utilize the harvested energy for local computing or for task offloading to the WPT BS or to a macro BS (MBS) with a stronger computing server. In particular, we consider that the WPT BS can utilize a full- or half-duplex wireless energy transmission mode to empower the MDs. This work focuses on optimizing the offloading decision, the full-/half-duplex energy harvesting mode, and the energy harvesting (EH) time allocation with the objective of minimizing the energy consumption of the MDs. As the formulated problem has a non-convex mixed-integer programming structure, we use the quadratically constrained quadratic program (QCQP) and semi-definite relaxation (SDR) methods to solve it. The simulation results demonstrate the effectiveness of the proposed scheme.
Continuous software engineering has become commonplace in numerous fields. However, in regulating intensive sectors, where additional concerns need to be taken into account, it is often considered difficult to apply continuous development approaches, such as devops. In this paper, we present an approach for using pull requests as design controls, and apply this approach to machine learning in certified medical systems leveraging model cards, a novel technique developed to add explainability to machine learning systems, as a regulatory audit trail. The approach is demonstrated with an industrial system that we have used previously to show how medical systems can be developed in a continuous fashion.
Feature selection (FS) may improve the performance, cost-efficiency, and understandability of supervised machine learning models. In this paper, FS for the recently introduced distance-based supervised machine learning model is considered for regression problems. The study is contextualized by first providing an umbrella review (review of reviews) of recent development in the research field. We then propose a saliency-based one-shot wrapper algorithm for FS, which is called MAS-FS. The algorithm is compared with a set of other popular FS algorithms, using a versatile set of simulated and benchmark datasets. Finally, experimental results underline the usefulness of FS for regression, confirming the utility of certain filter algorithms and particularly the proposed wrapper algorithm.
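The proposed MAS-FS algorithm is not reproduced here; as a hedged stand-in, the following sketches a generic greedy forward wrapper around a leave-one-out 1-NN regressor on synthetic data, illustrating the wrapper idea of scoring feature subsets by a model's predictive error.

```python
import random

random.seed(1)
n = 60
X = [[random.random() for _ in range(4)] for _ in range(n)]
y = [3 * x[0] - 2 * x[1] for x in X]      # only features 0 and 1 matter

def loo_error(feats):
    """Mean squared leave-one-out error of a 1-NN regressor using `feats`."""
    err = 0.0
    for i in range(n):
        j = min((k for k in range(n) if k != i),
                key=lambda k: sum((X[i][f] - X[k][f]) ** 2 for f in feats))
        err += (y[i] - y[j]) ** 2
    return err / n

selected, remaining = [], [0, 1, 2, 3]
while remaining:
    best = min(remaining, key=lambda f: loo_error(selected + [f]))
    if selected and loo_error(selected + [best]) >= loo_error(selected):
        break                             # no improvement left: stop
    selected.append(best)
    remaining.remove(best)

print(selected)   # the informative features 0 and 1 are selected
```

Unlike this iterative greedy loop, a one-shot wrapper such as the one proposed scores features in a single pass, trading some accuracy for speed.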
Various biotic and abiotic stresses are causing decline in forest health globally. Presently, one of the major biotic stress agents in Europe is the European spruce bark beetle (Ips typographus L.) which is increasingly causing widespread tree mortality in northern latitudes as a consequence of the warming climate. Remote sensing using unoccupied aerial systems (UAS) together with evolving machine learning techniques provide a powerful tool for fast-response monitoring of forest health. The aim of this study was to investigate the performance of a deep one-stage object detection neural network in the detection of damage by I. typographus in Norway spruce trees using UAS RGB images. A Scaled-YOLOv4 (You Only Look Once) network was implemented and trained for tree health analysis. Datasets for model training were collected during 2013–2020 from three different areas, using four different RGB cameras, and under varying weather conditions. Different model training options were evaluated, including two different symptom rules, different partitions of the dataset, fine-tuning, and hyperparameter optimization. Our study showed that the network was able to detect and classify spruce trees that had visually separable crown symptoms, but it failed to separate spruce trees with stem symptoms and a green crown from healthy spruce trees. For the best model, the overall F-score was 89%, and the F-scores for the healthy, infested, and dead trees were 90%, 79%, and 98%, respectively. The method adapted well to the diverse dataset, and the processing results with different options were consistent. The results indicated that the proposed method could enable implementation of low-cost tools for management of I. typographus outbreaks.
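The per-class F-scores quoted above combine precision and recall. With hypothetical detection counts (not the study's data), the computation looks as follows.

```python
def f_score(tp, fp, fn):
    """Harmonic mean of precision and recall from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

counts = {                      # class: (TP, FP, FN); invented numbers
    "healthy":  (450, 50, 50),
    "infested": (80, 25, 18),
    "dead":     (97, 2, 2),
}
for cls, (tp, fp, fn) in counts.items():
    print(f"{cls}: F = {f_score(tp, fp, fn):.2f}")
```

An ordering like this (dead easiest, infested hardest) matches the pattern reported in the abstract, where stem-symptom trees with green crowns were the difficult class.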
Solving real-life data-driven multiobjective optimization problems involves many complicated challenges. These challenges include preprocessing the data, modelling the objective functions, getting a meaningful formulation of the problem, and supporting decision makers to find preferred solutions in the existence of conflicting objective functions. In this paper, we tackle the problem of optimizing the composition of microalloyed steels to get good mechanical properties such as yield strength, percentage elongation, and Charpy energy. We formulate a problem with six objective functions based on data available and support two decision makers in finding a solution that satisfies them both. To enable two decision makers to make meaningful decisions for a problem with many objectives, we create the so-called MultiDM/IOPIS algorithm, which combines multiobjective evolutionary algorithms and scalarization functions from interactive multiobjective optimization methods in novel ways. We use the software framework called DESDEO, an open-source Python framework for interactively solving multiobjective optimization problems, to create the MultiDM/IOPIS algorithm. We provide a detailed account of all the challenges faced while formulating and solving the problem. We discuss and use many strategies to overcome those challenges. Overall, we propose a methodology to solve real-life data-driven problems with multiple objective functions and decision makers. With this methodology, we successfully obtained microalloyed steel compositions with mechanical properties that satisfied both decision makers.
Current real-time technologies for Linux require either partitioning, to run an RTOS alongside Linux, or extensive kernel patching. We describe the offline nanovisor: a minimal real-time library OS running in a lightweight hypervisor under Linux that executes on an offline processor, i.e., a processor core removed from the running operating system. The offline processor executes userspace code through the hyplet, a nanovisor mechanism that allows the kernel to execute userspace programs without delays. Combining these technologies offers a way to achieve hard real time in standard Linux. We demonstrate high-speed access in various use cases using a userspace timer at frequencies of up to 20 kHz, with a jitter of a few hundred nanoseconds, on a relatively slow ARM processor.
Recently, there has been a growing need to develop very-large-scale integration (VLSI) circuits with low energy consumption and high speed for use in fast transmission systems. In addition, the main challenge in designing irreversible integrated circuits is heat generation due to data loss. Thus, in recent years, reversible design has been preferred for low-power VLSI circuits because data are not lost. In this article, a new design of a parity-preserving reversible (PPR) floating-point divider is suggested. A floating-point divider structure includes a parallel adder, a multiplexer, a register, and a left-shift register. To optimize these circuits, we first propose a 5×5 PPR block and a PPR D-latch. Second, using the proposed circuits, a ripple-carry adder, a register, and efficient parallel-input-parallel-output left-shift, rounding, and normalization register circuits are introduced in PPR logic. The comparisons illustrate that the suggested circuits are preferable to those presented in previous works in terms of various criteria such as quantum cost, constant inputs, and garbage outputs.
Information loss is generally related to power consumption. Therefore, reducing information loss is an interesting challenge in designing digital systems. Quaternary reversible circuits have received significant attention due to their low-power design applications and attractive advantages over binary reversible logic. Multiplexer and demultiplexer circuits are crucial parts of computing circuits in ALU, and their efficient design can significantly affect the processors’ performance. A new scalable realization of quaternary reversible 4×1 multiplexer and 1×4 demultiplexer, based on quaternary 1-qudit Shift, 2-qudit Controlled Feynman, and 2-qudit Muthukrishnan-Stroud gates, is presented in this paper. Moreover, the corresponding generalized quaternary reversible n ×1 multiplexer and 1× n demultiplexer circuits are proposed. The comparison, with respect to the current literature, shows that the proposed circuits are more efficient in terms of quantum cost, the number of garbage outputs, and the number of constant inputs.
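The specific PPR and quaternary gates of these papers are not reproduced here. As a simpler, standard stand-in, the binary Fredkin gate shows how "reversible" and "parity-preserving" are checkable properties of a gate's truth table.

```python
from itertools import product

def fredkin(c, a, b):
    """Controlled swap: if the control c is 1, swap a and b."""
    return (c, b, a) if c else (c, a, b)

inputs = list(product([0, 1], repeat=3))
outputs = [fredkin(*x) for x in inputs]

# Reversible: the input-output map is a bijection (no information lost).
is_reversible = len(set(outputs)) == len(inputs)
# Parity-preserving: input and output bit parities always agree.
preserves_parity = all(sum(x) % 2 == sum(y) % 2
                       for x, y in zip(inputs, outputs))
print(is_reversible, preserves_parity)    # -> True True
```

The Fredkin gate is in fact conservative (it preserves the exact number of 1s), which is a stronger property than the parity preservation required of PPR designs.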
Dimension reduction is one of the key data transformation techniques in machine learning and knowledge discovery. It can be realized by using linear and nonlinear transformation techniques. An additive autoencoder for dimension reduction, which is composed of a serially performed bias estimation, linear trend estimation, and nonlinear residual estimation, is proposed and analyzed. Compared to the classical model, adding an explicit linear operator to the overall transformation and considering the nonlinear residual estimation in the original data dimension significantly improves the data reproduction capabilities of the proposed model. The computational experiments confirm that an autoencoder of this form, with only a shallow network to encapsulate the nonlinear behavior, is able to identify an intrinsic dimension of a dataset with low autoencoding error. This observation leads to an investigation in which shallow and deep network structures, and how they are trained, are compared. We conclude that the deeper network structures obtain lower autoencoding errors during the identification of the intrinsic dimension. However, the detected dimension does not change compared to a shallow network. As far as we know, this is the first experimental result concluding no benefit from a deep architecture compared to its shallow counterpart.
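A one-dimensional caricature of the serial decomposition (bias, then linear trend, then nonlinear residual) on synthetic data. This is not the paper's network, only the additive idea it builds on.

```python
import math

# Synthetic 1-D signal: constant + linear trend + small nonlinearity.
xs = [i / 20 for i in range(21)]
ys = [1.0 + 2.0 * x + 0.3 * math.sin(3 * x) for x in xs]

bias = sum(ys) / len(ys)                       # 1) bias estimation
yc = [y - bias for y in ys]

xm = sum(xs) / len(xs)                         # 2) linear trend (least squares)
slope = (sum((x - xm) * y for x, y in zip(xs, yc))
         / sum((x - xm) ** 2 for x in xs))
residual = [y - slope * (x - xm) for x, y in zip(xs, yc)]

# 3) what remains is the nonlinear part left for the autoencoder stage
print(round(bias, 3), round(slope, 3))
print(round(max(abs(r) for r in residual), 3))  # small nonlinear remainder
```

Making the bias and linear stages explicit, as sketched here, is what lets the nonlinear stage concentrate on the residual alone.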
Many libraries of open-source implementations of multiobjective optimization problems (MOPs) and evolutionary algorithms (MOEAs) have been developed in recent years. These libraries enable researchers to solve their MOPs using diverse MOEAs. Some libraries also implement interactive MOEAs, which enable decision-makers (experts in the domain of the MOP) to provide their preferences and guide the optimization process toward their region of interest. These libraries also provide access to visualization methods and benchmarking tools. However, they do not currently implement a database to store and utilize the data generated while running MOEAs. We propose the creation of SIVA DB, a database designed to be easily incorporated into existing libraries as a modular addition. SIVA DB provides a standard way to archive an MOEA's population and the metadata associated with each population member. Such metadata can include, e.g., the parameters and state of the MOEA and the preferences the decision-maker gives (in the case of interactive MOEAs). The database can store data from multiple runs of any number of MOEAs, and even data from different MOPs. SIVA DB provides easy access to the contained data to analyze the optimization process or create efficient MOEAs. We demonstrate the latter in this paper with experiments.
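The abstract does not specify SIVA DB's schema. Purely as an illustration of the idea (table and column names invented), a relational archive of population members with free-form metadata could look like this.

```python
import json
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE archive (
    run_id     TEXT,
    generation INTEGER,
    variables  TEXT,    -- JSON-encoded decision vector
    objectives TEXT,    -- JSON-encoded objective vector
    metadata   TEXT     -- JSON: algorithm state, DM preferences, ...
)""")

# Archive one individual from a hypothetical run.
con.execute("INSERT INTO archive VALUES (?, ?, ?, ?, ?)",
            ("nsga2-run-1", 5, json.dumps([0.1, 0.9]),
             json.dumps([3.2, 1.4]),
             json.dumps({"crossover_p": 0.9, "ref_point": [3.0, 2.0]})))

row = con.execute("SELECT objectives FROM archive WHERE run_id = ?",
                  ("nsga2-run-1",)).fetchone()
print(json.loads(row[0]))   # -> [3.2, 1.4]
```

Keeping variables, objectives, and metadata as separate columns, as sketched, is one way a database could serve both post-hoc analysis and warm-starting later MOEA runs.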
The strategic environment is evolving rapidly with the recognition of cyberspace as a domain of warfare. The increased interest in cyber as a part of defence has heightened the need for theoretical tools suitable to assess cyber threat perceptions and responses to these threats. Drawing from previous research, we will formulate an analytical framework to study the formation of Russian thinking on cyber threats as a part of Russian strategic culture. This article identifies a sense of vulnerability, the narrative of Russia as a besieged fortress and the technological inferiority of Russia as specific factors influencing Russian cyber threat perception.
Cognitive automation powered by advanced intelligent technologies enables organizations to automate more and more of their knowledge-work tasks. Though offering higher efficiency and lower costs, cognitive automation exacerbates erosion of humans’ skills and expertise on the automated tasks. Letting go of obsolete skills is necessary to reap the technology’s benefits – however, erosion of essential human expertise is problematic if workers remain accountable for tasks where they lack sufficient understanding, rendering them incapable to respond if the automation fails. Though the phenomenon is widely acknowledged, the dynamics behind such undesired skill erosion are poorly understood. Thus, taking the perspective of sociotechnical systems, we conduct a case study of an accounting firm that had experienced skill erosion over years of reliance on their software’s automated functions. We synthesize our findings using causal loop modeling based on system dynamics. The resulting dynamic model explains skill erosion via an interplay between humans’ automation reliance, complacency, and mindful conduction. It shows how increasing reliance on automation fosters complacency at both individual and organizational levels, weakening workers’ mindfulness across three work task facets (activity-awareness, competence maintenance, and output assessment), resulting in skill erosion. Such skill erosion may remain obscure, acknowledged by neither workers nor managers. We discuss these implications for theory and practice, also identifying directions for future research.
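A toy difference-equation rendering of the reinforcing loop the causal model describes: reliance on automation feeds complacency, which erodes mindfulness, which erodes skill, which in turn deepens reliance. All parameters are invented and are not the paper's calibrated model.

```python
reliance, complacency, skill = 0.5, 0.2, 0.9
history = []
for year in range(15):
    mindfulness = 1.0 - complacency           # complacency crowds out mindfulness
    skill += 0.3 * (mindfulness - skill)      # skill decays toward practice level
    complacency = min(1.0, complacency + 0.1 * reliance)
    reliance = min(1.0, reliance + 0.1 * (1 - skill))
    history.append(round(skill, 2))

print(history)   # skill drifts steadily downward as the loop reinforces itself
```

Even this crude sketch reproduces the qualitative point above: the erosion is gradual and easy to miss year to year, which is why it "may remain obscure" to both workers and managers.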
Multiobjective optimization problems have multiple conflicting objective functions to be optimized simultaneously. They have many Pareto optimal solutions representing different trade-offs, and a decision-maker needs to find the most preferred one. Although most multiobjective evolutionary algorithms approximate the Pareto optimal set, their variants incorporate preference information to focus on a subset of solutions that interest the decision-maker. Interactive methods allow decision-makers to provide preference information iteratively during the solution process, enabling them to learn about the available solutions and the feasibility of their preferences. Nevertheless, most interactive evolutionary methods do not sufficiently support the decision-maker in finding the most preferred solution and may be cognitively too demanding. We propose a framework for designing and implementing interactive evolutionary methods. It contains algorithmic components based on similarities in the structure of existing preference-based evolutionary algorithms and decision-makers' needs during interaction. The components can be combined in different ways to create new interactive methods or to instantiate existing ones. We show an example implementation of the proposed framework composed of three elements: a graphical user interface, a database, and a set of algorithmic components. The resulting software can be utilized to develop new methods and increase their usability in real-world applications.
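One standard algorithmic component of the kind such a framework could host is an achievement scalarizing function (ASF), which orders solutions relative to a decision-maker's reference point. The solutions, reference point, and weights below are invented; minimization is assumed.

```python
def asf(objectives, reference, weights, rho=1e-6):
    """Augmented Wierzbicki-style ASF: smaller is better (minimization)."""
    terms = [w * (f - r) for f, r, w in zip(objectives, reference, weights)]
    return max(terms) + rho * sum(terms)

solutions = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]
reference = (2.0, 2.5)                    # decision-maker's aspiration levels
weights = (1.0, 1.0)

best = min(solutions, key=lambda s: asf(s, reference, weights))
print(best)   # -> (2.0, 3.0), the solution closest to the reference point
```

Swapping the reference point between iterations, and re-ranking with the same ASF, is the basic mechanic many interactive methods share, which is what makes such functions natural reusable components.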