Many-objective optimization problems (MaOPs) contain four or more conflicting objectives to be optimized. A number of efficient decomposition-based evolutionary algorithms have been developed in recent years to solve them. However, computationally expensive MaOPs have been scarcely investigated. Typically, surrogate-assisted methods have been used in the literature to tackle computationally expensive problems, but such studies have largely focused on problems with 1–3 objectives. In this paper, we present an approach called hybrid surrogate-assisted many-objective evolutionary algorithm to solve computationally expensive MaOPs. The key features of the approach include: 1) the use of multiple surrogates to effectively approximate a wide range of objective functions; 2) the use of two sets of reference vectors for improved performance on irregular Pareto fronts (PFs); 3) effective use of archive solutions during offspring generation; and 4) a local improvement scheme for generating high-quality infill solutions. Furthermore, the approach includes constraint handling, which is often overlooked in contemporary algorithms. The performance of the approach is benchmarked extensively on a set of unconstrained and constrained problems with regular and irregular PFs. A statistical comparison with existing techniques highlights the efficacy and potential of the approach.
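To make the "multiple surrogates" ingredient concrete, the sketch below fits several cheap candidate models per objective and keeps whichever cross-validates best, so differently behaved objectives each get a suitable approximator. The candidate models, the test functions, and the selection-by-CV rule are illustrative assumptions, not the paper's exact setup.

```python
# Hypothetical "multiple surrogates" sketch: pick the best cheap model
# per objective by cross-validation.  Candidates and data are assumptions.
import numpy as np
from sklearn.base import clone
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(60, 4))            # expensive evaluations so far
objectives = [X[:, 0] ** 2 + X[:, 1],          # a smooth objective
              np.sin(6 * X[:, 2]) + X[:, 3]]   # a multimodal objective

candidates = [GaussianProcessRegressor(), KNeighborsRegressor(n_neighbors=5)]
surrogates = []
for f in objectives:
    scores = [cross_val_score(m, X, f, cv=3,
                              scoring="neg_mean_squared_error").mean()
              for m in candidates]
    best = clone(candidates[int(np.argmax(scores))])
    surrogates.append(best.fit(X, f))          # cheap stand-in for f

print([type(s).__name__ for s in surrogates])
```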
The minimal learning machine (MLM) training procedure consists of solving a linear system with multiple measurement vectors (MMV) created between the geometric configurations of points in the input and output spaces. Such geometric configurations are built upon two matrices created using subsets of input and output points, named reference points (RPs). The present paper considers an extension of the focal underdetermined system solver (FOCUSS) for MMV linear system problems with additive noise, named regularized MMV FOCUSS (regularized M-FOCUSS), and evaluates it in the task of selecting input reference points for regression settings. Experiments were carried out using UCI datasets, where the proposal was able to produce sparser models and achieve competitive performance when compared to the regular strategy of selecting MLM input RPs.
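For orientation, here is a minimal sketch of the MLM training step that the RP-selection method targets: an MMV linear system between input- and output-space distance matrices. The synthetic data, the random RP choice, and skipping the output multilateration step are simplifications assumed for brevity.

```python
# Minimal MLM training sketch: an MMV system between distance matrices.
# Sparser RP selection (e.g., regularized M-FOCUSS) would zero rows of B.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                 # inputs
Y = (np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)).reshape(-1, 1)

k = 40                                        # number of reference points
idx = rng.choice(len(X), size=k, replace=False)
Rx, Ry = X[idx], Y[idx]                       # input / output reference points

Dx = cdist(X, Rx)                             # N x k input-space distances
Dy = cdist(Y, Ry)                             # N x k output-space distances

# MMV linear system: one right-hand side per output reference point.
B, *_ = np.linalg.lstsq(Dx, Dy, rcond=None)
print("relative residual:", np.linalg.norm(Dx @ B - Dy) / np.linalg.norm(Dy))
```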
The fog computing system is an emerging architecture for providing computing, storage, control, and networking capabilities for realizing the Internet of Things. In a fog computing system, mobile devices (MDs) can offload their data or computationally expensive tasks to a fog node within their proximity, instead of to the distant cloud. Although offloading can reduce energy consumption at the MDs, it may also incur a larger execution delay, including transmission time between the MDs and the fog/cloud servers, and waiting and execution time at the servers. Therefore, how to balance the energy consumption and delay performance is of research importance. Moreover, based on the energy consumption and delay, how to design a cost model for the MDs to enjoy the fog and cloud services is also important. In this paper, we utilize queuing theory to conduct a thorough study on the energy consumption, execution delay, and payment cost of offloading processes in a fog computing system. Specifically, three queuing models are applied, respectively, to the MD, fog, and cloud centers, and the data rate and power consumption of the wireless link are explicitly considered. Based on the theoretical analysis, a multiobjective optimization problem is formulated with a joint objective to minimize the energy consumption, execution delay, and payment cost by finding the optimal offloading probability and transmit power for each MD. Extensive simulation studies are conducted to demonstrate the effectiveness of the proposed scheme, and its superior performance over several existing schemes is observed.
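The toy calculation below illustrates the flavor of such a queuing-theoretic tradeoff: an M/M/1 queue at the MD and another at the fog node, with a grid search over the offloading probability against a scalarized energy-delay-cost objective. All rates, powers, prices, and weights are assumed values, not the paper's model.

```python
# Toy energy-delay-cost tradeoff with M/M/1 queues; all numbers assumed.
import numpy as np

lam = 4.0                         # task arrival rate at the MD (tasks/s)
mu_md, mu_fog = 6.0, 20.0         # local and fog service rates (tasks/s)
p_cpu, p_tx = 0.9, 1.3            # local CPU and transmit power (W)
rate_tx = 50.0                    # uplink transmission rate (tasks/s)
price = 0.02                      # fog payment per offloaded task

best = None
for q in np.linspace(0.0, 0.99, 100):          # offloading probability
    lam_loc, lam_off = lam * (1 - q), lam * q
    if lam_loc >= mu_md or lam_off >= mu_fog:
        continue                                # unstable queue, skip
    d_loc = 1.0 / (mu_md - lam_loc)             # M/M/1 mean sojourn time
    d_off = 1.0 / rate_tx + 1.0 / (mu_fog - lam_off)
    delay = (1 - q) * d_loc + q * d_off
    energy = (1 - q) * p_cpu / mu_md + q * p_tx / rate_tx
    cost = q * price
    obj = delay + 5.0 * energy + 10.0 * cost    # assumed scalarization weights
    if best is None or obj < best[0]:
        best = (obj, q)
print(f"best offloading probability ~ {best[1]:.2f}")
```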
Background In the photodynamic therapy (PDT) of non-aggressive basal cell carcinomas (BCCs), 5-aminolevulinic acid nanoemulsion (BF-200 ALA) has shown non-inferior efficacy when compared with methyl aminolevulinate (MAL), a widely used photosensitizer. Hexyl aminolevulinate (HAL) is an interesting alternative photosensitizer. To our knowledge, this is the first study using HAL-PDT in the treatment of BCCs. Objectives To compare the histological clearance, tolerability (pain and post-treatment reaction), and cosmetic outcome of MAL, BF-200 ALA, and low-concentration HAL in the PDT of non-aggressive BCCs. Methods Ninety-eight histologically verified non-aggressive BCCs met the inclusion criteria, and 54 patients with 95 lesions completed the study. The lesions were randomized to receive LED-PDT in two repeated treatments with MAL, BF-200 ALA, or HAL. Efficacy was assessed both clinically and confirmed histologically at three months by blinded observers. Furthermore, cosmetic outcome, pain, post-treatment reactions, fluorescence, and photobleaching were evaluated. Results According to intention-to-treat analyses, the histologically confirmed lesion clearance was 93.8% (95% confidence interval [CI] = 79.9–98.3) for MAL, 90.9% (95% CI = 76.4–96.9) for BF-200 ALA, and 87.9% (95% CI = 72.7–95.2) for HAL, with no differences between the arms (p=0.84). There were no differences between the arms as regards pain, post-treatment reactions, or cosmetic outcome. Conclusions PDT with low-concentration HAL and BF-200 ALA has efficacy, tolerability, and cosmetic outcomes similar to those of MAL. HAL is an interesting new option in dermatological PDT, since good efficacy is achieved with a low concentration.
Harnessing the vigor of women's potential is essential for inclusive economic growth in a digital economy moving toward an aging society. This can be a soft engine for sustainable growth, substitutable for costly hard investment. While there exists explicit evidence of a virtuous cycle between economic growth and gender balance improvement, emerging countries cannot afford to overcome the constraints of low income. Given the foregoing, this paper analyzed possible co-evolution between economic growth, gender balance improvement, and digital innovation initiated by information and communication technology (ICT) advancement. Using a unique dataset representing the state of gender balance improvement as a function of economic growth and ICT advancement, an empirical numerical analysis of 44 countries was attempted. These countries were classified as emerging, industrialized, and with a specific culture. It was found that while industrialized countries, typically Finland, have realized high performance in co-evolution, emerging countries have been constrained by low ICT advancement, and countries with a specific culture have, notwithstanding their high economic level, also been constrained by a traditional male-dominated culture. Japan is a typical case. Based on these findings, lessons from the contrasting trajectories of Finland and Japan for emerging countries were analyzed. It is suggested that advancement of ICT, not only quantitatively but also qualitatively, in such a way as to construct a self-propagating system, is crucial for emerging countries. A new practical approach for harnessing the potential resources for sustainable growth was thus explored.
The publication indicator of the Finnish research funding system is based on a manual ranking of scholarly publication channels. These ranks, which represent the evaluated quality of the channels, are continuously kept up to date and thoroughly reevaluated every four years by groups of nominated scholars belonging to different disciplinary panels. This expert-based decision-making process is informed by available citation-based metrics and other relevant metadata characterizing the publication channels. The purpose of this paper is to introduce various approaches that can explain the basis and evolution of the quality of publication channels, i.e., ranks. This is important for the academic community, whose research work is being governed using the system. Data-based models that, with sufficient accuracy, explain the level of or changes in ranks provide assistance to the panels in their multi-objective decision making, thus suggesting and supporting the need to use more cost-effective, automated ranking mechanisms. The analysis relies on novel advances in machine learning systems for classification and predictive analysis, with special emphasis on local and global feature importance techniques.
Recognition of tree species and geospatial information on tree species composition is essential for forest management. In this study, tree species recognition was examined using hyperspectral imagery from visible to near-infrared (VNIR) and short-wave infrared (SWIR) camera sensors in combination with a 3D photogrammetric canopy surface model based on RGB camera stereo-imagery. An arboretum with a diverse selection of 26 tree species from 14 genera was used as a test area. Aerial hyperspectral imagery and high spatial resolution photogrammetric color imagery were acquired from the test area using unmanned aerial vehicle (UAV) borne sensors. Hyperspectral imagery was processed to calibrated reflectance mosaics and was tested along with mosaics based on original image digital number (DN) values. Two alternative classifiers were tested for predicting the tree species and genus, as well as for selecting an optimal set of remote sensing features for this task: a k-nearest neighbor (k-nn) method combined with a genetic algorithm, and a random forest method. The combination of VNIR, SWIR, and 3D features performed better than any of the data sets individually. Furthermore, the calibrated reflectance values performed better compared to uncorrected DN values. These trends were similar with both tested classifiers. Of the classifiers, the k-nn combined with the genetic algorithm provided consistently better results than the random forest algorithm. The best result was thus achieved using calibrated reflectance features from VNIR and SWIR imagery together with 3D point cloud features; the proportion of correctly classified trees was 0.823 for tree species and 0.869 for tree genus.
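The following compact sketch shows the kind of classifier combination described above: a binary genetic algorithm that selects features for a k-NN classifier, with cross-validated accuracy as the fitness. Synthetic features stand in for the VNIR/SWIR/3D variables, and the GA operators (truncation selection, one-point crossover, bit-flip mutation) are assumed simplifications.

```python
# Hedged sketch: GA-based feature selection wrapped around a k-NN classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=300, n_features=40, n_informative=8,
                           n_classes=4, n_clusters_per_class=1, random_state=1)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1]))       # random binary population
for gen in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]      # truncation selection
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, X.shape[1])             # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.02          # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```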
Successive increases in R&D that create new functionality are essential for global competitiveness. However, unexpectedly, as a consequence of the two-faced nature of information and communication technology (ICT), excessive R&D results in a marginal productivity decline leading to a decrease in digital value creation. In order to overcome such a dilemma, global ICT firms have been endeavoring to transform themselves toward disruptive business models. Neo open innovation that harnesses soft innovation resources may be a solution to this critical question. On the basis of an empirical analysis focusing on the forefront endeavors of global ICT firms confronting this dilemma, this paper attempted to demonstrate the above hypothetical view. Noteworthy findings suggestive of how to transform the traditional business model into disruptive innovation that satisfies people's demand, corresponding to their shift in preferences in the digital economy, are thus provided. In addition, a new concept for R&D resources in the digital economy is postulated.
Wireless Local Area Network (WLAN) positioning has become a popular localization system due to its low-cost installation and the widespread availability of WLAN access points. Traditional grid-based radio frequency (RF) fingerprinting (GRFF) suffers from two drawbacks. First, it requires a costly and inefficient data collection and updating procedure; second, the method requires time-consuming data pre-processing before it outputs the user position. This paper proposes Cluster-based RF Fingerprinting (CRFF) to overcome these limitations by using modified Minimization of Drive Tests data, which can be autonomously collected by cellular operators from their subscribers. The effect of environmental changes and device variation on positioning accuracy has been investigated. Experimental results show that even under these variations, CRFF can improve positioning accuracy by 15.46% and 22.30% in the 95th percentile of positioning error compared to the GRFF and K-nearest neighbour methods, respectively.
Driven by digital solutions, the bioeconomy has taken major steps forward in recent years toward the long-lasting goal of transition from a traditional fossil economy to a bioeconomy-based circular economy. The coupling of digitalization and bioeconomy is leading towards a digitalized bioeconomy that can satisfy the shift in consumers' preferences for eco-consciousness, which in turn induces coupling of up- and downstream operations in the value chain. Thus, the co-evolution of the coupling of digitalization and bioeconomy and of upstream and downstream operations is transforming the forest-based bioeconomy into a digital platform industry. Aiming to address this transformation, a model was developed that explains the above-mentioned dynamism, and its reliability was demonstrated through an empirical analysis focusing on the development trajectory of UPM (a forest-based ecosystem leader in Europe and a world pioneer in the circular economy) over the last quarter century, highlighting its efforts towards a planned obsolescence-driven circular economy. It was found that, with the advancement of digital innovations, UPM has incorporated a self-propagating function that accelerates digital solutions. Furthermore, this self-propagating function was triggered by coupling with a downstream leader, Amazon, in the United States. The dynamism in transforming a forest-based bioeconomy into a digital platform industry is thus clarified, and new insights common to all industries in the digital economy are provided.
Remote sensing using unmanned aerial vehicle (UAV)-borne sensors is currently a highly interesting approach for the estimation of forest characteristics. 3D remote sensing data from airborne laser scanning or digital stereo photogrammetry enable highly accurate estimation of forest variables related to the volume of growing stock and the dimensions of the trees, whereas recognition of tree species dominance and the proportion of different tree species has been a major complication in remote sensing-based estimation of stand variables. In this study, the use of UAV-borne hyperspectral imagery was examined in combination with a high-resolution photogrammetric canopy height model in estimating forest variables of 298 sample plots. Data were captured from eleven separate test sites under weather conditions varying from sunny to cloudy and partially cloudy. Both calibrated hyperspectral reflectance images and uncalibrated imagery were tested in combination with a canopy height model based on RGB camera imagery using the k-nearest neighbour estimation method. The results indicate that this data combination allows accurate estimation of stand volume, mean height and diameter: the best relative RMSE values for those variables were 22.7%, 7.4% and 14.7%, respectively. In estimating volume and dimension-related variables, the use of a calibrated image mosaic did not bring significant improvement in the results. In estimating the volumes of individual tree species, the use of calibrated hyperspectral imagery generally brought marked improvement in the estimation accuracy; the best relative RMSE values for the volumes of pine, spruce, larch and broadleaved trees were 34.5%, 57.2%, 45.7% and 42.0%, respectively.
This work is aimed at the derivation of reliable and efficient a posteriori error estimates for convection-dominated diffusion problems motivated by a linear Fokker–Planck problem appearing in computational neuroscience. We obtain computable error bounds of functional type for the static and time-dependent case and for different boundary conditions (mixed and pure Neumann boundary conditions). Finally, we present a set of various numerical examples including discussions on mesh adaptivity and space-time discretisation. The numerical results confirm the reliability and efficiency of the error estimates derived.
The partial solution variant of the cyclic reduction (PSCR) method is a direct solver that can be applied to certain types of separable block tridiagonal linear systems. Such linear systems arise, e.g., from the Poisson and the Helmholtz equations discretized with bilinear finite elements. Furthermore, the separability of the linear system entails that the discretization domain has to be rectangular and the discretization mesh orthogonal. A generalized graphics processing unit (GPU) implementation of the PSCR method is presented. The numerical results indicate up to 24-fold speedups when compared to an equivalent CPU implementation that utilizes a single CPU core. The attained floating point performance is analyzed using the roofline performance analysis model, and the resulting models show that it is mainly limited by the off-chip memory bandwidth and the effectiveness of the tridiagonal solver used for the arising tridiagonal subproblems. The performance is further accelerated using off-line autotuning techniques.
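For context, the roofline bound referenced above is simply min(peak compute, bandwidth × arithmetic intensity); the snippet below evaluates it for a few intensities. The device numbers are illustrative assumptions, not those of the GPU used in the paper.

```python
# Minimal roofline-model calculation; device numbers are assumed.
peak_flops = 1.2e12      # peak floating point rate (FLOP/s), assumed
mem_bw     = 2.0e11      # off-chip memory bandwidth (B/s), assumed

def roofline(intensity_flop_per_byte):
    """Attainable performance for a given arithmetic intensity."""
    return min(peak_flops, mem_bw * intensity_flop_per_byte)

for i in (0.5, 2.0, 6.0, 24.0):
    print(f"I = {i:5.1f} FLOP/B -> {roofline(i) / 1e9:8.1f} GFLOP/s")
# Kernels left of the ridge point (peak_flops / mem_bw = 6 FLOP/B here)
# are bandwidth-bound, matching the abstract's observation that off-chip
# memory bandwidth limits the attained performance.
```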
Mental fatigue is a common phenomenon with implicit and multidimensional properties. It induces dynamic changes in functional brain networks. However, the challenging problem of false positives appears when the connectivity is estimated by electroencephalography (EEG). In this paper, we propose a novel framework based on spatial clustering to explore the sources of mental fatigue and the functional activity changes caused by them. To suppress false positive observations, spatial clustering is implemented in the brain networks. The nodes extracted by spatial clustering are registered back to the functional magnetic resonance imaging (fMRI) source space to determine the sources of mental fatigue. The wavelet entropy of EEG in a sliding window is calculated to find the temporal features of mental fatigue. Our experimental results show that the extracted nodes correspond to the fMRI sources across different subjects and different tasks. The entropy values on the extracted nodes demonstrate clearer staged decreasing changes (deactivation). Additionally, the synchronization among the extracted nodes is stronger than that among all the nodes in the deactivation stage. The initial time of the strong synchronized deactivation is consistent with the subjective fatigue time reported by the subjects themselves. This means the synchronization and deactivation correspond to the subjective feelings of fatigue. Therefore, this functional activity pattern may be caused by the sources of mental fatigue. The proposed framework is useful for a wide range of prolonged functional imaging and fatigue detection studies.
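To illustrate the temporal feature mentioned above, the sketch below computes the wavelet entropy of a signal in sliding windows: relative energies across decomposition levels feed a Shannon entropy. The synthetic EEG, the 'db4' wavelet, and the window length are assumed choices.

```python
# Sliding-window wavelet entropy; signal, wavelet, and window are assumed.
import numpy as np
import pywt

fs = 250                                  # sampling rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

def wavelet_entropy(x, wavelet="db4", level=5):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()         # relative wavelet energy per level
    return -np.sum(p * np.log(p + 1e-12)) # Shannon entropy over levels

win, step = 2 * fs, fs // 2               # 2 s window, 0.5 s step
entropy = [wavelet_entropy(eeg[s:s + win])
           for s in range(0, eeg.size - win, step)]
print("windows:", len(entropy), "mean entropy:", np.mean(entropy))
```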
Although the data processing capabilities of modern mobile devices are developing at a fast pace, their resources are still limited in terms of processing capacity and battery lifetime. Some applications, in particular computationally intensive ones such as multimedia and gaming, often require more computational resources than a mobile device can afford. One way to address this problem is for the mobile device to offload those tasks to the centralized cloud with data centers, a nearby cloudlet, or an ad hoc mobile cloud. In this paper, we propose a data offloading and task allocation scheme for a cloudlet-assisted ad hoc mobile cloud in which the master device (MD), which has computational tasks, can access resources from nearby slave devices (SDs) or the cloudlet, instead of the centralized cloud, to share the workload, in order to reduce the energy consumption and computational cost. A two-stage Stackelberg game is then formulated in which the SDs determine the amount of data execution units that they are willing to provide, while the MD, which has the data and tasks to offload, sets the price strategies for the different SDs accordingly. By using the backward induction method, the Stackelberg equilibrium is derived. Extensive simulations are conducted to demonstrate the effectiveness of the proposed scheme.
With the increasing popularity of the smart grid, huge volumes of data are gathered from numerous sensors. How to classify, store, and analyze massive datasets to facilitate the development of the smart grid has recently attracted much attention. In particular, with the popularity of household smart meters and electricity monitoring sensors, a large amount of data can be obtained to analyze household electricity usage, so as to better diagnose leakage and theft behaviors, identify man-made tampering and data fraud, and detect powerline loss. In this paper, a time window method is first proposed to obtain the features and potential periodicity of household electricity data. Combining the denoising ability of the autoencoder and the induction ability of the feedforward neural network, a multilayer hierarchical network (MLHN) is then established to detect anomalies in single-sensor data and classify multiple groups of sensor data, respectively. The experimental results show that the accuracy of abnormal data detection and data classification is significantly improved compared with existing schemes.
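As a toy illustration of combining an autoencoder's denoising ability with a feedforward classifier, the sketch below trains both jointly on synthetic "meter readings". The layer sizes, noise level, and labels are assumptions; this is not the paper's MLHN architecture.

```python
# Hedged sketch: denoising autoencoder + classifier head, trained jointly.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.rand(512, 48)                       # 48 half-hourly readings per day
y = (X.mean(dim=1) > 0.5).long()              # toy two-class labels

encoder = nn.Sequential(nn.Linear(48, 16), nn.ReLU())
decoder = nn.Sequential(nn.Linear(16, 48))
classifier = nn.Sequential(nn.Linear(16, 2))

opt = torch.optim.Adam(list(encoder.parameters()) +
                       list(decoder.parameters()) +
                       list(classifier.parameters()), lr=1e-2)
mse, ce = nn.MSELoss(), nn.CrossEntropyLoss()

for epoch in range(200):
    noisy = X + 0.1 * torch.randn_like(X)     # denoising objective
    z = encoder(noisy)
    loss = mse(decoder(z), X) + ce(classifier(z), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

acc = (classifier(encoder(X)).argmax(dim=1) == y).float().mean()
print(f"training accuracy: {acc:.2f}")
```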
In this study, an energy-efficient optimisation scheme for a large-scale multiple-antenna system with wireless power transfer (WPT) is presented. In the considered system, the user is charged by a base station with a large number of antennas via downlink WPT and then utilises the received power to carry out uplink data transmission. Novel antenna selection, time allocation, and power allocation schemes are presented to optimise the energy efficiency of the overall system. In addition, the authors also consider that channel state information cannot be perfectly obtained when designing the resource allocation schemes. A non-linear fractional programming-based algorithm is utilised to address the formulated problem. The proposed schemes are validated by extensive simulations and show superior performance over existing schemes.
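Non-linear fractional programming of this kind is typically handled with Dinkelbach's algorithm: the ratio objective rate/power is maximized by iteratively solving a parameterized subtractive problem. Below is a minimal sketch under assumed channel, noise, and circuit-power values, with a grid search standing in for the inner solver.

```python
# Dinkelbach iteration for energy efficiency = rate / total power.
import numpy as np

g, n0, p_circuit = 2.0, 1.0, 0.5     # channel gain, noise, circuit power (assumed)
p_grid = np.linspace(1e-3, 5.0, 2000)

def rate(p):
    return np.log2(1 + g * p / n0)   # achievable rate (bits/s/Hz)

def power(p):
    return p + p_circuit             # total consumed power

q = 0.0
for _ in range(30):                  # Dinkelbach iterations
    f = rate(p_grid) - q * power(p_grid)   # parameterized subtractive problem
    p_star = p_grid[np.argmax(f)]
    q_new = rate(p_star) / power(p_star)
    if abs(q_new - q) < 1e-9:
        break
    q = q_new
print(f"optimal transmit power ~ {p_star:.3f}, energy efficiency ~ {q:.3f}")
```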
This work-in-progress research article presents an introductory qualitative study on students' perceptions of a flexibly delivered, modular computer science course. Many contemporary approaches to education rely in various ways on flexible delivery of course content. This is often done to capitalize on modern technology and the web, and to put the student 'in the center'. However, it is becoming manifest that these approaches may challenge both the students and the equity between them, making it important to understand the effects of flexible delivery in terms of the students. In the voice of our students, flexible delivery was seen as a largely positive approach, reducing stress, promoting true learning, and allowing students to better manage their workloads. We also see the effect of the learning environment (teacher, LMS, materials, activities) on a flexible course. Although this qualitative study cannot foreground the extent of typical self-regulation challenges with flexibility, we argue that the observations made precipitate discussion on flexible delivery in curriculum planning from the students' perspective.
During recent years it has been shown that hidden oscillations, whose basin of attraction does not overlap with small neighborhoods of equilibria, may significantly complicate simulation of dynamical models, lead to unreliable results and wrong conclusions, and cause serious damage in drilling systems, aircraft control systems, electromechanical systems, and other applications. This article provides a survey of various phase-locked loop based circuits (used in satellite navigation systems and optical and digital communication), where such difficulties take place in MATLAB and SPICE. The considered examples can be used for testing other phase-locked loop based circuits and simulation tools, and motivate the development and application of rigorous analytical methods for the global analysis of phase-locked loop based circuits.
In addition to making passengers' journeys comfortable, the modern railway system is responsible for supporting a variety of on-board Internet services to meet passengers' demands for seamless service provisioning. In order to provide wireless access to the train, one idea attracting increasing attention is to deploy a series of track-side access points (TAPs) with high-speed data rates along the rail lines, dedicated to broadband mobile service provisioning on board. Due to the heavy data traffic flushing into the base stations (BSs) of the cellular networks, TAPs act as a complement to the BSs in data delivery. In this paper, we focus on the TAP association problem for service provisioning in a heterogeneous wireless railway network where TAPs and BSs coexist, by applying a queueing game-theoretic approach. Specifically, we present a comprehensive theoretical analysis of the delay performance for the partially observed, totally unobserved, and totally observed states of the system. Moreover, based on the considered payoff model and the derived association delay time, the passengers' equilibrium strategies on association behaviors, i.e., whether to associate with a TAP or not, are studied. Finally, performance evaluations and discussions are provided to illustrate our proposed passenger-TAP association scheme for the heterogeneous wireless railway communication system.
We analyze the impact of different app description characteristics on app demand on the basis of panel data covering six months and 1081 distinct apps. We use several text mining techniques to operationalize the descriptions' textual characteristics. The extracted variables are then used in an econometric investigation to examine their impact on apps' downloads. Our results provide evidence that app descriptions have an effect on demand. Apps with an upfront price should be described in a neutral tone. Apps without an upfront price but with an in-app purchase option should be offered with rather short descriptions written in a formal and subjective style.
An important aspect of protecting software from attack, theft of algorithms, or illegal software use is eliminating the possibility of performing reverse engineering. One common method to deal with these issues is code obfuscation; however, in most cases it has been shown to be ineffective. Code encryption is a much more effective means of defying reverse engineering, but it requires managing a secret key available to none but the permitted users. The authors propose a new and innovative solution. Critical functions in protected software are encrypted using well-known encryption algorithms. Following verification by external attestation, a thin hypervisor is used as the basis of an eco-system that manages just-in-time decryption, inside the CPU, where decrypted instructions are then executed and finally discarded, while keeping the secret key and the decrypted instructions absolutely safe. The paper presents and compares two methodologies that perform just-in-time decryption: in-place and buffered execution. The former is safer, while the latter boasts better performance.
The Rabinovich system, describing the process of interaction between waves in plasma, is considered. It is shown that the Rabinovich system can exhibit a hidden attractor in the case of multistability as well as a classical self-excited attractor. The hidden attractor in this system can be localized by analytical/numerical methods based on the continuation and perpetual points. The concept of finite-time Lyapunov dimension is developed for numerical study of the dimension of attractors. A conjecture on the Lyapunov dimension of self-excited attractors and the notion of exact Lyapunov dimension are discussed. A comparative survey on the computation of the finite-time Lyapunov exponents and dimension by different algorithms is presented. An adaptive algorithm for studying the dynamics of the finite-time Lyapunov dimension is suggested. Various estimates of the finite-time Lyapunov dimension for the hidden attractor and hidden transient chaotic set in the case of multistability are given.
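For reference, the Lyapunov dimension discussed above is closely related to the classical Kaplan-Yorke formula evaluated on a (finite-time) Lyapunov exponent spectrum; a small helper follows, with an illustrative spectrum rather than one computed for the Rabinovich system.

```python
# Kaplan-Yorke dimension from a Lyapunov exponent spectrum.
import numpy as np

def kaplan_yorke_dimension(exponents):
    """Kaplan-Yorke dimension from a spectrum of Lyapunov exponents."""
    le = np.sort(np.asarray(exponents, dtype=float))[::-1]  # decreasing order
    csum = np.cumsum(le)
    j = np.flatnonzero(csum >= 0)
    if j.size == 0:
        return 0.0                    # all partial sums negative
    j = j[-1]                         # largest j with sum of first j+1 >= 0
    if j + 1 == le.size:
        return float(le.size)         # conservative edge case
    return (j + 1) + csum[j] / abs(le[j + 1])

# Illustrative chaotic spectrum: one positive, one zero, one negative exponent.
print(kaplan_yorke_dimension([0.9, 0.0, -14.6]))   # ~ 2.06
```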
To smartly utilize a huge and constantly growing volume of data, improve productivity, and increase competitiveness in various fields of life, humans require decision-making support systems that efficiently process and analyze the data and, as a result, significantly speed up the process. As in all other areas of human life, the healthcare domain lacks Artificial Intelligence (AI) based solutions. A number of supervised and unsupervised Machine Learning and Data Mining techniques exist to help us deal with structured data. However, in real life we largely deal with unstructured data that hide useful knowledge and valuable information inside human-readable plain texts, images, audio, and video. Therefore, such IT giants as IBM, Google, Microsoft, Intel, and Facebook, as well as a variety of SMEs, are actively elaborating different Cognitive Computing services and tools to extract value from unstructured data. Thus, this paper presents a feasibility study of IBM Watson cognitive computing services and tools to address the issue of automated health record processing to support doctors' decisions in patients' driving assessment.
Most evolutionary optimization algorithms assume that the evaluation of the objective and constraint functions is straightforward. In solving many real-world optimization problems, however, such objective functions may not exist; instead, computationally expensive numerical simulations or costly physical experiments must be performed for fitness evaluations. In more extreme cases, only historical data are available for performing optimization and no new data can be generated during optimization. Solving evolutionary optimization problems driven by data collected in simulations, physical experiments, production processes, or daily life is termed data-driven evolutionary optimization. In this paper, we provide a taxonomy of different data-driven evolutionary optimization problems and discuss the main challenges in data-driven evolutionary optimization with respect to the nature and amount of data and the availability of new data during optimization. Real-world application examples are given to illustrate different model management strategies for different categories of data-driven optimization problems.
Reducing the energy consumption of wireless networks is significantly important to the economic and ecological sustainability of the ICT industry, as high energy consumption may limit the performance of wireless networks and is one of the main network costs. To solve the energy consumption problem, especially on the terminal side, a scheme known as the distributed mobile cloud (DMC) is considered to be a potential solution. Multiple mobile terminals (MTs) can cooperate to take advantage of good-quality links among the MTs to save energy when receiving from the base station (BS). In this paper, we aim to find the optimal transmit power to further reduce the energy consumption of the DMC. Simulation studies show that up to 80% energy savings can be accomplished when using the optimal transmit power, compared to using the standard DMC without exploring the optimal transmit power.
Extreme Learning Machine (ELM) and Minimal Learning Machine (MLM) are nonlinear and scalable machine learning techniques with a randomly generated basis. Both techniques share a step where a matrix of weights for the linear combination of the basis is recovered. In MLM, the kernel in this step corresponds to distance calculations between the training data and a set of reference points, whereas in ELM a transformation with a sigmoidal activation function is most commonly used. MLM then needs an additional interpolation step to estimate the actual distance-regression based output. A natural combination of these two techniques is proposed here, i.e., to use the distance-based kernel characteristic of MLM in ELM. The experimental results show the promising potential of the proposed technique.
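A minimal sketch of the proposed combination, on assumed synthetic data: the ELM hidden layer is replaced by the MLM-style distance kernel to randomly chosen reference points, and the output weights are recovered by regularized least squares, with no MLM interpolation step at prediction time.

```python
# Hedged sketch: ELM with an MLM-style distance kernel as the hidden layer.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + 0.05 * rng.normal(size=400)

k = 50
refs = X[rng.choice(len(X), size=k, replace=False)]   # random basis (RPs)

def hidden(A):
    return cdist(A, refs)            # distance-based activation, no sigmoid

H = hidden(X)
W = np.linalg.solve(H.T @ H + 1e-6 * np.eye(k), H.T @ y)   # ridge solution

X_test = rng.uniform(-3, 3, size=(100, 2))
y_true = np.sin(X_test[:, 0]) * np.cos(X_test[:, 1])
y_hat = hidden(X_test) @ W           # direct output: no interpolation step
print(f"test RMSE ~ {np.sqrt(np.mean((y_hat - y_true) ** 2)):.3f}")
```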
Tensor decomposition is a powerful tool for analyzing multiway data. Nowadays, with the fast development of multisensor technology, more and more data appear in higher-order (order > 4) and nonnegative form. However, the decomposition of higher-order nonnegative tensors suffers from poor convergence and low speed. In this study, we propose a new nonnegative CANDECOMP/PARAFAC (NCP) model using a proximal algorithm. The block principal pivoting method in the alternating nonnegative least squares (ANLS) framework is employed to minimize the objective function. Our method can guarantee convergence and accelerate the computation. The results of experiments on both synthetic and real data demonstrate the efficiency and superiority of our method.
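To convey the alternating structure (though not the paper's proximal term or block-principal-pivoting solver), here is a simplified nonnegative CP decomposition of a 3-way tensor in which each ANLS subproblem is approximated by an unconstrained least-squares update projected onto the nonnegative orthant.

```python
# Simplified nonnegative CP via alternating projected least squares.
import numpy as np
from scipy.linalg import khatri_rao

rng = np.random.default_rng(0)
R, (I, J, K) = 3, (20, 15, 10)
A0, B0, C0 = rng.random((I, R)), rng.random((J, R)), rng.random((K, R))
X = np.einsum("ir,jr,kr->ijk", A0, B0, C0)      # synthetic nonnegative tensor

def update(unf, KR):
    """LS solution of factor @ KR.T ~ unf, projected onto the nonneg orthant."""
    F = unf @ KR @ np.linalg.pinv(KR.T @ KR)
    return np.clip(F, 1e-12, None)

A, B, C = rng.random((I, R)), rng.random((J, R)), rng.random((K, R))
for _ in range(200):                            # alternating factor updates
    A = update(X.reshape(I, -1), khatri_rao(B, C))
    B = update(np.moveaxis(X, 1, 0).reshape(J, -1), khatri_rao(A, C))
    C = update(np.moveaxis(X, 2, 0).reshape(K, -1), khatri_rao(A, B))

fit = 1 - (np.linalg.norm(X - np.einsum("ir,jr,kr->ijk", A, B, C))
           / np.linalg.norm(X))
print(f"fit ~ {fit:.3f}")
```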
Short-term outreach interventions are conducted to raise young students' awareness of the computer science (CS) field. Typically, these interventions are targeted at K–12 students, attempting to encourage them to study CS in higher education. This study is based on a series of extra-curricular outreach events that introduced students to the discipline of computing, nurturing creative computational thinking through problem solving and game programming. To assess the long-term impact of this campaign, the participants were contacted and interviewed two to five years after they had attended an outreach event. We studied how participating in the outreach program affected the students' perceptions of CS as a field and, more importantly, how it affected their educational choices. We found that the outreach program generally had a positive effect on the students' educational choices. The most prominent finding was that students who already possessed a "maintained situational interest" in CS found that the event strengthened their confidence in studying CS. However, the educational choices of many students were not affected by attending the program, even though their perceptions of CS did change. Our results emphasize the need to provide continuing possibilities for interested students to experiment with computing-related activities and hence maintain their emerging individual interests.
Amazon was the world's top Research and Development (R&D) firm in 2017. Its R&D investment was double that of 2015, five times that of 2012, and ten times that of 2011. Such a rapid and notable increase in R&D investment has raised the question of a new R&D definition and focus in the digital economy, which Amazon insists includes both "routine or periodic alterations" (traditionally classified as non-R&D) and "significant improvement" (classified as R&D). Using an empirical analysis of Amazon's R&D model as a system, this paper attempts to provide a convincing answer to this question. It has been identified that Amazon, which is based on R&D as a culture, has been promoting companywide experimentation that causes customers to become obsessed with making purchase decisions. This obsession has enabled Amazon to deploy an architecture for participation that makes the most of digital technologies by harnessing the power of users. Such user-driven innovation has accelerated a dramatic advancement of the Internet that, in turn, has accelerated the co-emergence of soft innovation resources in the marketplace. This emergence has activated a self-propagating function that has induced functionality development, leading to supra-functionality beyond economic value that satisfies a shift in customers' preferences. While this system depends on the assimilation capacity of soft innovation resources, Amazon has developed a high level of capacity supported by a rapid and notable increase in R&D investment. The above efforts function in a virtuous cycle leading to the transformation of "routine or periodic alterations" into "significant improvement." These findings give rise to insightful suggestions regarding a new concept of R&D in neo open innovation in the digital economy.
We formulate and solve a real-world shape design optimization problem of an air intake ventilation system in a tractor cabin by using a preference-based surrogate-assisted evolutionary multiobjective optimization algorithm. We are motivated by practical applicability and focus on two main challenges faced by practitioners in industry: 1) meaningful formulation of the optimization problem reflecting the needs of a decision maker and 2) finding a desirable solution based on a decision maker's preferences when solving a problem with computationally expensive function evaluations. For the first challenge, we describe the procedure of modelling a component in the air intake ventilation system with commercial simulation tools. The problem to be solved involves time-consuming computational fluid dynamics simulations. Therefore, for the second challenge, we extend a recently proposed Kriging-assisted evolutionary algorithm, K-RVEA, to incorporate a decision maker's preferences. Our numerical results indicate efficient use of the available computing resources, and the solutions obtained reflect the decision maker's preferences well. In fact, two of the solutions dominate the baseline design (the design provided by the decision maker before the optimization process). The decision maker was satisfied with the results and eventually selected one as the final solution.
Background. The stability of spatial components is frequently used as a post-hoc selection criterion for choosing the dimensionality of an independent component analysis (ICA) of functional magnetic resonance imaging (fMRI) data. Although the stability of the ICA temporal courses differs from that of the spatial components, temporal stability has not been considered in dimensionality decisions. New method. The current study aims to (1) develop an algorithm to incorporate temporal course stability into dimensionality selection and (2) test the impact of the temporal course on the stability of the ICA decomposition of fMRI data via tensor clustering. Resting-state fMRI data were analyzed with two popular ICA algorithms, InfomaxICA and FastICA, using our new method, and the results were compared with model order selection based on spatial or temporal criteria alone. Results. Hierarchical clustering indicated that the stability of the ICA decomposition incorporating spatiotemporal tensor information performed similarly to current best practice. However, we found that component spatiotemporal stability and convergence of the model varied significantly with model order. Considering both may lead to methodological improvements for determining ICA model order. Selected components were also significantly associated with relevant behavioral variables. Comparison with existing methods. The Kullback–Leibler information criterion algorithm suggests an optimal model order of 40 for group ICA, compared to the proposed method with an optimal model order of 20. Conclusion. The current study sheds new light on the importance of temporal course variability in ICA of fMRI data.
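A sketch of the stability idea underlying such selection procedures: run ICA repeatedly with different seeds and measure how reproducibly components reappear at each candidate model order. The best-match correlation used here is an ICASSO-like surrogate assumption, not the paper's spatiotemporal tensor clustering.

```python
# Run-to-run ICA stability across candidate model orders (surrogate method).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
S = np.c_[np.sin(np.linspace(0, 40, 1000)),            # three "sources"
          np.sign(np.sin(np.linspace(0, 25, 1000))),
          rng.normal(size=1000)]
X = S @ rng.normal(size=(3, 8))                         # 8 mixed channels

def run(seed, n):
    ica = FastICA(n_components=n, random_state=seed, max_iter=1000)
    return ica.fit_transform(X)                         # temporal courses

for n in (2, 3, 5):                                     # candidate model orders
    runs = [run(seed, n) for seed in range(5)]
    sims = []
    for a in range(len(runs)):
        for b in range(a + 1, len(runs)):
            c = np.abs(np.corrcoef(runs[a].T, runs[b].T))[:n, n:]
            sims.append(c.max(axis=1).mean())           # best-match similarity
    print(f"model order {n}: mean stability {np.mean(sims):.3f}")
```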
We deduce a posteriori error estimates of functional type for the stationary Stokes problem with slip and leak boundary conditions. The derived error majorants do not contain mesh dependent constants and are valid for a wide class of energy admissible approximations that satisfy the Dirichlet boundary condition on a part of the boundary. Different forms of error majorants contain global constants associated with Poincaré type inequalities or the stability (LBB) condition for the Stokes problem or constants associated with subdomains (if a domain decomposition is applied). It is proved that the majorants are guaranteed and vanish if and only if the functions entering them coincide with the respective exact solutions.
With the advent of the mobile industry, we face new security challenges. The ARM architecture is deployed in most mobile phones, homeland security, IoT, autonomous cars, and other industries, and provides a hypervisor API (via virtualization extension technology). Researching the applicability of this virtualization technology for security on this platform is an interesting endeavor. The hypervisor API is available as an extension on some ARMv7-A processors and on any ARMv8-A processor. Some ARM platforms also offer TrustZone, which is a separate exception level designed for trusted computing. However, TrustZone may not be available to engineers, as some vendors lock it. We present a method of applying thin hypervisor technology as a generic security solution for the most common operating system on the ARM architecture. Furthermore, we discuss implementation alternatives and differences, especially in comparison with the Intel architecture and with hypervisor-with-TrustZone approaches. We provide performance benchmarks for using hypervisors for reverse engineering protection.
Background: Sleep scoring is an essential but time-consuming process, and therefore automatic sleep scoring is crucial and urgent to help address the growing unmet needs of sleep research. This paper aims to develop a versatile deep-learning architecture to automate sleep scoring using raw polysomnography recordings. Method: The model adopts a linear function to handle different numbers of input channels, thereby extending the model's applications. Two-dimensional convolutional neural networks are used to learn features from multi-modality polysomnographic signals, a "squeeze and excitation" block recalibrates channel-wise features, and a long short-term memory module exploits long-range contextual relations. The learnt features are finally fed to the decision layer to generate predictions for sleep stages. Result: Model performance is evaluated on three public datasets. For all tasks with different available channels, our model achieves outstanding performance not only on healthy subjects but also on patients with sleep disorders (SHHS: Acc-0.87, K-0.81; ISRUC: Acc-0.86, K-0.82; Sleep-EDF: Acc-0.86, K-0.81). The highest classification accuracy is achieved by a fusion of multiple polysomnographic signals. Comparison: Compared to state-of-the-art methods that use the same datasets, the proposed model achieves comparable or better performance and exhibits low computational cost. Conclusions: The model demonstrates its transferability among different datasets, without changing the model architecture or hyper-parameters across tasks. Good model transferability promotes the application of transfer learning on small group studies with mismatched channels. Owing to its demonstrated availability and versatility, the proposed method can be integrated with diverse polysomnography systems, thereby facilitating sleep monitoring in clinical or routine care.
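For concreteness, the "squeeze and excitation" recalibration mentioned above can be sketched as follows: global average pooling produces channel descriptors, and a small bottleneck MLP produces per-channel gates. The reduction ratio of 4 and the feature-map shape are assumed values, not the paper's hyper-parameters.

```python
# Squeeze-and-excitation block over 2-D feature maps (assumed shapes).
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                        # x: (batch, C, H, W)
        s = x.mean(dim=(2, 3))                   # squeeze: global average pool
        g = self.fc(s)                           # excitation: channel gates
        return x * g[:, :, None, None]           # recalibrated features

features = torch.randn(8, 32, 30, 100)           # e.g. (epochs, C, freq, time)
print(SEBlock(32)(features).shape)               # torch.Size([8, 32, 30, 100])
```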
The design and analysis of synchronization control circuits is a challenging task for many applications: satellite navigation, digital communication, wireless networks, and others. In this article the Charge-Pump Phase-Locked Loop (CP-PLL) electronic circuit, which is used for frequency synthesis and clock generation in computer architectures, is studied. Analysis of the CP-PLL is not trivial: a full mathematical model, rigorous definitions, and analysis remain open issues in many respects. This article is devoted to the development of a mathematical model that takes into account the engineering aspects of the circuit, the interpretation of core engineering problems and their definition in relation to the mathematical model, and rigorous analysis.
In vehicular networks, in-vehicle user equipment (UE) with limited battery capacity can achieve opportunistic energy saving by offloading energy-hungry workloads to vehicular edge computing nodes via vehicle-to-infrastructure links. However, how to determine the optimal portion of the workload to be offloaded, based on the dynamic states of energy consumption and latency in local computing, data transmission, workload execution, and handover, is still an open issue. In this paper, we study the energy-efficient workload offloading problem and propose a low-complexity distributed solution based on the consensus alternating direction method of multipliers (ADMM). By incorporating a set of local variables for each UE, the original problem, in which the optimization variables of the UEs are coupled together, is transformed into an equivalent general consensus problem with separable objectives and constraints. The consensus problem can be further decomposed into a bunch of subproblems, which are distributed across the UEs and solved in parallel simultaneously. Finally, the proposed solution is validated based on a realistic road topology of Beijing, China. Simulation results demonstrate that a significant energy saving gain can be achieved by the proposed algorithm.
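A toy consensus-ADMM illustration of the described decomposition: each UE holds a private copy of the shared variable and a local cost (here a simple quadratic standing in for the paper's energy model), the copies are averaged into a global variable, and dual variables enforce agreement.

```python
# Consensus ADMM with quadratic local costs as stand-ins for energy models.
import numpy as np

rng = np.random.default_rng(0)
n = 6                                   # number of UEs
a = rng.uniform(1, 3, n)                # local cost curvature per UE
b = rng.uniform(-2, 2, n)               # local optimum per UE
rho = 1.0                               # ADMM penalty parameter

x = np.zeros(n)                         # private copies, one per UE
u = np.zeros(n)                         # scaled dual variables
z = 0.0                                 # global consensus variable
for it in range(100):
    # Local updates, solvable in parallel on each UE:
    # argmin_x  a_i (x - b_i)^2 + (rho/2)(x - z + u_i)^2
    x = (2 * a * b + rho * (z - u)) / (2 * a + rho)
    z = np.mean(x + u)                  # consensus (averaging) step
    u = u + x - z                       # dual update enforcing agreement

print("consensus value:", z, "closed form:", np.sum(a * b) / np.sum(a))
```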
Forecasting and analyses of the dynamics of financial and economic processes such as deviations of macroeconomic aggregates (GDP, unemployment, and inflation) from their long-term trends, asset market volatility, etc., are challenging because of the complexity of these processes. Important related research questions include, first, how to determine the qualitative properties of the dynamics of these processes, namely, whether the process is stable, unstable, chaotic (deterministic), or stochastic; and second, how best to estimate its quantitative indicators including dimension, entropy, and correlation characteristics. These questions can be studied both empirically and theoretically. In the empirical approach, researchers consider real data expressed in terms of time series, identify the patterns of their dynamics, and then forecast the short- and long-term behavior of the process. The second approach is based on postulating the laws of dynamics for the process, deriving mathematical dynamical models based on these laws, and conducting subsequent analytical investigation of the dynamics generated by the models. To implement these approaches, either numerical or analytical methods can be used. While numerical methods make it possible to study dynamical models, the possibility of obtaining reliable results using them is significantly limited due to the necessity of performing calculations only over finite time intervals, round-off errors in numerical methods, and the unbounded space of initial data sets. Analytical methods allow researchers to overcome these limitations and to identify the exact qualitative and quantitative characteristics of the dynamics of the process. However, effective analytical applications are often limited to low-dimensional models (in the literature, two-dimensional dynamical systems are most often studied). In this paper, we develop analytical methods for the study of deterministic dynamical systems based on the Lyapunov stability theory and on chaos theory. These methods make it possible not only to obtain analytical stability criteria and to estimate limiting behavior (to localize self-excited and hidden attractors and identify multistability), but also to overcome difficulties related to implementing reliable numerical analysis of quantitative indicators such as Lyapunov exponents and the Lyapunov dimension. We demonstrate the effectiveness of the proposed methods using the mid-size firm model suggested by Shapovalov.
Non-orthogonal multiple access (NOMA) is considered to be one of the best candidates for future networks due to its ability to serve multiple users using the same resource block. Although early studies focused on transmission reliability and energy efficiency, recent works are considering cooperation among the nodes. Cooperative NOMA techniques allow the user with a better channel (near user) to act as a relay between the source and the user experiencing a poor channel (far user). This paper considers the link security aspect of energy-harvesting cooperative NOMA users. In particular, the near user applies the decode-and-forward (DF) protocol for relaying the message of the source node to the far user in the presence of an eavesdropper. Moreover, we consider that all the devices use a power-splitting architecture for energy harvesting and information decoding. We derive the analytical expression of the intercept probability. Next, we employ deep-learning-based optimization to find the optimal power allocation factor. The results show the robustness and superiority of deep learning optimization over a conventional iterative search algorithm.
The current cloud-based Internet-of-Things (IoT) model has revealed great potential in offering storage and computing services to IoT users. Fog computing, as an emerging paradigm to complement the cloud computing platform, has been proposed to extend the IoT role to the edge of the network. With fog computing, service providers can exchange control signals with the users for specific task requirements and offload users' delay-sensitive tasks directly to the widely distributed fog nodes at the network edge, thus improving user experience. So far, most existing works have focused on either the radio or the computational resource allocation in fog computing. In this work, we investigate a joint radio and computational resource allocation problem to optimize the system performance and improve user satisfaction. Important factors, such as service delay, link quality, and mandatory benefit, are taken into consideration. Instead of conventional centralized optimization, we propose to use a matching game framework, in particular the student project allocation (SPA) game, to provide a distributed solution for the formulated joint resource allocation problem. The efficient SPA-(S,P) algorithm is implemented to find a stable result for the SPA problem. In addition, the instability caused by the external effect, i.e., the interdependence between matching players, is removed by the proposed user-oriented cooperation (UOC) strategy. The system performance is also further improved by adopting the UOC strategy.
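To give the matching-game flavor, below is a simplified deferred-acceptance sketch in which users propose to fog nodes in preference order and each node keeps its most-preferred users up to capacity. This is a hospitals/residents-style approximation, not the full SPA-(S,P) algorithm with lecturer preferences or the UOC externality handling; all preference lists are hypothetical.

```python
# Simplified deferred-acceptance matching of users to fog nodes.
user_prefs = {                       # user -> fog nodes, best first (assumed)
    "u1": ["f1", "f2"], "u2": ["f1", "f2"],
    "u3": ["f2", "f1"], "u4": ["f1", "f2"],
}
node_prefs = {"f1": ["u2", "u1", "u4", "u3"], "f2": ["u1", "u3", "u2", "u4"]}
capacity = {"f1": 2, "f2": 2}

matched = {f: [] for f in node_prefs}
free = list(user_prefs)
nxt = {u: 0 for u in user_prefs}     # next node index each user proposes to

while free:
    u = free.pop(0)
    if nxt[u] >= len(user_prefs[u]):
        continue                     # user exhausted all preferences
    f = user_prefs[u][nxt[u]]
    nxt[u] += 1
    matched[f].append(u)
    if len(matched[f]) > capacity[f]:
        worst = max(matched[f], key=node_prefs[f].index)
        matched[f].remove(worst)     # reject the least-preferred user
        free.append(worst)

print(matched)   # e.g. {'f1': ['u1', 'u2'], 'f2': ['u3', 'u4']}
```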
In recent times, major cybersecurity breaches and cyber fraud have had a huge negative impact on victim organisations, with the biggest impact made on major areas of business activities. The majority of organisations facing cybersecurity adversity and advanced threats suffer huge financial and reputation losses. Current security technologies, policies, and processes provide necessary capabilities and cybersecurity mechanisms to address cyber threats and risks. However, current solutions do not provide the mechanisms required for decision making on the impact of cybersecurity breaches and fraud. In this paper, we report initial findings and propose a conceptual solution, aiming to provide a novel model for Cybersecurity Economics and Analysis (CEA). We propose an innovative model for an optimal cybersecurity cost-benefit framework to help decision making based on a combination of qualitative and quantitative analysis of cybersecurity risks and their impact on organizational tangible and intangible assets. CEA takes a holistic approach to cybersecurity, proposing a model based on a deep and comprehensive analysis of organisations' security, considering not only technological perspectives but also institutional, economic, governance, and human dimensions, and taking forward existing best and effective practices from national audit frameworks, sectoral guidelines, and organisational policies. This new solution will account for the wants and needs of various stakeholder groups and existing sectoral requirements. We will contribute to increasing the harmonization of European cybersecurity initiatives and reducing the fragmentation of cybersecurity practices, thereby also helping to reach the EU Digital Single Market goal. By introducing Cybersecurity Readiness Level metrics, the project will measure and increase the effectiveness of cybersecurity programs, while the cost-benefit framework will help to increase the economic and financial viability, effectiveness, and value generation of cybersecurity solutions for organisations' strategic, tactical, and operational imperatives. The ambition of this research, development, and innovation (RDI) effort is to increase and re-establish the trust of European citizens in European digital environments through practical solutions.
In independent component analysis (ICA), the selection of model order (i.e., number of components to be extracted) has crucial effects on functional magnetic resonance imaging (fMRI) brain network analysis. Model order selection (MOS) algorithms have been used to determine the number of estimated components. However, simulations show that even when the model order equals the number of simulated signal sources, traditional ICA algorithms may misestimate the spatial maps of the signal sources. In principle, increasing model order will consider more potential information in the estimation, and should therefore produce more accurate results. However, this strategy may not work for fMRI because large-scale networks are widely spatially distributed and thus have increased mutual information with noise. As such, conventional ICA algorithms with high model orders may not extract these components at all. This conflict makes the selection of model order a problem. We present a new strategy for model order free ICA, called Snowball ICA, that obviates these issues. The algorithm collects all information for each network from fMRI data without the limitations of network scale. Using simulations and in vivo resting-state fMRI data, our results show that component estimation using Snowball ICA is more accurate than traditional ICA. The Snowball ICA software is available at https://github.com/GHu-DUT/Snowball-ICA.
We describe an efficient system for ensuring code integrity of an operating system (OS), both its own code and application code. The proposed system can protect from an attacker who has full control over the OS kernel. An evaluation of the system's performance suggests the induced overhead is negligible.
Control and stabilization of the irregular and unstable behavior of dynamic systems (including chaotic processes) are interdisciplinary problems of interest to a variety of scientific fields and applications. Using control methods allows improvements in forecasting the dynamics of unstable economic processes and offers opportunities for governments, central banks, and other policy makers to modify the behaviour of the economic system to achieve its best performance. One effective method for the control of chaos and the computation of unstable periodic orbits (UPOs) is the unstable delayed feedback control (UDFC) approach suggested by K. Pyragas. This paper proposes the application of Pyragas' method within the framework of economic models. We consider this method through the example of the Shapovalov model, which describes the dynamics of a mid-size firm. The results demonstrate that chaos in the Shapovalov model can be suppressed using the UDFC method.
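A toy discrete-time illustration of Pyragas-type delayed feedback control (the classic DFC, simpler than the UDFC variant applied in the paper): the chaotic logistic map is steered onto its unstable period-1 orbit by the feedback u_n = K(x_{n-1} - x_n), engaged only near the orbit. The map, gain, and guard threshold are illustrative assumptions.

```python
# Delayed feedback control of the logistic map onto its period-1 UPO.
r, K = 3.8, -0.6                     # chaotic logistic map, feedback gain
x_prev, x = 0.3, 0.31
u = 0.0
for n in range(2000):
    # Engage control only when successive iterates are close (near the UPO);
    # the orbit typically locks onto it after a chaotic transient.
    u = K * (x_prev - x) if n > 100 and abs(x_prev - x) < 0.1 else 0.0
    x_prev, x = x, r * x * (1 - x) + u

x_star = 1 - 1 / r                   # the unstable period-1 fixed point
print(f"state: {x:.4f}, target UPO: {x_star:.4f}, last control: {u:.2e}")
# The control term vanishes on the stabilized orbit (non-invasiveness),
# which is the defining property of Pyragas-type feedback.
```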
This report shows the possibilities of solving the Gardner problem of determining the lock-in range for multidimensional phase-locked loop systems. The development of analogs of classical stability criteria for the cylindrical phase space made it possible to obtain analytical estimates of the lock-in range for a third-order system.
Industry 4.0 and highly automated critical infrastructure can be seen as cyber-physical-social systems controlled by collective intelligence. Such systems are essential for the functioning of society and the economy. On the one hand, they have a flexible infrastructure of heterogeneous systems and assets. On the other hand, they are social systems, which include collaborating humans and artificial decision makers. Such (human plus machine) resources must be pre-trained to perform their mission with high efficiency. Both human and machine learning approaches must be bridged to enable such training. The importance of these systems requires the anticipation of potential and previously unknown worst-case scenarios during training. In this paper, we provide an adversarial training framework for collective intelligence. We show how cognitive capabilities can be copied ("cloned") from humans and trained as a (responsible) collective intelligence. We make some modifications to Generative Adversarial Network architectures and adapt them for the cloning and training tasks. We modify the discriminator component into a so-called "Turing Discriminator", which includes one or several human and artificial discriminators working together. We also discuss the concept of cellular intelligence, where a person can act and collaborate in a group together with their own cognitive clones.
In 1981, the famous engineer William F. Egan conjectured that a higher-order type 2 PLL with an infinite hold-in range also has an infinite pull-in range, and supported his conjecture with some third-order PLL implementations. Although it is known that for second-order type 2 PLLs the hold-in range and the pull-in range are both infinite, the present paper shows that the Egan conjecture may not be valid in general. We provide an implementation of a third-order type 2 PLL which has an infinite hold-in range yet experiences stable oscillations. This implementation and the Egan conjecture naturally pose a problem, which we will call the Egan problem: to determine a class of type 2 PLLs for which an infinite hold-in range implies an infinite pull-in range. Using the direct Lyapunov method for the cylindrical phase space, we suggest a sufficient condition for the infiniteness of the pull-in range, which provides a solution to the Egan problem.
Time is an essential dimension in cross-cultural e-collaboration among research project teams. Understanding temporal aspects and project dynamics in cross-cultural research e-collaboration and related processes can improve team members' skills in cross-cultural communication and increase their cultural competence. The present case cultures are Finnish and Japanese, and the case universities are the University of Jyväskylä (Finland) and Keio University (Japan). Three issues are addressed in this article. First, cultural dimensions and time models in the cross-cultural e-collaboration context are discussed. Second, temporal aspects related to e-collaboration activities are introduced. Third, formal, ontological approaches for identifying and describing temporal entities in cross-cultural e-collaboration are presented and examples of applications are given. The objectives of this article are (1) to deepen the knowledge and understanding of temporal aspects (informal and formal) in a cross-cultural e-collaboration environment (CCeCE) and (2) to create know-how for designing CCeCE-like systems.
The dynamics of electrophysiological functional connectivity are attracting increasing interest, since they are considered a better representation of functional brain networks than static network analysis. It is believed that dynamic electrophysiological brain networks with specific frequency modes transiently form and dissolve to support ongoing cognitive function during continuous task performance. Here, we propose a novel method based on tensor component analysis (TCA) to characterize the spatial, temporal, and spectral signatures of dynamic electrophysiological brain networks in electroencephalography (EEG) data recorded during free music listening. A three-way tensor containing time-frequency phase coupling between pairs of parcellated brain regions is constructed. Nonnegative CANDECOMP/PARAFAC (CP) decomposition is then applied to extract three interconnected, low-dimensional descriptions of the data, including temporal, spectral, and spatial connection factors. Musical features are also extracted from the stimuli using acoustic feature extraction. Correlation analysis is then conducted between the temporal courses of the musical features and the TCA components to examine the modulation of brain patterns. We derive several brain networks with distinct spectral modes (described by TCA components) significantly modulated by musical features, including higher-order cognitive, sensorimotor, and auditory networks. The results demonstrate that brain networks during music listening in EEG are well characterized by TCA components, with spatial patterns of oscillatory phase-synchronization in specific spectral modes. The proposed method provides evidence for the time-frequency dynamics of brain networks during free music listening through TCA, which allows us to better understand the reorganization of electrophysiological networks.
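As an illustration of how such a three-way connectivity tensor can be assembled, the sketch below band-passes synthetic signals, extracts analytic phases via the Hilbert transform, and computes phase-locking values per window, band, and region pair. The data, bands, and window length are assumptions made for illustration.

```python
# Build a (time x frequency x connection) phase-locking-value tensor.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs, n_ch = 250, 6
n_s = fs * 30
rng = np.random.default_rng(0)
eeg = rng.normal(size=(n_ch, n_s))            # stand-in for parcel time series

bands = [(4, 8), (8, 13), (13, 30)]           # theta, alpha, beta (Hz)
win = 2 * fs                                  # 2-second windows
n_win = n_s // win
pairs = [(i, j) for i in range(n_ch) for j in range(i + 1, n_ch)]

tensor = np.zeros((n_win, len(bands), len(pairs)))
for b, (lo, hi) in enumerate(bands):
    bcoef, acoef = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    phase = np.angle(hilbert(filtfilt(bcoef, acoef, eeg, axis=1), axis=1))
    for w in range(n_win):
        seg = phase[:, w * win:(w + 1) * win]
        for p, (i, j) in enumerate(pairs):
            # Phase-locking value: magnitude of the mean phase difference.
            tensor[w, b, p] = np.abs(np.mean(np.exp(1j * (seg[i] - seg[j]))))

print("tensor (time, frequency, connection):", tensor.shape)
# A nonnegative CP decomposition of `tensor` then yields the temporal,
# spectral, and spatial-connection factors described above.
```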