Columns: id (string, length 7), title (string, 3 to 578 characters), abstract (string, 0 to 16.7k characters), keyphrases (sequence of strings), prmu (sequence of P/R/M/U labels).
-csTwaC
Single-allocation ordered median hub location problems
The discrete ordered median location model is a powerful tool for modeling classic and alternative location problems, and it has been applied with success to a large variety of discrete location problems. Nevertheless, although hub location models have been analyzed from the sum, maximum and coverage points of view, as far as we know, they have never been considered under an alternative unifying point of view. In this paper we consider new formulations, based on the ordered median objective function, for hub location problems with new distribution patterns induced by the different users' roles within the supply chain network. This approach introduces penalty factors associated with the position of an allocation cost with respect to the sorted sequence of these costs. First we present basic formulations for this problem, and then develop stronger formulations by exploiting properties of the model. The performance of all these formulations is compared by means of a computational analysis. (C) 2010 Elsevier Ltd. All rights reserved.
[ "hub location problems", "ordered median function" ]
[ "P", "R" ]
4JfCq1y
How measuring student performances allows for measuring blended extreme apprenticeship for learning Bash programming
Many small exercises and few lectures can teach all programming. Measuring student behavior in exercises assesses how they learn. The reported study logged student performances in programming exercises. Metrics were defined for assessing overall programming performances. Data show that all students tend to learn basic programming skills.
[ "performance", "extreme apprenticeship", "behavior", "metrics", "blended learning", "learner experience design and evaluation" ]
[ "P", "P", "P", "P", "R", "M" ]
541uMG4
Application of 3D-wavelet statistics to video analysis
Video activity analysis is used in various video applications such as human action recognition, video retrieval, and video archiving. In this paper, we propose to apply 3D wavelet transform statistics to natural video signals and employ the resulting statistical attributes for video modeling and analysis. From the 3D wavelet transform, we investigate the marginal and joint statistics as well as the Mutual Information (MI) estimates. We show that the marginal histograms are approximated quite well by Generalized Gaussian Density (GGD) functions, and that the MI between coefficients decreases when the activity level increases in videos. Joint statistics attributes are applied to scene activity grouping, leading to 87.3% accurate grouping of videos. Also, marginal and joint statistics features extracted from the video are used for human action classification employing Support Vector Machine (SVM) classifiers, and 93.4% of the human activities are properly classified.
[ "video analysis", "human action recognition", "3d wavelet transform statistics" ]
[ "P", "P", "P" ]
2AUaB5R
Modeling electrokinetic flows in microchannels using coupled lattice Boltzmann methods
We present a numerical framework to solve the dynamic model for electrokinetic flows in microchannels using coupled lattice Boltzmann methods. The governing equation for each transport process is solved by a lattice Boltzmann model and the entire process is simulated through an iteration procedure. After validation, the present method is used to study the applicability of the Poisson-Boltzmann model for electrokinetic flows in microchannels. Our results show that for homogeneously charged long channels, the Poisson-Boltzmann model is applicable for a wide range of electric double layer thickness. For the electric potential distribution, the Poisson-Boltzmann model can provide good predictions until the electric double layers fully overlap, meaning that the thickness of the double layer equals the channel width. For the electroosmotic velocity, the Poisson-Boltzmann model is valid even when the thickness of the double layer is 10 times the channel width. For heterogeneously charged microchannels, a higher zeta potential and an enhanced velocity field may cause the Poisson-Boltzmann model to fail to provide accurate predictions. The ionic diffusion coefficients have little effect on the steady flows for either homogeneously or heterogeneously charged channels. However, the ionic valence of the solvent has a remarkable influence on both the electric potential distribution and the flow velocity, even in homogeneously charged microchannels. Both theoretical analyses and numerical results indicate that the valence and the concentration of the counter-ions dominate the Debye length, the electrical potential distribution, and the ion transport. The present results may improve the understanding of the electrokinetic transport characteristics in microchannels.
[ "electrokinetic flows", "lattice boltzmann method", "dynamic model", "poissonboltzmann model", "multiphysical transport", "microfluidics and nanofluidics" ]
[ "P", "P", "P", "P", "M", "M" ]
1vpmUm&
A non-iterative continuous model for switching window computation with crosstalk noise
Proper modeling of switching windows leads to a better estimate of noise-induced delay variations. In this paper, we propose a new non-iterative continuous switching window model. The proposed model employs an ordering technique combined with the principle of superposition of linear circuits. The principle of superposition considers the impact of aggressors one after the other. The ordering technique avoids convergence and multiple-solution issues in many practical cases. Our model surpasses the accuracy of the traditional discrete model and the speed of the fixed-point iteration method.
[ "non-iterative", "switch window", "crosstalk noise", "deep submicron" ]
[ "P", "P", "P", "U" ]
2n3v-Eq
Vibrational analysis of curved single-walled carbon nanotube on a Pasternak elastic foundation
Continuum mechanics and an elastic beam model were employed in the nonlinear force vibrational analysis of an embedded, curved, single-walled carbon nanotube. The analysis considered the effects of the curvature or waviness and midplane stretching of the nanotube on the nonlinear frequency. By utilizing He's Energy Balance Method (HEBM), the relationships of the nonlinear amplitude and frequency were expressed for a curved, single-walled carbon nanotube. The amplitude-frequency response curves of the nonlinear free vibration were obtained for a curved, single-walled carbon nanotube embedded in a Pasternak elastic foundation. Finally, the influence of the amplitude of the waviness, midplane stretching nonlinearity, shear foundation modulus, surrounding elastic medium, radius, and length of the curved carbon nanotube on the amplitude-frequency response characteristics is discussed. As a result, the combined effects of waviness and stretching nonlinearity on the nonlinear frequency of the curved SWCNT with a small outer radius were larger than those of a straight one.
[ "elastic foundation", "midplane stretching", "energy balance method", "curved carbon nanotube", "nonlinear vibration", "pasternak foundation" ]
[ "P", "P", "P", "P", "R", "R" ]
2qXEDFM
Fast Bokeh effects using low-rank linear filters
We present a method for faster and more flexible approximation of camera defocus effects given a focused image of a virtual scene and a depth map. Our method leverages the advantages of low-rank linear filtering by reducing the problem of 2D convolution to multiple 1D convolutions, which significantly reduces the computational complexity of the filtering operation. In the case of rank-1 filters (e.g., the box filter and the Gaussian filter), the kernel is described as separable since it can be implemented as a horizontal 1D convolution followed by a vertical 1D convolution. While many filter kernels which result in bokeh effects cannot be approximated closely by separable kernels, they can be effectively approximated by low-rank kernels. We demonstrate the speed and flexibility of low-rank filters by applying them to image blurring, tilt-shift postprocessing, and depth-of-field simulation, and also analyze the approximation error for several aperture shapes.
[ "bokeh", "filter", "blur", "depth-of-field" ]
[ "P", "P", "P", "P" ]
3eio4Zv
A novel method for cross-species gene expression analysis
Analysis of gene expression from different species is a powerful way to identify evolutionarily conserved transcriptional responses. However, due to evolutionary events such as gene duplication, there is no one-to-one correspondence between genes from different species, which makes the comparison of their expression profiles complex.
[ "gene expression", "evolution", "meta-analysis", "orthologs", "paralogs", "microarray", "rna-seq" ]
[ "P", "U", "U", "U", "U", "U", "U" ]
1eSnzTF
MIMO radar signal design to improve the MIMO ambiguity function via maximizing its peak
Transmit signals are designed to maximize the ambiguity function's peak of a WS-MIMO radar. Signal design is done for three cases of single target, multi-target, and prioritized ambiguity function. It is shown that in spite of increasing the number of antennas of MIMO radar, signal design does not provide diversity gain. Through simulations, it is shown that better performance can be achieved by the proposed signal design to maximize the AF's peak.
[ "ambiguity function", "multiple-input multiple-output radar", "waveform design", "power allocation", "waterfilling" ]
[ "P", "M", "M", "U", "U" ]
A&R5aHZ
Optimal design of radial basis function neural networks for fuzzy-rule extraction in high dimensional data
The design of an optimal radial basis function neural network (RBFN) is not a straightforward procedure. In this paper we take advantage of the functional equivalence between RBFN and fuzzy inference systems to propose a novel efficient approach to RBFN design for fuzzy rule extraction. The method is based on advanced fuzzy clustering techniques. Solutions to practical problems are proposed. By combining these different solutions, a general methodology is derived. The efficiency of our method is demonstrated on challenging synthetic and real-world data sets.
[ "fuzzy rule extraction", "fuzzy clustering", "radial basis function networks", "neuro-fuzzy models", "adaptive network based fuzzy inference systems" ]
[ "P", "P", "R", "U", "M" ]
-d6Ui2-
Polarization Properties of a Turnstile Antenna in the Vicinity of the Human Body
The polarization of a simple turnstile antenna situated close to the human body, for potential WBAN applications in the 2.45 GHz band, is studied in detail by the use of the electromagnetic simulator WIPL-D Pro. Circular polarization of the antenna (when isolated) is provided by adjusting the dipole impedances. A full-size, three-dimensional, simplified homogeneous model of a human body is applied. The polarization of both the far and near field is studied, with various positions of the antenna and with/without a metallic reflector. In the far field, significant degradation of the circular polarization, due to the vicinity of the body, was observed. In the near field, at points close to the surface of the torso, the polarization (of vector E) was found to deviate significantly from circular. The obtained results can be useful in designing on-body sensor networks in which circularly polarized antennas are applied, for both far-field communication between sensor nodes and the gateway and near-field communication between sensors.
[ "turnstile antenna", "wban", "circular polarization", "on-body sensors", "full-size human model" ]
[ "P", "P", "P", "P", "M" ]
-jdJFFr
Enforcing and defying associativity, commutativity, totality, and strong noninvertibility for worst-case one-way functions
Rabi and Sherman [M. Rabi, A. Sherman, An observation on associative one-way functions in complexity theory, Information Processing Letters 64 (5) (1997) 239-244; M. Rabi, A. Sherman, Associative one-way functions: A new paradigm for secret-key agreement and digital signatures, Tech. Rep. CS-TR-3183/UMIACS-TR-93-124, Department of Computer Science, University of Maryland, College Park, MD, 1993] proved that the hardness of factoring is a sufficient condition for there to exist one-way functions (i.e., p-time computable, honest, p-time noninvertible functions; this paper is in the worst-case model, not the average-case model) that are total, commutative, and associative but not strongly noninvertible. In this paper we improve the sufficient condition to P not equal NP. More generally, in this paper we completely characterize which types of one-way functions stand or fall together with (plain) one-way functions; equivalently, they stand or fall together with P not equal NP. We look at the four attributes used in Rabi and Sherman's seminal work on algebraic properties of one-way functions (see [M. Rabi, A. Sherman, An observation on associative one-way functions in complexity theory, Information Processing Letters 64 (5) (1997) 239-244; M. Rabi, A. Sherman, Associative one-way functions: A new paradigm for secret-key agreement and digital signatures, Tech. Rep. CS-TR-3183/UMIACS-TR-93-124, Department of Computer Science, University of Maryland, College Park, MD, 1993]) and subsequent papers - strongness (of noninvertibility), totality, commutativity, and associativity - and for each attribute, we allow it to be required to hold, required to fail, or "don't care". In this categorization there are 3^4 = 81 potential types of one-way functions. We prove that each of these 81 feature-laden types stands or falls together with the existence of (plain) one-way functions. (c) 2008 Elsevier B.V. All rights reserved.
[ "associativity", "commutativity", "strong noninvertibility", "worst-case one-way functions", "computational complexity" ]
[ "P", "P", "P", "P", "R" ]
5&6zUwM
An efficient algorithm for constrained global optimization and application to mechanical engineering design: League championship algorithm (LCA)
The league championship algorithm (LCA) is a new algorithm originally proposed for unconstrained optimization which tries to metaphorically model a league championship environment wherein artificial teams play in an artificial league for several weeks (iterations). Given the league schedule, a number of individuals, as sport teams, play in pairs and their game outcome is determined by the playing strength (fitness value) along with the team formation (solution). Modelling an artificial match analysis, each team devises the required changes in its formation (a new solution) for the next week's contest and the championship goes on for a number of seasons. In this paper, we adapt LCA for constrained optimization. In particular: (1) a feasibility criterion to bias the search toward feasible regions is included besides the objective value criterion; (2) generation of multiple offspring is allowed to increase the probability of an individual generating a better solution; (3) a diversity mechanism is adopted, which allows infeasible solutions with a promising objective value to precede the feasible solutions. Performance of LCA is compared with comparator algorithms on benchmark problems, where the experimental results indicate that LCA is a very competitive algorithm. Performance of LCA is also evaluated on well-studied mechanical design problems and results are compared with the results of 21 constrained optimization algorithms. Computational results signify that with a smaller number of evaluations, LCA ensures finding the true optimum of these problems. These results suggest that further developments and applications of LCA are worth investigating in future studies.
[ "league championship algorithm", "constrained optimization", "constraint-handling techniques", "engineering design optimization" ]
[ "P", "P", "U", "R" ]
712hstb
Statistical model training technique based on speaker clustering approach for HMM-based speech synthesis
We propose an average voice model training technique using a speaker class. The speaker class is obtained on the basis of speaker clustering. The average voice model is trained using the conventional contextual factors and the speaker class. In the speaker adaptation process, the target speaker's speaker class is estimated. Our proposal can synthesize speech with better similarity and naturalness.
[ "speaker clustering", "hmm-based speech synthesis", "average voice model", "speaker adaptation" ]
[ "P", "P", "P", "P" ]
rGe3rv-
a study of gradual transition detection in historic film material
Approaches to the detection of gradual transitions fall into two types: unified approaches, i.e. one detector for all gradual transition types, and approaches that use specialized detectors for each gradual transition type. We present an overview of existing methods and extend an existing unified approach for the detection of gradual transitions in historic material. In an experimental study we evaluate our approach on complex and low-quality historic material as well as on contemporary material from the TRECVid evaluation. Additionally, we investigate different features, feature combinations and fusion strategies. We observe that the historic material requires the use of texture features, in contrast to the contemporary material that in most of the cases requires the use of colour and luminance features.
[ "gradual transition detection", "cultural heritage", "shot boundary detection" ]
[ "P", "U", "M" ]
3D4zWNH
Electronic retention: what does your mobile phone reveal about you?
The global information-rich society is increasingly dependent on mobile phone technology for daily activities. A substantial secondary market in mobile phones has developed as a result of a relatively short life-cycle and recent regulatory measures on electronics recycling. These developments are, however, a cause for concern regarding privacy, since it is unclear how much information is retained on a device when it is re-sold. The crucial question is: what, despite your best efforts, does your mobile phone reveal about you? This research investigates the extent to which personal information continues to reside on mobile phones even when users have attempted to remove the information, hence passing the information into the secondary market. A total of 49 re-sold mobile devices were acquired from two secondary markets: a local pawn shop and an online auction site. These devices were examined using three industry-standard mobile forensic toolkits. Data were extracted from the devices via both physical and logical acquisitions and the resulting information artifacts categorized by type and sensitivity. All mobile devices examined yielded some user information and in total 11,135 artifacts were recovered. The findings confirm that substantial personal information is retained on a typical mobile device when it is re-sold. The results highlight several areas of potential future work necessary to ensure the confidentiality of personal data stored on mobile devices.
[ "privacy", "mobile devices", "forensics" ]
[ "P", "P", "P" ]
1:hRotR
reasoning about digital artifacts with acl2
ACL2 is both a programming language in which computing systems can be modeled and a tool to help a designer prove properties of such models. ACL2 stands for "A Computational Logic for Applicative Common Lisp" and provides mechanized reasoning support for a first-order axiomatization of an extended subset of functional Common Lisp. Most often, ACL2 is used to produce operational semantic models of artifacts. Such models can be executed as functional Lisp programs and so have dual use as both pre-fabrication simulation engines and as analyzable mathematical models of intended (or at least designed) behavior. This project had its start 40 years ago in Edinburgh with the first Boyer-Moore Pure Lisp theorem prover and has evolved from proofs about list concatenation and reverse to proofs about industrial models. Industrial use of theorem provers to answer design questions of critical importance is so surprising to people outside of the theorem proving community that it bears emphasis. In the 1980s, the earlier Boyer-Moore theorem prover, Nqthm, was used to verify the "Computational Logic stack" -- a hardware/software stack starting with the NDL description of the netlist for a microprocessor and ascending through a machine code ISA, an assembler, linker, and loader, two compilers (for subsets of Pascal and Lisp), an operating system, and some simple applications. The system components were proved to compose so that properties proved of high-level software were guaranteed by the binary image produced by the composition. At around the same time, Nqthm was used to verify 21 of the 22 subroutines in the MC68020 binary machine code produced from the Berkeley C String Library by gcc -o, identifying bugs in the library as a result. Applications like these convinced us that (a) industrial-scale formal methods were practical and (b) Nqthm's Pure Lisp produced uncompetitive results compared to C when used for simulation engines. We therefore designed ACL2, which initially was Nqthm recoded to support applicative Common Lisp. The 1990s saw the first industrial application of ACL2, to verify the correspondence between a gate-level description of the Motorola CAP DSP and its microcode engine. The Lisp model of the microcode engine was proved to be bit- and cycle-accurate but operated several times faster than the gate-level simulator in C because of the competitive execution speed of Lisp and the higher level of trusted abstraction. Furthermore, it was used to discover previously unknown microcode hazards. An executable Lisp predicate was verified to detect all hazards and subsequently used by microcode programmers to check code. This project and a subsequent one at AMD to verify the floating point division operation on the AMD K5 microprocessor demonstrated the practicality of ACL2 but also highlighted the need to develop better Lisp system programming tools wedded to formal methods, formal modeling, proof development, and "proof maintenance" in the face of evolution of the modeled artifacts. Much ACL2 development in the first decade of the 21st century was therefore dedicated to such tools and we have witnessed a corresponding increase in the use of ACL2 to construct and reason about commercial artifacts. ACL2 has been involved in the design of all AMD desktop microprocessors since the Athlon; specifically, ACL2 is used to verify floating-point operations on those microprocessors.
Centaur Technology (chipmaker for VIA Technologies) uses ACL2 extensively in verifying its media unit and other parts of its x86 designs. Researchers at Rockwell-Collins have shown that ACL2 models of microprocessors can run at 90% of the speed of C models of those microprocessors. Rockwell-Collins has also used ACL2 to do information flow proofs to establish process separation for the AAMP7G cryptoprocessor and, on the basis of those proofs, obtained MILS certification using Formal Methods techniques as specified by EAL-7 of the Common Criteria. IBM has used ACL2 to verify floating point operations on the Power 4 and other chips. ACL2 was also used to verify key properties of the Sun Java Virtual Machine's class loader. In this talk I will sketch the 40-year history of this project, showing how the techniques and applications have grown over the years. I will demonstrate ACL2 on both some simple problems and a complicated one, and I will deal briefly with the question of how -- and with what tool -- one verifies a verifier. For scholarly details of how to use ACL2 and some of its industrial applications see [1, 2]. For source code, lemma libraries, and an online user's manual, see the ACL2 home page, http://www.cs.utexas.edu/users/moore/acl2.
[ "operational semantics", "software stack", "hardware verification", "virtual machine verification", "jvm", "microprocessor verification", "automatic theorem proving" ]
[ "P", "P", "M", "M", "U", "M", "M" ]
2-7fopr
Deformation and fracturing using adaptive shape matching with stiffness adjustment
This paper presents a fast method that computes deformations with fracturing of an object using a hierarchical lattice. Our method allows numerically stable computation based on so-called shape matching. During the simulation, the deformed shape of the object and the condition of fracturing are used to determine the appropriate detail level in the hierarchy of the lattices. Our method modifies the computation of the stiffness of the object in different levels of the hierarchy so that the stiffness is maintained uniform, by introducing a stiffness parameter that does not depend on the hierarchy. By merging the subdivided lattices, our method minimizes the increase in computational cost. Copyright (C) 2009 John Wiley & Sons, Ltd.
[ "fracturing", "shape matching", "interactive deformation", "soft body" ]
[ "P", "P", "M", "U" ]
4uBUpHA
Using BP network for ultrasonic inspection of flip chip solder joints
Flip-chip technology has been used extensively in microelectronic packaging, where defect inspection for solder joints plays an extremely important role. In this paper, ultrasonic inspection, one of the non-destructive methods, was used for the inspection of flip chip solder joints. The image of the flip chip was captured by a scanning acoustic microscope and segmented based on the flip chip structure information. Then a back-propagation network was adopted, and the geometric features extracted from the image were fed to the network for classification and recognition. The results demonstrate the high recognition rate and feasibility of the approach. Therefore, this approach has high potential for solder joint defect inspection in flip chip packaging.
[ "ultrasonic inspection", "flip chip", "defect inspection", "back-propagation network" ]
[ "P", "P", "P", "P" ]
2nDDjYR
augmenting reflective middleware with an aspect orientation support layer
Reflective middleware provides an effective way to support adaptation in distributed systems. However, as distributed systems become increasingly complex, certain drawbacks of the reflective middleware approach are becoming evident. In particular, reflective APIs are found to impose a steep learning curve, and to place too much expressive power in the hands of developers. Recently, researchers in the field of Aspect-Oriented Programming (AOP) have argued that 'dynamic aspects' show promise in alleviating these drawbacks. In this paper, we report on work that attempts to combine the reflective middleware and AOP approaches. We build an AOP support layer on top of an underlying reflective middleware substrate in such a way that it can be dynamically deployed/undeployed where and when required, and imposes no overhead when it is not used. Our AOP approach involves aspects that can be dynamically (un)weaved across a distributed system on the basis of pointcut expressions that are inherently distributed in nature, and it supports the composition of advice that is remote from the advised joinpoint. An overall goal of the work is to effectively combine reflective middleware and AOP in a way that maximises the benefits and minimises the drawbacks of each.
[ "reflective middleware", "middleware", "aspect", "aspect orientation", "support", "layer", "effect", "adapt", "distributed", "distributed systems", "complexity", "learning", "place", "express", "power", "developer", "research", "aspect-oriented programming", "dynamic", "paper", "compositing", "components", "dynamic adaptation" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "U", "R" ]
214z7J8
Band-pass filtering of the time sequences of spectral parameters for robust wireless speech recognition
In this paper we address the problem of automatic speech recognition when wireless speech communication systems are involved. In this context, three main sources of distortion should be considered: acoustic environment, speech coding and transmission errors. Whilst the first one has already received a lot of attention, the last two deserve further investigation in our opinion. We have found that band-pass filtering of the recognition features improves ASR performance when distortions due to these particular communication systems are present. Furthermore, we have evaluated two alternative configurations at different bit error rates (BER) typical of these channels: band-pass filtering the LP-MFCC parameters or a modification of the RASTA-PLP using a sharper low-pass section performs consistently better than LP-MFCC and RASTA-PLP, respectively.
[ "wireless speech recognition", "transmission errors", "rasta-plp", "robust speech recognition", "modulation spectrum" ]
[ "P", "P", "P", "R", "U" ]
2kBKD7M
MSOAR: A high-throughput ortholog assignment system based on genome rearrangement
The assignment of orthologous genes between a pair of genomes is a fundamental and challenging problem in comparative genomics, since many computational methods for solving various biological problems critically rely on bona fide orthologs as input. While it is usually done using sequence similarity search, we recently proposed a new combinatorial approach that combines sequence similarity and genome rearrangement. This paper continues the development of the approach and unites genome rearrangement events and (post-speciation) duplication events in a single framework under the parsimony principle. In this framework, orthologous genes are assumed to correspond to each other in the most parsimonious evolutionary scenario involving both genome rearrangement and (post-speciation) gene duplication. Besides several original algorithmic contributions, the enhanced method allows for the detection of inparalogs. Following this approach, we have implemented a high-throughput system for ortholog assignment on a genome scale, called MSOAR, and applied it to the human and mouse genomes. As the results show, MSOAR is able to find 99 more true orthologs than the INPARANOID program did. In comparison to the iterated exemplar algorithm on simulated data, MSOAR performed favorably in terms of assignment accuracy. We also validated our predicted main ortholog pairs between human and mouse using public ortholog assignment datasets, synteny information, and gene function classification. These test results indicate that our approach is very promising for genome-wide ortholog assignment. Supplemental material and the MSOAR program are available at http://msoar.cs.ucr.edu.
[ "ortholog", "genome rearrangement", "comparative genomics", "gene duplication", "inparalog" ]
[ "P", "P", "P", "P", "P" ]
1UzaLBH
MedLDA: Maximum Margin Supervised Topic Models
A supervised topic model can use side information such as ratings or labels associated with documents or images to discover more predictive low dimensional topical representations of the data. However, existing supervised topic models predominantly employ likelihood-driven objective functions for learning and inference, leaving the popular and potentially powerful max-margin principle unexploited for seeking predictive representations of data and more discriminative topic bases for the corpus. In this paper, we propose the maximum entropy discrimination latent Dirichlet allocation (MedLDA) model, which integrates the mechanism behind the max-margin prediction models (e.g., SVMs) with the mechanism behind the hierarchical Bayesian topic models (e.g., LDA) under a unified constrained optimization framework, and yields latent topical representations that are more discriminative and more suitable for prediction tasks such as document classification or regression. The principle underlying the MedLDA formalism is quite general and can be applied for jointly max-margin and maximum likelihood learning of directed or undirected topic models when supervising side information is available. Efficient variational methods for posterior inference and parameter estimation are derived and extensive empirical studies on several real data sets are also provided. Our experimental results demonstrate qualitatively and quantitatively that MedLDA could: 1) discover sparse and highly discriminative topical representations; 2) achieve state of the art prediction performance; and 3) be more efficient than existing supervised topic models, especially for classification.
[ "supervised topic models", "maximum entropy discrimination", "latent dirichlet allocation", "max-margin learning", "support vector machines" ]
[ "P", "P", "P", "R", "U" ]
-16FCzc
LS-SVM-based image segmentation using pixel color-texture descriptors
Image segmentation remains an important, but hard-to-solve, problem since it appears to be application dependent, with usually no a priori information available regarding the image structure. Moreover, the increasing demands of image analysis tasks in terms of the quality of segmentation results introduce the necessity of employing multiple cues for improving image-segmentation results. In this paper, we present a least squares support vector machine (LS-SVM) based image segmentation using pixel color-texture descriptors, in which multiple cues such as edge saliency, color saliency, local maximum energy, and multiresolution texture gradient are incorporated. Firstly, the pixel-level edge saliency and color saliency are extracted based on the spatial relations between neighboring pixels in HSV color space. Secondly, the image pixels' texture features, local maximum energy and multiresolution texture gradient, are represented via the nonsubsampled contourlet transform. Then, both the pixel-level edge color saliency and the texture features are used as input to the LS-SVM model (classifier), and the LS-SVM model (classifier) is trained by selecting the training samples with Arimoto entropy thresholding. Finally, the color image is segmented with the trained LS-SVM model (classifier). This image segmentation can take full advantage not only of human visual attention and the local texture content of the color image, but also of the generalization ability of the LS-SVM classifier. Experimental results show that our proposed method has very promising segmentation performance compared with the state-of-the-art segmentation approaches recently proposed in the literature.
[ "image segmentation", "least squares support vector machine", "arimoto entropy thresholding", "human visual attention", "local texture content" ]
[ "P", "P", "P", "P", "P" ]
3yBXA6X
geometric verification of swirling features in flow fields
In this paper, we present a verification algorithm for swirling features in flow fields, based on the geometry of streamlines. The features of interest in this case are vortices. Without a formal definition, existing detection algorithms lack the ability to accurately identify these features, and the current method for verifying the accuracy of their results is by human visual inspection. Our verification algorithm addresses this issue by automating the visual inspection process. It is based on identifying the swirling streamlines that surround the candidate vortex cores. We apply our algorithm to both numerically simulated and procedurally generated datasets to illustrate the efficacy of our approach.
[ "feature verification", "flow field visualization", "vortex detection" ]
[ "R", "R", "R" ]
1HYkSvh
On topology and dynamics of consensus among linear high-order agents
Consensus of a group of agents in a multi-agent system with and without a leader is considered. All agents are modelled by identical linear n-th order dynamical systems while the leader, when it exists, may evolve according to a different linear model of the same order. The interconnection topology between the agents is modelled as a directed weighted graph. We provide answers to the questions of whether the group converges to consensus and what consensus value the group eventually reaches. To that end, we give a detailed analysis of relevant algebraic properties of the graph Laplacian. Furthermore, we propose an LMI-based design for group consensus in the general case.
[ "consensus", "multi-agent systems", "interconnection topology", "graphs" ]
[ "P", "P", "P", "P" ]
2Antiae
Human cognition in manual assembly: Theories and applications
Human cognition in production environments is analyzed with respect to various findings and theories in cognitive psychology. This theoretical overview describes effects of task complexity and attentional demands on both mental workload and task performance as well as presents experimental data on these topics. A review of two studies investigating the benefit of augmented reality and spatial cueing in an assembly task is given. Results demonstrate an improvement in task performance with attentional guidance while using contact analog highlighting. Improvements were obvious in reduced performance times and eye fixations as well as in increased velocity and acceleration of reaching and grasping movements. These results have various implications for the development of an assistive system. Future directions in this line of applied research are suggested. The introduced methodology illustrates how the analysis of human information processes and psychological experiments can contribute to the evaluation of engineering applications.
[ "human cognition", "task complexity", "mental workload", "information processing", "visual attention", "worker assistance" ]
[ "P", "P", "P", "P", "M", "M" ]
1b2qW8v
Enabling Warping on Stereoscopic Images
Warping is one of the basic image processing techniques. Directly applying existing monocular image warping techniques to stereoscopic images is problematic as it often introduces vertical disparities and damages the original disparity distribution. In this paper, we show that these problems can be solved by appropriately warping both the disparity map and the two images of a stereoscopic image. We accordingly develop a technique for extending existing image warping algorithms to stereoscopic images. This technique divides stereoscopic image warping into three steps. Our method first applies the user-specified warping to one of the two images. Our method then computes the target disparity map according to the user specified warping. The target disparity map is optimized to preserve the perceived 3D shape of image content after image warping. Our method finally warps the other image using a spatially-varying warping method guided by the target disparity map. Our experiments show that our technique enables existing warping methods to be effectively applied to stereoscopic images, ranging from parametric global warping to non-parametric spatially-varying warping.
[ "disparity mapping", "stereoscopic image warping" ]
[ "P", "P" ]
466U&cN
Markov chain modeling of intermittency chaos and its application to Hopfield NN
In this study, a method for modeling intermittency chaos using a Markov chain is proposed. The performances of the intermittency chaos and the Markov chain model are investigated when they are injected into the Hopfield Neural Network for a quadratic assignment problem or an associative memory. Computer-simulated results show that the proposed model is good enough to attain performance similar to that of the intermittency chaos.
[ "markov chain", "intermittency chaos", "neural network", "associative memory", "burst noise", "qap" ]
[ "P", "P", "P", "P", "U", "U" ]
11yb8eW
Splitting Integrators for Nonlinear Schrodinger Equations Over Long Times
Conservation properties of a full discretization via a spectral semi-discretization in space and a Lie-Trotter splitting in time for cubic Schrodinger equations with small initial data (or small nonlinearity) are studied. The approximate conservation of the actions of the linear Schrodinger equation, energy, and momentum over long times is shown using modulated Fourier expansions. The results are valid in arbitrary spatial dimension.
[ "splitting integrators", "nonlinear schrodinger equation", "energy", "and momentum", "modulated fourier expansion", "split-step fourier method", "long-time behavior", "near-conservation of actions" ]
[ "P", "P", "P", "P", "P", "M", "U", "M" ]
-kCPvLo
Generalized scans and tridiagonal systems
Motivated by the analysis of known parallel techniques for the solution of linear tridiagonal systems, we introduce generalized scans, a class of recursively defined, length-preserving, sequence-to-sequence transformations that generalize the well-known prefix computations (scans). Generalized scan functions are described in terms of three algorithmic phases: the reduction phase, which saves data for the third (expansion) phase and prepares data for the second phase, which is a recursive invocation of the same function on one fewer variable. Both the reduction and expansion phases operate on a bounded number of variables, a key feature for their parallelization. Generalized scans enjoy a property, called here protoassociativity, that gives rise to ordinary associativity when generalized scans are specialized to ordinary scans. We show that the solution of positive-definite block tridiagonal linear systems can be cast as a generalized scan, thereby shedding light on the underlying structure enabling known parallelization schemes for this problem. We also describe a variety of parallel algorithms, including some that are well known for tridiagonal systems and some that are much better suited to distributed computation. (C) 2001 Elsevier Science B.V. All rights reserved.
[ "scan", "prefix", "tridiagonal linear system", "parallel computation", "numerical computation" ]
[ "P", "P", "P", "R", "M" ]
fbtVNQX
From floated to conventional confidence intervals for the relative risks based on published dose-response data
A dose-response meta-analysis of epidemiological studies can encounter different types of confidence intervals (floated vs. conventional). This paper illustrates how to back-calculate conventional confidence intervals from a set of relative risks reported with floated confidence intervals or floated standard errors. Furthermore, we provide an implementation of the formulas in a user-friendly program developed for the Stata software. We exemplify the point using published data about alcohol intake and endometrial cancer incidence from the Million Women Study. (C) 2009 Elsevier Ireland Ltd. All rights reserved.
[ "confidence interval", "dose-response", "meta-analysis", "floating absolute risk" ]
[ "P", "P", "P", "M" ]
-Ec2NvK
Efficient normal basis multipliers in composite fields
It is well known that a class of finite fields GF(2^n) using an optimal normal basis is most suitable for a hardware implementation of arithmetic in finite fields. In this paper, we introduce composite fields with some hardware-applicable properties resulting from the normal basis representation and the optimality condition. We also present a hardware architecture of the proposed composite fields, including a bit-parallel multiplier.
[ "composite field", "finite field", "optimal normal basis", "bit-parallel multiplier" ]
[ "P", "P", "P", "M" ]
-Vu-VLx
A stable second-order scheme for fluidstructure interaction with strong added-mass effects
In this paper, we present a stable second-order time accurate scheme for solving fluid-structure interaction problems. The scheme uses a so-called Combined Field with Explicit Interface (CFEI) advancing formulation based on the Arbitrary Lagrangian-Eulerian approach with a finite element procedure. Although loosely-coupled partitioned schemes are often popular choices for simulating FSI problems, these schemes may suffer from inherent instability at low structure to fluid density ratios. We show that our second-order scheme is stable for any mass density ratio and hence is able to handle strong added-mass effects. The energy-based stability proof relies heavily on the connections among the extrapolation formula, the trapezoidal scheme for the second-order equation, and the backward difference method for the first-order equation. Numerical accuracy and stability of the scheme are assessed with the aid of two-dimensional fluid-structure interaction problems of increasing complexity. We confirm second-order temporal accuracy by numerical experiments on an elastic semi-circular cylinder problem. We verify the accuracy of coupled solutions with respect to the benchmark solutions of a cylinder-elastic bar and Navier-Stokes flow system. To study the stability of the proposed scheme for strong added-mass effects, we present new results using the combined field formulation for flexible flapping motion of a thin-membrane structure with low mass ratio and strong added-mass effects in a uniform axial flow. Using a systematic series of fluid-structure simulations, a detailed analysis of the coupled response as a function of mass ratio for the case of very low bending rigidity has been presented.
[ "fluidstructure interaction", "strong added-mass", "combined field with explicit interface", "stability proof", "low mass density ratio", "second order", "flapping dynamics" ]
[ "P", "P", "P", "P", "R", "U", "M" ]
2Wc7YAJ
Network information flow
A formal model for the analysis of information flow in interconnection networks is presented. It is based on a timed process algebra which can also express network properties. The information flow is based on a concept of deducibility on composition. Robustness of systems against network timing attacks is defined. A variety of different security properties which reflect different security requirements are defined and investigated.
[ "information flow", "interconnection network", "timing attack", "security" ]
[ "P", "P", "P", "P" ]
-fAKoU-
Interactive reduct evolutional computation for aesthetic design
We propose a method of evolving designs based on the user's personal preferences. The method works through an interaction between the user and a computer system. The method's objective is to help the customer to set design parameters via a simple evaluation of displayed samples. An important feature is that the design attributes to which the user pays more attention (favored features) are estimated using reducts in rough set theory and reflected when refining the design. New design candidates are generated by the user's evaluation of design samples generated at random. The values of attributes estimated as favored features are fixed in the refined samples, while other attributes are generated at random. This interaction continues until the samples converge to a satisfactory design. In this manner, the design process efficiently evaluates personal and subjective preferences. The method is applied to design a 3D cylinder model such as a cup or vase. The method is then compared with an Interactive GA.
[ "reduct", "aesthetics", "favored feature", "rough set theory", "conceptual design", "kansei", "human attention" ]
[ "P", "P", "P", "P", "M", "U", "M" ]
2WVWTAT
A multiple criteria sorting method where each category is characterized by several reference actions: The Electre Tri-nC method
This paper presents Electre Tri-nC, a new sorting method which takes into account several reference actions for characterizing each category. This new method gives particular freedom to the decision maker in the co-construction decision aiding process with the analyst to characterize the set of categories, since there is no constraint to introduce only one reference action as typical of each category, as in Electre Tri-C (Almeida-Dias et al., 2010). As in that sorting method, this new sorting method is composed of two joint rules. Electre Tri-nC also fulfills a certain number of natural requirements. Additional results on the behavior of the new method are also provided in this paper, namely with respect to the addition or removal of the reference actions used for characterizing a certain category. A numerical example illustrates the manner in which Electre Tri-nC can be used by a decision maker. A comparison with some related sorting procedures is presented, and it allows us to conclude that the new method is appropriate for dealing with sorting problems.
[ "sorting", "multiple criteria decision aiding", "constructive approach", "electre tri-nc lectre ri n", "decision support" ]
[ "P", "R", "U", "M", "M" ]
55GtVzz
Estimating unique solutions of DC transistor circuits
For each natural number n, let F_n denote the collection of mappings of R^n onto itself defined by: F ∈ F_n if and only if there exist n strictly monotone increasing functions f_1, ..., f_n mapping R onto itself such that for each x = [x_1, ..., x_n]^T ∈ R^n, F(x) = [f_1(x_1), ..., f_n(x_n)]^T. The following new property of the class P_0 of matrices is proved: a real n x n matrix A belongs to P_0 if and only if for every G, H ∈ F_n the set S_0 = {x ∈ R^n : -G(x) ≤ Ax ≤ -H(x)} is bounded. As an illustration of this property, a method of estimating the unique solution of the nonlinear equation F(x) + Ax = b, describing a large class of DC transistor circuits, is developed. This can improve the efficiency of known computation algorithms. Numerical examples of transistor circuits illustrate in detail how the method works in practice.
[ "estimation", "electronics", "mathematical methods", "mathematics" ]
[ "P", "U", "M", "U" ]
3K64Zph
Multiple topic identification in human/human conversations
A multiple-classification approach for multiple theme hypothesization is proposed. Four methods, one of which is new, are initially used and separately evaluated. A new sequential decision strategy for multiple theme hypothesization is introduced. A new hypothesis refinement component is presented, based on ASR word lattices. Results show that the strategy makes it possible to obtain reliable service surveys.
[ "human/human conversation analysis", "multi-topic identification", "spoken language understanding", "interpretation strategies" ]
[ "M", "M", "U", "M" ]
VM1RQ&8
towards a documentation maturity model
This paper presents preliminary work towards a maturity model for system documentation. The Documentation Maturity Model (DMM) is specifically targeted towards assessing the quality of documentation used in aiding program understanding. Software engineers and technical writers produce such documentation during regular product development lifecycles. The documentation can also be recreated after the fact via reverse engineering. The DMM has both process and product components; this paper focuses on the product quality aspects.
[ "documentation", "maturity model", "quality", "reverse engineering" ]
[ "P", "P", "P", "P" ]
-DVa&yn
Numerical representation of product transitive complete fuzzy orderings
Let X be a space of alternatives with a preference relation in the form of a product transitive complete fuzzy ordering R. We prove the existence of continuous utility functions for R. (C) 2010 Elsevier Ltd. All rights reserved.
[ "product transitivity", "fuzzy orderings", "fuzzy utility function" ]
[ "P", "P", "R" ]
3Z9V-bB
Design of WDM RoF PON Based on OFDM and Optical Heterodyne
In this paper, we propose a WDM radio-over-fiber (RoF) passive optical network (PON) based on orthogonal frequency-division multiplexing (OFDM) and optical heterodyne. With OFDM and coherent receiving technology, the system achieves high, elastic bandwidth allocation and excellent transport properties. Using optical heterodyne, the network implements wireless access without adding a radio source. We evaluate the performance of the system in terms of bit error rate, coverage area, and receiving eye diagram, and show that the network has excellent wired/wireless access properties.
[ "optical heterodyne", "radio-over-fiber (rof)", "passive optical network (pon)", "orthogonal frequency-division multiplexing (ofdm)" ]
[ "P", "P", "P", "P" ]
-n&sUma
Image retrieval via isotropic and anisotropic mappings
This paper presents an approach for content-based image retrieval via isotropic and anisotropic mappings. Isotropic mappings are defined as mappings invariant to the action of the planar Euclidean group on the image space, that is, invariant to the translation, rotation and reflection of image data, and hence invariant to orientation and position. Anisotropic mappings, on the other hand, are defined as those mappings that are correspondingly variant. Structure extraction (via a perceptual grouping process) and the color histogram are shown to be representations of isotropic mappings. Texture analysis using a channel energy model composed of even-symmetric Gabor filters is considered to be a representation of anisotropic mapping. An integration framework for these mappings is developed. Results of retrieval of outdoor images by query and by classification using a nearest neighbor classifier are presented.
[ "image retrieval", "euclidean group", "structure", "perceptual grouping", "color histogram", "texture", "gabor filter", "nearest neighbor classifier" ]
[ "P", "P", "P", "P", "P", "P", "P", "P" ]
2ZHnUg5
Physical gestures for abstract concepts: Inclusive design with primary metaphors
Designers in inclusive design are challenged to create interactive products that cater for a wide range of prior experiences and cognitive abilities of their users. But suitable design guidance for this task is rare. This paper proposes the theory of primary metaphor and explores its validity as a source of design guidance. Primary metaphor theory describes how basic mental representations of physical sensorimotor experiences are extended to understand abstract domains. As primary metaphors are subconscious mental representations that are highly automated, they should be robustly available to people with differing levels of cognitive ability. Their proposed universality should make them accessible to people with differing levels of prior experience with technology. These predictions were tested for 12 primary metaphors that predict relations between spatial gestures and abstract interactive content. In an empirical study, 65 participants from two age groups (young and old) were asked to produce two-dimensional touch and three-dimensional free-form gestures in response to given abstract keywords and spatial dimensions of movements. The results show that across age groups in 92% of all cases users choose gestures that confirmed the predictions of the theory. Although the two age groups differed in their cognitive abilities and prior experience with technology, overall they did not differ in the amount of metaphor-congruent gestures they made. As predicted, only small or zero correlations of metaphor-congruent gestures with prior experience or cognitive ability could be found. The results provide a promising step toward inclusive design guidelines for gesture interaction with abstract content on mobile multitouch devices. (C) 2010 Elsevier B.V. All rights reserved.
[ "inclusive design", "gesture interaction", "multi-touch interaction", "image schema", "conceptual metaphor", "older adults" ]
[ "P", "P", "M", "U", "M", "U" ]
2Dk4kDB
MBS zone configuration schemes for wireless multicast and broadcast service
The Multicast Broadcast Service (MBS) zone technology is proposed to provide MBS with high QoS on Mobile Communications Networks (MCNs). An MBS zone consists of a group of Base Stations (BSs) synchronized to transmit the same MBS content using the same multicasting channel, which potentially reduces the time delay for Mobile Stations (MSs) to handoff between different BSs in the same MBS zone. However, significant time delay is still incurred while MSs handoff between different BSs belonging to different MBS zones (i.e., the inter-MBS zone handoff). To reduce the possibility of inter-MBS zone handoff, we may increase the size of an MBS zone (i.e., more BSs contained in an MBS zone), which may result in poor multicasting channel utilization. This paper proposes the OverLapping Scheme (OLS) and the Enhanced OverLapping Scheme (EOLS) for more flexible MBS zone configuration to achieve better performance for MBS in terms of QoS and radio resource utilization. We propose analytical models for the original MBS zone technology (namely the Basic scheme) and the OLS scheme, which are validated against simulation experiments. Based on the simulation results, we investigate the performance of the Basic scheme, the OLS scheme, and the EOLS scheme. Copyright (C) 2010 John Wiley & Sons, Ltd.
[ "multicast broadcast service (mbs)", "mobile communications network (mcn)", "handoff delay" ]
[ "P", "P", "R" ]
SyQom84
Scheduling gain for frequency-selective Rayleigh-fading channels with application to self-organizing packet scheduling
This paper investigates packet scheduling in the context of Self-Optimizing Networks, and demonstrates how to improve coverage dynamically by adjusting the scheduling strategy. We focus on alpha-fair schedulers, and we provide methods for calculating the scheduling gain, including several closed-form formulas. The scheduling gain is analyzed for different fading models, with a particular focus on the frequency-selective channel. We then propose a coverage-capacity self-optimization algorithm based on alpha-fair schedulers. A use case illustrates the implementation of the algorithm and simulation results show that important coverage gains are achieved at the expense of very little computing power. (C) 2011 Elsevier B.V. All rights reserved.
[ "scheduling", "self-optimizing networks", "wireless communication" ]
[ "P", "P", "U" ]
2SGqRFV
OLSR-aware channel access scheduling in wireless mesh networks
Wireless mesh networks (WMNs) have emerged as a key technology having various advantages, especially in providing cost-effective coverage and connectivity solutions in both rural and urban areas. WMNs are typically deployed as backbone networks, usually employing spatial TDMA (STDMA)-based access schemes which are suitable for the high traffic demands of WMNs. This paper aims to achieve higher utilization of the network capacity and thereby to increase the application layer throughput of STDMA-based WMNs. The central idea is to use optimized link state routing (OLSR)-specific routing layer information in link layer channel access schedule formation. This paper proposes two STDMA-based channel access scheduling schemes (one distributed, one centralized) that exploit OLSR-specific information to improve the application layer throughput without introducing any additional messaging overhead. To justify the contribution of using OLSR-specific information to the throughput, the proposed schemes are compared against one another and against their non-OLSR-aware versions via extensive ns-2 simulations. Our simulation results verify that utilizing OLSR-specific information significantly improves the overall network performance both in distributed and in centralized schemes. The simulation results further show that OLSR-aware scheduling algorithms attain higher end-to-end throughput although their non-OLSR-aware counterparts achieve higher concurrency in slot allocations. (C) 2010 Elsevier Inc. All rights reserved.
[ "spatial tdma", "cross-layer design", "olsr", "mac", "centralized channel access scheduling", "distributed channel access scheduling" ]
[ "P", "U", "U", "U", "R", "R" ]
-M9ptZH
Privacy-preserving indexing of documents on the network
With the ubiquitous collection of data and creation of large distributed repositories, enabling search over this data while respecting access control is critical. A related problem is that of ensuring privacy of the content owners while still maintaining an efficient index of distributed content. We address the problem of providing privacy-preserving search over distributed access-controlled content. Indexed documents can be easily reconstructed from conventional (inverted) indexes used in search. Currently, the need to avoid breaches of access-control through the index requires the index hosting site to be fully secured and trusted by all participating content providers. This level of trust is impractical in the increasingly common case where multiple competing organizations or individuals wish to selectively share content. We propose a solution that eliminates the need of such a trusted authority. The solution builds a centralized privacy-preserving index in conjunction with a distributed access-control enforcing search protocol. Two alternative methods to build the centralized index are proposed, allowing trade offs of efficiency and security. The new index provides strong and quantifiable privacy guarantees that hold even if the entire index is made public. Experiments on a real-life dataset validate performance of the scheme. The appeal of our solution is twofold: (a) content providers maintain complete control in defining access groups and ensuring its compliance, and (b) system implementors retain tunable knobs to balance privacy and efficiency concerns for their particular domains.
[ "indexing", "privacy", "distributed search" ]
[ "P", "P", "R" ]
-x7SuqS
cross layer optimization for efficient data aggregation in multi-hop wireless sensor networks
Wireless Sensor Networks (WSNs) are the most promising technological paradigm to support the next generation of highly efficient emergency management systems. Optimal design of a WSN involves all the layers of the protocol stack, from the physical layer (PHY) and the medium access layer (MAC) to the application layer. The design problem is conveniently cast in this paper for linear sensor network topologies where the terminals are equidistantly placed on the line between the source and the destination and monitor a correlated field. This simple topology can be adopted to provide insights into the performance of multihop networks used in several applications such as monitoring systems, acoustic sensor arrays, and seismic systems. The paper provides an analytical tool for performance analysis that takes into account the statistical properties of the monitored field (spatial and temporal correlation), the PHY layer transceiver design (RF power allocation and modulation), and the medium access (duty cycle, routing).
[ "wireless sensor networks", "source coding", "cross-layer design", "linear network topology", "compress and forward" ]
[ "P", "M", "M", "R", "M" ]
1EjBMJM
Rank-order polynomial subband decomposition for medical image compression
In this paper, the problem of progressive lossless image coding is addressed, and a nonlinear decomposition for progressive lossless compression is presented. The decomposition into subbands is called rank-order polynomial decomposition (ROPD) according to the polynomial prediction models used. The decomposition method presented here is a further development and generalization of the morphological subband decomposition (MSD) introduced earlier by the same research group. It is shown that ROPD provides similar or slightly better results than the compared coding schemes such as the codec based on set partitioning in hierarchical trees (SPIHT) and the codec based on wavelet/trellis-coded quantization (WTCQ). The proposed method substantially outperforms standard JPEG. The proposed lossless compression scheme has the functionality of having a completely embedded bit stream, which allows for data browsing. It is shown that the ROPD has a better lossless rate than the MSD but also a much better browsing quality when only a part of the bit stream is decompressed. Finally, the possibility of hybrid lossy/lossless compression is presented using ultrasound images. As with other compression algorithms, considerable gain can be obtained if only the regions of interest are compressed losslessly.
[ "medical image compression", "progressive lossless image coding", "rank-order polynomial decomposition", "nonlinear subband decomposition" ]
[ "P", "P", "P", "R" ]
-w5A6ZJ
LiNearN: A new approach to nearest neighbour density estimator
Reject the premise that a NN algorithm must find the NN for every instance. The first NN density estimator that has O(n) time complexity and O(1) space complexity. These complexities are achieved without using any indexing scheme. Our asymptotic analysis reveals that it trades off between bias and variance. Easily scales up to large data sets in anomaly detection and clustering tasks.
[ "anomaly detection", "clustering", "k-nearest neighbour", "density-based" ]
[ "P", "P", "M", "U" ]
3kkSbxL
Beauty or realism: The dimensions of skin from cognitive sciences to computer graphics
As the most visible interface between the individual and the others, the skin is a key element of visually-carried inter-individual social information, since skin displays a wide array of information regarding gender, age, or health status. Adequate skin perception is central in individual identification and social interactions. This topic has elicited marked interest among artists since the first development of visual arts in Antiquity. Often performed in order to identify the biological correlates of attractiveness, psychological research on skin perception took a leap forward with the development of virtual image synthesis. Here, we investigate how advances in both computer graphics and the psychology of skin perception may be turned to use in real-time virtual worlds. We propose a model of skin perception based both on purely physical dimensions such as color, texture, and symmetry, and on dimensions carrying socially-oriented information, such as perceived youth (information regarding putative fertility), markers of sexual dimorphism (information regarding hormonal status), and level of oxygenation (information regarding health status). It appears that for almost all of the dimensions of skin, maximal attractiveness and realism are the two opposite extremities of a single perceptive continuum.
[ "skin perception", "avatar", "humanmachine interactions", "uncanny valley", "synthesized skin", "virtual settings" ]
[ "P", "U", "M", "U", "M", "M" ]
56PQA4V
Fault diagnosis by Locality Preserving Discriminant Analysis and its kernel variation
Linear Discriminant Analysis (LDA) and its nonlinear kernel variation Generalized Discriminant Analysis (GDA) are the most popular supervised dimensionality reduction methods for fault diagnosis. However, we argue that they probably provide suboptimal results for fault diagnosis due to the Fisher criterion they use. This paper proposes a new supervised dimensionality reduction method named Locality Preserving Discriminant Analysis (LPDA) and its kernel variation Kernel LPDA (KLPDA) for fault diagnosis. (K)LPDA maximizes a new criterion such that local discriminant structure and local geometric structure in data are optimally preserved simultaneously in each dimension of the reduced space. The criterion directly targets minimizing local overlap between different classes. Extensive simulations on the Tennessee Eastman (TE) benchmark simulation process and a waste water treatment plant (WWTP) clearly demonstrate the superiority of our methods in terms of misclassification rate and making use of extra training data. (C) 2012 Elsevier Ltd. All rights reserved.
[ "fault diagnosis", "multi-fault classification", "feature extraction", "local structure preserving", "kernel methods" ]
[ "P", "U", "U", "R", "R" ]
3rdUXbt
Wireless distributed computing in cognitive radio networks
Individual cognitive radio nodes in an ad-hoc cognitive radio network (CRN) have to perform complex data processing operations for several purposes, such as situational awareness and cognitive engine (CE) decision making. From an implementation point of view, each cognitive radio (CR) may not have the computational and power resources to perform these tasks by itself. In this paper, wireless distributed computing (WDC) is presented as a technology that enables multiple resource-constrained nodes to collaborate in computing complex tasks in a distributed manner. This approach has several benefits over the traditional approach of local computing, such as reduced energy and power consumption, reduced burden on the resources of individual nodes, and improved robustness. However, the benefits are negated by the communication overhead involved in WDC. This paper demonstrates the application of WDC to CRNs with the help of an example CE processing task. In addition, the paper analyzes the impact of the wireless environment on WDC scalability in homogeneous and heterogeneous environments. The paper also proposes a workload allocation scheme that utilizes a combination of stochastic optimization and decision-tree search approaches. The results show limitations in the scalability of WDC networks, mainly due to the communication overhead involved in sharing raw data pertaining to delegated computational tasks.
[ "distributed computing", "cognitive radio networks", "cognitive engine", "workload allocation", "power and energy consumption" ]
[ "P", "P", "P", "P", "R" ]
-GQy9Dn
Advertisement timeout driven bee's mating approach to maintain fair energy level in sensor networks
In wireless sensor networks, the dynamic cluster-based routing approach is widely used. This approach quickly depletes the energy of cluster heads and induces frequent execution of the re-election algorithm. The repeated cluster head re-election increases the number of advertisement messages, which in turn depletes the energy of the overall sensor network. Here, we propose the Advertisement Timeout Driven Bee's Mating Approach (ATDBMA), which reduces the cluster set-up communication overhead and elects in advance a standby node for the current cluster head that has the capability to withstand many rounds. The proposed ATDBMA method uses honeybee mating behaviour in electing the standby node for the current cluster head. This approach outperforms the other methods in reducing the number of re-elections and maintaining fair energy levels among nodes between rounds.
[ "advertisement timeout", "bee's mating", "wireless sensor network" ]
[ "P", "P", "P" ]
4:iS1-k
Adaptive fuzzy decentralized output feedback control for stochastic nonlinear large-scale systems
In this paper, an adaptive fuzzy decentralized backstepping output feedback control approach is proposed for a class of uncertain large-scale stochastic nonlinear systems without the measurements of the states. The fuzzy logic systems are used to approximate the unknown nonlinear functions, and a fuzzy state observer is designed for estimating the unmeasured states. On the basis of the fuzzy state observer, and by combining the adaptive backstepping technique with decentralized control design, an adaptive fuzzy decentralized output feedback control approach is developed. It is proved that the proposed control approach can guarantee that all the signals of the resulting closed-loop system are semi-globally uniformly ultimately bounded (SGUUB) in probability, and the observer errors and the output of the system converge to a small neighborhood of the origin by choosing appropriate design parameters. A simulation example is provided to show the effectiveness of the proposed approach.
[ "stochastic nonlinear systems", "fuzzy state observer", "backstepping technique", "fuzzy adaptive decentralized control", "stability analysis" ]
[ "P", "P", "P", "R", "U" ]
Q-cjTZ6
Protection against soft errors in the space environment: A finite impulse response (FIR) filter case study
The problem of radiation is a key issue in Space applications, since it produces several negative effects on digital circuits. Considering the high reliability expected in these systems, many techniques have been proposed to mitigate these effects. However, traditional protection techniques against soft errors, like Triple Modular Redundancy (TMR) or EDAC codes (for example Hamming), normally result in a significant area and power overhead. In this paper we propose a specific technique to protect digital finite impulse response (FIR) filters by applying system knowledge. This means studying and using the singularities in their structure in order to provide effective protection with minimal area and power. The results obtained in the experimental process have been compared with the protection offered by TMR and Hamming codes, in order to prove the quality of the proposed solution.
[ "soft errors", "radiation", "fault tolerance", "error detection and correction codes", "digital filters" ]
[ "P", "P", "U", "M", "R" ]
-heQ972
Minimizing the dynamic and sub-threshold leakage power consumption using least leakage vector-assisted technology mapping
Power consumption due to the temperature-dependent leakage current becomes a dominant part of the total power dissipation in systems using nanometer-scale process technology. To obtain the minimum power consumption for different operating conditions, logic synthesis tools are required to take into consideration the leakage power as well as the operating characteristics during the optimization. Conventional logic synthesis flows consider dynamic power only and use an over-simplified cost function in modeling the total power consumption of the logic network. In this paper, we propose a complete model of the total power consumption of the logic network, which includes both the active and standby sub-threshold leakage power, and the operating duty cycle of the applications. We also propose a least leakage vector (LLV) assisted technology mapping algorithm to optimize the total power of the final mapped network. Instead of finding the LLV after the logic network is synthesized and mapped, we use the LLV found in the technology-decomposed network to help in obtaining the lowest total power match during technology mapping. Experimental results on MCNC benchmarks show that on average more than 30% reduction in total power consumption is obtained compared with the conventional low power technology mapping algorithm.
[ "least leakage vector", "technology mapping", "sub-threshold leakage power reduction" ]
[ "P", "P", "R" ]
33N5W53
Chaos breeds autonomy: connectionist design between bias and baby-sitting
In connectionism and its offshoots, models acquire functionality through externally controlled learning schedules. This undermines the claim of these models to autonomy. Providing these models with intrinsic biases is not a solution, as it makes their function dependent on design assumptions. Between these two alternatives, there is room for approaches based on spontaneous self-organization. Structural reorganization in adaptation to spontaneous activity is a well-known phenomenon in neural development. It is proposed here as a way to prepare connectionist models for learning and enhance the autonomy of these models.
[ "spontaneous activity", "small world", "non-linear dynamics", "perception", "complex systems", "evolving and growing neural networks", "cognitive modeling" ]
[ "P", "U", "U", "U", "U", "M", "M" ]
3XxNa7J
An improvement on the complexity of factoring read-once Boolean functions
Read-once functions have gained recent, renewed interest in the fields of theory and algorithms of Boolean functions, computational learning theory and logic design and verification. In an earlier paper [M.C. Golumbic, A. Mintz, U. Rotics, Factoring and recognition of read-once functions using cographs and normality, and the readability of functions associated with partial k-trees, Discrete Appl. Math. 154 (2006) 1465-1477], we presented the first polynomial-time algorithm for recognizing and factoring read-once functions, based on a classical characterization theorem of Gurvich which states that a positive Boolean function is read-once if and only if it is normal and its co-occurrence graph is P4-free. In this note, we improve the complexity bound by showing that the method can be modified slightly, with two crucial observations, to obtain an O(n|f|) implementation, where |f| denotes the length of the DNF expression of a positive Boolean function f, and n is the number of variables in f. The previously stated bound was O(n^2 k), where k is the number of prime implicants of the function. In both cases, f is assumed to be given as a DNF formula consisting entirely of the prime implicants of the function.
[ "boolean functions", "read-once functions", "logic", "cographs" ]
[ "P", "P", "P", "P" ]
2-yV1q4
A Dutch medical language processor: part II: evaluation
This paper provides a preliminary evaluation of a general Dutch medical language processor (DMLP). Four examples of different potential applications (based on different linguistic modules) are presented, each with its own evaluation method. Finally, a critical review of the used evaluation methods is offered according to the state of the art in medical language processing.
[ "medical language processing", "computational linguistics", "information processing", "automated encoding" ]
[ "P", "M", "M", "U" ]
2kok-k&
Privacy-Preserving Distributed Network Troubleshooting-Bridging the Gap between Theory and Practice
Today, there is a fundamental imbalance in cybersecurity. While attackers act more and more globally and in a coordinated manner, network defense is limited to examining local information only, due to privacy concerns. To overcome this privacy barrier, we use secure multiparty computation (MPC) for the problem of aggregating network data from multiple domains. We first optimize MPC comparison operations for processing high volume data in near real-time by not enforcing protocols to run in a constant number of synchronization rounds. We then implement a complete set of basic MPC primitives in the SEPIA library. For parallel invocations, SEPIA's basic operations are between 35 and several hundred times faster than those of comparable MPC frameworks. Using these operations, we develop four protocols tailored for distributed network monitoring and security applications: the entropy, distinct count, event correlation, and top-k protocols. Extensive evaluation shows that the protocols are suitable for near real-time data aggregation. For example, our top-k protocol PPTKS accurately aggregates counts for 180,000 distributed IP addresses in only a few minutes. Finally, we use SEPIA with real traffic data from 17 customers of a backbone network to collaboratively detect, analyze, and mitigate distributed anomalies. Our work follows a path starting from theory, going to system design, performance evaluation, and ending with measurement. Along this way, it makes a first effort to bridge two very disparate worlds: MPC theory and network monitoring and security practices.
[ "security", "secure multiparty computation", "aggregation", "design", "measurement", "algorithms", "experimentation", "applied cryptography", "collaborative network security", "anomaly detection", "network management", "root-cause analysis" ]
[ "P", "P", "P", "P", "P", "U", "U", "U", "R", "R", "M", "U" ]
4mXnkoS
Better GP benchmarks: community survey results and proposals
We present the results of a community survey regarding genetic programming benchmark practices. Analysis shows broad consensus that improvement is needed in problem selection and experimental rigor. While views expressed in the survey dissuade us from proposing a large-scale benchmark suite, we find community support for creating a blacklist of problems which are in common use but have important flaws, and whose use should therefore be discouraged. We propose a set of possible replacement problems.
[ "benchmarks", "community survey", "genetic programming" ]
[ "P", "P", "P" ]
-X3&skm
A Tabular Steganography Scheme for Graphical Password Authentication
Authentication, authorization and auditing are the most important issues of security in data communication. In particular, authentication is an essential part of every individual's daily life. The security of user authentication depends on the strength of the user's password. A secure password is usually random, strange, very long and difficult to remember; for most users, remembering such irregular passwords is very difficult. Ease of memorization and security are two sides of one coin. In this paper, we propose a new graphical password authentication protocol to solve this problem. Graphical password authentication replaces typing characters with clicking on images. The graphical user interface can help users easily create and remember secure passwords. However, although an image-based graphical password system can provide an alternative to text passwords, storing many images creates a large database storage issue. In our scheme, all of this information can be hidden using steganography, which solves the database storage problem. Furthermore, the tabular steganography technique solves the problem of information eavesdropping during data transmission. Our modified graphical password system helps users memorize their passwords easily without any loss of authentication security. The user's chosen input is hidden in an image using steganography and transferred to the server securely. The authentication server then needs to store only a secret key for decryption instead of a large password database.
[ "graphical password authentication", "security", "protocol", "teganography" ]
[ "P", "P", "P", "U" ]
1N8YCgN
A framework for optimal correction of inconsistent linear constraints
The problem of inconsistency between constraints often arises in practice as the result, among others, of the complexity of real models or due to unrealistic requirements and preferences. To overcome such inconsistency two major actions may be taken: removal of constraints or changes in the coefficients of the model. This last approach, which can be generically described as "model correction", is the problem we address in this paper in the context of linear constraints over the reals. The correction of the right hand side alone, which is very close to a fuzzy constraints approach, was one of the first proposals to deal with inconsistency, as it may be mapped into a linear problem. The correction of both the matrix of coefficients and the right hand side introduces non linearity in the constraints. The degree of difficulty in solving the problem of the optimal correction depends on the objective function, whose purpose is to measure the closeness between the original and corrected model. Contrary to other norms, which provide corrections with quite rigid patterns, the optimization of the important Frobenius norm was still an open problem. We have analyzed the problem using the KKT conditions and derived necessary and sufficient conditions which enabled us to unequivocally characterize local optima, in terms of the solution of the Total Least Squares and the set of active constraints. These conditions justify a set of pruning rules, which proved, in preliminary experimental results, quite successful in a tree search procedure for determining the global minimizer.
[ "optimal correction", "linear constraints", "infeasibility", "flexible constraints" ]
[ "P", "P", "U", "M" ]
3ziBVqd
User interface evaluation and empirically-based evolution of a prototype experience management tool
Experience management refers to the capture, structuring, analysis, synthesis, and reuse of an organization's experience in the form of documents, plans, templates, processes, data, etc. The problem of managing experience effectively is not unique to software development, but the field of software engineering has had a high-level approach to this problem for some time. The Experience Factory is an organizational infrastructure whose goal is to produce, store, and reuse experiences gained in a software development organization [6], [7], [8]. This paper describes The Q-Labs Experience Management System (Q-Labs EMS), which is based on the Experience Factory concept and was developed for use in a multinational software engineering consultancy [31]. A critical aspect of the Q-Labs EMS project is its emphasis on empirical evaluation as a major driver of its development and evolution. The initial prototype requirements were grounded in the organizational needs and vision of Q-Labs, as were the goals and evaluation criteria later used to evaluate the prototype. However, the Q-Labs EMS architecture, data model, and user interface were designed to evolve, based on evolving user needs. This paper describes this approach, including the evaluation that was conducted of the initial prototype and its implications for the further development of systems to support software experience management.
[ "user interface evaluation", "experience management", "knowledge management", "experience reuse", "empirical study" ]
[ "P", "P", "M", "R", "M" ]
2RHDAky
Energy-aware performance analysis methodologies for HPC architectures: An exploratory study
Performance analysis is a crucial step in HPC architectures including clouds. Traditional performance analysis methodologies were proposed, implemented, and enacted with the objective of identifying bottlenecks or issues related to memory, programming languages, hardware, and virtualization aspects. However, the need for energy efficient architectures in highly scalable computing environments, such as Grid or Cloud, has widened the research thrust on developing performance analysis methodologies that analyze the energy inefficiency of HPC applications or their associated hardware. This paper surveys the performance analysis methodologies that investigate the available energy monitoring and energy awareness mechanisms for HPC architectures. In addition, the paper validates the existing tools in terms of overhead, portability, and user-friendliness by conducting experiments at the HPCCLoud Research Laboratory on our premises. This research work will help HPC application developers select an apt monitoring mechanism and HPC tool developers augment the required energy monitoring mechanisms so that they fit well with their basic monitoring infrastructures.
[ "performance analysis", "hpc", "energy monitoring", "tools" ]
[ "P", "P", "P", "P" ]
4gzJ7xY
locating the tightest link of a network path
The tightest link of a network path is the link where the end-to-end available bandwidth is limited. We propose a new probe technique, called Dual Rate Periodic Streams (DRPS), for finding the location of the tightest link. A DRPS probe is a periodic stream with two rates. Initially, it goes through the path at a comparatively high rate. When it arrives at a particular link, the probe shifts its rate to a lower level and keeps that rate. If proper rates are set for the probe, we can control whether or not the probe is congested by adjusting the shift time. When the point of rate shift is in front of the tightest link, the probe can go through the path without congestion; otherwise congestion occurs. Thus, we can find the location of the tightest link by congestion detection at the receiver.
[ "available bandwidth", "dual rate periodic streams (drps)", "network measurements", "tight link" ]
[ "P", "P", "M", "M" ]
45N5ZEc
Research methodology - Using online technology for secondary analysis of survey research data - "Act globally, think locally"
The purpose of this article is to discuss the impact that online technologies are having and will continue to have on the way secondary analysis of survey research is performed. The authors discuss the validity of secondary analysis of survey research studies and the effect that online technology has on such analyses. Before reviewing current online public opinion sources, the authors make the argument that online services are becoming increasingly important for secondary analysis. Finally, the authors present a model indicating where online services can go in the future given the technology that is available today. Ultimately, it is believed that the Internet is currently underexploited for its capacity to aid secondary analysis. The authors advocate making survey data more easily available online to all potential users. This entails varying the format and depth of data so that users find sources suitable to their needs. It also entails the use of desktop technology to store and analyze survey research data and making that technology, or the applications that are developed through that technology, available to other users via computer networks, primarily via the Internet.
[ "online", "secondary analysis", "survey research", "unix", "pdf", "cgi" ]
[ "P", "P", "P", "U", "U", "U" ]
49ysZc:
Free vibration analysis of multiple-stepped beams by using Adomian decomposition method
The Adomian decomposition method (ADM) is employed in this paper to investigate the free vibrations of Euler-Bernoulli beams with multiple cross-section steps. The proposed ADM method can be used to analyze the vibration of beams consisting of an arbitrary number of steps in a recursive way. The solution can be obtained by solving a set of algebraic equations with only three unknown parameters. Furthermore, the method can be extended to obtain an approximate solution to vibration problems of any type of non-uniform beams. Several numerical examples are presented and compared with results available in the literature. It is shown that the ADM offers an accurate and effective method of free vibration analysis of multiple-stepped beams with arbitrary boundary conditions. (C) 2011 Elsevier Ltd. All rights reserved.
[ "vibration analysis", "adomian decomposition method", "multiple stepped beam", "natural frequency", "mode shape" ]
[ "P", "P", "R", "U", "U" ]
3rB4SPC
Modelling and performance evaluation of mobile multimedia systems using QoS-GSPN
Quality of Service (QoS) measurement of multimedia applications is one of the most important issues for call handoff and call admission control in mobile networks. Based on the QoS measures, we propose a Generalized Stochastic Petri Net (GSPN) based model, called QoS-GSPN, which can express the real-time behavior of QoS measurement for mobile networks. QoS-GSPN performance analysis methodology includes the formal expression and performance analysis environment. It offers the promise of providing real-time behavior predictability for systems characterized by substantial stochastic behavior. With this methodology we model and analyze the call handoff and call admission control schemes in the different multimedia traffic environments of a mobile network. The results of simulation experiments are used to verify the optimal performance achievable for these schemes under the QoS constraints in the given setting of design parameters.
[ "multimedia system", "qos-gspn", "qos", "mobile system" ]
[ "P", "P", "P", "R" ]
2v6&7sb
watermarking of mpeg-2 video in compressed domain using vlc mapping
In this work we propose a new algorithm for fragile, high capacity yet file-size preserving watermarking of MPEG-2 streams. Watermarking is done entirely in the compressed domain, with no need for full or even partial decompression. The algorithm is based on a previously developed concept of VLC mapping for compressed domain watermarking. The entropy-coded segment of the video is first parsed out and then analyzed in pairs. It is recognized that there are VLC pairs that never appear together in any intra-coded block. The list of unused pairs is systematically generated by the intersection of "pair trees." One of the trees is generated from the main VLC table given in the ISO/IEC 13818-2:2000 standard. The other trees are dynamically generated for each intra-coded block. Forcing one VLC pair in a block to one of the unused ones generates a watermarked block. The change is made while keeping run/level changes to a minimum. At the decoder, the main pair tree is created offline using publicly available VLC tables. Through a secure key exchange, the indices to unused code pairs are communicated to the receiver. We show that the watermarked video is reasonably resistant to forgery attacks and remains secure against watermark detection attempts.
[ "mpeg-2", "compressed domain", "variable length code" ]
[ "P", "P", "M" ]
4UMooLc
Implementing monads for C plus plus template metaprograms
C++ template metaprogramming is used in various application areas, such as expression templates, static interface checking, active libraries, etc. Its recognized similarities to pure functional programming languages - like Haskell - make the adoption of advanced functional techniques possible. Such a technique is using monads, programming structures representing computations. Using them actions implementing domain logic can be chained together and decorated with custom code. C++ template metaprogramming could benefit from adopting monads in situations like advanced error propagation and parser construction. In this paper we present an approach for implementing monads in C++ template metaprograms. Based on this approach we have built a monadic framework for C++ template metaprogramming. As real world examples we present a generic error propagation solution for C++ template metaprograms and a technique for building compile-time parser generators. All solutions presented in this paper are implemented and available as an open source library. (C) 2013 Elsevier B.V. All rights reserved.
[ "monad", "c plus plus template metaprogram", "exception handling", "monoid", "typeclass" ]
[ "P", "P", "U", "U", "U" ]
3jAi2t3
Non-uniform data distribution for communication-efficient parallel clustering
Global communication requirements and load imbalance of some parallel data mining algorithms are the major obstacles to exploit the computational power of large-scale systems. This work investigates how non-uniform data distributions can be exploited to remove the global communication requirement and to reduce the communication cost in parallel data mining algorithms and, in particular, in the k-means algorithm for cluster analysis. In the straightforward parallel formulation of the k-means algorithm, data and computation loads are uniformly distributed over the processing nodes. This approach has excellent load balancing characteristics that may suggest it could scale up to large and extreme-scale parallel computing systems. However, at each iteration step the algorithm requires a global reduction operation which hinders the scalability of the approach. This work studies a different parallel formulation of the algorithm where the requirement of global communication is removed, while maintaining the same deterministic nature of the centralised algorithm. The proposed approach exploits a non-uniform data distribution which can be either found in real-world distributed applications or can be induced by means of multi-dimensional binary search trees. The approach can also be extended to accommodate an approximation error which allows a further reduction of the communication costs. The effectiveness of the exact and approximate methods has been tested in a parallel computing system with 64 processors and in simulations with 1024 processing elements.
[ "clustering", "parallel data mining", "k-means", "group communication", "extreme-scale computing" ]
[ "P", "P", "P", "M", "R" ]
XoDTEuc
Cost-effective control of air quality and greenhouse gases in Europe: Modeling and policy applications
Environmental policies in Europe have successfully eliminated the most visible and immediate harmful effects of air pollution in the last decades. However, there is ample and robust scientific evidence that even at present rates Europe's emissions to the atmosphere pose a significant threat to human health, ecosystems and the global climate, though in a less visible and immediate way. As many of the low hanging fruits have been harvested by now, further action will place higher demands on economic resources, especially at a time when resources are strained by an economic crisis. In addition, interactions and interdependencies of the various measures could even lead to counter-productive outcomes of strategies if they are ignored. Integrated assessment models, such as the GAINS (Greenhouse gas Air pollution Interactions and Synergies) model, have been developed to identify portfolios of measures that improve air quality and reduce greenhouse gas emissions at least cost. Such models bring together scientific knowledge and quality-controlled data on future socio-economic driving forces of emissions, on the technical and economic features of the available emission control options, on the chemical transformation and dispersion of pollutants in the atmosphere, and the resulting impacts on human health and the environment. The GAINS model and its predecessor have been used to inform the key negotiations on air pollution control agreements in Europe during the last two decades. This paper describes the methodological approach of the GAINS model and its components. It presents a recent policy analysis that explores the likely future development of emissions and air quality in Europe in the absence of further policy measures, and assesses the potential and costs for further environmental improvements. To inform the forthcoming negotiations on the revision of the Gothenburg Protocol of the Convention on Long-range Transboundary Air Pollution, the paper discusses the implications of alternative formulations of environmental policy targets on a cost-effective allocation of further mitigation measures.
[ "cost-effectiveness", "air pollution", "integrated assessment", "gains model", "convention on long-range transboundary air pollution", "sciencepolicy interface", "decision support" ]
[ "P", "P", "P", "P", "P", "U", "U" ]
3Eb3Yit
Costs assessments of European environmental policies
The evolution of energy production in the European Union (EU) has been going through a big change in recent years: the share of traditional fuels is gradually diminishing in favour of increasing renewable energy sources (RES), due to international concerns over climate change and for energy security reasons. The aim of this paper is to construct a simulation model that identifies and estimates the costs that may arise for a community of negotiating countries from the opportunistic behavior of some country when defining environmental policies. In this paper, the model is applied specifically to the new 2030 Framework for Climate and Energy Policies (COM(2014) 0015) (EC, 2014 [11]) on the promotion of RES that commits EU governments to a common goal to increase the share of RES in final consumption to 27% by 2030. Costs faced by EU countries to achieve the RES target are different due to their endowment heterogeneity, the availability of RES, the diffusion process of cost improvements and the different instruments to support the development of the RES technologies. Given the still undefined participation agreement to reach the new overall RES target by 2030, we want to assess the potential cost penalty induced by free riding behavior. This could stem from an EU country that avoids complying with the RES Directive. Our policy simulation exercise shows that costs increase more than proportionally with the non-participating country size, measured with GDP and CO2 emissions. Furthermore, we provide a model to analytically assess the likelihood each EU country may have to behave opportunistically within the negotiation process of the new proposal on EU RES targets (COM(2014) 0015).
[ "renewable energy", "simulation model", "opportunistic behavior", "cost function" ]
[ "P", "P", "P", "M" ]
-skuhqT
Design methodology for battery powered embedded systems - In safety critical application
A battery powered embedded system can be considered a power aware system for safety critical applications. There is a need to save battery power in such power aware systems so that they can be used more efficiently, particularly in safety critical applications. The present paper describes a power optimization procedure using a real time scheduling technique with a specific deadline, guided by the model based optimum current discharge profile of a battery. In any power aware system, 'energy optimization' is one of the major issues for faithful operation. (c) 2008 Elsevier B.V. All rights reserved.
[ "energy optimization", "task scheduling", "peukart's law", "power saving mode", "instruction based power optimization", "task mapping" ]
[ "P", "M", "U", "M", "M", "U" ]
bFSuUwv
Reliability measures for two-part partition of states for aggregated Markov repairable systems
Three models for the aggregated stochastic processes based on an underlying continuous-time Markov repairable system are developed in which two-part partition of states is used. Several availability measures such as interval availability, instantaneous availability and steady-state availability are presented. Some of these availabilities are derived by using Laplace transforms, which are more compact and concise. Other reliability-distributions for these three models are given as well.
[ "two-part partition", "aggregation", "repairable systems", "availability measures", "distributions" ]
[ "P", "P", "P", "P", "U" ]
2RCZo8F
an innovative architecture for context foraging
Nomadic computing is a term for describing computing environments where the nodes are mobile and have only ad hoc interactions with each other. Evidently, context aware applications are a key ingredient in such environments. However, nomadic nodes may not always have the capability to sense their environment and infer their exact context. Hence, applications carried by the nodes will not be able to execute properly. In this paper, we propose an architecture for collaborative exchange of contextual information in an ad hoc setting. This approach is called "context foraging" and is used for disseminating contextual information based on a publish/subscribe scheme. We present the algorithms required for such architecture along with the dynamic event indexing techniques used by the system. The efficiency of the suggested approach is assessed through simulation results. Our proposal is investigated and implemented in the context of the ICT IPAC Project.
[ "nomadic computing", "publish-subscribe", "collaborative sensing" ]
[ "P", "U", "R" ]
9QAu-CQ
On the estimation and correction of bias in local atrophy estimations using example atrophy simulations
Brain atrophy is considered an important marker of disease progression in many chronic neuro-degenerative diseases such as multiple sclerosis (MS). A great deal of attention is being paid toward developing tools that manipulate magnetic resonance (MR) images for obtaining an accurate estimate of atrophy. Nevertheless, artifacts in MR images, inaccuracies of intermediate steps and inadequacies of the mathematical model representing the physical brain volume change, make it rather difficult to obtain a precise and unbiased estimate. This work revolves around the nature and magnitude of bias in atrophy estimations as well as a potential way of correcting them. First, we demonstrate that for different atrophy estimation methods, bias estimates exhibit varying relations to the expected atrophy and these bias estimates are of the order of the expected atrophies for standard algorithms, stressing the need for bias correction procedures. Next, a framework for estimating uncertainty in longitudinal brain atrophy by means of constructing confidence intervals is developed. Errors arising from MRI artifacts and bias in estimations are learned from example atrophy simulations and anatomies. Results are discussed for three popular non-rigid registration approaches with the help of simulated localized brain atrophy in real MR images.
[ "uncertainty", "confidence intervals", "mri", "non-rigid registration", "brain atrophy estimation" ]
[ "P", "P", "P", "P", "R" ]
-higinU
Information technologies and intuitive expertise: a method for implementing complex organizational change among New York City Transit Authority's Bus Maintainers
This paper describes an attempt to implement a complex information technology system with the New York City Transit Authority's (NYCTA) Bus Maintainers, intended to help better track and coordinate bus maintenance schedules. IT implementation is notorious for high failure rates among so-called low level workers. We believe that many IT implementation efforts make erroneous assumptions about front line workers' expertise, which creates tension between the IT implementation effort and the cultures of practice among the front line workers. We designed an aggressive learning intervention, called Operational Simulation, to address this issue. Rather than requiring the expected 12 months for implementation, the hourly staff reached independence with the new system in 2 weeks and line supervisors (who do more) managed in 6 weeks. Additionally, the NYCTA shifted from a reactive to a proactive maintenance approach, reduced cycle times, and increased the mean distance between failures, resulting in an estimated $40 million cost savings. Implications for cognition, expertise, and training are discussed.
[ "information technology", "intuitive expertise", "organizational change", "simulation-based training" ]
[ "P", "P", "P", "M" ]
4F9VYPc
Mirrored disk organization reliability analysis
Disk mirroring or RAID level 1 (RAID1) is a popular paradigm to achieve fault tolerance and a higher disk access bandwidth for read requests. We consider four RAID1 organizations: basic mirroring, group rotate declustering, interleaved declustering, and chained declustering, where the last three organizations attain a more balanced load than basic mirroring when disk failures occur. We first obtain the number of configurations, A(n, i), which do not result in data loss when i out of n disks have failed. The probability of no data loss in this case is A(n, i)/C(n, i), where C(n, i) is the number of ways of choosing i failed disks out of n. The reliability of each RAID1 organization is the summation over 1 <= i <= n/2 of A(n, i) r^(n-i) (1 - r)^i, where r denotes the reliability of each disk. A closed-form expression for A(n, i) is obtained easily for the first three organizations. We present a relatively simple derivation of the expression for A(n, i) for the chained declustering method, which includes a correctness proof. We also discuss the routing of read requests to balance disk loads, especially when there are disk failures, to maximize the attainable throughput.
[ "disk mirroring", "raid level 1", "group rotate declustering", "interleaved declustering", "chained declustering", "reliability modeling" ]
[ "P", "P", "P", "P", "P", "M" ]
1oUzdN2
Data processing in the early cosmic ray experiments in Sydney
The cosmic ray air shower experiment set up at the University of Sydney in the late 1950s was one of the first complex experiments in Australia to utilize the power of an electronic computer to process and analyse the experimental data. The paper provides a brief overview of the design and construction of the equipment for the experiment and the use of the computer SILLIAC in the processing and analysis of the data. The central role of Chris Wallace in this latter aspect is given special attention.
[ "data processing", "cosmic ray air showers" ]
[ "P", "P" ]
4rc5:Lh
The impact of metadata in web resources discovering
Purpose - To explore the impact of using metadata in finding and ranking web pages through search engines (as of 15 December 2005). Design/methodology/approach - The study has been divided into two phases. In phase one, the use of metadata schemes and the impact of overlapped documents have been examined by employing the usability technique. Phase two examined the impact of adding metadata elements to web pages on their original rank order, using the experimental method. This study focuses on indexing web pages using metadata and its impact on search engine rankings. Findings - Meta tags are more widely used than Dublin Core. The overlapped pages tend to include metadata. The second phase shows that adding metadata elements to web pages raises their rank order. However, this depends on the quality of the description and the metadata schemes. The study shows no great difference in page ranking between adding meta tags and Dublin Core. Practical implications - To maximize the impact of metadata, more attention should be given to keyword and descriptive fields. Originality/value - The hypothetical relationship between overlapped pages and the inclusion of metadata and indexing by search engines had not been previously examined.
[ "search engines", "indexing", "optimization techniques", "hypertext markup language" ]
[ "P", "P", "M", "U" ]
2Uqg3qj
A new density-stiffness interpolation scheme for topology optimization of continuum structures
In this paper, a new density-stiffness interpolation scheme for topology optimization of continuum structures is proposed. Based on this new scheme, not only can the so-called checkerboard pattern be eliminated from the final optimal topology, but the boundary-smooth effect associated with the traditional sensitivity averaging approach can also be overcome. A proof of the existence of the solution of the optimization problem is also given; therefore, mesh-independent optimization results can be obtained. Numerical examples illustrate the effectiveness and the advantage of the proposed interpolation scheme.
[ "topology", "meshes", "optimization techniques", "filtration" ]
[ "P", "P", "M", "U" ]
4BwqT-H
PETs and their users: a critical review of the potentials and limitations of the privacy as confidentiality paradigm
Privacy as confidentiality has been the dominant paradigm in computer science privacy research. Privacy Enhancing Technologies (PETs) that guarantee confidentiality of personal data or anonymous communication have resulted from such research. The objective of this paper is to show that such PETs are indispensable but are short of being the privacy solutions they sometimes claim to be given current day circumstances. Using perspectives from surveillance studies we will argue that the computer scientists' conception of privacy through data or communication confidentiality is techno-centric and displaces end-user perspectives and needs in surveillance societies. We will further show that the perspectives from surveillance studies also demand a critical review for their human-centric conception of information systems. Last, we rethink the position of PETs in a surveillance society and argue for the necessity of multiple paradigms for addressing privacy concerns in information systems design.
[ "pets", "privacy", "confidentiality", "surveillance studies" ]
[ "P", "P", "P", "P" ]
292QcPw
An approach to automated decomposition of volumetric mesh
Mesh decomposition is critical for analyzing, understanding, editing and reusing mesh models. Although there are many methods for mesh decomposition, most utilize only triangular meshes. In this paper, we present an automated method for decomposing a volumetric mesh into semantic components. Our method consists of three parts. First, the outer surface mesh of the volumetric mesh is decomposed into semantic features by applying existing surface mesh segmentation and feature recognition techniques. Then, for each recognized feature, its outer boundary lines are identified, and the corresponding splitter element groups are set up accordingly. The inner volumetric elements of the feature are then obtained based on the established splitter element groups. Finally, each splitter element group is decomposed into two parts using the graph cut algorithm; each part completely belongs to one feature adjacent to the splitter element group. In our graph cut algorithm, the weights of the edges in the dual graph are calculated based on the electric field, which is generated using the vertices of the boundary lines of the features. Experiments on both tetrahedral and hexahedral meshes demonstrate the effectiveness of our method.
[ "volumetric mesh", "mesh decomposition", "hexahedral mesh", "tetrahedral mesh", "electric flux" ]
[ "P", "P", "P", "R", "M" ]
SxDjh9H
Algorithms for storytelling
We formulate a new data mining problem called storytelling as a generalization of redescription mining. In traditional redescription mining, we are given a set of objects and a collection of subsets defined over these objects. The goal is to view the set system as a vocabulary and identify two expressions in this vocabulary that induce the same set of objects. Storytelling, on the other hand, aims to explicitly relate object sets that are disjoint (and, hence, maximally dissimilar) by finding a chain of (approximate) redescriptions between the sets. This problem finds applications in bioinformatics, for instance, where the biologist is trying to relate a set of genes expressed in one experiment to another set, implicated in a different pathway. We outline an efficient storytelling implementation that embeds the CARTwheels redescription mining algorithm in an A* search procedure, using the former to supply next move operators on search branches to the latter. This approach is practical and effective for mining large data sets and, at the same time, exploits the structure of partitions imposed by the given vocabulary. Three application case studies are presented: a study of word overlaps in large English dictionaries, exploring connections between gene sets in a bioinformatics data set, and relating publications in the PubMed index of abstracts.
[ "data mining", "mining methods and algorithms", "retrieval models", "graph and tree search strategies" ]
[ "P", "M", "U", "M" ]
-8X&WP&
Bifurcation study of a neural field competition model with an application to perceptual switching in motion integration
Perceptual multistability is a phenomenon in which alternate interpretations of a fixed stimulus are perceived intermittently. Although correlates between activity in specific cortical areas and perception have been found, the complex patterns of activity and the underlying mechanisms that gate multistable perception are little understood. Here, we present a neural field competition model in which competing states are represented in a continuous feature space. Bifurcation analysis is used to describe the different types of complex spatio-temporal dynamics produced by the model in terms of several parameters and for different inputs. The dynamics of the model was then compared to human perception investigated psychophysically during long presentations of an ambiguous, multistable motion pattern known as the barberpole illusion. In order to do this, the model is operated in a parameter range where known physiological response properties are reproduced whilst also working close to bifurcation. The model accounts for characteristic behaviour from the psychophysical experiments in terms of the type of switching observed and changes in the rate of switching with respect to contrast. In this way, the modelling study sheds light on the underlying mechanisms that drive perceptual switching in different contrast regimes. The general approach presented is applicable to a broad range of perceptual competition problems in which spatial interactions play a role.
[ "bifurcation", "neural fields", "competition", "perception", "motion", "multistability" ]
[ "P", "P", "P", "P", "P", "P" ]
hCcjjka
Integration of fuzzy spatial relations in deformable models - Application to brain MRI segmentation
This paper presents a general framework for integrating a new type of constraint, based on spatial relations, into deformable models. In the proposed approach, spatial relations are represented as fuzzy subsets of the image space and incorporated into the deformable model as a new external force. Three methods to construct an external force from a fuzzy set representing a spatial relation are introduced and discussed. This framework is then used to segment brain subcortical structures in magnetic resonance images (MRI). A training step is proposed to estimate the main parameters defining the relations. The results demonstrate that the introduction of spatial relations in a deformable model can substantially improve the segmentation of structures with low contrast and ill-defined boundaries. (c) 2006 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
[ "spatial relations", "deformable models", "mri", "fuzzy sets", "subcortical structures" ]
[ "P", "P", "P", "P", "P" ]
gPDxuuL
An efficient animation of wrinkled cloth with approximate implicit integration
This paper presents an efficient method for creating animations of flexible objects. The mass-spring model is used to represent flexible objects. The easiest approach to creating animation with the mass-spring model is the explicit Euler method, but it suffers from a serious instability problem. The implicit integration method is a possible solution, but its critical flaw is that it involves solving a large linear system. This paper presents an approximate implicit method for the mass-spring model. The proposed technique stably updates the state of n mass points in O(n) time when the total number of springs is O(n). In order to increase the efficiency of the simulation or reduce the numerical errors of the proposed approximate implicit method, the number of mass points must be as small as possible. However, coarse discretization with a small number of mass points produces an unrealistic appearance for a cloth model. By introducing a wrinkled cubic spline curve, we propose a new technique that generates realistic details of the cloth model, even though only a small number of mass points are used for the simulation.
[ "implicit method", "realistic detail", "cloth animation", "mass spring model", "wrinkled curve" ]
[ "P", "P", "R", "R", "R" ]
1eB1TJb
Does computer confidence relate to levels of achievement in ICT-enriched learning models?
Employer expectations have changed: university students are expected to graduate with computer competencies appropriate for their field. Educators are also harnessing technology as a medium for learning, in the belief that information and communication technologies (ICTs) can enliven and motivate learning across a wide range of disciplines. Alongside developing students' computer skills and introducing them to the use of professional software, educators are also harnessing professional and scientific packages for learning in some disciplines. As the educational use of information and communication technologies increases dramatically, questions arise about the effects on learners. While the use of computers for delivery, support, and communication is generally easy and unthreatening, higher-level use may pose a barrier to learning for those who lack confidence or experience. Computer confidence may mediate how well students perform in learning environments that require interaction with computers. This paper examines the role played by computer confidence (or computer self-efficacy) in a technology-enriched science and engineering mathematics course in an Australian university. Findings revealed that careful and appropriate use of professional software did indeed enliven learning for the majority of students. However, computer confidence occupied a very different dimension from mathematics confidence and was not a predictor of achievement in the mathematics tasks, not even those requiring use of technology. Moreover, despite careful and nurturing support for use of the software, students with low computer confidence levels felt threatened and disadvantaged by computer laboratory tasks. The educational implications of these findings are discussed with regard to teaching and assessment in particular. The TCAT scales used to measure technology attitudes, computer confidence/self-efficacy and mathematics confidence are included in an Appendix. Well established, reliable and internally consistent, they may be useful to other researchers. The development of the computer confidence scale is outlined, and guidelines are offered for the design of other discipline-specific confidence/self-efficacy scales appropriate for use alongside the computer confidence scale.
[ "achievement", "learning", "scales", "computer attitudes" ]
[ "P", "P", "P", "R" ]
:AZkFJZ
DoS protection for UDP-based protocols
Since IP packet reassembly requires resources, a denial of service attack can be mounted by swamping a receiver with IP fragments. In this paper we argue that this attack need not affect protocols that do not rely on IP fragmentation, and that most protocols, e.g., those that run on top of TCP, can avoid the need for fragmentation. However, protocols such as IPsec's IKE protocol, which both runs on top of UDP and requires sending large packets, depend on IP packet reassembly. Photuris, an early proposal for IKE, introduced the concept of a stateless cookie, intended for DoS protection. However, the stateless cookie mechanism cannot protect against a DoS attack unless the receiver can successfully receive the cookie, which it will not be able to do if reassembly resources are exhausted. Thus, without additional design and/or implementation defenses, an attacker can, through a fragmentation attack, prevent legitimate IKE handshakes from completing. Defending against this attack requires measures in both protocol design and implementation. The IKEv2 protocol was designed to make it easy to build a defensive implementation. This paper explains the defense strategy designed into the IKEv2 protocol, along with the additional implementation mechanisms needed. It also describes and contrasts several other potential strategies that could work for similar UDP-based protocols.
[ "dos", "denial of service", "fragmentation", "ipsec", "ike", "protocol design", "network security", "buffer exhaustion" ]
[ "P", "P", "P", "P", "P", "P", "U", "M" ]
2EG-7X4
On the parallel efficiency and scalability of the correntropy coefficient for image analysis
Similarity measures have applications in many scenarios of digital image processing. Correntropy is a robust and relatively new similarity measure that has recently been employed in various engineering applications. Despite its other competitive characteristics, its computational cost is relatively high and may impose prohibitive time restrictions on high-dimensional applications, including image analysis and computer vision.
[ "parallel efficiency", "correntropy", "similarity measures", "multi-core architecture", "parallel scalability" ]
[ "P", "P", "P", "U", "R" ]
4o-ByxM
Positive solution to a special singular second-order boundary value problem
Let $\lambda$ be a nonnegative parameter. The existence of a positive solution is studied for a semipositone second-order boundary value problem $u''(t) = \lambda q(t) f(t, u(t), u'(t))$, $\alpha u(0) - \beta u'(0) = d$, $u(1) = 0$, where $d > 0$, $\alpha \geq 0$, $\beta \geq 0$, $\alpha + \beta > 0$, $q(t) f(t, u, v) \geq 0$ on a suitable subset of $[0, 1] \times [0, +\infty) \times (-\infty, +\infty)$, and $f(t, u, v)$ is allowed to be singular at $t = 0$, $t = 1$ and $u = 0$. The proofs are based on the Leray-Schauder fixed point theorem and the localization method. (c) 2008 Published by Elsevier Ltd.
[ "positive solution", "existence", "ordinary differential equation", "singular boundary value problem" ]
[ "P", "P", "U", "R" ]
QoZrBbk
Report of research activities in fuzzy AI and medicine at USFCSE
Several projects involving the use of fuzzy and neuro-fuzzy methods in medical applications, developed by members of the Department of Computer Science and Engineering, University of South Florida, Tampa, Florida, are briefly reviewed. The successful applications are emphasized. (C) 2001 Elsevier Science B.V. All rights reserved.
[ "neuro-fuzzy system", "sudden infant death syndrome", "fuzzy logic" ]
[ "M", "U", "M" ]
n7HrWWV
Societally connected multimedia across cultures
The advance of the Internet in the past decade has radically changed the way people communicate and collaborate with each other. Physical distance is no longer a barrier in online social networks, but cultural differences (at the individual, community, and societal levels) still govern human-human interactions and must be considered and leveraged in the online world. The rapid deployment of high-speed Internet allows humans to interact using a rich set of multimedia data such as texts, pictures, and videos. This position paper proposes to define a new research area called 'connected multimedia': the study of a collection of research issues within the broader area of social media that have received little attention in the literature. By connected multimedia, we mean the study of the social and technical interactions among users, multimedia data, and devices across cultures, explicitly exploiting cultural differences. We justify why it is necessary to bring attention to this new research area and what benefits it may bring to the broader scientific research community and to humanity.
[ "connected multimedia", "social media", "social-cultural constraint" ]
[ "P", "P", "U" ]
4rAMaYY
Multiple object retrieval in image databases using hierarchical segmentation tree
With the rapid growth of information, efficient and robust information retrieval techniques have become increasingly important. Multiple object retrieval remains challenging due to the complex nature of this problem. Unlike most existing works, which are designed for single object retrieval or adopt a heuristic multiple object matching scheme, the proposed research aims to contribute to this field through the development of an image retrieval system that adopts a hierarchical region-tree representation of the image and enables effective and efficient multiple object retrieval, as well as automatic discovery of the objects of interest through users' relevance feedback. We believe this is the first systematic attempt to formulate a comprehensive, intelligent, and interactive framework for multiple object retrieval in image databases that makes use of a hierarchical region-tree representation.
[ "hierarchical region-tree", "multi-object retrieval", "content-based image retrieval", "multi-resolution image segmentation" ]
[ "P", "M", "M", "M" ]
2YqT9EA
Delay-dependent stability analysis for impulsive neural networks with time varying delays
In this paper, the global exponential stability and global asymptotic stability of neural networks with impulsive effects and time-varying delays are investigated. By using a Lyapunov-Krasovskii-type functional, the property of the negative definite matrix, and the Cauchy criterion, we obtain sufficient conditions for the global exponential stability and global asymptotic stability of such models in terms of a linear matrix inequality (LMI); these conditions depend on the delays. Two examples are given to illustrate the effectiveness of our theoretical results.
[ "delays", "impulsive neural networks", "global exponential stability", "global asymptotic stability", "negative definite matrix", "linear matrix inequality (lmi)" ]
[ "P", "P", "P", "P", "P", "P" ]
-sHwwxY
OLAP over uncertain and imprecise data
We extend the OLAP data model to represent data ambiguity, specifically imprecision and uncertainty, and introduce an allocation-based approach to the semantics of aggregation queries over such data. We identify three natural query properties and use them to shed light on alternative query semantics. While there is much work on representing and querying ambiguous data, to our knowledge this is the first paper to handle both imprecision and uncertainty in an OLAP setting.
[ "imprecision", "ambiguous", "uncertainty", "aggregation" ]
[ "P", "P", "P", "P" ]