id: string (length 7)
title: string (length 3 to 578)
abstract: string (length 0 to 16.7k)
keyphrases: sequence
prmu: sequence
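Each record below follows the schema above. As a minimal sketch of that structure (the dataclass is illustrative, and the assumption that each prmu label aligns one-to-one with the keyphrase at the same position is inferred from the records themselves), a row can be represented as:

from dataclasses import dataclass
from typing import List

@dataclass
class Record:
    id: str                  # 7-character record identifier
    title: str               # paper title
    abstract: str            # paper abstract
    keyphrases: List[str]    # gold keyphrases
    prmu: List[str]          # one P/R/M/U label per keyphrase

sample = Record(
    id="44kAPSp",
    title="the effects of interaction frequency on the optimization performance of cooperative coevolution",
    abstract="Cooperative coevolution is often used to solve difficult optimization problems ...",
    keyphrases=["performance", "cooperative coevolution", "dynamics"],
    prmu=["P", "P", "P"],
)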
44kAPSp
the effects of interaction frequency on the optimization performance of cooperative coevolution
Cooperative coevolution is often used to solve difficult optimization problems by means of problem decomposition. Its performance on this task is influenced by many design decisions. It would be useful to have some knowledge of the performance effects of these decisions, in order to make the more beneficial ones. In this paper we study the effects on performance of the frequency of interaction between populations. We show them to be problem-dependent and use dynamics analysis to explain this dependency.
[ "performance", "cooperative coevolution", "dynamics" ]
[ "P", "P", "P" ]
45ikM58
Brain-inspired method for solving fuzzy multi-criteria decision making problems (BIFMCDM)
We propose a brain-inspired method for solving fuzzy decision making problems. We study a website ranking problem for an e-alliance. Processing fuzzy information as just an abstract element could lead to wrong decisions.
[ "brain informatics", "fuzzy sets theory", "multi criteria decision making", "simulation", "web intelligence", "world wide wisdom web (w4)" ]
[ "U", "M", "M", "U", "U", "M" ]
4nvyiKG
Environmental model access and interoperability: The GEO Model Web initiative
The Group on Earth Observation (GEO) Model Web initiative utilizes a Model as a Service approach to increase model access and sharing. It relies on gradual, organic growth leading towards dynamic webs of interacting models, analogous to the World Wide Web. The long term vision is for a consultative infrastructure that can help address "what if" and other questions that decision makers and other users have. Four basic principles underlie the Model Web: open access, minimal barriers to entry, service-driven, and scalability; any implementation approach meeting these principles will be a step towards the long term vision. Implementing a Model Web encounters a number of technical challenges, including information modelling, minimizing interoperability agreements, performance, and long term access, each of which has its own implications. For example, a clear information model is essential for accommodating the different resources published in the Model Web (model engines, model services, etc.), and a flexible architecture, capable of integrating different existing distributed computing infrastructures, is required to address the performance requirements. Architectural solutions, in keeping with the Model Web principles, exist for each of these technical challenges. There are also a variety of other key challenges, including difficulties in making models interoperable; calibration and validation; and social, cultural, and institutional constraints. Although the long term vision of a consultative infrastructure is clearly an ambitious goal, even small steps towards that vision provide immediate benefits. A variety of activities are now in progress that are beginning to take those steps. (C) 2012 Elsevier Ltd. All rights reserved.
[ "environmental modelling", "interoperability", "model web", "composition as a service (caas)", "model as a service (maas)", "geoss" ]
[ "P", "P", "P", "M", "M", "U" ]
r3BEJqe
non-uniform micro-channel design for stacked 3d-ics
Micro-channel cooling shows great potential in removing high density heat in 3D circuits. The current micro-channel heat sink designs spread the entire surface to be cooled with micro-channels. This approach, though it might provide sufficient cooling, requires quite high pumping power. In this paper, we investigate the non-uniform allocation of micro-channels to provide sufficient cooling with less pumping power. Specifically, we decide the count, location and pumping pressure drop/flow rate of micro-channels such that acceptable cooling is achieved at minimum pumping power. The thermal wake effect and runtime pressure drop/flow rate control are also considered. The experiments showed that, compared with the conventional design which spreads micro-channels all over the chip, our non-uniform micro-channel design achieves 55-60% pumping power saving.
[ "micro-channel", "3d-ic", "power", "liquid cooling" ]
[ "P", "P", "P", "M" ]
aGKMUjC
Neurocomputing techniques to dynamically forecast spatiotemporal air pollution data
Real-time monitoring, forecasting and modeling of air pollutant concentrations in major urban centers is one of the top priorities of all local and national authorities globally. This paper studies and analyzes the parameters related to the problem, aiming at the design and development of an effective machine learning model and its corresponding system, capable of forecasting dangerous levels of ozone (O3) concentrations in the city center of Athens and more specifically at the Athinas air quality monitoring station. This is a multi-parametric case, so an effort has been made to combine a vast number of data vectors from several operational nearby measurement stations. The final result was the design and construction of a group of artificial neural networks capable of estimating O3 concentrations in real time and also having the capacity of forecasting the same values for future time intervals of 1, 2, 3 and 6 h, respectively.
[ "machine learning", "artificial neural networks", "multi parametric ann", "pollution of the atmosphere", "ozone estimation and forecasting" ]
[ "P", "P", "M", "M", "R" ]
-thfeBi
Revisiting rational bubbles in the G-7 stock markets using the Fourier unit root test and the nonparametric rank test for cointegration
This paper re-investigates whether rational bubbles existed in the G-7 stock markets during the period of January 2000-June 2009 using the newly developed Fourier unit root test and a nonparametric rank test for cointegration. The empirical results from our Fourier unit root test indicate that the null hypothesis of an I(1) unit root in stock prices can be rejected for Canada, France, Italy and the UK. However, the empirical results from the rank test reveal that rational bubbles did not exist in the G-7 stock markets during the sample period. (C) 2011 IMACS. Published by Elsevier B.V. All rights reserved.
[ "rational bubbles", "fourier unit root test", "rank test for nonlinear cointegration", "g7 stock markets" ]
[ "P", "P", "M", "M" ]
2kgvzGi
Orchestrating Stream Graphs Using Model Checking
In this article we use model checking to statically distribute and schedule Synchronous DataFlow (SDF) graphs on heterogeneous execution architectures. We show that model checking is capable of providing an optimal solution and that it arrives at these solutions faster (in terms of algorithm runtime) than equivalent ILP formulations. Furthermore, we also show how different types of optimizations such as task parallelism, data parallelism, and state sharing can be included within our framework. Finally, a comparison of our approach with the current state-of-the-art heuristic techniques shows the pitfalls of these techniques and gives a glimpse of how these heuristic techniques can be improved.
[ "streaming", "dataflow", "parallelization", "languages", "performance", "compiler" ]
[ "P", "P", "P", "U", "U", "U" ]
577Xnfn
Model-averaged Wald confidence intervals
The process of model averaging has become increasingly popular as a method for performing inference in the presence of model uncertainty. In the frequentist setting, a model-averaged estimate of a parameter is calculated as the weighted sum of single-model estimates, often using weights derived from an information criterion such as AIC or BIC. A standard method for calculating a model-averaged confidence interval is to use a Wald interval centered around the model-averaged estimate. We propose a new method for construction of a model-averaged Wald confidence interval, based on the idea of model averaging tail areas of the sampling distributions of the single-model estimates. We use simulation to compare the performance of the new method and existing methods, in terms of coverage rate and interval width. The new method consistently outperforms existing methods in terms of coverage, often for little increase in the interval width. We also consider choice of model weights, and find that AIC weights are preferable to either AICc or BIC weights in terms of coverage.
[ "model averaging", "model uncertainty", "wald interval", "coverage rate", "model weight" ]
[ "P", "P", "P", "P", "P" ]
34zZpgU
Dynamics of the difference equation x_{n+1} = (x_n + p x_{n-k}) / (x_n + q)
We study the invariant interval, the character of semicycles, the global stability, and the boundedness of the difference equation x_{n+1} = (x_n + p x_{n-k}) / (x_n + q).
[ "invariant interval", "boundedness", "local asymptotic stability", "semicycle behavior", "global asymptotic stability" ]
[ "P", "P", "M", "M", "M" ]
4y9byo8
Cross-validation based single response adaptive design of experiments for Kriging metamodeling of deterministic computer simulations
A new approach for single response adaptive design of deterministic computer experiments is presented. The approach is called SFCVT, for Space-Filling Cross-Validation Tradeoff. SFCVT uses metamodeling to obtain an estimate of cross-validation errors, which are maximized subject to a constraint on space filling to determine sample points in the design space. The proposed method is compared, using a test suite of forty four numerical examples, with three DOE methods from the literature. The numerical test examples can be classified into symmetric and asymmetric functions. Symmetric examples refer to functions for which the extreme points are located symmetrically in the design space and asymmetric examples are those for which the extreme regions are not located in a symmetric fashion in the design space. Based upon the comparison results for the numerical examples, it is shown that SFCVT performs better than an existing adaptive and a non-adaptive DOE method for asymmetric multimodal functions with high nonlinearity near the boundary, and is comparable for symmetric multimodal functions and other test problems. The proposed approach is integrated with a multi-scale heat exchanger optimization tool to reduce the computational effort involved in the design of novel air-to-water heat exchangers. The resulting designs are shown to be significantly more compact than mainstream heat exchanger designs.
[ "design of experiments", "kriging metamodeling", "heat exchanger design", "design optimization" ]
[ "P", "P", "P", "R" ]
1y9EoKe
Bottlenecks and Hubs in Inferred Networks Are Important for Virulence in Salmonella typhimurium
Recent advances in experimental methods have provided sufficient data to consider systems as large networks of interconnected components. High-throughput determination of protein-protein interaction networks has led to the observation that topological bottlenecks, proteins defined by high centrality in the network, are enriched in proteins with systems-level phenotypes such as essentiality. Global transcriptional profiling by microarray analysis has been used extensively to characterize systems, for example, examining cellular response to environmental conditions and effects of genetic mutations. These transcriptomic datasets have been used to infer regulatory and functional relationship networks based on co-regulation. We use the context likelihood of relatedness (CLR) method to infer networks from two datasets gathered from the pathogen Salmonella typhimurium: one under a range of environmental culture conditions and the other from deletions of 15 regulators found to be essential in virulence. Bottleneck and hub genes were identified from these inferred networks, and we show for the first time that these genes are significantly more likely to be essential for virulence than their non-bottleneck or non-hub counterparts. Networks generated using simple similarity metrics (correlation and mutual information) did not display this behavior. Overall, this study demonstrates that topology of networks inferred from global transcriptional profiles provides information about the systems-level roles of bottleneck genes. Analysis of the differences between the two CLR-derived networks suggests that the bottleneck nodes are either mediators of transitions between system states or sentinels that reflect the dynamics of these transitions.
[ "bottlenecks", "virulence", "salmonella typhimurium", "network inference" ]
[ "P", "P", "P", "P" ]
7b1w9z2
Semantic manipulation of users queries and modeling the health and nutrition preferences
People depend on popular search engines to look for the desired health and nutrition information. Many search engines cannot semantically interpret and enrich users' natural language queries easily and hence do not retrieve the personalized information that fits the users' needs. One reason for retrieving irrelevant information is the fact that people have different preferences, where each one likes and dislikes certain types of food. In addition, some people have specific health conditions that restrict their food choices and encourage them to take other foods. Moreover, the cultures where people live influence food choices, while the search engines are not aware of these cultural habits. Therefore, it will be helpful to develop a system that semantically manipulates users' queries and models the users' preferences to retrieve personalized health and food information. In this paper, we harness semantic Web technology to capture users' preferences, construct a nutritional and health-oriented user profile, model the users' preferences and use them to organize the related knowledge so that users can retrieve personalized health and food information. We present an approach that uses personalization techniques based on integrated domain ontologies, pre-constructed by domain experts, to retrieve relevant food and health information that is consistent with people's needs. We implemented the system, and the empirical results show high precision and recall with superior user satisfaction.
[ "personalization", "semantic query manipulation", "user profile ontology" ]
[ "P", "R", "R" ]
1MHhWrc
Swift and stable polygon growth and broken line offset
The problem of object growing (offsetting the object boundary by a certain distance) is an important and widely studied problem. In this paper we propose a new approach for offsetting the boundary of an object described by segments which are not necessarily connected. This approach avoids many destructive special cases that arise in some heuristic-based approaches. Moreover, the method developed in this paper is stable in that it does not fail because of missing segments. Also, the time required for the computation of the offset is relatively short and therefore inexpensive, i.e. it is expected to be O(n*log n).
[ "offset technique", "cad/cam", "algorithms", "trimmed offsets" ]
[ "M", "U", "U", "M" ]
UPYxW6V
ILP-based multistage placement of PMUs with dynamic monitoring constraints
A multistage planning of PMU placement for power systems is proposed. The methodology takes into account network expansion plans. System observability is maximized over time. The methodology identifies nodes at which to locate PMUs based on security criteria. The stepwise approach allows the utilities to develop a path for PMU placement.
[ "observability", "phasor measurement unit (pmu)", "multistage pmu placement", "integer linear programming (ilp)", "coherency recognition", "community detection" ]
[ "P", "M", "R", "U", "U", "U" ]
3AuPFD1
Indoor solar energy harvesting for sensor network router nodes
A unique method has been developed to scavenge energy from monocrystalline solar cells to power wireless router nodes used in indoor applications. The system's energy-harvesting module consists of solar cells connected in a series-parallel combination to scavenge energy from 34 W fluorescent lights. A set of ultracapacitors was used as the energy storage device. Two router nodes were used as a router pair at each route point to minimize power consumption. Test results show that the harvesting circuit, which acted as a plug-in to the router nodes, manages energy harvesting and storage, and enables near-perpetual, harvesting-aware operation of the router node.
[ "solar energy", "energy harvesting", "router nodes", "wireless sensor networks", "energy scavenging" ]
[ "P", "P", "P", "R", "R" ]
-iGMPJs
Detecting data records in semi-structured web sites based on text token clustering
This paper describes a new approach to the use of clustering for automatic data detection in semi-structured web pages. Unlike most existing web information extraction approaches, which usually apply wrapper induction techniques to manually labelled web pages, this approach avoids the pattern induction process by using clustering techniques on unlabelled pages. In this approach, a variant Hierarchical Agglomerative Clustering (HAC) algorithm called K-neighbours-HAC is developed which uses the similarities of the data format (HTML tags) and the data content (text string values) to group similar text tokens into clusters. We also develop a new method to label text tokens to capture the hierarchical structure of HTML pages and an algorithm for mapping labelled text tokens to XML. The new approach is tested and compared with several common existing wrapper induction systems on three different sets of web pages. The results suggest that the new approach is effective for data record detection and that it outperforms the common existing approaches examined on these web sites. Compared with the existing approaches, the new approach does not require training and successfully avoids the explicit pattern induction process, and accordingly the entire data detection process is simpler.
[ "semi-structured web sites", "text token clustering", "automatic data detection", "web information extraction", "html tags" ]
[ "P", "P", "P", "P", "P" ]
-ohM7DM
The design of GSC FieldLog: ontology-based software for computer-aided geological field mapping
Databases containing geological field information are increasingly being constructed directly in the field. The design of such databases is often challenged by opposing needs: (1) the individual need to maintain flexibility of database structure and contents, to accommodate unexpected field situations; and (2) the corporate need to retain compatibility between distinct field databases, to accommodate their interoperability. The FieldLog mapping software balances these needs by exploiting a domain ontology developed for field information, one that enables field database flexibility and facilitates compatibility. The ontology consists of cartographic, geospatial, geological and metadata objects that form a common basis for interoperability and that can be instantiated by users into customized field databases. The design of the FieldLog software, its foundation on this ontology, and the resulting benefits to usability are presented in this paper. The discussion concentrates on satisfying the flexibility requirement by implementing the ontology as a generic data model within an object-relational database environment; issues of interoperability are not considered in detail. Benefits of this ontology-driven approach are also developed within a description of the FieldLog application, including (1) improved usability due to a user interface based on the geological components of the ontology, and (2) diminished technical prerequisites, as users are shielded from the many database and GIS technicalities handled by the ontology.
[ "ontology", "data model", "gis", "geological mapping", "field data" ]
[ "P", "P", "P", "R", "R" ]
1p6&2Gt
Systemic disease sequelae in chronic inflammatory diseases and chronic psychological stress: comparison and pathophysiological model
In chronic inflammatory diseases (CIDs), the neuroendocrine-immune crosstalk is important to allocate energy-rich substrates to the activated immune system. Since the immune system can request energy-rich substrates independent of the rest of the body, I refer to it as the selfish immune system, an expression that was taken from the theory of the selfish brain, giving the brain a similar position. In CIDs, the theory predicts the appearance of long-term disease sequelae, such as metabolic syndrome. Since long-standing energy requirements of the immune system determine disease sequelae, the question arose as to whether chronic psychological stress due to chronic activation of the brain causes similar sequelae. Indeed, there are many similarities; however, there are also differences. A major difference is the behavior of body weight (constant in CIDs versus loss or gain in stress). To explain this discrepancy, a new pathophysiological theory is presented that places inflammation and stress axes in the middle.
[ "systemic disease sequelae", "chronic inflammatory disease", "psychological stress", "rheumatoid arthritis" ]
[ "P", "P", "P", "U" ]
1boAQoL
Setting parameters by example
We introduce a class of "inverse parametric optimization" problems, in which one is given both a parametric optimization problem and a desired optimal solution; the task is to determine parameter values that lead to the given solution. We describe algorithms for solving such problems for minimum spanning trees, shortest paths, and other "optimal subgraph" problems and discuss applications in multicast routing, vehicle path planning, resource allocation, and board game programming.
[ "minimum spanning tree", "shortest paths", "inverse optimization", "parametric search", "vehicle routing", "adaptive user interfaces", "alpha-beta search", "evaluation function", "randomized algorithms", "ellipsoid method" ]
[ "P", "P", "R", "M", "R", "U", "U", "U", "M", "U" ]
4QuBGpd
PedVis: A Structured, Space-Efficient Technique for Pedigree Visualization
Public genealogical databases are becoming increasingly populated with historical data and records of the current population's ancestors. As this increasing amount of available information is used to link individuals to their ancestors, the resulting trees become deeper and more dense, which justifies the need for using organized, space-efficient layouts to display the data. Existing layouts are often only able to show a small subset of the data at a time. As a result, it is easy to become lost when navigating through the data or to lose sight of the overall tree structure. On the contrary, leaving space for unknown ancestors allows one to better understand the tree's structure, but leaving this space becomes expensive and allows fewer generations to be displayed at a time. In this work, we propose that the H-tree based layout be used in genealogical software to display ancestral trees. We will show that this layout presents an increase in the number of displayable generations, provides a nicely arranged, symmetrical, intuitive and organized fractal structure, increases the user's ability to understand and navigate through the data, and accounts for the visualization requirements necessary for displaying such trees. Finally, user-study results indicate potential for user acceptance of the new layout.
[ "pedigree", "genealogy", "h-tree" ]
[ "P", "P", "P" ]
45tTJMQ
The evolution of mobile communications in Europe: The transition from the second to the third generation
This paper analyses the evolution of the mobile communications industry in the European Union. The research focuses its interest on the different roles played by the regulator in Europe and in other regions of the world (mainly the US). The diffusion of GSM was extraordinarily fast in Europe, mainly due to the adoption of a unified standard from inception. This rapid diffusion has resulted in an important competitive advantage for European operators. Interestingly, while the regulator acted similarly in the case of UMTS, the development of the latter has faced many problems and, presently, its diffusion is still low (about 5% in the EU). The paper also offers basic information on market structure that may be useful for extracting some preliminary conclusions about the degree of rivalry within the industry and the differences that can be observed between European countries.
[ "regulation", "market structure", "european mobile communications", "2g", "3g" ]
[ "P", "P", "R", "U", "U" ]
4nhpvfG
A Decision Support System for Design of Transmission System of Low Power Tractors
A decision support system (DSS) was developed in the Visual Basic 6.0 programming language to design the transmission system of low-horsepower agricultural tractors, which involved the design of the clutch and gearbox. The DSS provided a graphical user interface by linking databases to support decisions on the design of the transmission system for low-horsepower tractors on the basis of a modified ASABE draft model. The developed program for the design of the tractor transmission system calculated clutch size, gear ratios, number of teeth on each gear, and various gear design parameters. The related deviation was computed for the design of the transmission system of tractors based on measured and predicted (simulated) values. The related deviation was less than 7% for the design of the clutch plate outer diameter and less than 3% for the inner diameter. There was less than 1% variation between the results predicted by the developed DSS and those obtained from actual measurement for the design of the gear ratio. The DSS program was user friendly and efficient for predicting the design of the transmission system for different tractor models to meet the requirements of research institutions and industry. (C) 2015 Wiley Periodicals, Inc. Comput Appl Eng Educ 23:760-770, 2015; view this article online at wileyonlinelibrary.com/journal/cae; DOI 10.1002/cae.21648
[ "decision support system", "low power tractors", "tractor transmission system", "simulation", "asabe equation" ]
[ "P", "P", "P", "P", "M" ]
13uRqWC
Distributed automation: PABADIS versus HMS
Distributed control systems (DCS) have gained huge interest in the automation business. Several approaches have been proposed which aim at the design and application of DCS to improve system flexibility and robustness. Important approaches are (among others) the holonic manufacturing systems (HMS) approach and the plant automation based on distributed systems (PABADIS) approach. PABADIS deals with plant automation systems in a distributed way, using generic mobile and stationary agents and plug-and-participate facilities within a flat structure as key points of the developed control architecture. HMS deals with a similar structure, but aims more at a control hierarchy of special agents. This paper gives a description of the PABADIS project and makes comparisons between the two concepts, showing advantages and disadvantages of both systems. Based on this paper, it will be possible to observe the abilities and drawbacks of distributed agent-based control systems.
[ "distributed control systems (dcs)", "holonic manufacturing systems (hms)", "plant automation based on distributed systems (pabadis)", "manufacturing execution system", "multiagent system" ]
[ "P", "P", "P", "M", "M" ]
4XLhhRT
A new supervised learning algorithm for multiple spiking neural networks with application in epilepsy and seizure detection
A new Multi-Spiking Neural Network (MuSpiNN) model is presented in which information from one neuron is transmitted to the next in the form of multiple spikes via multiple synapses. A new supervised learning algorithm, dubbed Multi-SpikeProp, is developed for training MuSpiNN. The model and learning algorithm employ the heuristic rules and optimum parameter values presented by the authors in a recent paper that improved the efficiency of the original single-spiking Spiking Neural Network (SNN) model by two orders of magnitude. The classification accuracies of MuSpiNN and Multi-SpikeProp are evaluated using three increasingly more complicated problems: the XOR problem, the Fisher iris classification problem, and the epilepsy and seizure detection (EEG classification) problem. It is observed that MuSpiNN learns the XOR problem in twice the number of epochs compared with the single-spiking SNN model but requires only one-fourth the number of synapses. For the iris and EEG classification problems, a modular architecture is employed to reduce each 3-class classification problem to three 2-class classification problems and improve the classification accuracy. For the complicated EEG classification problem a classification accuracy in the range of 90.7%-94.8% was achieved, which is significantly higher than the 82% classification accuracy obtained using the single-spiking SNN with SpikeProp.
[ "supervised learning", "spiking neural networks", "epilepsy", "eeg classification" ]
[ "P", "P", "P", "P" ]
4aNzdjA
Investigations about replication of empirical studies in software engineering: A systematic mapping study
Two recent mapping studies which were intended to verify the current state of replication of empirical studies in Software Engineering (SE) identified two sets of studies: empirical studies actually reporting replications (published between 1994 and 2012) and a second group of studies that are concerned with definitions, classifications, processes, guidelines, and other research topics or themes about replication work in empirical software engineering research (published between 1996 and 2012). In this article, our goal is to analyze and discuss the contents of the second set of studies about replications to increase our understanding of the current state of the work on replication in empirical software engineering research. We applied the systematic literature review method to build a systematic mapping study, in which the primary studies were collected by two previous mapping studies covering the period 1996-2012, complemented by manual and automatic search procedures that collected articles published in 2013. We analyzed 37 papers reporting studies about replication published in the last 17 years. These papers explore different topics related to concepts and classifications, present guidelines, and discuss theoretical issues that are relevant for our understanding of replication in our field. We also investigated how these 37 papers have been cited in the 135 replication papers published between 1994 and 2012. Replication in SE still lacks a set of standardized concepts and terminology, which has a negative impact on the replication work in our field. To improve this situation, it is important that the SE research community engage in an effort to create and evaluate taxonomies, frameworks, guidelines, and methodologies to fully support the development of replications.
[ "replications", "empirical studies", "software engineering", "mapping study", "systematic literature review", "experiments" ]
[ "P", "P", "P", "P", "P", "U" ]
2Fw6Zw5
worst-case analysis of memory allocation algorithms
Various memory allocation problems can be modeled by the following abstract problem. Given a list A = (α1, α2, ..., αn) of real numbers in the range (0, 1], place these in a minimum number of bins so that no bin holds numbers summing to more than 1. We let A* be the smallest number of bins into which the numbers of list A may be placed. Since a general placement algorithm for attaining A* appears to be impractical, it is important to determine good heuristic methods for assigning numbers to bins. We consider four such simple methods and analyze the worst-case performance of each, closely bounding the maximum of the ratio of the number of bins used by each method applied to list A to the optimal quantity A*.
[ "analysis", "memory allocation", "algorithm", "abstraction", "place", "general", "placement", "heuristics", "method", "performance", "optimality", "case" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "U" ]
Cq&Ywx7
Building a financial diagnosis system based on fuzzy logic production system
The purpose of this study is to build a financial expert system based on fuzzy theory and the Fuzzy LOgic Production System (FLOPS), an expert tool for processing ambiguity. The study consists of four parts. In the first part, the basic features of expert systems are presented. In the second part, fuzzy concepts and the evolution from classical expert systems to fuzzy expert systems are presented. In the third part, the expert system shell (FLOPS) used in this study is described. In the last part, the financial diagnosis system is presented, developed by using Wall's seven ratios, the traditional seven ratios and also 34 ratios selected by a financial expert. After analyzing and investigating these three kinds of methods, the financial diagnosis system is developed as a fuzzy expert system which uses a membership function based on averages and standard deviations. At the last step, the new approach is tried by increasing the fuzzy sets to five membership functions. Some practical examples are given. Throughout the paper, the way of building a financial diagnosis system based on a fuzzy expert system is stressed.
[ "expert system", "fuzzy theory", "financial analysis", "area of the paper, expert systems and ai-based systems" ]
[ "P", "P", "M", "M" ]
3ncb6Hf
A hybrid Boundary Element-Wave Based Method for an efficient solution of bounded acoustic problems with inclusions
This paper presents a novel hybrid approach for the efficient solution of bounded acoustic problems with arbitrarily shaped inclusions. The hybrid method couples the Wave Based Method (WBM) and the Boundary Element Method (BEM) in order to benefit from the prominent advantages of both. The WBM is based on an indirect Trefftz approach; as such, it uses exact solutions of the governing equations to approximate the field variables. It has a high computational advantage as compared to conventional element based methods, when applied on moderately complex geometries. The BEM, on the other hand, can tackle complex geometries with ease. However, it can be computationally expensive. The hybrid Boundary Element-Wave Based Method (BE-WBM) combines the best properties of the two; it makes use of the efficient WBM for the moderately complex bounded domains and utilizes the flexibility of the BEM for the complex objects that reside in the bounded domains. The accuracy and the efficiency of the method are demonstrated with three numerical examples, where the hybrid BE-WBM is shown to be more efficient than a quadratic Finite Element Method (FEM). While the hybrid method provides an efficient solution for bounded problems with inclusions, it also brings certain conceptual advantages over the FEM. The fact that it is a boundary-type method with an easy refinement concept reduces the modeling effort in the preprocessing step. Moreover, for certain optimization scenarios such as optimization of the position of inclusions, the FEM becomes disadvantageous because of its domain discretization requirements for each iteration. On the other hand, the hybrid method allows reuse of the fixed geometries and only needs recalculation of the coupling matrices without further preprocessing. As such, the hybrid method combines efficiency with versatility.
[ "bounded acoustic problem", "inclusions", "wave based method", "boundary element method", "helmholtz problem", "trefftz method" ]
[ "P", "P", "P", "P", "M", "R" ]
1famzuu
A fuzzy clustering-based binary threshold bispectrum estimation approach
A fuzzy clustering bispectrum estimation approach is proposed in this paper and applied to rolling element bearing fault recognition. The method combines basic higher-order spectrum theory and the fuzzy clustering technique from data mining. First, all the bispectrum estimation results of the training samples and test samples undergo binarization threshold processing and are turned into binary feature images. Then, the binary feature images of the training samples are used to construct object templates, including kernel images and domain images. Every fault category has one object template. Finally, by calculating the distances between the test samples' binary feature images and the different object templates, object classification and pattern recognition can be effectively accomplished. The bearing is the most important and most easily damaged component in rotating machinery. Furthermore, there exist large amounts of noise jamming and nonlinear coupling components in bearing vibration signals. Higher-order cumulants, which can quantitatively describe the nonlinear characteristic signals closely related to mechanical faults, are introduced in this paper to de-noise the raw bearing vibration signals and obtain the bispectrum estimation pictures. Finally, the rolling bearing fault diagnosis experiment results showed that the classification was completely correct.
[ "fuzzy clustering", "bispectrum estimation", "bearing fault", "fault recognition" ]
[ "P", "P", "P", "P" ]
nNdqvxm
Scalability of write-ahead logging on multicore and multisocket hardware
The shift to multi-core and multi-socket hardware brings new challenges to database systems, as the software parallelism determines performance. Even though database systems traditionally accommodate simultaneous requests, a multitude of synchronization barriers serialize execution. Write-ahead logging is a fundamental, omnipresent component in ARIES-style concurrency and recovery, and one of the most important yet-to-be addressed potential bottlenecks, especially in OLTP workloads making frequent small changes to data. In this paper, we identify four logging-related impediments to database system scalability. Each issue challenges different level in the software architecture: (a) the high volume of small-sized I/O requests may saturate the disk, (b) transactions hold locks while waiting for the log flush, (c) extensive context switching overwhelms the OS scheduler with threads executing log I/Os, and (d) contention appears as transactions serialize accesses to in-memory log data structures. We demonstrate these problems and address them with techniques that, when combined, comprise a holistic, scalable approach to logging. Our solution achieves a 20-69% speedup over a modern database system when running log-intensive workloads, such as the TPC-B and TATP benchmarks, in a single-socket multiprocessor server. Moreover, it achieves log insert throughput over 2.2 GB/s for small log records on the single-socket server, roughly 20 times higher than the traditional way of accessing the log using a single mutex. Furthermore, we investigate techniques on scaling the performance of logging to multi-socket servers. We present a set of optimizations which partly ameliorate the latency penalty that comes with multi-socket hardware, and then we investigate the feasibility of applying a distributed log buffer design at the socket level.
[ "log manager", "early lock release", "flush pipelining", "log buffer contention", "consolidation array", "scaling to multisockets" ]
[ "M", "M", "M", "R", "U", "R" ]
DcfNrEE
Time-Driven Priority Router Implementation: Analysis and Experiments
Low complexity solutions to provide deterministic quality over packet switched networks while achieving high resource utilization have been an open research issue for many years. Service differentiation combined with resource overprovisioning has been considered an acceptable compromise and widely deployed given that the amount of traffic requiring quality guarantees has been limited. This approach is not viable, though, as new bandwidth hungry applications, such as video on demand, telepresence, and virtual reality, populate networks invalidating the rationale that made it acceptable so far. Time-driven priority represents a potentially interesting solution. However, the fact that the network operation is based on a time reference shared by all nodes raises concerns on the complexity of the nodes, from the point of view of both their hardware and software architecture. This work analyzes the implications that the timing requirements of time-driven priority have on network nodes and shows how proper operation can be ensured even when system components introduce timing uncertainties. Experimental results on a time-driven priority router implementation based on a personal computer both validate the analysis and demonstrate the feasibility of the technology even on an architecture that is not designed for operating under timing constraints.
[ "time-driven priority", "architecture related performance", "experiments on a network testbed", "packet scheduling" ]
[ "P", "M", "M", "M" ]
RrMpcZz
Computationally sound symbolic security reduction analysis of the group key exchange protocols using bilinear pairings
The security of group key exchange protocols has been widely studied in the cryptographic community in recent years. Current work usually applies either the computational approach or the symbolic approach for security analysis. The symbolic approach is more efficient than the computational approach, because it can be easily automated. However, compared with the computational approach, it has to overcome three challenges: (1) the computational soundness is unclear; (2) the number of participants must be fixed; and (3) the advantage of efficiency disappears if the number of participants is large. This paper proposes a computationally sound symbolic security reduction approach to resolve these three issues. On one hand, combined with the properties of bilinear pairings, the universally composable symbolic analysis (UCSA) approach is extended from two-party protocols to group key exchange protocols. Meanwhile, the computational soundness of the symbolic approach is guaranteed. On the other hand, for group key exchange protocols which satisfy the syntax of the simple protocols proposed in this paper, the security is proved to be unrelated to the number of participants. As a result, the symbolic approach just needs to deal with protocols among three participants. This gives the symbolic approach the ability to handle an arbitrary number of participants. Therefore, the advantage of efficiency is still guaranteed. The proposed approach can also be applied to other types of cryptographic primitives besides bilinear pairings for computationally sound and efficient symbolic analysis of group key exchange protocols.
[ "computational soundness", "group key exchange protocol", "bilinear pairing", "universally composable symbolic analysis" ]
[ "P", "P", "P", "P" ]
3vFFnyX
Dynamic gradient method for PEBS detection in power system transient stability assessment
In methods for assessing the critical clearing time based on the transient energy function, the dominant procedures in use for detecting the exit point across the potential energy boundary surface (PEBS) are the ray and the gradient methods. Because both methods rely on the geometrical characteristics of the post-fault potential energy surface, they may yield erroneous results. In this paper, a more reliable method for PEBS detection is proposed. It is called the dynamic gradient method to indicate that from a given system state, a small portion of the trajectory of the gradient system is approximated and tested for convergence toward the post-fault stable equilibrium point. It is shown that a trade-off between computing time and reliability can be found as the number of machines in the system becomes greater. The method is illustrated on 3-machine and 10-machine systems.
[ "transient stability", "critical clearing time", "potential energy boundary surface (pebs)", "gradient system", "lyapunov methods", "electric power systems" ]
[ "P", "P", "P", "P", "M", "M" ]
zg11Mba
Exact Matrix Completion via Convex Optimization
We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys m >= C n^1.2 r log n for some positive numerical constant C, then with very high probability, most n x n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information.
[ "matrix completion", "convex optimization", "low-rank matrices", "compressed sensing", "duality in optimization", "nuclear norm minimization", "random matrices", "noncommutative khintchine inequality", "decoupling" ]
[ "P", "P", "P", "P", "M", "M", "R", "U", "U" ]
3ChHeyC
A general framework for expressing preferences in causal reasoning and planning
We consider the problem of representing arbitrary preferences in causal reasoning and planning systems. In planning, a preference may be seen as a goal or constraint that is desirable, but not necessary, to satisfy. To begin, we define a very general query language for histories, or interleaved sequences of world states and actions. Based on this, we specify a second language in which preferences are defined. A single preference defines a binary relation on histories, indicating that one history is preferred to the other. From this, one can define global preference orderings on the set of histories, the maximal elements of which are the preferred histories. The approach is very general and flexible; thus it constitutes a base language in terms of which higher-level preferences may be defined. To this end, we investigate two fundamental types of preferences that we call choice and temporal preferences. We consider concrete strategies for these types of preferences and encode them in terms of our framework. We suggest how to express aggregates in the approach, allowing, e.g. the expression of a preference for histories with lowest total action costs. Last, our approach can be used to express other approaches and so serves as a common framework in which such approaches can be expressed and compared. We illustrate this by indicating how an approach due to Son and Pontelli can be encoded in our approach, as well as the language PDDL3.
[ "preferences", "planning", "knowledge representation", "logical representations of preferences" ]
[ "P", "P", "U", "M" ]
2WqXAMh
A cost sensitive decision tree algorithm with two adaptive mechanisms
An adaptive cut-point selection mechanism is designed to build a classifier. An adaptive attribute-removal mechanism removes the redundant attributes. We adopt the two mechanisms to design an algorithm for classifier construction. Experimental results show the effectiveness and feasibility of our algorithm.
[ "cost sensitive", "decision tree", "adaptive mechanisms", "granular computing" ]
[ "P", "P", "P", "U" ]
eV3GE7o
Solving the Buckley-Leverett equation with gravity in a heterogeneous porous medium
Immiscible two-phase flow in porous media can be described by the fractional flow model. If capillary forces are neglected, then the saturation equation is a non-linear hyperbolic conservation law, known as the Buckley-Leverett equation. This equation can be numerically solved by the method of Godunov, in which the saturation is computed from the solution of Riemann problems at cell interfaces. At a discontinuity of permeability this solution has to be constructed from two flux functions. In order to determine a unique solution an entropy inequality is needed. In this article an entropy inequality is derived from a regularisation procedure, where the physical capillary pressure term is added to the Buckley-Leverett equation. This entropy inequality determines unique solutions of Riemann problems for all initial conditions. It leads to a simple recipe for the computation of interface fluxes for the method of Godunov.
[ "buckley-leverett equation", "heterogeneous porous medium", "two-phase flow", "fractional flow model", "riemann problem", "entropy inequality", "godunov scheme" ]
[ "P", "P", "P", "P", "P", "P", "M" ]
4wz1n19
An exploratory study of enterprise resource planning adoption in Greek companies
Purpose - To examine enterprise resource planning (ERP) adoption in Greek companies, and explore the effects of uncertainty on the performance of these systems and the methods used to cope with uncertainty. Design/methodology/approach - This research was exploratory and six case studies were generated. This work was part of a larger project on the adoption, implementation and integration of ERP systems in Greek enterprises. A taxonomy of ERP adoption research was developed from the literature review and used to underpin the issues investigated in these cases. The results were compared with the literature on ERP adoption in the USA and UK. Findings - There were major differences between ERP adoption in Greek companies and companies in other countries. The adoption, implementation and integration of ERP systems were fragmented in Greek companies. This fragmentation demonstrated that the enterprise's internal culture, resources available, skills of employees, and the way ERP systems are perceived, treated and integrated within the business and in the supply chain, play critical roles in determining the success/failure of ERP systems adoption. A warehouse management system was adopted by some Greek enterprises to cope with uncertainty. Research limitations/implications - A comparison of ERP adoption was made between the USA, UK and Greece, and may limit its usefulness elsewhere. Practical implications - Practical advice is offered to managers contemplating adopting ERP. Originality/value - A new taxonomy of ERP adoption research was developed, which refocused the ERP implementation and integration into related critical success/failure factors and total integration issues, thus providing a more holistic ERP adoption framework.
[ "greece", "uncertainty management", "resource management" ]
[ "P", "R", "R" ]
1Jzd28u
Classification of newborn EEG maturity with Bayesian averaging over decision trees
EEG experts can assess a newborn's brain maturity by visual analysis of age-related patterns in sleep EEG. It is highly desirable to make the results of assessment as accurate and reliable as possible. However, expert analysis is limited in its capability to provide an estimate of uncertainty in assessments. Bayesian inference has been shown to provide the most accurate estimates of uncertainty by using Markov Chain Monte Carlo (MCMC) integration over the posterior distribution. The use of MCMC makes it possible to approximate the desired distribution by sampling the areas of interest in which the density of the distribution is high. In practice, the posterior distribution can be multimodal, so the existing MCMC techniques cannot provide proportional sampling from the areas of interest. The lack of prior information makes MCMC integration more difficult when a model parameter space is large and cannot be explored in detail within a reasonable time. In particular, the lack of information about EEG feature importance can affect the results of Bayesian assessment of EEG maturity. In this paper we explore how posterior information about EEG feature importance can be used to reduce the negative influence of disproportional sampling on the results of Bayesian assessment. We found that MCMC integration tends to oversample the areas in which a model parameter space includes one or more features whose importance, counted in terms of their posterior use, is low. Using this finding, we proposed a way to correct the results of MCMC integration and then described the results of testing the proposed method on a set of sleep EEG recordings.
[ "eeg", "brain maturity", "markov chain monte carlo", "bayesian model averaging" ]
[ "P", "P", "P", "R" ]
4-dSPEU
Parametric estimation of the continuous non-stationary spectrum and its dynamics in surface EMG studies
The frequency spectrum of surface electromyographic signals (SEMGs) exhibits a non-stationary nature even in the case of constant-level isometric muscle contractions, due to changes related to muscle fatigue processes. These changes can be evaluated by methods for estimation of the time-varying (TV) spectrum. The most widely adopted non-parametric approach is the short-time Fourier transform (STFT), from which changes of the mean frequency (MF) as well as other parameters for qualitative description of spectrum variation can be calculated. A similar idea of a sliding-window generalisation can also be used in the case of parametric spectrum analysis methods. We applied such an approach to obtain TV linear models of SEMGs, although its large variance, due to the independence of estimations in consecutive windows, represents a major drawback. This variance causes unrealistic abrupt changes in the curve of overall spectrum dynamics, calculated either as the second derivative of the MF or, as we propose, as the autoregressive moving average (ARMA) distance between subsequent linear models forming the TV parametric spectrum. A smoother estimation is therefore sought, and another method proves to be superior over the simple sliding-window technique. It supposes that trajectories of TV linear model coefficients can be described as linear combinations of known basis functions. We demonstrate that the latter method is very appropriate for the description of slowly changing spectra of SEMGs and that dynamics measures obtained from such estimations can be used as an additional indication of the fatigue process.
[ "muscle fatigue", "surface electromyography", "time-varying linear modelling", "dynamic signals" ]
[ "P", "M", "R", "R" ]
42kPmR3
A multiple-case design methodology for studying MRP success and CSFs
We used a multiple-case design to study materials requirements planning (MRP) implementation outcome in 10 manufacturing companies in Singapore. Using a two-phased data collection approach (pre-interview questionnaires and personal interviews), we sought to develop a comprehensive and operationally acceptable measure of MRP success. Our measure consists of two linked components. They are a satisfaction score (a quantitative measure) and a complementary measure based on comments from the interviewees regarding the level of usage and acceptance of the system. We also extended and consolidated a seven-factor critical success factor (CSF) framework using this methodology. CSFs are important, but knowing the linkages between them is even more important, because these linkages tell us which CSFs to emphasize at various stages of the project.
[ "multiple-case design", "materials requirements planning (mrp)", "critical success factors (csfs)", "mrp implementation outcome" ]
[ "P", "P", "P", "R" ]
eJBdNsi
Hybrid heuristic-waterfilling game theory approach in MC-CDMA resource allocation
This paper discusses the power allocation with fixed rate constraint problem in multi-carrier code division multiple access (MC-CDMA) networks, which has been solved from a game-theoretic perspective by the use of an iterative water-filling algorithm (IWFA). The problem is analyzed under various interference density configurations, and its reliability is studied in terms of solution existence and uniqueness. Moreover, numerical results reveal the approach's shortcoming, thus a new method combining swarm intelligence and IWFA is proposed to make the use of game-theoretic approaches practicable in realistic MC-CDMA system scenarios. The contribution of this paper is twofold: (i) provide a complete analysis of the existence and uniqueness of the game solution, from simple to more realistic and complex interference scenarios; (ii) propose a hybrid power allocation optimization method combining swarm intelligence, game theory and IWFA. To corroborate the effectiveness of the proposed method, an outage probability analysis in realistic interference scenarios, and a complexity comparison with the classical IWFA, are presented. (C) 2011 Elsevier B.V. All rights reserved.
[ "game theory", "iterative water-filling algorithm", "power-rate allocation control", "siso multi-rate mc-cdma", "qos" ]
[ "P", "P", "M", "M", "U" ]
1pzzsx8
Direct type-specific conic fitting and eigenvalue bias correction
A new method to fit specific types of conics to scattered data points is introduced. Direct, specific fitting of ellipses and hyperbolae is achieved by imposing a quadratic constraint on the conic coefficients, whereby an improved partitioning of the design matrix is devised so as to improve computational efficiency and numerical stability by eliminating redundant aspects of the fitting procedure. Fitting of parabolas is achieved by determining an orthogonal basis vector set in the Grassmannian space of the quadratic-term coefficients. The linear combination of the basis vectors that fulfills the parabolic condition and has a minimum residual norm is determined using Lagrange multipliers. This is the first known direct solution for parabola-specific fitting. Furthermore, the inherent bias of a linear conic fit is addressed. We propose a linear method of correcting this bias, producing better geometric fits which are still constrained to a specific conic type.
[ "conics", "curve fitting", "constrained least squares" ]
[ "P", "M", "M" ]
4zg1JKk
Communication with WWW in Czech
This paper describes UIO, a multi-domain question-answering system for the Czech language that looks for answers on the web. UIO exploits two fields, namely natural language interfaces to databases and question answering. In its current version, UIO can be used for asking questions about train and coach timetables, cinema and theatre performances, currency exchange rates and name-days, and about the Diderot Encyclopaedia. Much effort has been made to make the addition of a new domain very easy. No limits concerning words or the form of a question need to be set in UIO. Users can ask syntactically correct as well as incorrect questions, or use keywords. A Czech morphological analyser and a bottom-up chart parser are employed for analysis of the question. The database of multi-word expressions is automatically updated when a new item has been found on the web. For all domains, UIO has an accuracy rate of about 80%.
[ "question answering", "natural language processing" ]
[ "P", "M" ]
4Udvfng
efficient coloring of a large spectrum of graphs
We have developed a new algorithm and software for graph coloring by systematically combining several algorithm and software development ideas that had a crucial impact on the algorithm's performance. The algorithm explores the divide-and-conquer paradigm, global search for constrained independent sets using a computationally inexpensive objective function, assignment of most-constrained vertices to least-constraining colors, reuse and locality exploration of intermediate solutions, search time management, post-processing lottery-scheduling iterative improvement, and statistical parameter determination and validation. The algorithm was tested on a set of real-life examples. We found that hard-to-color real-life examples are common, especially in domains where problem modeling results in denser graphs. Systematic experimentation demonstrated that for numerous instances the algorithm outperformed all other implementations reported in the literature in both solution quality and run-time.
[ "efficiency", "color", "graph", "algorithm", "software", "graph coloring", "combinational", "software development", "performance", "exploration", "divide-and-conquer", "global", "search", "object", "functional", "assignment", "reuse", "locality", "timing", "management", "iter", "statistics", "determinism", "examples", "domain", "model", "experimentation", "implementation", "quality", "scheduling", "spread spectrum communication", "ism frequency band", "digital radio", "rf cmos", "transceiver", "process" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "U", "M", "U", "U", "U", "U", "U" ]
4r6&45t
Boolean Equations and Boolean Inequations
In this paper we consider Boolean inequations of the form f(X) ≠ 0. We also consider the system consisting of a Boolean inequation and a Boolean equation, f(X) ≠ 0 ∧ g(X) = 0, and we describe all the solutions of this system.
[ "boolean inequations", "boolean functions" ]
[ "P", "M" ]
-dmAGHQ
Time efficient centralized gossiping in radio networks
In this paper we study the gossiping problem (all-to-all communication) in radio networks where all nodes are aware of the network topology. We start our presentation with a deterministic gossiping algorithm that works in at most n units of time in any radio network of size n. This algorithm is optimal in the worst case scenario since there exist radio network topologies, such as lines, stars and complete graphs, in which radio gossiping cannot be completed in less than n communication rounds. Furthermore, we show that there does not exist any radio network topology in which the gossiping task can be solved in less than ⌈log(n - 1)⌉ + 2 rounds. We also show that this lower bound can be matched from above for a fraction of all possible integer values of n, and for all other values of n we propose a solution which accomplishes gossiping in ⌈log(n - 1)⌉ + 2 rounds. Then we show an almost optimal radio gossiping algorithm in trees, which misses the optimal time complexity by a single round. Finally, we study asymptotically optimal O(D)-time gossiping (where D is the diameter of the network) in graphs with the maximum degree Δ = O(D^(1-1/(i+1))/log^(i) n), for any integer constant i >= 0 and D large enough. (c) 2007 Elsevier B.V. All rights reserved.
[ "radio networks", "broadcasting and gossiping", "centralized algorithms" ]
[ "P", "M", "R" ]
1tsrx4e
Tree kernel-based protein-protein interaction extraction from biomedical literature
There is a surge of research interest in protein-protein interaction (PPI) extraction from biomedical literature. While most of the state-of-the-art PPI extraction systems focus on dependency-based structured information, the rich structured information inherent in constituent parse trees has not been extensively explored for PPI extraction. In this paper, we propose a novel approach to tree kernel-based PPI extraction, where the tree representation generated from a constituent syntactic parser is further refined using the shortest dependency path between two proteins derived from a dependency parser. Specifically, all the constituent tree nodes associated with the nodes on the shortest dependency path are kept intact, while other nodes are removed safely to make the constituent tree concise and precise for PPI extraction. Compared with previously used constituent tree setups, our dependency-motivated constituent tree setup achieves the best results across five commonly used PPI corpora. Moreover, our tree kernel-based method outperforms other single kernel-based ones and performs comparably with some multiple kernel ones on the most commonly tested AIMed corpus. (C) 2012 Elsevier Inc. All rights reserved.
[ "protein-protein interaction", "constituent parse tree", "shortest dependency path", "convolution tree kernel" ]
[ "P", "P", "P", "M" ]
-8CjFKe
A review of content-based image retrieval systems in medical applications - clinical benefits and future directions
Content-based visual information retrieval (CBVIR) or content-based image retrieval (CBIR) has been one of the most vivid research areas in the field of computer vision over the last 10 years. The availability of large and steadily growing amounts of visual and multimedia data, and the development of the Internet underline the need to create thematic access methods that offer more than simple text-based queries or requests based on matching exact database fields. Many programs and tools have been developed to formulate and execute queries based on the visual or audio content and to help browse large multimedia repositories. Still, no general breakthrough has been achieved with respect to large varied databases with documents of differing sorts and with varying characteristics. Many questions with respect to speed, semantic descriptors or objective image interpretations are still unanswered. In the medical field, images, and especially digital images, are produced in ever-increasing quantities and used for diagnostics and therapy. The Radiology Department of the University Hospital of Geneva alone produced more than 12,000 images a day in 2002. Cardiology is currently the second largest producer of digital images, especially with videos of cardiac catheterization (approximately 1800 exams per year containing almost 2000 images each). The total amount of cardiologic image data produced in the Geneva University Hospital was around 1 TB in 2002. Endoscopic videos can equally produce enormous amounts of data. With digital imaging and communications in medicine (DICOM), a standard for image communication has been set and patient information can be stored with the actual image(s), although still a few problems prevail with respect to the standardization. In several articles, content-based access to medical images for supporting clinical decision-making has been proposed that would ease the management of clinical data, and scenarios for the integration of content-based access methods into picture archiving and communication systems (PACS) have been created. This article gives an overview of available literature in the field of content-based access to medical image data and on the technologies used in the field. Section 1 gives an introduction to generic content-based image retrieval and the technologies used. Section 2 explains the propositions for the use of image retrieval in medical practice and the various approaches. Example systems and application areas are described. Section 3 describes the techniques used in the implemented systems, their datasets and evaluations. Section 4 identifies possible clinical benefits of image retrieval systems in clinical practice as well as in research and education. New research directions are being defined that can prove to be useful. This article also identifies explanations for some of the outlined problems in the field, as it looks like many propositions for systems are made from the medical domain and research prototypes are developed in computer science departments using medical datasets. Still, there are very few systems that seem to be used in clinical practice. It needs to be stated as well that the goal is not, in general, to replace text-based retrieval methods as they exist at the moment but to complement them with visual search tools. (C) 2004 Elsevier Ireland Ltd. All rights reserved.
[ "visual information retrieval", "pacs", "medical image retrieval", "content-based search" ]
[ "P", "P", "R", "R" ]
peFoD81
Infinitesimal Plane-Based Pose Estimation
Estimating the pose of a plane given a set of point correspondences is a core problem in computer vision with many applications including Augmented Reality (AR), camera calibration and 3D scene reconstruction and interpretation. Despite much progress over recent years there is still the need for a more efficient and more accurate solution, particularly in mobile applications where the run-time budget is critical. We present a new analytic solution to the problem which is far faster than current methods based on solving Pose from \(n\) Points (PnP) and is in most cases more accurate. Our approach involves a new way to exploit redundancy in the homography coefficients. This uses the fact that when the homography is noisy it will estimate the true transform between the model plane and the image better at some regions on the plane than at others. Our method is based on locating a point where the transform is best estimated, and using only the local transformation at that point to constrain pose. This involves solving pose with a local non-redundant 1st-order PDE. We call this framework Infinitesimal Plane-based Pose Estimation (IPPE), because one can think of it as solving pose using the transform about an infinitesimally small region on the surface. We show experimentally that IPPE leads to very accurate pose estimates. Because IPPE is analytic it is both extremely fast and allows us to fully characterise the method in terms of degeneracies, number of returned solutions, and the geometric relationship of these solutions. This characterisation is not possible with state-of-the-art PnP methods.
[ "plane", "pose", "pnp", "homography", "sfm" ]
[ "P", "P", "P", "P", "U" ]
1Ku-Mf1
Evolutionary learning of spiking neural networks towards quantification of 3D MRI brain tumor tissues
This paper presents a new classification technique for 3D MR images, based on a third-generation network of spiking neurons. Implementation of multi-dimensional co-occurrence matrices for the identification of pathological tumor tissue and normal brain tissue features is assessed. The results show the ability of the spiking classifier, with iterative training using a genetic algorithm, to automatically and simultaneously recover tissue-specific structural patterns and achieve segmentation of the tumor part. The spiking network classifier has been validated and tested on various real-time and Harvard benchmark datasets, where appreciable performance in terms of mean square error, accuracy and computational time is obtained. The spiking network employed Izhikevich neurons as nodes in a multi-layered structure. The classifier has been compared with the computational power of multi-layer neural networks with sigmoidal neurons. The results on misclassified tumors are analyzed and suggestions for future work are discussed.
[ "spiking neural networks", "multi-dimensional co-occurrence matrices", "genetic algorithm", "izhikevich neurons", "3d magnetic resonance imaging" ]
[ "P", "P", "P", "P", "M" ]
2LqVVj2
Inferential queueing and speculative push
Communication latencies within critical sections constitute a major bottleneck in some classes of emerging parallel workloads. In this paper, we argue for the use of two mechanisms to reduce these communication latencies: Inferentially Queued locks (IQLs) and Speculative Push (SP). With IQLs, the processor infers the existence, and limits, of a critical section from the use of synchronization instructions and joins a queue of lock requestors, reducing synchronization delay. The SP mechanism extracts information about program structure by observing IQLs. SP allows the cache controller, responding to a request for a cache line that likely includes a lock variable, to predict the data sets the requestor will modify within the associated critical section. The controller then pushes these lines from its own cache to the target cache, as well as writing them to memory. Overlapping the protected data transfer with that of the lock can substantially reduce the communication latencies within critical sections. By pushing data in exclusive state, the mechanism can collapse a read-modify-write sequence within a critical section into a single local cache access. The write-back to memory allows the receiving cache to ignore the push. Neither mechanism requires any programmer or compiler support nor any instruction set changes. Our experiments demonstrate that IQLs and SP can improve the performance of applications employing frequent synchronization.
[ "inferential queueing", "critical sections", "synchronization", "data forwarding", "migratory sharing" ]
[ "P", "P", "P", "M", "U" ]
4AzgHQM
The thermal failure process of the quantum cascade laser
We report the thermal failure process of the quantum cascade laser. Firstly, high temperature and strain in the active region are verified by Raman spectra, and conspicuous catastrophic failure characteristics are observed by scanning electron microscopy. Secondly, the defects generate serious structural disorder due to the high temperature of the active region and the resulting strain relaxation. Thirdly, abundant atomic diffusion in the active region and substrate is observed. The structural disorder and the change in elemental composition in the active region directly lead to the failure of the quantum cascade laser. The theoretical analysis fits well with the results of experimental studies.
[ "thermal", "failure", "high temperature", "quantum cascade lasers (qcls)" ]
[ "P", "P", "P", "M" ]
4ibktan
ADMiRA: Atomic Decomposition for Minimum Rank Approximation (vol 56, pg 4402, 2010)
In this correspondence, a corrected version of the convergence analysis given by Lee and Bresler is presented.
[ "compressed sensing", "matrix completion", "performance guarantee", "rank minimization", "singular value decomposition" ]
[ "U", "U", "U", "M", "M" ]
3SPrpmH
Stable advection-reaction-diffusion with arbitrary anisotropy
Turing first theorized that many biological patterns arise through the processes of reaction and diffusion. Subsequently, reaction-diffusion systems have been studied in many fields, including computer graphics. We first show that for visual simulation purposes, reaction-diffusion equations can be made unconditionally stable using a variety of straightforward methods. Second, we propose an anisotropy embedding that significantly expands the space of possible patterns that can be generated. Third, we show that by adding an advection term, the simulation can be coupled to a fluid simulation to produce visually appealing flows. Fourth, we couple fast marching methods to our anisotropy embedding to create a painting interface to the simulation. Unconditional stability is maintained throughout, and our system runs at interactive rates. Finally, we show that on the Cell processor, it is possible to implement reaction-diffusion on top of an existing fluid solver with no significant performance impact. Copyright (c) 2007 John Wiley & Sons, Ltd.
[ "fluid simulation", "physically based animation", "computer animation" ]
[ "P", "U", "M" ]
a2r3f-&
Alexander duality and moments in reliability modelling
There are strong connections between coherent systems in reliability, in which components have a finite number of states, and certain algebraic structures. A special case is binary systems where there are two states: fail and not fail. The connection relies on an order property in the system and a way of coding states α = (α_1, ..., α_d) with monomials x^α = x_1^{α_1} ... x_d^{α_d}. The algebraic entities are the Scarf complex and the associated Alexander duality. The failure "event" can be studied using these ideas, and identities and bounds are derived when the algebra is combined with probability distributions on the states. The x^α coding aids the use of moments μ_α = E(X^α) with respect to the underlying distribution.
[ "moments", "reliability", "scarf complex", "monomial ideals", "alexander dual", "hilbert series" ]
[ "P", "P", "P", "M", "M", "U" ]
53hy8N7
Failure identification for linear repetitive processes
This paper investigates the fault detection and isolation (FDI) problem for discrete-time linear repetitive processes using a geometric approach, starting from a 2-D model for these processes that incorporates a representation of the failure. Based on this model, the FDI problem is formulated in the geometric setting and sufficient conditions for solvability of this problem are given. Moreover, the process's behaviour in the presence of noise is considered, leading to the development of a statistical approach for determining a decision threshold. Finally, an FDI procedure is developed based on an asymptotic observer reconstruction of the state vector.
[ "linear repetitive processes", "fault detection and isolation", "fdi", "geometric approach", "multidimensional systems" ]
[ "P", "P", "P", "P", "U" ]
3G8BKDa
Determinants of web site information by Spanish city councils
Purpose - The purpose of this research is to analyse the web sites of large Spanish city councils with the objective of assessing the extent of information disseminated on the internet and determining what factors are affecting the observed levels of information disclosure. Design/methodology/approach - The study takes as its reference point the existing literature on the examination of the quality of web sites, in particular the provisions of the Web Quality Model (WQM) and the importance of content as a key variable in determining web site quality. In order to quantify the information on city council web sites, a Disclosure Index has been designed which takes into account the content, navigability and presentation of the web sites. In order to test which variables determine the information provided on the web sites, our investigation builds on the studies of voluntary disclosure in the public sector, and six linear regression models have been estimated. Findings - The empirical evidence obtained reveals low disclosure levels among Spanish city council web sites. In spite of this, almost 50 per cent of the city councils have reached the "approved" level and, of these, around a quarter obtained good marks. Our results show that disclosure levels depend on political competition, public media visibility, and the citizens' access to technology and educational levels. Practical implications - The strategy of communication on the internet by local Spanish authorities is limited in general to an ornamental web presence, one that does not respond efficiently to the requirements of the digital society. During the coming years, local Spanish politicians will have to strive to take advantage of the opportunities that the internet offers to increase both the relational and informational capacity of municipal web sites as well as the digital information transparency of their public management. Originality/value - The internet is a potent channel of communication that is modifying the way in which people access and relate to information and each other. The public sector is not unaware of these changes and is incorporating itself gradually into the new network society. This study systematises the analysis of local administration web sites, showing the lack of digital transparency, and orients politicians in the direction to follow in order to introduce improvements in their electronic relationships with the public.
[ "internet", "information strategy", "worldwide web", "local authorities", "spain" ]
[ "P", "R", "M", "R", "U" ]
2a48Se7
The development of regional collaboration for resource efficiency: A network perspective on industrial symbiosis
Three development patterns of industrial symbiosis systems are proposed and empirically examined. Industrial symbiosis networks build on and strengthen the disparity of firms' capabilities in building symbiotic relations. Due to this disparity, self-organized industrial symbiosis networks favor the most capable firms and grow preferentially. Coordinating agencies improve disadvantaged firms' capabilities, and change the preferential growth to a homogeneous one. Strong government engagement helps disadvantaged firms and facilitates non-preferential symbiosis development in a region.
[ "preferential growth", "industrial ecosystem", "by-product", "homogeneous growth", "complex network", "degree correlation" ]
[ "P", "M", "U", "R", "M", "U" ]
K&zHZBC
Effective diagnosis of heart disease through neural networks ensembles
In the last decades, several tools and various methodologies have been proposed by researchers for developing effective medical decision support systems. Moreover, new methodologies and new tools continue to be developed and presented day by day. Diagnosing heart disease is an important issue, and many researchers have investigated the development of intelligent medical decision support systems to improve the ability of physicians. In this paper, we introduce a methodology which uses SAS base software 9.1.3 for diagnosing heart disease. A neural network ensemble method is at the centre of the proposed system. This ensemble-based method creates new models by combining the posterior probabilities or the predicted values from multiple predecessor models, so more effective models can be created. We performed experiments with the proposed tool. We obtained 89.01% classification accuracy from the experiments made on the data taken from the Cleveland heart disease database. We also obtained 80.95% and 95.91% sensitivity and specificity values, respectively, in heart disease diagnosis.
[ "heart disease", "neural networks", "sas base software", "ensemble based model" ]
[ "P", "P", "P", "R" ]
4Tu37-B
Soil carbon model Yasso07 graphical user interface
In this article, we present graphical user interface software for the litter decomposition and soil carbon model Yasso07 and an overview of the principles and formulae it is based on. The software can be used to test the model and use it in simple applications. Yasso07 is applicable to upland soils of different ecosystems worldwide, because it has been developed using data covering the global climate conditions and representing various ecosystem types. As input information, Yasso07 requires data on litter input to soil, climate conditions, and land-use change if any. The model predictions are given as probability densities representing the uncertainties in the parameter values of the model and those in the input data; the user interface calculates these densities using a built-in Monte Carlo simulation.
[ "soil carbon", "software", "decomposition", "monte carlo simulation", "statistical modelling", "uncertainty estimation" ]
[ "P", "P", "P", "P", "M", "M" ]
zEgcBdQ
time synchronization attacks in sensor networks
Time synchronization is a critical building block in distributed wireless sensor networks. Because sensor nodes may be severely resource-constrained, traditional time-synchronization protocols cannot be used in sensor networks. Various time-synchronization protocols tailored for such networks have been proposed to solve this problem. However, none of these protocols have been designed with security in mind. If an adversary were able to compromise a node, he might prevent a network from effectively executing certain applications, such as sensing or tracking an object, or he might even disable the network by disrupting a fundamental service such as a TDMA-based channel-sharing scheme. In this paper we give a survey of the most common time synchronization protocols and outline the possible attacks on each protocol. In addition, we discuss how different sensor network applications are affected by time synchronization attacks, and we propose some countermeasures against these attacks.
[ "attack", "sensor", "senses", "sensor network", "network", "critic", "building block", "distributed", "wireless sensor network", "time-synchronization", "security", "applications", "tracking", "object", "scheme", "paper", "survey", "sharing", "clock drift and skew", "authentication", "resource", "tdma" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "U", "M", "U", "U", "U" ]
n4&xknp
Enhanced Privacy ID: A Direct Anonymous Attestation Scheme with Enhanced Revocation Capabilities
Direct Anonymous Attestation (DAA) is a scheme that enables the remote authentication of a Trusted Platform Module (TPM) while preserving the user's privacy. A TPM can prove to a remote party that it is a valid TPM without revealing its identity and without linkability. In the DAA scheme, a TPM can be revoked only if the DAA private key in the hardware has been extracted and published widely so that verifiers obtain the corrupted private key. If the unlinkability requirement is relaxed, a TPM suspected of being compromised can be revoked even if the private key is not known. However, with the full unlinkability requirement intact, if a TPM has been compromised but its private key has not been distributed to verifiers, the TPM cannot be revoked. Furthermore, a TPM cannot be revoked from the issuer, if the TPM is found to be compromised after the DAA issuing has occurred. In this paper, we present a new DAA scheme called Enhanced Privacy ID (EPID) scheme that addresses the above limitations. While still providing unlinkability, our scheme provides a method to revoke a TPM even if the TPM private key is unknown. This expanded revocation property makes the scheme useful for other applications such as for driver's license. Our EPID scheme is efficient and provably secure in the same security model as DAA, i.e., in the random oracle model under the strong RSA assumption and the decisional Diffie-Hellman assumption.
[ "privacy", "anonymity", "security and protection", "cryptographic protocols", "trusted computing" ]
[ "P", "P", "M", "U", "M" ]
m:xwGH2
using predicate path information in hardware to determine true dependences
Predicated Execution has been put forth as a method for improving processor performance by removing hard-to-predict branches. As part of the process of turning a set of basic blocks into a predicated region, both paths of a branch are combined into a single path. There can be multiple definitions from disjoint paths that reach a use. Waiting to find out the correct definition that actually reaches the use can cause pipeline stalls. In this paper we examine a hardware optimization that dynamically collects and analyzes path information to determine valid dependences for predicated regions of code. We then use this information for an in-order VLIW predicated processor, so that instructions can continue towards execution without having to wait on operands from false dependences. Our results show that using our Disjoint Path Analysis System provides speedups over 6% and elimination of false RAW dependences of up to 14% due to the detection of erroneous dependences in if-converted regions of code.
[ "predicated execution", "path analysis", "dependence analysis" ]
[ "P", "P", "R" ]
2S&riSP
Simulation of DNA damage clustering after proton irradiation using an adapted DBSCAN algorithm
In this work the Density Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm was adapted to early stage DNA damage clustering calculations. The resulting algorithm takes into account the distribution of energy deposit induced by ionising particles and a damage probability function that depends on the total energy deposit amount. Proton track simulations were carried out in small micrometric volumes representing small DNA containments. The algorithm was used to determine the damage concentration clusters and thus to deduce the DSB/SSB ratios created by protons between 500 keV and 50 MeV. The obtained results are compared to other calculations and to available experimental data of fibroblast and plasmid cell irradiations, both extracted from the literature.
[ "dna damage", "clustering", "proton irradiation", "data mining", "monte-carlo", "radiobiology" ]
[ "P", "P", "P", "M", "U", "U" ]
1GbDHCc
building the academic strategy program
Purpose -- to present the application of the theory and best practice of the balanced scorecard (BSC) method, to create a BSC strategic program for academic education decision makers, and to present a framework for a strategy program about research and education in a faculty. Methodology/approach -- based on the investigation of a number of successful projects on this topic and on the authors' exercise in the balanced scorecard approach to educational strategy, the program is created and modelled. Findings -- the balanced scorecard strategic program is developed and it allows enhancing the leadership capability across the university. Practical implications -- the program can help faculty and staff to formulate and measure strategic management decisions and to create a competitive educational and research environment at the university. Originality/value -- the value of the program is in integrating competences, experience, best practices and tools within one new program design. The paper shows how to translate the academic strategy into different strategic objectives and goals, how to model them and how to communicate academic research and education processes to realize important improvements in the cost and quality of academic services.
[ "balanced scorecard program" ]
[ "R" ]
4DWjnj-
Nonlinear transport in quantum point contact structures
We have investigated the magnetotransport properties under nonlinear conditions in quantum point contact structures fabricated on high mobility AlGaAs/GaAs two-dimensional electron gas (2DEG) layers. Nonlinearities in the IV characteristics are observed at the threshold for conduction when biased initially from the tunneling regime as observed previously. We observe that this non-ideality is enhanced by a magnetic field normal to the plane of the 2DEG. This behavior is interpreted in terms of corrections to the Landauer model extended to nonequilibrium conditions.
[ "transport", "nanostructures", "semiconductors", "quantum dots" ]
[ "P", "U", "U", "M" ]
4mVhHUa
MRM: A matrix representation and mapping approach for knowledge acquisition
Knowledge acquisition plays a critical role in constructing a knowledge-based system (KBS). It is the most time-consuming phase and has been recognized as the bottleneck of KBS development. This paper presents a matrix representation and mapping (MRM) approach to facilitate the effectiveness of knowledge acquisition in building a KBS. The proposed MRM approach, which is based on matrix representation and mapping operations, comprises six consecutive steps for generating rules. The procedure in each step is elaborated. A case study on primarily diagnosing an automotive system is employed to illustrate how the MRM approach works.
[ "matrix representation and mapping", "knowledge acquisition", "knowledge-based systems", "general sorting", "rule generation" ]
[ "P", "P", "P", "M", "R" ]
jG-g&Yn
Integrated Obstacle Avoidance and Path Following Through a Feedback Control Law
The article proposes a novel approach to path following in the presence of obstacles with unique characteristics. First, the approach proposes an integrated method for obstacle avoidance and path following based on a single feedback control law, which produces commands to actuators directly executable by a robot with unicycle kinematics. Second, the approach offers a new solution to the well-known dilemma that one has to face when dealing with multiple sensor readings, i.e., whether, in order to summarize a huge amount of sensor data, it is better to consider only the closest sensor reading, to consider all sensor readings separately to compute the resulting force vector, or to build a local map. The approach requires very little memory and computational resources, thus being implementable even on simpler robots moving in unknown environments.
[ "obstacle avoidance", "path following", "mobile robots" ]
[ "P", "P", "M" ]
cMpXnMq
A review of learning vector quantization classifiers
In this work, we present a review of the state of the art of learning vector quantization (LVQ) classifiers. A taxonomy is proposed which integrates the most relevant LVQ approaches to date. The main concepts associated with modern LVQ approaches are defined. A comparison is made among eleven LVQ classifiers using one real-world and two artificial datasets.
[ "learning vector quantization", "supervised learning", "neural networks", "margin maximization", "likelihood ratio maximization" ]
[ "P", "M", "U", "U", "U" ]
1Xhqhsf
sketching concurrent data structures
We describe PSketch, a program synthesizer that helps programmers implement concurrent data structures. The system is based on the concept of sketching, a form of synthesis that allows programmers to express their insight about an implementation as a partial program: a sketch. The synthesizer automatically completes the sketch to produce an implementation that matches a given correctness criterion. PSketch is based on a new counterexample-guided inductive synthesis algorithm (CEGIS) that generalizes the original sketch synthesis algorithm from Solar-Lezama et al. to cope efficiently with concurrent programs. The new algorithm produces a correct implementation by iteratively generating candidate implementations, running them through a verifier, and if they fail, learning from the counterexample traces to produce a better candidate; converging to a solution in a handful of iterations. PSketch also extends Sketch with higher-level sketching constructs that allow the programmer to express her insight as a "soup" of ingredients from which complicated code fragments must be assembled. Such sketches can be viewed as syntactic descriptions of huge spaces of candidate programs (over 10^8 candidates for some sketches we resolved). We have used the PSketch system to implement several classes of concurrent data structures, including lock-free queues and concurrent sets with fine-grained locking. We have also sketched some other concurrent objects including a sense-reversing barrier and a protocol for the dining philosophers problem; all these sketches resolved in under an hour.
[ "sketching", "concurrency", "synthesis", "spin", "sat" ]
[ "P", "P", "P", "U", "U" ]
3WF4WKP
Evaluating change in user error when using ruggedized handheld devices
There are no significant differences between user error and age. Lack of corrective software may not impact user error as much as expected. Keypad devices had more character errors, while touchscreen devices had more word errors.
[ "user error", "handheld devices", "generation" ]
[ "P", "P", "U" ]
2mRArXC
Support for situation awareness in trustworthy ubiquitous computing application software
Due to the dynamic and ephemeral nature of ubiquitous computing (ubicomp) environments, it is especially important that the application software in ubicomp environments is trustworthy. In order to have trustworthy application software in ubicomp environments, situation-awareness (SAW) in the application software is needed to enforce flexible security policies and detect violations of security policies. In this paper, an approach is presented to provide development and runtime support to incorporate SAW in trustworthy ubicomp application software. The development support is to provide SAW requirement specification and automated code generation to achieve SAW in trustworthy ubicomp application software, and the runtime support is for context acquisition, situation analysis and situation-aware communication. To realize our approach, the improved Reconfigurable Context-Sensitive Middleware (RCSM) is developed to provide the above development and runtime support. Copyright (C) 2006 John Wiley & Sons, Ltd.
[ "situation-awareness", "development and runtime support", "trustworthy ubiquitous application software", "situation-aware interface definition language (sa-idl)", "situation-aware middleware", "situation-aware security policies" ]
[ "P", "P", "R", "M", "R", "R" ]
1piA9DR
Validation of temperature-perturbation and CFD-based modelling for the prediction of the thermal urban environment: the Lecce (IT) case study
Two modelling approaches for air temperature prediction in cities are evaluated. Daily trends of air temperature are well captured. ENVI-met requires an ad-hoc tuning of surface boundary conditions. ADMS-TH model performance depends on the accuracy of energy balance terms. ADMS-TH shows an overall better performance than ENVI-met.
[ "urban temperature", "envi-met model", "adms-temperature and humidity model", "land use parameters", "morphological analysis", "lecce city" ]
[ "R", "R", "M", "U", "U", "R" ]
-KRVSY-
Comparison of software for computing the action of the matrix exponential
The implementation of exponential integrators requires the action of the matrix exponential and related functions of a possibly large matrix. There are various methods in the literature for carrying out this task. In this paper we describe a new implementation of a method based on interpolation at Leja points. We numerically compare this method with other codes from the literature. As we are interested in applications to exponential integrators we choose the test examples from spatial discretization of time dependent partial differential equations in two and three space dimensions. The test matrices thus have large eigenvalues and can be nonnormal.
[ "exponential integrators", "leja interpolation", "action of matrix exponential", "krylov subspace method", "taylor series", "65f60", "65d05", "65l04" ]
[ "P", "R", "R", "M", "U", "U", "U", "U" ]
L1HSqUi
Learning while designing
This paper describes how a computational system for designing can learn useful, reusable, generalized search strategy rules from its own experience of designing. It can then apply this experience to transform the design process from search based (knowledge lean) to knowledge based (knowledge rich). The domain of application is the design of spatial layouts for architectural design. The processes of designing and learning are tightly coupled.
[ "learning", "strategy rules", "design process", "situations" ]
[ "P", "P", "P", "U" ]
1Kxpqbv
An algebraic construction of codes for Slepian-Wolf source networks
This correspondence proposes an explicit construction of fixed-length codes for Slepian-Wolf (SW) source networks. The proposed code is linear, and has two-step encoding and decoding procedures similar to the concatenated code used for channel coding. Encoding and decoding of the code can be done in a polynomial order of the block length. The proposed code can achieve arbitrarily small probability of error for ergodic sources with finite alphabets, if the pair of encoding rates is in the achievable region. Further, if the sources are memoryless, the proposed code can be modified to become universal and the probability of error vanishes exponentially as the block length tends to infinity.
[ "fixed-length code", "ergodic process", "slepian-wolf (sw) coding", "source coding" ]
[ "P", "M", "R", "R" ]
4Vs41Sp
ANN and ANFIS models for performance evaluation of a vertical ground source heat pump system
The aim of this study is to demonstrate the comparison of an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) for predicting the performance of a vertical ground source heat pump (VGSHP) system. The VGSHP system, using R-22 as refrigerant, has three single U-tube ground heat exchangers (GHEs) made of polyethylene pipe with a 40 mm outside diameter. The GHEs were placed in vertical boreholes (VBs) with depths of 30 m (VB1), 60 m (VB2) and 90 m (VB3) and diameters of 150 mm. The monthly mean values of COP for VB1, VB2 and VB3 are obtained to be 3.37/1.93, 3.85/2.37, and 4.33/3.03, respectively, in cooling/heating seasons. Experiments were performed to verify the results from the ANN and ANFIS approaches. The ANN model, a multi-layered perceptron with back-propagation using three different learning algorithms (the Levenberg-Marquardt (LM), Scaled Conjugate Gradient (SCG) and Polak-Ribiere Conjugate Gradient (CGP) algorithms), and the ANFIS model were developed using the same input variables. Finally, the statistical values are given in tables. This paper shows the appropriateness of ANFIS for the quantitative modeling of GSHP systems.
[ "ground source heat pump", "adaptive neuro-fuzzy inference system", "membership functions", "vertical heat exchanger", "coefficient of performance" ]
[ "P", "P", "U", "R", "M" ]
1j23-V:
Effects of agent heterogeneity in the presence of a land-market: A systematic test in an agent-based laboratory
Representing agent heterogeneity is one of the main reasons that agent-based models have become increasingly popular in simulating the emergence of land-use, land-cover change and socioeconomic phenomena. However, the relationship between heterogeneous economic agents and the resultant landscape patterns and socioeconomic dynamics has not been systematically explored. In this paper, we present a stylized agent-based land market model, Land Use in eXurban Environments (LUXE), to study the effects of multidimensional agent heterogeneity on the spatial and socioeconomic patterns of urban land use change under various market representations. We examined two sources of agent heterogeneity: budget heterogeneity, which imposes constraints on the affordability of land, and preference heterogeneity, which determines location choice. The effects of the two dimensions of agent heterogeneity are systematically explored across different market representations by three experiments. Agent heterogeneity exhibits a complex interplay with various forms of market institutions as indicated by macro-measures (landscape metrics, segregation index, and socioeconomic metrics). In general, budget heterogeneity has a pronounced effect on socioeconomic results, while preference heterogeneity is highly pertinent to spatial outcomes. The relationship between agent heterogeneity and macro-measures becomes more complex when more land market mechanisms are represented. In other words, appropriately simulating agent heterogeneity plays an important role in guaranteeing the fidelity of replicating empirical land use change processes.
[ "agent heterogeneity", "agent-based modeling", "land market", "competitive bidding", "budget constraints" ]
[ "P", "P", "P", "U", "R" ]
2CANUUm
feature shaping for linear svm classifiers
Linear classifiers have been shown to be effective for many discrimination tasks. Irrespective of the learning algorithm itself, the final classifier has a weight to multiply by each feature. This suggests that ideally each input feature should be linearly correlated with the target variable (or anti-correlated), whereas raw features may be highly non-linear. In this paper, we attempt to re-shape each input feature so that it is appropriate to use with a linear weight and to scale the different features in proportion to their predictive value. We demonstrate that this pre-processing is beneficial for linear SVM classifiers on a large benchmark of text classification tasks as well as UCI datasets.
[ "svm", "text classification", "machine learning", "feature weighting", "feature scaling", "linear support vector machine" ]
[ "P", "P", "M", "R", "R", "M" ]
2hxJJhQ
a service driven development process (sddp) model for ultra large scale systems
Achieving ultra-large-scale software systems will necessarily require new and special development processes. This position paper suggests the overall structure of a process model to develop and maintain systems of systems similar to Ultra Large Scale (ULS) systems. The proposed process model is introduced in detail and, finally, we evaluate it with respect to CMMI-ACQ, which has been presented by the SEI for acquirer organizations.
[ "service", "development process", "ultra large scale systems", "software engineering" ]
[ "P", "P", "P", "M" ]
-N6fsCk
Delineating white matter structure in diffusion tensor MRI with anisotropy creases
Geometric models of white matter architecture play an increasing role in neuroscientific applications of diffusion tensor imaging, and the most popular method for building them is fiber tractography. For some analysis tasks, however, a compelling alternative may be found in the first and second derivatives of diffusion anisotropy. We extend to tensor fields the notion from classical computer vision of ridges and valleys, and define anisotropy creases as features of locally extremal tensor anisotropy. Mathematically, these are the loci where the gradient of anisotropy is orthogonal to one or more eigenvectors of its Hessian. We propose that anisotropy creases provide a basis for extracting a skeleton of the major white matter pathways, in that ridges of anisotropy coincide with interiors of fiber tracts, and valleys of anisotropy coincide with the interfaces between adjacent but distinctly oriented tracts. The crease extraction algorithm we present generates high-quality polygonal models of crease surfaces, which are further simplified by connected-component analysis. We demonstrate anisotropy creases on measured diffusion MRI data, and visualize them in combination with tractography to confirm their anatomic relevance.
[ "diffusion tensor mri", "anisotropy", "ridges and valleys", "crease surface extraction", "white matter geometry" ]
[ "P", "P", "P", "R", "M" ]
-oy2fzr
A fully parallel method for the singular eigenvalue problem
In this paper, a fully parallel method for finding some or all finite eigenvalues of a real symmetric matrix pencil (A, B) is presented, where A is a symmetric tridiagonal matrix and B is a diagonal matrix with b_1 > 0 and b_i >= 0, i = 2, 3, ..., n. The method is based on the homotopy continuation with rank 2 perturbation. It is shown that there are exactly m disjoint, smooth homotopy paths connecting the trivial eigenvalues to the desired eigenvalues, where m is the number of finite eigenvalues of (A, B). It is also shown that the homotopy curves are monotonic and easy to follow. (c) 2005 Elsevier Ltd. All rights reserved.
[ "eigenvalues", "eigenvalue curves", "multiprocessors", "homotopy method" ]
[ "P", "R", "U", "R" ]
2eqv-9x
Potential and Requirements of IT for Ambient Assisted Living Technologies Results of a Delphi Study
Objectives: Ambient Assisted Living (AAL) technologies are developed to enable elderly to live independently and safely. Innovative information technology (IT) can interconnect personal devices and offer suitable user interfaces. Often dedicated solutions are developed for particular projects. The aim of our research was to identify major IT challenges for AAL to enable generic and sustainable solutions. Methods: Delphi Survey. An online questionnaire was sent to 1800 members of the German Innovation Partnership AAL. The first round was qualitative to collect statements. Statements were reduced to items by qualitative content analysis. Items were assessed in the following two rounds by a 5-point Likert-scale. Quantitative analyses for second and third round: descriptive statistics, factor analysis and ANOVA. Results: Respondents: 81 in first, 173 in second and 70 in third round. All items got a rather high assessment. Medical issues were rated as having a very high potential. Items related to user-friendliness were regarded as most important requirements. Common requirements to all AAL-solutions are reliability, robustness, availability, data security, data privacy, legal issues, ethical requirements, easy configuration. The complete list of requirements can be used as framework for customizing future AAL projects. Conclusions: A wide variety of IT issues have been assessed important for AAL. The extensive list of requirements makes obvious that it is not efficient to develop dedicated solutions for individual projects but to provide generic methods and reusable components. Experiences and results from medical informatics research can be used to advance AAL solutions (e.g. eHealth and knowledge-based approaches).
[ "ambient assisted living", "delphi study", "medical informatics", "health information technologies", "health services for elderly" ]
[ "P", "P", "P", "M", "M" ]
3FGkjNK
HYBRID HARMONIC CODING OF SPEECH AT LOW BIT-RATES
This paper presents a novel approach to sinusoidal coding of speech which avoids the use of a voicing detector. The proposed model represents the speech signal as a sum of sinusoids and bandpass random signals and it is denoted hybrid harmonic model in this paper. The use of two different sets of basis functions increases the robustness of the model since there is no need to switch between techniques tailored to particular classes of sounds. Sinusoidal basis functions with harmonically related frequencies allow an accurate representation of the quasi-periodic structure of voiced speech but show difficulties in representing unvoiced sounds. On the other hand, the bandpass random functions are well suited for high quality representation of unvoiced speech sounds, since their bandwidth is larger than the bandwidth of sinusoids. The amplitudes of both sets of basis functions are simultaneously estimated by a least squares algorithm and the output speech signal is synthesized in the time domain by the superposition of all basis functions multiplied by their amplitudes. Experimental tests confirm an improved performance of the hybrid model for operation with noise-corrupted input speech, relative to classic sinusoidal models which exhibit a strong dependency on voicing decision. Finally, the implementation and test of a fully quantized hybrid coder at 4.8 kbit/s is described.
[ "coding", "speech modeling", "sinusodal modeling" ]
[ "P", "R", "M" ]
1oYRqn9
Semi-supervised learning and condition fusion for fault diagnosis
Manifold regularization based semi-supervised learning is introduced to fault diagnosis. Unlabeled condition data are also utilized to enhance the multi-fault detection. A new single-conditions and all-conditions labeled mode is proposed to feed SSL. This SSL approach outperforms supervised learning in both labeled modes. The manifold fundamental of single-conditions labeled mode is analyzed with dimensionality reduction.
[ "fault diagnosis", "semi-supervised learning (ssl)", "condition-based maintenance (cbm)", "manifold regularization (mr)", "conditions labeled mode" ]
[ "P", "M", "U", "M", "R" ]
-iaLozA
A deductive system for proving workflow models from operational procedures
Many modern business environments employ software to automate the delivery of workflows, whereas workflow design and generation remains a laborious technical task for domain specialists. Several different approaches have been proposed for deriving workflow models. Some approaches rely on process data mining approaches, whereas others have proposed derivations of workflow models from operational structures, domain specific knowledge or workflow model compositions from knowledge-bases. Many approaches draw on principles from automatic planning, but are conceptual in nature and lack mathematical justification. In this paper we present a mathematical framework for deducing tasks in workflow models from plans in mechanistic or strongly controlled work environments, with a focus on automatic plan generation. In addition, we prove an associative composition operator that permits crisp hierarchical task compositions for workflow models through a set of mathematical deduction rules. The result is a logical framework that can be used to prove tasks in workflow hierarchies from operational information about work processes and machine configurations in controlled or mechanistic work environments.
[ "workflow", "modelling", "automation", "planning", "petri-net", "management" ]
[ "P", "P", "P", "P", "U", "U" ]
1THFJLh
Graph based signature classes for detecting polymorphic worms via content analysis
Malicious software such as trojans, viruses, or worms can cause serious damage to information systems by exploiting operating system and application software vulnerabilities. Worms constitute a significant proportion of overall malicious software and infect a large number of systems in very short periods. Polymorphic worms combine polymorphism techniques with the self-replicating and fast-spreading characteristics of worms. Each copy of a polymorphic worm has a different pattern, so it is not effective to use simple signature matching techniques. In this work, we propose a graph-based classification framework for content-based polymorphic worm signatures. This framework aims to guide researchers to propose new polymorphic worm signature schemes. We also propose a new polymorphic worm signature scheme, Conjunction of Combinational Motifs (CCM), based on the defined framework. CCM utilizes common substrings of polymorphic worm copies and also the relation between those substrings through dependency analysis. CCM is resilient to new versions of a polymorphic worm. CCM also automatically generates signatures for new versions of a polymorphic worm, triggered by partial signature matches. Experimental results show that CCM has good flow evaluation time performance with low false positives and low false negatives.
[ "graph based signature", "polymorphic worm", "worm detection" ]
[ "P", "P", "R" ]
3pXsTyN
polynomial-time theory of matrix groups
We consider matrix groups, specified by a list of generators, over finite fields. The two most basic questions about such groups are membership in and the order of the group. Even in the case of abelian groups it is not known how to answer these questions without solving hard number theoretic problems (factoring and discrete log); in fact, constructive membership testing in the case of 1 × 1 matrices is precisely the discrete log problem. So the reasonable question is whether these problems are solvable in randomized polynomial time using number theory oracles. Building on 25 years of work, including remarkable recent developments by several groups of authors, we are now able to determine the order of a matrix group over a finite field of odd characteristic, and to perform constructive membership testing in such groups, in randomized polynomial time, using oracles for factoring and discrete log. One of the new ingredients of this result is the following. A group is called semisimple if it has no abelian normal subgroups. For matrix groups over finite fields, we show that the order of the largest semisimple quotient can be determined in randomized polynomial time (no number theory oracles required and no restriction on parity). As a by-product, we obtain a natural problem that belongs to BPP and is not known to belong either to RP or to coRP. No such problem outside the area of matrix groups appears to be known. The problem is the decision version of the above: Given a list A of nonsingular d × d matrices over a finite field and an integer N, does the group generated by A have a semisimple quotient of order > N? We also make progress in the area of constructive recognition of simple groups, with the corollary that for a large class of matrix groups, our algorithms become Las Vegas.
[ "matrix groups", "discrete log", "computational group theory" ]
[ "P", "P", "M" ]
4m4NvF6
Delineation of the genomics field by hybrid citation-lexical methods: interaction with experts and validation process
In advanced methods of delineation and mapping of scientific fields, hybrid methods open a promising path to capitalising on the advantages of approaches based on words and citations. One way to validate hybrid approaches is to work in cooperation with experts of the fields under scrutiny. We report here an experiment in the field of genomics, where a corpus of documents was built by a hybrid citation-lexical method and then clustered into research themes. Experts of the field were involved in the various stages of the process: lexical queries for building the initial set of documents, the seed; citation-based extension aiming at reducing silence; final clustering to identify noise and allow discussion on border areas. The analysis of the experts' advice shows a high level of validation of the process, which combines a high-precision and low-recall seed, obtained by journal and lexical queries, with a citation-based extension enhancing the recall. These findings in the genomics field suggest that hybrid methods can efficiently retrieve a corpus of relevant literature, even in complex and emerging fields.
[ "genomics", "information retrieval", "bibliographic coupling", "citation methods", "bibliometrics", "science mapping", "field delineation" ]
[ "P", "M", "U", "R", "U", "M", "R" ]
1uaSKZr
Sampling correlation sources for timing yield analysis of sequential circuits with clock networks
Analyzing timing yield under process variations is difficult because of the presence of correlations. Reconvergent fan-out nodes (RFONs) within combinational subcircuits are a major source of topological correlation. We identify two more sources of topological correlation in clocked sequential circuits: sequential RFONs, which are nodes within a clock network where the clock paths to more than one flip-flop branch out; and sequential branch-points, which are nodes within a combinational block where combinational paths to more than one capturing flip-flop branch out. Dealing with all sources of correlation is unacceptably complicated, and we therefore show how to sample a handful of correlation sources without sacrificing significant accuracy in the yield. A further reduction in computation time can be obtained by sampling only those nodes that are likely to affect the yield. These techniques are applied to yield analysis using statistical static timing analysis based on discrete random variables and also to yield analysis based on Monte Carlo simulation; the accuracy and efficiency of both methods are assessed using example circuits. The sequential RFONs suggest that timing yield may be improved by optimizing the clock network, and we address this possibility.
[ "correlation", "timing yield", "sequential circuit", "clock network", "statistical static timing analysis", "monte carlo simulation" ]
[ "P", "P", "P", "P", "P", "P" ]
37VGw2&
Optimal search and one-way trading online algorithms
This paper is concerned with the time series search and one-way trading problems. In the (time series) search problem a player is searching for the maximum (or minimum) price in a sequence that unfolds sequentially, one price at a time. Once during this game the player can decide to accept the current price p, in which case the game ends and the player's payoff is p. In the one-way trading problem a trader is given the task of trading dollars to yen. Each day, a new exchange rate is announced and the trader must decide how many dollars to convert to yen at the current rate. The game ends when the trader has traded his entire dollar wealth to yen, and his payoff is the number of yen acquired. The search and one-way trading problems are intimately related. Any (deterministic or randomized) one-way trading algorithm can be viewed as a randomized search algorithm. Using the competitive ratio as a performance measure, we determine the optimal competitive performance for several variants of these problems. In particular, we show that a simple threat-based strategy is optimal and we determine its competitive ratio, which yields, for realistic values of the problem parameters, surprisingly low competitive ratios. We also consider and analyze a one-way trading game played against an adversary called Nature, where the online player knows the probability distribution of the maximum exchange rate and that distribution has been chosen by Nature. Finally, we consider some applications for a special case of portfolio selection called two-way trading in which the trader may trade back and forth between cash and one asset.
[ "one-way trading", "online algorithms", "time series search", "portfolio selection", "two-way trading", "competitive analysis" ]
[ "P", "P", "P", "P", "P", "M" ]
3KR4oAJ
Interpretation of complex scenes using dynamic tree-structure Bayesian networks
This paper addresses the problem of object detection and recognition in complex scenes, where objects are partially occluded. The approach presented herein is based on the hypothesis that a careful analysis of visible object details at various scales is critical for recognition in such settings. In general, however, computational complexity becomes prohibitive when trying to analyze multiple sub-parts of multiple objects in an image. To alleviate this problem, we propose a generative-model framework, namely dynamic tree-structure belief networks (DTSBNs). This framework formulates object detection and recognition as inference of the DTSBN structure and image-class conditional distributions, given an image. The causal (Markovian) dependencies in DTSBNs allow for the design of computationally efficient inference, as well as for interpretation of the estimated structure as follows: each root represents a whole distinct object, while children nodes down the sub-tree represent parts of that object at various scales. Therefore, within the DTSBN framework, the treatment and recognition of object parts requires no additional training, but merely a particular interpretation of the tree/subtree structure. This property leads to a strategy for recognizing objects as a whole through recognition of their visible parts. Our experimental results demonstrate that this approach remarkably outperforms strategies without explicit analysis of object parts. (c) 2006 Elsevier Inc. All rights reserved.
[ "bayesian networks", "generative models", "dynamic trees", "variational inference", "image segmentation", "object recognition" ]
[ "P", "M", "R", "M", "M", "R" ]
-c43ztU
consistent group membership in ad hoc networks
The design of ad hoc mobile applications often requires the availability of a consistent view of the application state among the participating hosts. Such views are important because they simplify both the programming and verification tasks. Essential to constructing a consistent view is the ability to know which hosts are within proximity of each other, i.e., form a group in support of the particular application. In this paper we propose an algorithm that allows hosts within communication range to maintain a consistent view of the group membership despite movement and frequent disconnections. The novel features of this algorithm are its reliance on location information and a conservative notion of logical connectivity that creates the illusion of announced disconnection. Movement patterns and delays are factored into the policy that determines which physical connections are susceptible to disconnection.
[ "consistency", "group", "group membership", "ad hoc network", "design", "mobility", "mobile application", "applications", "availability", "views", "program", "verification", "proximics", "support", "paper", "algorithm", "communication", "maintainability", "movement", "connection", "feature", "locatability", "informal", "pattern", "delay", "policy", "physical", "ad-hoc" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "U" ]
3ucAJTc
a real-time collision detection algorithm for mobile billiards game
Collision detection is a key technique in game design. However, some algorithms employed in PC games are not suitable for mobile games because of the low performance and small screen size of mobile devices. Taking the features of mobile devices into account, this paper proposes a quick and feasible collision detection algorithm. This algorithm makes use of multi-level collision detection and dynamic multi-resolution grid subdivision to reduce the computing time for collision detection, which improves the algorithm's performance greatly. In the collision response phase, this paper adopts the time step binary search algorithm to ensure both computing precision and system efficiency. The mobile billiards game developed for the Bird Company shows that this algorithm achieves good performance and real-time interaction.
[ "collision detection", "mobile game", "multi-resolution", "binary search" ]
[ "P", "P", "P", "P" ]
6FtabHt
Iterative execution-feedback model-directed GUI testing
Current fully automatic model-based test-case generation techniques for GUIs employ a static model. Therefore they are unable to leverage certain state-based relationships between GUI events (e.g., one enables the other, one alters the other's execution) that are revealed at run-time and non-trivial to infer statically. We present ALT, a new technique to generate GUI test cases in batches. Because of its alternating nature, ALT enhances the next batch by using GUI run-time information from the current batch. An empirical study on four fielded GUI-based applications demonstrated that ALT was able to detect new 4- and 5-way GUI interaction faults; in contrast, previous techniques, because they require too many test cases, were unable even to test 4- and 5-way GUI interactions.
[ "gui testing", "test-case generation", "model-based testing", "event-driven software", "event-flow graphs" ]
[ "P", "P", "R", "U", "U" ]
U&yHys6
A hierarchical refinement algorithm for fully automatic gridding in spotted DNA microarray image processing
Gridding, the first step in spotted DNA microarray image processing, usually requires human intervention to achieve acceptable accuracy. We present a new algorithm for automatic gridding based on hierarchical refinement to improve the efficiency, robustness and reproducibility of microarray data analysis. This algorithm employs morphological reconstruction along with global and local rotation detection, non-parametric optimal thresholding and local fine-tuning without any human intervention. Using synthetic data and real microarray images of different sizes and with different degrees of rotation of subarrays, we demonstrate that this algorithm can detect and compensate for alignment and rotation problems to obtain reliable and robust results.
[ "automatic gridding", "dna microarray image", "image processing", "gene expression" ]
[ "P", "P", "P", "U" ]
4GESiaH
Genetic algorithms applied in BOPP film scheduling problems: minimizing total absolute deviation and setup times
The frequent changeovers in production processes indicate the importance of setup time in many real-world manufacturing activities. The traditional approaches to dealing with setup times are either to omit them or to merge them into the processing times so as to simplify the problem. These approaches reduce the complexity of the problem but often generate unrealistic outcomes because of the assumed conditions. This situation motivated us to consider sequence-dependent setup times in a real-world BOPP film scheduling problem. First, a setup time-based heuristic method was developed to generate the initial solutions for the genetic algorithms (GAs). Then, genetic algorithms with different mutation methods were applied. Extensive experimental results showed that the setup time-based heuristic method was relatively efficient. It was also found that a genetic algorithm with a variable mutation rate performed much more effectively than one with a fixed mutation rate.
[ "genetic algorithm", "total absolute deviation", "setup time", "variable mutation rate", "bopp scheduling problem" ]
[ "P", "P", "P", "P", "R" ]
1mijwv8
Automatic grading of Scots pine (Pinus sylvestris L.) sawlogs using an industrial X-ray log scanner
The successful running of a sawmill is dependent on its ability to achieve the highest possible value recovery from the sawlogs, i.e. to optimize the use of the raw material. Such optimization requires information about the properties of every log. One method of measuring these properties is to use an X-ray log scanner. The objective of the present study was to determine the accuracy when grading Scots pine (Pinus sylvestris L.) sawlogs using an industrial scanner known as the X-ray LogScanner. The study was based on 150 Scots pine sawlogs from a sawmill in northern Sweden. All logs were scanned in the LogScanner at a speed of 125 m/min. The X-ray images were analyzed on-line with measures of different properties as a result (e.g. density and density variations). The logs were then sawn with a normal sawing pattern (50 × 125 mm) and the logs were graded depending on the result from the manual grading of the center boards. Finally, partial least squares (PLS) regression was used to calibrate statistical models that predict the log grade based on the properties measured by the X-ray LogScanner. The study showed that 77–83% of the logs were correctly sorted when using the scanner to sort logs into three groups according to the predicted grade of the center boards. After sawing the sorted logs, 67% of the boards had the correct grade. When scanning the same logs repeatedly, the relative standard deviation of the predicted grade was 12–20%. The study also showed that it is possible to sort out 10 and 16%, respectively, of the material into two groups with high quality logs, without changing the grade distribution of the rest of the material to any great extent.
[ "sawlogs", "x-ray", "scanning" ]
[ "P", "P", "P" ]
-zsXxMa
Learning juntas in the presence of noise
We investigate the combination of two major challenges in computational learning: dealing with huge amounts of irrelevant information and learning from noisy data. It is shown that large classes of Boolean concepts that depend only on a small fraction of their variables - so-called juntas - can be learned efficiently from uniformly distributed examples that are corrupted by random attribute and classification noise. We present solutions to cope with the manifold problems that inhibit a straightforward generalization of the noise-free case. Additionally, we extend our methods to non-uniformly distributed examples and derive new results for monotone juntas and for parity juntas in this setting. It is assumed that the attribute noise is generated by a product distribution. Without any restriction on the attribute noise distribution, learning in the presence of noise is in general impossible. This follows from our construction of a noise distribution P and a concept class C such that it is impossible to learn C under P-noise. (c) 2007 Elsevier B.V. All rights reserved.
[ "juntas", "learning in the presence of noise", "learning of boolean functions", "learning in the presence of irrelevant information", "fourier analysis" ]
[ "P", "P", "M", "R", "U" ]