Search results for: Neutrosophic sets
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 603

513 Generalized Rough Sets Applied to Graphs Related to Urban Problems

Authors: Mihai Rebenciuc, Simona Mihaela Bibic

Abstract:

A branch of modern mathematics, graphs are instruments for optimization and for solving practical applications in various fields such as economic networks, engineering, network optimization, the geometry of social action and, more generally, complex systems, including contemporary urban problems (path or transport efficiency, biourbanism, etc.). This paper studies the interconnection of urban networks, which leads to the problem of simulating one digraph by another. The simulation may be univocal or, more generally, multivocal. The concepts of fragment and atom are very useful in the study of connectivity in the simulating digraph, including an alternative evaluation of k-connectivity. The rough set approach to (bi)digraphs, proposed here for the first time, significantly improves the evaluation of k-connectivity. This approach is based on generalized rough sets, whose basic facts are also presented in this paper.
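
As an aside for readers unfamiliar with the machinery: in the classical case, generalized rough sets reduce to lower and upper approximations of a vertex set induced by each vertex's successor neighborhood. The sketch below is a minimal illustration of that idea on a toy digraph, not the authors' construction; the graph and names are invented.

```python
# Lower/upper rough-set approximations of a vertex set X in a digraph,
# using each vertex's successor set as its granule of knowledge.
def successors(digraph, v):
    return set(digraph.get(v, ()))

def rough_approximations(digraph, X):
    X = set(X)
    vertices = set(digraph) | {u for vs in digraph.values() for u in vs}
    lower = {v for v in vertices if successors(digraph, v) <= X}  # certainly in X
    upper = {v for v in vertices if successors(digraph, v) & X}   # possibly in X
    return lower, upper

# Toy urban network: vertex -> directly reachable neighbors (road links).
g = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(rough_approximations(g, {"a", "c"}))
```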

Keywords: (Bi)digraphs, rough set theory, systems of interacting agents, complex systems.

512 Tri-Axis Receiver for Wireless Micro-Power Transmission

Authors: Nan-Chyuan Tsai, Sheng-Liang Hsu

Abstract:

An innovative tri-axis micro-power receiver is proposed. It consists of two sets of 3-D micro-solenoids and one set of planar micro-coils in which an iron core is embedded. The three sets of micro-coils are designed to be orthogonal to each other. Therefore, no matter along which direction the flux is present, the magnetic energy can be harvested and transformed into electric power. Not only is the dead space of power reception largely reduced, but the efficiency of transforming electromagnetic energy into electric power is also raised. Using the commercial software Ansoft Maxwell, preliminary simulation results verify that the proposed micro-power receiver can efficiently pick up the energy transmitted by a magnetic power source. As to the fabrication process, an isotropic etching technique is employed to micro-machine the inverse-trapezoid fillister so that the copper wire can be successfully electroplated. The adhesion between the micro-coils and the fillister is thereby much enhanced.

Keywords: Wireless Power Transmission, Magnetic Flux, Tri-axes Micro-power Receiver

511 Comparative Evaluation of Color-Based Video Signatures in the Presence of Various Distortion Types

Authors: Aritz Sánchez de la Fuente, Patrick Ndjiki-Nya, Karsten Sühring, Tobias Hinz, Karsten Müller, Thomas Wiegand

Abstract:

The robustness of color-based signatures in the presence of a selection of representative distortions is investigated. Considered are five signatures that have been developed and evaluated within a new modular framework. Two signatures presented in this work are directly derived from histograms gathered from video frames. The other three signatures are based on temporal information by computing difference histograms between adjacent frames. In order to obtain objective and reproducible results, the evaluations are conducted based on several randomly assembled test sets. These test sets are extracted from a video repository that contains a wide range of broadcast content including documentaries, sports, news, movies, etc. Overall, the experimental results show the adequacy of color-histogram-based signatures for video fingerprinting applications and indicate which type of signature should be preferred in the presence of certain distortions.
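
To make the temporal variant concrete, a difference-histogram signature can be sketched as below; the binning, channel handling and normalization are assumptions for illustration, not the paper's exact design.

```python
import numpy as np

def frame_histogram(frame, bins=16):
    # frame: HxWx3 uint8 RGB array; one coarse histogram per channel.
    return np.concatenate([
        np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)

def difference_signature(frames, bins=16):
    # Temporal signature: concatenated differences of adjacent-frame
    # histograms, L1-normalized so clips of different lengths compare.
    hists = [frame_histogram(f, bins) for f in frames]
    diffs = [np.abs(h2 - h1) for h1, h2 in zip(hists, hists[1:])]
    sig = np.concatenate(diffs)
    return sig / (sig.sum() + 1e-12)

rng = np.random.default_rng(0)
clip = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(5)]
print(difference_signature(clip).shape)  # (4 pairs * 3 channels * 16 bins,)
```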

Keywords: color histograms, robust hashing, video retrieval, video signature

510 Summarizing Data Sets for Data Mining by Using Statistical Methods in Coastal Engineering

Authors: Yunus Doğan, Ahmet Durap

Abstract:

Coastal regions are among the areas most heavily used by both natural processes and a growing population. In coastal engineering, the most valuable data concern wave behavior, and the volume of such data becomes very large because observations run continuously over hours, days and months. In this study, statistical methods such as wave spectrum analysis and standard descriptive statistics are used. The goal is to discover profiles of different coastal areas by means of these statistical methods and thus to obtain, from the big data, an instance-based data set that can be analyzed with data mining algorithms. In the experimental studies, six sample data sets on wave behavior, obtained from 20-minute observations in Mersin Bay, Turkey, were converted to an instance-based form, and different clustering techniques from data mining were used to discover similar coastal places. Moreover, this study discusses how this summarization approach can be applied in other fields that collect big data, such as medicine.
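
The summarization step described here (collapsing long observation windows into per-site statistical features, then clustering the resulting instances) might look like the following sketch; the chosen features, including the significant wave height H1/3, and the KMeans settings are assumptions rather than the study's exact pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

def summarize_window(wave_heights):
    # Standard statistics plus the significant wave height H1/3
    # (mean of the highest third of waves), a common coastal measure.
    x = np.sort(np.asarray(wave_heights))
    h13 = x[-max(1, len(x) // 3):].mean()
    return [x.mean(), x.std(), x.max(), h13]

rng = np.random.default_rng(1)
# Six hypothetical coastal sites, each a long series of wave heights.
sites = [np.abs(rng.normal(loc, 0.3, 600)) for loc in (0.5, 0.6, 1.2, 1.3, 2.0, 2.1)]
instances = np.array([summarize_window(s) for s in sites])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(instances)
print(labels)  # similar sites should share a cluster
```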

Keywords: Clustering algorithms, coastal engineering, data mining, data summarization, statistical methods.

509 FCNN-MR: A Parallel Instance Selection Method Based on Fast Condensed Nearest Neighbor Rule

Authors: Lu Si, Jie Yu, Shasha Li, Jun Ma, Lei Luo, Qingbo Wu, Yongqi Ma, Zhengji Liu

Abstract:

The instance selection (IS) technique is used to reduce data size and thereby improve the performance of data mining methods. Recently, to process very large data sets, several methods have been proposed that divide the training set into disjoint subsets and apply IS algorithms independently to each subset. In this paper, we analyze the limitations of these methods and give our viewpoint on how to divide and conquer in the IS procedure. Then, based on the fast condensed nearest neighbor (FCNN) rule, we propose an instance selection method for large data sets within the MapReduce framework. Besides preserving prediction accuracy and reduction rate, it has two desirable properties: first, it reduces the workload in the aggregation node; second, and most important, it produces the same result as the sequential version, which other parallel methods cannot achieve. We evaluate the performance of FCNN-MR on one small data set and two large data sets. The experimental results show that it is effective and practical.
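
For orientation, the condensation idea underlying FCNN can be sketched with the classic (serial) condensed nearest neighbor rule, which iteratively adds every point misclassified by the current prototype set; this is not the authors' exact FCNN or its MapReduce variant.

```python
import numpy as np

def condensed_nn(X, y):
    # Start from one prototype per class, then repeatedly add every point
    # the current prototypes misclassify, until no point is added.
    keep = [int(np.flatnonzero(y == c)[0]) for c in np.unique(y)]
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            if i in keep:
                continue
            d = np.linalg.norm(X[keep] - X[i], axis=1)
            if y[keep][np.argmin(d)] != y[i]:
                keep.append(i)
                changed = True
    return np.array(keep)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
idx = condensed_nn(X, y)
print(f"kept {len(idx)} of {len(X)} instances")
```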

Keywords: Instance selection, data reduction, MapReduce, kNN.

508 A Rigid Point Set Registration of Remote Sensing Images Based on Genetic Algorithms and Hausdorff Distance

Authors: F. Meskine, N. Taleb, M. Chikr El-Mezouar, K. Kpalma, A. Almhdie

Abstract:

Image registration is the process of establishing a point-by-point correspondence between images obtained from the same scene. This process is very useful in remote sensing, medicine, cartography, computer vision, etc. The task of registration is to place the data into a common reference frame by estimating the transformations between the data sets. In this work, we develop a rigid point registration method based on genetic algorithms and the Hausdorff distance. First, we extract the feature points from both images using a global and local curvature corner detection algorithm. After refining the feature points, we use the Hausdorff distance as the similarity measure between the two data sets, and to optimize the search we use genetic algorithms, which achieve high computation speed thanks to their inherent parallelism. The results show the efficiency of this method for the registration of satellite images.
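
The Hausdorff similarity measure itself is standard; a small sketch under a rigid 2D parameterization (rotation angle plus translation, an assumed encoding for the GA chromosome) is given below.

```python
import numpy as np

def hausdorff(A, B):
    # Symmetric Hausdorff distance between two point sets (n x 2 arrays).
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def rigid_transform(points, theta, tx, ty):
    # Rotate by theta, then translate by (tx, ty).
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return points @ R.T + np.array([tx, ty])

A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
B = rigid_transform(A, np.pi / 6, 2.0, -1.0)
# A GA would search (theta, tx, ty) minimizing this fitness; at the
# true parameters the distance is ~0.
print(hausdorff(rigid_transform(A, np.pi / 6, 2.0, -1.0), B))
```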

Keywords: Feature extraction, Genetic algorithms, Hausdorff distance, Image registration, Point registration.

507 Flexible, Adaptable and Scaleable Business Rules Management System for Data Validation

Authors: Kashif Kamran, Farooque Azam

Abstract:

The policies governing the business of any organization are well reflected in its business rules. Business rules are implemented through data validation techniques coded during the software development process, so any change in business policies entails changing the data validation code that enforces them. Implementing changes in business rules without changing the code is the objective of this paper. The proposed approach enables users to create rule sets at run time, once the software has been developed. Rule sets newly defined by end users are associated with the data variables that require validation. The approach lets users define business rules using all the comparison operators and Boolean operators. Multithreading is used to validate the data entered by end users against the applied business rules: evaluation is performed by a newly created thread using an enhanced form of the Reverse Polish Notation (RPN) algorithm.
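
A bare-bones RPN evaluator over comparison and Boolean operators conveys the run-time evaluation idea; the token encoding and field names are assumptions for illustration, not the paper's enhanced algorithm.

```python
import operator

# Comparison and Boolean operators the toy rule language supports.
OPS = {
    ">": operator.gt, "<": operator.lt, ">=": operator.ge, "<=": operator.le,
    "==": operator.eq, "!=": operator.ne,
    "AND": lambda a, b: a and b, "OR": lambda a, b: a or b,
}

def eval_rpn(tokens, record):
    # tokens: a rule in postfix order; names resolve against the data record.
    stack = []
    for t in tokens:
        if t in OPS:
            b, a = stack.pop(), stack.pop()
            stack.append(OPS[t](a, b))
        elif t in record:              # variable name -> field value
            stack.append(record[t])
        else:                          # numeric literal
            stack.append(float(t))
    return stack.pop()

# Infix rule: age >= 18 AND salary > 3000
rule = ["age", "18", ">=", "salary", "3000", ">", "AND"]
print(eval_rpn(rule, {"age": 25, "salary": 4500}))  # True
```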

Keywords: Business Rules, data validation, multithreading, Reverse Polish Notation

506 3D Point Cloud Model Color Adjustment by Combining Terrestrial Laser Scanner and Close Range Photogrammetry Datasets

Authors: M. Pepe, S. Ackermann, L. Fregonese, C. Achille

Abstract:

3D models obtained with advanced survey techniques such as close-range photogrammetry and laser scanning are nowadays particularly appreciated in the Cultural Heritage and Archaeology fields. To produce high-quality models of archaeological evidence and anthropological artifacts, the appearance of the model (i.e., its color), beyond geometric accuracy, is not a negligible aspect. The integration of close-range photogrammetry with laser scanning is still a topic of study and research. Combining point cloud data sets of the same object generated with both technologies, or with the same technology but registered at different moments and/or under different natural light conditions, can yield a final point cloud with accentuated color dissimilarities. This paper presents a methodology to make the different data sets uniform, to improve the chromatic quality and to highlight further details by balancing the point colors.

Keywords: Color models, cultural heritage, laser scanner, photogrammetry, point cloud color.

505 Project Selection Using Fuzzy Group Analytic Network Process

Authors: Hamed Rafiei, Masoud Rabbani

Abstract:

This paper deals with the project selection problem, which first arose in the field of operations research, following production concepts from the early product-mix problem. Afterward, the introduction of managerial considerations brought qualitative factors and criteria into the problem alongside quantitative ones. To handle both kinds of criteria, an analytic network process is developed in this paper, enhanced with fuzzy sets theory to tackle the vagueness of experts' comments when evaluating the alternatives. Additionally, a modified version of the least-squares method, formulated as a non-linear programming model, is added to the group decision-making structure in order to elicit the final weights from the comparison matrices. Finally, the structure developed in this paper is validated through a case study, and a sensitivity analysis is performed to check the response of the model to changing conditions.

Keywords: Analytic network process, Fuzzy sets theory, Nonlinear programming, Project selection.

504 Effect of Physical Contact (Hand-Holding) on Heart Rate Variability

Authors: T. Pishbin, S.M.P. Firoozabadi, N. Jafarnia Dabanloo, F. Mohammadi, S. Koozehgari

Abstract:

The heart's electric field can be measured anywhere on the surface of the body (ECG). When individuals touch, one person's ECG signal can be registered in the other person's EEG and elsewhere on their body. The aim of this study was to test the hypothesis that physical contact (hand-holding) between two persons changes their heart rate variability. Subjects were sixteen healthy females (age 20-26), divided into eight pairs; each pair consisted of two friends who passed Sternberg's intimacy test. The ECG of both subjects in each pair was acquired for 5 minutes before hand-holding (control condition) and for 5 minutes while they held hands (experimental condition). Heart rate variability signals were then extracted from the subjects' ECGs and analyzed in the linear feature space (time and frequency domains) and the nonlinear feature space. Considering the results, we conclude that physical contact (hand-holding between two friends) increases parasympathetic activity, as indicated by increases in SD1, SD1/SD2, HF and MF power (p<0.05), and decreases sympathetic activity, as indicated by decreases in LF power (p<0.01) and the LF/HF ratio (p<0.05).
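
For reference, the nonlinear indices SD1 and SD2 come from the Poincaré plot of successive RR intervals and have standard closed forms; the sketch below uses synthetic RR data, not the study's recordings.

```python
import numpy as np

def poincare_sd(rr):
    # SD1/SD2 of the Poincaré plot from successive RR intervals (ms).
    rr = np.asarray(rr, dtype=float)
    diff = np.diff(rr)
    sd1 = np.sqrt(0.5 * np.var(diff))                    # short-term variability
    sd2 = np.sqrt(2 * np.var(rr) - 0.5 * np.var(diff))   # long-term variability
    return sd1, sd2

rng = np.random.default_rng(2)
rr = 800 + np.cumsum(rng.normal(0, 5, 300))  # synthetic RR interval series
sd1, sd2 = poincare_sd(rr)
print(f"SD1={sd1:.1f} ms  SD2={sd2:.1f} ms  SD1/SD2={sd1 / sd2:.2f}")
```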

Keywords: Autonomic nervous system (ANS), Hand-holding, Heart rate variability (HRV), Power spectral density analysis.

503 Phosphine Mortality Estimation for Simulation of Controlling Pest of Stored Grain: Lesser Grain Borer (Rhyzopertha dominica)

Authors: Mingren Shi, Michael Renton

Abstract:

There is a world-wide need for the development of sustainable management strategies to control pest infestation and the development of phosphine (PH3) resistance in the lesser grain borer (Rhyzopertha dominica). Computer simulation models can provide a relatively fast, safe and inexpensive way to weigh the merits of various management options. However, the usefulness of simulation models relies on the accurate estimation of important model parameters, such as mortality. Concentration and time of exposure are both important in determining mortality in response to a toxic agent. Recent research indicated the existence of two resistance phenotypes of R. dominica in Australia, weak and strong, and revealed that the presence of resistance alleles at two loci confers strong resistance, thus motivating the construction of a two-locus model of resistance. Experimental data sets on purified pest strains, each corresponding to a single genotype of our two-locus model, were also available. Hence it became possible to explicitly include mortalities of the different genotypes in the model. In this paper we describe how we used two generalized linear models (GLM), probit and logistic, to fit the available experimental data sets. We used a direct algebraic approach, the generalized inverse matrix technique, rather than traditional maximum likelihood estimation, to estimate the model parameters. The results show that both probit and logistic models fit the data sets well, but the former is much better in terms of smaller least-squares (numerical) errors. Meanwhile, the generalized inverse matrix technique achieved accuracy similar to maximum likelihood estimation, but is less time consuming and less computationally demanding.
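
In the spirit of the direct algebraic approach described above, a dose-time mortality model can be linearized through the probit or logit link and solved with a pseudoinverse; the sketch below uses invented data and an assumed model form p = g(a + b·log C + c·log t), not the authors' exact formulation.

```python
import numpy as np
from scipy.stats import norm

def fit_mortality(conc, time, p, link="probit"):
    # Linearize p = g(a + b*log C + c*log t) through the inverse link,
    # then solve the least-squares system with a generalized inverse.
    p = np.clip(p, 1e-4, 1 - 1e-4)
    z = norm.ppf(p) if link == "probit" else np.log(p / (1 - p))  # logit
    X = np.column_stack([np.ones_like(p), np.log(conc), np.log(time)])
    return np.linalg.pinv(X) @ z  # coefficients (a, b, c)

conc = np.array([0.1, 0.1, 0.5, 0.5, 1.0, 1.0])   # invented concentrations
time = np.array([24., 48., 24., 48., 24., 48.])   # invented exposure times
p    = np.array([0.05, 0.20, 0.30, 0.70, 0.60, 0.95])  # observed mortality
print(fit_mortality(conc, time, p, link="probit"))
```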

Keywords: mortality estimation, probit models, logistic model, generalized inverse matrix approach, pest control simulation

502 Lowering Error Floors by Concatenation of Low-Density Parity-Check and Array Code

Authors: Cinna Soltanpur, Mohammad Ghamari, Behzad Momahed Heravi, Fatemeh Zare

Abstract:

Low-density parity-check (LDPC) codes have been shown to deliver capacity-approaching performance; however, problematic graphical structures (e.g., trapping sets) in the Tanner graph of some LDPC codes can cause high error floors in bit-error-ratio (BER) performance under the conventional sum-product algorithm (SPA). This paper presents a serial concatenation scheme to avoid the trapping sets and to lower the error floors of LDPC codes. The outer code in the proposed concatenation is the LDPC code, and the inner code is a high-rate array code. The approach applies an iterative hybrid process between BCJR decoding for the array code and the SPA for the LDPC code, together with bit-pinning and bit-flipping techniques. The Margulis code of size (2640, 1320) has been used for the simulation, and it is shown that the proposed concatenation and decoding scheme can considerably improve the error floor performance with minimal rate loss.

Keywords: Concatenated coding, low-density parity-check codes, array code, error floors.

501 Performance Comparison of Particle Swarm Optimization with Traditional Clustering Algorithms used in Self-Organizing Map

Authors: Anurag Sharma, Christian W. Omlin

Abstract:

The self-organizing map (SOM) is a well-known data reduction technique used in data mining. It can reveal structure in data sets through data visualization that is otherwise hard to detect from raw data alone. However, interpretation through visual inspection is prone to errors and can be very tedious. There are several techniques for the automatic detection of clusters among the code vectors found by SOM, but they generally do not take into account the distribution of the code vectors; this may lead to unsatisfactory clustering and poor definition of cluster boundaries, particularly where the density of data points is low. In this paper, we propose the use of an adaptive heuristic particle swarm optimization (PSO) algorithm for finding cluster boundaries directly from the code vectors obtained from SOM. The application of our method to several standard data sets demonstrates its feasibility. The PSO algorithm utilizes the so-called U-matrix of the SOM to determine cluster boundaries; the results of this novel automatic method compare very favorably with boundary detection by the traditional algorithms, namely k-means and hierarchical approaches, which are normally used to interpret the output of SOM.
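
For context, the U-matrix mentioned above assigns to each SOM unit the mean distance from its code vector to those of its grid neighbors, so high ridges suggest cluster boundaries. A small sketch for a rectangular map follows; the grid shape and the random codebook are illustrative only.

```python
import numpy as np

def u_matrix(codebook, rows, cols):
    # codebook: (rows*cols, d) code vectors laid out row-major on the grid.
    U = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            v = codebook[r * cols + c]
            nbrs = [codebook[rr * cols + cc]
                    for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                    if 0 <= rr < rows and 0 <= cc < cols]
            U[r, c] = np.mean([np.linalg.norm(v - n) for n in nbrs])
    return U  # high values mark likely cluster boundaries

rng = np.random.default_rng(3)
cb = rng.normal(size=(6 * 6, 4))  # stand-in for a trained SOM codebook
print(u_matrix(cb, 6, 6).round(2))
```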

Keywords: cluster boundaries, clustering, code vectors, data mining, particle swarm optimization, self-organizing maps, U-matrix.

500 A Novel Modified Adaptive Fuzzy Inference Engine and Its Application to Pattern Classification

Authors: J. Hossen, A. Rahman, K. Samsudin, F. Rokhani, S. Sayeed, R. Hasan

Abstract:

The neuro-fuzzy hybridization scheme has attracted research interest in pattern classification over the past decade. The present paper proposes a novel Modified Adaptive Fuzzy Inference Engine (MAFIE) for pattern classification. A modified Apriori algorithm is utilized to derive a minimal set of decision rules from input-output data sets. A TSK-type fuzzy inference system is constructed through the automatic generation of membership functions and rules by fuzzy c-means clustering and the Apriori algorithm, respectively. The generated adaptive fuzzy inference engine is tuned by a least-squares fit and a conjugate gradient descent algorithm toward better performance with a minimal set of rules. The proposed MAFIE is able to reduce the number of rules, which otherwise increases exponentially as more input variables are involved. The performance of the proposed MAFIE is compared with existing pattern classification schemes on Fisher's Iris and the Wisconsin breast cancer data sets and is shown to be very competitive.
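
The fuzzy c-means step can be sketched with the standard alternating membership/center updates (fuzzifier m = 2, random initialization); this is only the clustering stage, not the full MAFIE pipeline.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=50, seed=0):
    # Alternate the classic FCM updates: memberships from distances,
    # then centers as membership-weighted means.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), c, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))       # unnormalized memberships
        u /= u.sum(axis=1, keepdims=True)    # each row sums to 1
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return centers, u

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.5, (60, 2)), rng.normal(3, 0.5, (60, 2))])
centers, u = fuzzy_c_means(X, c=2)
print(centers.round(2))  # should land near (0, 0) and (3, 3)
```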

Keywords: Apriori algorithm, Fuzzy C-means, MAFIE, TSK

499 A Study about the Distribution of the Spanning Ratios of Yao Graphs

Authors: Maryam Hsaini, Mostafa Nouri-Baygi

Abstract:

A critical problem in wireless sensor networks is the limited battery and memory of the nodes. Therefore, each node in the network can maintain links to only a subset of its neighbors. This increases battery usage in the network, because each packet may take more hops to reach its destination. To tackle these problems, spanner graphs are used. Since each node has a small degree in a spanner graph, and distances in the graph are not much greater than the actual geographical distances, spanner graphs are suitable candidates for the topology of a wireless sensor network. In this paper, we study Yao graphs and their behavior for randomly selected sets of points. We generate several random point sets and compare the properties of their Yao graphs with those of the complete graph. Based on our data sets, we obtain several charts demonstrating how Yao graphs behave for randomly chosen point sets. As the results show, the stretch factor of a Yao graph follows a normal distribution. Furthermore, the stretch factor is on average far less than the worst-case stretch factor proved for Yao graphs in previous results. Finally, we apply the Yao graph to a realistic point set and study its stretch factor in a real-world setting.
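
For concreteness, a Yao graph with k cones keeps, for each point, an edge to the nearest neighbor inside each of k equal angular sectors; the stretch factor then compares shortest-path distances in this graph against the Euclidean distances. A brute-force construction sketch (k = 6, random points) follows.

```python
import numpy as np

def yao_graph(points, k=6):
    # For each point, keep an edge to the nearest neighbor in each of
    # k equal cones around it; brute force, O(n^2).
    n = len(points)
    edges = set()
    for i in range(n):
        best = [None] * k  # (distance, neighbor index) per cone
        for j in range(n):
            if i == j:
                continue
            d = points[j] - points[i]
            cone = int((np.arctan2(d[1], d[0]) % (2 * np.pi)) // (2 * np.pi / k))
            dist = np.hypot(*d)
            if best[cone] is None or dist < best[cone][0]:
                best[cone] = (dist, j)
        edges |= {(min(i, b[1]), max(i, b[1])) for b in best if b}
    return edges

pts = np.random.default_rng(5).random((30, 2))
print(len(yao_graph(pts)))  # far fewer edges than the complete graph's 435
```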

Keywords: Wireless sensor network, spanner graph, Yao Graph.

498 Primer Design with Specific PCR Product using Particle Swarm Optimization

Authors: Cheng-Hong Yang, Yu-Huei Cheng, Hsueh-Wei Chang, Li-Yeh Chuang

Abstract:

Before performing polymerase chain reactions (PCR), a feasible primer set is required, and many primer design methods have been proposed for this task. However, the majority of these methods require a relatively long time to obtain an optimal solution, since large quantities of template DNA need to be analyzed, and the designed primer sets usually do not guarantee a specific PCR product. In recent years, evolutionary computation has been applied to PCR primer design and has yielded promising results. In this paper, a particle swarm optimization (PSO) algorithm is proposed to solve primer design problems while providing a specific product for PCR experiments. A test set of the gene CYP1A1, associated with a heightened lung cancer risk, was analyzed, and accuracy and running time were compared with those of a genetic algorithm (GA) and a memetic algorithm (MA). The comparison indicated that the proposed PSO method finds optimal or near-optimal primer sets and effective PCR products in a relatively short time.
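
Fitness evaluation in such primer design methods typically scores constraints such as GC content and melting temperature; the toy scorer below uses the Wallace rule (Tm ≈ 2(A+T) + 4(G+C)) and invented thresholds as a stand-in for the paper's actual fitness function.

```python
def primer_score(seq, tm_range=(52, 60), gc_range=(0.4, 0.6)):
    # Simple constraint score: Wallace-rule melting temperature and GC%.
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    tm = 2 * at + 4 * gc          # Wallace rule, in degrees Celsius
    gc_frac = gc / len(seq)
    penalty = 0
    if not tm_range[0] <= tm <= tm_range[1]:
        penalty += abs(tm - sum(tm_range) / 2)
    if not gc_range[0] <= gc_frac <= gc_range[1]:
        penalty += 10
    return penalty  # 0 = feasible; a PSO would minimize this over candidates

print(primer_score("ATGCGTACGTTAGCCTAGGA"))  # 0 for this 20-mer
```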

Keywords: polymerase chain reaction (PCR), primer design, evolutionary computation, particle swarm optimization (PSO).

497 Evolution of Performance Measurement Methods in Conditions of Uncertainty: The Implementation of Fuzzy Sets in Performance Measurement

Authors: E. A. Tkachenko, E. M. Rogova, V. V. Klimov

Abstract:

One of the basic issues of development management is performance measurement, a prerequisite for identifying the achievement of development objectives. The aim of our research is to develop an improved model for assessing a company's development results. The model should take into account the cyclical nature of development and the high degree of uncertainty involved in numerous management tasks. Our hypotheses may be formulated as follows. Hypothesis 1: the cycle of a company's development may be studied from the standpoint of a project cycle, using the methods and tools of project analysis. Hypothesis 2: the problem of uncertainty when justifying managerial decisions within the framework of a company's development cycle can be solved through the mathematical apparatus of fuzzy logic. A reasoned justification of the validity of these hypotheses is given in the article. The fuzzy logic toolkit is applied to the case of a technology shift within an enterprise. It is shown that some restrictions in performance measurement inherent in conventional methods can be eliminated by implementing the fuzzy logic apparatus in performance measurement models.

Keywords: Fuzzy logic, fuzzy sets, performance measurement, project analysis.

496 Examination of Flood Runoff Reproductivity for Different Rainfall Sources in Central Vietnam

Authors: Do Hoai Nam, Keiko Udo, Akira Mano

Abstract:

This paper presents the combination of different precipitation data sets with a distributed hydrological model in order to examine the reproducibility of flood runoff in catchments with scattered observations. The precipitation data sets were obtained from rain gauge observations, a satellite-based estimate (TRMM), and a numerical weather prediction (NWP) model, and were coupled with the super tank model. The case study was conducted in three basins (small, medium, and large) located in Central Vietnam. Hydrographs calculated from ground-observed rainfall showed the best fit to measured stream flow, while those obtained from TRMM and NWP showed high uncertainty in peak discharges. However, hydrographs calculated using the adjusted rain field represent a promising alternative for applying TRMM and NWP to flood modeling in catchments with scattered observations, especially for extending the forecast lead time.

Keywords: Flood forecast, rainfall-runoff model, satellite rainfall estimate, numerical weather prediction, quantitative precipitation forecasting.

495 An Intelligent Combined Method Based on Power Spectral Density, Decision Trees and Fuzzy Logic for Hydraulic Pumps Fault Diagnosis

Authors: Kaveh Mollazade, Hojat Ahmadi, Mahmoud Omid, Reza Alimardani

Abstract:

Recently, the issue of machine condition monitoring and fault diagnosis as part of a maintenance system has become global, due to the potential gains from reduced maintenance costs, improved productivity and increased machine availability. The aim of this work is to investigate the effectiveness of a new fault diagnosis method based on the power spectral density (PSD) of vibration signals in combination with decision trees and a fuzzy inference system (FIS). To this end, a series of studies was conducted on an external gear hydraulic pump. After a test under normal conditions, a number of machine defect conditions were introduced at three pump speeds (1000, 1500, and 2000 rpm), corresponding to (i) journal bearing with inner face wear (BIFW), (ii) gear with tooth face wear (GTFW), and (iii) journal bearing with inner face wear plus gear with tooth face wear (B&GW). Features of the PSD values of the vibration signal were extracted using descriptive statistical parameters. The J48 algorithm was used as a feature selection procedure to select pertinent features from the data set, and its output was employed to produce the crisp if-then rules and membership function sets. The structure of the FIS classifier was then defined based on the crisp sets. The proposed PSD-J48-FIS model was evaluated on the data sets obtained from the pump's vibration signals. Results showed total classification accuracies of 96.42%, 100%, and 96.42% for the 1000, 1500, and 2000 rpm conditions, respectively, indicating that the combined PSD-J48-FIS model has potential for the fault diagnosis of hydraulic pumps.
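
The PSD feature stage can be sketched with Welch's method followed by descriptive statistics, which would then feed the J48 selection and the FIS; the sampling rate, window length and chosen statistics below are assumptions.

```python
import numpy as np
from scipy.signal import welch

def psd_features(vibration, fs=10_000):
    # Welch power spectral density, summarized by descriptive statistics.
    f, pxx = welch(vibration, fs=fs, nperseg=1024)
    return {
        "mean": pxx.mean(),
        "std": pxx.std(),
        "peak_freq": f[np.argmax(pxx)],
        "total_power": pxx.sum() * (f[1] - f[0]),  # rectangle-rule integral
    }

rng = np.random.default_rng(6)
t = np.arange(0, 1, 1 / 10_000)
# Synthetic vibration: a 180 Hz tone (stand-in for a fault frequency) + noise.
signal = np.sin(2 * np.pi * 180 * t) + 0.3 * rng.normal(size=t.size)
print(psd_features(signal))
```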

Keywords: Power Spectral Density, Machine Condition Monitoring, Hydraulic Pump, Fuzzy Logic.

494 Development of an Automated Quality Management System to Control District Heating

Authors: Nigina Toktasynova, Sholpan Sagyndykova, Zhanat Kenzhebayeva, Maksat Kalimoldayev, Mariya Ishimova, Irbulat Utepbergenov

Abstract:

To solve the problems of district heating control, we investigated the management system of a heating enterprise, including strategic planning based on the balanced scorecard (BSC), quality management in accordance with the Quality Management System (QMS) standard ISO 9001, and analysis of the system based on expert judgment using fuzzy inference. To carry out this work we used the theory of fuzzy sets, the QMS in accordance with ISO 9001, the BSC, the IDEF0 notation for modeling business processes, and the Matlab simulation tools and LabVIEW graphical programming environment. The results of the work are as follows: we determined possibilities for improving the management of a heat-supply plant based on the QMS; after justification and adaptation, a software tool was used to automate a series of management functions, to reduce resource consumption and to keep the system up to date; and an application for analyzing the QMS based on fuzzy inference was created, with a novel organization of communication between the software and the application that enables the analysis of relevant enterprise management data.

Keywords: Balanced scorecard, heat supply, quality management system, the theory of fuzzy sets.

493 Determination and Comparison of Fabric Pills Distribution Using Image Processing and Spatial Data Analysis Tools

Authors: Lenka Techniková, Maroš Tunák, Jiří Janáček

Abstract:

This work deals with the determination and comparison of pill patterns in two sets of fabric samples that differ in the way the pills were created. The first set contains fabric samples with pills created by simulation on a Martindale abrasion machine, while the pills in the second set originated during normal wear and maintenance. The goal of the study is to determine whether the pattern of fabric pills created by simulation is the same as the pattern of naturally occurring pills. The system for determining and comparing the pills is based on image processing and spatial data analysis tools. First, a 3D reconstruction of the fabric surfaces with the pills is obtained using a gradient fields method, which creates a 3D fabric surface from a set of four images. Thereafter, the pills are detected in the 3D fabric surfaces using image processing tools in the MATLAB software. The determination and comparison of the pill patterns of the two sets of fabric samples is based on spatial data analysis using tools in the R software.

Keywords: 3D reconstruction of the surface, image analysis tools, distribution of the pills, spatial data analysis tools.

492 Optimal Rest Interval between Sets in Robot-Based Upper-Arm Rehabilitation

Authors: Virgil Miranda, Gissele Mosqueda, Pablo Delgado, Yimesker Yihun

Abstract:

Muscular fatigue affects the muscle activation needed to produce the desired clinical outcome, so integrating optimal muscle relaxation periods into health care rehabilitation protocols is important for maximizing the efficiency of therapy. In this study, four muscle relaxation periods (30, 60, 90 and 120 seconds) were investigated for their effectiveness in producing consistent activation of the biceps brachii between sets of an elbow flexion and extension task in a sample of 10 subjects with no disabilities. The same resting periods were then utilized in a controlled exoskeleton-based exercise with a sample of 5 subjects and showed similar results. On average, the muscle activity of the biceps brachii decreased by 0.3% when rested for 30 seconds, and it increased by 1.25%, 0.76% and 0.82% with muscle relaxation periods of 60, 90 and 120 seconds, respectively. The preliminary results suggest that a muscle relaxation period of about 60 seconds is needed for optimal continuous muscle activation within rehabilitation regimens. Robot-based rehabilitation is well suited to producing repetitive tasks at the right intensity, and knowing the optimal resting period will make the automation more effective.

Keywords: Rest intervals, muscle biceps brachii, robot rehabilitation, muscle fatigue.

491 Spatio-Temporal Data Mining with Association Rules for Lake Van

Authors: T. Aydin, M. F. Alaeddinoglu

Abstract:

Throughout history, people have made estimates and inferences about the future by using their past experiences. Developing information technologies and improvements in database management systems make it possible to extract useful information from the knowledge at hand for strategic decisions, and different methods have been developed for this purpose. Data mining by association rule learning is one such method. The Apriori algorithm, one of the well-known association rule learning algorithms, is not commonly used on spatio-temporal data sets. However, it is possible to embed time and space features into the data sets and make Apriori a suitable data mining technique for learning spatio-temporal association rules. Lake Van, the largest lake in Turkey, is a closed basin, which causes the volume of the lake to increase or decrease with changes in the amount of water it holds. In this study, evaporation, humidity, lake altitude, rainfall and temperature parameters recorded in the Lake Van region over the years are processed by the Apriori algorithm, and a spatio-temporal data mining application is developed to identify overflows and newly formed soil regions (underflows) occurring in the coastal parts of Lake Van. Identifying the possible reasons for overflows and underflows may alert experts to take precautions and make the necessary investments.
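
A minimal level-wise Apriori pass over discretized spatio-temporal events (items such as "rain=high" per region and period) can be sketched as follows; the items and the support threshold are invented for illustration.

```python
from itertools import combinations

def apriori(transactions, min_support=0.5):
    # Classic level-wise frequent-itemset mining with subset pruning.
    n = len(transactions)
    freq = {}
    k_sets = {frozenset([i]) for t in transactions for i in t}
    while k_sets:
        counts = {s: sum(s <= t for t in transactions) for s in k_sets}
        current = {s: c / n for s, c in counts.items() if c / n >= min_support}
        freq.update(current)
        items = sorted({i for s in current for i in s})
        size = len(next(iter(current))) + 1 if current else 0
        k_sets = {frozenset(c) for c in combinations(items, size)
                  if all(frozenset(sub) in current
                         for sub in combinations(c, size - 1))} if current else set()
    return freq

# Each transaction: discretized observations for one period at one region.
T = [{"rain=high", "evap=low", "level=up"},
     {"rain=high", "level=up"},
     {"rain=low", "evap=high", "level=down"},
     {"rain=high", "evap=low", "level=up"}]
for s, sup in sorted(apriori(T, 0.5).items(), key=lambda kv: -kv[1]):
    print(set(s), round(sup, 2))
```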

Keywords: Apriori algorithm, association rules, data mining, spatio-temporal data.

490 Classifier Based Text Mining for Neural Network

Authors: M. Govindarajan, R. M. Chandrasekaran

Abstract:

Applying knowledge discovery techniques to unstructured text is termed knowledge discovery in text (KDT), text data mining, or text mining. In neural networks that address classification problems, the training set, the testing set and the learning rate are key elements: the collections of input/output patterns used to train the network and to assess its performance, and the rate at which weights are adjusted. This paper describes a proposed back propagation neural network classifier that performs cross validation on the original neural network, in order to improve classification accuracy and reduce training time. The feasibility and benefits of the proposed approach are demonstrated on five data sets: contact-lenses, cpu, weather symbolic, weather, and labor-nega-data. It is shown that, compared to the existing neural network, training time is more than 10 times faster when the data set is larger than cpu or the network has many hidden units, while accuracy ('percent correct') was the same for all data sets except contact-lenses, the only one with missing attributes. For contact-lenses, the accuracy of the proposed neural network was on average around 0.3% lower than that of the original neural network. The algorithm is independent of specific data sets, so many of its ideas and solutions can be transferred to other classifier paradigms.

Keywords: Back propagation, classification accuracy, text mining, time complexity.

489 The Effect of Strength Training and Consumption of Glutamine Supplement on GH/IGF1 Axis

Authors: Alireza Barari

Abstract:

Physical activity and diet are factors that influence the body's structure. The purpose of this study was to compare the effects of four weeks of resistance training and glutamine supplement consumption on the growth hormone (GH) / insulin-like growth factor 1 (IGF-1) axis. Forty amateur male bodybuilders participated in this study and were randomly divided into four equal groups: resistance (R), glutamine (G), resistance with glutamine (RG), and control (C). The R group was assigned a four-week resistance training program (three times per week, three sets of 10 exercises with 6-10 repetitions at 80-95% of one repetition maximum (1RM), with 120 seconds of rest between sets); the G group consumed L-glutamine (0.1 g/kg/day); the RG group performed the resistance training while consuming L-glutamine; and the C group continued their normal lifestyle without exercise training. GH, IGF-1 and IGFBP-III plasma levels were measured before and after the protocol. One-way ANOVA indicated significant changes in GH, IGF-1 and IGFBP-III between the four groups, and the Tukey test demonstrated significant increases in GH, IGF-1 and IGFBP-III plasma levels in the R and RG groups. Based on these findings, we conclude that resistance training at 80-95% 1RM intensity, with or without oral glutamine, significantly increases the secretion of GH, IGF-1 and IGFBP-III in amateur males, but the addition of oral glutamine to the exercise program did not produce a significant difference in GH, IGF-1 or IGFBP-III.

Keywords: Strength, glutamine, growth hormone, insulin-like growth factor 1.

488 The Spanning Laceability of k-ary n-cubes when k is Even

Authors: Yuan-Kang Shih, Shu-Li Chang, Shin-Shin Kao

Abstract:

The k-ary n-cube Q_n^k has been shown to be an alternative to the hypercube family. For any even integer k ≥ 4 and any integer n ≥ 2, Q_n^k is a bipartite graph. In this paper, we prove that given any pair of vertices, w and b, from different partite sets of Q_n^k, there exist 2n internally disjoint paths between w and b, denoted by {P_i | 0 ≤ i ≤ 2n-1}, such that the union of the 2n paths covers all vertices of Q_n^k. The result is optimal since each vertex of Q_n^k has exactly 2n neighbors.
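
For concreteness, Q_n^k has k^n vertices, the k-ary strings of length n, with edges between strings differing by ±1 (mod k) in exactly one coordinate; the quick check below confirms the 2n-neighbor count for an even k ≥ 4.

```python
from itertools import product

def kary_ncube_neighbors(v, k):
    # Neighbors of vertex v (a tuple of n digits mod k) in Q_n^k.
    out = []
    for i in range(len(v)):
        for delta in (1, -1):
            w = list(v)
            w[i] = (w[i] + delta) % k
            out.append(tuple(w))
    return out

k, n = 4, 2
vertices = list(product(range(k), repeat=n))
assert all(len(set(kary_ncube_neighbors(v, k))) == 2 * n for v in vertices)
print(f"Q_{n}^{k}: {len(vertices)} vertices, each with {2 * n} neighbors")
```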

Keywords: container, Hamiltonian, k-ary n-cube, m*-connected.

487 Estimating Bridge Deterioration for Small Data Sets Using Regression and Markov Models

Authors: Yina F. Muñoz, Alexander Paz, Hanns De La Fuente-Mella, Joaquin V. Fariña, Guilherme M. Sales

Abstract:

The primary approaches for estimating bridge deterioration use Markov-chain models and regression analysis. Traditional Markov models have problems estimating the required transition probabilities when the sample size is small; because reliable bridge data often have not been collected over long periods, large data sets may not be available. This study presents an important change to the traditional approach by using the Small Data Method to estimate transition probabilities. The results illustrate that the Small Data Method and the traditional approach provide similar estimates; however, the former provides more conservative results, that is, slightly lower bridge condition ratings than the traditional approach. Considering that bridges are critical infrastructure, the Small Data Method, which uses more information and provides more conservative estimates, may be more appropriate when the available sample size is small. In addition, regression analysis was used to calculate bridge deterioration: condition ratings were determined for bridge groups, and the best regression model was selected for each group. The results obtained were very similar to those from the Markov chains; however, more data are desirable for better results.
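
The Markov side of such a comparison estimates a transition matrix from consecutive inspection pairs and propagates a condition-state distribution through it; the sketch below uses an invented four-state data set, not the study's bridge records.

```python
import numpy as np

def estimate_transitions(pairs, n_states):
    # pairs: (state_now, state_next) from consecutive inspections.
    P = np.zeros((n_states, n_states))
    for a, b in pairs:
        P[a, b] += 1
    return P / P.sum(axis=1, keepdims=True)  # row-normalize to probabilities

def forecast(P, start_state, years):
    # Distribution over condition states after `years` one-year steps.
    dist = np.eye(len(P))[start_state]
    return dist @ np.linalg.matrix_power(P, years)

# Hypothetical inspection pairs for states 0 (best) .. 3 (worst).
pairs = [(0, 0)] * 8 + [(0, 1)] * 2 + [(1, 1)] * 7 + [(1, 2)] * 3 \
      + [(2, 2)] * 6 + [(2, 3)] * 4 + [(3, 3)] * 10
P = estimate_transitions(pairs, 4)
print(forecast(P, start_state=0, years=10).round(3))
```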

Keywords: Concrete bridges, deterioration, Markov chains, probability matrix.

486 Reducing the Need for Multi-Input Multi-Output in Multi-Beam Base Transceiver Station Antennas Using Orthogonally-Polarized Feeds with an Arbitrary Number of Ports

Authors: Mohamed Sanad, Noha Hassan

Abstract:

A multi-beam BTS (base transceiver station) antenna has been developed using dual parabolic cylindrical reflectors. The ±45° polarization feeds used in spatial-diversity MIMO (multi-input multi-output) can be replaced by single-port orthogonally polarized feeds. Then, with two sets of beams generated one above the other, the ±45° polarization ports of any conventional transceiver can be connected to two of these beam sets. Thus, with two-port transceivers, the system becomes equivalent to 4x4 MIMO instead of 2x2. Radio frequency (RF) power combiners/splitters can also be used to combine the multiple beams into a single beam or any arbitrary number of beams/ports. The gain of the combined beam will exceed 20-24 dBi, instead of the 17-18 dBi of conventional wide-beam antennas, and it will remain high over the whole beam angle. Moreover, users will always be close to the peak gain of the combined beam regardless of their location within the combined beam angle. The frequency bands of all the combined beams are adjusted so that they share the same frequency band. Different configurations of RF power splitters/combiners can provide any arbitrary number of beams/ports according to the requirements of any existing base station configuration.

Keywords: 5G mobile communications, BTS antennas, MIMO, orthogonally polarized antennas, multi-beam antennas.

485 An Edge Detection and Filtering Mechanism of Two Dimensional Digital Objects Based on Fuzzy Inference

Authors: Ayman A. Aly, Abdallah A. Alshnnaway

Abstract:

The general idea behind the filter is to average a pixel using other pixel values from its neighborhood while simultaneously taking care of important image structures such as edges. The main concern of the proposed filter is to distinguish between variations in the captured digital image due to noise and those due to image structure. Edges give an image its appearance of depth and sharpness; a loss of edges makes the image appear blurred or unfocused. However, noise smoothing and edge enhancement are traditionally conflicting tasks: since most noise filtering behaves like a low-pass filter, blurring of edges and loss of detail seem a natural consequence, and techniques to remedy this inherent conflict often generate new noise during enhancement. In this work a new fuzzy filter is presented for the reduction of additive noise in images. The filter consists of three stages: (1) fuzzy sets are defined in the input space to compute a fuzzy derivative in eight different directions; (2) a set of IF-THEN rules is constructed to perform fuzzy smoothing according to the contributions of neighboring pixel values; and (3) fuzzy sets are defined in the output space to obtain the filtered and edge-preserved image. Experimental results show the feasibility of the proposed approach on two-dimensional objects.

Keywords: Additive noise, edge preserving filtering, fuzzy image filtering, noise reduction, two dimensional mechanical images.

484 Eliciting and Confirming Data, Information, Knowledge and Wisdom in a Specialist Health Care Setting: The WICKED Method

Authors: S. Impey, D. Berry, S. Furtado, M. Galvin, L. Grogan, O. Hardiman, L. Hederman, M. Heverin, V. Wade, L. Douris, D. O'Sullivan, G. Stephens

Abstract:

Healthcare is a knowledge-rich environment. This knowledge, while valuable, is not always accessible outside the borders of individual clinics. This research aims to address part of this problem (at a study site) by constructing a maximal data set (knowledge artefact) for motor neurone disease (MND). This data set is proposed as an initial knowledge base for a concurrent project to develop an MND patient data platform. It represents the domain knowledge at the study site for the duration of the research (12 months). A knowledge elicitation method was also developed from the lessons learned during this process - the WICKED method. WICKED is an anagram of the words: eliciting and confirming data, information, knowledge, wisdom. But it is also a reference to the concept of wicked problems, which are complex and challenging, as is eliciting expert knowledge. The method was evaluated at a second site, and benefits and limitations were noted. Benefits include that the method provided a systematic way to manage data, information, knowledge and wisdom (DIKW) from various sources, including healthcare specialists and existing data sets. Limitations surrounded the time required and how the data set produced only represents DIKW known during the research period. Future work is underway to address these limitations.

Keywords: Healthcare, knowledge acquisition, maximal data sets, action design science.
