Search results for: Rough Sets.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 698

128 Effects of Hidden Unit Sizes and Autoregressive Features in Mental Task Classification

Authors: Ramaswamy Palaniappan, Nai-Jen Huan

Abstract:

Classification of electroencephalogram (EEG) signals extracted during mental tasks is a technique that is actively pursued for Brain Computer Interface (BCI) designs. In this paper, we compared the classification performances of univariate autoregressive (AR) and multivariate autoregressive (MAR) models for representing EEG signals that were extracted during different mental tasks. A Multilayer Perceptron (MLP) neural network (NN) trained by the backpropagation (BP) algorithm was used to classify these features into the different categories representing the mental tasks. Classification performances were also compared across different mental task combinations and two sets of hidden units (HU): 2 to 10 HU in steps of 2, and 20 to 100 HU in steps of 20. Five different mental tasks from 4 subjects were used in the experimental study, and combinations of 2 different mental tasks were studied for each subject. Three different feature extraction methods of 6th order were used to extract features from these EEG signals: AR coefficients computed with Burg's algorithm (ARBG), AR coefficients computed with a stepwise least squares algorithm (ARLS) and MAR coefficients computed with a stepwise least squares algorithm. The best results were obtained with 20 to 100 HU using ARBG. It is concluded that i) it is important to choose suitable mental tasks for different individuals for a successful BCI design, ii) higher HU counts are more suitable and iii) ARBG is the most suitable feature extraction method.
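
Since the abstract names Burg's algorithm for the order-6 AR features and an MLP classifier, a minimal sketch of that pipeline may help. The hand-rolled Burg recursion below is the standard one; the data are synthetic placeholders, and scikit-learn's MLPClassifier (trained with Adam rather than plain backpropagation) stands in for the paper's BP-trained network.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def arburg(x, order):
    """AR coefficients of a signal via Burg's method (the ARBG features)."""
    f = np.asarray(x, dtype=float).copy()   # forward prediction error
    b = f.copy()                            # backward prediction error
    a = np.array([1.0])                     # AR polynomial, a[0] = 1
    for _ in range(order):
        ef, eb = f[1:], b[:-1]
        k = -2.0 * ef.dot(eb) / (ef.dot(ef) + eb.dot(eb))  # reflection coeff.
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        f, b = ef + k * eb, eb + k * ef
    return a[1:]                            # 6 features per segment for order=6

# Hypothetical EEG data: 40 single-channel windows of 250 samples each.
rng = np.random.default_rng(0)
segments = rng.standard_normal((40, 250))
labels = rng.integers(0, 2, 40)             # two mental tasks
X = np.array([arburg(s, 6) for s in segments])
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000).fit(X, labels)
```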

Keywords: Autoregressive, Brain-Computer Interface, Electroencephalogram, Neural Network.

127 Rotation Invariant Face Recognition Based on Hybrid LPT/DCT Features

Authors: Rehab F. Abdel-Kader, Rabab M. Ramadan, Rawya Y. Rizk

Abstract:

The recognition of human faces, especially those with different orientations, is a challenging and important problem in image analysis and classification. This paper proposes an effective scheme for rotation-invariant face recognition using combined Log-Polar Transform and Discrete Cosine Transform features. The rotation-invariant feature extraction for a given face image involves applying the log-polar transform to eliminate the rotation effect and to produce a row-shifted log-polar image. The discrete cosine transform is then applied to eliminate the row-shift effect and to generate a low-dimensional feature vector. A PSO-based feature selection algorithm is utilized to search the feature vector space for the optimal feature subset. Evolution is driven by a fitness function defined in terms of maximizing the between-class separation (scatter index). Experimental results based on the ORL face database, using test data sets of images with different orientations, show that the proposed system outperforms other face recognition methods. The overall recognition rate for the rotated test images is 97%, demonstrating that the extracted feature vector is an effective rotation-invariant feature set with a minimal number of selected features.
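
The two-stage trick (log-polar resampling turns a rotation into a row shift; a DCT then damps that shift) can be sketched compactly. The resampling grid, image size and number of retained DCT coefficients below are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.fft import dctn

def log_polar(img, n_rho=64, n_theta=64):
    """Resample an image onto a log-polar grid; rotation about the centre
    becomes a circular shift of the rows (angle axis)."""
    cy, cx = (np.asarray(img.shape) - 1) / 2.0
    rho = np.logspace(0, np.log10(min(cx, cy)), n_rho)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    r, t = np.meshgrid(rho, theta)          # rows indexed by angle
    coords = np.array([cy + r * np.sin(t), cx + r * np.cos(t)])
    return map_coordinates(img, coords, order=1, mode='nearest')

def lpt_dct_features(img, keep=8):
    """Keep a low-frequency DCT block of the log-polar image as the
    low-dimensional, rotation-insensitive feature vector."""
    return np.abs(dctn(log_polar(img), norm='ortho'))[:keep, :keep].ravel()

features = lpt_dct_features(np.random.rand(112, 92))  # an ORL-sized image
```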

Keywords: Discrete Cosine Transform, Face Recognition, Feature Extraction, Log Polar Transform, Particle Swarm Optimization.

126 Application of a Similarity Measure for Graphs to Web-based Document Structures

Authors: Matthias Dehmer, Frank Emmert Streib, Alexander Mehler, Jürgen Kilian, Max Mühlhauser

Abstract:

Due to the tremendous amount of information provided by the World Wide Web (WWW), developing methods for mining the structure of web-based documents is of considerable interest. In this paper we present a similarity measure for graphs representing web-based hypertext structures. Our similarity measure is mainly based on a novel representation of a graph as linear integer strings whose components represent structural properties of the graph. The similarity of two graphs is then defined as the optimal alignment of the underlying property strings. In this paper we apply the well-known technique of sequence alignment to solve a novel and challenging problem: measuring the structural similarity of generalized trees. In other words, we first transform our graphs, considered as high-dimensional objects, into linear structures. Then we derive similarity values from the alignments of the property strings in order to measure the structural similarity of generalized trees. Hence, we transform a graph similarity problem into a string similarity problem to develop an efficient graph similarity measure. We demonstrate that our similarity measure captures important structural information by applying it to two different test sets consisting of graphs representing web-based document structures.
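
The core computation, an optimal alignment of integer property strings, is a textbook Needleman-Wunsch dynamic program. The scoring values and the example property strings below are invented for illustration.

```python
def align_score(p, q, gap=-1, match=2, mismatch=-1):
    """Global (Needleman-Wunsch) alignment score of two integer strings,
    e.g. per-level structural properties of generalized trees."""
    n, m = len(p), len(q)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * gap
    for j in range(1, m + 1):
        D[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if p[i - 1] == q[j - 1] else mismatch
            D[i][j] = max(D[i - 1][j - 1] + s,
                          D[i - 1][j] + gap,
                          D[i][j - 1] + gap)
    return D[n][m]

# Hypothetical property strings: out-degree sequences per hierarchy level.
print(align_score([3, 2, 2, 1], [3, 2, 1, 1]))
```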

Keywords: Graph similarity, hierarchical and directed graphs, hypertext, generalized trees, web structure mining.

125 Ranking of Inventory Policies Using Distance Based Approach Method

Authors: Gupta Amit, Kumar Ramesh, Tewari P. C.

Abstract:

Globalization is putting enormous pressure on business organizations, especially manufacturing ones, to rethink their supply chains in innovative ways. Inventory consumes a major portion of total sales revenue, and effective and efficient inventory management plays a vital role in the successful functioning of any organization. Selection of an inventory policy is one of the important purchasing activities. This paper focuses on the selection and ranking of alternative inventory policies. A deterministic quantitative model based on the Distance Based Approach (DBA) method has been developed for the evaluation and ranking of inventory policies; we have employed this concept for the first time for this type of selection problem. Four inventory policies are considered: economic order quantity (EOQ), just in time (JIT), vendor managed inventory (VMI) and a monthly policy. Improper selection could affect a company's competitiveness in terms of the productivity of its facilities and the quality of its products. The ranking of inventory policies is a multi-criteria problem: one must first identify the selection criteria and then process the information with reference to the relative importance of the attributes being compared. Criteria values for each inventory policy can be obtained analytically, by using a simulation technique, or as subjective linguistic judgments defined by fuzzy sets. A methodology is developed and applied to rank the inventory policies.
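
As a sketch of how a distance-based ranking works: normalize the decision matrix, take the best value of each criterion as the optimal point, and rank policies by weighted Euclidean distance to it. The matrix and weights below are illustrative, not the paper's data, and the paper's exact DBA formulation may differ in detail.

```python
import numpy as np

# Rows: EOQ, JIT, VMI, Monthly; columns: assumed benefit criteria.
X = np.array([[0.70, 0.55, 0.80],
              [0.90, 0.40, 0.60],
              [0.80, 0.75, 0.70],
              [0.50, 0.60, 0.90]])
w = np.array([0.5, 0.3, 0.2])                  # assumed criteria weights

N = X / np.linalg.norm(X, axis=0)              # vector normalization
ideal = N.max(axis=0)                          # optimal point
d = np.sqrt(((w * (N - ideal)) ** 2).sum(axis=1))
print(d.argsort())                             # smallest distance ranks first
```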

Keywords: Inventory Policy, Ranking, DBA, Selection criteria.

124 A Codebook-based Redundancy Suppression Mechanism with Lifetime Prediction in Cluster-based WSN

Authors: Huan Chen, Bo-Chao Cheng, Chih-Chuan Cheng, Yi-Geng Chen, Yu Ling Chou

Abstract:

A Wireless Sensor Network (WSN) comprises sensor nodes designed to sense the environment and transmit the sensed data back to the base station via multi-hop routing, where physical phenomena are reconstructed. Since sensed physical phenomena exhibit significant temporal and spatial redundancy, it is necessary for sensor nodes to use Redundancy Suppression Algorithms (RSA) to lower energy consumption by reducing the transmission of redundant data. A conventional class of RSAs is threshold-based RSA, which sets a threshold to suppress redundant data. Although many temporal and spatial RSAs have been proposed, temporal-spatial RSAs are seldom proposed because it is difficult to determine when to apply temporal or spatial suppression. In this paper, we propose a novel temporal-spatial redundancy suppression algorithm, the Codebook-based Redundancy Suppression Mechanism (CRSM). CRSM adopts vector quantization to generate a codebook, which is easily used to implement temporal-spatial RSA. CRSM not only achieves power saving and reliability for the WSN, but also provides predictability of the network lifetime. Simulation results show that CRSM extends network lifetime by at least 23% compared with other RSAs.
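
A toy version of the codebook idea: vector-quantize readings with k-means, then transmit a codeword index only when it changes. The dimensions, codebook size and data are placeholders; the real CRSM design (and its lifetime prediction) is more involved.

```python
import numpy as np
from sklearn.cluster import KMeans

history = np.random.rand(500, 4)                 # past 4-dimensional readings
codebook = KMeans(n_clusters=16, n_init=10).fit(history)

def suppress(stream):
    """Yield a codeword index only when it differs from the previous one;
    repeated codewords are suppressed as temporal-spatial redundancy."""
    last = None
    for reading in stream:
        idx = int(codebook.predict(reading.reshape(1, -1))[0])
        if idx != last:
            yield idx                            # send the index, not the vector
            last = idx

sent = list(suppress(np.random.rand(100, 4)))
```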

Keywords: Redundancy Suppression Algorithm (RSA), Threshold-based RSA, Temporal RSA, Spatial RSA, Codebook-based Redundancy Suppression Mechanism (CRSM).

123 Integration of Seismic and Seismological Data Interpretation for Subsurface Structure Identification

Authors: Iftikhar Ahmed Satti, Wan Ismail Wan Yusoff

Abstract:

The structural interpretation of a part of eastern Potwar (Missa Keswal) has been carried out with available seismological, seismic and well data. The seismological data contain source parameters and fault plane solution (FPS) parameters, while the seismic data comprise ten seismic lines that were re-interpreted using well data. Structural interpretation depicts two broad types of fault sets, namely thrust and back-thrust faults. Together, these faults give rise to pop-up structures in the study area and are also responsible for many structural traps and for the seismicity. Seismic interpretation includes time and depth contour maps of the Chorgali Formation, while seismological interpretation includes focal mechanism solutions (FMS), depth, frequency and magnitude bar graphs, and renewal of the seismotectonic map. The focal mechanism solutions surrounding the study area are correlated with different geological and structural maps of the area to determine the nature of the subsurface faults. Results of structural interpretation from the seismic and the seismological data show good correlation. It is hoped that the present work will help in better understanding the variations in subsurface structure and can be a useful tool for earthquake prediction, oil field planning and reservoir monitoring.

Keywords: Focal mechanism solution (FMS), Fault plane solution (FPS), Reservoir monitoring, earthquake prediction.

122 Approach for Demonstrating Reliability Targets for Rail Transport during Low Mileage Accumulation in the Field: Methodology and Case Study

Authors: Nipun Manirajan, Heeralal Gargama, Sushil Guhe, Manoj Prabhakaran

Abstract:

In the railway industry, train sets are designed based on contractual requirements (the mission profile), where reliability targets are measured in terms of mean distance between failures (MDBF). However, at the beginning of revenue service, trains often do not achieve the designed mission profile distance (mileage) within the expected timeframe, due to infrastructure constraints, scarcity of commuters or other operational challenges, thereby not respecting the original design inputs. Since the trains do not run sufficiently and do not accumulate the designed mileage within the specified time, the car builder risks not achieving the contractual MDBF target. This paper proposes a constant-failure-rate-based model to deal with situations where mileage accumulation falls short of the design mission profile. The model provides an appropriate MDBF target to be demonstrated based on the actual accumulated mileage. A case study of rolling stock running in the field is undertaken to analyze the failure data and the MDBF target demonstration during low mileage accumulation. The results of the case study show that, with the proposed method, reliability targets are achieved under low mileage accumulation.
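
One common way to make such a constant-failure-rate target concrete (not necessarily the paper's exact formulation, which the abstract does not detail) is the chi-square bound used in reliability demonstration: at any accumulated mileage, find the largest failure count for which the lower confidence bound on MDBF still meets the design target.

```python
from scipy.stats import chi2

def max_allowed_failures(accum_km, target_mdbf_km, confidence=0.7):
    """Largest failure count r such that the one-sided chi-square lower
    confidence bound on MDBF, 2*T / chi2.ppf(C, 2r + 2), still meets the
    contractual target, under a constant failure rate assumption."""
    r = -1
    while 2 * accum_km / chi2.ppf(confidence, 2 * (r + 1) + 2) >= target_mdbf_km:
        r += 1
    return r          # -1 means the target cannot be demonstrated yet

print(max_allowed_failures(accum_km=120_000, target_mdbf_km=40_000))
```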

Keywords: Mean distance between failures, mileage based reliability, reliability target normalization, rolling stock reliability.

121 A Methodology for Automatic Diversification of Document Categories

Authors: Dasom Kim, Chen Liu, Myungsu Lim, Soo-Hyeon Jeon, Byeoung Kug Jeon, Kee-Young Kwahk, Namgyu Kim

Abstract:

Recently, numerous documents containing large volumes of unstructured data and text have been created because of the rapid increase in the use of social media and the Internet. Usually, these documents are categorized for the convenience of users. However, the accuracy of manual categorization is not guaranteed, and such categorization requires a large amount of time and incurs huge costs. Many studies on automatic categorization have been conducted to help mitigate the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to the categorization of complex documents with multiple topics because they work on the assumption that individual documents can be assigned to single categories only. Therefore, to overcome this limitation, some studies have attempted to categorize each document into multiple categories. However, the learning process employed in these studies involves training on a multi-categorized document set; these methods therefore cannot be applied to the multi-categorization of most documents unless multi-categorized training sets for traditional multi-categorization algorithms are provided. To overcome this limitation, in this study we review our novel methodology for extending the category of a single-categorized document to multiple categories, and then introduce a survey-based verification scenario for estimating the accuracy of our automatic categorization methodology.

Keywords: Big Data Analysis, Document Classification, Text Mining, Topic Analysis.

120 A Generalization of Planar Pascal’s Triangle to Polynomial Expansion and Connection with Sierpinski Patterns

Authors: Wajdi Mohamed Ratemi

Abstract:

The very well-known stacked sets of numbers referred to as Pascal's triangle present the coefficients of the binomial expansion of the form (x+y)^n. This paper presents an approach (the Staircase Horizontal Vertical, SHV, method) to the generalization of the planar Pascal's triangle for polynomial expansions of the form (x+y+z+w+r+⋯)^n. The presented generalization of Pascal's triangle is different from other generalizations given in the literature. The coefficients of the generalized Pascal's triangles presented in this work are generated by inspection, using embedded Pascal's triangles. The coefficients of an I-variable expansion are generated by horizontally laying out the Pascal elements of the (I-1)-variable expansion in a staircase manner and multiplying them with the relevant columns of vertically laid out classical Pascal elements, hence avoiding factorial calculations when generating the coefficients of the polynomial expansion. Furthermore, the classical Pascal's triangle has a pattern built into it regarding its odd and even numbers, known as the Sierpinski triangle. In this study, a presentation of Sierpinski-like patterns of the generalized Pascal's triangles is given. Applications of the coefficients of the binomial expansion (Pascal's triangle) or of polynomial expansions (generalized Pascal's triangles) lie in areas such as combinatorics and probability.
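
The factorial-free flavour of the construction can be imitated in a few lines: build Pascal rows additively and assemble each multinomial coefficient as a product of binomial entries. This mirrors the idea of reusing embedded Pascal's triangles, though not the paper's exact staircase layout.

```python
def pascal_row(n):
    """Row n of the classical Pascal's triangle, built additively."""
    row = [1]
    for _ in range(n):
        row = [a + b for a, b in zip([0] + row, row + [0])]
    return row

def multinomial_coeffs(n, k):
    """Coefficients of (x1 + ... + xk)^n as {exponent tuple: coefficient},
    assembled purely from Pascal rows, with no factorial evaluations."""
    if k == 1:
        return {(n,): 1}
    out = {}
    for i, c in enumerate(pascal_row(n)):        # exponent i for x1
        for rest, c2 in multinomial_coeffs(n - i, k - 1).items():
            out[(i,) + rest] = c * c2
    return out

print(multinomial_coeffs(2, 3))   # (x+y+z)^2: coefficients 1,2,2,1,2,1
```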

Keywords: Generalized Pascal’s triangle, Pascal’s triangle, polynomial expansion, Sierpinski’s triangle, staircase horizontal vertical method.

119 Studies of Rule Induction by STRIM from the Decision Table with Contaminated Attribute Values from Missing Data and Noise — In the Case of Critical Dataset Size —

Authors: Tetsuro Saeki, Yuichi Kato, Shoutarou Mizuno

Abstract:

STRIM (Statistical Test Rule Induction Method) has been proposed as a method to effectively induce if-then rules from a decision table, which is considered a sample set obtained from the population of interest. Its usefulness has been confirmed by simulation experiments specifying rules in advance, and by comparison with conventional methods. However, scope for development remains before STRIM can be applied to the analysis of real-world data sets. The first requirement is to determine the size of the dataset needed for inducing true rules, since finding statistically significant rules is the core of the method. The second is to examine the capacity of rule induction from datasets with attribute values contaminated by missing data and noise, since real-world datasets usually contain such contaminated data. This paper examines the first problem theoretically, in connection with the rule length. The second problem is then examined in a simulation experiment, utilizing the critical dataset size derived in the first step. The experimental results show that STRIM is highly robust in the analysis of datasets with contaminated attribute values, and hence is applicable to real-world data.
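
The statistical core can be pictured as a one-sided binomial test per candidate rule: does the decision class occur among the rows matching the rule's condition part significantly more often than its prior rate? STRIM's actual test statistic and thresholds may differ; the counts below are invented.

```python
from scipy.stats import binomtest

def rule_is_significant(n_match, n_match_and_d, p_prior, alpha=0.01):
    """One-sided test: class d is over-represented among the n_match rows
    that satisfy the rule's condition part."""
    res = binomtest(n_match_and_d, n_match, p_prior, alternative='greater')
    return res.pvalue < alpha

# Hypothetical decision table with six equally likely decision classes:
print(rule_is_significant(n_match=120, n_match_and_d=45, p_prior=1 / 6))
```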

Keywords: Rule induction, decision table, missing data, noise.

118 Semantic Mobility Channel (SMC): Ubiquitous and Mobile Computing Meets the Semantic Web

Authors: José M. Cantera, Miguel Jiménez, Genoveva López, Javier Soriano

Abstract:

With the advent of emerging personal computing paradigms such as ubiquitous and mobile computing, Web contents are becoming accessible from a wide range of mobile devices. Since these devices do not have the same rendering capabilities, Web contents need to be adapted for transparent access from a variety of client agents. Such content adaptation is exploited for either an individual element or a set of consecutive elements in a Web document and results in better rendering and faster delivery to the client device. Nevertheless, Web content adaptation sets new challenges for semantic markup. This paper presents an advanced components platform, called SMC, enabling the development of mobility applications and services according to a channel model based on the principles of Service Oriented Architecture (SOA). It then goes on to describe the potential for integration with the Semantic Web through a novel framework of external semantic annotation that prescribes a scheme for representing semantic markup files and a way of associating Web documents with these external annotations. The role of semantic annotation in this framework is to describe the contents of individual documents themselves, assuring the preservation of the semantics during the process of adapting content rendering. Semantic Web content adaptation is a way of adding value to Web contents and facilitates the repurposing of Web contents (enhanced browsing, Web Services location and access, etc.).

Keywords: Semantic web, ubiquitous and mobile computing, web content transcoding, semantic mark-up, mobile computing, middleware and services.

117 Evolutionary Training of Hybrid Systems of Recurrent Neural Networks and Hidden Markov Models

Authors: Rohitash Chandra, Christian W. Omlin

Abstract:

We present a hybrid architecture of recurrent neural networks (RNNs) inspired by hidden Markov models (HMMs). We train the hybrid architecture using genetic algorithms to learn and represent dynamical systems. We train the hybrid architecture on a set of deterministic finite-state automata strings and observe its generalization performance when presented with a new set of strings that were not present in the training data set. In this way, we show that the hybrid system of HMM and RNN can learn and represent deterministic finite-state automata. We ran experiments with different population sizes in the genetic algorithm; we also ran experiments to find out which weight initializations were best for training the hybrid architecture. The results show that the hybrid architecture of recurrent neural networks inspired by hidden Markov models can be trained to represent dynamical systems. The best training and generalization performance is achieved when the hybrid architecture is initialized with random real weight values in the range -15 to 15.
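
A stripped-down version of the evolutionary setup is sketched below, with invented sizes, a plain recurrent net in place of the paper's HMM-inspired hybrid, a parity automaton as the target DFA, and mutation-only evolution standing in for whatever recombination scheme the paper used; only the -15 to 15 initialization range comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
H = 4                                        # hidden units
N_W = H + H * H + H + H + 1                  # total weight count

def rnn_accepts(w, bits):
    """Tiny recurrent net: unpack the flat weight vector, run it over a
    bit string; the sign of the output is the accept/reject decision."""
    Wx, i = w[:H], H
    Wh = w[i:i + H * H].reshape(H, H); i += H * H
    bh = w[i:i + H]; i += H
    Wo, bo = w[i:i + H], w[-1]
    h = np.zeros(H)
    for x in bits:
        h = np.tanh(x * Wx + Wh @ h + bh)
    return Wo @ h + bo > 0

# Target DFA: accept 6-bit strings with an even number of ones (parity).
data = [([int(c) for c in f"{k:06b}"], f"{k:06b}".count("1") % 2 == 0)
        for k in range(64)]
fitness = lambda w: np.mean([rnn_accepts(w, s) == y for s, y in data])

pop = rng.uniform(-15, 15, (60, N_W))        # init in [-15, 15], as reported
for _ in range(200):                         # mutation-only GA with elitism
    scores = np.array([fitness(w) for w in pop])
    elite = pop[scores.argsort()[-20:]]      # keep the best third
    children = elite[rng.integers(0, 20, 40)] + rng.normal(0, 1.0, (40, N_W))
    pop = np.vstack([elite, children])
```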

Keywords: Deterministic finite-state automata, genetic algorithm, hidden Markov models, hybrid systems and recurrent neural networks.

116 An Investigation on the Accuracy of Nonlinear Static Procedures for Seismic Evaluation of Buckling-restrained Braced Frames

Authors: An Hong Nguyen, Chatpan Chintanapakdee, Toshiro Hayashikawa

Abstract:

Presented herein is an assessment of current nonlinear static procedures (NSPs) for the seismic evaluation of buckling-restrained braced frames (BRBFs), which have become a favorable lateral-force-resisting system for earthquake-resistant buildings. The bias and accuracy of the modal pushover analysis (MPA), improved modal pushover analysis (IMPA) and mass proportional pushover (MPP) procedures are comparatively investigated when they are applied to BRBF buildings subjected to two sets of strong ground motions. The assessment is based on a comparison of seismic displacement demands such as target roof displacements, peak floor/roof displacements and inter-story drifts. The NSP estimates are compared to 'exact' results from nonlinear response history analysis (NLRHA). The response statistics presented show that the MPP procedure tends to significantly overestimate the seismic demands of the lower stories of the tall buildings considered in this study, while the MPA and IMPA procedures provide reasonably accurate results in estimating maximum inter-story drift over all stories of the studied BRBF systems.

Keywords: Buckling-restrained braced frames, nonlinear response history analysis, nonlinear static procedure, seismic demands.

115 The Impact of Motivation, Trust, and National Cultural Differences on Knowledge Sharing within the Context of Electronic Mail

Authors: Said Abdullah Al Saifi

Abstract:

The goal of this research is to examine the impact of trust, motivation, and national culture on knowledge sharing within the context of electronic mail. This study is quantitative and survey based. In order to conduct the research, 200 students from a leading university in New Zealand were chosen randomly to participate in a questionnaire survey. Motivation and trust were found to be significantly and positively related to knowledge sharing. The research findings illustrated that face saving, face gaining, and individualism positively moderate the relationship between motivation and knowledge sharing, while a collectivism culture negatively moderates that relationship. Moreover, the findings reveal that face saving, individualism, and collectivism culture positively moderate the relationship between trust and knowledge sharing, whereas a face gaining culture negatively moderates it. This study sets out several implications for researchers and practitioners. It produces an integrative model that shows how attributes of national culture impact knowledge sharing through the use of emails. A better understanding of the relationship between knowledge sharing and trust, motivation, and national culture differences will increase individuals' ability to make wise choices when sharing knowledge with those from different cultures.

Keywords: Knowledge sharing, motivation, national culture, trust.

114 A Study on the Condition Monitoring of Transmission Line by On-line Circuit Parameter Measurement

Authors: Il Dong Kim, Jin Rak Lee, Young Jun Ko, Young Taek Jin

Abstract:

An on-line condition monitoring method for transmission lines, based on electrical circuit theory and IT technology, is proposed in this paper. The circuit parameters of a transmission line, such as resistance (R), inductance (L), conductance (g) and capacitance (C), expose the electrical condition and physical state of the line. Those parameters can be calculated from the linear equations composed of voltages and currents measured by synchro-phasor measurement techniques at both ends of the line. A set of linear voltage drop equations containing the four terminal constants (A, B, C, D) is a mathematical model of the transmission line circuit. If at least two sets of those linear equations are established from different operating conditions of the line, they mathematically yield the circuit parameters of the line. The condition of line connectivity, including the state of the connecting or contacting parts of switching devices, may be monitored through resistance variations during operation. The insulation condition of the line can be monitored through conductance (g) and capacitance (C) measurements. Together with other condition monitoring devices, such as partial discharge sensors and visual sensing devices, these measurements may give useful information for detecting any incipient symptoms of faults. A prototype hardware system has been developed and tested on laboratory-level simulated transmission lines. The tests have shown enough evidence to put the proposed method to practical use.
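
The estimation step reduces to linear algebra: each synchro-phasor measurement set gives Vs = A·Vr + B·Ir and Is = C·Vr + D·Ir, so two sets taken under different loading determine all four constants. The phasor values below are illustrative only, and recovering the line impedance and admittance from A and B assumes a nominal-pi line model.

```python
import numpy as np

# Two measurement sets (sending/receiving-end phasors), illustrative only.
Vs = np.array([132e3 * np.exp(0.10j), 131e3 * np.exp(0.15j)])
Is = np.array([210 * np.exp(-0.20j), 340 * np.exp(-0.25j)])
Vr = np.array([128e3 * np.exp(0.00j), 125e3 * np.exp(0.02j)])
Ir = np.array([205 * np.exp(-0.30j), 332 * np.exp(-0.35j)])

M = np.array([[Vr[0], Ir[0]],
              [Vr[1], Ir[1]]])
A, B = np.linalg.solve(M, Vs)        # voltage equations
C, D = np.linalg.solve(M, Is)        # current equations

# Nominal-pi interpretation: B is the series impedance R + jwL, and
# A = 1 + ZY/2 gives the shunt admittance g + jwC.
Z = B
Y = 2 * (A - 1) / B
```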

Keywords: Transmission Line, Condition Monitoring, Circuit Parameters, Synchro-phasor Measurement.

113 Examination of the Effect of Air Viscosity on Narrow Acoustic Tubes Using FEM Involving Complex Effective Density and Complex Bulk Modulus

Authors: M. Watanabe, T. Yamaguchi, M. Sasajima, Y. Kurosawa, Y. Koike

Abstract:

Earphones and headphones, which are compact electro-acoustic transducers, tend to contain many acoustic absorption materials and porous materials known as dampers, which often have a large number of extremely small holes and narrow slits to inhibit the resonance of the vibrating system; in such narrow acoustic paths, air viscosity significantly affects the acoustic characteristics. In order to perform simulations using the finite element method (FEM), it is necessary to know material characteristics such as the impedance and propagation constants of the sound absorbing and porous materials. The transfer function method is widely known for measuring these properties in an acoustic tube, but literature describing measurements at the upper limit of the audible range is yet to be found. To measure acoustic characteristics at higher frequencies, the acoustic tube serving as the measurement instrument must be made narrow, and the distance between the two sets of microphones must be shortened. When such a tube is made narrow, however, the characteristic impedance has been observed to become lower than the impedance of air. This paper attributes this phenomenon to the effect of air viscosity and describes an FEM analysis of an acoustic tube that accounts for air viscosity, comparing the results with a theoretical formula extended to include the viscous effect.
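
The equivalent-fluid relations behind such an analysis are compact: given a complex effective density and a complex bulk modulus, the characteristic impedance and propagation constant follow directly. The numbers below are placeholders, not the paper's measured or simulated values.

```python
import numpy as np

def narrow_tube_acoustics(rho_eff, K_eff, omega):
    """Characteristic impedance and propagation constant of an equivalent
    fluid with complex effective density and complex bulk modulus."""
    Zc = np.sqrt(rho_eff * K_eff)                  # characteristic impedance
    gamma = 1j * omega * np.sqrt(rho_eff / K_eff)  # propagation constant
    return Zc, gamma

# Viscous and thermal effects make rho_eff and K_eff complex, shifting Zc
# away from the rho0*c0 of lossless air.
Zc, gamma = narrow_tube_acoustics(rho_eff=1.2 * (1 + 0.3j),
                                  K_eff=1.42e5,
                                  omega=2 * np.pi * 10e3)
```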

Keywords: Acoustic tube, air viscosity, earphones, FEM, porous materials, sound absorbing materials, transfer function method.

112 Computer Software Applicable in Rehabilitation, Cardiology and Molecular Biology

Authors: P. Kowalska, P. Gabka, K. Kamieniarz, M. Kamieniarz, W. Stryla, P. Guzik, T. Krauze

Abstract:

We have developed a computer program consisting of 6 subtests assessing children's hand dexterity, applicable in rehabilitation medicine. We have carried out a normative study on a representative sample of 285 children aged from 7 to 15 (mean age 11.3) and have proposed clinical standards for three age groups (7-9, 9-11, 12-15 years). We have shown statistical significance of the differences among the corresponding mean values of task completion time, and have found a strong correlation between task completion time and the age of the subjects. We have also performed test-retest reliability checks on a sample of 84 children, obtaining high Pearson coefficients for the dominant and non-dominant hand, in the ranges 0.74-0.97 and 0.62-0.93, respectively. A new MATLAB-based programming tool for the analysis of cardiologic RR intervals and blood pressure descriptors has also been developed. For each data set, ten different parameters are extracted: 2 in the time domain, 4 in the frequency domain and 4 in Poincaré plot analysis. In addition, twelve different parameters of baroreflex sensitivity are calculated. All these data sets can be visualized in the time domain together with their power spectra and Poincaré plots. If available, the respiratory oscillation curves can also be plotted for comparison. Another application processes biological data obtained from BLAST analysis.
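
For the cardiologic part, the time-domain and Poincaré descriptors mentioned are standard; a Python transcription of the usual definitions is below (the authors' tool is MATLAB-based, and its exact parameter set is not reproduced here).

```python
import numpy as np

def hrv_parameters(rr_ms):
    """Standard time-domain and Poincare descriptors of an RR series."""
    rr = np.asarray(rr_ms, dtype=float)
    d = np.diff(rr)
    sd1 = np.sqrt(np.var(d, ddof=1) / 2.0)              # Poincare width
    sd2 = np.sqrt(2 * np.var(rr, ddof=1) - np.var(d, ddof=1) / 2.0)
    return {'SDNN': rr.std(ddof=1),                     # overall variability
            'RMSSD': np.sqrt(np.mean(d ** 2)),          # beat-to-beat
            'SD1': sd1, 'SD2': sd2}

print(hrv_parameters([812, 805, 790, 823, 841, 818, 800, 795]))
```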

Keywords: Biomedical data base processing, Computer software, Hand dexterity, Heart rate and blood pressure variability.

111 Communicative Competence in Technical Oral Presentation: That “Magic“ Perceived by ESL Educators versus Content Experts

Authors: Ena Bhattacharyya, Zullina H. Shaari

Abstract:

To date, English as a Second Language (ESL) educators involved in teaching language and communication to engineering students face an uphill task in developing graduate communicative competency. This challenge is accentuated by the apparent lack of English for Specific Purposes (ESP) materials for engineering students in the engineering curriculum. As such, most ESL educators are forced to play multiple roles, taking on tasks such as curriculum designer, material writer and teacher, with limited knowledge of the disciplinary content. Previous research indicates that prospective professional engineers should possess several sub-sets of competency: technical, linguistic oral immediacy, meta-cognitive and rhetorical explanatory competence. Another study revealed that engineering students need to be equipped with technical and linguistic oral immediacy competence. However, little is known about whether these competency needs are in line with educators' perceptions of communicative competence. This paper examines the best mix of communicative competence subsets that creates the 'magic' for engineering students in technical oral presentations. For the purpose of this study, two groups of educators were interviewed: language and communication lecturers involved in teaching a speaking course, and content experts who assess students' technical oral presentations at the tertiary level. The findings indicate that these two groups differ in their perceptions.

Keywords: Communicative competence, Content experts, Educators, Technical Oral Presentations

110 Assessment of Diagnostic Enzymes as Indices of Heavy Metal Pollution in Tilapia Fish

Authors: Justina I. R. Udotong

Abstract:

Diagnostic enzymes such as aspartate aminotransferase (AST), alanine aminotransferase (ALT) and alkaline phosphatase (ALP) were determined as indices of heavy metal pollution in Tilapia guinensis. Three different sets of fish treated with lead (Pb), iron (Fe) and copper (Cu) were used for the study, while a fourth group with no heavy metal served as a control. Fish in each group were exposed to 2.65 mg/l of Pb, 0.85 mg/l of Fe and 0.35 mg/l of Cu in aerated aquaria for 96 hours. Tissue fractionation of the liver tissues was carried out and the three diagnostic enzymes (AST, ALT, and ALP) were estimated; serum levels of the same diagnostic enzymes were also measured. The mean serum ALP activities were 19.5±1.62, 29.67±2.17 and 1.15±0.27 IU/L for the Pb, Fe and Cu groups, compared with 9.99±1.34 IU/L in the control. This result showed that Pb and Fe caused an increased release of the enzyme into the blood circulation, indicating increased tissue damage, while Cu caused a reduction in the serum level compared with the control group. The mean liver enzyme activities were 102.14±6.12, 140.17±2.06 and 168.23±3.52 IU/L for the Pb, Fe and Cu groups, respectively, compared to 91.20±9.42 IU/L for the control group. The serum and liver AST and ALT activities obtained in the Pb, Fe, Cu and control groups are also reported. It was generally noted that the presence of the heavy metals caused liver tissue damage and a consequent increase in the levels of the diagnostic enzymes in the serum.

Keywords: Diagnostic enzymes, enzyme activity, heavy metals, tissues investigations.

109 Biochemical and Multiplex PCR Analysis of Toxic Crystal Proteins to Determine Genes in Bacillus thuringiensis Mutants

Authors: Fatma N. Talkhan, H. H. Abo-Assy, K. A. Soliman, Marwa M. Azzam, A. Z. E. Abdelsalam, A. S. Abdel-Razek

Abstract:

The Egyptian Bacillus thuringiensis isolate (M5), which produces crystal proteins toxic to insects, was irradiated with UV light to induce mutants. Upon testing 10 of the resulting mutants for their toxicity against cotton leafworm larvae, the three mutants 62, 64 and 85 proved to be the most toxic. SDS-PAGE analysis of the spore-crystal proteins as well as the vegetative cell proteins of these mutants and their parental isolate showed new UV-induced bands in the three mutants, as well as the disappearance of some other bands compared with the wild-type isolate. The multiplex PCR technique, with five sets of specific primers, was used to detect the three types of cryI genes: cryIAa, cryIAb and cryIAc. Results showed that these three genes exist, as distinctive bands, in the wild-type isolate (M5) as well as in mutants 62 and 85, while mutant 64 had two distinctive bands for the cryIAb and cryIAc genes and a faint band for the cryIAa gene. Finally, these results revealed that mutant 62 is the most promising mutant, since it is UV-resistant, highly toxic against Spodoptera littoralis and active against a wide range of lepidopteran insects.

Keywords: Bacillus thuringiensis, biological control, cryI genes, multiplex PCR, SDS-PAGE analysis.

108 Implementation of a Multimodal Biometrics Recognition System with Combined Palm Print and Iris Features

Authors: Rabab M. Ramadan, Elaraby A. Elgallad

Abstract:

With extensive application, unimodal biometric systems face a diversity of problems such as signal and background noise, distortion, and environmental differences. Multimodal biometric systems have therefore been proposed to solve these problems. This paper introduces a bimodal biometric recognition system based on features extracted from the human palm print and iris. Palm print biometrics is a fairly new, evolving technology that is used to identify people by their palm features. The iris, together with the face and fingerprints, is a strong candidate for inclusion in multimodal recognition systems. In this research, we introduce an algorithm for combining the palm- and iris-extracted features using a texture-based descriptor, the Scale Invariant Feature Transform (SIFT). Since the feature sets are non-homogeneous, as features of different biometric modalities are used, they are concatenated to form a single feature vector. Particle swarm optimization (PSO) is used as a feature selection technique to reduce the dimensionality of the feature vector. The proposed algorithm is applied to the Indian Institute of Technology Delhi (IITD) database and its performance is compared with various iris recognition algorithms found in the literature.

Keywords: Iris recognition, particle swarm optimization, feature extraction, feature selection, palm print, scale invariant feature transform.

107 Processing the Medical Sensors Signals Using Fuzzy Inference System

Authors: S. Bouharati, I. Bouharati, C. Benzidane, F. Alleg, M. Belmahdi

Abstract:

Sensors measure several kinds of physical quantities. Whether they are devices that convert a sensed signal into an electrical signal, chemical sensors or biosensors, all of these sensors can be considered an interface between physical and electrical equipment. The problem is the analysis of the multitude of recorded measurements as input variables, which do not all have the same level of influence on the outputs. The most sensitive parameters must therefore be identified, as these guide users in gathering information in the field and in model calibration, through sensitivity analysis of the effect of each change made. Mathematical models used for such processing become very complex. In this paper, a fuzzy rule-based system is proposed as a solution to this problem. The system collects the available signal information from the sensors and allows the study of the influence of the various factors that take part in the decision. Since its inception, fuzzy set theory has been regarded as a formalism suitable for dealing with the imprecision intrinsic to many problems. At the same time, fuzzy sets allow the use of symbolic models. In this study, an example is given that deals with a variety of physiological parameters defining human health state, as an aid to medical diagnosis. The inputs are signals expressing cardiovascular system parameters, blood pressure and respiratory system parameters. Once the system is built, it is able to predict the state of the patient for any input values.
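
A self-contained miniature of the idea, with a hand-rolled Mamdani step: triangular memberships, min for rule firing, max aggregation and centroid defuzzification. The vital-sign ranges and the two rules are invented for illustration and are not the paper's rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def diagnose(systolic, heart_rate):
    risk = np.linspace(0, 1, 101)                       # output universe
    # Rule 1: IF pressure high AND rate high THEN risk high
    w1 = min(tri(systolic, 130, 160, 200), tri(heart_rate, 90, 120, 160))
    # Rule 2: IF pressure normal AND rate normal THEN risk low
    w2 = min(tri(systolic, 100, 120, 140), tri(heart_rate, 55, 75, 95))
    agg = np.maximum(np.minimum(w1, tri(risk, 0.5, 1.0, 1.5)),
                     np.minimum(w2, tri(risk, -0.5, 0.0, 0.5)))
    return (risk * agg).sum() / agg.sum() if agg.any() else 0.5

print(diagnose(systolic=150, heart_rate=110))           # elevated risk
```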

Keywords: Sensors, Sensitivity, fuzzy logic, analysis, physiological parameters, medical diagnosis.

106 Application of Neural Network in User Authentication for Smart Home System

Authors: A. Joseph, D.B.L. Bong, D.A.A. Mat

Abstract:

Security has been an important issue and concern in smart home systems. Since smart home networks consist of a wide range of wired and wireless devices, there is a possibility of illegal access to restricted data or devices. Password-based authentication is widely used to identify authorized users because this method is cheap, easy and quite accurate. In this paper, a neural network is trained to store the passwords instead of using a verification table. This method is useful in solving security problems that occur in some authentication systems. The conventional way to train the network using backpropagation (BPN) requires a long training time; hence, a faster training algorithm, Resilient Backpropagation (RPROP), is embedded into the MLP neural network to accelerate the training process. For the data part, 200 sets of user IDs and passwords were created and encoded into binary as the input. Simulations were carried out to evaluate the performance for different numbers of hidden neurons and combinations of transfer functions. Mean Square Error (MSE), training time and number of epochs are used to determine the network performance. From the results obtained, using Tansig and Purelin in the hidden and output layers with 250 hidden neurons gave the best performance. As a result, a password-based user authentication system for smart homes using a neural network was developed successfully.
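
A compressed sketch of the memorize-the-table idea: a network learns the binary user-to-password mapping directly. All IDs and passwords below are synthetic, and scikit-learn's MLPClassifier (which trains with Adam, not the paper's RPROP) stands in for the original MLP.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def to_bits(s, n_bytes=8):
    """Fixed-length binary encoding of a string."""
    raw = s.encode()[:n_bytes].ljust(n_bytes, b'\0')
    return np.unpackbits(np.frombuffer(raw, dtype=np.uint8))

users = [f"user{i:03d}" for i in range(200)]     # 200 synthetic accounts
pwds = [f"pw{i:05d}" for i in range(200)]
X = np.array([to_bits(u) for u in users])
Y = np.array([to_bits(p) for p in pwds])         # multilabel bit targets
net = MLPClassifier(hidden_layer_sizes=(250,), max_iter=3000).fit(X, Y)

def authenticate(user, password):
    recalled = net.predict(to_bits(user).reshape(1, -1))[0]
    return bool(np.array_equal(recalled, to_bits(password)))
```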

Keywords: Neural Network, User Authentication, Smart Home, Security

105 Multivariate Analytical Insights into Spatial and Temporal Variation in Water Quality of a Major Drinking Water Reservoir

Authors: Azadeh Golshan, Craig Evans, Phillip Geary, Abigail Morrow, Zoe Rogers, Marcel Maeder

Abstract:

Twenty-two physicochemical variables were determined in water samples collected weekly from January to December 2013 at three sampling stations located within a major drinking water reservoir. Classical Multivariate Curve Resolution Alternating Least Squares (MCR-ALS) analysis was used to investigate the environmental factors associated with the physicochemical variability of the water samples at each of the sampling stations. Matrix-augmentation MCR-ALS (MA-MCR-ALS) was also applied, and the two sets of results were compared for interpretative clarity. Links between these factors, reservoir inflows and catchment land uses were investigated and interpreted in relation to the chemical composition of the water and the resolved geographical distribution profiles. The results suggested that the major factors affecting reservoir water quality were those associated with agricultural runoff, with evidence of influence on algal photosynthesis within the water column. Water quality variability within the reservoir was also found to be strongly linked to physical parameters such as water temperature and the occurrence of thermal stratification. The two methods applied (MCR-ALS and MA-MCR-ALS) led to similar conclusions; however, MA-MCR-ALS appeared to provide results more amenable to the interpretation of temporal and geographical variation than those obtained through classical MCR-ALS.
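
The backbone of both analyses is the same alternating least squares loop; matrix augmentation simply stacks the three stations' data so that one spectral matrix is shared. The sketch below uses random data of the abstract's dimensions (52 weeks x 22 variables per station) and simple non-negativity clipping, not the authors' constraint set.

```python
import numpy as np

def mcr_als(D, n_components, n_iter=200, seed=0):
    """Bare-bones MCR-ALS: D ~ C @ S.T with non-negative C and S."""
    rng = np.random.default_rng(seed)
    S = rng.random((D.shape[1], n_components))          # initial spectra
    for _ in range(n_iter):
        C = np.linalg.lstsq(S, D.T, rcond=None)[0].T    # solve for scores
        C = np.clip(C, 0.0, None)
        S = np.linalg.lstsq(C, D, rcond=None)[0].T      # solve for spectra
        S = np.clip(S, 0.0, None)
    return C, S

# MA-MCR-ALS: stack the three stations so they share one S, while C keeps
# one block of rows per station.
D1, D2, D3 = (np.random.rand(52, 22) for _ in range(3))
C_aug, S = mcr_als(np.vstack([D1, D2, D3]), n_components=3)
```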

Keywords: Catchment management, drinking water reservoir, multivariate curve resolution alternating least squares, thermal stratification, water quality.

104 A Hybrid Neural Network and Traditional Approach for Forecasting Lumpy Demand

Authors: A. Nasiri Pour, B. Rostami Tabar, A. Rahimzadeh

Abstract:

Accurate demand forecasting is one of the key issues in the inventory management of spare parts. The problem of modeling future consumption becomes especially difficult for lumpy patterns, which are characterized by intervals in which there is no demand and periods with actual demand occurrences showing large variation in demand levels; many forecasting methods perform poorly when demand for an item is lumpy. In this study, based on the characteristics of the lumpy demand patterns of spare parts, a hybrid forecasting approach has been developed which uses a multi-layered perceptron neural network and a traditional recursive method for forecasting future demands. In the described approach, the multi-layered perceptron is adapted to forecast occurrences of non-zero demands, and a conventional recursive method is then used to estimate the quantity of non-zero demands. In order to evaluate the performance of the proposed approach, its forecasts were compared to those obtained using the Syntetos & Boylan approximation and the multi-layered perceptron, generalized regression and Elman recurrent neural networks recently employed in this area. The models were applied to forecast future demand for spare parts of the Arak Petrochemical Company in Iran, using 30 types of real data sets. The results indicate that the forecasts obtained using our proposed model are superior to those obtained using the other methods.
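
The split the abstract describes, one model for whether demand occurs and another for how much, can be miniaturized as follows. The lag features, smoothing constant and network size are invented, and an exponentially smoothed size estimate stands in for the paper's recursive method.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def hybrid_forecast(demand, n_lags=4, alpha=0.2):
    """MLP forecasts the probability of a non-zero occurrence from recent
    occurrence lags; smoothing over non-zero demands estimates the size."""
    demand = np.asarray(demand, dtype=float)
    occ = (demand > 0).astype(int)
    X = np.array([occ[i:i + n_lags] for i in range(len(occ) - n_lags)])
    y = occ[n_lags:]
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000).fit(X, y)
    p = clf.predict_proba(occ[-n_lags:].reshape(1, -1))[0, 1]
    size = None
    for d in demand[demand > 0]:
        size = d if size is None else alpha * d + (1 - alpha) * size
    return p * size                       # expected demand next period

print(hybrid_forecast([0, 0, 7, 0, 0, 0, 12, 0, 5, 0, 0, 9, 0, 0]))
```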

Keywords: Lumpy Demand, Neural Network, Forecasting, Hybrid Approach.

103 VISMA: A Method for System Analysis in Early Lifecycle Phases

Authors: Walter Sebron, Hans Tschürtz, Peter Krebs

Abstract:

The choice of applicable analysis methods in safety or systems engineering depends on the depth of knowledge about a system and on the respective lifecycle phase. However, the analysis method chain still shows gaps, as it should support system analysis throughout the lifecycle of a system, from a rough concept in the pre-project phase until end-of-life. This paper's goal is to discuss an analysis method, the VISSE Shell Model Analysis (VISMA) method, which aims at closing the gap in the early system lifecycle phases, like the conceptual or pre-project phase, or the project start phase. It was originally developed to aid in the definition of the system boundary of electronic system parts, such as a control unit for a pump motor; furthermore, it can also be applied to non-electronic system parts. The VISMA method is a graphical, sketch-like method that stratifies a system and its parts in inner and outer shells, like the layers of an onion. It analyses a system in a two-step approach, from the innermost to the outermost components, followed by the reverse direction. To ensure a complete view of a system and its environment, the VISMA should be performed by (multifunctional) development teams. To introduce the method, a set of rules and guidelines has been defined in order to enable a proper shell build-up. In the first step, the innermost system, named the system under consideration (SUC), is selected as the focus of the subsequent analysis. Then its directly adjacent components, responsible for providing input to and receiving output from the SUC, are identified. These components are the content of the first shell around the SUC. Next, the input and output components of the components in the first shell are identified and form the second shell around the first one. Continuing this way, shell after shell is added with its respective parts until the border of the complete system (the external border) is reached. Last, two external shells, the environment shell and the use-case shell, are added to complete the system view. This system view is also stored for future use. In the second step, the shells are examined in the reverse direction (outside to inside) in order to remove superfluous components or subsystems. Input chains to the SUC, as well as output chains from the SUC, are described graphically via arrows to highlight functional chains through the system. As a result, this method offers a clear, graphical description and overview of a system, its main parts and its environment, while the focus remains on a specific SUC. It helps to identify the interfaces and interfacing components of the SUC, as well as important external interfaces of the overall system. It supports the identification of the first internal and external hazard causes and causal chains. Additionally, the method promotes a holistic picture and cross-functional understanding of a system, its contributing parts, internal relationships and possible dangers within a multidisciplinary development team.

Keywords: Analysis methods, functional safety, hazard identification, system and safety engineering, system boundary definition, system safety.

102 The Use of Artificial Neural Network in Option Pricing: The Case of S and P 100 Index Options

Authors: Zeynep İltüzer Samur, Gül Tekin Temur

Abstract:

Due to the increasing and varying risks that economic units face, derivative instruments have gained substantial importance, and trading volumes of derivatives have reached a very significant level. In parallel with these high trading volumes, researchers have developed many different models, some parametric and some nonparametric. In this study, the aim is to analyse the success of artificial neural networks in option pricing using S&P 100 index options data. Generally, previous studies cover data on European-type call options; this study includes not only European call options but also American call and put options and European put options. Three data sets are used to build three different ANN models. One includes only data directly observed from the economic environment, i.e. strike price, spot price, interest rate, maturity and type of contract. The others include an extra input that is not observable data but a parameter, i.e. volatility. With these detailed data, the performance of the ANN is analyzed along the put/call, American/European and moneyness dimensions, and whether including volatility as an input improves prediction performance is examined. The most striking results revealed by the study are that the ANN performs better when pricing call options than put options, and that the use of the volatility parameter as an input does not improve performance.

Keywords: Option Pricing, Neural Network, S&P 100 Index, American/European options

101 Flow Discharge Determination in Straight Compound Channels Using ANNs

Authors: A. Zahiri, A. A. Dehghani

Abstract:

Although many researchers have studied flow hydraulics in compound channels, there are still many complicated problems in the determination of their flow rating curves. Many different methods have been presented for these channels, but extending them to all types of compound channels with different geometrical and hydraulic conditions is certainly difficult. In this study, with the aid of nearly 400 laboratory and field data sets of geometry and flow rating curves from 30 different straight compound sections, and using artificial neural networks (ANNs), flow discharge in compound channels was estimated. Thirteen dimensionless input variables, including relative depth, relative roughness, relative width, aspect ratio, bed slope, main channel side slopes, flood plain side slopes and berm inclination, and one output variable (flow discharge) have been used in the ANNs. Comparison of the ANN model with a traditional method (the divided channel method, DCM) shows the high accuracy of the ANN model results. Sensitivity analysis showed that relative depth, with a 47.6 percent contribution, is the most effective input parameter for flow discharge prediction. Relative width and relative roughness have 19.3 and 12.2 percent importance, respectively. On the other hand, the shape parameter and the main channel and flood plain side slopes, with 2.1, 3.8 and 3.8 percent contributions, have the least importance.
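
For reference, the traditional baseline the ANN is compared against is simple: DCM sums Manning discharges over the main channel and flood plain subsections. The geometry below is illustrative.

```python
import numpy as np

def dcm_discharge(subsections, slope):
    """Divided channel method: sum of Manning discharges (SI units) over
    (area, wetted perimeter, Manning n) subsections."""
    Q = 0.0
    for area, perimeter, n in subsections:
        R = area / perimeter                       # hydraulic radius
        Q += (1.0 / n) * area * R ** (2.0 / 3.0) * np.sqrt(slope)
    return Q

# Main channel plus two flood plains, illustrative geometry:
sections = [(12.0, 8.0, 0.025), (6.0, 10.0, 0.05), (6.0, 10.0, 0.05)]
print(dcm_discharge(sections, slope=1e-3))
```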

Keywords: ANN model, compound channels, divided channel method (DCM), flow rating curve

100 A Security Model of Voice Eavesdropping Protection over Digital Networks

Authors: Supachai Tangwongsan, Sathaporn Kassuvan

Abstract:

The purpose of this research is to develop a security model for voice eavesdropping protection over digital networks. The proposed model provides an encryption scheme and a personal secret key exchange between communicating parties, a so-called voice data transformation system, resulting in a truly private conversation. The operation of this system comprises two main steps. The first is the personal secret key exchange, whose keys are used in the data encryption process during conversation. The key owner can freely choose keys; it is recommended to exchange a different key with each conversational party and to record the key for each case in the memory provided in the client device. The next step is to set and record another personal encryption option: encrypting either all frames or only partial frames, the so-called 1:M figure. Using different personal secret keys and different 1:M settings with different parties, without the intervention of the service operator, poses quite a big problem for any eavesdropper attempting to discover the key used during a conversation, especially within a short period of time. Thus, the scheme is quite safe and effective in protecting against voice eavesdropping. The results of the implementation indicate that the system can perform its function accurately as designed. In this regard, the proposed system is suitable for effective use in voice eavesdropping protection over digital networks, without any requirement to change existing network systems such as mobile phone networks and VoIP.
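
The 1:M option is easy to picture in code: encrypt one frame out of every M with the per-party key. The hash-derived XOR keystream below is only a stand-in to show where a real cipher plugs in; it is not the paper's encryption scheme and should not be used as actual cryptography.

```python
import hashlib

def frame_cipher(frame, key, counter):
    """XOR a voice frame with a hash-derived keystream (placeholder for a
    real cipher such as AES)."""
    stream = hashlib.sha256(key + counter.to_bytes(8, 'big')).digest()
    return bytes(b ^ s for b, s in zip(frame, stream))

def send(frames, key, M=3):
    """Encrypt every M-th frame (the '1:M' personal option); both the key
    and M are chosen per conversational party."""
    return [frame_cipher(f, key, i) if i % M == 0 else f
            for i, f in enumerate(frames)]

frames = [bytes([i] * 32) for i in range(9)]       # dummy 32-byte frames
sent = send(frames, key=b'per-party-secret', M=3)
```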

Keywords: Computer Security, Encryption, Key Exchange, Security Model, Voice Eavesdropping.

99 Dynamic Analysis of a Moderately Thick Plate on Pasternak Type Foundation under Impact and Moving Loads

Authors: Neslihan Genckal, Reha Gursoy, Vedat Z. Dogan

Abstract:

In this study, the dynamic responses of composite plates on elastic foundations subjected to impact and moving loads are investigated. The first-order shear deformation theory (FSDT) is used for the moderately thick plates, and a Pasternak-type (two-parameter) elastic foundation is assumed, with the foundation effects integrated into the governing equations. It is assumed that the plate is first hit by a mass, as an impact-type loading, and that the mass then continues to move on the composite plate as a distributed moving load, which resembles an aircraft landing on airport pavement. The impact and moving loads are modeled by a mass-spring-damper system with a wheel, and the wheel is assumed to be continuously in contact with the plate after impact. The governing partial differential equations of motion for the displacements are converted into ordinary differential equations in the time domain by using Galerkin's method. These sets of equations are then solved by using the Runge-Kutta method. Several parameters, such as the vertical and horizontal velocities of the aircraft, the volume fractions of the steel rebar in the reinforced concrete layer, and different touchdown locations of the aircraft tire on the runway, are considered in the numerical simulation. The results are compared with those of ABAQUS, a commercial finite element code.
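
The time-domain step can be sketched with a single-mode Galerkin reduction: the Pasternak terms (Winkler stiffness plus shear layer) simply add to the modal stiffness, and the wheel's mass-spring-damper couples through the contact force. All coefficients are illustrative, and scipy's Runge-Kutta integrator stands in for the paper's implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

m_plate, c_plate = 50.0, 20.0               # modal mass and damping
k_modal = 4.0e5 + 1.0e4 + 5.0e3             # plate + Winkler + shear-layer terms
m_w, k_w, c_w = 200.0, 8.0e4, 1.5e3         # wheel mass, spring, damper

def rhs(t, y):
    q, dq, z, dz = y                        # plate modal coord., wheel position
    f_contact = k_w * (z - q) + c_w * (dz - dq)
    return [dq,
            (f_contact - c_plate * dq - k_modal * q) / m_plate,
            dz,
            (-f_contact - m_w * 9.81) / m_w]

# Impact: the wheel arrives slightly above the plate with downward velocity.
sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0, 0.01, -2.0], max_step=1e-3)
```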

Keywords: Elastic foundation, impact, moving load, thick plate.
