Search results for: order
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5192

1142 Effects of Hidden Unit Sizes and Autoregressive Features in Mental Task Classification

Authors: Ramaswamy Palaniappan, Nai-Jen Huan

Abstract:

Classification of electroencephalogram (EEG) signals extracted during mental tasks is a technique that is actively pursued for Brain-Computer Interface (BCI) designs. In this paper, we compared the classification performances of univariate autoregressive (AR) and multivariate autoregressive (MAR) models for representing EEG signals that were extracted during different mental tasks. A Multilayer Perceptron (MLP) neural network (NN) trained by the backpropagation (BP) algorithm was used to classify these features into the different categories representing the mental tasks. Classification performances were also compared across different mental task combinations and two sets of hidden units (HU): 2 to 10 HU in steps of 2, and 20 to 100 HU in steps of 20. Five different mental tasks from four subjects were used in the experimental study, and combinations of two different mental tasks were studied for each subject. Three different sixth-order feature extraction methods were used to extract features from these EEG signals: AR coefficients computed with Burg's algorithm (ARBG), AR coefficients computed with a stepwise least squares algorithm (ARLS), and MAR coefficients computed with a stepwise least squares algorithm. The best results were obtained with 20 to 100 HU using ARBG. It is concluded that i) choosing suitable mental tasks for each individual is important for a successful BCI design, ii) larger numbers of HU are more suitable, and iii) ARBG is the most suitable feature extraction method.
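
A minimal sketch of the feature-plus-classifier pipeline described above, assuming synthetic single-channel EEG segments: the sixth-order AR coefficients are estimated here by ordinary least squares (not Burg's or the stepwise least-squares algorithms of the paper) and fed to a scikit-learn MLP with 40 hidden units as a stand-in for the 20-100 HU range.

```python
# Sketch only: least-squares AR(6) features + MLP classifier on made-up data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def ar_features(signal, order=6):
    """Least-squares AR(p) coefficients of a 1-D signal."""
    x = np.asarray(signal, dtype=float)
    X = np.column_stack([x[order - i - 1:len(x) - i - 1] for i in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

rng = np.random.default_rng(0)
# Hypothetical data: 200 single-channel EEG segments of 250 samples, 2 mental tasks.
segments = rng.standard_normal((200, 250))
labels = rng.integers(0, 2, size=200)

features = np.array([ar_features(s) for s in segments])
X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(40,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```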

Keywords: Autoregressive, Brain-Computer Interface, Electroencephalogram, Neural Network.

1141 Limestone Briquette Production and Characterization

Authors: André C. Silva, Mariana R. Barros, Elenice M. S. Silva, Douglas. Y. Marinho, Diego F. Lopes, Débora N. Sousa, Raphael S. Tomáz

Abstract:

Modern agriculture requires productivity, efficiency and quality. There is therefore a need for agricultural limestone that provides adequate amounts of calcium and magnesium carbonates in order to correct soil acidity. During limestone processing, fine particles (average size under 400 mesh) are generated. These particles have no economic value in the agricultural and metallurgical sectors because of their size. When limestone is used for agricultural purposes, these fine particles are easily transported by wind, generating air pollution. Briquetting, a mineral processing technique, was therefore used to mitigate this problem, resulting in an agglomerated product suitable for agricultural use. Briquetting uses compressive pressure to agglomerate fine particles. It can be aided by agglutination agents, allowing adjustments in the shape, size and mechanical parameters of the mass. Briquettes can generate extra profit for the mineral industry, being a distinct product for agriculture, and can reduce the environmental liabilities associated with the storage or disposal of fine particles. The produced limestone briquettes were subjected to shatter and water-resistance tests. The results show that after six minutes completely submerged in water the briquettes were fully dissolved, a highly favorable result considering their use for soil acidity correction.

Keywords: Agglomeration, briquetting, limestone, agriculture.

1140 Investigation of Plant Density and Weed Competition in Different Cultivars of Wheat In Khoramabad Region

Authors: Ali Khourgami, Masoud Rafiee, Korous Rahmati, Ghobad Bour

Abstract:

In order to study the effect of plant density and the competition of wheat with field bindweed (Convolvulus arvensis) on the yield and agronomic properties of wheat (Triticum sativum) under irrigated conditions, a factorial experiment based on a randomized complete block design with three replications was conducted in a field at Kamalvand in the Khoramabad (Lorestan) region of Iran during 2008-2009. Three plant densities (Factor A = 200, 230 and 260 kg/ha), three cultivars (Factor B = Bahar, Pishtaz and Alvand) and weed control (Factor C = weed control and no weed control) were assigned in the experiment. The results show that plant density had a statistically significant effect on seed yield, 1000-seed weight, weed density and dry weight of weeds, while cultivar had a significant effect on seed yield and harvest index. The interactions between plant density and cultivar were significant for weed density, seed yield, 1000-seed weight and harvest index. A plant density of 260 kg/ha gave the greatest increase in seed yield for the Bahar cultivar in the Khoramabad region of Iran.

Keywords: Convolvulus arvensis, plant density, Triticum sativum, weed density, wheat.

1139 Improvement of Model for SIMMER Code for SFR Corium Relocation Studies

Authors: A. Bachrata, N. Marie, F. Bertrand, J. B. Droin

Abstract:

The in-depth understanding of severe accident propagation in Generation IV nuclear reactors is important so that appropriate risk management can be undertaken early in their design process. This paper is focused on model improvements in the SIMMER code in order to perform studies of severe accident mitigation in Sodium Fast Reactors. During the design process of the mitigation devices dedicated to the extraction of molten fuel from the core region, the molten fuel propagation from the core up to the core catcher has to be studied. To this end, analytical as well as complex thermohydraulic simulations with the SIMMER-III code are performed. The studies presented in this paper focus on the physical phenomena and associated physical models that influence corium relocation. Firstly, the molten pool heat exchange with surrounding structures is analyzed, since it directly influences the instant of rupture of the dedicated tubes that favor corium relocation for mitigation purposes. After the corium penetrates the mitigation tubes, the fuel-coolant interactions result in the formation of a debris bed. Analyses of debris bed fluidization as well as sinking into a fluid are also presented in this paper.

Keywords: Corium, mitigation tubes, SIMMER-III, sodium fast reactor (SFR).

1138 Nurse’s Role in Early Detection of Breast Cancer through Mammography and Genetic Screening and Its Impact on Patient's Outcome

Authors: Salwa Hagag Abdelaziz, Dorria Salem, Hoda Zaki, Suzan Atteya

Abstract:

Early detection of breast cancer saves many thousands of lives each year through the application of mammography and genetic screening, and many more lives could be saved if nurses were involved in breast care screening practices. The aim of the study was therefore to identify the nurse's role in early detection of breast cancer through mammography and genetic screening and its impact on patient outcome. To achieve this aim, 400 asymptomatic women above 40 years of age were recruited for mammography and genetic screening. In addition, 50 nurses and 6 technologists were involved in the study. A descriptive analytical design was used. Five tools were utilized: sociodemographic data; mammographic examination and risk factors; women's data before, during and after mammography; items relating to technologists; and items related to nurses. The study findings revealed malignancy in 3% of the women and fibroadenoma in 7.25%. Statistically significant differences were found between mammography results and age, family history, genetic screening, exposure to smoke, and use of contraceptive pills. Nurses had insufficient knowledge about screening tests. Based on these findings, the present study recommended the involvement of nurses in breast care, which is very important to inform the population about screening practices.

Keywords: Early detection, Genetic Screening, Mammography.

1137 RTCoord: A Methodology to Design WSAN Applications

Authors: J. Barbarán, M. Díaz, I. Esteve, D. Garrido, L. Llopis, B. Rubio

Abstract:

Wireless Sensor and Actor Networks (WSANs) constitute an emerging and pervasive technology that is attracting increasing interest in the research community for a wide range of applications. WSANs have two important requirements: coordination interactions and real-time communication to perform correct and timely actions. This paper introduces a methodology to facilitate the task of the application programmer, focusing on the coordination and real-time requirements of WSANs. The proposed methodology uses a real-time component model, UM-RTCOM, which supports the design and implementation of WSAN applications using the component-oriented paradigm. This allows us to develop software components offering very interesting features, such as reusability and adaptability, which are well suited to WSANs since they are very dynamic environments with rapidly changing conditions. In addition, a high-level coordination model based on tuple channels (TC-WSAN) is integrated into the methodology by providing a component-based specification of this model in UM-RTCOM; this allows us to satisfy both sensor-actor and actor-actor coordination requirements in WSANs. Finally, we present the design and implementation of an application that shows how the methodology can easily be used to develop WSAN applications.

Keywords: Sensor networks, real time and embedded systems.

1136 Robust Design and Optimization of Production Wastes: An Application for Industries

Authors: Christopher C. Ihueze, Charles C. Okpala, Christian E. Okafor, Peter O. Ogbobe

Abstract:

This paper focuses on the robust design and optimization of industrial production wastes. Past literature was reviewed, and Clamason Industries Limited (CIL), a leading ladder-tops manufacturer, was taken as a case study. A painstaking study of the firm's practices on the shop floor revealed that over-production, waiting time, excess inventory, and defects are the major wastes impeding its progress and profitability. Design-Expert 8 software was used to apply Taguchi robust design and response surface methodology in order to model, analyse and optimise the cost of wastes in CIL. Waiting time and over-production rank first and second in contributing to the cost of wastes in CIL. For minimal waste cost, the control factors of over-production, waiting time, defects and excess inventory must be set at 0.30, 390.70, 4 and 55.70 respectively for CIL. The optimal value of the cost of wastes for the months studied was 22.3679. Finally, it is recommended that, for the company to enhance its profitability and customer satisfaction, it must adopt Shigeo Shingo's Single Minute Exchange of Dies (SMED), which will immediately tackle the waste of waiting by drastically reducing setup time.
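
As a rough illustration of the response-surface step only (not the Design-Expert model or data of the paper), the sketch below fits a quadratic surface to hypothetical waste-cost observations for two control factors and locates the settings that minimize the fitted cost.

```python
# Minimal sketch: fit a quadratic response surface z = f(x1, x2) to made-up
# waste-cost data for two control factors (e.g. overproduction and waiting time)
# and find the factor settings that minimize the fitted cost.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x1 = rng.uniform(0.0, 1.0, 40)        # hypothetical overproduction level
x2 = rng.uniform(300.0, 500.0, 40)    # hypothetical waiting time
cost = 20 + 30*(x1 - 0.3)**2 + 0.001*(x2 - 390)**2 + rng.normal(0, 0.5, 40)

def design(a, b):
    # Full quadratic model: 1, a, b, a*b, a^2, b^2
    return np.column_stack([np.ones_like(a), a, b, a*b, a**2, b**2])

beta, *_ = np.linalg.lstsq(design(x1, x2), cost, rcond=None)

def fitted_cost(v):
    return float(design(np.array([v[0]]), np.array([v[1]])) @ beta)

opt = minimize(fitted_cost, x0=[0.5, 400.0])
print("factor settings minimizing fitted cost:", opt.x, "cost:", opt.fun)
```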

Keywords: Excess-inventory, setup time, single minute exchange of dies, optimal value, over-production, robust design.

1135 Fragility Assessment for Vertically Irregular Buildings with Soft Storey

Authors: N. Akhavan, Sh. Tavousi Tafreshi, A. Ghasemi

Abstract:

The seismic behavior of irregular structures over the past decades indicates that such buildings do not have appropriate performance. In this context, the current paper investigates the behavior of special steel moment frames with different vertical configurations of the soft storey. The analysis procedure is based on incremental dynamic analysis (IDA), and the numerical work was carried out with the OpenSees finite element analysis package. To this end, nine 2D steel frames with different numbers of stories and irregularity positions, modeled with the Ibarra-Krawinkler deterioration model and subjected to seven pairs of ground motion records applied orthogonally, have been investigated. This paper aims at evaluating the response of two-dimensional buildings incorporating a soft storey subjected to bi-directional seismic excitation. The IDAs were implemented for different levels of PGA with the various ground motion records in order to determine the maximum inter-storey drift ratio. Based on the statistics of the results and their dispersion (standard deviation), the vulnerability, i.e. the probability of exceedance, of the above-mentioned cases has been examined. For this purpose, fragility curves for the soft storey placed at the first, middle and top floors of 4-, 8- and 16-storey buildings have been generated and compared.
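
A minimal sketch of how a fragility curve can be obtained from IDA output, assuming hypothetical PGA capacities (the PGA at which each run first exceeds a drift limit): a lognormal curve is fitted from the sample statistics of ln(PGA) and evaluated at a few intensity levels. This is an illustration, not the paper's procedure.

```python
# Sketch: lognormal fragility curve fitted to made-up IDA capacities.
import numpy as np
from scipy.stats import norm

# Hypothetical capacities (g) from, say, seven ground-motion records.
pga_capacity = np.array([0.42, 0.55, 0.38, 0.61, 0.47, 0.52, 0.44])

ln_median = np.mean(np.log(pga_capacity))    # ln of median capacity
beta = np.std(np.log(pga_capacity), ddof=1)  # lognormal dispersion

pga_levels = np.linspace(0.1, 1.0, 10)
prob_exceed = norm.cdf((np.log(pga_levels) - ln_median) / beta)

for im, p in zip(pga_levels, prob_exceed):
    print(f"PGA = {im:.2f} g -> P(exceedance) = {p:.3f}")
```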

Keywords: Special steel moment frame, soft storey, incremental dynamic analysis, fragility curve.

1134 A Method of Representing Knowledge of Toolkits in a Pervasive Toolroom Maintenance System

Authors: A. Mohamed Mydeen, Pallapa Venkataram

Abstract:

The learning process needs to be pervasive in order to impart quality in acquiring knowledge about a subject, making use of advances in the field of information and communication systems. However, the pervasive learning paradigms designed so far are of the system-automation type and lack a factual pervasive realm. Providing a factual pervasive realm requires subtle ways of teaching and learning with system intelligence. Augmenting pervasive learning with intelligence necessitates an efficient way of representing knowledge for the system in order to give the right learning material to the learner. This paper presents a method of representing knowledge for a Pervasive Toolroom Maintenance System (PTMS), in which a learner acquires sublime knowledge about the various kinds of tools kept in the toolroom and which also helps with effective maintenance of the toolroom. First, we explicate the generic model of knowledge representation for PTMS. Second, we expound the knowledge representation for specific cases of toolkits in PTMS. We also present the conceptual view of knowledge representation using an ontology for both the generic and specific cases. Third, we devise the relations for pervasive knowledge in PTMS. Finally, events are identified in PTMS and are then linked with the pervasive data of toolkits based on the relations formulated. The experimental environment and case studies show the accuracy and efficiency of the knowledge representation of toolkits in PTMS.

Keywords: Generic knowledge representation, toolkit, toolroom, pervasive computing.

1133 Determination of an Efficient Differentiation Pathway of Stem Cells Employing Predictory Neural Network Model

Authors: Mughal Yar M, Israr Ul Haq, Bushra Noman

Abstract:

Stem cells have the ability to differentiate through mitotic cell division into a wide range of specialized cell types. Cellular differentiation is the process by which a less specialized cell develops into a more specialized one. This paper studies the fundamental problem of a computational schema for an artificial neural network based on chemical, physical and biological state variables. Through this type of study, the system could be modeled for the viable propagation of various economically important stem cell differentiations. This paper proposes various differentiation outcomes of an artificial neural network into a variety of potential specialized cells, implemented in MATLAB version 2009. A feed-forward backpropagation network was created with a five-element input vector, a single hidden layer, and one output unit in the output layer. The efficiency of the neural network was evaluated by comparing the results achieved in this study with the experimental input data and the chosen target data. The proposed solution was assessed through a comparative analysis of the mean square error at zero epochs. Different data variables were used in order to test the targeted results.

Keywords: Computational schema, meiosis, mitosis, neural network, stem cell, SOM.

1132 Building a Personalized Multidimensional Intelligent Learning System

Authors: Lun-Ping Hung, Nan-Chen Hsieh, Chia-Ling Ho, Chien-Liang Chen

Abstract:

Currently, most distance learning courses can only deliver standard material to students. Students receive course content passively, which leads to the neglect of the goal of education: to suit the teaching to the ability of the students. Providing appropriate course content according to students' ability is the main goal of this paper. Besides offering a series of conventional learning services, abundant information, and instant message delivery, a complete online learning environment should be able to distinguish between students' abilities and provide learning courses that best suit them. However, if a distance learning site contains well-designed course content but fails to provide adaptive courses, students will gradually lose their interest and confidence in learning, resulting in ineffective or discontinued learning. In this paper, an intelligent tutoring system is proposed; it consists of several modules working cooperatively in order to build an adaptive learning environment for distance education. The operation of the system is based on the result of a Self-Organizing Map (SOM), which divides students into different groups according to their learning ability and learning interests and then provides them with suitable course content. Accordingly, the problems of information overload and internet traffic can be alleviated because the amount of traffic accessing the same content is reduced.
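
A toy illustration of the SOM-based grouping idea, assuming each learner is described by a small feature vector (e.g. quiz scores, time on task, interest ratings); the short numpy SOM below is a sketch, not the system described in the paper.

```python
# Sketch: tiny Self-Organizing Map that groups learners into map cells.
import numpy as np

def train_som(data, grid=(3, 3), epochs=200, lr0=0.5, sigma0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n_rows, n_cols = grid
    weights = rng.random((n_rows, n_cols, data.shape[1]))
    coords = np.array([[i, j] for i in range(n_rows) for j in range(n_cols)],
                      dtype=float).reshape(n_rows, n_cols, 2)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)         # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)   # decaying neighborhood radius
        for x in data[rng.permutation(len(data))]:
            dist = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dist), dist.shape)  # best matching unit
            grid_dist = np.linalg.norm(coords - coords[bmu], axis=2)
            h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)
    return weights

def assign_group(weights, x):
    dist = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(dist), dist.shape)

rng = np.random.default_rng(1)
learners = rng.random((60, 4))          # hypothetical learner feature vectors
som = train_som(learners)
print("learner 0 assigned to map cell:", assign_group(som, learners[0]))
```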

Keywords: Distance Learning, Intelligent Tutoring System (ITS), Self-Organizing Map (SOM).

1131 Learning to Order Terms: Supervised Interestingness Measures in Terminology Extraction

Authors: Jérôme Azé, Mathieu Roche, Yves Kodratoff, Michèle Sebag

Abstract:

Term extraction, a key data preparation step in text mining, extracts the terms, i.e. relevant collocations of words, attached to specific concepts (e.g. genetic algorithms and decision trees are terms associated with the concept "Machine Learning"). In this paper, the task of extracting interesting collocations is achieved through a supervised learning algorithm, exploiting a few collocations manually labelled as interesting/not interesting. From these examples, the ROGER algorithm learns a numerical function inducing a ranking on the collocations. This ranking is optimized using genetic algorithms, maximizing the trade-off between the false positive and true positive rates (Area Under the ROC Curve). This approach uses a particular representation for the word collocations, namely the vector of values corresponding to the standard statistical interestingness measures attached to each collocation. As this representation is general (over corpora and natural languages), generality tests were performed by applying the ranking function learned from an English corpus in biology to a French corpus of curricula vitae, and vice versa, showing good robustness of the approach compared to the state-of-the-art Support Vector Machine (SVM).
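
A minimal sketch of the evaluation criterion only (not the ROGER algorithm itself), assuming made-up interestingness measures and labels: a linear ranking function scores each collocation, and the ranking quality is measured by the Area Under the ROC Curve; a genetic algorithm would then search for the weights that maximize this AUC.

```python
# Sketch: AUC of a candidate linear ranking function over collocation features.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
measures = rng.random((100, 5))     # hypothetical interestingness measures
labels = np.repeat([0, 1], 50)      # 1 = manually labelled interesting

weights = rng.normal(size=5)        # one candidate ranking function
scores = measures @ weights         # higher score = ranked as more interesting

# With random data this AUC will be near 0.5; an evolutionary search would
# iterate over weight vectors to maximize it.
print("AUC of this ranking:", roc_auc_score(labels, scores))
```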

Keywords: Text-mining, Terminology Extraction, Evolutionary algorithm, ROC Curve.

1130 TiO2 Nanowires as Efficient Heterogeneous Photocatalysts for Waste-Water Treatment

Authors: Gul Afreen, Sreedevi Upadhyayula, Mahendra K. Sunkara

Abstract:

One-dimensional (1D) nanostructures like nanowires, nanotubes, and nanorods find a variety of practical applications owing to their unique physico-chemical properties. In this work, TiO2 nanowires were synthesized by direct oxidation of titanium particles in a unique microwave plasma jet reactor. The prepared TiO2 nanowires exhibited flexible features and were characterized using X-ray diffraction, a Brunauer-Emmett-Teller (BET) surface area analyzer, UV-Visible and FTIR spectrophotometers, scanning electron microscopy, and transmission electron microscopy. Further, the photodegradation efficiency of these nanowires was tested against a toxic organic dye, methylene blue (MB), and the results were compared with commercial TiO2. It was found that the TiO2 nanowires exhibited superior photocatalytic performance (89%) compared to commercial TiO2 (75%) after 60 min of reaction. This is attributed to the lower recombination rate and increased interfacial charge transfer in the TiO2 nanowires. Pseudo-first order kinetic modelling performed with the experimental results revealed that the rate constant of photodegradation for the TiO2 nanowires was 1.3 times higher than that of commercial TiO2. The superoxide radical (O2˙) was found to be the major contributor to the photodegradation mechanism. Based on the trapping experiments, a plausible mechanism of the photocatalytic reaction is discussed.
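
A minimal sketch of the pseudo-first-order fit, assuming illustrative (not measured) methylene blue concentration ratios: ln(C0/C) = kt, so the rate constant k is the slope of a straight-line fit of ln(C0/C) against time.

```python
# Sketch: pseudo-first-order rate constants from made-up C/C0 decay curves.
import numpy as np

t = np.array([0, 10, 20, 30, 40, 50, 60], dtype=float)             # minutes
c_nanowire = np.array([1.00, 0.70, 0.48, 0.33, 0.23, 0.16, 0.11])  # C/C0
c_commercial = np.array([1.00, 0.78, 0.61, 0.48, 0.38, 0.30, 0.25])

def rate_constant(t, c_ratio):
    # ln(C0/C) = k*t  ->  k is the slope of ln(1 / (C/C0)) versus t
    slope, _ = np.polyfit(t, np.log(1.0 / c_ratio), 1)
    return slope

k_nw = rate_constant(t, c_nanowire)
k_com = rate_constant(t, c_commercial)
print(f"k(nanowire) = {k_nw:.4f} 1/min, k(commercial) = {k_com:.4f} 1/min, "
      f"ratio = {k_nw / k_com:.2f}")
```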

Keywords: Heterogeneous catalysis, photodegradation, reactive oxygen species, TiO2 nanowires.

1129 Design and Synthesis of Two Tunable Bandpass Filters Based On Varactors and Defected Ground Structure

Authors: M. Boulakroune, M. Challal, H. Louazene, S. Fentiz

Abstract:

This paper presents two types of microstrip bandpass filter (BPF) at microwave frequencies. The first one is a tunable BPF using planar patch resonators based on a varactor diode. The filter is formed by a triple-mode circular patch resonator with two pairs of slots, in which the varactor diodes are connected. This filter is initially centered at 2.4 GHz; the center frequency of the tunable patch filter can be tuned down to 1.8 GHz simultaneously with the bandwidth, reaching a high tuning range. Lossless simulations were compared to those considering the substrate dielectric, conductor losses and the equivalent electrical circuit model of the tuning element in order to assess their effects. Within these variations, simulation results showed an insertion loss better than 2 dB and a return loss better than 10 dB over the passband. The second structure is a BPF for ultra-wideband (UWB) applications based on a multiple-mode resonator (MMR) and a rectangular-shaped defected ground structure (DGS). This filter, which has a compact size of 25.2 x 3.8 mm2, provides an insertion loss of 0.57 dB and a return loss greater than 12 dB in the passband. The proposed filters present good performance, and the simulation results are in satisfactory agreement with the experimental ones reported elsewhere.

Keywords: Defected ground structure, varactor diode, microstrip bandpass filter, multiple-mode resonator.

1128 Computer Verification in Cryptography

Authors: Markus Kaiser, Johannes Buchmann

Abstract:

In this paper we explore the application of a formal proof system to verification problems in cryptography. Cryptographic properties concerning the correctness or security of some cryptographic algorithms are of great interest. Besides some basic lemmata, we explore an implementation of a complex function that is used in cryptography. More precisely, we describe formal properties of this implementation that we prove by computer. We describe formalized probability distributions (σ-algebras, probability spaces and conditional probabilities). These are given in the formal language of the formal proof system Isabelle/HOL. Moreover, we computer-prove Bayes' formula. Besides, we describe an application of the presented formalized probability distributions to cryptography. Furthermore, this paper shows that computer proofs of complex cryptographic functions are possible by presenting an implementation of the Miller-Rabin primality test that admits formal verification. Our achievements are a step towards computer verification of cryptographic primitives. They describe a basis for computer verification in cryptography. Computer verification can be applied to further problems in cryptographic research if the corresponding basic mathematical knowledge is available in a database.
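
For illustration, a plain (unverified) Python version of the Miller-Rabin probabilistic primality test referred to above; the paper's contribution is the formally verified implementation in Isabelle/HOL, which this sketch does not reproduce.

```python
# Sketch: standard Miller-Rabin probabilistic primality test.
import random

def miller_rabin(n, rounds=20):
    """Return False if n is composite, True if n is probably prime."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 as 2^r * d with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

print(miller_rabin(2**61 - 1))  # True: 2^61 - 1 is a Mersenne prime
print(miller_rabin(2**61 + 1))  # False: divisible by 3
```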

Keywords: prime numbers, primality tests, (conditional) probability distributions, formal proof system, higher-order logic, formal verification, Bayes' Formula, Miller-Rabin primality test.

1127 Vaccinated Susceptible Infected and Recovered (VSIR) Mathematical Model to Study the Effect of Bacillus Calmette-Guerin (BCG) Vaccine and the Disease Stability Analysis

Authors: Muhammad Shahid, Nasir-uddin Khan, Mushtaq Hussain, Muhammad Liaquat Ali, Asif Mansoor

Abstract:

Tuberculosis (TB) remains a leading cause of infectious mortality. It is primarily transmitted by the respiratory route; individuals with active disease may infect others through airborne particles released when they cough, talk, or sing, which are subsequently inhaled by others. In order to study the effect of the Bacillus Calmette-Guerin (BCG) vaccine after vaccination of TB patients, a Vaccinated Susceptible Infected and Recovered (VSIR) mathematical model is developed to achieve the desired objectives. The mathematical model so developed shall be used to quantify the effect of the BCG vaccine in protecting immigrant young adults. Moreover, equations are established for the disease-endemic and disease-free equilibrium states and subsequently utilized in the disease stability analysis. The stability analysis gives a complete picture of disease annihilation from the total population if the total removal rate from the infectious group is greater than the total number of dormant infections produced throughout the infectious period.
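
A minimal sketch of a generic VSIR-type compartment model; the paper's exact equations and parameter values are not given in the abstract, so the terms and numbers below are illustrative assumptions only.

```python
# Sketch: generic VSIR compartments integrated with scipy (illustrative rates).
import numpy as np
from scipy.integrate import odeint

beta, sigma = 0.35, 0.4            # transmission rate; vaccine leakiness (0 = perfect)
gamma, mu, nu = 0.1, 0.01, 0.05    # recovery, natural death, vaccination rates

def vsir(y, t):
    V, S, I, R = y
    dV = nu * S - sigma * beta * V * I - mu * V
    dS = mu - beta * S * I - nu * S - mu * S      # births replace natural deaths
    dI = beta * S * I + sigma * beta * V * I - gamma * I - mu * I
    dR = gamma * I - mu * R
    return [dV, dS, dI, dR]

y0 = [0.0, 0.99, 0.01, 0.0]        # fractions of the population
t = np.linspace(0, 400, 2001)
V, S, I, R = odeint(vsir, y0, t).T
print("peak infected fraction:", I.max())
print("final infected fraction:", I[-1])
```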

Keywords: Bacillus Calmette-Guerin vaccine, disease-free equilibrium state, VSIR Quantification, disease stability analysis.

1126 Fuzzy Control of the Air Conditioning System at Different Operating Pressures

Authors: Mohanad Alata, Moh'd Al-Nimr, Rami Al-Jarrah

Abstract:

The present work demonstrates the design and simulation of fuzzy control of an air conditioning system at different pressures. A first-order Sugeno fuzzy inference system is utilized to model the system and create the controller. In addition, an estimation of the heat transfer rate and the water mass flow rate injected into or withdrawn from the air conditioning system is determined by the fuzzy IF-THEN rules. The approach starts by generating the input/output data. Then, the subtractive clustering algorithm along with least squares estimation (LSE) generates the fuzzy rules that describe the relationship between the input/output data. The fuzzy rules are tuned by an Adaptive Neuro-Fuzzy Inference System (ANFIS). The results show that as the pressure increases, the water flow rate and heat transfer rate decrease within the lower range of inlet dry-bulb temperatures. On the other hand, as the pressure increases, the water flow rate and heat transfer rate increase within the higher range of inlet dry-bulb temperatures. The inflection in the pressure effect trend occurs at lower temperatures as the inlet air humidity increases.
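
A minimal sketch of first-order Sugeno (TSK) inference with two made-up rules over a single input (inlet dry-bulb temperature); it is not the ANFIS model identified by subtractive clustering in the paper, but it shows how firing strengths weight the linear rule consequents.

```python
# Sketch: first-order Sugeno inference with Gaussian memberships (toy rules).
import numpy as np

def gauss(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

# Rule 1: IF temperature is LOW  THEN q = 0.8*T + 5
# Rule 2: IF temperature is HIGH THEN q = 1.5*T - 10   (coefficients are made up)
rules = [
    {"mf": (15.0, 5.0), "coeffs": (0.8, 5.0)},
    {"mf": (35.0, 5.0), "coeffs": (1.5, -10.0)},
]

def sugeno_output(T):
    w = np.array([gauss(T, *r["mf"]) for r in rules])               # firing strengths
    y = np.array([a * T + b for a, b in (r["coeffs"] for r in rules)])
    return float(np.sum(w * y) / np.sum(w))                          # weighted average

for T in (15.0, 25.0, 35.0):
    print(f"T = {T} C -> estimated heat transfer rate = {sugeno_output(T):.2f}")
```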

Keywords: Air Conditioning, ANFIS, Fuzzy Control, Sugeno System.

1125 Investigating the Effect of Refinancing on Financial Behavior of Energy Efficiency Projects

Authors: Zohreh Soltani, Seyedmohammadhossein Hosseinian

Abstract:

Reduction of energy consumption in built infrastructure, through the installation of energy-efficient technologies, is a major approach to achieving sustainability. In practice, the viability of energy efficiency projects strongly depends on cost reimbursement and profitability. These projects are subject to failure if the actual cost savings do not reimburse the project cost promptly. In such cases, refinancing could be a solution for benefiting from the long-term returns of the project, if implemented wisely. However, very little is known about the effect of refinancing options on the financial performance of energy efficiency projects. In order to fill this gap, the present study investigates the financial behavior of energy efficiency projects with a focus on refinancing options such as leveraged loans. A System Dynamics (SD) model is introduced, and the model application is presented using actual case-study data. The case study results indicate that while high start-up interest rates make using a leveraged loan inevitable, refinancing can rescue the project and bring about profitability. This paper also presents some managerial implications of refinancing energy efficiency projects based on the case-study analysis. The results of this study help in implementing financially viable energy efficiency projects so that communities can benefit widely from their environmental advantages.

Keywords: Energy efficiency projects, leveraged loan, refinancing, sustainability.

1124 Classifier Based Text Mining for Neural Network

Authors: M. Govindarajan, R. M. Chandrasekaran

Abstract:

Text mining is about applying knowledge discovery techniques to unstructured text; this is termed knowledge discovery in text (KDT), text data mining, or text mining. In neural networks that address classification problems, the training set, testing set and learning rate are considered key elements: a collection of input/output patterns is used to train the network and to assess the network performance, and the learning rate sets the rate of adjustment. This paper describes a proposed backpropagation neural net classifier that performs cross validation for the original neural network, in order to improve classification accuracy and reduce training time. The feasibility and benefits of the proposed approach are demonstrated by means of five data sets: contact-lenses, cpu, weather symbolic, weather, and labor-nega-data. It is shown that, compared to the existing neural network, the training time is reduced by more than a factor of 10 when the dataset is larger than cpu or the network has many hidden units, while the accuracy ('percent correct') was the same for all datasets except contact-lenses, which is the only one with missing attributes. For contact-lenses, the accuracy with the proposed neural network was on average around 0.3% lower than with the original neural network. The algorithm is independent of specific data sets, so many of its ideas and solutions can be transferred to other classifier paradigms.

Keywords: Back propagation, classification accuracy, text mining, time complexity.

1123 Well-Being Inequality Using Superimposing Satisfaction Waves: Heisenberg Uncertainty in Behavioural Economics and Econometrics

Authors: Okay Gunes

Abstract:

In this article, a new method is proposed for measuring well-being inequality through a model composed of superimposing satisfaction waves. The displacement of a household's satisfactory state (i.e. satisfaction) is defined in a satisfaction string. The duration of the satisfactory state for a given period is measured in order to determine the relationship between utility and total satisfactory time, itself dependent on the density and tension of each satisfaction string. Thus, individual cardinal total satisfaction values are computed by way of a one-dimensional scalar sinusoidal (harmonic) moving wave function, using satisfaction waves with varying amplitudes and frequencies, which allows us to measure well-being inequality. One advantage of using satisfaction waves is the ability to show that individual utility and consumption amounts would probably not commute; hence, it is impossible to measure or to know simultaneously the values of these observables from the dataset. Thus, we crystallize the problem by using a Heisenberg-type uncertainty resolution for self-adjoint economic operators. We propose to eliminate any estimation bias by correlating the standard deviations of selected economic operators; this is achieved by replacing the aforementioned observed uncertainties with households' perceived uncertainties (i.e. corrected standard deviations) obtained through the logarithmic psychophysical law proposed by Weber and Fechner.

Keywords: Heisenberg Uncertainty Principle, superimposing satisfaction waves, Weber–Fechner law, well-being inequality.

1122 Trajectory Tracking of a Redundant Hybrid Manipulator Using a Switching Control Method

Authors: Atilla Bayram

Abstract:

This paper presents the trajectory tracking control of a spatial redundant hybrid manipulator. This manipulator consists of two parallel manipulators, each of which is a variable geometry truss (VGT) module. In fact, each VGT module with 3 degrees of freedom (DOF) is a planar parallel manipulator, and the operational planes of these VGT modules are arranged to be orthogonal to each other. The manipulator also contains a twist motion part attached to the top of the second VGT module to supply the missing orientation of the end-effector. Together, these three modules constitute a 7-DOF hybrid (parallel-parallel) redundant spatial manipulator. The forward kinematics equations of this manipulator are obtained; then, according to these equations, the inverse kinematics is solved based on an optimization with joint limit avoidance. The dynamic equations are formed by using the virtual work method. In order to test the performance of the redundant manipulator and the controllers presented, two different desired trajectories are followed using the computed force control method and a switching control method. The switching control method combines the computed force control method with a genetic algorithm. In the switching control method, the genetic algorithm is used only for fine tuning in the compensation of the trajectory tracking errors.

Keywords: Computed force control method, genetic algorithm, hybrid manipulator, inverse kinematics of redundant manipulators, variable geometry truss.

1121 A Distributed Cryptographically Generated Address Computing Algorithm for Secure Neighbor Discovery Protocol in IPv6

Authors: M. Moslehpour, S. Khorsandi

Abstract:

Due to the shortage of IPv4 addresses, the transition to IPv6 has gained significant momentum in recent years. Like the Address Resolution Protocol (ARP) in IPv4, the Neighbor Discovery Protocol (NDP) provides functions such as address resolution in IPv6. Despite its functionality, NDP is vulnerable to several attacks. To mitigate these attacks, Internet Protocol Security (IPsec) was introduced, but it was not efficient due to its limitations. Therefore, the SEND protocol was proposed for automatic protection of the auto-configuration process; it secures the neighbor discovery and address resolution process. To defend against threats to NDP's integrity and identity, Cryptographically Generated Addresses (CGA) and asymmetric cryptography are used by SEND. Besides the advantages of SEND, its disadvantages, such as the computational cost of the CGA algorithm and the sequential nature of CGA generation, are considerable. In this paper, we parallelize this process among network resources in order to improve it. In addition, we compare the CGA generation time for self-computing and distributed-computing processes. We focus on the impact of malicious nodes on the CGA generation time in the network. According to the results, even though malicious nodes participate in the generation process, the CGA generation time is less than when it is computed in a one-way (self-computing) manner. With a Trust Management System, detecting and isolating malicious nodes is easier.
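
A simplified sketch of the costly step that SEND's CGA generation parallelizes: a brute-force search for a modifier whose hash has 16*sec leading zero bits. It follows the spirit of RFC 3972 but is not a conformant implementation (it hashes only the modifier and public key and omits the full CGA parameter block).

```python
# Sketch: brute-force modifier search in the style of CGA generation.
import hashlib
import os

def find_modifier(public_key: bytes, sec: int):
    """Search modifiers until SHA-1(modifier || public_key) has 16*sec leading zero bits."""
    zero_bits = 16 * sec
    attempts = 0
    while True:
        attempts += 1
        modifier = os.urandom(16)
        digest = hashlib.sha1(modifier + public_key).digest()
        value = int.from_bytes(digest, "big")
        if value >> (160 - zero_bits) == 0:
            return modifier, attempts

pubkey = os.urandom(64)                  # stand-in for an encoded public key
modifier, tries = find_modifier(pubkey, sec=1)
print(f"found modifier after {tries} attempts: {modifier.hex()}")
```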

Keywords: NDP, IPsec, SEND, CGA, Modifier, Malicious node, Self-Computing, Distributed-Computing.

1120 Distributed Generator Placement for Loss Reduction and Improvement in Reliability

Authors: Priyanka Paliwal, N.P. Patidar

Abstract:

Distributed power generation has gained a lot of attention in recent times due to the constraints associated with conventional power generation and new advancements in DG technologies. The need to operate the power system economically and with optimum levels of reliability has further increased the interest in Distributed Generation. However, it is important to place the Distributed Generator at an optimum location so that the purpose of loss minimization and voltage regulation is duly served on the feeder. This paper investigates the impact of DG unit installation on the electric losses, reliability and voltage profile of distribution networks. Our aim is to find the optimal distributed generation allocation for loss reduction subject to the constraint of voltage regulation in the distribution network. The system is further analyzed for increased levels of reliability. The Distributed Generator offers the additional advantage of an increase in reliability levels, as suggested by the improvements in reliability indices such as SAIDI, CAIDI and AENS. Comparative studies are performed and the related results are addressed. An analytical technique is used to find the optimal location of the Distributed Generator, and the suggested technique is programmed in MATLAB. The results clearly indicate that DG can reduce the electrical line losses while simultaneously improving the reliability of the system.
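
A minimal sketch of the reliability indices mentioned above, computed from a hypothetical interruption log with the usual definitions: SAIFI and SAIDI are interruption counts and durations per customer served, CAIDI = SAIDI/SAIFI, and AENS is the energy not supplied per customer served.

```python
# Sketch: SAIFI, SAIDI, CAIDI and AENS from a made-up interruption log.
interruptions = [
    # (customers affected, outage duration in hours, load not supplied in kW)
    (120, 1.5, 300.0),
    (450, 0.5, 900.0),
    (80,  3.0, 150.0),
]
customers_served = 2000

saifi = sum(n for n, _, _ in interruptions) / customers_served
saidi = sum(n * d for n, d, _ in interruptions) / customers_served
caidi = saidi / saifi
aens = sum(p * d for _, d, p in interruptions) / customers_served

print(f"SAIFI = {saifi:.3f} int./cust., SAIDI = {saidi:.3f} h/cust., "
      f"CAIDI = {caidi:.3f} h/int., AENS = {aens:.3f} kWh/cust.")
```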

Keywords: AENS, CAIDI, Distributed Generation, loss reduction, Reliability, SAIDI.

1119 Expert-Driving-Criteria Based on Fuzzy Logic Approach for Intelligent Driving Diagnosis

Authors: Andrés C. Cuervo Pinilla, Christian G. Quintero M., Chinthaka Premachandra

Abstract:

This paper considers the diagnosis of people's driving skills under real driving conditions. In that sense, this research presents an approach that uses GPS signals, which have a direct correlation with driving maneuvers. In addition, a novel expert-driving-criteria approximation using fuzzy logic is presented, which seeks to analyze GPS signals in order to issue an intelligent driving diagnosis. Based on the above, the first section of this work presents the intelligent driving diagnosis system approach in terms of its characteristic properties, explaining in detail significant considerations about how an expert-driving-criteria approximation must be developed. In the next section, the implementation of the developed system based on the proposed fuzzy logic approach is explained. Here, a proposed set of rules, corresponding to a quantitative abstraction of some traffic laws and safe driving techniques and seeking to approach an expert-driving-criteria approximation, is presented. Experimental testing has been performed under real driving conditions. The testing results show that the intelligent driving diagnosis system qualifies the driver's performance quantitatively with a high degree of reliability.

Keywords: Driver support systems, intelligent transportation systems, fuzzy logic, real time data processing.

1118 High Performance of Direct Torque and Flux Control of a Double Stator Induction Motor Drive with a Fuzzy Stator Resistance Estimator

Authors: K. Kouzi

Abstract:

In order to achieve stable and high-performance direct torque and flux control (DTFC) of a double star induction motor (DSIM) drive, proper on-line adaptation of the stator resistance is very important. This is due to the variation of the stator resistance during operating conditions, which introduces error into the estimated flux position and the magnitude of the stator flux. Error in the estimated stator flux deteriorates the performance of the DTFC drive. The effect of the estimation error is especially important at low speed. For this reason, our aim is to overcome the sensitivity of the DTFC to stator resistance variation by proposing on-line fuzzy estimation of the stator resistance. The fuzzy estimation method is based on an on-line stator resistance correction through the stator current estimation error and its variation. The fuzzy logic controller gives the future stator resistance increment at its output. The main advantage of the suggested control algorithm is to avoid the drive instability that may occur in certain situations and to ensure tracking of the actual stator resistance. The validity of the technique and the improvement of the whole system performance are proved by the results.

Keywords: Direct torque control, dual stator induction motor, fuzzy logic estimation, stator resistance adaptation.

1117 Correlation and Prediction of Biodiesel Density

Authors: Nieves M. C. Talavera-Prieto, Abel G. M. Ferreira, António T. G. Portugal, Rui J. Moreira, Jaime B. Santos

Abstract:

Knowledge of biodiesel density over large ranges of temperature and pressure is important for predicting the behavior of fuel injection and combustion systems in diesel engines, and for the optimization of such systems. In this study, cottonseed oil was transesterified into biodiesel and its density was measured at temperatures between 288 K and 358 K and pressures between 0.1 MPa and 30 MPa, with the expanded uncertainty estimated as ±1.6 kg·m-3. Experimental pressure-volume-temperature (pVT) cottonseed data were used along with literature data for 18 other biodiesels in order to build a database used to test the correlation of density with temperature and pressure using the Goharshadi-Morsali-Abbaspour equation of state (GMA EoS). To our knowledge, this is the first time that density measurements are presented for cottonseed biodiesel under such high pressures and that the GMA EoS is used to model biodiesel density. The newly tested EoS allowed correlations within 0.2 kg·m-3, corresponding to average relative deviations within 0.02%. The database built was used to develop and test a new fully predictive model derived from the observed linear relation between density and degree of unsaturation (DU), which depends on the biodiesel FAME profile. The average density deviation of this method was only about 3 kg·m-3 within the temperature and pressure limits of application. These results represent appreciable improvements in the context of density prediction at high pressure when compared with other equations of state.

Keywords: Biodiesel, Correlation, Density, Equation of state, Prediction.

1116 Enhanced Method of Conceptual Sizing of Aircraft Electro-Thermal De-icing System

Authors: Ahmed Shinkafi, Craig Lawson

Abstract:

There is great advancement towards All-Electric Aircraft (AEA) technology. The AEA concept assumes that all aircraft systems will be integrated into one electrical power source in the future. The principle of the electro-thermal system is to transfer the energy required for anti-icing/de-icing to the protected areas in electrical form. However, powering a large aircraft anti-icing system electrically could be quite excessive in cost and system weight. Hence, maximising the anti-icing/de-icing efficiency of the electro-thermal system in order to minimise its power demand has become crucial to electro-thermal de-icing system sizing. In this work, an enhanced methodology has been developed for the conceptual sizing of aircraft electro-thermal de-icing systems. The work factors in terms that were overlooked in previous studies but are critical to de-icing energy consumption. A case study of the de-icing of a typical large aircraft wing was used to test and validate the model. The model was used to optimise the system performance by a trade-off between the de-icing peak power and the system energy consumption. The optimum melting surface temperatures and energy flux predicted enabled a reduction in the power required for de-icing. The weight penalty associated with the electro-thermal anti-icing/de-icing method could be eliminated using this method without underestimating the de-icing power requirement.

Keywords: Aircraft de-icing system, electro-thermal, in-flight icing.

1115 The Ecological Footprint of Tourism in Jalapão/TO/Brazil

Authors: Mary L. G. S. Senna, Afonso R. Aquino

Abstract:

The development of tourism causes negative impacts on the environment. It is in this context that this study, using the Ecological Footprint (EF) method, aimed to characterize the impacts of ecotourism on the community of Mateiros, Jalapão, Brazil. The EF in its original form is a method to construct a land use matrix, considering some major categories of human consumption such as food, housing, transportation, consumer goods and services, and six other categories of main land uses, divided into the topics: land use, degraded environment, gardens, fertile land, pasture and forests protected by the government. The main objective of this index is to calculate the land area required for the production and maintenance of goods and services consumed by a community. The field research was conducted throughout 2014 and until July 2015. After the calculations for each category, these components were added according to the presented method in order to determine the annual EF of the tourism sector in Mateiros. The results show that the EF resulting from tourism in Mateiros is 2,194.22 hectares of land required for tourism activities in the region. The EF of tourism was considered high: adding up the hectares needed annually for tourism activities, 2,194.22 hectares are required to absorb the CO2 emissions generated in the region directly by the tourism sector.

Keywords: Sustainable tourism, tourism ecological footprint, Jalapão/TO/Brazil.

1114 Flowability and Strength Development Characteristics of Bottom Ash Based Geopolymer

Authors: Si-Hwan Kim, Gum-Sung Ryu, Kyung-Taek Koh, Jang-Hwa Lee

Abstract:

Despite the preponderant role played by cement among construction materials, it is today considered a material that harms the environment due to the large quantities of carbon dioxide exhausted during its manufacture. Besides, global warming is now recognized worldwide as a new threat to humankind, against which advanced countries are investigating measures to reduce the current amount of exhausted gases to half by 2050. Accordingly, efforts to reduce greenhouse gases are being exerted in all industrial fields. In particular, the cement industry strives to reduce the consumption of cement through the development of alkali-activated geopolymer mortars using industrial byproducts like bottom ash. This study intends to gather basic data on the flowability and strength development characteristics of alkali-activated geopolymer mortar by examining its FT-IR features with respect to the effects and strength of the alkali-activator, in order to develop bottom ash-based alkali-activated geopolymer mortar. The results show that a 35:65 mass ratio of sodium hydroxide to sodium silicate is appropriate and that a molarity of 9 M for sodium hydroxide is advantageous. The ratio of the alkali-activators to bottom ash is seen to have little effect on the strength. Moreover, the FT-IR analysis reveals that larger improvement of the strength shifts the peak from 1060 cm-1 (T-O, T = Si or Al) toward shorter wavenumbers.

Keywords: Bottom Ash, Geopolymer mortar, Flowability, Strength Properties.

1113 Rating the Importance of Customer Requirements for Green Product Using Analytic Hierarchy Process Methodology

Authors: Lara F. Horani, Shurong Tong

Abstract:

Identification of customer requirements and their preferences is the starting point in the process of product design. Most design methodologies focus on traditional requirements. But in the previous decade, green products and environmental requirements have increasingly attracted attention with the constant increase in the level of consumer awareness of environmental problems (such as the greenhouse effect, global warming, pollution, the energy crisis, and waste management). Determining the importance weights of the customer requirements is an essential and crucial process. This paper used the analytic hierarchy process (AHP) approach to evaluate and rate the customer requirements for green products. With respect to the ultimate goal of customer satisfaction, surveys are conducted using a five-point scale analysis. With the help of this scale, one can derive the weight vectors. This approach can improve the imprecise ranking of customer requirements inherited from studies based on the conventional AHP. Furthermore, the AHP with extent analysis is simple and easy to implement for prioritizing customer requirements. The research is based on data collected through a questionnaire survey conducted over a sample of 160 people belonging to different age, marital status, education and income groups in order to identify customer preferences for green product requirements.
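
A minimal sketch of the conventional AHP weighting step (not the extent-analysis variant discussed above), assuming three hypothetical green-product requirements: priorities come from the principal eigenvector of a reciprocal pairwise comparison matrix, with a consistency-ratio check.

```python
# Sketch: AHP priority weights from a reciprocal pairwise comparison matrix.
import numpy as np

criteria = ["recyclable materials", "energy efficiency", "low emissions"]
# A[i, j] = how much criterion i is preferred over criterion j (Saaty 1-9 scale).
A = np.array([
    [1.0, 3.0, 0.5],
    [1/3, 1.0, 0.25],
    [2.0, 4.0, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal (Perron) eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)        # consistency index
cr = ci / 0.58                              # random index for n = 3 is 0.58
for name, w in zip(criteria, weights):
    print(f"{name}: weight = {w:.3f}")
print(f"consistency ratio = {cr:.3f} (acceptable if < 0.10)")
```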

Keywords: Analytic hierarchy process, green product, customer requirements for green design, importance weights for the customer requirements.
