Search results for: Decision making style

442 Hybrid of Hunting Search and Modified Simplex Methods for Grease Position Parameter Design Optimisation

Authors: P. Luangpaiboon, S. Boonhao

Abstract:

This study proposes a multi-response surface optimization problem (MRSOP) for determining the proper choices of a process parameter design (PPD) decision problem in a noisy environment of a grease position process in the electronics industry. The proposed model attempts to maximize dual process responses on the mean parts between failure of the left and right processes. The conventional modified simplex method and its hybridization with the stochastic operator from the hunting search algorithm are applied to determine the proper levels of the controllable design parameters affecting the quality performances. A numerical example demonstrates the feasibility of applying the proposed model to the PPD problem via the two iterative methods, and its advantages are also discussed. Numerical results demonstrate that the hybridization is superior to the conventional method: in this study, the mean parts between failure of the left and right lines improve by approximately 39.51%. All experimental data presented in this research have been normalized to disguise actual performance measures, as the raw data are considered confidential.
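
For illustration only (not the authors' code): a minimal Python sketch of the desirability function approach named in the keywords, combining two larger-is-better responses into one overall desirability via Derringer-type scaling. The response values and bounds below are hypothetical.

```python
import numpy as np

def desirability_larger_is_better(y, y_min, y_max, weight=1.0):
    """Derringer-Suich larger-the-better desirability in [0, 1]."""
    d = (y - y_min) / (y_max - y_min)
    return float(np.clip(d, 0.0, 1.0) ** weight)

def overall_desirability(responses, bounds):
    """Geometric mean of the individual desirabilities of the dual responses."""
    ds = [desirability_larger_is_better(y, lo, hi) for y, (lo, hi) in zip(responses, bounds)]
    return float(np.prod(ds) ** (1.0 / len(ds)))

# Hypothetical normalized mean-parts-between-failure responses for the
# left and right lines at one candidate parameter setting.
left, right = 0.72, 0.64
D = overall_desirability([left, right], bounds=[(0.0, 1.0), (0.0, 1.0)])
print(f"overall desirability D = {D:.3f}")
```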

Keywords: Grease Position Process, Multi-response Surfaces, Modified Simplex Method, Hunting Search Method, Desirability Function Approach.

441 An Investigation into Kanji Character Discrimination Process from EEG Signals

Authors: Hiroshi Abe, Minoru Nakayama

Abstract:

The frontal area of the brain is known to be involved in behavioral judgement. Because a Kanji character can be discriminated from other characters both visually and linguistically, we hypothesized that in Kanji character discrimination, frontal event-related potential (ERP) waveforms reflect two discrimination processes in separate time periods: one based on visual analysis and the other based on lexical access. To examine this hypothesis, we recorded ERPs during a Kanji lexical decision task. In this task, either a known Kanji character, an unknown Kanji character, or a symbol was presented, and the subject had to report whether the presented character was a Kanji character known to them or not. The same response was required for unknown Kanji trials and symbol trials. As a preprocessing step, we examined a method using independent component analysis for artifact rejection, found it to be effective, and therefore adopted it. In the ERP results, there were two time periods in which the frontal ERP waveforms differed significantly between the unknown Kanji trials and the symbol trials: around 170 ms and around 300 ms after stimulus onset. This result supports our hypothesis. In addition, the result suggests that lexical access for Kanji characters may be fully completed by around 260 ms after stimulus onset.

Keywords: Character discrimination, Event-related Potential, Independent Component Analysis, Kanji, Lexical access.

440 Application of Stochastic Models to Annual Extreme Streamflow Data

Authors: Karim Hamidi Machekposhti, Hossein Sedghi

Abstract:

This study was designed to find the best stochastic model (using time series analysis) for the annual extreme streamflow (peak and maximum streamflow) of the Karkheh River in Iran. The Auto-Regressive Integrated Moving Average (ARIMA) model was used to simulate these series and forecast them into the future. For the analysis, annual extreme streamflow data of the Jelogir Majin station (upstream of the Karkheh dam reservoir) for the years 1958–2005 were used. A visual inspection of the time plot shows a slight increasing trend; therefore, the series is not stationary. The non-stationarity observed in the Auto-Correlation Function (ACF) and Partial Auto-Correlation Function (PACF) plots of annual extreme streamflow was removed using first-order differencing (d=1) for the development of the ARIMA model. The ARIMA(4,1,1) model developed was found to be the most suitable for simulating the annual extreme streamflow of the Karkheh River. The model was found to be appropriate for forecasting ten years of annual extreme streamflow and for assisting decision makers in establishing priorities for water demand. The Statistical Analysis System (SAS) and Statistical Package for the Social Sciences (SPSS) codes were used to determine the best model for this series.
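
A minimal sketch of fitting and forecasting an ARIMA(4,1,1) model with statsmodels, as described above; the streamflow series here is synthetic stand-in data, not the Jelogir Majin record.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical annual peak-streamflow series (m^3/s), indexed 1958-2005;
# in practice these values come from the station's gauging records.
years = pd.date_range("1958", periods=48, freq="YS")
rng = np.random.default_rng(0)
flow = pd.Series(800 + 5 * np.arange(48) + rng.normal(0, 120, 48), index=years)

# ARIMA(4,1,1): first-order differencing (d=1) removes the slight trend.
model = ARIMA(flow, order=(4, 1, 1))
result = model.fit()
print(result.summary().tables[1])

# Forecast the next ten years of annual extreme streamflow.
print(result.forecast(steps=10))
```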

Keywords: Stochastic models, ARIMA, extreme streamflow, Karkheh River.

439 Thermodynamic Optimization of Turboshaft Engine using Multi-Objective Genetic Algorithm

Authors: S. Farahat, E. Khorasani Nejad, S. M. Hoseini Sarvari

Abstract:

In this paper, multi-objective genetic algorithms are employed for Pareto-approach optimization of ideal turboshaft engines. In multi-objective optimization, a number of conflicting objective functions are optimized simultaneously. The important objective functions considered for optimization are the specific thrust (F/ṁ0), the power-specific fuel consumption (S_P), the specific output shaft power (Ẇ_shaft/ṁ0), and the overall efficiency (η_O). These objectives usually conflict with each other. The design variables consist of thermodynamic parameters (compressor pressure ratio, turbine temperature ratio, and Mach number). At the first stage, single-objective optimization is investigated, and the NSGA-II method is used for multi-objective optimization. Optimization procedures are performed for two and four objective functions, and the results are compared for the ideal turboshaft engine. In order to investigate the optimal thermodynamic behavior of two objectives, different sets, each including two of the output objectives, are considered individually. For each set, the Pareto front is depicted. The sets of decision variables selected from this Pareto front yield the best possible combinations of the corresponding objective functions. No point on the Pareto front is superior to another, but all of them are superior to any other point. In the case of four-objective optimization, the results are given in tables.
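
Not the authors' implementation: a short Python sketch of the Pareto-dominance idea described above, filtering candidate designs down to the non-dominated set. The (specific thrust, overall efficiency) pairs are hypothetical.

```python
import numpy as np

def pareto_front(points):
    """Return indices of non-dominated points (all objectives maximized).

    A point is on the Pareto front if no other point is at least as good in
    every objective and strictly better in at least one.
    """
    points = np.asarray(points, dtype=float)
    n = len(points)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(points[j] >= points[i]) and np.any(points[j] > points[i]):
                keep[i] = False
                break
    return np.flatnonzero(keep)

# Hypothetical (specific thrust, overall efficiency) pairs for candidate
# designs parameterized by compressor pressure ratio and Mach number.
designs = [(820.0, 0.31), (900.0, 0.27), (760.0, 0.35), (800.0, 0.29), (850.0, 0.30)]
print("Pareto-optimal designs:", pareto_front(designs))  # design 3 is dominated
```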

Keywords: Multi-objective, Genetic algorithm, Turboshaft Engine.

438 Application of GIS-Based Construction Engineering: An Electronic Document Management System

Authors: Mansour N. Jadid

Abstract:

This paper describes the implementation of a GIS to provide decision support for successfully monitoring the movements and storage of materials, hence ensuring that finished products travel from the point of origin to the destination construction site through the supply-chain management (SCM) system. This system ensures the efficient operation of suppliers, manufacturers, and distributors by determining the shortest path from the point of origin to the final destination to reduce construction costs, minimize time, and enhance productivity. These systems are essential to the construction industry because they reduce costs and save time, thereby improving productivity and effectiveness. This study describes a typical supply-chain model and a geographical information system (GIS)-based SCM that focuses on implementing an electronic document management system, which maps the application framework to integrate geodetic support with the supply-chain system. This process provides guidance for locating the nearest suppliers to fill the information needs of project members in different locations. Moreover, this study illustrates the use of a GIS-based SCM as a collaborative tool in innovative methods for implementing Web mapping services, as well as aspects of their integration, by generating an interactive GIS platform for the construction industry.
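
A minimal illustration (not the paper's system) of the shortest-path step described above, using Dijkstra's algorithm in networkx on a hypothetical supplier-depot-site graph with distance weights.

```python
import networkx as nx

# Hypothetical road network linking suppliers to a construction site;
# edge weights are travel distances in kilometres.
G = nx.Graph()
G.add_weighted_edges_from([
    ("supplier_A", "depot", 12.0),
    ("supplier_B", "depot", 7.5),
    ("depot", "junction", 4.2),
    ("junction", "site", 9.8),
    ("supplier_B", "site", 22.0),
])

# Pick the supplier with the shortest weighted path to the site.
best = min(
    ("supplier_A", "supplier_B"),
    key=lambda s: nx.dijkstra_path_length(G, s, "site", weight="weight"),
)
print("nearest supplier:", best)
print("route:", nx.dijkstra_path(G, best, "site", weight="weight"))
```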

Keywords: Construction, coordinate, engineering, GIS, management, map.

437 A Novel Receiver Algorithm for Coherent Underwater Acoustic Communications

Authors: Liang Zhao, Jianhua Ge

Abstract:

In this paper, we propose a novel receiver algorithm for coherent underwater acoustic communications. The proposed receiver is composed of three parts: (1) Doppler tracking and correction, (2) time reversal channel estimation and combining, and (3) joint iterative equalization and decoding (JIED). To reduce computational complexity and optimize the equalization algorithm, time reversal (TR) channel estimation and combining is adopted to simplify the multi-channel adaptive decision feedback equalizer (ADFE) into a single-channel ADFE without reducing system performance. Simultaneously, turbo principles are adopted to form a joint iterative ADFE and convolutional decoder (JIED). In the JIED scheme, the ADFE and the decoder exchange soft information in an iterative manner, which enhances equalizer performance through the decoding gain. The simulation results show that the proposed algorithm reduces computational complexity and improves equalizer performance. Therefore, the performance of coherent underwater acoustic communications can be improved greatly.

Keywords: Underwater acoustic communication, Time reversal (TR) combining, joint iterative equalization and decoding (JIED)

436 Study of Compaction in Hot-Mix Asphalt Using Computer Simulations

Authors: Kasthurirangan Gopalakrishnan, Naga Shashidhar, Xiaoxiong Zhong

Abstract:

During the process of compaction in Hot-Mix Asphalt (HMA) mixtures, the distance between aggregate particles decreases as they come together and eliminate air voids. By measuring the inter-particle distances in a cut section of an HMA sample, the degree of compaction can be estimated. For this, a calibration curve is generated by a computer simulation technique when the gradation and asphalt content of the HMA mixture are known. A two-dimensional cross section of an HMA specimen was simulated using the mixture design information (gradation, asphalt content, and air-void content). Nearest-neighbor distance methods such as Delaunay triangulation were used to study the changes in inter-particle distance and area distribution during the process of compaction in HMA. Such computer simulations enable several hundred repetitions in a short period of time, without the need to compact and analyze laboratory specimens, in order to obtain good statistics on the defined parameters. The distributions of the statistical parameters based on computer simulations showed trends similar to those of laboratory specimens.
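
A brief sketch, assuming hypothetical particle centroids, of how Delaunay triangulation can yield an inter-particle (edge-length) distance distribution as described above; it uses scipy.spatial and is not the authors' simulation code.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical 2D aggregate-particle centroids from a simulated HMA cross
# section (units arbitrary); real inputs come from the mixture simulation.
rng = np.random.default_rng(1)
centroids = rng.uniform(0, 100, size=(300, 2))

tri = Delaunay(centroids)

# Collect unique Delaunay edges and their lengths as a proxy for
# inter-particle (nearest-neighbour) distances.
edges = set()
for simplex in tri.simplices:
    for a, b in ((0, 1), (1, 2), (0, 2)):
        edges.add(tuple(sorted((simplex[a], simplex[b]))))

lengths = np.array([np.linalg.norm(centroids[i] - centroids[j]) for i, j in edges])
print(f"mean edge length: {lengths.mean():.2f}, std: {lengths.std():.2f}")
```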

Keywords: Computer simulations, Hot-Mix Asphalt (HMA), inter-particle distance, image analysis, nearest neighbor

435 Adaptive Network Intrusion Detection Learning: Attribute Selection and Classification

Authors: Dewan Md. Farid, Jerome Darmont, Nouria Harbi, Nguyen Huu Hoa, Mohammad Zahidur Rahman

Abstract:

In this paper, a new learning approach for network intrusion detection using a naïve Bayesian classifier and the ID3 algorithm is presented, which identifies effective attributes from the training dataset, calculates the conditional probabilities for the best attribute values, and then correctly classifies all the examples of the training and testing datasets. Most current intrusion detection datasets are dynamic and complex and contain a large number of attributes. Some of the attributes may be redundant or contribute little to detection. It has been demonstrated that careful attribute selection is important for designing real-world intrusion detection systems (IDS). The purpose of this study is to identify effective attributes from the training dataset to build a classifier for network intrusion detection using data mining algorithms. The experimental results on the KDD99 benchmark intrusion detection dataset demonstrate that this new approach achieves high classification rates and reduces false positives using limited computational resources.
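
A small Python sketch of ID3-style information gain, the quantity the attribute selection above relies on; the connection records and attribute names are toy stand-ins, not KDD99 fields.

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels):
    """ID3-style gain of a discrete attribute with respect to the class label."""
    total = entropy(labels)
    weighted = 0.0
    for value in np.unique(feature):
        subset = labels[feature == value]
        weighted += len(subset) / len(labels) * entropy(subset)
    return total - weighted

# Hypothetical discretized connection records: protocol type and a status
# flag, with the class label 'attack' vs 'normal'.
protocol = np.array(["tcp", "udp", "tcp", "icmp", "tcp", "udp"])
flag     = np.array(["SF", "S0", "S0", "SF", "S0", "SF"])
label    = np.array(["normal", "attack", "attack", "normal", "attack", "normal"])

for name, feat in [("protocol", protocol), ("flag", flag)]:
    print(name, round(information_gain(feat, label), 3))
```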

Keywords: Attribute selection, conditional probabilities, information gain, network intrusion detection.

434 Technique for Processing and Preservation of Human Amniotic Membrane for Ocular Surface Reconstruction

Authors: Irfan Z. Qureshi, Fareeha A., Wajid A. Khan

Abstract:

Human amniotic membrane (HAM) is a useful biological material for the reconstruction of a damaged ocular surface. The processing and preservation of HAM are critical to prevent patients undergoing amniotic membrane transplant (AMT) from cross infections. For HAM preparation, a human placenta is obtained after an elective cesarean delivery. Before collection, the donor is screened for seronegativity for HCV, HBsAg, HIV, and syphilis. After collection, the placenta is washed in balanced salt solution (BSS) in a sterile environment. The amniotic membrane is then separated from the placenta and the chorion while keeping the preparation in BSS. The HAM is then scraped manually until all debris is removed and a clear, transparent membrane is acquired. Nitrocellulose membrane filters are then placed on the stromal side of the HAM and cut around the edges, with a little membrane folded towards the other side, making it easy to separate during surgery. The HAM is finally stored in a 1:1 solution of glycerine and Dulbecco's Modified Eagle Medium (DMEM) containing antibiotics. The capped borosilicate vials containing HAM are kept at -80°C until use. A vial is thawed to room temperature and opened under sterile operation theatre conditions at the time of surgery.

Keywords: HAM, AMT, ocular transplant

433 Combining Gene and Chemo Therapy using Multifunctional Polymeric Micelles

Authors: Hong Yi Huang, Wei Ti Kuo, Yi You Huang

Abstract:

Non-viral gene carriers composed of biodegradable polymers or lipids have been considered a safer alternative to viral vectors as gene carriers. We have developed multi-functional nano-micelles for both drug and gene delivery applications. Polyethyleneimine (PEI) was modified by grafting stearic acid (SA) and formulated into polymeric micelles (PEI-SA) with positive surface charge for gene and drug delivery. Our results showed that PEI-SA micelles provided high siRNA binding efficiency. In addition, siRNA delivered by PEI-SA carriers demonstrated significantly high cellular uptake even in the presence of serum proteins. The post-transcriptional gene silencing efficiency was greatly improved by the polyplex formulated from 10k PEI-SA/siRNA. The amphiphilic structure of PEI-SA micelles provides advantages for multifunctional tasks: the hydrophilic shell modified with cationic charges can electrostatically interact with DNA or siRNA, and the hydrophobic core can serve as a reservoir for hydrophobic drug payloads, making it a promising multifunctional vehicle for both gene therapy and chemotherapy applications.

Keywords: polyethyleneimine, gene delivery, micelles, siRNA

432 Development of a Multi-Factorial Instrument for Accident Analysis Based on Systemic Methods

Authors: C. V. Pietreanu, S. E. Zaharia, C. Dinu

Abstract:

The present research is built on three major pillars, beginning with some considerations on accident investigation methods and pointing out both the defining aspects of and the differences between linear and non-linear analysis. The traditional linear approach to accident analysis describes accidents as a sequence of events, while the latest systemic models outline the interdependencies between different factors and define how processes evolve relative to a specific (normal) situation. Linear and non-linear accident analysis methods have specific limitations, so the second point of interest is the aim to identify the drawbacks of systemic models, which becomes a starting point for developing new directions to identify risks or data closer to the cause of incidents/accidents. Since communication represents a critical issue in human-factor interaction and has been shown to be at the root of problems caused by breakdowns in different communication procedures, the third pillar elaborates a new error-modeling instrument suitable for risk assessment and accident analysis.

Keywords: Accident analysis, multi-factorial error modeling, risk, systemic methods.

431 Machine Learning for Music Aesthetic Annotation Using MIDI Format: A Harmony-Based Classification Approach

Authors: Lin Yang, Zhian Mi, Jiacheng Xiao, Rong Li

Abstract:

Swimming with the tide of deep learning, the field of music information retrieval (MIR) has experienced parallel development, and a sheer variety of feature-learning models has been applied to music classification and tagging tasks. Among these learning techniques, deep convolutional neural networks (CNNs) have been widely used, with better performance than traditional approaches, especially in music genre classification and prediction. However, regarding music recommendation, there is a large semantic gap between the corresponding audio genres and the various aspects of a song that influence user preference. In our study, aiming to bridge this gap, we strive to construct an automatic music aesthetic annotation model based on the MIDI format for better comparison and measurement of the similarity between music pieces by way of harmonic analysis. We use the matrix of qualification converted from MIDI files as input to train two different classifiers, a support vector machine (SVM) and a decision tree (DT). Experimental results on a tag prediction task show that both learning algorithms are capable of extracting high-level properties from music information in an end-to-end manner. The proposed model is helpful for learning audience taste, and the resulting recommendations are likely to appeal to niche consumers.
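
A minimal scikit-learn sketch of training the two classifiers named above (SVM and decision tree) on a harmony feature matrix; the feature matrix and tags here are random placeholders for the MIDI-derived inputs.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical harmony feature matrix: each row is one MIDI piece described
# by a fixed-length vector (e.g. chord-class frequencies), with a binary
# aesthetic tag as the target. Real features come from the MIDI parsing step.
rng = np.random.default_rng(42)
X = rng.random((200, 24))
y = rng.integers(0, 2, 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf", C=1.0)), ("DT", DecisionTreeClassifier(max_depth=5))]:
    clf.fit(X_train, y_train)
    print(name, "accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```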

Keywords: Harmonic analysis, machine learning, music classification and tagging, MIDI.

430 Exploratory Tests of Crude Bacteriocins from Autochthonous Lactic Acid Bacteria against Food-Borne Pathogens and Spoilage Bacteria

Authors: M. Naimi, M. B. Khaled

Abstract:

The aim of the present work was to test the in vitro inhibition of food pathogens and spoilage bacteria by crude bacteriocins from autochthonous lactic acid bacteria. Thirty autochthonous lactic acid bacteria isolated previously, belonging to the genera Lactobacillus, Carnobacterium, Lactococcus, Vagococcus, Streptococcus, and Pediococcus, were screened by an agar spot test and a well diffusion assay against Gram-positive and Gram-negative harmful bacteria (Bacillus cereus, Bacillus subtilis ATCC 6633, Escherichia coli ATCC 8739, Salmonella typhimurium ATCC 14028, Staphylococcus aureus ATCC 6538, and Pseudomonas aeruginosa) under conditions meant to reduce the effects of lactic acid and hydrogen peroxide, in order to select bacteria with high bacteriocinogenic potential. Furthermore, semiquantification of the crude bacteriocins and their heat sensitivity at different temperatures (80, 95, 110, and 121°C) were performed. Another exploratory test, concerning the response of St. aureus ATCC 6538 to the presence of crude bacteriocins, was carried out. In the agar spot test, fifteen candidates were active toward the Gram-positive target strains. The secondary screening demonstrated an antagonistic activity directed only against St. aureus ATCC 6538, leading to the selection of five isolates, Lm14, Lm21, Lm23, Lm24, and Lm25, with larger inhibition zones than the others. The ANOVA statistical analysis revealed small repeatability variations: Lm21: 0.56%, Lm23: 0%, Lm25: 1.67%, Lm14: 1.88%, Lm24: 2.14%. Conversely, slight variation was reported in terms of inhibition diameters: 9.58 ± 0.40, 9.83 ± 0.46, 10.16 ± 0.24, 8.5 ± 0.40, and 10 mm for Lm21, Lm23, Lm25, Lm14, and Lm24, respectively, indicating that the observed potential showed a heterogeneous distribution (BMS = 0.383, WMS = 0.117). The calculated repeatability coefficient was 7.35%. As for the bacteriocin semiquantification, the five samples exhibited production amounts of about 4.16 AU/ml for Lm21, Lm23, and Lm25 and 2.08 AU/ml for Lm14 and Lm24. Concerning heat sensitivity, the crude bacteriocins were fully insensitive to heat inactivation up to 121°C, preserving the same inhibition diameter. As for the growth kinetics, the µmax showed reductions in pathogen load of about 42.92%, 84.12%, 88.55%, 54.95%, and 29.97% for Lm21, Lm23, Lm25, Lm14, and Lm24 in the second trials. Conversely, this pathogen's growth after five hours displayed differences of 79.45%, 12.64%, 11.82%, 87.88%, and 85.66% in the second trials, compared to the control. This study showed potential inhibition of the growth of this food pathogen, suggesting the possibility of improving hygienic food quality.

Keywords: Exploratory test, lactic acid bacteria, crude bacteriocins, spoilage, pathogens.

429 Defining a Pathway to Zero Energy Building: A Case Study on Retrofitting an Old Office Building into a Net Zero Energy Building for Hot-Humid Climate

Authors: Kwame B. O. Amoah

Abstract:

This paper focuses on retrofitting an old existing office building into a net-zero energy building (NZEB). An existing small office building in Melbourne, Florida, was chosen as a case study to integrate state-of-the-art design strategies and energy-efficient building systems to improve building performance and reduce energy consumption. The study aimed to explore possible ways to maximize energy savings and the renewable energy generation sources needed to cover the building's remaining energy needs and achieve net-zero energy goals. A series of retrofit options were reviewed and adopted with some significant additional decision considerations. The detailed processes and considerations leading to zero energy are well documented in this study, with lessons learned adequately outlined. Based on building energy simulations, multiple design considerations were investigated, such as emerging state-of-the-art technologies, material selection, improvements to the building envelope, optimization of the HVAC and lighting systems, occupancy load analysis, and the application of renewable energy sources. A comparative analysis of the simulation results was used to determine how specific techniques led to energy and cost reductions. The research results indicate that this small office building can reach net-zero energy use after appropriate design modifications and the addition of renewable energy sources.

Keywords: Energy consumption, building energy analysis, energy retrofits, energy-efficiency.

428 Application of Machine Learning Methods to Online Test Error Detection in Semiconductor Test

Authors: Matthias Kirmse, Uwe Petersohn, Elief Paffrath

Abstract:

As test costs in today's semiconductor industry can make up to 50 percent of total production costs, efficient test error detection becomes more and more important. In this paper, we present a new machine learning approach to test error detection that should provide faster recognition of test system faults as well as improved test error recall. The key idea is to learn a classifier ensemble that detects typical test error patterns in wafer test results immediately after these tests are finished. Since test error detection has not yet been discussed in the machine learning community, we define the central problem-relevant terms and provide an analysis of important domain properties. Finally, we present comparative studies reflecting the failure detection performance of three individual classifiers and three ensemble methods based upon them. As base classifiers we chose a decision tree learner, a support vector machine, and a Bayesian network, while the compared ensemble methods were simple and weighted majority vote as well as stacking. For the evaluation, we used cross validation and a specially designed practical simulation. By implementing our approach in a semiconductor test department for the observation of two products, we demonstrated its practical applicability.
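
An illustrative scikit-learn sketch of the three ensemble schemes named above (simple majority vote, weighted vote, stacking) over a decision tree, an SVM, and a naïve Bayes model standing in for the Bayesian network; the data are synthetic, not wafer-test results.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier, StackingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

# Stand-in data: wafer-test result vectors labelled error / no-error (imbalanced).
X, y = make_classification(n_samples=500, n_features=20, weights=[0.9, 0.1], random_state=0)

base = [("dt", DecisionTreeClassifier(max_depth=5)),
        ("svm", SVC(probability=True)),
        ("nb", GaussianNB())]

ensembles = {
    "majority vote": VotingClassifier(estimators=base, voting="hard"),
    "weighted vote": VotingClassifier(estimators=base, voting="soft", weights=[2, 1, 1]),
    "stacking": StackingClassifier(estimators=base, final_estimator=LogisticRegression()),
}
for name, model in ensembles.items():
    # Recall of the error class is the quantity of interest for test error detection.
    print(name, cross_val_score(model, X, y, cv=5, scoring="recall").mean())
```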

Keywords: Ensemble methods, fault detection, machine learning, semiconductor test.

427 An Authentication Protocol for Quantum Enabled Mobile Devices

Authors: Natarajan Venkatachalam, Subrahmanya V. R. K. Rao, Vijay Karthikeyan Dhandapani, Swaminathan Saravanavel

Abstract:

Quantum communication technology is an evolving design which connects multiple quantum-enabled devices to the internet for secret communication or sensitive information exchange. In the future, the number of these compact quantum-enabled devices will increase immensely, making them an integral part of present communication systems. Therefore, the safety and security of such devices is also a major concern. To ensure that sensitive customer information cannot be eavesdropped on or deciphered, we need strong authentication and encryption mechanisms. In this paper, we propose a mutual authentication scheme between these smart quantum devices and a server based on the secure exchange of information through a quantum channel, which gives better solutions for symmetric key exchange issues. An important part of this work is to propose a secure mutual authentication protocol over the quantum channel. We show that our approach offers a robust authentication protocol and, further, that our solution is lightweight, scalable, and cost-effective with optimized computational processing overheads.

Keywords: Quantum cryptography, quantum key distribution, wireless quantum communication, authentication protocol, quantum enabled device, trusted third party.

426 Usage of Military Continuity Management System for Supporting of Emergency Management

Authors: R. Hajkova, J. Palecek, H. Malachova, A. Oulehlova

Abstract:

Ensuring business continuity is a basic strategy of every company. Continuity of organizational activities includes comprehensive procedures that help in solving unexpected situations of natural and anthropogenic character (for example, floods, fires, or economic crises). Planning of continuity operations is a process that helps identify critical processes and implement plans for the security and recovery of key processes. The aim of this article is to demonstrate the application of a systems approach to managing business continuity, called business continuity management systems, to military issues. The article describes the life cycle of business continuity management, which is based on the established PDCA (Plan-Do-Check-Act) cycle. This is followed by a description of the activities carried out by the University of Defence during the activation of the forces and means of the integrated rescue system in case of emergencies, such as accidents at a nuclear power plant in the Czech Republic. The activities of the various stages of deployment of the earmarked forces and resources are managed and evaluated using the MCMS (Military Continuity Management System) application.

Keywords: Business continuity management system, emergency management, military, nuclear safety.

425 Research on the Evaluation of Enterprise-University-Research Cooperation Ability in Hubei Province

Authors: Dongfang Qiu, Yilin Lu

Abstract:

The measurement of enterprise-university-research cooperative efficiency is important for improving cooperative efficiency, strengthening the effective integration of regional resources, enhancing regional innovation capability, and promoting the development of the regional economy. The paper applies the DEA method and the DEA-Malmquist productivity index method to study the cooperation efficiency of Hubei by making comparisons with other provinces in China. The study found that the technical efficiency index is 0.52 and that the enterprise-university-research cooperative efficiency is non-DEA efficient. To realize DEA efficiency for Hubei province, the number of R&D employees should be reduced by 1652.596 and the R&D employees' full-time equivalence by 638.368, or new-product sales income should be increased by 137.89 billion yuan. Finally, the paper puts forward policy recommendations on the existing problems to strengthen the standing of the cooperation, realize the effective application of research results, and improve the management of enterprise-university-research cooperation efficiency.
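
A minimal sketch (not the paper's model) of an input-oriented CCR DEA efficiency score computed as a linear program with scipy.optimize.linprog; the input/output figures below are hypothetical, and the Malmquist extension is not shown.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(inputs, outputs, unit):
    """Input-oriented CCR efficiency of one DMU via the envelopment LP.

    inputs:  (n_dmus, n_inputs)   e.g. R&D employees, R&D full-time equivalents
    outputs: (n_dmus, n_outputs)  e.g. new-product sales income
    """
    X, Y = np.asarray(inputs, float), np.asarray(outputs, float)
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1..lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # sum_j lambda_j * x_ij <= theta * x_i,unit
    A_in = np.c_[-X[unit].reshape(m, 1), X.T]
    b_in = np.zeros(m)
    # sum_j lambda_j * y_rj >= y_r,unit
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    b_out = -Y[unit]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

# Hypothetical province-level data (two R&D inputs, one sales-income output).
X = [[5200, 2100], [6100, 2600], [4800, 1900], [7000, 3000]]
Y = [[310], [520], [300], [610]]
for k in range(4):
    print(f"DMU {k}: efficiency = {ccr_efficiency(X, Y, k):.3f}")
```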

Keywords: Cooperation Ability, DEA Method, Enterprise-university-research Cooperation, Malmquist Efficiency Index.

424 An Energy Reverse AODV Routing Protocol in Ad Hoc Mobile Networks

Authors: Said Khelifa, Zoulikha Mekkakia Maaza

Abstract:

In this paper, we present a full performance analysis of an energy-conserving routing protocol in mobile ad hoc networks, named ER-AODV (Energy Reverse Ad-hoc On-demand Distance Vector routing). ER-AODV is a reactive routing protocol based on a policy which combines two mechanisms used in the basic AODV protocol. AODV and most on-demand ad hoc routing protocols use a single route reply along the reverse path. Rapid changes of topology can prevent the route reply from arriving at the source node; that is, the source node must send several route request messages before it obtains a reply message, and this increases power consumption. To avoid these problems, we propose a mechanism which tries multiple route replies. The second mechanism proposes a new adaptive approach which seeks to incorporate the metric "residual energy" into the route selection process; indeed, the residual energy of mobile nodes is considered when making routing decisions. The simulation results show that the ER-AODV protocol achieves better energy conservation.

Keywords: Ad hoc mobile networks, Energy AODV, Energy consumption, ER-AODV, Reverse AODV.

423 Spatial Analysis of Park and Ride Users’ Dynamic Accessibility to Train Station: A Case Study in Perth

Authors: Ting (Grace) Lin, Jianhong (Cecilia) Xia, Todd Robinson

Abstract:

Accessibility analysis, which examines people's ability to access facilities and destinations, is a fundamental assessment for transport planning, policy making, and social exclusion research. Dynamic accessibility, which measures accessibility in a real-time traffic environment, has become an advanced accessibility indicator in transport research. It is also a useful indicator to help travelers understand the daily variability of travel time, to assist traffic engineers in monitoring traffic congestion, and, finally, to develop effective strategies to mitigate traffic congestion. This research used real-time traffic information by collecting travel time data at 15-minute intervals via the TomTom® API. A framework for measuring dynamic accessibility was then developed based on gravity theory and accessibility dichotomy theory through space and time interpolation. Finally, dynamic accessibility can be derived at any given time and location under the dynamic accessibility spatial analysis framework.
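
A small sketch of a gravity-type dynamic accessibility measure in the spirit of the framework above, A_i(t) = sum_j O_j * exp(-beta * t_ij(t)); the station bay counts, travel times, and decay parameter are assumptions, not the Perth data.

```python
import math

def gravity_accessibility(opportunities, travel_times, beta=0.05):
    """Gravity-type accessibility of one origin at one time slice:
    A_i(t) = sum_j O_j * exp(-beta * t_ij(t)),
    where t_ij(t) is the real-time travel time (minutes) to station j."""
    return sum(o * math.exp(-beta * t) for o, t in zip(opportunities, travel_times))

# Hypothetical: park-and-ride bays at three stations, with travel times from
# one origin measured at 08:00 and 12:00 (e.g. from 15-minute API samples).
bays = [400, 250, 600]
t_peak = [22.0, 15.0, 31.0]     # minutes in the morning peak
t_offpeak = [14.0, 9.0, 20.0]   # minutes off-peak

print("accessibility 08:00:", round(gravity_accessibility(bays, t_peak), 1))
print("accessibility 12:00:", round(gravity_accessibility(bays, t_offpeak), 1))
```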

Keywords: Dynamic accessibility, space-time continuum, transport research, TomTom® API.

422 Influence of Gum Acacia Karroo on Some Mechanical Properties of Cement Mortars and Concrete

Authors: Rose Mbugua, Ramadhan Wanjala, Julius Ndambuki

Abstract:

Natural admixtures provide concrete with enhanced properties, but their processing ends up making them very expensive, resulting in an increase in the cost of concrete. In this study, the effect of gum from Acacia karroo (GAK) as a set-retarding admixture in cement pastes was studied. The possibility of using GAK as a water-reducing admixture in both cement mortar and concrete was also investigated. Cement pastes with different dosages of GAK were prepared to measure the setting time. The compressive strength of cement mortars with GAK at 0.7, 0.8, and 0.9% by weight of cement and a water-cement (w/c) ratio of 0.5 was compared to that of mortars with a w/c ratio of 0.44 but the same GAK dosage. Concrete samples prepared using higher dosages of GAK (1, 2, and 3% by weight of cement) and a water-binder (w/b) ratio of 0.61 were compared to those with the same GAK dosage but a reduced w/b ratio. There was an increase in compressive strength of 9.3% at 28 days for the cement mortar samples with a 0.9% dosage of GAK and the reduced w/c ratio.

Keywords: Compressive strength, Gum Acacia Karroo, retarding admixture, setting time, water-reducing admixture.

421 Nutritional Composition of Iranian Desi and Kabuli Chickpea (Cicer Arietinum L.) Cultivars in Autumn Sowing

Authors: Khosro Mohammadi

Abstract:

The grain quality of chickpea in Iran is low and unstable, which may be attributed to the evolution of cultivars with a narrow genetic base, making them vulnerable to biotic stresses. Four chickpea varieties from diverse geographic origins were chosen and arranged in a randomized complete block design. Mesorhizobium sp. cicer strain SW7 was added to all the chickpea seeds. Chickpea seeds were planted on October 9, 2013. Each genotype was sown in 3 rows of 5 m in length, with 35 cm inter-row spacing. Weeds were removed manually in all plots. Analysis of variance on the studied traits showed significant differences among genotypes for the N, P, K, and Fe contents of chickpea, but no significant differences for the Ca, Zn, and Mg contents. The experimental coefficient of variation (CV) varied from 7.3 to 15.8. In general, a CV value lower than 20% is considered good, indicating the accuracy of the conducted experiments. The highest grain N was observed in the Hashem and Jam cultivars. The highest grain P was observed in the Jam cultivar. Phosphorus content (mg/100 g) ranged from 142.3 to 302.3, with a mean value of 221.3. A negative correlation (-0.126) was observed between the N and P contents of the chickpea cultivars. The highest K and Fe contents were observed in the Jam cultivar.

Keywords: Cultivar, genotype, nitrogen, nutrient, yield.

420 Load Balancing in Heterogeneous P2P Systems using Mobile Agents

Authors: Neeraj Nehra, R. B. Patel, V. K. Bhat

Abstract:

Use of the Internet and the World-Wide Web (WWW) has become widespread in recent years, and mobile agent technology has proliferated at an equally rapid rate. In this scenario, load balancing becomes important for P2P systems. Besides, P2P systems can be highly heterogeneous, i.e., they may consist of peers that range from old desktops to powerful servers connected to the internet through high-bandwidth lines. Various load balancing policies have come into the picture; a primitive one is the Message Passing Interface (MPI). Its wide availability and portability make it an attractive choice; however, the communication requirements are sometimes inefficient when implementing the primitives provided by MPI. In this scenario, we use the concept of mobile agents, because the mobile agent (MA) based approach has the merits of high flexibility, efficiency, low network traffic, low communication latency, and high asynchrony. In this study we present a decentralized load balancing scheme using mobile agent technology in which, when a node is overloaded, tasks migrate to less utilized nodes so as to share the workload. However, the decision of which nodes receive a migrating task is made in real time by defining certain load balancing policies. These policies are executed on PMADE (A Platform for Mobile Agent Distribution and Execution) in a decentralized manner using JuxtaNet, and various load balancing metrics are discussed.

Keywords: Mobile Agents, Agent host, Agent Submitter, PMADE.

419 Learning Classifier Systems Approach for Automated Discovery of Censored Production Rules

Authors: Suraiya Jabin, Kamal K. Bharadwaj

Abstract:

In the recent past, Learning Classifier Systems have been used successfully for data mining. A Learning Classifier System (LCS) is basically a machine learning technique which combines evolutionary computing, reinforcement learning, supervised or unsupervised learning, and heuristics to produce adaptive systems. An LCS learns by interacting with an environment from which it receives feedback in the form of numerical reward, and learning is achieved by trying to maximize the amount of reward received. All LCS models, more or less, comprise four main components: a finite population of condition-action rules, called classifiers; the performance component, which governs the interaction with the environment; the credit assignment component, which distributes the reward received from the environment to the classifiers accountable for the rewards obtained; and the discovery component, which is responsible for discovering better rules and improving existing ones through a genetic algorithm. The concatenation of the production rules in the LCS forms the genotype, and therefore the GA should operate on a population of classifier systems. This approach is known as the 'Pittsburgh' Classifier System. Other LCSs that perform their GA at the rule level within a population are known as 'Michigan' Classifier Systems. The most predominant representation of the discovered knowledge is the standard production rule (PR) of the form IF P THEN D. PRs, however, are unable to handle exceptions and do not exhibit variable precision. Censored Production Rules (CPRs), an extension of PRs proposed by Michalski and Winston, exhibit variable precision and support an efficient mechanism for handling exceptions. A CPR is an augmented production rule of the form IF P THEN D UNLESS C, where the censor C is an exception to the rule. Such rules are employed in situations in which the conditional statement IF P THEN D holds frequently and the assertion C holds rarely. By using a rule of this type, we are free to ignore the exception condition when the resources needed to establish its presence are tight or there is simply no information available as to whether it holds or not. Thus, the IF P THEN D part of the CPR expresses important information, while the UNLESS C part acts only as a switch that changes the polarity of D to ~D. In this paper, the Pittsburgh-style LCS approach is used for the automated discovery of CPRs. An appropriate encoding scheme is suggested to represent a chromosome consisting of a fixed-size set of CPRs. Suitable genetic operators are designed for the set of CPRs and for individual CPRs, and an appropriate fitness function is proposed that incorporates the basic constraints on CPRs. Experimental results are presented to demonstrate the performance of the proposed learning classifier system.
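
A compact Python sketch of evaluating a Censored Production Rule (IF P THEN D UNLESS C) as defined above, including the case where the censor is unknown and simply ignored; the bird/penguin rule is a toy example, not from the paper.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class CensoredProductionRule:
    """IF premise THEN decision UNLESS censor.

    If the censor cannot be evaluated (resources are tight / value unknown),
    it is ignored and the decision D is returned; if it evaluates to True,
    the decision is negated to ~D.
    """
    premise: Callable[[Dict], bool]
    decision: str
    censor: Callable[[Dict], Optional[bool]]

    def fire(self, facts: Dict) -> Optional[str]:
        if not self.premise(facts):
            return None
        c = self.censor(facts)
        if c is True:
            return "NOT " + self.decision
        return self.decision  # censor is False or unknown (None)

# Toy rule: IF it is a bird THEN it flies UNLESS it is a penguin.
rule = CensoredProductionRule(
    premise=lambda f: f.get("bird", False),
    decision="flies",
    censor=lambda f: f.get("penguin"),   # None when unknown
)
print(rule.fire({"bird": True}))                   # flies (censor unknown)
print(rule.fire({"bird": True, "penguin": True}))  # NOT flies
```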

Keywords: Censored Production Rule, Data Mining, Genetic Algorithm, Learning Classifier System, Machine Learning, Pittsburgh Approach, Reinforcement Learning.

418 Influence of Parameters of Modeling and Data Distribution for Optimal Condition on Locally Weighted Projection Regression Method

Authors: Farhad Asadi, Mohammad Javad Mollakazemi, Aref Ghafouri

Abstract:

Recent research in neural network science and neuroscience on modeling complex time series data and statistical learning has focused mostly on learning from high-dimensional input spaces and signals. Local linear models are a strong choice for modeling local nonlinearity in data series. Locally weighted projection regression is a flexible and powerful algorithm for nonlinear approximation in high-dimensional signal spaces. In this paper, different learning scenarios for one- and two-dimensional data series with different distributions are investigated in simulation, and noise is further added to the data distributions to create differently disordered time series and to evaluate the algorithm's local prediction of nonlinearity. Then, the performance of the algorithm is simulated, and its sensitivity to the data distribution, as well as the influence of the important local-validity parameter of the algorithm, is explained for cases where the data are widely distributed or where only few data points are available.
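
For illustration, a simplified locally weighted linear regression sketch (a stand-in for full LWPR, not the paper's implementation) showing how the local-validity (kernel width) parameter affects prediction on a noisy one-dimensional series; the data and widths are arbitrary.

```python
import numpy as np

def locally_weighted_prediction(x_query, X, y, width=0.3):
    """One local linear model centred at x_query with Gaussian weights.
    'width' plays the role of the local-validity (receptive field) parameter."""
    w = np.exp(-0.5 * ((X - x_query) / width) ** 2)   # Gaussian kernel weights
    Xb = np.c_[np.ones_like(X), X]                    # bias + input
    W = np.diag(w)
    beta, *_ = np.linalg.lstsq(Xb.T @ W @ Xb, Xb.T @ W @ y, rcond=None)
    return beta[0] + beta[1] * x_query

# Hypothetical noisy 1D series with local nonlinearity.
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 2 * np.pi, 200))
y = np.sin(X) + rng.normal(0, 0.15, X.size)

for width in (0.1, 0.3, 1.0):   # sensitivity to the locality parameter
    pred = np.array([locally_weighted_prediction(q, X, y, width) for q in X])
    print(f"width={width}: RMSE = {np.sqrt(np.mean((pred - np.sin(X))**2)):.3f}")
```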

Keywords: Local nonlinear estimation, LWPR algorithm, Online training method.

417 Design of Extremum Seeking Control with PD Accelerator and its Application to Monod and Williams-Otto Models

Authors: Hitoshi Takata, Tomohiro Hachino, Masaki Horai, Kazuo Komatsu

Abstract:

In this paper, we are concerned with the design and simulation studies of a modified extremum seeking control for nonlinear systems. A standard extremum seeking control has a simple structure, but it takes a long time to reach an optimal operating point. We consider a modification of the standard extremum seeking control aimed at reaching the optimal operating point more quickly than the standard one. In the modification, a PD acceleration term is added before the integrator that forms the principal control, which enables the plant to be regulated to the optimal point smoothly. The proposed method is applied to the Monod and Williams-Otto models to investigate its effectiveness. Numerical simulation results show that the modified method reaches the optimal operating point more quickly than the standard one.

Keywords: Extremum seeking control, Monod model, Williams-Otto model, PD acceleration term, Optimal operating point.

416 Improved Feature Extraction Technique for Handling Occlusion in Automatic Facial Expression Recognition

Authors: Khadijat T. Bamigbade, Olufade F. W. Onifade

Abstract:

The field of automatic facial expression analysis has been an active research area in the last two decades. Its vast applicability in various domains has drawn much attention to developing techniques and datasets that mirror real-life scenarios. Many techniques, such as Local Binary Patterns and their variants (CLBP, LBP-TOP) and, lately, deep learning techniques, have been used for facial expression recognition. However, the problem of occlusion has not been sufficiently handled, making their results inapplicable in real-life situations. This paper develops a simple yet highly efficient method, tagged Local Binary Pattern-Histogram of Gradients (LBP-HOG), with occlusion detection in face images, using a multi-class SVM for Action Unit and, in turn, expression recognition. Our method was evaluated on three publicly available datasets, JAFFE, CK, and SFEW. Experimental results showed that our approach performed considerably well when compared with state-of-the-art algorithms and gave insight into occlusion detection as a key step to handling expressions in the wild.
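
A minimal numpy sketch of the basic 3x3 LBP code and its histogram, one ingredient of the LBP-HOG descriptor above; the face patch here is random, and occlusion masking, the HOG part, and the SVM stage are omitted.

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 local binary pattern codes for an 8-bit grayscale image."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]
    # 8 neighbours, clockwise from the top-left, each contributing one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= ((neighbour >= c).astype(np.int32) << bit)
    return codes

def lbp_histogram(gray, bins=256):
    codes = lbp_image(gray)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()

# Hypothetical face patch; a real pipeline would first detect the face,
# mask occluded regions, then concatenate LBP and HOG descriptors for the SVM.
patch = np.random.default_rng(3).integers(0, 256, (64, 64), dtype=np.uint8)
print(lbp_histogram(patch)[:8])
```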

Keywords: Automatic facial expression analysis, local binary pattern, LBP-HOG, occlusion detection.

415 Cost Effective Real-Time Image Processing Based Optical Mark Reader

Authors: Amit Kumar, Himanshu Singal, Arnav Bhavsar

Abstract:

In this modern era of automation, most academic and competitive exams use Multiple Choice Questions (MCQ). The responses to these MCQ-based exams are recorded on Optical Mark Reader (OMR) sheets. Evaluation of the OMR sheet requires separate specialized machines for scanning and marking. The sheets used by these machines are special and cost more than a normal sheet. The available process is uneconomical and dependent on paper thickness, scanning quality, paper orientation, special hardware, and customized software. This study tries to tackle the problem of evaluating the OMR sheet without any special hardware, making the whole process economical. We propose an image processing based algorithm which can be used to read and evaluate scanned OMR sheets with no special hardware required. It eliminates the use of special OMR sheets; responses recorded on a normal sheet are enough for evaluation. The proposed system takes care of color, brightness, rotation, and small imperfections in the OMR sheet images.
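
A short OpenCV sketch of a bubble detection step using the Hough circle transform mentioned in the keywords; the file name, radii, and darkness threshold are illustrative assumptions, not the paper's tuned values.

```python
import cv2
import numpy as np

# Hypothetical scan; replace with a real scanned sheet.
img = cv2.imread("omr_scan.png", cv2.IMREAD_GRAYSCALE)
assert img is not None, "omr_scan.png is a placeholder path"
blur = cv2.GaussianBlur(img, (5, 5), 1.5)

circles = cv2.HoughCircles(
    blur, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
    param1=80, param2=30, minRadius=8, maxRadius=16,
)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        roi = img[y - r:y + r, x - r:x + r]
        # A bubble is taken as "marked" when its interior is mostly dark.
        filled = roi.size > 0 and roi.mean() < 100
        print(f"bubble at ({x}, {y}), radius {r}, marked={filled}")
```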

Keywords: OMR, image processing, Hough circle transform, interpolation, detection, binary thresholding.

414 A Software Tool Design for Cerebral Infarction of MR Images

Authors: Kyoung-Jong Park, Woong-Gi Jeon, Hee-Cheol Kim, Dong-Eog Kim, Heung-Kook Choi

Abstract:

A brain MR imaging-based clinical research and analysis system was built, specifically targeting the development of a large-scale dataset, using the general clinical data available. For the selection of the lesion ROI, a region growing algorithm was used, and a mesh-warp algorithm was implemented for matching; the matching errors were corrected individually. In addition, large amounts of ROI research data can be accumulated through our compression method. In this way, correct decision criteria for the research results were suggested. The experimental groups were age, sex, MR type, patient ID, and smoking, which can easily be queried. The resulting data were visualized as overlapped images using a color table and analyzed with a statistical package. The system was evaluated for its utility in assessing chronic ischemic damage in patients with acute cerebral infarction; the location associated with the neurologic disability index faced the central portion of the lateral ventricle, where the corona radiata was found. Finally, the system reliability was measured by both inter-user and intra-user registration correlation.

Keywords: Software tool design, Cerebral infarction, Brain MR image, Registration

413 Electromagnetic Wave Propagation Equations in 2D by Finite Difference Method

Authors: N. Fusun Oyman Serteller

Abstract:

In this paper, techniques to solve time-dependent electromagnetic wave propagation equations based on the Finite Difference Method (FDM) are proposed, and the results are compared with the Finite Element Method (FEM) in 2D while discussing some special simulation examples. Here, 2D dynamical wave equations for lossy media, even with a constant source, are discussed to establish the symbolic manipulation of wave propagation problems. The main objective of this contribution is to introduce a comparative study of two suitable numerical methods and to show that both methods can be applied effectively and efficiently to all types of wave propagation problems, both linear and nonlinear cases, by using symbolic computation. However, the results show that the FDM is more appropriate for solving the nonlinear cases in the symbolic solution. Furthermore, some specific complex-domain examples comparing the electromagnetic wave equations are considered. Calculations are performed with Mathematica software, making some useful contributions to the program and leveraging symbolic evaluations of FEM and FDM.
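
A minimal finite-difference sketch for a 2D scalar wave equation with a simple loss term and a constant-frequency point source, in the spirit of the FDM side of the comparison above; it is written in Python rather than Mathematica, and the grid, loss coefficient, and source are illustrative, not the paper's configuration.

```python
import numpy as np

# Leapfrog in time, 5-point Laplacian in space, Dirichlet walls.
nx = ny = 101
dx = 1e-2                              # grid spacing (m)
c = 3e8                                # wave speed (m/s)
dt = dx / (c * np.sqrt(2)) * 0.95      # CFL-stable time step
gamma = 1e7                            # crude loss coefficient (1/s)

u_prev = np.zeros((nx, ny))
u = np.zeros((nx, ny))

for step in range(200):
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / dx**2
    # u_tt + gamma*u_t = c^2 * laplacian(u), discretized explicitly.
    u_next = 2 * u - u_prev + (c * dt) ** 2 * lap - gamma * dt * (u - u_prev)
    u_next[nx // 2, ny // 2] += np.sin(2 * np.pi * 1e9 * step * dt)  # point source
    u_next[0, :] = u_next[-1, :] = u_next[:, 0] = u_next[:, -1] = 0.0
    u_prev, u = u, u_next

print("peak field amplitude:", float(np.abs(u).max()))
```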

Keywords: Finite difference method, finite element method, linear-nonlinear PDEs, symbolic computation, wave propagation equations.
