Search results for: Delivery time uncertainty
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7063

5743 First Studies of the Influence of Single Gene Perturbations on the Inference of Genetic Networks

Authors: Frank Emmert-Streib, Matthias Dehmer

Abstract:

Inferring network structure from time series data is a hard problem, especially if the time series is short and noisy. DNA microarray technology, which allows the mRNA concentrations of thousands of genes to be monitored simultaneously, produces data with exactly these characteristics. In this study we investigate the influence of the experimental design on the quality of the result. More precisely, we investigate the influence of two different types of random single gene perturbations on the inference of genetic networks from time series data. To obtain an objective quality measure for this influence, we simulate gene expression values with a biologically plausible model of a known network structure. Within this framework we study the influence of single gene knock-outs, as opposed to linearly controlled expression of single genes, on the quality of the inferred network structure.

Keywords: Dynamic Bayesian networks, microarray data, structure learning, Markov chain Monte Carlo.

5742 Unscented Transformation for Estimating the Lyapunov Exponents of Chaotic Time Series Corrupted by Random Noise

Authors: K. Kamalanand, P. Mannar Jawahar

Abstract:

Many systems in the natural world exhibit chaos or non-linear behavior, the complexity of which is so great that they appear to be random. Identification of chaos in experimental data is essential for characterizing the system and for analyzing the predictability of the data under analysis. The Lyapunov exponents provide a quantitative measure of the sensitivity to initial conditions and are the most useful dynamical diagnostic for chaotic systems. However, it is difficult to accurately estimate the Lyapunov exponents of chaotic signals that are corrupted by random noise. In this work, a method for estimating the Lyapunov exponents of noisy time series using the unscented transformation is proposed. The proposed methodology was validated using time series obtained from known chaotic maps. The objective of the work, the proposed methodology and the validation results are discussed in detail.
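
The unscented-transformation estimator itself is not reproduced here; the sketch below only illustrates the known-map validation baseline mentioned in the abstract: the largest Lyapunov exponent of the logistic map obtained as the time average of the log-derivative, against which a noisy-series estimator could be compared. Map, parameters and sample counts are illustrative choices, not taken from the paper.

import numpy as np

def logistic_lyapunov(r=4.0, x0=0.3, n_steps=100_000, burn_in=1_000):
    # Largest Lyapunov exponent of x_{k+1} = r*x_k*(1 - x_k),
    # estimated as the time average of ln|f'(x_k)| = ln|r*(1 - 2*x_k)|.
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_steps):
        acc += np.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n_steps

if __name__ == "__main__":
    # For r = 4 the exact value is ln 2 ~ 0.693; a noise-corrupted series would be
    # fed to the proposed unscented-transformation estimator instead of this formula.
    print(logistic_lyapunov())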

Keywords: Lyapunov exponents, unscented transformation, chaos theory, neural networks.

5741 Time Temperature Dependence of Long Fiber Reinforced Polypropylene Manufactured by Direct Long Fiber Thermoplastic Process

Authors: K. A. Weidenmann, M. Grigo, B. Brylka, P. Elsner, T. Böhlke

Abstract:

In order to reduce fuel consumption, the weight of automobiles has to be reduced. Fiber reinforced polymers offer the potential to reach this aim because of their high stiffness-to-weight ratio. Additionally, the use of fiber reinforced polymers in automotive applications has to allow for economic large-scale production. In this regard, long fiber reinforced thermoplastics made by direct processing offer both mechanical performance and processability in injection moulding and compression moulding. The work presented in this contribution deals with long glass fiber reinforced polypropylene directly processed in compression moulding (D-LFT). For use in automotive applications, both the temperature and the time dependency of the material properties have to be investigated to fulfill performance requirements during crash and the demands of service temperatures ranging from -40 °C to 80 °C. To consider the influence of both temperature and time, quasistatic tensile tests were carried out at different temperatures. These tests were complemented by high speed tensile tests at different strain rates. As expected, the increase in strain rate results in an increase of the elastic modulus, which correlates with the increase of stiffness observed with decreasing service temperature. The results are in good accordance with results determined by dynamic mechanical analysis within the range of 0.1 to 100 Hz. The experimental results from the different testing methods were grouped and interpreted by using different time-temperature shift approaches. In this regard, the Williams-Landel-Ferry approach and the kinetics-based Arrhenius approach were used. As the theoretical shift factor follows an arctan function, an empirical approach was also taken into consideration. It could be shown that this empirical approach best describes the time-temperature superposition for glass fiber reinforced polypropylene manufactured by D-LFT processing.
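
As a numerical illustration of the two named shift approaches (not of the paper's fitted coefficients, which are not given in the abstract), the sketch below evaluates Williams-Landel-Ferry and Arrhenius shift factors for an arbitrary reference temperature; all constants are placeholder values.

import numpy as np

def wlf_shift(T, T_ref, C1=17.44, C2=51.6):
    # WLF: log10(a_T) = -C1*(T - T_ref) / (C2 + T - T_ref); "universal" constants used as placeholders
    return -C1 * (T - T_ref) / (C2 + (T - T_ref))

def arrhenius_shift(T, T_ref, Ea=60e3, R=8.314):
    # Arrhenius: log10(a_T) = Ea/(2.303*R) * (1/T - 1/T_ref); temperatures in kelvin, Ea in J/mol
    return Ea / (2.303 * R) * (1.0 / T - 1.0 / T_ref)

T = np.array([233.15, 296.15, 353.15])   # -40 °C, 23 °C, 80 °C
T_ref = 296.15
print("WLF log10(aT):      ", wlf_shift(T, T_ref))
print("Arrhenius log10(aT):", arrhenius_shift(T, T_ref))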

Keywords: Composite, long fiber reinforced thermoplastics, mechanical properties, dynamic mechanical analysis, time temperature superposition.

5740 An Efficient Collocation Method for Solving the Variable-Order Time-Fractional Partial Differential Equations Arising from the Physical Phenomenon

Authors: Haniye Dehestani, Yadollah Ordokhani

Abstract:

In this work, we present an efficient approach for solving variable-order time-fractional partial differential equations, which is based on Legendre and Laguerre polynomials. First, we introduce the pseudo-operational matrices of integer and variable fractional order of integration by using some properties of the Riemann-Liouville fractional integral. These matrices are then applied, together with the collocation method and Legendre-Laguerre functions, to solve variable-order time-fractional partial differential equations. An estimation of the error is also presented. Finally, we investigate numerical examples arising in physics to demonstrate the accuracy of the present method. Comparison of the results obtained by the present method with the exact solution and with other methods reveals that the method is very effective.
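
The operational-matrix construction itself is not reproduced here; the sketch below only checks the Riemann-Liouville fractional integral that underlies it, comparing the closed form I^alpha t^k = Gamma(k+1)/Gamma(k+1+alpha) * t^(k+alpha) against direct numerical quadrature. The order and exponent values are illustrative.

import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

def rl_integral_monomial(t, k, alpha):
    # Closed form of the Riemann-Liouville fractional integral of t**k of order alpha.
    return gamma(k + 1) / gamma(k + 1 + alpha) * t ** (k + alpha)

def rl_integral_quadrature(t, k, alpha):
    # Direct evaluation of 1/Gamma(alpha) * integral_0^t (t - s)**(alpha - 1) * s**k ds.
    val, _ = quad(lambda s: (t - s) ** (alpha - 1) * s ** k, 0.0, t)
    return val / gamma(alpha)

t, k, alpha = 1.5, 2, 0.7    # illustrative evaluation point, exponent and (variable) order
print(rl_integral_monomial(t, k, alpha), rl_integral_quadrature(t, k, alpha))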

Keywords: Collocation method, fractional partial differential equations, Legendre-Laguerre functions, pseudo-operational matrix of integration.

5739 Formalizing a Procedure for Generating Uncertain Resource Availability Assumptions Based On Real Time Logistic Data Capturing with Auto-ID Systems for Reactive Scheduling

Authors: Lars Laußat, Manfred Helmus, Kamil Szczesny, Markus König

Abstract:

As one result of the project “Reactive Construction Project Scheduling using Real Time Construction Logistic Data and Simulation”, a procedure for using data about uncertain resource availability assumptions in reactive scheduling processes has been developed. Prediction data about resource availability are generated in a formalized way using real-time monitoring data, e.g. from auto-ID systems on the construction site and in the supply chains. The paper focuses on the formalization of the procedure for monitoring construction logistic processes, for the detection of disturbances, and for the generation of new and uncertain scheduling assumptions for the reactive resource-constrained simulation procedure that is and will be further described in other papers.

Keywords: Auto-ID, Construction Logistic, Fuzzy, Monitoring, RFID, Scheduling.

5738 Visual Text Analytics Technologies for Real-Time Big Data: Chronological Evolution and Issues

Authors: Siti Azrina B. A. Aziz, Siti Hafizah A. Hamid

Abstract:

New approaches to analyzing and visualizing data streams in real time are important for enabling decision makers to make prompt decisions. Financial market trading and surveillance, large-scale emergency response and crowd control are some example scenarios that require real-time analytics and data visualization. This situation has led to the development of techniques and tools that support humans in analyzing the source data. With the emergence of Big Data and social media, new techniques and tools are required to process streaming data. Today, a range of tools implementing some of these functionalities is available. In this paper, we present a chronological evaluation of the evolution of technologies supporting real-time analytics and visualization of data streams. Based on research papers published from 2002 to 2014, we gathered general information, main techniques, challenges and open issues. The techniques for streaming text visualization are identified in chronological order based on the Text Visualization Browser. This paper aims to review the evolution of streaming text visualization techniques and tools, as well as to discuss the problems and challenges of each identified tool.

Keywords: Information visualization, visual analytics, text mining, visual text analytics tools, big data visualization.

5737 Evaluation of the Quality of Education Offered to Students with Special Needs in Public Schools in the City of Bauru, Brazil

Authors: V. L. M. F. Capellini, A. P. P. M. Maturana, N. C. M. Brondino, M. B. C. L. B. M. Peixoto, A. J. Broughton

Abstract:

A paradigm shift is a process. The process of implementing inclusive education, a system constructed to support all learners, requires planning, identification, experimentation, and evaluation. In this vein, the purpose of the present study was to evaluate the capacity of one Brazilian state school system to provide special education students with a quality inclusive education. This study originated at the behest of concerned families of students with special needs who filed complaints with the Municipality of Bauru, São Paulo. These families claimed that 1) children with learning differences and educational needs had not been identified for services, and 2) those who had been identified had not received sufficient specialized educational assistance (SEA) in schools across the City of Bauru. Hence, the Office of Civil Rights for the state of São Paulo (Ministério Público de São Paulo) summoned the local higher education institution, UNESP, to design a research study to investigate these allegations. In this exploratory study, descriptive data were gathered from all elementary and middle schools, including 58 state schools and 17 city schools, for a total of 75 schools. Data collection consisted of each school's annual strategic action plan, and surveys and interviews with all school stakeholders to determine their perceptions of the inclusive education available to students with Special Education Needs (SEN). The data were collected as one of four stages in a larger study, which also included field observations of focal students' experiences and a continuing education course for all teachers and administrators in both state and city schools. For the purposes of this study, the researchers were interested in understanding the perceptions of school staff, parents, and students across all schools. Therefore, documents and surveys from the 75 schools were analyzed for adherence to federal legislation guaranteeing students with SEN the right to special education assistance within the regular school setting. Results show that while some schools recognized the legal rights of SEN students to receive special education, plans to actually deliver services were absent. In conclusion, the results of this study revealed that both school staff and families have insufficient planning and accessibility resources, and that the schools have inadequate infrastructure for full-time support of SEN students, i.e., structures and systems to support the identification of SEN and the delivery of services within the schools of Bauru, SP. Having identified the areas of need, the city is now prepared to take the next steps in the process of preparing all schools to be inclusive.

Keywords: Inclusive education, special education, special needs.

5736 Physical Properties and Resistant Starch Content of Rice Flour Residues Hydrolyzed by α-Amylase

Authors: Waranya Pongpaiboon, Warangkana Srichamnong, Supat Chaiyakul

Abstract:

Enzymatic modification of rice flour can produce highly functional derivatives for use in the food industry. This study aimed to evaluate the physical properties and resistant starch content of rice flour residues hydrolyzed by α-amylase. Rice flour hydrolyzed by α-amylase (60 and 300 U/g) for 1, 24 and 48 hours was investigated. Increasing enzyme concentration and hydrolysis time decreased the lightness (L*) of the rice flour residues but increased their redness (a*) and yellowness (b*). The resistant starch content and peak viscosity increased as hydrolysis time increased. The pasting temperature, trough viscosity, breakdown, final viscosity, setback and peak time of the hydrolyzed flours were not significantly different (p>0.05). The granules of the native flour were smooth, without observable pores, and polygonal with sharp angles and edges. After hydrolysis, however, granules with a slightly rough and porous surface were observed, and the roughness and porosity increased with increasing hydrolysis time. The X-ray diffraction pattern of the native flour showed an A-type configuration, whereas the hydrolyzed flour showed almost 0% crystallinity, indicating that both the amorphous and crystalline structures of the starch were simultaneously hydrolyzed by α-amylase.

Keywords: α-Amylase, Enzymatic hydrolysis, Pasting properties, Resistant starch

5735 Optimization of Wire EDM Parameters for Fabrication of Micro Channels

Authors: Gurinder Singh Brar, Sarbjeet Singh, Harry Garg

Abstract:

Wire Electric Discharge Machining (WEDM) is a thermal machining process capable of machining very hard, electrically conductive materials irrespective of their hardness. WEDM is widely used to machine micro-scale parts with high dimensional accuracy and surface finish. The objective of this paper is to optimize the process parameters of wire EDM to fabricate micro channels and to calculate the surface finish and material removal rate of the micro channels fabricated using wire EDM. The material used is aluminum 6061 alloy. The experiments were performed on a CNC wire-cut electric discharge machine. The effects of various WEDM parameters, namely pulse on time (TON) with levels (100, 150, 200), pulse off time (TOFF) with levels (25, 35, 45) and peak current (IP) with levels (105, 110, 115), on the output parameters, i.e. surface roughness and Material Removal Rate (MRR), were investigated. Each experiment was conducted under different conditions of pulse on time, pulse off time and peak current. For material removal rate, TON and IP were the most significant process parameters; MRR increases with an increase in TON and IP and decreases with an increase in TOFF. For surface roughness, TON and IP have the maximum effect, and TOFF was found to be less effective.

Keywords: Micro Channels, Wire Electric Discharge Machining (WEDM), Metal Removal Rate (MRR), Surface Finish.

5734 A Study of Agile-Based Approaches to Improve Software Quality

Authors: Gurmeet Kaur, Jyoti Pruthi

Abstract:

Agile software development approaches and techniques are considered efficient, effective, and popular methods for the development of software. They are useful for developing high-quality software that meets client requirements with zero defects and within a short delivery period. In the agile software development methodology, quality is related to coding, which means quality is managed through approaches like refactoring, pair programming, test-driven development, behavior-driven development, acceptance test-driven development, and demand-driven development. The quality of software is measured using metrics like the number of defects during the development and improvement of the software. Usage of the above-mentioned approaches reduces the possibility of defects in the developed software and hence improves quality. This paper focuses on the study of agile-based quality approaches for software development that ensure improved software quality as well as reduced cost and customer satisfaction.

Keywords: Agile software development, ASD, acceptance test-driven development, ATDD, behavior-driven development, BDD, demand-driven development, DDD, test-driven development, TDD.

5733 A New Approach for Counting Passersby Utilizing Space-Time Images

Authors: A. Elmarhomy, S. Karungaru, K. Terada

Abstract:

Understanding the number of people and the flow of persons is useful for efficient promotion of institution management and for improving a company's sales. This paper introduces an automated method for counting passersby using virtual-vertical measurement lines. The process of recognizing a passerby is carried out using an image sequence obtained from a USB camera. The space-time image represents the human regions, which are extracted using a segmentation process. To handle the problem of mismatching, different color spaces are used to perform template matching, which automatically chooses the best match to determine passerby direction and speed. A relation between passerby speed and the human-pixel area is used to distinguish between one and two passersby. In the experiment, the camera is fixed at the entrance door of the hall in a side-viewing position. Finally, experimental results verify the effectiveness of the presented method, which correctly detects passersby and successfully counts them according to their direction with an accuracy of 97%.
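
A minimal sketch of the space-time image idea referenced above, not the authors' full pipeline: the pixel column under a virtual vertical measurement line is sampled from each frame and stacked over time, so a crossing person appears as a blob whose slope and area relate to direction and speed. The frame source and line position are illustrative.

import numpy as np

def build_space_time_image(frames, line_x):
    # frames: iterable of grayscale images (H x W); line_x: column index of the virtual
    # vertical measurement line. Each frame contributes one column of the space-time
    # image, so the horizontal axis is time and the vertical axis is image height.
    columns = [np.asarray(f)[:, line_x] for f in frames]
    return np.stack(columns, axis=1)

# Synthetic example: 120 empty frames with a bright "person" crossing the line
frames = [np.zeros((240, 320), dtype=np.uint8) for _ in range(120)]
for t in range(40, 60):                     # the person occupies the line for 20 frames
    frames[t][100:180, 158:162] = 255
st_image = build_space_time_image(frames, line_x=160)
print(st_image.shape, (st_image > 0).sum())  # (240, 120); the nonzero band is the crossing blob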

Keywords: counting passersby, virtual-vertical measurement line, passerby speed, space-time image

5732 The Application of Real Options to Capital Budgeting

Authors: George Yungchih Wang

Abstract:

Real options theory suggests that managerial flexibility embedded within irreversible investments can account for a significant part of project value. Although the argument has become the dominant focus of capital investment theory over recent decades, recent survey literature in capital budgeting indicates that corporate practitioners still do not explicitly apply real options in investment decisions. In this paper, we explore how real options decision criteria can be transformed into equivalent capital budgeting criteria under uncertainty, assuming that the underlying stochastic process follows a geometric Brownian motion (GBM), a mixed diffusion-jump (MX), or a mean-reverting process (MR). These equivalent valuation techniques can be readily decomposed into conventional investment rules and "option impacts", the latter describing the impact on optimal investment rules once the option value is considered. Based on numerical analysis and Monte Carlo simulation, three major findings are derived. First, it is shown that real options can be successfully integrated into the mindset of conventional capital budgeting. Second, the inclusion of option impacts tends to delay investment; the delay effect is the most significant under a GBM process and the least significant under an MR process. Third, it is optimal to adopt the new capital budgeting criteria in investment decision-making, and adopting a suboptimal investment rule without considering real options could lead to a substantial loss in value.
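
The following is not the paper's valuation model; it is a minimal Monte Carlo sketch of the kind of GBM experiment the abstract mentions, comparing the invest-now NPV with the value of waiting one period before deciding. All cash-flow figures and process parameters are illustrative.

import numpy as np

rng = np.random.default_rng(0)
V0, I = 100.0, 95.0          # illustrative present value of project cash flows and investment cost
mu, sigma, r, dt = 0.04, 0.30, 0.05, 1.0
n_paths = 200_000

# Invest now: conventional NPV rule
npv_now = V0 - I

# Wait one period: project value follows a GBM, invest only if it ends up above the cost
z = rng.standard_normal(n_paths)
V1 = V0 * np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
defer_value = np.exp(-r * dt) * np.maximum(V1 - I, 0.0).mean()

print(f"NPV of investing now: {npv_now:.2f}")
print(f"Value of deferring  : {defer_value:.2f}")   # the difference illustrates an "option impact"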

Keywords: real options, capital budgeting, geometric Brownian motion, mixed diffusion-jump, mean-reverting process

5731 Robust Image Transmission Over Time-varying Channels using Hierarchical Joint Source Channel Coding

Authors: Hatem Elmeddeb, Noureddine Hamdi, Ammar Bouallègue

Abstract:

In this paper, a joint source-channel coding (JSCC) scheme for time-varying channels is presented. The proposed scheme uses a hierarchical framework for both the source encoder and transmission via QAM modulation. Hierarchical joint source-channel codes with hierarchical QAM constellations are designed to track the channel variations, which yields a higher throughput by adapting certain receiver parameters to the channel variation. We consider the problem of still image transmission over time-varying channels with channel state information (CSI) available 1) at the receiver only and 2) at both the transmitter and the receiver. We describe an algorithm that optimizes hierarchical source codebooks by minimizing the distortion due to the source quantizer and channel impairments. Simulation results, based on image representation, show that the proposed hierarchical system outperforms conventional schemes based on a single modulator and channel-optimized source coding.

Keywords: Channel-optimized VQ (COVQ), joint optimization, QAM, hierarchical systems.

5730 Examining Effects of Electronic Market Functions on Decrease in Product Unit Cost and Response Time to Customer

Authors: Maziyar Nouraee

Abstract:

Electronic markets have contributed remarkably to business transactions in recent decades. Many organizations consider traditional ways of trading uneconomical and therefore trade only through electronic markets. There are different categorizations of electronic market functions. In one classification, the functions of electronic markets are categorized into three classes: information, transaction, and value added. In the present paper, the effects of these three classes on two major elements of supply chain management are measured: the decrease in product unit cost and the reduction in response time to the customer. The results of the current research show that, among the nine minor elements related to the three classes of electronic market functions, six factors influence the reduction of product unit cost and three factors influence the reduction of response time to the customer.

Keywords: Electronic Commerce, Electronic Market, B2B Trade, Supply Chain Management.

5729 Enhancing Performance of Bluetooth Piconets Using Priority Scheduling and Exponential Back-Off Mechanism

Authors: Dharmendra Chourishi “Maitraya”, Sridevi Seshadri

Abstract:

Bluetooth is a personal wireless communication technology being applied in many scenarios. It is an emerging standard for short-range, low-cost, low-power wireless access. Currently existing MAC (Medium Access Control) scheduling schemes only provide best-effort service for all master-slave connections, and it is very challenging to provide QoS (Quality of Service) support for different connections due to the Master Driven TDD (Time Division Duplex) feature. No solution is available that supports both the delay and bandwidth guarantees required by real-time applications. This paper addresses the issue of how to enhance QoS support in a Bluetooth piconet. The Bluetooth specification proposes a Round Robin scheduler as a possible solution for scheduling transmissions in a Bluetooth piconet. We propose an algorithm which reduces bandwidth waste and enhances the efficiency of the network. We define token counters to estimate the traffic of real-time slaves. To increase bandwidth utilization, a back-off mechanism is then presented for best-effort slaves to decrease the frequency of polling idle slaves. Simulation results demonstrate that our scheme achieves better performance than Round Robin scheduling.
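
A minimal sketch, not the authors' simulator, of the back-off idea described above: each best-effort slave that answers a poll with no data has its polling interval doubled up to a cap, and the interval is reset as soon as the slave has traffic again. Class names and limits are illustrative.

class BackoffPoller:
    # Exponential back-off polling for best-effort slaves in a piconet-like scheduler.
    def __init__(self, slaves, max_interval=32):
        self.max_interval = max_interval
        self.interval = {s: 1 for s in slaves}   # current back-off interval in polling rounds
        self.wait = {s: 0 for s in slaves}       # rounds remaining before the next poll

    def due_slaves(self):
        # Slaves whose back-off counter has expired in this round.
        return [s for s, w in self.wait.items() if w == 0]

    def report(self, slave, had_data):
        if had_data:
            self.interval[slave] = 1             # active slave: poll again next round
        else:
            self.interval[slave] = min(self.interval[slave] * 2, self.max_interval)
        self.wait[slave] = self.interval[slave]

    def tick(self):
        for s in self.wait:
            if self.wait[s] > 0:
                self.wait[s] -= 1

# Example: slave "B" stays idle, so it is polled less and less often.
poller = BackoffPoller(["A", "B"])
for round_no in range(10):
    for s in poller.due_slaves():
        poller.report(s, had_data=(s == "A"))
    poller.tick()
print(poller.interval)   # {'A': 1, 'B': 8} after several idle polls of B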

Keywords: Piconet, Medium Access Control, Polling algorithm, Scheduling, QoS, Time Division Duplex (TDD).

5728 Investigating Performance of Numerical Distance Relay with Higher Order Antialiasing Filter

Authors: Venkatesh C., K. Shanti Swarup

Abstract:

This paper investigates the impact on operating time delay and relay maloperation when 1st, 2nd and 3rd order analog antialiasing filters are used in numerical distance protection. An RC filter with a cut-off frequency of 90 Hz is used. Simulations are carried out for different SIR (source to line impedance ratio), load, fault type and fault conditions using SIMULINK, where the voltage and current signals are fed online to the developed numerical distance relay model. Matlab is used for plotting the impedance trajectory. The investigation results show that in about 75% of the simulated cases the numerical distance relay operating time is not increased, even though there is a time delay when higher order filters are used. Relay maloperation also decreases (i.e., selectivity increases) when higher order filters are used in numerical distance protection.

Keywords: Antialiasing, capacitive voltage transformers, delay estimation, discrete Fourier transform (DFT), distance measurement, low-pass filters, source to line impedance ratio (SIR), protective relaying.

5727 Optimal Sliding Mode Controller for Knee Flexion During Walking

Authors: Gabriel Sitler, Yousef Sardahi, Asad Salem

Abstract:

This paper presents an optimal and robust sliding mode controller (SMC) to regulate the knee joint angle for patients suffering from knee injuries. The controller imitates the role of active orthoses that produce the joint torques required to overcome gravity and loading forces and regain natural human movements. To this end, a mathematical model of the shank, the lower part of the leg, is derived first and then used for the control system design and computer simulations. The design of the controller is carried out in an optimal, multi-objective setting. Four objectives are considered: minimization of the control effort and tracking error, and maximization of the control signal smoothness and the closed-loop system's speed of response. Optimal solutions in terms of the Pareto set and its image, the Pareto front, are obtained. The results show that there are trade-offs among the design objectives and many optimal solutions from which the decision-maker can choose to implement. Also, computer simulations conducted at different points of the Pareto set, assuming a knee squat movement, demonstrate the competing relationships among the design goals. In addition, the proposed control algorithm shows robustness in tracking a standard gait signal when accounting for uncertainty in the shank's parameters.
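
A minimal sketch, not the authors' model or tuned gains, of a sliding mode law for a single-link, shank-like pendulum: the sliding surface combines the angle error and its rate, and a smoothed switching term drives the state to the surface despite a bounded disturbance torque. All parameters below are illustrative.

import numpy as np

# Single-link pendulum: J*th'' = -m*g*l*sin(th) - b*th' + u + d(t)
J, m, g, l, b = 0.15, 3.0, 9.81, 0.25, 0.05      # illustrative shank-like parameters
lam, K = 8.0, 12.0                                # sliding-surface slope and switching gain

def smc_torque(th, dth, th_ref, dth_ref, ddth_ref):
    e, de = th - th_ref, dth - dth_ref
    s = de + lam * e                              # sliding surface s = e_dot + lam*e
    u_eq = J * (ddth_ref - lam * de) + m * g * l * np.sin(th) + b * dth
    return u_eq - K * np.tanh(s / 0.05)           # tanh smooths the sign() term to limit chattering

dt, th, dth = 1e-3, 0.0, 0.0
for k in range(4000):                             # track a constant 30-degree flexion target
    th_ref = np.deg2rad(30.0)
    u = smc_torque(th, dth, th_ref, 0.0, 0.0)
    d = 0.5 * np.sin(2 * np.pi * 1.0 * k * dt)    # bounded disturbance torque
    ddth = (-m * g * l * np.sin(th) - b * dth + u + d) / J
    dth += ddth * dt
    th += dth * dt
print(np.rad2deg(th))                             # settles near 30 degrees despite the disturbance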

Keywords: Optimal control, multi-objective optimization, sliding mode control, wearable knee exoskeletons.

5726 Fault Detection and Diagnosis of Broken Bar Problem in Induction Motors Base Wavelet Analysis and EMD Method: Case Study of Mobarakeh Steel Company in Iran

Authors: M. Ahmadi, M. Kafil, H. Ebrahimi

Abstract:

Nowadays, induction motors play a significant role in industry. Condition monitoring (CM) of this equipment has gained remarkable importance in recent years due to huge production losses, substantial imposed costs and increases in vulnerability, risk, and uncertainty levels. Motor current signature analysis (MCSA) is one of the most important techniques in CM and can be used for rotor broken bar detection. Signal processing methods such as the Fast Fourier Transform (FFT), the wavelet transform and Empirical Mode Decomposition (EMD) are used for analyzing MCSA output data. In this study, these signal processing methods are used for broken bar detection in Mobarakeh Steel Company induction motors. Based on the wavelet transform method, an index for fault detection, CF, is introduced, defined as the ratio of the maximum to the mean of the wavelet transform coefficients. We find that, in the broken bar condition, the CF factor is greater than in the healthy condition. Based on the EMD method, the energy of the intrinsic mode functions (IMFs) is calculated, and it is found that when motor bars become broken, the energy of the IMFs increases.
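
A minimal sketch, not the plant data or the authors' exact wavelet, of a CF-style index as described above: a one-level Haar detail decomposition of a stator-current-like signal, with CF taken as the maximum over the mean of the absolute detail coefficients. The synthetic (1 ± 2s)f sidebands used to mimic a broken bar are illustrative.

import numpy as np

def haar_detail(x):
    # One-level Haar wavelet detail coefficients of a 1-D signal.
    x = x[: len(x) // 2 * 2]
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

def cf_index(current):
    d = np.abs(haar_detail(current))
    return d.max() / d.mean()      # CF: maximum-to-mean ratio of the coefficients

fs, f = 5000.0, 50.0
t = np.arange(0, 2.0, 1 / fs)
healthy = np.sin(2 * np.pi * f * t) + 0.001 * np.random.default_rng(1).standard_normal(t.size)
s = 0.03                           # slip; broken bar mimicked by (1 ± 2s)f sidebands
faulty = healthy + 0.05 * np.sin(2 * np.pi * (1 - 2 * s) * f * t) \
                 + 0.05 * np.sin(2 * np.pi * (1 + 2 * s) * f * t)
print(cf_index(healthy), cf_index(faulty))   # CF is larger for the faulty signal in this example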

Keywords: Broken bar, condition monitoring, diagnostics, empirical mode decomposition, Fourier transform, wavelet transform.

5725 Comparison between Deterministic and Probabilistic Stability Analysis, Featuring Consequent Risk Assessment

Authors: Isabela Moreira Queiroz

Abstract:

Slope stability analyses are largely carried out by deterministic methods and evaluated through a single safety factor. Although it is known that geotechnical parameters can present great dispersion, in such analyses they are considered fixed and known. Probabilistic methods, in turn, incorporate the variability of the key input parameters (random variables), resulting in a range of safety factor values and thus enabling the determination of the probability of failure, which is an essential parameter in the calculation of risk (probability multiplied by the consequence of the event). Among the probabilistic methods, three are frequently used in the geotechnical community: FOSM (First-Order, Second-Moment), Rosenblueth (point estimates) and Monte Carlo. This paper presents a comparison between the results of deterministic and probabilistic analyses (FOSM, Monte Carlo and Rosenblueth) applied to a hypothetical slope. The goal was to evaluate the behavior of the slope and to perform the consequent risk analysis, which is used to calculate the risk and to analyze mitigation and control solutions. It can be observed that the results obtained by the three probabilistic methods were quite close. It should be noted that the calculation of risk makes it possible to prioritize the implementation of mitigation measures. Therefore, it is recommended to perform a thorough assessment of the geological-geotechnical model, incorporating the uncertainty in feasibility, design, construction, operation and closure by means of risk management.
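
A minimal sketch, not the paper's slope model, of the Monte Carlo step described above: the factor of safety of an infinite slope is evaluated for random cohesion and friction angle, and the probability of failure is the fraction of samples with FS < 1. The geometry and distribution parameters are illustrative.

import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Infinite-slope factor of safety (no pore pressure):
# FS = [c + gamma*z*cos^2(beta)*tan(phi)] / [gamma*z*sin(beta)*cos(beta)]
gamma, z, beta = 18.0, 5.0, np.deg2rad(35.0)      # unit weight (kN/m3), depth (m), slope angle
c = rng.normal(10.0, 3.0, n)                      # cohesion (kPa) as a random variable
phi = np.deg2rad(rng.normal(30.0, 3.0, n))        # friction angle (deg) as a random variable

fs = (c + gamma * z * np.cos(beta) ** 2 * np.tan(phi)) / (gamma * z * np.sin(beta) * np.cos(beta))

print("mean FS               :", fs.mean())
print("probability of failure:", (fs < 1.0).mean())
# risk = probability of failure x consequence (e.g., monetary loss) for each mitigation scenario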

Keywords: Probabilistic methods, risk assessment, risk management, slope stability.

5724 Enhanced Shell Sorting Algorithm

Authors: Basit Shahzad, Muhammad Tanvir Afzal

Abstract:

Many algorithms are available for sorting unordered elements, the most important being Bubble sort, Heap sort, Insertion sort and Shell sort. These algorithms have their own pros and cons. Shell sort, which is an enhanced version of insertion sort, reduces the number of swaps of the elements being sorted to minimize complexity and time as compared to insertion sort. Shell sort improves the efficiency of insertion sort by quickly shifting values to their destination. The average sort time is O(n^1.25), while the worst-case time is O(n^1.5). It performs a number of iterations; in each iteration it swaps some elements of the array in such a way that in the last iteration, when the value of h is one, the number of swaps is reduced. Donald L. Shell invented a formula to calculate the value of 'h'. This work focuses on identifying an improvement in the conventional Shell sort algorithm. The 'Enhanced Shell Sort algorithm' is an improvement in the way the value of 'h' is calculated. It has been observed that by applying this algorithm, the number of swaps can be reduced by up to 60 percent as compared to the existing algorithm. In some cases this enhancement was also found to be faster than the existing algorithms.
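
A minimal sketch of the baseline the abstract modifies, i.e., conventional Shell sort with Shell's original gap rule h = floor(h/2), instrumented with a swap counter so that alternative gap formulas (the paper's enhanced rule is not reproduced here) can be compared. The data values are illustrative.

def shell_sort(a):
    # Conventional Shell sort with Shell's gap sequence h = n//2, n//4, ..., 1.
    a = list(a)
    swaps = 0
    h = len(a) // 2
    while h >= 1:
        for i in range(h, len(a)):
            j = i
            while j >= h and a[j - h] > a[j]:
                a[j - h], a[j] = a[j], a[j - h]   # gapped swap toward the final position
                swaps += 1
                j -= h
        h //= 2
    return a, swaps

data = [35, 33, 42, 10, 14, 19, 27, 44, 26, 31]
sorted_data, swap_count = shell_sort(data)
print(sorted_data, swap_count)    # an enhanced gap formula would aim to lower swap_count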

Keywords: Algorithm, Computation, Shell, Sorting.

5723 Locating Center Points for Radial Basis Function Networks Using Instance Reduction Techniques

Authors: Rana Yousef, Khalil el Hindi

Abstract:

The behavior of Radial Basis Function (RBF) networks greatly depends on how the center points of the basis functions are selected. In this work we investigate the use of instance reduction techniques, originally developed to reduce the storage requirements of instance-based learners, for this purpose. Five instance-based reduction techniques were used to determine the set of center points, and RBF networks were trained using these sets of centers. The performance of the RBF networks is studied in terms of classification accuracy and training time. The results obtained were compared with two reference networks: RBF networks that use all instances of the training set as center points (RBF-ALL) and Probabilistic Neural Networks (PNN). The former achieves high classification accuracy and the latter requires little training time. Results showed that RBF networks trained using sets of centers located by noise-filtering techniques (ALLKNN and ENN), rather than pure reduction techniques, produce the best results in terms of classification accuracy. These networks require less training time than RBF-ALL and achieve higher classification accuracy than PNN. Thus, using ALLKNN and ENN to select center points gives a better combination of classification accuracy and training time. Our experiments also show that using the reduced sets to train the networks is beneficial, especially in the presence of noise in the original training sets.
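
A minimal sketch, not the paper's reduction techniques (ENN and ALLKNN are not implemented here), of an RBF classifier whose centers are an arbitrary subset of the training instances, with output weights fitted by least squares. The toy dataset and width parameter are illustrative.

import numpy as np

def rbf_design(X, centers, width):
    # Gaussian basis activations for every (sample, center) pair.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(0)
# Two-class toy data: two Gaussian blobs
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
y = np.hstack([np.zeros(200), np.ones(200)])

# Stand-in for instance reduction: keep a small subset of training points as centers
idx = rng.choice(len(X), size=20, replace=False)
centers, width = X[idx], 1.0

Phi = rbf_design(X, centers, width)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)        # least-squares output weights
pred = (rbf_design(X, centers, width) @ w) > 0.5
print("training accuracy:", (pred == y).mean())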

Keywords: Radial basis function networks, Instance-based reduction, PNN.

5722 Computational Fluid Dynamics Expert System using Artificial Neural Networks

Authors: Gonzalo Rubio, Eusebio Valero, Sven Lanzan

Abstract:

The design of a modern aircraft is based on three pillars: theoretical results, experimental tests and computational simulations. As a result, Computational Fluid Dynamics (CFD) solvers are widely used in the aeronautical field. These solvers require the correct selection of many parameters in order to obtain successful results, and the computational time spent in the simulation depends on the proper choice of these parameters. In this paper we create an expert system capable of making an accurate prediction of the number of iterations and the time required for the convergence of a CFD solver. An artificial neural network (ANN) has been used to design the expert system. It is shown that the developed expert system accurately predicts the number of iterations and the time required for the convergence of a CFD solver.

Keywords: Artificial Neural Network, Computational Fluid Dynamics, Optimization

5721 A Modified Speech Enhancement Using Adaptive Gain Equalizer with Non linear Spectral Subtraction for Robust Speech Recognition

Authors: C. Ganesh Babu, P. T. Vanathi

Abstract:

In this paper we present an enhanced noise reduction method for robust speech recognition using an Adaptive Gain Equalizer with Non-linear Spectral Subtraction. In the Adaptive Gain Equalizer (AGE) method, the input signal is divided into a number of subbands that are individually weighted in the time domain according to the short-time Signal-to-Noise Ratio (SNR) estimated in each subband at every time instant. The focus is on speech enhancement rather than on noise suppression. When the method was analyzed under various noise conditions for speech recognition, it was found that the Adaptive Gain Equalizer algorithm has an obvious failing point at an SNR of -5 dB, with inadequate levels of noise suppression for SNRs below this point. This work proposes the implementation of AGE coupled with Non-linear Spectral Subtraction (AGE-NSS) for robust speech recognition. The experimental results show that AGE-NSS outperforms AGE when the SNR drops below the -5 dB level.
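
A minimal sketch, not the authors' AGE-NSS system, of the plain spectral subtraction component mentioned above: a noise magnitude spectrum estimated from a noise-only segment is subtracted frame by frame from the noisy magnitude spectrum, and frames are re-synthesised with the noisy phase. Frame size, the spectral floor and the test signal are illustrative.

import numpy as np

def spectral_subtraction(noisy, noise_ref, frame=256, floor=0.01):
    # Frame-wise magnitude spectral subtraction (overlap-add omitted for brevity).
    noise_mag = np.abs(np.fft.rfft(noise_ref[:frame]))        # noise estimate from a noise-only frame
    out = np.zeros_like(noisy)
    for start in range(0, len(noisy) - frame + 1, frame):
        spec = np.fft.rfft(noisy[start:start + frame])
        mag = np.maximum(np.abs(spec) - noise_mag, floor * np.abs(spec))
        out[start:start + frame] = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=frame)
    return out

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
clean = np.sin(2 * np.pi * 440 * t)                           # stand-in for a speech signal
noise = 0.3 * np.random.default_rng(0).standard_normal(t.size)
enhanced = spectral_subtraction(clean + noise, noise)
print("input SNR (dB) :", 10 * np.log10(clean.var() / noise.var()))
print("output SNR (dB):", 10 * np.log10(clean.var() / (enhanced - clean).var()))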

Keywords: Adaptive Gain Equalizer, Non Linear Spectral Subtraction, Speech Enhancement, and Speech Recognition.

5720 The Application of an Experimental Design for the Defect Reduction of Electrodeposition Painting on Stainless Steel Washers

Authors: Chansiri Singhtaun, Nattaporn Prasartthong

Abstract:

The purpose of this research is to reduce the number of incompletely coated stainless steel washers in the electrodeposition painting process by using an experimental design technique. Surface preparation was found to be a major determinant of painted surface quality. The influence of the pretreatment and painting process parameters, namely cleaning time, chemical concentration and shape of hanger, was studied. A 2^3 factorial design with two replications was performed. The analysis of variance for the designed experiment showed the strong influence of cleaning time and shape of hanger. From this study, an optimized cleaning time was determined, and a newly designed electrically conductive hanger proved to be superior to the original one. The experimental verification results showed that the amount of incomplete coating defects decreased from 4% to 1.02% and the operation cost decreased by 10.5%.
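
A minimal sketch, with made-up response values, of how main effects are estimated in a 2^3 factorial design with two replications like the one described above: each factor's main effect is the mean response at its high level minus the mean at its low level. The defect rates and the assumed dominant factors below are illustrative, not the paper's data.

import itertools
import numpy as np

factors = ["cleaning_time", "chemical_concentration", "hanger_shape"]
runs = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)   # the 8 treatment combinations

# Illustrative defect rates (%) for two replications of each run (assumes time and hanger dominate)
rng = np.random.default_rng(3)
true_effect = np.array([-1.2, -0.2, -1.0])
y = np.array([[3.0 + run @ true_effect / 2 + rng.normal(0, 0.1) for _ in range(2)]
              for run in runs])

mean_response = y.mean(axis=1)
for j, name in enumerate(factors):
    effect = mean_response[runs[:, j] == 1].mean() - mean_response[runs[:, j] == -1].mean()
    print(f"main effect of {name:23s}: {effect:+.2f} % defects")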

Keywords: Defect reduction, design of experiments, electrodeposition painting, stainless steel.

5719 WLAN Positioning Based on Joint TOA and RSS Characteristics

Authors: Peerapong Uthansakul, Monthippa Uthansakul

Abstract:

WLAN positioning has been addressed by many approaches in the literature using the characteristics of Received Signal Strength (RSS), Time of Arrival (TOA) or Time Difference of Arrival (TDOA), Angle of Arrival (AOA) and cell ID. Among these, the RSS approach is the simplest to implement because there is no need to modify either the access points or the client devices, but its accuracy is poor due to physical environments. For the TOA or TDOA approach, the accuracy is quite acceptable, but most studies have to modify either software or hardware on the existing WLAN infrastructure. The scale of modification ranges from the access card alone up to changes in the WLAN protocol. Hence, TOA or TDOA alone is an unattractive approach for a positioning system. In this paper, the new concept of merging both RSS and TOA positioning techniques is proposed. In addition, a method to obtain the TOA characteristic for positioning WLAN users without any extra modification to the existing system is presented. The measurement results confirm that the proposed technique using both RSS and TOA characteristics provides better accuracy than using only either RSS or TOA.
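
A minimal sketch, not the authors' measurement system, of the idea of combining the two characteristics: each access point yields one distance estimate from a log-distance RSS path-loss model and one from TOA, the two are averaged, and the position is obtained by linearized least-squares trilateration. All model constants, anchor positions and measurements are illustrative.

import numpy as np

def rss_distance(rss_dbm, p0=-40.0, n=3.0):
    # Log-distance path loss: RSS = P0 - 10*n*log10(d)  =>  d = 10**((P0 - RSS)/(10*n))
    return 10 ** ((p0 - rss_dbm) / (10.0 * n))

def toa_distance(toa_s, c=3.0e8):
    return toa_s * c

def trilaterate(anchors, d):
    # Linearized least squares: subtract the first anchor's circle equation from the others.
    x0, y0 = anchors[0]
    A = 2 * (anchors[1:] - anchors[0])
    b = d[0] ** 2 - d[1:] ** 2 + (anchors[1:] ** 2).sum(axis=1) - x0 ** 2 - y0 ** 2
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 20.0], [20.0, 20.0]])  # AP positions (m)
rss = np.array([-70.5, -76.0, -74.0, -78.0])          # illustrative RSS measurements (dBm)
toa = np.array([34.0, 54.0, 45.0, 61.0]) * 1e-9       # illustrative TOA measurements (s)

d = 0.5 * (rss_distance(rss) + toa_distance(toa))     # joint RSS/TOA distance estimate per AP
print(trilaterate(anchors, d))                        # ~ (6, 8) m, the position used to generate the data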

Keywords: Received signal strength, Time of arrival, Positioning system, WLAN, Measurement.

5718 A Novel Prediction Method for Tag SNP Selection using Genetic Algorithm based on KNN

Authors: Li-Yeh Chuang, Yu-Jen Hou, Jr., Cheng-Hong Yang

Abstract:

Single nucleotide polymorphisms (SNPs) hold much promise as a basis for disease-gene association. However, research is limited by the cost of genotyping the tremendous number of SNPs. Therefore, it is important to identify a small subset of informative SNPs, the so-called tag SNPs. This subset consists of selected SNPs of the genotypes and accurately represents the rest of the SNPs. Furthermore, an effective evaluation method is needed to evaluate the prediction accuracy of a set of tag SNPs. In this paper, a genetic algorithm (GA) is applied to the tag SNP problem, and the K-nearest neighbor (K-NN) algorithm serves as the prediction method for tag SNP selection. The experimental data used were taken from the HapMap project; they consist of genotype data rather than haplotype data. The proposed method consistently identified tag SNPs with considerably better prediction accuracy than methods from the literature, while the number of tag SNPs identified was smaller than in the other methods. The run time of the proposed method was also much shorter than that of the SVM/STSA method when the same accuracy was reached.

Keywords: Genetic Algorithm (GA), Genotype, Single nucleotide polymorphism (SNP), tag SNPs.

5717 Vehicle Routing Problem with Mixed Fleet of Conventional and Heterogeneous Electric Vehicles and Time Dependent Charging Costs

Authors: Ons Sassi, Wahiba Ramdane Cherif-Khettaf, Ammar Oulamara

Abstract:

In this paper, we consider the vehicle routing problem with a mixed fleet of conventional and heterogeneous electric vehicles and time-dependent charging costs, denoted VRP-HFCC, in which a set of geographically scattered customers has to be served by a mixed fleet composed of a heterogeneous set of Electric Vehicles (EVs), having different battery capacities and operating costs, and Conventional Vehicles (CVs). We include the possibility of charging the EVs at the available charging stations during the routes in order to serve all customers. Each charging station offers a charging service with a known charger technology and time-dependent charging costs. Charging stations are also subject to operating time-window constraints. EVs are not necessarily compatible with all available charging technologies, and partial charging is allowed. Intermittent charging at the depot is also allowed, provided that constraints related to the electricity grid are satisfied. The objective is to minimize first the number of employed vehicles and then the total travel and charging costs. In this study, we present a Mixed Integer Programming model and develop a Charging Routing Heuristic and a Local Search Heuristic based on the Inject-Eject routine with different insertion methods. All heuristics are tested on real data instances.

Keywords: charging problem, electric vehicle, heuristics, local search, optimization, routing problem.

5716 Comparisons of Fine Motor Functions in Subjects with Parkinson’s Disease and Essential Tremor

Authors: Nan-Ying Yu, Shao-Hsia Chang

Abstract:

This study explores the clinical features of patients with tremor due to neurodegenerative disease, focusing on the motor impairments of patients with Parkinson's disease (PD) and essential tremor (ET). Since it is uncertain whether PD and ET patients have a similar degree of impairment during motor tasks, this study uses a self-developed computerized handwriting movement analysis to characterize the motor functions associated with these two conditions. The recruited subjects had been diagnosed with and confirmed to have one of the neurodegenerative diseases, and they underwent general clinical evaluations by physicians in the first year. We recruited 8 participants with PD and 10 with ET. An additional 12 participants without any neuromuscular dysfunction were recruited as the control group. This study used fine motor control of penmanship on a digital tablet for the sensorimotor function tests. The movement speed in the PD/ET groups was found to be significantly slower than in the normal control group. In terms of movement intensity and speed, subjects with ET showed clinical features similar to those of PD subjects; the ET group shows smaller and slower movements than the control group, but not to the same extent as the PD group. The results of this study contribute to the early screening and detection of these diseases and to the evaluation of disease progression.

Keywords: Parkinson’s disease, essential tremor, motor function, fine motor movement, computerized handwriting evaluation.

5715 Transient Thermal Modeling of an Axial Flux Permanent Magnet (AFPM) Machine Using a Hybrid Thermal Model

Authors: J. Hey, D. A. Howey, R. Martinez-Botas, M. Lamperth

Abstract:

This paper presents the development of a hybrid thermal model for the EVO Electric AFM 140 Axial Flux Permanent Magnet (AFPM) machine as used in hybrid and electric vehicles. The adopted approach is based on a hybrid lumped parameter and finite difference method. The proposed method divides each motor component into regular elements which are connected together in a thermal resistance network representing all the physical connections in all three dimensions. The element shape and size are chosen according to the component geometry to ensure consistency. The fluid domain is lumped into one region with averaged heat transfer parameters connecting it to the solid domain. Some model parameters are obtained from Computational Fluid Dynamics (CFD) simulation and empirical data. The hybrid thermal model is described by a set of coupled linear first-order differential equations which is discretised and solved iteratively to obtain the temperature profile. The computational effort involved is low and thus the model is suitable for transient temperature prediction. The maximum error in temperature prediction is 3.4%, and the mean error is consistently lower than the mean error due to uncertainty in the measurements. The details of the model development, the temperature predictions and suggestions for design improvements are presented in this paper.
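
A minimal sketch, not the AFM 140 model, of the lumped thermal-network idea: node temperatures obey C dT/dt = P - G (T - T_amb) for a conductance matrix assembled from pairwise thermal resistances, integrated here with explicit Euler. All resistances, capacitances and losses are illustrative placeholders.

import numpy as np

# Three lumped nodes: stator winding, stator core, housing (all values illustrative)
C = np.array([350.0, 900.0, 1500.0])          # thermal capacitances (J/K)
R_cond = {(0, 1): 0.08, (1, 2): 0.05}         # conduction resistances between nodes (K/W)
R_amb = np.array([np.inf, np.inf, 0.04])      # convection to ambient (housing only)
P = np.array([400.0, 120.0, 0.0])             # injected losses (W)
T_amb = 40.0

# Conductance matrix of the thermal resistance network, including ambient links
n = len(C)
G = np.zeros((n, n))
for (i, j), r in R_cond.items():
    G[i, i] += 1 / r; G[j, j] += 1 / r
    G[i, j] -= 1 / r; G[j, i] -= 1 / r
G += np.diag(1.0 / R_amb)                     # 1/inf = 0 for nodes with no ambient link

# Explicit Euler integration of C * dT/dt = P - G @ (T - T_amb)
T, dt = np.full(n, T_amb), 0.5
for _ in range(int(3600 / dt)):               # one hour of transient
    T = T + dt * (P - G @ (T - T_amb)) / C
print(dict(zip(["winding", "core", "housing"], np.round(T, 1))))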

Keywords: Electric vehicle, hybrid thermal model, transient temperature prediction, Axial Flux Permanent Magnet machine.

5714 Performance Evaluation of Prioritized Limited Processor-Sharing System

Authors: Yoshiaki Shikata, Wataru Katagiri, Yoshitaka Takahashi

Abstract:

We propose a novel prioritized limited processor-sharing (PS) rule and a simulation algorithm for the performance evaluation of this rule. The performance measures of practical interest are evaluated using this algorithm. Suppose that there are two classes and that an arriving (class-1 or class-2) request encounters n1 class-1 and n2 class-2 requests (including the arriving one) in a single-server system. According to the proposed rule, each class-1 request individually and simultaneously receives m / (m·n1 + n2) of the service-facility capacity, whereas each class-2 request receives 1 / (m·n1 + n2) of it, if m·n1 + n2 ≤ C. Otherwise (m·n1 + n2 > C), the arriving request is queued in the corresponding class waiting room or rejected. Here, m (≥ 1) denotes the priority ratio and C (< ∞) the service-facility capacity. Under this rule, when a request arrives at [or departs from] the system, the extension [shortening] of the remaining sojourn time of each request receiving service can be calculated from the number of requests of each class and the priority ratio. Employing a simulation program to execute these events and calculations enables us to analyze the performance of the proposed prioritized limited PS rule, which is realistic in a time-sharing system (TSS) with a sufficiently small time slot. Moreover, this simulation algorithm is extended to evaluate a prioritized limited PS system with N ≥ 3 priority classes.
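
A minimal sketch, not the authors' simulator, of the admission and capacity-sharing rule stated above: given the class populations n1 and n2 (including a newly arrived request), the priority ratio m and the capacity C, it either returns the per-request shares or signals that the arrival must be queued or rejected. The example values are illustrative.

def ps_shares(n1, n2, m=2, C=10):
    # Prioritized limited processor sharing: admit only if the weighted load fits the capacity.
    load = m * n1 + n2
    if load > C:
        return None                     # arriving request is queued in its class waiting room or rejected
    return {"class1_share": m / load,   # each class-1 request receives m/(m*n1 + n2)
            "class2_share": 1 / load}   # each class-2 request receives 1/(m*n1 + n2)

print(ps_shares(2, 3))   # {'class1_share': 0.2857..., 'class2_share': 0.1428...}
print(ps_shares(4, 5))   # None -> weighted load 13 exceeds C = 10, so queue or reject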

Keywords: PS rule, priority ratio, service-facility capacity, simulation algorithm, sojourn time, performance measures
