Search results for: 9/7 Wavelet JND Thresholds and Wavelet Error Sensitivity WES
542 Granger Causal Nexus between Financial Development and Energy Consumption: Evidence from Cross Country Panel Data
Authors: Rudra P. Pradhan
Abstract:
This paper examines the Granger causal nexus between financial development and energy consumption in a group of 35 Financial Action Task Force (FATF) countries over the period 1988-2012. The study uses two financial development indicators, namely private sector credit and stock market capitalization, and seven energy consumption indicators: coal, oil, gas, electricity, hydroelectric, nuclear and biomass. Using panel cointegration tests, the study finds that financial development and energy consumption are cointegrated, indicating the presence of a long-run relationship between the two. Using a panel vector error correction model (VECM), the study detects both bidirectional and unidirectional causality between financial development and energy consumption. The variation in this causality is due to the use of different proxies for both financial development and energy consumption. The policy implication of this study is that economic policies should recognize the differences in the financial development-energy consumption nexus in order to maintain sustainable development in the selected 35 FATF countries.
Keywords: Financial development, energy consumption, Panel VECM, FATF countries.
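A pairwise Granger causality test of the kind underlying this analysis can be illustrated with ordinary least squares. The sketch below is a minimal bivariate example on synthetic series; the variable names, lag order and data are illustrative assumptions, not the paper's panel specification.

```python
import numpy as np

def granger_f_test(y, x, lags=2):
    """F-test of whether lags of x help predict y beyond lags of y alone."""
    T = len(y)
    rows = T - lags
    # Restricted model: y_t on its own lags; unrestricted model adds lags of x.
    Y = y[lags:]
    Xr = np.column_stack([np.ones(rows)] + [y[lags - k:T - k] for k in range(1, lags + 1)])
    Xu = np.column_stack([Xr] + [x[lags - k:T - k] for k in range(1, lags + 1)])
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    df1, df2 = lags, rows - Xu.shape[1]
    return ((rss_r - rss_u) / df1) / (rss_u / df2)   # compare with F(df1, df2)

rng = np.random.default_rng(0)
credit = np.cumsum(rng.normal(size=200))              # stand-in for financial development
energy = np.roll(credit, 1) + rng.normal(size=200)    # energy "caused" by lagged credit
print("F statistic:", granger_f_test(energy, credit, lags=2))
```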
541 Numerical Study of Airfoils Aerodynamic Performance in Heavy Rain Environment
Authors: M. Ismail, Cao Yihua, Zhao Ming, Abu Bakar
Abstract:
Heavy rainfall greatly affects the aerodynamic performance of aircraft, and many aircraft accidents have been caused by the degradation of aerodynamic efficiency in heavy rain. In this paper we study the effects of heavy rain on the aerodynamic efficiency of the cambered NACA 64-210 and symmetric NACA 0012 airfoils. Our results show a significant increase in drag and decrease in lift. We used the preprocessing software Gridgen to create the geometry and mesh, Fluent as the solver, and Tecplot as the postprocessor. Discrete phase modeling (DPM) is used to model the rain particles within a two-phase flow approach, and the rain particles are assumed to be inert. Both airfoils showed a significant decrease in lift and increase in drag in the simulated rain environment. The most significant difference between the two airfoils was that the NACA 64-210 is more sensitive than the NACA 0012 to liquid water content (LWC). We believe that the results presented in this paper will be useful to designers of commercial aircraft and UAVs, and helpful for training pilots to control airplanes in heavy rain.
Keywords: airfoil, discrete phase modeling, heavy rain, Reynolds number
540 A Simple Epidemiological Model for Typhoid with Saturated Incidence Rate and Treatment Effect
Authors: Steady Mushayabasa
Abstract:
Typhoid fever is a communicable disease, found only in humans, that occurs due to systemic infection mainly by the organism Salmonella typhi. The disease is endemic in many developing countries and remains a substantial public health problem despite recent progress in water and sanitation coverage. Globally, it is estimated that typhoid causes over 16 million cases of illness each year, resulting in over 600,000 deaths. A mathematical model for assessing the impact of educational campaigns on controlling the transmission dynamics of typhoid in the community has been formulated and analyzed. The reproductive number has been computed, and the stability of the model steady states has been examined. The impact of educational campaigns on controlling the transmission dynamics of typhoid is discussed through the basic reproductive number and numerical simulations. The study suggests that targeted education campaigns which are effective at stopping transmission of typhoid more than 40% of the time will be highly effective at controlling the disease in the community.
Keywords: Mathematical model, Typhoid, saturated incidence rate, treatment, reproductive number, sensitivity analysis.
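The saturated incidence and treatment terms mentioned in the abstract can be illustrated with a generic compartmental model. The sketch below uses an SIR-type system with incidence beta*S*I/(1+alpha*I) and a treatment rate, integrated by forward Euler; the equations, parameter values and names are illustrative assumptions, not the model formulated in the paper.

```python
import numpy as np

def simulate(beta=0.4, alpha=0.05, gamma=0.1, tau=0.2, days=200, dt=0.1):
    """Forward-Euler integration of an illustrative SIR model with
    saturated incidence beta*S*I/(1+alpha*I) and treatment rate tau."""
    S, I, R = 0.99, 0.01, 0.0
    history = []
    for _ in range(int(days / dt)):
        new_inf = beta * S * I / (1.0 + alpha * I)   # saturated incidence
        dS = -new_inf
        dI = new_inf - (gamma + tau) * I             # recovery plus treatment
        dR = (gamma + tau) * I
        S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
        history.append((S, I, R))
    return np.array(history)

traj = simulate()
print("peak infected fraction:", round(float(traj[:, 1].max()), 4))
# A crude reproduction number for this toy system: beta / (gamma + tau).
print("R0 (toy):", 0.4 / (0.1 + 0.2))
```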
539 A Comparison and Analysis of Name Matching Algorithms
Authors: Chakkrit Snae
Abstract:
Names are important in many societies, even in technologically oriented ones which use, e.g., ID systems to identify individual people. Names such as surnames are the most important, as they are used in many processes, such as identifying people and genealogical research. On the other hand, variation of names can be a major problem for the identification of and search for people, e.g. in web search or for security reasons. Name matching presumes a priori that the recorded name, written in one alphabet, reflects the phonetic identity of two samples or some transcription error in copying a previously recorded name; we add to this the assumption that the two names refer to the same person. This paper describes name variations and gives a basic description of various name matching algorithms developed to overcome name variation and to find reasonable variants of names, which can be used to further reduce mismatches in record linkage and name search. The implementation contains algorithms for a range of fuzzy matching based on different types of algorithms, e.g. composite and hybrid methods, allowing us to test and measure the algorithms for accuracy. NYSIIS, LIG2 and Phonex have been shown to perform well and provided sufficient flexibility to be included in the linkage/matching process for optimising name searching.
Keywords: Data mining, name matching algorithm, nominal data, searching system.
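The paper evaluates phonetic codes such as NYSIIS and Phonex; the sketch below shows Soundex, a simpler relative of those algorithms, as an illustration of how a phonetic key groups spelling variants of a name. It is not the implementation studied in the paper.

```python
def soundex(name: str) -> str:
    """Classic Soundex phonetic code: first letter plus three digits."""
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    name = "".join(c for c in name.upper() if c.isalpha())
    if not name:
        return ""
    first, prev = name[0], codes.get(name[0], "")
    digits = []
    for ch in name[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            digits.append(code)
        if ch not in "HW":          # H and W do not reset the previous code
            prev = code
    return (first + "".join(digits) + "000")[:4]

# Variant spellings map to the same key, so they match in a search index.
print(soundex("Meyer"), soundex("Meier"), soundex("Mayer"))  # all M600
```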
538 Performance Improvement of Information System of a Banking System Based on Integrated Resilience Engineering Design
Authors: S. H. Iranmanesh, L. Aliabadi, A. Mollajan
Abstract:
Integrated resilience engineering (IRE) is capable of returning banking systems to the normal state under adverse economic circumstances. In this study, the information system of a large bank (with several branches) is assessed and optimized under severe economic conditions. Data envelopment analysis (DEA) models are employed to achieve the objective of this study. Nine IRE factors are considered to be the outputs, and a dummy variable is defined as the input of the DEA models. A standard questionnaire is designed and distributed among executive managers, who are considered the decision-making units (DMUs). Reliability and validity of the questionnaire are examined based on Cronbach's alpha and the t-test. The most appropriate DEA model is determined based on average efficiency and a normality test. It is shown that the proposed integrated design provides higher efficiency than the conventional RE design. Results of sensitivity and perturbation analysis indicate that self-organization, fault tolerance, and reporting culture together account for about 50 percent of the total weight.
Keywords: Banking system, data envelopment analysis, DEA, integrated resilience engineering, IRE, performance evaluation, perturbation analysis.
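With a single constant (dummy) input for every DMU, as described in the abstract, the multiplier form of the CCR DEA model reduces to a small linear program over output weights. The sketch below is a minimal illustration of that simplified case; the questionnaire scores, scaling and function names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(outputs, dmu):
    """CCR efficiency (multiplier form) of one DMU when every DMU has a
    single constant dummy input, so only the output weights are optimized."""
    Y = np.asarray(outputs, dtype=float)         # rows: DMUs, cols: IRE factors
    res = linprog(c=-Y[dmu],                     # maximize weighted outputs of this DMU
                  A_ub=Y, b_ub=np.ones(len(Y)),  # no DMU may exceed efficiency 1
                  bounds=[(0, None)] * Y.shape[1])
    return -res.fun

# Hypothetical questionnaire scores (9 IRE factors per DMU, scaled 1-5).
rng = np.random.default_rng(1)
scores = rng.uniform(1, 5, size=(12, 9))
effs = [dea_efficiency(scores, k) for k in range(len(scores))]
print(np.round(effs, 3))
```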
537 Kinetic Modeling of Transesterification of Triacetin Using Synthesized Ion Exchange Resin (SIERs)
Authors: Hafizuddin W. Yussof, Syamsutajri S. Bahri, Adam P. Harvey
Abstract:
Strong anion exchange resins with QN+OH- have the potential to be developed and employed as heterogeneous catalysts for transesterification, as they are chemically stable against leaching of the functional group. Nine different SIERs (SIER1-9) with QN+OH- were prepared by suspension polymerization of vinylbenzyl chloride-divinylbenzene (VBC-DVB) copolymers in the presence of n-heptane (a pore-forming agent). The amine group was successfully grafted onto the polymeric resin beads through functionalization with trimethylamine. These SIERs were then used as catalysts for the transesterification of triacetin with methanol. A set of differential equations representing the Langmuir-Hinshelwood-Hougen-Watson (LHHW) and Eley-Rideal (ER) models of the transesterification reaction was developed, and these kinetic models were fitted to the experimental data. Overall, the synthesized ion exchange resin-catalyzed reaction was better described by the Eley-Rideal model than by the LHHW model, with sums of squared errors (SSE) of 0.742 and 0.996, respectively.
Keywords: Anion exchange resin, Eley-Rideal, Langmuir-Hinshelwood-Hougen-Watson, transesterification.
536 Electrical Impedance Imaging Using Eddy Current
Authors: A. Ambia, T. Takemae, Y. Kosugi, M. Hongo
Abstract:
Electrical impedance imaging is a method of reconstructing the spatial distribution of electrical conductivity inside a subject. In this paper, a new method of electrical impedance imaging using eddy current is proposed. The eddy current distribution in the body depends on the conductivity distribution and the magnetic field pattern. By changing the position of the magnetic core, a set of voltage differences is measured with a pair of electrodes. This set of voltage differences is used in the image reconstruction of the conductivity distribution. The least square error minimization method is used as the reconstruction algorithm, and the back projection algorithm is used to obtain two-dimensional images. Based on this principle, a measurement system was developed and model experiments were performed with a saline-filled phantom. The shape of each model in the reconstructed image is similar to that of the corresponding model. From the results of these experiments, it is confirmed that the proposed method is applicable to the realization of electrical imaging.
Keywords: Back projection algorithm, electrical impedance tomography, eddy current, magnetic inductance tomography.
535 Gain Tuning Fuzzy Controller for an Optical Disk Drive
Authors: Shiuh-Jer Huang, Ming-Tien Su
Abstract:
Since the driving speed and control accuracy of commercial optical disk drives are increasing significantly, an efficient controller is needed to monitor the track-seeking and track-following operations of the servo system in order to achieve the desired data-extraction response. The nonlinear behavior of the actuator and servo system of the optical disk drive influences the laser spot positioning. Here, a model-free fuzzy control scheme is employed to design the track-seeking servo controller for a d.c. motor-driven optical disk drive system. In addition, the sliding mode control strategy is introduced into the fuzzy control structure to construct a 1-D adaptive fuzzy rule intelligent controller, simplifying the implementation and improving the control performance. The experimental results show that the steady-state error of track seeking with this fuzzy controller remains within the track width (1.6 μm). It can be used in both track-seeking and track-following servo control operations.
Keywords: Fuzzy control, gain tuning, optical disk drive.
534 Solution of Density Dependent Nonlinear Reaction-Diffusion Equation Using Differential Quadrature Method
Authors: Gülnihal Meral
Abstract:
In this study, the density dependent nonlinear reaction-diffusion equation, which arises in insect dispersal models, is solved using the combined application of the differential quadrature method (DQM) and the implicit Euler method. The polynomial-based DQM is used to discretize the spatial derivatives of the problem. The resulting time-dependent nonlinear system of ordinary differential equations (ODEs) is solved by the implicit Euler method. The computations are carried out for a Cauchy problem defined by a one-dimensional density dependent nonlinear reaction-diffusion equation which has an exact solution. The DQM solution is found to be in very good agreement with the exact solution in terms of maximum absolute error. The DQM solution exhibits superior accuracy at large time levels tending to steady state. Furthermore, using an implicit method in the solution procedure leads to stable solutions, and larger time steps can be used.
Keywords: Density Dependent Nonlinear Reaction-Diffusion Equation, Differential Quadrature Method, Implicit Euler Method.
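The semi-discretization and implicit time stepping described here can be illustrated compactly. The sketch below uses a finite-difference second-derivative matrix as a stand-in for the DQM weighting matrix, a generic Fisher-type reaction term, and a semi-implicit Euler step (implicit in diffusion, explicit in the reaction); the equation, parameters and boundary conditions are illustrative assumptions, not the paper's insect-dispersal model.

```python
import numpy as np

# Illustrative solver for u_t = d*u_xx + u*(1 - u) on [0, 1] with zero
# boundary values: second-derivative matrix (a stand-in for the DQM
# weighting matrix) and a semi-implicit Euler step in time.
N, d, dt, steps = 101, 0.01, 0.01, 500
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
D2 = (np.diag(np.full(N - 1, 1.0), -1) - 2.0 * np.eye(N)
      + np.diag(np.full(N - 1, 1.0), 1)) / h**2

u = np.exp(-100.0 * (x - 0.5) ** 2)          # initial density bump
A = np.eye(N) - dt * d * D2                  # implicit diffusion operator
A[0, :], A[-1, :] = 0.0, 0.0                 # enforce Dirichlet boundaries
A[0, 0] = A[-1, -1] = 1.0
for _ in range(steps):
    rhs = u + dt * u * (1.0 - u)             # explicit reaction term
    rhs[0] = rhs[-1] = 0.0
    u = np.linalg.solve(A, rhs)
print("max density at final time:", round(float(u.max()), 4))
```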
533 A Study of Islamic Stock Indices and Macroeconomic Variables
Authors: Mohammad Irfan
Abstract:
The purpose of this paper is to investigate the relationship among key macroeconomic variables and the Islamic stock market in India. The study is based on time series data for the financial years 2009-2015 and explores the consistency of the relationship between macroeconomic variables and Shariah indices. The ADF (Augmented Dickey-Fuller) and PP (Phillips-Perron) tests are employed to check the stationarity of the data. The study establishes the long-run relationship between Shariah indices and macroeconomic variables using the Johansen cointegration test. BSE Shariah and Nifty Shariah exhibit unidirectional Granger causality. The outcome of the VECM significantly confirms the applicability of the best-fitted model. Thus, Islamic stock indices are working proficiently for the development of the Indian economy, which suggests that the Islamic stock market should be watched closely, as it is likely to interact more strongly with other macroeconomic variables in the future.
Keywords: Indian Shariah indices, macroeconomic variables, co-integration, Granger causality, vector error correction model.
532 Small Signal Stability Enhancement for Hybrid Power Systems by SVC
Authors: Ali Dehghani, Mojtaba Hakimzadeh, Amir Habibi, Navid Mehdizadeh Afroozi
Abstract:
In this paper, an isolated wind-diesel hybrid power system is considered for a reactive power control study; it has an induction generator for wind power conversion and a synchronous alternator with an automatic voltage regulator (AVR) for the diesel unit. The dynamic voltage stability evaluation is based on small signal analysis considering a Static VAR Compensator (SVC) and an IEEE type-I excitation system. It is shown that a variable reactive power source like the SVC is crucial to meet the varying reactive power demand of the induction generator and the load, and to achieve excellent voltage regulation of the system with minimum fluctuations. The integral square error (ISE) criterion can be used to evaluate the optimum setting of the gain parameters. Finally, the dynamic responses of the power systems considered with the optimum gain settings are also presented.
Keywords: SVC, Small Signal Stability, Reactive Power, Control, Hybrid System.
531 Market Segmentation and Conjoint Analysis for Apple Family Design
Authors: Abbas Al-Refaie, Nour Bata
Abstract:
A distributor of Apple products experiences numerous difficulties in developing marketing strategies for new and existing mobile product entries that maximize customer satisfaction and the firm's profitability. This research, therefore, integrates market segmentation in platform-based product family design and conjoint analysis to identify iSystem combinations that increase customer satisfaction and business profits. First, the enhanced market segmentation grid is created. Then, the estimated demand model is formulated. Finally, the profit models are constructed and used to determine the ideal product family design that maximizes profit. Conjoint analysis is used to explore customer preferences and their satisfaction levels. A total of 200 surveys were collected on customer preferences. Then, simulation is used to determine the importance values of each attribute. Finally, sensitivity analysis is conducted to determine the product family design that maximizes both objectives. In conclusion, the results of this research should provide great support to Apple distributors in determining the marketing strategies that best enhance their market share.
Keywords: Market segmentation, conjoint analysis, market strategies, optimization.
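Part-worth utilities in a ratings-based conjoint study can be estimated by ordinary least squares on dummy-coded attribute levels. The sketch below is a minimal example with made-up attributes and ratings, not the survey design or attribute set used in the study.

```python
import numpy as np

# Hypothetical profiles: storage level (0=base, 1=large) and screen (0=small, 1=large),
# each rated on a 1-10 scale (averaged over respondents).
profiles = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
ratings = np.array([4.0, 6.5, 6.0, 9.0])

X = np.column_stack([np.ones(len(profiles)), profiles])   # intercept + dummy levels
partworths, *_ = np.linalg.lstsq(X, ratings, rcond=None)
base, w_storage, w_screen = partworths

# Relative attribute importance: each attribute's utility range over the total range.
ranges = np.array([abs(w_storage), abs(w_screen)])
print("part-worths:", np.round(partworths, 2))
print("importance:", np.round(ranges / ranges.sum(), 2))
```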
530 A Detection Method of Faults in Railway Pantographs Based on Dynamic Phase Plots
Authors: G. Santamato, M. Solazzi, A. Frisoli
Abstract:
Systems for the detection of damage in railway pantographs effectively reduce maintenance costs and improve time scheduling. In this paper, we present an approach to designing a monitoring tool that fits strong customer requirements such as portability and ease of use. The pantograph has been modeled to estimate its dynamic properties, since no data are available. With the aim of focusing on suspension health, a two degrees of freedom (DOF) scheme has been adopted. Parameters have been calculated by means of analytical dynamics. A finite element method (FEM) modal analysis verified the former model with an acceptable error. The detection strategy looks for alterations in the phase-plot topology induced by defects. In order to test the suitability of the method, leakage in the dashpot was simulated on the lumped model. The results are interesting because changes in the phase plots are more appreciable than a frequency shift. Further calculations as well as experimental tests will support future developments of this smart strategy.
Keywords: Pantograph models, phase-plots, structural health monitoring, vibration-based condition monitoring.
529 Predicting Extrusion Process Parameters Using Neural Networks
Authors: Sachin Man Bajimaya, SangChul Park, Gi-Nam Wang
Abstract:
The objective of this paper is to estimate realistic principal extrusion process parameters by means of an artificial neural network. Conventionally, finite element analysis is used to derive process parameters. However, finite element analysis of the extrusion model does not consider the manufacturing process constraints in its modeling; therefore, the process parameters obtained through such an analysis remain highly theoretical. Alternatively, process development in industrial extrusion is to a great extent based on trial and error and often involves full-size experiments, which are both expensive and time-consuming. The artificial neural network-based estimation of the extrusion process parameters prior to plant execution helps make the actual extrusion operation more efficient, because more realistic parameters may be obtained. In this way, it bridges the gap between simulation and the real manufacturing execution system. In this work, a suitable neural network is designed and trained using an appropriate learning algorithm. The network so trained is used to predict the manufacturing process parameters.
Keywords: Artificial Neural Network (ANN), Indirect Extrusion, Finite Element Analysis, MES.
528 Improved Processing Speed for Text Watermarking Algorithm in Color Images
Authors: Hamza A. Al-Sewadi, Akram N. A. Aldakari
Abstract:
Copyright protection and ownership proof for digital multimedia are achieved nowadays by digital watermarking techniques. A text watermarking algorithm for protecting the property rights and ownership judgment of color images is proposed in this paper. Embedding is achieved by inserting text elements randomly into the color image as noise. The YIQ image processing model is found to be faster than other image processing models and is therefore adopted for the embedding process. An optional choice of encrypting the text watermark before embedding is also suggested (in case it is required by some applications), where the text can be encrypted using any enciphering technique, adding more difficulty for hackers. Experiments resulted in an embedding speed improvement of more than double the speed of the other considered systems (such as the least significant bit method and separate color code methods), and a fairly acceptable level of peak signal-to-noise ratio (PSNR) with low mean square error values for watermarking purposes.
Keywords: Steganography, watermarking, private keys, time complexity measurements.
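The color-model conversion and the quality metric mentioned in the abstract are standard operations; the sketch below shows the NTSC RGB-to-YIQ transform and a PSNR computation on a synthetic cover image. The "text as noise" step is only a placeholder, and the image, noise level and names are illustrative assumptions, not the paper's embedding algorithm.

```python
import numpy as np

# Standard NTSC RGB -> YIQ transform (the color model the abstract adopts).
RGB_TO_YIQ = np.array([[0.299,  0.587,  0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523,  0.312]])

def rgb_to_yiq(img):
    return img @ RGB_TO_YIQ.T

def yiq_to_rgb(img):
    return img @ np.linalg.inv(RGB_TO_YIQ).T

def psnr(original, watermarked, peak=1.0):
    mse = np.mean((original - watermarked) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

rng = np.random.default_rng(0)
rgb = rng.random((64, 64, 3))                            # stand-in cover image in [0, 1]
yiq = rgb_to_yiq(rgb)
yiq[..., 0] += rng.normal(0, 0.002, yiq[..., 0].shape)   # placeholder "text as noise"
marked = np.clip(yiq_to_rgb(yiq), 0.0, 1.0)
print("PSNR (dB):", round(psnr(rgb, marked), 2))
```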
527 Personal Authentication Using FDOST in Finger Knuckle-Print Biometrics
Authors: N. B. Mahesh Kumar, K. Premalatha
Abstract:
The inherent skin patterns created at the joints on the finger's exterior are referred to as the finger knuckle-print. It can be exploited to identify a person uniquely because the finger knuckle-print is rich in texture. In a biometric system, the region of interest is utilized by the feature extraction algorithm. In this paper, local and global features are extracted separately. The Fast Discrete Orthonormal Stockwell Transform is exploited to extract the local features. The global feature is attained by escalating the size of the Fast Discrete Orthonormal Stockwell Transform to infinity. The two features are fused to increase the recognition accuracy. A matching distance is calculated for each feature individually, and the two distances are then merged to acquire the final matching distance. The proposed scheme gives better performance in terms of equal error rate and correct recognition rate.
Keywords: Hamming distance, Instantaneous phase, Region of Interest, Recognition accuracy.
526 Neural Network Ensemble-based Solar Power Generation Short-Term Forecasting
Authors: A. Chaouachi, R.M. Kamel, R. Ichikawa, H. Hayashi, K. Nagasaka
Abstract:
This paper presents the applicability of artificial neural networks for 24-hour-ahead solar power generation forecasting of a 20 kW photovoltaic system; the developed forecasting is suitable for reliable microgrid energy management. In total, four neural networks were proposed, namely: a multi-layer perceptron, a radial basis function network, a recurrent network, and a neural network ensemble consisting of bagged networks. The forecasting reliability of the proposed neural networks was evaluated in terms of forecasting error performance based on statistical and graphical methods. The experimental results showed that all the proposed networks achieved an acceptable forecasting accuracy. In terms of comparison, the neural network ensemble gives the highest forecasting precision compared to the conventional networks. In fact, each network of the ensemble over-fits to some extent, which leads to a diversity that enhances the noise tolerance and the forecasting generalization performance compared to the conventional networks.
Keywords: Neural network ensemble, solar power generation, 24 hour forecasting, comparative study.
525 A Bayesian Network Reliability Modeling for FlexRay Systems
Authors: Kuen-Long Leu, Yung-Yuan Chen, Chin-Long Wey, Jwu-E Chen, Chung-Hsien Hsu
Abstract:
The increasing importance of FlexRay systems in the automotive domain continues to inspire related research. One primary issue among these studies is verifying the reliability of FlexRay systems, either from the protocol aspect or from the system design aspect. However, research rarely discusses the effect of network topology on system reliability. In this paper, we illustrate how to model the reliability of FlexRay systems with various network topologies using a well-known probabilistic reasoning technique, the Bayesian network. In this illustration, we especially investigate the effectiveness of the error containment built into the star topology and of the fault-tolerant midpoint synchronization algorithm adopted in the FlexRay communication protocol. Through a FlexRay steer-by-wire case study, the influence of different topologies on the failure probability of the FlexRay steer-by-wire system is demonstrated. The notable value of this research is to show that Bayesian network inference is a powerful and feasible method for the reliability assessment of FlexRay systems.
Keywords: Bayesian Network, FlexRay, fault tolerance, network topology, reliability.
524 Intelligent Agent Approach to the Control of Critical Infrastructure Networks
Authors: James D. Gadze, Niki Pissinou, Kia Makki
Abstract:
In this paper we propose an intelligent agent approach to controlling the electric power grid at a smaller granularity in order to give it self-healing capabilities. We develop a method using the influence model to transform transmission substations into information processing, analyzing, and decision making (intelligent behavior) units. We also develop a wireless communication method to deliver real-time uncorrupted information to an intelligent controller in a power system environment. A combined networking and information-theoretic approach is adopted to meet both the delay and error probability requirements. We use a mobile agent approach to optimize the achievable information rate vector and the distribution of rates to users (sensors). We develop the concepts and the quantitative tools required to create cooperating semi-autonomous subsystems, which puts the electric grid on the path towards an intelligent and self-healing system.
Keywords: Mobile agent, power system operation and control, real time, wireless communication.
523 Fung’s Model Constants for Intracranial Blood Vessel of Human Using Biaxial Tensile Test Results
Authors: Mohammad Shafigh, Nasser Fatouraee, Amirsaied Seddighi
Abstract:
The mechanical properties of cerebral arteries are of clinical worth due to their relationship with cerebrovascular diseases. To acquire these properties, eight samples were obtained from the middle cerebral arteries of human cadavers whose deaths were not due to injuries or diseases of the cerebral vessels, and tested within twelve hours of resection by a precise biaxial tensile test device developed specially for the present study, considering the dimensions, sensitivity and anisotropic nature of the samples. The resulting stress-stretch curves were plotted and subsequently fitted to a hyperelastic three-parameter Fung model. It was found that the arteries were noticeably stiffer in the circumferential than in the axial direction. It was also demonstrated that the use of multi-parameter hyperelastic constitutive models is useful for the mathematical description of the behavior of cerebral vessel tissue. The reported material properties are a proper reference for numerical modeling of cerebral arteries and computational analysis of healthy or diseased intracranial arteries.
Keywords: Anisotropic Tissue, Cerebral Blood Vessels, Fung Model, Nonlinear Material, Plane Stress.
522 Application of Artificial Neural Network to Forecast Actual Cost of a Project to Improve Earned Value Management System
Authors: Seyed Hossein Iranmanesh, Mansoureh Zarezadeh
Abstract:
This paper presents an application of an Artificial Neural Network (ANN) to forecast the actual cost of a project based on the earned value management system (EVMS). For this purpose, some projects were randomly selected from a standard data set, and the necessary progress data, such as actual cost, actual percent complete, baseline cost and percent complete, were produced for five periods of each project. Then an ANN with five inputs, five outputs and one hidden layer was trained to produce forecasted actual costs. The comparison between real and forecasted data shows better performance based on the Mean Absolute Percentage Error (MAPE) criterion. This approach could be applicable to better forecasting of project cost, resulting in a decreased risk of project cost overrun, and is therefore beneficial for planning preventive actions.
Keywords: Earned Value Management System (EVMS), Artificial Neural Network (ANN), Estimate At Completion, Forecasting Methods, Project Performance Measurement.
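The MAPE criterion used to compare real and forecasted costs is straightforward to compute; the sketch below shows the calculation on hypothetical period costs (the values and names are illustrative, not data from the paper).

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Hypothetical actual vs. ANN-forecasted costs for five project periods.
actual_cost = [120.0, 250.0, 410.0, 560.0, 700.0]
forecast_cost = [115.0, 260.0, 400.0, 580.0, 690.0]
print("MAPE (%):", round(mape(actual_cost, forecast_cost), 2))
```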
521 Quantifying and Adjusting the Effects of Publication Bias in Continuous Meta-Analysis
Authors: N.R.N. Idris
Abstract:
This study uses simulated meta-analyses to assess the effects of publication bias on meta-analysis estimates and to evaluate the efficacy of the trim and fill method in adjusting for these biases. The estimated effect sizes and standard errors were evaluated in terms of statistical bias and coverage probability. The results demonstrate that if publication bias is not adjusted for, it can lead to up to 40% bias in the treatment effect estimates. Utilization of the trim and fill method can reduce the bias in the overall estimate by more than half. The method is optimal in the presence of moderate underlying bias but has minimal effect in the presence of low or severe publication bias. Additionally, the trim and fill method improves the coverage probability by more than half when subjected to the same level of publication bias as the unadjusted data. The method however tends to produce false positive results and will incorrectly adjust the data for publication bias up to 45% of the time. Nonetheless, the bias introduced into the estimates due to this adjustment is minimal.
Keywords: Publication bias, Trim and Fill method, percentage relative bias, coverage probability
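The overall meta-analysis estimate whose bias the study quantifies is typically an inverse-variance weighted mean; the sketch below computes that fixed-effect pooled estimate and its standard error on made-up study data. The trim and fill adjustment itself is not reproduced here, and all numbers are illustrative assumptions.

```python
import numpy as np

def fixed_effect(effects, variances):
    """Inverse-variance weighted (fixed-effect) pooled estimate and its SE."""
    w = 1.0 / np.asarray(variances, float)
    pooled = np.sum(w * effects) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return pooled, se

# Hypothetical study effect sizes and within-study variances.
effects = np.array([0.30, 0.45, 0.25, 0.60, 0.40])
variances = np.array([0.02, 0.05, 0.03, 0.08, 0.04])
est, se = fixed_effect(effects, variances)
print(f"pooled effect = {est:.3f}, 95% CI = ({est - 1.96*se:.3f}, {est + 1.96*se:.3f})")
```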
520 Blind Identification Channel Using Higher Order Cumulants with Application to Equalization for MC−CDMA System
Authors: Mohammed Zidane, Said Safi, Mohamed Sabri, Ahmed Boumezzough
Abstract:
In this paper we propose an algorithm based on higher order cumulants for blind impulse response identification of radio frequency channels and equalization of the downlink of an (MC−CDMA) system. In order to test its efficiency, we compared it with another algorithm proposed in the literature. For that purpose, we considered a theoretical channel, the Proakis 'B' channel, and a practical frequency-selective fading channel, called Broadband Radio Access Network (BRAN C), normalized for (MC−CDMA) systems, both excited by non-Gaussian sequences. In the (MC−CDMA) part, we use the Minimum Mean Square Error (MMSE) equalizer after the channel identification to correct the channel's distortion. Simulation results in a noisy environment and for different signal-to-noise ratios (SNR) are presented to illustrate the accuracy of the proposed algorithm.
Keywords: Blind identification and equalization, Higher Order Cumulants, (MC−CDMA) system, MMSE equalizer.
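The MMSE equalization step applied after channel identification can be illustrated with the standard Wiener solution for a known FIR channel. The channel taps, filter length and noise level below are illustrative assumptions, not the Proakis B or BRAN C settings used in the paper.

```python
import numpy as np

h = np.array([0.407, 0.815, 0.407])          # illustrative FIR channel taps
sigma2 = 0.01                                # noise variance (unit-power symbols)
L = 11                                       # equalizer length
delay = (L + len(h)) // 2                    # decision delay

# Channel convolution matrix H (L x (L + len(h) - 1)): r_n = H s_n + noise.
H = np.zeros((L, L + len(h) - 1))
for i in range(L):
    H[i, i:i + len(h)] = h

# MMSE (Wiener) equalizer: w = (H H^T + sigma2 I)^{-1} H e_delay
R = H @ H.T + sigma2 * np.eye(L)
w = np.linalg.solve(R, H[:, delay])

# Combined channel-equalizer response should peak near `delay`.
combined = np.convolve(h, w)
print("peak tap index:", int(np.argmax(np.abs(combined))),
      "value:", round(float(combined.max()), 3))
```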
519 Real Time Compensation of Machining Errors for Machine Tools NC based on Systematic Dispersion
Authors: M. Rahou, A. Cheikh, F. Sebaa
Abstract:
Manufacturing tolerancing is intended to determine the intermediate geometrical and dimensional states of the part during its manufacturing process. These manufacturing dimensions also serve to satisfy not only the functional requirements given in the definition drawing, but also the manufacturing constraints, for example geometrical defects of the machine, vibration, and the wear of the cutting tool. In this paper, an experimental study of the influence of the wear of the cutting tool (systematic dispersions) is explored. The study was carried out in three stages. The first stage allows machining without elimination of dispersions (random, systematic), giving the manufacturing tolerances according to total dispersions. In the second stage, the results of the first stage are filtered in such a way as to obtain the tolerances according to random dispersions. Finally, from the two previous stages, the systematic dispersions are generated. The objective of this study is to model the manufacturing error based on systematic dispersion by the least squares method. Finally, an approach for the optimization of the manufacturing tolerances was developed for machining on a CNC machine tool.
Keywords: Dispersions, compensation, modeling, manufacturing tolerance, machine tool.
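Modeling a systematic, wear-driven error by least squares amounts to fitting a trend to the measured deviation as a function of the part index; the sketch below fits a first-degree polynomial with numpy. The data, degree and compensation rule are illustrative assumptions, not the paper's experimental results.

```python
import numpy as np

# Hypothetical measured deviations (mm) drifting with tool wear over 10 parts.
part_index = np.arange(1, 11)
deviation = np.array([0.002, 0.004, 0.005, 0.008, 0.009,
                      0.012, 0.013, 0.016, 0.018, 0.019])

# Least-squares linear model of the systematic (wear) component.
slope, intercept = np.polyfit(part_index, deviation, deg=1)
predicted = slope * part_index + intercept
residual = deviation - predicted               # what remains is random dispersion

print(f"estimated wear rate ~ {slope * 1000:.2f} um/part")
next_part = 11
print("suggested compensation for part 11 (mm):",
      round(-(slope * next_part + intercept), 4))
```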
518 An Energy Efficient Digital Baseband for Batteryless Remote Control
Authors: Wei-Da Toh, Yuan Gao, Minkyu Je
Abstract:
In this paper, an energy-efficient digital baseband circuit for a piezoelectric (PE) harvester-powered batteryless remote control system is presented. A pulse-mode PE harvester, which provides energy for only a short duration, is adopted to replace the conventional chemical battery in the wireless remote controller. The transmitter digital baseband repeats the control command transmission once the digital circuit is initiated by the power-on-reset. A power-efficient data frame format is proposed to maximize the transmission repetition time. By using the proposed frame format and the receiver clock and data recovery method, the receiver baseband is able to decode the command even when the received data has a 20% error rate. The proposed transmitter and receiver basebands are implemented using an FPGA, and simulation results are presented.
Keywords: Clock and Data Recovery (CDR), Correlator, Digital Baseband, Gold Code, Power-On-Reset.
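Frame synchronization with a correlator, as used in the receiver baseband, can be illustrated by sliding a known preamble over the received bit stream. In the sketch below a Barker-13 sequence stands in for the Gold code listed in the keywords, and the stream layout, error model and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
preamble = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])  # Barker-13 stand-in

# Received bit stream: random bits, preamble, payload, all in {-1, +1},
# then flip ~20% of the chips to mimic the error rate quoted in the abstract.
payload = rng.choice([-1, 1], size=32)
stream = np.concatenate([rng.choice([-1, 1], size=40), preamble, payload])
errors = rng.random(stream.size) < 0.20
received = np.where(errors, -stream, stream)

# Sliding correlation; the frame start is where the correlation peaks.
corr = np.correlate(received, preamble, mode="valid")
start = int(np.argmax(corr))
print("true preamble index:", 40, "detected:", start, "peak:", int(corr[start]))
```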
517 Structural Evaluation of Airfield Pavement Using Finite Element Analysis Based Methodology
Authors: Richard Ji
Abstract:
Nondestructive deflection testing has been widely accepted as a cost-effective tool for evaluating the structural condition of airfield pavements. Backcalculation of pavement layer moduli can be used to characterize the existing condition of the pavement in order to compute its load-bearing capacity. This paper presents an improved best-fit backcalculation methodology based on deflection predictions obtained using the finite element method (FEM). The best-fit approach is based on minimizing the squared error between falling weight deflectometer (FWD) measured deflections and FEM-predicted deflections. The concrete elastic modulus and modulus of subgrade reaction were then back-calculated using Heavy Weight Deflectometer (HWD) deflections collected at the National Airport Pavement Testing Facility (NAPTF) test site. Compared to currently available methods, it is an alternative and more versatile method in its consideration of concrete slab geometry and HWD testing locations.
Keywords: Nondestructive testing, Pavement moduli backcalculation, Finite Element Method, FEM, concrete pavements.
516 Airfoils Aerodynamic Efficiency Study in Heavy Rain via Two Phase Flow Approach
Authors: M. Ismail, Cao Yihua, Zhao Ming
Abstract:
Heavy rainfall greatly affects the aerodynamic performance of aircraft, and many aircraft accidents have been caused by the degradation of aerodynamic efficiency in heavy rain. In this paper we study the effects of heavy rain on the aerodynamic efficiency of the NACA 64-210 and NACA 0012 airfoils. For our analysis, a CFD method and a preprocessing grid generator are used as our main analytical tools, and the simulation of rain is accomplished via the two-phase flow approach's Discrete Phase Model (DPM). Raindrops are assumed to be non-interacting, non-deforming, non-evaporating and non-spinning spheres. Both airfoil sections exhibited a significant reduction in lift and an increase in drag for a given lift condition in simulated rain. The most significant difference between the two airfoils was the sensitivity of the NACA 64-210 to liquid water content (LWC), while the performance losses of the NACA 0012 in the rain environment are not a function of LWC. It is expected that the quantitative information gained in this paper will be useful to the operational airline industry, and greater effort, such as small-scale and full-scale flight tests, should be put in this direction to further improve aviation safety.
Keywords: airfoil, discrete phase modeling, heavy rain, Reynolds number
515 End-to-End Pyramid Based Method for MRI Reconstruction
Authors: Omer Cahana, Maya Herman, Ofer Levi
Abstract:
Magnetic Resonance Imaging (MRI) is a lengthy medical scan whose duration stems from a long acquisition time. Its length is mainly due to the traditional sampling theorem, which defines a lower bound on sampling. However, it is still possible to accelerate the scan by using a different approach, such as Compressed Sensing (CS) or Parallel Imaging (PI). These two complementary methods can be combined to achieve a faster scan with high-fidelity imaging. To achieve that, two conditions must be satisfied: i) the signal must be sparse under a known transform domain, and ii) the sampling method must be incoherent. In addition, a nonlinear reconstruction algorithm must be applied to recover the signal. While the rapid advances in Deep Learning (DL) have had tremendous successes in various computer vision tasks, the field of MRI reconstruction is still in its early stages. In this paper, we present an end-to-end method for MRI reconstruction from k-space to image. Our method contains two parts. The first is sensitivity map estimation (SME), a small yet effective network that can easily be extended to a variable number of coils. The second is reconstruction, a top-down architecture with lateral connections developed for building high-level refinement at all scales. Our method achieves state-of-the-art results on the fastMRI benchmark, which is the largest, most diverse benchmark for MRI reconstruction.
Keywords: Accelerate MRI scans, image reconstruction, pyramid network, deep learning.
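A minimal single-coil illustration of going from undersampled k-space back to the image domain is the zero-filled inverse FFT, the baseline that learned reconstruction methods such as the one described here improve upon. The phantom, sampling mask and names below are illustrative assumptions, not the fastMRI data or the paper's network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": a simple phantom on a 128x128 grid.
x = np.linspace(-1, 1, 128)
X, Y = np.meshgrid(x, x)
image = (X**2 + Y**2 < 0.6).astype(float) + 0.5 * (np.abs(X) < 0.2)

# Fully sampled k-space, then an incoherent ~4x column undersampling mask
# that always keeps the low-frequency center band.
kspace = np.fft.fftshift(np.fft.fft2(image))
mask = rng.random(128) < 0.25
mask[54:74] = True                      # fully sample the center columns
undersampled = kspace * mask[None, :]   # drop the unsampled k-space columns

# Zero-filled reconstruction: inverse FFT of the masked k-space.
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))
err = np.linalg.norm(recon - image) / np.linalg.norm(image)
print("relative reconstruction error:", round(float(err), 3))
```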
514 Variable Step-Size Affine Projection Algorithm With a Weighted and Regularized Projection Matrix
Authors: Tao Dai, Andy Adler, Behnam Shahrrava
Abstract:
This paper presents a forgetting factor scheme for variable step-size affine projection algorithms (APA). The proposed scheme uses a forgetting-processed input matrix as the projection matrix of the pseudo-inverse to estimate the system deviation. This method introduces temporal weights into the projection matrix, which is typically a better model of the real error's behavior than homogeneous temporal weights. The regularization overcomes the ill-conditioning introduced by both the forgetting process and the increasing size of the input matrix. The algorithm is tested by independent trials with coloured input signals and various parameter combinations. Results show that the proposed algorithm is superior in terms of convergence rate and misadjustment compared to existing algorithms. As a special case, a variable step-size NLMS with forgetting factor is also presented in this paper.
Keywords: Adaptive signal processing, affine projection algorithms, variable step-size adaptive algorithms, regularization.
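The baseline affine projection update that the proposed forgetting-factor scheme builds on can be written in a few lines. The sketch below is the standard regularized APA applied to system identification with a coloured input; the projection order, step size, regularization and system are chosen arbitrarily, and the forgetting-factor weighting itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 16, 4                       # filter length, projection order
mu, delta = 0.5, 1e-3              # step size, regularization
h_true = rng.normal(size=M)        # unknown system to identify

# Coloured input: white noise through a first-order IIR filter.
white = rng.normal(size=5000)
u = np.empty_like(white)
u[0] = white[0]
for n in range(1, len(white)):
    u[n] = 0.9 * u[n - 1] + white[n]
d = np.convolve(u, h_true)[:len(u)] + 0.01 * rng.normal(size=len(u))

w = np.zeros(M)
for n in range(M + K, len(u)):
    # U holds the K most recent input regressor vectors (M x K).
    U = np.column_stack([u[n - k - M + 1:n - k + 1][::-1] for k in range(K)])
    e = d[n - np.arange(K)] - U.T @ w
    w += mu * U @ np.linalg.solve(U.T @ U + delta * np.eye(K), e)

misalignment = 20 * np.log10(np.linalg.norm(w - h_true) / np.linalg.norm(h_true))
print("misalignment (dB):", round(float(misalignment), 1))
```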
513 Evaluating per-user Fairness of Goal-Oriented Parallel Computer Job Scheduling Policies
Authors: Sangsuree Vasupongayya
Abstract:
The fair share objective has recently been included in goal-oriented parallel computer job scheduling policies. However, previous work only presented the overall scheduling performance, so the per-user performance of the policy is still lacking. In this work, the details of per-user fair share performance under the Tradeoff-fs(Tx:avgX) policy are further evaluated. A basic fair share priority backfill policy, namely RelShare(1d), is also studied. The performance of all policies is collected using an event-driven simulator with three real job traces as input. The experimental results show that high-demand users usually benefit under most policies, because their jobs are large or they have a lot of jobs. In the large-job case, executing one job may result in over-share during that period. In the other case, the jobs may be backfilled for performance. However, users with a mixture of jobs may suffer, because while their smaller jobs are executing, the priority of their remaining jobs will be lower. Further analysis does not show any significant impact for users with a lot of jobs or users with a large runtime approximation error.
Keywords: Deviation, fair share, discrepancy search, priority scheduling.