Search results for: uniform error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2789

1529 Improved Performance of Cooperative Scheme in the Cellular and Broadcasting System

Authors: Hyun-Jee Yang, Bit-Na Kwon, Yong-Jun Kim, Hyoung-Kyu Song

Abstract:

In the cooperative transmission scheme, the cellular system and the broadcasting system are combined. In the conventional scheme, two cellular base stations (CBSs) communicating with a user at the cell edge use cooperative transmission. When the distance between the two CBSs and the user is large, the conventional scheme does not guarantee communication quality because the channel condition is poor, and its performance degrades. The proposed scheme uses two relays to maintain reliable communication with the CBSs when the channel condition between the CBSs and the user is poor. Using relays in a high-attenuation environment improves both the bit error rate (BER) and the throughput performance.

Keywords: cooperative communications, diversity gain, OFDM, interworking system

Procedia PDF Downloads 576
1528 Diminishing Voices of Children in Mandatory Mediation Schemes

Authors: Yuliya Radanova, Agnė Tvaronavičienė

Abstract:

With the growing trend of mandating parties to family conflicts into out-of-court processes, the adopted statutory regulations often remain silent on how the voice of the child is integrated into the procedure. The Convention on the Rights of the Child (Art. 12) clearly states the obligation to assure to the child who is capable of forming his or her own views the right to express those views freely in all matters affecting the child. This article explores how children participate in the mandatory mediation schemes applicable to family disputes in the European Union. A review of scientific literature and empirical data was conducted on those EU Member States that compel parties to family mediation, establishing that different models of practice are deployed and that there is a lack of synchronicity in how children’s role in mediation is viewed. Child-inclusive mediation processes are deemed to produce sustainable results over time but require professional qualifications and skills on the part of mediators to ensure that such discussions are aligned with the best interest of the child. However, there is no unanimous guidance, standard or protocol on the particular characteristics and manner through which children are involved in mediation. The lack of such rigorous approaches and coherence in an ever-changing mediation setting, transitioning towards mandatory mediation models, jeopardizes the place of children’s voices in the process. It is therefore suggested that uniform guidelines on the specific role children have in mediation, particularly in its mandatory models, should be considered.

Keywords: family mediation, child involvement, mandatory mediation, child-inclusive, child-focused

Procedia PDF Downloads 74
1527 Cubic Trigonometric B-Spline Approach to Numerical Solution of Wave Equation

Authors: Shazalina Mat Zin, Ahmad Abd. Majid, Ahmad Izani Md. Ismail, Muhammad Abbas

Abstract:

The generalized wave equation models various problems in science and engineering. In this paper, a new three-time-level implicit approach based on cubic trigonometric B-splines is developed for the approximate solution of the wave equation. The usual finite difference approach is used to discretize the time derivative, while the cubic trigonometric B-spline is applied as an interpolating function in the space dimension. Von Neumann stability analysis is used to analyze the proposed method. Two problems are discussed to exhibit the feasibility and capability of the method. The absolute errors and the maximum error are computed to assess the performance of the proposed method. The results were found to be in good agreement with known solutions and with existing schemes in the literature.
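
For reference, the sketch below solves a one-dimensional wave equation with a standard explicit finite-difference scheme in Python and reports the maximum absolute error against a known exact solution. It is not the cubic trigonometric B-spline collocation scheme of the paper; the wave speed, domain and grid sizes are illustrative assumptions.

    import numpy as np

    c, L, T = 1.0, 1.0, 0.5                       # wave speed, domain length, final time
    nx, nt = 101, 501
    x = np.linspace(0.0, L, nx); dx = x[1] - x[0]
    dt = T / (nt - 1); r = c * dt / dx            # Courant number, must stay <= 1

    exact = lambda x, t: np.sin(np.pi * x) * np.cos(np.pi * c * t)

    u_prev = exact(x, 0.0)
    # First step from the initial conditions u(x,0)=sin(pi x), u_t(x,0)=0
    u_curr = u_prev.copy()
    u_curr[1:-1] = u_prev[1:-1] + 0.5 * r**2 * (u_prev[2:] - 2*u_prev[1:-1] + u_prev[:-2])

    for n in range(2, nt):
        u_next = np.zeros_like(u_curr)            # Dirichlet ends stay at zero
        u_next[1:-1] = (2*u_curr[1:-1] - u_prev[1:-1]
                        + r**2 * (u_curr[2:] - 2*u_curr[1:-1] + u_curr[:-2]))
        u_prev, u_curr = u_curr, u_next

    max_error = np.max(np.abs(u_curr - exact(x, T)))
    print(f"maximum absolute error at t={T}: {max_error:.2e}")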

Keywords: collocation method, cubic trigonometric B-spline, finite difference, wave equation

Procedia PDF Downloads 542
1526 ANFIS Approach for Locating Faults in Underground Cables

Authors: Magdy B. Eteiba, Wael Ismael Wahba, Shimaa Barakat

Abstract:

This paper presents a fault identification, classification and fault location estimation method based on the Discrete Wavelet Transform and an Adaptive Network Fuzzy Inference System (ANFIS) for medium-voltage cable in the distribution system. Different faults and locations are simulated by ATP/EMTP, and then certain selected features of the wavelet-transformed signals are used as inputs for training the ANFIS. An accurate fault classifier and locator algorithm was then designed, trained and tested using current samples only. The results obtained from the ANFIS output were compared with the real output. From the results, it was found that the percentage error between the ANFIS output and the real output is less than three percent. Hence, it can be concluded that the proposed technique is able to offer high accuracy in both fault classification and fault location.
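
The sketch below illustrates the kind of wavelet-based feature extraction described, as a minimal Python example assuming the PyWavelets package; the mother wavelet, decomposition level and per-band statistics are illustrative choices, not the paper's exact feature set, and the synthetic current record is hypothetical.

    import numpy as np
    import pywt  # PyWavelets

    def wavelet_features(current, wavelet="db4", level=4):
        """Decompose a fault-current record and return simple per-band features."""
        coeffs = pywt.wavedec(current, wavelet, level=level)
        feats = []
        for c in coeffs:                       # approximation + detail bands
            feats += [np.sqrt(np.mean(c**2)),  # band energy (RMS)
                      np.max(np.abs(c))]       # peak magnitude
        return np.array(feats)

    # Synthetic current sample: 50 Hz component plus a decaying fault transient
    t = np.linspace(0, 0.1, 2000)
    i_fault = np.sin(2*np.pi*50*t) + 0.3*np.exp(-50*t)*np.sin(2*np.pi*800*t)
    print(wavelet_features(i_fault).round(3))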

Keywords: ANFIS, fault location, underground cable, wavelet transform

Procedia PDF Downloads 513
1525 Preparation of Bead-On-String Alginate/Soy Protein Isolated Nanofibers via Water-Based Electrospinning and Its Application for Drug Loading

Authors: Patcharakamon Nooeaid, Piyachat Chuysrinuan

Abstract:

Electrospun nanofibers based on natural polymers are among the most interesting materials used in tissue engineering and drug delivery applications. Bead-on-string nanofibers have gained considerable interest for sustained drug release. Vancomycin was used as the model drug and sodium alginate (SA)/soy protein isolate (SPI) as the polymer blend to fabricate the bead-on-string nanofibers by aqueous-based electrospinning. The bead-on-string SA/SPI nanofibers were successfully fabricated by the addition of poly(ethylene oxide) (PEO) as a co-blending polymer. SA-PEO with a mass ratio of 70/30 showed the best spinnability, giving continuous nanofibers without the occurrence of beads. Bead structures formed with the addition of SPI, and the bead number increased with increasing SPI content. The 80/20 SA-PEO/SPI composition gave promising bead-on-string nanofibers for drug loading, while the 50/50 solution did not yield continuous fibers. In vitro release tests showed that a more sustained release profile, up to 14 days with less initial burst release on day 1, could be obtained from the bead-on-string fibers than from smooth fibers of uniform diameter. In addition, vancomycin-loaded beaded fibers inhibited the growth of Staphylococcus aureus (S. aureus) bacteria. Therefore, the SA-PEO/SPI nanofibers show potential for use as biomaterials in tissue engineering and drug delivery.

Keywords: bead-on-string fibers, electrospinning, drug delivery, tissue engineering

Procedia PDF Downloads 334
1524 Empirical Mode Decomposition Based Denoising by Customized Thresholding

Authors: Wahiba Mohguen, Raïs El’hadi Bekka

Abstract:

This paper presents a denoising method called EMD-Custom, based on Empirical Mode Decomposition (EMD) and a modified customized thresholding function (Custom). EMD is applied to adaptively decompose a noisy signal into intrinsic mode functions (IMFs). All the noisy IMFs are then thresholded by applying the presented thresholding function to suppress noise and to improve the signal-to-noise ratio (SNR). The method was tested on simulated data and a real ECG signal, and the results were compared to EMD-based signal denoising methods using soft and hard thresholding. The results showed the superior performance of the proposed EMD-Custom denoising over the traditional approaches. The performances were evaluated in terms of SNR in dB and Mean Square Error (MSE).
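
A minimal Python sketch of EMD-based thresholding is given below, assuming the PyEMD package for the decomposition. It shows the soft and hard rules that the paper compares against (the customized thresholding function itself is not reproduced) and, unlike the proposed method, thresholds only the first few noise-dominated IMFs; the universal-threshold rule and the test signal are illustrative.

    import numpy as np
    from PyEMD import EMD   # assumption: PyEMD provides the decomposition

    def soft(c, thr):  return np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)
    def hard(c, thr):  return c * (np.abs(c) > thr)

    def emd_denoise(signal, threshold_fn=soft, n_noisy=3):
        """Threshold the first n_noisy (noise-dominated) IMFs, keep the rest."""
        imfs = EMD().emd(signal)
        out = np.zeros_like(signal, dtype=float)
        for k, imf in enumerate(imfs):
            if k < n_noisy:
                sigma = np.median(np.abs(imf)) / 0.6745           # robust noise scale
                thr = sigma * np.sqrt(2.0 * np.log(len(signal)))  # universal threshold
                out += threshold_fn(imf, thr)
            else:
                out += imf
        return out

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 1000)
    clean = np.sin(2*np.pi*5*t)
    noisy = clean + 0.3*rng.standard_normal(t.size)
    out = emd_denoise(noisy)
    snr = 10*np.log10(np.sum(clean**2) / np.sum((out - clean)**2))
    print(f"output SNR: {snr:.1f} dB")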

Keywords: customized thresholding, ECG signal, EMD, hard thresholding, soft thresholding

Procedia PDF Downloads 302
1523 Hydrogen Sulfide Removal from Biogas Using Biofilm on Packed Bed of Salak Fruit Seeds

Authors: Retno A. S. Lestari, Wahyudi B. Sediawan, Siti Syamsiah, Sarto

Abstract:

Sulfur-oxidizing bacteria were isolated and then grown on snake fruit (salak) seeds, forming a biofilm, and their performance in sulfide removal was experimentally observed. The seeds were then used as packing material in a cylindrical tube, and biological treatment of hydrogen sulfide from biogas was investigated using the biofilm on this packed bed. Biogas containing 27.9512 ppm of hydrogen sulfide was passed through the bed, and the hydrogen sulfide concentrations in the outlet at various times were analyzed. A set of simple kinetic models for the rate of sulfide removal and bacterial growth is proposed. The axial sulfide concentration gradient in the flowing liquid is assumed to be at steady state, while the biofilm grows on the surface of the seeds and the oxidation takes place in the biofilm. Since the biofilm is very thin, the sulfide concentration in the biofilm is assumed to be uniform. The resulting simultaneous ordinary differential equations were solved numerically using a Runge-Kutta method. The accuracy of the proposed model was tested by comparing the calculated results with the experimental data, and it turned out that the model can describe the removal of sulfide using a biofilter in a packed bed. The values of the parameters were obtained by curve fitting. The biofilter removed 89.83% of the inlet hydrogen sulfide from the biogas over 2.5 h at an optimum loading of 8.33 ml/h.
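
The workflow of integrating the kinetic equations with a Runge-Kutta solver and fitting the parameters to breakthrough data can be sketched as follows in Python. The model form, the parameter names (k0, mu) and the data points below are simplified stand-ins, not the kinetics or measurements of the paper.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import curve_fit

    # Simplified stand-in model: first-order sulfide removal proportional to
    # biomass, plus logistic biofilm growth (hypothetical parameters k0, mu).
    def model(t, y, k0, mu):
        c, x = y                     # c: outlet H2S fraction, x: relative biomass
        dc = -k0 * x * c             # removal rate grows with biomass
        dx = mu * x * (1.0 - x)      # logistic biofilm growth
        return [dc, dx]

    def outlet(t, k0, mu):
        sol = solve_ivp(model, (t.min(), t.max()), [1.0, 0.05],
                        t_eval=t, args=(k0, mu), method="RK45")
        return sol.y[0]

    # Hypothetical breakthrough data (outlet/inlet ratio vs time in hours)
    t_data = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
    c_data = np.array([1.0, 0.65, 0.40, 0.25, 0.15, 0.10])

    (k0, mu), _ = curve_fit(outlet, t_data, c_data, p0=[1.0, 1.0])
    print(f"fitted k0={k0:.2f} 1/h, mu={mu:.2f} 1/h")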

Keywords: sulfur-oxidizing bacteria, snake fruit seeds, biofilm, packing material, biogas

Procedia PDF Downloads 408
1522 The Fiscal-Monetary Policy and Economic Growth in Algeria: VECM Approach

Authors: K. Bokreta, D. Benanaya

Abstract:

The objective of this study is to examine the relative effectiveness of monetary and fiscal policy in Algeria, using the econometric techniques of cointegration and vector error correction modelling to analyse the data and draw policy inferences. The chosen fiscal policy variables are government expenditure and net taxes on products, while monetary policy is represented by the inflation rate and the official exchange rate. From the results, we find that in the long run the impact of government expenditure on growth is positive, while the effect of taxes is negative. Additionally, we find that the inflation rate has little effect on GDP per capita, and the impact of the exchange rate is insignificant. We conclude that fiscal policy is more powerful than monetary policy in promoting economic growth in Algeria.
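
A hedged sketch of the estimation step is shown below using the VECM implementation in statsmodels; the synthetic series, column names and lag order are placeholders for the recorded Algerian data and the specification actually used.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

    # Synthetic stand-in data: one common stochastic trend plus noise, mimicking
    # five macro series; the real study uses recorded Algerian data instead.
    rng = np.random.default_rng(0)
    n = 120
    trend = np.cumsum(rng.normal(size=n))
    data = pd.DataFrame({
        "gdp_pc":    trend + rng.normal(scale=0.5, size=n),
        "gov_exp":   0.8 * trend + rng.normal(scale=0.5, size=n),
        "net_tax":   0.5 * trend + rng.normal(scale=0.5, size=n),
        "inflation": rng.normal(scale=1.0, size=n),
        "exch_rate": 0.3 * trend + rng.normal(scale=0.5, size=n),
    })

    rank = select_coint_rank(data, det_order=0, k_ar_diff=2)   # Johansen trace test
    res = VECM(data, k_ar_diff=2, coint_rank=max(rank.rank, 1),
               deterministic="ci").fit()
    print(res.summary())   # short-run coefficients and loading matrix
    print(res.beta)        # long-run cointegrating vector(s)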

Keywords: economic growth, monetary policy, fiscal policy, VECM

Procedia PDF Downloads 310
1521 Automated End-to-End Pipeline Processing Solution for Autonomous Driving

Authors: Ashish Kumar, Munesh Raghuraj Varma, Nisarg Joshi, Gujjula Vishwa Teja, Srikanth Sambi, Arpit Awasthi

Abstract:

Autonomous driving vehicles are revolutionizing the transportation system of the 21st century. This has been possible due to intensive research into making robust, reliable, and intelligent programs that can perceive and understand their environment and make decisions based on that understanding. It is a very data-intensive task, with data coming from multiple sensors, and the amount of data directly affects the performance of the system. Researchers have to design a preprocessing pipeline for each dataset, with its particular sensor orientations and alignments, before the dataset can be fed to the model. This paper proposes a solution that unifies all the data from different sources into a uniform format using the intrinsic and extrinsic parameters of the sensors used to capture the data, allowing the same pipeline to use data from multiple sources at a time. This also means easy adoption of new datasets or in-house generated datasets. The solution also automates the complete deep learning pipeline from preprocessing to post-processing for various tasks, allowing researchers to design multiple custom end-to-end pipelines. Thus, the solution takes care of the input and output data handling, saving the time and effort spent on it and allowing more time for model improvement.
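
As an illustration of unifying sensor data through intrinsic and extrinsic parameters, the Python sketch below projects lidar points into a camera image; the calibration matrices and the point cloud are placeholders, since each dataset supplies its own calibration.

    import numpy as np

    def project_lidar_to_image(points_lidar, T_cam_lidar, K):
        """Map lidar points into a camera image using the extrinsic transform
        (lidar -> camera) and the intrinsic matrix K."""
        pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])  # Nx4
        pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]          # into the camera frame
        pts_cam = pts_cam[pts_cam[:, 2] > 0]                # keep points ahead of camera
        uvw = (K @ pts_cam.T).T                             # pinhole projection
        uv = uvw[:, :2] / uvw[:, 2:3]                       # pixel coordinates
        return uv, pts_cam[:, 2]                            # pixels and depths

    # Placeholder calibration (identity rotation, small translation, generic intrinsics)
    T = np.eye(4); T[:3, 3] = [0.0, -0.1, -0.2]
    K = np.array([[720.0, 0.0, 640.0], [0.0, 720.0, 360.0], [0.0, 0.0, 1.0]])
    cloud = np.random.rand(1000, 3) * [20, 10, 5]           # synthetic points
    uv, depth = project_lidar_to_image(cloud, T, K)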

Keywords: augmentation, autonomous driving, camera, custom end-to-end pipeline, data unification, lidar, post-processing, preprocessing

Procedia PDF Downloads 123
1520 Secure Optical Communication System Using Quantum Cryptography

Authors: Ehab AbdulRazzaq Hussein

Abstract:

Quantum cryptography (QC) is an emerging technology for secure key distribution with single-photon transmissions. In contrast to classical cryptographic schemes, the security of QC schemes is guaranteed by the fundamental laws of nature. Their security stems from the impossibility of distinguishing non-orthogonal quantum states with certainty. A potential eavesdropper introduces errors into the transmissions, which can later be discovered by the legitimate participants of the communication. In this paper, a modeling approach is proposed for the QC protocol BB84 using polarization coding. A single-photon source is assumed in the designed models, so Eve cannot use a beam-splitting strategy to eavesdrop on the quantum channel; the only eavesdropping strategy available to Eve is the intercept/resend strategy. After the quantum transmission of the BB84 protocol, the quantum bit error rate (QBER) is estimated and compared with a threshold value; if it is above this value, the procedure must be stopped and repeated later.
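
A minimal Monte Carlo sketch of BB84 with an intercept/resend eavesdropper is given below; the pulse count is arbitrary, and channel loss and detector noise are ignored, so the QBER reflects only the attack.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000                               # number of single-photon pulses

    alice_bits  = rng.integers(0, 2, n)
    alice_basis = rng.integers(0, 2, n)       # 0: rectilinear, 1: diagonal
    bob_basis   = rng.integers(0, 2, n)

    eavesdrop = True                          # toggle the intercept/resend attack
    if eavesdrop:
        eve_basis = rng.integers(0, 2, n)
        # Eve measures; a wrong basis gives her a random bit, then she resends.
        eve_bits = np.where(eve_basis == alice_basis, alice_bits, rng.integers(0, 2, n))
        # Bob measures Eve's resent photons.
        bob_bits = np.where(bob_basis == eve_basis, eve_bits, rng.integers(0, 2, n))
    else:
        bob_bits = np.where(bob_basis == alice_basis, alice_bits, rng.integers(0, 2, n))

    sift = alice_basis == bob_basis           # keep only matching-basis positions
    qber = np.mean(alice_bits[sift] != bob_bits[sift])
    print(f"QBER = {qber:.3f}  (expected ~0.25 under intercept/resend, ~0 otherwise)")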

Keywords: security, key distribution, cryptography, quantum protocols, quantum cryptography (QC), quantum key distribution (QKD)

Procedia PDF Downloads 406
1519 Behavioral and EEG Reactions in Native Turkic-Speaking Inhabitants of Siberia and Siberian Russians during Recognition of Syntactic Errors in Sentences in Native and Foreign Languages

Authors: Tatiana N. Astakhova, Alexander E. Saprygin, Tatyana A. Golovko, Alexander N. Savostyanov, Mikhail S. Vlasov, Natalia V. Borisova, Alexandera G. Karpova, Urana N. Kavai-ool, Elena D. Mokur-ool, Nikolay A. Kolchanov, Lubomir I. Aftanas

Abstract:

The aim of the study is to compare behavioral and EEG reactions of Turkic-speaking inhabitants of Siberia (Tuvinians and Yakuts) and Russians during the recognition of syntax errors in native and foreign languages. 63 healthy aboriginal residents of the Tyva Republic, 29 inhabitants of the Sakha (Yakutia) Republic, and 55 Russians from Novosibirsk participated in the study. All participants completed a linguistic task in which they had to find a syntax error in written sentences. Russian participants completed the task in Russian and in English; Tuvinian and Yakut participants completed the task in Russian, English, and Tuvinian or Yakut, respectively. EEGs were recorded while the tasks were solved. For Russian participants, EEGs were recorded using 128 channels; the electrodes were placed according to the extended International 10-10 system, and the signals were amplified using Neuroscan (USA) amplifiers. For Tuvinians and Yakuts, EEGs were recorded using 64 channels and Brain Products (Germany) amplifiers. In all groups, 0.3-100 Hz analog filtering and a 1000 Hz sampling rate were used. Response speed and the accuracy of error recognition were used as parameters of the behavioral reactions, and the event-related potential (ERP) components P300 and P600 were used as indicators of brain activity. The accuracy of solving the tasks and the response speed in Russians were higher for Russian than for English. The P300 amplitudes in Russians were higher for English, while the P600 amplitudes in the left temporal cortex were higher for Russian. Both Tuvinians and Yakuts showed no difference in accuracy between tasks in Russian and in their respective national languages (Tuvinian and Yakut); however, the response speed was faster for tasks in Russian than in the national language. Tuvinians and Yakuts showed poor accuracy in English, but their response speed was higher for English than for Russian and the national languages. In Tuvinians, there were no differences in the P300 and P600 amplitudes or in cortical topology between Russian and Tuvinian, but there was a difference for English. In Yakuts, the P300 and P600 amplitudes and the ERP topology for Russian were the same as those of Russians for Russian. In Yakuts, brain reactions during Yakut and English comprehension did not differ and reflected foreign-language comprehension, while Russian comprehension reflected native-language comprehension. We found that the Tuvinians recognized both Russian and Tuvinian as native languages and English as a foreign language, whereas the Yakuts recognized both English and Yakut as foreign languages and only Russian as a native language. According to the questionnaire, both Tuvinians and Yakuts use the national language as a spoken language but do not use it for writing, which may be why the Yakuts perceive written Yakut as a foreign language while perceiving written Russian as native.

Keywords: EEG, language comprehension, native and foreign languages, Siberian inhabitants

Procedia PDF Downloads 532
1518 A More Powerful Test Procedure for Multiple Hypothesis Testing

Authors: Shunpu Zhang

Abstract:

We propose a new multiple test called the minPOP test for testing multiple hypotheses simultaneously. Under the assumption that the test statistics are independent, we show that the minPOP test has higher global power than the existing multiple testing methods. We further propose a stepwise multiple-testing procedure based on the minPOP test and two of its modified versions (the Double Truncated and Left Truncated minPOP tests). We show that these multiple tests have strong control of the family-wise error rate (FWER). A method for finding the p-values of the proposed tests after adjusting for multiplicity is also developed. Simulation results show that the Double Truncated and Left Truncated minPOP tests, in general, have a higher number of rejections than the existing multiple testing procedures.

Keywords: multiple test, single-step procedure, stepwise procedure, p-value for multiple testing

Procedia PDF Downloads 83
1517 Synthesis of Mesoporous In₂O₃-TiO₂ Nanocomposites as Efficient Photocatalyst for Treatment Industrial Wastewater under Visible Light and UV Illumination

Authors: Ibrahim Abdelfattah, Adel Ismail, Ahmed Helal, Mohamed Faisal

Abstract:

Advanced oxidation technologies are an environmentally friendly approach for the remediation of industrial wastewaters. Here, mesoporous In₂O₃-TiO₂ nanocomposites with different In₂O₃ contents (0-3 wt%) were synthesized in one pot through a facile sol-gel method, and their photocatalytic performance for the degradation of the imazapyr herbicide and phenol under visible-light and UV illumination was evaluated and compared with the commercially available photocatalysts Degussa P-25 and Hombikat UV-100. The prepared mesoporous In₂O₃-TiO₂ nanocomposites were characterized by TEM, STEM, XRD, FT-IR, Raman spectroscopy and UV-visible diffuse reflectance spectroscopy, and the bandgap energies of the prepared photocatalysts were derived from the diffuse reflectance spectra. The XRD and Raman spectra confirmed that a highly crystalline anatase TiO₂ phase was formed. TEM images show that the TiO₂ particles are quite uniform, with sizes of 10±2 nm and a mesoporous structure. The mesoporous TiO₂ exhibits a large pore volume of 0.267 cm³g⁻¹ and a high surface area of 178 m²g⁻¹, which are reduced to 0.211 cm³g⁻¹ and 112 m²g⁻¹, respectively, upon In₂O₃ incorporation, with a tunable mesopore diameter in the range of 5-7 nm. The 0.5% In₂O₃-TiO₂ nanocomposite is considered the optimum photocatalyst, able to degrade 90% of the imazapyr herbicide and phenol within 180 min and 60 min, respectively. The proposed mechanism of this system and the role of In₂O₃ are explained in detail.

Keywords: In₂O₃-TiO₂ nanocomposites, sol-gel method, visible light illumination, UV illumination, herbicide and phenol wastewater, removal

Procedia PDF Downloads 297
1516 An Analysis of Classification of Imbalanced Datasets by Using Synthetic Minority Over-Sampling Technique

Authors: Ghada A. Alfattni

Abstract:

Analysing imbalanced datasets is one of the challenges that practitioners in the machine learning field face. Many studies have been carried out to determine the effectiveness of the synthetic minority over-sampling technique (SMOTE) in addressing this issue. The aim of this study was therefore to compare the effectiveness of SMOTE across different models on imbalanced datasets. Three classification models (logistic regression, support vector machine and nearest neighbour) were tested with multiple datasets; the same datasets were then oversampled using SMOTE and applied again to the three models to compare the differences in performance. The results of the experiments show that the highest number of nearest neighbours gives lower error rates.
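
A minimal sketch of the comparison described, assuming scikit-learn and imbalanced-learn; the synthetic dataset, class imbalance and model settings are illustrative, not those of the study.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import balanced_accuracy_score
    from imblearn.over_sampling import SMOTE

    X, y = make_classification(n_samples=3000, weights=[0.95, 0.05], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    models = {"logistic": LogisticRegression(max_iter=1000),
              "svm": SVC(),
              "knn": KNeighborsClassifier(n_neighbors=5)}

    for name, clf in models.items():
        base = clf.fit(X_tr, y_tr).predict(X_te)
        # Oversample the training set only, then refit the same model
        X_sm, y_sm = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
        smote = clf.fit(X_sm, y_sm).predict(X_te)
        print(f"{name:9s} balanced accuracy: "
              f"plain={balanced_accuracy_score(y_te, base):.3f} "
              f"SMOTE={balanced_accuracy_score(y_te, smote):.3f}")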

Keywords: imbalanced datasets, SMOTE, machine learning, logistic regression, support vector machine, nearest neighbour

Procedia PDF Downloads 350
1515 Methanol Steam Reforming with Heat Recovery for Hydrogen-Rich Gas Production

Authors: Horng-Wen Wu, Yi Chao, Rong-Fang Horng

Abstract:

This study develops a methanol steam reformer with a heat recovery zone, which recovers heat from the exhaust gas of a diesel engine, and investigates the waste heat recovery ratio at the required reaction temperature. The operating conditions of the reformer are the reaction temperature (200 °C, 250 °C, and 300 °C), the steam-to-carbon (S/C) ratio (0.9, 1.1, and 1.3), and the N2 volume flow rate (40 cm3/min, 70 cm3/min, and 100 cm3/min). The hydrogen, CO, CO2, and N2 concentrations are measured and recorded to calculate the methanol conversion efficiency, the hydrogen flow rate, and the ratio of combustion-assisting to combustion-impeding gases. The heat for the reformer comes from an electric heater and the waste heat of the diesel engine exhaust gas. The objective is to recover waste heat from the engine and to obtain a more uniform temperature distribution within the reformer, which helps the reformer enhance the methanol conversion efficiency and hydrogen-rich gas production. Experimental results show that the highest hydrogen flow rate, 19.6 l/min, occurs at an N2 volume flow rate of 40 cm3/min and a reforming reaction temperature of 300 °C. With the electric heater and heat recovery from the exhaust gas, the maximum heat recovery ratio is 13.18%, occurring at an S/C ratio of 1.3 and a reforming reaction temperature of 300 °C.

Keywords: heat recovery, hydrogen-rich production, methanol steam reformer, methanol conversion efficiency

Procedia PDF Downloads 466
1514 Empirical Evaluation of Gradient-Based Training Algorithms for Ordinary Differential Equation Networks

Authors: Martin K. Steiger, Lukas Heisler, Hans-Georg Brachtendorf

Abstract:

Deep neural networks and their variants form the backbone of many AI applications. Based on the so-called residual networks, a continuous formulation of such models as ordinary differential equations (ODEs) has proven advantageous, since techniques can be applied that significantly increase the learning speed while enabling controlled trade-offs with the resulting error. For the evaluation of such models, high-performance numerical differential equation solvers are used, which also provide the gradients required for training. However, whether classical gradient-based methods are even applicable, or which one yields the best results, has not been discussed yet. This paper aims to remedy this situation by providing empirical results for different applications.
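
A minimal sketch of training an ODE network with different gradient-based optimisers is shown below, assuming the torchdiffeq package; the toy task, network size and optimiser settings are illustrative and not the benchmarks of the paper.

    import torch
    import torch.nn as nn
    from torchdiffeq import odeint  # differentiable ODE solver providing gradients

    class ODEFunc(nn.Module):
        """Right-hand side f(t, y) of the ODE block."""
        def __init__(self, dim=2):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(dim, 32), nn.Tanh(), nn.Linear(32, dim))
        def forward(self, t, y):
            return self.net(y)

    def train(optimizer_cls, steps=200, lr=1e-2):
        torch.manual_seed(0)
        func = ODEFunc()
        opt = optimizer_cls(func.parameters(), lr=lr)
        t = torch.linspace(0.0, 1.0, 10)
        y0 = torch.tensor([[1.0, 0.0]])
        target = torch.tensor([[0.0, 1.0]])          # toy terminal state to reach
        for _ in range(steps):
            opt.zero_grad()
            y = odeint(func, y0, t)                  # solve the ODE, keep the graph
            loss = ((y[-1] - target) ** 2).mean()
            loss.backward()                          # gradients through the solver
            opt.step()
        return loss.item()

    for opt in (torch.optim.SGD, torch.optim.Adam, torch.optim.RMSprop):
        print(opt.__name__, train(opt))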

Keywords: deep neural networks, gradient-based learning, image processing, ordinary differential equation networks

Procedia PDF Downloads 168
1513 Design of Membership Ranges for Fuzzy Logic Control of Refrigeration Cycle Driven by a Variable Speed Compressor

Authors: Changho Han, Jaemin Lee, Li Hua, Seokkwon Jeong

Abstract:

The design of membership function ranges in fuzzy logic control (FLC) is presented for robust control of a variable speed refrigeration system (VSRS). The criterion values of the membership function ranges can be derived from static experimental data, and two different sets of values are offered to compare control performance. Simulations and real experiments on the VSRS were conducted to verify the validity of the designed membership functions. The experimental results showed good agreement with the simulation results, and the error change rate and its sampling time strongly affected the control performance in the transient state of the VSRS.
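
A toy Python sketch of how candidate membership-function ranges can be compared is given below; the triangular shape and the two range sets are illustrative only, whereas the paper derives its criterion values from static experiments on the VSRS.

    import numpy as np

    def trimf(x, a, b, c):
        """Triangular membership function with feet a, c and peak b."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    # Two candidate range sets for the error input (values are illustrative only)
    error = np.linspace(-3.0, 3.0, 601)
    ranges = {"narrow": (-1.0, 0.0, 1.0), "wide": (-2.5, 0.0, 2.5)}

    for name, (a, b, c) in ranges.items():
        mu_zero = trimf(error, a, b, c)          # membership of the "zero error" set
        coverage = np.mean(mu_zero > 0.5)        # fraction of the error axis covered
        print(f"{name:6s} range: ZE covers {coverage:.0%} of the error axis")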

Keywords: variable speed refrigeration system, fuzzy logic control, membership function range, control performance

Procedia PDF Downloads 265
1512 Sensorless Controller of Induction Motor Using Backstepping Approach and Fuzzy MRAS

Authors: Ahmed Abbou

Abstract:

This paper presents a sensorless controller designed with the backstepping approach for the speed control of an induction motor. In this control strategy, we also combine a fuzzy MRAS method to estimate the rotor speed with a Luenberger-type observer to observe the rotor flux. The control model involves a division by the flux variable that may lead to unbounded solutions; such a risk is avoided by basing the controller design on a Lyapunov function that accounts for the model singularity. This combined method gives better results in sensorless operation, especially at low speed. The response time at 5% of the flux is 20 ms, while the error between the measured and estimated speed remains within ±0.8 rad/s for rated operation and ±1.5 rad/s at low speed.

Keywords: backstepping approach, fuzzy logic, induction motor, Luenberger observer, sensorless MRAS

Procedia PDF Downloads 373
1511 Failure Load Investigations in Adhesively Bonded Single-Strap Joints of Dissimilar Materials Using Cohesive Zone Model

Authors: B. Paygozar, S.A. Dizaji

Abstract:

Adhesive bonding is a highly valued method of fastening mechanical parts in complex structures, where joining simple components is always needed. The method has several merits, such as uniform stress distribution, good bonding strength, fatigue performance, and lightness, which make it preferable to other bonding methods. This study investigates the failure load of adhesive single-strap joints with adherends of different sizes and materials. This kind of adhesive joint is very practical in different industries, especially when repairing existing joints or attaching substrates of dissimilar materials. In this research, experimentally validated numerical analyses carried out in a commercial finite element package, ABAQUS, are used to extract the failure loads of the joints based on the cohesive zone model. In addition, stress analyses of the substrates are performed to determine the effect of reducing the substrate thickness on the stress distribution inside them, so as to avoid designs suffering from necking or failure of the adherends. It was found that this method of bonding is feasible for joining dissimilar materials and can be utilized in a variety of applications. Moreover, the stress analyses indicated the minimum thickness of the adherends required to avoid their failure.

Keywords: cohesive zone model, dissimilar materials, failure load, single strap joint

Procedia PDF Downloads 123
1510 Studying the Dynamical Response of Nano-Microelectromechanical Devices for Nanomechanical Testing of Nanostructures

Authors: Mohammad Reza Zamani Kouhpanji

Abstract:

Characterizing the fatigue and fracture properties of nanostructures is one of the most challenging tasks in nanoscience and nanotechnology due to the lack of a MEMS/NEMS device for generating uniform cyclic loadings at high frequencies. Here, the dynamic response of a recently proposed MEMS/NEMS device under different input signals is investigated. This MEMS/NEMS device is designed and modeled based on the electromagnetic force induced between paired parallel wires carrying electrical currents, known as Ampere’s Force Law (AFL). Since this MEMS/NEMS device uses only two paired wires for the actuation and sensing parts, it provides a highly sensitive and linear response for nanostructures of any stiffness and shape (single nanowires, nanotubes, nanosheets or nanowalls, or arrays of them). In addition to studying the maximum gains at the different resonance frequencies of the MEMS/NEMS device, its dynamical responses are investigated for different inputs and nanostructure properties to demonstrate the capability, usability, and reliability of the device for a wide range of nanostructures. This MEMS/NEMS device can be readily integrated into SEM/TEM instruments to provide real-time study of the fatigue and fracture properties of nanostructures as well as their softening or hardening behaviors and the initiation and/or propagation of nanocracks in them.
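
The force scale of the actuation principle follows directly from Ampere's force law, as in this small Python sketch; the currents and wire separation are illustrative, not the device's actual design values.

    import numpy as np

    MU0 = 4e-7 * np.pi   # vacuum permeability, T·m/A

    def force_per_length(i1, i2, d):
        """Ampere's force law: force per unit length between parallel currents."""
        return MU0 * i1 * i2 / (2.0 * np.pi * d)

    # Illustrative numbers only: 10 mA in each wire, 2 µm apart
    print(f"{force_per_length(10e-3, 10e-3, 2e-6):.3e} N/m")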

Keywords: MEMS/NEMS devices, paired wire actuators and sensors, dynamical response, fatigue and fracture characterization, Ampere’s force law

Procedia PDF Downloads 400
1509 Mean Velocity Modeling of Open-Channel Flow with Submerged Vegetation

Authors: Mabrouka Morri, Amel Soualmia, Philippe Belleudy

Abstract:

Vegetation affects the mean and turbulent flow structure and may increase flood risks and sediment transport. It is therefore important to develop analytical approaches for the bed shear stress on a vegetated bed in order to predict the resistance caused by vegetation. In recent years, both experimental and numerical models have been developed to describe the effects of submerged vegetation on open-channel flow. In this paper, different analytic models are compared and tested using deviation criteria to explore their capacity for predicting the mean velocity, and to select a suitable one to be applied to real rivers. The comparison between the data measured in a vegetated flume and the simulated mean velocities indicated good performance in the case of rigid vegetation, with the Huthoff model showing the best agreement, with a high coefficient of determination (R2=80%) and the smallest error in the prediction of the average velocities.

Keywords: analytic models, comparison, mean velocity, vegetation

Procedia PDF Downloads 276
1508 A Study of Effective Stereo Matching Method for Long-Wave Infrared Camera Module

Authors: Hyun-Koo Kim, Yonghun Kim, Yong-Hoon Kim, Ju Hee Lee, Myungho Song

Abstract:

In this paper, we describe an efficient stereo matching method and a pedestrian detection method using a stereo LWIR camera. We compared three stereo matching algorithms: block matching, ELAS, and SGM. For pedestrian detection with the stereo LWIR camera, we used the SGM stereo matching method, a free-space detection method using the u/v-disparity, and HOG-feature-based pedestrian detection. According to the test results, the SGM method performs better than the block matching and ELAS algorithms. The combination of SGM, free-space detection, and pedestrian detection using HOG features and SVM classification can detect pedestrians at a distance of 30 m with a distance error of about 30 cm.
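
A minimal OpenCV sketch of the SGM-plus-HOG pipeline is shown below; the file names, SGBM parameters, rig calibration and the default (visible-light-trained) people detector are stand-ins for the paper's LWIR-specific setup.

    import cv2
    import numpy as np

    # Assumption: rectified left/right LWIR frames exist under these placeholder names
    left  = cv2.imread("lwir_left.png",  cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("lwir_right.png", cv2.IMREAD_GRAYSCALE)

    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5,
                                 P1=8 * 5 * 5, P2=32 * 5 * 5, uniquenessRatio=10)
    disparity = sgbm.compute(left, right).astype(np.float32) / 16.0   # fixed point -> pixels

    # HOG + pre-trained linear SVM pedestrian detector as a stand-in classifier
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    boxes, weights = hog.detectMultiScale(left, winStride=(8, 8))

    # Rough range from the median disparity inside each detection; focal length
    # (pixels) and baseline (metres) are placeholders for the actual calibration.
    f_px, baseline_m = 500.0, 0.3
    for (x, y, w, h) in boxes:
        d = np.median(disparity[y:y + h, x:x + w])
        if d > 0:
            print(f"pedestrian at ~{f_px * baseline_m / d:.1f} m")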

Keywords: advanced driver assistance system, pedestrian detection, stereo matching method, stereo long-wave IR camera

Procedia PDF Downloads 415
1507 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study

Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming

Abstract:

Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and control group with dichotomous outcomes. Its popularity is primarily due to its stability and robustness to model misspecification. However, the situation is different for the relative risk and risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely accepted approach to estimate an adjusted relative risk or risk difference when conducting clinical trials. This is partly due to the lack of a comprehensive evaluation of the available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of relevant candidate methods for estimating relative risks, representing conditional and marginal estimation approaches. We consider the log-binomial generalised linear model (GLM) with iteratively weighted least squares (IWLS) and model-based standard errors (SEs); log-binomial GLM with convex optimisation and model-based SEs; log-binomial GLM with convex optimisation and permutation tests; modified-Poisson GLM with IWLS and robust SEs; log-binomial generalised estimating equations (GEE) with robust SEs; marginal standardisation with delta-method SEs; and marginal standardisation with permutation-test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial to evaluate these candidate methods. Simulations are replicated 10000 times for each scenario across all possible combinations of sample sizes (200, 1000, and 5000), outcome rates (10%, 50%, and 80%), and covariates (ranging from -0.05 to 0.7) representing weak, moderate or strong relationships. Treatment effects (0, -0.5, and 1 on the log scale) cover null (H0) and alternative (H1) hypotheses to evaluate coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strength, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks. However, it is unclear which method(s) is the most efficient, preserves the type-I error rate, is robust to model misspecification, or is the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimation may be biased when the outcome distributions are not from marginal binary data. Also, it seems that marginal standardisation and convex optimisation may perform better than the GLM IWLS log-binomial approach.
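
As one concrete instance of the candidate methods listed, the sketch below fits the modified-Poisson estimator (Poisson GLM with robust sandwich SEs) to simulated two-arm data using statsmodels; the data-generating values are illustrative, not the study's simulation settings.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Simulated two-arm trial with one prognostic covariate (illustrative values)
    rng = np.random.default_rng(0)
    n = 1000
    treat = rng.integers(0, 2, n)
    covar = rng.normal(size=n)
    p = 1 / (1 + np.exp(-(-1.0 + 0.5 * treat + 0.7 * covar)))
    y = rng.binomial(1, p)

    X = sm.add_constant(pd.DataFrame({"treat": treat, "covar": covar}))

    # Modified-Poisson approach: Poisson GLM for a binary outcome with robust
    # (sandwich) standard errors; exp(coef) is the adjusted relative risk.
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC1")
    rr = np.exp(fit.params["treat"])
    lo, hi = np.exp(fit.conf_int().loc["treat"])
    print(f"adjusted RR = {rr:.2f}  (95% CI {lo:.2f}-{hi:.2f})")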

Keywords: binary outcomes, statistical methods, clinical trials, simulation study

Procedia PDF Downloads 115
1506 Capacity Estimation of Hybrid Automated Repeat Request Protocol for Low Earth Orbit Mega-Constellations

Authors: Arif Armagan Gozutok, Alper Kule, Burak Tos, Selman Demirel

Abstract:

A wireless communication chain requires effective ways to keep throughput efficiency high while it suffers from location-dependent, time-varying burst errors. Several techniques have been developed to ensure that the receiver recovers the transmitted information without errors; the most fundamental approaches are error detection and correction, together with re-transmission of non-acknowledged packets. In this paper, stop-and-wait (SAW) and chase combining (CC) hybrid automated repeat request (HARQ) protocols are compared and analyzed in terms of throughput and average delay for the low earth orbit (LEO) mega-constellation case. Several assumptions and technological implementations are considered, including the use of low-density parity check (LDPC) codes together with several constellation orbit configurations.
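
A rough Monte Carlo sketch comparing SAW and chase-combining HARQ throughput is given below; the loss probability, retransmission limit and the simple combining-gain model are illustrative abstractions, not the paper's link model.

    import numpy as np

    rng = np.random.default_rng(2)

    def simulate(packets=20000, p_loss=0.3, max_tx=4, chase=False):
        """Each (re)transmission fails with p_loss; with Chase combining the
        failure probability shrinks as copies accumulate. Propagation delay and
        coding details are deliberately abstracted away."""
        tx_used, delivered = 0, 0
        for _ in range(packets):
            for attempt in range(1, max_tx + 1):
                tx_used += 1
                p = p_loss ** attempt if chase else p_loss   # crude combining gain
                if rng.random() > p:
                    delivered += 1
                    break
        return delivered / tx_used          # packets delivered per transmission slot

    print(f"SAW HARQ throughput  : {simulate(chase=False):.3f}")
    print(f"Chase-combining HARQ : {simulate(chase=True):.3f}")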

Keywords: HARQ, LEO, satellite constellation, throughput

Procedia PDF Downloads 145
1505 Polarization Effects in Cosmic-Ray Acceleration by Cyclotron Auto-Resonance

Authors: Yousef I. Salamin

Abstract:

Theoretical investigations, analytical as well as numerical, have shown that electrons can be accelerated to GeV energies by the process of cyclotron auto-resonance acceleration (CARA). In CARA, the particle is injected along the lines of a uniform magnetic field aligned parallel to the direction of propagation of a plane-wave radiation field. Unfortunately, an accelerator based on CARA would be prohibitively long and expensive to build and maintain. However, the process stands a better chance of success near the polar cap of a compact object (such as a neutron star, a black hole or a magnetar) or in an environment created in the wake of a binary neutron-star or black-hole merger. The dynamics of the nuclides ₁H¹, ₂He⁴, ₂₆Fe⁵⁶, and ₂₈Ni⁶² in such astrophysical conditions have been investigated by single-particle calculations and many-particle simulations. The investigations show that these nuclides can reach ZeV energies (1 ZeV = 10²¹ eV) through interaction with super-intense radiation of wavelengths of 1 and 10 m and of 50 pm, and magnetic fields at the mega- and giga-tesla levels. Examples employing radiation intensities in the range 10³²-10⁴² W/m² have been used. Employing a two-parameter model for the radiation field, CARA is analytically generalized to include any state of polarization, and the basic working equations are derived rigorously in closed analytic form.

Keywords: compact objects, cosmic-ray acceleration, cyclotron auto-resonance, polarization effects, zevatron

Procedia PDF Downloads 123
1504 Augmenting Navigational Aids: The Development of an Assistive Maritime Navigation Application

Authors: A. Mihoc, K. Cater

Abstract:

On the bridge of a ship, the officers look for visual aids to guide navigation in order to reconcile the outside world with the position communicated by the digital navigation system. Aids to navigation include lighthouses, lightships, sector lights, beacons, buoys, and others. They are designed to help navigators calculate their position, establish their course or avoid dangers. In poor visibility and dense traffic areas, it can be very difficult to identify these critical aids. This paper presents the use of Augmented Reality (AR) as a means to present digital information about these aids to support navigation. To date, nautical-navigation-related mobile AR applications have been limited to the leisure industry. If proved viable, this prototype can facilitate the creation of other similar applications that could help commercial officers with navigation. Adopting a user-centered design approach, the team developed the prototype based on insights from initial research carried out on board several ships. The prototype, built on a Nexus 9 tablet and Wikitude, features a head-up display of the navigational aids (lights) in the area, presented in AR, and a bird’s-eye view mode presented on a simplified map. The application employs the aids-to-navigation data managed by Hydrographic Offices and the tablet’s sensors: GPS, gyroscope, accelerometer, compass and camera. Sea trials on board a Navy ship and a commercial ship revealed the end-users’ interest in using the application and the possibility of presenting further data in AR. The application calculates the GPS position of the ship and the bearing and distance to the navigational aids, all with a high level of accuracy. However, testing highlighted several issues which need to be resolved as the prototype is developed further. The prototype stretched the capabilities of Wikitude, loading over 500 objects during tests in a major port; this overloaded the display and required over 45 seconds to load the data. Therefore, extra filters for the navigational aids are being considered in order to declutter the screen. At night, the camera is not powerful enough to distinguish all the lights in the area. Also, magnetic interference with the bridge of the ship generated a continuous compass error in the AR display that varied between 5 and 12 degrees. The deviation of the compass was consistent over the whole testing duration, so the team is now looking at the possibility of allowing users to manually calibrate the compass. It is expected that for the use of AR in professional maritime contexts, further development of existing AR tools and hardware is needed. Designers will also need to implement a user-centered design approach in order to create better interfaces and display technologies for enhanced solutions to aid navigation.
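
The bearing and distance calculation can be sketched with the standard haversine and initial-bearing formulas, as in the Python example below; the coordinates are placeholders, not positions from the trials.

    import math

    def distance_and_bearing(lat1, lon1, lat2, lon2):
        """Great-circle distance (haversine) and initial bearing from the ship's
        GPS position to a charted aid to navigation."""
        R = 6371000.0                                # mean Earth radius, metres
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
        dist = 2 * R * math.asin(math.sqrt(a))
        brg = math.degrees(math.atan2(math.sin(dlmb) * math.cos(p2),
                                      math.cos(p1) * math.sin(p2)
                                      - math.sin(p1) * math.cos(p2) * math.cos(dlmb)))
        return dist, (brg + 360.0) % 360.0

    # Ship position and a lighthouse position (placeholder coordinates)
    d, b = distance_and_bearing(51.500, -0.120, 51.520, -0.090)
    print(f"{d/1852:.2f} NM at {b:.1f}\N{DEGREE SIGN}")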

Keywords: compass error, GPS, maritime navigation, mobile augmented reality

Procedia PDF Downloads 330
1503 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods

Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard

Abstract:

Non-invasive sampling is an alternative to collecting genetic samples directly: non-invasive samples (e.g., scats, feathers and hairs) are collected without handling the animal. Nevertheless, the use of non-invasive samples has some limitations, the main issue being degraded DNA, which leads to poorer extraction efficiency and genotyping errors. These errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that accommodate the errors and singularities of non-invasive samples; genotype matching and population estimation algorithms are important analysis tools that have been adapted to deal with such errors. Despite this recent development of analysis methods, there is still a lack of empirical performance comparisons among them. A comparison of methods on datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially for endangered and rare populations. To compare the analysis methods, four datasets obtained from the Dryad digital repository were used. Three matching algorithms (Cervus, Colony and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes and two for population estimation (Capwire and BayesN). The three matching algorithms showed different patterns of results. ETLM produced a smaller number of unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed, which is not a surprise given the similarity of their pairwise likelihood and clustering algorithms. The matches produced by ETLM showed almost no similarity with the genotypes matched by the other methods; its different clustering system and error model seem to lead to a more selective matching, although ETLM’s processing time and interface friendliness were the worst among the compared methods. The population estimators performed differently depending on the dataset, and there was a consensus between the estimators for only one dataset. BayesN gave both higher and lower estimates when compared with Capwire. BayesN does not consider the total number of recaptures, as Capwire does, but only the recapture events, which makes the estimator sensitive to data heterogeneity, that is, different capture rates between individuals. In these examples, the assumption of homogeneity seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. A broader analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony is the most appropriate for general use, considering the balance of processing time, interface and robustness. The heterogeneity of the recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers; Capwire is therefore advisable for general use, since it performs better in a wide range of situations.

Keywords: algorithms, genetics, matching, population

Procedia PDF Downloads 143
1502 Cancellation of Transducer Effects from Frequency Response Functions: Experimental Case Study on the Steel Plate

Authors: P. Zamani, A. Taleshi Anbouhi, M. R. Ashory, S. Mohajerzadeh, M. M. Khatibi

Abstract:

Modal analysis is a developing science in the experimental evaluation of the dynamic properties of structures. Mechanical devices such as accelerometers are one of the sources of error in measured modal testing parameters. In this paper, eliminating the effect of the accelerometer’s mass on the frequency response of the structure is studied. A strategy based on sensitivity analysis is used to eliminate the mass effect: the amount of mass change and the location at which to measure the structure’s response are chosen so as to give the least error in the frequency correction. Experimental modal testing is carried out on a steel plate, and the effect of the accelerometer’s mass is removed using this strategy. Finally, good agreement is achieved between numerical and experimental results.
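
The transducer-mass effect being cancelled can be illustrated on a single-degree-of-freedom stand-in for one plate mode, as in the sketch below; the mass, stiffness and damping values are illustrative, not identified from the test plate.

    import numpy as np

    def frf(freqs_hz, m, k, c):
        """Receptance FRF H(w) = 1 / (k - m w^2 + j c w) of a SDOF system."""
        w = 2 * np.pi * freqs_hz
        return 1.0 / (k - m * w**2 + 1j * c * w)

    m, k, c = 0.50, 2.0e5, 5.0              # kg, N/m, N·s/m (illustrative SDOF mode)
    dm = 0.005                              # 5 g accelerometer

    f = np.linspace(50, 150, 2000)
    bare = frf(f, m, k, c)
    loaded = frf(f, m + dm, k, c)           # accelerometer mass added at the point

    f_bare = f[np.argmax(np.abs(bare))]
    f_load = f[np.argmax(np.abs(loaded))]
    print(f"resonance shift due to transducer mass: {f_bare - f_load:.2f} Hz")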

Keywords: accelerometer mass, frequency response function, modal analysis, sensitivity analysis

Procedia PDF Downloads 446
1501 Application of Artificial Immune Systems Combined with Collaborative Filtering in Movie Recommendation System

Authors: Pei-Chann Chang, Jhen-Fu Liao, Chin-Hung Teng, Meng-Hui Chen

Abstract:

This research combines an artificial immune system with user- and item-based collaborative filtering to create an efficient and accurate recommendation system. By applying the antibody-antigen characteristics of the artificial immune system and using the Pearson correlation coefficient as the affinity threshold to cluster the data, our collaborative filtering can effectively find useful users and items for rating prediction. This research uses the MovieLens dataset as the testing target to evaluate the effectiveness of the algorithm developed in this study. The experimental results show that the algorithm can effectively and accurately predict movie ratings. Compared to some state-of-the-art collaborative filtering systems, our system outperforms them in terms of the mean absolute error on the MovieLens dataset.
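
A minimal sketch of Pearson-correlation, threshold-gated user-based prediction is given below; the affinity threshold and the tiny rating matrix are illustrative, and the sketch does not reproduce the immune-system clustering itself.

    import numpy as np

    def pearson_sim(a, b):
        """Pearson correlation over the items two users have both rated."""
        both = ~np.isnan(a) & ~np.isnan(b)
        if both.sum() < 2:
            return 0.0
        x, y = a[both], b[both]
        if x.std() == 0 or y.std() == 0:
            return 0.0
        return float(np.corrcoef(x, y)[0, 1])

    def predict(ratings, user, item, affinity=0.3):
        """User-based prediction keeping only neighbours above an affinity threshold."""
        mu = np.nanmean(ratings[user])
        num = den = 0.0
        for v in range(ratings.shape[0]):
            if v == user or np.isnan(ratings[v, item]):
                continue
            s = pearson_sim(ratings[user], ratings[v])
            if s >= affinity:                      # affinity threshold acts as the cluster gate
                num += s * (ratings[v, item] - np.nanmean(ratings[v]))
                den += abs(s)
        return mu if den == 0 else mu + num / den

    R = np.array([[5, 4, np.nan, 1], [4, np.nan, 4, 1], [1, 1, 5, 4], [np.nan, 1, 5, 5.]])
    print(f"predicted rating: {predict(R, user=0, item=2):.2f}")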

Keywords: artificial immune system, collaborative filtering, recommendation system, similarity

Procedia PDF Downloads 536
1500 Equity Risk Premiums and Risk Free Rates in Modelling and Prediction of Financial Markets

Authors: Mohammad Ghavami, Reza S. Dilmaghani

Abstract:

This paper presents an adaptive framework for modelling financial markets using equity risk premiums, risk-free rates and volatilities. The recorded economic factors are initially used to train four adaptive filters over a certain limited period of time in the past. Once the systems are trained, the adjusted coefficients are used for modelling and prediction of an important financial market index. Two different approaches, based on the least mean squares (LMS) and recursive least squares (RLS) algorithms, are investigated. A performance analysis of each method in terms of the mean squared error (MSE) is presented and the results are discussed. Computer simulations carried out using recorded data show MSEs of 4% and 3.4% for next-month prediction using the LMS and RLS adaptive algorithms, respectively. For twelve-month prediction, the RLS method shows better trend estimation than the LMS algorithm.
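
A minimal sketch of one-step-ahead LMS prediction is given below; the synthetic index path, filter order and step size are illustrative, not the recorded factors or settings of the paper.

    import numpy as np

    def lms_predict(x, order=4, mu=0.05):
        """One-step-ahead LMS prediction of a (normalised) index series."""
        w = np.zeros(order)
        preds = np.zeros_like(x)
        for n in range(order, len(x)):
            u = x[n - order:n][::-1]        # most recent samples first
            preds[n] = w @ u
            e = x[n] - preds[n]             # prediction error
            w += 2 * mu * e * u             # LMS weight update
        mse = np.mean((x[order:] - preds[order:]) ** 2)
        return preds, mse

    rng = np.random.default_rng(3)
    series = np.cumsum(rng.normal(0, 0.01, 600)) + 1.0   # toy index path
    series = (series - series.mean()) / series.std()
    _, mse = lms_predict(series)
    print(f"in-sample one-step MSE: {mse:.4f}")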

Keywords: adaptive methods, LSE, MSE, prediction of financial markets

Procedia PDF Downloads 336