Search results for: dimensional error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3942

3552 Reduction of Impulsive Noise in OFDM System Using Adaptive Algorithm

Authors: Alina Mirza, Sumrin M. Kabir, Shahzad A. Sheikh

Abstract:

Orthogonal Frequency Division Multiplexing (OFDM), with its high data rate, high spectral efficiency, and ability to mitigate the effects of multipath, is highly suitable for wireless applications. Impulsive noise distorts OFDM transmission, so methods to suppress it must be investigated. In this paper, an adaptive impulsive noise suppressor for OFDM communication systems based on the State Space Recursive Least Squares (SSRLS) algorithm is proposed and compared with another adaptive algorithm. The state-space model-dependent recursive parameters of the proposed scheme enable it to achieve a lower steady-state mean squared error (MSE), a lower bit error rate (BER), and faster convergence than some existing algorithms.
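
For reference, the following is a minimal sketch of a conventional recursive least squares (RLS) adaptive filter, the family to which the proposed SSRLS suppressor belongs; the filter order, forgetting factor, and toy signal model are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rls_filter(x, d, order=4, lam=0.99, delta=100.0):
    """Standard RLS: adapt FIR weights w so that w @ [x[n], x[n-1], ...] tracks d[n].
    lam is the forgetting factor; delta initializes the inverse correlation matrix."""
    w = np.zeros(order)
    P = delta * np.eye(order)              # estimate of the inverse correlation matrix
    y, e = np.zeros(len(x)), np.zeros(len(x))
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # [x[n], x[n-1], ..., x[n-order+1]]
        k = P @ u / (lam + u @ P @ u)      # gain vector
        y[n] = w @ u
        e[n] = d[n] - y[n]                 # a priori error
        w = w + k * e[n]
        P = (P - np.outer(k, u @ P)) / lam
    return y, e, w

# Demo: recover a 2-tap channel d[n] = 0.5 x[n] + 0.3 x[n-1] from noisy observations.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
d = 0.5 * x + 0.3 * np.roll(x, 1) + 0.01 * rng.standard_normal(2000)
_, _, w = rls_filter(x, d)
print(np.round(w, 3))                      # ~ [0.5, 0.3, 0, 0]
```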

Keywords: OFDM, impulsive noise, SSRLS, BER

Procedia PDF Downloads 440
3551 Protein Tertiary Structure Prediction by a Multiobjective Optimization and Neural Network Approach

Authors: Alexandre Barbosa de Almeida, Telma Woerle de Lima Soares

Abstract:

Protein structure prediction is a challenging task in the bioinformatics field. The biological function of a protein relies largely on the shape of its three-dimensional conformational structure, yet less than 1% of all known proteins have had their structure solved. This work proposes a deep learning model to address this problem, attempting to predict some aspects of protein conformations. Through a process of multiobjective dominance, a recurrent neural network was trained to abstract the particular bias of each individual multiobjective algorithm, generating a heuristic useful for predicting relevant aspects of the three-dimensional conformation formation process known as protein folding.

Keywords: Ab initio heuristic modeling, multiobjective optimization, protein structure prediction, recurrent neural network

Procedia PDF Downloads 193
3550 A Mechanical Diagnosis Method Based on Vibration Fault Signal Down-Sampling and the Improved One-Dimensional Convolutional Neural Network

Authors: Bowei Yuan, Shi Li, Liuyang Song, Huaqing Wang, Lingli Cui

Abstract:

Convolutional neural networks (CNNs) have received extensive attention in the field of fault diagnosis, and many fault diagnosis methods use them for fault type identification. However, when the amount of raw data collected by sensors is massive, the neural network must perform a time-consuming classification task. In this paper, a mechanical fault diagnosis method based on vibration signal down-sampling and an improved one-dimensional convolutional neural network is proposed. Through robust principal component analysis, the low-rank feature matrix of a large amount of raw data can be separated, and down-sampling is then applied to reduce the subsequent computation. In the improved one-dimensional CNN, a smaller convolution kernel is used to reduce the number of parameters and the computational complexity, and regularization is introduced before the fully connected layer to prevent overfitting. In addition, the multi-connected layers generalize classification results better without cumbersome parameter adjustments. The effectiveness of the method is verified on signals monitored from a centrifugal pump test bench, with an average test accuracy above 98%. Compared with traditional deep belief network (DBN) and support vector machine (SVM) methods, this method performs better.
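
A minimal PyTorch sketch of the kind of small-kernel 1D CNN with regularization described above; the channel counts, 1024-sample input window, and four fault classes are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class Fault1DCNN(nn.Module):
    """Small-kernel 1D CNN for vibration-signal fault classification."""
    def __init__(self, n_classes=4, in_len=1024):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1),  # small kernels keep parameters low
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),                             # regularization before dense layers
            nn.Linear(32 * (in_len // 4), 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):       # x: (batch, 1, in_len) down-sampled vibration windows
        return self.classifier(self.features(x))

logits = Fault1DCNN()(torch.randn(8, 1, 1024))  # -> shape (8, 4)
```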

Keywords: fault diagnosis, vibration signal down-sampling, 1D-CNN

Procedia PDF Downloads 119
3549 Metropolis-Hastings Sampling Approach for High Dimensional Testing Methods of Autonomous Vehicles

Authors: Nacer Eddine Chelbi, Ayet Bagane, Annie Saleh, Claude Sauvageau, Denis Gingras

Abstract:

As recently stated by the National Highway Traffic Safety Administration (NHTSA), demonstrating the expected performance of a highly automated vehicle system requires test approaches that combine simulation, test track, and on-road testing. In this paper, we propose a new validation method for autonomous vehicles involving on-road tests (Field Operational Tests), test track (Test Matrix), and simulation (Worst Case Scenarios). We concentrate on the simulation aspects; in particular, we extend recent work based on Importance Sampling by using a Metropolis-Hastings (MH) algorithm to sample data collected from the Safety Pilot Model Deployment (SPMD) in lane-change scenarios. Our proposed MH sampling method is compared to the Importance Sampling method, which does not perform well in high-dimensional problems. The aim of this study is to obtain a sampler that can be applied to high-dimensional simulation problems in order to reduce and optimize the number of test scenarios necessary for the validation and certification of autonomous vehicles.
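
A minimal random-walk Metropolis-Hastings sketch of the sampling idea described above; the toy Gaussian target stands in for a density fitted to SPMD lane-change data, and the step size and proposal are illustrative assumptions.

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_steps=10000, step=0.5, rng=None):
    """Random-walk Metropolis-Hastings: draw samples from an unnormalized
    log-density, which remains workable in high dimensions where importance
    sampling tends to degenerate."""
    rng = rng or np.random.default_rng(0)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    samples = np.empty((n_steps, x.size))
    logp = log_target(x)
    for i in range(n_steps):
        proposal = x + step * rng.standard_normal(x.size)  # symmetric proposal
        logp_prop = log_target(proposal)
        if np.log(rng.random()) < logp_prop - logp:        # accept/reject step
            x, logp = proposal, logp_prop
        samples[i] = x
    return samples

# Toy target: standard bivariate Gaussian (stand-in for a fitted scenario density).
draws = metropolis_hastings(lambda v: -0.5 * v @ v, x0=[0.0, 0.0])
print(draws.mean(axis=0), draws.std(axis=0))   # ~ [0, 0], ~ [1, 1]
```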

Keywords: automated driving, autonomous emergency braking (AEB), autonomous vehicles, certification, evaluation, importance sampling, Metropolis-Hastings sampling, tests

Procedia PDF Downloads 273
3548 Employing Bayesian Artificial Neural Network for Evaluation of Cold Rolling Force

Authors: P. Kooche Baghy, S. Eskandari, E. Javanmard

Abstract:

A neural network has been used in this study as a predictive tool for cold rolling force: the average force imposed on the rollers is taken as the sole input, and five parameters pertaining to it are taken as outputs. A feed-forward multilayer perceptron network was selected, trained with a Bayesian algorithm based on feed-forward back-propagation, chosen because of the noisy data. Of the 585 tests in total, 470 were used for network training, and the remaining 115 served as assessment criteria. By running the MATLAB software 30 times, a mean error of 3.84 percent was obtained as the measure of network learning. This error is acceptable when compared with other approaches, such as numerical and empirical methods.
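
A minimal scikit-learn sketch with the same 470/115 split; the data here are synthetic, and the L2 penalty (alpha) is only a stand-in for the Bayesian regularization used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
force = rng.uniform(0.0, 1.0, size=(585, 1))         # lone input: average roll force
params = np.hstack([force**k for k in range(1, 6)])  # five synthetic output parameters
params += 0.01 * rng.standard_normal(params.shape)   # measurement noise

X_train, X_test, y_train, y_test = train_test_split(
    force, params, train_size=470, test_size=115, random_state=0)

# alpha (L2 penalty) stands in for the Bayesian regularization of the original work.
net = MLPRegressor(hidden_layer_sizes=(10,), alpha=1e-3, max_iter=5000,
                   random_state=0).fit(X_train, y_train)
pred = net.predict(X_test)
mean_err = 100 * np.mean(np.abs(pred - y_test) / (np.abs(y_test) + 1e-9))
print(f"mean relative error: {mean_err:.2f}%")
```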

Keywords: artificial neural network, Bayesian, cold rolling, force evaluation

Procedia PDF Downloads 424
3547 Fabrication of Optical Tissue Phantoms Simulating Human Skin and Their Application

Authors: Jihoon Park, Sungkon Yu, Byungjo Jung

Abstract:

Although various optical tissue phantoms (OTPs) simulating human skin have been actively studied, their completeness is unclear because skin tissue has intricate optical properties and a complicated structure that complicate optical simulation. In this study, we designed a multilayer OTP mimicking the skin structure and fabricated OTP models simulating skin blood vessels and skin pigmentation, which are useful in the biomedical optics field. The OTPs were characterized in terms of their optical properties and cross-sectional structure and analyzed with various optical tools, such as a laser speckle imaging system, OCT, and a digital microscope, to demonstrate their practicality. The measured optical properties were within 5% error, and the thickness of each layer was uniform within 10% error on the micrometer scale.

Keywords: blood vessel, optical tissue phantom, optical property, skin tissue, pigmentation

Procedia PDF Downloads 437
3546 Albanian Students’ Errors in Spoken and Written English and the Role of Error Correction in Assessment and Self-Assessment

Authors: Arburim Iseni, Afrim Aliti, Nagri Rexhepi

Abstract:

This paper focuses on an important aspect of student linguistic errors. It aims to explore the nature of the language errors and mistakes of Albanian intermediate-level (B1) students and attempts to trace their possible sources or causes by classifying the error samples into interlingual and intralingual errors. The hypothesis that interlingual errors may be determined or induced by native language influence seems to be confirmed by the significant number of such errors found among Albanian EFL students in the Study Program of English Language and Literature at the State University of Tetova. The findings of this study reveal that L1 interference first, and then ignorance of English grammar rules, constitute the main sources of errors, although carelessness cannot be ruled out. Although the study was conducted with 300 students at intermediate (B1) level, this hypothesis would need to be confirmed by further research, perhaps with a larger number of students at different levels, in order to draw steadier and more accurate conclusions. The questionnaires were analyzed using quantitative and qualitative research methods. The study also collected written samples on different topics from the students, which were then distributed with comments to the students and to university teachers. The questionnaires were designed to gather information from 300 students and 48 EFL teachers, all of whom study or teach in the Study Program of English Language and Literature at the State University of Tetova. From the analyzed written samples and face-to-face interviews, we gained useful insights into some important aspects of students' error-making and error correction. These different research methodologies were combined to make the research holistic, and the findings of the questionnaires helped us arrive at steadier solutions for minimizing the potential gap between students and teachers.

Keywords: L1 and L2, linguistics, applied linguistics, SLA, Albanian EFL students and teachers, errors and mistakes, students' assessment and self-assessment

Procedia PDF Downloads 475
3545 3-D Visualization and Optimization for SISO Linear Systems Using Parametrization of Two-Stage Compensator Design

Authors: Kazuyoshi Mori, Keisuke Hashimoto

Abstract:

In this paper, we consider two-stage compensator designs for SISO plants. To investigate the characteristics of two-stage compensator designs, which have not yet been well studied, we implement a three-dimensional visualization system for output signals and an optimization system for SISO plants, based on the parametrization of stabilizing controllers obtained from the two-stage compensator design. The system runs on Mathematica using “Three Dimensional Surface Plots,” so that the visualization can be manipulated interactively by users. We use the discrete-time LTI system model; even so, because ours is a factorization approach, the results can be applied to many linear models.

Keywords: linear systems, visualization, optimization, Mathematica

Procedia PDF Downloads 284
3544 Partial Least Square Regression for High-Dimensional and Highly Correlated Data

Authors: Mohammed Abdullah Alshahrani

Abstract:

This research investigates the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional, correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where copy number alterations (CNA) across thousands of genomic regions are recorded from cancer patients. PLS is a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from the original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying the interpretations of the three main PLS algorithms and exploring the unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to the challenge of interpreting the predictor weights associated with PLS. Sparse estimation of the predictor weights is employed using a penalty function that combines a lasso penalty for sparsity with a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse, grouped weight estimates that aid interpretation and prediction in genomic data analysis. High-dimensional scenarios, where predictors outnumber observations, are common in regression applications, and ordinary least squares (OLS) regression, the standard method, performs inadequately on high-dimensional, highly correlated data. Copy number alterations in key genes have been linked to disease phenotypes, highlighting the importance of accurately classifying gene expression data in bioinformatics and biology using regularized methods such as PLS for regression and classification.
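
A minimal sketch of standard PLS regression on a predictors-outnumber-observations problem, using scikit-learn; this is the plain method, not the sparse lasso-plus-Cauchy penalized variant proposed above, and the data are synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n, p = 40, 500                          # far more predictors than observations
latent = rng.standard_normal((n, 3))    # three underlying latent factors
X = latent @ rng.standard_normal((3, p)) + 0.1 * rng.standard_normal((n, p))
y = latent @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(n)

# PLS compresses the 500 correlated predictors into 3 latent components.
pls = PLSRegression(n_components=3).fit(X, y)
print("R^2 on training data:", round(pls.score(X, y), 3))
print("component scores shape:", pls.x_scores_.shape)   # (40, 3)
```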

Keywords: partial least square regression, genetics data, negative shrinkage factors, high-dimensional data, highly correlated data

Procedia PDF Downloads 36
3543 Effect of Core Stability Exercises on Trunk Proprioception in Healthy Adult Individuals

Authors: Omaima E. S. Mohammed, Amira A. A. Abdallah, Amal A. M. El Borady

Abstract:

Background: Core stability training has recently attracted attention as a means of improving muscle performance. Purpose: This study investigated the effect of beginners' core stability exercises on trunk active repositioning error at 30° and 60° of trunk flexion. Methods: Forty healthy males participated in the study. They were divided into two equal groups: experimental “group I” and control “group II”. Their mean age, weight, and height were 19.35±1.11 vs. 20.45±1.64 years, 70.15±6.44 vs. 72.45±6.91 kg, and 174.7±7.02 vs. 176.3±7.24 cm for group I vs. group II, respectively. Data were collected using the Biodex isokinetic system at an angular velocity of 60°/s. The participants were tested twice, before and after a 6-week period during which group I performed a core stability training program. Results: The mixed 3-way ANOVA revealed significant increases (p<0.05) in the absolute error (AE) at 30° compared with 60° flexion in the pre-test condition of groups I and II and in the post-test condition of group II. Moreover, there were significant decreases (p<0.05) in the AE in the post-test condition compared with the pre-test in group I at both 30° and 60° flexion, with no significant differences for group II. Finally, there were significant decreases (p<0.05) in the AE in group I compared with group II in the post-test condition at 30° and 60° flexion, with no significant differences in the pre-test condition. Interpretation/Conclusion: The improvement in trunk proprioception, indicated by the decrease in active repositioning error in the experimental group, supports including core stability training in exercise programs that aim to improve trunk proprioception.

Keywords: core stability, isokinetic, trunk proprioception, biomechanics

Procedia PDF Downloads 463
3542 Estimation of Stress Intensity Factors from near Crack Tip Field

Authors: Zhuang He, Andrei Kotousov

Abstract:

All current experimental methods for determining stress intensity factors are based on the assumption that the state of stress near the crack tip is plane stress. Therefore, these methods rely on strain and displacement measurements made outside the near-crack-tip region affected by three-dimensional effects or by the process zone. In this paper, we develop and validate an experimental procedure for evaluating stress intensity factors from measurements of the out-of-plane displacements in the surface area controlled by 3D effects. The evaluation of stress intensity factors is possible when the process zone is sufficiently small and the displacement field generated by the 3D effects is fully encapsulated by the K-dominance region.

Keywords: digital image correlation, stress intensity factors, three-dimensional effects, transverse displacement

Procedia PDF Downloads 598
3541 Behavior Consistency Analysis for Workflow Nets Based on Branching Processes

Authors: Wang Mimi, Jiang Changjun, Liu Guanjun, Fang Xianwen

Abstract:

Loop structures often appear in business process modeling, and analyzing the consistency of the corresponding workflow net models containing loop structures is a problem: existing behavior consistency methods cannot effectively analyze process models with loop structures. In this paper, by analyzing five kinds of behavior relations between transitions, a three-dimensional figure and a two-dimensional behavior relation matrix are proposed. Based on these, an analysis method for the behavior consistency of business processes based on Petri net branching processes is proposed. Finally, an example is given that shows the method is effective.

Keywords: workflow net, behavior consistency measures, loop, branching process

Procedia PDF Downloads 371
3540 Detecting Logical Errors in Haskell

Authors: Vanessa Vasconcelos, Mariza A. S. Bigonha

Abstract:

This paper presents HaskellFL, a tool that uses fault localization techniques to locate logical errors in Haskell code. The Haskell subset used in this work is sufficiently expressive for those studying functional programming to get immediate help debugging their code and to answer questions about key concepts of the functional paradigm. HaskellFL was tested against functional programming assignments submitted by students enrolled in the functional programming class at the Federal University of Minas Gerais and against exercises from the Exercism Haskell track that are publicly available on GitHub. The EXAM score was chosen to evaluate the tool's effectiveness, and the results showed that HaskellFL reduced the effort needed to locate an error in all tested scenarios. The results also showed that the Ochiai method was more effective than Tarantula.
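
The two suspiciousness metrics compared above have standard closed forms; a minimal sketch computing them from per-line coverage counts follows (the counts in the example are invented).

```python
import math

def ochiai(ef, nf, ep, np_):
    """Ochiai suspiciousness: ef / sqrt((ef+nf) * (ef+ep)).
    ef/ep: failing/passing tests that execute the line; nf/np_: those that do not."""
    denom = math.sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

def tarantula(ef, nf, ep, np_):
    """Tarantula suspiciousness: %failed / (%failed + %passed)."""
    total_f, total_p = ef + nf, ep + np_
    fail_ratio = ef / total_f if total_f else 0.0
    pass_ratio = ep / total_p if total_p else 0.0
    s = fail_ratio + pass_ratio
    return fail_ratio / s if s else 0.0

# A line covered by 3 of 4 failing tests and 1 of 6 passing tests:
print(ochiai(3, 1, 1, 5))     # 0.75
print(tarantula(3, 1, 1, 5))  # ~0.818
```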

Keywords: debug, fault localization, functional programming, Haskell

Procedia PDF Downloads 284
3539 Storage Assignment Strategies to Reduce Manual Picking Errors with an Emphasis on an Ageing Workforce

Authors: Heiko Diefenbach, Christoph H. Glock

Abstract:

Order picking, i.e., the order-based retrieval of items in a warehouse, is an important time- and cost-intensive process in many logistics systems. Despite the ongoing trend toward automation, most order picking systems are still manual picker-to-parts systems, in which human pickers walk through the warehouse to collect ordered items. Human work in warehouses is not free from errors: order pickers may at times pick the wrong item or the incorrect number of items. Errors can cause additional costs and significant correction effort, and age might increase a person's likelihood of making mistakes. Hence, the negative impact of picking errors may grow with the aging workforce currently witnessed in many regions globally. A significant amount of research has focused on making order picking systems more efficient. Among other factors, storage assignment, i.e., the assignment of items to storage locations (e.g., shelves) within the warehouse, has been subject to optimization, usually with the objective of assigning items to storage locations such that order picking times are minimized. Surprisingly, there is a lack of research concerned with picking errors and approaches for preventing them. This paper hypothesizes that the storage assignment of items can affect the probability of picking errors. For example, storing similar-looking items apart from one another might reduce confusion, and storing items that are hard to count, or that require a lot of counting, at easy-to-access and easy-to-comprehend shelf heights might reduce the probability of picking the wrong number of items. Based on this hypothesis, the paper discusses how to incorporate error-prevention measures into mathematical models for storage assignment optimization. Various approaches, with their respective benefits and shortcomings, are presented and mathematically modeled, as sketched below. To investigate the newly developed models further, they are compared to conventional storage assignment strategies in a computational study. The study specifically investigates how the importance of error prevention increases as pickers become more prone to errors, due to age, for example. The results suggest that considering error-prevention measures in storage assignment can reduce error probabilities with only minor decreases in picking efficiency. The results may be especially relevant for an aging workforce.
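
One simple way to cast such a model, sketched here under invented costs: treat storage assignment as a one-to-one assignment problem whose cost blends a walking-time proxy with a penalty for the slot-dependent picking-error probability. This is an illustration of the modeling idea, not the paper's formulations.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
n_items = n_slots = 6
walk_cost = rng.uniform(1, 10, size=(n_items, n_slots))    # pick-time proxy per slot
err_prob = rng.uniform(0.0, 0.1, size=(n_items, n_slots))  # pick-error risk per slot
# e.g. hard-to-count items placed at awkward shelf heights get higher err_prob

alpha = 50.0  # cost attributed to one picking error (correction effort)
cost = walk_cost + alpha * err_prob
rows, cols = linear_sum_assignment(cost)                   # optimal one-to-one assignment
print("item -> slot:", dict(zip(rows.tolist(), cols.tolist())))
print("total cost:", round(cost[rows, cols].sum(), 2))
```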

Keywords: aging workforce, error prevention, order picking, storage assignment

Procedia PDF Downloads 191
3538 Taguchi-Based Optimization of Surface Roughness and Dimensional Accuracy in Wire EDM Process with S7 Heat Treated Steel

Authors: Joseph C. Chen, Joshua Cox

Abstract:

This research uses the Taguchi method to reduce the surface roughness and improve the dimensional accuracy of parts machined by Wire Electrical Discharge Machining (EDM) from S7 heat-treated steel. Due to its high impact toughness, the material is a candidate for a wide variety of tooling applications that require high dimensional precision and a desired surface roughness. This paper demonstrates that the Taguchi Parameter Design methodology can successfully optimize both dimensional accuracy and surface roughness by investigating seven controllable wire-EDM parameters: pulse on time (ON), pulse off time (OFF), servo voltage (SV), voltage (V), servo feed (SF), wire tension (WT), and wire speed (WS). The temperature of the water in the wire EDM process is treated as the noise factor. Experimental design and analysis based on an L18 Taguchi orthogonal array are conducted. The Taguchi-based system enables the wire EDM process to produce (1) high-precision parts with an average dimension of 0.6601 inches against a desired dimension of 0.6600 inches, and (2) a surface roughness of 1.7322 microns, significantly improved from 2.8160 microns.
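
Taguchi analysis ranks factor settings by signal-to-noise (S/N) ratio; a minimal sketch of the two standard S/N formulas relevant here (smaller-the-better for roughness, nominal-the-best for the 0.6600 in dimension) follows, with invented replicate data.

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi S/N ratio for a response to minimize (e.g. surface roughness):
    S/N = -10 log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

def sn_nominal_the_best(y):
    """Taguchi S/N ratio for hitting a target dimension:
    S/N = 10 log10(ybar^2 / s^2)."""
    y = np.asarray(y, dtype=float)
    return 10.0 * np.log10(y.mean()**2 / y.var(ddof=1))

roughness = [2.7, 2.9, 2.8]           # microns, replicated runs (illustrative)
dimension = [0.6601, 0.6599, 0.6602]  # inches, replicated runs (illustrative)
print(sn_smaller_the_better(roughness))  # higher S/N = better (lower roughness)
print(sn_nominal_the_best(dimension))    # higher S/N = tighter around the mean
```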

Keywords: Taguchi Parameter Design, surface roughness, Wire EDM, dimensional accuracy

Procedia PDF Downloads 361
3537 A Study on the Solutions of the 2-Dimensional and Fourth-Order Partial Differential Equations

Authors: O. Acan, Y. Keskin

Abstract:

In this study, we carry out a comparative study of the reduced differential transform method, the Adomian decomposition method, the variational iteration method, and the homotopy analysis method, all of which are used in many fields of engineering. This is achieved by handling a class of two-dimensional, fourth-order partial differential equations known as the Kuramoto-Sivashinsky equations. Three numerical examples are also carried out to validate the four methods and demonstrate their efficiency. Furthermore, it is shown that the reduced differential transform method has an advantage over the other methods: it is very effective and simple and can be applied to nonlinear problems arising in engineering.
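
For orientation, one standard form of the two-dimensional, fourth-order Kuramoto-Sivashinsky equation treated by such methods is the following (the exact formulation and coefficients in the paper may differ):

```latex
% One common form of the 2D, fourth-order Kuramoto--Sivashinsky equation
u_t + \tfrac{1}{2}\lvert \nabla u \rvert^{2} + \nabla^{2}u + \nabla^{4}u = 0,
\qquad \nabla^{2}u = u_{xx} + u_{yy}, \quad \nabla^{4}u = \nabla^{2}\!\left(\nabla^{2}u\right)
```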

Keywords: reduced differential transform method, adomian decomposition method, variational iteration method, homotopy analysis method

Procedia PDF Downloads 418
3536 Usability Testing on Information Design through Single-Lens Wearable Device

Authors: Jae-Hyun Choi, Sung-Soo Bae, Sangyoung Yoon, Hong-Ku Yun, Jiyoung Kwahk

Abstract:

This study was conducted to investigate the effect of ocular dominance on recognition performance using a single-lens smart display designed for cycling. A total of 36 bicycle riders who cycle regularly were recruited and participated in the experiment. For safety reasons, the participants performed the tasks while riding a bicycle on a stationary stand. Independent variables of interest included ocular dominance, bike usage, age group, and information layout. Recognition time (the time required to identify specific information, measured with an eye tracker), error rate (a false answer or failure to identify the information within 5 seconds), and user preference scores were measured, and statistical tests were conducted to identify significant effects. Recognition time and error ratio differed significantly by ocular dominance, while the preference score did not. Recognition time was faster when the single-lens see-through display was worn on the dominant eye (average 1.12 s) than on the non-dominant eye (average 1.38 s). The error ratio of the information recognition task was significantly lower when the see-through display was worn on the dominant eye (average 4.86%) than on the non-dominant eye (average 14.04%). The interaction effect of ocular dominance and age group was significant for both recognition time and error ratio: the recognition time of users in their 40s was significantly longer than that of the other age groups when the display was placed on the non-dominant eye, while no difference was observed on the dominant eye, and the error ratio showed the same pattern. Although no difference was observed for the main effects of ocular dominance and bike usage, their interaction was significant with respect to preference score: the preference scores of daily bike users were higher when the display was placed on the dominant eye, whereas participants who ride for leisure showed the opposite pattern. Overall, wearing a see-through display on the dominant eye was more effective and efficient than wearing it on the non-dominant eye, although user preference was not affected by ocular dominance. Wearing the display on the dominant eye is recommended, since it helps the user recognize the presented information faster and more accurately, and is therefore safer, even if the user may not notice the difference.

Keywords: eye tracking, information recognition, ocular dominance, smart headwear, wearable device

Procedia PDF Downloads 263
3535 Estimation of a Finite Population Mean under Random Non-Response Using Improved Nadaraya and Watson Kernel Weights

Authors: Nelson Bii, Christopher Ouma, John Odhiambo

Abstract:

Non-response is a potential source of error in sample surveys, introducing bias and large variance into the estimation of finite population parameters. Regression models have been recognized as one technique for reducing the bias and variance due to random non-response using auxiliary data. In this study, random non-response is assumed to occur in the survey variable in the second stage of cluster sampling, with full auxiliary information assumed to be available throughout. Auxiliary information is used at the estimation stage via a regression model to address the problem of random non-response; in particular, it is used via an improved Nadaraya-Watson kernel regression technique to compensate for the non-response. The asymptotic bias and mean squared error of the proposed estimator are derived. In addition, a simulation study indicates that the proposed estimator has smaller bias and smaller mean squared error than existing estimators of the finite population mean, as well as tighter confidence interval lengths at a 95% coverage rate. The results obtained in this study are useful, for instance, in choosing efficient estimators of the finite population mean in demographic sample surveys.
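
A minimal sketch of the classical Nadaraya-Watson estimator that the improved weights build on, applied to a toy survey variable with simulated random non-response; the Gaussian kernel, bandwidth, and data are assumptions.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, h=0.3):
    """Classical Nadaraya-Watson estimate m(x) = sum K_h(x - x_i) y_i / sum K_h(x - x_i)
    with a Gaussian kernel; the paper's improved weights modify this baseline."""
    d = (x_query[:, None] - x_train[None, :]) / h
    K = np.exp(-0.5 * d**2)                   # Gaussian kernel (constants cancel)
    return (K @ y_train) / K.sum(axis=1)

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)                    # auxiliary variable
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(200)  # survey variable
respond = rng.random(200) > 0.2               # ~20% random non-response in y
xq = np.linspace(0, 1, 5)
print(nadaraya_watson(x[respond], y[respond], xq))  # estimate from respondents only
```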

Keywords: mean squared error, random non-response, two-stage cluster sampling, confidence interval lengths

Procedia PDF Downloads 124
3534 Enhanced Bit Error Rate in Visible Light Communication: A New LED Hexagonal Array Distribution

Authors: Karim Matter, Heba Fayed, Ahmed Abd-Elaziz, Moustafa Hussein

Abstract:

Due to the exponential growth of mobile devices and wireless services, demand for radio frequency spectrum has increased enormously. The presence of several frequencies causes interference between cells, which must be minimized to obtain a lower Bit Error Rate (BER). For this reason, visible light communication (VLC) is of great interest. This paper proposes a VLC system that decreases the BER by applying a new LED distribution with a hexagonal shape, using a Frequency Reuse (FR) concept to mitigate the interference between the frequencies reused inside the hexagonal cell. The BER is measured in two scenarios, Line of Sight (LoS) and Non-Line of Sight (NLoS), for each technique used. The resulting BER values in the proposed model for Soft Frequency Reuse (SFR) in the LoS case, at signal-to-noise ratios (SNR) of 4, 8, and 10 dB, are 3.6×10⁻⁶, 6.03×10⁻¹³, and 2.66×10⁻¹⁸, respectively.

Keywords: visible light communication (VLC), field of view (FoV), hexagonal array, frequency reuse

Procedia PDF Downloads 149
3533 Research and Application of Multi-Scale Three Dimensional Plant Modeling

Authors: Weiliang Wen, Xinyu Guo, Ying Zhang, Jianjun Du, Boxiang Xiao

Abstract:

Reconstructing and analyzing three-dimensional (3D) models from in situ measured data is important for a number of research areas and applications in plant science, including plant phenotyping, functional-structural plant modeling (FSPM), plant germplasm resource protection, and the popularization of agricultural technology. Plant modeling spans many scales, from the microscopic to the macroscopic: cell, tissue, organ, plant, and canopy. The techniques currently used for data capture, feature analysis, and 3D reconstruction differ considerably across these scales. In this context, morphological data acquisition and the 3D analysis and modeling of plants at different scales are introduced systematically. The data capture equipment commonly used at each scale is introduced, and the open issues and difficulties of each scale are described. Examples are given, such as micron-scale phenotyping quantification and 3D microstructure reconstruction of vascular bundles within maize stalks based on micro-CT scanning; 3D reconstruction of leaf surfaces and feature extraction from point clouds acquired with a 3D handheld scanner; and plant modeling by combining parameter-driven 3D organ templates. Several applications of the resulting 3D models and analysis results are also introduced: a 3D maize canopy was constructed and the light distribution within it simulated, for use in designing ideal plant types; a grape tree model was constructed from 3D digitizer and point cloud data and used to produce scientific content for the 11th International Conference on Grapevine Breeding and Genetics; and, using plant tissue models, Google Glass was used to look around visually inside a plant to understand its internal structure. With the development of information technology, 3D data acquisition and data processing techniques will play an ever greater role in plant science.

Keywords: plant, three-dimensional modeling, multi-scale, plant phenotyping, three-dimensional data acquisition

Procedia PDF Downloads 267
3532 Gas Sensor Based on a One-Dimensional Nano-Grating Au/Co/Au/TiO2 Magneto-Plasmonic Structure

Authors: S. M. Hamidi, M. Afsharnia

Abstract:

Gas sensors based on magneto-plasmonic (MP) structures have attracted much attention due to their high signal-to-noise ratio. In these sensors, the plasmonic and magneto-optical (MO) properties of the MP structure become interrelated through the surface plasmon resonance (SPR) of the metallic medium. This interconnection can modify the sensor response and enhance the signal-to-noise ratio. So far, the sensing features of multilayered structures made of noble and ferromagnetic metals, such as Au/Co/Au MP multilayers with a TiO2 sensing layer, have been extensively studied, but their SPR-assisted sensor response requires the Kretschmann configuration. Here, we present a systematic study of a new MP structure based on a one-dimensional nano-grating Au/Co/Au/TiO2 multilayer, for use as an inexpensive and easy-to-use gas sensor.

Keywords: magneto-plasmonic structures, gas sensor, nano-grating

Procedia PDF Downloads 436
3531 Using Historical Data for Stock Prediction

Authors: Sofia Stoica

Abstract:

In this paper, we use historical data to predict the stock price of a tech company. To this end, we use a dataset consisting of the stock prices over the past five years of ten major tech companies: Adobe, Amazon, Apple, Facebook, Google, Microsoft, Netflix, Oracle, Salesforce, and Tesla. We experimented with a variety of models, including a linear regression model, K-Nearest Neighbors (KNN), and a sequential neural network, and with algorithms such as Multiplicative Weight Update and AdaBoost. We found that the sequential neural network performed best, with a testing error of 0.18%. Interestingly, the linear model performed second best, with a testing error of 0.73%. These results show that historical data alone is enough to obtain high accuracy, and that a simple algorithm like linear regression performs similarly to more sophisticated models while taking less time and fewer resources to implement.
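
A minimal sketch of the simplest of these baselines, linear regression on lagged closing prices with a chronological train/test split; the synthetic price series, lag count, and split ratio are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def lagged_features(prices, n_lags=5):
    """Predict the next close from the previous n_lags closes."""
    X = np.column_stack([prices[i:len(prices) - n_lags + i] for i in range(n_lags)])
    y = prices[n_lags:]
    return X, y

rng = np.random.default_rng(0)
# ~5 years of synthetic daily closes (geometric random walk), stand-in for real data.
prices = 100 * np.exp(np.cumsum(0.001 + 0.02 * rng.standard_normal(1260)))

X, y = lagged_features(prices)
split = int(0.8 * len(y))                    # chronological split: no shuffling
model = LinearRegression().fit(X[:split], y[:split])
pred = model.predict(X[split:])
mape = 100 * np.mean(np.abs(pred - y[split:]) / y[split:])
print(f"test MAPE: {mape:.2f}%")
```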

Keywords: finance, machine learning, opening price, stock market

Procedia PDF Downloads 167
3530 Performance Assessment of GSO Satellites before and after Enhancing the Pointing Effect

Authors: Amr Emam, Joseph Victor, Mohamed Abd Elghany

Abstract:

The paper presents the effect of orbit inclination on the pointing error of the satellite antenna and, consequently, on its footprint on Earth for a typical Ku-band payload system. The performance assessment is carried out both theoretically and by means of practical measurements, taking into account all additional sources of pointing error, such as East-West station keeping, orbit eccentricity, and actual attitude control performance. The implementation and computation of the sinusoidal biases in satellite roll and pitch used to compensate for the antenna pointing error are studied and evaluated before and after the pointing corrections are performed. A method for evaluating the performance of the implemented biases is introduced, based on measuring the satellite received level from a tracking 11 m antenna and a fixed 4.8 m transmitting antenna before and after the implementation of the pointing corrections.

Keywords: satellite, inclined orbit, pointing errors, coverage optimization

Procedia PDF Downloads 380
3529 Ambiguity Resolution for Ground-Based Pulse Doppler Radars Using Multiple Medium Pulse Repetition Frequencies

Authors: Khue Nguyen Dinh, Loi Nguyen Van, Thanh Nguyen Nhu

Abstract:

In this paper, we propose an adaptive method to resolve ambiguities, together with a ghost target removal process, to extract the targets detected by a ground-based pulse-Doppler radar using medium pulse repetition frequency (PRF) waveforms. The ambiguity resolution method is an adaptive implementation of the coincidence algorithm, applied to a two-dimensional (2D) range-velocity matrix to resolve range and velocity ambiguities simultaneously, with a proposed clustering filter to enhance the anti-error ability of the system. We consider the scenario of multiple-target environments. The ghost target removal process, based on the power after Doppler processing, is proposed to mitigate ghost detections and enhance the performance of ground-based radars using a short PRF schedule in multiple-target environments. Simulation results on a ground-based pulsed Doppler radar model are presented to show the effectiveness of the proposed approach.
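
A minimal sketch of the coincidence idea in one dimension (range only): each PRF's ambiguous measurement is unfolded by every multiple of its unambiguous interval, and the candidate on which all PRFs agree is kept. The PRF values, maximum range, and tolerance are illustrative assumptions; the paper's adaptive 2D range-velocity version additionally applies a clustering filter.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def resolve_range(apparent_ranges_m, prfs_hz, r_max=150e3, tol=50.0):
    """Coincidence method: unfold each PRF's ambiguous range by all multiples of
    its unambiguous interval and keep the candidate on which all PRFs agree."""
    candidate_sets = []
    for r_app, prf in zip(apparent_ranges_m, prfs_hz):
        r_unamb = C / (2 * prf)                      # unambiguous range interval
        folds = np.arange(0, int(r_max / r_unamb) + 1)
        candidate_sets.append(r_app + folds * r_unamb)
    best, best_spread = None, np.inf
    for c in candidate_sets[0]:                      # seed with first PRF's candidates
        picks = [s[np.argmin(np.abs(s - c))] for s in candidate_sets]
        spread = max(picks) - min(picks)
        if spread < best_spread and spread < tol:    # all PRFs coincide within tol
            best, best_spread = float(np.mean(picks)), spread
    return best

# Target at 97 km seen through three medium PRFs (apparent = true mod r_unamb):
prfs = [8e3, 9e3, 11e3]
apparent = [97e3 % (C / (2 * p)) for p in prfs]
print(resolve_range(apparent, prfs))                 # ~97000.0
```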

Keywords: ambiguity resolution, coincidence algorithm, medium PRF, ghosting removal

Procedia PDF Downloads 138
3528 Modern Imputation Technique for Missing Data in Linear Functional Relationship Model

Authors: Adilah Abdul Ghapor, Yong Zulina Zubairi, Rahmatullah Imon

Abstract:

The missing value problem is common in statistics and has been of interest for years. This article considers two modern techniques for handling missing data in the linear functional relationship model (LFRM), namely the Expectation-Maximization (EM) algorithm and the Expectation-Maximization with Bootstrapping (EMB) algorithm, using three performance indicators: the mean absolute error (MAE), the root mean square error (RMSE), and the estimated bias (EB). In this study, we applied these methods to imputing missing values in the LFRM. The results of the simulation study suggest that the EMB algorithm performs much better than the EM algorithm in both models. We also illustrate the applicability of the approach on a real data set.
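
A minimal sketch of EM-style imputation for multivariate-normal data, the core idea behind the EM approach compared above: the E-step fills missing entries with their conditional means given the observed entries, and the M-step re-estimates the mean and covariance. This simplified version omits the conditional-covariance correction in the M-step, and the data are synthetic, not the LFRM setting itself.

```python
import numpy as np

def em_impute(X, n_iter=50):
    """Simplified EM imputation for multivariate-normal data."""
    X = np.array(X, dtype=float)
    miss = np.isnan(X)
    X[miss] = np.nanmean(X, axis=0)[np.where(miss)[1]]   # start from column means
    for _ in range(n_iter):
        mu, S = X.mean(axis=0), np.cov(X, rowvar=False)  # M-step on completed data
        for i in range(X.shape[0]):
            m = miss[i]
            if m.any() and not m.all():
                o = ~m
                # E-step: conditional mean of the missing block given the observed block
                X[i, m] = mu[m] + S[np.ix_(m, o)] @ np.linalg.solve(
                    S[np.ix_(o, o)], X[i, o] - mu[o])
    return X

rng = np.random.default_rng(0)
Z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]], size=200)
Z[rng.random(200) < 0.2, 1] = np.nan          # ~20% values missing in column 1
print(np.round(em_impute(Z).mean(axis=0), 3))
```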

Keywords: expectation-maximization, expectation-maximization with bootstrapping, linear functional relationship model, performance indicators

Procedia PDF Downloads 381
3527 An Adaptive Cooperative Scheme for Reliability of Transmission Using STBC and CDD in Wireless Communications

Authors: Hyun-Jun Shin, Jae-Jeong Kim, Hyoung-Kyu Song

Abstract:

In broadcasting and cellular systems, a cooperative scheme is proposed to improve bit error rate performance. At present, the coverage of broadcasting systems coexists with the coverage of cellular systems, so each user in a cellular coverage area is frequently also within a broadcasting coverage area. The proposed cooperative scheme is derived from these shared areas: users receive signals from both the broadcasting base station and the cellular base station, and the scheme selects the cellular base station with the worse channel so that cooperation yields a better bit error rate. The performance of the proposed scheme is evaluated in a fading channel.

Keywords: cooperative communication, diversity, STBC, CDD, channel condition, broadcasting system, cellular system

Procedia PDF Downloads 494
3526 High Accuracy Analytic Approximation for Special Functions Applied to the Bessel Function J₀(x) and Its Zeros

Authors: Fernando Maass, Pablo Martin, Jorge Olivares

Abstract:

The Bessel function J₀(x) and its zeros are very important in electrodynamics and physics. In this work, a method for obtaining high-accuracy approximations is presented through an application to that function. In most applications of this function, the values of the zeros are very important. Analytic approximations have been obtained that are valid for all positive values of the variable x and that have high accuracy for the function as well as for its zeros. The approximation is determined by the simultaneous use of the power series and the asymptotic expansion. The structure of the approximation is a combination of two rational functions with elementary functions such as trigonometric functions and fractional powers. As in the Padé method, rational functions are used, but here they are combined with elementary functions such as fractional powers and hyperbolic or trigonometric functions. The reason is that the power series of the exact function is used together with the asymptotic expansion, which usually includes fractional powers, trigonometric functions, and other elementary functions; the approximation must be a bridge between both expansions, and this cannot be accomplished with rational functions alone. In the simplest approximation, using 4 parameters, the maximum absolute error is less than 0.006, at x ≈ 4.9. In this case, the maximum relative error for the zeros is less than 0.003, attained at the second zero, and this value decreases rapidly for the subsequent zeros. The relative errors of the maxima and minima of the function behave in the same way. Approximations with higher accuracy and more parameters will also be shown. All the approximations are valid for any positive value of x and can be calculated easily.
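
To see why a bridge between the two expansions is needed, the sketch below compares the leading asymptotic term of J₀(x) against scipy's reference implementation: the asymptotic form is accurate for large x but degrades toward small x, where the power series takes over. The grid and intervals are illustrative.

```python
import numpy as np
from scipy.special import j0

def j0_asymptotic(x):
    """Leading asymptotic term J0(x) ~ sqrt(2/(pi x)) cos(x - pi/4); the paper's
    quasirational forms bridge this with the power series near x = 0."""
    return np.sqrt(2.0 / (np.pi * x)) * np.cos(x - np.pi / 4)

x = np.linspace(1.0, 50.0, 2000)
err = np.abs(j0_asymptotic(x) - j0(x))
print(f"max |error| on [1, 50]:  {err.max():.4f}")            # worst near x = 1
print(f"max |error| on [10, 50]: {err[x >= 10].max():.6f}")   # small for large x
```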

Keywords: analytic approximations, asymptotic approximations, Bessel functions, quasirational approximations

Procedia PDF Downloads 239
3525 Strategy in Practice: Strategy Development, Strategic Error and Project Delivery

Authors: Nipun Agarwal, David Paul, Fareed Un Din

Abstract:

Strategy development and implementation are key to an organization's success in today's competitive marketplace. Many organizations develop excellent strategies but are unable to implement them. The difference between strategic goals and their implementation is called strategic error; strategic error occurs when an organization does not have structures in place to implement its strategy. Strategy is implemented through projects, and a project management method that provides both certainty and agility will help an organization become more competitive in implementing strategy. Numerous project management methods exist in theory and practice. In the past, projects mainly used the Waterfall method, which provides certainty in terms of budget, delivery date, and resourcing; it is now common practice to use Agile-based methods, which do not provide specific deadlines and budgets but offer agility in product design and project delivery, which is useful to companies. In some respects, the Waterfall and Agile methods are opposites. Executive management prefers agility in delivering projects, as the competitive landscape changes frequently, but also appreciates the certainty of being able to quantify budgets, deadlines, and resources, which is harder for an Agile-based method to provide. This paper attempts to develop a hybrid project management method that merges the Waterfall and Agile methods so as to retain the strengths of both approaches.

Keywords: strategy, project management, strategy implementation, agile

Procedia PDF Downloads 101
3524 Detailed Microzonation Studies around Denizli, Turkey

Authors: A. Aydin, E. Akyol, N. Soyatik

Abstract:

This paper presents a detailed seismic microzonation study of the Denizli city center, for which an area of 225 km² was selected. MASW (multichannel analysis of surface waves) and seismic refraction methods were used to generate one-dimensional shear wave velocity profiles at 250 locations and two-dimensional profiles at 60 locations. These shear wave velocities were used to estimate the equivalent shear wave velocity in the study area at intervals of 2 and 5 m, down to a depth of 60 m. The levels of equivalent shear wave velocity were then used to classify the soils of the study area. The results should be considered as components of urban planning and building design in Denizli, and their application and use should be required and enforced by the municipal authorities.
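
The equivalent (travel-time-averaged) shear wave velocity used for such classification has a standard closed form; a minimal sketch with an invented layer profile follows.

```python
import numpy as np

def equivalent_vs(thicknesses_m, velocities_ms):
    """Travel-time-averaged (equivalent) shear wave velocity over a layer stack,
    the Vs30-style quantity used to classify site soils:
    Vs_eq = sum(d_i) / sum(d_i / v_i)."""
    d = np.asarray(thicknesses_m, dtype=float)
    v = np.asarray(velocities_ms, dtype=float)
    return d.sum() / np.sum(d / v)

# Illustrative 1D profile from an MASW inversion (layer thickness in m, Vs in m/s):
layers = [5.0, 10.0, 15.0]
vs = [180.0, 260.0, 420.0]
print(f"Vs_eq over {sum(layers):.0f} m: {equivalent_vs(layers, vs):.0f} m/s")
```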

Keywords: seismic microzonation, liquefaction, land use management, seismic refraction

Procedia PDF Downloads 267
3523 Modeling and Simulation for 3D Eddy Current Testing in Conducting Materials

Authors: S. Bennoud, M. Zergoug

Abstract:

The numerical simulation of electromagnetic interactions is still a challenging problem, especially for problems that result in fully three-dimensional mathematical models. The goal of this work is to use mathematical modeling to characterize the reliability and capability of the eddy current technique for detecting and characterizing defects embedded in in-service aeronautical parts. The finite element method is used to describe the eddy current technique in a mathematical model by predicting the eddy current interaction with defects; this model is, however, an approximation of the full Maxwell equations. In this study, the analysis of the problem is based on a three-dimensional finite element model that directly computes the electromagnetic field distortions due to defects.

Keywords: eddy current, finite element method, non-destructive testing, numerical simulations

Procedia PDF Downloads 432