Search results for: parallelism error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1931

1121 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods

Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard

Abstract:

Non-invasive samples are an alternative to collecting genetic samples directly: they are obtained without handling the animal (e.g., scats, feathers, and hairs). Nevertheless, their use has limitations. The main issue is degraded DNA, which leads to poorer extraction efficiency and genotyping errors; those errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by analysis methods designed to accommodate the errors and singularities of non-invasive samples, and genotype matching and population estimation algorithms stand out as analysis tools that have been adapted to deal with such errors. Despite this recent development of analysis methods, there is still a lack of empirical comparison of their performance. A comparison of methods on datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially on endangered and rare populations. To compare the analysis methods, four datasets obtained from the Dryad digital repository were used. Three matching algorithms (Cervus, Colony, and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes, and two algorithms (Capwire and BayesN) for population estimation. The three matching algorithms showed different patterns of results. ETLM produced fewer unique individuals and recaptures. The genotypes matched by Colony and Cervus were similar, which is not surprising given the similarity of their pairwise likelihood and clustering algorithms. The matches produced by ETLM showed almost no similarity with those of the other methods; its different clustering system and error model seem to lead to a more conservative selection, although ETLM had the worst processing time and least friendly interface of the compared methods. The population estimators performed differently across the datasets, with consensus between them for only one dataset. BayesN produced both higher and lower estimates than Capwire. Unlike Capwire, BayesN does not consider the total number of recaptures, only the recapture events, which makes the estimator sensitive to data heterogeneity, i.e., different capture rates between individuals. In these examples, tolerance of heterogeneity seems to be crucial for BayesN to work properly. Both estimation methods are user-friendly and have reasonable processing times. An extended analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems more appropriate for general use, considering the balance of time, interface, and robustness. The heterogeneity of the recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers; Capwire is therefore advisable for general use, since it performs better over a wide range of situations.
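
As a rough illustration of the matching step such tools perform, the sketch below pairs multilocus genotypes while tolerating a small number of mismatching loci (e.g., from allelic dropout). The data layout, function names, and thresholds are illustrative assumptions, not the actual likelihood models used by Cervus, Colony, or ETLM.

```python
# Minimal sketch of error-tolerant genotype matching (illustrative only).
from itertools import combinations

def loci_mismatches(g1, g2):
    """Count loci where two multilocus genotypes disagree.

    Each genotype is a dict: locus -> frozenset of alleles; None marks a
    failed amplification, which is skipped rather than counted as an error.
    """
    shared = [l for l in g1 if g1[l] is not None and g2.get(l) is not None]
    return sum(1 for l in shared if g1[l] != g2[l]), len(shared)

def match_samples(samples, max_mismatch=1, min_shared=6):
    """Pair samples that plausibly come from the same individual,
    tolerating a small number of mismatching loci (e.g., allelic dropout)."""
    pairs = []
    for (i, a), (j, b) in combinations(enumerate(samples), 2):
        mism, shared = loci_mismatches(a, b)
        if shared >= min_shared and mism <= max_mismatch:
            pairs.append((i, j))
    return pairs
```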

Keywords: algorithms, genetics, matching, population

Procedia PDF Downloads 143
1120 Cancellation of Transducer Effects from Frequency Response Functions: Experimental Case Study on the Steel Plate

Authors: P. Zamani, A. Taleshi Anbouhi, M. R. Ashory, S. Mohajerzadeh, M. M. Khatibi

Abstract:

Modal analysis is a developing science for the experimental evaluation of the dynamic properties of structures. Mechanical devices such as accelerometers are one source of error when measuring modal testing parameters. In this paper, eliminating the effect of the accelerometer's mass on the frequency response of the structure is studied. A sensitivity-analysis-based strategy is used to eliminate the mass effect: the amount of mass change and the location for measuring the structure's response are chosen so as to minimize the error in the frequency correction. Experimental modal testing is carried out on a steel plate, and the effect of the accelerometer's mass is removed using this strategy. Finally, good agreement is achieved between numerical and experimental results.
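
For a lumped accelerometer mass attached at the driving point, a standard receptance correction follows from the structural modification (Sherman-Morrison) identity: the measured FRF relates to the structure's FRF as Hm = Hs / (1 - w^2 m Hs), which inverts to Hs = Hm / (1 + w^2 m Hm). A minimal sketch of that correction is below; it is not the paper's sensitivity-analysis procedure, which instead selects the mass change and measurement location.

```python
import numpy as np

def cancel_accelerometer_mass(freq_hz, H_measured, m_acc):
    """Remove the loading effect of an accelerometer of mass m_acc (kg)
    from a measured driving-point receptance FRF (m/N).

    Adding a lumped mass m at the measurement DOF turns the true FRF H_s
    into H_m = H_s / (1 - w^2 m H_s); inverting gives the correction below.
    """
    w = 2.0 * np.pi * np.asarray(freq_hz)
    H_m = np.asarray(H_measured, dtype=complex)
    return H_m / (1.0 + (w ** 2) * m_acc * H_m)
```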

Keywords: accelerometer mass, frequency response function, modal analysis, sensitivity analysis

Procedia PDF Downloads 446
1119 Application of Artificial Immune Systems Combined with Collaborative Filtering in Movie Recommendation System

Authors: Pei-Chann Chang, Jhen-Fu Liao, Chin-Hung Teng, Meng-Hui Chen

Abstract:

This research combines an artificial immune system with user- and item-based collaborative filtering to create an efficient and accurate recommendation system. By applying the antibody-antigen mechanism of the artificial immune system and using the Pearson correlation coefficient as the affinity threshold to cluster the data, our collaborative filtering can effectively find useful users and items for rating prediction. This research uses the MovieLens dataset as the testing target to evaluate the effectiveness of the algorithm developed in this study. The experimental results show that the algorithm can effectively and accurately predict movie ratings. Compared to several state-of-the-art collaborative filtering systems, our system outperforms them in terms of mean absolute error on the MovieLens dataset.
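
A minimal sketch of the Pearson-affinity rating prediction that underlies such a system is given below, assuming a ratings matrix with NaN marking unrated items; only the affinity-threshold idea is reproduced, not the immune-system clustering itself.

```python
import numpy as np

def pearson_affinity(r_u, r_v):
    """Pearson correlation over co-rated items (NaN marks unrated items)."""
    mask = ~np.isnan(r_u) & ~np.isnan(r_v)
    if mask.sum() < 2:
        return 0.0
    a, b = r_u[mask], r_v[mask]
    a, b = a - a.mean(), b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def predict_rating(R, user, item, threshold=0.5):
    """Predict R[user, item] from users whose affinity exceeds the
    threshold, mirroring the idea of clustering by an affinity cutoff."""
    mean_u = np.nanmean(R[user])
    num = den = 0.0
    for v in range(R.shape[0]):
        if v == user or np.isnan(R[v, item]):
            continue
        w = pearson_affinity(R[user], R[v])
        if w >= threshold:
            num += w * (R[v, item] - np.nanmean(R[v]))
            den += abs(w)
    return mean_u + num / den if den else mean_u
```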

Keywords: artificial immune system, collaborative filtering, recommendation system, similarity

Procedia PDF Downloads 535
1118 Equity Risk Premiums and Risk Free Rates in Modelling and Prediction of Financial Markets

Authors: Mohammad Ghavami, Reza S. Dilmaghani

Abstract:

This paper presents an adaptive framework for modelling financial markets using equity risk premiums, risk-free rates, and volatilities. The recorded economic factors are initially used to train four adaptive filters for a limited period of time in the past. Once the systems are trained, the adjusted coefficients are used for modelling and prediction of an important financial market index. Two different approaches based on the least mean squares (LMS) and recursive least squares (RLS) algorithms are investigated. The performance of each method in terms of the mean squared error (MSE) is analyzed and the results are discussed. Computer simulations carried out using recorded data show MSEs of 4% and 3.4% for next-month prediction using the LMS and RLS adaptive algorithms, respectively. For twelve-month prediction, the RLS method shows better trend estimation than the LMS algorithm.
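
A minimal sketch of one of the two approaches, an LMS adaptive predictor, is shown below; the filter order, step size, and the use of raw series values (rather than the paper's trained economic factors) are illustrative assumptions.

```python
import numpy as np

def lms_predict(x, order=4, mu=0.01):
    """One-step-ahead prediction of a series x with an LMS adaptive filter.

    The coefficients follow w <- w + mu * e * u, where u holds the past
    samples and e is the prediction error.  Stability depends on input
    scaling; normalized LMS would remove that sensitivity.
    """
    w = np.zeros(order)
    preds = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]   # most recent sample first
        preds[n] = w @ u           # predicted next value
        e = x[n] - preds[n]        # prediction error
        w += mu * e * u            # LMS coefficient update
    return preds
```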

Keywords: adaptive methods, LSE, MSE, prediction of financial markets

Procedia PDF Downloads 336
1117 A Novel Image Steganography Scheme Based on Mandelbrot Fractal

Authors: Adnan H. M. Al-Helali, Hamza A. Ali

Abstract:

With the growth of censorship and pervasive monitoring on the Internet, steganography arises as a new means of achieving secret communication. Steganography is the art and science of embedding information within electronic media used by common applications and systems. Generally, hiding multimedia information within images changes some of their properties, which may introduce slight degradation or unusual characteristics. This paper presents a new image steganography approach for hiding multimedia information (images, text, and audio) using a generated Mandelbrot fractal image as a cover. The proposed technique has been extensively tested with different images. The results show that the method is a very secure means of hiding and retrieving steganographic information. Experimental results demonstrate an effective improvement in the values of the Peak Signal-to-Noise Ratio (PSNR), Mean Square Error (MSE), Normalized Cross-Correlation (NCC), and Image Fidelity (IF) over previous techniques.
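
The figures of merit cited above can be computed as below; the NCC and IF formulas follow one common convention in the steganography literature and are assumptions, since the abstract does not define them.

```python
import numpy as np

def mse(cover, stego):
    d = cover.astype(np.float64) - stego.astype(np.float64)
    return float(np.mean(d ** 2))

def psnr(cover, stego, peak=255.0):
    m = mse(cover, stego)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

def ncc(cover, stego):
    c, s = cover.astype(np.float64), stego.astype(np.float64)
    return float((c * s).sum() / (c ** 2).sum())

def image_fidelity(cover, stego):
    c, s = cover.astype(np.float64), stego.astype(np.float64)
    return 1.0 - float(((c - s) ** 2).sum() / (c ** 2).sum())
```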

Keywords: fractal image, information hiding, Mandelbrot set fractal, steganography

Procedia PDF Downloads 539
1116 The Influence of Active Breaks on the Attention/Concentration Performance in Eighth-Graders

Authors: Christian Andrä, Luisa Zimmermann, Christina Müller

Abstract:

Introduction: The positive relation between physical activity and cognition is well known. Relevant studies show that, in everyday school life, active breaks can lead to improvements in certain abilities (e.g., attention and concentration). A beneficial effect is attributed in particular to moderate activity. It is still unclear whether active breaks are beneficial after relatively short phases of cognitive load, and whether the postulated effects of activity really have an immediate impact. The objective of this study was to verify whether an active break after 18 minutes of cognitive load leads to enhanced attention/concentration performance, compared to inactive breaks with voluntary mobile phone activity. Methodology: For this quasi-experimental study, 36 students [age: 14.0 (mean) ± 0.3 (standard deviation); male/female: 21/15] of a secondary school were tested. In week 1, every student's maximum heart rate (Hfmax) was determined through maximum effort tests conducted during physical education classes. The task was to run 3 laps of 300 m with increasing subjective effort (lap 1: 60%, lap 2: 80%, lap 3: 100% of maximum performance capacity). Furthermore, the first attention/concentration test (D2-R) took place (pretest). The groups were matched on the basis of the pretest results. During weeks 2 and 3, crossover testing was conducted, comprising 18 minutes of cognitive preload (concentration performance test, KLT-R), a break, and an attention/concentration test after a 2-minute transition. Different 10-minute breaks (active break: moderate physical activity at 65% Hfmax; inactive break: mobile phone activity) took place between preload and transition. Major findings: In general, the different break interventions had no impact on the concentration test results (symbols processed after physical activity: 185.2 ± 31.3 / after inactive break: 184.4 ± 31.6; errors after physical activity: 5.7 ± 6.3 / after inactive break: 7.0 ± 7.2). There was, however, a noticeable development of the values over the testing periods. Although no difference in the number of processed symbols was detected (active/inactive break: period 1: 49.3 ± 8.8/46.9 ± 9.0; period 2: 47.0 ± 7.7/47.3 ± 8.4; period 3: 45.1 ± 8.3/45.6 ± 8.0; period 4: 43.8 ± 7.8/44.6 ± 8.0), error rates decreased successively after physical activity and increased gradually after an inactive break (active/inactive break: period 1: 1.9 ± 2.4/1.2 ± 1.4; period 2: 1.7 ± 1.8/1.5 ± 2.0; period 3: 1.2 ± 1.6/1.8 ± 2.1; period 4: 0.9 ± 1.5/2.5 ± 2.6; p = .012). Conclusion: Considering only the study's overall results, the hypothesis must be dismissed. However, a more differentiated evaluation shows that error rates decreased after active breaks and increased after inactive breaks. Evidently, the effects of the active intervention occur with a delay. The 2-minute transition (regeneration time) used in this study seems insufficient, owing to the longer adaptation time of the cardiovascular system in untrained individuals, which might initially affect concentration capacity. To use the positive effects of physical activity for teaching and learning processes, physiological characteristics must also be considered. Only this will ensure optimal performance capacity.

Keywords: active breaks, attention/concentration test, cognitive performance capacity, heart rate, physical activity

Procedia PDF Downloads 315
1115 Particle Filter Implementation of a Non-Linear Dynamic Fall Model

Authors: T. Kobayashi, K. Shiba, T. Kaburagi, Y. Kurihara

Abstract:

For the elderly living alone, falls can be a serious problem in daily life. Some elderly people are unable to stand up without the assistance of a caregiver, and they may become unconscious after a fall, which can lead to serious after-effects such as hypothermia, dehydration, and sometimes even death. We treat the subject as an inverted pendulum and model its angle from the equilibrium position and its angular velocity. As the model is non-linear, we implement the filtering method with a particle filter, which can estimate the true states of a non-linear model. To evaluate the accuracy of the particle filter estimates, we calculate the root mean square error (RMSE) between the estimated angle/angular velocity and the true values generated by the simulation. The experiments give best-case RMSEs of 0.0141 rad and 0.1311 rad/s for the angle and angular velocity, respectively.
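
A compact bootstrap particle filter for this kind of model might look like the sketch below; the pendulum parameters, noise levels, and multinomial resampling are illustrative assumptions rather than the authors' exact setup.

```python
import numpy as np

def particle_filter(z_obs, n_particles=1000, dt=0.01,
                    g=9.81, length=1.0, q=0.05, r=0.1):
    """Bootstrap particle filter for an inverted-pendulum fall model.

    State: [angle theta from equilibrium, angular velocity omega], with
    theta' = omega and omega' = (g / length) * sin(theta) + process noise.
    z_obs holds noisy angle measurements; the posterior-mean track is
    returned.
    """
    rng = np.random.default_rng(0)
    parts = rng.normal(0.0, 0.1, size=(n_particles, 2))  # initial cloud
    track = []
    for z in z_obs:
        theta = parts[:, 0].copy()
        omega = parts[:, 1].copy()
        # propagate through the non-linear dynamics with process noise
        parts[:, 0] = theta + omega * dt
        parts[:, 1] = (omega + (g / length) * np.sin(theta) * dt
                       + rng.normal(0.0, q, n_particles))
        # weight by the Gaussian likelihood of the angle measurement
        w = np.exp(-0.5 * ((z - parts[:, 0]) / r) ** 2) + 1e-300
        w /= w.sum()
        parts = parts[rng.choice(n_particles, n_particles, p=w)]  # resample
        track.append(parts.mean(axis=0))
    return np.array(track)
```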

Keywords: fall, microwave Doppler sensor, non-linear dynamics model, particle filter

Procedia PDF Downloads 211
1114 Challenges of Cryogenic Fluid Metering by Coriolis Flowmeter

Authors: Evgeniia Shavrina, Yan Zeng, Boo Cheong Khoo, Vinh-Tan Nguyen

Abstract:

This paper provides a review of error sources in cryogenic metering by Coriolis flowmeters (CFMs). Whereas these flowmeters allow accurate water metering, high uncertainty and low repeatability are commonly observed in cryogenic fluid metering, which is often necessary for effective renewable energy production and storage. The sources of these issues can be classified as general and cryogenic-specific challenges. An analysis of experimental and theoretical studies shows that material behaviour at cryogenic temperatures, composition variety, and multiphase presence are the most significant cryogenic-specific challenges, while pipeline diameter limitations, ambient vibration, and installation drawbacks are the most important general challenges of cryogenic metering by CFM. Finally, techniques that mitigate the impact of these challenges are reviewed, and future development directions are indicated.

Keywords: Coriolis flowmeter, cryogenic, multicomponent flow, multiphase flow

Procedia PDF Downloads 152
1113 3D Object Model Reconstruction Based on Polywogs Wavelet Network Parametrization

Authors: Mohamed Othmani, Yassine Khlifi

Abstract:

This paper presents a technique for compact three-dimensional (3D) object model reconstruction using wavelet networks. It transforms the input surface vertices into signals and uses wavelet network parameters for signal approximation. To this end, we use a wavelet network architecture based on several mother wavelet families. The POLYnomials WindOwed with Gaussians (POLYWOG) wavelet families are used to maximize the probability of selecting the best wavelets, which ensures good generalization of the network. To achieve a better reconstruction, the network is trained over several iterations to optimize the wavelet network parameters until the error criterion is small enough. Experimental results show that the proposed technique can effectively reconstruct irregular 3D object models when using the optimized wavelet network parameters, and that reconstruction accuracy depends on the best choice of the mother wavelets.
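
A minimal sketch of a wavelet-network approximation with a POLYWOG-type mother wavelet is shown below, assuming the common POLYWOG1 form psi(x) = x * exp(-x^2 / 2); for brevity, only the output weights are solved for on a fixed grid, whereas the paper optimizes the network parameters iteratively.

```python
import numpy as np

def polywog1(x):
    """POLYWOG1 mother wavelet (a polynomial windowed by a Gaussian)."""
    return x * np.exp(-0.5 * x ** 2)

def fit_wavelet_network(t, signal, n_units=20):
    """Approximate a 1-D signal with f(t) = sum_i w_i psi((t - c_i) / s).

    Centers and the scale sit on a fixed grid, and the output weights are
    found by linear least squares; a full wavelet network would also
    adapt the centers and scales.
    """
    centers = np.linspace(t.min(), t.max(), n_units)
    scale = (t.max() - t.min()) / n_units
    design = polywog1((t[:, None] - centers[None, :]) / scale)
    weights, *_ = np.linalg.lstsq(design, signal, rcond=None)
    return design @ weights, weights
```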

Keywords: 3D object, optimization, parametrization, POLYWOG wavelets, reconstruction, wavelet networks

Procedia PDF Downloads 284
1112 A Kolmogorov-Smirnov Type Goodness-Of-Fit Test of Multinomial Logistic Regression Model in Case-Control Studies

Authors: Chen Li-Ching

Abstract:

The multinomial logistic regression model is widely used for inferring the relationship between risk factors and a disease with multiple categories. Based on the discrepancy between the nonparametric and semiparametric maximum likelihood estimators of the cumulative distribution function, this study proposes a Kolmogorov-Smirnov type test statistic to assess the adequacy of the multinomial logistic regression model for case-control data. A bootstrap procedure is presented to calculate the critical value of the proposed test statistic. Empirical type I error rates and powers of the test are evaluated through simulation studies. Examples illustrate the implementation of the test.
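
The bootstrap calibration step can be organized as in the skeleton below; `simulate` and `statistic` stand in for the case-control resampling and the discrepancy between the two CDF estimators, which the abstract does not spell out.

```python
import numpy as np

def bootstrap_critical_value(simulate, statistic, observed_stat,
                             n_boot=1000, alpha=0.05, seed=0):
    """Generic bootstrap calibration of a goodness-of-fit statistic:
    regenerate data under the fitted model, recompute the statistic, and
    take the (1 - alpha) quantile as the critical value."""
    rng = np.random.default_rng(seed)
    stats = np.array([statistic(simulate(rng)) for _ in range(n_boot)])
    critical = float(np.quantile(stats, 1.0 - alpha))
    return critical, bool(observed_stat > critical)  # reject if True
```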

Keywords: case-control studies, goodness-of-fit test, Kolmogorov-Smirnov test, multinomial logistic regression

Procedia PDF Downloads 456
1111 Multi-Agent Coverage Control with Bounded Gain Forgetting Composite Adaptive Controller

Authors: Mert Turanli, Hakan Temeltas

Abstract:

In this paper, we present an adaptive controller for the decentralized coordination problem of multiple non-holonomic agents. The performance of the presented Multi-Agent Bounded Gain Forgetting (BGF) Composite Adaptive controller is compared with a feedback linearization controller in terms of the tracking error criterion. Using this method, the sensor nodes move and reconfigure themselves in a coordinated way in response to the sensed environment. The multi-agent coordination is achieved through Centroidal Voronoi Tessellations and coverage control, and a consensus protocol is used to synchronize the parameter vectors. The two controllers are presented with their Lyapunov stability analyses, and their stability is verified with simulation results. The simulations are carried out in MATLAB and ROS environments; better performance is obtained with the BGF adaptive controller.
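
The coordination layer, one discrete step of Voronoi-based coverage control, can be sketched as below with a Monte Carlo approximation of the Voronoi cells; the BGF composite adaptive controller for the non-holonomic dynamics is not reproduced.

```python
import numpy as np

def coverage_step(agents, sample_points, gain=0.5):
    """One step of Centroidal-Voronoi coverage control: each agent moves
    toward the centroid of its own Voronoi cell, with the cells
    approximated by Monte Carlo samples of the workspace."""
    # assign every sampled workspace point to its nearest agent
    d = np.linalg.norm(sample_points[:, None, :] - agents[None, :, :], axis=2)
    owner = d.argmin(axis=1)
    new_agents = agents.copy()
    for i in range(len(agents)):
        cell = sample_points[owner == i]
        if len(cell):
            new_agents[i] += gain * (cell.mean(axis=0) - agents[i])
    return new_agents
```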

Keywords: adaptive control, centroidal voronoi tessellations, composite adaptation, coordination, multi robots

Procedia PDF Downloads 348
1110 Application of Adaptive Neural Network Algorithms for Determination of Salt Composition of Waters Using Laser Spectroscopy

Authors: Tatiana A. Dolenko, Sergey A. Burikov, Alexander O. Efitorov, Sergey A. Dolenko

Abstract:

In this study, a comparative analysis of approaches using neural network algorithms for the effective solution of a complex inverse problem, namely identifying and determining the individual concentrations of inorganic salts in multicomponent aqueous solutions from their Raman light scattering spectra, is performed. It is shown that artificial neural networks determine the concentration of each salt with an average accuracy no worse than 0.025 M. The results of a comparative analysis of input data compression methods are also presented; it is demonstrated that uniform aggregation of input features decreases the error in determining the individual concentrations of components by 16-18% on average.

Keywords: inverse problems, multi-component solutions, neural networks, Raman spectroscopy

Procedia PDF Downloads 528
1109 Efficient Relay Selection Scheme Utilizing OVSF Code in Cooperative Communication System

Authors: Yeong-Seop Ahn, Myoung-Jin Kim, Young-Min Ko, Hyoung-Kyu Song

Abstract:

This paper proposes a relay selection scheme utilizing an orthogonal variable spreading factor (OVSF) code in a cooperative communication system. The relay selection scheme strongly influences communication performance in cooperative communication. Conventional relay selection schemes, such as the best harmonic mean relay selection scheme or the threshold-based relay selection scheme, require advance knowledge of information such as channel state information (CSI). The proposed relay selection scheme does not require such prior information, because it uses a reference signal spread with the OVSF code. Simulation results show that the bit error rate (BER) performance of the proposed relay selection scheme is similar to that of the best harmonic mean relay selection scheme, which is known as one of the optimal relay selection schemes.
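
OVSF codes are generated by the standard tree construction, which the sketch below implements; the Hadamard row ordering of the resulting set is a presentational choice.

```python
import numpy as np

def ovsf_codes(spreading_factor):
    """Build the length-SF orthogonal code set by the OVSF tree rule:
    each code c spawns children [c, c] and [c, -c].  SF must be a power
    of two; the rows are mutually orthogonal."""
    codes = np.array([[1]])
    while codes.shape[1] < spreading_factor:
        codes = np.vstack([np.hstack([codes, codes]),
                           np.hstack([codes, -codes])])
    return codes

C = ovsf_codes(8)
assert np.allclose(C @ C.T, 8 * np.eye(8))  # orthogonality check
```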

Keywords: cooperative communication, relay selection, OFDM, OVSF code

Procedia PDF Downloads 637
1108 An AK-Chart for the Non-Normal Data

Authors: Chia-Hau Liu, Tai-Yue Wang

Abstract:

Traditional multivariate control charts assume that measurements from manufacturing processes follow a multivariate normal distribution. However, this assumption may not hold, or may be difficult to verify, because in practice not all measurements from manufacturing processes are normally distributed. This study develops a new multivariate control chart for monitoring processes with non-normal data. We propose a mechanism that integrates a one-class classification method with an adaptive technique; the adaptive technique is used to improve the sensitivity of one-class classification to small shifts in statistical process control. In addition, this design provides an easy way to allocate the type I error rate, so the chart is easier to implement. Finally, a simulation study and real data from industry are used to demonstrate the effectiveness of the proposed control charts.
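
A minimal version of the chart's core, fitting a one-class boundary on in-control data and flagging Phase II points outside it, might look like the following; the data, the SVM variant, and the nu setting are illustrative assumptions, and the paper's adaptive sensitivity layer is omitted.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Fit a one-class boundary on in-control (Phase I) data; nu roughly
# targets the desired false-alarm (type I error) rate.
rng = np.random.default_rng(1)
phase1 = rng.lognormal(mean=0.0, sigma=0.3, size=(500, 3))  # non-normal data
chart = OneClassSVM(kernel="rbf", nu=0.01, gamma="scale").fit(phase1)

# Phase II monitoring: -1 marks points outside the in-control region.
phase2 = rng.lognormal(mean=0.2, sigma=0.3, size=(100, 3))  # shifted process
signals = chart.predict(phase2) == -1
print(f"signal rate: {signals.mean():.2%}")
```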

Keywords: multivariate control chart, statistical process control, one-class classification method, non-normal data

Procedia PDF Downloads 422
1107 Spelling Errors of EFL Students: An Insight into Curriculum Development

Authors: Sheikha Ali Salim Al-Breiki

Abstract:

The purpose of this study was to explore the types of spelling errors made by grade ten students and to find out whether there were any significant differences between males and females with respect to the types of spelling errors made. The sample included 90 grade ten students from four different schools in North Batinah. The test consisted of two parts: an oral dictation of 70 words, each with a contextualizing sentence, and a free writing task. The misspellings were classified into nine types. The findings revealed that the most common spelling errors among Omani grade ten students were vowel substitution, followed by vowel omission and consonant substitution. Male students omitted more vowels than female students, while females made more true word errors than their male counterparts. In light of the findings, the study presents some recommendations and suggestions for further research.

Keywords: types of spelling errors, errors, ESL/EFL, error analysis

Procedia PDF Downloads 372
1106 Fuzzy Logic and Control Strategies on a Sump

Authors: Nasser Mohamed Ramli, Nurul Izzati Zulkifli

Abstract:

A sump is a reservoir containing slurry, a mixture of solids and liquid or water. The sump system is an unsteady process owing to its level response, so the sump level must be monitored carefully with a good controller to avoid overflow. Since conventional controllers cannot handle large time delays and nonlinearities, a fuzzy logic controller is tested to prove its ability to solve these problems for a slurry sump. To assess the effectiveness and reliability of the controllers, a simulation of the sump system was created in MATLAB and the results were compared. According to the results obtained, the fuzzy logic controller outperformed the Proportional-Integral (PI) and Proportional-Integral-Derivative (PID) controllers, offering quick responses of 0.32 s for a step input and 5 s for a pulse generator and producing small Integral Absolute Error (IAE) values of 0.66 and 0.36, respectively.
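
The IAE criterion quoted above is simply the time integral of the absolute control error; a small helper, assuming sampled setpoint and measurement arrays:

```python
import numpy as np

def iae(t, setpoint, measured):
    """Integral Absolute Error via the trapezoidal rule, the criterion
    used above to rank the PI, PID, and fuzzy controllers."""
    e = np.abs(np.asarray(setpoint, float) - np.asarray(measured, float))
    return float(np.sum(0.5 * (e[1:] + e[:-1]) * np.diff(np.asarray(t, float))))
```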

Keywords: fuzzy, sump, level, controller

Procedia PDF Downloads 243
1105 Automatic Battery Charging for Rotor Wings Type Unmanned Aerial Vehicle

Authors: Jeyeon Kim

Abstract:

This paper describes the development of an automatic battery charging device for a rotor-wing type unmanned aerial vehicle (UAV), together with a positioning method that allows the UAV to land accurately on the charging device. The charging device is designed with simple maintenance, durability, cost, and landing position error in mind. For the UAV to land accurately on the charging device, two kinds of markers (a color marker and a light marker) installed on the device are detected by the camera mounted on the UAV. The UAV is then controlled so that the detected marker stays at the center of the image, and it lands on the device. We evaluate the performance of the proposed positioning method in outdoor experiments by day and night, and show the effectiveness of the system.
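
A plausible sketch of the color-marker step, returning the marker's pixel offset from the image center for the landing controller to null out, is below; the OpenCV pipeline and HSV bounds are assumptions, as the abstract does not describe the detection algorithm.

```python
import cv2
import numpy as np

def marker_offset(frame_bgr, hsv_lo=(100, 120, 80), hsv_hi=(130, 255, 255)):
    """Locate a colored landing marker and return its pixel offset from
    the image center; the flight controller would steer to drive this to
    zero.  The HSV bounds are illustrative (here, a blue marker)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    m = cv2.moments(mask)
    if m["m00"] == 0:          # marker not visible in this frame
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    h, w = mask.shape
    return cx - w / 2.0, cy - h / 2.0
```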

Keywords: unmanned aerial vehicle, automatic battery charging, positioning

Procedia PDF Downloads 363
1104 A Novel Image Steganography Method Based on Mandelbrot Fractal

Authors: Adnan H. M. Al-Helali, Hamza A. Ali

Abstract:

With the growth of censorship and pervasive monitoring on the Internet, steganography arises as a new means of achieving secret communication. Steganography is the art and science of embedding information within electronic media used by common applications and systems. Generally, hiding multimedia information within images changes some of their properties, which may introduce slight degradation or unusual characteristics. This paper presents a new image steganography approach for hiding multimedia information (images, text, and audio) using a generated Mandelbrot fractal image as a cover. The proposed technique has been extensively tested with different images. The results show that the method is a very secure means of hiding and retrieving steganographic information. Experimental results demonstrate an effective improvement in the values of the Peak Signal-to-Noise Ratio (PSNR), Mean Square Error (MSE), Normalized Cross-Correlation (NCC), and Image Fidelity (IF) over previous techniques.

Keywords: fractal image, information hiding, Mandelbrot set fractal, steganography

Procedia PDF Downloads 618
1103 Reliability Analysis of Geometric Performance of Onboard Satellite Sensors: A Study on Location Accuracy

Authors: Ch. Sridevi, A. Chalapathi Rao, P. Srinivasulu

Abstract:

The location accuracy of data products is a critical parameter in assessing the geometric performance of satellite sensors. This study focuses on reliability analysis of onboard sensors to evaluate their performance in terms of location accuracy over time. The analysis utilizes field failure data and employs the Weibull distribution to determine the reliability and, in turn, to understand improvements or degradations over a period of time. The analysis begins by scrutinizing the location accuracy error, which is the root mean square (RMS) error of the differences between ground control point coordinates observed on the product and on the map, and identifying the failure data with reference to time. A significant challenge in this study is to thoroughly analyze the possibility of an infant mortality phase in the data. To address this, the Weibull distribution is used to determine whether the data exhibit an infant stage or have transitioned into the operational phase; the shape parameter beta plays a crucial role in identifying this stage. Additionally, determining the exact start of the operational phase and the end of the infant stage poses another challenge, as it is crucial to eliminate residual infant mortality or wear-out from the model, since either can significantly increase the total failure rate. To address this, the well-established statistical Laplace test is applied to infer the behavior of the sensors and to accurately ascertain the duration of the different phases of the lifetime and the time required for stabilization. This approach also helps establish whether the bathtub curve model, which accounts for the different phases in the lifetime of a product, is appropriate for the data, and whether the thresholds for the infant period and wear-out phase are accurately estimated, by validating the data in individual phases with Weibull distribution curve-fitting analysis. Once the operational phase is determined, reliability is assessed using Weibull analysis. This analysis not only provides insights into the reliability of individual sensors with regard to location accuracy over the required period of time, but also establishes a model that can be applied to automate similar analyses for various sensors and parameters using field failure data. Furthermore, the identification of the best-performing sensor through this analysis serves as a benchmark for future missions and designs, ensuring continuous improvement in sensor performance and reliability. Overall, this study provides a methodology to accurately determine the duration of the different phases in the life data of individual sensors. It enables an assessment of the time required for stabilization and provides insights into the reliability during the operational phase and the commencement of the wear-out phase. By employing this methodology, designers can make informed decisions regarding sensor location accuracy performance, contributing to enhanced accuracy in satellite-based applications.
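
The two statistical ingredients named above can be sketched as follows, assuming failure times measured from the start of observation; the Laplace statistic shown is the standard form for a point process observed on (0, T].

```python
import numpy as np
from scipy import stats

def laplace_test(event_times, t_end):
    """Laplace trend test for failures observed on (0, t_end]: U < 0
    suggests reliability growth (infant mortality fading), U > 0
    deterioration, and |U| small a roughly constant failure rate."""
    t = np.asarray(event_times, dtype=float)
    n = len(t)
    return (t.mean() - t_end / 2.0) / (t_end * np.sqrt(1.0 / (12.0 * n)))

def weibull_phase(failure_times):
    """Fit a two-parameter Weibull; beta < 1 points to infant mortality,
    beta ~ 1 to the useful-life phase, and beta > 1 to wear-out."""
    beta, _, eta = stats.weibull_min.fit(failure_times, floc=0.0)
    return beta, eta
```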

Keywords: bathtub curve, geometric performance, Laplace test, location accuracy, reliability analysis, Weibull analysis

Procedia PDF Downloads 65
1102 Understanding and Improving Neural Network Weight Initialization

Authors: Diego Aguirre, Olac Fuentes

Abstract:

In this paper, we present a taxonomy of weight initialization schemes used in deep learning. We survey the most representative techniques in each class and compare them in terms of overhead cost, convergence rate, and applicability. We also introduce a new weight initialization scheme in which we perform an initial feedforward pass through the network using an initialization mini-batch. Using statistics obtained from this pass, we initialize the weights of the network so that the following properties are met: 1) weight matrices are orthogonal; 2) ReLU layers produce a predetermined number of non-zero activations; 3) the output produced by each internal layer has unit variance; 4) weights in the last layer are chosen to minimize the error on the initial mini-batch. We evaluate our method on three popular architectures and achieve faster convergence rates on the MNIST, CIFAR-10/100, and ImageNet datasets compared to state-of-the-art initialization techniques.
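
A single-layer sketch of such a data-dependent initialization, combining orthogonal weights, a bias chosen so a target fraction of ReLU units fire, and per-unit variance normalization on the mini-batch, is given below; it is an interpretation of the listed properties, not the authors' released code.

```python
import numpy as np

def init_layer(x_batch, out_dim, keep_frac=0.5):
    """Initialize one ReLU layer from an initialization mini-batch:
    orthogonal weights, a bias that lets keep_frac of the units fire,
    and per-unit rescaling toward unit output variance on the batch.
    Assumes out_dim <= in_dim so the rows can be made orthonormal."""
    in_dim = x_batch.shape[1]
    q, _ = np.linalg.qr(np.random.randn(in_dim, in_dim))
    W = q[:out_dim, :]                               # orthonormal rows
    pre = x_batch @ W.T
    b = -np.quantile(pre, 1.0 - keep_frac, axis=0)   # keep_frac units active
    out = np.maximum(pre + b, 0.0)
    scale = 1.0 / (out.std(axis=0) + 1e-8)           # unit variance per unit
    return W * scale[:, None], b * scale
```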

Keywords: deep learning, image classification, supervised learning, weight initialization

Procedia PDF Downloads 135
1101 Re-Evaluation of Field X Located in Northern Lake Albert Basin to Refine the Structural Interpretation

Authors: Calorine Twebaze, Jesca Balinga

Abstract:

Field X is located on the eastern shores of Lake Albert, Uganda, on the rift flank where the gross sedimentary fill is typically less than 2,000 m. The field was discovered in 2006 and encountered about 20.4 m of net pay across three (3) stratigraphic intervals within the discovery well. The field covers an area of 3 km2, with a structural configuration comprising a 3-way dip-closed hanging-wall anticline that seals against the basement to the southeast along the bounding fault. Field X had been mapped on reprocessed 3D seismic data, originally acquired in 2007 and reprocessed in 2013. The seismic data quality is good across the field, and the reprocessing reduced the uncertainty in the location of the bounding fault and enhanced the lateral continuity of reservoir reflectors. The current study was a re-evaluation of Field X to refine the fault interpretation and understand the structural uncertainties associated with the field. The seismic data and three (3) well datasets were used during the study. The evaluation followed standard workflows using Petrel software and structural attribute analysis, spanning seismic-to-well tie, structural interpretation, and structural uncertainty analysis. Analysis of the well ties generated for the three wells provided a geophysical interpretation consistent with the geological picks. The generated time-depth curves showed a general increase in velocity with burial depth; the separation in curve trends observed below 1,100 m was attributed mainly to minimal lateral variation in velocity between the wells. In addition to attribute analysis, three velocity modeling approaches were evaluated: the time-depth curve method, the V0 + kZ method, and the average velocity method. The generated models were calibrated at well locations using well tops to obtain the best velocity model for Field X. The time-depth method resulted in the most reliable depth surfaces, with good structural coherence between the TWT and depth maps and a minimal error of 2 to 5 m at well locations. Both the NNE-SSW rift border fault and the minor faults in the existing interpretation were re-evaluated. The new interpretation also delineated an E-W trending fault in the northern part of the field that had not been interpreted before. The fault was interpreted at all stratigraphic levels; it propagates from the basement to the surface and is an active fault today. It was also noted that the field as a whole is only lightly faulted, with more faults in its deeper part. The major structural uncertainties defined included: 1) the time horizons, due to reduced data quality, especially in the deeper parts of the structure, where an error equal to one-third of the reflection time thickness was assumed; 2) check-shot analysis showing varying velocities within the wells, and thus varying depth values for each well; and 3) very few average velocity points, due to the limited number of wells, producing a pessimistic average velocity model.
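
For the V0 + kZ approach mentioned above, a linear instantaneous-velocity model V(z) = V0 + kz integrates to the standard depth-conversion formula z(t) = (V0 / k)(exp(kt) - 1) for one-way time t; a sketch with illustrative (uncalibrated) parameters:

```python
import numpy as np

def depth_from_twt(twt_s, v0=1600.0, k=0.4):
    """Depth conversion for a linear instantaneous-velocity model
    V(z) = V0 + k z, one of the three approaches evaluated above.
    twt_s is two-way time in seconds; v0 (m/s) and k (1/s) are
    illustrative values, not the field's calibrated parameters."""
    t_oneway = np.asarray(twt_s) / 2.0
    return (v0 / k) * np.expm1(k * t_oneway)
```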

Keywords: 3D seismic data interpretation, structural uncertainties, attribute analysis, velocity modelling approaches

Procedia PDF Downloads 59
1100 An Approach for Modeling CMOS Gates

Authors: Spyridon Nikolaidis

Abstract:

A modeling approach for CMOS gates is presented based on the use of the equivalent inverter. A new model for the inverter has been developed using a simplified transistor current model which incorporates the nanoscale effects of planar technology. Parametric expressions for the output voltage are provided, as well as the values of the output and supply current, to be compatible with CCS technology. The model is parametric with respect to input signal slew, output load, transistor widths, supply voltage, temperature, and process. The transistor widths of the equivalent inverter are determined by HSPICE simulations, and parametric expressions for them are developed using a fitting procedure. Results for the NAND gate show that the proposed approach offers sufficient accuracy, with an average error in propagation delay of about 5%.
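
The fitting step can be illustrated as below with scipy's curve_fit; the functional form of the delay expression and the sample points standing in for HSPICE runs are assumptions for the sketch.

```python
import numpy as np
from scipy.optimize import curve_fit

def delay_model(X, a, b, c, d):
    """Assumed parametric form: delay = a + b*slew + c*load + d*slew*load."""
    slew, load = X
    return a + b * slew + c * load + d * slew * load

# Illustrative (slew, load) -> delay samples standing in for HSPICE runs.
slew = np.array([0.02, 0.02, 0.05, 0.05, 0.10, 0.10])         # ns
load = np.array([1.0, 4.0, 1.0, 4.0, 1.0, 4.0])               # fF
delay = np.array([0.031, 0.074, 0.042, 0.088, 0.060, 0.110])  # ns

params, _ = curve_fit(delay_model, (slew, load), delay)
pred = delay_model((slew, load), *params)
print(f"mean relative error: {np.mean(np.abs(pred - delay) / delay):.1%}")
```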

Keywords: CMOS gate modeling, inverter modeling, transistor current mode, timing model

Procedia PDF Downloads 423
1099 Sentiment Analysis of Consumers’ Perceptions on Social Media about the Main Mobile Providers in Jamaica

Authors: Sherrene Bogle, Verlia Bogle, Tyrone Anderson

Abstract:

In recent years, organizations have become increasingly interested in the possibility of analyzing social media as a means of gaining meaningful feedback about their products and services. An aspect-based sentiment analysis approach is used to predict the sentiment of Twitter datasets for Digicel and Lime, the main mobile companies in Jamaica, using supervised learning classification techniques. The results indicate an average accuracy of 82.2 percent in classifying tweets across three separate classification algorithms, against the purported baseline of 70 percent, and an average root mean squared error of 0.31. These results indicate that analyzing sentiment on social media to gain customer feedback can be a viable solution for mobile companies looking to improve business performance.
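
A minimal supervised tweet classifier of the kind compared in the study might be set up as follows; the example tweets and labels are stand-ins for the Digicel/Lime datasets, and the aspect-based step is not reproduced.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in data; the study used labeled tweets about each provider.
tweets = ["love the new data plan", "signal dropped again today",
          "great customer service", "worst coverage in town"]
labels = ["pos", "neg", "pos", "neg"]

# TF-IDF features feeding a linear classifier, one common supervised setup.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(tweets, labels)
print(model.predict(["the service was great"]))  # -> ['pos']
```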

Keywords: machine learning, sentiment analysis, social media, supervised learning

Procedia PDF Downloads 444
1098 Measuring How Brightness Mediates Auditory Salience

Authors: Baptiste Bouvier

Abstract:

While we are constantly flooded with stimuli in daily life, attention allows us to select the ones we specifically process and to ignore the others. Some salient stimuli may nevertheless pass this filter independently of our will, in a "bottom-up" way. The role that the acoustic properties of a sound's timbre play in its salience, i.e., its ability to capture a listener's attention, is still not well understood. We implemented the "additional singleton paradigm", in which participants have to discriminate targets according to their duration. This task is perturbed (higher error rates and longer response times) by the presence of an irrelevant additional sound, in which we can manipulate a feature of our choice at equal loudness. This allows us to highlight the influence of a sound's timbre features on its salience at equal loudness. We have shown that, in this framework, a stimulus that is brighter than the others but not louder leads to attentional capture. This work opens the door to the study of the influence of any timbre feature on salience.

Keywords: attention, audition, bottom-up attention, psychoacoustics, salience, timbre

Procedia PDF Downloads 170
1097 Effect of Clinical Depression on Automatic Speaker Verification

Authors: Sheeraz Memon, Namunu C. Maddage, Margaret Lech, Nicholas Allen

Abstract:

The effect of a clinical environment on the accuracy of speaker verification was tested. Speaker verification tests were performed within homogeneous environments containing only clinically depressed speakers or only non-depressed speakers, as well as within mixed environments containing different mixtures of both clinically depressed and non-depressed speakers. The speaker verification framework included MFCC features and GMM modeling and classification. The experiments within homogeneous environments showed a 5.1% increase in the equal error rate (EER) in the clinically depressed environment compared to the non-depressed environment, indicating that clinical depression increases intra-speaker variability and makes the speaker verification task more challenging. Experiments with mixed environments indicated that increasing the percentage of depressed individuals within a mixed environment increases the speaker verification equal error rate.
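
The EER reported above can be computed from verification scores as in the sketch below, assuming higher scores indicate a more likely genuine trial.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """EER: the operating point where the false-acceptance rate and
    false-rejection rate cross, computed over all score thresholds."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    best_gap, eer = np.inf, 1.0
    for th in np.sort(np.concatenate([genuine, impostor])):
        frr = np.mean(genuine < th)    # genuine trials rejected
        far = np.mean(impostor >= th)  # impostor trials accepted
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer
```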

Keywords: speaker verification, GMM, EM, clinical environment, clinical depression

Procedia PDF Downloads 375
1096 Influence of Chirp of High-Speed Laser Diodes and Fiber Dispersion on Performance of Non-Amplified 40-Gbps Optical Fiber Links

Authors: Ahmed Bakry, Moustafa Ahmed

Abstract:

We model and simulate the combined effect of fiber dispersion and the frequency chirp of a directly modulated high-speed laser diode on the figures of merit of a non-amplified 40-Gbps optical fiber link. We consider both the return-to-zero (RZ) and non-return-to-zero (NRZ) patterns of the pseudorandom modulation bits. The performance of the fiber communication system is assessed by the fiber-length limitation due to fiber dispersion. We study the influence of replacing standard single-mode fibers with non-zero dispersion-shifted fibers on the maximum fiber length and evaluate the associated power penalty. We introduce new dispersion tolerances for a 1-dB power penalty in RZ and NRZ 40-Gbps optical fiber links.

Keywords: bit error rate, dispersion, frequency chirp, fiber communications, semiconductor laser

Procedia PDF Downloads 641
1095 Numerical Modeling for Water Engineering and Obstacle Theory

Authors: Mounir Adal, Baalal Azeddine, Afifi Moulay Larbi

Abstract:

Numerical analysis is a branch of mathematics devoted to the development of iterative matrix calculation techniques. The objective is to optimize the operations for calculating and solving systems of equations of order n, saving time and energy on the computers used to calculate and analyze big data through matrix equations. Furthermore, this scientific discipline produces approximate results with a quantified margin of error (convergence rates). Thus, the results obtained from numerical analysis techniques run in software such as MATLAB or Simulink offer a preliminary diagnosis of the situation of the environment or target space. From this, we can derive the technical procedures needed for engineering or scientific studies of water that are exploitable by engineers.

Keywords: numerical analysis methods, obstacles solving, engineering, simulation, numerical modeling, iteration, computer, MATLAB, water, underground, velocity

Procedia PDF Downloads 462
1094 Joint Discrete Hartley Transform-Clipping for Peak to Average Power Ratio Reduction in Orthogonal Frequency Division Multiplexing System

Authors: Selcuk Comlekci, Mohammed Aboajmaa

Abstract:

Orthogonal frequency division multiplexing (OFDM) is a promising technique for modern wireless communications systems due to its robustness in multipath environments. The high peak-to-average power ratio (PAPR) of the transmitted signal is one of the major drawbacks of OFDM: a high PAPR degrades the bit error rate (BER) performance and affects the linear operation of the high power amplifier (HPA). In this paper, we propose a DHT-Clipping reduction technique that lowers the high PAPR by combining the discrete Hartley transform (DHT) with clipping. The simulation results show that the DHT-Clipping technique offers better PAPR reduction than DHT or clipping alone, and also delivers better BER performance than clipping.
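
A sketch of the combined transmitter, DHT precoding followed by an IFFT and envelope clipping at a chosen ratio of the RMS level, is given below; the clipping ratio, QPSK mapping, and the absence of oversampling and filtering are simplifying assumptions.

```python
import numpy as np

def dht(x):
    """Discrete Hartley transform as a real 'cas' kernel,
    cas(t) = cos(t) + sin(t), applied to the symbol block."""
    n = len(x)
    ang = 2.0 * np.pi * np.outer(np.arange(n), np.arange(n)) / n
    return (np.cos(ang) + np.sin(ang)) @ x / np.sqrt(n)

def papr_db(x):
    return 10.0 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

def dht_clipping_tx(symbols, clip_ratio=1.6):
    """DHT-precode, form the OFDM signal with an IFFT, then clip the
    envelope at clip_ratio times its RMS value."""
    x = np.fft.ifft(dht(symbols))
    limit = clip_ratio * np.sqrt(np.mean(np.abs(x) ** 2))
    mag = np.abs(x)
    factor = np.where(mag > limit, limit / np.maximum(mag, 1e-12), 1.0)
    return x * factor

rng = np.random.default_rng(0)
qpsk = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)
print(f"PAPR plain: {papr_db(np.fft.ifft(qpsk)):.1f} dB, "
      f"DHT-Clipping: {papr_db(dht_clipping_tx(qpsk)):.1f} dB")
```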

Keywords: ISI, cyclic prefix, BER, PAPR, HPA, DHT, subcarrier

Procedia PDF Downloads 439
1093 Dynamic Modeling of a Robot for Playing a Curved 3D Percussion Instrument Utilizing a Finite Element Method

Authors: Prakash Persad, Kelvin Loutan, Trichelle Seepersad

Abstract:

The finite element method is commonly used in the analysis of flexible manipulators to predict elastic displacements and to develop joint control schemes for reducing positioning error. In order to preserve simplicity, regular geometries and ideal joints and connections are assumed. This paper presents the dynamic FE analysis of a 4-degree-of-freedom open-chain manipulator intended for striking a curved 3D-surface percussion musical instrument. This was done utilizing the new Multibody Dynamics Module in COMSOL, which is capable of modeling the elastic behavior of a body undergoing rigid-body-type motion.

Keywords: dynamic modeling, entertainment robots, finite element method, flexible robot manipulators, multibody dynamics, musical robots

Procedia PDF Downloads 336
1092 Fast Short-Term Electrical Load Forecasting under High Meteorological Variability with a Multiple Equation Time Series Approach

Authors: Charline David, Alexandre Blondin Massé, Arnaud Zinflou

Abstract:

In 2016, Clements, Hurn, and Li proposed a multiple equation time series approach for short-term load forecasting, reporting an average mean absolute percentage error (MAPE) of 1.36% on an 11-year dataset for the Queensland region in Australia. We present an adaptation of their model to the electrical power load consumption of the whole province of Quebec in Canada. More precisely, we take into account two additional meteorological variables, cloudiness and wind speed, on top of temperature, as well as multiple meteorological measurements taken at different locations across the territory, and we consider other minor improvements. Our final model shows an average MAPE score of 1.79% over an 8-year dataset.
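
The MAPE score used throughout can be computed as below; the sample values in the comment are illustrative.

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, the score reported above."""
    a = np.asarray(actual, dtype=float)
    f = np.asarray(forecast, dtype=float)
    return float(np.mean(np.abs((a - f) / a)) * 100.0)

# e.g. hourly loads in MW: mape([31000, 29500], [30700, 30000]) ~ 1.3%
```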

Keywords: short-term load forecasting, special days, time series, multiple equations, parallelization, clustering

Procedia PDF Downloads 103