Search results for: propagation of error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2552

2162 Identification of Architectural Design Error Risk Factors in Construction Projects Using IDEF0 Technique

Authors: Sahar Tabarroki, Ahad Nazari

Abstract:

The design process is one of the key processes in the construction industry. Although architects have the responsibility to produce complete, accurate, and coordinated documents, architectural design is accompanied by many errors. A design error occurs when the constraints and requirements of the design are not satisfied. Errors that are not caught early in the design phase are costly and time-consuming to correct, and they become expensive once they reach the construction documents or the construction phase. The aim of this research is to identify the risk factors of architectural design errors. First, a literature review of the design process was conducted, and a questionnaire was then designed to identify the risks and risk factors. The questionnaire items were based on the “similar service description of study and supervision of architectural works” published by the “Vice Presidency of Strategic Planning & Supervision of I.R. Iran” as the baseline description of architects’ tasks. Second, the top 10 risks of architectural activities were identified. To determine the positions of possible causes of risks with respect to architectural activities, these activities were located in a design process modeled by the IDEF0 technique. The research was carried out by choosing a case study, checking the design drawings, interviewing its architect and client, and providing a checklist in order to identify concrete examples of architectural design errors. The results revealed that activities such as “defining the current and future requirements of the project”, “studies and space planning,” and “time and cost estimation of the suggested solution” have a higher error risk than others. Moreover, the most important causes include “unclear client goals”, “time pressure from the client”, and “lack of knowledge of architects about the requirements of end-users”. In detecting errors in the case study, the lack of standards and design criteria, and the lack of coordination among them, was a barrier; nevertheless, “lack of coordination between the architectural design and the electrical and mechanical services”, “violation of standard dimensions and sizes in space design”, and “design omissions” were identified as the most important design errors.

Keywords: architectural design, design error, risk management, risk factor

Procedia PDF Downloads 108
2161 Feature Location Restoration for Under-Sampled Photoplethysmogram Using Spline Interpolation

Authors: Hangsik Shin

Abstract:

The purpose of this research is to restore the feature locations of under-sampled photoplethysmograms using spline interpolation and to investigate the feasibility of feature shape restoration. We obtained a 10 kHz-sampled photoplethysmogram and decimated it to generate under-sampled datasets with sampling frequencies of 5 kHz, 2.5 kHz, 1 kHz, 500 Hz, 250 Hz, 25 Hz, and 10 Hz. To investigate the restoration performance, we interpolated the under-sampled signals back to 10 kHz and then compared their feature locations with those of the 10 kHz-sampled photoplethysmogram. The features were the upper and lower peaks of the photoplethysmography waveform. The results showed that the time differences were dramatically decreased by interpolation, with a location error of less than 1 ms for both feature types. In the 10 Hz-sampled case, the location error was also greatly reduced; however, it remained above 10 ms.
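
As a rough illustration of the restoration step, the sketch below decimates a synthetic pulse-like signal, re-interpolates it onto the original time grid with a cubic spline, and compares peak locations. The 10 kHz/100 Hz rates, the synthetic waveform, and the pairing of peaks by index are assumptions for illustration, not the study's dataset or protocol.

```python
# Minimal sketch: restore peak locations of a decimated waveform with cubic-spline
# interpolation, then compare them with the peaks of the original fine-grained signal.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import find_peaks

fs_ref, fs_low = 10_000, 100              # reference and under-sampled rates (Hz), assumed
t_ref = np.arange(0, 10, 1 / fs_ref)
ppg = np.sin(2 * np.pi * 1.2 * t_ref) + 0.3 * np.sin(2 * np.pi * 2.4 * t_ref)  # toy "PPG"

step = fs_ref // fs_low
t_low, ppg_low = t_ref[::step], ppg[::step]          # decimate by keeping every k-th sample

restored = CubicSpline(t_low, ppg_low)(t_ref)        # re-interpolate onto the 10 kHz grid

peaks_ref, _ = find_peaks(ppg)
peaks_rec, _ = find_peaks(restored)
n = min(len(peaks_ref), len(peaks_rec))               # naive pairing by index (assumption)
err_ms = np.abs(t_ref[peaks_ref[:n]] - t_ref[peaks_rec[:n]]) * 1e3
print(f"mean peak-location error: {err_ms.mean():.3f} ms")
```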

Keywords: peak detection, photoplethysmography, sampling, signal reconstruction

Procedia PDF Downloads 343
2160 Maximum Initial Input Allowed to Iterative Learning Control Set-up Using Singular Values

Authors: Naser Alajmi, Ali Alobaidly, Mubarak Alhajri, Salem Salamah, Muhammad Alsubaie

Abstract:

Iterative Learning Control (ILC) is known to be a control tool for overcoming periodic disturbances in repetitive systems. The technique drives the error signal toward zero as the number of operations increases. The learning process in this context is strongly dependent on the initial input, which, if selected properly, makes learning more effective than starting the system blind. ILC uses previously recorded execution data to update the input of the following execution/trial such that a reference trajectory is followed to high accuracy. Error convergence in ILC is generally highly dependent on the input applied to the plant at trial 1; thus a good choice of initial input signal makes learning faster, and as a consequence the error tends to zero faster as well. In the work presented here, an upper limit based on singular values (SV) is derived for the initial input signal applied at trial 1 such that the system follows the reference in fewer trials without responding aggressively or exceeding the working envelope within which the system, a robot arm for example, is required to move. Simulation results illustrate the theory introduced in this paper.
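
One simple way such an input bound can be motivated (a sketch under stated assumptions, not necessarily the authors' derivation) is through the lifted description y = Gu of a repetitive system: the largest singular value of G bounds the output norm, so limiting the trial-1 input norm to y_max/sigma_max(G) keeps the first trial inside a prescribed working envelope. The first-order impulse response, the learning gain, and the reference below are placeholders.

```python
import numpy as np

def lifted_matrix(markov, N):
    """Lower-triangular Toeplitz matrix built from impulse-response (Markov) parameters."""
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = markov[i - j]
    return G

N = 50
markov = 0.8 ** np.arange(N)               # toy first-order impulse response (assumption)
G = lifted_matrix(markov, N)
sigma_max = np.linalg.svd(G, compute_uv=False)[0]

y_max = 1.0                                 # working-envelope limit on the output norm
u1_limit = y_max / sigma_max                # crude admissible bound on ||u1||
print(f"sigma_max = {sigma_max:.3f}, admissible ||u1|| <= {u1_limit:.3f}")

# simple P-type ILC iteration starting from an input scaled to satisfy the bound
r = np.ones(N)                              # reference trajectory (placeholder)
u = np.full(N, u1_limit / np.sqrt(N))
for trial in range(20):
    e = r - G @ u
    u = u + 0.5 * e                         # learning gain L = 0.5 (assumption)
print(f"tracking error norm after 20 trials: {np.linalg.norm(r - G @ u):.4f}")
```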

Keywords: initial input, iterative learning control, maximum input, singular values

Procedia PDF Downloads 216
2159 The Non-Existence of Perfect 2-Error Correcting Lee Codes of Word Length 7 over Z

Authors: Catarina Cruz, Ana Breda

Abstract:

Tiling problems have been capturing the attention of many mathematicians due to their real-life applications. In this study, we deal with tilings of Zⁿ by Lee spheres, where n is a positive integer, these tilings being related to error correcting codes for the transmission of information over a noisy channel. We focus our attention on the question ‘for what values of n and r does the n-dimensional Lee sphere of radius r tile Zⁿ?’. It seems that the n-dimensional Lee sphere of radius r does not tile Zⁿ for n ≥ 3 and r ≥ 2. Here, we prove that it is not possible to tile Z⁷ with Lee spheres of radius 2, presenting a proof based on a combinatorial method and faithful to the geometric idea of the problem. The non-existence of such tilings has been studied by several authors, with the cases in which the radius of the Lee spheres equals 2 considered the most difficult. The relation between these tilings and error correcting codes is established by considering the center of a Lee sphere as a codeword and the other elements of the sphere as words which are decoded by the central codeword. When the Lee spheres of radius r centered at the elements of a set M ⊂ Zⁿ tile Zⁿ, M is a perfect r-error correcting Lee code of word length n over Z, denoted by PL(n, r). Our strategy to prove the non-existence of PL(7, 2) codes is based on assuming the existence of such a code M. Without loss of generality, we suppose that O ∈ M, where O = (0, ..., 0). In this sense, and taking into account that we are dealing with Lee spheres of radius 2, O covers all words which are distant two or fewer units from it. By the definition of a PL(7, 2) code, each word which is distant three units from O must be covered by a unique codeword of M. These words have to be covered by codewords which are distant five units from O. We prove the non-existence of PL(7, 2) codes by showing that it is not possible to cover all the referred words without superposition of Lee spheres whose centers are distant five units from O, contradicting the definition of a PL(7, 2) code. We achieve this contradiction by combining the cardinalities of particular subsets of codewords which are distant five units from O. There exists an extensive literature on codes in the Lee metric. Here, we present a new approach to prove the non-existence of PL(7, 2) codes.
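
For readers checking the counting arguments, the snippet below computes the number of words in a Lee sphere of radius r in Zⁿ, i.e. the lattice points within L1 distance r of a codeword; for n = 7 and r = 2 the sphere contains 113 words. This is the standard Lee-ball volume formula, not anything specific to the authors' proof.

```python
from math import comb

def lee_ball_size(n: int, r: int) -> int:
    """Number of points of Z^n at L1 distance <= r from the origin."""
    return sum(2**k * comb(n, k) * comb(r, k) for k in range(min(n, r) + 1))

print(lee_ball_size(7, 2))   # 113 words covered by each codeword of a PL(7, 2) code
print(lee_ball_size(7, 1))   # 15, the classical radius-1 Lee sphere (cross) in Z^7
```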

Keywords: Golomb-Welch conjecture, Lee metric, perfect Lee codes, tilings

Procedia PDF Downloads 134
2158 Assessment of Time-variant Work Stress for Human Error Prevention

Authors: Hyeon-Kyo Lim, Tong-Il Jang, Yong-Hee Lee

Abstract:

For an operator in a nuclear power plant, human error is one of the most dreaded factors that may result in unexpected accidents. The probability of human error may be low, but its risk can be unimaginably enormous. Thus, for accident prevention, it is indispensable to analyze the influence of any factor that may raise the possibility of human errors. Over the past decades, many studies have shown that the performance of human operators may vary over time due to numerous factors. Among them, stress is known to be an indirect factor that may cause human errors and result in mental illness. Many assessment tools have been developed to assess the stress level of human workers. However, their use for anticipating human performance, which is related to human error possibility, remains questionable, because they were developed mainly from the viewpoint of mental health rather than industrial safety. A person's stress level may rise or fall with work time; in that sense, if such tools are to be applicable to safety, they should at least be able to assess the variation resulting from work time. Therefore, this study aimed to compare their applicability for safety purposes. More than 10 kinds of work stress tools were analyzed with reference to assessment items, assessment and analysis methods, and follow-up measures, which are known to be factors closely related to work stress. The results showed that most tools mainly focus their weights on some common organizational factors such as demands, support, and relationships, in that order, and their weights were broadly similar. However, they failed to recommend practical solutions. Instead, they merely advise setting up overall countermeasures in a PDCA cycle or in risk management activities, which falls far short of practical human error prevention. Thus, it was concluded that applying stress assessment tools developed mainly for mental health seems impractical for safety purposes with respect to anticipating human performance, and that the development of a new assessment tool is inevitable if one wants to assess stress level in terms of human performance variation and accident prevention. As a practical countermeasure, this study proposed a new scheme for assessing the work stress level of a human operator as it varies over work time, which is closely related to the possibility of human errors.

Keywords: human error, human performance, work stress, assessment tool, time-variant, accident prevention

Procedia PDF Downloads 647
2157 Banking Sector Development and Economic Growth: Evidence from the State of Qatar

Authors: Fekri Shawtari

Abstract:

The banking sector plays a very crucial role in the economic development of a country. As a financial intermediary, it is assigned a major role in economic growth and stability. This paper aims to examine empirically the relationship between the banking industry and economic growth in the State of Qatar. We adopt a vector error correction model (VECM) along with Granger causality tests to address the long-run and short-run relationship between the banking sector and economic growth. The results are expected to give policymakers directions for strategies conducive to boosting development and achieving the targeted economic growth under current conditions.
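
A minimal sketch of the kind of VECM/Granger analysis described, using statsmodels on placeholder series (the variable choices, lag order, and deterministic terms are assumptions, not the paper's specification):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 120
gdp = np.cumsum(rng.normal(0.5, 1.0, n))            # I(1) proxy for (log) GDP
credit = gdp + rng.normal(0.0, 0.5, n)              # cointegrated proxy for bank credit
data = pd.DataFrame({"gdp": gdp, "credit": credit})

rank = select_coint_rank(data, det_order=0, k_ar_diff=2)   # Johansen trace test
print("selected cointegration rank:", rank.rank)

# guard against a zero rank on unlucky draws of the synthetic data
vecm = VECM(data, k_ar_diff=2, coint_rank=max(rank.rank, 1), deterministic="ci").fit()
print("speed-of-adjustment coefficients (alpha):\n", vecm.alpha)

# short-run Granger causality from banking development (credit) to growth (gdp)
gc = grangercausalitytests(data[["gdp", "credit"]].diff().dropna(), maxlag=2)
```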

Keywords: economic growth, banking sector, Qatar, vector error correction model, VECM

Procedia PDF Downloads 147
2156 Virtual Assessment of Measurement Error in the Fractional Flow Reserve

Authors: Keltoum Chahour, Mickael Binois

Abstract:

Due to a lack of standardization during the invasive fractional flow reserve (FFR) procedure, the index is subject to many sources of uncertainty. In this paper, we investigate, through simulation, the effect of the FFR device position and configuration on the obtained FFR value. For this purpose, we use computational fluid dynamics (CFD) in a 3D domain corresponding to a diseased arterial portion. The FFR pressure sensor is introduced inside it with a given length and coefficient of bending to capture the FFR value. To overcome the computational limitations (the simulation takes about 2 h 15 min for one FFR value), we generate a Gaussian process (GP) model for FFR prediction. The GP model shows good accuracy and demonstrates the effective measurement error created by the random configuration of the pressure sensor.

Keywords: fractional flow reserve, Gaussian processes, computational fluid dynamics, drift

Procedia PDF Downloads 102
2155 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images

Authors: Elham Bagheri, Yalda Mohsenzadeh

Abstract:

Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimations. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features that are common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, attempting to create a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, which is quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. The reconstruction error of each image, the error reduction, and its distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate that there is a strong correlation between the reconstruction error and the distinctiveness of images and their memorability scores. This suggests that images with distinctive features that challenge the autoencoder's compressive capacities are inherently more memorable. There is also a negative correlation between memorability scores and the reduction in reconstruction error achieved relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
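
A compact sketch of the analysis pipeline is shown below, with a tiny untrained autoencoder and random placeholder images and scores standing in for the VGG-based model and the MemCat data; it only illustrates how per-image reconstruction error, latent-space distinctiveness, and their correlation with memorability scores can be computed.

```python
import torch
import torch.nn as nn
from scipy.stats import spearmanr

class TinyAE(nn.Module):
    """Small convolutional autoencoder (untrained here, for illustration only)."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 16, 4, 2, 1), nn.ReLU(),
                                 nn.Conv2d(16, 32, 4, 2, 1), nn.ReLU(), nn.Flatten())
        self.dec = nn.Sequential(nn.Unflatten(1, (32, 16, 16)),
                                 nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),
                                 nn.ConvTranspose2d(16, 3, 4, 2, 1), nn.Sigmoid())

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

images = torch.rand(200, 3, 64, 64)          # placeholder image batch (not MemCat)
mem_scores = torch.rand(200)                 # placeholder memorability scores

model = TinyAE().eval()
with torch.no_grad():
    recon, z = model(images)
    recon_err = ((recon - images) ** 2).mean(dim=(1, 2, 3))    # per-image MSE
    dists = torch.cdist(z, z)                                  # pairwise latent distances
    dists.fill_diagonal_(float("inf"))
    distinctiveness = dists.min(dim=1).values                  # nearest-neighbour distance

print("rho(recon error, memorability):", spearmanr(recon_err, mem_scores)[0])
print("rho(distinctiveness, memorability):", spearmanr(distinctiveness, mem_scores)[0])
```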

Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception

Procedia PDF Downloads 48
2154 Financial Inclusion for Inclusive Growth in an Emerging Economy

Authors: Godwin Chigozie Okpara, William Chimee Nwaoha

Abstract:

The paper sets out to show how a financial inclusion index can be calculated and investigates the impact of inclusive finance on inclusive growth in an emerging economy. In light of these objectives, the chi-wins method was used to calculate indexes of financial inclusion, while cointegration and an error correction model were used to evaluate the impact of financial inclusion on inclusive growth. The analysis revealed that financial inclusion, while having a long-run relationship with GDP growth, is an insignificant function of the growth of the economy. The speed of adjustment is correctly signed and significant. On the basis of these results, the researchers call for tireless efforts by government and the banking sector to promote financial inclusion in developing countries.

Keywords: chi-wins index, co-integration, error correction model, financial inclusion

Procedia PDF Downloads 629
2153 Tectonic Inversion Manifestations in the Jebel Rouas-Ruissate (Northeastern Tunisia)

Authors: Aymen Arfaoui, Abdelkader Soumaya, Noureddine Ben Ayed

Abstract:

The Rouas-Ruissate is a part of the Tunisian Atlas system. Analysis of the collected field data allowed us to propose a new interpretation for the main structural features of this region. Tectonic inversions along the NE-SW trending Zaghouan fault and halokinetic movements are the main factors controlling the architecture and geometry of the Jebel Rouas-Ruissate. The presence of breccias, slumps, and synsedimentary faults along NW-SE and N-S trending major faults shows that they were active during the Mesozoic extensional episodes. During the Cenozoic inversion period, this structure was shaped into imbricate fans formed by NE-SW trending thrust faults. The angular unconformity between upper Eocene-Oligocene and Cretaceous deposits reveals a compressive Eocene tectonic phase (called the Pyrenean phase) that occurred during the Paleocene-lower Eocene. The Triassic salts acted as a decollement level in the NE-SW trending fault-propagation-fold model of the Rouas-Ruissate. The inversion of fault-slip data along the main regional fault zones reveals a coexistence of strike-slip and reverse-fault stress regimes with a NW-SE maximum horizontal stress (SHmax) characterizing the Alpine compressive phase (Upper Tortonian).

Keywords: Tunisia, imbricate fans, Triassic decollement level, fault propagation fold

Procedia PDF Downloads 128
2152 The Underestimate of the Annual Maximum Rainfall Depths Due to Coarse Time Resolution Data

Authors: Renato Morbidelli, Carla Saltalippi, Alessia Flammini, Tommaso Picciafuoco, Corrado Corradini

Abstract:

A considerable part of the rainfall data used in hydrological practice is available in aggregated form within constant time intervals. This can produce undesirable effects, like the underestimate of the annual maximum rainfall depth, Hd, associated with a given duration, d, which is the basic quantity in the development of rainfall depth-duration-frequency relationships and in determining whether climate change is producing effects on extreme event intensities and frequencies. The errors in the evaluation of Hd from data characterized by a coarse temporal aggregation, ta, and a procedure to reduce the non-homogeneity of the Hd series are investigated here. Our results indicate that: 1) in the worst conditions, for d=ta, the estimation of a single Hd value can be affected by an underestimation error of up to 50%, while the average underestimation error for a series with at least 15-20 Hd values is less than or equal to 16.7%; 2) the underestimation error values follow an exponential probability density function; 3) each very long time series of Hd contains many underestimated values; 4) relationships between the non-dimensional ratio ta/d and the average underestimate of Hd, derived from continuous rainfall data observed at many stations in Central Italy, may overcome this issue; 5) these equations should make it possible to improve the Hd estimates and the associated depth-duration-frequency curves, at least in areas with similar climatic conditions.
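
The aggregation effect itself is easy to reproduce: the sketch below compares the maximum d-duration depth obtained from a sliding window over fine-resolution data with the maximum obtained from fixed clock-interval totals (ta = d). The synthetic 1-minute record is a placeholder, not the Central Italy data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
minutes = 365 * 24 * 60
rain = rng.gamma(0.05, 2.0, minutes)                 # toy 1-minute rainfall depths (mm)
series = pd.Series(rain, index=pd.date_range("2020-01-01", periods=minutes, freq="min"))

d = 60  # duration of interest, in minutes
hd_true = series.rolling(d).sum().max()              # sliding-window ("continuous") maximum
hd_aggr = series.resample(f"{d}min").sum().max()     # maximum from fixed clock-interval totals

underestimate = 100 * (1 - hd_aggr / hd_true)
print(f"Hd (sliding) = {hd_true:.2f} mm, Hd (aggregated) = {hd_aggr:.2f} mm, "
      f"underestimate = {underestimate:.1f}%")
```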

Keywords: central Italy, extreme events, rainfall data, underestimation errors

Procedia PDF Downloads 166
2151 MCERTL: Mutation-Based Correction Engine for Register-Transfer Level Designs

Authors: Khaled Salah

Abstract:

In this paper, we present MCERTL (mutation-based correction engine for RTL designs), an automatic error correction technique based on mutation analysis. A mutation-based correction methodology is proposed to automatically fix erroneous RTL designs. The proposed strategy combines the processes of mutation and assertion-based localization. The erroneous statements are mutated to produce possible fixes for the failed RTL code. A concurrent mutation engine is proposed to mitigate the computational cost of running sequential mutant operators. The proposed methodology is evaluated against several benchmarks. The experimental results demonstrate that our proposed method enables us to automatically locate and correct multiple bugs in reasonable time.
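
A toy sketch of the mutation step only (assertion-based localization and candidate ranking are omitted, and the operator groups are assumptions, not the paper's mutant operators) might look like this, generating single-operator mutants of a suspicious Verilog statement by textual replacement:

```python
import itertools

# operator groups used for single-point mutations (illustrative assumption)
OPERATOR_GROUPS = [["+", "-"], ["&", "|", "^"], ["==", "!="], ["<", ">"]]

def mutants(stmt: str):
    """Yield single-point mutants of an RTL statement via textual operator replacement."""
    for group in OPERATOR_GROUPS:
        for op, alt in itertools.permutations(group, 2):
            if op in stmt:
                yield stmt.replace(op, alt, 1)

for m in mutants("assign carry_out = a & b;"):
    print(m)        # candidate fixes: "a | b", "a ^ b"
```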

Keywords: bug localization, error correction, mutation, mutants

Procedia PDF Downloads 253
2150 An Application of Modified M-out-of-N Bootstrap Method to Heavy-Tailed Distributions

Authors: Hannah F. Opayinka, Adedayo A. Adepoju

Abstract:

This study is an extension of a prior study on the modification of the existing m-out-of-n (moon) bootstrap method for heavy-tailed distributions in which modified m-out-of-n (mmoon) was proposed as an alternative method to the existing moon technique. In this study, both moon and mmoon techniques were applied to two real income datasets which followed Lognormal and Pareto distributions respectively with finite variances. The performances of these two techniques were compared using Standard Error (SE) and Root Mean Square Error (RMSE). The findings showed that mmoon outperformed moon bootstrap in terms of smaller SEs and RMSEs for all the sample sizes considered in the two datasets.
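
For reference, a minimal sketch of the baseline m-out-of-n bootstrap standard error on a heavy-tailed sample is given below; the authors' modified (mmoon) variant is not specified in the abstract, so only the plain moon scheme with the usual sqrt(m/n) rescaling is shown, on simulated Pareto-like incomes.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
income = (rng.pareto(1.8, n) + 1) * 10_000           # heavy-tailed placeholder income data

def moon_bootstrap_se(x, m, B=2000, stat=np.mean, rng=rng):
    """m-out-of-n bootstrap standard error of `stat`, rescaled by sqrt(m/n)."""
    n = len(x)
    reps = np.array([stat(rng.choice(x, size=m, replace=True)) for _ in range(B)])
    return np.sqrt(m / n) * reps.std(ddof=1)

for m in (n, int(n ** 0.75), int(n ** 0.5)):          # m = n recovers the ordinary bootstrap
    print(f"m = {m:5d}  SE of the mean ~ {moon_bootstrap_se(income, m):.2f}")
```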

Keywords: Bootstrap, income data, lognormal distribution, Pareto distribution

Procedia PDF Downloads 163
2149 Improvement of Model for SIMMER Code for SFR Corium Relocation Studies

Authors: A. Bachrata, N. Marie, F. Bertrand, J. B. Droin

Abstract:

An in-depth understanding of severe accident propagation in Generation IV nuclear reactors is important so that appropriate risk management can be undertaken early in their design process. This paper is focused on model improvements in the SIMMER code in order to perform studies of severe accident mitigation for the Sodium Fast Reactor. During the design process of the mitigation devices dedicated to the extraction of molten fuel from the core region, the molten fuel propagation from the core up to the core catcher has to be studied. To this aim, analytical as well as complex thermo-hydraulic simulations with the SIMMER-III code are performed. The studies presented in this paper focus on the physical phenomena and associated physical models that influence the corium relocation. Firstly, the molten pool heat exchange with surrounding structures is analysed, since it directly influences the instant of rupture of the dedicated tubes favouring corium relocation for mitigation purposes. After the corium penetration into the mitigation tubes, the fuel-coolant interactions result in the formation of a debris bed. Analyses of debris bed fluidization as well as sinking into a fluid are presented in this paper.

Keywords: corium, mitigation tubes, SIMMER-III, sodium fast reactor

Procedia PDF Downloads 357
2148 Comparison between Separable and Irreducible Goppa Code in McEliece Cryptosystem

Authors: Newroz Nooralddin Abdulrazaq, Thuraya Mahmood Qaradaghi

Abstract:

The McEliece cryptosystem is an asymmetric type of cryptography based on error correcting codes. The classical McEliece used an irreducible binary Goppa code, which has been considered unbreakable until now, especially with the parameters [1024, 524, 101], but it suffers from a large public key matrix, which makes it difficult to use practically. In this work, irreducible and separable Goppa codes are introduced. The irreducible and separable Goppa codes used have flexible parameters and dynamic error vectors. A comparison between separable and irreducible Goppa codes in the McEliece cryptosystem has been carried out. For the encryption stage, to obtain a better basis for comparison, two types of test were chosen: in the first, the random message is constant while the parameters of the Goppa code are changed; in the second, the parameters of the Goppa code are constant (m=8 and t=10) while the random message is changed. The results show that the time needed to calculate the parity check matrix for the separable type is higher than for the irreducible McEliece cryptosystem, which is expected because an extra parity check matrix for g2(z) must be calculated in the decryption process for the separable type, while the time needed to execute the error locator in the decryption stage is better for the separable type than for the irreducible type. The proposed implementation was done in Visual Studio C#.

Keywords: McEliece cryptosystem, Goppa code, separable, irreducible

Procedia PDF Downloads 241
2147 Implication of the Exchange-Correlation on Electromagnetic Wave Propagation in Single-Wall Carbon Nanotubes

Authors: A. Abdikian

Abstract:

Using the linearized quantum hydrodynamic model (QHD) and considering the role of the quantum parameter (Bohm's potential) and the electron exchange-correlation potential in conjunction with Maxwell's equations, electromagnetic wave propagation in single-walled carbon nanotubes was studied. The electronic excitations are described. By solving the mentioned equations with appropriate boundary conditions and by assuming low-frequency electromagnetic waves, two general expressions for the dispersion relations are derived for the transverse magnetic (TM) and transverse electric (TE) modes, respectively. The dispersion relations are analyzed numerically, and it was found that the dependence of the dispersion curves on the exchange-correlation effects (which have been ignored in previous works) at low frequency is limited. Moreover, it was found that the asymptotic behaviors of the TE and TM modes are similar in single-wall carbon nanotubes (SWCNTs). The results show that including the electron exchange-correlation potential modifies these phenomena and extends the validity range of the QHD model. The results can be important in the study of collective phenomena in nanostructures.

Keywords: transverse magnetic, transverse electric, quantum hydrodynamic model, electron exchange-correlation potential, single-wall carbon nanotubes

Procedia PDF Downloads 427
2146 Effect of Wind and Humidity on Microwave Links in West North Libya

Authors: M. S. Agha, A. M. Eshahiry, S. A. Aldabbar, Z. M. Alshahri

Abstract:

Microwave propagation is affected by rain and dust particles through signal attenuation and depolarization. Computation of these effects requires knowledge of the propagation characteristics of microwave and millimeter wave energy in the climate conditions of the studied region. This paper presents the effect of wind and humidity on wireless communications, such as microwave links, in the northwestern region of Libya (Al-Khoms), along with an experimental procedure to study these effects. The experimental procedure is carried out on three selected antenna towers (the Nagaza, Al-Khoms center, and Al-Khoms gateway stations) to determine the attenuation loss per unit length and the change in cross-polarization discrimination (XPD) over the coverage of the studied region. It is required to collect the dust particles carried by the wind, measure the particle size distribution (PSD), calculate the concentration, and carry out chemical analysis of the contents; the dielectric constant can then be calculated. The results showed that humidity, dust, antenna height, and visibility affect the complex permittivity, and thereby both attenuation and phase shift, which has to be taken into account in the communication power budget.

Keywords: attenuation, de-polarization, scattering, transmission loss

Procedia PDF Downloads 129
2145 Selection of Rayleigh Damping Coefficients for Seismic Response Analysis of Soil Layers

Authors: Huai-Feng Wang, Meng-Lin Lou, Ru-Lin Zhang

Abstract:

One good analysis method in seismic response analysis is direct time integration, which widely adopts Rayleigh damping. An approach is presented for the selection of Rayleigh damping coefficients to be used in seismic analyses to produce a response that is consistent with the modal damping response. In the presented approach, an expression for the error in peak response, obtained through the complete quadratic combination method, is set up in terms of the Rayleigh damping coefficients, and the coefficients are then produced by minimizing the error. Two finite element models of soil layers, excited by 28 seismic waves, were used to demonstrate the feasibility and validity.
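
For context, the classical two-frequency fit against which such error-minimising selections are usually compared can be sketched as follows; the target damping ratio and the two control frequencies are placeholders, and this is the textbook rule, not the authors' method.

```python
import numpy as np

def rayleigh_coefficients(f_i, f_j, xi=0.05):
    """Return (alpha, beta) of C = alpha*M + beta*K so that the Rayleigh damping ratio
    xi(w) = alpha/(2w) + beta*w/2 equals xi at frequencies f_i and f_j (Hz)."""
    w_i, w_j = 2 * np.pi * f_i, 2 * np.pi * f_j
    alpha = 2 * xi * w_i * w_j / (w_i + w_j)
    beta = 2 * xi / (w_i + w_j)
    return alpha, beta

alpha, beta = rayleigh_coefficients(1.0, 10.0, xi=0.05)   # control modes are assumptions
print(f"alpha = {alpha:.4f} 1/s, beta = {beta:.6f} s")

# resulting damping ratio over frequency: under-damped between the control modes,
# over-damped outside them, which is what an error-minimising selection tries to balance
for f in (0.5, 1.0, 3.0, 10.0, 20.0):
    w = 2 * np.pi * f
    print(f"f = {f:5.1f} Hz  xi(f) = {alpha / (2 * w) + beta * w / 2:.4f}")
```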

Keywords: Rayleigh damping, modal damping, damping coefficients, seismic response analysis

Procedia PDF Downloads 416
2144 Design of a Standard Weather Data Acquisition Device for the Federal University of Technology, Akure Nigeria

Authors: Isaac Kayode Ogunlade

Abstract:

Data acquisition (DAQ) is the process by which physical phenomena from the real world are transformed into electrical signals that are measured and converted into a digital format for processing, analysis, and storage by a computer. The DAQ is designed using a PIC18F4550 microcontroller communicating with a personal computer (PC) through USB (Universal Serial Bus). The research deployed knowledge of data acquisition systems and embedded systems to develop a weather data acquisition device using an LM35 sensor to measure weather parameters, together with an artificial intelligence approach (Artificial Neural Network, ANN) and a statistical approach (Autoregressive Integrated Moving Average, ARIMA) to predict precipitation (rainfall). The device was placed beside a standard device in the Department of Meteorology, Federal University of Technology, Akure (FUTA) for performance evaluation. Both devices (standard and designed) were operated for 180 days under the same atmospheric conditions for data collection (temperature, relative humidity, and pressure). The acquired data were trained in the MATLAB R2012b environment using ANN and ARIMA to predict precipitation (rainfall). Root Mean Square Error (RMSE), Mean Absolute Error (MAE), R-squared (R2), and Mean Percentage Error (MPE) were deployed as standardized evaluation measures of the models' performance in predicting precipitation. The results from the operation of the developed device show that the device has an efficiency of 96% and is also compatible with personal computers (PCs) and laptops. The simulation results for the acquired data show that the ANN model's precipitation (rainfall) prediction for two months (May and June 2017) has a disparity error of 1.59%, while that of ARIMA is 2.63%. The device will be useful in research, practical laboratories, and industrial environments.
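
A minimal sketch of the statistical branch and the error metrics is given below (the ANN branch is omitted, and the series, the ARIMA order, and the train/test split are placeholders rather than the FUTA data).

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
series = np.abs(rng.normal(5, 2, 180)).cumsum() * 0.1 + rng.normal(0, 0.5, 180)  # placeholder daily series
train, test = series[:150], series[150:]

fit = ARIMA(train, order=(1, 1, 1)).fit()            # order (1, 1, 1) is an assumption
pred = fit.forecast(steps=len(test))

rmse = np.sqrt(np.mean((test - pred) ** 2))          # Root Mean Square Error
mae = np.mean(np.abs(test - pred))                   # Mean Absolute Error
mpe = np.mean((test - pred) / test) * 100            # Mean Percentage Error
print(f"RMSE = {rmse:.3f}, MAE = {mae:.3f}, MPE = {mpe:.2f}%")
```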

Keywords: data acquisition system, design device, weather development, predict precipitation and (FUTA) standard device

Procedia PDF Downloads 66
2143 Classification of Myoelectric Signals Using Multilayer Perceptron Neural Network with Back-Propagation Algorithm in a Wireless Surface Myoelectric Prosthesis of the Upper-Limb

Authors: Kevin D. Manalo, Jumelyn L. Torres, Noel B. Linsangan

Abstract:

This paper focuses on a wireless myoelectric prosthesis of the upper limb that uses a multilayer perceptron neural network with back-propagation. The algorithm is widely used in pattern recognition. The network can be trained on signals and then perform a function on its own based on sample inputs. The paper makes use of the neural network to classify the electromyography signal produced by the muscle at the amputee's skin surface. The gathered data are passed to the classification stage wirelessly through Zigbee technology. The signal is classified and the network trained to perform the arm positions of the prosthesis. Through programming in Verilog on a Field Programmable Gate Array (FPGA) with Zigbee, the EMG signals are acquired and used for classification. The classified signal is used to produce the corresponding hand movements (Open, Pick, Hold, and Grip) through the Zigbee controller. The data are then processed through the MLP neural network using MATLAB, which is then used for the surface myoelectric prosthesis. A Z-test is used to assess the output acquired from the neural network.
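
As a rough sketch of the classification stage only, the snippet below trains a small multilayer perceptron with back-propagation to map feature vectors to the four hand movements; the random feature vectors, the feature dimension, and the network size are assumptions standing in for the real surface-EMG data and the MATLAB implementation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
classes = ["open", "pick", "hold", "grip"]
# 200 placeholder 8-dimensional EMG feature vectors per movement, one cluster per class
X = np.vstack([rng.normal(loc=i, scale=0.8, size=(200, 8)) for i in range(4)])
y = np.repeat(classes, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
mlp.fit(X_tr, y_tr)                                   # back-propagation training
print(f"hold-out accuracy: {mlp.score(X_te, y_te):.2f}")
```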

Keywords: field programmable gate array, multilayer perceptron neural network, verilog, zigbee

Procedia PDF Downloads 368
2142 Influence of Optical Fluence Distribution on Photoacoustic Imaging

Authors: Mohamed K. Metwally, Sherif H. El-Gohary, Kyung Min Byun, Seung Moo Han, Soo Yeol Lee, Min Hyoung Cho, Gon Khang, Jinsung Cho, Tae-Seong Kim

Abstract:

Photoacoustic imaging (PAI) is a non-invasive and non-ionizing imaging modality that combines the absorption contrast of light with ultrasound resolution. A laser is used to deposit optical energy into a target (i.e., optical fluence). Consequently, the target temperature rises, and thermal expansion then occurs, which generates a PA signal. In general, most image reconstruction algorithms for PAI assume uniform fluence within an imaging object. However, it is known that the optical fluence distribution within the object is non-uniform. This could affect the reconstruction of PA images. In this study, we have investigated the influence of the optical fluence distribution on PA back-propagation imaging using the finite element method. The uniform fluence was simulated as a triangular waveform within the object of interest. The non-uniform fluence distribution was estimated by solving light propagation within a tissue model via the Monte Carlo method. The results show that the PA signal in the case of non-uniform fluence is 23% wider than in the uniform case. The frequency spectrum of the PA signal due to the non-uniform fluence is missing some high-frequency components in comparison to the uniform case. Consequently, the reconstructed image with the non-uniform fluence exhibits a strong smoothing effect.

Keywords: finite element method, fluence distribution, Monte Carlo method, photoacoustic imaging

Procedia PDF Downloads 358
2141 Hybrid Localization Schemes for Wireless Sensor Networks

Authors: Fatima Babar, Majid I. Khan, Malik Najmus Saqib, Muhammad Tahir

Abstract:

This article provides range-based improvements over a well-known single-hop range-free localization scheme, Approximate Point in Triangulation (APIT), by proposing an energy-efficient barycentric-coordinate-based Point-In-Triangulation (PIT) test along with PIT-based trilateration. These improvements result in energy efficiency, reduced localization error, and improved localization coverage compared to APIT and its variants. Moreover, we propose to embed Received Signal Strength Indication (RSSI) based distance estimation in DV-Hop, which is a multi-hop localization scheme. The proposed localization algorithm achieves energy efficiency and reduced localization error compared to DV-Hop and its available improvements. Furthermore, a hybrid multi-hop localization scheme is also proposed that utilizes the barycentric-coordinate-based PIT test and both range-based (RSSI) and range-free (hop count) techniques for distance estimation. Our experimental results provide evidence that the proposed hybrid multi-hop localization scheme results in a two- to five-fold reduction in localization error compared to DV-Hop and its variants, at reduced energy requirements.
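
A minimal sketch of the two range-based ingredients mentioned above, RSSI-based distance estimation via a log-distance path-loss model and least-squares trilateration, is shown below; the path-loss parameters and anchor layout are assumptions, and the hybrid PIT/DV-Hop logic itself is not reproduced.

```python
import numpy as np

def rssi_to_distance(rssi, rssi_d0=-40.0, d0=1.0, n=2.5):
    """Log-distance path-loss model; rssi_d0 (RSSI at d0) and exponent n are assumptions."""
    return d0 * 10 ** ((rssi_d0 - rssi) / (10 * n))

def trilaterate(anchors, dists):
    """Linearised least-squares position estimate from three or more anchors."""
    (x0, y0), d0 = anchors[0], dists[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0 ** 2 - di ** 2 + xi ** 2 - x0 ** 2 + yi ** 2 - y0 ** 2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

anchors = [(0.0, 0.0), (30.0, 0.0), (0.0, 30.0), (30.0, 30.0)]     # metres, assumed layout
true_pos = np.array([12.0, 7.0])
true_dists = [np.linalg.norm(true_pos - np.array(a)) for a in anchors]

rssi = [-40.0 - 10 * 2.5 * np.log10(d) for d in true_dists]        # simulated RSSI readings
est_dists = [rssi_to_distance(r) for r in rssi]
print("estimated position:", trilaterate(anchors, est_dists))
```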

Keywords: Localization, Trilateration, Triangulation, Wireless Sensor Networks

Procedia PDF Downloads 444
2140 New HCI Design Process Education

Authors: Jongwan Kim

Abstract:

Human Computer Interaction (HCI) is a subject covering the study, planning, and design of interactions between humans and computers. The prevalent use of digital mobile devices is increasing the need for education and research on HCI. This work is focused on a new education method geared towards reducing errors while developing application programs, which incorporates role-changing brainstorming techniques during the HCI design process. The proposed method has been applied to a capstone design course in the last spring semester. Students discovered examples of UI design improvements, and their ability to discover and reduce errors was promoted. A UI design improvement, PC voice control for people with disabilities as an assistive technology exemplar, is presented. The improvement in these students' design ability will be helpful in real field work.

Keywords: HCI, design process, error reducing education, role-changing brainstorming, assistive technology

Procedia PDF Downloads 471
2139 Income-Consumption Relationships in Pakistan (1980-2011): A Cointegration Approach

Authors: Himayatullah Khan, Alena Fedorova

Abstract:

The present paper analyses the income-consumption relationship in Pakistan using annual time series data from 1980-81 to 2010-11. The paper uses the Augmented Dickey-Fuller test to check for a unit root and stationarity in these two time series. The paper finds that the two time series are nonstationary but stationary at their first differences. The Augmented Engle-Granger test and the Cointegrating Regression Durbin-Watson test imply that the two time series of consumption and income are cointegrated and that the long-run marginal propensity to consume is 0.88, given by the estimated (static) equilibrium relation. The paper also used the error correction mechanism to model the dynamic relationship. The purpose of the ECM is to indicate the speed of adjustment from the short-run equilibrium to the long-run equilibrium state. The results show that the MPC is equal to 0.93 and is highly significant. The coefficient of the Engle-Granger residuals is negative but insignificant. Statistically, the equilibrium error term is zero, which suggests that consumption adjusts to changes in GDP in the same period. The short-run changes in GDP have a positive impact on short-run changes in consumption. The paper concludes that we may interpret 0.93 as the short-run MPC. The pairwise Granger causality test shows that GDP and consumption Granger-cause each other.
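
A minimal sketch of the Engle-Granger two-step procedure described above, on placeholder series rather than the Pakistani data, is given below: the static long-run regression provides the equilibrium error, whose lag then enters the short-run (ECM) regression.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(5)
n = 31                                          # annual observations, 1980-81 to 2010-11
gdp = np.cumsum(rng.normal(0.04, 0.02, n))      # placeholder log GDP, I(1)
cons = 0.9 * gdp + rng.normal(0, 0.01, n)       # placeholder log consumption, cointegrated

print("ADF p-value, GDP in levels:", adfuller(gdp)[1])        # unit-root check

long_run = sm.OLS(cons, sm.add_constant(gdp)).fit()           # step 1: long-run relation
ect = long_run.resid                                           # equilibrium error term

d_cons, d_gdp = np.diff(cons), np.diff(gdp)
X = sm.add_constant(np.column_stack([d_gdp, ect[:-1]]))        # step 2: ECM regression
ecm = sm.OLS(d_cons, X).fit()
print("long-run MPC:", long_run.params[1])
print("short-run MPC:", ecm.params[1], " ECT coefficient:", ecm.params[2])
```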

Keywords: cointegrating regression, Augmented Dickey Fuller test, Augmented Engle-Granger test, Granger causality, error correction mechanism

Procedia PDF Downloads 390
2138 EMI Radiation Prediction and Final Measurement Process Optimization by Neural Network

Authors: Hussam Elias, Ninovic Perez, Holger Hirsch

Abstract:

EMC regulations worldwide are steadily being extended as the use of electronics in our daily lives increases more than ever. In this paper, we introduce a novel method to perform the final phase of electromagnetic compatibility (EMC) measurement and to reduce the required test time according to the norm EN 55032, by using a developed tool and a conventional neural network (CNN). The neural network was trained using real EMC measurements, which were performed in the semi-anechoic chamber (SAC) by CETECOM GmbH in Essen, Germany. To implement our proposed method, we wrote software to perform the radiated electromagnetic interference (EMI) measurements and to use the CNN to predict and determine the turntable position that yields the maximum radiation value.

Keywords: conventional neural network, electromagnetic compatibility measurement, mean absolute error, position error

Procedia PDF Downloads 173
2137 Sterilization of Potato Explants for in vitro Propagation

Authors: D. R. Masvodza, G. Coetzer, E. van der Watt

Abstract:

Microorganisms usually grow prolifically and may cause major problems in in vitro cultures. For in vitro propagation to be successful, explants need to be sterile. In order to determine the best sterilization method for potato explants of cv. Amerthyst, five sterilization methods were applied separately to 24 shoots. The first sterilization method was the use of 20% sodium hypochlorite with 1 ml Tween 20 for 15 minutes. The second, third and fourth sterilization methods were the immersion of explants in 70% ethanol in a beaker for either 30 seconds, 1 minute or 2 minutes, followed by 1% sodium hypochlorite with 1 ml Tween 20 for 5 minutes. For the control treatment, no chemicals were used. Finally, all the explants were rinsed three times with autoclaved distilled water and trimmed to 1-2 cm. Explants were then cultured on MS medium with 0.01 mg L-1 NAA and 0.1 mg L-1 GA3, supplemented with 2 mg L-1 D-calcium pantothenate. The trial was laid out as a completely randomized design, and each treatment combination was replicated 24 times. At 7, 14 and 21 days after culture, data on explant color, survival, and presence or absence of contamination were recorded. The best results were obtained when 20% sodium hypochlorite was used with 1 ml Tween 20 for 15 minutes, which is sterilization method 1. Method 2 was comparable to method 1 when explants were cultured in glass vessels. Explants in glass vessels were significantly less contaminated than explants in polypropylene vessels. Therefore, at times, ideal sterilization methods should be coupled with ideal culture conditions, such as good-quality culture vessels, rather than the addition of more stringent sterilants.

Keywords: culture containers, explants, sodium hypochlorite, sterilization

Procedia PDF Downloads 293
2136 Modelling and Management of Vegetal Pest Based On Case of Xylella Fastidiosa in Alicante

Authors: Maria Teresa Signes Pont, Jose Juan Cortes Plana

Abstract:

Our proposal provides suitable modelling of the spread of plant pests, and particularly of the propagation of Xylella fastidiosa in almond trees. We compared the impact of temperature and humidity on the propagation of Xylella fastidiosa in various subspecies, comparing the Balearic Islands and Alicante (Spain). Most sharpshooter and spittlebug species showed peaks in population density during the months of higher mean temperature and relative humidity (April-October), except for the spittlebug Clastoptera sp. 1, whose adult population peaked in September-October (late summer and early autumn). The critical season runs from when they hatch from the eggs until the pre-reproductive season (January-April), when they expand. We focused on overwintering in the egg state; the eggs normally hatch in early March. The nymphs secrete a foam (mucilage) in which they live and which protects them from natural enemies and temperature changes and prevents them from drying out as long as the humidity is above 75%. The interaction between the life cycles of the vectors and the vegetation influences the food preferences of the vectors and is responsible for the general seasonal shift of the population from vegetation to trees and vice versa. In addition to the temperature maps, we have observed how humidity affects the spread of the pest Xylella fastidiosa (Xf).

Keywords: xylella fastidiosa, almond tree, temperature, humidity, environmental model

Procedia PDF Downloads 144
2135 Development of a Complete Single Jet Common Rail Injection System Gas Dynamic Model for Hydrogen Fueled Engine with Port Injection Feeding System

Authors: Mohammed Kamil, M. M. Rahman, Rosli A. Bakar

Abstract:

Modeling of the hydrogen fueled engine (H2ICE) injection system is a very important tool that can be used to explain or predict the effect of advanced injection strategies on combustion and emissions. In this paper, a common rail injection system (CRIS) is proposed for a four-stroke, four-cylinder hydrogen fueled engine with a port injection feeding system (PIH2ICE). For this system, a numerical one-dimensional gas dynamic model is developed considering a single injection event for each injector per cycle. One-dimensional flow equations in conservation form are used to simulate the wave propagation phenomenon throughout the CR (accumulator). Using this model, the effect of the common rail on the injection system characteristics is clarified. These characteristics include rail pressure, sound velocity, rail mass flow rate, injected mass flow rate, and pressure drop across the injectors. The interaction effects of operational conditions (engine speed and rail pressure) and geometrical features (injector hole diameter) are illustrated, and the required compromise solutions are highlighted. The CRIS is shown to be a promising enhancement for the PIH2ICE.
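
The abstract does not state the governing equations explicitly; the generic one-dimensional gas-dynamic system in conservation form that such rail models typically discretise reads as follows (a LaTeX sketch, with S collecting source terms such as wall friction, heat transfer, and injector in-/out-flow, and pressure closed by an equation of state for the gas):

```latex
% generic 1-D gas-dynamic (Euler) equations in conservation form; the specific closure
% and source terms used in the paper are not given in the abstract
\frac{\partial \mathbf{U}}{\partial t} + \frac{\partial \mathbf{F}(\mathbf{U})}{\partial x} = \mathbf{S},
\qquad
\mathbf{U} = \begin{pmatrix} \rho \\ \rho u \\ \rho E \end{pmatrix},
\qquad
\mathbf{F}(\mathbf{U}) = \begin{pmatrix} \rho u \\ \rho u^{2} + p \\ u\,(\rho E + p) \end{pmatrix},
```

where ρ, u, p, and E are the gas density, velocity, pressure, and specific total energy along the rail axis.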

Keywords: common rail, hydrogen engine, port injection, wave propagation

Procedia PDF Downloads 397
2134 A Joint Possibilistic-Probabilistic Tool for Load Flow Uncertainty Assessment-Part II: Case Studies

Authors: Morteza Aien, Masoud Rashidinejad, Mahmud Fotuhi-Firuzabad

Abstract:

Power systems are innately uncertain systems. To deal with such uncertain systems, robust uncertainty assessment tools are needed. This paper examines, through case studies, the uncertainty assessment formulation of the load flow (LF) problem considering different kinds of uncertainties, developed in its companion paper. The proposed methodology is based on evidence theory and the joint propagation of possibilistic and probabilistic uncertainties. The load and wind power generation are considered as probabilistic uncertain variables, and the electric vehicles (EVs) and gas turbine distributed generation (DG) units are considered as possibilistic uncertain variables. The cumulative distribution function (CDF) of the system output parameters obtained by the pure probabilistic method lies within the belief and plausibility functions obtained by the joint propagation approach. Furthermore, the imprecision in the DG parameters is explicitly reflected by the gap between the belief and plausibility functions. This gap, due to the epistemic uncertainty on the DG resource parameters, grows as the penetration level increases.

Keywords: electric vehicles, joint possibilistic- probabilistic uncertainty modeling, uncertain load flow, wind turbine generator

Procedia PDF Downloads 406
2133 Effects of Canned Cycles and Cutting Parameters on Hole Quality in Cryogenic Drilling of Aluminum 6061-6T

Authors: M. N. Islam, B. Boswell, Y. R. Ginting

Abstract:

The influence of canned cycles and cutting parameters on hole quality in cryogenic drilling has been investigated experimentally and analytically. A three-level, three-parameter experiment was conducted by using the design-of-experiment methodology. The three levels of independent input parameters were the following: for canned cycles—a chip-breaking canned cycle (G73), a spot drilling canned cycle (G81), and a deep hole canned cycle (G83); for feed rates—0.2, 0.3, and 0.4 mm/rev; and for cutting speeds—60, 75, and 100 m/min. The selected work and tool materials were aluminum 6061-6T and high-speed steel (HSS), respectively. For cryogenic cooling, liquid nitrogen (LN2) was used and was applied externally. The measured output parameters were the three widely used quality characteristics of drilled holes—diameter error, circularity, and surface roughness. Pareto ANOVA was applied for analyzing the results. The findings revealed that the canned cycle has a significant effect on diameter error (contribution ratio 44.09%) and small effects on circularity and surface finish (contribution ratio 7.25% and 6.60%, respectively). The best results for the dimensional accuracy and surface roughness were achieved by G81. G73 produced the best circularity results; however, for dimensional accuracy, it was the worst level.

Keywords: circularity, diameter error, drilling canned cycle, pareto ANOVA, surface roughness

Procedia PDF Downloads 261