Search results for: error floors
Paper Count: 2016


1806 Identification of Architectural Design Error Risk Factors in Construction Projects Using IDEF0 Technique

Authors: Sahar Tabarroki, Ahad Nazari

Abstract:

The design process is one of the key processes in the construction industry. Although architects have the responsibility to produce complete, accurate, and coordinated documents, architectural design is accompanied by many errors. A design error occurs when the constraints and requirements of the design are not satisfied. Errors are potentially costly and time-consuming to correct if not caught early in the design phase, and they become even more expensive once they reach the construction documents or the construction phase. The aim of this research is to identify the risk factors of architectural design errors. First, a literature review of the design process was conducted, and then a questionnaire was designed to identify the risks and risk factors. The questionnaire items were based on the “similar service description of study and supervision of architectural works” published by the “Vice Presidency of Strategic Planning & Supervision of I.R. Iran” as the basis of architects’ tasks. Second, the top 10 risks of architectural activities were identified. To determine the positions of possible causes of risks with respect to architectural activities, these activities were located in a design process modeled by the IDEF0 technique. The research was carried out by choosing a case study, checking the design drawings, interviewing its architect and client, and preparing a checklist in order to identify concrete examples of architectural design errors. The results revealed that activities such as “defining the current and future requirements of the project”, “studies and space planning,” and “time and cost estimation of the suggested solution” have a higher error risk than others. Moreover, the most important causes include “unclear goals of the client”, “time pressure imposed by the client”, and “lack of knowledge of architects about the requirements of end-users”. In detecting errors in the case study, the lack of standards and design criteria, and the lack of coordination among them, were barriers; nevertheless, “lack of coordination between the architectural design and the electrical and mechanical services”, “violation of standard dimensions and sizes in space design”, and “design omissions” were identified as the most important design errors.

Keywords: architectural design, design error, risk management, risk factor

Procedia PDF Downloads 130
1805 Feature Location Restoration for Under-Sampled Photoplethysmogram Using Spline Interpolation

Authors: Hangsik Shin

Abstract:

The purpose of this research is to restore the feature locations of under-sampled photoplethysmograms using spline interpolation and to investigate the feasibility of feature shape restoration. We obtained a 10 kHz-sampled photoplethysmogram and decimated it to generate under-sampled datasets with sampling frequencies of 5 kHz, 2.5 kHz, 1 kHz, 500 Hz, 250 Hz, 25 Hz, and 10 Hz. To investigate the restoration performance, we interpolated the under-sampled signals back to 10 kHz and then compared the feature locations with those of the original 10 kHz-sampled photoplethysmogram. The features were the upper and lower peaks of the photoplethysmogram waveform. The results showed that the time differences were dramatically decreased by interpolation; the location error was less than 1 ms for both feature types. In the 10 Hz-sampled case, the location error also decreased considerably; however, it remained above 10 ms.
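
The following is a minimal sketch of the restoration idea described above, using a synthetic pulse wave in place of a recorded PPG; the sampling rates, signal shape, and variable names are illustrative assumptions, not the study's actual data or code.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import find_peaks

fs_ref, fs_low = 10_000, 100                     # reference and under-sampled rates (Hz)
t_ref = np.arange(0, 2.0, 1 / fs_ref)
ppg = np.sin(2 * np.pi * 1.2 * t_ref) + 0.3 * np.sin(2 * np.pi * 2.4 * t_ref)  # stand-in pulse wave

step = fs_ref // fs_low                          # decimate to the lower sampling rate
t_low, ppg_low = t_ref[::step], ppg[::step]

ppg_rec = CubicSpline(t_low, ppg_low)(t_ref)     # spline interpolation back to 10 kHz

peaks_ref, _ = find_peaks(ppg)                   # upper-peak locations, reference vs restored
peaks_rec, _ = find_peaks(ppg_rec)
n = min(len(peaks_ref), len(peaks_rec))
err_ms = 1000 * np.abs(t_ref[peaks_ref[:n]] - t_ref[peaks_rec[:n]])
print(f"mean upper-peak location error: {err_ms.mean():.3f} ms")
```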

Keywords: peak detection, photoplethysmography, sampling, signal reconstruction

Procedia PDF Downloads 368
1804 Experimental Study of Different Types of Concrete in Uniaxial Compression Test

Authors: Khashayar Jafari, Mostafa Jafarian Abyaneh, Vahab Toufigh

Abstract:

Polymer concrete (PC) is a distinct concrete with superior characteristics in comparison to ordinary cement concrete. It has become well known for its applications in thin overlays, floors, and precast components. In this investigation, the mechanical properties of PC with different epoxy resin contents, ordinary cement concrete (OCC), and lightweight concrete (LC) have been studied under the uniaxial compression test. The study involves five types of concrete, with each type being tested four times. Their complete elastic-plastic behaviors were compared with one another through the measurement of volumetric strain during the tests. According to the results, PC showed higher strength, ductility, and energy absorption than OCC and LC.

Keywords: polymer concrete, ordinary cement concrete, lightweight concrete, uniaxial compression test, volumetric strain

Procedia PDF Downloads 395
1803 Termite Mound Floors: Ready-to-Use Ecological Materials

Authors: Yanné Etienne

Abstract:

Current climatic conditions call for the development and use of construction materials with a low or zero carbon footprint. The Far North Region of Cameroon has huge deposits of termite mounds. In this work, various tests, mainly geotechnical, physical, and mechanical, have been carried out on these soils with the aim of using them as construction materials. The tests gave the following values: uniformity coefficient (4.95), curvature coefficient (1.80), plasticity index (12.85%), optimum moisture content (6.70%), maximum dry density (2.05 g·cm⁻³), friction angle (14.07°), and cohesion (100.29 kN/m²). The results obtained show that termite mound soils, which are ecological materials, are plastic and water-stable and can be used for the production of load-bearing elements in construction.
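
As a small illustration of how the gradation coefficients above are obtained from a sieve curve, the sketch below back-calculates characteristic grain sizes that reproduce the reported values; the D10, D30, and D60 figures are hypothetical, since the abstract does not give them.

```python
# Gradation coefficients from characteristic grain sizes (mm) read off a sieve curve.
# The D-values below are hypothetical, chosen only to reproduce Cu ~ 4.95 and Cc ~ 1.80.
d10, d60 = 0.080, 0.396
d30 = (1.80 * d10 * d60) ** 0.5

cu = d60 / d10               # uniformity coefficient  Cu = D60 / D10
cc = d30 ** 2 / (d10 * d60)  # curvature coefficient   Cc = D30^2 / (D10 * D60)
print(f"Cu = {cu:.2f}, Cc = {cc:.2f}")
```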

Keywords: termite mound soil, ecological materials, building materials, geotechnical tests, physical and mechanical tests

Procedia PDF Downloads 187
1802 Maximum Initial Input Allowed to Iterative Learning Control Set-up Using Singular Values

Authors: Naser Alajmi, Ali Alobaidly, Mubarak Alhajri, Salem Salamah, Muhammad Alsubaie

Abstract:

Iterative Learning Control (ILC) is known as a control tool for overcoming periodic disturbances in repetitive systems. The technique is designed to drive the error signal toward zero as the number of operations increases. The learning process in this context is strongly dependent on the initial input, which, if selected properly, makes learning more effective than starting blind. ILC uses previously recorded execution data to update the input for the following execution/trial such that a reference trajectory is followed with high accuracy. Error convergence in ILC is generally highly dependent on the input applied to the plant at trial 1; thus, a good choice of initial input signal makes learning faster and, as a consequence, lets the error tend to zero faster as well. In the work presented here, an upper limit based on the singular value (SV) principle is derived for the initial input signal applied at trial 1, such that the system follows the reference in fewer trials without responding aggressively or exceeding the working envelope within which the system, a robot arm for example, is required to move. Simulation results are presented to illustrate the theory introduced in this paper.
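
A minimal sketch of the idea in the lifted (matrix) trial domain follows. The plant model, learning gain, and the exact singular-value bound used in the paper are not given in the abstract, so the system and the conservative bound below are illustrative assumptions only.

```python
import numpy as np

# Lifted description of one trial of a repetitive SISO system: y = G u, with G
# lower triangular and built from a hypothetical impulse response.
N = 50
h = 0.5 ** np.arange(1, N + 1)
G = np.array([[h[i - j] if i >= j else 0.0 for j in range(N)] for i in range(N)])

r = np.ones(N)          # reference trajectory
y_max = 1.5             # working-envelope limit on the output

# The largest singular value bounds the trial-1 output: ||y0|| <= sigma_max * ||u0||.
# Choosing ||u0|| <= y_max / sigma_max therefore keeps the first trial inside the
# envelope (a conservative, illustrative bound, not the paper's exact expression).
sigma_max = np.linalg.svd(G, compute_uv=False)[0]
u = (y_max / (sigma_max * np.sqrt(N))) * np.ones(N)   # admissible initial input

L_gain = 0.5
for trial in range(50):                               # ILC update u_{k+1} = u_k + L e_k
    e = r - G @ u
    u = u + L_gain * e
print("final trial error norm:", np.linalg.norm(r - G @ u))
```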

Keywords: initial input, iterative learning control, maximum input, singular values

Procedia PDF Downloads 244
1801 The Non-Existence of Perfect 2-Error Correcting Lee Codes of Word Length 7 over Z

Authors: Catarina Cruz, Ana Breda

Abstract:

Tiling problems have been capturing the attention of many mathematicians due to their real-life applications. In this study, we deal with tilings of Zⁿ by Lee spheres, where n is a positive integer; such tilings are related to error-correcting codes for the transmission of information over a noisy channel. We focus our attention on the question ‘for what values of n and r does the n-dimensional Lee sphere of radius r tile Zⁿ?’. It is conjectured (the Golomb-Welch conjecture) that the n-dimensional Lee sphere of radius r does not tile Zⁿ for n ≥ 3 and r ≥ 2. Here, we prove that it is not possible to tile Z⁷ with Lee spheres of radius 2, presenting a proof based on a combinatorial method and faithful to the geometric idea of the problem. The non-existence of such tilings has been studied by several authors, with the most difficult cases considered to be those in which the radius of the Lee spheres equals 2. The relation between these tilings and error-correcting codes is established by considering the center of a Lee sphere as a codeword and the other elements of the sphere as words which are decoded to the central codeword. When the Lee spheres of radius r centered at the elements of a set M ⊂ Zⁿ tile Zⁿ, M is a perfect r-error correcting Lee code of word length n over Z, denoted by PL(n, r). Our strategy to prove the non-existence of PL(7, 2) codes is based on the assumption that such a code M exists. Without loss of generality, we suppose that O ∈ M, where O = (0, ..., 0). In this sense, and taking into account that we are dealing with Lee spheres of radius 2, O covers all words which are distant two or fewer units from it. By the definition of a PL(7, 2) code, each word which is distant three units from O must be covered by a unique codeword of M. These words have to be covered by codewords which are distant five units from O. We prove the non-existence of PL(7, 2) codes by showing that it is not possible to cover all the referred words without superposition of Lee spheres whose centers are distant five units from O, contradicting the definition of a PL(7, 2) code. We achieve this contradiction by combining the cardinalities of particular subsets of codewords which are distant five units from O. There exists an extensive literature on codes in the Lee metric. Here, we present a new approach to prove the non-existence of PL(7, 2) codes.
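
To make the objects in the counting argument concrete, here is a small brute-force enumeration (an illustration added here, not part of the original proof) of the radius-2 Lee sphere in Z⁷ and of the shell of words at Lee distance exactly 3 from the origin.

```python
from itertools import product

def lee_ball_size(n, r):
    """Points of Z^n at Lee (L1) distance <= r from the origin, by brute force."""
    return sum(1 for x in product(range(-r, r + 1), repeat=n)
               if sum(abs(c) for c in x) <= r)

def lee_shell_size(n, d):
    """Points of Z^n at Lee distance exactly d from the origin."""
    return sum(1 for x in product(range(-d, d + 1), repeat=n)
               if sum(abs(c) for c in x) == d)

# Words a single codeword of a PL(7, 2) code decodes (2n^2 + 2n + 1 for n = 7):
print(lee_ball_size(7, 2))    # 113
# Words at distance exactly 3 from O, each of which must be covered by exactly
# one codeword at distance 5 from O - the sets the counting argument combines:
print(lee_shell_size(7, 3))   # 462
```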

Keywords: Golomb-Welch conjecture, Lee metric, perfect Lee codes, tilings

Procedia PDF Downloads 161
1800 Assessment of Time-variant Work Stress for Human Error Prevention

Authors: Hyeon-Kyo Lim, Tong-Il Jang, Yong-Hee Lee

Abstract:

For an operator in a nuclear power plant, human error is one of the most dreaded factors that may result in unexpected accidents. The probability of human error may be low, but the associated risk can be enormous. Thus, for accident prevention, it is indispensable to analyze the influence of any factor which may raise the possibility of human error. Over the past decades, many studies have shown that the performance of human operators may vary over time due to a number of factors. Among them, stress is known to be an indirect factor that may cause human errors and result in mental illness. To date, many assessment tools have been developed to assess the stress level of human workers. However, it remains questionable whether they can be used to anticipate human performance, which is related to the possibility of human error, because they were mainly developed from the viewpoint of mental health rather than industrial safety. The stress level of a person may go up or down with work time. In that sense, if these tools are to be applicable to safety, they should at least be able to assess the variation resulting from work time. Therefore, this study aimed to compare their applicability for safety purposes. More than 10 kinds of work stress tools were analyzed with reference to assessment items, assessment and analysis methods, and follow-up measures, which are known to be factors closely related to work stress. The results showed that most tools mainly placed their weights on some common organizational factors such as demands, supports, and relationships, in that order, and their weights were broadly similar. However, they failed to recommend practical solutions; instead, they merely advised setting up overall countermeasures within a PDCA cycle or risk management activities, which would be far from practical human error prevention. Thus, it was concluded that the application of stress assessment tools mainly developed for mental health is impractical for safety purposes with respect to human performance anticipation, and that the development of a new assessment tool is inevitable if one wants to assess stress levels in terms of human performance variation and accident prevention. As a practical countermeasure, this study therefore proposed a new scheme for assessing the work stress level of a human operator as it varies over work time, which is closely related to the possibility of human error.

Keywords: human error, human performance, work stress, assessment tool, time-variant, accident prevention

Procedia PDF Downloads 674
1799 Banking Sector Development and Economic Growth: Evidence from the State of Qatar

Authors: Fekri Shawtari

Abstract:

The banking sector plays a crucial role in the economic development of a country. As a financial intermediary, it is assigned a great role in economic growth and stability. This paper aims to empirically examine the relationship between the banking industry and economic growth in the State of Qatar. We adopt the vector error correction model (VECM) along with Granger causality to address the long-run and short-run relationships between the banking sector and economic growth. It is expected that the results will give policy directions to policymakers so that they can devise strategies conducive to boosting development and achieving the targeted economic growth in the current situation.
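
A minimal sketch of the estimation steps implied above (cointegration rank, VECM, Granger causality) is shown below using statsmodels; the two series are synthetic stand-ins for Qatari GDP and a banking-development proxy, and the lag orders are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank
from statsmodels.tsa.stattools import grangercausalitytests

# Hypothetical annual series: real GDP and a banking-development proxy
# (e.g., credit to the private sector). Replace with the actual Qatari data.
rng = np.random.default_rng(0)
n = 40
credit = np.cumsum(rng.normal(0.03, 0.02, n))
gdp = 0.8 * credit + np.cumsum(rng.normal(0.02, 0.01, n))
df = pd.DataFrame({"gdp": gdp, "credit": credit})

# Johansen-type test for the cointegration rank, then the VECM, whose alpha
# (adjustment) and beta (long-run) matrices carry the short-/long-run story.
rank = select_coint_rank(df, det_order=0, k_ar_diff=1).rank
res = VECM(df, k_ar_diff=1, coint_rank=max(rank, 1), deterministic="ci").fit()
print("adjustment coefficients:\n", res.alpha)
print("cointegrating vector:\n", res.beta)

# Short-run Granger causality on the differenced series.
grangercausalitytests(df[["gdp", "credit"]].diff().dropna(), maxlag=2)
```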

Keywords: economic growth, banking sector, Qatar, vector error correction model, VECM

Procedia PDF Downloads 170
1798 Influence of Slenderness Ratio on the Ductility of Reinforced Concrete Portal Structures

Authors: Kahil Amar, Nekmouche Aghiles, Titouche Billal, Hamizi Mohand, Hannachi Naceur Eddine

Abstract:

Ductility is an important parameter in the nonlinear behavior of reinforced concrete portal structures. It may be explained by the ability of the structure to deform in the plastic range; the geometric characteristics in plan may also influence the overall ductility. Our study is based on the influence of geometric slenderness (Lx/Ly) on the overall ductility of these structures. A five-story structure is analyzed while varying the column section among 900 cm², 1600 cm², and 1225 cm². A slight variation in global ductility is noticed as (Lx/Ly) varies; however, the column sections can satisfactorily control the plastic behavior of the buildings.
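
For reference, the global ductility compared above is usually quantified as the ratio of ultimate to yield displacement obtained from a pushover (capacity) curve; the short sketch below computes it for hypothetical displacement values, which are not taken from this study.

```python
# Global displacement ductility from an idealized pushover curve.
# The roof displacements below are hypothetical and only illustrate the ratio.
delta_yield = 0.045      # roof displacement at effective yield (m)
delta_ultimate = 0.171   # roof displacement at ultimate (m)

mu = delta_ultimate / delta_yield   # ductility factor  mu = delta_u / delta_y
print(f"global ductility mu = {mu:.2f}")
```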

Keywords: ductility, nonlinear behavior, pushover analysis, geometric slenderness, structural behavior

Procedia PDF Downloads 389
1797 Virtual Assessment of Measurement Error in the Fractional Flow Reserve

Authors: Keltoum Chahour, Mickael Binois

Abstract:

Due to a lack of standardization in the invasive fractional flow reserve (FFR) procedure, the index is subject to many sources of uncertainty. In this paper, we investigate, through simulation, the effect of the FFR device position and configuration on the obtained FFR value. For this purpose, we use computational fluid dynamics (CFD) in a 3D domain corresponding to a diseased arterial portion. The FFR pressure sensor is introduced inside it with a given length and coefficient of bending to capture the FFR value. To overcome the computational limitations (the simulation time is about 2 h 15 min for one FFR value), we generate a Gaussian process (GP) model for FFR prediction. The GP model indicates good accuracy and quantifies the effective measurement error created by the random configuration of the pressure sensor.
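
A minimal sketch of such a GP surrogate is given below with scikit-learn; the design points (sensor position and bending coefficient) and the FFR values are invented placeholders for the actual CFD runs, and the kernel choice is an assumption.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Each row is one CFD run: normalized sensor position along the vessel and its
# bending coefficient; y is the FFR value returned by that run (placeholders).
X = np.array([[0.10, 0.2], [0.25, 0.5], [0.40, 0.1], [0.55, 0.8],
              [0.70, 0.3], [0.85, 0.6], [0.95, 0.4]])
y = np.array([0.92, 0.88, 0.86, 0.81, 0.79, 0.77, 0.76])

kernel = ConstantKernel(1.0) * RBF(length_scale=[0.2, 0.3])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Predict FFR (with uncertainty) for configurations never simulated, at a cost
# of microseconds instead of roughly 2 h 15 min of CFD per value.
X_new = np.array([[0.50, 0.45], [0.90, 0.20]])
mean, std = gp.predict(X_new, return_std=True)
print(mean, std)
```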

Keywords: fractional flow reserve, Gaussian processes, computational fluid dynamics, drift

Procedia PDF Downloads 138
1796 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images

Authors: Elham Bagheri, Yalda Mohsenzadeh

Abstract:

Image memorability refers to the phenomenon whereby certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimations. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features that are common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, creating a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. The reconstruction error of each image, the error reduction, and the image's distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate that there is a strong correlation between the reconstruction error and the distinctiveness of images and their memorability scores. This suggests that images with distinctive features that challenge the autoencoder's compressive capacities are inherently more memorable. There is also a negative correlation between memorability scores and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
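
The analysis pipeline implied above can be sketched as follows; the arrays stand in for the real originals, reconstructions, latent codes from the fine-tuned autoencoder, and MemCat memorability scores, so all values (and the use of Spearman correlation) are illustrative assumptions.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.neighbors import NearestNeighbors

# Synthetic placeholders for the real data: flattened originals/reconstructions,
# latent codes, and memorability scores for n_images images.
rng = np.random.default_rng(1)
n_images, latent_dim = 500, 128
originals = rng.random((n_images, 64 * 64 * 3))
reconstructions = originals + rng.normal(0, 0.05, originals.shape)
latents = rng.normal(size=(n_images, latent_dim))
memorability = rng.uniform(0.4, 1.0, n_images)

# Reconstruction error: pixel-wise MSE between original and reconstruction.
recon_error = np.mean((originals - reconstructions) ** 2, axis=1)

# Distinctiveness: Euclidean distance to the nearest neighbor in latent space
# (column 0 of the distances is the point itself, hence column 1).
nn = NearestNeighbors(n_neighbors=2).fit(latents)
dists, _ = nn.kneighbors(latents)
distinctiveness = dists[:, 1]

# Rank correlations with the memorability score.
print(spearmanr(recon_error, memorability))
print(spearmanr(distinctiveness, memorability))
```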

Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception

Procedia PDF Downloads 92
1795 Financial Inclusion for Inclusive Growth in an Emerging Economy

Authors: Godwin Chigozie Okpara, William Chimee Nwaoha

Abstract:

The paper sets out to show how a financial inclusion index can be calculated and also investigates the impact of inclusive finance on inclusive growth in an emerging economy. In light of these objectives, the chi-wins method was used to calculate indexes of financial inclusion, while cointegration and an error correction model were used to evaluate the impact of financial inclusion on inclusive growth. The results of the analysis revealed that financial inclusion, while having a long-run relationship with GDP growth, is an insignificant function of the growth of the economy. The speed of adjustment is correctly signed and significant. On the basis of these results, the researchers call for tireless efforts by the government and the banking sector to promote financial inclusion in developing countries.

Keywords: chi-wins index, co-integration, error correction model, financial inclusion

Procedia PDF Downloads 653
1794 The Underestimate of the Annual Maximum Rainfall Depths Due to Coarse Time Resolution Data

Authors: Renato Morbidelli, Carla Saltalippi, Alessia Flammini, Tommaso Picciafuoco, Corrado Corradini

Abstract:

A considerable part of the rainfall data used in hydrological practice is available in aggregated form over constant time intervals. This can produce undesirable effects, such as the underestimate of the annual maximum rainfall depth, Hd, associated with a given duration, d, which is the basic quantity in the development of rainfall depth-duration-frequency relationships and in determining whether climate change is affecting extreme event intensities and frequencies. The errors in the evaluation of Hd from data characterized by a coarse temporal aggregation, ta, and a procedure to reduce the non-homogeneity of the Hd series are investigated here. Our results indicate that: 1) in the worst conditions, for d=ta, the estimation of a single Hd value can be affected by an underestimation error of up to 50%, while the average underestimation error for a series with at least 15-20 Hd values is less than or equal to 16.7%; 2) the underestimation error values follow an exponential probability density function; 3) each very long time series of Hd contains many underestimated values; 4) relationships between the non-dimensional ratio ta/d and the average underestimate of Hd, derived from continuous rainfall data observed at many stations of Central Italy, may overcome this issue; 5) these equations should allow improvement of the Hd estimates and of the associated depth-duration-frequency curves, at least in areas with similar climatic conditions.
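
The mechanism behind the underestimate can be reproduced in a few lines: the maximum of totals over fixed, non-overlapping aggregation windows is never larger than the true sliding-window maximum. The synthetic 1-minute rainfall series and the chosen duration below are illustrative, not the Central Italy data.

```python
import numpy as np

rng = np.random.default_rng(0)
minutes = 365 * 24 * 60
rain_1min = rng.exponential(0.02, minutes) * (rng.random(minutes) < 0.03)  # synthetic rainfall (mm/min)

def annual_max(series, window, step):
    """Maximum depth over windows of 'window' samples taken every 'step' samples."""
    sums = np.convolve(series, np.ones(window), mode="valid")
    return sums[::step].max()

d = 60                                          # duration of interest: 60 minutes
true_hd = annual_max(rain_1min, d, 1)           # from continuous (sliding-window) data
coarse_hd = annual_max(rain_1min, d, d)         # from data aggregated with ta = d

print(f"true Hd = {true_hd:.2f} mm, aggregated Hd = {coarse_hd:.2f} mm, "
      f"underestimate = {100 * (1 - coarse_hd / true_hd):.1f}%")
```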

Keywords: central Italy, extreme events, rainfall data, underestimation errors

Procedia PDF Downloads 191
1793 MCERTL: Mutation-Based Correction Engine for Register-Transfer Level Designs

Authors: Khaled Salah

Abstract:

In this paper, we present MCERTL (a mutation-based correction engine for RTL designs) as an automatic error correction technique based on mutation analysis. A mutation-based correction methodology is proposed to automatically fix erroneous RTL designs. The proposed strategy combines the processes of mutation and assertion-based localization. The erroneous statements are mutated to produce possible fixes for the failing RTL code. A concurrent mutation engine is proposed to mitigate the computational cost of running mutation operators sequentially. The proposed methodology is evaluated on several benchmarks. The experimental results demonstrate that our proposed method enables us to automatically locate and correct multiple bugs in reasonable time.

Keywords: bug localization, error correction, mutation, mutants

Procedia PDF Downloads 282
1792 An Application of Modified M-out-of-N Bootstrap Method to Heavy-Tailed Distributions

Authors: Hannah F. Opayinka, Adedayo A. Adepoju

Abstract:

This study is an extension of a prior study on the modification of the existing m-out-of-n (moon) bootstrap method for heavy-tailed distributions, in which the modified m-out-of-n (mmoon) bootstrap was proposed as an alternative to the existing moon technique. In this study, both the moon and mmoon techniques were applied to two real income datasets, which followed lognormal and Pareto distributions, respectively, with finite variances. The performances of the two techniques were compared using the standard error (SE) and root mean square error (RMSE). The findings showed that mmoon outperformed the moon bootstrap in terms of smaller SEs and RMSEs for all the sample sizes considered in the two datasets.
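
For orientation, a minimal sketch of the plain m-out-of-n (moon) bootstrap for the mean of a heavy-tailed income sample is given below; the data, the choice of m, and the rescaling are illustrative, and the paper's mmoon modification is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)
income = (rng.pareto(2.5, size=1000) + 1) * 20_000   # Pareto-like incomes (finite variance)

def moon_bootstrap_se(x, m, n_boot=2000, stat=np.mean):
    """m-out-of-n bootstrap SE of 'stat': resample m observations per replicate
    and rescale so the SE refers to the statistic of the full n-sized sample."""
    n = len(x)
    stats = np.array([stat(rng.choice(x, size=m, replace=True)) for _ in range(n_boot)])
    return np.sqrt(m / n) * stats.std(ddof=1)   # valid rescaling for the sample mean

for m in (1000, 400, 100):                      # m = n recovers the ordinary bootstrap
    print(m, round(moon_bootstrap_se(income, m), 2))
```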

Keywords: bootstrap, income data, lognormal distribution, Pareto distribution

Procedia PDF Downloads 187
1791 Comparison of Wind Fragility for Window System in the Simplified 10 and 15-Story Building Considering Exposure Category

Authors: Viriyavudh Sim, WooYoung Jung

Abstract:

Window systems in high-rise buildings are occasionally subjected to excessive wind intensity, particularly during typhoons. The failure of a window system does not affect the overall safety of the structural performance; however, it can endanger the safety of the residents. In this paper, the fragility curves for the window systems of two residential buildings were compared. The probability of failure for each individual window was determined with the Monte Carlo simulation method. Then, a lognormal cumulative distribution function was used to represent the fragility. The results showed that windows located on the edge of the leeward wall were more susceptible to wind load and that the probability of failure for each window panel increased at higher floors.
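
The two steps named above, Monte Carlo failure probabilities per wind speed followed by a lognormal CDF fit, can be sketched as follows; the pressure-demand model, capacity statistics, and wind speeds are invented for illustration and are not the study's values.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_sim = 20_000
speeds = np.arange(20, 81, 5)                    # gust wind speeds (m/s)
cap_mean, cap_cov = 3.5e3, 0.2                   # window panel capacity (Pa), illustrative

pf = []
for v in speeds:
    demand = 0.5 * 1.225 * v**2 * rng.lognormal(0, 0.15, n_sim)        # wind pressure (Pa)
    capacity = rng.normal(cap_mean, cap_cov * cap_mean, n_sim)
    pf.append(np.mean(demand > capacity))        # Monte Carlo failure probability
pf = np.array(pf)

# Fit the lognormal fragility P(failure | v) = Phi((ln v - ln(median)) / beta):
# regress ln(v) on the probit of the simulated failure probabilities.
mask = (pf > 0) & (pf < 1)
beta, ln_median = np.polyfit(norm.ppf(pf[mask]), np.log(speeds[mask]), 1)
print(f"fragility median ~ {np.exp(ln_median):.1f} m/s, dispersion beta ~ {beta:.2f}")
```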

Keywords: wind fragility, window system, high rise building, wind disaster

Procedia PDF Downloads 314
1790 Comparison between Separable and Irreducible Goppa Code in McEliece Cryptosystem

Authors: Newroz Nooralddin Abdulrazaq, Thuraya Mahmood Qaradaghi

Abstract:

The McEliece cryptosystem is an asymmetric type of cryptography based on error correction codes. The classical McEliece system used an irreducible binary Goppa code, which has been considered unbreakable until now, especially with the parameters [1024, 524, 101]; however, it suffers from a large public key matrix, which makes it difficult to use in practice. In this work, irreducible and separable Goppa codes with flexible parameters and dynamic error vectors have been introduced, and a comparison between separable and irreducible Goppa codes in the McEliece cryptosystem has been carried out. For the encryption stage, to obtain better results for the comparison, two types of testing were chosen: in the first, the random message is kept constant while the parameters of the Goppa code are changed; in the second, the parameters of the Goppa code are kept constant (m=8 and t=10) while the random message is changed. The results show that the time needed to calculate the parity check matrix for the separable type is higher than for the irreducible McEliece cryptosystem, which is expected because an extra parity check matrix for g²(z) must be calculated in the decryption process for the separable type, whereas the time needed to execute the error locator in the decryption stage is better for the separable type than for the irreducible type. The proposed implementation was done in Visual Studio using C#.

Keywords: McEliece cryptosystem, Goppa code, separable, irreducible

Procedia PDF Downloads 267
1789 Selection of Rayleigh Damping Coefficients for Seismic Response Analysis of Soil Layers

Authors: Huai-Feng Wang, Meng-Lin Lou, Ru-Lin Zhang

Abstract:

A good analysis method for seismic response analysis is direct time integration, which widely adopts Rayleigh damping. An approach is presented for the selection of the Rayleigh damping coefficients to be used in seismic analyses so as to produce a response that is consistent with the modal damping response. In the presented approach, an expression for the error of the peak response, obtained through the complete quadratic combination method, as a function of the Rayleigh damping coefficients is set up, and the coefficients are then produced by minimizing this error. Two finite element models of soil layers, excited by 28 seismic waves, were used to demonstrate the feasibility and validity of the approach.
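
For context, the classical two-frequency fit of the Rayleigh coefficients is sketched below; it matches a target modal damping ratio exactly at only two frequencies, leaving the mismatch at other modes that the error-minimization approach above is designed to reduce. The target damping and frequencies are illustrative assumptions.

```python
import numpy as np

# Rayleigh damping C = a*M + b*K gives modal damping  xi_i = a/(2*w_i) + b*w_i/2.
# The classical fit picks a and b so xi matches a target at two chosen frequencies.
xi_target = 0.05                    # target modal damping ratio (illustrative)
f1, f2 = 1.2, 6.0                   # controlling frequencies of the soil profile (Hz)
w1, w2 = 2 * np.pi * f1, 2 * np.pi * f2

A = np.array([[1 / (2 * w1), w1 / 2],
              [1 / (2 * w2), w2 / 2]])
a, b = np.linalg.solve(A, [xi_target, xi_target])
print(f"alpha = {a:.4f} 1/s, beta = {b:.6f} s")

# Damping implied at other frequencies: below target between w1 and w2, above it
# outside - the mismatch that a CQC-error minimization would trade off instead.
w = 2 * np.pi * np.linspace(0.5, 10, 5)
print(np.round(a / (2 * w) + b * w / 2, 4))
```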

Keywords: Rayleigh damping, modal damping, damping coefficients, seismic response analysis

Procedia PDF Downloads 438
1788 Design of a Standard Weather Data Acquisition Device for the Federal University of Technology, Akure Nigeria

Authors: Isaac Kayode Ogunlade

Abstract:

Data acquisition (DAQ) is the process by which physical phenomena from the real world are transformed into electrical signals that are measured and converted into a digital format for processing, analysis, and storage by a computer. The DAQ device is designed using a PIC18F4550 microcontroller communicating with a personal computer (PC) through USB (Universal Serial Bus). The research deployed knowledge of data acquisition systems and embedded systems to develop a weather data acquisition device using an LM35 sensor to measure weather parameters, together with an artificial intelligence approach (artificial neural network, ANN) and a statistical approach (autoregressive integrated moving average, ARIMA) to predict precipitation (rainfall). The device was placed beside a standard device in the Department of Meteorology, Federal University of Technology, Akure (FUTA) for performance evaluation. Both devices (standard and designed) were operated for 180 days under the same atmospheric conditions for data collection (temperature, relative humidity, and pressure). The acquired data were used to train ANN and ARIMA models in the MATLAB R2012b environment to predict precipitation (rainfall). The root mean square error (RMSE), mean absolute error (MAE), coefficient of determination (R²), and mean percentage error (MPE) were deployed as standardized evaluation metrics of the models' performance in predicting precipitation. The results show that the developed device has an efficiency of 96% and is also compatible with personal computers (PCs) and laptops. The simulation results for the acquired data show that, for two months (May and June 2017), the ANN model's precipitation (rainfall) prediction had a disparity error of 1.59%, while that of ARIMA was 2.63%. The device will be useful in research, practical laboratories, and industrial environments.
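
The evaluation metrics named above are straightforward to compute; the short sketch below does so for an invented pair of observed and predicted rainfall series (values are illustrative only).

```python
import numpy as np

observed = np.array([12.0, 0.5, 3.5, 20.1, 7.2, 1.0, 15.4])   # illustrative rainfall (mm)
predicted = np.array([11.2, 0.3, 4.1, 18.9, 8.0, 0.8, 14.7])

rmse = np.sqrt(np.mean((observed - predicted) ** 2))           # root mean square error
mae = np.mean(np.abs(observed - predicted))                    # mean absolute error
r2 = 1 - np.sum((observed - predicted) ** 2) / np.sum((observed - observed.mean()) ** 2)
mpe = 100 * np.mean((observed - predicted) / observed)         # mean percentage error

print(f"RMSE={rmse:.2f}  MAE={mae:.2f}  R2={r2:.3f}  MPE={mpe:.2f}%")
```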

Keywords: data acquisition system, design device, weather development, predict precipitation and (FUTA) standard device

Procedia PDF Downloads 93
1787 Hybrid Localization Schemes for Wireless Sensor Networks

Authors: Fatima Babar, Majid I. Khan, Malik Najmus Saqib, Muhammad Tahir

Abstract:

This article provides range-based improvements over a well-known single-hop range-free localization scheme, Approximate Point in Triangulation (APIT), by proposing an energy-efficient barycentric-coordinate-based Point-In-Triangulation (PIT) test along with PIT-based trilateration. These improvements result in energy efficiency, reduced localization error, and improved localization coverage compared to APIT and its variants. Moreover, we propose embedding received signal strength indication (RSSI)-based distance estimation in DV-Hop, which is a multi-hop localization scheme. The proposed localization algorithm achieves energy efficiency and reduced localization error compared to DV-Hop and its available improvements. Furthermore, a hybrid multi-hop localization scheme is also proposed that utilizes the barycentric-coordinate-based PIT test together with both range-based (received signal strength indicator) and range-free (hop count) techniques for distance estimation. Our experimental results provide evidence that the proposed hybrid multi-hop localization scheme yields a two- to five-fold reduction in localization error compared to DV-Hop and its variants, at reduced energy requirements.
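
A barycentric point-in-triangle test of the kind referred to above can be written in a few lines; the sketch below is a generic 2D version with made-up anchor coordinates, not the paper's energy-optimized formulation.

```python
import numpy as np

def in_triangle(p, a, b, c, tol=1e-12):
    """Barycentric PIT test: p lies inside triangle (a, b, c) iff all three
    barycentric coordinates are non-negative."""
    T = np.column_stack((b - a, c - a))      # 2x2 basis spanned by two edges
    l1, l2 = np.linalg.solve(T, p - a)       # two barycentric coordinates
    l0 = 1.0 - l1 - l2                       # the third coordinate
    return min(l0, l1, l2) >= -tol

# Illustrative use: an unknown node checks whether it falls inside the triangle
# formed by three anchor nodes (coordinates are made up).
anchors = (np.array([0.0, 0.0]), np.array([10.0, 0.0]), np.array([5.0, 8.0]))
node = np.array([4.0, 3.0])
print(in_triangle(node, *anchors))           # True: inside this anchor triangle
```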

Keywords: localization, trilateration, triangulation, wireless sensor networks

Procedia PDF Downloads 470
1786 New HCI Design Process Education

Authors: Jongwan Kim

Abstract:

Human Computer Interaction (HCI) is a subject covering the study, plan, and design of interactions between humans and computers. The prevalent use of digital mobile devices is increasing the need for education and research on HCI. This work is focused on a new education method geared towards reducing errors while developing application programs that incorporate role-changing brainstorming techniques during HCI design process. The proposed method has been applied to a capstone design course in the last spring semester. Students discovered some examples about UI design improvement and their error discovering and reducing capability was promoted. An UI design improvement, PC voice control for people with disabilities as an assistive technology examplar, will be presented. The improvement of these students' design ability will be helpful to the real field work.

Keywords: HCI, design process, error reducing education, role-changing brainstorming, assistive technology

Procedia PDF Downloads 490
1785 Income-Consumption Relationships in Pakistan (1980-2011): A Cointegration Approach

Authors: Himayatullah Khan, Alena Fedorova

Abstract:

The present paper analyses the income-consumption relationship in Pakistan using annual time series data from 1980-81 to 2010-11. The paper uses the Augmented Dickey-Fuller (ADF) test to check for unit roots and stationarity in the two time series. The paper finds that the two time series are non-stationary but stationary at their first differences. The Augmented Engle-Granger test and the Cointegrating Regression Durbin-Watson test imply that the two time series of consumption and income are cointegrated and that the long-run marginal propensity to consume (MPC) is 0.88, as given by the estimated (static) equilibrium relation. The paper also uses the error correction mechanism (ECM) to model the dynamic relationship; the purpose of the ECM is to indicate the speed of adjustment from the short-run equilibrium to the long-run equilibrium state. The results show that the MPC is equal to 0.93 and is highly significant. The coefficient of the Engle-Granger residuals is negative but insignificant. Statistically, the equilibrium error term is zero, which suggests that consumption adjusts to changes in GDP within the same period. The short-run changes in GDP have a positive impact on short-run changes in consumption; the paper concludes that 0.93 may be interpreted as the short-run MPC. The pairwise Granger causality test shows that GDP and consumption Granger-cause each other.
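
A compact sketch of these steps (ADF tests, the Engle-Granger static regression, and the ECM) using statsmodels follows; the GDP and consumption series are synthetic stand-ins for the 1980-2011 Pakistani data, so the printed estimates are not the paper's.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller, coint

rng = np.random.default_rng(3)
n = 31                                               # fiscal years 1980-81 .. 2010-11
gdp = pd.Series(np.cumsum(rng.normal(3.0, 1.5, n)) + 100.0)
cons = 0.9 * gdp + rng.normal(0.0, 1.0, n)           # synthetic consumption series

# Step 1: ADF tests on levels and first differences (unit root / stationarity).
print("ADF p-values:", adfuller(gdp)[1], adfuller(gdp.diff().dropna())[1])

# Step 2: Engle-Granger cointegration test and the static long-run regression,
# whose slope plays the role of the long-run MPC.
print("Engle-Granger p-value:", coint(cons, gdp)[1])
longrun = sm.OLS(cons, sm.add_constant(gdp)).fit()
ec = longrun.resid                                   # equilibrium error

# Step 3: error correction model - short-run dynamics plus speed of adjustment.
X = pd.DataFrame({"d_gdp": gdp.diff(), "ec_lag": ec.shift(1)}).dropna()
ecm = sm.OLS(cons.diff().dropna(), sm.add_constant(X)).fit()
print(ecm.params)   # 'd_gdp' ~ short-run MPC, 'ec_lag' ~ speed of adjustment
```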

Keywords: cointegrating regression, Augmented Dickey Fuller test, Augmented Engle-Granger test, Granger causality, error correction mechanism

Procedia PDF Downloads 416
1784 EMI Radiation Prediction and Final Measurement Process Optimization by Neural Network

Authors: Hussam Elias, Ninovic Perez, Holger Hirsch

Abstract:

The reach of EMC regulations worldwide is growing steadily as the use of electronics in our daily lives increases more than ever. In this paper, we introduce a novel method to perform the final phase of electromagnetic compatibility (EMC) measurement and to reduce the required test time according to the standard EN 55032 by using a developed tool and a conventional neural network (CNN). The neural network was trained using real EMC measurements, which were performed in the semi-anechoic chamber (SAC) by CETECOM GmbH in Essen, Germany. To implement our proposed method, we wrote software to perform the radiated electromagnetic interference (EMI) measurements and used the CNN to predict and determine the turntable position that yields the maximum radiation value.

Keywords: conventional neural network, electromagnetic compatibility measurement, mean absolute error, position error

Procedia PDF Downloads 201
1783 Inflation Tail Risks and Asset Pricing

Authors: Sebastian Luber

Abstract:

The study demonstrates that tail inflation risk is priced into stock returns and credit spreads. This holds true even when controlling for current and historical inflation moments. The analysis employs inflation caps and floors to obtain the distribution of future inflation under the risk-neutral measure. Credit spreads decrease as the mean and median of future inflation rise, but they respond positively to tail risks. Conversely, stocks serve as a robust hedge against future inflation. Stock returns increase with a higher mean and median of future inflation and rising inflationary tail risk, while they decrease with rising deflationary tail risk.

Keywords: asset pricing, inflation expectations, tail risk, stocks, inflation derivatives, credit

Procedia PDF Downloads 24
1782 Effects of Canned Cycles and Cutting Parameters on Hole Quality in Cryogenic Drilling of Aluminum 6061-6T

Authors: M. N. Islam, B. Boswell, Y. R. Ginting

Abstract:

The influence of canned cycles and cutting parameters on hole quality in cryogenic drilling has been investigated experimentally and analytically. A three-level, three-parameter experiment was conducted by using the design-of-experiment methodology. The three levels of independent input parameters were the following: for canned cycles—a chip-breaking canned cycle (G73), a spot drilling canned cycle (G81), and a deep hole canned cycle (G83); for feed rates—0.2, 0.3, and 0.4 mm/rev; and for cutting speeds—60, 75, and 100 m/min. The selected work and tool materials were aluminum 6061-6T and high-speed steel (HSS), respectively. For cryogenic cooling, liquid nitrogen (LN2) was used and was applied externally. The measured output parameters were the three widely used quality characteristics of drilled holes—diameter error, circularity, and surface roughness. Pareto ANOVA was applied for analyzing the results. The findings revealed that the canned cycle has a significant effect on diameter error (contribution ratio 44.09%) and small effects on circularity and surface finish (contribution ratio 7.25% and 6.60%, respectively). The best results for the dimensional accuracy and surface roughness were achieved by G81. G73 produced the best circularity results; however, for dimensional accuracy, it was the worst level.

Keywords: circularity, diameter error, drilling canned cycle, pareto ANOVA, surface roughness

Procedia PDF Downloads 287
1781 Estimation of Residual Stresses in Thick Walled Cylinder by Radial Basis Artificial Neural Network

Authors: Mohammad Heidari

Abstract:

In this paper, a method for estimating residual stresses in autofrettaged high-strength steel tubes based on artificial neural networks is presented. Many different thick-walled cylinders subjected to different conditions were studied. First, the residual stress is calculated by an analytical solution. Then, by changing the parameters that influence residual stresses, such as the percentage of autofrettage, internal pressure, wall ratio of the cylinder, material properties of the cylinder, and the Bauschinger and hardening effect factors, a neural network is created. These parameters are the inputs of the network, and the output of the network is the residual stress. Numerical data were employed for training the network, and the capability of the model in predicting the residual stress has been verified. The output obtained from the neural network model is compared with numerical results, and the relative error has been calculated. Based on this verification error, it is shown that the radial basis function neural network has an average error of 2.75% in predicting the residual stress of thick-walled cylinders. Further analysis of the residual stress of thick-walled cylinders under different input conditions has been investigated, and the comparison of the modeling results with numerical ones shows good agreement, which also proves the feasibility and effectiveness of the adopted approach.

Keywords: thick walled cylinder, residual stress, radial basis, artificial neural network

Procedia PDF Downloads 417
1780 Using Gene Expression Programming in Learning Process of Rough Neural Networks

Authors: Sanaa Rashed Abdallah, Yasser F. Hassan

Abstract:

This paper introduces an approach in which rough sets, gene expression programming, and rough neural networks are used cooperatively for learning and classification support. The objective of the gene expression programming rough neural network (GEP-RNN) approach is to obtain newly classified data with minimum error in the training and testing processes. The starting point of the GEP-RNN approach is an information system, and the output of the approach is a rough neural network structure, including the weights and thresholds, with minimum classification error.

Keywords: rough sets, gene expression programming, rough neural networks, classification

Procedia PDF Downloads 385
1779 Development of a Work-Related Stress Management Program Guaranteeing Fitness-For-Duty for Human Error Prevention

Authors: Hyeon-Kyo Lim, Tong-Il Jang, Yong-Hee Lee

Abstract:

Human error is one of the most dreaded factors that may result in unexpected accidents, especially in nuclear power plants. For accident prevention, it is indispensable to analyze and manage the influence of any factor which may raise the possibility of human errors. Among many factors, stress has been reported to have a significant influence on human performance. Therefore, this research aimed to develop a work-related stress management program which can guarantee Fitness-for-Duty (FFD) of the workers in nuclear power plants, especially those working in main control rooms. Major stress factors were elicited through literature surveys and classified into major categories such as demands, supports, and relationships. To manage those factors, a test and intervention program based on a four-level approach was developed over the whole employment cycle, including selection and screening of workers, job allocation, and job rotation. In addition, a managerial care program was introduced based on the concept of an Employee Assistance Program (EAP). Reviews of the program conducted by ex-operators in nuclear power plants showed responses in the affirmative and suggested additional treatment to guarantee high performance of human workers, not only in normal operations but also in emergency situations.

Keywords: human error, work performance, work stress, Fitness-For-Duty (FFD), Employee Assistance Program (EAP)

Procedia PDF Downloads 405
1778 Self-Tuning Dead-Beat PD Controller for Pitch Angle Control of a Bench-Top Helicopter

Authors: H. Mansor, S.B. Mohd-Noor, N. I. Othman, N. Tazali, R. I. Boby

Abstract:

This paper presents an improved robust Proportional Derivative (PD) controller for a 3-Degree-of-Freedom (3-DOF) bench-top helicopter obtained by using an adaptive methodology. A bench-top helicopter is a laboratory-scale helicopter used for experimental purposes, widely used in teaching laboratories and research. A PD controller has been developed for the 3-DOF bench-top helicopter by Quanser. Experiments showed that the response of the designed PD controller has a very large steady-state error, i.e., 50%, which is very serious. The objective of this research is to improve the performance of the existing pitch angle control of the PD controller on the bench-top helicopter by integrating the PD controller with an adaptive controller. Usually, a standard adaptive controller will produce zero steady-state error; however, the response time to reach the desired set point is large. Therefore, this paper proposes an adaptive deadbeat algorithm to overcome these limitations. An output response that is fast, robust, and updated online is expected. Performance comparisons have been performed between the proposed self-tuning deadbeat PD controller and the standard PD controller. The efficiency of the self-tuning deadbeat controller has been proven by the test results in terms of faster settling time, zero steady-state error, and the capability of the controller to be updated online.

Keywords: adaptive control, deadbeat control, bench-top helicopter, self-tuning control

Procedia PDF Downloads 326
1777 Experiments on Weakly-Supervised Learning on Imperfect Data

Authors: Yan Cheng, Yijun Shao, James Rudolph, Charlene R. Weir, Beth Sahlmann, Qing Zeng-Treitler

Abstract:

Supervised predictive models require labeled data for training purposes. Complete and accurate labeled data, i.e., a ‘gold standard’, is not always available, and imperfectly labeled data may need to serve as an alternative. An important question is whether the accuracy of the labeled data creates a performance ceiling for the trained model. In this study, we trained several models to recognize the presence of delirium in clinical documents using data with annotations that are not completely accurate (i.e., weakly-supervised learning). In the external evaluation, the support vector machine model with a linear kernel performed best, achieving an area under the curve of 89.3% and an accuracy of 88%, surpassing the 80% accuracy of the training sample. We then generated a set of simulated data and carried out a series of experiments which demonstrated that models trained on imperfect data can (but do not always) outperform the accuracy of the training data; e.g., the area under the curve for some models is higher than 80% when trained on data with an error rate of 40%. Our experiments also showed that the error resistance of linear modeling is associated with larger sample size, error type, and linearity of the data (all p-values < 0.001). In conclusion, this study sheds light on the usefulness of imperfect data in clinical research via weakly-supervised learning.
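
The simulation finding, that a model trained on noisy labels can beat the accuracy of those labels, is easy to reproduce in miniature; the sketch below uses a linear SVM on synthetic, roughly linearly separable data with a 40% label error rate, which is an illustrative setting and not the study's simulation design.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Synthetic, roughly linearly separable data standing in for the clinical corpus.
X, y = make_classification(n_samples=5000, n_features=20, n_informative=10,
                           class_sep=2.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
flip = rng.random(len(y_tr)) < 0.40                 # corrupt 40% of the training labels
y_noisy = np.where(flip, 1 - y_tr, y_tr)

clf = LinearSVC(C=1.0, max_iter=5000).fit(X_tr, y_noisy)
print("accuracy of the training labels:", 1 - flip.mean())
print("model accuracy on true test labels:", clf.score(X_te, y_te))
```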

Keywords: weakly-supervised learning, support vector machine, prediction, delirium, simulation

Procedia PDF Downloads 200