Search results for: error matrices
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2173

1903 Financial Inclusion for Inclusive Growth in an Emerging Economy

Authors: Godwin Chigozie Okpara, William Chimee Nwaoha

Abstract:

The paper sets out to show how a financial inclusion index can be calculated and investigates the impact of inclusive finance on inclusive growth in an emerging economy. In the light of these objectives, the chi-wins method was used to calculate indexes of financial inclusion, while co-integration and an error correction model were used to evaluate the impact of financial inclusion on inclusive growth. The analysis revealed that financial inclusion, while having a long-run relationship with GDP growth, is an insignificant function of the growth of the economy. The speed of adjustment is correctly signed and significant. On the basis of these results, the researchers call for tireless efforts by government and the banking sector to promote financial inclusion in developing countries.
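
As a rough illustration of the co-integration and error-correction workflow described in the abstract, the following Python sketch runs an Engle-Granger style test and fits a simple error correction model with statsmodels. The series, the inclusion-index construction, and all parameter values are placeholders, not the paper's chi-wins index or its data.

```python
# Illustrative sketch only: Engle-Granger cointegration check and a simple error
# correction model (ECM). `fii` and `lgdp` are synthetic stand-ins for a financial
# inclusion index and log GDP; they are not the paper's data.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)
n = 40
lgdp = np.cumsum(rng.normal(0.03, 0.02, n))         # placeholder log-GDP series
fii = 0.5 * lgdp + rng.normal(0, 0.05, n)           # placeholder inclusion index

# Step 1: test for a long-run (cointegrating) relationship
t_stat, p_value, _ = coint(lgdp, fii)
print(f"Engle-Granger p-value: {p_value:.3f}")

# Step 2: long-run (static) regression; residuals are the equilibrium error
long_run = sm.OLS(lgdp, sm.add_constant(fii)).fit()
ect = long_run.resid

# Step 3: ECM on first differences; the coefficient on ect[t-1] is the
# speed of adjustment (expected to be negative and significant in theory)
d_lgdp = np.diff(lgdp)
X = sm.add_constant(np.column_stack([np.diff(fii), ect[:-1]]))
ecm = sm.OLS(d_lgdp, X).fit()
print(ecm.params)  # [constant, short-run effect of inclusion, speed of adjustment]
```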

Keywords: chi-wins index, co-integration, error correction model, financial inclusion

Procedia PDF Downloads 622
1902 The Underestimate of the Annual Maximum Rainfall Depths Due to Coarse Time Resolution Data

Authors: Renato Morbidelli, Carla Saltalippi, Alessia Flammini, Tommaso Picciafuoco, Corrado Corradini

Abstract:

A considerable part of the rainfall data used in hydrological practice is available in aggregated form within constant time intervals. This can produce undesirable effects, such as the underestimation of the annual maximum rainfall depth, Hd, associated with a given duration, d, which is the basic quantity in the development of rainfall depth-duration-frequency relationships and in determining whether climate change is affecting extreme event intensities and frequencies. The errors in the evaluation of Hd from data characterized by a coarse temporal aggregation, ta, and a procedure to reduce the non-homogeneity of the Hd series are investigated here. Our results indicate that: 1) in the worst conditions, for d=ta, the estimation of a single Hd value can be affected by an underestimation error of up to 50%, while the average underestimation error for a series with at least 15-20 Hd values is less than or equal to 16.7%; 2) the underestimation error values follow an exponential probability density function; 3) each very long time series of Hd contains many underestimated values; 4) relationships between the non-dimensional ratio ta/d and the average underestimate of Hd, derived from continuous rainfall data observed at many stations in Central Italy, may overcome this issue; 5) these relationships should make it possible to improve the Hd estimates and the associated depth-duration-frequency curves, at least in areas with similar climatic conditions.
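
The aggregation effect described here can be reproduced in a few lines: for d equal to the aggregation interval ta, the maximum over clock-aligned windows is never larger than the maximum over all possible window positions. The sketch below uses a synthetic one-minute rainfall record, not the Central Italy station data, and does not reproduce the paper's correction relationships.

```python
# Sketch: how coarse temporal aggregation (ta = d) underestimates the maximum
# rainfall depth Hd for a duration d. Synthetic rainfall only.
import numpy as np

rng = np.random.default_rng(1)
rain_1min = rng.gamma(shape=0.05, scale=2.0, size=30 * 24 * 60)  # 30 days of 1-min data

d = 60  # duration of interest, in minutes
window_sums = np.convolve(rain_1min, np.ones(d), mode="valid")   # depth of every d-min window

hd_true = window_sums.max()         # continuous record: any start time is allowed
hd_coarse = window_sums[::d].max()  # aggregated record: windows aligned to the clock

underestimate = 100 * (hd_true - hd_coarse) / hd_true
print(f"Hd continuous: {hd_true:.1f}, Hd aggregated: {hd_coarse:.1f}, "
      f"underestimate: {underestimate:.1f}%")
```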

Keywords: central Italy, extreme events, rainfall data, underestimation errors

Procedia PDF Downloads 163
1901 MCERTL: Mutation-Based Correction Engine for Register-Transfer Level Designs

Authors: Khaled Salah

Abstract:

In this paper, we present MCERTL (mutation-based correction engine for RTL designs), an automatic error correction technique based on mutation analysis. A mutation-based correction methodology is proposed to automatically fix erroneous RTL designs. The proposed strategy combines the processes of mutation and assertion-based localization. The erroneous statements are mutated to produce possible fixes for the failed RTL code. A concurrent mutation engine is proposed to mitigate the computational cost of running sequential mutant operators. The proposed methodology is evaluated against several benchmarks. The experimental results demonstrate that our proposed method enables us to automatically locate and correct multiple bugs in reasonable time.
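
The engine itself operates on RTL designs, which the abstract does not show; as a language-agnostic illustration of the mutate-then-check idea, the sketch below mutates the operator of a faulty statement and keeps only the mutants that satisfy a small assertion suite. The toy statement, operators, and reference behaviour are all assumptions.

```python
# Toy illustration of mutation-based repair: substitute operators in a faulty
# statement and keep mutants that pass the assertion suite. MCERTL itself targets
# RTL code and assertions; this sketch only mirrors the overall idea in Python.
import itertools

faulty_stmt = "out = a & b"          # suppose the intended behaviour is a | b
operators = ["&", "|", "^"]          # mutation operators to try

def passes_assertions(stmt):
    """Stand-in for RTL simulation plus assertion checking."""
    for a, b in itertools.product([0, 1], repeat=2):
        env = {"a": a, "b": b}
        exec(stmt, {}, env)
        if env["out"] != (a | b):    # golden/reference behaviour
            return False
    return True

candidate_fixes = [faulty_stmt.replace("&", op) for op in operators
                   if passes_assertions(faulty_stmt.replace("&", op))]
print("candidate fixes:", candidate_fixes)   # expected: ['out = a | b']
```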

Keywords: bug localization, error correction, mutation, mutants

Procedia PDF Downloads 247
1900 An Application of Modified M-out-of-N Bootstrap Method to Heavy-Tailed Distributions

Authors: Hannah F. Opayinka, Adedayo A. Adepoju

Abstract:

This study is an extension of a prior study on the modification of the existing m-out-of-n (moon) bootstrap method for heavy-tailed distributions, in which the modified m-out-of-n (mmoon) bootstrap was proposed as an alternative to the existing moon technique. In this study, both the moon and mmoon techniques were applied to two real income datasets, which followed Lognormal and Pareto distributions respectively, with finite variances. The performances of the two techniques were compared using the Standard Error (SE) and Root Mean Square Error (RMSE). The findings showed that mmoon outperformed the moon bootstrap, with smaller SEs and RMSEs for all the sample sizes considered in the two datasets.
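
The abstract does not specify the mmoon modification itself, so only the baseline m-out-of-n (moon) resampling scheme it is compared against can be sketched. The example below estimates the bootstrap standard error of the mean for a heavy-tailed synthetic income sample; the dataset and the choices of m are placeholders.

```python
# Baseline m-out-of-n ("moon") bootstrap standard error for a heavy-tailed sample.
# The authors' modified (mmoon) scheme is not reproduced here.
import numpy as np

rng = np.random.default_rng(42)
income = (rng.pareto(a=3.0, size=500) + 1) * 20_000   # synthetic Pareto-like incomes

def moon_bootstrap_se(data, m, n_boot=2000, stat=np.mean, rng=rng):
    """SE of `stat` from resamples of size m < n, rescaled back to sample size n."""
    stats = np.array([stat(rng.choice(data, size=m, replace=True))
                      for _ in range(n_boot)])
    return np.sqrt(m / len(data)) * stats.std(ddof=1)

n = len(income)
for m in (int(n ** 0.5), int(n ** 0.75), n):
    print(f"m = {m:3d}: bootstrap SE of the mean = {moon_bootstrap_se(income, m):.2f}")
```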

Keywords: Bootstrap, income data, lognormal distribution, Pareto distribution

Procedia PDF Downloads 155
1899 Comparison between Separable and Irreducible Goppa Code in McEliece Cryptosystem

Authors: Newroz Nooralddin Abdulrazaq, Thuraya Mahmood Qaradaghi

Abstract:

The McEliece cryptosystem is an asymmetric type of cryptography based on error correction codes. The classical McEliece scheme uses an irreducible binary Goppa code, which is still considered unbreakable, especially with the parameters [1024, 524, 101], but it suffers from a large public key matrix, which makes it difficult to use in practice. In this work, irreducible and separable Goppa codes are introduced, with flexible parameters and dynamic error vectors, and a comparison between separable and irreducible Goppa codes in the McEliece cryptosystem is carried out. For the encryption stage, two types of tests were chosen to obtain a better basis for comparison: in the first, the random message is held constant while the parameters of the Goppa code are changed; in the second, the parameters of the Goppa code are held constant (m=8 and t=10) while the random message is changed. The results show that the time needed to calculate the parity check matrix is higher for the separable than for the irreducible McEliece cryptosystem, which is expected because an extra parity check matrix for g2(z) must be calculated in the decryption process for the separable type, whereas the time needed to execute the error locator in the decryption stage is better for the separable type than for the irreducible type. The proposed implementation was done in Visual Studio C#.

Keywords: McEliece cryptosystem, Goppa code, separable, irreducible

Procedia PDF Downloads 237
1898 Characterization of Biocomposites Based on Mussel Shell Wastes

Authors: Suheyla Kocaman, Gulnare Ahmetli, Alaaddin Cerit, Alize Yucel, Merve Gozukucuk

Abstract:

Shell wastes represent a considerable quantity of byproducts in shellfish aquaculture. From the viewpoint of eco-friendly and economical disposal, it is highly desirable to convert these residues into high value-added products for industrial applications. So far, the utilization of shell wastes has been confined to relatively low-level uses, e.g. wastewater decontaminant, soil conditioner, fertilizer constituent, feed additive, and liming agent. Shell wastes consist of calcium carbonate and organic matrices, with the former accounting for 95-99% by weight. Being the richest source of biogenic CaCO3, shell wastes are suitable for preparing high-purity CaCO3 powders, which have been extensively applied in various industrial products, such as paper, rubber, paints, and pharmaceuticals. Furthermore, shell waste can be further processed to serve as a filler for polymer composites. This paper presents a study on the potential use of mussel shell waste as a biofiller to produce composite materials with different epoxy matrices, such as bisphenol-A type, CTBN-modified, and polyurethane-modified epoxy resins. The morphology and mechanical properties of shell-particle-reinforced epoxy composites were evaluated to assess the possibility of using them as a new material. The effects of shell particle content on the mechanical properties of the composites were investigated. It was shown that in all composites, the tensile strength and Young’s modulus values increase with the increase of mussel shell particle content from 10 wt% to 50 wt%, while the elongation at break decreases, compared to the pure epoxy resin. The highest Young’s modulus values were determined for the bisphenol-A type epoxy composites.

Keywords: biocomposite, epoxy resin, mussel shell, mechanical properties

Procedia PDF Downloads 293
1897 Selection of Rayleigh Damping Coefficients for Seismic Response Analysis of Soil Layers

Authors: Huai-Feng Wang, Meng-Lin Lou, Ru-Lin Zhang

Abstract:

A widely used method in seismic response analysis is direct time integration, which commonly adopts Rayleigh damping. An approach is presented for selecting the Rayleigh damping coefficients to be used in seismic analyses so as to produce a response that is consistent with the modal damping response. In the presented approach, an expression for the error of the peak response, obtained through the complete quadratic combination (CQC) method, is set up in terms of the Rayleigh damping coefficients, and the coefficients are then produced by minimizing this error. Two finite element models of soil layers, excited by 28 seismic waves, were used to demonstrate the feasibility and validity of the approach.
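
The underlying relation is the standard Rayleigh form, in which the damping ratio of a mode with circular frequency w is zeta(w) = alpha/(2w) + beta*w/2. The paper selects alpha and beta by minimizing a CQC-based peak-response error; the sketch below instead fits the coefficients to target modal damping ratios by ordinary least squares, purely as an illustration, with assumed frequencies and a uniform 5% target.

```python
# Sketch: fit Rayleigh coefficients (alpha, beta) so that
# zeta(w) = alpha/(2w) + beta*w/2 approximates target modal damping ratios.
# The paper's CQC peak-response error minimization is not reproduced here.
import numpy as np

omega = 2 * np.pi * np.array([1.2, 2.5, 4.1, 6.8, 9.5])  # assumed modal frequencies, rad/s
zeta_target = np.full_like(omega, 0.05)                   # 5% damping in every mode

A = np.column_stack([1.0 / (2.0 * omega), omega / 2.0])   # zeta is linear in (alpha, beta)
alpha, beta = np.linalg.lstsq(A, zeta_target, rcond=None)[0]

print(f"alpha = {alpha:.4f} 1/s, beta = {beta:.6f} s")
print("resulting damping ratios:", np.round(A @ np.array([alpha, beta]), 4))
```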

Keywords: Rayleigh damping, modal damping, damping coefficients, seismic response analysis

Procedia PDF Downloads 415
1896 Design of a Standard Weather Data Acquisition Device for the Federal University of Technology, Akure Nigeria

Authors: Isaac Kayode Ogunlade

Abstract:

Data acquisition (DAQ) is the process by which physical phenomena from the real world are transformed into electrical signals that are measured and converted into a digital format for processing, analysis, and storage by a computer. The DAQ is designed using a PIC18F4550 microcontroller, communicating with a Personal Computer (PC) through USB (Universal Serial Bus). The research deployed initial knowledge of data acquisition systems and embedded systems to develop a weather data acquisition device using an LM35 sensor to measure weather parameters, and used artificial intelligence (an Artificial Neural Network - ANN) and a statistical approach (Autoregressive Integrated Moving Average - ARIMA) to predict precipitation (rainfall). The device was placed beside a standard device in the Department of Meteorology, Federal University of Technology, Akure (FUTA) to evaluate its performance. Both devices (standard and designed) were operated for 180 days under the same atmospheric conditions for data mining (temperature, relative humidity, and pressure). The acquired data were trained in the MATLAB R2012b environment using ANN and ARIMA to predict precipitation (rainfall). Root Mean Square Error (RMSE), Mean Absolute Error (MAE), R-squared (R2), and Mean Percentage Error (MPE) were deployed as standard evaluation metrics to assess the performance of the models in the prediction of precipitation. The results from the operation of the developed device show that the device has an efficiency of 96% and is also compatible with Personal Computers (PCs) and laptops. The simulation results for the acquired data show that the ANN model's precipitation (rainfall) prediction for two months (May and June 2017) revealed a disparity error of 1.59%, while that of ARIMA was 2.63%. The device will be useful in research, practical laboratories, and industrial environments.
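
The error metrics named above are straightforward to compute once predicted and observed rainfall series are available. The sketch below shows RMSE, MAE, and MPE on placeholder arrays; it does not use the FUTA measurements, and MPE is evaluated only where the observed value is non-zero.

```python
# Evaluation metrics from the abstract (RMSE, MAE, MPE) on placeholder data.
import numpy as np

observed  = np.array([12.0, 0.0, 3.5, 20.1, 7.8, 0.4])   # reference-device rainfall (mm)
predicted = np.array([11.2, 0.3, 4.0, 18.7, 8.1, 0.6])   # model output (ANN or ARIMA)

rmse = np.sqrt(np.mean((observed - predicted) ** 2))
mae = np.mean(np.abs(observed - predicted))

nz = observed != 0                                         # MPE is undefined at zero rainfall
mpe = 100 * np.mean((observed[nz] - predicted[nz]) / observed[nz])

print(f"RMSE = {rmse:.2f} mm, MAE = {mae:.2f} mm, MPE = {mpe:.1f}%")
```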

Keywords: data acquisition system, device design, weather development, precipitation prediction, FUTA standard device

Procedia PDF Downloads 63
1895 Effect of Treadmill Exercise on Fluid Intelligence in Early Adults: Electroencephalogram Study

Authors: Ladda Leungratanamart, Seree Chadcham

Abstract:

Fluid intelligence declines with age, but it can be developed. For this reason, increasing fluid intelligence in young adults may be possible. This study examined the effects of a two-month treadmill exercise program on fluid intelligence. The researchers designed a treadmill exercise program to promote cardiorespiratory fitness. Thirty-eight healthy volunteer students from the Boromarajonani College of Nursing, Chon Buri were assigned randomly to an exercise group (n=18) and a control group (n=20). The experiment consisted of three sessions. The baseline session consisted of measuring VO2max, the electroencephalogram, and the behavioral response while performing the Raven Progressive Matrices (RPM) test, a measure of fluid intelligence. For the exercise session, the experimental group exercised using treadmill training at 60% to 80% of maximum heart rate for 30 minutes, three times per week, whereas the control group did not exercise. For the following two sessions, each participant was measured in the same way as at baseline. The data were analyzed using the t-test to examine whether there was a significant difference between the means of the two groups. The results showed that mean VO2max in the experimental group was significantly higher than in the control group (p<.05), suggesting that a two-month treadmill exercise program can improve fluid intelligence. When comparing the behavioral data, it was found that the experimental group performed the RPM test more accurately and faster than the control group. Neuroelectric data indicated a significant increase in the percentage of alpha band ERD (%ERD) at P3 and Pz compared to the pre-exercise condition and the control group. These data suggest that a two-month treadmill exercise program can contribute to the development of cardiorespiratory fitness, which influences an increase in fluid intelligence. Exercise is involved in cortical activation in different brain areas.

Keywords: treadmill exercise, fluid intelligence, raven progressive matrices test, alpha band

Procedia PDF Downloads 325
1894 Hybrid Localization Schemes for Wireless Sensor Networks

Authors: Fatima Babar, Majid I. Khan, Malik Najmus Saqib, Muhammad Tahir

Abstract:

This article provides range-based improvements over a well-known single-hop range-free localization scheme, Approximate Point in Triangulation (APIT), by proposing an energy-efficient Barycentric coordinate based Point-In-Triangulation (PIT) test along with PIT-based trilateration. These improvements result in energy efficiency, reduced localization error, and improved localization coverage compared to APIT and its variants. Moreover, we propose to embed Received Signal Strength Indication (RSSI) based distance estimation in DV-Hop, which is a multi-hop localization scheme. The proposed localization algorithm achieves energy efficiency and reduced localization error compared to DV-Hop and its available improvements. Furthermore, a hybrid multi-hop localization scheme is also proposed that utilizes the Barycentric coordinate based PIT test and both range-based (received signal strength indicator) and range-free (hop count) techniques for distance estimation. Our experimental results provide evidence that the proposed hybrid multi-hop localization scheme results in a two- to five-fold reduction in localization error compared to DV-Hop and its variants, at reduced energy requirements.
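
The Barycentric coordinate based PIT test mentioned above can be stated compactly: an unknown node lies inside the triangle formed by three anchors exactly when all three barycentric weights are non-negative. The sketch below illustrates that test with placeholder anchor coordinates; the trilateration step and the RSSI distance model from the paper are not reproduced.

```python
# Barycentric Point-In-Triangulation (PIT) test sketch: a point is inside the anchor
# triangle iff all barycentric weights are >= 0. Coordinates are placeholders.
import numpy as np

def barycentric_pit(p, a, b, c):
    """Return (inside, weights) for point p with respect to triangle (a, b, c)."""
    T = np.column_stack([b - a, c - a])       # p - a = w1*(b - a) + w2*(c - a)
    w1, w2 = np.linalg.solve(T, p - a)
    weights = np.array([1.0 - w1 - w2, w1, w2])
    return bool(np.all(weights >= 0)), weights

a, b, c = np.array([0.0, 0.0]), np.array([10.0, 0.0]), np.array([0.0, 10.0])
print(barycentric_pit(np.array([2.0, 3.0]), a, b, c))   # inside the anchor triangle
print(barycentric_pit(np.array([8.0, 8.0]), a, b, c))   # outside
```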

Keywords: Localization, Trilateration, Triangulation, Wireless Sensor Networks

Procedia PDF Downloads 441
1893 New HCI Design Process Education

Authors: Jongwan Kim

Abstract:

Human Computer Interaction (HCI) is a subject covering the study, planning, and design of interactions between humans and computers. The prevalent use of digital mobile devices is increasing the need for education and research on HCI. This work is focused on a new education method geared towards reducing errors in application-program development by incorporating role-changing brainstorming techniques during the HCI design process. The proposed method was applied to a capstone design course in the last spring semester. Students discovered examples of UI design improvements, and their ability to discover and reduce errors was promoted. A UI design improvement, PC voice control for people with disabilities as an assistive technology exemplar, will be presented. The improvement in these students' design ability will be helpful in real field work.

Keywords: HCI, design process, error reducing education, role-changing brainstorming, assistive technology

Procedia PDF Downloads 466
1892 Income-Consumption Relationships in Pakistan (1980-2011): A Cointegration Approach

Authors: Himayatullah Khan, Alena Fedorova

Abstract:

The present paper analyses the income-consumption relationship in Pakistan using annual time series data from 1980-81 to 2010-11. The paper uses the Augmented Dickey-Fuller test to check for unit roots and stationarity in the two time series. The paper finds that the two time series are non-stationary but stationary at their first differences. The Augmented Engle-Granger test and the Cointegrating Regression Durbin-Watson test imply that the two time series of consumption and income are cointegrated and that the long-run marginal propensity to consume is 0.88, as given by the estimated (static) equilibrium relation. The paper also used the error correction mechanism (ECM) to model the dynamic relationship. The purpose of the ECM is to indicate the speed of adjustment from short-run disequilibrium to the long-run equilibrium state. The results show that the MPC is equal to 0.93 and is highly significant. The coefficient of the Engle-Granger residuals is negative but insignificant. Statistically, the equilibrium error term is zero, which suggests that consumption adjusts to changes in GDP within the same period. Short-run changes in GDP have a positive impact on short-run changes in consumption. The paper concludes that 0.93 may be interpreted as the short-run MPC. The pair-wise Granger causality test shows that GDP and consumption Granger-cause each other.
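
The testing sequence described (unit-root tests, then the cointegrating regression and its Durbin-Watson statistic) can be sketched with statsmodels as below. The simulated series merely stand in for GDP and consumption; the 1980-2011 Pakistani data are not reproduced, and the estimated slope is only a placeholder for the long-run MPC.

```python
# Sketch of the described sequence: ADF tests, then the cointegrating (static)
# regression with its Durbin-Watson statistic (CRDW). Simulated data only.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(7)
gdp = np.cumsum(rng.normal(0.05, 0.03, 31))     # I(1) stand-in for log GDP
cons = 0.9 * gdp + rng.normal(0, 0.02, 31)      # consumption tied to GDP

for name, series in [("GDP level", gdp), ("GDP first difference", np.diff(gdp))]:
    stat, pval = adfuller(series)[:2]
    print(f"ADF {name}: statistic = {stat:.2f}, p-value = {pval:.3f}")

long_run = sm.OLS(cons, sm.add_constant(gdp)).fit()   # static equilibrium relation
print(f"long-run MPC estimate: {long_run.params[1]:.2f}, "
      f"CRDW: {durbin_watson(long_run.resid):.2f}")
```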

Keywords: cointegrating regression, Augmented Dickey Fuller test, Augmented Engle-Granger test, Granger causality, error correction mechanism

Procedia PDF Downloads 388
1891 EMI Radiation Prediction and Final Measurement Process Optimization by Neural Network

Authors: Hussam Elias, Ninovic Perez, Holger Hirsch

Abstract:

The adoption of EMC regulations worldwide is growing steadily as the use of electronics in our daily lives increases more than ever. In this paper, we introduce a novel method to perform the final phase of electromagnetic compatibility (EMC) measurement and to reduce the required test time according to the norm EN 55032, using a developed tool and a conventional neural network (CNN). The neural network was trained using real EMC measurements, which were performed in the Semi Anechoic Chamber (SAC) by CETECOM GmbH in Essen, Germany. To implement our proposed method, we wrote software to perform the radiated electromagnetic interference (EMI) measurements and used the CNN to predict and determine the turntable position that yields the maximum radiation value.

Keywords: conventional neural network, electromagnetic compatibility measurement, mean absolute error, position error

Procedia PDF Downloads 169
1890 Effects of Canned Cycles and Cutting Parameters on Hole Quality in Cryogenic Drilling of Aluminum 6061-6T

Authors: M. N. Islam, B. Boswell, Y. R. Ginting

Abstract:

The influence of canned cycles and cutting parameters on hole quality in cryogenic drilling has been investigated experimentally and analytically. A three-level, three-parameter experiment was conducted by using the design-of-experiment methodology. The three levels of independent input parameters were the following: for canned cycles—a chip-breaking canned cycle (G73), a spot drilling canned cycle (G81), and a deep hole canned cycle (G83); for feed rates—0.2, 0.3, and 0.4 mm/rev; and for cutting speeds—60, 75, and 100 m/min. The selected work and tool materials were aluminum 6061-6T and high-speed steel (HSS), respectively. For cryogenic cooling, liquid nitrogen (LN2) was used and was applied externally. The measured output parameters were the three widely used quality characteristics of drilled holes—diameter error, circularity, and surface roughness. Pareto ANOVA was applied for analyzing the results. The findings revealed that the canned cycle has a significant effect on diameter error (contribution ratio 44.09%) and small effects on circularity and surface finish (contribution ratio 7.25% and 6.60%, respectively). The best results for the dimensional accuracy and surface roughness were achieved by G81. G73 produced the best circularity results; however, for dimensional accuracy, it was the worst level.
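
One common form of the Pareto ANOVA contribution ratio is based on sums of squared differences between factor-level totals. The sketch below uses that form for three three-level factors arranged in an L9-style layout; the response values are placeholders, not the measured diameter errors.

```python
# Pareto ANOVA contribution-ratio sketch for three-level factors (placeholder data).
import numpy as np

response = np.array([4.1, 3.6, 5.0, 2.9, 3.2, 4.4, 3.0, 3.8, 4.9])  # e.g. diameter error
levels = {
    "canned cycle":  np.array([0, 0, 0, 1, 1, 1, 2, 2, 2]),
    "feed rate":     np.array([0, 1, 2, 0, 1, 2, 0, 1, 2]),
    "cutting speed": np.array([0, 1, 2, 1, 2, 0, 2, 0, 1]),
}

def factor_ss(y, lvl):
    """Sum of squared pairwise differences between level totals (Pareto ANOVA form)."""
    totals = np.array([y[lvl == k].sum() for k in np.unique(lvl)])
    return sum((ti - tj) ** 2 for i, ti in enumerate(totals) for tj in totals[i + 1:])

ss = {name: factor_ss(response, lvl) for name, lvl in levels.items()}
total = sum(ss.values())
for name, s in sorted(ss.items(), key=lambda kv: -kv[1]):
    print(f"{name:13s} contribution ratio: {100 * s / total:5.1f}%")
```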

Keywords: circularity, diameter error, drilling canned cycle, pareto ANOVA, surface roughness

Procedia PDF Downloads 255
1889 Estimation of Residual Stresses in Thick Walled Cylinder by Radial Basis Artificial Neural Network

Authors: Mohammad Heidari

Abstract:

In this paper, a method for estimating residual stresses in autofrettaged high-strength steel tubes using artificial neural networks is presented. Many thick-walled cylinders subjected to different conditions were studied. First, the residual stress is calculated by an analytical solution. Then, by varying the parameters that influence residual stresses, such as the percentage of autofrettage, internal pressure, wall ratio of the cylinder, material properties of the cylinder, and the Bauschinger and hardening effect factors, a neural network is created. These parameters are the inputs of the network, and its output is the residual stress. Numerical data were employed for training the network, and the capability of the model in predicting the residual stress has been verified. The output obtained from the neural network model is compared with the numerical results, and the relative error has been calculated. Based on this verification error, it is shown that the radial basis function neural network has an average error of 2.75% in predicting the residual stress of a thick-walled cylinder. Further analysis of the residual stress of thick-walled cylinders under different input conditions has been carried out, and comparison of the modeling results with the numerical ones shows good agreement, which also proves the feasibility and effectiveness of the adopted approach.
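
A radial basis function network of the kind used here can be reduced to two steps: Gaussian features centred on a subset of training points, followed by a linear least-squares output layer. The sketch below fits such a network to synthetic data; the input parameters, their ranges, and the stress target are assumptions, not the paper's autofrettage dataset.

```python
# Minimal RBF-network regression sketch: Gaussian features + linear output layer.
# Inputs and the "residual stress" target are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
# assumed inputs: [autofrettage %, internal pressure, wall ratio]
X = rng.uniform([20, 100, 1.5], [80, 400, 3.0], size=(200, 3))
y = 0.8 * X[:, 0] - 0.3 * X[:, 1] + 50 * X[:, 2] + rng.normal(0, 5, 200)  # fake stress

Xs = (X - X.mean(axis=0)) / X.std(axis=0)          # standardise before the RBF layer

def rbf_features(Z, centers, width):
    d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

centers = Xs[rng.choice(len(Xs), size=20, replace=False)]   # 20 training points as centres
Phi = np.column_stack([rbf_features(Xs, centers, width=1.0), np.ones(len(Xs))])
weights = np.linalg.lstsq(Phi, y, rcond=None)[0]

pred = Phi @ weights
print(f"training RMSE: {np.sqrt(np.mean((pred - y) ** 2)):.2f} (stress units)")
```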

Keywords: thick walled cylinder, residual stress, radial basis, artificial neural network

Procedia PDF Downloads 383
1888 Development of Wound Dressing System Based on Hydrogel Matrix Incorporated with pH-Sensitive Nanocarrier-Drug Systems

Authors: Dagmara Malina, Katarzyna Bialik-Wąs, Klaudia Pluta

Abstract:

The growing significance of transdermal systems, in which the skin is a route for systemic drug delivery, has generated a considerable amount of data, resulting in a deeper understanding of the mechanisms of transport across the skin in the context of the controlled and prolonged release of active substances. One such solution may be the use of carrier systems based on intelligent polymers with different physicochemical properties. In these systems, active substances, e.g. drugs, can be conjugated (attached), immobilized, or encapsulated in a polymer matrix that is sensitive to specific environmental conditions (e.g. pH or temperature changes). Intelligent polymers can be classified according to their sensitivity to specific environmental stimuli such as temperature, pH, light, or electric, magnetic, sound, or electromagnetic fields. Materials & Methods: The first stage of the presented research concerned the synthesis of pH-sensitive polymeric carriers by a radical polymerization reaction. Then, the selected active substance (hydrocortisone) was introduced into the polymeric carriers. In a further stage, bio-hybrid sodium alginate/poly(vinyl alcohol) (SA/PVA) based hydrogel matrices modified with various carrier-drug systems were prepared by the chemical cross-linking method. The conducted research included the assessment of the physicochemical properties of the obtained materials, i.e. the degree of hydrogel swelling and degradation as a function of pH in distilled water and phosphate-buffered saline (PBS) at 37°C over time. The gel fraction, which represents the insoluble fraction resulting from inter-molecular cross-linking, was also measured. Additionally, the chemical structure of the obtained hydrogels was confirmed using the FT-IR spectroscopic technique. The dynamic light scattering (DLS) technique was used for the analysis of the average particle size of the polymer carriers and carrier-drug systems, and the nanocarrier morphology was observed using SEM microscopy. Results & Discussion: The analysis of the encapsulated polymeric carriers showed that it was possible to obtain a time-stable empty pH-sensitive carrier with an average size of 479 nm and an encapsulated system containing hydrocortisone with an average size of 543 nm, which was introduced into the hydrogel structure. The bio-hybrid hydrogel matrices are stable materials, and the presence of the additional component, the pH-sensitive carrier-hydrocortisone system, reduces neither the degree of cross-linking of the matrix nor its swelling ability. Moreover, the results of the swelling tests indicate that systems containing higher concentrations of the drug have a slightly higher sorption capacity in each of the media used. All analyzed materials show stable and steadily changing swelling values in simulated body fluids: there is no sudden fluid uptake and no rapid release from the material. The analysis of the FT-IR spectra confirms the chemical structure of the obtained bio-hybrid hydrogel matrices. In the case of modification with the pH-sensitive carrier, a much more intense band can be observed in the 3200-3500 cm⁻¹ range, which most likely originates from the strong hydrogen-bonding interactions that occur between the individual components.

Keywords: hydrogels, polymer nanocarriers, sodium alginate/poly(vinyl alcohol) matrices, wound dressings.

Procedia PDF Downloads 121
1887 Using Gene Expression Programming in Learning Process of Rough Neural Networks

Authors: Sanaa Rashed Abdallah, Yasser F. Hassan

Abstract:

The paper introduces an approach in which rough sets, gene expression programming, and rough neural networks are used cooperatively for learning and classification support. The objective of the gene expression programming rough neural networks (GEP-RNN) approach is to obtain newly classified data with minimum error in the training and testing processes. The starting point of the GEP-RNN approach is an information system, and its output is a rough neural network structure, including the weights and thresholds, with minimum classification error.

Keywords: rough sets, gene expression programming, rough neural networks, classification

Procedia PDF Downloads 348
1886 Development of a Work-Related Stress Management Program Guaranteeing Fitness-For-Duty for Human Error Prevention

Authors: Hyeon-Kyo Lim, Tong-Il Jang, Yong-Hee Lee

Abstract:

Human error is one of the most dreaded factors that may result in unexpected accidents, especially in nuclear power plants. For accident prevention, it is indispensable to analyze and manage the influence of any factor that may raise the possibility of human error. Among many factors, stress has been reported to have a significant influence on human performance. Therefore, this research aimed to develop a work-related stress management program that can guarantee the Fitness-for-Duty (FFD) of workers in nuclear power plants, especially those working in main control rooms. Major stress factors were elicited through literature surveys and classified into major categories such as demands, supports, and relationships. To manage those factors, a test and intervention program based on a four-level approach was developed covering the whole employment cycle, including the selection and screening of workers, job allocation, and job rotation. In addition, a managerial care program was introduced based on the concept of an Employee Assistance Program (EAP). Reviews of the program conducted by ex-operators of nuclear power plants gave responses in the affirmative and suggested additional measures to guarantee high performance of human workers, not only in normal operations but also in emergency situations.

Keywords: human error, work performance, work stress, Fitness-For-Duty (FFD), Employee Assistance Program (EAP)

Procedia PDF Downloads 380
1885 Self-Tuning Dead-Beat PD Controller for Pitch Angle Control of a Bench-Top Helicopter

Authors: H. Mansor, S.B. Mohd-Noor, N. I. Othman, N. Tazali, R. I. Boby

Abstract:

This paper presents an improved robust Proportional Derivative (PD) controller for a 3-Degree-of-Freedom (3-DOF) bench-top helicopter using an adaptive methodology. A bench-top helicopter is a laboratory-scale helicopter used for experimental purposes, widely used in teaching laboratories and research. A Proportional Derivative controller has been developed for the 3-DOF bench-top helicopter by Quanser. Experiments showed that the transient response of the designed PD controller has a very large steady-state error, i.e., 50%, which is very serious. The objective of this research is to improve the performance of the existing pitch angle PD control on the bench-top helicopter by integrating the PD controller with an adaptive controller. Usually, a standard adaptive controller will produce zero steady-state error; however, the response time to reach the desired set point is long. Therefore, this paper proposes an adaptive controller with a deadbeat algorithm to overcome these limitations. An output response that is fast, robust, and updated online is expected. Performance comparisons have been performed between the proposed self-tuning deadbeat PD controller and the standard PD controller. The efficiency of the self-tuning deadbeat controller has been proven by the test results in terms of faster settling time, zero steady-state error, and the capability of the controller to be updated online.
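
The problem the self-tuning scheme addresses can be illustrated with a plain discrete PD loop on an assumed second-order pitch model with a constant disturbance torque: the loop settles with a noticeable steady-state offset. The plant parameters, gains, and disturbance below are placeholders, not the Quanser 3-DOF helicopter model or the authors' adaptation law.

```python
# Plain discrete PD loop on an assumed 2nd-order pitch model with a constant
# disturbance, showing the steady-state error a PD controller alone leaves.
import numpy as np

dt, T = 0.01, 10.0
J, c, k_dist = 0.05, 0.2, 0.15     # inertia, damping, constant disturbance torque (assumed)
Kp, Kd = 2.0, 0.6                  # PD gains (assumed)
setpoint = np.deg2rad(10.0)

theta, omega = 0.0, 0.0
prev_err = setpoint - theta
for _ in range(int(T / dt)):
    err = setpoint - theta
    u = Kp * err + Kd * (err - prev_err) / dt      # PD control law
    prev_err = err
    alpha = (u - c * omega - k_dist) / J           # J*theta'' + c*theta' = u - k_dist
    omega += alpha * dt                            # semi-implicit Euler integration
    theta += omega * dt

ss_error = 100 * (setpoint - theta) / setpoint
print(f"final pitch: {np.rad2deg(theta):.2f} deg, steady-state error = {ss_error:.1f}%")
```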

Keywords: adaptive control, deadbeat control, bench-top helicopter, self-tuning control

Procedia PDF Downloads 295
1884 Experiments on Weakly-Supervised Learning on Imperfect Data

Authors: Yan Cheng, Yijun Shao, James Rudolph, Charlene R. Weir, Beth Sahlmann, Qing Zeng-Treitler

Abstract:

Supervised predictive models require labeled data for training purposes. Complete and accurate labeled data, i.e., a ‘gold standard’, is not always available, and imperfectly labeled data may need to serve as an alternative. An important question is whether the accuracy of the labeled data creates a performance ceiling for the trained model. In this study, we trained several models to recognize the presence of delirium in clinical documents using data with annotations that are not completely accurate (i.e., weakly-supervised learning). In the external evaluation, the support vector machine model with a linear kernel performed best, achieving an area under the curve of 89.3% and an accuracy of 88%, surpassing the 80% accuracy of the training sample. We then generated a set of simulated data and carried out a series of experiments which demonstrated that models trained on imperfect data can (but do not always) outperform the accuracy of the training data; e.g., the area under the curve for some models is higher than 80% when trained on data with an error rate of 40%. Our experiments also showed that the error resistance of linear modeling is associated with larger sample size, error type, and linearity of the data (all p-values < 0.001). In conclusion, this study sheds light on the usefulness of imperfect data in clinical research via weakly-supervised learning.
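
The simulation part of the study can be mimicked in a few lines: corrupt a fraction of the training labels at a chosen error rate, fit a linear SVM, and compare its accuracy on clean labels with the accuracy of the noisy training labels themselves. The data below are synthetic, not the clinical delirium corpus, and the error model (symmetric random flips) is an assumption.

```python
# Label-noise simulation sketch: can a linear SVM trained on noisy labels beat the
# accuracy of its own training labels? Synthetic data, symmetric label flips.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=4000, n_features=50, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

error_rate = 0.40
rng = np.random.default_rng(0)
flip = rng.random(len(y_tr)) < error_rate
y_noisy = np.where(flip, 1 - y_tr, y_tr)               # imperfect "weak" labels

clf = LinearSVC(dual=False).fit(X_tr, y_noisy)
print(f"training-label accuracy: {1 - error_rate:.0%}")
print(f"model accuracy on clean test labels: {accuracy_score(y_te, clf.predict(X_te)):.1%}")
```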

Keywords: weakly-supervised learning, support vector machine, prediction, delirium, simulation

Procedia PDF Downloads 166
1883 Deep Learning Framework for Predicting Bus Travel Times with Multiple Bus Routes: A Single-Step Multi-Station Forecasting Approach

Authors: Muhammad Ahnaf Zahin, Yaw Adu-Gyamfi

Abstract:

Bus transit is a crucial component of transportation networks, especially in urban areas. Any intelligent transportation system must have accurate real-time information on bus travel times since it minimizes waiting times for passengers at different stations along a route, improves service reliability, and significantly optimizes travel patterns. Bus agencies must enhance the quality of their information service to serve their passengers better and draw in more travelers, since people waiting at bus stops are frequently anxious about when the bus will arrive at their starting point and when it will reach their destination. To address this issue, different models have been developed recently for predicting bus travel times, but most of them are focused on smaller road networks due to their relatively subpar performance on vast networks in high-density urban areas. This paper develops a deep learning-based architecture using a single-step multi-station forecasting approach to predict average bus travel times for numerous routes, stops, and trips on a large-scale network using heterogeneous bus transit data collected from the GTFS database. Over one week, data were gathered from multiple bus routes in Saint Louis, Missouri. In this study, a Gated Recurrent Unit (GRU) neural network was employed to predict the mean vehicle travel times for different hours of the day for multiple stations along multiple routes. The historical time steps and the prediction horizon were set to 5 and 1, respectively, which means that five hours of historical average travel time data were used to predict the average travel time for the following hour. The spatial and temporal information and the historical average travel times were captured from the dataset as model input parameters. As adjacency matrices for the spatial input parameters, the station distances and sequence numbers were used, and the time of day (hour) was considered for the temporal inputs. Other inputs, including volatility information such as the standard deviation and variance of journey durations, were also included in the model to make it more robust. The model's performance was evaluated based on a metric called the mean absolute percentage error (MAPE). The observed prediction errors for various routes, trips, and stations remained consistent throughout the day. The results showed that the developed model could predict travel times more accurately during peak traffic hours, with a MAPE of around 14%, and performed less accurately during the latter part of the day. In the context of a complicated transportation network in high-density urban areas, the model showed its applicability for real-time travel time prediction of public transportation and ensured the high quality of the predictions generated by the model.
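
The forecasting setup described (five historical hourly steps in, one step out, with a GRU layer and MAPE as the metric) can be outlined as follows. The tensors are random placeholders, the feature count is an assumption, and the network size and training settings are not taken from the paper.

```python
# Single-step GRU forecasting sketch matching the described setup (5 steps in,
# 1 step out, MAPE metric). Random placeholder data, not the GTFS features.
import numpy as np
import tensorflow as tf

n_samples, history, n_features = 1024, 5, 8   # 8 features assumed (travel time + extras)
X = np.random.rand(n_samples, history, n_features).astype("float32")
y = np.random.rand(n_samples, 1).astype("float32")     # next-hour average travel time

model = tf.keras.Sequential([
    tf.keras.Input(shape=(history, n_features)),
    tf.keras.layers.GRU(64),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae",
              metrics=[tf.keras.metrics.MeanAbsolutePercentageError()])
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
print(model.evaluate(X, y, verbose=0))   # [MAE, MAPE]
```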

Keywords: gated recurrent unit, mean absolute percentage error, single-step forecasting, travel time prediction.

Procedia PDF Downloads 47
1882 Prediction of Terrorist Activities in Nigeria using Bayesian Neural Network with Heterogeneous Transfer Functions

Authors: Tayo P. Ogundunmade, Adedayo A. Adepoju

Abstract:

Terrorist attacks in liberal democracies bring about several negative results, for example, undermined public support for the governments they target, disturbance of the peace of a protected environment underwritten by the state, and limitations on individuals' ability to contribute to the advancement of the country, among others. Hence, seeking techniques to understand the different factors involved in terrorism, and how to deal with those factors in order to completely stop or reduce terrorist activities, is a top priority for the government of every country. The aim of this research is to develop an efficient deep learning-based predictive model for the prediction of future terrorist activities in Nigeria, addressing the low prediction accuracy associated with existing solution methods. The proposed AI-based predictive model, as a counterterrorism tool, will be useful to governments and law enforcement agencies to protect the lives of individuals in society and to improve the quality of life in general. A Heterogeneous Bayesian Neural Network (HETBNN) model was derived with a Gaussian error normal distribution. Three primary transfer functions (HOTTFs), as well as two derived transfer functions (HETTFs) arising from the convolution of the HOTTFs, were used, namely: the Symmetric Saturated Linear transfer function (SATLINS), the Hyperbolic Tangent transfer function (TANH), the Hyperbolic Tangent Sigmoid transfer function (TANSIG), the Symmetric Saturated Linear and Hyperbolic Tangent transfer function (SATLINS-TANH), and the Symmetric Saturated Linear and Hyperbolic Tangent Sigmoid transfer function (SATLINS-TANSIG). Data on terrorist activities in Nigeria, gathered through questionnaires for the purpose of this study, were used. Mean Square Error (MSE), Mean Absolute Error (MAE), and Test Error were the forecast evaluation criteria. The results showed that the HETTFs performed better in terms of prediction, and the factors associated with terrorist activities in Nigeria were determined. The proposed deep learning-based predictive model will be useful to governments and law enforcement agencies as an effective counterterrorism mechanism to understand the parameters of terrorism and to design strategies to deal with terrorism before an incident actually happens and potentially causes the loss of precious lives. The proposed AI-based predictive model will reduce the chances of terrorist activities and is particularly helpful for security agencies in predicting future terrorist activities.

Keywords: activation functions, Bayesian neural network, mean square error, test error, terrorism

Procedia PDF Downloads 136
1881 Reduction of Impulsive Noise in OFDM System using Adaptive Algorithm

Authors: Alina Mirza, Sumrin M. Kabir, Shahzad A. Sheikh

Abstract:

Orthogonal Frequency Division Multiplexing (OFDM), with its high data rate, high spectral efficiency, and ability to mitigate the effects of multipath, is most suitable for wireless applications. Impulsive noise distorts the OFDM transmission, and therefore methods must be investigated to suppress this noise. In this paper, a State Space Recursive Least Squares (SSRLS) algorithm based adaptive impulsive noise suppressor for an OFDM communication system is proposed, and a comparison with another adaptive algorithm is conducted. The state-space model-dependent recursive parameters of the proposed scheme enable it to achieve a low steady-state mean squared error (MSE), a low bit error rate (BER), and faster convergence than some existing algorithms.
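
The state-space RLS variant itself is not reproducible from the abstract, but the recursive structure it builds on is the standard RLS update shown below: a gain vector, an a priori error, a weight update, and an inverse-correlation update. The filter order, forgetting factor, and impulsive-noise model are assumptions.

```python
# Standard recursive least squares (RLS) adaptive filter sketch; the paper's
# state-space (SSRLS) formulation and OFDM chain are not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
N, order, lam = 2000, 4, 0.99
true_w = np.array([0.6, -0.3, 0.2, 0.1])

x = rng.normal(size=N)                                # reference input
d = np.convolve(x, true_w, mode="full")[:N]           # desired signal
d += np.where(rng.random(N) < 0.02, rng.normal(scale=10, size=N), 0)  # impulsive noise

w = np.zeros(order)
P = np.eye(order) * 1000.0
for n in range(order, N):
    u = x[n - order + 1:n + 1][::-1]                  # newest sample first
    k = P @ u / (lam + u @ P @ u)                     # gain vector
    e = d[n] - w @ u                                  # a priori error
    w += k * e
    P = (P - np.outer(k, u @ P)) / lam                # inverse-correlation update

print("estimated taps:", np.round(w, 3), " true taps:", true_w)
```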

Keywords: OFDM, impulsive noise, SSRLS, BER

Procedia PDF Downloads 429
1880 Employing Bayesian Artificial Neural Network for Evaluation of Cold Rolling Force

Authors: P. Kooche Baghy, S. Eskandari, E.javanmard

Abstract:

A neural network has been used as a predictive means of cold rolling force in this dissertation. The average force imposed on the rollers is regarded as the sole input, and five parameters pertaining to it as the outputs. According to our study, a feed-forward multilayer perceptron network was selected. In addition, a Bayesian algorithm based on the feed-forward back-propagation method was selected because of the noisy data. Further, 470 out of all 585 tests were used for network learning, and the others (115 tests) were used as assessment criteria. Eventually, by running the MATLAB software 30 times, a mean error of 3.84 percent was obtained as a criterion of network learning. Consequently, this error, on a par with other approaches such as numerical and empirical methods, is admittedly acceptable.

Keywords: artificial neural network, Bayesian, cold rolling, force evaluation

Procedia PDF Downloads 409
1879 Fabrication of Optical Tissue Phantoms Simulating Human Skin and Their Application

Authors: Jihoon Park, Sungkon Yu, Byungjo Jung

Abstract:

Although various optical tissue phantoms (OTPs) simulating human skin have been actively studied, their completeness is unclear because skin tissue has intricate optical properties and a complicated structure that disturb optical simulation. In this study, we designed a multilayer OTP mimicking the skin structure and fabricated OTP models simulating skin-blood vessel and skin pigmentation structures, which are useful in the field of biomedical optics. The OTPs were characterized in terms of optical properties and cross-sectional structure, and analyzed using various optical tools such as a laser speckle imaging system, OCT, and a digital microscope to show their practicality. The measured optical properties were within 5% error, and the thickness of each layer was uniform within 10% error on the micrometer scale.

Keywords: blood vessel, optical tissue phantom, optical property, skin tissue, pigmentation

Procedia PDF Downloads 411
1878 Albanian Students’ Errors in Spoken and Written English and the Role of Error Correction in Assessment and Self-Assessment

Authors: Arburim Iseni, Afrim Aliti, Nagri Rexhepi

Abstract:

This paper focuses mainly on an important aspect of student linguistic errors. It aims to explore the nature of the language errors and mistakes of Albanian intermediate-level (B1) students and attempts to trace their possible sources or causes by classifying the error samples into interlingual and intralingual errors. The hypothesis that these errors may be determined or induced somehow by native language influence seems to be confirmed by the significant number of errors found among Albanian EFL students in the Study Program of English Language and Literature at the State University of Tetova. The findings of this study have revealed that L1 interference first, and then ignorance of English grammar rules, constitute the main sources or causes of errors, even though carelessness cannot be ruled out. Although we conducted our study with 300 students of intermediate (B1) level, we believe that this hypothesis would need to be confirmed by further research, perhaps with a larger number of students at different levels, in order to draw firmer and more accurate conclusions. The analysis of the questionnaires was done according to quantitative and qualitative research methods. The study was also conducted by taking written samples on different topics from our students and then distributing them, with comments, to the students and university teachers as well. The questionnaires were designed to gather information from 300 students and 48 EFL teachers, all of whom teach in the Study Program of English Language and Literature at the State University of Tetova. From the analyzed written samples of the students and the face-to-face interviews, we could gain useful insights into some important aspects of students' error-making and error-correction. These different research methodologies were used in order to constitute holistic research, and the findings of the questionnaires helped us arrive at more solid solutions to minimize the potential gap between students and teachers.

Keywords: L1 & L2, Linguistics, Applied linguistics, SLA, Albanian EFL students and teachers, Errors and Mistakes, Students’ Assessment and Self-Assessment

Procedia PDF Downloads 454
1877 A Comparative Study on the Dimensional Error of 3D CAD Model and SLS RP Model for Reconstruction of Cranial Defect

Authors: L. Siva Rama Krishna, Sriram Venkatesh, M. Sastish Kumar, M. Uma Maheswara Chary

Abstract:

Rapid Prototyping (RP) is a technology that produces models and prototype parts from 3D CAD model data, CT/MRI scan data, and model data created by 3D object digitizing systems. There are several RP processes, such as Stereolithography (SLA), Solid Ground Curing (SGC), Selective Laser Sintering (SLS), Fused Deposition Modelling (FDM), and 3D Printing (3DP); among them, the SLS and FDM RP processes are used to fabricate patterns for custom cranial implants. RP technology is useful in engineering and biomedical applications. In engineering, it is helpful for product design, tooling, and manufacture. Biomedical applications of RP include the design and development of medical devices, instruments, prosthetics, and implants; it is also helpful in planning complex surgical operations. The traditional approach limits the full appreciation of various bony structure movements, and therefore, with the custom implants it produces, it is difficult to measure the anatomy of the parts and analyse changes in facial appearance accurately. Cranioplasty is the surgical correction of a defect in the cranial bone by implanting a metal or plastic replacement to restore the missing part. This paper aims to conduct a comparative study on the dimensional error of 3D CAD and SLS RP models for the reconstruction of a cranial defect by comparing the virtual CAD model with the physical RP model of the defect.

Keywords: rapid prototyping, selective laser sintering, cranial defect, dimensional error

Procedia PDF Downloads 300
1876 Using Eigenvalues and Eigenvectors in Population Growth and Stability Obtaining

Authors: Abubakar Sadiq Mensah

Abstract:

Knowledge of the population growth of a nation is paramount for national planning. The population of a place is studied and a model developed over a period of time; matrices are used to form the model for population growth. The eigenvalue λ of the matrix A and its corresponding eigenvector X, such that AX = λX, are calculated. The stable age distribution of the population is obtained using the eigenvalue and the characteristic polynomial. Hence, estimates can be made using eigenvalues and eigenvectors.
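
A common concrete form of this idea is the Leslie (age-structured projection) matrix: its dominant eigenvalue gives the long-run population growth factor, and the corresponding eigenvector, once normalised, gives the stable age distribution. The fertility and survival rates in the sketch below are placeholders, not data for any particular nation.

```python
# Leslie-matrix sketch: dominant eigenvalue = long-run growth factor,
# its eigenvector (normalised) = stable age distribution. Rates are placeholders.
import numpy as np

A = np.array([
    [0.0, 1.2, 1.0],    # age-specific fertility rates
    [0.8, 0.0, 0.0],    # survival from age class 1 to 2
    [0.0, 0.7, 0.0],    # survival from age class 2 to 3
])

eigvals, eigvecs = np.linalg.eig(A)
i = np.argmax(eigvals.real)                 # dominant (Perron) eigenvalue
growth_factor = eigvals[i].real
stable_dist = np.abs(eigvecs[:, i].real)
stable_dist /= stable_dist.sum()            # proportions summing to 1

print(f"long-run growth factor λ = {growth_factor:.3f} per time step")
print("stable age distribution:", np.round(stable_dist, 3))
```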

Keywords: eigenvalues, eigenvectors, population, growth/stability

Procedia PDF Downloads 481
1875 Effect of Core Stability Exercises on Trunk Proprioception in Healthy Adult Individuals

Authors: Omaima E. S. Mohammed, Amira A. A. Abdallah, Amal A. M. El Borady

Abstract:

Background: Core stability training has recently attracted attention for improving muscle performance. Purpose: This study investigated the effect of beginners' core stability exercises on trunk active repositioning error at 30° and 60° of trunk flexion. Methods: Forty healthy males participated in the study. They were divided into two equal groups: experimental “group I” and control “group II”. Their mean age, weight, and height were 19.35±1.11 vs 20.45±1.64 years, 70.15±6.44 vs 72.45±6.91 kg, and 174.7±7.02 vs 176.3±7.24 cm for group I vs group II. Data were collected using the Biodex isokinetic system at an angular velocity of 60º/s. The participants were tested twice: before and after a 6-week period during which group I performed a core stability training program. Results: The mixed 3-way ANOVA revealed significant increases (p<0.05) in the absolute error (AE) at 30˚ compared with 60˚ flexion in the pre-test condition of groups I and II and in the post-test condition of group II. Moreover, there were significant decreases (p<0.05) in the AE in the post-test condition compared with the pre-test in group I at both 30˚ and 60˚ flexion, with no significant differences for group II. Finally, there were significant decreases (p<0.05) in the AE in group I compared with group II in the post-test condition at 30˚ and 60˚ flexion, with no significant differences for the pre-test condition. Interpretation/Conclusion: The improvement in trunk proprioception, indicated by the decrease in the active repositioning error in the experimental group, supports including core stability training in exercise programs that aim to improve trunk proprioception.

Keywords: core stability, isokinetic, trunk proprioception, biomechanics

Procedia PDF Downloads 446
1874 Detecting Logical Errors in Haskell

Authors: Vanessa Vasconcelos, Mariza A. S. Bigonha

Abstract:

In order to facilitate both processes, this paper presents HaskellFL, a tool that uses fault localization techniques to locate a logical error in Haskell code. The Haskell subset used in this work is sufficiently expressive for those studying functional programming to get immediate help debugging their code and to answer questions about key concepts associated with the functional paradigm. HaskellFL was tested against functional programming assignments submitted by students enrolled in the functional programming class at the Federal University of Minas Gerais and against exercises from the Exercism Haskell track that are publicly available on GitHub. Furthermore, the EXAM score was chosen to evaluate the tool’s effectiveness, and the results showed that HaskellFL reduced the effort needed to locate an error in all tested scenarios. The results also showed that the Ochiai method was more effective than Tarantula.
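
The two ranking formulas being compared are short enough to state directly: with f and p the numbers of failing and passing tests that execute a statement, and F and P the totals in the suite, Tarantula scores (f/F) / (f/F + p/P) and Ochiai scores f / sqrt(F * (f + p)). The sketch below evaluates both on a toy coverage spectrum; HaskellFL itself analyses Haskell code, so this Python rendering is only illustrative.

```python
# Tarantula and Ochiai suspiciousness scores on a toy coverage spectrum.
import math

def tarantula(f, p, total_f, total_p):
    ff = f / total_f if total_f else 0.0
    pp = p / total_p if total_p else 0.0
    return ff / (ff + pp) if (ff + pp) else 0.0

def ochiai(f, p, total_f):
    denom = math.sqrt(total_f * (f + p))
    return f / denom if denom else 0.0

# statement -> (times covered by failing tests, times covered by passing tests)
spectrum = {"line 3": (4, 1), "line 7": (1, 6), "line 12": (4, 8)}
total_f, total_p = 4, 10    # failing / passing tests in the suite
for stmt, (f, p) in spectrum.items():
    print(f"{stmt}: Tarantula = {tarantula(f, p, total_f, total_p):.2f}, "
          f"Ochiai = {ochiai(f, p, total_f):.2f}")
```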

Keywords: debug, fault localization, functional programming, Haskell

Procedia PDF Downloads 274