Search results for: mean absolute error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2256

1986 Spelling Errors in Persian Children with Developmental Dyslexia

Authors: Mohammad Haghighi, Amineh Akhondi, Leila Jahangard, Mohammad Ahmadpanah, Masoud Ansari

Abstract:

Background: According to recent estimates, approximately 4%-12% of Iranians have difficulty learning to read and spell, possibly as a result of developmental dyslexia. The study was planned to investigate spelling error patterns among Persian children with developmental dyslexia and compare them with the errors exhibited by control groups. Participants: 90 students participated in this study: 30 fifth-grade students diagnosed as dyslexic by professionals, 30 normal fifth-grade readers, and 30 younger normal readers. There were 15 boys and 15 girls in each group. Qualitative and quantitative methods were used for the analysis of errors. Results and conclusion: The results of this study indicate similar spelling error profiles among dyslexics and the reading-level-matched group, and these profiles differed from those of the age-matched group. However, the performances of the dyslexic group and the reading-level-matched group were different and inconsistent in some cases.

Keywords: spelling, error types, developmental dyslexia, Persian, writing system, learning disabilities, processing

Procedia PDF Downloads 393
1985 Introduction of Integrated Image Deep Learning Solution and How It Brought Laboratorial Level Heart Rate and Blood Oxygen Results to Everyone

Authors: Zhuang Hou, Xiaolei Cao

Abstract:

The general public and medical professionals recognized the importance of accurately measuring and storing blood oxygen levels and heart rate during the COVID-19 pandemic. The demand for accurate contactless devices was motivated by the need to reduce cross-infection and by the shortage of conventional oximeters, partially due to global supply chain issues. This paper evaluated the heart rate (HR) and oxygen saturation (SpO2) measurements of HealthyPai, a contactless mini program, against other wearable devices. In the HR study of 185 samples (81 in the laboratory environment, 104 in the real-life environment), the mean absolute error (MAE) ± standard deviation was 1.4827 ± 1.7452 in the lab and 6.9231 ± 5.6426 in the real-life setting. In the SpO2 study of 24 samples, the MAE ± standard deviation of the measurement was 1.0375 ± 0.7745. Our results validated that HealthyPai, utilizing the Integrated Image Deep Learning Solution (IIDLS) framework, can accurately measure HR and SpO2, providing test quality at least comparable to other FDA-approved wearable devices on the market and surpassing consumer-grade and research-grade wearable standards.
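
A minimal Python sketch of the MAE ± standard deviation metric reported above; the paired heart-rate readings are hypothetical, not the study's data:

```python
import numpy as np

def mae_with_std(measured, reference):
    """Mean absolute error between paired readings, plus the standard
    deviation of the absolute errors (the 'MAE ± SD' convention above)."""
    errors = np.abs(np.asarray(measured, float) - np.asarray(reference, float))
    return float(errors.mean()), float(errors.std())

# Hypothetical device-vs-reference heart-rate pairs (beats per minute)
device = [72, 75, 80, 68, 90]
reference = [71, 77, 79, 70, 88]
mae, sd = mae_with_std(device, reference)
```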

Keywords: remote photoplethysmography, heart rate, oxygen saturation, contactless measurement, mini program

Procedia PDF Downloads 104
1984 Traverse Surveying Table Simple and Sure

Authors: Hamid Fallah

Abstract:

Creating surveying stations is the first thing a surveyor learns; stations are used for control and implementation in projects such as buildings, roads, tunnels, monitoring, and whatever else relates to the preparation of maps. This article presents the method of calculation through the traverse table; by checking several examples of errors made in this table's calculations by several publishers of surveying books, we also verify the results of several software packages in a simple way. Surveyors measure angles and lengths when creating surveying stations, so the most important task of a surveyor is to correctly remove the error in angles and lengths from the calculations and to determine whether the amount of error is within the permissible limit for removal or not.
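
The angle check described above can be sketched for a closed traverse, whose interior angles should sum to (n-2)·180°; the 30″·√n permissible limit and the measured angles below are illustrative assumptions, not values from the article:

```python
import math

def angular_misclosure(interior_angles_deg):
    """Misclosure of a closed traverse: measured sum minus (n - 2) * 180."""
    n = len(interior_angles_deg)
    return sum(interior_angles_deg) - (n - 2) * 180.0

def within_permissible(misclosure_sec, n, k_sec=30.0):
    """Common rule of thumb: allowable angular misclosure = k * sqrt(n) seconds."""
    return abs(misclosure_sec) <= k_sec * math.sqrt(n)

# Hypothetical 4-station traverse (degrees); the true sum should be 360
angles = [89.9917, 90.0028, 90.0000, 90.0083]
mis_deg = angular_misclosure(angles)
acceptable = within_permissible(mis_deg * 3600.0, len(angles))
# Distribute the misclosure equally before computing coordinates
corrected = [a - mis_deg / len(angles) for a in angles]
```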

Keywords: UTM, localization, scale factor, cartesian, traverse

Procedia PDF Downloads 49
1983 How Do L1 Teachers Assess Haitian Immigrant High School Students in Chile?

Authors: Gloria Toledo, Andrea Lizasoain, Leonardo Mena

Abstract:

Immigration has largely increased in Chile in the last 20 years. About 6.6% of our population is foreign, of which 14.3% is Haitian. Haitians are between 15 and 29 years old and have come to Chile escaping from a social crisis. They believe that education and work will help them do better in life. Therefore, the number of Haitian students in the Chilean school system has also increased: there were 3,121 Haitian students enrolled in 2017. This is a challenge for the public school, which takes in young people who must face schooling, social immersion, and the learning of a second language simultaneously. The linguistic barrier affects both the students' and the teachers' adaptation process, which has an impact on the students' academic performance and their consequent acquisition of Spanish. In order to explore students' academic performance and interlanguage development, we examined how L1 teachers assess Haitian high school students' written production in Spanish. For this purpose, teachers were asked to use a specially designed grid to assess correction, accommodation, lexical and analytical complexity, organization, and fluency in the writing of both Haitian and Chilean students. In parallel, the texts were approached from an error analysis perspective. Results from the grids and the error analysis were then compared. On the one hand, we found that teachers give very little feedback to students apart from scores and grades, which does not contribute to the development of the second language. On the other hand, the error analysis yielded that Haitian students are in a dynamic process of acquiring Spanish, which could be enhanced if L1 teachers were aware of the process of interlanguage development.

Keywords: assessment, error analysis, grid, immigration, Spanish acquisition, writing

Procedia PDF Downloads 101
1982 Security as Human Value: Issue of Human Rights in Indian Sub-Continental Operations

Authors: Pratyush Vatsala, Sanjay Ahuja

Abstract:

National security and human rights are related terms, as there is nothing like absolute security or an absolute human right. If we are committed to security, human rights are a problem and also a solution; and if we deliberate on human rights, security is a problem but also part of the solution. Ultimately, we have to maintain a balance between the two related terms. As more and more armed forces are being deployed by the government within the nation for maintaining peace and security, using force against its own citizens, the search for a judicious balance between intent and action needs to be emphasized. Notwithstanding that a nation state needs complete political independence, the search for security is a driving force behind unquestioned sovereignty. If security is a human value, it overlaps the values of freedom, order, and solidarity. Now the question needs to be explored: to what extent can human rights be compromised in the name of security in places like Kashmir or Mizoram? The present study aims to explore the issue of maintaining a balance between the use of power and good governance as a human right, treating security as a human value. This paper has been prepared with the aim of strengthening the understanding of the complex and multifaceted relationship between human rights and security forces operating for conflict management. It identifies some of the critical human rights issues raised in the context of security forces operations, highlighting the relevant human rights principles and standards under which security as a human value should be respected at all times, in particular in the context of security forces operations in India.

Keywords: Kashmir, Mizoram, security, value, human right

Procedia PDF Downloads 244
1981 Application of ANN for Estimation of Power Demand of Villages in Sulaymaniyah Governorate

Authors: A. Majeed, P. Ali

Abstract:

Before designing an electrical system, load estimation is necessary for unit sizing and demand-generation balancing. The system could be a stand-alone system for a village, grid-connected, or renewable energy integrated into a grid connection, especially as there are non-electrified villages in developing countries. In the classical model, the energy demand is found by estimating the household appliances multiplied by their ratings and the duration of their operation; in this paper, however, information available for electrified villages is used to predict the demand, as villages have almost the same lifestyle. This paper describes a method used to predict the average energy consumed in each two-month period by every consumer living in a village using an Artificial Neural Network (ANN). The input data were collected through a regional survey of samples of consumers representing typical types of living, household appliances, and energy consumption, and the output data were collected from the administration office of Piramagrun for each corresponding consumer. The results of this study show that the average demand for different consumers from four villages in different months throughout the year is approximately 12 kWh/day, and the model estimates the average demand per day for every consumer with a mean absolute percent error of 11.8%. The MathWorks software package MATLAB version 7.6.0, which includes the Neural Network Toolbox, was used.
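
The mean absolute percent error reported above can be computed as in this short sketch; the consumption figures are hypothetical:

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percent error, in percent (actual values must be nonzero)."""
    actual = np.asarray(actual, float)
    predicted = np.asarray(predicted, float)
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100.0)

# Hypothetical average daily demand (kWh/day) for four consumers
actual = [12.0, 10.0, 15.0, 8.0]
predicted = [13.2, 9.0, 15.0, 9.6]
err = mape(actual, predicted)
```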

Keywords: artificial neural network, load estimation, regional survey, rural electrification

Procedia PDF Downloads 94
1980 A Sparse Representation Speech Denoising Method Based on Adapted Stopping Residue Error

Authors: Qianhua He, Weili Zhou, Aiwu Chen

Abstract:

A sparse representation speech denoising method based on an adapted stopping residue error is presented in this paper. First, the cross-correlation between the clean speech spectrum and the noise spectrum was analyzed, and an estimation method was proposed. In the denoising method, an over-complete dictionary of the clean speech power spectrum was learned with the K-singular value decomposition (K-SVD) algorithm. In the sparse representation stage, the stopping residue error was adapted according to the estimated cross-correlation and the adjusted noise spectrum, and the orthogonal matching pursuit (OMP) approach was applied to reconstruct the clean speech spectrum from the noisy speech. Finally, the clean speech was re-synthesized via the inverse Fourier transform from the reconstructed speech spectrum and the noisy speech phase. Experimental results show that the proposed method outperforms conventional methods in terms of subjective and objective measures.
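
The OMP stage with a stopping residue error can be sketched as below; this is a generic textbook OMP in NumPy with a toy dictionary, not the authors' implementation:

```python
import numpy as np

def omp(D, y, residue_tol, max_atoms=None):
    """Orthogonal matching pursuit: build a sparse x with y ~ D @ x, stopping
    once the residue norm falls below residue_tol (the stopping residue error)."""
    if max_atoms is None:
        max_atoms = D.shape[1]
    support, coef = [], np.zeros(0)
    residue = y.astype(float).copy()
    while np.linalg.norm(residue) > residue_tol and len(support) < max_atoms:
        k = int(np.argmax(np.abs(D.T @ residue)))  # atom most correlated with residue
        if k in support:
            break  # no new atom improves the fit
        support.append(k)
        # Least-squares refit on all selected atoms (the "orthogonal" step)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residue = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Toy example: orthonormal dictionary, 2-sparse signal
D = np.eye(4)
y = np.array([0.0, 3.0, 0.0, 1.0])
x = omp(D, y, residue_tol=1e-8)
```

In the paper's setting, `D` would be the K-SVD-learned dictionary of clean speech power spectra and `residue_tol` the adaptively chosen stopping residue error.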

Keywords: speech denoising, sparse representation, k-singular value decomposition, orthogonal matching pursuit

Procedia PDF Downloads 470
1979 Feature Extraction and Classification Based on the Bayes Test for Minimum Error

Authors: Nasar Aldian Ambark Shashoa

Abstract:

Classification with dimension reduction based on a Bayesian approach is proposed in this paper. The first step is to generate samples (parameters) of the fault-free mode class and the faulty mode class. Second, in order to obtain good classification performance, important features are selected with the discrete Karhunen-Loève expansion. Next, the Bayes test for minimum error is used to classify the classes. Finally, results for simulated data demonstrate the capabilities of the proposed procedure.
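
The Bayes test for minimum error can be sketched for two one-dimensional Gaussian class likelihoods (an illustrative assumption; the paper works in the Karhunen-Loève feature space):

```python
import math

def bayes_decide(x, mu0, var0, p0, mu1, var1, p1):
    """Bayes test for minimum error with 1-D Gaussian class likelihoods:
    choose the class with the larger posterior, prior * likelihood."""
    def likelihood(x, mu, var):
        return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)
    return 0 if p0 * likelihood(x, mu0, var0) >= p1 * likelihood(x, mu1, var1) else 1

# Hypothetical fault-free (class 0) and faulty (class 1) feature distributions
label = bayes_decide(0.2, mu0=0.0, var0=1.0, p0=0.5, mu1=3.0, var1=1.0, p1=0.5)
```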

Keywords: analytical redundancy, fault detection, feature extraction, Bayesian approach

Procedia PDF Downloads 497
1978 The Best Prediction Data Mining Model for Breast Cancer Probability in Women Residents in Kabul

Authors: Mina Jafari, Kobra Hamraee, Saied Hossein Hosseini

Abstract:

The prediction of breast cancer is one of the challenges in medicine. In this paper, we collected 528 records of information on women who live in Kabul, including demographic, lifestyle, diet, and pregnancy data. There are many classification algorithms for breast cancer prediction; we tried to find the best model with the most accurate result and the lowest error rate. We evaluated several common supervised data mining algorithms to find the best model for predicting breast cancer among Afghan women living in Kabul, with the mammography result as the target variable. For evaluating these algorithms, we used cross validation, which is a reliable method for measuring the performance of models. After comparing the error rate and accuracy of three models (Decision Tree, Naive Bayes, and Rule Induction), the Decision Tree, with an accuracy of 94.06% and an error rate of 15%, was found to be the best model for predicting breast cancer based on the health care records.
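
Cross validation as used above can be sketched as follows; a trivial nearest-mean classifier stands in for the decision tree, and the data are synthetic:

```python
import numpy as np

def kfold_error_rate(X, y, train_fn, k=5, seed=0):
    """Estimate a classifier's error rate by k-fold cross validation."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), k)
    errors = 0
    for fold in folds:
        test_mask = np.zeros(len(X), dtype=bool)
        test_mask[fold] = True
        predict = train_fn(X[~test_mask], y[~test_mask])
        errors += int(np.sum(predict(X[test_mask]) != y[test_mask]))
    return errors / len(X)

def nearest_mean(Xtr, ytr):
    """Tiny stand-in classifier: assign each sample to the nearest class mean."""
    classes = np.unique(ytr)
    centers = np.stack([Xtr[ytr == c].mean(axis=0) for c in classes])
    def predict(X):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        return classes[np.argmin(d, axis=1)]
    return predict

# Two well-separated synthetic classes
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(3.0, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
err = kfold_error_rate(X, y, nearest_mean, k=5)
```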

Keywords: decision tree, breast cancer, probability, data mining

Procedia PDF Downloads 107
1977 Study on Flexible Diaphragm In-Plane Model of Irregular Multi-Storey Industrial Plant

Authors: Cheng-Hao Jiang, Mu-Xuan Tao

Abstract:

The rigid diaphragm model may cause errors in the calculation of internal forces because it neglects the in-plane deformation of the diaphragm. This paper thus studies the effects of different diaphragm in-plane models (an in-plane rigid model and an in-plane flexible model) on the seismic performance of structures. Taking an actual industrial plant as an example, the seismic performance of the structure is predicted using different floor diaphragm models, and the analysis errors caused by the different diaphragm in-plane models, including deformation error and internal force error, are calculated. Furthermore, the influence of the aspect ratio on the analysis errors is investigated. Finally, the rationality of the code is evaluated by assessing the analysis errors of structure models whose floors are determined to be rigid according to the code's criterion. It is found that different floor models may cause great differences in the distribution of structural internal forces, and that the current code may underestimate the influence of the floor in-plane effect.

Keywords: industrial plant, diaphragm, calculating error, code rationality

Procedia PDF Downloads 114
1976 Neural Network Approach to Classifying Truck Traffic

Authors: Ren Moses

Abstract:

The process of classifying vehicles on a highway is hereby viewed as a pattern recognition problem in which connectionist techniques such as artificial neural networks (ANN) can be used to assign vehicles to their correct classes and hence to establish optimum axle spacing thresholds. In the United States, vehicles are typically classified into 13 classes using a methodology commonly referred to as “Scheme F”. In this research, an ANN model was developed, trained, and applied to field data on vehicles. The data comprised three vehicular features: axle spacing, number of axles per vehicle, and overall vehicle weight. The ANN reduced the classification error rate from 9.5 percent to 6.2 percent when compared to an existing classification algorithm that is not ANN-based and uses two vehicular features for classification, namely axle spacing and number of axles. The inclusion of overall vehicle weight as a third classification variable further reduced the error rate from 6.2 percent to only 3.0 percent. The promising results from the neural networks were used to set new thresholds that reduce the classification error rate.

Keywords: artificial neural networks, vehicle classification, traffic flow, traffic analysis, highway operations

Procedia PDF Downloads 273
1975 Fast and Scale-Adaptive Target Tracking via PCA-SIFT

Authors: Yawen Wang, Hongchang Chen, Shaomei Li, Chao Gao, Jiangpeng Zhang

Abstract:

As the main challenges in target tracking are accounting for target scale change and running in real time, we combine the Mean-Shift and PCA-SIFT algorithms to solve the problem. We introduce a similarity comparison method to determine how the target scale changes, and take different strategies in different situations. Since a growing target scale causes location error, we employ backward tracking to reduce the error. The Mean-Shift algorithm performs poorly when tracking a scale-changing target owing to the fixed bandwidth of its kernel function. In order to overcome this problem, we introduce PCA-SIFT matching: through key-point matching between the target and the template, the scale of the tracking window can be adjusted adaptively. Because this algorithm is sensitive to wrong matches, we introduce RANSAC to reduce mismatches as far as possible; furthermore, target relocation is triggered when the number of matches is too small. In addition, we take comprehensive account of target deformation and error accumulation to put forward a new template update method. Experiments on five image sequences, and comparison with six other algorithms, demonstrate the favorable performance of the proposed tracking algorithm.

Keywords: target tracking, PCA-SIFT, mean-shift, scale-adaptive

Procedia PDF Downloads 403
1974 FPGA Implementation of the BB84 Protocol

Authors: Jaouadi Ikram, Machhout Mohsen

Abstract:

The development of a quantum key distribution (QKD) system on a field-programmable gate array (FPGA) platform is the subject of this paper. A quantum cryptographic protocol is designed based on the properties of quantum information and the characteristics of FPGAs. The proposed protocol performs key extraction, reconciliation, error correction, and privacy amplification tasks to generate a perfectly secret final key. We modeled the presence of the spy in our system with a strategy to reveal some of the exchanged information without being noticed. Using an FPGA card with a 100 MHz clock frequency, we have demonstrated the evolution of the error rate as well as the amounts of mutual information (between the two interlocutors and that of the spy) passing from one step to another in the key generation process.
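
The sifting and error-rate steps of BB84 can be simulated in software (the paper's implementation is an FPGA design; the channel error probability below is an assumption):

```python
import numpy as np

def bb84_sift(n, error_prob, seed=0):
    """Toy BB84 sifting: keep positions where Alice's and Bob's random bases
    agree, then estimate the quantum bit error rate (QBER) on the sifted key."""
    rng = np.random.default_rng(seed)
    alice_bits = rng.integers(0, 2, n)
    alice_bases = rng.integers(0, 2, n)
    bob_bases = rng.integers(0, 2, n)
    # With matching bases Bob reads Alice's bit apart from channel errors;
    # with differing bases his outcome is random.
    flips = rng.random(n) < error_prob
    random_bits = rng.integers(0, 2, n)
    bob_bits = np.where(alice_bases == bob_bases, alice_bits ^ flips, random_bits)
    keep = alice_bases == bob_bases
    qber = float(np.mean(alice_bits[keep] != bob_bits[keep]))
    return alice_bits[keep], bob_bits[keep], qber

key_a, key_b, qber = bb84_sift(20000, error_prob=0.03)
```

On average half the positions survive sifting, and the estimated QBER tracks the assumed channel error probability; reconciliation and privacy amplification would then operate on the sifted key.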

Keywords: QKD, BB84, protocol, cryptography, FPGA, key, security, communication

Procedia PDF Downloads 151
1973 Frequency of Refractive Errors in Squinting Eyes of Children from 4 to 16 Years Presenting at Tertiary Care Hospital

Authors: Maryum Nawaz

Abstract:

Purpose: To determine the frequency of refractive errors in the squinting eyes of children aged 4 to 16 years presenting at a tertiary care hospital. Study Design: A descriptive cross-sectional study. Place and Duration: The study was conducted in Pediatric Ophthalmology, Hayatabad Medical Complex, Peshawar. Materials and Methods: The sample size was 146, assuming a 41.45% proportion of refractive errors in children with squinting eyes, a 95% confidence interval, and an 8% margin of error under WHO sample size calculations. Non-probability consecutive sampling was done. Result: The mean age was 8.57±2.66 years. 89 (61.0%) were male and 57 (39.0%) were female. Refractive error was present in 56 (38.4%) of patients and absent in 90 (61.6%). There was no association of gender, age, parental refractive errors, or early use of electronic equipment with refractive errors. Conclusion: There is a high prevalence of refractive errors in patients with strabismus. There is no association of age, gender, parental refractive errors, or early use of electronic equipment with the occurrence of refractive errors. Further studies are recommended to confirm these findings.
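
The quoted sample size of 146 follows from the standard formula for estimating a proportion, n = z²·p·(1-p)/d², rounded up; a quick check:

```python
import math

def sample_size_for_proportion(p, margin, z=1.96):
    """Minimum n to estimate a proportion p within the given margin of error
    at ~95% confidence: n = z**2 * p * (1 - p) / margin**2, rounded up."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# Parameters quoted in the abstract: p = 41.45%, margin of error = 8%
n = sample_size_for_proportion(0.4145, 0.08)
```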

Keywords: strabismus, refractive error, myopia, hypermetropia, astigmatism

Procedia PDF Downloads 117
1972 Comparing the Apparent Error Rate of Gender Specifying from Human Skeletal Remains by Using Classification and Cluster Methods

Authors: Jularat Chumnaul

Abstract:

In forensic science, corpses from various homicides differ: some are complete and some incomplete, depending on the cause of death or the form of homicide. For example, some corpses are cut into pieces, some are concealed by being dumped into a river, some are buried, some are burned to destroy the evidence, and so on. If a corpse is incomplete, personal identification can become difficult because some tissues and bones are destroyed. To specify the gender of a corpse from skeletal remains, the most precise method is DNA identification. However, this method is costly and takes longer, so other identification techniques are used instead. The first widely used technique is considering the features of the bones. In general, evidence from the corpse, such as certain bones, especially the skull and pelvis, can be used to identify gender. To use this technique, forensic scientists require observation skills in order to classify the differences between male and female bones. Although this technique is uncomplicated, saves time and cost, and allows forensic scientists to determine gender fairly accurately (apparently with an accuracy rate of 90% or more), its crucial disadvantage is that only some positions of the skeleton can be used to specify gender, such as the supraorbital ridge, nuchal crest, temporal lobe, mandible, and chin. Therefore, the skeletal remains to be used have to be complete. The other technique widely used for gender specification in forensic science and archeology is skeletal measurement. The advantage of this method is that it can be applied at several positions on one bone, and it can be used even if the bones are not complete. In this study, classification and cluster analysis are applied to this technique, including the Kth Nearest Neighbor Classification, Classification Tree, Ward Linkage Cluster, K-mean Cluster, and Two Step Cluster.
The data contain 507 individuals and 9 skeletal measurements (diameter measurements), and the performance of the five methods is investigated by considering the apparent error rate (APER). The results of this study indicate that the Two Step Cluster and Kth Nearest Neighbor methods seem suitable for specifying gender from human skeletal remains, because they yield small apparent error rates of 0.20% and 4.14%, respectively. On the other hand, the Classification Tree, Ward Linkage Cluster, and K-mean Cluster methods are not appropriate, since they yield large apparent error rates of 10.65%, 10.65%, and 16.37%, respectively. However, there are other ways to evaluate the performance of classification, such as an estimate of the error rate using the holdout procedure or misclassification costs, and different methods can lead to different conclusions.
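
The apparent error rate used above is simply the in-sample misclassification fraction; the labels below are hypothetical:

```python
import numpy as np

def apparent_error_rate(true_labels, predicted_labels):
    """Apparent error rate (APER): fraction of the training sample that the
    fitted rule misclassifies when reapplied to the same data."""
    true_labels = np.asarray(true_labels)
    predicted_labels = np.asarray(predicted_labels)
    return float(np.mean(true_labels != predicted_labels))

# Hypothetical sex labels (0 = female, 1 = male) and in-sample predictions
truth = [0, 0, 0, 1, 1, 1, 1, 0, 1, 0]
pred = [0, 0, 1, 1, 1, 1, 1, 0, 1, 0]
aper = apparent_error_rate(truth, pred)
```

As the abstract notes, APER is optimistic; a holdout estimate applies the fitted rule to data not used in fitting.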

Keywords: skeletal measurements, classification, cluster, apparent error rate

Procedia PDF Downloads 223
1971 Absolute Liability in International Human Rights Law

Authors: Gassem Alfaleh

Abstract:

In strict liability, a person can be held liable for any harm resulting from certain actions or activities without any mistake. The liability is strict because a person can be liable whether or not he or she commits the harm intentionally. The duty owed is the duty to avoid causing the plaintiff any harm. However, “strict liability is imposed at the international level by two types of treaties, namely those limited to giving internal effect to treaty provisions and those that impose responsibilities on states. The basic principle of strict liability is that there is a liability on the operator or the state (when the act concerned is attributable to the state) for damage inflicted without there being a need to prove unlawful behavior”. In international human rights law, strict liability can exist when a defendant is in legal jeopardy by virtue of an internationally wrongful act, without any accompanying intent or mental state. When the defendant engages in an abnormally dangerous activity against the environment, he will be held liable for any harm it causes, even if he was not at fault. The paper will focus on these activities under international human rights law. First, the paper will define important terms in its first section. Second, it will focus on state and non-state actors in terms of strict liability. Then, the paper will cover four major areas in which states should be liable for hazardous activities: (1) nuclear energy, (2) maritime pollution, (3) space law, and (4) other hazardous activities which damage the environment.

Keywords: human rights, law, legal, absolute

Procedia PDF Downloads 125
1970 NOx Prediction by Quasi-Dimensional Combustion Model of Hydrogen Enriched Compressed Natural Gas Engine

Authors: Anas Rao, Hao Duan, Fanhua Ma

Abstract:

Dependency on fossil fuels can be minimized by using hydrogen-enriched compressed natural gas (HCNG) in transportation vehicles. However, the NOx emissions of HCNG engines are significantly higher, and this has turned out to be their major drawback. Therefore, the study of the NOx emissions of HCNG engines is a very important area of research. In this context, experiments have been performed at different hydrogen percentages, ignition timings, air-fuel ratios, manifold absolute pressures, loads, and engine speeds. Afterwards, simulation has been accomplished with a quasi-dimensional combustion model of the HCNG engine. In order to investigate NOx emission, a NO mechanism has been coupled to the quasi-dimensional combustion model. Three NOx mechanisms have been used to predict NOx emission: the thermal NOx, prompt NOx, and N2O mechanisms. For validation purposes, the NO curve has been transformed into NO packets based on a temperature difference of 100 K for lean-burn and 60 K for stoichiometric conditions, while the width of a packet has been taken as the ratio of the crank duration of the packet to the total burnt duration. The combustion chamber of the engine has been divided into three zones, with each zone equal to the product of the summation of NO packets and space. In order to check the accuracy of the model, the percentage error of NOx emission has been evaluated; it lies in the ranges of ±6% and ±10% for the lean-burn and stoichiometric conditions, respectively. Finally, the percentage contribution of each NO formation mechanism has been evaluated.

Keywords: quasi-dimensional combustion, thermal NO, prompt NO, NO packet

Procedia PDF Downloads 222
1969 Real Estate Trend Prediction with Artificial Intelligence Techniques

Authors: Sophia Liang Zhou

Abstract:

For investors, businesses, consumers, and governments, an accurate assessment of future housing prices is crucial to critical decisions in resource allocation, policy formation, and investment strategies. Previous studies are contradictory about the macroeconomic determinants of housing prices and have largely focused on one or two areas using point prediction. This study aims to develop data-driven models to accurately predict future housing market trends in different markets. This work studied five metropolitan areas representing different market trends and compared three time-lag situations: no lag, 6-month lag, and 12-month lag. Linear regression (LR), random forest (RF), and artificial neural network (ANN) models were employed to model real estate prices using datasets with the S&P/Case-Shiller home price index and 12 demographic and macroeconomic features, such as gross domestic product (GDP), resident population, and personal income, in five metropolitan areas: Boston, Dallas, New York, Chicago, and San Francisco. The data from March 2005 to December 2018 were collected from the Federal Reserve Bank, the FBI, and Freddie Mac. In the original data, some factors are monthly, some quarterly, and some yearly; thus, two methods of compensating for missing values, backfill and interpolation, were compared. The models were evaluated by accuracy, mean absolute error, and root mean square error. The LR and ANN models outperformed the RF model due to RF's inherent limitations. Both the ANN and LR methods generated predictive models with high accuracy (>95%). It was found that personal income, GDP, population, and measures of debt consistently appeared as the most important factors. It was also shown that the technique used to compensate for missing values in the dataset and the implementation of time lag can have a significant influence on model performance and require further investigation.
The best performing models varied for each area, but the backfilled 12-month-lag LR models and the interpolated no-lag ANN models showed the most stable performance overall, with accuracies >95% for each city. This study reveals the influence of input variables in different markets. It also provides evidence to support future studies in identifying the optimal time lag and data imputation methods for establishing accurate predictive models.
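
The two missing-value strategies compared above, backfill and linear interpolation, can be sketched in plain NumPy; the short series below is illustrative:

```python
import numpy as np

def backfill(values):
    """Fill NaNs with the next observed value (backward fill)."""
    out = np.asarray(values, float).copy()
    nxt = np.nan
    for i in range(len(out) - 1, -1, -1):
        if np.isnan(out[i]):
            out[i] = nxt
        else:
            nxt = out[i]
    return out

def linear_interpolate(values):
    """Fill interior NaNs by straight-line interpolation between observations."""
    out = np.asarray(values, float).copy()
    idx = np.arange(len(out))
    known = ~np.isnan(out)
    out[~known] = np.interp(idx[~known], idx[known], out[known])
    return out

# A quarterly index placed on a monthly grid: gaps between two observations
series = [100.0, np.nan, np.nan, 106.0]
bf = backfill(series)
li = linear_interpolate(series)
```

Backfill propagates a future value backwards into the gap, while interpolation spreads the change smoothly; as the study notes, the better choice can differ by model and market.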

Keywords: linear regression, random forest, artificial neural network, real estate price prediction

Procedia PDF Downloads 71
1968 Aggregate Production Planning Framework in a Multi-Product Factory: A Case Study

Authors: Ignatio Madanhire, Charles Mbohwa

Abstract:

This study looks at the best model of the aggregate planning activity in an industrial entity and uses the trial-and-error method on spreadsheets to solve aggregate production planning problems. A linear programming model is also introduced to optimize the aggregate production planning problem. Application of the models in a furniture production firm is evaluated to demonstrate that practical and beneficial solutions can be obtained from the models. Finally, some benchmarking of other furniture manufacturing industries was undertaken to assess the relevance and level of use in other furniture firms.
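
The spreadsheet trial-and-error approach can be mimicked by enumerating candidate production levels and keeping the cheapest plan; the demands and unit costs below are hypothetical, and unmet demand is assumed to be carried forward as a backorder:

```python
import itertools

def plan_cost(production, demand, unit_cost=10.0, hold_cost=2.0, short_cost=5.0):
    """Cost of a production plan: production plus holding plus shortage costs;
    unmet demand is carried forward as a backorder."""
    inventory, cost = 0.0, 0.0
    for produced, needed in zip(production, demand):
        inventory += produced - needed
        cost += produced * unit_cost
        cost += hold_cost * max(inventory, 0.0) + short_cost * max(-inventory, 0.0)
    return cost

def trial_and_error(demand, levels):
    """Enumerate per-period production levels and keep the cheapest plan,
    mimicking the spreadsheet trial-and-error approach."""
    best_plan, best_cost = None, float("inf")
    for plan in itertools.product(levels, repeat=len(demand)):
        c = plan_cost(plan, demand)
        if c < best_cost:
            best_plan, best_cost = plan, c
    return best_plan, best_cost

demand = [100, 120, 80]
plan, cost = trial_and_error(demand, levels=[80, 100, 120])
```

A linear programming formulation replaces this exhaustive search with an exact optimization over continuous production quantities.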

Keywords: aggregate production planning, trial and error, linear programming, furniture industry

Procedia PDF Downloads 519
1967 Error Analysis of Wavelet-Based Image Steganograhy Scheme

Authors: Geeta Kasana, Kulbir Singh, Satvinder Singh

Abstract:

In this paper, a steganographic scheme for digital images using the Integer Wavelet Transform (IWT) is proposed. The cover image is decomposed into wavelet subbands using the IWT. Each subband is divided into blocks of equal size, and secret data is embedded into the largest and smallest pixel values of each block of the subband. The visual quality of the stego images is acceptable, as the PSNR between the cover image and the stego image is above 40 dB, so imperceptibility is maintained. Experimental results show a better tradeoff between capacity and visual imperceptibility compared to existing algorithms. A maximum possible error analysis is evaluated for each of the wavelet subbands of an image.
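
The PSNR figure of merit quoted above can be computed as follows; the tiny pixel blocks are illustrative:

```python
import numpy as np

def psnr(cover, stego, peak=255.0):
    """Peak signal-to-noise ratio in dB between a cover and a stego image."""
    cover = np.asarray(cover, float)
    stego = np.asarray(stego, float)
    mse = float(np.mean((cover - stego) ** 2))
    if mse == 0.0:
        return float("inf")
    return float(10.0 * np.log10(peak ** 2 / mse))

# Hypothetical 2x2 8-bit blocks differing by one gray level per pixel (MSE = 1)
cover = [[100, 101], [102, 103]]
stego = [[101, 102], [103, 104]]
value = psnr(cover, stego)
```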

Keywords: DWT, IWT, MSE, PSNR

Procedia PDF Downloads 470
1966 Feedback in the Language Class: An Action Research Process

Authors: Arash Golzari Koloor

Abstract:

Feedback seems to be an inseparable part of teaching a second/foreign language. One type of feedback is corrective feedback, which is a form of error treatment in second language classrooms. This study reports on the types of corrective feedback employed in an IELTS preparation course. The types of feedback, their frequencies, and their effectiveness are listed, enumerated, and interpreted. The results showed that explicit correction and recast were the most frequent types of feedback, while repetition and elicitation were the least frequent. The results also revealed that metalinguistic feedback, elicitation, and explicit correction were the most effective types of feedback and greatly affected learners' performance.

Keywords: classroom interaction, corrective feedback, error treatment, oral performance

Procedia PDF Downloads 302
1965 Error Estimation for the Reconstruction Algorithm with Fan Beam Geometry

Authors: Nirmal Yadav, Tanuja Srivastava

Abstract:

Shannon theory is an exact method to recover a band-limited signal from its sampled values in discrete implementation, using sinc interpolators. But sinc-based results are not satisfactory for band-limited calculations, so convolution with a window function having compact support has been introduced. The convolution backprojection algorithm with a window function is an approximation algorithm. In this paper, the error arising from the approximate nature of the reconstruction algorithm has been calculated. This result is derived for fan-beam projection data, which is faster than parallel-beam projection.

Keywords: computed tomography, convolution backprojection, radon transform, fan beam

Procedia PDF Downloads 455
1964 Bayesian Estimation under Different Loss Functions Using Gamma Prior for the Case of Exponential Distribution

Authors: Md. Rashidul Hasan, Atikur Rahman Baizid

Abstract:

The Bayesian estimation approach is a non-classical estimation technique in statistical inference and is very useful in real-world situations. The aim of this paper is to study the Bayes estimators of the parameter of the exponential distribution under different loss functions and to compare them with one another as well as with the classical maximum likelihood estimator (MLE). In real life we always try to minimize the loss, and we also want to incorporate prior information (a prior distribution) about the problem in order to solve it accurately. Here the gamma prior is used as the prior distribution of the exponential parameter when deriving the Bayes estimators. We consider several symmetric and asymmetric loss functions: the squared error loss function, the quadratic loss function, the modified linear exponential (MLINEX) loss function, and the non-linear exponential (NLINEX) loss function. Finally, the mean square error (MSE) of each estimator is obtained and presented graphically.
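Under squared error loss the Bayes estimator is the posterior mean, and gamma-exponential conjugacy makes it closed-form: with a Gamma(α, β) prior on the rate and data x₁,…,xₙ, the posterior is Gamma(α + n, β + Σxᵢ). A minimal Monte Carlo comparison with the MLE is sketched below; the prior parameters are illustrative, and the MLINEX/NLINEX estimators are omitted.

```python
import random

def mle_and_bayes_se(data, alpha=2.0, beta=1.0):
    """Rate estimates for i.i.d. Exponential(lam) data: the MLE n/sum(x)
    and the Bayes estimator under squared-error loss, i.e. the posterior
    mean (alpha + n) / (beta + sum(x)) from the conjugate gamma update."""
    n, s = len(data), sum(data)
    return n / s, (alpha + n) / (beta + s)

def mse(estimator_idx, lam=2.0, n=10, reps=2000, seed=1):
    """Monte Carlo mean squared error of one of the two estimators,
    using a fixed seed so both estimators see the same samples."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        data = [rng.expovariate(lam) for _ in range(n)]
        est = mle_and_bayes_se(data)[estimator_idx]
        total += (est - lam) ** 2
    return total / reps
```

When the prior mean α/β is close to the true rate, the shrinkage of the posterior mean reduces the MSE relative to the MLE, which is the kind of comparison the paper reports graphically.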

Keywords: Bayes estimator, maximum likelihood estimator (MLE), modified linear exponential (MLINEX) loss function, squared error (SE) loss function, non-linear exponential (NLINEX) loss function

Procedia PDF Downloads 354
1963 A Low Order Thermal Envelope Model for Heat Transfer Characteristics of Low-Rise Residential Buildings

Authors: Nadish Anand, Richard D. Gould

Abstract:

A simple low-order model is introduced for determining the thermal characteristics of a low-rise residential (LRR) building and predicting the energy used by its Heating, Ventilation & Air Conditioning (HVAC) system as the ambient (outside air) temperature changes. The LRR building is treated as a single lump for solving the heat transfer problem, and the model is derived from the lumped capacitance model of transient conduction heat transfer. Most contemporary HVAC systems have a thermostat control with an offset temperature and user-defined set points that determine when the HVAC system switches on and off. The aim is therefore to predict, as accurately as possible, the body temperature (i.e., the inside air temperature), which in turn determines the switching of the HVAC system. To validate the mathematical model derived from the lumped capacitance approach, we used the EnergyPlus simulation engine, which simulates buildings with considerable accuracy. Using the low-order model, we predicted the inside air temperature of a single house placed in three climate zones (Detroit, Raleigh, and Austin) and in different orientations for the summer and winter seasons. For the same day used to calculate the model parameters, the prediction error was below 10% in winter for almost all orientations and climate zones, whereas in summer it was below 10% for all orientations only in the higher-latitude climate zones (Raleigh and Detroit). Possible factors responsible for the large variations are also noted, paving the way for future research.
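The lumped-capacitance body with a thermostat dead band described above can be sketched in a few lines. The time constant, heating rate, and dead band below are illustrative assumptions (heating mode only), not values fitted in the paper.

```python
def simulate_inside_temp(T0, T_amb, hours, dt=0.1,
                         tau=6.0, q_hvac=3.0, set_point=21.0, offset=1.0):
    """Lumped-capacitance sketch: dT/dt = (T_amb - T)/tau + u*q_hvac,
    where u in {0, 1} is a thermostat with dead band
    [set_point - offset, set_point + offset]. tau is the building time
    constant in hours, q_hvac the heating rate in degC/hour.
    Forward-Euler integration; returns a list of (t, T, u) samples."""
    T, u, t, hist = T0, 0, 0.0, []
    while t < hours:
        if T <= set_point - offset:
            u = 1                      # heater switches on
        elif T >= set_point + offset:
            u = 0                      # heater switches off
        T += dt * ((T_amb(t) - T) / tau + u * q_hvac)
        hist.append((t, T, u))
        t += dt
    return hist
```

Predicting the inside air temperature trajectory is exactly what determines the on/off switching instants, and hence the HVAC energy use, in this kind of model.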

Keywords: building energy, energy consumption, energy+, HVAC, low order model, lumped capacitance

Procedia PDF Downloads 237
1962 A Carrier Phase High Precision Ranging Theory Based on Frequency Hopping

Authors: Jie Xu, Zengshan Tian, Ze Li

Abstract:

Previous indoor ranging or localization systems achieving high-accuracy time-of-flight (ToF) estimation have relied on two key points. One is strict time and frequency synchronization between the transmitter and receiver to eliminate equipment asynchrony errors such as carrier frequency offset (CFO), which is difficult to achieve in a practical communication system. The other is extending the total bandwidth of the communication, because the accuracy of ToF estimation is proportional to the bandwidth: the larger the total bandwidth, the higher the accuracy obtained. Ultra-wideband (UWB) technology, for example, is built on this principle, but high-precision ToF estimation is difficult in common WiFi or Bluetooth systems, whose bandwidth is lower than UWB's. It is therefore meaningful to study how to achieve high-precision ranging with lower bandwidth when the transmitter and receiver are asynchronous. To tackle these problems, we propose a two-way channel error elimination theory and a frequency hopping-based carrier phase ranging algorithm that achieve high-accuracy ranging under asynchronous conditions. The two-way channel error elimination theory exploits the symmetry of the two-way channel to cancel the asynchronous phase error caused by the unsynchronized transmitter and receiver; we also study the effect of the two-way channel generation time difference on the phase for different hardware devices. The frequency hopping-based carrier phase ranging algorithm uses frequency hopping to extend the equivalent bandwidth and incorporates a multipath-resolving carrier phase ranging algorithm, achieving, in the typical 80 MHz bandwidth of commercial WiFi, a ranging accuracy comparable to that of UWB at 400 MHz bandwidth. Finally, to verify the validity of the algorithm, we implemented the theory on a software radio platform. The experimental results show that the proposed method has a median ranging error of 5.4 cm at 5 m, 7 cm at 10 m, and 10.8 cm at 20 m with a total bandwidth of 80 MHz.
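The core frequency-hopping idea, recovering distance from the carrier-phase slope across hopped channels, can be sketched as follows. This is an idealized single-path, already-synchronized model: the channel plan and noise level are assumptions, and the paper's two-way error cancellation and multipath resolution are not modelled.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def simulate_phases(freqs, dist, phase_noise=0.0, seed=0):
    """Carrier phase of a single path at each hopped frequency:
    phase(f) = -2*pi*f*dist/C, wrapped to [0, 2*pi), plus Gaussian noise."""
    rng = np.random.default_rng(seed)
    phi = -2 * np.pi * freqs * dist / C
    return (phi + phase_noise * rng.standard_normal(len(freqs))) % (2 * np.pi)

def range_from_hopped_phases(freqs, phases):
    """Distance from the phase step between adjacent hops:
    d = -C * dphi / (2*pi*df). The hop spacing df sets the unambiguous
    range C/df, while the total spanned bandwidth sets the precision."""
    dphi = np.angle(np.exp(1j * np.diff(phases)))  # wrap steps to (-pi, pi]
    df = np.diff(freqs)
    return float(np.mean(-dphi / (2 * np.pi * df)) * C)
```

With 2 MHz hops spanning 80 MHz, the unambiguous range is C/2 MHz ≈ 150 m, while averaging the phase steps over the full 80 MHz span is what yields centimetre-level precision.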

Keywords: frequency hopping, phase error elimination, carrier phase, ranging

Procedia PDF Downloads 90
1961 Method of False Alarm Rate Control for Cyclic Redundancy Check-Aided List Decoding of Polar Codes

Authors: Dmitry Dikarev, Ajit Nimbalker, Alexei Davydov

Abstract:

Polar coding is a novel family of error-correcting codes that achieves the Shannon limit as the block length N→∞ with log-linear complexity. Active research is being carried out to adapt this theoretical concept for practical applications such as 5th-generation (5G) wireless communication systems. A cyclic redundancy check (CRC) error-detection code is widely used in conjunction with the successive cancellation list (SCL) decoding algorithm to improve finite-length polar code performance. However, there are two issues: the CRC bits increase the code block overhead, and checking many list candidates decreases the CRC's error-detection capability. This paper proposes a method to control the CRC overhead and the false alarm rate of polar decoding. As the computer simulation results show, the proposed method makes it possible to use any set of CRC polynomials with any list size while maintaining the desired false alarm rate. This level of flexibility supports the use of polar codes in the 5G New Radio standard.
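The false alarm mechanism being controlled can be quantified with a standard back-of-the-envelope model: under the usual assumption that each wrong list candidate passes a c-bit CRC independently with probability 2⁻ᶜ, checking a list of L candidates multiplies the false alarm rate roughly L-fold. This sketch is that textbook model, not the paper's specific control method.

```python
def false_alarm_rate(crc_bits, list_size):
    """Probability that at least one of `list_size` wrong candidates
    passes a `crc_bits`-bit CRC, assuming each wrong candidate passes
    independently with probability 2**-crc_bits."""
    p_pass = 2.0 ** -crc_bits
    return 1.0 - (1.0 - p_pass) ** list_size

def max_list_size(crc_bits, far_target):
    """Largest list size whose false alarm rate stays at or below the
    target: the kind of constraint a false-alarm-rate control method
    has to enforce when list size and CRC polynomial are chosen freely."""
    L = 1
    while false_alarm_rate(crc_bits, L + 1) <= far_target:
        L += 1
    return L
```

For a 16-bit CRC, for instance, an SCL list of size 8 inflates the false alarm rate to just under 8 × 2⁻¹⁶, which illustrates why larger lists either need more CRC bits or an explicit rate-control mechanism.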

Keywords: 5G New Radio, channel coding, cyclic redundancy check, list decoding, polar codes

Procedia PDF Downloads 192
1960 Getting It Right Before Implementation: Using Simulation to Optimize Recommendations and Interventions After Adverse Event Review

Authors: Melissa Langevin, Natalie Ward, Colleen Fitzgibbons, Christa Ramsey, Melanie Hogue, Anna Theresa Lobos

Abstract:

Description: Root cause analysis (RCA) is used by health care teams to examine adverse events (AEs) and identify their causes, which then leads to recommendations for prevention. Despite widespread use, RCA has limitations: best practices have not been established for implementing recommendations or for tracking the impact of interventions after AEs. During Phase 1 of this study, we used simulation to analyze two fictionalized AEs that occurred in hospitalized paediatric patients, to identify and understand how the errors occurred, and to generate recommendations to mitigate and prevent recurrences. Scenario A involved an error of commission (an inpatient drug error), and Scenario B involved detecting an error that had already occurred (a critical care drug infusion error). The recommendations generated were improved drug labeling, specialized drug kits, alert signs, and clinical checklists. Aim: To use simulation to optimize the interventions recommended by critical event analysis before implementing them in the clinical environment. Methods: The suggested interventions from Phase 1 were designed and tested through scenario simulation in the clinical environment (medicine ward or pediatric intensive care unit). Each scenario was simulated eight times. Recommendations were tested using different volunteer teams, and each scenario was debriefed to understand why the error recurred despite the interventions and how the interventions could be improved. Interventions were modified in subsequent simulations until they were felt to have an optimal effect and data saturation was achieved. Along with concrete suggestions for design and process change, qualitative data on employee communication and hospital standard work were collected and analyzed. Results: Each scenario had three interventions to test. In Scenario 1, the error was reproduced in the first two iterations and mitigated after key intervention changes. In Scenario 2, the error was identified immediately in all cases where the intervention checklist was used properly. Independently of the intervention changes and improvements, the simulations helped identify which interventions should be prioritized for implementation and highlighted that even the potential solutions most frequently suggested by participants did not always translate into error prevention in the clinical environment. Conclusion: Interventions that change a process (an epinephrine kit or a mandatory checklist) were more successful at preventing errors than passive interventions (signage, changes to memory aids). Given that even the most successful interventions needed modification and re-testing, simulation is key to optimizing suggested changes. Simulation is a safe, practice-changing modality that institutions can use before implementing the recommendations of RCA after AE reviews.

Keywords: adverse events, patient safety, pediatrics, root cause analysis, simulation

Procedia PDF Downloads 120
1959 A Real Time Ultra-Wideband Location System for Smart Healthcare

Authors: Mingyang Sun, Guozheng Yan, Dasheng Liu, Lei Yang

Abstract:

Driven by the demand for intelligent monitoring in rehabilitation centers and hospitals, a high-accuracy real-time location system based on ultra-wideband (UWB) technology is proposed. The system measures the precise location of a specific person, traces his or her movement, and visualizes the trajectory on screen for doctors or administrators. Doctors can thus view a patient's position at any time and find them immediately when an emergency occurs. In the design process, different algorithms were compared and their errors analyzed. In addition, we describe a simple but effective way of correcting the antenna delay error. By choosing the best algorithm and correcting errors with the corresponding methods, the system attains good accuracy. Experiments indicate that the ranging error of the system is below 7 cm, the locating error is below 20 cm, and the refresh rate exceeds 5 updates per second. In future work, embedding the system in wearable IoT (Internet of Things) devices could provide not only physical parameters but also the activity status of the patient, which would greatly help doctors in delivering healthcare.
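A common building block of such UWB systems, turning per-anchor range measurements into a position, is linearized least-squares trilateration. The following is a generic sketch of that step, not the authors' algorithm; the anchor layout is illustrative.

```python
import numpy as np

def locate(anchors, dists):
    """Linearized least-squares trilateration in 2-D. Each anchor i gives
    |p - a_i|^2 = d_i^2; subtracting the first anchor's equation removes
    the quadratic term |p|^2 and leaves the linear system
    2 (a_i - a_0) . p = |a_i|^2 - |a_0|^2 + d_0^2 - d_i^2,
    solved with lstsq. `anchors` is (n, 2) with n >= 3 non-collinear
    points; `dists` are the n measured ranges."""
    anchors = np.asarray(anchors, float)
    d = np.asarray(dists, float)
    A = 2 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

With the reported sub-7 cm ranging error, this kind of over-determined solve (four anchors for a 2-D position) is consistent with a locating error of a few tens of centimetres.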

Keywords: intelligent monitoring, ultra-wideband technology, real-time location, IoT devices, smart healthcare

Procedia PDF Downloads 97
1958 Pathways and Mechanisms of Lymphocytes Emigration from Newborn Thymus

Authors: Olena Grygorieva

Abstract:

Mechanisms of thymocyte emigration from the thymus to the periphery are currently under active investigation. We have proposed a hypothesis that thymocytes migrate from the thymus through lymphatic vessels during periodic short-term local edema. Using morphological and histochemical methods, we examined the numbers of lymphocytes, epithelioreticulocytes, mast cells, and blood and lymphatic vessels in the morpho-functional areas of rat thymuses at 4-hour intervals during the first week after birth. In newborns, and beginning 8 hours after birth at roughly 12-hour intervals, the specific density of the thymus, the absolute number of microcirculatory vessels (especially lymphatic ones), the lymphocyte-epithelial index, and the number of mast cells and their degranulated forms all increase. The structure of the extracellular matrix, the intrathymic microenvironment, and the adhesive properties of lymphocytes change. The absolute number of small lymphocytes in the thymic cortex varies in a wave-like manner. All these changes are most pronounced at 0-2, 12-16, and 108-120 hours of postnatal life. During these periods the paravasal lymphatic vessels are filled with lymphocytes, i.e., discrete migration of lymphocytes from the thymus occurs. After the rapid reduction of the edema, the lymphatic vessels decrease in number and become empty. Therefore, in the newborn thymus, periodic short-term local edema is observed, at the peak of which discrete migration of lymphocytes from the thymus occurs.

Keywords: lymphocytes, lymphatic vessels, mast cells, thymus

Procedia PDF Downloads 196
1957 A Hybrid Genetic Algorithm and Neural Network for Wind Profile Estimation

Authors: M. Saiful Islam, M. Mohandes, S. Rehman, S. Badran

Abstract:

The increasing need for wind power requires precise knowledge of wind resources, and methodical investigation of potential locations is required before wind power deployment. High penetration of wind energy into the grid is leading to multi-megawatt installations with huge investment costs, which makes it essential to determine appropriate places for wind farm operation. Accurate assessment requires detailed examination of the wind speed profile, relative humidity, temperature, and other geological or atmospheric parameters. Among all the uncertainty factors influencing wind power estimation, the vertical extrapolation of wind speed is perhaps the most difficult and critical. Different approaches have been used to extrapolate wind speed to hub height, mainly based on the log law, the power law, and various modifications of the two. This paper proposes a hybrid model based on an Artificial Neural Network (ANN) and a Genetic Algorithm (GA), namely GA-NN, for the vertical extrapolation of wind speed. The model is simple in the sense that it does not require any parametric estimates such as the wind shear coefficient, roughness length, or atmospheric stability, and it is also more reliable than other methods. It uses measured wind speeds at 10 m, 20 m, and 30 m heights to estimate wind speeds up to 100 m. Good agreement is found between measured and estimated wind speeds at 30 m and 40 m, with approximately 3% mean absolute percentage error. Comparisons with an ANN alone and with the power law further demonstrate the feasibility of the proposed method.
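The power-law baseline that the GA-NN model is compared against can be written down directly. The sketch below fits the shear exponent α in v(h) = v_ref (h/h_ref)^α from measurements at 10, 20, and 30 m and extrapolates to hub height; this is the generic textbook method, not the paper's model.

```python
import math

def fit_shear_alpha(heights, speeds):
    """Least-squares fit (through the origin, in log-log space) of the
    power-law exponent alpha in v(h) = v0 * (h / h0)**alpha, using the
    first measurement as the reference (h0, v0)."""
    h0, v0 = heights[0], speeds[0]
    xs = [math.log(h / h0) for h in heights[1:]]
    ys = [math.log(v / v0) for v in speeds[1:]]
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def extrapolate(v_ref, h_ref, h_target, alpha):
    """Power-law vertical extrapolation of wind speed."""
    return v_ref * (h_target / h_ref) ** alpha
```

The GA-NN model's advantage, as the abstract notes, is precisely that it dispenses with this site-specific exponent (and with roughness length or stability corrections) by learning the mapping from the multi-height measurements directly.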

Keywords: wind profile, vertical extrapolation of wind, genetic algorithm, artificial neural network, hybrid machine learning

Procedia PDF Downloads 463