Search results for: first order error analysis

36650 Phonological Characteristics of Severe to Profound Hearing Impaired Children

Authors: Akbar Darouie, Mamak Joulaie

Abstract:

Given the importance of phonological skill development and its influence on other aspects of language, this study was performed to determine some phonological indexes in children with hearing impairment and to compare them with hearing children. A convenience sample was selected from a rehabilitation center and a kindergarten in Karaj, Iran. Participants consisted of 12 hearing-impaired and 12 hearing children (age range: 5 years and 6 months to 6 years and 6 months). The hearing-impaired children suffered from severe to profound hearing loss; three of them had cochlear implants and the others wore hearing aids. Conversational speech was recorded and the first 50 utterances of each child were selected for analysis. The percentage of consonants correct (PCC) and vowels correct (PVC), initial and final consonant omission errors, consonant cluster omission errors and syllabic structure variety were compared between the two groups. Data were analyzed with t-tests (SPSS version 16). Comparison of the PCC and PVC averages in the two groups showed a significant difference (p < 0.01). There were also significant differences in final consonant omission errors (p < 0.001) and initial consonant omission errors (p < 0.01), and the differences between the two groups in consonant cluster omission were significant (p < 0.001). Changes were also seen in the syllabic structures of children with hearing impairment compared with the typical group. This study demonstrates some phonological differences in the Farsi language between the two groups of children, which should therefore be taken into account in clinical practice.
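
As a rough illustration of the reported group comparison, the sketch below runs an independent-samples t-test in Python with scipy; the score arrays are illustrative placeholders, not the study's data.

    # Independent-samples t-test on PCC scores (illustrative data).
    from scipy import stats

    pcc_hearing  = [92.1, 88.4, 95.0, 90.3, 93.7, 89.9, 91.2, 94.5, 87.8, 90.0, 92.6, 88.1]
    pcc_impaired = [61.2, 55.4, 70.1, 58.9, 64.3, 52.7, 66.0, 59.5, 62.8, 57.3, 60.4, 63.1]

    t, p = stats.ttest_ind(pcc_hearing, pcc_impaired)
    print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.01 would match the reported result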

Keywords: hearing impairment, phonology, vowel, consonant

Procedia PDF Downloads 223
36649 Time Compression in Engineer-to-Order Industry: A Case Study of the Norwegian Shipbuilding Industry

Authors: Tarek Fatouh, Chehab Elbelehy, Alaa Abdelsalam, Eman Elakkad, Alaa Abdelshafie

Abstract:

This paper explores the possibility of time compression in Engineer-to-Order production networks. A case study research method is applied to a Norwegian shipbuilding project, implementing the value stream mapping lean tool with total cycle time as the unit of analysis. The analysis demonstrated the time deviations of the planned tasks in one of the processes in the shipbuilding project, and the authors then developed a future-state map by removing time wastes from the value stream process.

Keywords: engineer to order, total cycle time, value stream mapping, shipbuilding

Procedia PDF Downloads 139
36648 Impact of Hard Limited Clipping Crest Factor Reduction Technique on Bit Error Rate in OFDM Based Systems

Authors: Theodore Grosch, Felipe Koji Godinho Hoshino

Abstract:

In wireless communications, 3GPP LTE is one of the solutions to meet the growing demand for higher transmission data rates. One issue inherent to this technology is the PAPR (Peak-to-Average Power Ratio) of OFDM (Orthogonal Frequency Division Multiplexing) modulation. This high PAPR affects the efficiency of power amplifiers. One approach to mitigate this effect is the Crest Factor Reduction (CFR) technique. In this work, we simulate the impact of the Hard-Limited Clipping Crest Factor Reduction technique on the BER (Bit Error Rate) of OFDM-based systems. In general, the results showed that CFR has a greater effect on higher-order digital modulation schemes, as expected. More importantly, we show the worst-case degradation due to CFR for QPSK, 16-QAM, and 64-QAM signals in a linear system. For example, hard clipping at 9 dB results in a 2 dB increase in the signal-to-noise ratio required for a 1% BER with 64-QAM modulation.
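
A minimal sketch of the hard-limited clipping operation on one OFDM symbol, assuming QPSK subcarriers and a clipping threshold set relative to the RMS level (both illustrative choices, not the paper's simulation setup):

    import numpy as np

    rng = np.random.default_rng(0)
    n_sc = 64
    bits = rng.integers(0, 2, (n_sc, 2))
    symbols = ((2*bits[:, 0] - 1) + 1j*(2*bits[:, 1] - 1)) / np.sqrt(2)  # QPSK
    x = np.fft.ifft(symbols) * np.sqrt(n_sc)                             # OFDM time signal

    def papr_db(sig):
        return 10*np.log10(np.max(np.abs(sig)**2) / np.mean(np.abs(sig)**2))

    clip_db = 6.0                                    # threshold above the RMS level (assumed)
    a_max = np.sqrt(np.mean(np.abs(x)**2)) * 10**(clip_db/20)
    # Hard limit the magnitude, keep the phase
    clipped = np.where(np.abs(x) > a_max, a_max*np.exp(1j*np.angle(x)), x)

    print(f"PAPR before: {papr_db(x):.1f} dB, after: {papr_db(clipped):.1f} dB")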

Keywords: bit error rate, crest factor reduction, OFDM, physical layer simulation

Procedia PDF Downloads 347
36647 Performance Analysis of M-Ary Pulse Position Modulation in Multihop Multiple Input Multiple Output-Free Space Optical System over Uncorrelated Gamma-Gamma Atmospheric Turbulence Channels

Authors: Hechmi Saidi, Noureddine Hamdi

Abstract:

The performance of a decode-and-forward (DF) multihop free space optical (FSO) scheme deploying a multiple input multiple output (MIMO) configuration under the Gamma-Gamma (GG) statistical distribution, adopting M-ary pulse position modulation (MPPM) coding, is investigated. Exact and approximate values of the symbol error rate (SER) are derived. A closed-form formula for the probability density function (PDF) of the designed system is expressed. Thanks to the DF multihop MIMO FSO configuration and MPPM signaling, atmospheric turbulence is combatted and the quality of the transmitted signal is improved.
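
A minimal Monte Carlo sketch of MPPM detection over a single Gamma-Gamma fading link (IM/DD with additive Gaussian noise); the turbulence parameters and SNR are illustrative assumptions, not the paper's multihop MIMO configuration:

    import numpy as np

    rng = np.random.default_rng(1)
    M, n_sym, snr_db = 4, 200_000, 12
    alpha, beta = 4.2, 1.4                       # turbulence parameters (assumed)

    # Unit-mean Gamma-Gamma irradiance: product of two unit-mean Gamma variates
    I = rng.gamma(alpha, 1/alpha, n_sym) * rng.gamma(beta, 1/beta, n_sym)

    tx = rng.integers(0, M, n_sym)               # transmitted slot index
    sigma = 10**(-snr_db/20)
    slots = sigma * rng.standard_normal((n_sym, M))
    slots[np.arange(n_sym), tx] += I             # received pulse, scaled by the fading
    ser = np.mean(np.argmax(slots, axis=1) != tx)  # ML detection: pick the strongest slot
    print(f"SER ~= {ser:.4f}")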

Keywords: free space optical, multiple input multiple output, M-ary pulse position modulation, multihop, decode and forward, symbol error rate, gamma-gamma channel

Procedia PDF Downloads 185
36646 Bit Error Rate Performance of MIMO Systems for Wireless Communications

Authors: E. Ghayoula, M. Haj Taieb, A. Bouallegue, J. Y. Chouinard, R. Ghayoula

Abstract:

This paper evaluates the bit error rate (BER) performance of MIMO systems for wireless communication. MIMO uses multiple transmit antennas, multiple receive antennas and space-time block codes to provide diversity: the transmitter sends signals encoded by a space-time block code (STBC) encoder through different transmit antennas, and these signals arrive at the receiver at slightly different times. Spatially separated receive antennas provide diversity reception to combat the effect of fading in the channel. This paper presents a detailed study of diversity coding for MIMO systems. STBC techniques are implemented, and simulation results in terms of BER performance for varying numbers of transmit and receive antennas are presented. Our results show how increasing the number of both transmit and receive antennas improves system performance and reduces the bit error rate.
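
A minimal sketch of the Alamouti two-transmit-antenna STBC with BPSK over flat Rayleigh fading, illustrating the diversity combining described above (parameters are illustrative):

    import numpy as np

    rng = np.random.default_rng(2)
    n_pairs, snr_db = 100_000, 10
    sigma = 10**(-snr_db/20)

    s = 1 - 2*rng.integers(0, 2, (n_pairs, 2))            # BPSK symbol pairs (s1, s2)
    h = (rng.standard_normal((n_pairs, 2)) + 1j*rng.standard_normal((n_pairs, 2))) / np.sqrt(2)
    n1 = sigma*(rng.standard_normal(n_pairs) + 1j*rng.standard_normal(n_pairs))/np.sqrt(2)
    n2 = sigma*(rng.standard_normal(n_pairs) + 1j*rng.standard_normal(n_pairs))/np.sqrt(2)

    # Two time slots: (s1, s2) then (-s2*, s1*), each antenna at half power
    r1 = (h[:, 0]*s[:, 0] + h[:, 1]*s[:, 1])/np.sqrt(2) + n1
    r2 = (-h[:, 0]*np.conj(s[:, 1]) + h[:, 1]*np.conj(s[:, 0]))/np.sqrt(2) + n2

    # Linear combining recovers the two symbols independently
    s1_hat = np.conj(h[:, 0])*r1 + h[:, 1]*np.conj(r2)
    s2_hat = np.conj(h[:, 1])*r1 - h[:, 0]*np.conj(r2)
    bits_hat = np.sign(np.real(np.stack([s1_hat, s2_hat], axis=1)))
    print(f"BER ~= {np.mean(bits_hat != s):.5f}")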

Keywords: MIMO systems, diversity, BER, MRRC, SIMO, MISO, STBC, Alamouti, SNR

Procedia PDF Downloads 479
36645 Performance Analysis of New Types of Reference Targets Based on Spaceborne and Airborne SAR Data

Authors: Y. S. Zhou, C. R. Li, L. L. Tang, C. X. Gao, D. J. Wang, Y. Y. Guo

Abstract:

Triangular trihedral corner reflectors (CRs) have been widely used as point targets for synthetic aperture radar (SAR) calibration and image quality assessment. The additional "tip" of the triangular plate does not contribute to the reflector's theoretical RCS, and if it interacts with a perfectly reflecting ground plane, it yields an increase of RCS at the radar boresight and decreases the accuracy of SAR calibration and image quality assessment. To address this problem, two types of CRs were manufactured. One was the hexagonal trihedral CR, a self-illuminating CR with relatively small plate edge length, since large edge lengths usually introduce unexpected edge diffraction error. The other was the triangular trihedral CR with an extended bottom plate, which incorporates the effect of the "tip" into the total RCS. In order to assess the performance of the two new types of CRs, a flight campaign over the National Calibration and Validation Site for High Resolution Remote Sensors was carried out. Six hexagonal trihedral CRs and two bottom-extended trihedral CRs, as well as several traditional triangular trihedral CRs, were deployed. A KOMPSAT-5 X-band SAR image was acquired for the performance analysis of the hexagonal trihedral CRs, and C-band airborne SAR images were acquired for the bottom-extended trihedral CRs. The analysis showed that the impulse response functions of both the hexagonal trihedral CRs and the bottom-extended trihedral CRs were much closer to the ideal sinc function than those of the traditional triangular trihedral CRs. The flight campaign thus validated the advantages of the new types of CRs, which may be useful in future SAR calibration missions.

Keywords: synthetic aperture radar, calibration, corner reflector, KOMPSAT-5

Procedia PDF Downloads 257
36644 Quadrature Mirror Filter Bank Design Using Population Based Stochastic Optimization

Authors: Ju-Hong Lee, Ding-Chen Chung

Abstract:

The paper deals with the optimal design of two-channel linear-phase (LP) quadrature mirror filter (QMF) banks using a metaheuristic optimization technique. Based on the theory of two-channel QMF banks using two recursive digital all-pass filters (DAFs), the design problem is formulated so as to yield an objective function which is a weighted sum of the group delay error of the designed QMF bank and the magnitude response error of the designed low-pass analysis filter. Through frequency sampling and a weighted least-squares approach, the optimization problem can be solved with a particle swarm optimization algorithm. The resulting two-channel QMF banks possess an approximately LP response without magnitude distortion. Simulation results are presented for illustration and comparison.
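
A minimal particle swarm optimization sketch for a weighted-sum objective of the kind described; the objective function here is a stand-in placeholder, not the actual QMF-bank error function:

    import numpy as np

    rng = np.random.default_rng(3)

    def objective(x):
        # Placeholder: weighted sum of two error terms (not the QMF objective)
        return 0.7*np.sum((x - 1.0)**2) + 0.3*np.sum(np.sin(3*x)**2)

    n_particles, dim, iters = 30, 8, 200
    w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration constants

    x = rng.uniform(-2, 2, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)   # velocity update
        x = x + v
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()

    print(f"best objective = {pbest_f.min():.6f}")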

Keywords: quadrature mirror filter bank, digital all-pass filter, weighted least squares algorithm, particle swarm optimization

Procedia PDF Downloads 495
36643 Evaluation of the Surface Water Quality Using the Water Quality Index and Discriminant Analysis Method

Authors: Lazhar Belkhiri, Ammar Tiri, Lotfi Mouni

Abstract:

The protection and management of surface water quality present a very important problem worldwide, given the complexity of water quality data sets. In this study, the water quality index (WQI) and irrigation water quality index (IWQI) were calculated in order to evaluate the surface water quality for drinking and irrigation purposes based on nine hydrochemical parameters, and discriminant analysis (DA) was applied in order to separate the variables most responsible for the spatial differentiation. The results show that, based on the WQI values, the surface water is of poor to very poor quality for drinking, whereas the IWQI values indicate that the water is acceptable for irrigation with a restriction for sensitive plants. The discriminant analysis showed that pH, potassium, chloride, sulfate, and bicarbonate are the parameters that significantly discriminate between the different stations with respect to the spatial variation of surface water quality. The results obtained in this study therefore provide very useful information to decision-makers.
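
For orientation, a minimal sketch of a weighted-arithmetic WQI of the kind typically used in such studies; the parameters, weights and standards below are illustrative assumptions, not the study's values:

    # Each parameter gets a quality rating q_i = 100 * C_i / S_i relative to
    # its standard S_i; the index is the weight-normalized sum of the ratings.
    measured = {"pH": 7.8, "Cl": 180.0, "SO4": 210.0, "HCO3": 310.0, "K": 9.0}   # mg/L (pH unitless)
    standard = {"pH": 8.5, "Cl": 250.0, "SO4": 250.0, "HCO3": 500.0, "K": 12.0}  # permissible limits
    weight   = {"pH": 4, "Cl": 3, "SO4": 4, "HCO3": 2, "K": 2}                   # relative weights

    num = sum(weight[p] * 100.0 * measured[p] / standard[p] for p in measured)
    wqi = num / sum(weight.values())
    print(f"WQI = {wqi:.1f}")   # classification bands (excellent/good/poor/...) vary by scheme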

Keywords: surface water quality, drinking and irrigation purposes, water quality index, discriminant analysis

Procedia PDF Downloads 64
36642 Analysis and Prediction of the Behavior of the Landslide at Ain El Hammam, Algeria Based on the Second Order Work Criterion

Authors: Zerarka Hizia, Akchiche Mustapha, Prunier Florent

Abstract:

The landslide of Ain El Hammam (AEH) is characterized by complex geology and a high hydrogeological hazard. AEH's perpetual reactivation compels us to look closely at its triggers and to better understand the mechanisms of its evolution in mass and in depth. This study builds a numerical model to simulate the influencing factors, such as precipitation, non-saturation, and pore pressure fluctuations, using the Plaxis software. For a finer analysis of instabilities, we use Hill's criterion, based on the sign of the second-order work, which is the most appropriate material stability criterion for non-associated elastoplastic materials. The results of this type of calculation allow us, in theory, to predict the shape and position of the slip surface(s) along which ground movement of the slope is liable to occur, before the rupture given by the Mohr-Coulomb plastic limit is reached. To validate the numerical model, an analysis of inclinometer measurements is performed to confirm the direction of movement and the kinematics of the sliding mechanism of AEH's slope.
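
A minimal numerical sketch of Hill's criterion at a material point: instability is signalled when the second-order work d2W = dsigma : depsilon becomes non-positive. The stress and strain increments are illustrative numbers:

    import numpy as np

    # Voigt-notation increments (2D) from two successive load steps (assumed values)
    d_sigma = np.array([-3.0, 1.2, 0.4])             # kPa: d(sig_xx), d(sig_yy), d(tau_xy)
    d_eps   = np.array([1.5e-4, -2.0e-4, 8.0e-5])    # strain increments

    d2W = float(d_sigma @ d_eps)                     # second-order work increment
    print(f"d2W = {d2W:.3e} ->", "potentially unstable" if d2W <= 0 else "stable")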

Keywords: landslide, second order work, precipitation, inclinometers

Procedia PDF Downloads 155
36641 Cancellation of Transducer Effects from Frequency Response Functions: Experimental Case Study on the Steel Plate

Authors: P. Zamani, A. Taleshi Anbouhi, M. R. Ashory, S. Mohajerzadeh, M. M. Khatibi

Abstract:

Modal analysis is a developing science in the experimental evaluation of the dynamic properties of structures. Mechanical devices such as accelerometers are one source of degraded quality in measured modal testing parameters. This paper studies the elimination of the accelerometer's mass effect from the measured frequency response of the structure, using a sensitivity-analysis-based strategy: the amount of mass change, and the location at which to measure the structure's response with the least error in the frequency correction, are chosen. Experimental modal testing is carried out on a steel plate and the effect of the accelerometer's mass is removed using this strategy. Finally, good agreement is achieved between the numerical and experimental results.

Keywords: accelerometer mass, frequency response function, modal analysis, sensitivity analysis

Procedia PDF Downloads 425
36640 Bit Error Rate Analysis of Multiband OFCDM UWB System in UWB Fading Channel

Authors: Sanjay M. Gulhane, Athar Ravish Khan, Umesh W. Kaware

Abstract:

Orthogonal frequency and code division multiplexing (OFCDM) has received considerable attention as a modulation scheme for realizing high data rate transmission. Multiband (MB) orthogonal frequency division multiplexing (OFDM) ultra-wideband (UWB) systems have become a promising technique for high data rates due to their many advantages over single-band UWB systems, but they suffer from a coherent frequency diversity problem. In this paper we propose an MB-OFCDM UWB system in which two-dimensional (2D) spreading (time- and frequency-domain spreading) is introduced; by combining OFDM with 2D spreading, the proposed system can provide frequency diversity. This paper presents the basic structure and main functions of the MB-OFCDM system and evaluates the bit error rate (BER) performance of the MB-OFDM and MB-OFCDM systems under a UWB indoor multipath channel model. It is observed that the BER curve of the MB-OFCDM UWB system improves on the MB-OFDM UWB system by 2 dB.
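
A minimal sketch of the 2D (time x frequency) spreading idea: one data symbol is replicated over an Nt x Nf block of OFDM resource elements using a time code and a frequency code, then recovered by correlating with both codes. Codes and sizes are illustrative assumptions:

    import numpy as np

    c_time = np.array([1, -1, 1, -1])        # length-Nt spreading code (time domain)
    c_freq = np.array([1, 1, -1, -1])        # length-Nf spreading code (frequency domain)
    symbol = 1 - 2*np.random.default_rng(4).integers(0, 2)   # BPSK data symbol

    block = symbol * np.outer(c_time, c_freq)   # Nt x Nf chips carried on the OFDM grid

    # Despreading: correlate with both codes and normalize by the spreading gain
    recovered = (c_time @ block @ c_freq) / (len(c_time) * len(c_freq))
    print(symbol, int(recovered))               # identical in the noiseless case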

Keywords: MB-OFDM UWB system, MB-OFCDM UWB system, UWB IEEE channel model, BER

Procedia PDF Downloads 525
36639 Variation in Complement Order in English: Implications for Interlanguage Syntax

Authors: Juliet Udoudom

Abstract:

Complement-ordering principles for natural language phrases (XPs) stipulate that head terms be placed consistently phrase-initially or phrase-finally, yielding two basic theoretical orders: Head-Complement order and Complement-Head order. This paper examines the principles which determine complement ordering in English V-bar and N-bar structures. The aim is to determine the extent to which complement linearizations in the two phrase types are consistent with the two theoretical orders outlined above, given the flexible and varied nature of natural language structures. The objective is to see whether there are variations in the complement linearizations of the XPs studied and what implications such variations hold for the interlanguage syntax of English and Ibibio. A corpus-based approach was employed to obtain the English data, and V-bar and N-bar structures containing complements were isolated for analysis. Data were examined from the perspective of the X-bar and Government theories of Chomsky's (1981) Government-Binding framework. Findings from the analysis show that in English V-bar structures, heads are consistently placed phrase-initially, yielding a Head-Complement order; however, complement linearization in the N-bar structures studied exhibited parametric variation. Thus, in some English N-bar structures the nominal head is ordered to the left, whereas in others the head term occurs to the right. It may therefore be concluded that the principles which determine complement ordering are both language-particular and phrase-specific, following insights provided within phrasal syntax.

Keywords: complement order, complement–head order, head–complement order, language–particular principles

Procedia PDF Downloads 329
36638 An Enhanced AODV Routing Protocol for Wireless Sensor and Actuator Networks

Authors: Apidet Booranawong, Wiklom Teerapabkajorndet

Abstract:

An enhanced ad-hoc on-demand distance vector routing (E-AODV) protocol for control system applications in wireless sensor and actuator networks (WSANs) is proposed. Our routing algorithm is designed by considering both wireless network communication and control system aspects: control system error and network delay are the main selection criteria in our routing protocol. The control and communication performance is evaluated on multi-hop IEEE 802.15.4 networks for building-temperature control systems, with the Gilbert-Elliott error model employed to simulate packet loss in the wireless networks. The simulation results demonstrate that the E-AODV routing approach can improve the communication performance significantly over the original AODV routing under various packet loss rates. The control performance of our approach, however, is not much improved compared with the AODV routing solution.
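
A minimal sketch of the route-selection idea: candidate routes scored by a weighted combination of network delay and control-system error. The weights and route metrics are illustrative assumptions, not the E-AODV specification:

    routes = {
        "A": {"delay_ms": 42.0, "ctrl_error": 0.8},
        "B": {"delay_ms": 18.0, "ctrl_error": 1.5},
        "C": {"delay_ms": 25.0, "ctrl_error": 0.9},
    }
    w_delay, w_err = 0.02, 1.0    # assumed weighting of the two criteria

    def cost(m):
        # Combined route cost: lower is better
        return w_delay*m["delay_ms"] + w_err*m["ctrl_error"]

    best = min(routes, key=lambda r: cost(routes[r]))
    print("selected route:", best)   # route C under these weights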

Keywords: WSANs, building temperature control, AODV routing protocol, control system error, settling time, delay, delivery ratio

Procedia PDF Downloads 317
36637 Aerodynamic Design of a UAV and Stability Analysis Using Genetic Algorithm Optimization

Authors: Saul A. Torres Z., Eduardo Liceaga C., Alfredo Arias M.

Abstract:

We seek to develop a UAV for agricultural spraying at a maximum altitude of 5000 meters above sea level, with a payload of 100 liters of fumigant. The aerodynamic design of the aircraft is developed using computational tools such as the Athena Vortex Lattice software, MATLAB, ANSYS FLUENT and the XFoil package, among others. Structured programming methods and an exhaustive analysis of optimization and search methods are also used. The results have a very low margin of error, and the multi-objective problems solved can be helpful for future developments. We also developed a method for stability analysis (lateral-directional and longitudinal).

Keywords: aerodynamic design, optimization, genetic algorithm, multi-objective problem, longitudinal stability, lateral-directional stability

Procedia PDF Downloads 571
36636 Test-Retest Agreement, Random Measurement Error and Practice Effect of the Continuous Performance Test-Identical Pairs for Patients with Schizophrenia

Authors: Kuan-Wei Chen, Chien-Wei Chen, Tai-Ling Chang, Nan-Cheng Chen, Ching-Lin Hsieh, Gong-Hong Lin

Abstract:

Background and Purpose: Deficits in sustained attention are common in patients with schizophrenia. Such impairment can limit patients' ability to execute daily activities effectively and can affect the efficacy of rehabilitation. The aims of this study were to examine the test-retest agreement, random measurement error, and practice effect of the Continuous Performance Test-Identical Pairs (CPT-IP), a commonly used sustained attention test, in patients with schizophrenia. The results can provide empirical evidence for clinicians and researchers applying a sustained attention test with sound psychometric properties in schizophrenia patients. Methods: We recruited patients with chronic schizophrenia, who were assessed twice at a 1-week interval with the CPT-IP. The intra-class correlation coefficient (ICC) was used to examine test-retest agreement, the percentage of minimal detectable change (MDC%) to examine random measurement error, and the standardized response mean (SRM) to examine the practice effect. Results: A total of 56 patients participated in this study. The ICC was 0.82, the MDC% was 47.4%, and the SRM was 0.36 for the CPT-IP. Conclusion: Our results indicate that the CPT-IP has acceptable test-retest agreement, substantial random measurement error, and a small practice effect in patients with schizophrenia. Therefore, to avoid overestimating patients' changes in sustained attention, we suggest that clinicians interpret CPT-IP change scores conservatively in their routine repeated assessments.
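
A minimal sketch of the reported reliability indices computed from paired test-retest scores (illustrative placeholder data; a Pearson correlation stands in for the ICC, and MDC95 = 1.96 * sqrt(2) * SEM with SEM = SD * sqrt(1 - ICC)):

    import numpy as np

    t1 = np.array([2.1, 1.8, 2.6, 3.0, 2.4, 1.9, 2.8, 2.2])   # session 1 (assumed)
    t2 = np.array([2.3, 1.7, 2.9, 3.1, 2.6, 2.1, 2.7, 2.5])   # session 2 (assumed)

    icc = np.corrcoef(t1, t2)[0, 1]            # crude stand-in for ICC(2,1)
    sd_pooled = np.sqrt((t1.var(ddof=1) + t2.var(ddof=1)) / 2)
    sem = sd_pooled * np.sqrt(1 - icc)         # standard error of measurement
    mdc95 = 1.96 * np.sqrt(2) * sem            # minimal detectable change
    mdc_pct = 100 * mdc95 / np.mean(np.concatenate([t1, t2]))
    change = t2 - t1
    srm = change.mean() / change.std(ddof=1)   # standardized response mean

    print(f"ICC={icc:.2f}  MDC%={mdc_pct:.1f}%  SRM={srm:.2f}")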

Keywords: schizophrenia, sustained attention, CPT-IP, reliability

Procedia PDF Downloads 284
36635 Geometric Simplification Method of Building Energy Model Based on Building Performance Simulation

Authors: Yan Lyu, Yiqun Pan, Zhizhong Huang

Abstract:

In the design stage of a new building, an energy model of the building is often required to analyze its energy efficiency. In practice, a certain degree of geometric simplification has to be done when establishing building energy models, since the detailed geometric features of a real building are hard to describe perfectly in most energy simulation engines, such as ESP-r, eQUEST or EnergyPlus. A detailed description is in fact unnecessary when results of extremely high accuracy are not demanded. This paper therefore analyzes the relationship between the error of the simulation results from building energy models and the geometric simplification of the models. Two parameters are selected as indices to characterize the geometric features in building energy simulation: the southward projected area and the total side surface area of the building. Based on this parameterization method, a building with an arbitrary prismatic shape can be simplified to a typical shape (a cuboid) for energy modeling. The results of this study indicate that this simplification leads to an error of less than 7% for buildings with a ratio of southward projection length to total bottom perimeter of 0.25-0.35, which covers most situations.

Keywords: building energy model, simulation, geometric simplification, design, regression

Procedia PDF Downloads 163
36634 In-Flight Aircraft Performance Model Enhancement Using Adaptive Lookup Tables

Authors: Georges Ghazi, Magali Gelhaye, Ruxandra Botez

Abstract:

Over the years, the Flight Management System (FMS) has experienced continuous improvement of its many features, to the point of becoming the pilot's primary interface for flight planning operations on the airplane. With the assistance of the FMS, the concepts of distance and time have been completely revolutionized, providing the crew with the determination of the optimized route (or flight plan) from the departure airport to the arrival airport. To accomplish this function, the FMS needs an accurate Aircraft Performance Model (APM) of the aircraft. In general, the APMs that equip most modern FMSs are established before the entry into service of an individual aircraft and result from the combination of a set of ordinary differential equations and a set of performance databases. Unfortunately, an aircraft in service is constantly exposed to dynamic loads that degrade its flight characteristics. These degradations have two main origins: airframe deterioration (control surface rigging, seals missing or damaged, etc.) and engine performance degradation (fuel consumption increase for a given thrust). Thus, after several years of service, the performance databases and the APM associated with a specific aircraft are no longer representative enough of the actual aircraft performance. It is important to monitor the trend of the performance deterioration and correct the uncertainties of the aircraft model in order to improve the accuracy of the flight management system's predictions. The basis of this research lies in the new ability to continuously update an APM during flight using an adaptive lookup table technique. This methodology was developed and applied to the well-known Cessna Citation X business aircraft. For the purpose of this study, a level D Research Aircraft Flight Simulator (RAFS) was used as the test aircraft; according to the Federal Aviation Administration, level D is the highest certification level for flight dynamics modeling. Basically, using data available in the Flight Crew Operating Manual (FCOM), a first APM describing the variation of the engine fan speed and aircraft fuel flow with respect to flight conditions was derived. This model was then improved using the proposed methodology: several cruise flights were performed using the RAFS, and an algorithm was developed to frequently sample the aircraft sensor measurements during flight and compare the model predictions with the actual measurements. Based on these comparisons, a correction was applied to the APM in order to minimize the error between the predicted data and the measured data. In this way, as the aircraft flies, the APM is continuously enhanced, making the FMS more and more precise and the prediction of trajectories more realistic and more reliable. The results obtained are very encouraging. Indeed, using the tables initialized with the FCOM data, only a few iterations were needed to reduce the fuel flow prediction error from an average relative error of 12% to 0.3%. Similarly, the maximum error deviation of the FCOM prediction of the engine fan speed was reduced from 5.0% to 0.2% after only ten flights.
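
A minimal sketch of the adaptive lookup-table idea: a fuel-flow table indexed by flight condition is nudged toward in-flight measurements at each sample. The grid, values and learning rate are illustrative assumptions, not the Citation X model:

    import numpy as np

    alts = np.array([30_000.0, 35_000.0, 40_000.0])     # ft
    machs = np.array([0.70, 0.75, 0.80])
    fuel_flow = np.array([[1450., 1380., 1330.],        # lb/h per (alt, mach) cell (assumed)
                          [1300., 1240., 1200.],
                          [1180., 1130., 1100.]])

    def update(alt, mach, measured, lr=0.2):
        """Blend the nearest table cell toward the measured value."""
        i = np.abs(alts - alt).argmin()
        j = np.abs(machs - mach).argmin()
        err = measured - fuel_flow[i, j]
        fuel_flow[i, j] += lr * err                     # exponential-smoothing correction
        return err

    # Simulated flight samples: the actual aircraft burns ~5% more than the table
    rng = np.random.default_rng(5)
    for _ in range(200):
        measured = 1240.0 * 1.05 + rng.normal(0, 5)
        update(35_000.0, 0.75, measured)

    print(f"corrected cell: {fuel_flow[1, 1]:.0f} lb/h (true ~ {1240*1.05:.0f})")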

Keywords: aircraft performance, cruise, trajectory optimization, adaptive lookup tables, Cessna Citation X

Procedia PDF Downloads 247
36633 The Analysis of Gizmos Online Program as Mathematics Diagnostic Program: A Story from an Indonesian Private School

Authors: Shofiayuningtyas Luftiani

Abstract:

Some private schools in Indonesia have started integrating the online program Gizmos into the teaching-learning process. Gizmos was developed to supplement the existing curriculum by being integrated into instructional programs. The program features inquiry-based simulations in which students conduct explorations using a worksheet while teachers use the teacher guidelines to direct and assess students' performance. In this study, the discussion of Gizmos highlights its features as an assessment medium for mathematics learning for secondary school students, based on a case study and a literature review from the Indonesian context. Applying Gizmos as an assessment medium serves diagnostic assessment: as part of the diagnostic assessment, teachers review the student exploration sheets, analyze the students' difficulties in particular, and consider the findings in planning the future learning process. This assessment is important because the teacher needs data about students' persistent weaknesses. Additionally, the program's interactive simulation helps build students' understanding. Currently, the assessment over-emphasizes students' answers in the worksheet against the provided answer keys, even though students also perform skills in translating the question, doing the simulation and answering the question; the assessment should instead involve multiple perspectives and sources of student performance, since teachers should adjust instructional programs to the complexity of students' learning needs and styles. Consequently, an approach that improves the assessment components is selected to challenge the current assessment, with the purpose of involving not only cognitive diagnosis but also the analysis of skills and errors. For this diagnostic assessment, which combines cognitive diagnosis, skills analysis and error analysis, teachers should create an assessment rubric. The rubric plays an important role as a guide providing a set of criteria for the assessment; without a precise rubric, the teacher risks ineffectively documenting and following up the data about students at risk of failure. Furthermore, teachers who employ Gizmos for diagnostic assessment may encounter some obstacles; in the selected setting, these include time constraints, reluctance to take on a higher teaching burden, and student behavior. Consequently, the teacher who chooses Gizmos with these approaches has to plan, implement and evaluate the assessment. The main point of this assessment is not the result of the students' worksheet; rather, the diagnostic assessment is a two-stage process that prompts and effectively follows up both individual weaknesses and those of the learning process. Ultimately, the discussion of Gizmos as a medium of diagnostic assessment reflects the effort to improve the mathematical learning process.

Keywords: diagnostic assessment, error analysis, Gizmos online program, skills analysis

Procedia PDF Downloads 160
36632 The Design of Multiple Detection Parallel Combined Spread Spectrum Communication System

Authors: Lixin Tian, Wei Xue

Abstract:

Much work in society takes place underground, in settings such as mines, tunnel construction and subways, which are vital to the development of society. Once accidents occur in these places, the interruption of traditional wired communication is not conducive to rescue work. In order to realize the positioning, early warning and command functions for underground personnel and improve rescue efficiency, it is necessary to develop and design an emergency ground communication system. Conventional underground communication is easily subjected to narrowband interference; spread spectrum communication can be used to address this problem. However, general spread spectrum methods such as direct-sequence spreading are inefficient, so parallel combined spread spectrum (PCSS) communication is proposed to improve efficiency. PCSS communication not only has the anti-interference ability and good concealment of traditional spread spectrum systems, but also a relatively high frequency band utilization rate and a strong information transmission capability, so the technology has been widely used in practice. This paper presents a PCSS communication model: the multiple detection parallel combined spread spectrum (MDPCSS) communication system. The principle of the MDPCSS communication system is described: the sequence at the transmitting end is processed in blocks and cyclically shifted to facilitate multiple detection at the receiving end. Block diagrams of the transmitter and receiver of the MDPCSS communication system are introduced. The calculation formula for the system bit error rate (BER) is given, and the simulation and analysis of the system BER are completed. Comparison with common parallel PCSS communication shows that it is indeed possible to reduce the BER and improve system performance. Furthermore, the influence of the selected pseudo-code length on the system BER is simulated and analyzed, with the conclusion that the longer the pseudo-code, the lower the system error rate.
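
A minimal sketch of cyclic-shift keying on a PN code, the core idea behind the block processing and multiple detection described above; the code length and noise level are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(6)
    pn = 1 - 2*rng.integers(0, 2, 32)        # +/-1 pseudo-noise code of length 32

    def encode(block_value):
        # Each data block value (0..31) selects a cyclic shift of the PN code
        return np.roll(pn, block_value)

    def decode(rx):
        # Multiple detection: correlate the received chips against all shifts
        corr = [np.dot(rx, np.roll(pn, k)) for k in range(len(pn))]
        return int(np.argmax(corr))

    tx_value = 13
    rx = encode(tx_value) + 0.5*rng.standard_normal(len(pn))   # AWGN channel
    print(decode(rx))    # recovers 13 for moderate noise levels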

Keywords: cyclic shift, multiple detection, parallel combined spread spectrum, PN code

Procedia PDF Downloads 116
36631 Student Attendance System Applying Reed Solomon ECC

Authors: Mohd Noah A. Rahman, Armandurni Abd Rahman, Afzaal H. Seyal, Md Rizal Md Hendry

Abstract:

This article reports an automated student attendance system modeled and developed for use at a vocational school. The project focuses on developing an application using QR codes, which rely on the Reed-Solomon error correction code, scanned with a smartphone through a webcam. The system speeds up the process of taking attendance and saves valuable teaching time. It is planned to help students avoid the consequences of poor attendance, which would eventually bar them from sitting their final examination.
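
A minimal sketch of the Reed-Solomon error correction underlying QR codes, using the third-party reedsolo Python package (an assumption for illustration; QR libraries normally handle this internally):

    from reedsolo import RSCodec

    rsc = RSCodec(10)                          # 10 ECC bytes: corrects up to 5 corrupted bytes
    payload = b"student:2021-0042;2023-05-04"  # hypothetical attendance record

    encoded = bytearray(rsc.encode(payload))
    encoded[3] ^= 0xFF                         # corrupt two bytes "in transit"
    encoded[17] ^= 0x0F

    # decode() returns (message, message+ecc, errata positions) in recent reedsolo versions
    decoded = rsc.decode(bytes(encoded))[0]
    print(decoded == payload)                  # True: both errors corrected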

Keywords: QR code, Reed-Solomon, error correction, system design

Procedia PDF Downloads 367
36630 Anomaly Detection in Financial Markets Using Tucker Decomposition

Authors: Salma Krafessi

Abstract:

Financial markets are a multifaceted, intricate environment, and enormous volumes of data are produced every day. Accurate anomaly detection in this data is essential for finding investment possibilities, possible fraudulent activity, and market oddities. Conventional methods for detecting anomalies frequently fail to capture the complex organization of financial data. In order to improve the identification of abnormalities in financial time series data, this study presents Tucker decomposition as a reliable multi-way analysis approach. We start by gathering closing prices for the S&P 500 index across several decades. The data is converted to a three-dimensional tensor format, which contains internal characteristics and temporal sequences in a sliding window structure. The tensor is then broken down using Tucker decomposition into a core tensor and matching factor matrices, allowing latent patterns and relationships in the data to be captured. The reconstruction error from the Tucker decomposition is a possible sign of abnormality: by setting a statistical threshold, we are able to identify large deviations that indicate unusual behavior. A thorough examination contrasting the Tucker-based method with traditional anomaly detection approaches validates our methodology. The outcomes demonstrate the superiority of Tucker decomposition in identifying intricate and subtle abnormalities that are otherwise missed. This work opens the door for more research into multi-way data analysis approaches across a range of disciplines and emphasizes the value of tensor-based methods in financial analysis.
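
A minimal sketch of the described pipeline using the tensorly package: windowed price data are stacked into a three-way tensor, Tucker-decomposed at low multilinear rank, and scored by per-window reconstruction error. The window size, ranks and synthetic series are illustrative assumptions:

    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import tucker

    rng = np.random.default_rng(7)
    prices = np.cumsum(rng.normal(0, 1, 600)) + 100
    prices[450] += 25                                   # planted anomaly

    win = 20
    windows = np.stack([prices[i:i+win] for i in range(len(prices) - win)])
    # 3-way tensor: (window index, position in window, feature = [price, return])
    rets = np.diff(windows, axis=1, prepend=windows[:, :1])
    X = tl.tensor(np.stack([windows, rets], axis=2))

    core, factors = tucker(X, rank=[20, 5, 2])          # low multilinear rank
    X_hat = tl.tucker_to_tensor((core, factors))
    err = np.linalg.norm(X - X_hat, axis=(1, 2))        # per-window reconstruction error

    thresh = err.mean() + 3*err.std()                   # statistical threshold
    print("anomalous windows:", np.where(err > thresh)[0])  # windows near the planted jump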

Keywords: tucker decomposition, financial markets, financial engineering, artificial intelligence, decomposition models

Procedia PDF Downloads 37
36629 Numerical Solution of Manning's Equation in Rectangular Channels

Authors: Abdulrahman Abdulrahman

Abstract:

When the Manning equation is used, a unique value of the normal depth in uniform flow exists for a given channel geometry, discharge, roughness, and slope. Depending on the value of the normal depth relative to the critical depth, the flow type (supercritical or subcritical) for a given set of channel conditions is determined, whether or not the flow is uniform. There is no general closed-form solution of Manning's equation for the flow depth at a given flow rate, because the cross-sectional area and the hydraulic radius are complicated functions of the depth. The familiar solutions for the normal depth of a rectangular channel involve 1) a trial-and-error solution; 2) constructing a non-dimensional graph; or 3) preparing tables of non-dimensional parameters. In this paper, the author derives a semi-analytical solution of Manning's equation for the flow depth at a given flow rate in a rectangular open channel. The solution is derived by expressing Manning's equation in non-dimensional form and then expanding this form using Maclaurin's series; to simplify the solution, terms containing powers up to 4 are considered. The resulting equation is a quartic in standard form, whose solution is obtained by resolving it into two quadratic factors. The proposed solution of Manning's equation is valid over a large range of parameters, and its maximum error is within -1.586%.
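
For comparison with the proposed semi-analytical solution, a minimal sketch of the classical iterative alternative: solving Manning's equation for the normal depth of a rectangular channel by bisection (channel data are illustrative, SI units):

    def manning_q(y, b=3.0, n=0.013, s=0.001):
        # Manning's equation: Q = (1/n) * A * R^(2/3) * sqrt(S)
        a = b * y                      # flow area
        p = b + 2*y                    # wetted perimeter
        return (1/n) * a * (a/p)**(2/3) * s**0.5

    def normal_depth(q, lo=1e-6, hi=20.0, tol=1e-8):
        # Discharge grows monotonically with depth, so bisection converges
        while hi - lo > tol:
            mid = 0.5*(lo + hi)
            if manning_q(mid) < q:
                lo = mid
            else:
                hi = mid
        return 0.5*(lo + hi)

    y_n = normal_depth(q=5.0)
    print(f"normal depth = {y_n:.4f} m, check Q = {manning_q(y_n):.4f} m^3/s")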

Keywords: channel design, civil engineering, hydraulic engineering, open channel flow, Manning's equation, normal depth, uniform flow

Procedia PDF Downloads 194
36628 Traverse Surveying Table Simple and Sure

Authors: Hamid Fallah

Abstract:

Creating surveying stations is the first thing a surveyor learns; the stations can be used for control and setting-out in projects such as buildings, roads, tunnels and monitoring, and in whatever else is related to the preparation of maps. This article presents the method of calculation through the traverse table, checks several examples of errors in the traverse-table calculations of several publishers of surveying books, and also verifies the results of several software packages in a simple way. Surveyors measure angles and lengths when creating surveying stations, so the most important task of a surveyor is to be able to correctly remove the angular and linear errors from the calculations and to determine whether the amount of error is within the permissible limit for removal or not.
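
A minimal sketch of the closed-traverse computation the table supports: latitudes and departures are summed to get the linear misclosure, which is then distributed by the compass (Bowditch) rule in proportion to leg length. The bearings and distances are illustrative observations:

    import math

    legs = [(45.0, 120.30), (135.0, 80.15), (225.0, 119.80), (315.0, 80.40)]  # (azimuth deg, dist m)

    dlat = [d*math.cos(math.radians(az)) for az, d in legs]   # latitudes
    ddep = [d*math.sin(math.radians(az)) for az, d in legs]   # departures
    mis_lat, mis_dep = sum(dlat), sum(ddep)                   # misclosure components
    total = sum(d for _, d in legs)
    print(f"linear misclosure = {math.hypot(mis_lat, mis_dep):.3f} m")

    # Compass-rule correction: proportional to each leg's length
    adj = [(dl - mis_lat*d/total, dp - mis_dep*d/total)
           for (az, d), dl, dp in zip(legs, dlat, ddep)]
    closure = abs(sum(a for a, _ in adj)) + abs(sum(b for _, b in adj))
    print(f"adjusted closure = {closure:.1e}")                # ~0 after adjustment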

Keywords: UTM, localization, scale factor, cartesian, traverse

Procedia PDF Downloads 61
36627 Improving the Accuracy of Stress Intensity Factors Obtained by Scaled Boundary Finite Element Method on Hybrid Quadtree Meshes

Authors: Adrian W. Egger, Savvas P. Triantafyllou, Eleni N. Chatzi

Abstract:

The scaled boundary finite element method (SBFEM) is a semi-analytical numerical method which introduces a scaling center in each element's domain, thus transitioning from a Cartesian reference frame to one resembling polar coordinates. Consequently, an analytical solution is achieved in the radial direction, implying that only the boundary need be discretized. The only limitation imposed on the resulting polygonal elements is that they remain star-convex. Further arbitrary p- or h-refinement may be applied locally in a mesh. The polygonal nature of SBFEM elements has been exploited in quadtree meshes to alleviate all issues conventionally associated with hanging nodes. Furthermore, since in 2D this results in only 16 possible cell configurations, these are precomputed in order to accelerate the forward analysis significantly; any cells which are clipped to accommodate the domain geometry must be computed conventionally. However, since SBFEM permits polygonal elements, significantly coarser meshes are obtained at comparable accuracy levels when compared with conventional quadtree analysis, further increasing the computational efficiency of this scheme. The generalized stress intensity factors (gSIFs) are computed by exploiting the semi-analytical solution in the radial direction. This is initiated by placing the scaling center of the element containing the crack at the crack tip. Taking the analytical limit of this element's stress field as it approaches the crack tip delivers an expression for the singular stress field. By applying the problem-specific boundary conditions, the geometry correction factor is obtained, and the gSIFs are then evaluated based on their formal definition. Since the SBFEM solution is constructed as a power series, not unlike mode superposition in FEM, the two modes contributing to the singular response of the element can be easily identified in post-processing. Compared to the extended finite element method (XFEM), this approach is highly convenient, since neither enrichment terms nor a priori knowledge of the singularity is required. Computation of the gSIFs by SBFEM permits exceptional accuracy; however, when combined with hybrid quadtrees employing linear elements, this does not always hold. Nevertheless, it has been shown that crack propagation schemes are highly effective even given very coarse discretizations, since they rely only on the ratio of mode one to mode two gSIFs; the absolute values of the gSIFs may still be subject to large errors. Hence, we propose a post-processing scheme which minimizes the error resulting from the approximation space of the cracked element, thus limiting the error in the gSIFs to the discretization error of the quadtree mesh. This is achieved by h- and/or p-refinement of the cracked element, which increases the number of modes present in the solution. The resulting numerical description of the element is highly accurate, with the main error source now stemming from its boundary displacement solution. Numerical examples show that this post-processing procedure can significantly improve the accuracy of the computed gSIFs with negligible computational cost, even on coarse meshes resulting from hybrid quadtrees.

Keywords: linear elastic fracture mechanics, generalized stress intensity factors, scaled boundary finite element method, hybrid quadtrees

Procedia PDF Downloads 122
36626 The Relationship Between Hourly Compensation and Unemployment Rate Using the Panel Data Regression Analysis

Authors: S. K. Ashiquer Rahman

Abstract:

The paper concentrates on the relationship between hourly compensation and the unemployment rate, two of the most important indicators for a nation. These are not merely statistics: they have profound effects on individuals, families, and the economy, and they are inversely related to one another. The unemployment rate will probably decline as hourly compensation in manufacturing rises, since higher compensation can reduce unemployment and improve job prospects; increased hourly compensation in the manufacturing sector could therefore have a favorable effect on job-changing issues. Moreover, the relationship between hourly compensation and unemployment is complex and influenced by broader economic factors. In this paper, we use panel data regression models to evaluate the expected link between hourly compensation and the unemployment rate, in order to determine the effect of hourly compensation on the unemployment rate. We estimate the fixed effects model, evaluate the error components model, and determine which model (the FEM or ECM) is better by pooling all 60 observations. We then analyze and review the data by comparing three countries (the United States, Canada and the United Kingdom) using panel data regression models. Finally, we provide the results, analysis and a summary of the extensive research on how hourly compensation affects the unemployment rate. Additionally, this paper offers relevant and useful information to help the government and the academic community use an econometric and social approach to lessen the effect of hourly compensation on the unemployment rate.
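
A minimal sketch of a fixed-effects (least-squares dummy variable) panel regression of the unemployment rate on hourly compensation with country dummies, using statsmodels; the 60-observation panel below is a synthetic stand-in, not the study's data:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(8)
    rows = []
    for country, effect in zip(["US", "CA", "UK"], [4.0, 5.0, 6.0]):
        comp = np.linspace(20, 40, 20) + rng.normal(0, 1, 20)   # hourly compensation
        unemp = effect - 0.08*comp + rng.normal(0, 0.3, 20)     # assumed inverse relation
        rows += [{"country": country, "comp": x, "unemp": y} for x, y in zip(comp, unemp)]
    df = pd.DataFrame(rows)   # 60 observations, as in the study

    fe = smf.ols("unemp ~ comp + C(country)", data=df).fit()    # country fixed effects
    print(fe.params["comp"])  # negative slope: the inverse relationship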

Keywords: hourly compensation, unemployment rate, panel data regression models, dummy variables, random effects model, fixed effects model, linear regression model

Procedia PDF Downloads 56
36625 Identifying, Reporting and Preventing Medical Errors Among Nurses Working in Critical Care Units at Kenyatta National Hospital, Kenya: Closing the Gap Between Attitude and Practice

Authors: Jared Abuga, Wesley Too

Abstract:

Medical error is the third leading cause of death in the US, with approximately 98,000 deaths occurring every year as a result of medical errors, and the global financial burden of medication errors is roughly USD 42 billion. Medication errors may lead to at least one death daily and injure roughly 1.3 million people every year. Medical error reporting is essential in creating a culture of accountability in our healthcare system. Studies of the attitudes and practice of healthcare workers in reporting medical errors have shown that the major factors in under-reporting include work stress and fear of the medico-legal consequences of disclosing the error, while the majority believed that an increase in the reporting of medical errors would contribute to a better system. Most hospitals depend on nurses to discover medication errors because nurses are considered to be the sources of these errors, as contributors or mere observers; consequently, the nurses' perception of medication errors and of what needs to be done is a vital element in reducing their incidence. We sought to explore nurses' knowledge of medical errors and the factors affecting or hindering the reporting of medical errors among nurses working in the emergency unit at KNH. Critical care nurses face many barriers to completing incident reports on medication errors; one of these barriers, which contributes to under-reporting, is a lack of education and/or knowledge regarding medication errors and the reporting process. This study therefore sought to determine the availability and use of reporting systems for medical errors in the critical care unit, to establish nurses' perceptions regarding medical errors and reporting, and to document the factors facilitating timely identification and reporting of medical errors in critical care settings. Methods: The study used a cross-sectional design to collect data from 76 critical care nurses at Kenyatta Teaching & Research National Referral Hospital, Kenya. Data analysis is ongoing; by October 2022 we will have the analysis, results, discussion, and recommendations of the study for the purposes of the conference in 2023.

Keywords: errors, medical, Kenya, nurses, safety

Procedia PDF Downloads 221
36624 Error Analysis of Pronunciation of French by Sinhala Speaking Learners

Authors: Chandeera Gunawardena

Abstract:

The present research analyzes the pronunciation errors encountered by thirty Sinhala-speaking learners of French, on the assumption that the pronunciation errors are systematic and reflect the interference of the learners' native language. The thirty participants were selected using a random sampling method. At the time of the study, the subjects were studying French as a foreign language for their Bachelor of Arts degree at the University of Kelaniya, Sri Lanka. The participants were from a homogeneous linguistic background: all speak the same native language (Sinhala), completed their secondary education in the Sinhala medium, and during that education also learnt French as a foreign language. A battery-operated audio tape recorder and 120-minute blank cassettes were used for the recording. A list of 60 words representing all French phonemes was used to diagnose pronunciation difficulties. Before the recording commenced, the subjects were asked to familiarize themselves with the words by reading them several times. The recording was conducted individually in a quiet classroom, each session taking approximately fifteen minutes, and each subject was required to read at normal speed. After the recording was completed, the recordings were replayed to identify common errors, which were immediately transcribed using the International Phonetic Alphabet. The results show that Sinhala-speaking learners have problems with French nasal vowels and French word-initial consonant clusters. The learners also exhibit errors which occur because of interference from their second language (English).

Keywords: error analysis, pronunciation difficulties, pronunciation errors, Sinhala speaking learners of French

Procedia PDF Downloads 191
36623 A Modified Refined Higher Order Zigzag Theory for Stress Analysis of Hybrid Composite Laminates

Authors: Dhiraj Biswas, Chaitali Ray

Abstract:

A modified refined higher-order zigzag theory is developed in this paper in order to compute accurate interlaminar stresses within hybrid laminates. Warping has a significant effect on the mechanical behaviour of laminates, yet, to the best of the authors' knowledge, the stress analysis of hybrid laminates has not been reported in the published literature. The present paper develops a new C0-continuous element based on refined higher-order zigzag theories, considering the warping effect in the formulation for hybrid laminates. An eight-noded isoparametric plate bending element is used for the flexural analysis of laminated composite plates to study the performance of the proposed model. The transverse shear stresses are computed from the differential equations of stress equilibrium in a simplified manner. A computer code has been developed using the MATLAB software package. Several numerical examples are solved to assess the performance of the present finite element model based on the proposed higher-order zigzag theory by comparing the present results with three-dimensional elasticity solutions, and the formulation is further validated against results from the relevant literature. An extensive parametric study has been carried out on hybrid laminates with varying material percentages and fibre orientation angles.

Keywords: hybrid laminate, interlaminar stress, refined higher order zigzag theory, warping effect

Procedia PDF Downloads 205
36622 Mathematical and Numerical Analysis of a Nonlinear Cross Diffusion System

Authors: Hassan Al Salman

Abstract:

We consider a nonlinear parabolic cross-diffusion model arising in applied mathematics. A fully practical piecewise-linear finite element approximation of the model is studied. Using entropy-type inequalities and compactness arguments, the existence of a global weak solution is proved. Assuming further regularity of the solution of the model, some uniqueness results and error estimates are established. Finally, some numerical experiments are performed.
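
The abstract does not state the specific model, but a standard two-species cross-diffusion system of this parabolic type (an illustrative example, not necessarily the paper's system) reads, in LaTeX:

    % Generic two-species cross-diffusion system (illustrative)
    \begin{aligned}
    \partial_t u - \nabla\cdot\big( a_{11}(u,v)\,\nabla u + a_{12}(u,v)\,\nabla v \big) &= f_1(u,v), \\
    \partial_t v - \nabla\cdot\big( a_{21}(u,v)\,\nabla u + a_{22}(u,v)\,\nabla v \big) &= f_2(u,v).
    \end{aligned}

The off-diagonal coefficients a_{12} and a_{21} couple each equation to the gradient of the other unknown; it is precisely this coupling that destroys the usual maximum principles and makes entropy-type inequalities the natural tool for proving existence.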

Keywords: cross diffusion model, entropy-type inequality, finite element approximation, numerical analysis

Procedia PDF Downloads 365
36621 The Identification of Combined Genomic Expressions as a Diagnostic Factor for Oral Squamous Cell Carcinoma

Authors: Ki-Yeo Kim

Abstract:

Trends in genetics are shifting toward identifying differential coexpression of correlated genes rather than significant individual genes. Moreover, it is known that a combined biomarker pattern improves the discrimination of a specific cancer. The identification of combined biomarkers is also necessary for the early detection of invasive oral squamous cell carcinoma (OSCC). To identify combined biomarkers that could improve the discrimination of OSCC, we explored the appropriate number of genes in a combined gene set in order to attain the highest level of accuracy. After detecting a significant gene set of the pre-defined size, a combined expression was identified using the weights of the genes in the set, computed with principal component analysis (PCA). Three public microarray datasets were used in this process: one for identifying the combined biomarker and the other two for validation. Discrimination accuracy was measured by the out-of-bag (OOB) error. There was no relation between significance and discrimination accuracy for individual genes, and the identified gene sets included both significant and insignificant genes. One of the most discriminative gene sets for classifying normal versus OSCC included MMP1, SOCS3 and ACOX1. Furthermore, for discriminating oral dysplasia from OSCC, two combined biomarkers were identified. The combined genomic expression achieved better performance in discriminating between conditions than any single significant gene; accurate diagnosis of cancer could therefore be possible with a combined biomarker.
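
A minimal sketch of the described pipeline with scikit-learn: PCA loadings turn a small gene set into one combined-expression score, and the out-of-bag (OOB) error of a random forest measures discrimination. The expression matrix is synthetic; the gene names follow the abstract:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(9)
    n = 80
    y = np.repeat([0, 1], n // 2)                       # 0 = normal, 1 = OSCC
    genes = ["MMP1", "SOCS3", "ACOX1"]
    X = rng.normal(0, 1, (n, len(genes))) + 1.2*y[:, None]*np.array([1.0, -0.6, 0.8])

    w = PCA(n_components=1).fit(X).components_[0]       # PCA loadings as gene weights
    combined = X @ w                                    # one combined-expression score

    rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
    rf.fit(combined.reshape(-1, 1), y)
    print(f"OOB error = {1 - rf.oob_score_:.3f}")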

Keywords: oral squamous cell carcinoma, combined biomarker, microarray dataset, correlated genes

Procedia PDF Downloads 400