Search results for: high relative accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 23311

23281 Human Health Risk Assessment of Particulate Air Pollution in Romania

Authors: Katalin Bodor, Zsolt Bodor, Robert Szep

Abstract:

Particulate matter (PM) smaller than 2.5 μm is less studied due to the limited availability of PM₂.₅ measurements, and little information is available on the health effects attributable to PM₁₀ in Central-Eastern Europe. The objective of the current study was to assess the human health risk and to characterize the spatial and temporal variation of PM₂.₅ and PM₁₀ in eight Romanian regions between 2009 and 2018. The PM concentrations showed high variability in both time and space. The highest concentration was detected in the Bucharest region in the winter period, and the lowest was detected in the West region. The relative risk of all-cause mortality caused by PM₁₀ varied between 1.017 (B) and 1.025 (W), with an average of 1.020. The results demonstrate a positive relative risk of cardiopulmonary disease and lung cancer due to exposure to PM₂.₅, with national averages of 1.26 (±0.023) and 1.42 (±0.037), respectively.
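
Health-impact studies of this kind commonly express the relative risk through a log-linear concentration-response function. The sketch below illustrates that relationship in Python; the baseline concentration and slope beta are illustrative assumptions, not values from this paper.

```python
import math

def relative_risk(conc, baseline_conc, beta):
    """Log-linear concentration-response: RR = exp(beta * (C - C0))."""
    return math.exp(beta * max(conc - baseline_conc, 0.0))

# Illustrative values only -- not the paper's coefficients.
beta_pm10 = 0.0008          # per (ug/m3), assumed slope for all-cause mortality
rr = relative_risk(conc=45.0, baseline_conc=20.0, beta=beta_pm10)
print(f"RR = {rr:.3f}")     # ~1.020, comparable in magnitude to the reported average
```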

Keywords: PM₂.₅, PM₁₀, relative risk, health effect

Procedia PDF Downloads 136
23280 A Simple Adaptive Atomic Decomposition Voice Activity Detector Implemented by Matching Pursuit

Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic

Abstract:

A simple adaptive voice activity detector (VAD) is implemented using Gabor and gammatone atomic decomposition of speech for high Gaussian noise environments. Matching pursuit is used for the atomic decomposition and is shown to achieve optimal speech detection capability at high data compression rates for low signal-to-noise ratios. The most active dictionary elements found by matching pursuit are used for the signal reconstruction, so the algorithm adapts to the individual speaker's dominant time-frequency characteristics. Speech has a high peak-to-average ratio, enabling matching pursuit's greedy heuristic of highest inner products to isolate high-energy speech components in high noise environments. Gabor and gammatone atoms are both investigated with identical logarithmically spaced center frequencies and similar bandwidths. The algorithm performs equally well for both Gabor and gammatone atoms, with no significant statistical differences. The algorithm achieves 70% accuracy at a 0 dB SNR, 90% accuracy at a 5 dB SNR, and 98% accuracy at a 20 dB SNR, using a 30 dB SNR as a reference for voice activity.
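
A minimal sketch of the greedy matching pursuit step with a Gabor dictionary follows; the frame length, center frequencies, atom width, and decision threshold are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def gabor_atom(n, fc, sigma, fs):
    """Unit-energy Gabor atom: Gaussian-windowed cosine at center frequency fc."""
    t = (np.arange(n) - n / 2) / fs
    g = np.exp(-0.5 * (t / sigma) ** 2) * np.cos(2 * np.pi * fc * t)
    return g / np.linalg.norm(g)

def captured_energy(frame, dictionary, n_atoms=5):
    """Greedy matching pursuit: repeatedly pick the atom with the largest
    |inner product| and subtract it; return the energy those atoms capture."""
    residual = frame.astype(float).copy()
    energy = 0.0
    for _ in range(n_atoms):
        corr = dictionary @ residual
        k = np.argmax(np.abs(corr))
        residual -= corr[k] * dictionary[k]
        energy += corr[k] ** 2
    return energy

fs, n = 8000, 256
freqs = 100.0 * 2 ** np.linspace(0, 5, 32)        # log-spaced center frequencies
D = np.array([gabor_atom(n, f, 0.004, fs) for f in freqs])

frame = np.random.randn(n)                         # noise-only frame for the demo
threshold = 0.5                                    # assumed, tuned on a reference SNR
# Speech's high peak-to-average ratio concentrates energy in few atoms.
is_speech = captured_energy(frame, D) / np.dot(frame, frame) > threshold
```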

Keywords: atomic decomposition, gabor, gammatone, matching pursuit, voice activity detection

Procedia PDF Downloads 267
23279 Adaptation of Hough Transform Algorithm for Text Document Skew Angle Detection

Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye

Abstract:

Skew detection and correction form an important part of digital document analysis, because uncompensated skew can deteriorate document features and complicate further document image processing steps. Efficient text document analysis and digitization can rarely be achieved when a document is skewed, even at a small angle. Once documents have been digitized through the scanning system and binarization has been performed, document skew correction is required before further image analysis. Research efforts have been put into this area, with algorithms developed to eliminate document skew. Skew angle correction algorithms can be compared based on performance criteria; the most important are the accuracy of skew angle detection, the range of detectable skew angles, the processing speed, the computational complexity, and consequently the memory space used. The standard Hough Transform has successfully been applied to text document skew angle estimation. However, the accuracy of the standard Hough Transform depends largely on how fine the angle step size is; higher accuracy consequently consumes more time and memory, especially where the number of pixels is considerably large. Whenever the Hough transform is used, there is always a trade-off between accuracy and speed, so a more efficient solution is needed that optimizes space as well as time. In this paper, an improved Hough transform (HT) technique that optimizes space as well as time to robustly detect document skew is presented. The modified Hough Transform algorithm resolves the conflict among memory space, running time, and accuracy. Our algorithm starts with a first estimation step, accurate to zero decimal places, using the standard Hough Transform, achieving minimal running time and space but lacking relative accuracy. Then, to increase accuracy, if the angle estimated by the basic Hough algorithm is x degrees, the basic algorithm is run again over a narrow range around x degrees with an accuracy of one decimal place. The same process is iterated until the desired level of accuracy is achieved. The skew estimation and correction procedure for text images is implemented in MATLAB. The memory space and processing time are also tabulated, assuming skew angles between 0° and 45°. The simulation results, demonstrated in MATLAB, show the high performance of our algorithm, with less computational time and memory space used in detecting document skew for a variety of documents with different levels of complexity.
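
The coarse-to-fine refinement described above can be sketched as follows, in Python with scikit-image rather than the authors' MATLAB implementation; the search spans, window shrinkage, and accumulator-peak criterion are our assumptions.

```python
import numpy as np
from skimage.transform import hough_line

def estimate_skew(binary_img, decimals=2):
    """Coarse-to-fine skew estimation: a 1-degree pass over the full range,
    then progressively finer passes in a shrinking window around the estimate."""
    center, step, half_span = 0.0, 1.0, 45.0
    for _ in range(decimals + 1):
        cand = np.arange(center - half_span, center + half_span + step / 2, step)
        # For text lines skewed by angle a, the Hough normal angle is a - 90 deg.
        hspace, _, _ = hough_line(binary_img, theta=np.deg2rad(cand - 90.0))
        center = cand[np.argmax(hspace.max(axis=0))]   # strongest accumulator column
        half_span, step = 5 * step, step / 10.0        # shrink the window each pass
    return center
```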

Keywords: hough-transform, skew-detection, skew-angle, skew-correction, text-document

Procedia PDF Downloads 128
23278 Charging-Vacuum Helium Mass Spectrometer Leak Detection Technology in the Application of Space Products Leak Testing and Error Control

Authors: Jijun Shi, Lichen Sun, Jianchao Zhao, Lizhi Sun, Enjun Liu, Chongwu Guo

Abstract:

Because the pressure direction is consistent with service conditions, the test cycle is shorter, and the sensitivity is high, charging-vacuum helium mass spectrometer leak testing is the most popular leak testing technology for the seal testing of spacecraft parts, especially small and medium-sized ones. Usually, an auxiliary pump is used, and the minimum detectable leak rate can reach 5×10⁻⁹ Pa·m³/s, or even better on certain occasions. Relative error is the more important quantity when evaluating the results. The choice of reference leak, the helium background level, and the record format all affect the measured leak rate. Within the linear range of the leak testing system, the relative error is reduced by about 10% if a reference leak with a larger leak rate is used, and it is reduced noticeably if the helium background is kept low, a decimal record format is used, and more stable data are recorded.
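
In comparative leak testing, the unknown leak is typically scaled against a calibrated reference leak after background subtraction; a minimal sketch of that standard comparison follows, with all numbers purely illustrative rather than from this paper.

```python
def leak_rate(signal, background, ref_signal, ref_leak_rate):
    """Comparative measurement against a calibrated reference leak:
    Q = Q_ref * (S - S_bg) / (S_ref - S_bg)."""
    return ref_leak_rate * (signal - background) / (ref_signal - background)

# Illustrative numbers only (arbitrary units for the spectrometer signal).
q = leak_rate(signal=3.2e-10, background=2.0e-11,
              ref_signal=6.1e-9, ref_leak_rate=1.0e-7)
print(f"Q = {q:.2e} Pa*m3/s")    # ~4.9e-9, within the detectable range cited above
```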

Keywords: leak testing, spacecraft parts, relative error, error control

Procedia PDF Downloads 428
23277 The Impact of Syntactic Priming on Language Learners’ Perception of Relative Clauses

Authors: Kaine Gulozer

Abstract:

Listening comprehension in a foreign language context has been a constant challenge for Turkish speakers of English. Syntactic priming (SP) of relative clauses might affect the perception of subsequent sentences of identical structure, and this could have an impact on the listening comprehension of second or foreign language learners. There has been little attempt to investigate the syntactic priming of English subject relative clauses and object relative clauses in relation to perception for learners of English in the Turkish context. This study investigates SP effects on low-proficiency EFL learners' production of English relative clauses. Both qualitative and quantitative methods, along with pre-test and post-test tasks, were adopted, recruiting 62 EFL learners who received six weeks of listening instruction on relative clauses. The testing instruments for language production included two tasks: (1) visual-cued presentation and recall and (2) auditory-cued presentation and recall. Students' listening comprehension in tasks 1 and 2 was recorded and transcribed. Fifteen of the participants were also interviewed. The results of the dependent-samples t-test analyses revealed that SP had a significant effect on the overall perception of relative clauses.

Keywords: listening comprehension, relative clauses, structural priming, syntactic persistence, syntactic priming

Procedia PDF Downloads 131
23276 The Modeling and Effectiveness Evaluation for Vessel Evasion to Acoustic Homing Torpedo

Authors: Li Minghui, Min Shaorong, Zhang Jun

Abstract:

This paper studies the operational efficiency of a surface warship's motorized evasion of an acoustic homing torpedo. It develops, in order, a trajectory model, a self-guidance detection model, a vessel evasion model, and an anti-torpedo error model in three-dimensional space, to make up for the deficiency of previous research that analyzed confrontation models two-dimensionally. Then, using the Monte Carlo method, the confrontation process of evasion is simulated in MATLAB. Finally, the main factors that determine the vessel's survival probability are analyzed quantitatively. The results show that the evasion relative bearing and speed affect the vessel's survival probability significantly. Thus, choosing an appropriate evasion relative bearing and speed according to the torpedo's alarming range and alarming relative bearing, improving the alarming range and positioning accuracy, and reducing the response time against the torpedo will improve the vessel's survival probability significantly.

Keywords: acoustic homing torpedo, vessel evasion, monte carlo method, torpedo defense, vessel's survival probability

Procedia PDF Downloads 422
23275 Flood Mapping Using Height above the Nearest Drainage Model: A Case Study in Fredericton, NB, Canada

Authors: Morteza Esfandiari, Shabnam Jabari, Heather MacGrath, David Coleman

Abstract:

Flooding is a severe issue in many places around the world, including the city of Fredericton, New Brunswick, Canada. The downtown area of Fredericton is close to the Saint John River, which is susceptible to flooding around May every year. Recently, the frequency of flooding seems to have increased, especially after the downtown area and surrounding urban/agricultural lands were flooded in two consecutive years, 2018 and 2019. In order to have a clear picture of flood extent and damage to affected areas, it is necessary to use either flood inundation modelling or satellite data. Due to the contingent availability and weather dependency of optical satellites, and the limited data available because of the high cost of hydrodynamic models, it is not always feasible to rely on these data sources to generate quality flood maps after or during a catastrophe. Height Above the Nearest Drainage (HAND), a state-of-the-art topo-hydrological index, normalizes the height of a basin based on the relative elevation along the stream network and specifies the gravitational, or relative, drainage potential of an area. HAND is the relative height difference between the stream network and each cell on a Digital Terrain Model (DTM). The stream layer is produced through a multi-step, time-consuming process which does not always result in an optimal representation of the river centerline, depending on the topographic complexity of the region. HAND has been used in numerous case studies with quite acceptable, and sometimes unexpected, results because of natural and human-made features on the surface of the earth. Some of these features can disturb the generated model, and consequently, the model might not predict the flow simulation accurately. We propose to include a previously existing stream layer generated by the province of New Brunswick and to benefit from culvert maps to improve the water flow simulation and, accordingly, the accuracy of the HAND model. By considering these parameters in our processing, we were able to increase the accuracy of the model from nearly 74% to almost 92%. The improved model can be used for generating highly accurate flood maps, which are necessary for future urban planning and flood damage estimation, without any need for satellite imagery or hydrodynamic computations.
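
The core HAND computation, the height of each DTM cell above its nearest drainage cell, can be sketched as below. This toy version pairs each cell with its Euclidean-nearest stream cell; operational HAND implementations (and presumably this study) follow hydrological flow paths instead, so treat this purely as an illustration of the index.

```python
import numpy as np
from scipy import ndimage

def hand(dtm, stream_mask):
    """Simplified HAND: elevation of each cell minus the elevation of its
    nearest stream cell (Euclidean-nearest, not flow-path-nearest)."""
    # Indices of the nearest stream cell for every cell in the grid.
    _, (iy, ix) = ndimage.distance_transform_edt(~stream_mask, return_indices=True)
    return dtm - dtm[iy, ix]

dtm = np.array([[12., 11., 10.],
                [11., 10.,  9.],
                [10.,  9.,  8.]])
stream = np.zeros_like(dtm, dtype=bool)
stream[2, :] = True                        # bottom row is the drainage line
flooded = hand(dtm, stream) < 1.5          # cells within 1.5 m of stream level
```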

Keywords: HAND, DTM, rapid floodplain, simplified conceptual models

Procedia PDF Downloads 112
23274 Using Greywolf Optimized Machine Learning Algorithms to Improve Accuracy for Predicting Hospital Readmission for Diabetes

Authors: Vincent Liu

Abstract:

Machine learning (ML) algorithms can achieve high accuracy in predicting outcomes compared to classical models. Metaheuristic, nature-inspired algorithms can enhance traditional ML algorithms by optimizing them, for example by performing feature selection. We compare ten ML algorithms for predicting 30-day hospital readmission rates for diabetes patients in the US, using a dataset from the UCI Machine Learning Repository, with feature selection performed by the nature-inspired Grey Wolf Optimizer. The baseline accuracy of the initial random forest model was 65%. After feature engineering, SMOTE for class balancing, and Grey Wolf optimization, the machine learning algorithms showed better metrics, including F1 scores, accuracy, and confusion matrices, with improvements ranging from 10% to 30%, and a best model of XGBoost with an accuracy of 95%. Applying machine learning this way can improve patient outcomes, as unnecessary rehospitalizations can be prevented by focusing on patients who are at higher risk of readmission.
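
Wrapper-style feature selection with a binary Grey Wolf Optimizer can be sketched as follows; the synthetic dataset, population size, iteration count, and 0.5 binarization threshold are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=30, n_informative=8,
                           random_state=0)

def fitness(mask):
    """Cross-validated accuracy of a random forest on the selected features."""
    if not mask.any():
        return 0.0
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

n_wolves, n_feat, n_iter = 8, X.shape[1], 15
pos = rng.random((n_wolves, n_feat))           # continuous wolf positions in [0, 1]
best_mask, best_score = None, -1.0
for it in range(n_iter):
    masks = pos > 0.5                          # threshold binarization to feature masks
    scores = np.array([fitness(m) for m in masks])
    if scores.max() > best_score:
        best_score, best_mask = scores.max(), masks[np.argmax(scores)].copy()
    alpha, beta, delta = pos[np.argsort(scores)[::-1][:3]]   # three best wolves
    a = 2.0 * (1.0 - it / n_iter)              # exploration factor decays to zero
    new_pos = np.zeros_like(pos)
    for leader in (alpha, beta, delta):
        A = a * (2 * rng.random(pos.shape) - 1)
        C = 2 * rng.random(pos.shape)
        new_pos += (leader - A * np.abs(C * leader - pos)) / 3.0
    pos = np.clip(new_pos, 0.0, 1.0)
print(f"best CV accuracy {best_score:.3f} with {best_mask.sum()} features")
```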

Keywords: diabetes, machine learning, 30-day readmission, metaheuristic

Procedia PDF Downloads 22
23273 High Resolution Image Generation Algorithm for Archaeology Drawings

Authors: Xiaolin Zeng, Lei Cheng, Zhirong Li, Xueping Liu

Abstract:

To address the low accuracy of current image generation algorithms in producing high-resolution archaeology drawings and their susceptibility to the surface deterioration ('diseases') of cultural relics, an archaeology drawing generation algorithm based on a conditional generative adversarial network is proposed. An attention mechanism is added to the high-resolution image generation network serving as the backbone, which enhances the line feature extraction capability and improves the accuracy of line drawing generation. A dual-branch parallel architecture consisting of two backbone networks is implemented, in which the semantic translation branch extracts semantic features from orthophotographs of cultural relics and the gradient screening branch extracts effective gradient features. Finally, a fusion fine-tuning module combines these two types of features to generate high-quality, high-resolution archaeology drawings. Experimental results on a self-constructed archaeology drawings dataset of grotto temple statues show that the proposed algorithm outperforms current mainstream image generation algorithms in terms of pixel accuracy (PA), structural similarity (SSIM), and peak signal-to-noise ratio (PSNR), and can be used to assist in producing archaeology drawings.

Keywords: archaeology drawings, digital heritage, image generation, deep learning

Procedia PDF Downloads 21
23272 Ethanol Chlorobenzene Dosimeter Usage for Measuring Dose of the Intraoperative Linear Electron Accelerator System

Authors: Mojtaba Barzegar, Alireza Shirazi, Saied Rabi Mahdavi

Abstract:

Intraoperative radiation therapy (IORT) is an innovative treatment modality in which a large single dose of radiation is delivered to the tumor bed during surgery. Radiotherapy success depends on the absorbed dose delivered to the tumor, and better accuracy in patient treatment depends on the dose measured by a standard dosimeter such as an ionization chamber. However, because of the high density of electric charge per pulse produced by the accelerator in the ionization chamber volume, the standard correction factor for ion recombination, Ksat, calculated with the classic two-voltage method is overestimated, so dose-per-pulse-independent dosimeters such as the chemical Fricke and ethanol chlorobenzene (ECB) dosimeters have been suggested. Dose is usually calculated and calibrated at z_max. Ksat was calculated by comparing the ion chamber response with the ECB dosimeter at each applicator angle, size, and dose. The relative output factors (OFs) for the IORT applicators were calculated, compared with experimentally determined values, and checked against results simulated with Monte Carlo software. The absorbed doses were calculated and measured with statistical uncertainties of less than 0.7% and 2.5%, respectively. The relative differences between calculated and measured OFs were up to 2.5%; for the major OFs the agreement was better. Under these conditions, together with the relative absorbed dose calculations, the OFs can be considered an indication that the IORT electron beams have been well simulated. These investigations demonstrate that full Monte Carlo simulation of the accelerator head, together with the ECB dosimeter, allows detailed information on clinical IORT beams to be obtained.

Keywords: intra operative radiotherapy, ethanol chlorobenzene, ksat, output factor, monte carlo simulation

Procedia PDF Downloads 451
23271 Trading off Accuracy for Speed in PowerDrill

Authors: Filip Buruiana, Alexander Hall, Reimar Hofmann, Thomas Hofmann, Silviu Ganceanu, Alexandru Tudorica

Abstract:

In-memory column-stores make interactive analysis feasible for many big data scenarios. PowerDrill is a system used internally at Google for exploration of logs data. Even though it is a highly parallelized column-store and uses in-memory caching, interactive response times cannot be achieved for all datasets (note that it is common to analyze data with 50 billion records in PowerDrill). In this paper, we investigate two orthogonal approaches to optimizing performance at the expense of an acceptable loss of accuracy. Both approaches can be implemented as outer wrappers around existing database engines, so they should be easily applicable to other systems. For the first optimization, we show that memory is the limiting factor in executing queries at speed and therefore explore possibilities for improving memory efficiency. We adapt some of the theory behind data sketches to reduce the size of particularly expensive fields in our largest tables by a factor of 4.5 compared to a standard compression algorithm. This saves 37% of the overall memory in PowerDrill and introduces a 0.4% relative error in the 90th percentile for results of queries with the expensive fields. We additionally evaluate the effects of sampling on accuracy and propose a simple heuristic for annotating individual result values as accurate (or not). Based on measurements of user behavior in our real production system, we show that these estimates are essential for interpreting intermediate results before final results are available. For a large set of queries, this effectively brings the 95th latency percentile down from 30 to 4 seconds.
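
To illustrate the sampling side of this trade-off, the sketch below estimates a filtered count from a Bernoulli sample and flags it as accurate when the relative standard error is below a threshold. The annotation rule here is our own illustration of the idea; the paper does not publish PowerDrill's internal heuristic.

```python
import math, random

def estimate_count(rows, predicate, p=0.01, max_rel_err=0.05):
    """Horvitz-Thompson count estimate from Bernoulli sampling at rate p,
    annotated as 'accurate' when the estimated relative standard error is small."""
    hits = sum(1 for r in rows if random.random() < p and predicate(r))
    if hits == 0:
        return 0.0, False
    rel_err = math.sqrt((1.0 - p) / hits)      # approximate relative standard error
    return hits / p, rel_err <= max_rel_err

random.seed(7)
estimate, accurate = estimate_count(range(1_000_000), lambda r: r % 10 == 0)
print(f"estimate ~ {estimate:,.0f}, annotated accurate: {accurate}")
```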

Keywords: big data, in-memory column-store, high-performance SQL queries, approximate SQL queries

Procedia PDF Downloads 231
23270 A Carrier Phase High Precision Ranging Theory Based on Frequency Hopping

Authors: Jie Xu, Zengshan Tian, Ze Li

Abstract:

Previous indoor ranging or localization systems achieving high-accuracy time-of-flight (ToF) estimation relied on two key points. One is strict time and frequency synchronization between the transmitter and receiver to eliminate equipment asynchrony errors such as carrier frequency offset (CFO), but this is difficult to achieve in a practical communication system. The other is extending the total bandwidth of the communication, because the accuracy of ToF estimation improves with bandwidth: the larger the total bandwidth, the higher the accuracy of the ToF estimate. Ultra-wideband (UWB) technology, for example, is built on this principle, but high-precision ToF estimation is difficult to achieve in common WiFi or Bluetooth systems, whose bandwidth is lower than UWB's. Therefore, it is meaningful to study how to achieve high-precision ranging with lower bandwidth when the transmitter and receiver are asynchronous. To tackle these problems, we propose a two-way channel error elimination theory and a frequency-hopping-based carrier phase ranging algorithm to achieve high-accuracy ranging under asynchronous conditions. The two-way channel error elimination theory uses the symmetry of the two-way channel to remove the asynchronous phase error caused by the asynchronous transmitter and receiver; we also study the effect of the two-way channel generation time difference on the phase according to the characteristics of different hardware devices. The frequency-hopping-based carrier phase ranging algorithm uses frequency hopping to extend the equivalent bandwidth and incorporates a carrier phase ranging algorithm with multipath resolution, achieving a ranging accuracy comparable to that of UWB at 400 MHz bandwidth within the typical 80 MHz bandwidth of commercial WiFi. Finally, to verify the validity of the algorithm, we implement this theory on a software radio platform. The experimental results show that the proposed method has a median ranging error of 5.4 cm at 5 m, 7 cm at 10 m, and 10.8 cm at 20 m for a total bandwidth of 80 MHz.
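
The basic carrier-phase idea, that range is proportional to the slope of the unwrapped round-trip phase across hopped carrier frequencies, can be sketched in an idealized noise-free, synchronized setting; the asynchrony errors that the paper's two-way elimination handles are deliberately omitted here.

```python
import numpy as np

c = 3.0e8                                      # propagation speed, m/s
d_true = 5.4                                   # one-way distance, m
freqs = 5.18e9 + np.arange(8) * 10e6           # 8 hops spanning 80 MHz

# Ideal round-trip carrier phase at each hop, wrapped to (-pi, pi].
phase = np.angle(np.exp(-1j * 2 * np.pi * freqs * (2 * d_true / c)))

# Unwrap across hops; the phase-vs-frequency slope encodes the range:
# phi(f) = -2*pi*f*(2d/c)  =>  d = -slope * c / (4*pi)
slope = np.polyfit(freqs - freqs[0], np.unwrap(phase), 1)[0]
print(f"estimated range: {-slope * c / (4 * np.pi):.3f} m")
```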

Keywords: frequency hopping, phase error elimination, carrier phase, ranging

Procedia PDF Downloads 92
23269 Political Determinants of Sovereign Spread: The Great East-West Divide

Authors: Maruska Vizek, Josip Glaurdic, Marina Tkalec, Goran Vuksic

Abstract:

We empirically explore whether and how taxation affects bilateral real exchange rates in the euro area, measured by relative unit labor costs and relative consumer price indices. We find that changes in employers' social security contributions and in the value added tax have the effects put forward in the fiscal devaluation literature and simulations. Increases in employers' contributions appreciate the relative unit labor costs in the short and the long run, while a value added tax hike appreciates the relative consumer prices. Somewhat surprisingly, for personal income tax increases we find a short-run depreciating impact on the relative unit labor costs, while increases in employees' contributions depreciate both measures of the real exchange rate in the short run.

Keywords: sovereign bonds, European Union, developing countries, political determinants

Procedia PDF Downloads 275
23268 Hydrodynamic Characteristics of Single and Twin Offshore Rubble Mound Breakwaters under Regular and Random Waves

Authors: M. Alkhalidi, S. Neelamani, Z. Al-Zaqah

Abstract:

This paper investigates the interaction of single and twin offshore rubble mound breakwaters with regular and random water waves through physical modeling, to assess their reflection, transmission, and energy dissipation characteristics. Various combinations of wave heights and wave periods were utilized in a series of experiments, along with three different water depths. The single and twin permeable breakwater models were both constructed with one layer of rubble. Both models had the same total volume; however, the single breakwater was of trapezoidal type while the twin breakwaters were of triangular type. Physical modeling experiments were carried out in the wave flume of the coastal engineering laboratory of the Kuwait Institute for Scientific Research (KISR). Measurements from the six wave probes fixed in the two-dimensional wave flume were collected and used to determine the generated incident wave heights, as well as the reflected and transmitted wave heights resulting from the wave-breakwater interaction. The possible factors affecting the wave attenuation efficiency of the breakwater models are the relative water depth (d/L), wave steepness (H/L), relative wave height ((h-d)/Hi), relative height of the breakwater (h/d), and relative clear spacing between the twin breakwaters (S/h). The results indicated that the single and twin breakwaters respond differently to changes in their relative height as well as the relative wave height, which demonstrates that the effect of the relative water depth on wave reflection, transmission, and energy dissipation is highly influenced by changes in the relative breakwater height, the relative wave height, and the relative breakwater spacing. In general, within the range of relative water depths tested in this study, and under both regular and random waves, the single breakwater allows lower wave transmission and shows a higher energy dissipation effect than both of the tested twin breakwaters, and hence has the best overall performance.
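
From the measured incident, reflected, and transmitted wave heights, the standard coefficients follow directly, since wave energy scales with the square of wave height; a minimal sketch with illustrative numbers:

```python
def wave_coefficients(h_incident, h_reflected, h_transmitted):
    """Reflection, transmission, and energy-dissipation coefficients from
    measured wave heights (energy scales with the square of wave height)."""
    kr = h_reflected / h_incident
    kt = h_transmitted / h_incident
    kd = (1.0 - kr**2 - kt**2) ** 0.5      # fraction of energy dissipated
    return kr, kt, kd

kr, kt, kd = wave_coefficients(0.10, 0.04, 0.05)   # heights in meters, illustrative
print(f"Kr={kr:.2f}, Kt={kt:.2f}, Kd={kd:.2f}")
```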

Keywords: random waves, regular waves, relative water depth, relative wave height, single breakwater, twin breakwater, wave steepness

Procedia PDF Downloads 278
23267 Estimation of Relative Permeabilities and Capillary Pressures in Shale Using Simulation Method

Authors: F. C. Amadi, G. C. Enyi, G. Nasr

Abstract:

Relative permeabilities are practical factors used to correct the single-phase Darcy law for application to multiphase flow. For effective characterization of large-scale multiphase flow in hydrocarbon recovery, relative permeabilities and capillary pressures are used. These parameters are acquired via special core flooding experiments, and the special core analysis (SCAL) module of reservoir simulation is applied by engineers for their evaluation. However, core flooding experiments on shale core samples are expensive and time-consuming before the various flow assumptions, for instance Darcy's law, are satisfied. This makes core flooding simulation attractive, since relative permeabilities and capillary pressures of multiphase flow can be analyzed efficiently, effectively, and at a relatively fast pace. This paper presents a Sendra software simulation of core flooding to obtain relative permeabilities and capillary pressures using different correlations. The approach used in this study comprised three steps. First, basic petrophysical parameters of the Marcellus shale sample, such as porosity, were determined using laboratory techniques. Second, core flooding was simulated for a particular injection scenario using different correlations. Third, the best-fit correlations for the estimation of relative permeability and capillary pressure were obtained. This approach saves cost and time and is very reliable for computing relative permeabilities and capillary pressures in steady or unsteady state and in drainage or imbibition processes in the oil and gas industry, compared to other methods.
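
Correlation-based matching of this kind typically fits parametric relative permeability curves to flooding data. A minimal sketch of one common family follows (Corey-type curves, with illustrative endpoints and exponents; tools like Sendra offer several such correlations):

```python
import numpy as np

def corey_krw(sw, swi, sor, krw_max, nw):
    """Corey-type water relative permeability from normalized water saturation."""
    swn = np.clip((sw - swi) / (1 - swi - sor), 0, 1)
    return krw_max * swn ** nw

def corey_kro(sw, swi, sor, kro_max, no):
    """Corey-type oil relative permeability."""
    swn = np.clip((sw - swi) / (1 - swi - sor), 0, 1)
    return kro_max * (1 - swn) ** no

# Illustrative endpoints and exponents -- a fitting step would tune these.
sw = np.linspace(0.2, 0.8, 7)
print(corey_krw(sw, swi=0.2, sor=0.2, krw_max=0.3, nw=3))
print(corey_kro(sw, swi=0.2, sor=0.2, kro_max=0.8, no=2))
```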

Keywords: relative permeability, porosity, 1-D black oil simulator, capillary pressures

Procedia PDF Downloads 412
23266 The Universal Theory: Role of Imaginary Pressure on Different Relative Motions

Authors: Sahib Dino Naseerani

Abstract:

This text discusses the concept of imaginary pressure and its role in different relative motions. It explores how imaginary pressure, the combined effect of external atmospheric pressure and real pressure, affects various substances and their physical properties. The study aims to understand the impact of imaginary pressure and its potential applications in different contexts, such as spaceflight. The main objective is to investigate the role of imaginary pressure in different relative motions; specifically, to examine how imaginary pressure affects the contraction and mass variation of a body in motion at the speed of light. The study seeks to provide insights into the behavior and consequences of imaginary pressure in various scenarios. The data were collected from three research papers. This research contributes to a better understanding of the theoretical implications of imaginary pressure, elucidating how imaginary pressure is responsible for the contraction and mass variation of a body in motion, particularly at the speed of light. The findings shed light on the behavior of substances under the influence of imaginary pressure, providing insights for future scientific studies. The study addresses the question of how imaginary pressure influences various relative motions and their associated physical properties. By examining different substances in liquid and solid form, the research explores the consequences of imaginary pressure on their volume, length, and mass.

Keywords: imaginary pressure, contraction, variation, relative motion

Procedia PDF Downloads 67
23265 Determination of Gold in Microelectronics Waste Pieces

Authors: S. I. Usenko, V. N. Golubeva, I. A. Konopkina, I. V. Astakhova, O. V. Vakhnina, A. A. Korableva, A. A. Kalinina, K. B. Zhogova

Abstract:

Gold can be determined in natural objects and manufactured articles of different origins. The current status of research and the problems of determining high gold levels in alloys and manufactured articles are described in detail in the literature. No less important is the determination of this metal in minerals, process products, and waste pieces. The latter, as objects of chemical analysis for gold content, are the hardest to study, for two reasons: the high accuracy requirements on analysis results and the differences in chemical and phase composition. As a rule, such objects are characterized by a compound, variable, and very often unknown matrix composition, which leads to unpredictable and uncontrolled effects on the accuracy and other analytical characteristics of the analysis technique. In this paper, methods for the determination of gold are described, using flame atomic absorption spectrophotometry and the gravimetric analysis technique. The techniques are aimed at determining gold in a gold etching solution (KJ+J₂), in the technological mixture formed after cleaning stainless steel members of a vacuum-deposition installation with concentrated nitric and hydrochloric acids, and in gold-containing powder resulting from the reprocessing of liquid wastes. Optimal conditions for sample preparation and analysis of liquid and solid waste specimens of compound and variable matrix composition were chosen. The boundaries of the relative resultant error were determined for the methods within the range of gold mass concentrations from 0.1 to 30 g/dm³ in the liquid waste specimens and mass fractions from 3 to 80% in the solid waste specimens.

Keywords: microelectronics waste pieces, gold, sample preparation, atomic-absorption spectrophotometry, gravimetric analysis technique

Procedia PDF Downloads 173
23264 The Impact of Anxiety on the Access to Phonological Representations in Beginning Readers and Writers

Authors: Regis Pochon, Nicolas Stefaniak, Veronique Baltazart, Pamela Gobin

Abstract:

Anxiety is known to have an impact on working memory. In reasoning or memory tasks, individuals with anxiety tend to show longer response times and poorer performance, and there is a memory bias for negative information in anxiety. Given the crucial role of working memory in lexical learning, anxious students may encounter more difficulties in learning to read and spell. Anxiety could even affect earlier learning, namely the activation of phonological representations, which is decisive for learning to read and write. The aim of this study is to compare the access to phonological representations of beginning readers and writers according to their level of anxiety, using an auditory lexical decision task. Eighty students aged 6 to 9 years completed the French version of the Revised Children's Manifest Anxiety Scale and were then divided into four anxiety groups according to their total score (Low, Median-Low, Median-High, and High). Two sets of eighty-one stimuli (words and non-words) were presented auditorily to these students by means of a laptop computer. The stimulus words were selected according to their emotional valence (positive, negative, neutral). Students had to decide as quickly and accurately as possible whether the presented stimulus was a real word or not (lexical decision). Response times and accuracy were recorded automatically on each trial. It was anticipated that there would be a) longer response times for the Median-High and High anxiety groups in comparison with the other two groups, b) faster response times for negative-valence words in comparison with positive- and neutral-valence words only for the Median-High and High anxiety groups, c) lower response accuracy for the Median-High and High anxiety groups in comparison with the other two groups, and d) better response accuracy for negative-valence words in comparison with positive- and neutral-valence words only for the Median-High and High anxiety groups. Concerning response times, our results showed no difference between the four groups; furthermore, within each group, the average response times were very similar regardless of emotional valence. Group differences appeared, however, in the error rates: the Median-High and High anxiety groups made significantly more errors in lexical decision than the Median-Low and Low groups. Better response accuracy was not found for negative-valence words in comparison with positive- and neutral-valence words in the Median-High and High anxiety groups. Thus, these results showed lower response accuracy for the above-median anxiety groups than for the below-median groups, but without any specificity for negative-valence words. This study suggests that anxiety can negatively impact lexical processing in young students. Although lexical processing speed seems preserved, the accuracy of this processing may be altered in students with a moderate or high level of anxiety. This finding has important implications for the prevention of reading and spelling difficulties. Indeed, if anxiety affects access to phonological representations during these learnings, anxious students could be disturbed when they have to match phonological representations with new orthographic representations, because of less efficient lexical representations. This study should be continued in order to specify the impact of anxiety on basic school learning.

Keywords: anxiety, emotional valence, childhood, lexical access

Procedia PDF Downloads 258
23263 Characterization of 3D-MRP for Analyzing the Brain Balancing Index (BBI) Pattern

Authors: N. Fuad, M. N. Taib, R. Jailani, M. E. Marwan

Abstract:

This paper discusses power spectral density (PSD) characteristics extracted from three-dimensional (3D) electroencephalogram (EEG) models. The EEG signal recording was conducted on 150 healthy subjects. Development of the 3D EEG models involves pre-processing of the raw EEG signals and construction of spectrogram images, after which the maximum PSD values were extracted as features from the model. These features are analysed using the mean relative power (MRP) and different mean relative power (DMRP) techniques to observe the pattern among different brain balancing indexes. The results showed that, by implementing these techniques, the pattern of brain balancing indexes can be clearly observed. Some patterns are indicated between index 1 and index 5 for the left frontal (LF) and right frontal (RF) regions.
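
Mean relative power is the band power expressed as a fraction of the total power. A minimal sketch from a Welch PSD follows; the sampling rate, band edges, and placeholder signal are assumptions, not this study's recording setup.

```python
import numpy as np
from scipy.signal import welch

def mean_relative_power(eeg, fs, band, total=(0.5, 30.0)):
    """MRP: power in a band divided by the total power, from the Welch PSD."""
    f, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    in_band = (f >= band[0]) & (f < band[1])
    in_total = (f >= total[0]) & (f < total[1])
    return psd[in_band].sum() / psd[in_total].sum()

fs = 256
eeg = np.random.randn(30 * fs)                         # placeholder single-channel EEG
print(mean_relative_power(eeg, fs, band=(8.0, 13.0)))  # alpha-band MRP
```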

Keywords: power spectral density, 3D EEG model, brain balancing, mean relative power, different mean relative power

Procedia PDF Downloads 448
23262 Study of Natural Patterns on Digital Image Correlation Using Simulation Method

Authors: Gang Li, Ghulam Mubashar Hassan, Arcady Dyskin, Cara MacNish

Abstract:

Digital image correlation (DIC) is a contactless full-field displacement and strain reconstruction technique commonly used in the field of experimental mechanics. Compared with physical measuring devices such as strain gauges, which provide only very restricted coverage and are expensive to deploy widely, the DIC technique provides results with full-field coverage and relatively high accuracy using an inexpensive and simple experimental setup. It is very important to study the effect of natural patterns on the DIC technique, because preparing artificial patterns is a time-consuming and laborious process. The objective of this research is to study the effect of using images with natural patterns on the performance of DIC. A systematic simulation method is used to build the simulated deformed images used in DIC. One DIC parameter, the subset size, can affect the processing and accuracy of DIC and can even cause DIC to fail. Regarding the image parameters (correlation coefficient), higher similarity between two subsets can cause the DIC process to fail and make the result less accurate. Images of good and bad quality for DIC methods are presented; more importantly, this provides a systematic way to evaluate the quality of naturally patterned images before the measurement devices are installed.
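
DIC match quality between a reference subset and a candidate deformed subset is commonly scored with the zero-normalized cross-correlation; a short sketch, where the 21x21 subset size and random "patterns" are placeholders:

```python
import numpy as np

def zncc(subset_ref, subset_def):
    """Zero-normalized cross-correlation between a reference and deformed subset;
    values near 1 indicate a confident match, values near 0 a likely failure."""
    a = subset_ref - subset_ref.mean()
    b = subset_def - subset_def.mean()
    return float((a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum()))

rng = np.random.default_rng(1)
ref = rng.random((21, 21))                             # 21x21 subset of the pattern
print(zncc(ref, ref + 0.01 * rng.random((21, 21))))    # near-identical: ~1.0
print(zncc(ref, rng.random((21, 21))))                 # unrelated: ~0.0
```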

Keywords: Digital Image Correlation (DIC), deformation simulation, natural pattern, subset size

Procedia PDF Downloads 392
23261 Oil Displacement by Water in Hauterivian Sandstone Reservoir of Kashkari Oil Field

Authors: A. J. Nazari, S. Honma

Abstract:

This paper evaluates oil displacement by water in the Hauterivian sandstone reservoir of the Kashkari oil field in the north of Afghanistan. Core samples from this oil field were taken from well No. 21, and the relative permeability and fractional flow are analyzed. Steady-state flow laboratory experiments are performed to empirically obtain the fractional flow curves and relative permeabilities at different water saturation ratios. The relative permeability represents the simultaneous flow behavior in the reservoir, while the fractional flow approach describes the individual phases as fractions of the total flow. The fractional flow curve characterizes oil displacement by water, and from the tangent to the fractional flow curve, the front saturation and the average saturation behind the water front can be found. Relative permeability and fractional flow curves are therefore suitable for describing the displacement of oil by water in a petroleum reservoir. The effects of irreducible water saturation and residual oil saturation on the displaceable amount of oil are investigated through Buckley-Leverett analysis.
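
The fractional flow function and the tangent (Welge) construction mentioned above can be sketched as follows, using Corey-type curves with illustrative endpoints, exponents, and viscosities rather than the Kashkari core data:

```python
import numpy as np

def fractional_flow(sw, swi=0.2, sor=0.25, krw_max=0.3, kro_max=0.8,
                    nw=3.0, no=2.0, mu_w=1.0, mu_o=5.0):
    """Water fractional flow fw = 1 / (1 + (kro/krw) * (mu_w/mu_o)),
    with Corey-type relative permeabilities (illustrative parameters)."""
    swn = np.clip((sw - swi) / (1 - swi - sor), 1e-9, 1)
    krw = krw_max * swn ** nw
    kro = kro_max * (1 - swn) ** no
    return 1.0 / (1.0 + (kro / krw) * (mu_w / mu_o))

# Welge tangent from (Swi, 0) locates the water-front saturation:
# the tangent point maximizes the secant slope fw / (Sw - Swi).
sw = np.linspace(0.21, 0.75, 400)
fw = fractional_flow(sw)
front = sw[np.argmax(fw / (sw - 0.2))]
print(f"front saturation ~ {front:.2f}")
```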

Keywords: fractional flow, oil displacement, relative permeability, simultaneous flow

Procedia PDF Downloads 354
23260 Upon One Smoothing Problem in Project Management

Authors: Dimitri Golenko-Ginzburg

Abstract:

A CPM network project with deterministic activity durations, in which activities require homogeneous resources with fixed capacities, is considered. The problem is to determine the optimal schedule of starting times for all network activities within their maximal allowable limits (in order not to exceed the network's critical time) so as to minimize the maximum resources required by the project at any point in time. In the case when a non-critical activity may start only at discrete moments separated by a pregiven time span, the problem becomes NP-complete, and an optimal solution may be obtained via a look-over algorithm. For the case when a look-over requires too much computational time, an approximate algorithm is suggested, and its performance ratio, i.e., the relative accuracy error, is determined. Experimentation has been undertaken to verify the suggested algorithm.
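
For a small instance, the look-over amounts to enumerating every admissible combination of discrete start times within each activity's float and keeping the combination with the lowest resource peak; a toy sketch, where the three-activity network and its numbers are invented for illustration:

```python
from itertools import product

# Toy instance: (duration, resource_rate, earliest_start, latest_start) per activity.
acts = {"A": (3, 2, 0, 0),      # critical: no float
        "B": (2, 3, 0, 4),      # non-critical: may start at t = 0..4
        "C": (4, 1, 2, 5)}
horizon = 10

def peak_usage(starts):
    """Maximum simultaneous resource requirement for a given start schedule."""
    usage = [0] * horizon
    for name, t0 in starts.items():
        dur, rate = acts[name][0], acts[name][1]
        for t in range(t0, t0 + dur):
            usage[t] += rate
    return max(usage)

# Look-over: enumerate every admissible combination of discrete start times.
choices = {name: range(a[2], a[3] + 1) for name, a in acts.items()}
best = min((dict(zip(choices, combo)) for combo in product(*choices.values())),
           key=peak_usage)
print(best, "peak =", peak_usage(best))
```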

Keywords: resource smoothing problem, CPM network, lookover algorithm, lexicographical order, approximate algorithm, accuracy estimate

Procedia PDF Downloads 274
23259 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutron Sources

Authors: Mustafa Alhamdi

Abstract:

An industrial application for classifying gamma-ray and neutron events is investigated in this study using deep machine learning. Identification using convolutional and recursive neural networks has shown significant improvements in prediction accuracy in a variety of applications. The ability to identify the isotope type and activity from spectral information depends on the feature extraction method, followed by classification. The features extracted from the spectrum profiles seek patterns and relationships that represent the actual spectrum energy in a low-dimensional space. Increasing the level of separation between classes in feature space improves the achievable classification accuracy. Neural networks extract features through a variety of nonlinear transformations and mathematical optimization, while principal component analysis depends on linear transformations to extract features and subsequently improve classification accuracy. In this paper, the isotope spectrum information has been preprocessed by finding the frequency components relative to time and using them as a training dataset. The Fourier transform implementation used to extract the frequency components has been optimized with a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal have been simulated using Geant4, and the readout electronic noise has been simulated by optimizing the mean and variance of a normal distribution. Ensemble learning, combining the votes of many models, managed to improve the classification accuracy of the neural networks. Discriminating gamma and neutron events in a single prediction approach has shown high accuracy using deep learning. The paper's findings show that classification accuracy can be improved by applying the spectrogram preprocessing stage to the gamma and neutron spectra of different isotopes. Tuning the deep machine learning models by hyperparameter optimization enhanced the separation in the latent space and made it possible to extend the number of detected isotopes in the training database. Ensemble learning contributed significantly to improving the final prediction.
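
The spectrogram preprocessing step, a windowed Fourier transform turning each trace into time-frequency features, can be sketched as below; the sampling rate, window type, and segment lengths are assumptions, and the random trace stands in for a simulated detector pulse.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1.0e6                                   # assumed detector sampling rate, Hz
pulse = np.random.randn(4096)                # placeholder digitized pulse trace

# Windowed STFT: the window choice trades frequency resolution against leakage.
f, t, sxx = spectrogram(pulse, fs=fs, window="hann", nperseg=256, noverlap=128)

# Normalize and flatten into a fixed-length feature vector for the classifier.
features = (sxx / sxx.max()).flatten()
print(f.shape, t.shape, features.shape)
```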

Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification

Procedia PDF Downloads 121
23258 A Model for Diagnosis and Prediction of Coronavirus Using Neural Network

Authors: Sajjad Baghernezhad

Abstract:

Metaheuristic and hybrid algorithms are well suited to modeling medical problems. In this study, a neural network was used to predict COVID-19 among high-risk and low-risk patients. The target population consisted of 550 high-risk and low-risk patients from the Kerman University of Medical Sciences medical center. The memetic algorithm, a combination of a genetic algorithm and a local search algorithm, was used to update the weights of the neural network and improve its accuracy. The initial study showed that the accuracy of the neural network was 88%; after updating the weights, the memetic algorithm increased it to 93%. For the proposed model, the sensitivity, specificity, positive predictive value, accuracy, and area under the curve were 97.4, 92.3, 95.8, 96.2, and 0.918, respectively; for the genetic algorithm model, 87.05, 92.07, 89.45, 97.30, and 0.967; and for the logistic regression model, 87.40, 95.20, 93.79, 0.87, and 0.916. Based on the findings of this study, neural network models have a lower error rate in diagnosing patients from individual variables and vital signs than the regression model. The findings can help planners and health care providers in designing programs for the early diagnosis of COVID-19.

Keywords: COVID-19, decision support technique, neural network, genetic algorithm, memetic algorithm

Procedia PDF Downloads 49
23257 A Criterion to Minimize FE Mesh-Dependency in Concrete Plates Subjected to Impact Loading

Authors: Hyo-Gyung Kwak, Han Gul Gang

Abstract:

In the context of an increasing need for reliability and safety in concrete structures under blast and impact loading conditions, the behavior of concrete under high strain-rate conditions has been an important issue. Since concrete subjected to impact loading at high strain rates shows quite different material behavior from that in the static state, several material models have been proposed and used to describe this high strain-rate behavior under blast and impact loading. In the modeling process, mesh dependency of the finite element (FE) model is the key problem, because simulation results under high strain-rate conditions are quite sensitive to the FE mesh size: the accuracy of the simulation may depend heavily on the mesh size used. This paper introduces an improved criterion which can minimize the mesh dependency of simulation results on the basis of the fracture energy concept, and the HJC (Holmquist Johnson Cook), CSC (Continuous Surface Cap), and K&C (Karagozian & Case) models are examined to trace their relative sensitivity to the FE mesh size used. In line with the purpose of the penetration test of a concrete plate under a projectile (bullet), the residual velocities of the projectile after penetration are compared. The correlation studies between the analytical results and the associated parametric studies show that the variation of residual velocity with FE mesh size is greatly reduced by applying a unique failure strain value determined according to the proposed criterion.
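
A common way to make a failure strain consistent with a fixed fracture energy is crack-band-style scaling with the element size. The sketch below illustrates that idea with generic concrete values; it is our illustration of the fracture energy concept, not necessarily the paper's exact criterion.

```python
def mesh_adjusted_failure_strain(g_f, f_t, h, e_mod):
    """Crack-band-style regularization: choose the element failure strain so that
    the energy dissipated per unit crack area, roughly f_t * (eps_f - eps_el) * h,
    equals the fracture energy G_f regardless of the element size h."""
    eps_el = f_t / e_mod                 # elastic strain at the tensile strength
    return eps_el + g_f / (f_t * h)

# Illustrative concrete values: G_f in N/m, f_t in Pa, h (element size) in m.
for h in (0.005, 0.01, 0.02):
    print(h, mesh_adjusted_failure_strain(g_f=100.0, f_t=3.0e6, h=h, e_mod=30e9))
```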

Keywords: high strain rate concrete, penetration simulation, failure strain, mesh-dependency, fracture energy

Procedia PDF Downloads 497
23256 Effect of Relative Humidity on Corrosion Behavior of Sn-0.7Cu Solder under Polyvinyl Chloride Fire Smoke Atmosphere

Authors: Qian Li, Shouxiang Lu

Abstract:

With the rapid increase in electric power use, wire and cable fires occur more and more frequently. Fire smoke has a corrosive effect on solders, which seriously affects the function of electronic equipment. In this research, the effect of environmental relative humidity on the corrosion behavior of Sn-0.7Cu solder was investigated under a 140 g·m⁻³ polyvinyl chloride (PVC) fire smoke atmosphere. The mass loss of the Sn-0.7Cu solder increased with the relative humidity. Furthermore, the microstructures and corrosion mechanism were analyzed using SEM, EDS, XRD, and XPS. The results show that Sn₂₁Cl₁₆(OH)₁₄O₆ is the main corrosion product and that the corrosion process is an electrochemical reaction. The present work could provide guidance for risk assessment in the rescue of electronic equipment after a fire.

Keywords: corrosion, fire smoke, relative humidity, Sn-0.7Cu solder

Procedia PDF Downloads 332
23255 Modeling of the Attitude Control Reaction Wheels of a Spacecraft in Software in the Loop Test Bed

Authors: Amr AbdelAzim Ali, G. A. Elsheikh, Moutaz M. Hegazy

Abstract:

Reaction wheels (RWs) are generally used as the main actuators in the attitude control system (ACS) of a spacecraft (SC) for fast orientation and high pointing accuracy. In order to achieve the required accuracy of the RW model, the main characteristics of the RWs that necessitate analysis during the ACS design phase, namely the technical features, the operating sequence, and the RW control logic, are included in a functional (behavioral) model. A mathematical model is developed that includes the various error sources. The errors in control torque include relative and absolute errors and the error due to time delay, while the errors in angular velocity are due to differences between average and real speed, resolution error, looseness in the installation of the angular sensor, and synchronization errors. The friction torque presented in the model includes the different features of friction phenomena: steady-velocity friction, static friction with break-away torque, and frictional lag. The model response is compared with the experimental torque and frequency-response characteristics of tested RWs. Based on the created RW model, some criteria for the optimization-based control torque allocation problem can be recommended, such as avoiding zero-speed crossings, biasing the angular velocity, or preventing wheels from running at the same angular velocity.
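
The friction phenomena listed above are often captured with a static-plus-Coulomb-plus-viscous model with a Stribeck transition near zero speed; a minimal sketch with illustrative parameter values, not the tested wheels' data:

```python
import math

def friction_torque(omega, t_coulomb=2e-3, t_breakaway=4e-3,
                    omega_s=5.0, c_viscous=1e-6):
    """Coulomb + viscous friction with a Stribeck transition: near zero speed
    the opposing torque rises toward the break-away level."""
    if omega == 0.0:
        return t_breakaway                  # limit torque opposing any applied torque
    stribeck = (t_breakaway - t_coulomb) * math.exp(-((omega / omega_s) ** 2))
    return math.copysign(t_coulomb + stribeck, omega) + c_viscous * omega

for w in (0.0, 1.0, 50.0, 300.0):           # rad/s, a typical RW speed range
    print(w, friction_torque(w))
```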

Keywords: friction torque, reaction wheels modeling, software in the loop, spacecraft attitude control

Procedia PDF Downloads 231
23254 High-Accuracy Satellite Image Analysis and Rapid DSM Extraction for Urban Environment Evaluations (Tripoli-Libya)

Authors: Abdunaser Abduelmula, Maria Luisa M. Bastos, José A. Gonçalves

Abstract:

The modeling of the earth's surface and the evaluation of urban environments with 3D models is an important research topic. New stereo capabilities of high-resolution optical satellite images, such as the tri-stereo mode of Pleiades, combined with new image matching algorithms, are now available and can be applied in urban area analysis. In addition, photogrammetry software packages have gained new, more efficient matching algorithms, such as SGM, as well as improved filters to deal with shadow areas, and can achieve denser and more precise results. This paper describes a comparison between 3D data extracted from tri-stereo and dual-stereo satellite images, combined with pixel-based matching and the Wallis filter. The aim was to improve the accuracy of 3D models, especially in urban areas, in order to assess whether satellite images are appropriate for a rapid evaluation of urban environments. The results showed that the 3D models achieved with Pleiades tri-stereo outperformed, in terms of both accuracy and detail, the result obtained from a GeoEye pair. The assessment was made against reference digital surface models derived from high-resolution aerial photography. This suggests that tri-stereo images can be successfully used for the proposed urban change analyses.

Keywords: 3D models, environment, matching, pleiades

Procedia PDF Downloads 295
23253 Frequency Recognition Models for Steady State Visual Evoked Potential Based Brain Computer Interfaces (BCIs)

Authors: Zeki Oralhan, Mahmut Tokmakçı

Abstract:

SSVEP-based brain computer interface (BCI) systems have been preferred because of their high information transfer rate (ITR) and practical use. ITR is the parameter characterizing overall BCI performance, and one specification of a high-ITR BCI system is high accuracy. In this study, we investigated recognizing SSVEPs in a shorter time and with a lower error rate. In the experiment, there were 8 flickers on a liquid crystal display (LCD). Participants gazed at the flicker with a 12 Hz frequency and 50% duty cycle on the LCD for 10 seconds. During the experiment, EEG signals were acquired via an EEG device. The EEG data were filtered in the preprocessing stage; after that, the Canonical Correlation Analysis (CCA), Multiset CCA (MsetCCA), phase constrained CCA (PCCA), and Multiway CCA (MwayCCA) methods were applied to the data. The highest average accuracy was reached when MsetCCA was applied.
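
Standard CCA-based frequency recognition correlates the multi-channel EEG with sine/cosine references at each candidate flicker frequency and picks the frequency with the largest canonical correlation; a minimal sketch on synthetic data, where the channel count, sampling rate, and harmonic count are assumptions:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_score(eeg, freq, fs, n_harmonics=2):
    """Canonical correlation between multi-channel EEG (samples x channels)
    and sine/cosine references at the stimulus frequency and its harmonics."""
    t = np.arange(eeg.shape[0]) / fs
    ref = np.column_stack([f(2 * np.pi * k * freq * t)
                           for k in range(1, n_harmonics + 1)
                           for f in (np.sin, np.cos)])
    u, v = CCA(n_components=1).fit_transform(eeg, ref)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

fs, secs = 250, 2
t = np.arange(fs * secs) / fs
rng = np.random.default_rng(0)
# Synthetic 8-channel EEG with a weak 12 Hz SSVEP buried in noise.
eeg = 0.3 * np.sin(2 * np.pi * 12 * t)[:, None] + 0.5 * rng.standard_normal((fs * secs, 8))
target = max((12.0, 10.0, 8.6), key=lambda f: cca_score(eeg, f, fs))
print("detected flicker frequency:", target)
```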

Keywords: brain computer interface, canonical correlation analysis, human computer interaction, SSVEP

Procedia PDF Downloads 243
23252 Effect of Addition Rate of Expansive Additive on Autogenous Shrinkage and Delayed Expansion of Ultra-High Strength Mortar

Authors: Yulu Zhang, Atushi Teramoto, Taka-Aki Ohkubo

Abstract:

In this study, the effect of an expansive additive on the autogenous shrinkage and delayed expansion of ultra-high strength mortar was explored. The specimens made for the study were composed of ultra-high strength mortar mixed with an ettringite-lime composite type expansive additive, and two series of experiments were conducted with them. The experimental results confirmed that the autogenous shrinkage of the specimens was effectively decreased by increasing the proportion of the expansive additive. On the other hand, for the specimens that contained 7% expansive additive and were cured for seven days at a constant temperature of 20°C and then cured for a long time underwater, in a moist environment (relative humidity: 100%), or in dry air (relative humidity: 60%), excessively large expansion strains occurred. Specifically, typical turtle-shell-like swelling expansion cracks were confirmed in the specimens that underwent long-term curing in the underwater and moist environments. According to the results of the hydration analysis, the formation of the expansive substances calcium hydroxide and alumina-ferric oxide-trisulfate (AFt) contributes to the occurrence of delayed expansion.

Keywords: ultra-high strength mortar, expansive additive, autogenous shrinkage, delayed expansion

Procedia PDF Downloads 211