Search results for: axial error

140 Subjective Probability and the Intertemporal Dimension of Probability to Correct the Misrelation Between Risk and Return of a Financial Asset as Perceived by Investors. Extension of Prospect Theory to Better Describe Risk Aversion

Authors: Roberta Martino, Viviana Ventre

Abstract:

From a theoretical point of view, the relationship between the risk associated with an investment and the expected value is directly proportional, in the sense that the market allows a greater result to those who are willing to take a greater risk. However, empirical evidence shows that this relationship is distorted in the minds of investors and is perceived as exactly the opposite. To understand the discrepancy between the actual actions of the investor and the theoretical predictions, this paper analyzes the essential parameters used for the valuation of financial assets, with particular attention to two elements: probability and the passage of time. Although these may seem at first glance to be two distinct elements, they are closely related. In particular, the error in the theoretical description of the relationship between risk and return lies in the failure to consider the impatience that is generated in the decision maker when events that have not yet happened enter the decision-making context. In this context, probability loses its objective meaning and, in relation to the psychological aspects of the investor, can only be understood as the degree of confidence that the investor has in the occurrence or non-occurrence of an event. Moreover, the concept of objective probability considers neither the intertemporality that characterizes financial activities nor the limited cognitive capacity of the decision maker. Cognitive psychology has shown that the mind settles on a compromise between quality and effort when faced with very complex choices. To evaluate an event that has not yet happened, the decision maker must first imagine it happening. This projection into the future requires a cognitive effort and is what differentiates choices under conditions of risk from choices under conditions of uncertainty. Since the receipt of the outcome in choices under risk conditions is imminent, the mechanism of self-projection into the future is not needed to imagine the consequence of the choice, and decision makers dwell on the objective analysis of possibilities. Financial activities, on the other hand, develop over time, and objective probability is too static to capture the anticipatory emotions that the self-projection mechanism generates in the investor. Assuming that uncertainty is inherent in valuations of events that have not yet occurred, the focus must shift from risk management to uncertainty management. Only in this way can the intertemporal dimension of the decision-making environment and the haste generated by the financial market be properly accounted for. The work considers an extension of prospect theory with a temporal component, with the aim of describing the attitude towards risk with respect to the passage of time.
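
One illustrative way to write such a time-extended prospect value is sketched below; the specific functional forms (power value function, hyperbolic discounting) are common textbook choices assumed here for illustration, not the authors' calibrated model.

```latex
% Illustrative time-extended prospect value (assumed functional forms)
% V_t(X): subjective value of prospect X = {(x_i, p_i)} received after delay t
% w(p): probability weighting, v(x): value function, D(t): hyperbolic discount factor
\[
V_t(X) \;=\; D(t)\sum_i w(p_i)\, v(x_i), \qquad
D(t) \;=\; \frac{1}{1 + k\,t}, \qquad
v(x) \;=\;
\begin{cases}
x^{\alpha}, & x \ge 0\\[2pt]
-\lambda\,(-x)^{\beta}, & x < 0
\end{cases}
\]
```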

Keywords: impatience, risk aversion, subjective probability, uncertainty

Procedia PDF Downloads 107
139 A Comparison of Proxemics and Postural Head Movements during Pop Music versus Matched Music Videos

Authors: Harry J. Witchel, James Ackah, Carlos P. Santos, Nachiappan Chockalingam, Carina E. I. Westling

Abstract:

Introduction: Proxemics is the study of how people perceive and use space. It is commonly proposed that when people like or engage with a person/object, they will move slightly closer to it, often quite subtly and subconsciously. Music videos are known to add entertainment value to a pop song. Our hypothesis was that adding an appropriately matched video to a pop song would lead to a net approach of the head to the monitor screen compared to simply listening to an audio-only version of the song. Methods: We presented two musical stimuli, in a counterbalanced order, to 27 participants (ages 21.00 ± 2.89, 15 female) seated in front of a 47.5 x 27 cm monitor; all stimuli were based on music videos by the band OK Go: Here It Goes Again (HIGA, boredom ratings (0-100) = 15.00 ± 4.76, mean ± SEM (standard error of the mean)) and Do What You Want (DWYW, boredom ratings = 23.93 ± 5.98), which did not differ in boredom elicited (P = 0.21, rank-sum test). Each participant experienced each song only once, with one song (counterbalanced) presented as audio-only and the other as a music video. Movement was measured by video-tracking using Kinovea 0.8, based on recording from a lateral aspect; before beginning, each participant had a reflective motion tracking marker placed on the outer canthus of the left eye. Analysis of the Kinovea X-Y coordinate output in comma-separated values (CSV) format was performed in Matlab, as were non-parametric statistical tests. Results: We found that the audio-only stimuli (combined for both HIGA and DWYW, mean ± SEM, 35.71 ± 5.36) were significantly more boring than the music video versions (19.46 ± 3.83, P = 0.0066 Wilcoxon Signed Rank Test (WSRT), Cohen's d = 0.658, N = 28). We also found that participants' heads moved around twice as much during the audio-only versions (speed = 0.590 ± 0.095 mm/sec) compared to the video versions (0.301 ± 0.063 mm/sec, P = 0.00077, WSRT). However, the participants' mean head-to-screen distances were not detectably smaller (i.e. head closer to the screen) during the music videos (74.4 ± 1.8 cm) compared to the audio-only stimuli (73.9 ± 1.8 cm, P = 0.37, WSRT). If anything, during the audio-only condition, they were slightly closer. Interestingly, the ranges of the head-to-screen distances were smaller during the music video (8.6 ± 1.4 cm) compared to the audio-only (12.9 ± 1.7 cm, P = 0.0057, WSRT), the standard deviations were also smaller (P = 0.0027, WSRT), and their heads were held 7 mm higher (video 116.1 ± 0.8 vs. audio-only 116.8 ± 0.8 cm above floor, P = 0.049, WSRT). Discussion: As predicted, sitting and listening to experimenter-selected pop music was more boring than when the music was accompanied by a matched, professionally-made video. However, we did not find that the proxemics of the situation led to approaching the screen. Instead, adding video led to efforts to control the head to a more central and upright viewing position and to suppress head fidgeting.
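
A minimal Python re-expression of the paired, non-parametric comparison described above (the original analysis was performed in Matlab); the data arrays here are synthetic placeholders, not the study measurements.

```python
# Paired Wilcoxon signed-rank test on per-participant head speeds, plus a
# paired-samples Cohen's d. Synthetic data stand in for the Kinovea exports.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n = 27
# Hypothetical per-participant mean head speeds (mm/s) in each condition
speed_audio = rng.normal(0.59, 0.25, n).clip(min=0.05)
speed_video = rng.normal(0.30, 0.20, n).clip(min=0.05)

stat, p = wilcoxon(speed_audio, speed_video)      # paired test across conditions
diff = speed_audio - speed_video
cohens_d = diff.mean() / diff.std(ddof=1)         # paired-samples effect size
print(f"W = {stat:.1f}, p = {p:.4f}, d = {cohens_d:.2f}")
```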

Keywords: boredom, engagement, music videos, posture, proxemics

Procedia PDF Downloads 167
138 Structural Invertibility and Optimal Sensor Node Placement for Error and Input Reconstruction in Dynamic Systems

Authors: Maik Kschischo, Dominik Kahl, Philipp Wendland, Andreas Weber

Abstract:

Understanding and modelling of real-world complex dynamic systems in biology, engineering and other fields is often made difficult by incomplete knowledge about the interactions between system states and by unknown disturbances to the system. In fact, most real-world dynamic networks are open systems receiving unknown inputs from their environment. To understand a system and to estimate the state dynamics, these inputs need to be reconstructed from output measurements. Reconstructing the input of a dynamic system from its measured outputs is an ill-posed problem if only a limited number of states is directly measurable. A first requirement for solving this problem is the invertibility of the input-output map. In our work, we exploit the fact that invertibility of a dynamic system is a structural property, which depends only on the network topology. Therefore, it is possible to check for invertibility using a structural invertibility algorithm which counts the number of node-disjoint paths linking inputs and outputs. The algorithm is efficient even for large networks of up to a million nodes. To understand the structural features influencing the invertibility of a complex dynamic network, we analyze synthetic and real networks using the structural invertibility algorithm. We find that invertibility largely depends on the degree distribution and that dense random networks are easier to invert than sparse inhomogeneous networks. We show that real networks are often very difficult to invert unless the sensor nodes are carefully chosen. To overcome this problem, we present a sensor node placement algorithm to achieve invertibility with a minimum set of measured states. This greedy algorithm is very fast and is also guaranteed to find an optimal sensor node set if one exists. Our results provide a practical approach to experimental design for open dynamic systems. Since invertibility is a necessary condition for unknown input observers and data assimilation filters to work, it can be used as a preprocessing step to check whether these input reconstruction algorithms can be successful. If not, we can suggest additional measurements providing sufficient information for input reconstruction. Invertibility is also important for system design and model building. Dynamic models are always incomplete, and synthetic systems act in an environment where they receive inputs or even attack signals from their exterior. Being able to monitor these inputs is an important design requirement, which can be achieved by our algorithms for invertibility analysis and sensor node placement.
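
A minimal sketch of the structural criterion described above, expressed with NetworkX on a toy influence graph; the super-source/super-sink construction and the toy graph are illustrative assumptions, not the authors' implementation.

```python
# Structural invertibility check on a toy directed influence graph: count the
# node-disjoint paths from the unknown-input nodes to the sensor (output) nodes.
import networkx as nx

G = nx.DiGraph([("u1", "x1"), ("u2", "x2"), ("x1", "x3"),
                ("x2", "x3"), ("x3", "y1"), ("x2", "y2")])
inputs = ["u1", "u2"]          # nodes receiving unknown inputs
sensors = ["y1", "y2"]         # measured (output) nodes

# Super-source/super-sink construction so Menger's theorem applies to node sets
H = G.copy()
H.add_edges_from(("SRC", u) for u in inputs)
H.add_edges_from((y, "SNK") for y in sensors)

# Local node connectivity = maximum number of internally node-disjoint SRC-SNK paths
k = nx.node_connectivity(H, "SRC", "SNK")
print("structurally invertible:", k >= len(inputs))
```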

Keywords: data-driven dynamic systems, inversion of dynamic systems, observability, experimental design, sensor node placement

Procedia PDF Downloads 150
137 Exposure to Radon on Air in Tourist Caves in Bulgaria

Authors: Bistra Kunovska, Kremena Ivanova, Jana Djounova, Desislava Djunakova, Zdenka Stojanovska

Abstract:

The carcinogenic effects of radon as a radioactive noble gas have been studied and show a strong correlation between radon exposure and lung cancer occurrence, even in the case of low radon levels. The major part of the natural radiation dose in humans is received by inhaling radon and its progenies, which originate from the decay chain of U-238. Indoor radon poses a substantial threat to human health when build-up occurs in confined spaces such as homes, mines and caves; the risk increases with the duration of radon exposure and is proportional to both the radon concentration and the time of exposure. Tourist caves are a case of special environmental conditions that may be affected by high radon concentrations. Tourist caves are a recognized danger in terms of radon exposure for cave workers (guides, employees working in shops built above the cave entrances, etc.), but due to the sensitive nature of the cave environment, high concentrations cannot be easily removed. Forced ventilation of the air in the caves is considered unthinkable due to the possible harmful effects on the microclimate, flora and fauna. The risks to human health posed by exposure to elevated radon levels in caves are not well documented. Various studies around the world often report very high concentrations of radon in caves and exposure of employees, but without a follow-up assessment of the overall impact on human health. This study was developed in the implementation of a national project to assess the potential health effects caused by exposure to elevated levels of radon in buildings with public access under the National Science Fund of Bulgaria, in the framework of grant No КП-06-Н23/1/07.12.2018. The purpose of the work is to assess the radon levels in Bulgarian caves and the exposure of visitors and workers. The sample size was calculated for simple random selection from the 65 available tourist caves (sampling population), yielding 13 caves at a 95% confidence level and a confidence interval (margin of error) of approximately 25%. Radon concentration in air at specific locations in the caves was measured using CR-39 nuclear track-etch detectors placed by members of the research team. Despite the fact that all of the caves were formed in karst rocks, the radon levels differed considerably from each other (97–7575 Bq/m3). The influence of the orientation of the caves relative to the earth's surface (horizontal, inclined, vertical) on the radon concentration was assessed. Health hazards and the radon exposure risk caused by inhaling radon and its daughter products were evaluated for each surveyed cave. Reducing the time spent in the cave has been recommended in order to decrease the exposure of workers.
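
The 13-of-65 sample size quoted above can be reproduced with the standard finite-population correction; the sketch below is an illustrative calculation, assuming a conservative proportion p = 0.5 and a 95% z-value of 1.96.

```python
# Finite-population sample size for simple random selection (illustrative check
# of the figure quoted above; p = 0.5 and z = 1.96 are assumptions).
import math

N = 65        # sampling population (available tourist caves)
z = 1.96      # 95% confidence level
e = 0.25      # margin of error
p = 0.5       # conservative proportion

n0 = z**2 * p * (1 - p) / e**2          # infinite-population sample size
n = n0 / (1 + (n0 - 1) / N)             # finite-population correction
print(math.ceil(n))                     # -> 13
```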

Keywords: tourist caves, radon concentration, exposure, Bulgaria

Procedia PDF Downloads 189
136 Microplastics Accumulation and Abundance Standardization for Fluvial Sediments: Case Study for the Tena River

Authors: Mishell E. Cabrera, Bryan G. Valencia, Anderson I. Guamán

Abstract:

Human dependence on plastic products has led to global pollution, with plastic particles ranging in size from 0.001 to 5 millimeters, which are called microplastics (hereafter, MPs). The abundance of microplastics is used as an indicator of pollution. However, reports of pollution (abundance of MPs) in river sediments do not consider that the accumulation of sediments and MPs depends on the energy of the river. That is, the abundance of microplastics will be underestimated if the sediments analyzed come from places where the river flows with a lot of energy, and the abundance will be overestimated if the sediment analyzed comes from places where the river flows with less energy. This bias can generate an error greater than 300% of the MP value reported for the same river and should increase when comparisons are made between two rivers with different characteristics. Sections where the river flows with higher energy allow sands to be deposited and limit the accumulation of MPs, while sections where the same river has lower energy allow fine sediments such as clays and silts to be deposited and should facilitate the accumulation of MP particles. That is, the abundance of MPs in the same river is underrepresented when the sediment analyzed is sand, and the abundance of MPs is overrepresented if the sediment analyzed is silt or clay. The present investigation establishes a protocol aimed at incorporating sample granulometry to calibrate MP quantification and eliminate over- or under-representation bias (hereafter granulometric bias). A total of 30 samples were collected by taking five samples within each of six work zones. The slope of the sampling points was less than 8 degrees, classified as low-slope areas according to the Van Zuidam slope classification. During sampling, blanks were used to estimate possible MP contamination. Samples were dried at 60 degrees Celsius for three days. A flotation technique was employed to isolate the MPs using sodium metatungstate solution with a density of 2 g/cm³. For organic matter digestion, 30% hydrogen peroxide and Fenton's reagent were used at a ratio of 6:1 for 24 hours. The samples were stained with rose bengal at a concentration of 200 mg/L and were subsequently dried in an oven at 60 degrees Celsius for 1 hour, then identified and photographed under a stereomicroscope with the following conditions: eyepiece magnification 10x, zoom magnification (zoom knob) 4x, objective lens magnification 0.35x, for analysis in ImageJ. A total of 630 MP fibers were identified, mainly red, black, blue, and transparent, with an overall average length of 474.310 µm and an overall median length of 368.474 µm. The particle size of the 30 samples was calculated using 100 g per sample and sieves with the following apertures: 2 mm, 1 mm, 500 µm, 250 µm, 125 µm and 63 µm. This sieving allowed a visual evaluation and a more precise quantification of the microplastics present. At the same time, the weight of sediment in each fraction was calculated, revealing a clear relationship: as the proportion of sediment in the < 63 µm fraction increases, a significant increase in the number of MP particles is observed.
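
A minimal sketch of how a grain-size-aware standardization could be expressed, normalizing MP counts by the mass of each sieve fraction; the fraction labels follow the apertures above, while the counting scheme and the example numbers are assumptions, not the authors' protocol or data.

```python
# Illustrative grain-size standardization of microplastic (MP) abundance:
# report particles per kg within each sieve fraction instead of a single bulk
# value, so sand-dominated samples are not compared directly with muds.

fractions = {        # aperture label: (MP particles counted, sediment mass in g)
    ">2 mm":        (2,  18.0),
    "1-2 mm":       (3,  22.0),
    "500 µm-1 mm":  (5,  25.0),
    "250-500 µm":   (8,  20.0),
    "125-250 µm":   (6,  10.0),
    "63-125 µm":    (4,   3.5),
    "<63 µm":       (7,   1.5),
}

for label, (n_mp, mass_g) in fractions.items():
    per_kg = n_mp / (mass_g / 1000.0)        # particles per kg of that fraction
    print(f"{label:>12}: {per_kg:8.1f} MP/kg")

# A single bulk value hides the bias: fine fractions carry far more MP per unit mass
total_mp = sum(n for n, _ in fractions.values())
total_kg = sum(m for _, m in fractions.values()) / 1000.0
print(f"bulk abundance: {total_mp / total_kg:.1f} MP/kg")
```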

Keywords: microplastics, pollution, sediments, Tena River

Procedia PDF Downloads 73
135 Problems and Solutions in the Application of ICP-MS for Analysis of Trace Elements in Various Samples

Authors: Béla Kovács, Éva Bódi, Farzaneh Garousi, Szilvia Várallyay, Áron Soós, Xénia Vágó, Dávid Andrási

Abstract:

In agriculture, for the analysis of elements in different foods and food raw materials, as well as environmental samples, flame atomic absorption spectrometers (FAAS), graphite furnace atomic absorption spectrometers (GF-AAS), inductively coupled plasma optical emission spectrometers (ICP-OES) and inductively coupled plasma mass spectrometers (ICP-MS) are routinely applied. An inductively coupled plasma mass spectrometer (ICP-MS) is capable of analysing 70-80 elements in multielemental mode from a 1-5 cm3 sample volume, and the detection limits of elements are in the µg/kg-ng/kg (ppb-ppt) concentration range. All these analytical instruments suffer from different physical and chemical interfering effects when analysing the above types of samples. The smaller the concentration of an analyte and the larger the concentration of the matrix, the larger the interfering effects. Nowadays it is very important to analyse increasingly smaller concentrations of elements. Of the above analytical instruments, the inductively coupled plasma mass spectrometer is generally capable of analysing the smallest concentrations of elements. The applied ICP-MS instrument also has Collision Cell Technology (CCT). Using CCT mode, certain elements have better (smaller) detection limits by 1-3 orders of magnitude compared to a normal ICP-MS analytical method. The CCT mode has better detection limits mainly for the analysis of selenium, arsenic, germanium, vanadium and chromium. To elaborate an analytical method for trace elements with an inductively coupled plasma mass spectrometer, the most important interfering effects (problems) were evaluated: 1) physical interferences; 2) spectral interferences (elemental and molecular isobaric); 3) the effect of easily ionisable elements; 4) memory interferences. When analysing food and food raw materials as well as environmental samples, another (new) interfering effect emerged in ICP-MS, namely the effect of various matrices having different evaporation and nebulization effectiveness and different quantities of carbon content. In our research work, the effect of different water-soluble compounds and of various quantities of carbon content (as sample matrix) on the changes in intensity of the applied elements was examined. In this way we could find "opportunities" to decrease or eliminate the error of the analyses of the applied elements (Cr, Co, Ni, Cu, Zn, Ge, As, Se, Mo, Cd, Sn, Sb, Te, Hg, Pb, Bi). To analyse these elements in the above samples, the most appropriate inductively coupled plasma mass spectrometer is a quadrupole instrument applying a collision cell technique (CCT). The extent of the interfering effect of carbon content depends on the type of compound. The carbon content significantly affects the measured concentrations (intensities) of the above elements, which can be corrected using different internal standards.
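
A minimal sketch of the internal-standard correction mentioned at the end of the abstract; the choice of internal standard (In-115 is assumed here) and the example intensities are placeholders for illustration only, not the study's elements or data.

```python
# Illustrative internal-standard correction of ICP-MS intensities: analyte
# signals are rescaled by the recovery of a spiked internal standard to
# compensate for matrix-dependent (e.g. carbon-related) signal suppression.

IS_EXPECTED = 100_000.0       # internal-standard counts in a clean calibration standard

def correct(raw_counts: dict, is_measured: float) -> dict:
    """Rescale raw analyte counts by the internal-standard recovery factor."""
    factor = IS_EXPECTED / is_measured
    return {element: counts * factor for element, counts in raw_counts.items()}

sample_raw = {"Se-78": 1250.0, "As-75": 830.0, "Cd-111": 410.0}
sample_is = 72_000.0          # In-115 counts in the sample (suppressed by the matrix)

print(correct(sample_raw, sample_is))
# Concentrations would then follow from an external calibration curve.
```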

Keywords: elements, environmental and food samples, ICP-MS, interference effects

Procedia PDF Downloads 504
134 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis

Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara

Abstract:

Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database; two of them use the natural logarithm of CO₂ and H₂S concentrations (mmol/mol), respectively, and the other two use the natural logarithm of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was created with the gas-phase composition of fluids and bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship existing between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis together with the use of artificial neural networks (ANN) was successfully applied for correlating the gas-phase compositions and the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with the measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm used the well-known Levenberg-Marquardt model, which was used to calculate: (i) the number of neurons of the hidden layer; (ii) the training factor and the training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the statistical parameter, Root Mean Squared Error (RMSE), to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with the predictions inferred from sixteen well-known (previously developed) gas geothermometers, was statistically evaluated using an external database in order to avoid a bias problem. Statistical evaluation was performed through the analysis of the lowest RMSE values computed among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used for predicting subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich., Los Humeros, Pue., and Cerro Prieto, B.C.) as well as in a blind geothermal system (known as Acoculco, Puebla). The latest results of the gas geothermometers (inferred from gas-phase compositions of soil-gas bubble emissions) compare well with the temperatures measured in two wells of the blind geothermal system of Acoculco, Puebla (Mexico). Details of this new development are outlined in the present research work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
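
A minimal sketch of the ANN-regression step described above, mapping logged gas ratios to bottomhole temperature and scoring with RMSE; scikit-learn's MLPRegressor is used for brevity (the study used Levenberg-Marquardt training, which scikit-learn does not provide), and the data are synthetic placeholders, not the geothermal database.

```python
# Illustrative ANN gas geothermometer: ln(gas ratios) -> bottomhole temperature (BHT).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
n = 300
ln_co2_h2 = rng.uniform(2.0, 8.0, n)
ln_h2s_h2 = rng.uniform(0.5, 5.0, n)
X = np.column_stack([ln_co2_h2, ln_h2s_h2])
bht = 150 + 20 * ln_co2_h2 + 10 * ln_h2s_h2 + rng.normal(0, 5, n)  # synthetic BHT (deg C)

X_tr, X_te, y_tr, y_te = train_test_split(X, bht, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X_tr, y_tr)

rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5   # prediction performance
print(f"RMSE on held-out wells: {rmse:.1f} deg C")
```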

Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy

Procedia PDF Downloads 352
133 On-Ice Force-Velocity Modeling Technical Considerations

Authors: Dan Geneau, Mary Claire Geneau, Seth Lenetsky, Ming -Chang Tsai, Marc Klimstra

Abstract:

Introduction— Horizontal force-velocity profiling (HFVP) involves modeling an athlete's linear sprint kinematics to estimate valuable maximum force and velocity metrics. This approach to performance modeling has been used in field-based team sports and has recently been introduced to ice hockey as a forward skating performance assessment. While preliminary data have been collected on ice, distance constraints of the on-ice test restrict the ability of athletes to reach their maximal velocity, which limits the ability of the model to estimate athlete performance effectively. This is especially true of more elite athletes. This report explores whether athletes on ice are able to reach a velocity plateau similar to what has been seen in overground trials. Fourteen male Major Junior ice-hockey players (BW = 83.87 ± 7.30 kg, height = 188 ± 3.4 cm, age = 18 ± 1.2 years, n = 14) were recruited. For on-ice sprints, participants completed a standardized warm-up consisting of skating and dynamic stretching and a progression of three skating efforts from 50% to 95%. Following the warm-up, participants completed three on-ice 45 m sprints, with three minutes of rest between each trial. For overground sprints, participants completed a dynamic warm-up similar to that of the on-ice trials, followed by three 40 m overground sprint trials. For each trial (on-ice and overground), a radar gun (Stalker ATS II, Texas, USA) aimed at the participant's waist was used to collect instantaneous velocity. Sprint velocities were modelled with a custom Python (version 3.2) script using a mono-exponential function, similar to previous work. To determine whether on-ice trials achieved a maximum velocity (plateau), minimum acceleration values of the modeled data at the end of the sprint were compared (using a paired t-test) between on-ice and overground trials. Significant differences (P < 0.001) between overground and on-ice minimum accelerations were observed. On-ice trials consistently reported higher final acceleration values, indicating that a maximum maintained velocity (plateau) had not been reached. Based on these preliminary findings, it is suggested that reliable HFVP metrics cannot yet be collected from all ice-hockey populations using current methods. Elite male populations were not able to achieve a velocity plateau similar to what has been seen in overground trials, indicating the absence of a maximum velocity measure. With current velocity and acceleration modeling techniques, which depend on a velocity plateau, these results indicate the potential for error in on-ice HFVP measures. Therefore, these findings suggest that a greater on-ice sprint distance may be required, or other velocity modeling techniques where maximal velocity is not required for a complete profile.
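
A minimal sketch of the mono-exponential velocity fit and the end-of-sprint acceleration check described above; the synthetic velocity trace stands in for a radar export and the parameter values are assumptions, not study data.

```python
# Fit v(t) = vmax * (1 - exp(-t / tau)) to radar velocities, then evaluate the
# modeled acceleration at the final time point (the "minimum acceleration"
# compared between on-ice and overground conditions above).
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, vmax, tau):
    return vmax * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(1)
t = np.linspace(0.1, 6.0, 120)                              # seconds
v = mono_exp(t, 9.0, 1.3) + rng.normal(0, 0.15, t.size)     # synthetic skating velocities (m/s)

(vmax, tau), _ = curve_fit(mono_exp, t, v, p0=(8.0, 1.0))

a_end = (vmax / tau) * np.exp(-t[-1] / tau)                 # a(t) = (vmax/tau) * exp(-t/tau)
print(f"vmax = {vmax:.2f} m/s, tau = {tau:.2f} s, end acceleration = {a_end:.3f} m/s^2")
```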

Keywords: ice-hockey, sprint, skating, power

Procedia PDF Downloads 100
132 STML: Service Type-Checking Markup Language for Services of Web Components

Authors: Saqib Rasool, Adnan N. Mian

Abstract:

Web components are introduced as the latest standard of HTML5 for writing modular web interfaces, ensuring maintainability through the isolated scope of web components. Reusability can also be achieved by sharing plug-and-play web components that can be used as off-the-shelf components by other developers. A web component encapsulates all the required HTML, CSS and JavaScript code as a standalone package which must be imported for integrating the web component within an existing web interface. This is then followed by the integration of the web component with web services for dynamically populating its content. Since web components are reusable as off-the-shelf components, they must be equipped with some mechanism for ensuring their proper integration with web services. The consistency of a service behavior can be verified through type checking, one of the popular solutions for improving the quality of code in many programming languages. However, HTML does not provide type checking, as it is a markup language and not a programming language. The contribution of this work is to introduce a new extension of HTML called Service Type-checking Markup Language (STML) for adding support for type checking in HTML for JSON-based REST services. STML can be used for defining the expected data types of responses from JSON-based REST services, which will be used for populating the content within the HTML elements of a web component. Although JSON has five data types, viz. string, number, boolean, object and array, STML supports only string, number and boolean. This is because both object and array are treated as strings when populated in HTML elements. In order to define the data type of any HTML element, the developer just needs to add the custom STML attributes st-string, st-number or st-boolean for string, number and boolean, respectively. These STML annotations are added by the developer writing a web component and enable other developers to use automated type checking for ensuring the proper integration of their REST services with the same web component. Two utilities have been written for developers who are using STML-based web components. One of these utilities is used for automated type checking during the development phase. It uses the browser console to show an error description if the integrated web service does not return a response with the expected data type. The other utility is a Gulp-based command line utility for removing the STML attributes before going to production. This ensures the delivery of STML-free web pages in the production environment. Both of these utilities have been tested to perform type checking of REST services through STML-based web components, and the results have confirmed the feasibility of evaluating service behavior through HTML alone. Currently, STML is designed for automated type checking of integrated REST services, but it can be extended to introduce a complete service testing suite based on HTML only, which would transform STML from a Service Type-checking Markup Language to a Service Testing Markup Language.
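
A minimal sketch, in Python for brevity, of the kind of check the development-phase utility performs: validating a JSON response against declared st-* types. The st-string/st-number/st-boolean attribute names come from the abstract, while the element bindings, field names and checking logic are assumptions.

```python
# Illustrative type check of a JSON REST response against STML-style declarations.
import json

STML_TYPES = {"st-string": str, "st-number": (int, float), "st-boolean": bool}

# Hypothetical bindings: element id -> (JSON field it displays, declared STML attribute)
bindings = {
    "user-name":   ("name", "st-string"),
    "user-age":    ("age", "st-number"),
    "is-verified": ("verified", "st-boolean"),
}

def check_response(raw_json: str, bindings: dict) -> list:
    """Return a list of type errors, mimicking what the dev-time utility would log."""
    data = json.loads(raw_json)
    errors = []
    for element, (field, attr) in bindings.items():
        value = data.get(field)
        # bool is a subclass of int in Python, so guard against True passing as st-number
        ok = isinstance(value, STML_TYPES[attr]) and not (isinstance(value, bool) and attr != "st-boolean")
        if not ok:
            errors.append(f"<{element}> expected {attr}, got {type(value).__name__}: {value!r}")
    return errors

response = '{"name": "Ada", "age": "thirty", "verified": true}'
print(check_response(response, bindings))   # flags "age" as not st-number
```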

Keywords: REST, STML, type checking, web component

Procedia PDF Downloads 255
131 Experimental and Computational Fluid Dynamic Modeling of a Progressing Cavity Pump Handling Newtonian Fluids

Authors: Deisy Becerra, Edwar Perez, Nicolas Rios, Miguel Asuaje

Abstract:

The Progressing Cavity Pump (PCP) is a type of positive displacement pump that is gaining importance as capable artificial lift equipment in heavy oil fields. The most commonly used PCP is the single-lobe pump, which consists of a single external helical rotor turning eccentrically inside a double internal helical stator. This type of pump was analyzed experimentally and with a Computational Fluid Dynamics (CFD) approach, using the DCAB031 model installed in a closed-loop arrangement. Experimental measurements were taken to determine the pressure rise and flow rate with a flow control valve installed at the outlet of the pump. The flowrate handled was measured by a FLOMEC-OM025 oval gear flowmeter. For each flowrate considered, the pump's rotational speed and power input were controlled using an Invertek Optidrive E3 frequency driver. Once a steady-state operation was attained, pressure rise measurements were taken with a Sper Scientific wide range digital pressure meter. In this study, water and three Newtonian oils of different viscosities were tested at different rotational speeds. The CFD model implementation was developed on Star-CCM+ using an Overset Mesh that includes the relative motion between rotor and stator, which is one of the main contributions of the present work. The simulations are capable of providing detailed information about the pressure and velocity fields inside the device in laminar and unsteady regimes. The simulations show good agreement with the experimental data, with a Mean Squared Error (MSE) under 21%, and the Grid Convergence Index (GCI) was calculated for the validation of the mesh, obtaining a value of 2.5%. In this case, three different rotational speeds were evaluated (200, 300, 400 rpm), showing a directly proportional relationship between the rotational speed of the rotor and the calculated flow rate. The maximum production rates at the different speeds were 3.8 GPM, 4.3 GPM, and 6.1 GPM for water and 1.8 GPM, 2.5 GPM, and 3.8 GPM for the oils tested, respectively. Likewise, an inversely proportional relationship between the viscosity of the fluid and pump performance was observed, since the viscous oils showed the lowest pressure increase and the lowest volumetric flow pumped, with a degradation of around 30% of the pressure rise between performance curves. Finally, the Productivity Index (PI) remained approximately constant for the different speeds evaluated; however, between fluids there was a decrease due to viscosity.
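
A minimal sketch of a Grid Convergence Index calculation of the kind reported above, following the standard three-grid Richardson-extrapolation procedure; the solution values and refinement ratio are assumed example numbers, not the study's meshes.

```python
# Illustrative Grid Convergence Index (GCI) for a mesh-refinement study
# (standard Roache/Celik procedure with a constant refinement ratio).
import math

f1, f2, f3 = 10.20, 10.05, 9.70   # e.g. pressure rise from fine, medium, coarse grids
r = 2.0                            # constant grid refinement ratio
Fs = 1.25                          # safety factor for three-grid studies

p = math.log(abs((f3 - f2) / (f2 - f1))) / math.log(r)   # apparent order of accuracy
e21 = abs((f1 - f2) / f1)                                 # relative error, fine vs medium
gci_fine = Fs * e21 / (r**p - 1.0)

print(f"apparent order p = {p:.2f}, GCI_fine = {100 * gci_fine:.2f} %")
```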

Keywords: computational fluid dynamic, CFD, Newtonian fluids, overset mesh, PCP pressure rise

Procedia PDF Downloads 128
130 Factors Affecting Air Surface Temperature Variations in the Philippines

Authors: John Christian Lequiron, Gerry Bagtasa, Olivia Cabrera, Leoncio Amadore, Tolentino Moya

Abstract:

Changes in air surface temperature play an important role in the Philippines' economy, industry, health, and food production. While the increasing global mean temperature in recent decades has prompted a number of climate change and variability studies in the Philippines, most studies still focus on rainfall and tropical cyclones. This study aims to investigate the trend and variability of observed air surface temperature and determine its major influencing factor(s) in the Philippines. A non-parametric Mann-Kendall trend test was applied to the monthly mean temperature of 17 synoptic stations covering 56 years from 1960 to 2015, and a mean change of 0.58 °C, or a positive trend of 0.0105 °C/year (p < 0.05), was found. In addition, wavelet decomposition was used to determine the frequency of temperature variability, which showed 12-month, 30-80-month, and more-than-120-month cycles. This indicates strong annual variations, interannual variations that coincide with ENSO events, and interdecadal variations attributed to the PDO and CO2 concentrations. Air surface temperature was also correlated with the smoothed sunspot number and galactic cosmic rays; the results showed little to no effect. The influence of the ENSO teleconnection on temperature, wind pattern, cloud cover, and outgoing longwave radiation during different ENSO phases had significant effects on regional temperature variability. In particular, an anomalous anticyclonic (cyclonic) flow east of the Philippines during the peak and decay phase of El Niño (La Niña) events leads to the advection of a warm southeasterly (cold northeasterly) air mass over the country. Furthermore, an apparent increasing cloud cover trend is observed over the West Philippine Sea, including portions of the Philippines, and this is believed to lessen the effect of the increasing air surface temperature. However, relative humidity was also found to be increasing, especially in the central part of the country, which results in a high positive trend in the heat index, exacerbating human discomfort. Finally, an assessment of gridded temperature datasets was done to examine the viability of using three high-resolution datasets in future climate analysis and model calibration and verification. Several error statistics (i.e., Pearson correlation, bias, MAE, and RMSE) were used for this validation. Results show that the gridded temperature datasets generally follow the observed surface temperature change and anomalies. In addition, they are more representative of regional temperature rather than a substitute for station-observed air temperature.
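
A minimal sketch of a Mann-Kendall trend test with Sen's slope of the kind applied to the station series above; the monthly series here is synthetic (a small imposed trend plus noise), not station data, and tie corrections are omitted for brevity.

```python
# Mann-Kendall trend test (normal approximation) and Sen's slope on a monthly series.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
years = 56
t = np.arange(years * 12) / 12.0
temp = 27.0 + 0.0105 * t + rng.normal(0, 0.4, t.size)   # synthetic monthly means (deg C)

# Mann-Kendall S statistic and its large-sample variance (no tie correction)
n = temp.size
s = sum(np.sign(temp[j] - temp[i]) for i in range(n - 1) for j in range(i + 1, n))
var_s = n * (n - 1) * (2 * n + 5) / 18.0
z = (s - np.sign(s)) / np.sqrt(var_s)                   # continuity-corrected Z score
p_value = 2 * (1 - norm.cdf(abs(z)))

# Sen's slope: median of all pairwise slopes, expressed per year
slopes = [(temp[j] - temp[i]) / (t[j] - t[i]) for i in range(n - 1) for j in range(i + 1, n)]
print(f"Z = {z:.2f}, p = {p_value:.4f}, Sen's slope = {np.median(slopes):.4f} deg C/year")
```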

Keywords: air surface temperature, carbon dioxide, ENSO, galactic cosmic rays, smoothed sunspot number

Procedia PDF Downloads 324
129 The Value of Computerized Corpora in EFL Textbook Design: The Case of Modal Verbs

Authors: Lexi Li

Abstract:

This study aims to contribute to the field of how computer technology can be exploited to enhance EFL textbook design. Specifically, the study demonstrates how computerized native and learner corpora can be used to enhance modal verb treatment in EFL textbooks. The linguistic focus is on will, would, can, could, may, might, shall, should, and must. The native corpus is the spoken component of BNC2014 (hereafter BNCS2014). The spoken part was chosen because the pedagogical purpose of the textbooks is communication-oriented. Using the standard query option of CQPweb, 5% of each of the nine modals was sampled from BNCS2014. The learner corpus is the POS-tagged Ten-thousand English Compositions of Chinese Learners (TECCL). All the essays under the "secondary school" section were selected. A series of five secondary coursebooks comprises the textbook corpus. All the data in both the learner and the textbook corpora were retrieved through the concordance functions of WordSmith Tools (version 5.0). Data analysis was divided into two parts. The first part compared the patterns of modal verbs in the textbook corpus and BNCS2014 with respect to distributional features, semantic functions, and co-occurring constructions to examine whether the textbooks reflect the authentic use of English. Secondly, the learner corpus was compared with the textbook corpus in terms of use (distributional features, semantic functions, and co-occurring constructions) in order to examine the degree of influence of the textbook on learners' use of modal verbs. Moreover, the learner corpus was analyzed for misuse (syntactic errors, e.g., *she can sings) of the nine modal verbs to uncover potential difficulties that confront learners. The results indicate discrepancies between the textbook presentation of modal verbs and authentic modal use in natural discourse in terms of frequency distributions, semantic functions, and co-occurring structures. Furthermore, there are consistent patterns of use between the learner corpus and the textbook corpus with respect to the three above-mentioned aspects, except for could, will and must, partially confirming the correlation between frequency effects and L2 grammar acquisition. Further analysis reveals that the exceptions are caused by both positive and negative L1 transfer, indicating that frequency effects can be overridden by L1 interference. In addition, error analysis revealed that could, would, should and must are the most difficult for Chinese learners due to both inter-linguistic and intra-linguistic interference. The discrepancies between the textbook corpus and the native corpus point to a need to adjust the presentation of modal verbs in the textbooks in terms of frequencies, different meanings, and verb-phrase structures. Along with the adjustment of modal verb treatment based on authentic use, it is important for textbook writers to take into consideration L1 interference as well as learners' difficulties in their use of modal verbs. The present study is a methodological showcase of combining native and learner corpora to enhance the language authenticity and appropriateness of EFL textbooks for learners.
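
A minimal sketch of the distributional comparison described above: normalized rates per million words plus the log-likelihood keyness statistic commonly used to compare a textbook corpus against a reference corpus. The counts and corpus sizes below are invented placeholders, not figures from BNCS2014, TECCL or the textbook series.

```python
# Per-million normalized frequencies and Dunning's log-likelihood for modal verbs.
import math

def log_likelihood(count_a, size_a, count_b, size_b):
    """Dunning's log-likelihood for one word across two corpora."""
    e_a = size_a * (count_a + count_b) / (size_a + size_b)
    e_b = size_b * (count_a + count_b) / (size_a + size_b)
    ll = 0.0
    for count, expected in ((count_a, e_a), (count_b, e_b)):
        if count > 0:
            ll += count * math.log(count / expected)
    return 2.0 * ll

textbook = {"will": 310, "can": 520, "must": 95}      # hypothetical raw counts
reference = {"will": 2100, "can": 5200, "must": 480}
size_textbook, size_reference = 120_000, 1_000_000    # hypothetical corpus sizes (tokens)

for modal in textbook:
    rate_t = 1e6 * textbook[modal] / size_textbook
    rate_r = 1e6 * reference[modal] / size_reference
    ll = log_likelihood(textbook[modal], size_textbook, reference[modal], size_reference)
    print(f"{modal:>5}: textbook {rate_t:7.1f}/M vs reference {rate_r:7.1f}/M, LL = {ll:6.1f}")
```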

Keywords: EFL textbooks, learner corpus, modal verbs, native corpus

Procedia PDF Downloads 124
128 Healthcare Fire Disasters: Readiness, Response and Resilience Strategies: A Real-Time Experience of a Healthcare Organization of North India

Authors: Raman Sharma, Ashok Kumar, Vipin Koushal

Abstract:

Healthcare facilities are always seen as places of haven and protection for managing external incidents, but the situation becomes more difficult and challenging when such facilities are themselves affected by internal hazards. Such internal hazards are arguably more disruptive than external incidents because they affect vulnerable people: patients are dependent on supportive measures and are neither in a position to respond to such crisis situations nor do they know how to respond. The situation becomes even more arduous and exigent to manage if critical care areas such as Intensive Care Units (ICUs) and Operating Rooms (ORs) are involved, since the complexity of the patients housed there makes it difficult to move critically ill patients at short notice. Healthcare organisations use different types of electrical equipment, inflammable liquids, and medical gases, often at a single point of use; hence, any sort of error can spark a fire. Even though healthcare facilities face many fire hazards, damage caused by smoke rather than flames is often more severe. Besides burns, smoke inhalation is the primary cause of fatality in fire-related incidents. The greatest cause of illness and mortality in fire victims, particularly in enclosed places, appears to be the inhalation of fire smoke, which contains a complex mixture of gases in addition to carbon monoxide. Therefore, healthcare organizations are required to have a well-planned disaster mitigation strategy and proactive, well-prepared manpower to cater for all types of exigencies resulting from internal as well as external hazards. This case report delineates a real OR fire incident in the Emergency Operation Theatre (OT) of a tertiary care multispecialty hospital and details the challenges encountered by OR staff in preserving both life and property. No adverse event was reported during or after this fire commotion; nevertheless, this case report aims to collate the lessons identified from the incident in a sequential and logical manner. Timely smoke evacuation and preventing the spread of smoke to adjoining patient care areas by adopting appropriate measures, viz. compartmentation, pressurisation, dilution, ventilation, buoyancy, and airflow, helped to reduce smoke-related fatalities. Henceforth, precautionary measures may be implemented to mitigate such incidents. Careful coordination, continuous training, and fire drill exercises can improve overall outcomes and minimize the possibility of these potentially fatal problems, thereby making the healthcare environment safer for every worker and patient.

Keywords: healthcare, fires, smoke, management, strategies

Procedia PDF Downloads 68
127 Measuring Oxygen Transfer Coefficients in Multiphase Bioprocesses: The Challenges and the Solution

Authors: Peter G. Hollis, Kim G. Clarke

Abstract:

The overall volumetric oxygen transfer coefficient (KLa) is ubiquitously quantified in bioprocesses by analysing the response of dissolved oxygen (DO) to a step change in the oxygen partial pressure in the sparge gas using a DO probe. Typically, the response lag (τ) of the probe has been ignored in the calculation of KLa when τ is less than the reciprocal of KLa, failing which a constant τ has invariably been assumed. These conventions have now been reassessed in the context of multiphase bioprocesses, such as a hydrocarbon-based system. Here, significant variation of τ in response to changes in process conditions has been documented. Experiments were conducted in a 5 L baffled stirred tank bioreactor (New Brunswick) in a simulated hydrocarbon-based bioprocess comprising a C14-20 alkane-aqueous dispersion with suspended non-viable Saccharomyces cerevisiae solids. DO was measured with a polarographic DO probe fitted with a Teflon membrane (Mettler Toledo). The DO concentration response to a step change in the sparge gas oxygen partial pressure was recorded, from which KLa was calculated using a first order model (without incorporation of τ) and a second order model (incorporating τ). τ was determined as the time taken to reach 63.2% of the saturation DO after the probe was transferred from a nitrogen-saturated vessel to an oxygen-saturated bioreactor and is represented as the inverse of the probe constant (KP). The relative effects of the process parameters on KP were quantified using a central composite design with factor levels typical of hydrocarbon bioprocesses, namely 1-10 g/L yeast, 2-20 vol% alkane and 450-1000 rpm. A response surface was fitted to the empirical data, while ANOVA was used to determine the significance of the effects with a 95% confidence interval. KP varied with changes in the system parameters, with the impact of solids loading statistically significant at the 95% confidence level. Increased solids loading reduced KP consistently, an effect which was magnified at high alkane concentrations, with a minimum KP of 0.024 s-1 observed at the highest solids loading of 10 g/L. This KP was 2.8-fold lower than the maximum of 0.0661 s-1 recorded at 1 g/L solids, demonstrating a substantial increase in τ from 15.1 s to 41.6 s as a result of differing process conditions. Importantly, exclusion of KP from the calculation of KLa was shown to under-predict KLa for all process conditions, with an error up to 50% at the highest KLa values. Accurate quantification of KLa, and therefore KP, has far-reaching impact on industrial bioprocesses to ensure these systems are not transport limited during scale-up and operation. This study has shown the incorporation of τ to be essential to ensure KLa measurement accuracy in multiphase bioprocesses. Moreover, since τ has been conclusively shown to vary significantly with process conditions, it is essential for τ to be determined individually for each set of process conditions.
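
A minimal sketch of the second-order approach described above, in which the probe constant KP is fitted alongside KLa from the DO step response; the synthetic trace and parameter values are placeholders, not the study's measurements.

```python
# Fit KLa and the probe constant Kp from a dissolved-oxygen step response.
# Liquid DO follows C*(1 - exp(-KLa t)); the probe adds a first-order lag Kp.
import numpy as np
from scipy.optimize import curve_fit

def do_response(t, kla, kp, c_sat):
    return c_sat * (1.0 - (kp * np.exp(-kla * t) - kla * np.exp(-kp * t)) / (kp - kla))

rng = np.random.default_rng(3)
t = np.linspace(0, 300, 150)                         # seconds after the gas step
true_kla, true_kp, c_sat = 0.02, 0.024, 7.5          # 1/s, 1/s, mg/L (assumed values)
c_meas = do_response(t, true_kla, true_kp, c_sat) + rng.normal(0, 0.05, t.size)

(kla, kp, csat), _ = curve_fit(do_response, t, c_meas, p0=(0.01, 0.05, 7.0))
print(f"KLa = {kla * 3600:.0f} 1/h, probe constant Kp = {kp:.3f} 1/s (tau = {1 / kp:.1f} s)")
```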

Keywords: effect of process conditions, measuring oxygen transfer coefficients, multiphase bioprocesses, oxygen probe response lag

Procedia PDF Downloads 266
126 Artificial Intelligence and Governance in Relevance to Satellites in Space

Authors: Anwesha Pathak

Abstract:

With the increasing number of satellites and space debris, space traffic management (STM) becomes crucial. AI can aid in STM by predicting and preventing potential collisions, optimizing satellite trajectories, and managing orbital slots. Governance frameworks need to address the integration of AI algorithms in STM to ensure safe and sustainable satellite activities. AI and governance play significant roles in the context of satellite activities in space. Artificial intelligence (AI) technologies, such as machine learning and computer vision, can be utilized to process vast amounts of data received from satellites. AI algorithms can analyse satellite imagery, detect patterns, and extract valuable information for applications like weather forecasting, urban planning, agriculture, disaster management, and environmental monitoring. AI can assist in automating and optimizing satellite operations. Autonomous decision-making systems can be developed using AI to handle routine tasks like orbit control, collision avoidance, and antenna pointing. These systems can improve efficiency, reduce human error, and enable real-time responsiveness in satellite operations. AI technologies can be leveraged to enhance the security of satellite systems. AI algorithms can analyze satellite telemetry data to detect anomalies, identify potential cyber threats, and mitigate vulnerabilities. Governance frameworks should encompass regulations and standards for securing satellite systems against cyberattacks and ensuring data privacy. AI can optimize resource allocation and utilization in satellite constellations. By analyzing user demands, traffic patterns, and satellite performance data, AI algorithms can dynamically adjust the deployment and routing of satellites to maximize coverage and minimize latency. Governance frameworks need to address fair and efficient resource allocation among satellite operators to avoid monopolistic practices. Satellite activities involve multiple countries and organizations. Governance frameworks should encourage international cooperation, information sharing, and standardization to address common challenges, ensure interoperability, and prevent conflicts. AI can facilitate cross-border collaborations by providing data analytics and decision support tools for shared satellite missions and data sharing initiatives. AI and governance are critical aspects of satellite activities in space. They enable efficient and secure operations, ensure responsible and ethical use of AI technologies, and promote international cooperation for the benefit of all stakeholders involved in the satellite industry.

Keywords: satellite, space debris, traffic, threats, cyber security

Procedia PDF Downloads 77
125 Benign Recurrent Unilateral Abducens (6th) Nerve Palsy in 14 Months Old Girl: A Case Report

Authors: Khaled Alabduljabbar

Abstract:

Background: Benign, isolated, recurrent sixth nerve palsy is very rare in children. Here we report a case of recurrent abducens nerve palsy with no obvious etiology; it is a diagnosis of exclusion. A recurrent benign form of sixth nerve palsy, a still rarer palsy, has been described in the literature and is most likely secondary to inflammatory causes, e.g., following viral and bacterial infections. Purpose: To present a case of a 14-month-old girl with recurrent attacks of isolated left sixth cranial nerve palsy following upper respiratory tract infection. Observation: The patient presented to the ophthalmology clinic with sudden onset of inward deviation (esotropia) of the left eye with a compensatory left face turn one week following signs of an upper respiratory tract infection. Ophthalmological examination revealed large-angle esotropia of the left eye in primary position, with complete limitation of abduction of the left eye, no palpebral fissure changes, and an abnormal head position (left face turn). Visual acuity was normal, and there was no significant refractive error on cycloplegic refraction for her age. Fundus examination was normal with no evidence of papilledema. There was no relative afferent pupillary defect (RAPD) and no anisocoria. Past medical history and family history were unremarkable, with no history of convulsion attacks or head trauma. Additional workup included CBC, erythrocyte sedimentation rate, and urgent magnetic resonance imaging (MRI) and angiography of the brain, which demonstrated the absence of intracranial and orbital lesions. Referral to a pediatric neurologist was also done and concluded no significant finding. The patient showed improvement of the left sixth cranial nerve palsy and left face turn over a period of two months. Seven months after the first attack, she experienced a recurrent attack of left eye esotropia with left face turn concurrent with a URTI. The rest of the eye examination was again unremarkable. CT and MRI scans of the brain and orbit were performed and showed only signs of sinusitis with no intracranial pathology. The palsy resolved spontaneously within two months. A third episode of left sixth nerve palsy occurred 6 months later, which recovered over one month. Examination and neuroimaging were unremarkable. A diagnosis of benign recurrent left sixth cranial nerve palsy was made. Conclusion: Benign sixth cranial nerve palsy is always a diagnosis of exclusion given the more serious and life-threatening alternative causes. It seems to have a good prognosis with only supportive measures. The likelihood of benign sixth cranial nerve palsy resolving completely and spontaneously is high. Observation for at least 6 months without intervention is advisable.

Keywords: 6th nerve palsy, abducens nerve palsy, recurrent nerve palsy, cranial nerve palsy

Procedia PDF Downloads 90
124 Design of a Low-Cost, Portable, Sensor Device for Longitudinal, At-Home Analysis of Gait and Balance

Authors: Claudia Norambuena, Myissa Weiss, Maria Ruiz Maya, Matthew Straley, Elijah Hammond, Benjamin Chesebrough, David Grow

Abstract:

The purpose of this project is to develop a low-cost, portable sensor device that can be used at home for long-term analysis of gait and balance abnormalities. One area of particular concern involves the asymmetries in movement and balance that can accompany certain types of injuries and/or the associated devices used in the repair and rehabilitation process (e.g. the use of splints and casts), which can often increase the chance of falls and additional injuries. This device has the capacity to monitor a patient during the rehabilitation process after injury or operation, increasing the patient's access to healthcare while decreasing the number of visits to the patient's clinician. The sensor device may thereby improve the quality of the patient's care, particularly in rural areas where access to the clinician could be limited, while simultaneously decreasing the overall cost associated with the patient's care. The device consists of nine interconnected accelerometer/gyroscope/compass chips (9-DOF IMU, Adafruit, New York, NY). The sensors attach to and are used to determine the orientation and acceleration of the patient's lower abdomen, C7 vertebra (lower neck), L1 vertebra (middle back), anterior side of each thigh and tibia, and dorsal side of each foot. In addition, pressure sensors are embedded in shoe inserts, with one sensor (ESS301, Tekscan, Boston, MA) beneath the heel and three sensors (Interlink 402, Interlink Electronics, Westlake Village, CA) beneath the metatarsal bones of each foot. These sensors measure the distribution of the weight applied to each foot as well as stride duration. A small microcontroller (Arduino Mega, Arduino, Ivrea, Italy) is used to collect data from these sensors in a CSV file. MATLAB is then used to analyze the data and output the hip, knee, ankle, and trunk angles projected on the sagittal plane. The open-source program Processing is then used to generate an animation of the patient's gait. The accuracy of the sensors was validated through comparison to goniometric measurements (±2° error). The sensor device was also shown to have sufficient sensitivity to observe various gait abnormalities. Several patients used the sensor device, and the data collected from each represented the patient's movements. Further, the sensors were found to have the ability to observe gait abnormalities caused by the addition of a small amount of weight (4.5 - 9.1 kg) to one side of the patient. The user-friendly interface and portability of the sensor device will help to construct a bridge between patients and their clinicians with fewer necessary inpatient visits.
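
A minimal sketch, in the spirit of the sagittal-plane angles computed above in MATLAB, of deriving a joint angle from two segment sensors; it uses a static accelerometer-only tilt estimate as a simplification (no gyroscope fusion), and the sample readings are placeholders, not device data.

```python
# Estimate each segment's pitch from its accelerometer and take the difference
# to obtain an approximate sagittal-plane joint angle (e.g. knee flexion).
import math

def pitch_deg(ax, ay, az):
    """Segment pitch (deg) from a static accelerometer reading, gravity as reference."""
    return math.degrees(math.atan2(ax, math.hypot(ay, az)))

# Hypothetical accelerometer readings (in g) from the thigh and tibia sensors
thigh = (0.42, 0.05, 0.90)
tibia = (0.10, 0.02, 0.99)

knee_flexion = pitch_deg(*thigh) - pitch_deg(*tibia)   # projected on the sagittal plane
print(f"knee flexion ~ {knee_flexion:.1f} deg")
```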

Keywords: biomedical sensing, gait analysis, outpatient, rehabilitation

Procedia PDF Downloads 289
123 Experiences of Pediatric Cancer Patients and Their Families: A Focus Group Interview

Authors: Bu Kyung Park

Abstract:

Background: The survival rate of pediatric cancer patients has increased. Thus, the need for long-term management and follow-up education after discharge continues to grow. Purpose: The purpose of this study was to explore the experiences of pediatric cancer patients and their families from first diagnosis to returning to their social life. The ultimate goal of this study was to assess which information and interventions pediatric cancer patients and their families required and needed, so as to provide fundamental information for developing the educational content of a web-based intervention program for pediatric cancer patients. Research Approach: This study was based on a descriptive qualitative research design using semi-structured focus group interviews. Participants: Twelve pediatric cancer patients and 12 family members participated in a total of six focus group interview sessions. Methods: All interviews were audiotaped after obtaining participants' approval. The recordings were transcribed. Qualitative content analysis using the inductive coding approach was performed on the transcriptions by three coders. Findings: Eighteen categories emerged from six main themes: 1) information needs, 2) support systems, 3) barriers to treatment, 4) facilitators of treatment, 5) return to social life, and 6) healthcare system issues. Each theme contained codes from both pediatric cancer patients and their family members. Patients and family members had high information needs throughout the whole process of treatment, not only at first diagnosis but also after completion of treatment. Hospitals provided basic information on chemotherapy, medication, and various examinations. However, participants were more likely to rely on information from other patients and families by word of mouth. Participants' information needs differed according to their treatment stage (e.g., newly admitted patients versus cancer survivors returning to their social life). Even newly diagnosed patients worried about social adjustment after completion of all treatment, such as returning to school and diet and physical activity at home. Most family members had unpleasant experiences during hospital admission and raised concerns about healthcare system issues, such as medical error and patient safety. Conclusions: In conclusion, pediatric cancer patients and their family members wanted an information source that could provide tailored information based on their needs. Different information needs of patients and their family members, based on diagnosis, progress, and stage of treatment, were identified. Findings from this study will be used to develop a patient-centered online health intervention program for pediatric cancer patients. Pediatric cancer patients and their family members had a wide variety of education needs and sought information from various sources. A web-based health intervention program is required to satisfy their inquiries and provide reliable information.

Keywords: focus group interview, family caregivers, pediatric cancer patients, qualitative content analysis

Procedia PDF Downloads 181
122 Optimization of Operational Water Quality Parameters in a Drinking Water Distribution System Using Response Surface Methodology

Authors: Sina Moradi, Christopher W. K. Chow, John Van Leeuwen, David Cook, Mary Drikas, Patrick Hayde, Rose Amal

Abstract:

Chloramine is commonly used as a disinfectant in drinking water distribution systems (DWDSs), particularly in Australia and the USA. Maintaining a chloramine residual throughout the DWDS is important in ensuring that microbiologically safe water is supplied at the customer’s tap. In order to simulate how chloramine behaves as it moves through the distribution system, a water quality network model (WQNM) can be applied. In this work, the WQNM was based on mono-chloramine decomposition reactions, which enabled prediction of the mono-chloramine residual at different locations through a DWDS in Australia, using the Bentley commercial hydraulic package WaterGEMS. The accuracy of WQNM predictions is influenced by a number of water quality parameters. Optimizing these parameters to obtain the closest agreement with measured data from a real DWDS would result in both cost reduction and reduced consumption of valuable resources such as energy and materials. In this work, the optimum operating conditions of the water quality parameters (i.e., temperature, pH, and initial mono-chloramine concentration) that maximize the accuracy of mono-chloramine residual predictions for two water supply scenarios across an entire network were determined using response surface methodology (RSM). To obtain feasible and economical water quality parameters for the highest model predictability, Design Expert 8.0 software (Stat-Ease, Inc.) was used to optimize the three independent water quality parameters. High and low levels of the water quality parameters were imposed as explicit constraints in order to avoid extrapolation. The independent variables were pH, temperature and initial mono-chloramine concentration. The lower and upper limits of each variable for the two water supply scenarios were defined, and the experimental levels for each variable were selected based on the actual conditions in the studied DWDS. It was found that at a pH of 7.75, a temperature of 34.16 °C, and an initial mono-chloramine concentration of 3.89 mg/L during peak water supply patterns, the root mean square error (RMSE) of the WQNM for the whole network is minimized to 0.189, while the optimum conditions for averaged water supply occur at a pH of 7.71, a temperature of 18.12 °C, and an initial mono-chloramine concentration of 4.60 mg/L. The proposed methodology for predicting the mono-chloramine residual has great potential for water treatment plant operators in accurately estimating the mono-chloramine residual through a water distribution network. Additional studies of other water distribution systems are warranted to confirm the applicability of the proposed methodology to other water samples.
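
As an illustration of the optimization step described above, the sketch below fits a quadratic response surface to synthetic (pH, temperature, initial mono-chloramine) versus RMSE data and searches for the minimum within the design space. The data, ranges, and variable names are placeholders for demonstration only; the study itself used Design Expert 8.0 rather than this code.

```python
# Minimal response-surface sketch: fit a quadratic model of RMSE in
# (pH, temperature, initial mono-chloramine) and find its minimum.
# All data below are synthetic placeholders, not values from the study.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.uniform([7.0, 10.0, 2.0], [8.5, 40.0, 6.0], size=(30, 3))  # pH, T (deg C), NH2Cl (mg/L)
rmse = (0.2 + 0.05 * (X[:, 0] - 7.75) ** 2
            + 0.001 * (X[:, 1] - 30.0) ** 2
            + 0.02 * (X[:, 2] - 4.0) ** 2
            + rng.normal(0, 0.01, 30))                              # toy response

poly = PolynomialFeatures(degree=2, include_bias=True)
model = LinearRegression().fit(poly.fit_transform(X), rmse)

def predicted_rmse(x):
    return model.predict(poly.transform(x.reshape(1, -1)))[0]

# Constrain the search to the design space (no extrapolation), as in the study.
bounds = [(7.0, 8.5), (10.0, 40.0), (2.0, 6.0)]
opt = minimize(predicted_rmse, x0=np.array([7.75, 25.0, 4.0]), bounds=bounds)
print("optimum (pH, T, NH2Cl):", opt.x, "predicted RMSE:", opt.fun)
```

The bounded search mirrors the constraint, noted in the abstract, that optimization should stay within the experimental levels to avoid extrapolation.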

Keywords: chloramine decay, modelling, response surface methodology, water quality parameters

Procedia PDF Downloads 225
121 A Preliminary Analysis of the Effect of Cochlear Implantation in Unilateral Hearing Loss

Authors: Haiqiao Du, Qian Wang, Shuwei Wang, Jianan Li

Abstract:

Purpose: The aim was to evaluate the effect of cochlear implantation (CI) in patients with unilateral hearing loss, with a view to providing data to support the selection of therapeutic interventions for patients with single-sided deafness (SSD) or asymmetric hearing loss (AHL) and the broadening of the indications for CI. Methods: The study subjects were patients with unilateral hearing loss who underwent cochlear implantation surgery in our hospital in August 2022 and were willing to cooperate with testing; they were divided into two groups, an SSD group and an AHL group. The enrolled patients were followed up for hearing level, tinnitus changes, speech recognition ability, sound source localization ability, and quality of life at five time points: preoperatively and 1, 3, 6, and 12 months after postoperative start-up. Results: As of June 30, 2024, a total of nine patients had completed follow-up, including four in the SSD group and five in the AHL group. The mean postoperative aided thresholds on the CI side were 31.56 dB HL and 34.75 dB HL in the two groups, respectively. Of the four patients with preoperative tinnitus symptoms (three in the SSD group and one in the AHL group), all showed some reduction in Tinnitus Handicap Inventory (THI) scores, except for one patient who showed no change. In the SSD and AHL groups, the sound source localization results (expressed as RMS error values, with smaller values indicating better ability) were 66.87° and 77.41° preoperatively and 29.34° and 54.60° at 12 months after postoperative start-up, respectively, showing that sound source localization ability improved significantly with longer implantation time. Speech recognition was assessed under three test conditions: recognition of monosyllabic words in quiet, and recognition in noise with the sound source at 0° and at 90° (implantation side). The results of the three tests were 99.0%, 72.0%, and 36.0% preoperatively in the SSD group and 96.0%, 83.6%, and 73.8% in the AHL group, respectively; they fluctuated over the first 3 months after start-up and stabilized at 12 months after start-up at 99.0%, 100.0%, and 100.0% in the SSD group and 99.5%, 96.0%, and 99.0% in the AHL group. Quality of life was evaluated subjectively with three instruments: the Speech, Spatial and Qualities of Hearing Scale (SSQ-12), the Quality-of-Life Bilateral Listening Questionnaire (QLBHE), and the Nijmegen Cochlear Implant Questionnaire (NCIQ). The SSQ-12 results (scored out of 10) were 6.35 preoperatively and 6.46 at 12 months after start-up in the SSD group, and 5.61 and 9.83 in the AHL group. The QLBHE scores (out of 100) were 61.0 and 76.0 in the SSD group and 53.4 and 63.7 in the AHL group, preoperatively versus 12 months after start-up. Conclusion: Patients with unilateral hearing loss can benefit from cochlear implantation: CI effectively compensates for hearing on the affected side and reduces accompanying tinnitus symptoms; sound source localization and speech recognition in noise improve significantly; and quality of life is improved.
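
For reference, the RMS localization error reported above is conventionally the root-mean-square difference between the presented and the reported source azimuths; a minimal sketch under that assumption is shown below, with invented angles that are not study data.

```python
# Minimal sketch of an RMS localization error, assuming the metric is the
# root-mean-square difference between presented and reported source azimuths.
# The angles below are invented for illustration, not study data.
import numpy as np

presented = np.array([-60, -30, 0, 30, 60], dtype=float)   # loudspeaker azimuths (deg)
reported  = np.array([-20, -40, 10, 55, 30], dtype=float)  # listener responses (deg)

rms_error = np.sqrt(np.mean((reported - presented) ** 2))
print(f"RMS localization error: {rms_error:.2f} deg")
```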

Keywords: single-sided deafness, asymmetric hearing loss, cochlear implant, unilateral hearing loss

Procedia PDF Downloads 15
120 Industrial Waste Multi-Metal Ion Exchange

Authors: Thomas S. Abia II

Abstract:

Intel Chandler Site has internally developed its first-of-kind (FOK) facility-scale wastewater treatment system to achieve multi-metal ion exchange. The process was carried out using a serial process train of carbon filtration, pH/ORP adjustment, and cationic exchange purification to treat dilute metal wastewater (DMW) discharged from a substrate packaging factory. Over a trial period of 10 months, a total of 3,271 samples were collected and statistically analyzed (average baseline ± standard deviation) to evaluate the performance of a 95-gpm, multi-reactor continuous copper ion exchange treatment system that was subsequently retrofitted for manganese ion exchange to meet environmental regulations. The system is also equipped with an inline acid and hot caustic regeneration system to rejuvenate exhausted IX resins and occasionally remove surface crud. Data generated from lab-scale studies were transferred to system operating modifications following multiple trial-and-error experiments. Although the DMW treatment system failed to meet internal performance specifications for manganese output, it was observed to remove the cation notwithstanding the prevalence of copper in the waste stream. Accordingly, the average manganese output declined from 6.5 ± 5.6 mg·L⁻¹ at pre-pilot to 1.1 ± 1.2 mg·L⁻¹ post-pilot (83% baseline reduction). This milestone was achieved even though the average influent manganese to DMW increased from 1.0 ± 13.7 mg·L⁻¹ at pre-pilot to 2.1 ± 0.2 mg·L⁻¹ post-pilot (110% baseline uptick). Likewise, the pre-trial and post-trial average influent copper values to DMW were 22.4 ± 10.2 mg·L⁻¹ and 32.1 ± 39.1 mg·L⁻¹, respectively (43% baseline increase). As a result, the pre-trial and post-trial average copper output values were 0.1 ± 0.5 mg·L⁻¹ and 0.4 ± 1.2 mg·L⁻¹, respectively (300% baseline uptick). Overall, the operating pH range upstream of treatment (between 3.5 and 5) was shown to be the single largest point of influence for optimizing manganese uptake during multi-metal ion exchange. However, the high variability of the influent copper-to-manganese ratio was observed to adversely impact system functionality. This paper discusses the operating parameters, such as pH and oxidation-reduction potential (ORP), that were shown to significantly influence the functional versatility of the ion exchange system. It also discusses limitations of the treatment system, such as variations in the influent copper-to-manganese ratio, operational configuration, waste by-product management, and system recovery requirements, to provide a balanced assessment of the multi-metal ion exchange process. The take-away is intended to support an assessment of the overall feasibility of ion exchange for metals manufacturing facilities that lack the capability to expand hardware due to real estate restrictions, aggressive schedules, or budgetary constraints.
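
As a quick check on the baseline percentage changes quoted above, the snippet below recomputes them as |post − pre| / pre from the reported mean concentrations; it is purely illustrative arithmetic, not part of the study's statistical workflow.

```python
# Quick check of the percentage changes quoted in the abstract, computed as
# |post - pre| / pre from the reported mean concentrations (mg/L).
def pct_change(pre, post):
    return abs(post - pre) / pre * 100

print(f"Mn output:   {pct_change(6.5, 1.1):.0f}% reduction")    # ~83%
print(f"Mn influent: {pct_change(1.0, 2.1):.0f}% increase")     # ~110%
print(f"Cu influent: {pct_change(22.4, 32.1):.0f}% increase")   # ~43%
print(f"Cu output:   {pct_change(0.1, 0.4):.0f}% increase")     # ~300%
```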

Keywords: copper, industrial wastewater treatment, multi-metal ion exchange, manganese

Procedia PDF Downloads 143
119 Food Composition Tables Used as an Instrument to Estimate the Nutrient Ingest in Ecuador

Authors: Ortiz M. Rocío, Rocha G. Karina, Domenech A. Gloria

Abstract:

There are several tools to assess the nutritional status of a population. A main instrument commonly used to build those tools is the food composition table (FCT). Despite the importance of FCTs, there are many sources of error and variability that can arise in building those tables and can lead to under- or overestimation of a population’s nutrient intake. This work identified the different food composition tables used as instruments to estimate nutrient intake in Ecuador. The data for choosing FCTs were collected through key informants (self-completed questionnaires), supplemented with institutional web research. A questionnaire with general variables (origin, year of edition, etc.) and methodological variables (method of elaboration, information in the table, etc.) was applied to the identified FCTs. Those variables were defined based on an extensive literature review. A descriptive content analysis was performed. Ten printed tables and three databases were reported, all of which were treated indistinctly as food composition tables. We managed to obtain information for 69% of the references; several informants referred to printed documents that were not accessible, and internet searches were unsuccessful. Of the nine final tables, eight are from Latin America, and five of these were constructed by the indirect method (compilation of already published data), having as their main source of information a database from the United States Department of Agriculture (USDA). One FCT was constructed using the direct method (bromatological analysis) and originates from Ecuador. All of the tables made a clear distinction between foods and their methods of cooking, 88% of the FCTs expressed nutrient values per 100 g of edible portion, 77% gave precise additional information about the use of the table, and 55% presented all the macro- and micronutrients in a detailed way. The most complete FCTs were INCAP (Central America) and Composition of Foods (Mexico). The most frequently cited table was the Ecuadorian food composition table of 1965 (70%). The indirect method was used for most tables in this study. However, this method has the disadvantage of generating less reliable food composition tables, because foods vary in composition; a database therefore cannot accurately predict the composition of any isolated sample of a food product. In conclusion, weighing the pros and cons, and despite it being an FCT elaborated by the indirect method, it is considered appropriate to work with the FCT of INCAP Central America, given its proximity to our country and a food item list that is very similar to ours. It is also imperative to keep as a reference the Ecuadorian food composition table, which, although not updated, was constructed using the direct method with Ecuadorian foods. Hence, both tables will be used to elaborate a questionnaire for assessing the food consumption of the Ecuadorian population; in the case of disparate values, only the INCAP values will be taken, because that table is updated.

Keywords: Ecuadorian food composition tables, FCT elaborated by direct method, ingest of nutrients of Ecuadorians, Latin America food composition tables

Procedia PDF Downloads 432
118 Comparing Remote Sensing and in Situ Analyses of Test Wheat Plants as Means for Optimizing Data Collection in Precision Agriculture

Authors: Endalkachew Abebe Kebede, Bojin Bojinov, Andon Vasilev Andonov, Orhan Dengiz

Abstract:

Remote sensing has potential applications in assessing and monitoring plants' biophysical properties using the spectral responses of plants and soils within the electromagnetic spectrum. However, only a few reports compare the performance of different remote sensing sensors against in-situ field spectral measurements. The current study assessed the potential of open-data-source satellite images (Sentinel-2 and Landsat 9) for estimating the biophysical properties of a wheat crop on a study farm located in the village of Ovcha Mogila. Landsat 9 (30 m resolution) and Sentinel-2 (10 m resolution) satellite images with less than 10% cloud cover were extracted from the open data sources for the period December 2021 to April 2022. An unmanned aerial vehicle (UAV) was used to capture the spectral response of plant leaves, and a SpectraVue 710s leaf spectrometer was used to measure the spectral response of the crop in April at five different locations within the same field. The ten most common vegetation indices were selected and calculated based on the reflectance wavelength ranges of the remote sensing tools used. Soil samples were collected at eight different locations within the farm plot, and their physicochemical properties (pH, texture, N, P₂O₅, and K₂O) were analyzed in the laboratory. The finer-resolution images from the UAV and the leaf spectrometer were used to validate the satellite images. The performance of the different sensors was compared based on the measured leaf spectral responses and the extracted vegetation indices at the five sampling points. Scatter plots with the coefficient of determination (R²) and root mean square error (RMSE), together with a correlation (r) matrix prepared in Python using the corr and heatmap functions, were used to compare the performance of the Sentinel-2 and Landsat 9 vegetation indices against the drone and the SpectraVue 710s spectrophotometer. The soil analysis revealed that the study farm plot is slightly alkaline (pH 8.4 to 8.52) and that the soil texture is dominantly clay and clay loam. The vegetation indices (VIs) increased linearly with the growth of the plants. Both the scatter plots and the correlation matrix showed that the Sentinel-2 vegetation indices correlate relatively better with the vegetation indices of the Buteo drone than those of Landsat 9, while the Landsat 9 vegetation indices align somewhat better with the leaf spectrometer. Generally, Sentinel-2 showed better performance than Landsat 9. Further study with sufficient field spectral sampling and repeated UAV imaging is required to improve on the current study.
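
As a sketch of the comparison step, the snippet below computes NDVI for each sensor at the sampling points and draws a correlation heatmap with pandas and seaborn. The reflectance values are invented placeholders, and the band pairings assumed here (NIR/red: Sentinel-2 B8/B4, Landsat 9 B5/B4) stand in for whichever of the ten indices the study actually used.

```python
# Sketch: compare NDVI from different sensors at the sampling points and draw a
# correlation heatmap. Reflectance values are invented placeholders.
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 5  # the five field sampling points
df = pd.DataFrame({
    "s2_b8": rng.uniform(0.3, 0.5, n),   "s2_b4": rng.uniform(0.05, 0.15, n),
    "l9_b5": rng.uniform(0.3, 0.5, n),   "l9_b4": rng.uniform(0.05, 0.15, n),
    "uav_nir": rng.uniform(0.3, 0.5, n), "uav_red": rng.uniform(0.05, 0.15, n),
})

def ndvi(nir, red):
    return (nir - red) / (nir + red)

vis = pd.DataFrame({
    "NDVI_S2": ndvi(df["s2_b8"], df["s2_b4"]),     # Sentinel-2: B8 = NIR, B4 = red
    "NDVI_L9": ndvi(df["l9_b5"], df["l9_b4"]),     # Landsat 9: B5 = NIR, B4 = red
    "NDVI_UAV": ndvi(df["uav_nir"], df["uav_red"]),
})

sns.heatmap(vis.corr(), annot=True, vmin=-1, vmax=1, cmap="RdBu_r")
plt.title("Correlation of NDVI between sensors (illustrative)")
plt.tight_layout()
plt.show()
```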

Keywords: landsat 9, leaf spectrometer, sentinel 2, UAV

Procedia PDF Downloads 107
117 Computerized Adaptive Testing for Ipsative Tests with Multidimensional Pairwise-Comparison Items

Authors: Wen-Chung Wang, Xue-Lan Qiu

Abstract:

Ipsative tests have been widely used in vocational and career counseling (e.g., the Jackson Vocational Interest Survey). Pairwise-comparison items are a typical item format of ipsative tests. When the two statements in a pairwise-comparison item measure two different constructs, the item is referred to as a multidimensional pairwise-comparison (MPC) item. A typical MPC item would be: Which activity do you prefer? (A) playing with young children, or (B) working with tools and machines. These two statements target the constructs of social interest and investigative interest, respectively. Recently, new item response theory (IRT) models for ipsative tests with MPC items have been developed. Among them, the Rasch ipsative model (RIM) deserves special attention because of its good measurement properties: the log-odds of preferring statement A to statement B are defined as a competition between two parts, namely the sum of the person's latent trait measured by statement A and statement A's utility, and the sum of the person's latent trait measured by statement B and statement B's utility. The RIM has been extended to polytomous responses, such as strongly preferring statement A, preferring statement A, preferring statement B, and strongly preferring statement B. To promote these new initiatives, in this study we developed computerized adaptive testing algorithms for MPC items and evaluated their performance using simulations and two real tests. Both the RIM and its polytomous extension are multidimensional, which calls for multidimensional computerized adaptive testing (MCAT). A particular issue in MCAT for MPC items is within-person statement exposure (WPSE); that is, a respondent may keep seeing the same statement (e.g., my life is empty) many times, which is certainly annoying. In this study, we implemented two methods to control the WPSE rate. In the first control method, items were frozen when their statements had been administered more than a prespecified number of times. In the second control method, a random component was added to control the contribution of the information at different stages of the MCAT. The second control method was found to outperform the first in our simulation studies. In addition, we investigated four item selection methods: (a) random selection (as a baseline), (b) the maximum Fisher information method without WPSE control, (c) the maximum Fisher information method with the first control method, and (d) the maximum Fisher information method with the second control method. These four methods were applied to two real tests: one a work survey with dichotomous MPC items and the other a career interests survey with polytomous MPC items. There were three dependent variables: the bias and root mean square error across person measures, and measurement efficiency, defined as the number of items needed to achieve the same degree of test reliability. Both applications indicated that the proposed MCAT algorithms were successful and that there was no loss in measurement precision when the control methods were implemented; among the four methods, the last performed the best.
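
A simplified sketch of maximum-information item selection with the first exposure-control method is shown below. It assumes a dichotomous Rasch-type preference model of the form described above, with invented item parameters and a plain Bernoulli information term; it is not the authors' implementation, and a full MCAT for the RIM would use the multidimensional information matrix rather than this scalar shortcut.

```python
# Illustrative sketch (not the authors' implementation) of maximum-information
# item selection for pairwise-comparison items with the first within-person
# statement-exposure (WPSE) control: statements administered more than
# MAX_EXPOSURE times freeze every remaining item that contains them.
import math
from dataclasses import dataclass

@dataclass
class MPCItem:
    stmt_a: int      # statement index for option A
    stmt_b: int      # statement index for option B
    dim_a: int       # trait dimension measured by statement A
    dim_b: int       # trait dimension measured by statement B
    util_a: float    # utility (location) of statement A
    util_b: float    # utility (location) of statement B

MAX_EXPOSURE = 3     # assumed cap for the sketch

def p_prefer_a(item, theta):
    """Rasch-type preference probability for a dichotomous MPC item."""
    logit = (theta[item.dim_a] + item.util_a) - (theta[item.dim_b] + item.util_b)
    return 1.0 / (1.0 + math.exp(-logit))

def item_information(item, theta):
    """Bernoulli information p(1 - p) of the comparison at the current theta."""
    p = p_prefer_a(item, theta)
    return p * (1.0 - p)

def select_next(items, theta, exposure_count, administered):
    best, best_info = None, -1.0
    for idx, item in enumerate(items):
        if idx in administered:
            continue
        # WPSE control (method 1): skip items whose statements hit the cap.
        if (exposure_count.get(item.stmt_a, 0) >= MAX_EXPOSURE or
                exposure_count.get(item.stmt_b, 0) >= MAX_EXPOSURE):
            continue
        info = item_information(item, theta)
        if info > best_info:
            best, best_info = idx, info
    return best

# Tiny usage example with invented parameters.
bank = [
    MPCItem(0, 1, 0, 1, 0.2, -0.1),
    MPCItem(2, 3, 1, 2, -0.3, 0.4),
    MPCItem(0, 3, 0, 2, 0.1, 0.0),
]
theta = [0.0, 0.5, -0.2]   # provisional trait estimates per dimension
print("next item index:", select_next(bank, theta, exposure_count={}, administered=set()))
```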

Keywords: computerized adaptive testing, ipsative tests, item response theory, pairwise comparison

Procedia PDF Downloads 246
116 MBES-CARIS Data Validation for the Bathymetric Mapping of Shallow Water in the Kingdom of Bahrain on the Arabian Gulf

Authors: Abderrazak Bannari, Ghadeer Kadhem

Abstract:

The objectives of this paper are the validation and evaluation of the performance of MBES-CARIS BASE surface data for bathymetric mapping of shallow water in the Kingdom of Bahrain. The latter is an archipelago with a total land area of about 765.30 km², approximately 126 km of coastline and 8,000 km² of marine area, located in the Arabian Gulf, east of Saudi Arabia and west of Qatar (26° 00’ N, 50° 33’ E). To achieve our objectives, bathymetric attributed grid files (X, Y, and depth) generated from ship-track MBES coverage with 300 x 300 m cells, processed with CARIS-HIPS, were downloaded from the General Bathymetric Chart of the Oceans (GEBCO). The data were then brought into ArcGIS and converted into raster format in five steps: export of the GEBCO BASE surface data to an ASCII file; conversion of the ASCII file to a point shapefile; extraction of the points covering the water boundary of the Kingdom of Bahrain; multiplication of the depth values by -1 to obtain negative values; and, finally, application of the simple kriging method in the ArcMap environment to generate a new raster bathymetric grid surface with 30 x 30 m cells, which was the basis of the subsequent analysis. For validation purposes, 2,200 bathymetric points at different depths over the Bahrain national water boundary were extracted from a medium-scale nautical chart (1:100,000). The nautical chart was scanned, georeferenced and overlaid on the raster bathymetric grid surface generated from the MBES-CARIS data (step 5 above), and homologous depth points were then selected. Statistical analysis, expressed as a linear error at the 95% confidence level, showed a strong correlation coefficient (R² = 0.96) and a low RMSE (± 0.57 m) between the nautical chart and the derived MBES-CARIS depths when only the shallow areas with depths of less than 10 m are considered (about 800 validation points). When only deeper areas (> 10 m) are considered, the correlation coefficient is 0.73 and the RMSE is ± 2.43 m, while for the totality of the 2,200 validation points covering all depths, the correlation coefficient is still significant (R² = 0.81) with a satisfactory RMSE (± 1.57 m). This variation is likely caused by the MBES not completely covering the bottom in several of the deeper pockmarks because of the rapid change in depth; in addition, steep slopes and a rough seafloor probably affect the acquired MBES raw data, and the interpolation of values in areas missed between MBES acquisition swath lines (ship-track sounding data) may not reflect the true depths of those areas. Overall, however, the MBES-CARIS data are well suited to bathymetric mapping of shallow-water areas.
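
For illustration of the validation statistics, the snippet below computes R² and RMSE between chart depths and grid depths separately for shallow and deeper points; the depth arrays are synthetic placeholders, not the 2,200 chart points used in the study.

```python
# Sketch of the validation step: R² and RMSE between nautical-chart depths and
# depths sampled from the interpolated MBES-CARIS grid, reported separately for
# shallow (< 10 m) and deeper (>= 10 m) points. Data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
chart = rng.uniform(0.5, 30.0, 2200)                   # chart depths (m)
grid = chart + rng.normal(0.0, 0.5 + 0.05 * chart)     # grid depths with depth-dependent noise

def r2_rmse(y_true, y_pred):
    rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot, rmse

subsets = [("shallow (< 10 m)", chart < 10),
           ("deeper (>= 10 m)", chart >= 10),
           ("all depths", np.ones_like(chart, dtype=bool))]
for label, mask in subsets:
    r2, rmse = r2_rmse(chart[mask], grid[mask])
    print(f"{label}: R² = {r2:.2f}, RMSE = ±{rmse:.2f} m")
```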

Keywords: bathymetry mapping, multibeam echosounder systems, CARIS-HIPS, shallow water

Procedia PDF Downloads 381
115 Teen Insights into Drugs, Alcohol, and Nicotine: A National Survey of Adolescent Attitudes toward Addictive Substances

Authors: Linda Richter

Abstract:

Background and Significance: The influence of parents on their children’s attitudes and behaviors is immense, even as children grow out of what one might assume to be their most impressionable years and into their teens. This study specifically examines the potential parents have to prevent or reduce the risk of adolescent substance use, even in the face of considerable environmental pressures to use nicotine, alcohol, or drugs. Methodology: The findings presented are based on a nationally representative survey of 1,014 teens aged 12-17 living in the United States. Data were collected using an online platform in early 2018. About half the sample was female (51%), 49% were aged 12-14, and 51% were aged 15-17. The margin of error was ± 3.5%. Demographic data on the teens and their families were available through the survey platform. Survey items explored adolescent respondents’ exposure to addictive substances; the extent to which their sources of information about these substances are reliable or credible; friends’ and peers’ substance use; their own intentions to try substances in the future; and their relationship with their parents. Key Findings: Exposure to nicotine, alcohol, or other drugs and misinformation about these substances were associated with a greater likelihood that adolescents have friends who use drugs and that they intend to try substances in the future, both of which directly predict actual teen substance use. In addition, teens who reported a positive relationship with their parents and parents who are involved in their lives had a lower likelihood of having friends who use drugs and of intending to try substances in the future. This relationship appears to be mediated by parents’ ability to reduce the extent to which their children are exposed to substances in their environment and to misinformation about them. Indeed, teens who reported a good relationship with their parents, and those who reported higher levels of parental monitoring, had significantly higher odds of reporting fewer risk factors than teens with a less positive relationship with their parents or less monitoring. Risk factors associated with substance use were also significantly greater among older teens relative to younger teens; this shift appears to coincide directly with the tendency of parents to pull back their monitoring and involvement in their adolescent children’s lives. Conclusion: The survey findings underscore the importance of resisting the urge to pull back completely as teens age and demand more independence, since that is exactly when the risks for teen substance use spike and young people need their parents and other trusted adults to be involved more than ever. Particularly through the cultivation of a healthy, positive, and open relationship, parents can help teens receive accurate and credible information about substance use and can monitor their whereabouts and exposure to addictive substances. These findings, which come directly from teens themselves, demonstrate the importance of continued parental engagement throughout children’s lives, regardless of their age and despite the disincentives to remaining involved and connected.

Keywords: adolescent, parental monitoring, prevention, substance use

Procedia PDF Downloads 146
114 An Evaluation of the Artificial Neural Network and Adaptive Neuro Fuzzy Inference System Predictive Models for the Remediation of Crude Oil-Contaminated Soil Using Vermicompost

Authors: Precious Ehiomogue, Ifechukwude Israel Ahuchaogu, Isiguzo Edwin Ahaneku

Abstract:

Vermicompost is the product of a decomposition process in which various species of worms break down a mixture of decomposing vegetable or food waste, bedding materials, and vermicast. This process is called vermicomposting, while the rearing of worms for this purpose is called vermiculture. Several works have verified the adsorption of toxic metals by vermicompost, but its application to the retention of organic compounds is still scarce. This research demonstrates the effectiveness of earthworm waste (vermicompost) for the remediation of crude oil-contaminated soils. The remediation methods adopted in this study were two soil washing methods, namely batch and column processes, which represent laboratory and in-situ remediation, respectively. Characterization of the vermicompost and the crude oil-contaminated soil was performed before and after soil washing using Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), X-ray fluorescence (XRF), X-ray diffraction (XRD) and atomic absorption spectrometry (AAS). Optimization of the washing parameters using response surface methodology (RSM) based on a Box-Behnken design was performed on the responses from the laboratory experiments. This study also investigated the application of machine learning models, namely an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS), both evaluated using the coefficient of determination (R²) and mean square error (MSE). Removal efficiency obtained from the Box-Behnken design experiment ranged from 29% to 98.9% for batch-process remediation. Optimization of the experimental factors, carried out using numerical optimization techniques by applying the desirability function method of RSM, produced the highest removal efficiency of 98.9% at an adsorbent dosage of 34.53 grams, an adsorbate concentration of 69.11 g/ml, a contact time of 25.96 min, and a pH of 7.71. Removal efficiency obtained from the multilevel general factorial design experiment ranged from 56% to 92% for column-process remediation. The coefficient of determination (R²) for the ANN was 0.9974 and 0.9852 for the batch and column processes, respectively, showing agreement between experimental and predicted results. For the batch and column processes, respectively, the coefficient of determination (R²) for RSM was 0.9712 and 0.9614, which also demonstrates agreement between experimental and predicted findings. For the batch and column processes, the ANFIS coefficients of determination were 0.7115 and 0.9978, respectively. It can be concluded that machine learning models can predict the removal of crude oil from polluted soil using vermicompost, and their use for this purpose is therefore recommended.
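
As a minimal sketch of the ANN evaluation step, the code below trains a feedforward network on the four washing parameters and scores predicted removal efficiency with R² and MSE. The data, ranges, and network layout are assumptions for demonstration, not the Box-Behnken measurements or the architecture used in the study.

```python
# Minimal sketch of the ANN evaluation: train a feedforward network on washing
# parameters (adsorbent dosage, adsorbate concentration, contact time, pH) and
# score removal efficiency with R² and MSE. Data are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(42)
X = rng.uniform([10, 20, 5, 4], [50, 100, 40, 10], size=(120, 4))    # dosage, conc., time, pH
y = (30 + 1.2 * X[:, 0] - 0.1 * X[:, 1] + 0.5 * X[:, 2]
        - 2.0 * (X[:, 3] - 7.7) ** 2 + rng.normal(0, 2, 120))         # toy removal efficiency (%)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
ann.fit(X_train, y_train)

pred = ann.predict(X_test)
print(f"R²  = {r2_score(y_test, pred):.3f}")
print(f"MSE = {mean_squared_error(y_test, pred):.3f}")
```

An ANFIS comparison would follow the same evaluation pattern, swapping the regressor for a fuzzy-inference model.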

Keywords: ANFIS, ANN, crude-oil, contaminated soil, remediation and vermicompost

Procedia PDF Downloads 111
113 Phantom and Clinical Evaluation of Block Sequential Regularized Expectation Maximization Reconstruction Algorithm in Ga-PSMA PET/CT Studies Using Various Relative Difference Penalties and Acquisition Durations

Authors: Fatemeh Sadeghi, Peyman Sheikhzadeh

Abstract:

Introduction: The block sequential regularized expectation maximization (BSREM) reconstruction algorithm was recently developed to suppress excessive noise by applying a relative difference penalty. The aim of this study was to investigate the effect of various strengths of the noise penalization factor in the BSREM algorithm under different acquisition durations and lesion sizes, in order to determine an optimum penalty factor by considering both quantitative and qualitative image evaluation parameters for clinical use. Materials and Methods: The NEMA IQ phantom and 15 clinical whole-body patients with prostate cancer were evaluated. The phantom and patients were injected with Gallium-68 prostate-specific membrane antigen (⁶⁸Ga-PSMA) and scanned on a non-time-of-flight Discovery IQ positron emission tomography/computed tomography (PET/CT) scanner with BGO crystals. The data were reconstructed using BSREM with β-values of 100-500 at intervals of 100. These reconstructions were compared to OSEM as a widely used reconstruction algorithm. Following the standard NEMA measurement procedure, background variability (BV), recovery coefficient (RC), contrast recovery (CR) and residual lung error (LE) were measured from the phantom data, and signal-to-noise ratio (SNR), signal-to-background ratio (SBR) and tumor SUV were measured from the clinical data. Qualitative features of the clinical images were ranked visually by one nuclear medicine expert. Results: The β-value acts as a noise suppression factor, so BSREM showed decreasing image noise with increasing β-value. BSREM with a β-value of 400 at a decreased acquisition duration (2 min/bp) produced approximately the same noise level as OSEM at an increased acquisition duration (5 min/bp). For the β-value of 400 at a 2 min/bp duration, SNR increased by 43.7% and LE decreased by 62% compared with OSEM at a 5 min/bp duration. In both the phantom and clinical data, an increase in the β-value translated into a decrease in SUV. The lowest levels of SUV and noise were reached at the highest β-value (β = 500), resulting in the highest SNR and lowest SBR, because noise is reduced more than SUV at the highest β-value. When comparing BSREM reconstructions with different β-values, the relative difference in the quantitative parameters was generally larger for smaller lesions: as the β-value decreased from 500 to 100, the increase in CR was 160.2% for the smallest sphere (10 mm) and 12.6% for the largest sphere (37 mm), with a similar trend for SNR (-58.4% and -20.5%, respectively). BSREM was visually ranked higher than OSEM on all qualitative features. Conclusions: The BSREM algorithm with a larger number of iterations leads to greater quantitative accuracy without excessive noise, which translates into higher overall image quality and lesion detectability. This improvement can be used to shorten acquisition time.
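
For context, the relative difference penalty referred to above is conventionally written as in the sketch below (following Nuyts et al.), with the β-value scaling the penalty against the Poisson log-likelihood. The abstract does not spell out the exact form or weights used by the scanner's implementation, so this is the standard formulation rather than the study's.

```latex
% Standard penalized objective commonly used with BSREM (assumed form, not
% taken from the study): log-likelihood minus a weighted relative difference
% penalty over neighbouring voxel pairs.
\Phi(\mathbf{x}) \;=\; L(\mathbf{x}) \;-\; \beta \, R(\mathbf{x}),
\qquad
R(\mathbf{x}) \;=\; \sum_{j} \sum_{k \in N_j} w_{jk}\,
\frac{(x_j - x_k)^2}{x_j + x_k + \gamma \,\lvert x_j - x_k \rvert},
```

where L(x) is the Poisson log-likelihood of the voxel values x, N_j is the neighbourhood of voxel j, w_jk are the neighbour weights, β is the noise-penalization strength (varied here from 100 to 500), and γ is an edge-preservation parameter.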

Keywords: BSREM reconstruction, PET/CT imaging, noise penalization, quantification accuracy

Procedia PDF Downloads 97
112 Airport Pavement Crack Measurement Systems and Crack Density for Pavement Evaluation

Authors: Ali Ashtiani, Hamid Shirazi

Abstract:

This paper reviews the status of existing practice and research related to measuring pavement cracking and using crack density as a pavement surface evaluation protocol. Crack density for pavement evaluation is currently not widely used within the airport community, and its use by the highway community is limited. However, surface cracking is a distress that is closely monitored by airport staff and significantly influences the development of maintenance, rehabilitation and reconstruction plans for airport pavements. Therefore, crack density has the potential to become an important indicator of pavement condition if the type, severity and extent of surface cracking can be accurately measured. A pavement distress survey is an essential component of any pavement assessment. Manual crack surveying has been widely used for decades to measure pavement performance. However, the accuracy and precision of manual surveys can vary depending upon the surveyor, and performing surveys may disrupt normal operations. Given this variability, manual surveying has shown inconsistencies in distress classification and measurement, which can affect planning for pavement maintenance, rehabilitation and reconstruction and the associated funding strategies. A substantial effort has been devoted over the past 20 years to reducing human intervention, and the error associated with it, by moving toward automated distress collection methods. Automated methods refer to systems that identify, classify and quantify pavement distresses through processes that require no, or very minimal, human intervention; this principally involves the use of digital recognition software to analyze and characterize pavement distresses. The lack of established protocols for the measurement and classification of pavement cracks captured in digital images is a challenge to developing a reliable automated system for distress assessment. Variations in the types and severity of distresses, different pavement surface textures and colors, and the presence of pavement joints and edges all complicate automated image processing and crack measurement and classification. This paper summarizes the commercially available systems and technologies for automated pavement distress evaluation. A comprehensive automated pavement distress survey involves the collection, interpretation, and processing of surface images to identify the type, quantity and severity of surface distresses, and the outputs can be used to quantitatively calculate the crack density. The systems for automated distress survey using digital images reviewed in this paper can assist the airport industry in developing a pavement evaluation protocol based on crack density. Analysis of automated distress survey data can lead to a crack density index, which can be used to assess pavement condition and predict pavement performance, allowing airport owners to determine the type of pavement maintenance and rehabilitation in a more consistent way.
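
The abstract does not define crack density explicitly, so the sketch below assumes a common working definition (total, optionally severity-weighted, crack length per unit surveyed area) applied to segment data from an automated survey; both the definition and the numbers are illustrative assumptions, not a protocol proposed in the paper.

```python
# Hypothetical crack-density calculation: the abstract gives no formula, so this
# sketch assumes density = total (optionally severity-weighted) crack length per
# unit surveyed pavement area. Segment data are invented for illustration.
SEVERITY_WEIGHT = {"low": 1.0, "medium": 2.0, "high": 3.0}  # assumed weights

def crack_density(segments, surveyed_area_m2, weighted=False):
    """segments: list of (length_m, severity) tuples from an automated survey."""
    total = 0.0
    for length_m, severity in segments:
        w = SEVERITY_WEIGHT[severity] if weighted else 1.0
        total += w * length_m
    return total / surveyed_area_m2   # metres of crack per square metre

segments = [(12.5, "low"), (4.0, "high"), (7.3, "medium")]
print(f"crack density: {crack_density(segments, 500.0):.3f} m/m²")
print(f"weighted:      {crack_density(segments, 500.0, weighted=True):.3f} m/m²")
```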

Keywords: airport pavement management, crack density, pavement evaluation, pavement management

Procedia PDF Downloads 185
111 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series

Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold

Abstract:

To address the global challenges of climate and environmental change, there is a need to quantify and reduce uncertainties in environmental data, including observations of carbon, water, and energy. The global eddy covariance flux tower network (FLUXNET) and its regional counterparts (e.g., OzFlux, AmeriFlux, ChinaFLUX) were established in the late 1990s and early 2000s to address this demand. Despite the capability of eddy covariance in validating process modelling analyses, field surveys and remote sensing assessments, there are serious concerns regarding the challenges associated with the technique, notably data gaps and uncertainties. To address these concerns, this research developed an ensemble model to fill the data gaps in CO₂ flux, avoiding the limitations of a single algorithm and therefore reducing the error and uncertainty associated with the gap-filling process. In this study, data from five towers in the OzFlux network (Alice Springs Mulga, Calperum, Gingin, Howard Springs and Tumbarumba) during 2013 were used to develop an ensemble machine learning model, combining five feedforward neural networks (FFNN) with different structures and an eXtreme Gradient Boosting (XGB) algorithm. The FFNNs provided the primary estimations in the first layer, while XGB used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and over XGB used individually, with overall RMSEs of 2.64, 2.91, and 3.54 g C m⁻² yr⁻¹, respectively (3.54 being that of the best FFNN). The most significant improvement was in the estimation of the extreme diurnal values (during midday and sunrise), as well as nocturnal estimations, which are generally considered among the most challenging parts of CO₂ flux gap-filling. The towers, as well as seasonality, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity than Calperum, where the differences between the ensemble model on the one hand and the FFNNs and XGB on the other were the smallest of all five sites. In addition, the performance difference between the ensemble model and its individual components was more significant during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) than during the cold season (Apr, May, Jun, Jul, Aug, and Sep), owing to the higher amount of photosynthesis, which leads to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy and robustness of CO₂ flux gap-filling. Therefore, ensemble machine learning models are potentially capable of improving data estimation and regression outcomes when there seems to be no more room for improvement using a single algorithm.
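
The two-layer structure described above can be sketched as a simple stacking pipeline: five feedforward networks produce first-layer estimates, and an XGBoost model takes those estimates as its inputs for the final prediction. The meteorological drivers, flux target, and network layouts below are synthetic placeholders, not OzFlux tower records or the authors' configuration.

```python
# Sketch of the two-layer ensemble: five FFNNs with different structures feed
# their predictions into an XGBoost second layer. Data are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

rng = np.random.default_rng(7)
X = rng.normal(size=(2000, 6))                       # e.g. radiation, temperature, VPD, ...
y = X @ rng.normal(size=6) + 0.5 * np.sin(X[:, 0]) + rng.normal(0, 0.3, 2000)  # toy CO2 flux

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Layer 1: five FFNNs with different hidden-layer structures.
layouts = [(32,), (64,), (32, 16), (64, 32), (128, 64)]
ffnns = [MLPRegressor(hidden_layer_sizes=h, max_iter=2000, random_state=i).fit(X_tr, y_tr)
         for i, h in enumerate(layouts)]

stack_tr = np.column_stack([m.predict(X_tr) for m in ffnns])
stack_te = np.column_stack([m.predict(X_te) for m in ffnns])

# Layer 2: XGBoost takes the first-layer outputs as its input.
xgb = XGBRegressor(n_estimators=300, learning_rate=0.05, random_state=0).fit(stack_tr, y_tr)

rmse = np.sqrt(np.mean((xgb.predict(stack_te) - y_te) ** 2))
print(f"ensemble RMSE on held-out data: {rmse:.3f}")
```

In practice the second layer would normally be trained on out-of-fold first-layer predictions to avoid leakage; the simplification here is only for brevity.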

Keywords: carbon flux, Eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network

Procedia PDF Downloads 140