Search results for: medication error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2300


110 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series

Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold

Abstract:

To address the global challenges of climate and environmental change, there is a need to quantify and reduce uncertainties in environmental data, including observations of carbon, water, and energy. The global eddy covariance flux tower network (FLUXNET) and its regional counterparts (e.g., OzFlux, AmeriFlux, ChinaFLUX) were established in the late 1990s and early 2000s to address this demand. Despite the value of eddy covariance for validating process modelling analyses, field surveys and remote sensing assessments, there are serious concerns regarding the challenges associated with the technique, e.g. data gaps and uncertainties. To address these concerns, this research developed an ensemble model to fill the data gaps in CO₂ flux, avoiding the limitations of a single algorithm and thereby reducing the error and the uncertainties associated with the gap-filling process. In this study, data from five towers in the OzFlux Network (Alice Springs Mulga, Calperum, Gingin, Howard Springs and Tumbarumba) during 2013 were used to develop an ensemble machine learning model, using five feedforward neural networks (FFNNs) with different structures combined with an eXtreme Gradient Boosting (XGB) algorithm. The former, the FFNNs, provided the primary estimations in the first layer, while the latter, XGB, used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and over XGB used individually, with overall RMSEs of 2.64, 2.91, and 3.54 g C m⁻² yr⁻¹ respectively (3.54 provided by the best FFNN). The most significant improvement was in the estimation of the extreme diurnal values (around midday and sunrise), as well as the nocturnal estimations, generally considered among the most challenging parts of CO₂ flux gap-filling.
The towers, as well as the seasons, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity than Calperum, where the differences between the ensemble model on the one hand and the FFNNs and XGB on the other were the smallest of all five sites. In addition, the performance difference between the ensemble model and its individual components was larger during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) than during the cold season (Apr, May, Jun, Jul, Aug, and Sep), due to the higher rate of photosynthesis, which leads to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved both the accuracy and the robustness of CO₂ flux gap-filling. Ensemble machine learning models are therefore potentially capable of improving data estimation and regression outcomes when a single algorithm appears to leave no more room for improvement.
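The two-layer scheme described above, with FFNNs producing first-layer estimates and a boosting model combining them, can be sketched as a stacking ensemble. The code below is an illustrative sketch on synthetic data, not the authors' pipeline: scikit-learn's GradientBoostingRegressor stands in for XGBoost, and the network structures and data are invented.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for flux-tower drivers and CO2 flux (invented).
rng = np.random.RandomState(0)
X = rng.uniform(-1, 1, size=(500, 4))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.randn(500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Layer 1: five FFNNs with different (invented) structures give primary estimates.
ffnns = [MLPRegressor(hidden_layer_sizes=h, max_iter=2000, random_state=i)
         for i, h in enumerate([(8,), (16,), (32,), (16, 8), (32, 16)])]
Z_tr = np.column_stack([m.fit(X_tr, y_tr).predict(X_tr) for m in ffnns])
Z_te = np.column_stack([m.predict(X_te) for m in ffnns])

# Layer 2: a boosting meta-learner maps the first-layer outputs to the target.
meta = GradientBoostingRegressor(random_state=0).fit(Z_tr, y_tr)
rmse = mean_squared_error(y_te, meta.predict(Z_te)) ** 0.5
```

The meta-learner sees only the member predictions, so it learns how to weight and correct them, which is where the gains over any single member come from.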

Keywords: carbon flux, Eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network

Procedia PDF Downloads 141
109 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence

Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang

Abstract:

Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves the small scales to subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in the LES of engineering and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. First, we analyze the influence of subfilter-scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress due to insufficient resolution of the SFS dynamics. Prediction capabilities are notably enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving the Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Additionally, the exploration extends to filter anisotropy to address its impact on the SFS dynamics and LES accuracy. Employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in the LES filters are evaluated.
The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions of vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of the DSM and DMM deteriorate, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. The findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for the LES of turbulence.
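The direct deconvolution idea, inverting an invertible filter to reconstruct the unfiltered field and then forming the SFS stress from it, can be illustrated in one dimension. The sketch below inverts a Gaussian transfer function exactly in Fourier space; the velocity field and filter width are invented, and this a priori test is far simpler than the study described above.

```python
import numpy as np

# 1D periodic velocity field (invented) and a Gaussian filter in Fourier space.
N = 256
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(5 * x) + 0.2 * np.sin(11 * x)

dx = 2 * np.pi / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)   # angular wavenumbers
delta = 4 * dx                            # filter width (assumed FGR of 4)
G = np.exp(-(k * delta) ** 2 / 24.0)      # Gaussian transfer function

def filt(f):
    """Apply the filter spectrally."""
    return np.real(np.fft.ifft(G * np.fft.fft(f)))

u_bar = filt(u)

# Direct deconvolution: divide by the (nonzero) transfer function.
u_rec = np.real(np.fft.ifft(np.fft.fft(u_bar) / G))

# SFS stress tau = bar(u u) - bar(u) bar(u): exact vs. reconstructed.
tau_exact = filt(u * u) - u_bar * u_bar
tau_ddm = filt(u_rec * u_rec) - u_bar * u_bar
corr = np.corrcoef(tau_exact, tau_ddm)[0, 1]
```

Because the Gaussian transfer function never vanishes, the spectral division recovers the unfiltered field essentially exactly here; with noisy or truncated data the division amplifies small-scale errors, which is why the filter-to-grid ratio matters.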

Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence

Procedia PDF Downloads 76
108 Investigating the Flow Physics within Vortex-Shockwave Interactions

Authors: Frederick Ferguson, Dehua Feng, Yang Gao

Abstract:

No doubt, current CFD tools have a great many technical limitations, and active research is being done to overcome them. Current areas of limitation include vortex-dominated flows, separated flows, and turbulent flows. In general, turbulent flows are unsteady solutions of the fluid dynamic equations, and instances of these solutions can be computed directly from the equations. One commonly implemented approach is known as direct numerical simulation (DNS). This approach requires a spatial grid fine enough to capture the smallest length scale of the turbulent fluid motion, known as the Kolmogorov scale. It is of interest to note that the Kolmogorov scale must be resolved throughout the domain of interest, and at a correspondingly small time step. In typical problems of industrial interest, the ratio of the length scale of the domain to the Kolmogorov length scale is so great that the required grid becomes prohibitively large. As a result, the available computational resources are usually inadequate for DNS-related tasks, and at this stage of its development DNS is not applicable to industrial problems. In this research, an attempt is made to develop a numerical technique capable of delivering DNS-quality solutions at the scale required by industry. To date, this technique has delivered very accurate preliminary results for steady and unsteady, viscous and inviscid, compressible and incompressible, and both high and low Reynolds number flow fields. Herein, it is proposed that the Integro-Differential Scheme (IDS) be applied to a set of vortex-shockwave interaction problems, with the goal of investigating the nonstationary physics within the resulting interaction regions. In the proposed paper, the IDS formulation and its numerical error capability will be described.
Further, the IDS will be used to solve the inviscid and viscous Burgers equations, with the goal of analyzing their solutions over a considerable length of time, thus demonstrating the unsteady capabilities of the IDS. Finally, the IDS will be used to solve a set of fluid dynamic problems involving strong vortex interactions. Plans are to solve the following problems: the travelling wave and vortex problems over considerable lengths of time, the normal shockwave-vortex interaction problem at low supersonic conditions, and the reflected oblique shock-vortex interaction problem. The IDS solutions obtained for each of these cases will be explored further in an effort to determine the distributed density gradients and vorticity, as well as the Q-criterion. Parametric studies will be conducted to determine the effects of the Mach number on the intensity of vortex-shockwave interactions.
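The viscous Burgers equation named as a validation case can be solved with a simple explicit finite-difference scheme. The sketch below is a generic benchmark solver, not the IDS formulation itself; the grid, viscosity, and time-step values are invented for illustration.

```python
import numpy as np

# Viscous Burgers equation u_t + (u^2/2)_x = nu * u_xx on a periodic domain,
# advanced with an explicit scheme: central differences in space, forward
# Euler in time. Parameters are illustrative, chosen to satisfy the
# diffusive (nu*dt/dx^2 < 0.5) and convective (|u|*dt/dx < 1) limits.
N, nu, dt, steps = 200, 0.05, 1e-3, 2000
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x) + 0.5  # initial wave riding on a mean flow

for _ in range(steps):
    f = 0.5 * u * u                                     # convective flux
    conv = (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)  # central d(f)/dx
    diff = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx ** 2
    u = u + dt * (nu * diff - conv)
```

Writing the convection term in conservative flux form makes the scheme preserve the mean of u on a periodic grid, a useful sanity check when analyzing long-time unsteady solutions.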

Keywords: vortex dominated flows, shockwave interactions, high Reynolds number, integro-differential scheme

Procedia PDF Downloads 139
107 The Digital Microscopy in Organ Transplantation: Ergonomics of the Tele-Pathological Evaluation of Renal, Liver, and Pancreatic Grafts

Authors: Constantinos S. Mammas, Andreas Lazaris, Adamantia S. Mamma-Graham, Georgia Kostopanagiotou, Chryssa Lemonidou, John Mantas, Eustratios Patsouris

Abstract:

The process of building a better safety culture, methods of error analysis, and preventive measures starts with understanding the effects of human factors engineering on remote microscopic diagnosis in surgery, and especially in organ transplantation for the evaluation of grafts. A high percentage of solid organs arrive at recipient hospitals in the UK injured or improper for transplantation. Digital microscopy adds information at the microscopic level about the grafts (G) in organ transplant (OT) and may lead to a change in their management; such a method will reduce the possibility of a diseased graft arriving at the recipient hospital for implantation. Aim: The aim of this study is to analyze the ergonomics of digital microscopy (DM) based on virtual slides, on telemedicine systems (TS), for the tele-pathological evaluation (TPE) of grafts (G) in organ transplantation (OT). Material and Methods: The ergonomics of DM for the microscopic TPE of renal graft (RG), liver graft (LG) and pancreatic graft (PG) tissues was analyzed by experimental simulation. This corresponded to the ergonomics of digital microscopy for TPE in OT, applying a virtual slide (VS) system for graft tissue image capture and for the remote diagnosis of possible microscopic inflammatory and/or neoplastic lesions. Experimentation included the development of an experimental telemedicine system (Exp.-TS) for simulating the integrated VS-based microscopic TPE of RG, LG and PG tissues. Simulation of DM on TS-based TPE was performed by two specialists on a total of 238 renal graft (RG), 172 liver graft (LG) and 108 pancreatic graft (PG) digital microscopic tissue images, assessed for inflammatory and neoplastic lesions on the electronic spaces of the four TS used.
Results: Statistical analysis of the specialists' answers regarding the ability to accurately diagnose diseased RG, LG and PG tissues on the electronic spaces of the four TS (A, B, C, D) showed that DM on TS for TPE in OT performs best on the electronic space (ES) of a desktop, followed by the ES of the applied Exp.-TS. Tablet and mobile-phone ES appear significantly risky for the application of DM in OT (p<.001). Conclusion: Achieving the largest reduction in errors and adverse events affecting the quality of grafts will take the application of human factors engineering to procurement, design, audit, and awareness-raising activities, and consequently an investment in new training, people, and other changes to management activities for DM in OT. The simulated VS-based TPE with DM of RG, LG and PG tissues after retrieval appears feasible and reliable, though dependent on the size of the electronic space of the applied TS, for remotely preventing diseased grafts from being retrieved and/or sent to the recipient hospital, and for post-grafting and pre-transplant planning.
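A comparison of diagnostic accuracy across the four electronic spaces would typically rest on a contingency-table test. The sketch below runs a chi-square test of independence on purely hypothetical correct/incorrect counts; the numbers are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical correct/incorrect diagnosis counts per electronic space
# (rows: desktop, Exp.-TS, tablet, mobile phone) -- invented for illustration.
table = np.array([[230,   8],
                  [225,  13],
                  [190,  48],
                  [175,  63]])

# Test whether diagnostic accuracy is independent of the electronic space.
chi2, p, dof, expected = chi2_contingency(table)
```

With counts this skewed toward the smaller displays, the test rejects independence at p < .001, matching the pattern of significance reported in the results.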

Keywords: digital microscopy, organ transplantation, tele-pathology, virtual slides

Procedia PDF Downloads 281
106 Potential Impacts of Climate Change on Hydrological Droughts in the Limpopo River Basin

Authors: Nokwethaba Makhanya, Babatunde J. Abiodun, Piotr Wolski

Abstract:

Climate change will possibly intensify hydrological droughts and reduce water availability in river basins. Despite this, most research on climate change effects in southern Africa has focused exclusively on meteorological droughts. This study projects the potential impact of climate change on the future characteristics of hydrological droughts in the Limpopo River Basin (LRB). The study uses regional climate model (RCM) simulations (from the Coordinated Regional Climate Downscaling Experiment, CORDEX) combined with hydrological simulations (using the Soil and Water Assessment Tool Plus model, SWAT+) to project the impacts at four global warming levels (GWLs: 1.5℃, 2.0℃, 2.5℃, and 3.0℃) under the RCP8.5 future climate scenario. The SWAT+ model was calibrated and validated with a streamflow dataset observed over the basin, and the sensitivity of the model parameters was investigated. The performance of the SWAT+ LRB model was verified using the Nash-Sutcliffe efficiency (NSE), percent bias (PBIAS), root mean square error (RMSE), and coefficient of determination (R²). The Standardized Precipitation Evapotranspiration Index (SPEI) and the Standardized Precipitation Index (SPI) were used to detect meteorological droughts. The Soil Water Index (SSI) was used to define agricultural drought, while the Water Yield Drought Index (WYLDI), the Surface Run-off Index (SRI), and the Streamflow Index (SFI) were used to characterise hydrological drought. The performance of the SWAT+ model simulations over the LRB is sensitive to the parameters CN2 (initial SCS runoff curve number for moisture condition II) and ESCO (soil evaporation compensation factor). The best simulation generally performed better during the calibration period than during the validation period. In the calibration and validation periods, NSE is ≤ 0.8, while PBIAS is ≥ −80.3%, RMSE ≥ 11.2 m³/s, and R² ≤ 0.9.
The simulations project a future increase in temperature and potential evapotranspiration over the basin, but no significant future trend in precipitation or the hydrological variables. However, the spatial distribution of precipitation reveals a projected increase in precipitation in the southern part of the basin and a decline in the northern part, with the region of reduced precipitation projected to expand with increasing GWLs. A decrease in all hydrological variables is projected over most parts of the basin, especially the eastern part. The simulations predict that meteorological droughts (i.e., SPEI and SPI), agricultural droughts (i.e., SSI), and hydrological droughts (i.e., WYLDI and SRI) would become more intense and severe across the basin. SPEI-drought has a greater magnitude of increase than SPI-drought, with agricultural and hydrological droughts increasing by magnitudes between the two. As a result, this research suggests that future hydrological droughts over the LRB could be more severe than the SPI-drought projection predicts but less severe than the SPEI-drought projection. This research can be used to mitigate the effects of potential climate change on hydrological drought in the basin.
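The four verification metrics quoted above (NSE, PBIAS, RMSE, R²) can be computed directly from observed and simulated streamflow. A minimal sketch follows; note that PBIAS sign conventions vary between authors, and the one below (positive when the model underestimates) is an assumption, not necessarily the study's convention.

```python
import numpy as np

def skill_scores(obs, sim):
    """NSE, PBIAS (%), RMSE and R^2 for observed vs. simulated streamflow."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    pbias = 100.0 * np.sum(obs - sim) / np.sum(obs)   # sign convention assumed
    rmse = np.sqrt(np.mean((obs - sim) ** 2))
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2
    return nse, pbias, rmse, r2
```

A perfect simulation gives NSE = 1, PBIAS = 0 and RMSE = 0; NSE below zero means the model is worse than simply predicting the observed mean.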

Keywords: climate change, CORDEX, drought, hydrological modelling, Limpopo River Basin

Procedia PDF Downloads 129
105 The Role of Macroeconomic Condition and Volatility in Credit Risk: An Empirical Analysis of Credit Default Swap Index Spread on Structural Models in U.S. Market during Post-Crisis Period

Authors: Xu Wang

Abstract:

This research builds linear regressions of U.S. macroeconomic condition and volatility measures on the investment-grade and high-yield Credit Default Swap index spreads, using monthly data from March 2009 to July 2016, to study the relationship between different dimensions of the macroeconomy and overall credit quality. The most significant contribution of this research is to systematically examine the individual and joint effects of macroeconomic condition and volatility on CDX spreads by including macroeconomic time series that capture different dimensions of the U.S. economy. Industrial production index growth, non-farm payroll growth, consumer price index growth, the 3-month Treasury rate and consumer sentiment are introduced to capture the condition of real economic activity, employment, inflation, monetary policy and risk aversion, respectively. The conditional variance of each macroeconomic series is constructed using an ARMA-GARCH model and is used to measure macroeconomic volatility. A linear regression model is used to capture the relationships between monthly average CDX spreads and the macroeconomic variables, with the Newey-West estimator controlling for autocorrelation and heteroskedasticity in the error terms. Furthermore, sensitivity factor analysis and standardized coefficients analysis are conducted to compare the sensitivity of CDX spreads to the different macroeconomic variables and to compare the relative effects of macroeconomic condition versus macroeconomic uncertainty. This research shows that macroeconomic condition has a negative effect on the CDX spread while macroeconomic volatility has a positive effect. Macroeconomic condition and volatility variables can jointly explain more than 70% of the variation in the CDX spread. In addition, the sensitivity factor analysis shows that the CDX spread is most sensitive to the Consumer Sentiment index.
Finally, the standardized coefficients analysis shows that both macroeconomic condition and volatility variables are important in determining the CDX spread, but the macroeconomic condition category of variables has more relative importance than the macroeconomic volatility category. This research shows that the CDX spread reflects the individual and joint effects of macroeconomic condition and volatility, which suggests that individual investors and the government should regard the CDX spread as a measure of overall credit risk with care, because it is influenced by the macroeconomy. In addition, the significance of macroeconomic condition and volatility variables, such as the non-farm payroll growth rate and industrial production index growth volatility, suggests that the government should pay more attention to overall credit quality in the market when the macroeconomy is weak or volatile.

Keywords: autoregressive moving average model, credit spread puzzle, credit default swap spread, generalized autoregressive conditional heteroskedasticity model, macroeconomic conditions, macroeconomic uncertainty

Procedia PDF Downloads 167
104 Generating Ideas to Improve Road Intersections Using Design with Intent Approach

Authors: Omar Faruqe Hamim, M. Shamsul Hoque, Rich C. McIlroy, Katherine L. Plant, Neville A. Stanton

Abstract:

Road safety has become an alarming issue, especially in low- and middle-income developing countries. Traditional approaches lack out-of-the-box thinking, confining engineers to the usual techniques for making roads safer. A socio-technical approach, Design with Intent, has recently been introduced for improving road intersections. The Design with Intent (DWI) approach aims to give practitioners a more nuanced approach to design and behaviour, working with people, people's understanding, and the complexities of everyday human experience. It is a collection of design patterns, and a design and research approach, for exploring the interactions between design and people's behaviour across products, services, and environments, both digital and physical. Through this approach, design for behaviour change can be applied to social and environmental problems as well as commercially. It comprises a total of 101 cards across eight lenses (architectural, error-proofing, interaction, ludic, perceptual, cognitive, Machiavellian, and security), each with its own distinct way of extracting ideas from participants. For this research, a three-legged accident-blackspot intersection on a national highway was chosen for the DWI workshop. Participants from fields including civil engineering, naval architecture and marine engineering, urban and regional planning, and sociology actively participated in a day-long workshop. At the start of the workshop, the participants were given a preamble on the accident scenario and a brief overview of the DWI approach. Design cards from the various lenses were distributed among the 10 participants, who were given an hour and a half to brainstorm and generate ideas to improve the safety of the selected intersection.
After the brainstorming session, the participants held roundtable discussions on the ideas they had come up with, and ideas were accepted or rejected by consensus of the forum. The generated ideas were then synthesized and combined into an improvement scheme for the selected intersection. The most significant improvement ideas from the DWI approach were: colour coding of traffic lanes for separate vehicle types, channelizing the existing bare intersection, providing advance warning signs, cautionary signs and educational signs motivating road users to drive safely, and using textured surfaces with rumble strips on the approaches to the intersection. The motive of this approach is to draw new ideas from road users rather than depending solely on traditional schemes, so as to increase the efficiency and safety of roads and to ensure road users' compliance, since these features are generated from the minds of the users themselves.

Keywords: design with intent, road safety, human experience, behavior

Procedia PDF Downloads 142
103 Modelling of Groundwater Resources for Al-Najaf City, Iraq

Authors: Hayder H. Kareem, Shunqi Pan

Abstract:

Groundwater is a vital water resource in many areas of the world, particularly in the Middle East region, where water resources are becoming scarce and depleted. Sustainable management and planning of groundwater resources become essential and urgent given the impact of global climate change. In recent years, numerical models have been widely used to predict flow patterns and assess water resource security, as well as groundwater quality affected by transported contaminants. In this study, MODFLOW is used to study the current status of groundwater resources and the risk to water resource security in the region centred on Al-Najaf City, which is located in the mid-west of Iraq adjacent to the Euphrates River. A conceptual model is built using the geologic and hydrogeologic data collected for the region, together with Digital Elevation Model (DEM) data obtained from the Global Land Cover Facility (GLCF) and the United States Geological Survey (USGS) for the study area. The computer model also incorporates the distribution of 69 wells in the area, with steady pre-defined hydraulic heads along its boundaries. The model is then applied with a recharge rate (from precipitation) of 7.55 mm/year, derived from the analysis of field data in the study area for the period 1980-2014. The hydraulic conductivity measured at the well locations is interpolated for model use. The model is calibrated against the measured hydraulic heads at 50 of the 69 wells in the domain, and the results show good agreement: the standard error of estimate (SEE), root-mean-square error (RMSE), normalized RMSE and correlation coefficient are 0.297 m, 2.087 m, 6.899% and 0.971, respectively. Sensitivity analysis is also carried out, and it is found that the model is sensitive to recharge, particularly when the rate is greater than 15 mm/year.
Hydraulic conductivity is found to be another parameter that can affect the results significantly, and it therefore requires high-quality field data. The results show a general flow pattern from the west to the east of the study area, which agrees well with the observations and the gradient of the ground surface. It is found that, with the current operational pumping rates of the wells in the area, a dry area results in Al-Najaf City due to the large quantity of groundwater withdrawn. The computed water balance under the current operational pumping shows that the Euphrates River supplies approximately 11,759 m³/day to the groundwater, instead of gaining approximately 11,178 m³/day from the groundwater in the absence of pumping. It is expected that the results obtained from the study can provide important information for the sustainable and effective planning and management of the regional groundwater resources for Al-Najaf City.
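Interpolating point measurements at wells onto model grid nodes, as done here for hydraulic conductivity, is commonly handled with inverse-distance weighting. The sketch below is a generic IDW routine; the choice of IDW and the power parameter are assumptions, since the abstract does not name the interpolation method used.

```python
import numpy as np

def idw(xy_wells, k_wells, xy_grid, power=2.0):
    """Inverse-distance-weighted interpolation of point measurements
    (e.g. hydraulic conductivity at wells) onto model grid nodes.

    xy_wells: (n_wells, 2) well coordinates; k_wells: (n_wells,) values;
    xy_grid: (n_nodes, 2) target nodes. Returns (n_nodes,) estimates.
    """
    # Pairwise distances between every grid node and every well.
    d = np.linalg.norm(xy_grid[:, None, :] - xy_wells[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)        # avoid division by zero at a well
    w = 1.0 / d ** power
    return (w * k_wells).sum(axis=1) / w.sum(axis=1)
```

IDW estimates are always bounded by the minimum and maximum measured values, which keeps interpolated conductivities physically plausible but cannot extrapolate beyond the data range.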

Keywords: Al-Najaf city, conceptual modelling, groundwater, unconfined aquifer, visual MODFLOW

Procedia PDF Downloads 213
102 Reliable and Error-Free Transmission through Multimode Polymer Optical Fibers in House Networks

Authors: Tariq Ahamad, Mohammed S. Al-Kahtani, Taisir Eldos

Abstract:

Optical communications technology has made enormous and steady progress for several decades, providing a key resource in our increasingly information-driven society and economy. Much of this progress has been in finding innovative ways to increase the data-carrying capacity of a single optical fiber. In this research article we explore basic issues of security and reliability for secure and reliable information transfer through the fiber infrastructure. Conspicuously, one potentially enormous source of improvement has been left untapped in these systems: fibers can easily support hundreds of spatial modes, but today's commercial systems (single-mode or multi-mode) make no attempt to use these as parallel channels for independent signals. Bandwidth, performance, reliability, cost efficiency, resiliency, redundancy, and security are some of the demands placed on telecommunications today. Since its initial development, fiber optics has had the advantage over copper-based and wireless telecommunications solutions in most of these requirements. The largest obstacle preventing most businesses from implementing fiber optic systems was cost. With the recent advancements in fiber optic technology and the ever-growing demand for more bandwidth, the cost of installing and maintaining fiber optic systems has been reduced dramatically. With so many advantages, including cost efficiency, there will continue to be an increase in fiber optic systems replacing copper-based communications. This will also lead to an increase in the expertise and the technology needed by intruders to tap into fiber optic networks. As ever, all technologies have been subject to hacking and criminal manipulation; fiber optics is no exception.
Research into fiber optic security vulnerabilities suggests that not everyone responsible for network security is aware of the different methods intruders use to hack, virtually undetected, into fiber optic cables. With millions of miles of fiber optic cable stretching across the globe and carrying information including, but certainly not limited to, government, military, and personal information, such as medical records, banking information, driving records, and credit card information, awareness of fiber optic security vulnerabilities is essential and critical. Many articles and studies still suggest that fiber optics is expensive, impractical and hard to tap; others argue that tapping is not only easily done but also inexpensive. This paper will briefly discuss the history of fiber optics, explain the basics of fiber optic technologies, and then discuss the vulnerabilities in fiber optic systems and how they can be better protected. Knowing the security risks and the options available may save a company a lot of embarrassment, time, and, most importantly, money.

Keywords: in-house networks, fiber optics, security risk, money

Procedia PDF Downloads 423
101 Statistical Comparison of Ensemble Based Storm Surge Forecasting Models

Authors: Amin Salighehdar, Ziwen Ye, Mingzhe Liu, Ionut Florescu, Alan F. Blumberg

Abstract:

Storm surge is an abnormal water level caused by a storm. Accurate prediction of a storm surge is a challenging problem. Researchers have developed various ensemble modeling techniques to combine several individual forecasts into an overall, presumably better, forecast. Some simple ensemble modeling techniques exist in the literature; for instance, Model Output Statistics (MOS) and running mean-bias removal are widely used in the storm surge prediction domain. However, these methods have drawbacks: MOS, for instance, is based on multiple linear regression and needs a long period of training data. To overcome the shortcomings of these simple methods, researchers have proposed more advanced ones. For instance, ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecasting that creates a better forecast of sea level using a combination of several instances of Bayesian Model Averaging (BMA). An ensemble dressing method is based on identifying the best member forecast and using it for prediction. Our contribution in this paper can be summarized as follows. First, we investigate whether ensemble models perform better than any single forecast; for this we need to identify the single best forecast, and we present a methodology based on a simple Bayesian selection method to select it. Second, we present several new and simple ways to construct ensemble models, using correlation and standard deviation as weights in combining the different forecast models. Third, we use these ensembles, compare them with several existing models from the literature for forecasting storm surge level, and then investigate whether developing a complex ensemble model is indeed needed. To achieve this goal, we use a simple average (one of the simplest and most widely used ensemble models) as a benchmark.
Predicting the peak surge level during a storm, as well as the precise time at which this peak occurs, is crucial; we therefore develop a statistical platform to compare the performance of the various ensemble methods. This statistical analysis is based on the root mean square error of the ensemble forecast during the testing period and on the magnitude and timing of the forecasted peak surge compared to the actual peak and its time. In this work, we analyze four hurricanes: hurricanes Irene and Lee in 2011, hurricane Sandy in 2012, and hurricane Joaquin in 2015. Since hurricane Irene developed at the end of August 2011 and hurricane Lee started just after Irene at the beginning of September 2011, we consider them in this study as a single contiguous hurricane event. The data set used for this study was generated by the New York Harbor Observing and Prediction System (NYHOPS). We find that even the simplest possible way of creating an ensemble produces results superior to any single forecast. We also show that the ensemble models we propose generally perform better than the simple average ensemble technique.
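One of the simple weighting schemes described, using each member's correlation with observations as its weight, can be sketched as follows. The skill-to-weight mapping (clipping negative correlations at zero and normalizing) is our assumption for illustration, not necessarily the authors' exact formulation.

```python
import numpy as np

def weighted_ensemble(forecasts, obs_train, fc_train):
    """Combine member forecasts with weights from each member's
    correlation with observations over a training period.

    forecasts: (n_members, n_times) forecasts for the test period;
    obs_train: (n_train,) observed levels; fc_train: list of (n_train,)
    member forecasts over the training period.
    """
    corr = np.array([np.corrcoef(obs_train, f)[0, 1] for f in fc_train])
    w = np.clip(corr, 0.0, None)   # discard anti-correlated members (assumed)
    w = w / w.sum()                # normalize weights to sum to 1
    return w @ np.asarray(forecasts)
```

A member that tracked the observed surge poorly in training gets little or no weight, so the combination degrades gracefully when one model misbehaves, unlike a plain average.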

Keywords: Bayesian learning, ensemble model, statistical analysis, storm surge prediction

Procedia PDF Downloads 309
100 Assessment of Influence of Short-Lasting Whole-Body Vibration on Joint Position Sense and Body Balance–A Randomised Masked Study

Authors: Anna Slupik, Anna Mosiolek, Sebastian Wojtowicz, Dariusz Bialoszewski

Abstract:

Introduction: Whole-body vibration (WBV) uses high-frequency mechanical stimuli generated by a vibration plate and transmitted through bone, muscle, and connective tissue to the whole body. Research has shown that long-term vibration-plate training improves neuromuscular facilitation, especially in the afferent neural pathways responsible for conducting vibration and proprioceptive stimuli, as well as muscle function, balance, and proprioception. Some researchers suggest that the vibration stimulus briefly inhibits the conduction of afferent signals from proprioceptors and can interfere with the maintenance of body balance. The aim of this study was to evaluate the influence of a single set of exercises associated with whole-body vibration on joint position sense and body balance. Material and methods: The study enrolled 55 people aged 19-24 years, randomly divided into a test group (30 persons) and a control group (25 persons). Both groups performed the same set of exercises on a vibration plate. In the test group, the vibration parameters were a frequency of 20 Hz and an amplitude of 3 mm; the control group performed the exercises on the vibration plate while it was off. All participants were instructed to perform six dynamic exercises lasting 30 seconds each, with a 60-second rest between them. The exercises involved the large muscle groups of the trunk, pelvis, and lower limbs. Measurements were carried out before and immediately after exercise. Joint position sense (JPS) was measured in the knee joint for the starting position at 45° in an open kinematic chain, with JPS error measured using a digital inclinometer. Balance was assessed in a standing position with both feet on the ground, with the eyes open and closed (each test lasting 30 s), using Matscan with FootMat 7.0 SAM software. The area of the ellipse of confidence and the front-back and right-left sway were measured to assess balance.
Statistical analysis was performed using Statistica 10.0 PL software. Results: There were no significant differences between the groups, either before or after the exercise (p > 0.05). JPS did not change significantly in either the test group (10.7° vs. 8.4°) or the control group (9.0° vs. 8.4°). No significant differences were shown in any of the parameters of the balance tests with the eyes open or closed in either group (p > 0.05). Conclusions: 1. No deterioration in proprioception or balance was observed immediately after the vibration stimulus. This suggests that vibration-induced blockage of proprioceptive stimuli conduction has only a short-lasting effect that persists only while the vibration stimulus is present. 2. Short-term use of vibration in treatment does not impair proprioception and seems to be safe for patients with proprioceptive impairment. 3. These results should be supplemented with an assessment of proprioception during the application of vibration stimuli; additionally, the impact of the vibration parameters used in the exercises should be evaluated.

Keywords: balance, joint position sense, proprioception, whole body vibration

Procedia PDF Downloads 329
99 Experimental Measurement of Equatorial Ring Current Generated by Magnetoplasma Sail in Three-Dimensional Spatial Coordinate

Authors: Masato Koizumi, Yuya Oshio, Ikkoh Funaki

Abstract:

Magnetoplasma Sail (MPS) is a future spacecraft propulsion concept that generates high levels of thrust by inducing an artificial magnetosphere to capture and deflect solar wind charged particles in order to transfer momentum to the spacecraft. When plasma is injected into the spacecraft's magnetic field region, a ring current drifts azimuthally on the equatorial plane about the dipole magnetic field generated by the current flowing through the solenoid on board the spacecraft. This ring current inflates the magnetosphere, which improves the thrust performance of the MPS spacecraft. In this study, the ring current was experimentally measured using three Rogowski current probes positioned in a circular array about a laboratory model of the MPS spacecraft. The investigation aims to determine the detailed structure of the ring current through physical experimentation performed under two different magnetic field strengths, produced by applying 300 V and 600 V to the solenoid. The expected outcome was that the three current probes would detect the same current, since all three were positioned at an equal radial distance of 63 mm from the center of the solenoid. Although the experimental results were numerically implausible, probably due to procedural error, their trends revealed three instructive aspects of the ring current's behavior. First, the drift direction of the ring current depended on the strength of the applied magnetic field. Second, the diamagnetic current developed, in the presence of solar wind, at a radial distance not occupied by the three current probes. Third, the ring current distribution varied along the circumferential path about the spacecraft's magnetic field.
Although this study yielded experimental evidence that differed from the original hypothesis, its three key findings have informed two critical MPS design solutions that could potentially improve thrust performance. The first design solution is the positioning of the plasma injection point. Based on the first of the three aspects of ring current behavior, the plasma injection point must be located at a distance from the MPS solenoid, rather than in close proximity to it, for the ring current to drift in the direction that results in magnetosphere inflation. The second design solution, motivated by the third aspect of ring current behavior, is a symmetrical configuration of plasma injection points. In this study, an asymmetrical configuration using one plasma source resulted in a non-uniform distribution of ring current along the azimuthal path. This distorts the geometry of the inflated magnetosphere, which reduces the deflection area for the solar wind. Therefore, to realize a ring current that provides the maximum possible inflated magnetosphere, multiple plasma sources must be spaced evenly apart so that plasma is injected evenly along the azimuthal path.

Keywords: Magnetoplasma Sail, magnetosphere inflation, ring current, spacecraft propulsion

Procedia PDF Downloads 310
98 Direct Current Electric Field Stimulation against PC12 Cells in 3D Bio-Reactor to Enhance Axonal Extension

Authors: E. Nakamachi, S. Tanaka, K. Yamamoto, Y. Morita

Abstract:

In this study, we developed a three-dimensional (3D) direct current electric field (DCEF) stimulation bio-reactor for enhancing axonal outgrowth in order to generate neural networks of the central nervous system (CNS). Using this newly developed 3D DCEF stimulation bio-reactor, we cultured rat pheochromocytoma (PC12) cells and investigated the effects on axonal extension enhancement and network generation. Firstly, we designed and fabricated a 3D bio-reactor that can apply DCEF stimulation to PC12 cells embedded in collagen gel as the extracellular environment. Salt bridges were used to connect the electrolyte and the medium for DCEF stimulation, to avoid cell death caused by metal-ion toxicity. The distance between the salt bridges was adopted as the design variable to optimize the structure for uniform DCEF stimulation, guided by finite element (FE) analyses. Uniform DCEF strength and electric flux vector direction in the PC12 cells embedded in collagen gel were examined through measurements of the fabricated 3D bio-reactor chamber; the measured DCEF strength in the bio-reactor showed good agreement with the FE results. In addition, a perfusion system was attached to maintain the medium at pH 7.2-7.6, because DCEF stimulation loading causes pH changes. Secondly, we seeded PC12 cells in collagen gel and carried out 3D culture. Finally, we measured the morphology of PC12 cell bodies and neurites with a multiphoton excitation fluorescence microscope (MPM) and investigated the effectiveness of DCEF stimulation in enhancing axonal outgrowth and neural network generation. We confirmed both an increase in mean axonal length and a higher axogenesis rate in PC12 cells exposed to 5 mV/mm for 6 hours a day over 4 days in the bio-reactor. We draw the following conclusions from our study. 1) The design and fabrication of a DCEF stimulation bio-reactor capable of 3D nerve cell culture were completed.
A uniform electric field strength with an average value of 17 mV/mm within a 1.2% error range was confirmed by FE analyses after the structure was determined through the optimization process. In addition, we attached a perfusion system capable of suppressing the pH change of the culture solution caused by DCEF stimulation loading. 2) The effects of DCEF stimulation on PC12 cell activity were evaluated. The 3D culture of PC12 cells was carried out using the embedding culture method, with collagen gel as a scaffold, for four days under conditions of 5.0 mV/mm and 10 mV/mm. There was a significant effect on the enhancement of axonal extension, namely an 11.3% increase in average length, as well as an increase in the axogenesis rate. On the other hand, no effect on the orientation of axons relative to the DCEF flux direction was observed. Furthermore, network generation was enhanced by DCEF stimulation, connecting neighboring target cells over longer distances.

Keywords: PC12, DCEF stimulation, 3D bio-reactor, axonal extension, neural network generation

Procedia PDF Downloads 185
97 The Impact of Anxiety on the Access to Phonological Representations in Beginning Readers and Writers

Authors: Regis Pochon, Nicolas Stefaniak, Veronique Baltazart, Pamela Gobin

Abstract:

Anxiety is known to have an impact on working memory. In reasoning or memory tasks, individuals with anxiety tend to show longer response times and poorer performance, and there is a memory bias for negative information in anxiety. Given the crucial role of working memory in lexical learning, anxious students may encounter more difficulties in learning to read and spell. Anxiety could even affect an earlier stage of learning, namely the activation of phonological representations, which are decisive for learning to read and write. The aim of this study is to compare the access to phonological representations of beginning readers and writers according to their level of anxiety, using an auditory lexical decision task. Eighty students aged 6 to 9 years completed the French version of the Revised Children's Manifest Anxiety Scale and were then divided into four anxiety groups according to their total score (Low, Median-Low, Median-High, and High). Two sets of eighty-one stimuli (words and non-words) were presented auditorily to these students via a laptop computer. The stimulus words were selected according to their emotional valence (positive, negative, neutral). Students had to decide as quickly and accurately as possible whether the presented stimulus was a real word or not (lexical decision). Response times and accuracy were recorded automatically on each trial. We anticipated a) longer response times for the Median-High and High anxiety groups in comparison with the two other groups, b) faster response times for negative-valence words in comparison with positive- and neutral-valence words only for the Median-High and High anxiety groups, c) lower response accuracy for the Median-High and High anxiety groups in comparison with the two other groups, and d) better response accuracy for negative-valence words in comparison with positive- and neutral-valence words only for the Median-High and High anxiety groups.
Concerning response times, our results showed no difference between the four groups. Furthermore, within each group, the average response times were very similar regardless of emotional valence. However, group differences appeared when considering error rates: the Median-High and High anxiety groups made significantly more errors in lexical decision than the Median-Low and Low groups. Better response accuracy for negative-valence words in comparison with positive- and neutral-valence words, however, was not found in the Median-High and High anxiety groups. Thus, these results showed lower response accuracy for the above-median anxiety groups than for the below-median groups, but without any specificity for negative-valence words. This study suggests that anxiety can negatively impact lexical processing in young students. Although lexical processing speed seems preserved, the accuracy of this processing may be altered in students with moderate or high levels of anxiety. This finding has important implications for the prevention of reading and spelling difficulties. Indeed, if anxiety affects the access to phonological representations during these learning processes, anxious students could be disturbed when they have to match phonological representations with new orthographic representations, because of less efficient lexical representations. This study should be continued in order to clarify the impact of anxiety on basic school learning.

Keywords: anxiety, emotional valence, childhood, lexical access

Procedia PDF Downloads 288
96 Screening Tools and Its Accuracy for Common Soccer Injuries: A Systematic Review

Authors: R. Christopher, C. Brandt, N. Damons

Abstract:

Background: The sequence of prevention model states that injury mechanisms and risk factors are identified through the constant assessment of injury, highlighting that the collection and recording of data is a core approach to preventing injuries. Several screening tools are available for use in the clinical setting. These screening techniques have only recently received research attention; hence, the data regarding their applicability, validity, and reliability are scarce, inconsistent, and controversial. Several systematic reviews related to common soccer injuries have been conducted; however, none of them addressed the screening tools for common soccer injuries. Objectives: The purpose of this study was to conduct a review of screening tools and their accuracy for common injuries in soccer. Methods: A systematic scoping review was performed based on the Joanna Briggs Institute procedure for conducting systematic reviews. Databases such as SPORTDiscus, CINAHL, Medline, ScienceDirect, PubMed, and grey literature were used to access suitable studies. Key search terms included: injury screening, screening, screening tool accuracy, injury prevalence, injury prediction, accuracy, validity, specificity, reliability, and sensitivity. All types of English-language studies dating back to the year 2000 were included. Two blinded independent reviewers selected and appraised articles on a 9-point scale for inclusion, as well as for risk of bias with the ACROBAT-NRSI tool. Data were extracted and summarized in tables. Plot data analysis was done, and sensitivity and specificity were analyzed with their respective 95% confidence intervals. The I² statistic was used to determine the proportion of variation across studies. Results: The initial search yielded 95 studies, of which 21 were duplicates and 54 were excluded. A total of 10 observational studies were included in the analysis: 3 studies were analysed quantitatively, while the remaining 7 were analysed qualitatively.
Seven studies were graded as low risk of bias and three as high risk. Only studies of high methodological quality (score > 9) were included in the analysis. The pooled studies investigated tools such as the Functional Movement Screening (FMS™), the Landing Error Scoring System (LESS), the Tuck Jump Assessment, the Soccer Injury Movement Screening (SIMS), and the conventional hamstrings-to-quadriceps ratio. The accuracy of the screening tools showed high reliability, sensitivity, and specificity (ICC 0.68, 95% CI: 0.52-0.84; and 0.64, 95% CI: 0.61-0.66, respectively; I² = 13.2%, p = 0.316). Conclusion: Based on the pooled results from the included studies, the FMS™ has good inter-rater and intra-rater reliability. The FMS™ is a screening tool capable of screening for common soccer injuries, and individual FMS™ scores are a better determinant of performance than the overall FMS™ score. Although a meta-analysis could not be done for all the included screening tools, qualitative analysis also indicated good sensitivity and specificity of the individual tools. Higher levels of evidence are, however, needed for implementation in evidence-based practice.
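As a rough sketch of pooling of this kind, an inverse-variance fixed-effect pooled estimate together with Cochran's Q and the I² heterogeneity statistic can be computed as follows. This is a generic illustration, not the review's actual analysis code; the function name and the input values in the test are assumptions.

```python
import numpy as np

def pooled_estimate(effects, variances):
    """Fixed-effect (inverse-variance) pooling with Cochran's Q and I².

    effects:   per-study effect estimates (e.g., sensitivity or specificity)
    variances: per-study sampling variances
    Returns (pooled estimate, 95% CI, I² in percent).
    """
    w = 1.0 / np.asarray(variances)             # inverse-variance weights
    e = np.asarray(effects)
    pooled = np.sum(w * e) / np.sum(w)          # pooled estimate
    Q = np.sum(w * (e - pooled) ** 2)           # Cochran's Q statistic
    df = len(e) - 1
    I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
    se = np.sqrt(1.0 / np.sum(w))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    return pooled, ci, I2
```

I² near zero (as in the reported 13.2%) indicates that little of the variation across studies is due to heterogeneity rather than chance.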

Keywords: accuracy, screening tools, sensitivity, soccer injuries, specificity

Procedia PDF Downloads 180
95 Evaluation of Ocular Changes in Hypertensive Disorders of Pregnancy

Authors: Rajender Singh, Nidhi Sharma, Aastha Chauhan, Meenakshi Barsaul, Jyoti Deswal, Chetan Chhikara

Abstract:

Introduction: Pre-eclampsia and eclampsia are hypertensive disorders of pregnancy with multisystem involvement and are common causes of morbidity and mortality in obstetrics. It is believed that changes in retinal arterioles may indicate similar changes in the placenta. Therefore, this study was undertaken to evaluate the ocular manifestations in cases of pre-eclampsia and eclampsia and to deduce any association between the retinal changes and blood pressure, the severity of disease, gravidity, proteinuria, and other lab parameters, so that a better approach could be devised to ensure maternal and fetal well-being. Materials and Methods: This was a hospital-based cross-sectional study conducted over a period of one year, from April 2021 to May 2022. 350 admitted patients with diagnosed pre-eclampsia, eclampsia, and pre-eclampsia superimposed on chronic hypertension were included in the study. A pre-structured proforma was used. After taking consent and ocular history, a bedside examination was done to record visual acuity, pupillary size, corneal curvature, field of vision, and intraocular pressure. Dilated fundus examination was done with a direct and indirect ophthalmoscope. Age, parity, BP, proteinuria, platelet count, and liver and kidney function tests were noted down. Only the patients with positive findings were followed up after 72 hours and 6 weeks of termination of pregnancy. Results: The mean age of patients was 26.18 ± 4.33 years (range 18-39 years). 157 (44.9%) were primigravida, while 193 (55.1%) were multigravida. 53 (15.1%) patients had eclampsia, 128 (36.5%) had mild pre-eclampsia, 128 (36.5%) had severe pre-eclampsia, and 41 (11.7%) had chronic hypertension with superimposed pre-eclampsia. Retinal changes were found in 208 patients (59.42%), with grade I changes the most common: 82 (23.14%) patients had grade I changes, 75 (21.4%) had grade II changes, 41 (11.71%) had grade III changes, and 11 (3.14%) had serous retinal detachment/grade IV changes.
36 patients had unaided visual acuity <6/9; of these, 17 had refractive errors and 19 (5.4%) had varying degrees of retinal changes. 3 (0.85%) of the 350 patients had an abnormal field of vision in both eyes; all 3 had eclampsia and bilateral exudative retinal detachment. At day 4, retinopathy had resolved in 10 patients, and visual acuity had improved in 3. At 6 weeks, retinopathy had resolved spontaneously in all patients except for the persistence of grade II changes in 23 patients with chronic hypertension with superimposed pre-eclampsia, while visual acuity and field of vision returned to normal in all patients. Pupillary size, intraocular pressure, and corneal curvature were within normal limits at all examinations. There was a statistically significant positive association between retinal changes and mean arterial pressure. The study showed a positive correlation between fundus findings and severity of disease (p < 0.05) and mean arterial pressure (p < 0.005). Primigravida had more retinal changes than multigravida patients. A significant association was found between fundus changes and thrombocytopenia and deranged liver and kidney function tests (p < 0.005). Conclusion: As the severity of pre-eclampsia and eclampsia increases, the incidence of retinopathy also increases, affecting patients' visual acuity and visual fields. Thus, timely ocular examination should be done in all such cases to prevent complications.

Keywords: eclampsia, hypertensive, ocular, pre-eclampsia

Procedia PDF Downloads 79
94 Improving the Biomechanical Resistance of a Treated Tooth via Composite Restorations Using Optimised Cavity Geometries

Authors: Behzad Babaei, B. Gangadhara Prusty

Abstract:

The objective of this study is to assess the hypothesis that a restored tooth with a class II occlusal-distal (OD) cavity can be strengthened by designing an optimized cavity geometry, as well as by selecting a composite restoration with optimized elastic moduli, when there is a sharp de-bonded edge at the interface of the tooth and restoration. Methods: A scanned human maxillary molar tooth was segmented into dentine and enamel parts. The dentine and enamel profiles were extracted and imported into finite element (FE) software, and the enamel rod orientations were estimated virtually. Fifteen models of the restored tooth with different occlusal cavity depths (1.5, 2, and 2.5 mm) and internal cavity angles were generated. Using a semi-circular stone part, a 400 N load was applied at two contact points of the restored tooth model. The junctions between the enamel, dentine, and restoration were considered perfectly bonded, and all parts in the model were treated as homogeneous, isotropic, and elastic. Quadrilateral and triangular elements were employed in the models. A mesh convergence analysis was conducted to verify that the element count did not influence the simulation results; according to a 5% stress-error criterion, meshes with more than 14,000 elements yielded converged stresses. A Python script was employed to automatically assign moduli of 2-22 GPa (in increments of 4 GPa) to the composite restorations, 18.6 GPa to the dentine, and two different elastic moduli to the enamel (72 GPa along the enamel rods' direction and 63 GPa perpendicular to it). Linear, homogeneous, and elastic material models were considered for the dentine, enamel, and composite restorations, and 108 FEA simulations were conducted successively. Results: The internal cavity angle (α) significantly altered the peak maximum principal stress at the interface of the enamel and restoration.
The strongest structures against the contact loads were observed in the models with α = 100° and 105°. Interestingly, even when the enamel rods' directional mechanical properties were disregarded, the models with α = 100° and 105° exhibited the highest resistance against the mechanical loads. Regarding the effect of occlusal cavity depth, the models with 1.5 mm depth showed higher resistance to contact loads than the models with deeper cavities (2.0 and 2.5 mm). Moreover, composite moduli in the range of 10-18 GPa alleviated the stress levels in the enamel. Significance: For the class II OD cavity models in this study, the optimal geometries, composite properties, and occlusal cavity depths were determined. Designing the cavities with α ≥ 100° was significantly effective in minimizing peak stress levels, and a composite restoration with optimized properties reduced the stress concentrations at critical points of the models. Additionally, when more enamel was preserved, the enamel-restoration interface was sturdier against the mechanical loads.
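A parametric sweep of the kind described (cavity depth × internal angle × composite modulus, driven by a script) can be sketched as a simple job generator. This is a hypothetical illustration: the angle levels and the job-naming scheme are assumptions, not the study's actual script.

```python
import itertools

# Hypothetical sweep driver: each combination of cavity depth, internal cavity
# angle, and composite modulus defines one FE job to be submitted to the solver.
depths_mm = [1.5, 2.0, 2.5]                    # occlusal cavity depths from the study
angles_deg = [90, 95, 100, 105, 110]           # assumed internal-angle levels
composite_E_GPa = range(2, 23, 4)              # 2, 6, 10, 14, 18, 22 GPa

def job_name(depth, angle, E):
    """Build a unique, human-readable identifier for one simulation case."""
    return f"tooth_d{depth}_a{angle}_E{E}"

jobs = [job_name(d, a, E)
        for d, a, E in itertools.product(depths_mm, angles_deg, composite_E_GPa)]
```

Enumerating cases this way makes it straightforward to script the solver calls and to associate each peak-stress result with its (depth, angle, modulus) triple afterwards.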

Keywords: dental composite restoration, cavity geometry, finite element approach, maximum principal stress

Procedia PDF Downloads 102
93 University Building: Discussion about the Effect of Numerical Modelling Assumptions for Occupant Behavior

Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli

Abstract:

The refurbishment of public buildings is one of the key factors in the energy efficiency policies of European states. Educational buildings account for the largest share of the oldest building stock, with interesting potential for demonstrating best practice with regard to high-performance, low- and zero-carbon design, and for becoming exemplar cases within the community. In this context, this paper discusses the critical issues of the energy refurbishment of a university building in the heating-dominated climate of southern Italy. More specifically, the importance of using validated models is examined exhaustively through an analysis of the uncertainties due to modelling assumptions, mainly the adoption of stochastic schedules for occupant behavior and equipment or lighting usage. Indeed, most commercial tools today provide designers with a library of possible schedules with which thermal zones can be described. Very often, users do not take care to differentiate thermal zones or to modify or adapt the predefined profiles, and design results are affected positively or negatively without any warning. Data such as occupancy schedules, internal loads, and the interaction between people and windows or plant systems represent some of the largest variables in energy modelling and in understanding calibration results. This is mainly due to the adoption of discrete, standardized, conventional schedules, with important consequences for the prediction of energy consumption. The problem is certainly difficult to examine and solve. In this paper, a sensitivity analysis is presented to understand the order of magnitude of the error committed by varying the deterministic schedules used for occupancy, internal loads, and the lighting system. This is a typical uncertainty for a case study such as the one presented here, where there is no regulation system for the HVAC system, so the occupants cannot interact with it.
More specifically, starting from the adopted schedules, which were created according to questionnaire responses and allowed a good calibration of the energy simulation model, several different scenarios are tested. Two types of analysis are presented. First, the reference building is compared with these scenarios in terms of the percentage difference in projected total electric energy need and natural gas request. Then the individual consumption entries are analyzed, and for the most interesting cases, the calibration indexes are also compared. Moreover, the same simulations are performed for the optimal refurbishment solution, and the resulting variation in predicted energy savings and global cost reduction is evidenced. This parametric study aims to underline the effect of the modelling assumptions made when describing thermal zones on the evaluation of performance indexes.
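The scenario-versus-reference comparison above boils down to a percentage difference on each energy entry. A minimal sketch with invented annual figures (the kWh values are illustrative assumptions, not the case study's data):

```python
def pct_diff(scenario, reference):
    """Percentage difference of a scenario's projected energy need vs the reference."""
    return 100.0 * (scenario - reference) / reference

# Illustrative (invented) annual figures in kWh:
# calibrated reference model vs one alternative-schedule scenario.
reference = {"electricity": 120_000, "natural_gas": 85_000}
scenario = {"electricity": 131_000, "natural_gas": 79_000}

deltas = {entry: pct_diff(scenario[entry], reference[entry]) for entry in reference}
```

A positive delta means the scenario's schedules inflate the projected need for that entry relative to the calibrated reference; a negative one means they deflate it.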

Keywords: energy simulation, modelling calibration, occupant behavior, university building

Procedia PDF Downloads 141
92 Reducing the Computational Cost of a Two-way Coupling CFD-FEA Model via a Multi-scale Approach for Fire Determination

Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Kevin Tinkham, Ella Quigley

Abstract:

Structural integrity is a key performance parameter for cladding products, especially concerning fire performance. Cladding products such as PIR-based sandwich panels are tested rigorously, in line with industrial standards. Physical fire tests are necessary to ensure customer safety but give little information about the critical behaviours that can help develop new materials. Numerical modelling is a tool that can help investigate a fire's behaviour further by replicating the fire test. However, fire is an interdisciplinary problem: it is a chemical reaction that behaves fluidly and impacts structural integrity. An analysis using Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) is needed to capture all aspects of a fire performance test. One method is a two-way coupling analysis that imports the updated changes in thermal data, due to the fire's behaviour, into the FEA solver in a series of iterations. In our recent work with Tata Steel UK using a two-way coupling methodology to determine fire performance, it was shown that a program called FDS-2-Abaqus can predict a BS 476-22 furnace test with a degree of accuracy. The test demonstrated the fire performance of Tata Steel UK's Trisomet product, a polyisocyanurate (PIR)-based sandwich panel used for cladding. Previous work demonstrated the limitations of the current version of the program, the main one being the computational cost of modelling three Trisomet panels, totalling an area of 9 m². The computational cost increases substantially with the intention to scale up to an LPS 1181-1 test, which involves a total panel surface area of 200 m². The FDS-2-Abaqus program is developed further within this paper to overcome this obstacle and better accommodate Tata Steel UK's PIR sandwich panels. The new developments aim to reduce the computational cost and the error margin compared to experimental data.
One avenue explored is a multi-scale approach in the form of Reduced Order Modelling (ROM). This approach allows the user to include refined details of the sandwich panels, such as the overlapping joints, without a computationally costly mesh size. Comparative studies will be made between the new implementations and the previous study completed using the original FDS-2-Abaqus program. Validation of the study will come from physical experiments in line with governing-body standards such as BS 476-22 and LPS 1181-1. The physical experimental data include the panels' gas and surface temperatures and mechanical deformation. Conclusions are drawn, noting the impact of the new implementations and discussing the feasibility of scaling up further to a whole warehouse.
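One common route to a ROM of the kind mentioned above is Proper Orthogonal Decomposition (POD): collect full-order solution snapshots, extract a low-rank basis via SVD, and project onto it. The sketch below illustrates that generic idea only; the 0.99 energy threshold and the function names are assumptions, and this is not the FDS-2-Abaqus implementation.

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """Proper Orthogonal Decomposition: build a reduced basis from snapshots.

    snapshots: (n_dof, n_snapshots) matrix of full-order solution fields
    energy:    fraction of snapshot energy the basis must capture
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cumulative, energy)) + 1   # number of retained modes
    return U[:, :r]                                    # (n_dof, r) reduced basis

def reduce_and_reconstruct(field, basis):
    """Project a full field onto the reduced space and lift it back."""
    coeffs = basis.T @ field        # r reduced coordinates instead of n_dof values
    return basis @ coeffs
```

The payoff is that the coupled iterations can evolve only the r reduced coordinates, which is far cheaper than resolving every degree of freedom of a fine mesh at each coupling step.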

Keywords: fire testing, numerical coupling, sandwich panels, thermo fluids

Procedia PDF Downloads 79
91 Regularizing Software for Aerosol Particles

Authors: Christine Böckmann, Julia Rosemann

Abstract:

We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides comparably high quality of the derived data products. The algorithm allows us to derive particle effective radius and volume and surface-area concentrations with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index remains a challenge, in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. Single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols as high- or low-absorbing. From a mathematical point of view, the algorithm is based on the concept of using truncated singular value decomposition as the regularization method. This method was adapted to the retrieval of the particle size distribution (PSD) function and is called a hybrid regularization technique, since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task, because very small measurement errors will most often be amplified enormously during the solution process unless an appropriate regularization method is used. Even using a regularization method is difficult, since appropriate regularization parameters have to be determined. Therefore, in the next stage of our work, we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Padé iteration.
Here, the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel-processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 µm, which not only covers the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers a part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%. In more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error for non- and weakly absorbing particles with real parts of 1.5 and 1.6 in all modes, the accuracy limit of +/-0.03 is achieved. In sum, 70% of all cases stay below +/-0.03, which is sufficient for climate change studies.
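The core idea of truncated SVD (TSVD) regularization can be sketched in a few lines. The following is an illustrative example on a synthetic ill-posed problem (the kernel, grid, and noise level are our assumptions, not the EARLINET retrieval code): a naive inversion amplifies tiny data errors enormously, while truncating the small singular values suppresses them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ill-posed problem g = K f + noise, a stand-in for retrieving a
# particle size distribution from optical data (kernel and grid are made up).
n = 40
x = np.linspace(0.05, 2.0, n)
K = np.exp(-np.subtract.outer(x, x) ** 2 / 0.1)   # smooth, ill-conditioned kernel
f_true = np.exp(-((x - 1.0) ** 2) / 0.05)         # mono-modal "PSD"
g = K @ f_true + 1e-4 * rng.standard_normal(n)    # noisy "optical data"

# Naive (essentially unregularized) inversion amplifies the noise hugely
f_naive = np.linalg.pinv(K, rcond=1e-15) @ g

# TSVD: keep only singular values above a threshold; the truncation level
# acts as the regularization parameter
U, s, Vt = np.linalg.svd(K)
k = int(np.sum(s > 1e-3 * s[0]))
f_tsvd = Vt[:k].T @ ((U[:, :k].T @ g) / s[:k])

err_naive = np.linalg.norm(f_naive - f_true)
err_tsvd = np.linalg.norm(f_tsvd - f_true)
```

In practice the truncation level (and the additional parameters of the hybrid scheme) must be chosen carefully, which is exactly the difficulty the abstract describes.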

Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization

Procedia PDF Downloads 343
90 Evaluation of the Effect of Learning Disabilities and Accommodations on the Prediction of the Exam Performance: Ordinal Decision-Tree Algorithm

Authors: G. Singer, M. Golan

Abstract:

Providing students with learning disabilities (LD) with extra time to grant them equal access to the exam is a necessary but insufficient condition to compensate for their LD; there should also be a clear indication that the additional time was actually used. For example, if students with LD use more time than students without LD and yet receive lower grades, this may indicate that a different accommodation is required. If they achieve higher grades but use the same amount of time, then the effectiveness of the accommodation has not been demonstrated. The main goal of this study is to evaluate the effect of including parameters related to LD and extended exam time, along with other commonly used characteristics (e.g., student background and ability measures such as high-school grades), on the ability of ordinal decision-tree algorithms to predict exam performance. We use naturally occurring data collected from hundreds of undergraduate engineering students. The sub-goals are i) to examine the improvement in prediction accuracy when the indicator of exam performance includes 'actual time used' in addition to the conventional indicator (exam grade) employed in most research; ii) to explore the effectiveness of extended exam time on exam performance for different courses and for LD students with different profiles (i.e., sets of characteristics). This is achieved by using the patterns (i.e., subgroups) generated by the algorithms to identify pairs of subgroups that differ in just one characteristic (e.g., course or type of LD) but have different outcomes in terms of exam performance (grade and time used). Since grade and time used both exhibit an ordering, we propose a method based on ordinal decision trees, which applies a weighted information-gain ratio (WIGR) measure for selecting the classifying attributes. Unlike other known ordinal algorithms, our method does not assume monotonicity in the data.
The proposed WIGR is an extension of an information-theoretic measure, in the sense that it adjusts to the case of an ordinal target and takes into account the error severity between two different target classes. Specifically, we use ordinal C4.5, random-forest, and AdaBoost algorithms, as well as an ensemble technique composed of ordinal and non-ordinal classifiers. Firstly, we find that the inclusion of LD and extended exam-time parameters improves prediction of exam performance (compared to specifications of the algorithms that do not include these variables). Secondly, when the indicator of exam performance includes 'actual time used' together with grade (as opposed to grade only), the prediction accuracy improves. Thirdly, our subgroup analyses show clear differences in the effect of extended exam time on exam performance among different courses and different student profiles. From a methodological perspective, we find that the ordinal decision-tree based algorithms outperform their conventional, non-ordinal counterparts. Further, we demonstrate that the ensemble-based approach leverages the strengths of each type of classifier (ordinal and non-ordinal) and yields better performance than each classifier individually.
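To make the ordinal idea concrete, here is a minimal, hypothetical sketch of an error-severity-aware split score in the spirit of WIGR (the weighting scheme and names are our illustration, not the authors' exact measure): each class's entropy term is weighted by its ordinal distance from the node's median class, so a split that confuses distant grades is penalized more than one that confuses adjacent grades.

```python
import numpy as np

def ordinal_weighted_entropy(y, n_classes):
    """Entropy with each class term weighted by |class - median class| + 1."""
    y = np.asarray(y)
    med = int(np.median(y))
    h = 0.0
    for c in range(n_classes):
        p = np.mean(y == c)
        if p > 0:
            w = abs(c - med) + 1          # error-severity weight (assumed form)
            h -= w * p * np.log2(p)
    return h

def ordinal_gain(y, mask, n_classes):
    """Weighted information gain of splitting y by a boolean mask."""
    n = len(y)
    parent = ordinal_weighted_entropy(y, n_classes)
    left, right = y[mask], y[~mask]
    child = (len(left) / n) * ordinal_weighted_entropy(left, n_classes) \
          + (len(right) / n) * ordinal_weighted_entropy(right, n_classes)
    return parent - child

# Toy data: exam grades (0 = lowest ... 3 = highest) vs. an extended-time flag
y = np.array([0, 0, 1, 1, 2, 2, 3, 3])
extended_time = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=bool)
gain_informative = ordinal_gain(y, extended_time, 4)
gain_random = ordinal_gain(y, np.array([1, 0, 1, 0, 1, 0, 1, 0], dtype=bool), 4)
```

An attribute that cleanly separates low from high grades (here, the extended-time flag) scores a much larger gain than an uninformative one, which is the behavior a tree-building algorithm exploits when selecting classifying attributes.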

Keywords: actual exam time usage, ensemble learning, learning disabilities, ordinal classification, time extension

Procedia PDF Downloads 101
89 Assessing the Structure of Non-Verbal Semantic Knowledge: The Evaluation and First Results of the Hungarian Semantic Association Test

Authors: Alinka Molnár-Tóth, Tímea Tánczos, Regina Barna, Katalin Jakab, Péter Klivényi

Abstract:

Supported by neuroscientific findings, the so-called Hub-and-Spoke model of the human semantic system is based on two subcomponents of semantic cognition, namely the semantic control process and semantic representation. Our semantic knowledge is multimodal in nature, as the knowledge system stored in relation to a concept is extensive and broad, while different aspects of the concept may be relevant depending on the purpose. The motivation of our research is to develop a new diagnostic measurement procedure based on the preservation of semantic representation, which is appropriate to the specificities of the Hungarian language and which can be used to compare the non-verbal semantic knowledge of healthy and aphasic persons. The development of the test will broaden the Hungarian clinical diagnostic toolkit, which will allow for more specific therapy planning. The sample of healthy persons (n=480) was determined based on the latest census data to ensure the representativeness of the sample. Based on the concept of the Pyramids and Palm Trees Test, and according to the characteristics of the Hungarian language, we have elaborated a test based on different types of semantic information, in which the subjects are presented with three pictures: they have to choose the one that best fits the target word above from the two lower options, based on the semantic relation defined. We measured 5 types of semantic knowledge representations: associative relations, taxonomy, motion representations, and concrete as well as abstract verbs. As the first step in our data analysis, we examined whether our results were normally distributed, and since they were not (p < 0.05), we used nonparametric statistics for the rest of the analysis. Using descriptive statistics, we could determine the frequency of the correct and incorrect responses, and with this knowledge, we could later adjust and remove the items of questionable reliability.
The reliability was tested using Cronbach's α, and all the results were within an acceptable range of reliability (α = 0.6-0.8). We then tested for potential gender differences using the Mann-Whitney U test; however, we found no difference between the two genders (p > 0.05). Likewise, age had no effect on the results according to a one-way ANOVA (p > 0.05); however, the level of education did influence the results (p < 0.05). The relationships between the subtests were examined with the nonparametric Spearman's rho correlation matrix, which showed statistically significant correlations between the subtests (p < 0.05), signifying a linear relationship between the measured semantic functions. A margin of error of 5% was used in all cases. The research will contribute to the expansion of the clinical diagnostic toolkit and will be relevant for the individualised therapeutic design of treatment procedures. The use of a non-verbal test procedure will allow an early assessment of the most severe language conditions, which is a priority in the differential diagnosis. The measurement of reaction time is expected to advance prodrome research, as the tests can be easily conducted in the subclinical phase.
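The analysis pipeline described above (normality check, then nonparametric tests) can be sketched with SciPy; the scores below are random synthetic stand-ins, not the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical subtest scores for two groups (e.g., men and women)
scores_m = rng.integers(10, 21, size=120)
scores_f = rng.integers(10, 21, size=120)

# Shapiro-Wilk normality check: a small p-value would justify switching
# to nonparametric tests, as done in the study
_, p_norm = stats.shapiro(scores_m)

# Mann-Whitney U test: a large p-value means no evidence of a group difference
_, p_gender = stats.mannwhitneyu(scores_m, scores_f)

# Spearman's rho between two (deliberately correlated) synthetic subtests
sub_a = rng.normal(size=200)
sub_b = sub_a + 0.5 * rng.normal(size=200)
rho, p_rho = stats.spearmanr(sub_a, sub_b)
```

For a full correlation matrix across all five subtests, `stats.spearmanr` can be called on a 2-D array of scores, one column per subtest.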

Keywords: communication disorders, diagnostic toolkit, neurorehabilitation, semantic knowledge

Procedia PDF Downloads 104
88 Deep Learning Approach for Colorectal Cancer’s Automatic Tumor Grading on Whole Slide Images

Authors: Shenlun Chen, Leonard Wee

Abstract:

Tumor grading is an essential reference for colorectal cancer (CRC) staging and survival prognostication. The widely used World Health Organization (WHO) grading system defines the histological grade of CRC adenocarcinoma based on the density of glandular formation on whole slide images (WSI). Tumors are classified as well-, moderately-, poorly- or un-differentiated depending on the percentage of the tumor that is gland forming: >95%, 50-95%, 5-50% and <5%, respectively. However, manually grading WSIs is a time-consuming process and can cause observer error due to subjective judgment and unnoticed regions. Furthermore, pathologists' grading is usually coarse, while a finer and continuous differentiation grade may help to stratify CRC patients better. In this study, a deep learning based automatic differentiation grading algorithm was developed and evaluated by survival analysis. Firstly, a gland segmentation model was developed for segmenting gland structures. Gland regions of WSIs were delineated and used for differentiation annotation. Tumor regions were annotated by experienced pathologists into high-, medium-, low-differentiation and normal tissue, corresponding to tumors with clear gland structure, unclear gland structure, no gland structure, and non-tumor, respectively. Then a differentiation prediction model was developed on these human annotations. Finally, all enrolled WSIs were processed by the gland segmentation model and the differentiation prediction model. The differentiation grade can be calculated from the deep learning models' predictions of tumor regions and tumor differentiation status according to the WHO definitions. If a patient had multiple WSIs, the highest differentiation grade was chosen. Additionally, the differentiation grade was normalized to a scale between 0 and 1. The Cancer Genome Atlas colon adenocarcinoma (TCGA-COAD) project was enrolled in this study.
For the gland segmentation model, the area under the receiver operating characteristic (ROC) curve reached 0.981 and accuracy reached 0.932 in the validation set. For the differentiation prediction model, ROC reached 0.983, 0.963, 0.963, 0.981 and accuracy reached 0.880, 0.923, 0.668, 0.881 for the groups of low-, medium-, high-differentiation and normal tissue in the validation set. Four hundred and one patients were selected after removing WSIs without gland regions and patients without follow-up data. The concordance index reached 0.609. An optimized cut-off point of 51% was found by the "Maxstat" method, which was almost the same as the WHO system's cut-off point of 50%. Both the WHO system's cut-off point and the optimized cut-off point performed impressively in Kaplan-Meier curves, and both p-values of the log-rank test were below 0.005. In this study, the gland structure of WSIs and the differentiation status of tumor regions were proven to be predictable through deep learning methods. A finer and continuous differentiation grade can also be automatically calculated through the above models. The differentiation grade was proven to stratify CRC patients well in survival analysis, and its optimized cut-off point was almost the same as that of the WHO tumor grading system. A tool for automatically calculating the differentiation grade may show potential in the field of therapy decision making and personalized treatment.
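Two small steps mentioned above, min-max normalization of a continuous grade to [0, 1] and ROC-based evaluation, can be sketched as follows on synthetic model outputs (not the paper's data); the AUC is computed via the rank (Mann-Whitney) formulation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical continuous model output and noisy binary ground truth
raw_grade = rng.normal(loc=0.0, scale=1.0, size=300)
labels = (raw_grade + rng.normal(scale=1.0, size=300) > 0).astype(int)

# Min-max normalization of the grade onto [0, 1], as described in the text
grade01 = (raw_grade - raw_grade.min()) / (raw_grade.max() - raw_grade.min())

def roc_auc(scores, y):
    """AUC = P(score_pos > score_neg), computed from ranks (no ties assumed)."""
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = y.sum()
    n_neg = len(y) - n_pos
    return (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

auc = roc_auc(grade01, labels)
```

Note that AUC is invariant under the monotone min-max rescaling, so normalizing the grade changes its interpretability, not its discriminative power.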

Keywords: colorectal cancer, differentiation, survival analysis, tumor grading

Procedia PDF Downloads 134
87 An Improved Atmospheric Correction Method with Diurnal Temperature Cycle Model for MSG-SEVIRI TIR Data under Clear Sky Condition

Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yonggang Qian, Ning Wang

Abstract:

Knowledge of land surface temperature (LST) is of crucial importance in energy balance studies and environmental modeling. Satellite thermal infrared (TIR) imagery is the primary source for retrieving LST at the regional and global scales. Because the radiance received by TIR sensors combines contributions from the atmosphere and the land surface, atmospheric correction has to be performed to remove the atmospheric transmittance and upwelling radiance. The Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG) provides measurements every 15 minutes in 12 spectral channels covering the visible to infrared spectrum at fixed view angles with a 3 km pixel size at nadir, offering new and unique capabilities for LST and land surface emissivity (LSE) measurements. However, due to its high temporal resolution, the atmospheric correction cannot be performed with radiosonde profiles or reanalysis data, since these profiles are not available at all SEVIRI TIR image acquisition times. To solve this problem, a two-part six-parameter semi-empirical diurnal temperature cycle (DTC) model has been applied to the temporal interpolation of ECMWF reanalysis data. Because the DTC model is underdetermined with ECMWF data at only four synoptic times per day (UTC 00:00, 06:00, 12:00, 18:00) for each location, several approaches are adopted in this study. It is well known that the atmospheric transmittance and upwelling radiance have a relationship with water vapour content (WVC). With the aid of simulated data, this relationship can be determined for each viewing zenith angle and each SEVIRI TIR channel. Thus, the atmospheric transmittance and upwelling radiance are preliminarily removed with the aid of the instantaneous WVC, which is retrieved from the brightness temperatures in SEVIRI channels 5, 9 and 10, and a group of brightness temperatures for the surface-leaving radiance (Tg) is acquired.
Subsequently, a group of the six parameters of the DTC model is fitted to these Tg by a Levenberg-Marquardt least squares algorithm (denoted as DTC model 1). Although the retrieval error of the WVC and the approximate relationships between WVC and the atmospheric parameters introduce some uncertainties, these do not significantly affect the determination of the three parameters td, ts and β in the DTC model (β is the angular frequency, td is the time at which Tg reaches its maximum, and ts is the starting time of attenuation). Furthermore, due to the large fluctuation in temperature and the inaccuracy of the DTC model around sunrise, SEVIRI measurements from two hours before sunrise to two hours after sunrise are excluded. With the knowledge of td, ts, and β, a new DTC model (denoted as DTC model 2) is fitted again to the Tg at UTC times 05:57, 11:57, 17:57 and 23:57, which are atmospherically corrected with ECMWF data. A new group of the six parameters of the DTC model is thereby generated, and subsequently the Tg at any given time are acquired. Finally, this method is applied successfully to SEVIRI data in channel 9. The results show that the proposed method performs reasonably without additional assumptions, and the Tg derived with the improved method is much more consistent with that from radiosonde measurements.
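A two-branch DTC model of the general Göttsche-Olesen type and its Levenberg-Marquardt fit can be sketched as follows; the exact parameterization used in the paper may differ, and the parameter values and sampling here are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def dtc(t, T0, Ta, omega, tm, ts, k):
    """Cosine daytime branch; exponential nocturnal decay after time ts.
    The two branches are continuous at t = ts by construction."""
    day = T0 + Ta * np.cos(np.pi / omega * (t - tm))
    Ts = T0 + Ta * np.cos(np.pi / omega * (ts - tm))
    night = T0 + (Ts - T0) * np.exp(-(t - ts) / k)
    return np.where(t < ts, day, night)

t = np.linspace(0.0, 24.0, 97)                   # 15-min SEVIRI-like sampling (h)
true_p = (290.0, 12.0, 11.0, 13.0, 17.5, 3.5)    # hypothetical "true" parameters
rng = np.random.default_rng(1)
Tg = dtc(t, *true_p) + 0.3 * rng.standard_normal(t.size)   # synthetic Tg series

p0 = (288.0, 10.0, 12.0, 12.0, 18.0, 4.0)        # rough first guess
popt, _ = curve_fit(dtc, t, Tg, p0=p0, method="lm")   # Levenberg-Marquardt
rmse = np.sqrt(np.mean((dtc(t, *popt) - Tg) ** 2))
```

With the six parameters fitted, the model can then be evaluated at any time of day, which is exactly what enables the temporal interpolation between the four synoptic ECMWF times.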

Keywords: atmosphere correction, diurnal temperature cycle model, land surface temperature, SEVIRI

Procedia PDF Downloads 268
86 Gastro-Protective Actions of Melatonin and Murraya koenigii Leaf Extract Combination in Piroxicam Treated Male Wistar Rats

Authors: Syed Benazir Firdaus, Debosree Ghosh, Aindrila Chattyopadhyay, Kuladip Jana, Debasish Bandyopadhyay

Abstract:

The gastro-toxic effect of piroxicam, a classical non-steroidal anti-inflammatory drug (NSAID), has restricted its use in arthritis and similar diseases. The present study aims to determine whether a combination of melatonin and Murraya koenigii leaf extract therapy can protect against piroxicam-induced ulcerative damage in rats. For this study, rats were divided into four groups: a control group in which rats were orally administered distilled water, a combination-only group, a piroxicam-treated group, and a combination pre-administered piroxicam-treated group. Each group consisted of six animals. Melatonin at a dose of 20 mg/kg body weight and antioxidant-rich Murraya koenigii leaf extract at a dose of 50 mg/kg body weight were administered successively at a 30-minute interval, one hour before oral administration of piroxicam at a dose of 30 mg/kg body weight, to Wistar rats in the combination pre-administered piroxicam-treated group. Rats in the combination-only group were administered both drugs without piroxicam treatment, whereas the piroxicam-treated group received only piroxicam at 30 mg/kg body weight without any pre-treatment with the combination. Macroscopic examination, along with histopathological study of gastric tissue using haematoxylin-eosin and alcian blue staining, showed protection of the gastric mucosa in the combination pre-administered piroxicam-treated group. Biochemical determination of adherent mucus content and determination of collagen content through ImageJ analysis of picro-sirius-stained sections of rat gastric tissue also revealed the protective effects of the combination against piroxicam-mediated toxicity. The gelatinolytic activity of piroxicam was significantly reduced by pre-administration of the drugs, as clearly shown by the gelatin zymography study of the rat gastric tissue.
The mean ulcer index determined from macroscopic study of the rat stomach was reduced to a minimum (0±0.00; mean ± standard error of the mean, n=6), indicating the absence of ulcer spots on pre-treatment of rats with the combination. The gastro-friendly prostaglandin PGE2, which otherwise becomes depleted on piroxicam treatment, was also well protected when the combination was pre-administered prior to piroxicam treatment. The requirement of the individual drugs in low doses in this combinatorial therapeutic approach will possibly minimize the cost of therapy and will eliminate the possibility of pro-oxidant side effects associated with high doses of antioxidants. The beneficial activity of this combination therapy in the rat model raises the possibility that similar protective actions might also be observed if it is adopted by patients consuming NSAIDs like piroxicam. However, the introduction of any such therapeutic approach is subject to future studies in humans.

Keywords: gastro-protective action, melatonin, Murraya koenigii leaf extract, piroxicam

Procedia PDF Downloads 308
85 An Exploration of the Emergency Staff’s Perceptions and Experiences of Teamwork and the Skills Required in the Emergency Department in Saudi Arabia

Authors: Sami Alanazi

Abstract:

Teamwork practices have been recognized as a significant strategy to improve patient safety, quality of care, and staff and patient satisfaction in healthcare settings, particularly within the emergency department (ED). EDs depend heavily on teams of interdisciplinary healthcare staff to carry out their operational goals and core business of providing care to the seriously ill and injured. The ED is also recognized as a high-risk area in relation to service demand and the potential for human error. Few studies have considered the perceptions and experiences of ED staff (physicians, nurses, allied health professionals, and administrative staff) regarding the practice of teamwork, especially in Saudi Arabia (SA), and no studies have explored the practices of teamwork in its EDs. Aim: To explore the practices of teamwork from the perspectives and experiences of staff (physicians, nurses, allied health professionals, and administrative staff) when interacting with each other in the admission areas of the ED of a public hospital in the Northern Border region of SA. Method: A qualitative case study design was utilized, drawing on two methods for data collection. The first comprised semi-structured interviews (n=22) with physicians (6), nurses (10), allied health professionals (3), and administrative members (3) working in the ED of a hospital in the Northern Border region of SA. The second method was non-participant direct observation. All data were analyzed using thematic analysis. Findings: The main themes that emerged from the analysis were as follows: the meaning of teamwork, reasons for teamwork, ED environmental factors, organizational factors, the value of communication, leadership, teamwork skills in the ED, team members' behaviors, multicultural teamwork, and the behaviors of patients and families. Discussion: Working in the ED environment played a major role in affecting work performance as well as team dynamics.
Communication, time management, fast-paced performance, multitasking, motivation, leadership, and stress management were highlighted by the participants as fundamental skills that have a major impact on team members and patients in the ED. It was found that the behaviors of the team members, such as disputes, conflict, cooperation or the lack of it, neglect, and members' emotions, impacted the team dynamics as well as ED health services. Besides that, the behaviors of the patients and their companions had a direct impact on the team and the quality of the services. In addition, cultural differences separated team members and created undesirable gaps, such as gender segregation, discrimination based on national origin, and similarities and differences in interests. Conclusion: Effective teamwork, in the context of the emergency department, was recognized as an essential element in achieving quality of care as well as improving staff satisfaction.

Keywords: teamwork, barrier, facilitator, emergency department

Procedia PDF Downloads 142
84 The Influence of Microsilica on the Cluster Cracks' Geometry of Cement Paste

Authors: Maciej Szeląg

Abstract:

The changing nature of the environmental impacts under which cement composites operate causes a number of phenomena in the structure of the material, which result in volume deformation of the composite. These strains can cause cracking of the composite. Cracks merge by propagation or intersect to form a characteristic structure of cracks known as cluster cracks. This characteristic mesh of cracks is crucial to almost all building materials working under service-load conditions. Particularly dangerous for a cement matrix is a sudden load of elevated temperature, i.e. thermal shock: a large temperature gradient between the outer surface and the material's interior, arising in a relatively short period of time, can result in cracks forming on the surface and in the volume of the material. In this paper, image analysis tools were used to analyze the geometry of the cluster cracks of cement pastes. Four series of specimens, made of two different Portland cements, were tested. In addition, two series included microsilica as a substitute for 10% of the cement. Within each series, specimens were prepared at three water/binder (w/b) ratios: 0.4, 0.5, and 0.6. The cluster cracks were created by suddenly loading the samples with an elevated temperature of 250°C. Images of the cracked surfaces were obtained by scanning at 2400 DPI. Digital processing and measurements were performed using ImageJ v. 1.46r software. To describe the structure of the cluster cracks, three stereological parameters were proposed: the average cluster area Ā, the average length of the cluster perimeter L̄, and the average opening width of a crack between clusters Ī. The aim of the study was to identify and evaluate the relationships between the measured stereological parameters and the compressive strength and bulk density of the modified cement pastes. The tests of the mechanical and physical features were carried out in accordance with EN standards.
The curves describing the relationships were developed using the least squares method, and the quality of the curve fit to the empirical data was evaluated using three diagnostic statistics: the coefficient of determination R², the standard error of estimation Se, and the coefficient of random variation W. The use of image analysis allowed for a quantitative description of the cluster cracks' geometry. Based on the obtained results, a strong correlation was found between Ā and L̄, reflecting the fractal nature of the cluster crack formation process. It was noted that the compressive strength and the bulk density of the cement pastes decrease with an increase in the values of the stereological parameters. It was also found that the main factors impacting the cluster cracks' geometry are the cement particle size and the overall content of the binder in the volume of the material. The microsilica reduced the Ā, L̄ and Ī values compared to those obtained for the plain cement paste samples, which is caused by the pozzolanic properties of the microsilica.
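The three diagnostic statistics can be computed directly from a least-squares fit. The sketch below fits a line to synthetic strength-versus-Ā data (illustrative values, not the paper's measurements) and evaluates R², Se, and W, with W taken here as Se expressed as a percentage of the mean response.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data: compressive strength decreasing with average cluster area
A_bar = np.linspace(5.0, 40.0, 12)                                 # mm^2
f_c = 60.0 - 0.8 * A_bar + rng.normal(scale=1.5, size=A_bar.size)  # MPa

# Linear least-squares fit f_c = a * A_bar + b
a, b = np.polyfit(A_bar, f_c, 1)
pred = a * A_bar + b

ss_res = np.sum((f_c - pred) ** 2)                 # residual sum of squares
ss_tot = np.sum((f_c - f_c.mean()) ** 2)           # total sum of squares
R2 = 1.0 - ss_res / ss_tot                         # coefficient of determination
Se = np.sqrt(ss_res / (len(f_c) - 2))              # n - 2 fitted parameters
W = 100.0 * Se / f_c.mean()                        # coefficient of variation (%)
```

The same three statistics apply unchanged to nonlinear curve forms, with the denominator of Se adjusted to n minus the number of fitted parameters.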

Keywords: cement paste, cluster cracks, elevated temperature, image analysis, microsilica, stereological parameters

Procedia PDF Downloads 246
83 Computer Aide Discrimination of Benign and Malignant Thyroid Nodules by Ultrasound Imaging

Authors: Akbar Gharbali, Ali Abbasian Ardekani, Afshin Mohammadi

Abstract:

Introduction: Thyroid nodules have an incidence of 33-68% in the general population. More than 5-15% of these nodules are malignant. Early detection and treatment of thyroid nodules increase the cure rate and provide optimal treatment. Among medical imaging methods, ultrasound is the technique of choice for the assessment of thyroid nodules. Confirming the diagnosis usually demands repeated fine-needle aspiration biopsy (FNAB), so current management has morbidity and non-zero mortality. Objective: To explore the diagnostic potential of automatic texture analysis (TA) methods in differentiating benign and malignant thyroid nodules on ultrasound imaging, in order to support reliable diagnosis and monitoring of thyroid nodules in their early stages without the need for biopsy. Material and Methods: The thyroid ultrasound image database consisted of 70 patients (26 benign and 44 malignant), reported by a radiologist and proven by biopsy. Two slices per patient were loaded into MaZda software version 4.6 for automatic texture analysis. Regions of interest (ROIs) were defined within the abnormal part of the thyroid nodule ultrasound images. Gray levels within an ROI were normalized according to three normalization schemes: N1: default or original gray levels; N2: dynamic intensity limited to µ +/- 3σ; and N3: intensity limited to the 1%-99% percentile range. Up to 270 multiscale texture feature parameters per ROI were computed for each normalization scheme using the well-known statistical methods employed in MaZda software. From a statistical point of view, not all calculated texture feature parameters are useful for texture analysis, so the features were reduced to the 10 best and most effective per normalization scheme, based on the maximum Fisher coefficient and the minimum probability of classification error combined with average correlation coefficients (POE+ACC).
We analyzed these features under two standardization states (standard (S) and non-standard (NS)) with Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Non-Linear Discriminant Analysis (NDA). A 1-NN classifier was used to distinguish between benign and malignant tumors. The confusion matrix and receiver operating characteristic (ROC) curve analysis were used to formulate more reliable criteria for the performance of the employed texture analysis methods. Results: The results demonstrated the influence of the normalization schemes and reduction methods on the effectiveness of the obtained features as descriptors of discrimination power and on the classification results. The subset of features selected under 1%-99% normalization, POE+ACC reduction and NDA texture analysis yielded a high discrimination performance, with an area under the ROC curve (Az) of 0.9722 in distinguishing benign from malignant thyroid nodules, corresponding to a sensitivity of 94.45%, a specificity of 100%, and an accuracy of 97.14%. Conclusions: Our results indicate that computer-aided diagnosis is a reliable method and can provide useful information to help radiologists in the detection and classification of benign and malignant thyroid nodules.
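The three gray-level normalization schemes can be sketched as follows. This is an illustrative re-implementation on a synthetic ROI; MaZda's internal handling may differ in details such as rounding and quantization depth.

```python
import numpy as np

rng = np.random.default_rng(5)
roi = rng.normal(120.0, 25.0, size=(64, 64))   # synthetic ROI intensities
roi[0, 0] = 255.0                              # one bright outlier pixel

def normalize(roi, scheme="N1", levels=256):
    """Requantize an ROI to `levels` gray levels under scheme N1/N2/N3."""
    roi = roi.astype(float)
    if scheme == "N2":                 # dynamic range limited to mu +/- 3 sigma
        lo, hi = roi.mean() - 3 * roi.std(), roi.mean() + 3 * roi.std()
    elif scheme == "N3":               # range limited to the 1%-99% percentiles
        lo, hi = np.percentile(roi, [1, 99])
    else:                              # N1: original (min-max) gray levels
        lo, hi = roi.min(), roi.max()
    clipped = np.clip(roi, lo, hi)
    return np.round((clipped - lo) / (hi - lo) * (levels - 1)).astype(int)

n1, n2, n3 = (normalize(roi, s) for s in ("N1", "N2", "N3"))
```

Schemes N2 and N3 clip outliers before requantization, which is why they can markedly change the histogram- and co-occurrence-based texture features computed afterwards.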

Keywords: ultrasound imaging, thyroid nodules, computer aided diagnosis, texture analysis, PCA, LDA, NDA

Procedia PDF Downloads 281
82 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics

Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin

Abstract:

Within the past decade, the use of Convolutional Neural Networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking down the communication barrier for deaf people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the currently developing technology is that images are scarce, with little variation in the gestures being presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes the fingerspelling of a percentage of the population harder to detect. In addition, current gesture detection programs are only trained on one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this presents a limitation for the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is modeled as an operator mapping an input from the set of images u ∈ U to an output in a set of predicted class labels q ∈ Q, identifying the alphanumeric that q represents and the language it comes from.
These inputs and outputs, along with the internal variables z ∈ Z, represent the system's current state, which implies a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xi are i.i.d. vectors drawn from a product measure distribution, over a period of time the AI generates a large set of measurements xi, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can then be applied by centering S and Y, i.e., subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix and then whitened, before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these lines, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
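The corrector construction can be sketched on synthetic measurements. This is our simplified illustration of the stochastic-separation idea with made-up cluster geometry: the full pipeline builds several pairwise-correlated clusters and hyperplanes, whereas the sketch below separates a single error cluster with one hyperplane.

```python
import numpy as np

rng = np.random.default_rng(9)

M = rng.normal(0.0, 1.0, size=(500, 20))          # "correct" measurements
Y = rng.normal(0.0, 1.0, size=(20, 20)) + 4.0     # "errors": a shifted cluster
S = np.vstack([M, Y])                             # all measurements

# Center, eigendecompose the covariance, apply the Kaiser rule (keep
# components whose eigenvalue exceeds the mean eigenvalue), then whiten.
mu = S.mean(axis=0)
Sc = S - mu
evals, evecs = np.linalg.eigh(np.cov(Sc, rowvar=False))
keep = evals > evals.mean()
W = evecs[:, keep] / np.sqrt(evals[keep])         # whitening projection
Sw = Sc @ W

# Single separating hyperplane: normal through the whitened error-cluster mean,
# threshold midway between the two clusters' projections.
Yw = (Y - mu) @ W
w = Yw.mean(axis=0)
w /= np.linalg.norm(w)
threshold = 0.5 * (Sw[:500] @ w).max() + 0.5 * (Yw @ w).min()
flagged = Sw @ w > threshold                      # True -> report as error
```

In high dimension, the stochastic separation results guarantee that such a single linear functional separates a few known errors from a large set of correct responses with high probability, which is what makes the one-shot corrector cheap to train.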

Keywords: convolutional neural networks, deep learning, shallow correctors, sign language

Procedia PDF Downloads 101
81 The Importance of Dialogue, Self-Respect, and Cultural Etiquette in Multicultural Society: An Islamic and Secular Perspective

Authors: Julia A. Ermakova

Abstract:

In today's multicultural societies, dialogue, self-respect, and cultural etiquette play a vital role in fostering mutual respect and understanding. Whether viewed from an Islamic or secular perspective, the importance of these values cannot be overstated. Firstly, dialogue is essential in multicultural societies as it allows individuals from different cultural backgrounds to exchange ideas, opinions, and experiences. To engage in dialogue, one must be open and willing to listen, understand, and respect the views of others. This requires a level of self-awareness, where individuals must know themselves and their interlocutors to create a productive and respectful conversation. Secondly, self-respect is crucial for individuals living in multicultural societies (McLarney). One must have adequately high self-esteem and self-confidence to interact with others positively. By valuing oneself, individuals can create healthy relationships and foster mutual respect, which is essential in diverse communities. Thirdly, cultural etiquette is a way of demonstrating the beauty of one's culture by exhibiting good temperament (Al-Ghazali). Adab, a concept that encompasses good manners, praiseworthy words and deeds, and the pursuit of what is considered good, is highly valued in Islamic teachings. By adhering to Adab, individuals can guard against making mistakes and demonstrate respect for others. Islamic teachings provide etiquette for every situation in life, making up the way of life for Muslims. In the Islamic view, an elegant Muslim woman has several essential qualities, including cultural speech and erudition, speaking style, awareness of how to greet, the ability to receive compliments, lack of desire to argue, polite behavior, avoiding personal insults, and having good intentions (Al-Ghazali). The Quran highlights the inclination of people towards arguing, bickering, and disputes (Qur'an, 4:114). 
Therefore, it is imperative to avoid useless arguments and disputes, for they are a poison that poisons our lives. The Prophet Muhammad, peace and blessings be upon him, warned that the most hateful person to Allah is an irreconcilable disputant (Al-Ghazali). By refraining from such behavior, individuals can foster respect and understanding in multicultural societies. From a secular perspective, respecting the views of others is crucial to engaging in productive dialogue. The rule of argument emphasizes the importance of showing respect for the other person's views, allowing for the possibility of error on one's part, and avoiding telling someone they are wrong (Atamali). By exhibiting polite behavior and having respect for everyone, individuals can create a welcoming environment and avoid conflict. In conclusion, the importance of dialogue, self-respect, and cultural etiquette in multicultural societies cannot be overstated. By engaging in dialogue, respecting oneself and others, and adhering to cultural etiquette, individuals can foster mutual respect and understanding in diverse communities. Whether viewed from an Islamic or secular perspective, these values are essential for creating harmonious societies.

Keywords: multiculturalism, self-respect, cultural etiquette, adab, ethics, secular perspective

Procedia PDF Downloads 88