Search results for: three-parameter sine curve fitting
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1409


209 A Geographic Information System Mapping Method for Creating Improved Satellite Solar Radiation Dataset Over Qatar

Authors: Sachin Jain, Daniel Perez-Astudillo, Dunia A. Bachour, Antonio P. Sanfilippo

Abstract:

The future of solar energy in Qatar is evolving steadily. Hence, high-quality spatial solar radiation data is of the utmost importance for any planning and commissioning of solar technology. Generally, two types of solar radiation data are available: satellite data and ground observations. Satellite solar radiation data are developed from physical and statistical models; ground data are collected by solar radiation measurement stations. Ground data are of high quality; however, they are limited to distributed point locations, with high installation and maintenance costs for the ground stations. On the other hand, satellite solar radiation data are continuous and available across geographical locations, but they are less accurate than ground data. To exploit the advantages of both, a product has been developed here that provides spatial continuity and higher accuracy than either dataset alone. The popular National Solar Radiation Database, NSRDB (PSM V3 model, spatial resolution: 4 km), is chosen here for merging with ground-measured solar radiation in Qatar, where the spatial distribution of measurement stations is comprehensive, with a network of 13 ground stations. The monthly average of the daily total Global Horizontal Irradiation (GHI) component from ground and satellite data is used for error analysis. Normalized root mean square error (NRMSE) values of 3.31%, 6.53%, and 6.63% were observed for October, November, and December 2019, respectively, when comparing in-situ and NSRDB data. The method is based on the Empirical Bayesian Kriging Regression Prediction model available in ArcGIS (ESRI); the workflow of the algorithm combines regression and kriging. A regression model (OLS, ordinary least squares) is fitted between the ground and NSRDB data points.
A semi-variogram model is fitted to the experimental semi-variogram obtained from the residuals. The kriged residuals were then added to the NSRDB values predicted by the regression model to obtain the final predicted values. The NRMSE values obtained after merging are 1.84%, 1.28%, and 1.81% for October, November, and December 2019, respectively. One more explanatory variable, ground elevation, has been incorporated into the regression and kriging steps to reduce the error and to provide higher spatial resolution (30 m). The final GHI maps have been created after merging, and NRMSE values of 1.24%, 1.28%, and 1.28% have been observed for October, November, and December 2019, respectively. The proposed merging method has thus proven to be highly accurate. An additional method is also proposed here: calibrated maps are generated using the regression and kriging models, and the calibrated model is then used to generate solar radiation maps from the explanatory variables alone when not enough historical ground data are available for long-term analysis. The NRMSE values obtained after comparing the calibrated maps with ground data are 5.60% and 5.31% for November and December 2019, respectively.
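A minimal sketch of two ingredients named above, the OLS regression step and the NRMSE metric, in pure Python. The station values are invented placeholders, and the kriging correction on the residuals (the core of the EBK workflow) is omitted here.

```python
def ols_fit(x, y):
    """Ordinary least squares y = a + b*x for paired lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

def nrmse(obs, pred):
    """Normalized root mean square error, as a fraction of the mean observation."""
    n = len(obs)
    rmse = (sum((o - p) ** 2 for o, p in zip(obs, pred)) / n) ** 0.5
    return rmse / (sum(obs) / n)

# Hypothetical monthly GHI (kWh/m^2): NSRDB pixels vs. co-located ground stations
satellite = [165.0, 170.0, 158.0, 162.0]
ground = [160.0, 166.0, 155.0, 158.0]

a, b = ols_fit(satellite, ground)
corrected = [a + b * s for s in satellite]
print(round(100 * nrmse(ground, satellite), 2), "% NRMSE before merging")
print(round(100 * nrmse(ground, corrected), 2), "% NRMSE after OLS correction")
```

In the full method, the residuals `ground[i] - corrected[i]` would additionally be kriged and added back to the prediction.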

Keywords: global horizontal irradiation, GIS, empirical bayesian kriging regression prediction, NSRDB

Procedia PDF Downloads 89
208 Role of von Willebrand Factor Antigen as Non-Invasive Biomarker for the Prediction of Portal Hypertensive Gastropathy in Patients with Liver Cirrhosis

Authors: Mohamed El Horri, Amine Mouden, Reda Messaoudi, Mohamed Chekkal, Driss Benlaldj, Malika Baghdadi, Lahcene Benmahdi, Fatima Seghier

Abstract:

Background/aim: Recently, the von Willebrand factor antigen (vWF-Ag) has been identified as a new marker of portal hypertension (PH) and its complications, and a few studies have examined its role in the prediction of esophageal varices. vWF-Ag is considered a non-invasive approach that spares patients the burden, cost, drawbacks, and unpleasant repeated endoscopic examinations. In our study, we aimed to evaluate the ability of this marker to predict another complication of portal hypertension, portal hypertensive gastropathy (PHG), which is also diagnosed endoscopically. Patients and methods: This was a prospective study including 124 cirrhotic patients with no history of bleeding who underwent screening endoscopy for PH-related complications such as esophageal varices (EVs) and PHG. Routine biological tests were performed, as well as vWF-Ag testing by both ELFA and immunoturbidimetric techniques. The diagnostic performance of the marker was assessed using sensitivity, specificity, positive predictive value, negative predictive value, accuracy, and receiver operating characteristic curves. Results: 124 patients were enrolled in this study, with a mean age of 58 years [CI: 55-60 years] and a sex ratio of 1.17. Viral etiologies were found in 50% of patients. Screening endoscopy revealed PHG in 20.2% of cases, while EVs were found in 83.1% of cases. vWF-Ag levels were significantly increased in patients with PHG compared to those without: 441% [CI: 375-506] versus 279% [CI: 253-304], respectively (p < 0.0001). Using the area under the receiver operating characteristic curve (AUC), vWF-Ag was a good predictor of the presence of PHG. At a cutoff above 320%, with an AUC of 0.824, vWF-Ag had 84% sensitivity, 74% specificity, 44.7% positive predictive value, 94.8% negative predictive value, and 75.8% diagnostic accuracy.
Conclusion: vWF-Ag is a good non-invasive, low-cost marker for excluding the presence of PHG in patients with liver cirrhosis. Using this marker as part of a selective screening strategy might reduce the need for endoscopic screening and the cost of managing these patients.
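A hedged sketch of how a cutoff (here the abstract's 320%) turns marker values into the reported diagnostic statistics. The toy cohort below is invented, not the study data.

```python
def diagnostic_stats(values, has_disease, cutoff):
    """Count a 2x2 table at `cutoff` and derive the usual diagnostic metrics."""
    tp = sum(1 for v, d in zip(values, has_disease) if v > cutoff and d)
    fn = sum(1 for v, d in zip(values, has_disease) if v <= cutoff and d)
    fp = sum(1 for v, d in zip(values, has_disease) if v > cutoff and not d)
    tn = sum(1 for v, d in zip(values, has_disease) if v <= cutoff and not d)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / len(values),
    }

# Toy cohort: vWF-Ag levels (%) and PHG status
vwf = [450, 390, 310, 500, 280, 260, 330, 240]
phg = [True, True, False, True, False, False, False, False]
stats = diagnostic_stats(vwf, phg, cutoff=320)
print(stats)
```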

Keywords: von willebrand factor, portal hypertensive gastropathy, prediction, liver cirrhosis

Procedia PDF Downloads 206
207 Flow Duration Curves and Recession Curves Connection through a Mathematical Link

Authors: Elena Carcano, Mirzi Betasolo

Abstract:

This study helps public water bureaus give reliable answers to water concession requests. Rapidly increasing water requests can be supported provided that further uses of a river course are not totally compromised and environmental features are protected. Strictly speaking, a water concession can be considered a continuous drawing from the source, causing a mean annual streamflow reduction. Deciding whether a water concession is appropriate therefore seems easily solved by comparing the generic demand to the mean annual streamflow available. Still, the immediate shortcoming of such a comparison is that streamflow data are available only for a few catchments and, most often, limited to specific sites. Moreover, comparing the generic water demand to the mean daily discharge is far from satisfactory, since the mean daily streamflow is greater than the water withdrawal for a long period of the year; such a comparison therefore appears of little significance for preserving the quality and quantity of the river. In order to overcome this limit, this study completes the information provided by flow duration curves by introducing a link between Flow Duration Curves (FDCs) and recession curves, showing the chronological sequence of flows with a particular focus on low flow data. The analysis is carried out on 25 catchments located in North-Eastern Italy for which daily data are available. The results identify groups of catchments as hydrologically homogeneous, having the lower part of the FDCs (the streamflow interval between durations of 300 and 335 days, namely Q(300) to Q(335)) smoothly reproduced by a common recession curve. In conclusion, the results are useful for providing more reliable answers to water requests, especially for those catchments which show similar hydrological response, and can be used for a focused regionalization approach on low flow data.
A mathematical link between flow duration curves and recession curves is herein provided, thus furnishing the duration curves with information on the temporal sequence of flows. In this way, by introducing assumptions on the recession curves, a chronological sequence can also be attributed to the low-flow portion of the FDCs, which by nature lack this information.
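A minimal sketch of the empirical flow duration curve used above: Q(d) is the discharge equalled or exceeded on d days of the year, so Q(300) and Q(335) sit in the low-flow tail. The daily flows below are synthetic, not the Italian catchment data.

```python
import random

def flow_duration(daily_q, d):
    """Discharge exceeded on d days per year (record rescaled to 365 days)."""
    q = sorted(daily_q, reverse=True)
    idx = min(len(q) - 1, int(d / 365 * len(q)))
    return q[idx]

# Synthetic year of daily discharges (arbitrary units)
random.seed(1)
daily_q = [10 * random.expovariate(1.0) for _ in range(365)]

q300 = flow_duration(daily_q, 300)
q335 = flow_duration(daily_q, 335)
print("Q(300) =", round(q300, 2), " Q(335) =", round(q335, 2))
```

The FDC is non-increasing in duration, so Q(335) can never exceed Q(300); the study's contribution is attaching a recession-curve time sequence to this otherwise order-free tail.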

Keywords: chronological sequence of discharges, recession curves, streamflow duration curves, water concession

Procedia PDF Downloads 189
206 A Foodborne Cholera Outbreak in a School Caused by Eating Contaminated Fried Fish: Hoima Municipality, Uganda, February 2018

Authors: Dativa Maria Aliddeki, Fred Monje, Godfrey Nsereko, Benon Kwesiga, Daniel Kadobera, Alex Riolexus Ario

Abstract:

Background: Cholera is a severe gastrointestinal disease caused by Vibrio cholerae. It has caused several pandemics. On 26 February 2018, a suspected cholera outbreak, with one death, occurred in School X in Hoima Municipality, western Uganda. We investigated to identify the scope and mode of transmission of the outbreak and to recommend evidence-based control measures. Methods: We defined a suspected case as onset of diarrhea, vomiting, or abdominal pain in a student or staff of School X or their family members during 14 February-10 March. A confirmed case was a suspected case with V. cholerae cultured from stool. We reviewed medical records at Hoima Hospital and searched for cases at School X. We conducted descriptive epidemiologic analysis and hypothesis-generating interviews of 15 case-patients. In a retrospective cohort study, we compared attack rates between exposed and unexposed persons. Results: We identified 15 cases among 75 students and staff of School X and their family members (attack rate=20%), with onset from 25-28 February. One patient died (case-fatality rate=6.6%). The epidemic curve indicated a point-source exposure. On 24 February, a student brought fried fish from her home in a fishing village, where a cholera outbreak was ongoing. Of the 21 persons who ate the fish, 57% developed cholera, compared with 5.6% of the 54 persons who did not eat it (RR=10; 95% CI=3.2-33). None of the 4 persons who recooked the fish before eating, compared with 71% of the 17 who did not recook it, developed cholera (RR=0.0; 95% CI (Fisher's exact)=0.0-0.95). Of 12 stool specimens cultured, 6 yielded V. cholerae. Conclusion: This cholera outbreak was caused by eating fried fish, which might have been contaminated with V. cholerae in a village with an ongoing outbreak. Lack of thorough cooking of the fish might have facilitated the outbreak. We recommended thoroughly cooking fish before consumption.
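The risk-ratio arithmetic above can be sketched directly. The counts 12/21 and 3/54 are back-calculated here from the reported 57% and 5.6% attack rates, so treat them as an illustration rather than the study's raw table.

```python
def relative_risk(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    """Attack rates in the two cohort arms and their ratio (RR)."""
    ar_exp = cases_exposed / n_exposed
    ar_unexp = cases_unexposed / n_unexposed
    return ar_exp, ar_unexp, ar_exp / ar_unexp

ar_e, ar_u, rr = relative_risk(12, 21, 3, 54)
print(f"attack rate exposed {ar_e:.0%}, unexposed {ar_u:.1%}, RR = {rr:.1f}")
```

An RR near 10 means those who ate the fish were about ten times as likely to develop cholera, matching the abstract's point estimate.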

Keywords: cholera, disease outbreak, foodborne, global health security, Uganda

Procedia PDF Downloads 199
205 Management of Femoral Neck Stress Fractures at a Specialist Centre and Predictive Factors to Return to Activity Time: An Audit

Authors: Charlotte K. Lee, Henrique R. N. Aguiar, Ralph Smith, James Baldock, Sam Botchey

Abstract:

Background: Femoral neck stress fractures (FNSF) are uncommon, making up 1 to 7.2% of stress fractures in healthy subjects. FNSFs are prevalent in young women, military recruits, endurance athletes, and individuals with energy deficiency syndrome or the female athlete triad. Presentation is often non-specific, and the condition is often misdiagnosed following the initial examination. There is limited research addressing return-to-activity time after FNSF, although previous studies have demonstrated prognostic time predictions based on various imaging techniques. Here, (1) OxSport clinic FNSF practice standards are retrospectively reviewed, (2) FNSF cohort demographics are examined, and (3) regression models are used to predict return-to-activity prognosis and, consequently, to determine bone stress risk factors. Methods: Patients with a diagnosis of FNSF attending the OxSport clinic between 01/06/2020 and 01/01/2020 were selected from the Rheumatology Assessment Database Innovation in Oxford (RhADiOn) and the OxSport Stress Fracture Database (n = 14). (1) Clinical practice was audited against five criteria based on local and National Institute for Health and Care Excellence guidance, with a 100% standard. (2) Demographics of the FNSF cohort were examined with Student's t-test. (3) Lastly, linear regression and Random Forest regression models were used on this patient cohort to predict return-to-activity time, and an analysis of feature importance was conducted after fitting each model. Results: OxSport clinical practice met the standard (100%) in 3/5 criteria; the criteria not met were patient waiting times and documentation of all bone stress risk factors. Importantly, analysis of patient demographics showed that, of the population with complete bone stress risk factor assessments, 53% were positive for modifiable bone stress risk factors. Lastly, linear regression analysis was utilized to identify demographic factors that predicted return-to-activity time [R2 = 79.172%; average error 0.226].
This analysis identified four key variables that predicted return-to-activity time: vitamin D level, total hip DEXA T value, femoral neck DEXA T value, and history of an eating disorder/disordered eating. Furthermore, Random Forest regression models were employed for this task [R2 = 97.805%; average error 0.024]. Analysis of feature importance again identified a set of four variables, three of which matched the linear regression analysis (vitamin D level, total hip DEXA T value, and femoral neck DEXA T value), the fourth being age. Conclusion: OxSport clinical practice could be improved by more comprehensively evaluating bone stress risk factors; the importance of this evaluation is demonstrated by the proportion of the population found positive for these risk factors. Using this cohort, potential bone stress risk factors that significantly impacted return-to-activity prognosis were identified using regression models.
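A very reduced sketch of a feature-importance check in the spirit of the analysis above: rank candidate predictors of return-to-activity time by squared correlation with the outcome. All values are invented, and the study's actual linear/Random Forest importance machinery is not reproduced here.

```python
def pearson_r(x, y):
    """Pearson correlation coefficient for paired lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) ** 0.5
    sy = sum((v - my) ** 2 for v in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

weeks = [8, 12, 20, 10, 16, 24]  # hypothetical return-to-activity times
features = {
    "vitamin_d": [80, 60, 30, 70, 45, 25],
    "hip_dexa_t": [0.5, 0.1, -1.2, 0.3, -0.6, -1.5],
    "age": [24, 31, 22, 28, 35, 26],
}
# Rank features by r^2 against the outcome (largest first)
ranking = sorted(features,
                 key=lambda k: pearson_r(features[k], weeks) ** 2,
                 reverse=True)
print(ranking)
```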

Keywords: eating disorder, bone stress risk factor, femoral neck stress fracture, vitamin D

Procedia PDF Downloads 183
204 Diffusion Magnetic Resonance Imaging and Magnetic Resonance Spectroscopy in Detecting Malignancy in Maxillofacial Lesions

Authors: Mohamed Khalifa Zayet, Salma Belal Eiid, Mushira Mohamed Dahaba

Abstract:

Introduction: Malignant tumors may not be easily detected by traditional radiographic techniques, especially in an anatomically complex area like the maxillofacial region. At the same time, the advent of biological functional MRI was a significant footstep in the diagnostic imaging field. Objective: The purpose of this study was to define the malignant metabolic profile of maxillofacial lesions using diffusion MRI and magnetic resonance spectroscopy as adjunctive aids for the diagnosis of such lesions. Subjects and Methods: Twenty-one patients with twenty-two lesions were enrolled in this study. Both morphological and functional MRI scans were performed: T1- and T2-weighted images and diffusion-weighted MRI, with four apparent diffusion coefficient (ADC) maps constructed for analysis, and magnetic resonance spectroscopy with qualitative and semi-quantitative analyses of the choline and lactate peaks. All patients then underwent incisional or excisional biopsies within two weeks of the MR scans. Results: Statistical analysis revealed that not all the parameters had the same diagnostic performance: lactate had the highest area under the curve (AUC), 0.9, while choline had the lowest, with insignificant diagnostic value. The best cut-off value suggested for lactate was 0.125, above which a lesion is presumed malignant, with 90% sensitivity and 83.3% specificity. Although the ADC maps had comparable AUCs, the statistical measure that had the final say was the interpretation of the likelihood ratios. As expected, lactate again showed the best combination of positive and negative likelihood ratios, whereas among the maps, the ADC map with b-values of 500 and 1000 showed the best realistic combination of likelihood ratios, albeit with lower sensitivity and specificity than lactate.
Conclusion: Diffusion-weighted imaging and magnetic resonance spectroscopy are state of the art in the diagnostic arena, and they manifested themselves as key players in the differentiation of orofacial tumors. The complete biological profile of malignancy can be decoded as low ADC values, high choline, and/or high lactate, whereas that of benign entities can be translated as high ADC values, low choline, and no lactate.
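The likelihood ratios weighed above follow directly from sensitivity and specificity; as a sketch, here they are computed from the reported lactate figures (90% sensitivity, 83.3% specificity).

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios from a test's sens/spec."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

lr_pos, lr_neg = likelihood_ratios(0.90, 0.833)
print("LR+ =", round(lr_pos, 1), " LR- =", round(lr_neg, 2))
```

An LR+ above 5 and an LR- near 0.1 are conventionally read as a test that meaningfully shifts post-test probability in both directions, consistent with lactate's strong showing in the abstract.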

Keywords: diffusion magnetic resonance imaging, magnetic resonance spectroscopy, malignant tumors, maxillofacial

Procedia PDF Downloads 172
203 A Data-Driven Compartmental Model for Dengue Forecasting and Covariate Inference

Authors: Yichao Liu, Peter Fransson, Julian Heidecke, Jonas Wallin, Joacim Rockloev

Abstract:

Dengue, a mosquito-borne viral disease, poses a significant public health challenge in endemic tropical and subtropical countries, including Sri Lanka. To reveal insights into the complex dynamics of this disease and study its drivers, a comprehensive model capable of both robust forecasting and insightful inference of drivers, while capturing the co-circulation of several virus strains, is essential. However, existing studies mostly focus on only one aspect at a time and do not integrate and carry insights across these siloed approaches. While mechanistic models are developed to capture immunity dynamics, they are often oversimplified and lack integration of all the diverse drivers of disease transmission. On the other hand, purely data-driven methods lack the constraints imposed by immuno-epidemiological processes, making them prone to overfitting and inference bias. This research presents a hybrid model that combines machine learning techniques with mechanistic modelling to overcome the limitations of existing approaches. Leveraging eight years of newly reported dengue case data, along with socioeconomic factors such as human mobility, weekly climate data from 2011 to 2018, genetic data detecting the introduction and presence of new strains, and estimates of seropositivity for different districts in Sri Lanka, we derive a data-driven vector (SEI) to human (SEIR) model across 16 regions of Sri Lanka at the weekly time scale. By conducting ablation studies, the lag effects of time-varying climate factors, allowing delays of up to 12 weeks, were determined. The model demonstrates superior predictive performance over a pure machine learning approach at lead times of 5 and 10 weeks on data withheld from model fitting. It further reveals several interesting, interpretable findings on the drivers while adjusting for the dynamics and influences of immunity and the introduction of a new strain.
The study uncovers strong influences of socioeconomic variables: population density, mobility, household income, and rural vs. urban population. It reveals substantial sensitivity to the diurnal temperature range and precipitation, while mean temperature and humidity appear less important in the study location. Additionally, the model indicated sensitivity to the vegetation index, both maximum and average. Predictions on testing data reveal high model accuracy. Overall, this study advances the knowledge of dengue transmission in Sri Lanka and demonstrates the importance of hybrid modelling techniques that combine biologically informed model structures with flexible data-driven estimates of model parameters. The findings show the potential both for inference of drivers in situations of complex disease dynamics and for robust forecasting models.
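A heavily reduced sketch of the vector (SEI) to human (SEIR) structure named above, as a discrete weekly update. All rates below are invented constants; the actual model estimates time-varying, climate- and covariate-driven parameters per region.

```python
def step(h, v, beta_vh=0.3, beta_hv=0.3, sigma_h=0.5, gamma=0.7,
         sigma_v=0.6, mu_v=0.2):
    """One weekly update of human SEIR state h and vector SEI state v."""
    S, E, I, R = h
    Sv, Ev, Iv = v
    N = S + E + I + R
    Nv = Sv + Ev + Iv
    new_eh = beta_vh * S * Iv / Nv        # human infections from infectious vectors
    new_ev = beta_hv * Sv * I / N         # vector infections from infectious humans
    h_next = (S - new_eh,
              E + new_eh - sigma_h * E,
              I + sigma_h * E - gamma * I,
              R + gamma * I)
    births = mu_v * Nv                    # vector births balance per-capita deaths
    v_next = (Sv + births - new_ev - mu_v * Sv,
              Ev + new_ev - sigma_v * Ev - mu_v * Ev,
              Iv + sigma_v * Ev - mu_v * Iv)
    return h_next, v_next

h, v = (990.0, 5.0, 5.0, 0.0), (1000.0, 10.0, 10.0)
for _ in range(10):
    h, v = step(h, v)
print("human S,E,I,R after 10 weeks:", [round(x, 1) for x in h])
```

Both populations are conserved by construction, which is a useful sanity check before layering data-driven parameters on top.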

Keywords: compartmental model, climate, dengue, machine learning, social-economic

Procedia PDF Downloads 86
202 Maximizing Giant Prawn Resource Utilization in Banjar Regency, Indonesia: A CPUE and MSY Analysis

Authors: Ahmadi, Iriansyah, Raihana Yahman

Abstract:

The giant freshwater prawn (Macrobrachium rosenbergii de Man, 1879) is a valuable species for fisheries and aquaculture, especially in Southeast Asia, including Indonesia, due to its high market demand and potential for export. The growing demand for prawns is straining the sustainability of the Banjar Regency fishery. To ensure the long-term sustainability and economic viability of prawn fishing in this region, it is imperative to implement evidence-based management practices. This requires comprehensive data on the Catch per Unit Effort (CPUE), the Maximum Sustainable Yield (MSY), and the current rate of prawn resource exploitation. We analyzed five years of prawn catch data (2019-2023) obtained from the South Kalimantan Marine and Fisheries Services. Fishing gears (e.g., hook and line, cast net) were first standardized with the Fishing Power Index, and effort and MSY were then calculated. The intercept (a) and slope (b) of the regression curve were used to estimate the catch maximum sustainable yield (CMSY) and the optimal fishing effort (Fopt) within the framework of the Surplus Production Model. The estimated rates of resource utilization were then compared to the criteria of the National Commission of Marine Fish Stock Assessment. The findings showed that the CPUE value peaked in 2019 at 33.48 kg/trip, while the lowest value was observed in 2022 at 5.12 kg/trip. The CMSY value was estimated to be 17,396 kg/year, corresponding to an Fopt level of 1,636 trips/year. The highest utilization rate, 56.90%, was recorded in 2020, while the lowest, 46.16%, was observed in 2021. The annual utilization rates were classified as “medium”, suggesting that increasing fishing effort by 45% could potentially maximize prawn catches at an optimum level. These findings provide a baseline for sustainable fisheries management in the region.
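A sketch of the Schaefer surplus-production arithmetic described above: regress CPUE on standardized effort to get intercept a and slope b, then CMSY = -a^2/(4b) and Fopt = -a/(2b). The effort/CPUE pairs below are illustrative, not the Banjar Regency data.

```python
def schaefer(effort, cpue):
    """Fit CPUE = a + b*effort (OLS) and derive Fopt and CMSY."""
    n = len(effort)
    me, mc = sum(effort) / n, sum(cpue) / n
    b = sum((e - me) * (c - mc) for e, c in zip(effort, cpue)) / \
        sum((e - me) ** 2 for e in effort)
    a = mc - b * me
    fopt = -a / (2 * b)        # effort giving maximum sustainable catch
    cmsy = -a ** 2 / (4 * b)   # catch at that effort
    return a, b, fopt, cmsy

effort = [400, 800, 1200, 1600, 2000]   # standardized trips/year
cpue = [30.0, 24.0, 17.5, 12.0, 6.0]    # kg/trip
a, b, fopt, cmsy = schaefer(effort, cpue)
print(round(fopt), "trips/year optimal effort,", round(cmsy), "kg/year CMSY")
```

A negative slope b is required for the model to make sense (more effort depresses catch rates); the utilization rate is then the observed catch divided by CMSY.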

Keywords: giant prawns, CPUE, fishing power index, sustainable potential, utilization rate

Procedia PDF Downloads 18
201 Ligandless Extraction and Determination of Trace Amounts of Lead in Pomegranate, Zucchini and Lettuce Samples after Dispersive Liquid-Liquid Microextraction with Ultrasonic Bath and Optimization of Extraction Condition with RSM Design

Authors: Fariba Tadayon, Elmira Hassanlou, Hasan Bagheri, Mostafa Jafarian

Abstract:

Heavy metals are released into water, plants, soil, and food by natural and human activities. Lead plays a toxic role in the human body and may cause serious problems even at low concentrations, since it may have several adverse effects on humans. Therefore, the determination of lead in different samples is an important procedure in studies of environmental pollution. In this work, an ultrasonic-assisted, ionic-liquid-based dispersive liquid-liquid microextraction (UA-IL-DLLME) procedure for the determination of lead in zucchini, pomegranate, and lettuce has been established and developed using a flame atomic absorption spectrometer (FAAS). For the UA-IL-DLLME procedure, 10 mL of sample solution containing Pb2+ was adjusted to pH=5 in a glass test tube with a conical bottom; then, 120 μL of 1-hexyl-3-methylimidazolium hexafluorophosphate (CMIM)(PF6) was rapidly injected into the sample solution with a microsyringe. The resulting cloudy mixture was treated ultrasonically for 5 min, the two phases were separated by centrifugation for 5 min at 3000 rpm, the IL phase was diluted with 1 mL of ethanol, and the analytes were determined by FAAS. The effects of different experimental parameters on the extraction step, including ionic liquid volume, sonication time, and pH, were studied and optimized simultaneously using Response Surface Methodology (RSM) employing a central composite design (CCD). The optimal conditions were determined to be an ionic liquid volume of 120 μL, a sonication time of 5 min, and pH=5. The linear range of the calibration curve for the FAAS determination of lead was 0.1-4 ppm, with R2=0.992. Under optimized conditions, the limit of detection (LOD) for lead was 0.062 μg.mL-1, the enrichment factor (EF) was 93, and the relative standard deviation (RSD) for lead was calculated as 2.29%. The levels of lead for pomegranate, zucchini, and lettuce were calculated as 2.88 μg.g-1, 1.54 μg.g-1, and 2.18 μg.g-1, respectively.
Therefore, this method has been successfully applied for the analysis of the content of lead in different food samples by FAAS.
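A sketch of the calibration arithmetic behind figures like those reported above: fit the FAAS calibration line over the linear range, then estimate the LOD as 3 x sd(blank) / slope. The absorbance values and blank standard deviation below are invented, not the paper's measurements.

```python
def calibration(conc, signal):
    """OLS fit of instrument signal vs. standard concentration."""
    n = len(conc)
    mc, ms = sum(conc) / n, sum(signal) / n
    slope = sum((c - mc) * (s - ms) for c, s in zip(conc, signal)) / \
            sum((c - mc) ** 2 for c in conc)
    intercept = ms - slope * mc
    return slope, intercept

conc = [0.1, 0.5, 1.0, 2.0, 4.0]         # Pb standards, ppm
signal = [0.012, 0.051, 0.099, 0.201, 0.398]  # hypothetical absorbances
slope, intercept = calibration(conc, signal)

sd_blank = 0.002                          # hypothetical blank noise
lod = 3 * sd_blank / slope                # common 3-sigma LOD convention
print("slope =", round(slope, 4), " LOD =", round(lod, 3), "ppm")
```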

Keywords: dispersive liquid-liquid microextraction, central composite design, food samples, flame atomic absorption spectrometry

Procedia PDF Downloads 284
200 Stochastic Matrices and Lp Norms for Ill-Conditioned Linear Systems

Authors: Riadh Zorgati, Thomas Triboulet

Abstract:

In quite diverse application areas, such as astronomy, medical imaging, geophysics or nondestructive evaluation, many problems related to calibration, fitting or estimation of a large number of input parameters of a model from a small amount of noisy output data can be cast as inverse problems. Due to noisy data corruption, insufficient data and model errors, most inverse problems are ill-posed in the Hadamard sense, i.e. existence, uniqueness and stability of the solution are not guaranteed. A wide class of inverse problems in physics relates to the Fredholm equation of the first kind. The ill-posedness of such an inverse problem results, after discretization, in a very ill-conditioned linear system of equations: the condition number of the associated matrix can typically range from 10^9 to 10^18. This condition number acts as an amplifier of data uncertainties during inversion and thus renders the inverse problem difficult to handle numerically. Similar problems appear in other areas, such as numerical optimization, where using interior-point algorithms to solve linear programs leads to ill-conditioned systems of linear equations. Devising efficient solution approaches for such systems is therefore of great practical interest. Efficient iterative algorithms are proposed here for solving a system of linear equations. The approach is based on preconditioning the initial matrix of the system with an approximation of a generalized inverse, leading to a stochastic preconditioned matrix. This approach, valid for non-negative matrices, is first extended to Hermitian, positive semi-definite matrices and then generalized to any complex rectangular matrix. The main results obtained are as follows: 1) We are able to build a generalized inverse of any complex rectangular matrix which satisfies the convergence condition required by iterative algorithms for solving a system of linear equations.
This completes the (short) list of generalized inverses having this property, after the Kaczmarz and Cimmino matrices. Theoretical results on both the characterization of the type of generalized inverse obtained and on convergence are derived. 2) Thanks to its properties, this matrix can be efficiently used in different solving schemes, such as Richardson-Tanabe or preconditioned conjugate gradients. 3) By using Lp norms, we propose generalized Kaczmarz-type matrices. We also show how Cimmino's matrix can be considered as a particular case, consisting in choosing the Euclidean norm in an asymmetrical structure. 4) Regarding numerical results obtained on some pathological well-known test cases (Hilbert, Nakasaka, …), some of the proposed algorithms are empirically shown to be more efficient on ill-conditioned problems and more robust to error propagation than the classical techniques we have tested (Gauss, Moore-Penrose inverse, minimum residue, conjugate gradients, Kaczmarz, Cimmino). We end with a very early prospective application of our approach based on stochastic matrices, aiming at computing some parameters of the solution of a linear system (such as the extreme values, the mean, the variance, …) prior to its resolution. Such an approach, if it proved efficient, would be a source of information on the solution of a system of linear equations.
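Since the abstract positions its stochastic preconditioners alongside Kaczmarz and Cimmino schemes, here is a minimal cyclic Kaczmarz sketch on a small, well-posed system. The paper's generalized-inverse and Lp-norm machinery is not reproduced.

```python
def kaczmarz(A, b, sweeps=200):
    """Cyclic Kaczmarz: successively project the iterate onto each hyperplane
    a_i . x = b_i. A is a list of rows, b the right-hand side."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(sweeps):
        for row, bi in zip(A, b):
            dot = sum(r * xi for r, xi in zip(row, x))
            norm2 = sum(r * r for r in row)
            t = (bi - dot) / norm2
            x = [xi + t * r for xi, r in zip(x, row)]
    return x

A = [[2.0, 1.0],
     [1.0, 3.0]]
b = [5.0, 10.0]
x = kaczmarz(A, b)
print([round(v, 6) for v in x])  # approximate solution of Ax = b
```

On well-conditioned systems this converges quickly; on the ill-conditioned cases the abstract targets, the projection angles become tiny and convergence stalls, which is what motivates preconditioning.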

Keywords: conditioning, generalized inverse, linear system, norms, stochastic matrix

Procedia PDF Downloads 137
199 Shedding Light on the Black Box: Explaining Deep Neural Network Prediction of Clinical Outcome

Authors: Yijun Shao, Yan Cheng, Rashmee U. Shah, Charlene R. Weir, Bruce E. Bray, Qing Zeng-Treitler

Abstract:

Deep neural network (DNN) models are being explored in the clinical domain, following their recent success in other domains such as image recognition. For clinical adoption, outcome prediction models require explanation, but due to their multiple non-linear inner transformations, DNN models are viewed by many as a black box. In this study, we developed a deep neural network model for predicting 1-year mortality of patients who underwent major cardiovascular procedures (MCVPs), using a temporal image representation of past medical history as input. The dataset was obtained from the electronic medical data warehouse administered by the Veterans Affairs Informatics and Computing Infrastructure (VINCI). We identified 21,355 veterans who had their first MCVP in 2014. Features for prediction included demographics, diagnoses, procedures, medication orders, hospitalizations, and frailty measures extracted from clinical notes. Temporal variables were created based on the patient history data in the 2-year window prior to the index MCVP, and a temporal image was created from these variables for each individual patient. To generate the explanation for the DNN model, we defined a new concept called the impact score, based on the impact of the presence/value of clinical conditions on the predicted outcome. Like the log odds ratios reported by a logistic regression (LR) model, impact scores are continuous variables intended to shed light on the black box model. For comparison, a logistic regression model was fitted on the same dataset. In our cohort, about 6.8% of patients died within one year. The DNN model achieved an area under the curve (AUC) of 78.5%, while the LR model achieved an AUC of 74.6%. A strong but not perfect correlation was found between the aggregated impact scores and the log odds ratios (Spearman's rho = 0.74), which helped validate our explanation.
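The validation step above, correlating impact scores with LR log odds ratios, can be sketched with Spearman's rho computed from ranks (valid for untied values). The scores below are invented placeholders.

```python
def spearman_rho(x, y):
    """Spearman's rho for lists without ties: 1 - 6*sum(d^2)/(n*(n^2-1))."""
    def ranks(xs):
        order = sorted(range(len(xs)), key=xs.__getitem__)
        r = [0] * len(xs)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

impact = [0.9, 0.1, 0.5, 0.7, 0.3, 0.2]      # hypothetical DNN impact scores
log_odds = [1.2, 0.0, 0.4, 1.0, 0.5, 0.1]    # hypothetical LR log odds ratios
rho = spearman_rho(impact, log_odds)
print("Spearman's rho =", round(rho, 2))
```

A rho near 1 would mean the two explanation methods rank the clinical conditions almost identically; the study's 0.74 indicates strong but imperfect agreement.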

Keywords: deep neural network, temporal data, prediction, frailty, logistic regression model

Procedia PDF Downloads 153
198 Development of Adsorbents for Removal of Hydrogen Sulfide and Ammonia Using Pyrolytic Carbon Black form Waste Tires

Authors: Yang Gon Seo, Chang-Joon Kim, Dae Hyeok Kim

Abstract:

It is estimated that 1.5 billion tires are produced worldwide each year, and these will eventually end up as waste tires, representing a major potential waste and environmental problem. Pyrolysis has attracted great interest as an alternative treatment process for waste tires, producing valuable oil, gas and solid products. The oil and gas products may be used directly as a fuel or as a chemical feedstock. The solid yield from the pyrolysis of tires typically ranges from 30 to 45 wt%, with high carbon contents of up to 90 wt%. Most notably, however, the solids have high sulfur contents, from 2 to 3 wt%, and ash contents from 8 to 15 wt% related to the additive metals. Upgrading of tire pyrolysis products has concentrated on converting the solid to higher-quality carbon black and to activated carbon. Hydrogen sulfide and ammonia are among the common malodorous compounds found in emissions from many sewage treatment plants and industrial plants. Removing these harmful gases from emissions is therefore of significance in both daily life and industry, because they can cause health problems in humans and detrimental effects on catalysts. In this work, pyrolytic carbon black from waste tires was used to develop adsorbents with good adsorption capacity for the removal of hydrogen sulfide and ammonia. Pyrolytic carbon blacks were prepared by pyrolysis of waste tire chips, ranging from 5 to 20 mm, under a nitrogen atmosphere at 600℃ for 1 hour. Pellet-type adsorbents were prepared from a mixture of carbon black, a metal oxide, and sodium hydroxide or hydrochloric acid, and their adsorption capacities were estimated using the breakthrough curve of a continuous fixed-bed adsorption column at ambient conditions. The adsorbent manufactured from a mixture of carbon black, iron(III) oxide, and sodium hydroxide showed the maximum working capacity for hydrogen sulfide.
For ammonia, maximum working capacity was obtained by the adsorbent manufactured with a mixture of carbon black, copper oxide(II), and hydrochloric acid.
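The working capacity estimated from a breakthrough curve, as described above, is commonly obtained by integrating the removed concentration (inlet minus outlet) over time. A minimal sketch of that integration, with all numbers and names invented for illustration (the study's own flow rates and concentrations are not given):

```python
# Hypothetical sketch: estimating adsorption working capacity from a
# breakthrough curve by trapezoidal integration of (C_in - C_out).
# All values and parameter names are illustrative, not from the study.

def working_capacity(times_min, c_out, c_in, flow_l_min, bed_mass_g):
    """Adsorbed mass per gram of adsorbent up to the last sample.

    times_min  : sampling times (min)
    c_out      : outlet concentration at each time (g/L)
    c_in       : constant inlet concentration (g/L)
    flow_l_min : gas flow rate (L/min)
    bed_mass_g : adsorbent mass (g)
    """
    adsorbed = 0.0
    for i in range(1, len(times_min)):
        dt = times_min[i] - times_min[i - 1]
        # average removed concentration over the interval (trapezoid rule)
        removed = c_in - 0.5 * (c_out[i] + c_out[i - 1])
        adsorbed += removed * flow_l_min * dt
    return adsorbed / bed_mass_g  # g adsorbate per g adsorbent

# Example: outlet stays near zero, then the bed breaks through
t = [0, 10, 20, 30, 40]
c = [0.0, 0.0, 0.01, 0.05, 0.10]
print(working_capacity(t, c, c_in=0.10, flow_l_min=1.0, bed_mass_g=5.0))
```

A sharper breakthrough curve (outlet staying near zero longer) yields a larger working capacity for the same bed, which is what distinguishes the better-performing adsorbent formulations.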

Keywords: adsorbent, ammonia, pyrolytic carbon black, hydrogen sulfide, metal oxide

Procedia PDF Downloads 257
197 Fatigue Analysis and Life Estimation of the Helicopter Horizontal Tail under Cyclic Loading by Using Finite Element Method

Authors: Defne Uz

Abstract:

The horizontal tail of a helicopter is exposed to repeated oscillatory loading generated by aerodynamic and inertial loads and bending moments, depending on the operating conditions and maneuvers of the helicopter. In order to ensure that maximum stress levels do not exceed the fatigue limit of the material, and to prevent damage, a numerical analysis approach can be applied through the Finite Element Method. Therefore, in this paper, fatigue analysis of a horizontal tail model is studied numerically to predict high-cycle and low-cycle fatigue life under the defined loading. The analysis estimates the stress field at stress concentration regions, such as around fastener holes, where the maximum principal stresses are considered for each load case. Critical element identification of the main load-carrying structural components of the model with rivet holes is performed as a post-processing step, since critical regions with high stress values serve as the input for the fatigue life calculation. Once the maximum stress at the critical element and its mean and alternating components are obtained, the combination is compared with the endurance limit by applying the Soderberg approach. The constant-life straight line provides the limit for several combinations of mean and alternating stresses. A life calculation based on the S-N (Stress-Number of Cycles) curve is also applied with fully reversed loading to determine the number of cycles corresponding to the oscillatory stress with zero mean. The results determine the appropriateness of the design of the model with respect to its fatigue strength and the number of cycles that the model can withstand at the calculated stress. The effect of correctly determining the critical rivet holes is investigated by analyzing stresses at different structural parts of the model. In the case of a low life prediction, alternative design solutions are developed, and flight hours can be estimated for fatigue-safe operation of the model.
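The Soderberg comparison described above checks a mean/alternating stress pair against the straight line joining the endurance limit (alternating axis) and the yield strength (mean axis). A minimal sketch of that check, with placeholder material values rather than the paper's actual data:

```python
# Illustrative Soderberg constant-life check: a mean/alternating stress
# combination at a critical element is safe when it lies inside the line
# sigma_alt/Se + sigma_mean/Sy = 1. Material values below are assumptions.

def soderberg_safety_factor(sigma_mean, sigma_alt, s_yield, s_endurance):
    """Return n such that sigma_alt/Se + sigma_mean/Sy = 1/n (n > 1 is safe)."""
    usage = sigma_alt / s_endurance + sigma_mean / s_yield
    return 1.0 / usage

# Example: 100 MPa alternating, 50 MPa mean, Sy = 350 MPa, Se = 200 MPa
n = soderberg_safety_factor(50.0, 100.0, 350.0, 200.0)
print(n > 1.0)  # True means the stress state lies inside the safe region
```

Raising the mean stress at constant amplitude, or the amplitude at constant mean, shrinks the safety factor toward 1, mirroring the constant-life line trade-off described in the abstract.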

Keywords: fatigue analysis, finite element method, helicopter horizontal tail, life prediction, stress concentration

Procedia PDF Downloads 147
196 Ramadan as a Model of Intermittent Fasting: Effects on Gut Hormones, Appetite and Body Composition in Diabetes vs. Controls

Authors: Turki J. Alharbi, Jencia Wong, Dennis Yue, Tania P. Markovic, Julie Hetherington, Ted Wu, Belinda Brooks, Radhika Seimon, Alice Gibson, Stephanie L. Silviera, Amanda Sainsbury, Tanya J. Little

Abstract:

Fasting has been practiced for centuries and is incorporated into the practices of different religions, including Islam, whose followers intermittently fast throughout the month of Ramadan. Thus, Ramadan presents a unique model of prolonged intermittent fasting (IF). Despite a growing body of evidence for cardio-metabolic and endocrine benefits of IF, detailed studies of the effects of IF on these indices in type 2 diabetes are scarce. We studied 5 subjects with type 2 diabetes (T2DM) and 7 healthy controls (C) at baseline (pre) and in the last week of Ramadan (post). Fasting circulating levels of glucose, HbA1c, and lipids, as well as body composition (with DXA) and resting energy expenditure (REE), were measured. Plasma gut hormone levels and appetite responses to a mixed meal were also studied. Data are means±SEM. Ramadan decreased total fat mass (-907±92 g, p=0.001) and trunk fat (-778±190 g, p=0.014) in T2DM but not in controls, without any reductions in lean mass or REE. There was a trend towards a decline in plasma FFA in both groups. Ramadan had no effect on body weight, glycemia, blood pressure, or plasma lipids in either group. In T2DM only, the area under the curve for post-meal plasma ghrelin concentrations increased after Ramadan (pre: 6632±1737 vs. post: 9025±2518 pg/ml.min-1, p=0.045). Despite this increase in the orexigenic hormone ghrelin, subjective appetite scores were not altered by Ramadan. Meal-induced plasma concentrations of the satiety hormone pancreatic polypeptide did not change during Ramadan, but were higher in T2DM compared to controls (post: C: 23486±6677 vs. T2DM: 62193±6880 pg/ml.min-1, p=0.003). In conclusion, Ramadan, as a model of IF, appears to have more favourable effects on body composition in T2DM, without adverse effects on metabolic control or subjective appetite. These data suggest that IF may be particularly beneficial as a nutritional intervention in T2DM. Larger studies are warranted.

Keywords: type 2 diabetes, obesity, intermittent fasting, appetite regulating hormones

Procedia PDF Downloads 312
195 In-House Fatty Meal Cholescintigraphy as a Screening Tool in Patients Presenting with Dyspepsia

Authors: Avani Jain, S. Shelley, M. Indirani, Shilpa Kalal, Jaykanth Amalachandran

Abstract:

Aim: To evaluate the prevalence of gall bladder dysfunction in patients with dyspepsia using in-house fatty meal cholescintigraphy. Materials & Methods: This was a prospective cohort study. 59 healthy volunteers with no dyspeptic complaints and negative ultrasound and endoscopy were recruited, along with 61 patients with complaints of dyspepsia of more than 6 months' duration. All of them underwent 99mTc-Mebrofenin fatty meal cholescintigraphy following a standard protocol. Dynamic acquisitions were acquired for 120 minutes, with an in-house fatty meal given at the 45th minute. Gall bladder emptying kinetics were determined from gall bladder ejection fractions (GBEF) calculated at 30, 45, and 60 minutes. The fatty meal was standardized in the volunteers. Receiver operating characteristic (ROC) analysis was used to assess the diagnostic accuracy of the three time points (30 min, 45 min, and 60 min) used for measuring gall bladder emptying. On the basis of cut-offs derived from the volunteers, the patients were assessed for gall bladder dysfunction. Results: In volunteers, the GBEF was 74.42±8.26% (mean±SD) at 30 min, 82.61±6.5% at 45 min, and 89.37±4.48% at 60 min, compared to patients, in whom it was 33.73±22.87% at 30 min, 43.03±26.97% at 45 min, and 51.85±29.60% at 60 min. The lower limit of GBEF in volunteers was 60% at 30 min, 69% at 45 min, and 81% at 60 min. ROC analysis showed that the area under the curve was largest for the 30 min GBEF (0.952; 95% CI = 0.914-0.989) and that all three measures were statistically significant (p < 0.005). The majority of the volunteers had 74% of gall bladder emptying by 30 minutes; hence this was taken as the optimum cut-off time to assess gall bladder contraction. A GBEF > 60% at 30 min post fatty meal was considered normal, and a GBEF < 60% indicative of gall bladder dysfunction.
In patients, various causes of dyspepsia were identified: gall bladder dysfunction (63.93%), peptic ulcer (8.19%), gastroesophageal reflux disease (8.19%), and gastritis (4.91%). In 18.03% of cases, gall bladder dysfunction coexisted with other gastrointestinal conditions. A diagnosis of functional dyspepsia was made in 14.75% of cases. Conclusions: Gall bladder dysfunction contributes significantly to the causation of dyspepsia and can coexist with various other gastrointestinal diseases. The fatty meal was well tolerated and devoid of any side effects. Many patients labeled as functional dyspeptics could actually have gall bladder dysfunction. Hence, as an adjunct to ultrasound and endoscopy, fatty meal cholescintigraphy can also be used as a screening modality in the characterization of dyspepsia.
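The GBEF values reported above follow from a standard count-based definition: the percentage drop in background-corrected gall bladder counts from the pre-meal peak at a given time point. A minimal sketch, with invented count values (the study reports percentages, not raw counts):

```python
# Minimal sketch of the gall bladder ejection fraction computation:
# GBEF(t) = 100 * (peak - counts_at_t) / peak, on net (background-
# subtracted) gall bladder counts. Count values here are illustrative.

def gbef_percent(peak_counts, counts_at_t):
    """GBEF (%) at time t, from net gall bladder region-of-interest counts."""
    return 100.0 * (peak_counts - counts_at_t) / peak_counts

# Example: peak of 12000 net counts before the fatty meal, 3100 at 30 min
value = gbef_percent(12000, 3100)
print(value > 60.0)  # True -> above the 30 min cut-off, i.e. normal emptying
```

Applying the 30-minute cut-off of 60% derived from the volunteers then reduces classification to a single comparison per patient.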

Keywords: in-house fatty meal, cholescintigraphy, dyspepsia, gall bladder ejection fraction, functional dyspepsia

Procedia PDF Downloads 508
194 Effectiveness of Participatory Ergonomic Education on Pain Due to Work Related Musculoskeletal Disorders in Food Processing Industrial Workers

Authors: Salima Bijapuri, Shweta Bhatbolan, Sejalben Patel

Abstract:

Ergonomics concerns fitting the environment and the equipment to the worker, and ergonomic principles can be employed in different dimensions of the industrial sector. Participation of all the stakeholders is the key to formulating a multifaceted and comprehensive approach to lessen the burden of occupational hazards. Taking responsibility for one's own work activities, by acquiring sufficient knowledge and the potential to influence practices and outcomes, is the basis of participatory ergonomics and even hastens the identification of workplace hazards. The study aimed to assess how effective participatory ergonomics can be in the management of work-related musculoskeletal disorders (WRMSDs). Method: A mega kitchen was identified in a twin city of Karnataka, India. Consent was taken, and the workers were screened using observation methods. Kitchen work was structured to include different tasks: preparing, cooking, distributing, and serving food; packing food to be delivered to schools; dishwashing; cleaning and maintenance of the kitchen and equipment; and receiving and storing raw material. A total of 100 workers attended an education session on participatory ergonomics and its role in implementing correct ergonomic practices, thus preventing WRMSDs. Demographic details and baseline data on related musculoskeletal pain and discomfort were collected pre- and post-study using the Nordic pain questionnaire and the VAS score. Monthly visits were made, and the education sessions were reiterated on each visit, with reminders, corrections, and problem-solving for each worker. After 9 months, with a total of 4 such education sessions, the post-education data were collected. SPSS 20 was used to analyse the collected data. Results: The majority of the workers (78%) participated in the intervention workshops, which were arranged four times depending on availability and feasibility. The average age of the participants was 39 years.
Females made up 79.49% of the participants and males 20.51%. The Nordic Musculoskeletal Questionnaire (NMQ) showed that knee pain was the most commonly reported complaint over the previous 12 months (62%), with a mean VAS of 6.27, followed by low back pain. Post intervention, the mean VAS score was reduced significantly to 2.38. The comparison of pre-post scores was made using the Wilcoxon matched-pairs test. On enquiry, the participants reported that they had learned the importance of applying ergonomics at their workplace, which in turn helped them handle problems arising at work on their own, with self-confidence. Conclusion: Participatory ergonomics proved effective with the workers of the mega kitchen, and it is a feasible and practical approach. An advantage of the study setting was that it already had a sophisticated, ergonomically designed workstation; what was lacking was the education and practical knowledge to use these stations. There was a significant reduction in VAS scores with the implementation of changes in working style, and the knowledge of ergonomics helped to decrease physical load and improve musculoskeletal health.
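The pre/post VAS comparison above uses the Wilcoxon matched-pairs (signed-rank) test. A pure-Python sketch of the test statistic, with invented scores (in practice one would use a statistics package such as scipy.stats.wilcoxon, which also supplies the p-value):

```python
# Illustrative Wilcoxon signed-rank statistic for paired pre/post scores:
# drop zero differences, rank the absolute differences (ties mid-ranked),
# and take W = min(sum of positive-diff ranks, sum of negative-diff ranks).
# The VAS scores below are hypothetical, not the study's data.

def wilcoxon_statistic(pre, post):
    """Return W = min(W+, W-) for paired samples."""
    diffs = [a - b for a, b in zip(pre, post) if a != b]  # drop zero diffs
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        mid_rank = (i + j) / 2.0 + 1.0  # average rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = mid_rank
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)

pre_vas = [6, 7, 8, 5, 6]   # hypothetical pre-intervention scores
post_vas = [2, 9, 4, 2, 3]  # hypothetical post-intervention scores
print(wilcoxon_statistic(pre_vas, post_vas))
```

A small W relative to its null distribution (here, nearly all differences in one direction) is what drives the significant pre/post result reported above.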

Keywords: ergonomic awareness session, mega kitchen, participatory ergonomics, work related musculoskeletal disorders

Procedia PDF Downloads 139
193 The Effect of Degraded Shock Absorbers on the Safety-Critical Stationary and Non-Stationary Lateral Dynamics of Passenger Cars

Authors: Tobias Schramm, Günther Prokop

Abstract:

The average age of passenger cars is rising steadily around the world, and older vehicles are more sensitive to the degradation of chassis components: a higher age and higher mileage correlate with an increased failure rate of vehicle shock absorbers. The most common degradation mechanism of vehicle shock absorbers is the loss of oil and gas, and it is not yet fully understood how this loss in twin-tube shock absorbers affects the lateral dynamics of passenger cars. The aim of this work is to estimate the effect of degraded twin-tube shock absorbers on the safety-critical lateral dynamics of passenger cars. A characteristic-curve-based five-mass full-vehicle model and a semi-physical, phenomenological shock absorber model were set up, parameterized, and validated. The shock absorber model is able to reproduce the damping characteristics of twin-tube shock absorbers with oil and gas loss for various excitations. The full-vehicle model was used to simulate stationary cornering and steering-wheel-angle step maneuvers on road classes A to D. The simulations were carried out over a realistic parameter space in order to demonstrate the influence of various vehicle characteristics on the effect of degraded shock absorbers. The results show that degraded shock absorbers have a negative effect on the understeer gradient of a vehicle. For stationary lateral dynamics, degraded shock absorbers reduce the maximum lateral acceleration under high road excitation. Degraded rear-axle shock absorbers can shift the understeer gradient of a vehicle toward oversteer. Degraded shock absorbers also lead to increased roll angles. Furthermore, degraded shock absorbers have a major impact on driving stability during steering-wheel-angle steps; degraded rear-axle shock absorbers, in particular, can lead to unstable handling. The tire stiffness, the unsprung mass, and the stabilizer stiffness in particular influence the effect of degraded shock absorbers on the lateral dynamics of passenger cars.
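The oversteer tendency described above can be made concrete with the textbook single-track (bicycle-model) understeer gradient. In this crude sketch, degraded rear dampers are represented only indirectly, as a reduction in effective rear cornering stiffness caused by wheel-load fluctuation; all numbers are assumptions, not the paper's simulation results:

```python
# Illustrative single-track understeer gradient:
# K = (m / L) * (l_r / C_f - l_f / C_r), with K < 0 indicating oversteer.
# Degraded rear dampers are crudely modeled as a drop in effective rear
# cornering stiffness; every value below is an assumption.

def understeer_gradient(m, l_f, l_r, c_f, c_r):
    """K in rad/(m/s^2) for a linear bicycle model; K < 0 means oversteer."""
    wheelbase = l_f + l_r
    return (m / wheelbase) * (l_r / c_f - l_f / c_r)

m = 1500.0           # vehicle mass, kg
l_f, l_r = 1.2, 1.5  # center of gravity to front/rear axle, m
c_f = 80000.0        # front axle cornering stiffness, N/rad
c_r = 100000.0       # intact rear axle cornering stiffness, N/rad

k_intact = understeer_gradient(m, l_f, l_r, c_f, c_r)
k_degraded = understeer_gradient(m, l_f, l_r, c_f, 0.6 * c_r)  # degraded rear
print(k_intact > 0, k_degraded < k_intact)  # degradation shifts toward oversteer
```

With these assumed values the intact vehicle understeers (K > 0) while the degraded-rear-axle case crosses into oversteer (K < 0), mirroring the direction of the effect reported in the abstract.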

Keywords: driving dynamics, numerical simulation, road safety, shock absorber degradation, stationary and non-stationary lateral dynamics

Procedia PDF Downloads 16
192 Properties of Magnesium-Based Hydrogen Storage Alloy Added with Palladium and Titanium Hydride

Authors: Jun Ying Lin, Tzu Hsiang Yen, Cha'o Kuang Chen

Abstract:

Hydrogen storage alloys, which store hydrogen by physical and chemical absorption, are widely regarded as having great potential. However, such alloys are limited by their high operating temperatures. It has been found that adding transition elements can improve the properties of hydrogen storage alloys. In this research, outstanding improvements in kinetic and thermal properties are obtained by adding palladium and titanium hydride to a magnesium-based hydrogen storage alloy. The magnesium-based alloy is the main material, into which TiH2 and Pd are added separately. The materials are then milled in a planetary ball mill at 650 rpm. TGA/DSC and PCT measurements give the capacity, time, and temperature of ab-/desorption, while SEM and XRD are used to analyze the structure and composition of the materials. The results clearly show that Pd is beneficial to the kinetic properties: 2MgH2-0.1Pd has the highest capacity of all the alloys listed, approximately 5.5 wt%. Secondly, no new Ti-related compounds are found by XRD analysis; thus TiH2, acting as a catalyst, allows 2MgH2-TiH2 and 2MgH2-TiH2-0.1Pd to absorb hydrogen efficiently at low temperature. 2MgH2-TiH2 reaches roughly 3.0 wt% in 82.4 minutes at 50°C and in 8 minutes at 100°C, while 2MgH2-TiH2-0.1Pd reaches 2.0 wt% in 400 minutes at 50°C and in 48 minutes at 100°C. The lowest desorption temperatures of 2MgH2-0.1Pd and 2MgH2-TiH2 are similar (320°C), whereas that of 2MgH2-TiH2-0.1Pd is 20°C lower. From XRD, it can be observed that PdTi2 and Pd3Ti are produced by mechanical alloying when both Pd and TiH2 are added to MgH2. Due to the synergistic effects between Pd and TiH2, 2MgH2-TiH2-0.1Pd has the lowest dehydrogenation temperature. Furthermore, the Pressure-Composition-Temperature (PCT) curves of 2MgH2-TiH2-0.1Pd are measured at different temperatures: 370°C, 350°C, 320°C, and 300°C. The plateau pressure is obtained from each of these PCT curves.
From the plateau pressures at different temperatures, the enthalpy and entropy in the Van't Hoff equation can be solved: for 2MgH2-TiH2-0.1Pd, the enthalpy is 74.9 kJ/mol and the entropy is 122.9 J/(mol·K). Activation means that a hydrogen storage alloy undergoes repeated ab-/desorption processes; it plays an important role in ab-/desorption because it shortens the ab-/desorption time through the increase in surface area. From SEM, it is clear that the grain size becomes smaller and the surface rougher.
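The Van't Hoff step above amounts to a linear fit of ln(P) against 1/T, with slope -ΔH/R and intercept ΔS/R. A sketch of that fit, using synthetic plateau pressures generated from the reported ΔH and ΔS values (the temperatures and the exact pressure data are assumptions):

```python
# Sketch of the Van't Hoff evaluation: fit ln(P) = -dH/(R*T) + dS/R by
# least squares over plateau pressures at several temperatures. Pressures
# below are synthetic, generated from the reported dH/dS, not measured data.

import math

R = 8.314  # gas constant, J/(mol*K)

def vant_hoff_fit(temps_k, pressures):
    """Return (dH in J/mol, dS in J/(mol*K)) from ln(P) vs 1/T regression."""
    xs = [1.0 / t for t in temps_k]
    ys = [math.log(p) for p in pressures]
    n = len(xs)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) \
        / sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    return -slope * R, intercept * R

# Synthetic plateau pressures from dH = 74.9 kJ/mol, dS = 122.9 J/(mol*K)
temps = [573.15, 593.15, 623.15, 643.15]
press = [math.exp(-74900.0 / (R * t) + 122.9 / R) for t in temps]
d_h, d_s = vant_hoff_fit(temps, press)
print(round(d_h / 1000.0, 1), round(d_s, 1))  # recovers 74.9 and 122.9
```

Because the synthetic data are exactly linear in 1/T, the regression recovers the generating parameters; with measured plateau pressures the residuals indicate how well the two-parameter Van't Hoff form describes the alloy.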

Keywords: hydrogen storage materials, magnesium hydride, ab-/desorption performance, plateau pressure

Procedia PDF Downloads 270
191 Effects of Nutrient Source and Drying Methods on Physical and Phytochemical Criteria of Pot Marigold (Calendula officinalis L.) Flowers

Authors: Leila Tabrizi, Farnaz Dezhaboun

Abstract:

In order to study the effect of plant nutrient source and different drying methods on the physical and phytochemical characteristics of pot marigold (Calendula officinalis L., Asteraceae) flowers, a factorial experiment was conducted based on a completely randomized design with three replications in the Research Laboratory of the University of Tehran in 2010. Different nutrient sources (vermicompost, municipal waste compost, cattle manure, mushroom compost, and an untreated control), which had been applied in a field experiment for flower production, and different drying methods, including microwave (300, 600, and 900 W), oven (60, 70, and 80°C), and natural shade drying at room temperature, were tested. Criteria such as drying kinetics, antioxidant activity, total flavonoid content, total phenolic compounds, and total carotenoids of the flowers were evaluated. The results indicated that the organic nutrient sources had no significant effect on the quality criteria of pot marigold except for total flavonoid content, while the drying methods significantly affected the phytochemical criteria. Microwave drying at 300, 600, and 900 W resulted in the highest total flavonoid content, total phenolic compounds, and antioxidant activity, respectively, while oven drying gave the lowest values of the phytochemical criteria. The interaction of nutrient source and drying method also significantly affected antioxidant activity: the highest antioxidant activity was obtained with the combination of vermicompost and microwave drying at 900 W, whereas vermicompost combined with oven drying at 60°C gave the lowest. The drying trends showed that microwave drying was faster than oven and natural shade drying; increasing the microwave power and oven temperature decreased the drying time, while the slope of the moisture-content reduction curve steepened.
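Drying-kinetics curves like those described above are often summarized with the first-order (Lewis) thin-layer model, MR(t) = exp(-kt), where a larger drying constant k (e.g. higher microwave power) means a steeper curve and shorter drying time. The abstract does not report its fitted constants, so the values below are assumptions for illustration:

```python
# Minimal sketch of the first-order (Lewis) thin-layer drying model:
# MR(t) = exp(-k * t). Drying constants are assumed, not from the study;
# a larger k stands in for a faster method such as high-power microwave.

import math

def moisture_ratio(t_min, k_per_min):
    """Dimensionless moisture ratio after t_min minutes."""
    return math.exp(-k_per_min * t_min)

def time_to_reach(mr_target, k_per_min):
    """Minutes needed to dry down to a target moisture ratio."""
    return -math.log(mr_target) / k_per_min

k_oven = 0.02       # assumed oven drying constant, 1/min
k_microwave = 0.10  # assumed microwave drying constant, 1/min
print(time_to_reach(0.05, k_microwave) < time_to_reach(0.05, k_oven))  # faster
```

Fitting k to measured moisture-ratio data for each method would reproduce the ranking reported above: microwave fastest, then oven, then shade drying.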

Keywords: drying kinetic, medicinal plant, organic fertilizer, phytochemical criteria

Procedia PDF Downloads 336
190 Suitable Site Selection of Small Dams Using Geo-Spatial Technique: A Case Study of Dadu Tehsil, Sindh

Authors: Zahid Khalil, Saad Ul Haque, Asif Khan

Abstract:

Decision making about suitable sites for any project, considering multiple parameters, is difficult; GIS combined with Multi-Criteria Analysis (MCA) can simplify such decisions, and this technology has proved to be an efficient and adequate means of acquiring the desired information. In this study, GIS and MCA were employed to identify suitable sites for small dams in Dadu Tehsil, Sindh. GIS software was used to create all the spatial parameters for the analysis. The derived parameters are slope, drainage density, rainfall, land use/land cover, soil groups, Curve Number (CN), and a runoff index, at a spatial resolution of 30 m. The data used for deriving the above layers include the 30-meter resolution SRTM DEM, Landsat 8 imagery, rainfall from the National Centers for Environmental Prediction (NCEP), and soil data from the Harmonized World Soil Database (HWSD). The land use/land cover map is derived from Landsat 8 using supervised classification; the slope, drainage network, and watersheds are delineated by terrain processing of the DEM. The Soil Conservation Service (SCS) method is implemented to estimate the surface runoff from the rainfall. Prior to this, an SCS-CN grid is developed by integrating the soil and land use/land cover rasters. These layers, together with some technical and ecological constraints, are assigned weights on the basis of suitability criteria. The pairwise comparison method, also known as the Analytic Hierarchy Process (AHP), is used as the MCA method for assigning weights to each decision element. All the parameters and groups of parameters are integrated using weighted overlay in a GIS environment to produce a suitability layer for the dams, which is then classified into four classes: best suitable, suitable, moderate, and less suitable. This study demonstrates a contribution to decision-making about suitable-site analysis for small dams using geospatial data with a minimal amount of ground data.
These suitability maps can be helpful for water resource management organizations in the determination of feasible rainwater harvesting (RWH) structures.
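The SCS curve-number runoff step named above follows the standard relations (metric form, depths in mm): S = 25400/CN − 254, Ia = 0.2·S, and Q = (P − Ia)²/(P − Ia + S) when P > Ia, else Q = 0. A per-cell sketch of that computation, with CN and rainfall values chosen for illustration rather than taken from the study area:

```python
# Sketch of the standard SCS-CN runoff computation (metric, depths in mm):
# S = 25400/CN - 254, Ia = 0.2*S, Q = (P - Ia)^2 / (P - Ia + S) for P > Ia.
# The CN and rainfall values below are illustrative, not from the study.

def scs_runoff_mm(rainfall_mm, curve_number):
    """Direct runoff depth (mm) for one raster cell from a storm depth (mm)."""
    s = 25400.0 / curve_number - 254.0  # potential maximum retention
    ia = 0.2 * s                        # initial abstraction
    if rainfall_mm <= ia:
        return 0.0                      # all rainfall abstracted, no runoff
    return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

# Example: a 50 mm storm on a cell whose soil/land-cover combination gives CN = 75
print(round(scs_runoff_mm(50.0, 75), 2))
```

Applied cell by cell to the SCS-CN grid and the rainfall raster, this yields the runoff layer that feeds the weighted overlay.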

Keywords: remote sensing, GIS, AHP, RWH

Procedia PDF Downloads 389
189 Physicochemical Properties of Pea Protein Isolate (PPI)-Starch and Soy Protein Isolate (SPI)-Starch Nanocomplexes Treated by Ultrasound at Different pH Values

Authors: Gulcin Yildiz, Hao Feng

Abstract:

Soybean proteins are the most widely used and researched proteins in the food industry. Due to soy allergies among consumers, however, alternative legume proteins with similar functional properties have been studied in recent years. These alternative proteins are also expected to have a price advantage over soy proteins. One such protein that has shown good potential for food applications is pea protein. Besides its favorable functional properties, pea protein also contains fewer anti-nutritional substances than soy protein. However, a comparison of the physicochemical properties of pea protein isolate (PPI)-starch nanocomplexes and soy protein isolate (SPI)-starch nanocomplexes treated by ultrasound has not been well documented. This study was undertaken to investigate the effects of ultrasound treatment on the physicochemical properties of PPI-starch and SPI-starch nanocomplexes. Pea protein isolate (85% pea protein) provided by Roquette (Geneva, IL, USA) and soy protein isolate (SPI, Pro-Fam® 955) obtained from the Archer Daniels Midland Company were adjusted to different pH levels (2-12) and treated with 5 minutes of ultrasonication (100% amplitude) to form complexes with starch. The soluble protein content was determined by the Bradford method using BSA as the standard. The turbidity of the samples was measured using a spectrophotometer (Lambda 1050 UV/VIS/NIR Spectrometer, PerkinElmer, Waltham, MA, USA). The volume-weighted mean diameters (D4,3) of the soluble proteins were determined by dynamic light scattering (DLS). The emulsifying properties of the proteins were evaluated by the emulsion stability index (ESI) and emulsion activity index (EAI). Both the soy and pea protein isolates showed a U-shaped solubility curve as a function of pH, with a high solubility above the isoelectric point and a low one below it. Increasing the pH from 2 to 12 resulted in increased solubility for both the SPI- and PPI-starch complexes.
The pea nanocomplexes showed greater solubility than the soy ones. The SPI-starch nanocomplexes showed better emulsifying properties, as determined by the ESI and EAI, due to SPI's high solubility and high protein content, although the PPI had similar or better emulsifying properties at certain pH values. The ultrasound treatment significantly decreased the particle sizes of both kinds of nanocomplexes; at all pH levels and for both proteins, the droplet sizes were found to be below 300 nm. The present study clearly demonstrates that applying ultrasonication under different pH conditions significantly improved the solubility and emulsifying properties of the SPI and PPI, with the PPI exhibiting better solubility and emulsifying properties than the SPI at certain pH levels.

Keywords: emulsifying properties, pea protein isolate, soy protein isolate, ultrasonication

Procedia PDF Downloads 319
188 Parametric Approach for Reserve Liability Estimate in Mortgage Insurance

Authors: Rajinder Singh, Ram Valluru

Abstract:

The Chain Ladder (CL), Expected Loss Ratio (ELR), and Bornhuetter-Ferguson (BF) methods, in addition to more complex transition-rate modeling, are commonly used actuarial reserving methods in general insurance. There is limited published research about their relative performance in the context of Mortgage Insurance (MI). In our experience, these traditional techniques pose unique challenges and do not provide stable claim estimates for medium- to longer-term liabilities. The relative strengths and weaknesses among the alternative approaches revolve around: stability in the recent loss development pattern, sufficiency and reliability of loss development data, and agreement or disagreement between reported losses to date and the ultimate loss estimate. The CL method produces volatile reserve estimates, especially for accident periods with little development experience. The ELR method breaks down especially when ultimate loss ratios are not stable and predictable. While the BF method provides a good tradeoff between the loss development approach (CL) and the ELR, it generates claim development and ultimate reserves that are disconnected from the ever-to-date (ETD) development experience for some accident years with more development experience; further, BF is based on a subjective a priori assumption. The fundamental shortcoming of these methods is their inability to model exogenous factors, such as the economy, which impact various cohorts at the same chronological time but at staggered points along their lifetime development. This paper proposes an alternative approach: parametrizing the loss development curve and using logistic regression to generate the ultimate loss estimate for each homogeneous group (accident year or delinquency period).
The methodology was tested on an actual MI claim development dataset in which the various cohorts followed a sigmoidal trend but levels varied substantially depending on the economic and operational conditions during a development period spanning many years. The proposed approach provides the ability to indirectly incorporate such exogenous factors and produces more stable loss forecasts for reserving purposes compared to the traditional CL and BF methods.
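The core idea above can be sketched with a logistic (sigmoidal) emergence curve: the fitted curve gives the fraction of ultimate losses expected to have emerged at a given development age, and ever-to-date losses are grossed up by that fraction. The parameter names and cohort values below are invented for illustration; the paper fits the actual parameters by logistic regression on its MI dataset:

```python
# Illustrative parametric reserving sketch: model the cumulative loss
# emergence pattern as a logistic curve and scale ever-to-date (ETD)
# losses to ultimate. All parameter and cohort values here are invented.

import math

def development_factor(age, midpoint, steepness):
    """Fraction of ultimate losses expected to have emerged by a given age."""
    return 1.0 / (1.0 + math.exp(-steepness * (age - midpoint)))

def ultimate_loss(etd_loss, age, midpoint, steepness):
    """Gross up ETD losses by the fitted emergence fraction."""
    return etd_loss / development_factor(age, midpoint, steepness)

# A hypothetical cohort observed at 36 months with 4.0m reported, assuming
# the fitted curve has its midpoint at 24 months and steepness 0.15/month:
frac = development_factor(36, midpoint=24, steepness=0.15)
print(round(frac, 3), round(ultimate_loss(4.0, 36, 24, 0.15), 2))
```

Shifting the midpoint or steepness per cohort is where exogenous factors (e.g. economic conditions during the development period) can be incorporated indirectly, which is the flexibility the traditional CL and BF methods lack.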

Keywords: actuarial loss reserving techniques, logistic regression, parametric function, volatility

Procedia PDF Downloads 132
187 Validation of the Arabic Version of the Positive and Negative Syndrome Scale (PANSS)

Authors: Arij Yehya, Suhaila Ghuloum, Abdlmoneim Abdulhakam, Azza Al-Mujalli, Mark Opler, Samer Hammoudeh, Yahya Hani, Sundus Mari, Reem Elsherbiny, Ziyad Mahfoud, Hassen Al-Amin

Abstract:

Introduction: The Positive and Negative Syndrome Scale (PANSS) is a valid instrument developed by Kay and colleagues to assess symptoms of patients with schizophrenia. It consists of 30 items that factor the symptoms into three subscales: positive, negative, and general psychopathology. This scale has been translated and validated in several languages. Objective: This study aims to determine the validity and psychometric properties of the Arabic version of the PANSS. Methods: A standardized translation and cultural adaptation method was adopted. Patients diagnosed with schizophrenia (n=98), according to the psychiatrist's diagnosis based on DSM-IV criteria, were recruited from the Psychiatry Department at Rumailah Hospital, Qatar. A first rater confirmed the diagnosis using the Arabic version of the Mini International Neuropsychiatric Interview (MINI 6). A second, independent rater administered the Arabic version of the PANSS. A control group (n=101) with no history of psychiatric disorder was also recruited from the family and friends of the patients and from primary health care centers in Qatar. Results: There were more males than females in our sample of patients with schizophrenia (68.9% and 31.6%, respectively), whereas in the control group females outnumbered males (58.4% and 41.6%, respectively). The scale had good internal consistency, with a Cronbach's alpha of 0.91. There was a significant difference between the scores on the three subscales of the PANSS. Patients with schizophrenia scored significantly higher (p<.0001) than the control subjects on the subscales for positive symptoms (20.01, SD=7.21, vs. 7.30, SD=1.38), negative symptoms (18.89, SD=8.88, vs. 7.37, SD=2.38), and general psychopathology (34.41, SD=11.56, vs. 16.93, SD=3.93), respectively. Factor analysis and ROC curve analysis were carried out to further test the psychometrics of the scale.
Conclusions: The Arabic version of the PANSS is a reliable and valid tool to assess both positive and negative symptoms of patients with schizophrenia in a balanced manner. In addition to providing the Arab population with a standardized tool to monitor symptoms of schizophrenia, this version provides a gateway for comparing the prevalence of positive and negative symptoms in the Arab world with that reported elsewhere.
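The internal-consistency figure reported above (Cronbach's alpha of 0.91) comes from the standard formula alpha = k/(k−1) · (1 − Σ item variances / variance of total scores). A small pure-Python sketch, with invented item scores rather than the study's 30-item data:

```python
# Sketch of the Cronbach's alpha computation used to report internal
# consistency. Item scores below are invented; variances use the
# population (n) denominator, as is conventional for this formula.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(item_scores):
    """item_scores: one list per item, each holding per-subject scores."""
    k = len(item_scores)
    totals = [sum(subject) for subject in zip(*item_scores)]  # per-subject totals
    item_var_sum = sum(variance(item) for item in item_scores)
    return (k / (k - 1)) * (1.0 - item_var_sum / variance(totals))

# Three identical items (perfect consistency) give alpha = 1.0
items = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
print(round(cronbach_alpha(items), 4))
```

Less correlated items inflate the summed item variances relative to the total-score variance, pulling alpha down; values around 0.9, as reported here, indicate strong internal consistency.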

Keywords: Arabic version, assessment, diagnosis, schizophrenia, validation

Procedia PDF Downloads 635
186 Influence of Bottom Ash on the Geotechnical Parameters of Clayey Soil

Authors: Tanios Saliba, Jad Wakim, Elie Awwad

Abstract:

Clayey soils exhibit undesirable problems in civil engineering projects: poor bearing capacity, shrinkage, cracking, etc. On the other hand, the increasing production of bottom ash and its disposal in an eco-friendly manner is a matter of concern. Soil stabilization using bottom ash is a new technique in geo-environmental engineering. It can be used wherever a soft clayey soil is encountered in foundations or road subgrades, instead of older techniques such as cement-soil mixing. This new technology can be used for road embankments and clayey foundation platforms (shallow or deep foundations) instead of replacing bad soil or using older, less eco-friendly techniques. Moreover, applying this new technique in geotechnical engineering projects can reduce the bottom ash disposal problem, which grows day after day. The research consists of mixing clayey soil with different percentages of bottom ash at different values of water content and evaluating the mechanical properties of every mix: the percentages of bottom ash are 10%, 20%, 30%, 40%, and 50%, with water contents of 25%, 35%, and 45% of the mix's weight. Before testing the different mixes, the clayey soil's properties were determined: Atterberg limits, cohesion and friction angle, and particle size distribution. In order to evaluate the mechanical properties and behavior of every mix, different tests were conducted: a direct shear test to determine the cohesion and internal friction angle of every mix, and an unconfined compressive strength test (stress-strain curve) to determine the mix's elastic modulus and compressive strength. Soil samples were prepared in accordance with the ASTM standards and tested at different times, in order to emphasize the influence of the curing period on the variation of the mix's mechanical properties and characteristics.
As of today, the results obtained are very promising: the mix's cohesion and friction angle vary as functions of the bottom ash percentage, water content, and curing period. The cohesion increases substantially before decreasing at long curing periods (the mix's cohesion remains larger than the intact soil's cohesion), while the internal friction angle keeps increasing even at a curing period of 28 days (the tests' largest curing period). This gives a better soil behavior: fewer cracks and better bearing capacity.

Keywords: bottom ash, clayey soil, mechanical properties, tests

Procedia PDF Downloads 177
185 Experimental and Analytical Studies for the Effect of Thickness and Axial Load on Load-Bearing Capacity of Fire-Damaged Concrete Walls

Authors: Yeo Kyeong Lee, Ji Yeon Kang, Eun Mi Ryu, Hee Sun Kim, Yeong Soo Shin

Abstract:

The objective of this paper is to investigate the effects of thickness and of axial loading applied during a fire test on the load-bearing capacity of fire-damaged normal-strength concrete walls. These two factors govern the temperature distributions in concrete members and have mainly been studied through numerous experiments. Toward this goal, three wall specimens of different thicknesses are heated for 2 h according to the ISO standard heating curve, and the temperature distributions through their thicknesses are measured using thermocouples. In addition, two wall specimens are heated for 2 h while simultaneously being subjected to a constant axial load at their top sections. The test results show that the temperature distribution during the fire test depends on both the wall thickness and the axial load applied during the fire. After the fire tests, the specimens are cured for one month and then subjected to loading tests. The heated specimens are compared with three unheated specimens to investigate the residual load-bearing capacities. The fire-damaged walls show only a minor difference in load-bearing capacity with respect to the axial loading, whereas a significant difference is evident with respect to the wall thickness. To validate the experimental results, finite element models are generated using material properties at elevated temperatures obtained from the experiments, and the analytical results show sound agreement with the experimental results. The analytical method, validated through the experimental results, is then applied to model fire-damaged walls 2,800 mm high (a typical story height of residential buildings in Korea), taking the buckling effect into account. The models for structural analysis are generated from the deformed shape obtained after the thermal analysis. The load-bearing capacity of the fire-damaged walls with pin supports at both ends does not significantly depend on the wall thickness, owing to the restraint provided by the pinned ends.
The difference in the load-bearing capacity of the fire-damaged walls with respect to the axial load applied during the fire is within approximately 5%.
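The ISO standard heating curve used in these fire tests prescribes the furnace gas temperature as T(t) = 20 + 345·log₁₀(8t + 1) °C, with t in minutes. A short sketch of the curve over the 2 h exposure:

```python
import math

def iso834_temperature(t_min):
    """Furnace gas temperature (deg C) on the ISO 834 standard heating
    curve after t_min minutes of exposure (20 deg C ambient assumed)."""
    return 20.0 + 345.0 * math.log10(8.0 * t_min + 1.0)

# Temperatures over the 2 h exposure used in the fire tests
for t in (30, 60, 120):
    print(f"{t:3d} min -> {iso834_temperature(t):7.1f} deg C")
```

After 120 min the curve reaches roughly 1049 °C, which is the thermal boundary condition driving the through-thickness temperature distributions measured by the thermocouples.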

Keywords: normal-strength concrete wall, wall thickness, axial-load ratio, slenderness ratio, fire test, residual strength, finite element analysis

Procedia PDF Downloads 216
184 Rainwater Harvesting and Management of Ground Water (Case Study: Weather Modification Project in Iran)

Authors: Samaneh Poormohammadi, Farid Golkar, Vahideh Khatibi Sarabi

Abstract:

Climate change and consecutive droughts have increased the importance of rainwater harvesting methods. One such method, which amounts to managing atmospheric water resources, is the use of weather modification technologies. Weather modification (also known as weather control) is the act of intentionally manipulating or altering the weather. Its most common form is cloud seeding, which increases rain or snow, usually for the purpose of increasing the local water supply. Cloud seeding operations have been carried out in central Iran since 1999 with the aim of harvesting rainwater and reducing the effects of drought. In this research, we analyze the results of cloud seeding operations in the Simindasht plain in northern Iran. Rainwater harvesting with the help of cloud seeding technology has been evaluated through its effects on surface water and underground water. For this purpose, two different methods have been used to estimate runoff: the US Soil Conservation Service (SCS) curve number method and the rational method. In order to determine the infiltration rate of underground water, the water balance reports of the country's comprehensive water plan have been used. The study areas located in the target area of each province were extracted by drawing maps of the infiltration coefficients of each area in GIS software, with the infiltration coefficients taken from the balance reports of the country's comprehensive water plan. Then, based on the area of each study area, the weighted average of the infiltration coefficients of the study areas located in the target area of each province is taken as the infiltration coefficient of that province.
Results show that the amount of water extracted from rainfall with the help of the cloud seeding projects in Simindasht is as follows: an increase in runoff of 63.9 million cubic meters (with the SCS equation) or 51.2 million cubic meters (with the rational equation), and an increase in groundwater resources of 40.5 million cubic meters.
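The SCS curve number method referenced above computes direct runoff as Q = (P − 0.2S)² / (P + 0.8S), where S = 25400/CN − 254 is the potential maximum retention in mm. A minimal sketch with an illustrative curve number, not one from the study:

```python
def scs_runoff(precip_mm, curve_number):
    """Direct runoff depth (mm) by the SCS curve number method.
    S is the potential maximum retention and Ia = 0.2*S the standard
    initial abstraction; runoff is zero until rainfall exceeds Ia."""
    s = 25400.0 / curve_number - 254.0
    ia = 0.2 * s
    if precip_mm <= ia:
        return 0.0
    return (precip_mm - ia) ** 2 / (precip_mm - ia + s)

# Illustrative event: 50 mm of rain on a catchment with CN = 80
print(f"runoff = {scs_runoff(50.0, 80.0):.1f} mm")
```

Multiplying such runoff depths by catchment area and summing over seeded events gives runoff volumes of the kind reported for Simindasht.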

Keywords: rainwater harvesting, ground water, atmospheric water resources, weather modification, cloud seeding

Procedia PDF Downloads 105
183 Automated End of Sprint Detection for Force-Velocity-Power Analysis with GPS/GNSS Systems

Authors: Patrick Cormier, Cesar Meylan, Matt Jensen, Dana Agar-Newman, Chloe Werle, Ming-Chang Tsai, Marc Klimstra

Abstract:

Sprint-derived horizontal force-velocity-power (FVP) profiles can be developed with adequate validity and reliability with satellite (GPS/GNSS) systems. However, FVP metrics are sensitive to small nuances in data processing procedures, such that minor differences in defining the onset and the end of the sprint could result in different FVP metric outcomes. Furthermore, in team sports there is a requirement for rapid analysis and feedback of results from multiple athletes; therefore, developing standardized and automated methods to improve the speed, efficiency, and reliability of this process is warranted. Thus, the purpose of this study was to compare different methods of sprint end detection for the development of FVP profiles from 10 Hz GPS/GNSS data through goodness-of-fit and inter-trial reliability statistics. Seventeen national team female soccer players participated in the FVP protocol, which consisted of 2x40 m maximal sprints performed towards the end of a soccer-specific warm-up in a training session (1020 hPa, no wind, temperature 30 °C) on an open grass field. Each player wore a 10 Hz Catapult unit (Vector S7, Catapult Innovations) inserted in a pouch between the scapulae of a vest. All data were analyzed following common procedures. The variables computed and assessed were the model parameters, estimated maximal sprint speed (MSS) and the acceleration time constant τ, in addition to relative horizontal force (F₀), velocity intercept (V₀), and relative mechanical power (Pmax). The onset of the sprints was standardized with an acceleration threshold of 0.1 m/s². The sprint end detection methods were: 1. the time when peak velocity (MSS) was achieved (zero acceleration); 2. the time after peak velocity dropped by 0.4 m/s; 3. the time after peak velocity dropped by 0.6 m/s; and 4. the time when the distance integrated from the GPS/GNSS signal reached 40 m.
Goodness of fit of each sprint end detection method was determined using the residual sum of squares (RSS) to quantify the error of the FVP modeling on the sprint data from the GPS/GNSS system. Inter-trial reliability (from the two trials) was assessed using intraclass correlation coefficients (ICCs). For goodness of fit, the end detection technique using the time when peak velocity was achieved (zero acceleration) had the lowest RSS values, followed by the 0.4 and 0.6 m/s velocity decays, while the 40 m end had the highest RSS values. For inter-trial reliability, the end-of-sprint detection techniques defined as the time at (method 1) or shortly after (methods 2 and 3) MSS was achieved had very large to near-perfect ICCs, and the time at the 40 m integrated distance (method 4) had large to very large ICCs. Peak velocity was reached at 29.52 ± 4.02 m. Therefore, sport scientists should set the end of the sprint either when peak velocity is reached or shortly after, to improve goodness of fit and achieve reliable between-trial FVP profile metrics. Nevertheless, more robust processing and modeling procedures should be developed in future research to improve sprint model fitting. The protocol was seamlessly integrated into the usual training, which shows promise for sprint monitoring in the field with this technology.
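The FVP parameters named above (MSS, τ, F₀, V₀, Pmax) are commonly obtained by fitting a mono-exponential velocity model, v(t) = MSS·(1 − e^(−t/τ)), to the speed trace between the detected onset and end of the sprint; neglecting air resistance, relative F₀ ≈ MSS/τ, V₀ ≈ MSS, and Pmax = F₀·V₀/4. A sketch on synthetic data (the parameter values are illustrative, not the study's):

```python
import numpy as np
from scipy.optimize import curve_fit

def sprint_velocity(t, mss, tau):
    """Mono-exponential sprint model: v(t) = MSS * (1 - exp(-t/tau))."""
    return mss * (1.0 - np.exp(-t / tau))

# Synthetic 10 Hz speed trace (true MSS = 8.5 m/s, tau = 1.2 s)
t = np.arange(0.0, 6.0, 0.1)
v = sprint_velocity(t, 8.5, 1.2)

(mss, tau), _ = curve_fit(sprint_velocity, t, v, p0=(8.0, 1.0))

# FVP metrics per kg of body mass, neglecting air resistance
f0 = mss / tau        # relative horizontal force at v = 0 (N/kg)
v0 = mss              # velocity-axis intercept of the linear F-v relation
pmax = f0 * v0 / 4.0  # apex of the parabolic power-velocity relation
print(f"MSS={mss:.2f} m/s, tau={tau:.2f} s, Pmax={pmax:.1f} W/kg")
```

Because the fit window stops at the detected sprint end, each of the four end-detection methods yields a slightly different RSS and, in turn, slightly different F₀, V₀, and Pmax values.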

Keywords: automated, biomechanics, team-sports, sprint

Procedia PDF Downloads 119
182 Enhanced Kinetic Solubility Profile of Epiisopiloturine Solid Solution in Hypromellose Phthalate

Authors: Amanda C. Q. M. Vieira, Cybelly M. Melo, Camila B. M. Figueirêdo, Giovanna C. R. M. Schver, Salvana P. M. Costa, Magaly A. M. de Lyra, Ping I. Lee, José L. Soares-Sobrinho, Pedro J. Rolim-Neto, Mônica F. R. Soares

Abstract:

Epiisopiloturine (EPI) is a drug candidate extracted from Pilocarpus microphyllus and isolated from the waste of pilocarpine production. EPI has demonstrated promising schistosomicidal, leishmanicidal, anti-inflammatory, and antinociceptive activities according to in vitro studies carried out since 2009. However, the molecule shows poor aqueous solubility, which hinders the release of the drug candidate and its absorption by the organism. The purpose of the present study is to investigate the extent of enhancement of the kinetic solubility of a solid solution (SS) of EPI in hypromellose phthalate HP-55 (HPMCP), an enteric polymer carrier. The SS was obtained by the solvent evaporation method, using acetone/methanol (60:40) as the solvent system. Both EPI and the polymer (drug loading 10%) were dissolved in this solvent until a clear solution was obtained, then dried in an oven at 60 °C for 12 hours, followed by drying in a vacuum oven for 4 h. The results show a considerable modification of the crystalline structure of the drug candidate. For instance, X-ray diffraction (XRD) shows a crystalline pattern for EPI, which becomes amorphous in the SS. Polarized light microscopy, a technique more sensitive than XRD, also shows a complete absence of crystals in the SS sample. Differential scanning calorimetry (DSC) curves show no EPI melting signal in the SS curve, indicating, once more, the absence of crystals in this system. Interactions between the drug candidate and the polymer were found by infrared spectroscopy, which shows a 43.3 cm⁻¹ shift of the carbonyl band, indicating a moderate-to-strong interaction between them, probably one of the reasons for the SS formation. Under sink conditions (pH 6.8), the dissolution performance of the EPI SS was 2.8 times that of the isolated drug candidate.
The EPI SS sample released more than 95% of the drug candidate in 15 min, whereas only 45% of EPI alone dissolved in 15 min and 70% in 90 min. Thus, HPMCP shows good potential to enhance the kinetic solubility profile of EPI. Future studies evaluating the stability of the SS are required to confirm the benefits of this system.

Keywords: epiisopiloturine, hypromellose phthalate HP-55, pharmaceutical technology, solubility

Procedia PDF Downloads 608
181 Flexible Ethylene-Propylene Copolymer Nanofibers Decorated with Ag Nanoparticles as Effective 3D Surface-Enhanced Raman Scattering Substrates

Authors: Yi Li, Rui Lu, Lianjun Wang

Abstract:

With the rapid development of the chemical industry, the consumption of volatile organic compounds (VOCs) has increased extensively. During VOC production and application, large amounts are released into the environment. As a result, they cause pollution problems not only in soil and groundwater but also for human health. Thus, it is important to develop a sensitive and cost-effective analytical method for trace VOC detection in the environment. Surface-enhanced Raman spectroscopy (SERS), one of the most sensitive optical analytical techniques, offering rapid response, pinpoint accuracy, and noninvasive detection, has been widely used for ultratrace analysis. Based on plasmon resonance at nanoscale metallic surfaces, SERS can detect even single molecules thanks to abundant nanogaps (i.e., 'hot spots') on the nanosubstrate. In this work, self-supported flexible silver nitrate (AgNO₃)/ethylene-propylene copolymer (EPM) hybrid nanofibers were fabricated by electrospinning. After in-situ chemical reduction using ice-cold sodium borohydride as the reducing agent, numerous silver nanoparticles were formed on the nanofiber surface. By adjusting the reduction time and the AgNO₃ content, the morphology and size of the silver nanoparticles could be controlled. Following the principles of solid-phase extraction, hydrophobic substances are more likely to partition into the hydrophobic EPM membrane in an aqueous environment, while water and other polar components are excluded from the analytes. Through this enrichment by the EPM fibers, the number of hydrophobic molecules located on the 'hot spots' generated by the criss-crossed nanofibers is greatly increased, which further enhances the SERS signal intensity. The as-prepared Ag/EPM hybrid nanofibers were first employed to detect a common SERS probe molecule (p-aminothiophenol), with a detection limit down to 10⁻¹² M, demonstrating excellent SERS performance.
To further study the application of the fabricated substrate for monitoring hydrophobic substances in water, several typical VOCs, such as benzene, toluene, and p-xylene, were selected as model compounds. The results showed that the characteristic peaks of these target analytes in a mixed aqueous solution could be distinguished even at a concentration of 10⁻⁶ M after a multi-peak Gaussian fitting process, including C-H bending (850 cm⁻¹) and C-C ring stretching (1581 cm⁻¹, 1600 cm⁻¹) of benzene; C-H bending (844 cm⁻¹, 1151 cm⁻¹), C-C ring stretching (1001 cm⁻¹), and CH₃ bending (1377 cm⁻¹) of toluene; and C-H bending (829 cm⁻¹) and C-C stretching (1614 cm⁻¹) of p-xylene. The SERS substrate has the remarkable advantage of combining the enrichment capacity of EPM with the Raman enhancement of the Ag nanoparticles. Meanwhile, the large specific surface area resulting from electrospinning is beneficial for increasing the number of adsorption sites and promotes 'hot spot' formation. In summary, this work shows powerful potential for rapid, on-site, and accurate detection of trace VOCs using a portable Raman spectrometer.
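The multi-peak Gaussian fitting step used to resolve overlapping Raman bands can be sketched as follows for a doublet like the benzene ring-stretching pair near 1581 and 1600 cm⁻¹; the data here are synthetic, not the paper's spectra:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, width):
    return amp * np.exp(-((x - center) ** 2) / (2.0 * width ** 2))

def two_gaussians(x, a1, c1, w1, a2, c2, w2):
    """Sum of two Gaussian peaks, e.g. the overlapping C-C ring
    stretching bands near 1581 and 1600 cm^-1."""
    return gaussian(x, a1, c1, w1) + gaussian(x, a2, c2, w2)

# Synthetic overlapping doublet standing in for a measured spectrum
x = np.linspace(1540.0, 1640.0, 400)
y = two_gaussians(x, 1.0, 1581.0, 6.0, 0.7, 1600.0, 5.0)

popt, _ = curve_fit(two_gaussians, x, y,
                    p0=(0.8, 1578.0, 5.0, 0.5, 1603.0, 5.0))
print("fitted centers: %.1f and %.1f cm^-1" % (popt[1], popt[4]))
```

In practice more components (one Gaussian per expected band, plus a baseline term) are added so that each analyte's characteristic peak positions and areas can be read off the fitted parameters.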

Keywords: electrospinning, ethylene-propylene copolymer, silver nanoparticles, SERS, VOCs

Procedia PDF Downloads 161
180 Discontinuous Spacetime with Vacuum Holes as Explanation for Gravitation, Quantum Mechanics and Teleportation

Authors: Constantin Z. Leshan

Abstract:

Hole Vacuum theory is based on a discontinuous spacetime that contains vacuum holes. Vacuum holes can explain gravitation and some laws of quantum mechanics and would allow the teleportation of matter. All massive bodies emit a flux of holes which curves spacetime; increasing the concentration of holes leads to length contraction and time dilation, because the holes do not have the properties of extension and duration. In the limiting case when space consists of holes only, the distance between any two points is equal to zero and time stops: outside of the Universe, the properties of extension and duration do not exist. For this reason, the vacuum hole is the only particle in physics capable of describing gravitation using its own properties alone. All microscopic particles must 'jump' and 'vibrate' continually due to the appearance of holes (impassable microscopic 'walls' in space), and this is the cause of quantum behavior. Vacuum holes can explain entanglement, non-locality, the wave properties of matter, tunneling, the uncertainty principle, and so on. Particles do not have trajectories because spacetime is discontinuous and has impassable microscopic 'walls': simple mechanical motion is impossible at small distances, since it is impossible to 'trace' a straight line in a discontinuous spacetime that contains impassable holes. Spacetime 'boils' continually due to the appearance of vacuum holes. For teleportation to be possible, we must send a body outside of the Universe by enveloping it with a closed surface consisting of vacuum holes. Since a material body cannot exist outside of the Universe, it reappears instantaneously at a random point of the Universe. Since the body disappears in one volume and reappears in another random volume without traversing the physical space between them, such a transportation method can be called teleportation (or Hole Teleportation).
It is shown that Hole Teleportation does not violate causality or special relativity due to its random nature and other properties. Although Hole Teleportation has a random nature, it could be used for the colonization of extrasolar planets with the help of a method called 'random jumps': after a large number of random teleportation jumps, there is a probability that the spaceship will appear near a habitable planet. We can create vacuum holes experimentally using the method proposed by Descartes: remove a body from a vessel without permitting another body to occupy its volume.

Keywords: border of the Universe, causality violation, perfect isolation, quantum jumps

Procedia PDF Downloads 427