Search results for: curve approximation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1517

227 Control for Fluid Flow Behaviours of Viscous Fluids and Heat Transfer in Mini-Channel: A Case Study Using Numerical Simulation Method

Authors: Emmanuel Ophel Gilbert, Williams Speret

Abstract:

The control of the flow behaviour of viscous fluids and of heat transfer within a heated mini-channel is considered. The heat transfer and flow characteristics of different viscous liquids, such as engine oil, automatic transmission fluid, a 50% ethylene glycol solution, and deionized water, were numerically analyzed. Mathematical tools such as Fourier series and Laplace Z-transforms were employed to ascertain the wave-like behaviour of each of these viscous fluids. The steady, laminar flow and heat transfer equations were solved by a numerical simulation technique, which was further validated by comparing the available experimental values with the predicted local thermal resistances. The roughness of the mini-channel, one of its physical limitations, was also predicted in this study; it affects the friction factor. When an additive such as tetracycline was introduced into the fluid, the heat input was lowered, which had a pro rata effect on the minor and major frictional losses, mostly at very small Reynolds numbers, circa 60-80. At these low Reynolds numbers, the viscosity and the frictional losses decrease as the temperature of the viscous liquids increases. It is inferred that the three identified equations and models, which supported the numerical simulation via interpolation and integration of the variables extended to the walls of the mini-channel, yield highly reliable engineering and technology calculations for turbulence-impacting jets in the near future. Among the candidate governing equations for this flow control, the Navier-Stokes equations were found to be pertinent to this finding, although other physical factors related to them need to be checked to avoid uncertain turbulence of the fluid flow.
This paradox is resolved within the framework of continuum mechanics using the classical slip condition and an iteration scheme, via a numerical simulation method that takes into account certain terms in the full Navier-Stokes equations; this entailed dropping certain assumptions from the approximation. Concrete questions raised in the main body of the work are examined further in the appendices.
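The low-Reynolds-number frictional behaviour described above can be made concrete with the standard laminar relations. The sketch below, assuming a circular cross-section and the Darcy-Weisbach formulation (the study's own channel geometry and data are not reproduced here), shows why the friction factor becomes large at the cited Reynolds numbers of circa 60-80:

```python
# Sketch: laminar Darcy friction factor and major head loss in a mini-channel,
# assuming a circular cross-section. All numeric values are illustrative,
# not taken from the study.
def darcy_friction_laminar(re):
    """Darcy friction factor for fully developed laminar flow (Re < ~2300)."""
    return 64.0 / re

def major_head_loss(re, length, diameter, velocity, g=9.81):
    """Darcy-Weisbach head loss: h_f = f * (L/D) * V^2 / (2 g)."""
    f = darcy_friction_laminar(re)
    return f * (length / diameter) * velocity**2 / (2.0 * g)

# At the low Reynolds numbers cited (circa 60-80) the friction factor is large:
f_low = darcy_friction_laminar(70)   # ~0.914
```

At Re = 70 the friction factor is roughly 0.91, an order of magnitude above typical turbulent values, which is consistent with the pronounced frictional losses reported at these conditions.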

Keywords: frictional losses, heat transfer, laminar flow, mini-channel, numerical simulation, Reynolds number, turbulence, viscous fluids

Procedia PDF Downloads 141
226 Three Foci of Trust as Potential Mediators in the Association Between Job Insecurity and Dynamic Organizational Capability: A Quantitative, Exploratory Study

Authors: Marita Heyns

Abstract:

Job insecurity is a distressing phenomenon which has far-reaching consequences for both employees and their organizations. Previously, much attention has been given to the link between job insecurity and individual-level performance outcomes, while less is known about how subjectively perceived job insecurity might transfer beyond the individual level to affect the performance of the organization on an aggregated level. Research focusing on how employees’ fear of job loss might affect the organization’s ability to respond proactively to volatility and drastic change through applying its capabilities of sensing, seizing, and reconfiguring appears to be practically non-existent. Equally little is known about the potential underlying mechanisms through which job insecurity might affect the dynamic capabilities of an organization. This study examines how job insecurity might affect dynamic organizational capability through trust as an underlying process. More specifically, it considered the simultaneous roles of trust at an impersonal (organizational) level as well as trust at an interpersonal level (in leaders and co-workers) as potential underlying mechanisms through which job insecurity might affect the organization’s dynamic capability to respond to opportunities and imminent, drastic change. A quantitative research approach and a stratified random sampling technique enabled the collection of data among 314 managers at four different plant sites of a large South African steel manufacturing organization undergoing dramatic changes. To assess the study hypotheses, structural equation modelling was performed in Mplus to evaluate the measurement and structural models.
The chi-square test of absolute fit, as well as alternative fit indices such as the Comparative Fit Index, the Tucker-Lewis Index, the Root Mean Square Error of Approximation, and the Standardized Root Mean Square Residual, were used as indicators of model fit. Composite reliabilities were calculated to evaluate the reliability of the factors. Finally, interaction effects were tested by using PROCESS and the construction of two-sided 95% confidence intervals. The findings indicate that job insecurity had a lower-than-expected detrimental effect on evaluations of the organization’s dynamic capability, through the buffering effects of trust in the organization and in its leaders, respectively. In contrast, trust in colleagues did not seem to have any noticeable facilitative effect. The study proposes that both job insecurity and dynamic capability can be managed more effectively by also paying attention to factors that could promote trust in the organization and its leaders; some practical recommendations are given in this regard.
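The mediation logic tested with PROCESS-style bootstrap confidence intervals can be sketched in a few lines. The example below is a minimal illustration of an indirect effect a*b (X through M to Y) with a two-sided 95% percentile bootstrap CI; all data are synthetic and the variable roles (insecurity, trust, capability) are illustrative assumptions, not the study's data or its actual Mplus/PROCESS models:

```python
import numpy as np

# Minimal sketch of a bootstrap test of a simple mediation path (X -> M -> Y):
# the indirect effect a*b with a 95% percentile bootstrap CI. Synthetic data;
# the variable roles are illustrative assumptions, not the study's dataset.
rng = np.random.default_rng(0)
n = 314                                      # sample size as in the study
x = rng.normal(size=n)                       # e.g. job insecurity
m = 0.5 * x + rng.normal(size=n)             # e.g. trust in the organization
y = 0.4 * m - 0.2 * x + rng.normal(size=n)   # e.g. dynamic capability

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]               # path a: slope of M on X
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]  # path b: Y on M given X
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)              # resample cases with replacement
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
mediated = not (ci_lo <= 0.0 <= ci_hi)       # CI excluding zero -> mediation
```

A percentile CI that excludes zero is the usual evidence for an indirect (mediated) effect in this framework.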

Keywords: dynamic organizational capability, impersonal trust, interpersonal trust, job insecurity

Procedia PDF Downloads 47
225 Role of von Willebrand Factor Antigen as Non-Invasive Biomarker for the Prediction of Portal Hypertensive Gastropathy in Patients with Liver Cirrhosis

Authors: Mohamed El Horri, Amine Mouden, Reda Messaoudi, Mohamed Chekkal, Driss Benlaldj, Malika Baghdadi, Lahcene Benmahdi, Fatima Seghier

Abstract:

Background/aim: Recently, the von Willebrand factor antigen (vWF-Ag) has been identified as a new marker of portal hypertension (PH) and its complications. Few studies have addressed its role in the prediction of esophageal varices. vWF-Ag is considered a non-invasive approach that spares patients the burden, cost, drawbacks, and unpleasant, repeated endoscopic examinations. In our study, we aimed to evaluate the ability of this marker to predict another complication of portal hypertension, portal hypertensive gastropathy (PHG), which is also diagnosed by endoscopic tools. Patients and methods: This prospective study included 124 cirrhotic patients with no history of bleeding who underwent screening endoscopy for PH-related complications such as esophageal varices (EVs) and PHG. Routine biological tests were performed, as well as vWF-Ag testing by both ELFA and immunoturbidimetric techniques. The diagnostic performance of our marker was assessed using sensitivity, specificity, positive predictive value, negative predictive value, accuracy, and receiver operating characteristic curves. Results: 124 patients were enrolled in this study, with a mean age of 58 years [CI: 55 – 60 years] and a sex ratio of 1.17. Viral etiologies were found in 50% of patients. Screening endoscopy revealed the presence of PHG in 20.2% of cases, while EVs were found in 83.1% of cases. vWF-Ag levels were significantly increased in patients with PHG compared to those without: 441% [CI: 375 – 506] versus 279% [CI: 253 – 304], respectively (p < 0.0001). Using the area under the receiver operating characteristic curve (AUC), vWF-Ag was a good predictor of the presence of PHG. With a cut-off value higher than 320% and an AUC of 0.824, vWF-Ag had an 84% sensitivity, 74% specificity, 44.7% positive predictive value, 94.8% negative predictive value, and 75.8% diagnostic accuracy.
Conclusion: vWF-Ag is a good non-invasive, low-cost marker for excluding the presence of PHG in patients with liver cirrhosis. Using this marker as part of a selective screening strategy might reduce the need for endoscopic screening and the cost of managing these patients.
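The reported diagnostic metrics all follow from a single 2x2 table. As a check, the counts below are reconstructed from the abstract's own figures (124 patients, PHG prevalence 20.2%, i.e. 25 with PHG) and reproduce the published sensitivity, specificity, predictive values, and accuracy:

```python
# Sketch: recovering the reported diagnostic metrics from a 2x2 table.
# Counts reconstructed from the abstract (124 patients, 25 with PHG);
# the split above/below the 320% vWF-Ag cut-off is inferred, not published.
tp, fn = 21, 4    # PHG present: vWF-Ag above / below the cut-off
fp, tn = 26, 73   # PHG absent:  vWF-Ag above / below the cut-off

sensitivity = tp / (tp + fn)                  # ~0.84
specificity = tn / (tn + fp)                  # ~0.74
ppv = tp / (tp + fp)                          # ~0.447
npv = tn / (tn + fn)                          # ~0.948
accuracy = (tp + tn) / (tp + tn + fp + fn)    # ~0.758
```

The high negative predictive value (about 95%) is what supports the conclusion that vWF-Ag is most useful for excluding PHG.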

Keywords: von Willebrand factor, portal hypertensive gastropathy, prediction, liver cirrhosis

Procedia PDF Downloads 167
224 Flow Duration Curves and Recession Curves Connection through a Mathematical Link

Authors: Elena Carcano, Mirzi Betasolo

Abstract:

This study helps public water bureaus give reliable answers to water concession requests. Rapidly increasing water requests can be supported provided that further uses of a river course are not totally compromised and environmental features are protected as well. Strictly speaking, a water concession can be considered a continuous drawing from the source and causes a mean annual streamflow reduction. Therefore, deciding whether a water concession is appropriate would seem to be easily solved by comparing the generic demand to the mean annual streamflow value at disposal. Still, the immediate shortcoming of such a comparison is that streamflow data are available only for a few catchments and, most often, limited to specific sites. Moreover, comparing the generic water demand to the mean daily discharge is far from satisfactory, since the mean daily streamflow is greater than the water withdrawal for a long period of the year. Consequently, such a comparison is of little significance for preserving the quality and the quantity of the river. To overcome this limit, this study aims to complete the information provided by flow duration curves by introducing a link between Flow Duration Curves (FDCs) and recession curves, and to show the chronological sequence of flows with a particular focus on low-flow data. The analysis is carried out on 25 catchments located in North-Eastern Italy for which daily data are available. The results identify groups of catchments as hydrologically homogeneous, having the lower part of the FDCs (the streamflow interval between the durations of 300 and 335 days, namely Q(300) and Q(335)) smoothly reproduced by a common recession curve. In conclusion, the results are useful for providing more reliable answers to water requests, especially for those catchments which show a similar hydrological response, and can be used for a focused regionalization approach on low-flow data.
A mathematical link between flow duration curves and recession curves is herein provided, thus furnishing flow duration curve information with a temporal sequence of data. In such a way, by introducing assumptions on recession curves, the chronological sequence of low-flow data can also be attributed to FDCs, which are known to lack this information by nature.
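The low-flow quantiles Q(300) and Q(335) discussed above are simply points on the flow duration curve read at durations of 300 and 335 days. A minimal sketch, using a synthetic recession-shaped daily record rather than the Italian catchment data:

```python
import numpy as np

# Sketch: building a flow duration curve (FDC) from one year of daily
# streamflow and reading the low-flow quantiles Q(300) and Q(335), i.e.
# the discharges equalled or exceeded on 300 and 335 days of the year.
# The exponential-recession-shaped record below is illustrative.
days = np.arange(365)
q = 10.0 * np.exp(-days / 60.0) + 0.5      # synthetic daily flows

sorted_q = np.sort(q)[::-1]                # descending: rank 1 = largest flow

def q_exceeded(n_days):
    """Flow equalled or exceeded on n_days of the 365-day record."""
    return sorted_q[n_days - 1]

q300, q335 = q_exceeded(300), q_exceeded(335)   # lower tail of the FDC
```

Because exceedance duration increases as discharge decreases, Q(300) is always at least as large as Q(335); these two points bracket the FDC segment the study ties to a common recession curve.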

Keywords: chronological sequence of discharges, recession curves, streamflow duration curves, water concession

Procedia PDF Downloads 140
223 A Foodborne Cholera Outbreak in a School Caused by Eating Contaminated Fried Fish: Hoima Municipality, Uganda, February 2018

Authors: Dativa Maria Aliddeki, Fred Monje, Godfrey Nsereko, Benon Kwesiga, Daniel Kadobera, Alex Riolexus Ario

Abstract:

Background: Cholera is a severe gastrointestinal disease caused by Vibrio cholerae. It has caused several pandemics. On 26 February 2018, a suspected cholera outbreak, with one death, occurred in School X in Hoima Municipality, western Uganda. We investigated to identify the scope and mode of transmission of the outbreak and to recommend evidence-based control measures. Methods: We defined a suspected case as onset of diarrhea, vomiting, or abdominal pain in a student or staff member of School X or their family members during 14 February–10 March. A confirmed case was a suspected case with V. cholerae cultured from stool. We reviewed medical records at Hoima Hospital and searched for cases at School X. We conducted descriptive epidemiologic analysis and hypothesis-generating interviews of 15 case-patients. In a retrospective cohort study, we compared attack rates between exposed and unexposed persons. Results: We identified 15 cases among 75 students and staff of School X and their family members (attack rate=20%), with onset from 25 to 28 February. One patient died (case-fatality rate=6.6%). The epidemic curve indicated a point-source exposure. On 24 February, a student brought fried fish from her home in a fishing village, where a cholera outbreak was ongoing. Of the 21 persons who ate the fish, 57% developed cholera, compared with 5.6% of 54 persons who did not eat it (RR=10; 95% CI=3.2-33). None of the 4 persons who recooked the fish before eating, compared with 71% of 17 who did not recook it, developed cholera (RR=0.0; 95% CI (Fisher exact)=0.0-0.95). Of 12 stool specimens cultured, 6 yielded V. cholerae. Conclusion: This cholera outbreak was caused by eating fried fish, which might have been contaminated with V. cholerae in a village with an ongoing outbreak. Lack of thorough cooking of the fish might have facilitated the outbreak. We recommended thoroughly cooking fish before consumption.
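The headline risk ratio and its interval can be recomputed from the counts implied by the abstract (57% of 21 eaters = 12 ill; 5.6% of 54 non-eaters = 3 ill), using the standard Wald interval on the log risk ratio:

```python
import math

# Sketch: cohort-study risk ratio and its 95% CI, reconstructed from the
# counts implied by the abstract (12/21 ill among fish eaters, 3/54 among
# non-eaters), using the standard log-RR Wald interval.
a, n1 = 12, 21    # ill / total among those who ate the fish
c, n2 = 3, 54     # ill / total among those who did not

rr = (a / n1) / (c / n2)                          # ~10.3
se_log_rr = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)    # SE of ln(RR)
ci_lo = math.exp(math.log(rr) - 1.96 * se_log_rr)  # ~3.2
ci_hi = math.exp(math.log(rr) + 1.96 * se_log_rr)  # ~33
```

The result matches the reported RR=10 with 95% CI 3.2-33, confirming that the published interval is the usual log-scale Wald interval.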

Keywords: cholera, disease outbreak, foodborne, global health security, Uganda

Procedia PDF Downloads 158
222 Diffusion Magnetic Resonance Imaging and Magnetic Resonance Spectroscopy in Detecting Malignancy in Maxillofacial Lesions

Authors: Mohamed Khalifa Zayet, Salma Belal Eiid, Mushira Mohamed Dahaba

Abstract:

Introduction: Malignant tumors may not be easily detected by traditional radiographic techniques, especially in an anatomically complex area like the maxillofacial region. At the same time, the advent of biological functional MRI was a significant footstep in the diagnostic imaging field. Objective: The purpose of this study was to define the malignant metabolic profile of maxillofacial lesions using diffusion MRI and magnetic resonance spectroscopy as adjunctive aids for the diagnosis of such lesions. Subjects and Methods: Twenty-one patients with twenty-two lesions were enrolled in this study. Both morphological and functional MRI scans were performed: T1- and T2-weighted images and diffusion-weighted MRI with four apparent diffusion coefficient (ADC) maps were constructed for analysis, and magnetic resonance spectroscopy with qualitative and semi-quantitative analyses of the choline and lactate peaks was applied. All patients then underwent incisional or excisional biopsies within two weeks of the MR scans. Results: Statistical analysis revealed that not all the parameters had the same diagnostic performance: lactate had the highest area under the curve (AUC), 0.9, while choline had the lowest, with insignificant diagnostic value. The best cut-off value suggested for lactate was 0.125; any lesion above this value is presumed to be malignant, with 90% sensitivity and 83.3% specificity. Although the ADC maps had comparable AUCs, the statistical measure that had the final say was the interpretation of the likelihood ratios. As expected, lactate again showed the best combination of positive and negative likelihood ratios, whereas among the maps, the ADC map with 500 and 1000 b-values showed the best realistic combination of likelihood ratios, though with lower sensitivity and specificity than lactate.
Conclusion: Diffusion-weighted imaging and magnetic resonance spectroscopy are state of the art in the diagnostic arena, and they have manifested themselves as key players in the differentiation of orofacial tumors. The complete biological profile of malignancy can be decoded as low ADC values, high choline, and/or high lactate, whereas that of benign entities can be translated as high ADC values, low choline, and no lactate.
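The likelihood ratios that "had the final say" above follow directly from sensitivity and specificity. A minimal sketch using the reported lactate figures (90% sensitivity, 83.3% specificity):

```python
# Sketch: positive and negative likelihood ratios from the reported
# sensitivity and specificity of the lactate peak (90% / 83.3%).
def likelihood_ratios(sens, spec):
    lr_pos = sens / (1.0 - spec)   # how much a positive result raises the odds
    lr_neg = (1.0 - sens) / spec   # how much a negative result lowers the odds
    return lr_pos, lr_neg

lr_pos, lr_neg = likelihood_ratios(0.90, 0.833)  # ~5.4 and ~0.12
```

An LR+ above 5 and an LR- near 0.1 are conventionally read as a moderately strong rule-in and rule-out test, which is why lactate outperformed the ADC maps despite their comparable AUCs.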

Keywords: diffusion magnetic resonance imaging, magnetic resonance spectroscopy, malignant tumors, maxillofacial

Procedia PDF Downloads 142
221 Ligandless Extraction and Determination of Trace Amounts of Lead in Pomegranate, Zucchini and Lettuce Samples after Dispersive Liquid-Liquid Microextraction with Ultrasonic Bath and Optimization of Extraction Condition with RSM Design

Authors: Fariba Tadayon, Elmira Hassanlou, Hasan Bagheri, Mostafa Jafarian

Abstract:

Heavy metals are released into water, plants, soil, and food by natural and human activities. Lead is toxic to the human body and may cause serious problems even at low concentrations, since it may have several adverse effects on humans. Therefore, the determination of lead in different samples is an important procedure in studies of environmental pollution. In this work, an ultrasonic-assisted, ionic-liquid-based dispersive liquid-liquid microextraction (UA-IL-DLLME) procedure for the determination of lead in zucchini, pomegranate, and lettuce has been established and developed using a flame atomic absorption spectrometer (FAAS). For the UA-IL-DLLME procedure, 10 mL of the sample solution containing Pb2+ was adjusted to pH=5 in a glass test tube with a conical bottom; then, 120 μL of 1-hexyl-3-methylimidazolium hexafluorophosphate [C6mim][PF6] was rapidly injected into the sample solution with a microsyringe. After that, the resulting cloudy mixture was treated ultrasonically for 5 min; the two phases were then separated by centrifugation for 5 min at 3000 rpm, the IL phase was diluted with 1 mL of ethanol, and the analytes were determined by FAAS. The effects of different experimental parameters on the extraction step, including ionic liquid volume, sonication time, and pH, were studied and optimized simultaneously by using Response Surface Methodology (RSM) employing a central composite design (CCD). The optimal conditions were determined to be an ionic liquid volume of 120 μL, a sonication time of 5 min, and pH=5. The linear range of the calibration curve for the FAAS determination of lead was 0.1-4 ppm with R2=0.992. Under optimized conditions, the limit of detection (LOD) for lead was 0.062 μg.mL-1, the enrichment factor (EF) was 93, and the relative standard deviation (RSD) for lead was calculated as 2.29%. The levels of lead for pomegranate, zucchini, and lettuce were calculated as 2.88 μg.g-1, 1.54 μg.g-1, and 2.18 μg.g-1, respectively.
Therefore, this method has been successfully applied for the analysis of the content of lead in different food samples by FAAS.
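The quantification step rests on an ordinary linear calibration curve over the reported 0.1-4 ppm range. A minimal sketch of such a fit and its inversion; the standard concentrations and absorbances below are illustrative, not the study's measurements:

```python
import numpy as np

# Sketch: a FAAS-style linear calibration curve fitted by least squares,
# with concentration read back from absorbance. Synthetic standards in the
# reported 0.1-4 ppm range; the absorbance values are illustrative.
conc = np.array([0.1, 0.5, 1.0, 2.0, 4.0])           # ppm Pb
absb = np.array([0.012, 0.055, 0.108, 0.214, 0.431])  # absorbance units

slope, intercept = np.polyfit(conc, absb, 1)
r2 = np.corrcoef(conc, absb)[0, 1] ** 2               # linearity check

def to_concentration(a):
    """Invert the calibration line: C = (A - intercept) / slope."""
    return (a - intercept) / slope
```

An R2 above 0.99, as reported in the abstract, indicates that the single straight-line model is adequate across the working range.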

Keywords: dispersive liquid-liquid microextraction, central composite design, food samples, flame atomic absorption spectrometry

Procedia PDF Downloads 248
220 Calibration of 2D and 3D Optical Measuring Instruments in Industrial Environments at Submillimeter Range

Authors: Alberto Mínguez-Martínez, Jesús de Vicente y Oliva

Abstract:

Modern manufacturing processes have led to the miniaturization of systems and, as a result, parts at the micro- and nanoscale are produced. This trend seems set to become increasingly important in the near future. Besides, as a requirement of Industry 4.0, the digitalization of the models of production and processes makes it very important to ensure that the dimensions of newly manufactured parts meet the specifications of the models. It is thus possible to reduce scrap and the cost of non-conformities while ensuring the stability of production. To ensure the quality of manufactured parts, it becomes necessary to carry out traceable measurements at scales lower than one millimeter. Providing adequate traceability to the SI unit of length (the meter) for 2D and 3D measurements at this scale is a problem that does not have a unique solution in industrial environments. Researchers in the field of dimensional metrology all around the world are working on this issue. A solution for industrial environments, even if incomplete, would enable working with some traceability. At this point, we believe that the study of surfaces could provide a first approximation to a solution. Among the different options proposed in the literature, areal topography methods may be the most relevant because they can be compared to measurements performed using Coordinate Measuring Machines (CMMs). These measuring methods give (x, y, z) coordinates for each point, expressed in two different ways: either the z coordinate as a function of x, denoted z(x), for each Y-axis coordinate, or as a function of the x and y coordinates, denoted z(x, y). Among others, optical measuring instruments, mainly microscopes, are extensively used to carry out measurements at scales lower than one millimeter because they are non-destructive.
In this paper, the authors propose a calibration procedure for the scales of optical measuring instruments, particularized for a confocal microscope, using material standards that are easy to find and calibrate in metrology and quality laboratories in industrial environments. Confocal microscopes are measuring instruments capable of filtering out-of-focus reflected light so that when the light reaches the detector, it is possible to take pictures of the part of the surface that is in focus. By varying the focus and taking pictures at different Z levels, specialized software interpolates between the different planes and can reconstruct the surface geometry into a 3D model. As is easy to deduce, it is necessary to give traceability to each axis. As a complementary result, the roughness parameter Ra will be traced to the reference. Although the solution is designed for a confocal microscope, it may be used for the calibration of other optical measuring instruments by applying minor changes.
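At its simplest, giving traceability to one axis means comparing the instrument's reading against a calibrated material standard and applying the resulting scale correction. The sketch below illustrates that single-axis idea only; the certified length and reading are invented values, and the paper's full procedure (per-axis, with uncertainty budgets) is not reproduced here:

```python
# Sketch: single-axis scale correction for an optical instrument, using a
# calibrated material standard (e.g. a stage micrometer). The certified
# length and the instrument reading below are illustrative values.
certified_um = 100.000   # certified length of the standard, micrometres
measured_um = 100.250    # mean of repeated instrument readings, micrometres

scale_factor = certified_um / measured_um   # correction for future readings

def correct(reading_um):
    """Trace a raw axis reading back to the certified standard."""
    return reading_um * scale_factor
```

In practice each of the X, Y, and Z scales would get its own factor, and the standard's calibration certificate supplies the link back to the SI meter.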

Keywords: industrial environment, confocal microscope, optical measuring instrument, traceability

Procedia PDF Downloads 112
219 Shedding Light on the Black Box: Explaining Deep Neural Network Prediction of Clinical Outcome

Authors: Yijun Shao, Yan Cheng, Rashmee U. Shah, Charlene R. Weir, Bruce E. Bray, Qing Zeng-Treitler

Abstract:

Deep neural network (DNN) models are being explored in the clinical domain, following their recent success in other domains such as image recognition. For clinical adoption, outcome prediction models require explanation, but due to their multiple non-linear inner transformations, DNN models are viewed by many as a black box. In this study, we developed a deep neural network model for predicting 1-year mortality of patients who underwent major cardiovascular procedures (MCVPs), using a temporal image representation of past medical history as input. The dataset was obtained from the electronic medical data warehouse administered by the Veterans Affairs Informatics and Computing Infrastructure (VINCI). We identified 21,355 veterans who had their first MCVP in 2014. Features for prediction included demographics, diagnoses, procedures, medication orders, hospitalizations, and frailty measures extracted from clinical notes. Temporal variables were created based on the patient history data in the 2-year window prior to the index MCVP, and a temporal image was created from these variables for each individual patient. To generate the explanation for the DNN model, we defined a new concept called the impact score, which quantifies the impact of the presence or value of a clinical condition on the predicted outcome. Like the (log) odds ratios reported by a logistic regression (LR) model, impact scores are continuous variables intended to shed light on the black-box model. For comparison, a logistic regression model was fitted on the same dataset. In our cohort, about 6.8% of patients died within one year. The prediction of the DNN model achieved an area under the curve (AUC) of 78.5%, while the LR model achieved an AUC of 74.6%. A strong but not perfect correlation was found between the aggregated impact scores and the log odds ratios (Spearman’s rho = 0.74), which helped validate our explanation.
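The validation step above is a rank correlation between two per-feature vectors: aggregated impact scores and LR log odds ratios. A minimal sketch of Spearman's rho on synthetic scores (the study's real features and its rho of 0.74 are not reproduced here):

```python
import numpy as np

# Sketch: validating a DNN explanation by rank-correlating per-feature
# impact scores with logistic-regression log odds ratios (Spearman's rho).
# Synthetic scores with a noisy monotone relation; illustrative only.
rng = np.random.default_rng(1)
log_or = rng.normal(size=50)                  # LR log odds ratios per feature
impact = log_or + 0.5 * rng.normal(size=50)   # noisy DNN impact scores

def rank(v):
    """Simple 1..n ranks (continuous scores, so ties are not handled)."""
    r = np.empty(len(v))
    r[np.argsort(v)] = np.arange(1, len(v) + 1)
    return r

# Spearman's rho = Pearson correlation of the ranks
rho = np.corrcoef(rank(impact), rank(log_or))[0, 1]
```

Because rho compares rankings rather than raw magnitudes, it is a natural check that two differently scaled explanation methods order the features similarly.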

Keywords: deep neural network, temporal data, prediction, frailty, logistic regression model

Procedia PDF Downloads 127
218 Development of Adsorbents for Removal of Hydrogen Sulfide and Ammonia Using Pyrolytic Carbon Black from Waste Tires

Authors: Yang Gon Seo, Chang-Joon Kim, Dae Hyeok Kim

Abstract:

It is estimated that 1.5 billion tires are produced worldwide each year, and these will eventually end up as waste tires, representing a major potential waste and environmental problem. There has been great interest in pyrolysis as an alternative treatment process for waste tires to produce valuable oil, gas, and solid products. The oil and gas products may be used directly as a fuel or a chemical feedstock. The solid product from the pyrolysis of tires typically ranges from 30 to 45 wt% and has a carbon content of up to 90 wt%. Most notably, however, the solid has a high sulfur content, from 2 to 3 wt%, and an ash content from 8 to 15 wt% related to the additive metals. Upgrading of tire pyrolysis products has concentrated on upgrading the solid to higher-quality carbon black and to activated carbon. Hydrogen sulfide and ammonia are among the common malodorous compounds found in emissions from many sewage treatment plants and industrial plants. Removing these harmful gases from emissions is therefore of significance in both daily life and industry, because they can cause health problems for humans and detrimental effects on catalysts. In this work, pyrolytic carbon black from waste tires was used to develop adsorbents with good adsorption capacity for the removal of hydrogen sulfide and ammonia. Pyrolytic carbon blacks were prepared by pyrolysis of waste tire chips ranging from 5 to 20 mm under a nitrogen atmosphere at 600℃ for 1 hour. Pellet-type adsorbents were prepared from a mixture of carbon black, metal oxide, and sodium hydroxide or hydrochloric acid, and their adsorption capacities were estimated by using the breakthrough curve of a continuous fixed-bed adsorption column at ambient conditions. The adsorbent manufactured from a mixture of carbon black, iron(III) oxide, and sodium hydroxide showed the maximum working capacity for hydrogen sulfide.
For ammonia, the maximum working capacity was obtained by the adsorbent manufactured from a mixture of carbon black, copper(II) oxide, and hydrochloric acid.
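The working capacity quoted from a breakthrough curve is commonly the uptake integrated until the outlet concentration reaches a chosen breakthrough fraction of the inlet. The sketch below illustrates that calculation on a synthetic sigmoidal curve; the 5% breakthrough criterion and every operating value are assumptions, not the study's measurements:

```python
import numpy as np

# Sketch: working capacity of a fixed-bed adsorbent from a breakthrough
# curve, taken here as the uptake until the outlet reaches 5% of the inlet
# concentration. The sigmoidal outlet profile and all operating values are
# illustrative assumptions.
t = np.linspace(0.0, 300.0, 3001)                    # time, min
c_ratio = 1.0 / (1.0 + np.exp(-(t - 180.0) / 15.0))  # outlet C/C0

q_flow = 1.0   # gas flow rate, L/min
c0 = 50.0      # inlet H2S concentration, mg/L
mass = 10.0    # adsorbent mass, g

tb = t[np.argmax(c_ratio >= 0.05)]                   # breakthrough time, min
mask = t <= tb
removed = 1.0 - c_ratio[mask]                        # fraction adsorbed
# trapezoidal integral of (1 - C/C0) dt up to breakthrough
integral = float(np.sum(np.diff(t[mask]) * (removed[:-1] + removed[1:]) / 2.0))
capacity = q_flow * c0 * integral / mass             # mg adsorbed per g
```

Comparing this integral between adsorbent formulations is what identifies, for example, the iron(III) oxide pellet as best for hydrogen sulfide.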

Keywords: adsorbent, ammonia, pyrolytic carbon black, hydrogen sulfide, metal oxide

Procedia PDF Downloads 220
217 Fatigue Analysis and Life Estimation of the Helicopter Horizontal Tail under Cyclic Loading by Using Finite Element Method

Authors: Defne Uz

Abstract:

The horizontal tail of a helicopter is exposed to repeated oscillatory loading generated by aerodynamic and inertial loads and bending moments, depending on the operating conditions and maneuvers of the helicopter. In order to ensure that maximum stress levels do not exceed a certain fatigue limit of the material, and to prevent damage, a numerical analysis approach can be utilized through the Finite Element Method. Therefore, in this paper, fatigue analysis of a horizontal tail model is studied numerically to predict the high-cycle and low-cycle fatigue life for the defined loading. The analysis estimates the stress field at stress concentration regions, such as around fastener holes, where the maximum principal stresses are considered for each load case. Critical element identification of the main load-carrying structural components of the model with rivet holes is performed as a post-process, since critical regions with high stress values are used as an input for the fatigue life calculation. Once the maximum stress is obtained at the critical element, together with the related mean and alternating components, it is compared with the endurance limit by applying the Soderberg approach. The constant-life straight line provides the limit for several combinations of mean and alternating stresses. A life calculation based on the S-N (Stress-Number of Cycles) curve is also applied with fully reversed loading to determine the number of cycles corresponding to the oscillatory stress with zero mean. The results determine the appropriateness of the design of the model for its fatigue strength and the number of cycles that the model can withstand at the calculated stress. The effect of correctly determining the critical rivet holes is investigated by analyzing stresses at different structural parts of the model. In the case of a low life prediction, alternative design solutions are developed, and flight hours can be estimated for the fatigue-safe operation of the model.
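The Soderberg comparison described above can be written as a one-line safety-factor check: the constant-life line is sigma_a/S_e + sigma_m/S_y = 1/n. The stress values and material limits below are illustrative, not taken from the tail model:

```python
# Sketch: Soderberg check for a critical rivet-hole element. The safety
# factor n comes from sigma_a/S_e + sigma_m/S_y = 1/n. All numeric values
# are illustrative, not the study's.
def soderberg_safety_factor(sigma_a, sigma_m, s_e, s_y):
    """sigma_a: alternating stress, sigma_m: mean stress,
    s_e: endurance limit, s_y: yield strength (all in the same units)."""
    return 1.0 / (sigma_a / s_e + sigma_m / s_y)

n = soderberg_safety_factor(sigma_a=60.0, sigma_m=40.0, s_e=200.0, s_y=400.0)
# n > 1 means the stress combination lies inside the Soderberg line
```

A factor above 1 places the mean/alternating stress pair inside the constant-life line, i.e. the critical element is predicted to survive the defined loading indefinitely under this criterion.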

Keywords: fatigue analysis, finite element method, helicopter horizontal tail, life prediction, stress concentration

Procedia PDF Downloads 106
216 Enhancement of Mass Transport and Separation of Species in an Electroosmotic Flow by Distinct Oscillatory Signals

Authors: Carlos Teodoro, Oscar Bautista

Abstract:

In this work, we analyze theoretically the mass transport in a time-periodic electroosmotic flow through a parallel flat plate microchannel under different periodic functions of the applied external electric field. The microchannel connects two reservoirs having different constant concentrations of an electro-neutral solute, and the zeta potential of the microchannel walls are assumed to be uniform. The governing equations that allow determining the mass transport in the microchannel are given by the Poisson-Boltzmann equation, the modified Navier-Stokes equations, where the Debye-Hückel approximation is considered (the zeta potential is less than 25 mV), and the species conservation. These equations are nondimensionalized and four dimensionless parameters appear which control the mass transport phenomenon. In this sense, these parameters are an angular Reynolds, the Schmidt and the Péclet numbers, and an electrokinetic parameter representing the ratio of the half-height of the microchannel to the Debye length. To solve the mathematical model, first, the electric potential is determined from the Poisson-Boltzmann equation, which allows determining the electric force for various periodic functions of the external electric field expressed as Fourier series. In particular, three different excitation wave forms of the external electric field are assumed, a) sawteeth, b) step, and c) a periodic irregular functions. The periodic electric forces are substituted in the modified Navier-Stokes equations, and the hydrodynamic field is derived for each case of the electric force. From the obtained velocity fields, the species conservation equation is solved and the concentration fields are found. Numerical calculations were done by considering several binary systems where two dilute species are transported in the presence of a carrier. 
It is observed that there are different angular frequencies of the imposed external electric signal where the total mass transport of each species is the same, independently of the molecular diffusion coefficient. These frequencies are called crossover frequencies and are obtained graphically at the intersection when the total mass transport is plotted against the imposed frequency. The crossover frequencies are different depending on the Schmidt number, the electrokinetic parameter, the angular Reynolds number, and on the type of signal of the external electric field. It is demonstrated that the mass transport through the microchannel is strongly dependent on the modulation frequency of the applied particular alternating electric field. Possible extensions of the analysis to more complicated pulsation profiles are also outlined.
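Expressing the excitation waveforms as Fourier series, as described above, is straightforward for the sawtooth case. A minimal sketch of the truncated series E(t) = (2/pi) * sum_k (-1)^(k+1) sin(k w t)/k; the amplitude, frequency, and truncation are illustrative assumptions, not the study's parameters:

```python
import numpy as np

# Sketch: a sawtooth external electric field as the truncated Fourier
# series used to drive the flow. Amplitude, frequency, and the number of
# retained harmonics are illustrative.
def sawtooth_series(t, omega, n_terms=50):
    k = np.arange(1, n_terms + 1)[:, None]            # harmonic index column
    terms = ((-1.0) ** (k + 1) / k) * np.sin(k * omega * t)
    return (2.0 / np.pi) * np.sum(terms, axis=0)

t = np.linspace(0.0, 2.0 * np.pi, 1000)
e_field = sawtooth_series(t, omega=1.0)               # periodic forcing signal
```

Because the hydrodynamic problem is linear under the Debye-Hückel approximation, each harmonic of this series drives its own velocity response, and the full flow is the superposition of the harmonics.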

Keywords: electroosmotic flow, mass transport, oscillatory flow, species separation

Procedia PDF Downloads 189
215 Ramadan as a Model of Intermittent Fasting: Effects on Gut Hormones, Appetite and Body Composition in Diabetes vs. Controls

Authors: Turki J. Alharbi, Jencia Wong, Dennis Yue, Tania P. Markovic, Julie Hetherington, Ted Wu, Belinda Brooks, Radhika Seimon, Alice Gibson, Stephanie L. Silviera, Amanda Sainsbury, Tanya J. Little

Abstract:

Fasting has been practiced for centuries and is incorporated into the practices of different religions, including Islam, whose followers intermittently fast throughout the month of Ramadan. Thus, Ramadan presents a unique model of prolonged intermittent fasting (IF). Despite a growing body of evidence for cardio-metabolic and endocrine benefits of IF, detailed studies of the effects of IF on these indices in type 2 diabetes are scarce. We studied 5 subjects with type 2 diabetes (T2DM) and 7 healthy controls (C) at baseline (pre) and in the last week of Ramadan (post). Fasting circulating levels of glucose, HbA1c and lipids, as well as body composition (by DXA) and resting energy expenditure (REE), were measured. Plasma gut hormone levels and appetite responses to a mixed meal were also studied. Data are means±SEM. Ramadan decreased total fat mass (-907±92 g, p=0.001) and trunk fat (-778±190 g, p=0.014) in T2DM but not in controls, without any reductions in lean mass or REE. There was a trend towards a decline in plasma FFA in both groups. Ramadan had no effect on body weight, glycemia, blood pressure, or plasma lipids in either group. In T2DM only, the area under the curve for post-meal plasma ghrelin concentrations increased after Ramadan (pre: 6632±1737 vs. post: 9025±2518 pg/ml.min-1, p=0.045). Despite this increase in orexigenic ghrelin, subjective appetite scores were not altered by Ramadan. Meal-induced plasma concentrations of the satiety hormone pancreatic polypeptide did not change during Ramadan but were higher in T2DM compared to controls (post: C: 23486±6677 vs. T2DM: 62193±6880 pg/ml.min-1, p=0.003). In conclusion, Ramadan, as a model of IF, appears to have more favourable effects on body composition in T2DM, without adverse effects on metabolic control or subjective appetite. These data suggest that IF may be particularly beneficial in T2DM as a nutritional intervention. Larger studies are warranted.
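The post-meal area under the curve reported above is conventionally computed with the trapezoidal rule over the sampling times. The following sketch uses a hypothetical sampling schedule and illustrative concentrations, not the study's data:

```python
import numpy as np

# Hypothetical post-meal sampling times (min) and plasma hormone
# concentrations (pg/ml); values are illustrative only.
t = np.array([0.0, 15.0, 30.0, 60.0, 90.0, 120.0])
conc = np.array([55.0, 60.0, 70.0, 75.0, 72.0, 68.0])

# Trapezoidal rule: sum of interval widths times mean endpoint heights,
# giving an AUC in pg/ml x min.
auc = float(np.sum((conc[1:] + conc[:-1]) / 2.0 * np.diff(t)))
```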

Keywords: type 2 diabetes, obesity, intermittent fasting, appetite regulating hormones

Procedia PDF Downloads 284
214 Pareto Optimal Material Allocation Mechanism

Authors: Peter Egri, Tamas Kis

Abstract:

Scheduling problems have been studied in algorithmic mechanism design research from the beginning. This paper focuses on a practically important but theoretically rather neglected field: the project scheduling problem, where jobs connected by precedence constraints compete for various nonrenewable resources, such as materials. Although the centralized problem can be solved in polynomial time by applying the algorithm of Carlier and Rinnooy Kan from the Eighties, obtaining materials in a decentralized environment is usually far from optimal. It can be observed in practical production scheduling situations that project managers tend to cache the required materials as soon as possible in order to avoid later delays due to material shortages. This greedy practice usually leads both to excess stocks for some projects and materials and, simultaneously, to shortages for others. The aim of this study is to develop a model for the material allocation problem of a production plant, where a central decision maker, the inventory, should assign the resources arriving at different points in time to the jobs. Since the actual due dates are not known by the inventory, the mechanism design approach is applied, with the projects as the self-interested agents. The goal of the mechanism is to elicit the required information and allocate the available materials such that the maximal tardiness among the projects is minimized. It is assumed that, except for the due dates, the inventory is familiar with every other parameter of the problem. A further requirement, due to practical considerations, is that monetary transfers are not allowed. Therefore, a mechanism without money is sought, which excludes some widely applied solutions such as the Vickrey–Clarke–Groves scheme. In this work, a type of Serial Dictatorship Mechanism (SDM) is presented for the studied problem, including a polynomial-time algorithm for computing the material allocation. 
The resulting mechanism is both truthful and Pareto optimal. Thus, randomizing over the possible priority orderings of the projects yields a universally truthful and Pareto optimal randomized mechanism. However, it is shown that, in contrast to problems like the many-to-many matching market, not every Pareto optimal solution can be generated with an SDM. In addition, no performance guarantee can be given compared to the optimal solution; therefore, this approximation characteristic is investigated in an experimental study. All in all, the current work studies a practically relevant scheduling problem and presents a novel truthful material allocation mechanism which eliminates the potential benefit of the greedy behavior that negatively influences the outcome. The resulting allocation is also shown to be Pareto optimal, the most widely used criterion describing a necessary condition for a reasonable solution.
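A toy sketch of the serial-dictatorship idea, with material units identified by their arrival times and projects ranked by a fixed priority order. This is a drastic simplification of the mechanism described in the abstract (no due-date reports, no tardiness objective), shown only to illustrate how a priority ordering induces an allocation:

```python
def serial_dictatorship(arrivals, demands, priority):
    """Toy serial-dictatorship allocation: material units arriving at the
    times in `arrivals` are claimed in priority order; each project takes
    the earliest units still available, up to its demand.
    Returns {project: list of allocated arrival times}."""
    available = sorted(arrivals)
    allocation = {}
    for p in priority:
        allocation[p] = available[:demands[p]]   # dictator takes earliest units
        available = available[demands[p]:]       # remainder passes down the order
    return allocation

alloc = serial_dictatorship(
    arrivals=[1, 2, 5, 7], demands={"A": 2, "B": 2}, priority=["A", "B"])
```

Randomizing the `priority` list over all orderings corresponds to the randomized variant mentioned in the abstract.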

Keywords: material allocation, mechanism without money, polynomial-time mechanism, project scheduling

Procedia PDF Downloads 293
213 In-House Fatty Meal Cholescintigraphy as a Screening Tool in Patients Presenting with Dyspepsia

Authors: Avani Jain, S. Shelley, M. Indirani, Shilpa Kalal, Jaykanth Amalachandran

Abstract:

Aim: To evaluate the prevalence of gall bladder dysfunction in patients with dyspepsia using in-house fatty meal cholescintigraphy. Materials & Methods: This is a prospective cohort study. 59 healthy volunteers with no dyspeptic complaints and negative ultrasound and endoscopy were recruited into the study. 61 patients with complaints of dyspepsia lasting more than 6 months were included. All of them underwent 99mTc-Mebrofenin fatty meal cholescintigraphy following a standard protocol. Dynamic acquisitions were acquired for 120 minutes, with an in-house fatty meal given at the 45th minute. Gall bladder emptying kinetics were determined from gall bladder ejection fractions (GBEF) calculated at 30, 45 and 60 minutes (30 min, 45 min & 60 min). The fatty meal was standardized in the volunteers. Receiver operating characteristic (ROC) analysis was used to assess the diagnostic accuracy of the 3 time points (30 min, 45 min & 60 min) used for measuring gall bladder emptying. On the basis of cutoffs derived from the volunteers, the patients were assessed for gall bladder dysfunction. Results: In volunteers, the GBEF at 30 min was 74.42±8.26% (mean±SD), at 45 min was 82.61±6.5%, and at 60 min was 89.37±4.48%, compared to patients, where it was 33.73±22.87% at 30 min, 43.03±26.97% at 45 min and 51.85±29.60% at 60 min. The lower limit of GBEF in volunteers was 60% at 30 min, 69% at 45 min and 81% at 60 min. ROC analysis showed that the area under the curve was largest for the 30 min GBEF (0.952; 95% CI = 0.914-0.989) and that all 3 measures were statistically significant (p < 0.005). The majority of volunteers had 74% gall bladder emptying by 30 minutes; hence, 30 minutes was taken as the optimal cutoff time to assess gall bladder contraction. GBEF > 60% at 30 min post fatty meal was considered normal, and GBEF < 60% indicative of gall bladder dysfunction. 
In patients, various causes of dyspepsia were identified: GB dysfunction (63.93%), peptic ulcer (8.19%), gastroesophageal reflux disease (8.19%) and gastritis (4.91%). In 18.03% of cases, GB dysfunction coexisted with other gastrointestinal conditions. The diagnosis of functional dyspepsia was made in 14.75% of cases. Conclusions: Gall bladder dysfunction contributes significantly to the causation of dyspepsia and can coexist with various other gastrointestinal diseases. The fatty meal was well tolerated and devoid of any side effects. Many patients who are labeled as functional dyspeptics could actually have gall bladder dysfunction. Hence, as an adjunct to ultrasound and endoscopy, fatty meal cholescintigraphy can also be used as a screening modality in the characterization of dyspepsia.
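GBEF is conventionally computed from the net (background-corrected) gall bladder counts before and after the fatty meal. A minimal sketch applying the study's 60% cutoff at 30 min; the count values are illustrative, not patient data:

```python
def gbef(counts_baseline, counts_t, background=0.0):
    """Gall bladder ejection fraction (%) from background-corrected counts:
    GBEF = 100 * (net_baseline - net_t) / net_baseline."""
    net0 = counts_baseline - background
    nett = counts_t - background
    return 100.0 * (net0 - nett) / net0

# Study cutoff: GBEF >= 60% at 30 min post fatty meal is considered normal.
ef = gbef(12000, 3000)
normal = ef >= 60.0
```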

Keywords: in-house fatty meal, cholescintigraphy, dyspepsia, gall bladder ejection fraction, functional dyspepsia

Procedia PDF Downloads 474
212 Experimental Investigation of the Thermal Conductivity of Neodymium and Samarium Melts by a Laser Flash Technique

Authors: Igor V. Savchenko, Dmitrii A. Samoshkin

Abstract:

The active study of the properties of lanthanides began in the late 1950s, when methods for their purification were developed and metals with a relatively low content of impurities were obtained. Nevertheless, to date, many properties of the rare earth metals (REM) have not been experimentally investigated or have been insufficiently studied. Currently, the thermal conductivity and thermal diffusivity of lanthanides have been studied most thoroughly in the low-temperature region and at moderate temperatures (near 293 K). In the high-temperature region corresponding to the solid phase, data on the thermophysical characteristics of the REM are fragmentary and in some cases contradictory. Analysis of the literature showed that the data on the thermal conductivity and thermal diffusivity of light REM in the liquid state are few in number and scarcely informative (often only one point corresponds to the liquid-state region), that they are contradictory (the character of the thermal conductivity change with temperature is not reproduced), and that the measurement results diverge by more than the combined total errors. Our experimental results therefore fill this gap and clarify the existing information on the heat transfer coefficients of neodymium and samarium over a wide temperature range from the melting point up to 1770 K. The thermal conductivity of the investigated metallic melts was measured by the laser flash technique on an automated experimental setup LFA-427. A neodymium sample of brand NM-1 (99.21 wt% purity) and a samarium sample of brand SmM-1 (99.94 wt% purity) were cut from metal ingots and then annealed in a vacuum (1 mPa) at a temperature of 1400 K for 3 hours. Specially designed tantalum measuring cells were used for the experiments. Sealing of the cell with the sample inside was carried out by argon-arc welding in the protective atmosphere of a glovebox. 
The glovebox was filled with argon of 99.998 vol.% purity; the argon was additionally purified by continuously passing it over titanium sponge heated to 900–1000 K. The general systematic error in determining the thermal conductivity of the investigated metallic melts was 2–5%. Approximation dependences and reference tables of the thermal conductivity and thermal diffusivity coefficients were developed. New reliable experimental data on the transport properties of the REM and their changes at phase transitions can serve as a scientific basis for optimizing the industrial processes of production and use of these materials, and are also of interest for the theory of thermophysical properties of substances and the physics of metals, liquids and phase transformations.
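In the laser flash technique, thermal diffusivity is commonly estimated from the sample thickness and the half-rise time of the rear-face temperature via Parker's adiabatic relation, and conductivity then follows from k = α·ρ·c_p. The numbers below are illustrative assumptions, not the measured data of this study:

```python
import math

def flash_diffusivity(thickness_m, t_half_s):
    """Parker's adiabatic relation for the laser flash method:
    alpha = 0.1388 * L^2 / t_half  (m^2/s), with L the sample thickness
    and t_half the time for the rear face to reach half its maximum rise."""
    return 0.1388 * thickness_m ** 2 / t_half_s

def thermal_conductivity(alpha, density, heat_capacity):
    """k = alpha * rho * c_p  (W/(m*K))."""
    return alpha * density * heat_capacity

# Illustrative values: a 2 mm sample with a 50 ms half-rise time.
alpha = flash_diffusivity(2e-3, 0.05)
k = thermal_conductivity(alpha, 7000.0, 190.0)
```

In practice, heat-loss and finite-pulse corrections are applied on top of the adiabatic formula; the sketch shows only the baseline relation.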

Keywords: high temperatures, laser flash technique, liquid state, metallic melt, rare earth metals, thermal conductivity, thermal diffusivity

Procedia PDF Downloads 163
211 Properties of Magnesium-Based Hydrogen Storage Alloy Added with Palladium and Titanium Hydride

Authors: Jun Ying Lin, Tzu Hsiang Yen, Cha'o Kuang Chen

Abstract:

Hydrogen storage alloys, which store hydrogen by physical and chemical absorption, are widely believed to hold great potential. However, their use is limited by high operating temperatures. Adding transition elements has been found to improve the properties of hydrogen storage alloys. In this research, outstanding improvements in kinetic and thermal properties are obtained by the addition of palladium and titanium hydride to a magnesium-based hydrogen storage alloy. Magnesium-based alloy is the main material, into which TiH2 and Pd are added separately. Following that, the materials are milled in a planetary ball mill at 650 rpm. TGA/DSC and PCT measurements determine the capacity, time and temperature of absorption/desorption. Additionally, SEM and XRD are used to analyze the structures and components of the materials. It is clearly shown that Pd is beneficial to the kinetic properties. 2MgH2-0.1Pd has the highest capacity of all the alloys listed, approximately 5.5 wt%. Secondly, no new Ti-related compounds are found by XRD analysis. Thus, TiH2, acting as a catalyst, enables 2MgH2-TiH2 and 2MgH2-TiH2-0.1Pd to absorb hydrogen efficiently at low temperature. 2MgH2-TiH2 can reach roughly 3.0 wt% in 82.4 minutes at 50°C and in 8 minutes at 100°C, while 2MgH2-TiH2-0.1Pd can reach 2.0 wt% in 400 minutes at 50°C and in 48 minutes at 100°C. The lowest temperature of 2MgH2-0.1Pd and 2MgH2-TiH2 is similar (320°C), whereas that of 2MgH2-TiH2-0.1Pd is 20°C lower. From XRD, it can be observed that PdTi2 and Pd3Ti are produced by mechanical alloying when Pd and TiH2 are added to MgH2. Due to the synergistic effects between Pd and TiH2, 2MgH2-TiH2-0.1Pd has the lowest dehydrogenation temperature. Furthermore, the Pressure-Composition-Temperature (PCT) curve of 2MgH2-TiH2-0.1Pd is measured at different temperatures (370°C, 350°C, 320°C and 300°C), and the plateau pressures are obtained from these PCT curves. 
From the plateau pressures at different temperatures, the enthalpy and entropy in the Van’t Hoff equation can be solved. For 2MgH2-TiH2-0.1Pd, the enthalpy is 74.9 kJ/mol and the entropy is 122.9 J/(mol·K). Activation means that the hydrogen storage alloy undergoes repeated absorption/desorption cycles; it plays an important role in absorption/desorption performance. Activation shortens the absorption/desorption time because of the increase in surface area. From SEM, it is clear that the grains become smaller and the surface rougher after activation.
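The enthalpy and entropy extraction from plateau pressures follows the Van't Hoff relation ln P = -ΔH/(RT) + ΔS/R, i.e., a linear fit of ln P against 1/T. The sketch below regenerates synthetic plateau data from the reported ΔH and ΔS and recovers them by regression (the temperatures are the abstract's PCT temperatures; the pressures are synthetic, not measurements):

```python
import numpy as np

R = 8.314                    # gas constant, J/(mol*K)
dH, dS = 74.9e3, 122.9       # reported enthalpy (J/mol) and entropy (J/(mol*K))

# PCT temperatures from the abstract: 300-370 °C, converted to kelvin.
T = np.array([573.15, 593.15, 623.15, 643.15])

# Synthetic Van't Hoff data: ln(P/P0) = -dH/(R*T) + dS/R.
lnP = -dH / (R * T) + dS / R

# Linear fit of ln P vs 1/T: slope = -dH/R, intercept = dS/R.
slope, intercept = np.polyfit(1.0 / T, lnP, 1)
dH_fit = -slope * R
dS_fit = intercept * R
```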

Keywords: hydrogen storage materials, magnesium hydride, absorption/desorption performance, plateau pressure

Procedia PDF Downloads 223
209 Effects of Nutrient Source and Drying Methods on Physical and Phytochemical Criteria of Pot Marigold (Calendula officinalis L.) Flowers

Authors: Leila Tabrizi, Farnaz Dezhaboun

Abstract:

In order to study the effect of plant nutrient source and different drying methods on the physical and phytochemical characteristics of pot marigold (Calendula officinalis L., Asteraceae) flowers, a factorial experiment was conducted based on a completely randomized design with three replications in a research laboratory of the University of Tehran in 2010. Different nutrient sources (vermicompost, municipal waste compost, cattle manure, mushroom compost and control), which were applied in a field experiment for flower production, and different drying methods, including microwave (300, 600 and 900 W), oven (60, 70 and 80°C) and natural-shade drying at room temperature, were tested. Criteria such as drying kinetics, antioxidant activity, total flavonoid content, total phenolic compounds and total carotenoids of the flowers were evaluated. Results indicated that organic inputs as nutrient source had no significant effects on the quality criteria of pot marigold except for total flavonoid content, while drying methods significantly affected the phytochemical criteria. Application of microwave at 300, 600 and 900 W resulted in the highest amounts of total flavonoid content, total phenolic compounds and antioxidant activity, respectively, while oven drying gave the lowest amounts of the phytochemical criteria. Also, the interaction of nutrient source and drying method significantly affected antioxidant activity: the highest antioxidant activity was obtained with the combination of vermicompost and microwave at 900 W, while vermicompost combined with oven drying at 60°C gave the lowest antioxidant activity. Regarding the drying trend, microwave drying showed a faster drying rate than oven and natural-shade drying; increasing the microwave power or oven temperature shortened the drying time and steepened the moisture-reduction curve.
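Drying kinetics of this kind are often summarized with a first-order (Lewis) thin-layer model, MR(t) = exp(-kt), where a larger rate constant k corresponds to the steeper moisture-reduction curves seen at higher microwave power. The sketch below fits k to synthetic, illustrative data, not the experimental measurements:

```python
import numpy as np

# Synthetic moisture-ratio data following the Lewis model MR = exp(-k*t)
# with an assumed k = 0.08 1/min (illustrative only).
t = np.array([0.0, 5.0, 10.0, 20.0, 40.0])   # drying time, min
mr = np.exp(-0.08 * t)                        # moisture ratio, dimensionless

# Log-linearization: ln(MR) = -k*t, so a degree-1 fit recovers k.
k = -np.polyfit(t, np.log(mr), 1)[0]
half_time = np.log(2) / k                     # time for MR to fall to 0.5
```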

Keywords: drying kinetic, medicinal plant, organic fertilizer, phytochemical criteria

Procedia PDF Downloads 292
209 Suitable Site Selection of Small Dams Using Geo-Spatial Technique: A Case Study of Dadu Tehsil, Sindh

Authors: Zahid Khalil, Saad Ul Haque, Asif Khan

Abstract:

Decision making about identifying suitable sites for any project by considering different parameters is difficult. Using GIS and Multi-Criteria Analysis (MCA) can make it easier for such projects. This technology has proved to be an efficient and adequate means of acquiring the desired information. In this study, GIS and MCA were employed to identify suitable sites for small dams in Dadu Tehsil, Sindh. GIS software was used to create all the spatial parameters for the analysis. The parameters derived are slope, drainage density, rainfall, land use/land cover, soil groups, Curve Number (CN) and runoff index, with a spatial resolution of 30 m. The data used for deriving the above layers include the 30-meter resolution SRTM DEM, Landsat 8 imagery, rainfall from the National Centers for Environmental Prediction (NCEP) and soil data from the Harmonized World Soil Database (HWSD). The land use/land cover map is derived from Landsat 8 using supervised classification. Slope, drainage network and watershed are delineated by terrain processing of the DEM. The Soil Conservation Service (SCS) method is implemented to estimate the surface runoff from the rainfall. Prior to this, the SCS-CN grid is developed by integrating the soil and land use/land cover rasters. These layers, with some technical and ecological constraints, are assigned weights on the basis of suitability criteria. The pairwise comparison method, also known as the Analytical Hierarchy Process (AHP), is used as the MCA technique for assigning weights to each decision element. All the parameters and groups of parameters are integrated using weighted overlay in a GIS environment to produce a map of suitable sites for the dams. The resultant layer is then classified into four classes: best suitable, suitable, moderate and less suitable. This study demonstrates decision-making support for suitable-site analysis for small dams using geospatial data with a minimal amount of ground data. 
These suitability maps can help water resource management organizations determine feasible rainwater harvesting (RWH) structures.
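The SCS curve number runoff estimate used above can be sketched as follows, using the standard SCS-CN relations in millimetres; the rainfall depth and CN value are illustrative assumptions:

```python
def scs_runoff(p_mm, cn):
    """SCS Curve Number direct runoff depth Q (mm) for storm rainfall P (mm):
        S  = 25400/CN - 254          (potential maximum retention, mm)
        Ia = 0.2 * S                 (initial abstraction)
        Q  = (P - Ia)^2 / (P + 0.8*S)   for P > Ia, else 0."""
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

# Illustrative: 50 mm storm on a catchment cell with CN = 80.
q = scs_runoff(50.0, 80)
```

In the GIS workflow, this relation is applied cell by cell to the rainfall and SCS-CN rasters to produce the runoff index layer.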

Keywords: remote sensing, GIS, AHP, RWH

Procedia PDF Downloads 356
208 Physicochemical Properties of Pea Protein Isolate (PPI)-Starch and Soy Protein Isolate (SPI)-Starch Nanocomplexes Treated by Ultrasound at Different pH Values

Authors: Gulcin Yildiz, Hao Feng

Abstract:

Soybean proteins are the most widely used and researched proteins in the food industry. Due to soy allergies among consumers, however, alternative legume proteins having similar functional properties have been studied in recent years. These alternative proteins are also expected to have a price advantage over soy proteins. One such protein that has shown good potential for food applications is pea protein. Besides its favorable functional properties, pea protein also contains fewer anti-nutritional substances than soy protein. However, a comparison of the physicochemical properties of pea protein isolate (PPI)-starch nanocomplexes and soy protein isolate (SPI)-starch nanocomplexes treated by ultrasound has not been well documented. This study was undertaken to investigate the effects of ultrasound treatment on the physicochemical properties of PPI-starch and SPI-starch nanocomplexes. Pea protein isolate (85% pea protein) provided by Roquette (Geneva, IL, USA) and soy protein isolate (SPI, Pro-Fam® 955) obtained from the Archer Daniels Midland Company were adjusted to different pH levels (2-12) and treated with 5 minutes of ultrasonication (100% amplitude) to form complexes with starch. The soluble protein content was determined by the Bradford method using BSA as the standard. The turbidity of the samples was measured using a spectrophotometer (Lambda 1050 UV/VIS/NIR Spectrometer, PerkinElmer, Waltham, MA, USA). The volume-weighted mean diameters (D4,3) of the soluble proteins were determined by dynamic light scattering (DLS). The emulsifying properties of the proteins were evaluated by the emulsion stability index (ESI) and emulsion activity index (EAI). Both the soy and pea protein isolates showed a U-shaped solubility curve as a function of pH, with high solubility above the isoelectric point and low solubility below it. Increasing the pH from 2 to 12 resulted in increased solubility for both the SPI- and PPI-starch complexes. 
The pea nanocomplexes showed greater solubility than the soy ones. The SPI-starch nanocomplexes showed better emulsifying properties as determined by the emulsion stability index (ESI) and emulsion activity index (EAI), due to SPI’s high solubility and high protein content. The PPI had similar or better emulsifying properties at certain pH values than the SPI. The ultrasound treatment significantly decreased the particle sizes of both kinds of nanocomplex. For all pH levels with both proteins, the droplet sizes were found to be lower than 300 nm. The present study clearly demonstrated that applying ultrasonication under different pH conditions significantly improved the solubility and emulsifying properties of the SPI and PPI. The PPI exhibited better solubility and emulsifying properties than the SPI at certain pH levels.
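EAI and ESI are commonly obtained from the turbidimetric method of Pearce and Kinsella. The following is a hedged sketch of those standard formulas with illustrative absorbance readings; it is not necessarily the exact protocol used in this study:

```python
def emulsion_indices(a0, a10, c_g_per_ml, phi=0.25, dilution=100, dt_min=10):
    """Turbidimetric emulsifying indices (Pearce & Kinsella style):
        EAI (m^2/g) = 2 * 2.303 * A0 * DF / (c * phi * 10000)
        ESI (min)   = A0 * dt / (A0 - A10)
    where A0/A10 are absorbances at 500 nm at 0 and dt_min minutes,
    c is protein concentration (g/ml), phi the oil volume fraction and
    DF the dilution factor. All input values below are illustrative."""
    eai = 2.0 * 2.303 * a0 * dilution / (c_g_per_ml * phi * 10000.0)
    esi = a0 * dt_min / (a0 - a10)
    return eai, esi

eai, esi = emulsion_indices(a0=0.50, a10=0.40, c_g_per_ml=0.001)
```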

Keywords: emulsifying properties, pea protein isolate, soy protein isolate, ultrasonication

Procedia PDF Downloads 275
207 Parametric Approach for Reserve Liability Estimate in Mortgage Insurance

Authors: Rajinder Singh, Ram Valluru

Abstract:

The Chain Ladder (CL), Expected Loss Ratio (ELR) and Bornhuetter-Ferguson (BF) methods, in addition to more complex transition-rate modeling, are commonly used actuarial reserving methods in general insurance. There is limited published research about their relative performance in the context of Mortgage Insurance (MI). In our experience, these traditional techniques pose unique challenges and do not provide stable claim estimates for medium- to longer-term liabilities. The relative strengths and weaknesses among the various alternative approaches revolve around: stability of the recent loss development pattern, sufficiency and reliability of loss development data, and agreement/disagreement between reported losses to date and the ultimate loss estimate. The CL method results in volatile reserve estimates, especially for accident periods with little development experience. The ELR method breaks down especially when ultimate loss ratios are not stable and predictable. While the BF method provides a good tradeoff between the loss development approach (CL) and ELR, it generates claim development and ultimate reserves that are disconnected from the ever-to-date (ETD) development experience for some accident years that have more development experience. Further, BF is based on a subjective a priori assumption. The fundamental shortcoming of these methods is their inability to model exogenous factors, like the economy, which impact various cohorts at the same chronological time but at staggered points along their lifetime development. This paper proposes an alternative approach of parametrizing the loss development curve and using logistic regression to generate the ultimate loss estimate for each homogeneous group (accident year or delinquency period). 
The methodology was tested on an actual MI claim development dataset in which various cohorts followed a sigmoidal trend, but levels varied substantially depending upon the economic and operational conditions during a development period spanning many years. The proposed approach can indirectly incorporate such exogenous factors and produces more stable loss forecasts for reserving purposes compared to the traditional CL and BF methods.
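A minimal sketch of parametrizing the loss development curve with a logistic (sigmoidal) function: with the ultimate loss assumed known here for simplicity, a logit transform reduces the fit to linear regression. The data are synthetic, and the authors' actual specification may differ:

```python
import numpy as np

# Logistic development curve: cumulative paid L(t) = ULT / (1 + exp(-(t - t0)/s)),
# with inflection at t0 (months) and spread s. Synthetic cohort data:
ult, t0, s = 100.0, 18.0, 6.0
t = np.array([6.0, 12.0, 18.0, 24.0, 30.0, 36.0])   # development months
paid = ult / (1.0 + np.exp(-(t - t0) / s))

# Logit transform: log(paid / (ULT - paid)) = t/s - t0/s is linear in t,
# so a degree-1 fit recovers the curve parameters.
y = np.log(paid / (ult - paid))
slope, intercept = np.polyfit(t, y, 1)
s_fit = 1.0 / slope
t0_fit = -intercept * s_fit
```

In practice, ULT is itself unknown and is estimated jointly (e.g., by nonlinear least squares or the paper's logistic regression), with exogenous covariates entering through the curve parameters.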

Keywords: actuarial loss reserving techniques, logistic regression, parametric function, volatility

Procedia PDF Downloads 92
206 Validation of the Arabic Version of the Positive and Negative Syndrome Scale (PANSS)

Authors: Arij Yehya, Suhaila Ghuloum, Abdlmoneim Abdulhakam, Azza Al-Mujalli, Mark Opler, Samer Hammoudeh, Yahya Hani, Sundus Mari, Reem Elsherbiny, Ziyad Mahfoud, Hassen Al-Amin

Abstract:

Introduction: The Positive and Negative Syndrome Scale (PANSS) is a valid instrument developed by Kay and colleagues to assess symptoms of patients with schizophrenia. It consists of 30 items that factor the symptoms into three subscales: positive, negative and general psychopathology. This scale has been translated and validated in several languages. Objective: This study aims to determine the validity and psychometric properties of the Arabic version of the PANSS. Methods: A standardized translation and cultural adaptation method was adopted. Patients diagnosed with schizophrenia (n=98), according to a psychiatrist’s diagnosis based on DSM-IV criteria, were recruited from the Psychiatry Department at Rumailah Hospital, Qatar. A first rater confirmed the diagnosis using the Arabic version of the Mini International Neuropsychiatric Interview (MINI 6). A second, independent rater administered the Arabic version of the PANSS. Also, a control group (n=101) with no history of psychiatric disorder was recruited from the family and friends of the patients and from primary health care centers in Qatar. Results: There were more males than females in our sample of patients with schizophrenia (68.9% and 31.6%, respectively). In the control group, by contrast, females outnumbered males (58.4% and 41.6%, respectively). The scale had good internal consistency, with Cronbach’s alpha 0.91. There was a significant difference between the scores on the three subscales of the PANSS. Patients with schizophrenia scored significantly higher (p<.0001) than the control subjects on the subscales for positive symptoms, 20.01 (SD=7.21) vs. 7.30 (SD=1.38), negative symptoms, 18.89 (SD=8.88) vs. 7.37 (SD=2.38), and general psychopathology, 34.41 (SD=11.56) vs. 16.93 (SD=3.93), respectively. Factor analysis and ROC curve analysis were carried out to further test the psychometrics of the scale. 
Conclusions: The Arabic version of the PANSS is a reliable and valid tool to assess both positive and negative symptoms of patients with schizophrenia in a balanced manner. In addition to providing the Arab population with a standardized tool to monitor symptoms of schizophrenia, this version provides a gateway for comparing the prevalence of positive and negative symptoms in the Arab world with that reported elsewhere.
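The internal consistency statistic reported above, Cronbach's alpha, can be computed directly from an item score matrix. The toy data below are illustrative, not PANSS scores:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()       # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return k / (k - 1) * (1.0 - item_var / total_var)

# Toy 4-subject, 3-item example:
scores = [[2, 3, 3], [4, 4, 5], [1, 2, 2], [3, 3, 4]]
alpha = cronbach_alpha(scores)
```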

Keywords: Arabic version, assessment, diagnosis, schizophrenia, validation

Procedia PDF Downloads 599
205 Influence of Bottom Ash on the Geotechnical Parameters of Clayey Soil

Authors: Tanios Saliba, Jad Wakim, Elie Awwad

Abstract:

Clayey soils exhibit undesirable behavior in civil engineering projects: poor bearing capacity, shrinkage, cracking, etc. On the other hand, the increasing production of bottom ash and its disposal in an eco-friendly manner is a matter of concern. Soil stabilization using bottom ash is a new technique in geo-environmental engineering. It can be used wherever a soft clayey soil is encountered in foundations or road subgrades, instead of older techniques such as cement-soil mixing. This new technology can be used for road embankments and clayey foundation platforms (shallow or deep foundations) instead of replacing the poor soil or using older techniques that are not eco-friendly. Moreover, applying this new technique in geotechnical engineering projects can reduce the bottom ash disposal problem, which is growing day by day. The research consists of mixing clayey soil with different percentages of bottom ash at different values of water content and evaluating the mechanical properties of every mix: the percentages of bottom ash are 10%, 20%, 30%, 40% and 50%, with water contents of 25%, 35% and 45% of the mix’s weight. Before testing the different mixes, the clayey soil’s properties were determined: Atterberg limits, cohesion and friction angle, and particle size distribution. In order to evaluate the mechanical properties and behavior of every mix, different tests were conducted: - Direct shear tests, in order to determine the cohesion and internal friction angle of every mix. - Unconfined compressive strength tests (stress-strain curve), to determine the mix’s elastic modulus and compressive strength. Soil samples were prepared in accordance with the ASTM standards and tested at different times, in order to emphasize the influence of the curing period on the variation of the mix’s mechanical properties and characteristics. 
The results obtained so far are very promising: the mix’s cohesion and friction angle vary as functions of the bottom ash percentage, water content and curing period. The cohesion increases markedly before decreasing at long curing periods (mix cohesion values remain larger than the intact soil’s cohesion), while the internal friction angle keeps increasing even at 28 days of curing (the longest curing period tested). This yields better soil behavior: fewer cracks and better bearing capacity.
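The cohesion and friction angle from direct shear tests follow from fitting the Mohr-Coulomb envelope τ = c + σn·tan(φ) to the failure points, i.e., a linear regression of shear strength against normal stress. The stresses below are illustrative, not the study's measurements:

```python
import numpy as np

# Illustrative direct shear results: normal stress and shear stress at
# failure (kPa) for three test stages of one mix.
sigma_n = np.array([50.0, 100.0, 200.0])
tau = np.array([45.0, 74.0, 132.0])

# Mohr-Coulomb envelope tau = c + sigma_n * tan(phi):
# intercept = cohesion c (kPa), slope = tan(phi).
slope, c = np.polyfit(sigma_n, tau, 1)
phi_deg = np.degrees(np.arctan(slope))
```

Repeating the fit per bottom ash percentage, water content and curing time gives the c and φ trends discussed above.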

Keywords: bottom ash, clayey soil, mechanical properties, tests

Procedia PDF Downloads 146
204 Experimental and Analytical Studies for the Effect of Thickness and Axial Load on Load-Bearing Capacity of Fire-Damaged Concrete Walls

Authors: Yeo Kyeong Lee, Ji Yeon Kang, Eun Mi Ryu, Hee Sun Kim, Yeong Soo Shin

Abstract:

The objective of this paper is to investigate the effects of thickness and of axial loading during a fire test on the load-bearing capacity of a fire-damaged normal-strength concrete wall. Both factors affect the temperature distributions in the concrete members, which are mainly obtained through experiments. Toward this goal, three wall specimens of different thicknesses were heated for 2 h according to the ISO-standard heating curve, and the temperature distributions through the thicknesses were measured using thermocouples. In addition, two wall specimens were heated for 2 h while simultaneously being subjected to a constant axial load at their top sections. The test results show that the temperature distribution during the fire test depends on the wall thickness and on the axial load. After the fire tests, the specimens were cured for one month and then load-tested. The heated specimens were compared with three unheated specimens to investigate the residual load-bearing capacities. The fire-damaged walls showed only a minor difference in load-bearing capacity with respect to the axial loading, whereas a significant difference was evident with respect to the wall thickness. To validate the experimental results, finite element models were generated in which the material properties obtained from the experiments at elevated temperatures were applied, and the analytical results show sound agreement with the experimental results. The analytical method, validated against the experimental results, was then applied to model fire-damaged walls 2,800 mm high (a typical story height of residential buildings in Korea), considering the buckling effect. The structural analysis models were generated from the deformed shapes obtained after the thermal analysis. The load-bearing capacity of the fire-damaged walls with pin supports at both ends does not depend significantly on the wall thickness, owing to the restraint of the pinned ends. 
The difference in load-bearing capacity of the fire-damaged walls with respect to the axial load applied during the fire is within approximately 5%.
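A first-order check of the buckling effect for a pinned-pinned wall strip uses the Euler critical load P_cr = π²EI/L² with I = bt³/12. The elastic modulus, thickness and strip width below are assumptions for illustration, not the tested walls' properties (and a fire-damaged wall would use degraded stiffness):

```python
import math

def euler_critical_load(e_mpa, thickness_mm, height_mm, width_mm=1000.0):
    """Euler buckling load (kN) of a pinned-pinned wall strip:
    P_cr = pi^2 * E * I / L^2, with second moment I = b * t^3 / 12."""
    i = width_mm * thickness_mm ** 3 / 12.0            # mm^4
    return math.pi ** 2 * e_mpa * i / height_mm ** 2 / 1000.0  # N -> kN

# Illustrative: E = 25 GPa concrete, 150 mm thick, 2,800 mm story height.
p_cr = euler_critical_load(e_mpa=25000.0, thickness_mm=150.0, height_mm=2800.0)
```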

Keywords: normal-strength concrete wall, wall thickness, axial-load ratio, slenderness ratio, fire test, residual strength, finite element analysis

Procedia PDF Downloads 189
203 Rainwater Harvesting and Management of Ground Water (Case Study Weather Modification Project in Iran)

Authors: Samaneh Poormohammadi, Farid Golkar, Vahideh Khatibi Sarabi

Abstract:

Climate change and consecutive droughts have increased the importance of rainwater harvesting methods. One such method, which amounts to managing atmospheric water resources, is the use of weather modification technologies. Weather modification (also known as weather control) is the act of intentionally manipulating or altering the weather. Its most common form is cloud seeding, which increases rain or snow, usually for the purpose of increasing the local water supply. Cloud seeding operations have been carried out in central Iran since 1999 with the aim of harvesting rainwater and reducing the effects of drought. In this research, we analyze the results of cloud seeding operations in the Simindasht plain in northern Iran. Rainwater harvesting with the help of cloud seeding technology has been evaluated through its effects on surface water and underground water. For this purpose, two different methods have been used to estimate runoff. The first is the US Soil Conservation Service (SCS) curve number method; the second is known as the reasoning method. To determine the infiltration rate of underground water, the water-balance reports of the country's comprehensive water plan have been used. In this regard, the study areas located in the target area of each province were extracted by drawing maps of the infiltration coefficients of each area in GIS software, with the infiltration coefficients taken from the balance reports of the country's comprehensive water plan. Then, based on the area of each study area, the weighted average of the infiltration coefficients of the study areas located in the target area of each province is taken as the infiltration coefficient of that province.
Results show that the amount of water extracted from rain with the help of the cloud seeding projects in Simindasht is as follows: an increase in runoff of 63.9 million cubic meters (with the SCS equation) or 51.2 million cubic meters (with the reasoning equation), and an increase in groundwater resources of 40.5 million cubic meters.
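The SCS curve number runoff estimate used above can be sketched as follows; the curve number and rainfall depth are illustrative assumptions, not inputs reported in the study.

```python
# Sketch of the US SCS curve number (CN) method for direct runoff depth.
# Inputs (CN = 75, 50 mm storm) are hypothetical, for illustration only.

def scs_runoff(p_mm: float, cn: float) -> float:
    """Direct runoff depth (mm) from rainfall depth p_mm via the SCS-CN method."""
    s = 25400.0 / cn - 254.0   # potential maximum retention S (mm)
    ia = 0.2 * s               # initial abstraction, standard 20% of S
    if p_mm <= ia:
        return 0.0             # all rainfall abstracted; no runoff
    return (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

# Example: a 50 mm storm over a watershed with CN = 75
q = scs_runoff(50.0, 75.0)
print(f"runoff depth: {q:.1f} mm")
```

In a basin-scale application such as the one described, the runoff depth would be multiplied by the contributing area to obtain a runoff volume.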

Keywords: rainwater harvesting, ground water, atmospheric water resources, weather modification, cloud seeding

Procedia PDF Downloads 72
202 Enhanced Kinetic Solubility Profile of Epiisopiloturine Solid Solution in Hipromellose Phthalate

Authors: Amanda C. Q. M. Vieira, Cybelly M. Melo, Camila B. M. Figueirêdo, Giovanna C. R. M. Schver, Salvana P. M. Costa, Magaly A. M. de Lyra, Ping I. Lee, José L. Soares-Sobrinho, Pedro J. Rolim-Neto, Mônica F. R. Soares

Abstract:

Epiisopiloturine (EPI) is a drug candidate extracted from Pilocarpus microphyllus and isolated from the waste of pilocarpine production. EPI has demonstrated promising schistosomicidal, leishmanicidal, anti-inflammatory, and antinociceptive activities, according to in vitro studies carried out since 2009. However, this molecule shows poor aqueous solubility, which is a problem for the release of the drug candidate and its absorption by the organism. The purpose of the present study is to investigate the extent of enhancement of the kinetic solubility of a solid solution (SS) of EPI in hipromellose phthalate HP-55 (HPMCP), an enteric polymer carrier. The SS was obtained by the solvent evaporation method, using acetone/methanol (60:40) as the solvent system. Both EPI and the polymer (drug loading 10%) were dissolved in this solvent until a clear solution was obtained, then dried in an oven at 60 ºC for 12 hours, followed by drying in a vacuum oven for 4 h. The results show a considerable modification of the crystalline structure of the drug candidate. For instance, X-ray diffraction (XRD) shows a crystalline pattern for EPI that becomes amorphous in the SS. Polarized light microscopy, a technique more sensitive than XRD, also shows a complete absence of crystals in the SS sample. Differential scanning calorimetry (DSC) curves show no signal of the EPI melting point in the SS curve, indicating, once more, no presence of crystals in this system. Interactions between the drug candidate and the polymer were found by infrared spectroscopy, which shows a 43.3 cm-1 shift of the carbonyl band, indicating a moderate-to-strong interaction between them, probably one of the reasons for the SS formation. Under sink conditions (pH 6.8), the EPI SS had its dissolution performance increased 2.8-fold compared with the isolated drug candidate.
The EPI SS sample provided release of more than 95% of the drug candidate in 15 min, whereas only 45% of EPI alone dissolved in 15 min, and 70% in 90 min. Thus, HPMCP shows good potential to enhance the kinetic solubility profile of EPI. Future studies evaluating the stability of the SS are required to confirm the benefits of this system.

Keywords: epiisopiloturine, hipromellose phthalate HP-55, pharmaceutical technology, solubility

Procedia PDF Downloads 581
201 Food Strategies in the Mediterranean Basin, Possible for Food Safety and Security

Authors: Lorenza Sganzetta, Nunzia Borrelli

Abstract:

The research reflects on the current mapping of Food Strategies, on why particular sustainability priorities appear in particular geographic areas, and on how these priorities are evolving in Mediterranean planning. The rapid population growth affecting global cities poses an enormous challenge to conventional resource-intensive food production and supply, and creates an urgent need to address food safety, food security, and sustainability concerns. Urban or Territorial Food Strategies can provide an interesting path for developing this new agenda within the imperative principle of sustainability. Specifically, it is relevant to explore what 'sustainability' means within these policies. Most of these plans include actions related to four main components and interpretations of sustainability: food security and safety, food equity, environmental sustainability itself, and cultural identity; at the design phase, they differ slightly from each other according to how closely they approximate each of these dimensions. Starting from these assumptions, the article analyzes practices and policies representative of different Food Strategies around the world and focuses on the Mediterranean ones: the problems and negative externalities from which they start, the first interventions being implemented, and their main objectives. We mainly use qualitative data from primary and secondary collections. So far, an essential observation can be made about the relationship between these sustainability dimensions and geography.
In broad terms, US and Canadian policies have tended to devote large space to health issues and access to food; northern European policies have shown special attention to environmental issues and the shortening of the supply chain; and the policies that, even in limited numbers, have been developed in the Mediterranean basin have been characterized by a strong territorial and cultural imprint, with the major aim of preserving local production and the contact between the productive land and the end consumer. Recently, though, Mediterranean food planning strategies have been focusing more on health and food-accessibility issues, analyzing our diets not just as a matter of culture and territorial branding but as tools for reducing public health costs and improving access to fresh food for everyone. The article then reflects on how food safety, food security, and health are entering the new agenda of Mediterranean Food Strategies. The research hypothesis is that the economic crisis that in recent years has affected both producers and consumers has had a significant impact on nutrition habits and on the redefinition of food poverty, even in the homeland of the healthy Mediterranean diet. This trend, among other variables, has influenced the orientation and objectives of the food strategies.

Keywords: food security, food strategy, health, sustainability

Procedia PDF Downloads 184
200 Pressure-Robust Approximation for the Rotational Fluid Flow Problems

Authors: Medine Demir, Volker John

Abstract:

Fluid equations in a rotating frame of reference have a broad class of important applications in meteorology and oceanography, especially in the large-scale flows of the ocean and atmosphere, as well as in many physical and industrial settings. The Coriolis and centripetal forces resulting from the rotation of the earth play a crucial role in such systems. For such applications, it may be required to solve the system in complex three-dimensional geometries. In recent years, the Navier--Stokes equations in a rotating frame have been investigated in a number of papers using classical inf-sup stable mixed methods, such as Taylor--Hood pairs, to contribute to the analysis and the accurate and efficient numerical simulation. Numerical analysis reveals that these classical methods introduce a pressure-dependent contribution into the velocity error bounds that is proportional to some inverse power of the viscosity. Hence, these methods are optimally convergent, but small velocity errors might not be achieved for complicated pressures and small viscosity coefficients. Several approaches have been proposed for improving the pressure-robustness of pairs of finite element spaces. In this contribution, a pressure-robust space discretization of the incompressible Navier--Stokes equations in a rotating frame of reference is considered. The discretization employs divergence-free, $H^1$-conforming mixed finite element methods, such as Scott--Vogelius pairs. This approach may, however, require a modification of the meshes, such as the use of barycentric-refined grids for Scott--Vogelius pairs. Such a strategy requires the finite element code to have control over the mesh generator, which is not realistic in many engineering applications and might also conflict with the solver for the linear system.
An error estimate for the velocity is derived that tracks the dependency of the error bound on the coefficients of the problem, in particular on the angular velocity. Numerical examples illustrate the theoretical results. The idea of pressure-robust methods could be carried over to other types of flow problems, which will be considered in future studies. As another future research direction, to avoid a modification of the mesh, one may use a very simple parameter-dependent modification of the Scott--Vogelius element, the pressure-wired Stokes element, for which the inf-sup constant is independent of nearly-singular vertices.
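The pressure-dependent contribution described above can be written schematically as follows; these are generic, textbook-style bounds in standard notation for inf-sup stable pairs, not the paper's exact estimates.

```latex
% Classical inf-sup stable pair (e.g. Taylor--Hood): the velocity error
% is polluted by the pressure best-approximation error scaled by 1/\nu.
\|\nabla(u - u_h)\|_{0} \le C \Big( \inf_{v_h \in V_h} \|\nabla(u - v_h)\|_{0}
    + \frac{1}{\nu}\, \inf_{q_h \in Q_h} \|p - q_h\|_{0} \Big)

% Divergence-free (pressure-robust) pair (e.g. Scott--Vogelius):
% the pressure contribution drops out of the velocity error bound.
\|\nabla(u - u_h)\|_{0} \le C \inf_{v_h \in V_h} \|\nabla(u - v_h)\|_{0}
```

The second bound makes precise why small viscosity coefficients and complicated pressures do not degrade the velocity accuracy of divergence-free discretizations.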

Keywords: navier-stokes equations in a rotating frame of reference, coriolis force, pressure-robust error estimate, scott-vogelius pairs of finite element spaces

Procedia PDF Downloads 19
199 Discontinuous Spacetime with Vacuum Holes as Explanation for Gravitation, Quantum Mechanics and Teleportation

Authors: Constantin Z. Leshan

Abstract:

Hole Vacuum theory is based on a discontinuous spacetime that contains vacuum holes. Vacuum holes can explain gravitation and some laws of quantum mechanics, and they allow teleportation of matter. All massive bodies emit a flux of holes that curves the spacetime; increasing the concentration of holes leads to length contraction and time dilation, because the holes do not have the properties of extension and duration. In the limiting case where space consists of holes only, the distance between any two points is equal to zero and time stops: outside of the Universe, the properties of extension and duration do not exist. For this reason, the vacuum hole is the only particle in physics capable of describing gravitation using its own properties alone. All microscopic particles must 'jump' continually and 'vibrate' due to the appearance of holes (impassable microscopic 'walls' in space), and this is the cause of quantum behavior. Vacuum holes can explain entanglement, non-locality, the wave properties of matter, tunneling, the uncertainty principle, and so on. Particles do not have trajectories because spacetime is discontinuous and has impassable microscopic 'walls'; simple mechanical motion is impossible at small-scale distances, since it is impossible to 'trace' a straight line in a discontinuous spacetime that contains impassable holes. Spacetime 'boils' continually due to the appearance of the vacuum holes. For teleportation to be possible, we must send a body outside of the Universe by enveloping it with a closed surface consisting of vacuum holes. Since a material body cannot exist outside of the Universe, it reappears instantaneously at a random point of the Universe. Since a body disappears in one volume and reappears in another random volume without traversing the physical space between them, such a transportation method can be called teleportation (or Hole Teleportation).
It is shown that Hole Teleportation does not violate causality or special relativity, due to its random nature and other properties. Although Hole Teleportation is random, it could be used for the colonization of extrasolar planets with the help of a method called 'random jumps': after a large number of random teleportation jumps, there is a probability that the spaceship will appear near a habitable planet. We can create vacuum holes experimentally using the method proposed by Descartes: we must remove a body from a vessel without permitting another body to occupy its volume.

Keywords: border of the Universe, causality violation, perfect isolation, quantum jumps

Procedia PDF Downloads 386
198 A Framework for Incorporating Non-Linear Degradation of Conductive Adhesive in Environmental Testing

Authors: Kedar Hardikar, Joe Varghese

Abstract:

Conductive adhesives have found wide-ranging applications in the electronics industry, from fixing a defective conductor on a printed circuit board (PCB) and attaching an electronic component in an assembly, to protecting electronic components by forming a "Faraday cage." The reliability requirements for a conductive adhesive vary widely depending on the application and expected product lifetime. While the conductive adhesive is required to maintain structural integrity, the electrical performance of the associated sub-assembly can be affected by degradation of the adhesive, which depends on the highly varied use case. The conventional approach to assessing the reliability of the sub-assembly involves subjecting it to standard environmental test conditions such as high temperature with high humidity, thermal cycling, and high-temperature exposure, to name a few. To enable projection of test data and observed failures to field performance, the systematic development of an acceleration factor between the test conditions and field conditions is crucial. Common acceleration factor models, such as the Arrhenius model, are based on rate kinetics and typically rely on an assumption of linear degradation in time for a given condition and test duration. The application of interest in this work involves a conductive adhesive used in the electronic circuit of a capacitive sensor. The degradation of the conductive adhesive in a high-temperature, high-humidity environment is quantified by the capacitance values. Under such conditions, the use of established models such as the Hallberg-Peck model or the Eyring model to predict time to failure in the field typically relies on a linear degradation rate. In this particular case, the degradation is nonlinear in time and exhibits a square-root-of-time dependence.
It is also shown that, for the mechanism of interest, the presence of moisture is essential, and the dominant mechanism driving the degradation is the diffusion of moisture. In this work, a framework is developed to incorporate nonlinear degradation of the conductive adhesive into the development of an acceleration factor. The method can be extended to applications where the nonlinearity in the degradation rate can be adequately characterized in tests. It is shown that, depending on the expected product lifetime, the conventional linear degradation approach can overestimate or underestimate field performance. This work provides guidelines for the suitability of the linear degradation approximation for such varied applications.
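A minimal sketch of how square-root-of-time degradation changes the test-to-field acceleration factor, assuming a Peck-type temperature-humidity rate model; the activation energy, humidity exponent, and test/field conditions below are illustrative assumptions, not values from the study.

```python
import math

# If degradation is D(t) = k * t (linear), time to a critical level Dc is
# t_f = Dc / k, so AF = t_field / t_test = k_test / k_field. If instead
# D(t) = k * sqrt(t) (diffusion-limited), t_f = (Dc / k)**2 and the AF
# becomes the SQUARE of the linear-model AF.

KB = 8.617e-5  # Boltzmann constant, eV/K

def peck_rate(temp_c: float, rh_pct: float, ea_ev: float = 0.7, n: float = 2.5) -> float:
    """Relative degradation-rate constant k(T, RH) in a Peck-type model."""
    t_k = temp_c + 273.15
    return (rh_pct ** n) * math.exp(-ea_ev / (KB * t_k))

k_test = peck_rate(85.0, 85.0)   # accelerated test: 85 C / 85% RH chamber
k_field = peck_rate(30.0, 60.0)  # assumed field environment

af_linear = k_test / k_field     # AF under the linear-degradation assumption
af_sqrt = af_linear ** 2         # AF under sqrt(t) (diffusion-limited) degradation

print(f"linear-model AF: {af_linear:.0f}, sqrt-t AF: {af_sqrt:.0f}")
```

The squaring illustrates the paper's point: applying the conventional linear-rate acceleration factor to a diffusion-limited mechanism can substantially misestimate field lifetime, in either direction depending on where the product lifetime falls on the √t curve.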

Keywords: conductive adhesives, nonlinear degradation, physics of failure, acceleration factor model

Procedia PDF Downloads 100