Search results for: curve approximation
249 Building Education Leader Capacity through an Integrated Information and Communication Technology Leadership Model and Tool
Authors: Sousan Arafeh
Abstract:
Educational systems and schools worldwide are increasingly reliant on information and communication technology (ICT). Unfortunately, most educational leadership development programs do not offer formal curricular and/or field experiences that prepare students for managing ICT resources, personnel, and processes. The result is a steep learning curve for the leader and his/her staff and dissipated organizational energy that compromises desired outcomes. To address this gap in education leaders’ development, Arafeh’s Integrated Technology Leadership Model (AITLM) was created. It is a conceptual model and tool that educational leadership students can use to better understand the ICT ecology that exists within their schools. The AITL Model consists of six 'infrastructure types' where ICT activity takes place: technical infrastructure, communications infrastructure, core business infrastructure, context infrastructure, resources infrastructure, and human infrastructure. These six infrastructures are further divided into 16 key areas that need management attention. The AITL Model was created by critically analyzing existing technology/ICT leadership models and working to make something more authentic and comprehensive regarding school leaders’ purview and experience. The AITL Model then served as a tool when it was distributed to over 150 educational leadership students who were asked to review it and qualitatively share their reactions. Students said the model presented crucial areas of consideration that they had not been exposed to before and that the exercise of reviewing and discussing the AITL Model as a group was useful for identifying areas of growth that they could pursue in the leadership development program and in their professional settings. While development in all infrastructures and key areas was important for students’ understanding of ICT, they noted that they were least aware of the importance of the intangible area of the resources infrastructure. 
The AITL Model will be presented, and session participants will have an opportunity to review and reflect on its impact and utility. Ultimately, the AITL Model could have significant policy and practice implications. At the very least, it might help shape ICT content in educational leadership development programs through curricular and pedagogical updates.
Keywords: education leadership, information and communications technology, ICT, leadership capacity building, leadership development
Procedia PDF Downloads 116
248 Identifying and Quantifying Factors Affecting Traffic Crash Severity under Heterogeneous Traffic Flow
Authors: Praveen Vayalamkuzhi, Veeraragavan Amirthalingam
Abstract:
Studies on highway safety are becoming the need of the hour, as over 400 lives are lost every day in India due to road crashes. In order to evaluate the factors that lead to different levels of crash severity, it is necessary to investigate the level of safety of highways and its relation to crashes. In the present study, an attempt is made to identify the factors that contribute to road crashes and to quantify their effect on the severity of road crashes. The study was carried out on a four-lane divided rural highway in India. The variables considered in the analysis include components of the horizontal alignment of the highway (straight or curved section), time of day, driveway density, presence of median, median opening, gradient, operating speed, and annual average daily traffic. These variables were selected after a preliminary analysis. The major complexities in the study are the heterogeneous traffic and the speed variation between different classes of vehicles along the highway. To quantify the impact of each of these factors, statistical analyses were carried out using a logit model and negative binomial regression. The output from the statistical models showed that the horizontal components of the highway alignment, driveway density, time of day, operating speed, and annual average daily traffic have a significant relation with the severity of crashes, i.e., fatal as well as injury crashes. Further, annual average daily traffic has a more significant effect on severity than the other variables. The contribution of the highway's horizontal components to crash severity is also significant. Logit models can predict crashes better than negative binomial regression models.
The results of the study will help transport planners to look into these aspects at the planning stage itself for highways operated under heterogeneous traffic flow conditions.
Keywords: geometric design, heterogeneous traffic, road crash, statistical analysis, level of safety
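A binary logit model of the kind used above maps the explanatory variables to a probability of a severe (fatal or injury) crash through the logistic link. The sketch below is illustrative only; all coefficient values are hypothetical placeholders, not the estimates reported in the study.

```python
import math

def crash_severity_probability(curve_section, driveway_density, night,
                               operating_speed, aadt, coef=None):
    """Probability of a severe (fatal/injury) crash from a binary logit model.

    The coefficients below are hypothetical placeholders, not the values
    estimated in the study.
    """
    if coef is None:
        coef = {"intercept": -4.0, "curve": 0.6, "driveway": 0.05,
                "night": 0.4, "speed": 0.03, "aadt": 0.00002}
    z = (coef["intercept"]
         + coef["curve"] * curve_section        # 1 if horizontal curve, else 0
         + coef["driveway"] * driveway_density  # driveways per km
         + coef["night"] * night                # 1 if night-time, else 0
         + coef["speed"] * operating_speed      # km/h
         + coef["aadt"] * aadt)                 # annual average daily traffic
    return 1.0 / (1.0 + math.exp(-z))          # logistic link

# A curved section at night with higher speed and traffic is riskier:
p_low = crash_severity_probability(0, 2, 0, 60, 5000)
p_high = crash_severity_probability(1, 10, 1, 90, 30000)
```

With positive coefficients on every risk factor, the model necessarily ranks the second scenario as more severe, which is the qualitative pattern the abstract reports.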
Procedia PDF Downloads 302
247 Risk Analysis of Flood Physical Vulnerability in Residential Areas of Mathare Nairobi, Kenya
Authors: James Kinyua Gitonga, Toshio Fujimi
Abstract:
Vulnerability assessment and analysis are essential to determining the degree of damage and loss resulting from natural disasters. Urban flooding causes major economic losses and casualties in the Mathare residential area of Nairobi, Kenya. High population density caused by rural-urban migration, unemployment, and unplanned urban development are among the factors that increase flood vulnerability in the Mathare area. This study aims to analyse flood risk physical vulnerabilities in Mathare based on scientific data; rainfall data, River Mathare discharge rate data, water runoff data, field survey data, and a questionnaire survey administered through sampling of the study area were used to develop the risk curves. Three structural types of building were identified in the study area, and vulnerability and risk curves were constructed for each by plotting the relationship between flood depth and damage. The results indicate that buildings with mud walls and mud floors are the most vulnerable to flooding, while those with stone walls and concrete floors are the least vulnerable. The vulnerability of building contents is mainly determined by the number of floors: households with two floors are least vulnerable, and households with one floor are most vulnerable. Consequently, more than 80% of the residential buildings, including the property inside them, are highly vulnerable to floods and exposed to high risk. When estimating potential casualties/injuries, we found that the structural type of house was a major determinant: the mud/adobe structural type had casualties of 83.7%, while the masonry structural type had casualties of 10.71% of the people living in these houses.
This research concludes that flood awareness, warnings, and observance of building codes will help reduce damage to buildings and their contents and prevent deaths.
Keywords: flood loss, Mathare Nairobi, risk curve analysis, vulnerability
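The vulnerability curves described above relate flood depth to a damage ratio per structural type. A minimal sketch of how such a curve can be evaluated by linear interpolation follows; the depth-damage points are illustrative values, not the study's data.

```python
def damage_ratio(depth_m, curve):
    """Interpolate a depth-damage vulnerability curve.

    `curve` is a list of (flood depth in m, damage ratio 0..1) points,
    sorted by depth. The curves defined below are illustrative only.
    """
    if depth_m <= curve[0][0]:
        return curve[0][1]
    for (d0, r0), (d1, r1) in zip(curve, curve[1:]):
        if depth_m <= d1:
            # linear interpolation between the two surrounding points
            return r0 + (r1 - r0) * (depth_m - d0) / (d1 - d0)
    return curve[-1][1]  # beyond the last point, damage saturates

# Hypothetical curves: mud-walled houses are damaged faster than stone ones.
mud   = [(0.0, 0.0), (0.5, 0.4), (1.0, 0.8), (1.5, 1.0)]
stone = [(0.0, 0.0), (0.5, 0.1), (1.0, 0.3), (1.5, 0.5)]
```

At any given flood depth the mud curve returns a higher damage ratio, mirroring the ranking of structural types reported in the abstract.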
Procedia PDF Downloads 239
246 Surface Water Flow of Urban Areas and Sustainable Urban Planning
Authors: Sheetal Sharma
Abstract:
Urban planning is associated with land transformation from natural to modified and developed areas, which alters the natural environment. A basic knowledge of the relationship between the two should be ascertained before development of natural areas proceeds. Changes to the land surface due to built-up pavements, roads, and similar land cover affect surface water flow. There is a gap between urban planning and the basic knowledge of hydrological processes that planners should have. This paper aims to identify these variations in surface flow due to urbanization over a temporal scale of 40 years using the Storm Water Management Model (SWMM), and to correlate these findings with the urban planning guidelines for the study area, along with its geological background, to find suitable combinations of land cover, soil, and guidelines. To identify the changes in surface flows, 19 catchments were delineated with different geology and growth over 40 years, facing different groundwater level fluctuations. The increasing built-up area and the varying surface runoff were studied using ArcGIS, SWMM modelling, and regression analysis of runoff. Resulting runoff for various land covers and soil groups under varying built-up conditions was observed. The modelling procedure also included observations for varying precipitation with constant built-up area in all catchments. These observations were combined for each catchment, and a single regression curve was obtained for runoff. It was observed that alluvium with suitable land cover was better for infiltration and generated the least runoff, but excess built-up area could not be sustained on alluvial soil. Similarly, basalt had the least recharge and the most runoff, demanding maximum vegetation over it. Sandstone resulted in good recharge if planned with more open spaces and natural soils with intermittent vegetation.
Hence, these observations form a keystone for planners when planning various land uses on different soils. This paper contributes a solution to the basic knowledge gap that urban planners face during the development of natural surfaces.
Keywords: runoff, built up, roughness, recharge, temporal changes
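The study fits regression curves for runoff against built-up area; as a simpler illustration of how land cover and soil change the runoff response, the rational method scales peak flow by a runoff coefficient C. The coefficients below are generic textbook-style values, not the study's fitted results.

```python
def rational_peak_flow(c, i_mm_per_h, area_ha):
    """Rational-method peak runoff Q (m^3/s) = C * i * A / 360,
    with rainfall intensity i in mm/h and catchment area A in hectares."""
    return c * i_mm_per_h * area_ha / 360.0

# Illustrative runoff coefficients (not the study's values): vegetated
# alluvium infiltrates well; paved basalt sheds most of the rainfall.
alluvium_open = rational_peak_flow(0.30, 50.0, 10.0)
basalt_paved = rational_peak_flow(0.85, 50.0, 10.0)
```

For the same storm and area, the paved-basalt case produces nearly three times the peak flow, which is the qualitative contrast between soils and land covers the abstract describes.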
Procedia PDF Downloads 278
245 Application of Seismic Refraction Method in Geotechnical Study
Authors: Abdalla Mohamed M. Musbahi
Abstract:
The study area lies in the Al-Falah area, Airport-Tripoli, in Zone 16, where a multi-floor residential and commercial complex is planned; this part was divided into seven subzones. In each subzone, orthogonal profiles were collected using the seismic refraction method. Seismic refraction is a commonly used traditional geophysical technique for determining depth to bedrock, competence of bedrock, depth to the water table, or depth to other seismic velocity boundaries. The purpose of the work is to make engineers and decision-makers recognize the importance of planning and executing a pre-investigation program that includes geophysics, and in particular the seismic refraction method. The overall aim is achieved by evaluating the seismic refraction method at different scales: determining the depth and velocity of the base layer (bedrock) and calculating the elastic properties of each layer in the region. Orthogonal profiles were carried out in every subzone of Zone 16. In the seismic refraction layout, the geophones are placed along a straight imaginary line with 5 m spacing, and three shot points (at the beginning, middle, and end of the layout) are used to generate the P and S waves. The first and last shot points are placed about 5 meters from the geophones, and the middle shot point is located between the 12th and 13th geophones. From the time-distance curves, the P- and S-wave velocities were calculated, and the thickness was estimated for up to three layers. Any change in the physical properties of the medium (shear modulus, bulk modulus, density) changes the velocity of the waves passing through it, because a change in rock properties changes the parameters of the medium: density (ρ), bulk modulus (κ), and shear modulus (μ).
Therefore, the velocity of waves travelling in rocks has a close relationship with these parameters, and we can estimate them from the primary and secondary velocities (P-wave, S-wave).
Keywords: application of seismic, geotechnical study, physical properties, seismic refraction
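Given the P- and S-wave velocities from the time-distance curves and an assumed density, the dynamic elastic parameters follow from the standard elastic relations μ = ρVs², κ = ρ(Vp² − 4Vs²/3), and ν = (Vp² − 2Vs²) / 2(Vp² − Vs²). A sketch with illustrative, not site-measured, values:

```python
def elastic_moduli(vp, vs, rho):
    """Dynamic elastic parameters from P- and S-wave velocities (m/s)
    and density rho (kg/m^3), using the standard elastic relations."""
    mu = rho * vs ** 2                                     # shear modulus (Pa)
    kappa = rho * (vp ** 2 - 4.0 * vs ** 2 / 3.0)          # bulk modulus (Pa)
    nu = (vp ** 2 - 2.0 * vs ** 2) / (2.0 * (vp ** 2 - vs ** 2))  # Poisson's ratio
    return mu, kappa, nu

# Illustrative values for a near-surface layer (not data from Zone 16):
mu, kappa, nu = elastic_moduli(vp=1800.0, vs=900.0, rho=2000.0)
```

For Vp/Vs = 2 the relations give ν = 1/3 exactly, a convenient sanity check on the implementation.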
Procedia PDF Downloads 491
244 Geospatial Analysis of Hydrological Response to Forest Fires in Small Mediterranean Catchments
Authors: Bojana Horvat, Barbara Karleusa, Goran Volf, Nevenka Ozanic, Ivica Kisic
Abstract:
Forest fires are a major threat in many regions of Croatia, especially in coastal areas. Although they are sometimes caused by natural processes, the most common cause is the human factor, intentional or unintentional. Forest fires drastically transform landscapes and influence natural processes. The main goal of the presented research is to analyse and quantify the impact of a forest fire on hydrological processes and to propose the model that best describes the changes in hydrological patterns in the analysed catchments. Keeping in mind the spatial component of the processes, geospatial analysis is performed to gain better insight into the spatial variability of the hydrological response to disastrous events. To that end, two catchments that experienced severe forest fires were delineated, and various hydrological and meteorological data, both attribute and spatial, were collected. The major drawback is certainly the lack of hydrological data, common in small torrential karstic streams; hence, modelling results should be validated against data collected in a catchment with similar characteristics and established hydrological monitoring. The event chosen for the modelling is the forest fire that occurred in July 2019 and burned nearly 10% of the analysed area. Surface (land use/land cover) conditions before and after the event were derived from two Sentinel-2 images. The mapping of the burnt area is based on a comparison of the Normalized Burn Ratio (NBR) computed from both images. To estimate and compare hydrological behaviour before and after the event, curve number (CN) values are assigned to the land use/land cover classes derived from the satellite images. The hydrological modelling results in surface runoff generation and hence in the prediction of the catchments' hydrological responses to a forest fire event.
The research was supported by the Croatian Science Foundation through the project 'Influence of Open Fires on Water and Soil Quality' (IP-2018-01-1645).
Keywords: Croatia, forest fire, geospatial analysis, hydrological response
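Assigning curve numbers (CN) to the pre- and post-fire land-cover classes feeds directly into the SCS-CN runoff equation: a burnt surface is given a higher CN and therefore yields more direct runoff from the same storm. A minimal sketch of the standard formulation (the CN values are illustrative, not the study's assignments):

```python
def scs_runoff_mm(p_mm, cn):
    """SCS curve-number direct runoff (mm) for a storm depth p_mm (mm),
    in the standard form with initial abstraction Ia = 0.2 S."""
    s = 25400.0 / cn - 254.0      # potential maximum retention (mm)
    ia = 0.2 * s                  # initial abstraction
    if p_mm <= ia:
        return 0.0                # all rainfall abstracted, no direct runoff
    return (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

# A burnt catchment is assigned a higher CN, so the same storm
# produces more runoff (illustrative CN values):
q_before = scs_runoff_mm(50.0, 70)
q_after = scs_runoff_mm(50.0, 85)
```

The post-fire runoff is several times the pre-fire value for the same 50 mm storm, which is the mechanism by which the CN reassignment captures the fire's hydrological impact.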
Procedia PDF Downloads 136
243 Control for Fluid Flow Behaviours of Viscous Fluids and Heat Transfer in Mini-Channel: A Case Study Using Numerical Simulation Method
Authors: Emmanuel Ophel Gilbert, Williams Speret
Abstract:
The control of the flow behaviour and heat transfer of viscous fluids within a heated mini-channel is considered. Heat transfer and flow characteristics of different viscous liquids, such as engine oil, automatic transmission fluid, one-half ethylene glycol, and deionized water, were numerically analyzed. Mathematical tools such as Fourier series and Laplace and Z-transforms were employed to ascertain the wave-like behaviour of each of these viscous fluids. The steady, laminar flow and heat transfer equations are solved with the aid of a numerical simulation technique. This technique is further validated by comparing accessible experimental values against the predicted local thermal resistances. The roughness of the mini-channel, one of its physical limitations, was also predicted in this study; it affects the friction factor. When an additive such as tetracycline was introduced into the fluid, the heat input was lowered, and this had a pro rata effect on the minor and major frictional losses, mostly at very small Reynolds numbers, circa 60-80. At these low Reynolds numbers, the viscosity decreases and the frictional losses become minute as the temperature of the viscous liquids increases. Three equations and models were identified which, supported by the numerical simulation via interpolation and integration of the variables extended to the walls of the mini-channel, yield the utmost reliability for engineering and technology calculations for turbulence-impacting jets in the near future. The Navier-Stokes equations were found to be tangential to this finding, although other physical factors related to them need to be checked to avoid uncertain turbulence of the fluid flow.
This paradox is resolved within the framework of continuum mechanics using the classical slip condition and an iteration scheme, via a numerical simulation method that takes into account certain terms in the full Navier-Stokes equations; this, however, required dropping certain assumptions from the approximation. Concrete questions raised in the main body of the work are examined further in the appendices.
Keywords: frictional losses, heat transfer, laminar flow, mini-channel, numerical simulation, Reynolds number, turbulence, viscous fluids
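At the small Reynolds numbers mentioned above (circa 60-80), the flow is deep in the laminar regime, where the Darcy friction factor falls inversely with Re. A sketch of the standard relations (the fluid properties below are illustrative; the 64/Re form strictly holds for circular ducts, and rectangular mini-channels use a geometry-dependent constant instead of 64):

```python
def reynolds(rho, velocity, d_h, mu):
    """Reynolds number for flow in a channel of hydraulic diameter d_h (m),
    with density rho (kg/m^3) and dynamic viscosity mu (Pa.s)."""
    return rho * velocity * d_h / mu

def darcy_friction_laminar(re):
    """Darcy friction factor for fully developed laminar duct flow
    (circular-duct form f = 64/Re)."""
    return 64.0 / re

# Engine oil vs water in the same 1 mm channel at the same velocity
# (representative property values, not the study's data):
re_oil = reynolds(rho=880.0, velocity=0.5, d_h=1e-3, mu=0.2)
re_water = reynolds(rho=998.0, velocity=0.5, d_h=1e-3, mu=1e-3)
```

Both cases sit well below the transition region near Re ≈ 2300, and the far more viscous oil pays a proportionally larger friction penalty, consistent with the frictional-loss behaviour discussed in the abstract.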
Procedia PDF Downloads 176
242 Three Foci of Trust as Potential Mediators in the Association Between Job Insecurity and Dynamic Organizational Capability: A Quantitative, Exploratory Study
Authors: Marita Heyns
Abstract:
Job insecurity is a distressing phenomenon which has far-reaching consequences for both employees and their organizations. Previously, much attention has been given to the link between job insecurity and individual-level performance outcomes, while less is known about how subjectively perceived job insecurity might transfer beyond the individual level to affect the performance of the organization at an aggregated level. Research focusing on how employees' fear of job loss might affect the organization's ability to respond proactively to volatility and drastic change through applying its capabilities of sensing, seizing, and reconfiguring appears to be practically non-existent. Equally little is known about the potential underlying mechanisms through which job insecurity might affect the dynamic capabilities of an organization. This study examines how job insecurity might affect dynamic organizational capability through trust as an underlying process. More specifically, it considered the simultaneous roles of trust at an impersonal (organizational) level and trust at an interpersonal level (in leaders and co-workers) as potential underlying mechanisms through which job insecurity might affect the organization's dynamic capability to respond to opportunities and imminent, drastic change. A quantitative research approach and a stratified random sampling technique enabled the collection of data among 314 managers at four different plant sites of a large South African steel manufacturing organization undergoing dramatic changes. To assess the study hypotheses, the following statistical procedures were employed: structural equation modelling was performed in Mplus to evaluate the measurement and structural models.
The chi-square test of absolute fit, as well as alternative fit indexes such as the Comparative Fit Index, the Tucker-Lewis Index, the Root Mean Square Error of Approximation, and the Standardized Root Mean Square Residual, were used as indicators of model fit. Composite reliabilities were calculated to evaluate the reliability of the factors. Finally, interaction effects were tested using PROCESS and the construction of two-sided 95% confidence intervals. The findings indicate that job insecurity had a lower-than-expected detrimental effect on evaluations of the organization's dynamic capability, owing to the buffering effects of trust in the organization and in its leaders, respectively. In contrast, trust in colleagues did not seem to have any noticeable facilitative effect. The study proposes that both job insecurity and dynamic capability can be managed more effectively by also paying attention to factors that could promote trust in the organization and its leaders; some practical recommendations are given in this regard.
Keywords: dynamic organizational capability, impersonal trust, interpersonal trust, job insecurity
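A PROCESS-style test of an indirect effect builds a percentile confidence interval for the product of the two regression paths by resampling. A minimal sketch on simulated data (the path coefficients, sample size, and variable names are invented for illustration, not the study's estimates):

```python
import random

def slope(x, y):
    """OLS slope of y on x."""
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

rng = random.Random(42)
n = 200
# Simulated mediation chain: insecurity -> trust (path a) -> capability (path b)
insecurity = [rng.gauss(0, 1) for _ in range(n)]
trust = [-0.5 * x + rng.gauss(0, 1) for x in insecurity]
capability = [0.6 * m + rng.gauss(0, 1) for m in trust]

# Bootstrap percentile CI for the indirect effect a*b
boots = []
idx = list(range(n))
for _ in range(1000):
    s = [rng.choice(idx) for _ in range(n)]
    xs = [insecurity[i] for i in s]
    ms = [trust[i] for i in s]
    ys = [capability[i] for i in s]
    boots.append(slope(xs, ms) * slope(ms, ys))
boots.sort()
ci_low, ci_high = boots[24], boots[974]  # two-sided 95% percentile CI
```

Because the simulated indirect effect (about -0.3) is strong relative to the sampling noise, the whole interval lies below zero, which is how mediation is inferred from such a CI.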
Procedia PDF Downloads 89
241 A Study on the Effect of Different Climate Conditions on Time of Balance of Bleeding and Evaporation in Plastic Shrinkage Cracking of Concrete Pavements
Authors: Hasan Ziari, Hassan Fazaeli, Seyed Javad Vaziri Kang Olyaei, Asma Sadat Dabiri
Abstract:
Cracks in concrete pavements provide a path for the ingress of corrosive substances, acids, oils, and water into the pavement and reduce its long-term durability and level of service. One of the causes of early cracks in concrete pavements is plastic shrinkage. This shrinkage occurs due to the formation of negative capillary pressures after the bleeding and evaporation rates at the pavement surface reach equilibrium. The cracks form if the tensile stresses caused by the restrained shrinkage exceed the tensile strength of the concrete. Different climate conditions change the rate of evaporation and thus the balance time of bleeding and evaporation, which changes the severity of cracking in the concrete. The present study examined the relationship between the balance time of bleeding and evaporation and the cracking area in concrete slabs using the standard method ASTM C1579 under 27 different environmental conditions, with continuous video recording and digital image analysis. The results showed that as the evaporation rate increased and the balance time decreased, the crack severity increased significantly, so that reducing the balance time from its maximum to its minimum value increased the cracking area more than fourfold. It was also observed that the cracking area-balance time curve can be interpreted in three sections. An examination of these three parts showed that the combination of climate conditions has a significant effect on increasing or decreasing these two variables, and that a single severe factor alone cannot produce the critical conditions for plastic cracking. By combining two mild environmental factors with one severe climate factor (in terms of surface evaporation rate), a considerable reduction in balance time and a sharp increase in cracking severity can be prevented.
The results of this study showed that balance time can be an essential factor in controlling and predicting plastic shrinkage cracking in concrete pavements, and it is necessary to control this factor when constructing concrete pavements in different climate conditions.
Keywords: bleeding and cracking severity, concrete pavements, climate conditions, plastic shrinkage
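The evaporation rate that drives the balance time can be estimated from climate variables; one common closed-form approximation of the ACI nomograph is Uno's formula, used here as an assumed model with illustrative example conditions:

```python
def evaporation_rate(t_concrete_c, t_air_c, rel_humidity, wind_kmh):
    """Surface evaporation rate in kg/m^2/h via Uno's closed-form
    approximation of the ACI 305 nomograph (assumed model here).
    Temperatures in deg C, rel_humidity as a 0..1 fraction, wind in km/h."""
    return 5.0 * ((t_concrete_c + 18.0) ** 2.5
                  - rel_humidity * (t_air_c + 18.0) ** 2.5) * (wind_kmh + 4.0) * 1e-6

# Severe climate (hot, dry, windy) vs mild climate (illustrative values):
e_severe = evaporation_rate(35.0, 35.0, 0.3, 30.0)
e_mild = evaporation_rate(20.0, 20.0, 0.8, 5.0)
```

The severe combination lands well above the commonly cited 1.0 kg/m²/h cracking-risk threshold while the mild one stays far below it, illustrating how combined climate factors, rather than any single one, push the evaporation rate into the critical range.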
Procedia PDF Downloads 146
240 Validation of the Female Sexual Function Index and the Female Sexual Distress Scale-Desire/Arousal/Orgasm in Chinese Women
Authors: Lan Luo, Jingjing Huang, Huafang Li
Abstract:
Introduction: Distressing low sexual desire is common in China, while the lack of reliable and valid instruments to evaluate symptoms of hypoactive sexual desire disorder (HSDD) impedes related research and clinical services. Aim: This study aimed to validate the reliability and validity of the Female Sexual Function Index (FSFI) and the Female Sexual Distress Scale-Desire/Arousal/Orgasm (FSDS-DAO) in Chinese female HSDD patients. Methods: We administered the FSFI and FSDS-DAO to a convenience sample of Chinese adult women. Participants were diagnosed by a psychiatrist according to the Diagnostic and Statistical Manual of Mental Disorders, 4th Edition, Text Revision (DSM-IV-TR). Results: We had a valid analysis sample of 279 Chinese women, of whom 107 were HSDD patients. The Cronbach's α values of the FSFI and FSDS-DAO were 0.947 and 0.956, respectively, and their intraclass correlation coefficients were 0.86 and 0.89, respectively (test-retest interval: 13-15 days). The correlation coefficient between the Revised Adult Attachment Scale (RAAS) and the FSFI (or FSDS-DAO) did not exceed 0.4. The area under the receiver operating characteristic (ROC) curve was 0.83 when FSFI-d (the desire domain of the FSFI) and FSDS-DAO were combined to diagnose HSDD, which was significantly better than using either scale individually. An FSFI-d of less than 2.7 (range 1.2-6) with an FSDS-DAO of no less than 15 (range 0-60) (sensitivity 65%, specificity 83%), or an FSFI-d of no more than 3.0 with an FSDS-DAO of no less than 14 (sensitivity 74%, specificity 77%), can be used as cutoff scores in clinical research or outpatient screening. Clinical implications: The FSFI (including FSFI-d) and FSDS-DAO are suitable for the screening and evaluation of Chinese female HSDD patients of childbearing age. Strengths and limitations: Strengths include a thorough validation of the FSFI and FSDS-DAO and the exploration of a cutoff score combining FSFI-d and FSDS-DAO.
Limitations include a small convenience sample and the requirement that HSDD patients be sexually active. Conclusion: The FSFI (including FSFI-d) and FSDS-DAO have good internal consistency, test-retest reliability, construct validity, and criterion validity in Chinese female HSDD patients of childbearing age.
Keywords: sexual desire, sexual distress, hypoactive sexual desire disorder, scale
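The combined cutoff rule (FSFI-d < 2.7 together with FSDS-DAO ≥ 15) can be applied as a simple screen. A sketch computing sensitivity and specificity on a tiny invented sample (not the study's data):

```python
def classify(fsfi_d, fsds_dao, fsfi_cut=2.7, fsds_cut=15.0):
    """Flag a possible HSDD case using the paper's combined rule:
    low desire (FSFI-d below cutoff) AND high distress (FSDS-DAO at
    or above cutoff)."""
    return fsfi_d < fsfi_cut and fsds_dao >= fsds_cut

def sensitivity_specificity(cases, controls):
    """cases/controls are lists of (fsfi_d, fsds_dao) tuples."""
    tp = sum(classify(*c) for c in cases)        # true positives
    tn = sum(not classify(*c) for c in controls) # true negatives
    return tp / len(cases), tn / len(controls)

# Tiny invented sample for illustration only:
cases = [(2.0, 30.0), (2.5, 20.0), (3.5, 40.0)]   # last case is missed
controls = [(4.5, 5.0), (5.0, 2.0), (2.0, 25.0)]  # last is a false positive
sens, spec = sensitivity_specificity(cases, controls)
```

The rule's AND structure is what trades sensitivity for specificity: requiring both low desire and high distress excludes distressed-but-not-low-desire controls at the cost of missing some cases.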
Procedia PDF Downloads 76
239 Processing and Characterization of Oxide Dispersion Strengthened (ODS) Fe-14Cr-3W-0.5Ti-0.3Y₂O₃ (14YWT) Ferritic Steel
Authors: Farha Mizana Shamsudin, Shahidan Radiman, Yusof Abdullah, Nasri Abdul Hamid
Abstract:
Oxide dispersion strengthened (ODS) ferritic steels are among the most promising candidates for large-scale structural materials in next-generation fission and fusion nuclear power reactors. This kind of material is relatively stable at high temperature, possesses remarkable mechanical properties, and has comparatively good resistance to neutron radiation damage. The superior performance of ODS ferritic steels over conventional steels is attributed to the high number density of nano-sized dispersoids, which act as nucleation sites and stable sinks for the many small helium bubbles resulting from irradiation, and as pinning points against dislocation movement and grain growth. ODS ferritic steels are usually produced by powder metallurgical routes involving mechanical alloying (MA) of Y₂O₃ and pre-alloyed or elemental metallic powders, followed by consolidation via hot isostatic pressing (HIP) or hot extrusion (HE). In this study, Fe-14Cr-3W-0.5Ti-0.3Y₂O₃ (designated 14YWT) was produced by mechanical alloying followed by the HIP technique. The crystal structure and morphology of the sample were characterized using X-ray diffraction (XRD) and field emission scanning electron microscopy (FESEM), respectively. The magnetic measurement of the sample at room temperature was carried out using a vibrating sample magnetometer (VSM). FESEM micrographs revealed a homogeneous microstructure constituted by fine grains of less than 650 nm in size. Ultra-fine dispersoids between 5 nm and 19 nm in size were observed homogeneously distributed within the BCC matrix. EDS mapping revealed that the dispersoids contain Y-Ti-O nanoclusters, and the magnetization curve obtained by VSM shows that the sample approaches the behaviour of a soft ferromagnetic material.
In conclusion, ODS Fe-14Cr-3W-0.5Ti-0.3Y₂O₃ (14YWT) ferritic steel was successfully produced by the HIP technique in the present study.
Keywords: hot isostatic pressing, magnetization, microstructure, ODS ferritic steel
Procedia PDF Downloads 319
238 Calibration of 2D and 3D Optical Measuring Instruments in Industrial Environments at Submillimeter Range
Authors: Alberto Mínguez-Martínez, Jesús de Vicente y Oliva
Abstract:
Modern manufacturing processes have led to the miniaturization of systems and, as a result, parts at the micro- and nanoscale are produced. This trend seems set to become increasingly important in the near future. Besides, as a requirement of Industry 4.0, the digitalization of production and process models makes it very important to ensure that the dimensions of newly manufactured parts meet the specifications of the models. It is thereby possible to reduce scrap and the cost of non-conformities while ensuring the stability of production. To ensure the quality of manufactured parts, it becomes necessary to carry out traceable measurements at scales below one millimeter. Providing adequate traceability to the SI unit of length (the meter) for 2D and 3D measurements at this scale is a problem without a unique solution in industrial environments. Researchers in the field of dimensional metrology all around the world are working on this issue. A solution for industrial environments, even if incomplete, will enable working with some traceability. At this point, we believe that the study of surfaces could provide a first approximation to a solution. Among the different options proposed in the literature, areal topography methods may be the most relevant because they can be compared to measurements performed using coordinate measuring machines (CMMs). These measuring methods give (x, y, z) coordinates for each point, expressed in two different ways: either the z coordinate as a function of x, denoted z(x), for each Y-axis coordinate, or as a function of both x and y, denoted z(x, y). Among others, optical measuring instruments, mainly microscopes, are extensively used to carry out measurements at scales below one millimeter because they are non-destructive.
In this paper, the authors propose a calibration procedure for the scales of optical measuring instruments, particularized for a confocal microscope, using material standards that are easy to find and calibrate in metrology and quality laboratories in industrial environments. Confocal microscopes are measuring instruments capable of filtering out-of-focus reflected light so that only the in-focus part of the surface reaches the detector. By taking pictures at different Z levels of focus, specialized software interpolates between the different planes and can reconstruct the surface geometry as a 3D model. As is easy to deduce, it is necessary to give traceability to each axis. As a complementary result, the roughness parameter Ra will be traced to the reference. Although the solution is designed for a confocal microscope, it may be used for the calibration of other optical measuring instruments with minor changes.
Keywords: industrial environment, confocal microscope, optical measuring instrument, traceability
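In the simplest case, giving traceability to one axis reduces to fitting a scale correction factor between instrument readings and a calibrated material standard. A least-squares sketch (the readings below are hypothetical, not from the proposed procedure):

```python
def scale_correction(indicated_um, reference_um):
    """Least-squares scale factor (no offset term) mapping instrument
    readings to a calibrated material standard, values in micrometres."""
    num = sum(i * r for i, r in zip(indicated_um, reference_um))
    den = sum(i * i for i in indicated_um)
    return num / den

# Hypothetical readings of a calibrated step standard: the instrument
# reads about 0.45% long on this axis.
indicated = [10.05, 20.11, 50.22, 100.45]
reference = [10.00, 20.00, 50.00, 100.00]
k = scale_correction(indicated, reference)
corrected = [k * i for i in indicated]
```

A full calibration would also propagate the standard's uncertainty into k, but even this minimal fit pulls the corrected readings to within a few hundredths of a micrometre of the reference values.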
Procedia PDF Downloads 155
237 Enhancement of Mass Transport and Separations of Species in an Electroosmotic Flow by Distinct Oscillatory Signals
Authors: Carlos Teodoro, Oscar Bautista
Abstract:
In this work, we analyze theoretically the mass transport in a time-periodic electroosmotic flow through a parallel-flat-plate microchannel under different periodic functions of the applied external electric field. The microchannel connects two reservoirs having different constant concentrations of an electro-neutral solute, and the zeta potential of the microchannel walls is assumed to be uniform. The governing equations that determine the mass transport in the microchannel are the Poisson-Boltzmann equation, the modified Navier-Stokes equations under the Debye-Hückel approximation (zeta potential less than 25 mV), and the species conservation equation. These equations are nondimensionalized, and four dimensionless parameters that control the mass transport phenomenon appear: an angular Reynolds number, the Schmidt number, the Péclet number, and an electrokinetic parameter representing the ratio of the half-height of the microchannel to the Debye length. To solve the mathematical model, the electric potential is first determined from the Poisson-Boltzmann equation, which allows the electric force to be determined for various periodic functions of the external electric field expressed as Fourier series. In particular, three different excitation waveforms of the external electric field are assumed: a) sawtooth, b) step, and c) a periodic irregular function. The periodic electric forces are substituted into the modified Navier-Stokes equations, and the hydrodynamic field is derived for each case of the electric force. From the obtained velocity fields, the species conservation equation is solved, and the concentration fields are found. Numerical calculations were done for several binary systems in which two dilute species are transported in the presence of a carrier.
It is observed that there are different angular frequencies of the imposed external electric signal at which the total mass transport of each species is the same, independently of the molecular diffusion coefficient. These frequencies are called crossover frequencies and are obtained graphically at the intersection points when the total mass transport is plotted against the imposed frequency. The crossover frequencies differ depending on the Schmidt number, the electrokinetic parameter, the angular Reynolds number, and the type of signal of the external electric field. It is demonstrated that the mass transport through the microchannel is strongly dependent on the modulation frequency of the particular applied alternating electric field. Possible extensions of the analysis to more complicated pulsation profiles are also outlined.
Keywords: electroosmotic flow, mass transport, oscillatory flow, species separation
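The periodic waveforms of the external field are expanded as Fourier series before substitution into the momentum equation. As a minimal illustration (not code from the study), the sawtooth excitation of amplitude E0 admits the standard expansion E(t) = (2 E0 / pi) * sum over n of (-1)^(n+1) sin(n omega t) / n, which a truncated sum reproduces away from the discontinuity:

```python
import math

def sawtooth_fourier(t, omega, e0=1.0, n_terms=500):
    """Truncated Fourier series of a sawtooth field of amplitude e0:
    E(t) = (2*e0/pi) * sum_{n>=1} (-1)**(n+1) * sin(n*omega*t) / n."""
    return (2.0 * e0 / math.pi) * sum(
        (-1) ** (n + 1) * math.sin(n * omega * t) / n
        for n in range(1, n_terms + 1)
    )

def sawtooth_exact(t, omega, e0=1.0):
    """Exact sawtooth: linear ramp from -e0 to +e0 over each period."""
    phase = omega * t / (2.0 * math.pi)
    return 2.0 * e0 * (phase - math.floor(phase + 0.5))
```

The same truncation pattern applies to the step and irregular waveforms, only with different Fourier coefficients.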
Procedia PDF Downloads 216
236 Role of von Willebrand Factor Antigen as Non-Invasive Biomarker for the Prediction of Portal Hypertensive Gastropathy in Patients with Liver Cirrhosis
Authors: Mohamed El Horri, Amine Mouden, Reda Messaoudi, Mohamed Chekkal, Driss Benlaldj, Malika Baghdadi, Lahcene Benmahdi, Fatima Seghier
Abstract:
Background/aim: Recently, the von Willebrand factor antigen (vWF-Ag) has been identified as a new marker of portal hypertension (PH) and its complications. Few studies have examined its role in the prediction of esophageal varices. vWF-Ag is considered a non-invasive approach that spares patients the burden, cost, drawbacks, and unpleasantness of repeated endoscopic examinations. In our study, we aimed to evaluate the ability of this marker to predict another complication of portal hypertension, portal hypertensive gastropathy (PHG), which is likewise diagnosed endoscopically. Patients and methods: This is a prospective study including 124 cirrhotic patients with no history of bleeding who underwent screening endoscopy for PH-related complications such as esophageal varices (EVs) and PHG. Routine biological tests were performed, as well as vWF-Ag testing by both ELFA and immunoturbidimetric techniques. The diagnostic performance of the marker was assessed using sensitivity, specificity, positive predictive value, negative predictive value, accuracy, and receiver operating characteristic curves. Results: 124 patients were enrolled in this study, with a mean age of 58 years [CI: 55 – 60 years] and a sex ratio of 1.17. Viral etiologies were found in 50% of patients. Screening endoscopy revealed the presence of PHG in 20.2% of cases, while EVs were found in 83.1% of cases. vWF-Ag levels were significantly increased in patients with PHG compared to those without: 441% [CI: 375 – 506] versus 279% [CI: 253 – 304], respectively (p < 0.0001). Using the area under the receiver operating characteristic curve (AUC), vWF-Ag was a good predictor of the presence of PHG. With a cut-off value of 320% and an AUC of 0.824, vWF-Ag had 84% sensitivity, 74% specificity, 44.7% positive predictive value, 94.8% negative predictive value, and 75.8% diagnostic accuracy.
Conclusion: vWF-Ag is a good non-invasive, low-cost marker for excluding the presence of PHG in patients with liver cirrhosis. Using this marker as part of a selective screening strategy might reduce the need for endoscopic screening and the cost of managing these patients.
Keywords: von Willebrand factor, portal hypertensive gastropathy, prediction, liver cirrhosis
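The reported performance figures are consistent with a standard 2x2 confusion-matrix calculation. The helper below is a generic sketch (not from the study); the counts used to check it are back-calculated from the abstract: roughly 25 of 124 patients had PHG, of whom 21 tested above the 320% cut-off, with 26 false positives among the 99 without PHG.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic-test metrics from 2x2 confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),          # true-positive rate
        "specificity": tn / (tn + fp),          # true-negative rate
        "ppv": tp / (tp + fp),                  # positive predictive value
        "npv": tn / (tn + fn),                  # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```

With tp=21, fp=26, tn=73, fn=4 this reproduces the 84% / 74% / 44.7% / 94.8% / 75.8% figures quoted above.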
Procedia PDF Downloads 205
235 Flow Duration Curves and Recession Curves Connection through a Mathematical Link
Authors: Elena Carcano, Mirzi Betasolo
Abstract:
This study helps public water bureaus give reliable answers to water concession requests. Rapidly increasing water requests can be supported provided that further uses of a river course are not totally compromised and environmental features are protected as well. Strictly speaking, a water concession can be considered a continuous withdrawal from the source and causes a mean annual streamflow reduction. Therefore, deciding whether a water concession is appropriate or inappropriate seems easily solved by comparing the generic demand to the mean annual streamflow available. Still, the immediate shortcoming of such a comparison is that streamflow data are available only for a few catchments and, most often, limited to specific sites. Furthermore, comparing the generic water demand to the mean daily discharge is far from satisfactory, since the mean daily streamflow is greater than the water withdrawal for a long period of the year. Consequently, such a comparison appears to be of little significance for preserving the quality and quantity of the river. To overcome this limit, this study aims to complete the information provided by flow duration curves by introducing a link between Flow Duration Curves (FDCs) and recession curves, and to show the chronological sequence of flows with a particular focus on low-flow data. The analysis is carried out on 25 catchments located in North-Eastern Italy for which daily data are available. The results identify groups of catchments as hydrologically homogeneous, having the lower part of their FDCs (the streamflow interval between Q(300) and Q(335), the flows exceeded 300 and 335 days per year) smoothly reproduced by a common recession curve. In conclusion, the results are useful for providing more reliable answers to water requests, especially for those catchments which show similar hydrological response, and can be used for a focused regionalization approach on low-flow data.
A mathematical link between flow duration curves and recession curves is herein provided, thus furnishing FDC information with a temporal sequence of data. In such a way, by introducing assumptions on recession curves, a chronological sequence can also be attributed to the low-flow portion of the FDCs, which by nature lack this information.
Keywords: chronological sequence of discharges, recession curves, streamflow duration curves, water concession
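An empirical FDC simply ranks the daily flows and assigns each an exceedance duration; Q(300) and Q(335) are then the flows equalled or exceeded 300 and 335 days per year. The sketch below is an illustration of that construction only (Weibull plotting position assumed), not the study's recession-curve link or regionalization procedure:

```python
def flow_duration_curve(daily_flows):
    """Empirical FDC: flows sorted in descending order paired with the
    number of days per year each flow is equalled or exceeded
    (Weibull plotting position)."""
    q = sorted(daily_flows, reverse=True)
    n = len(q)
    durations = [365.0 * (i + 1) / (n + 1) for i in range(n)]
    return durations, q

def q_of_duration(daily_flows, d_days):
    """Flow equalled or exceeded d_days per year, e.g. Q(300), Q(335)."""
    durations, q = flow_duration_curve(daily_flows)
    for d, flow in zip(durations, q):
        if d >= d_days:
            return flow
    return q[-1]
```

By construction Q(300) >= Q(335): the lower the flow, the more days per year it is exceeded.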
Procedia PDF Downloads 185
234 A Foodborne Cholera Outbreak in a School Caused by Eating Contaminated Fried Fish: Hoima Municipality, Uganda, February 2018
Authors: Dativa Maria Aliddeki, Fred Monje, Godfrey Nsereko, Benon Kwesiga, Daniel Kadobera, Alex Riolexus Ario
Abstract:
Background: Cholera is a severe gastrointestinal disease caused by Vibrio cholerae. It has caused several pandemics. On 26 February 2018, a suspected cholera outbreak, with one death, occurred in School X in Hoima Municipality, western Uganda. We investigated to identify the scope and mode of transmission of the outbreak and to recommend evidence-based control measures. Methods: We defined a suspected case as onset of diarrhea, vomiting, or abdominal pain in a student or staff of School X or their family members during 14 February–10 March. A confirmed case was a suspected case with V. cholerae cultured from stool. We reviewed medical records at Hoima Hospital and searched for cases at School X. We conducted descriptive epidemiologic analysis and hypothesis-generating interviews of 15 case-patients. In a retrospective cohort study, we compared attack rates between exposed and unexposed persons. Results: We identified 15 cases among 75 students and staff of School X and their family members (attack rate=20%), with onset from 25-28 February. One patient died (case-fatality rate=6.6%). The epidemic curve indicated a point-source exposure. On 24 February, a student brought fried fish from her home in a fishing village, where a cholera outbreak was ongoing. Of the 21 persons who ate the fish, 57% developed cholera, compared with 5.6% of 54 persons who did not eat it (RR=10; 95% CI=3.2-33). None of the 4 persons who recooked the fish before eating developed cholera, compared with 71% of 17 who did not recook it (RR=0.0; 95% CI (Fisher exact)=0.0-0.95). Of 12 stool specimens cultured, 6 yielded V. cholerae. Conclusion: This cholera outbreak was caused by eating fried fish, which might have been contaminated with V. cholerae in a village with an ongoing outbreak. Lack of thorough cooking of the fish might have facilitated the outbreak. We recommended thoroughly cooking fish before consumption.
Keywords: cholera, disease outbreak, foodborne, global health security, Uganda
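The reported risk ratio and confidence interval can be reproduced with the standard log-normal (Wald) method, using case counts back-calculated from the reported percentages (12 of the 21 persons who ate the fish, 3 of the 54 who did not). This is a generic epidemiology sketch, not the investigators' code:

```python
import math

def relative_risk(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    """Risk ratio with a 95% CI by the log-normal (Wald) approximation."""
    rr = (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)
    se_log = math.sqrt(
        1 / cases_exposed - 1 / n_exposed + 1 / cases_unexposed - 1 / n_unexposed
    )
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi
```

With those counts the function returns approximately RR=10.3, 95% CI 3.2–33, matching the abstract. (The zero-cell recooking comparison needs an exact method such as Fisher's, as the abstract notes.)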
Procedia PDF Downloads 199
233 Diffusion Magnetic Resonance Imaging and Magnetic Resonance Spectroscopy in Detecting Malignancy in Maxillofacial Lesions
Authors: Mohamed Khalifa Zayet, Salma Belal Eiid, Mushira Mohamed Dahaba
Abstract:
Introduction: Malignant tumors may not be easily detected by traditional radiographic techniques, especially in an anatomically complex area like the maxillofacial region. At the same time, the advent of biological functional MRI was a significant footstep in the diagnostic imaging field. Objective: The purpose of this study was to define the malignant metabolic profile of maxillofacial lesions using diffusion MRI and magnetic resonance spectroscopy as adjunctive aids for diagnosing such lesions. Subjects and Methods: Twenty-one patients with twenty-two lesions were enrolled in this study. Both morphological and functional MRI scans were performed: T1- and T2-weighted images, diffusion-weighted MRI with four apparent diffusion coefficient (ADC) maps constructed for analysis, and magnetic resonance spectroscopy with qualitative and semi-quantitative analyses of choline and lactate peaks. Then, all patients underwent incisional or excisional biopsies within two weeks of the MR scans. Results: Statistical analysis revealed that not all the parameters had the same diagnostic performance: lactate had the highest area under the curve (AUC) of 0.9, and choline the lowest, with insignificant diagnostic value. The best cut-off value suggested for lactate was 0.125, above which a lesion is predicted to be malignant, with 90% sensitivity and 83.3% specificity. Although the ADC maps had comparable AUCs, the statistical measure that had the final say was the interpretation of the likelihood ratios. As expected, lactate again showed the best combination of positive and negative likelihood ratios, whereas among the maps, the ADC map with b-values of 500 and 1000 showed the best realistic combination of likelihood ratios, though with lower sensitivity and specificity than lactate.
Conclusion: Diffusion-weighted imaging and magnetic resonance spectroscopy are state-of-the-art in the diagnostic arena, and they manifested themselves as key players in the differentiation of orofacial tumors. The complete biological profile of malignancy can be decoded as low ADC values, high choline and/or high lactate, whereas that of benign entities can be translated as high ADC values, low choline and no lactate.
Keywords: diffusion magnetic resonance imaging, magnetic resonance spectroscopy, malignant tumors, maxillofacial
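The likelihood ratios weighed above follow directly from sensitivity and specificity. A minimal generic sketch, checked against the lactate figures from the abstract (90% sensitivity, 83.3% specificity):

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios of a diagnostic test.

    LR+ = sens / (1 - spec): how much a positive result raises the odds.
    LR- = (1 - sens) / spec: how much a negative result lowers the odds.
    """
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg
```

For lactate this gives LR+ of about 5.4 and LR- of about 0.12, i.e. a positive result raises the post-test odds of malignancy roughly fivefold.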
Procedia PDF Downloads 171
232 Pareto Optimal Material Allocation Mechanism
Authors: Peter Egri, Tamas Kis
Abstract:
Scheduling problems have been studied by algorithmic mechanism design research from the beginning. This paper focuses on a practically important but theoretically rather neglected field: the project scheduling problem where jobs connected by precedence constraints compete for various nonrenewable resources, such as materials. Although the centralized problem can be solved in polynomial time by applying the algorithm of Carlier and Rinnooy Kan from the eighties, obtaining materials in a decentralized environment is usually far from optimal. It can be observed in practical production scheduling situations that project managers tend to hoard the required materials as soon as possible in order to avoid later delays due to material shortages. This greedy practice usually leads both to excess stocks for some projects and materials and, simultaneously, to shortages for others. The aim of this study is to develop a model for the material allocation problem of a production plant, where a central decision maker—the inventory—should assign the resources arriving at different points in time to the jobs. Since the actual due dates are not known by the inventory, the mechanism design approach is applied, with the projects as the self-interested agents. The goal of the mechanism is to elicit the required information and allocate the available materials such that the maximal tardiness among the projects is minimized. It is assumed that, except for the due dates, the inventory is familiar with every other parameter of the problem. A further requirement is that, due to practical considerations, monetary transfer is not allowed. Therefore, a mechanism without money is sought, which excludes some widely applied solutions such as the Vickrey–Clarke–Groves scheme. In this work, a type of Serial Dictatorship Mechanism (SDM) is presented for the studied problem, including a polynomial-time algorithm for computing the material allocation.
The resulting mechanism is both truthful and Pareto optimal. Thus, randomization over the possible priority orderings of the projects results in a universally truthful and Pareto optimal randomized mechanism. However, it is shown that, in contrast to problems like the many-to-many matching market, not every Pareto optimal solution can be generated with an SDM. In addition, no performance guarantee can be given compared to the optimal solution; therefore, this approximation characteristic is investigated in an experimental study. All in all, the current work studies a practically relevant scheduling problem and presents a novel truthful material allocation mechanism which eliminates the potential benefit of the greedy behavior that negatively influences the outcome. The resulting allocation is also shown to be Pareto optimal, which is the most widely used criterion describing a necessary condition for a reasonable solution.
Keywords: material allocation, mechanism without money, polynomial-time mechanism, project scheduling
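A serial dictatorship mechanism fixes a priority order over the agents and lets each, in turn, take its most-preferred option among what remains. The sketch below is a deliberately simplified single-item version to convey the idea; the paper's variant allocates materials arriving over time to minimize maximal tardiness, which this does not attempt:

```python
def serial_dictatorship(priority_order, preferences, items):
    """Minimal serial dictatorship: agents pick in a fixed priority order.

    priority_order: list of agent ids, highest priority first
    preferences:    dict agent -> list of items, most preferred first
    items:          iterable of available items
    Returns a dict agent -> allocated item (or None if nothing acceptable).
    """
    remaining = set(items)
    allocation = {}
    for agent in priority_order:
        pick = next((it for it in preferences[agent] if it in remaining), None)
        allocation[agent] = pick
        if pick is not None:
            remaining.discard(pick)
    return allocation
```

Truthfulness is immediate in this simple form: an agent's report only determines which remaining item it receives, so misreporting can never improve its own pick.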
Procedia PDF Downloads 332
231 Maximizing Giant Prawn Resource Utilization in Banjar Regency, Indonesia: A CPUE and MSY Analysis
Authors: Ahmadi, Iriansyah, Raihana Yahman
Abstract:
The giant freshwater prawn (Macrobrachium rosenbergii de Man, 1879) is a valuable species for fisheries and aquaculture, especially in Southeast Asia, including Indonesia, due to its high market demand and export potential. The growing demand for prawns is straining the sustainability of the Banjar Regency fishery. To ensure the long-term sustainability and economic viability of prawn fishing in this region, it is imperative to implement evidence-based management practices. This requires comprehensive data on the Catch per Unit Effort (CPUE), Maximum Sustainable Yield (MSY), and the current rate of prawn resource exploitation. We analyzed five years of prawn catch data (2019-2023) obtained from the South Kalimantan Marine and Fisheries Services. Fishing gears (e.g., hook and line, cast net) were first standardized with the Fishing Power Index, and standardized effort and MSY were then calculated. The intercept (a) and slope (b) of the regression curve were used to estimate the catch-maximum sustainable yield (CMSY) and optimal fishing effort (Fopt) levels within the framework of the Surplus Production Model. The estimated rates of resource utilization were then compared to the criteria of the National Commission of Marine Fish Stock Assessment. The findings showed that the CPUE value peaked in 2019 at 33.48 kg/trip, while the lowest value was observed in 2022 at 5.12 kg/trip. The CMSY value was estimated at 17,396 kg/year, corresponding to an Fopt level of 1,636 trips/year. The highest utilization rate was 56.90%, recorded in 2020, while the lowest was observed in 2021 at 46.16%. The annual utilization rates were classified as “medium”, suggesting that increasing fishing effort by 45% could potentially maximize prawn catches at an optimum level. These findings provide a baseline for sustainable fisheries management in the region.
Keywords: giant prawns, CPUE, fishing power index, sustainable potential, utilization rate
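Estimating MSY and Fopt from the regression intercept (a) and slope (b) is the standard Schaefer form of the Surplus Production Model: fitting CPUE = a + bE gives yield Y = aE + bE^2, which is maximized at Fopt = -a/(2b) with MSY = -a^2/(4b). A minimal sketch with illustrative coefficients (not the study's fitted values):

```python
def schaefer_msy(a, b):
    """Schaefer surplus production model from the regression CPUE = a + b*E.

    Yield Y(E) = a*E + b*E**2 is maximized where dY/dE = a + 2*b*E = 0,
    giving Fopt = -a/(2b) and MSY = -a**2/(4b). b must be negative
    (CPUE declines with effort). Returns (MSY, Fopt).
    """
    if b >= 0:
        raise ValueError("slope b must be negative for the Schaefer model")
    fopt = -a / (2.0 * b)
    msy = -(a ** 2) / (4.0 * b)
    return msy, fopt
```

The utilization rate reported in the abstract is then the annual catch divided by MSY.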
Procedia PDF Downloads 16
230 Ligandless Extraction and Determination of Trace Amounts of Lead in Pomegranate, Zucchini and Lettuce Samples after Dispersive Liquid-Liquid Microextraction with Ultrasonic Bath and Optimization of Extraction Condition with RSM Design
Authors: Fariba Tadayon, Elmira Hassanlou, Hasan Bagheri, Mostafa Jafarian
Abstract:
Heavy metals are released into water, plants, soil, and food by natural and human activities. Lead plays a toxic role in the human body and may cause serious problems even at low concentrations, since it may have several adverse effects on humans. Therefore, the determination of lead in different samples is an important procedure in studies of environmental pollution. In this work, an ultrasonic-assisted ionic liquid-based dispersive liquid-liquid microextraction (UA-IL-DLLME) procedure for the determination of lead in zucchini, pomegranate, and lettuce has been established and developed using a flame atomic absorption spectrometer (FAAS). For the UA-IL-DLLME procedure, 10 mL of the sample solution containing Pb2+ was adjusted to pH=5 in a glass test tube with a conical bottom; then, 120 μL of 1-hexyl-3-methylimidazolium hexafluorophosphate (CMIM)(PF6) was rapidly injected into the sample solution with a microsyringe. After that, the resulting cloudy mixture was treated ultrasonically for 5 min; the two phases were then separated by centrifugation for 5 min at 3000 rpm, the IL phase was diluted with 1 mL of ethanol, and the analytes were determined by FAAS. The effects of different experimental parameters in the extraction step, including ionic liquid volume, sonication time, and pH, were studied and optimized simultaneously using Response Surface Methodology (RSM) employing a central composite design (CCD). The optimal conditions were determined to be an ionic liquid volume of 120 μL, a sonication time of 5 min, and pH=5. The calibration curve for the determination of lead by FAAS was linear over 0.1-4 ppm with R2=0.992. Under optimized conditions, the limit of detection (LOD) for lead was 0.062 μg.mL-1, the enrichment factor (EF) was 93, and the relative standard deviation (RSD) for lead was calculated as 2.29%. The lead levels for pomegranate, zucchini, and lettuce were 2.88 μg.g-1, 1.54 μg.g-1, and 2.18 μg.g-1, respectively.
Therefore, this method has been successfully applied to the analysis of the lead content of different food samples by FAAS.
Keywords: dispersive liquid-liquid microextraction, central composite design, food samples, flame atomic absorption spectrometry
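The quantitation ultimately rests on a linear calibration curve and a detection limit; the LOD quoted above is conventionally computed by the 3-sigma criterion, LOD = 3 * SD(blank) / slope. A generic sketch (the calibration points below are made up for illustration; the study's raw calibration data are not given):

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y = a + b*x for a calibration curve."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
        (xi - mx) ** 2 for xi in x
    )
    a = my - b * mx
    return a, b

def limit_of_detection(sd_blank, slope):
    """LOD by the common 3-sigma criterion: 3 * SD of blank / slope."""
    return 3.0 * sd_blank / slope
```

Sample concentrations are then read back from absorbance as (signal - a) / b, divided by the enrichment factor where preconcentration applies.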
Procedia PDF Downloads 283
229 Shedding Light on the Black Box: Explaining Deep Neural Network Prediction of Clinical Outcome
Authors: Yijun Shao, Yan Cheng, Rashmee U. Shah, Charlene R. Weir, Bruce E. Bray, Qing Zeng-Treitler
Abstract:
Deep neural network (DNN) models are being explored in the clinical domain, following their recent success in other domains such as image recognition. For clinical adoption, outcome prediction models require explanation, but due to the multiple non-linear inner transformations, DNN models are viewed by many as a black box. In this study, we developed a deep neural network model for predicting 1-year mortality of patients who underwent major cardiovascular procedures (MCVPs), using a temporal image representation of past medical history as input. The dataset was obtained from the electronic medical data warehouse administered by the Veterans Affairs Informatics and Computing Infrastructure (VINCI). We identified 21,355 veterans who had their first MCVP in 2014. Features for prediction included demographics, diagnoses, procedures, medication orders, hospitalizations, and frailty measures extracted from clinical notes. Temporal variables were created based on the patient history data in the 2-year window prior to the index MCVP. A temporal image was created based on these variables for each individual patient. To generate the explanation for the DNN model, we defined a new concept called the impact score, based on a clinical condition's presence/value and its impact on the predicted outcome. Like the (log) odds ratios reported by the logistic regression (LR) model, impact scores are continuous variables intended to shed light on the black-box model. For comparison, a logistic regression model was fitted on the same dataset. In our cohort, about 6.8% of patients died within one year. The prediction of the DNN model achieved an area under the curve (AUC) of 78.5%, while the LR model achieved an AUC of 74.6%. A strong but not perfect correlation was found between the aggregated impact scores and the log odds ratios (Spearman’s rho = 0.74), which helped validate our explanation.
Keywords: deep neural network, temporal data, prediction, frailty, logistic regression model
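Spearman's rho, used above to compare the impact scores against the LR log odds ratios, is a rank correlation: replace each value by its rank and apply the formula rho = 1 - 6*sum(d^2)/(n*(n^2-1)). A minimal sketch assuming no tied values (ties need the averaged-rank variant):

```python
def spearman_rho(x, y):
    """Spearman rank correlation; simple formula valid when there are no ties."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))
```

A rho of 0.74 thus says the two explanation methods order the clinical conditions similarly, without requiring their scores to agree in magnitude.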
Procedia PDF Downloads 153
228 Development of Adsorbents for Removal of Hydrogen Sulfide and Ammonia Using Pyrolytic Carbon Black form Waste Tires
Authors: Yang Gon Seo, Chang-Joon Kim, Dae Hyeok Kim
Abstract:
It is estimated that 1.5 billion tires are produced worldwide each year, which will eventually end up as waste tires, representing a major potential waste and environmental problem. There has been great interest in pyrolysis as an alternative treatment process for waste tires to produce valuable oil, gas, and solid products. The oil and gas products may be used directly as a fuel or a chemical feedstock. The solid yield from the pyrolysis of tires typically ranges from 30 to 45 wt% and has high carbon contents of up to 90 wt%. Most notably, however, the solid has high sulfur contents of 2 to 3 wt% and ash contents of 8 to 15 wt% related to the additive metals. Upgrading of tire pyrolysis products has concentrated on upgrading the solid to higher-quality carbon black and to activated carbon. Hydrogen sulfide and ammonia are among the common malodorous compounds found in emissions from many sewage treatment plants and industrial plants. Therefore, removing these harmful gases from emissions is of significance in both life and industry, because they can cause health problems in humans and detrimental effects on catalysts. In this work, pyrolytic carbon black from waste tires was used to develop adsorbents with good adsorption capacity for the removal of hydrogen sulfide and ammonia. Pyrolytic carbon blacks were prepared by pyrolysis of waste tire chips of 5 to 20 mm under a nitrogen atmosphere at 600 °C for 1 hour. Pellet-type adsorbents were prepared from a mixture of carbon black, metal oxide, and sodium hydroxide or hydrochloric acid, and their adsorption capacities were estimated using the breakthrough curve of a continuous fixed-bed adsorption column at ambient conditions. The adsorbent manufactured from a mixture of carbon black, iron(III) oxide, and sodium hydroxide showed the maximum working capacity for hydrogen sulfide.
For ammonia, the maximum working capacity was obtained with the adsorbent manufactured from a mixture of carbon black, copper(II) oxide, and hydrochloric acid.
Keywords: adsorbent, ammonia, pyrolytic carbon black, hydrogen sulfide, metal oxide
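Adsorption capacity from a fixed-bed breakthrough curve is conventionally obtained by integrating (1 - C/C0) over time and scaling by flow rate, inlet concentration, and bed mass. The sketch below is a generic trapezoidal-rule implementation (the numbers in the test are illustrative, not the study's data):

```python
def breakthrough_capacity(times_min, c_ratio, flow_l_min, c_in_mg_l, bed_mass_g):
    """Adsorption capacity (mg/g) from a fixed-bed breakthrough curve.

    times_min: sample times (min); c_ratio: outlet/inlet concentration C/C0.
    Integrates (1 - C/C0) dt by the trapezoidal rule, then scales by the
    gas flow rate, inlet concentration, and adsorbent mass.
    """
    area = 0.0
    for (t0, r0), (t1, r1) in zip(
        zip(times_min, c_ratio), zip(times_min[1:], c_ratio[1:])
    ):
        area += 0.5 * ((1 - r0) + (1 - r1)) * (t1 - t0)
    return flow_l_min * c_in_mg_l * area / bed_mass_g
```

Working capacity in the sense used above is the same integral truncated at the chosen breakthrough point rather than at bed exhaustion.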
Procedia PDF Downloads 257
227 Fatigue Analysis and Life Estimation of the Helicopter Horizontal Tail under Cyclic Loading by Using Finite Element Method
Authors: Defne Uz
Abstract:
The horizontal tail of a helicopter is exposed to repeated oscillatory loading generated by aerodynamic and inertial loads and bending moments depending on the operating conditions and maneuvers of the helicopter. In order to ensure that maximum stress levels do not exceed a certain fatigue limit of the material and to prevent damage, a numerical analysis approach can be utilized through the Finite Element Method. Therefore, in this paper, fatigue analysis of the horizontal tail model is studied numerically to predict high-cycle and low-cycle fatigue life under the defined loading. The analysis estimates the stress field at stress concentration regions, such as around fastener holes, where the maximum principal stresses are considered for each load case. Critical element identification of the main load-carrying structural components of the model with rivet holes is performed as a post-process, since critical regions with high stress values are used as an input for the fatigue life calculation. Once the maximum stress is obtained at the critical element, together with the related mean and alternating components, it is compared with the endurance limit by applying the Soderberg approach. The constant-life straight line provides the limit for several combinations of mean and alternating stresses. A life calculation based on the S-N (stress versus number of cycles) curve is also applied with fully reversed loading to determine the number of cycles corresponding to the oscillatory stress with zero mean. The results determine the appropriateness of the design of the model for its fatigue strength and the number of cycles that the model can withstand at the calculated stress. The effect of correctly determining the critical rivet holes is investigated by analyzing stresses at different structural parts of the model.
In the case of a low life prediction, alternative design solutions are developed, and flight hours can be estimated for the fatigue-safe operation of the model.
Keywords: fatigue analysis, finite element method, helicopter horizontal tail, life prediction, stress concentration
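The Soderberg comparison described above reduces to a safety factor against the constant-life line: sigma_a/S_e + sigma_m/S_y = 1/n. A generic sketch of that criterion (the stresses in the test are illustrative, not values from the tail model):

```python
def soderberg_safety_factor(sigma_a, sigma_m, s_e, s_y):
    """Safety factor n from the Soderberg line:
    sigma_a / S_e + sigma_m / S_y = 1 / n.

    sigma_a: alternating stress amplitude, sigma_m: mean stress,
    s_e: endurance limit, s_y: yield strength (all in consistent units).
    n > 1 means the stress state lies inside the constant-life line.
    """
    return 1.0 / (sigma_a / s_e + sigma_m / s_y)
```

Soderberg is the most conservative of the common mean-stress criteria, since it caps the mean stress at yield rather than at ultimate strength (as Goodman does).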
Procedia PDF Downloads 145
226 Experimental Investigation of the Thermal Conductivity of Neodymium and Samarium Melts by a Laser Flash Technique
Authors: Igor V. Savchenko, Dmitrii A. Samoshkin
Abstract:
The active study of the properties of lanthanides began in the late 1950s, when methods for their purification were developed and metals with a relatively low content of impurities were obtained. Nevertheless, to date, many properties of the rare earth metals (REM) have not been experimentally investigated, or have been insufficiently studied. Currently, the thermal conductivity and thermal diffusivity of lanthanides have been studied most thoroughly in the low-temperature region and at moderate temperatures (near 293 K). In the high-temperature region corresponding to the solid phase, data on the thermophysical characteristics of the REM are fragmentary and in some cases contradictory. Analysis of the literature showed that data on the thermal conductivity and thermal diffusivity of light REM in the liquid state are few in number, of limited informativeness (often only one point corresponds to the liquid-state region), and contradictory (the character of the thermal conductivity change with temperature is not reproduced), and the measurement results diverge significantly beyond the limits of the total errors. Our experimental results therefore fill this gap and clarify the existing information on the heat transfer coefficients of neodymium and samarium in a wide temperature range from the melting point up to 1770 K. The thermal conductivity of the investigated metallic melts was measured by the laser flash technique on an automated experimental setup, the LFA-427. A neodymium sample of grade NM-1 (99.21 wt% purity) and a samarium sample of grade SmM-1 (99.94 wt% purity) were cut from metal ingots and then annealed in vacuum (1 mPa) at a temperature of 1400 K for 3 hours. Measuring cells of a special design, made of tantalum, were used for the experiments. Sealing of the cell with the sample inside was carried out by argon-arc welding in the protective atmosphere of a glovebox.
The glovebox was filled with argon of 99.998 vol% purity; the argon was additionally purified by continuously passing it over titanium sponge heated to 900–1000 K. The overall systematic error in determining the thermal conductivity of the investigated metallic melts was 2–5%. Approximation dependences and reference tables of the thermal conductivity and thermal diffusivity coefficients were developed. New reliable experimental data on the transport properties of the REM and their changes at phase transitions can serve as a scientific basis for optimizing the industrial processes of production and use of these materials, and are also of interest for the theory of the thermophysical properties of substances, the physics of metals and liquids, and phase transformations.
Keywords: high temperatures, laser flash technique, liquid state, metallic melt, rare earth metals, thermal conductivity, thermal diffusivity
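Laser-flash instruments such as the LFA-427 derive thermal diffusivity from the half-rise time of the sample's rear-face temperature. The classic adiabatic Parker relation below is the textbook baseline (real instruments apply heat-loss and pulse-width corrections on top of it; the sample values are illustrative, not the study's data):

```python
def parker_diffusivity(thickness_m, t_half_s):
    """Thermal diffusivity (m^2/s) from the adiabatic Parker relation:
    alpha = 0.1388 * L**2 / t_half,
    where L is sample thickness and t_half is the time for the rear face
    to reach half of its maximum temperature rise."""
    return 0.1388 * thickness_m ** 2 / t_half_s

def thermal_conductivity(alpha, density, specific_heat):
    """k = alpha * rho * c_p; SI units give W/(m*K)."""
    return alpha * density * specific_heat
```

Conductivity thus requires independently known density and specific heat of the melt at each temperature, which is one source of the 2–5% systematic error budget quoted above.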
Procedia PDF Downloads 198
225 Ramadan as a Model of Intermittent Fasting: Effects on Gut Hormones, Appetite and Body Composition in Diabetes vs. Controls
Authors: Turki J. Alharbi, Jencia Wong, Dennis Yue, Tania P. Markovic, Julie Hetherington, Ted Wu, Belinda Brooks, Radhika Seimon, Alice Gibson, Stephanie L. Silviera, Amanda Sainsbury, Tanya J. Little
Abstract:
Fasting has been practiced for centuries and is incorporated into the practices of different religions, including Islam, whose followers intermittently fast throughout the month of Ramadan. Thus, Ramadan presents a unique model of prolonged intermittent fasting (IF). Despite a growing body of evidence for a cardio-metabolic and endocrine benefit of IF, detailed studies of the effects of IF on these indices in type 2 diabetes are scarce. We studied 5 subjects with type 2 diabetes (T2DM) and 7 healthy controls (C) at baseline (pre) and in the last week of Ramadan (post). Fasting circulating levels of glucose, HbA1c and lipids, as well as body composition (with DXA) and resting energy expenditure (REE), were measured. Plasma gut hormone levels and appetite responses to a mixed meal were also studied. Data are means±SEM. Ramadan decreased total fat mass (-907±92 g, p=0.001) and trunk fat (-778±190 g, p=0.014) in T2DM but not in controls, without any reductions in lean mass or REE. There was a trend towards a decline in plasma FFA in both groups. Ramadan had no effect on body weight, glycemia, blood pressure, or plasma lipids in either group. In T2DM only, the area under the curve for post-meal plasma ghrelin concentrations increased after Ramadan (pre: 6632±1737 vs. post: 9025±2518 pg/ml.min-1, p=0.045). Despite this increase in orexigenic ghrelin, subjective appetite scores were not altered by Ramadan. Meal-induced plasma concentrations of the satiety hormone pancreatic polypeptide did not change during Ramadan, but were higher in T2DM compared to controls (post: C: 23486±6677 vs. T2DM: 62193±6880 pg/ml.min-1, p=0.003). In conclusion, Ramadan, as a model for IF, appears to have more favourable effects on body composition in T2DM, without adverse effects on metabolic control or subjective appetite. These data suggest that IF may be particularly beneficial in T2DM as a nutritional intervention.
Larger studies are warranted.
Keywords: type 2 diabetes, obesity, intermittent fasting, appetite regulating hormones
Procedia PDF Downloads 312
224 In-House Fatty Meal Cholescintigraphy as a Screening Tool in Patients Presenting with Dyspepsia
Authors: Avani Jain, S. Shelley, M. Indirani, Shilpa Kalal, Jaykanth Amalachandran
Abstract:
Aim: To evaluate the prevalence of gall bladder dysfunction in patients with dyspepsia using in-house fatty meal cholescintigraphy. Materials & Methods: This is a prospective cohort study. 59 healthy volunteers with no dyspeptic complaints and negative ultrasound and endoscopy were recruited into the study. 61 patients with complaints of dyspepsia for a duration of more than 6 months were included. All of them underwent 99mTc-mebrofenin fatty meal cholescintigraphy following a standard protocol. Dynamic acquisitions were performed for 120 minutes, with an in-house fatty meal given at the 45th minute. Gall bladder emptying kinetics were determined from gall bladder ejection fractions (GBEF) calculated at 30 minutes, 45 minutes, and 60 minutes (30 min, 45 min & 60 min). Standardization of the fatty meal was done for the volunteers. Receiver operating characteristic (ROC) analysis was used to assess the diagnostic accuracy of the 3 time points (30 min, 45 min & 60 min) used for measuring gall bladder emptying. On the basis of cut-offs derived from the volunteers, the patients were assessed for gall bladder dysfunction. Results: In volunteers, the GBEF at 30 min was 74.42±8.26% (mean±SD), at 45 min was 82.61±6.5%, and at 60 min was 89.37±4.48%, compared to patients, where at 30 min it was 33.73±22.87%, at 45 min it was 43.03±26.97%, and at 60 min it was 51.85±29.60%. The lower limit of GBEF in volunteers at 30 min was 60%, at 45 min was 69%, and at 60 min was 81%. ROC analysis showed that the area under the curve was largest for the 30 min GBEF (0.952; 95% CI = 0.914-0.989) and that all 3 measures were statistically significant (p < 0.005). The majority of volunteers had 74% gall bladder emptying by 30 minutes; hence it was taken as the optimum cutoff time to assess gall bladder contraction. A GBEF > 60% at 30 min post fatty meal was considered normal and a GBEF < 60% as indicative of gall bladder dysfunction.
In patients, various causes of dyspepsia were identified: gall bladder dysfunction (63.93%), peptic ulcer (8.19%), gastroesophageal reflux disease (8.19%) and gastritis (4.91%). In 18.03% of cases, gall bladder dysfunction coexisted with other gastrointestinal conditions. A diagnosis of functional dyspepsia was made in 14.75% of cases. Conclusions: Gall bladder dysfunction contributes significantly to the causation of dyspepsia and can coexist with various other gastrointestinal diseases. The fatty meal was well tolerated and devoid of side effects. Many patients who are labeled as functional dyspeptics could actually have gall bladder dysfunction. Hence, as an adjunct to ultrasound and endoscopy, fatty meal cholescintigraphy can also be used as a screening modality in the characterization of dyspepsia.
Keywords: in-house fatty meal, cholescintigraphy, dyspepsia, gall bladder ejection fraction, functional dyspepsia
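The GBEF values above follow the standard ejection-fraction definition: the fractional drop in background-corrected gall bladder ROI counts between the pre-meal frame and a post-meal frame. A minimal sketch of that calculation, with illustrative count values rather than study data:

```python
# Hypothetical sketch of the GBEF calculation; the ROI count values
# below are illustrative examples, not data from this study.

def gbef(counts_premeal, counts_t):
    """Gall bladder ejection fraction (%) between the pre-meal frame
    and a post-meal frame, from background-corrected ROI counts."""
    return (counts_premeal - counts_t) / counts_premeal * 100.0

# Example: net ROI counts at fatty-meal time (45 min) and 30 min later
pre = 12000.0
post30 = 3000.0
print(round(gbef(pre, post30), 1))  # 75.0 -> normal by the >60% cutoff at 30 min
```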
Procedia PDF Downloads 508
223 The Effect of Degraded Shock Absorbers on the Safety-Critical Stationary and Non-Stationary Lateral Dynamics of Passenger Cars
Authors: Tobias Schramm, Günther Prokop
Abstract:
The average age of passenger cars is rising steadily around the world. Older vehicles are more sensitive to the degradation of chassis components: a higher age and a higher mileage correlate with an increased failure rate of vehicle shock absorbers. The most common degradation mechanism of vehicle shock absorbers is the loss of oil and gas, and it is not yet fully understood how this loss affects the lateral dynamics of passenger cars with twin-tube shock absorbers. The aim of this work is to estimate the effect of degraded twin-tube shock absorbers on the safety-critical lateral dynamics of passenger cars. A characteristic-curve-based five-mass full-vehicle model and a semi-physical phenomenological shock absorber model were set up, parameterized and validated. The shock absorber model reproduces the damping characteristics of twin-tube shock absorbers with oil and gas loss for various excitations. The full-vehicle model was used to simulate stationary cornering and steering-wheel-angle step maneuvers on road classes A to D. The simulations were carried out in a realistic parameter space in order to demonstrate the influence of various vehicle characteristics on the effect of degraded shock absorbers. As a result, it was shown that degraded shock absorbers have a negative effect on the understeer gradient of vehicles. For stationary lateral dynamics, degraded shock absorbers reduce the maximum lateral acceleration at high road excitations. Degraded rear-axle shock absorbers can shift the understeer gradient of a vehicle toward oversteer. Degraded shock absorbers also lead to increased roll angles. Furthermore, degraded shock absorbers have a major impact on driving stability during steering-wheel-angle steps; degraded rear-axle shock absorbers in particular can lead to unstable handling.
The tire stiffness, the unsprung mass and the stabilizer stiffness in particular influence the effect of degraded shock absorbers on the lateral dynamics of passenger cars.
Keywords: driving dynamics, numerical simulation, road safety, shock absorber degradation, stationary and non-stationary lateral dynamics
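The reported shift of the understeer gradient can be illustrated with the linear single-track relation K = Wf/Cf - Wr/Cr (axle loads over axle cornering stiffnesses). This is a hedged sketch, not the authors' five-mass model: a degraded rear damper is represented only by an assumed loss of effective rear cornering stiffness, and all numbers are example values.

```python
# Illustrative single-track understeer gradient; positive K = understeer.
# A degraded rear damper is modeled crudely as reduced effective rear
# cornering stiffness (wheel-load fluctuation lowers the mean lateral
# force an axle can transmit). All parameter values are assumed.

G = 9.81  # gravitational acceleration, m/s^2

def understeer_gradient(m, lf, lr, cf, cr):
    """Understeer gradient in rad per (m/s^2) of lateral acceleration.

    m: vehicle mass (kg); lf, lr: CoG distances to front/rear axle (m);
    cf, cr: front/rear axle cornering stiffness (N/rad).
    """
    wheelbase = lf + lr
    wf = m * G * lr / wheelbase  # static front axle load (N)
    wr = m * G * lf / wheelbase  # static rear axle load (N)
    return (wf / cf - wr / cr) / G

k_intact = understeer_gradient(m=1500, lf=1.2, lr=1.5, cf=80e3, cr=80e3)
k_degraded = understeer_gradient(m=1500, lf=1.2, lr=1.5, cf=80e3, cr=60e3)
print(k_intact > 0, k_degraded < k_intact)  # True True: toward oversteer
```

With the assumed numbers the intact vehicle understeers (K > 0), while the reduced rear stiffness drives K negative, mirroring the abstract's finding that degraded rear-axle shock absorbers can push the vehicle toward oversteer.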
Procedia PDF Downloads 8
222 Properties of Magnesium-Based Hydrogen Storage Alloy Added with Palladium and Titanium Hydride
Authors: Jun Ying Lin, Tzu Hsiang Yen, Cha'o Kuang Chen
Abstract:
Nowadays, it is widely believed that hydrogen storage alloys, which store hydrogen by physical and chemical absorption, hold great potential. However, hydrogen storage alloys are limited by high operating temperatures. Scientists have found that adding transition elements can improve their properties. In this research, marked improvements in kinetic and thermal properties are obtained by adding palladium and titanium hydride to a magnesium-based hydrogen storage alloy. Magnesium-based alloy is the main material, into which TiH2 and Pd are added separately. The materials are then milled in a planetary ball mill at 650 rpm. TGA/DSC and PCT measurements determine the capacity, time and temperature of abs-/desorption, while SEM and XRD are used to analyze the structure and composition of the material. It is clearly shown that Pd is beneficial to the kinetic properties. 2MgH2-0.1Pd has the highest capacity of all the alloys listed, approximately 5.5 wt%. Secondly, no new Ti-related compounds are found by XRD analysis; thus TiH2, acting as a catalyst, enables 2MgH2-TiH2 and 2MgH2-TiH2-0.1Pd to absorb hydrogen efficiently at low temperature. 2MgH2-TiH2 reaches roughly 3.0 wt% in 82.4 minutes at 50°C and in 8 minutes at 100°C, while 2MgH2-TiH2-0.1Pd reaches 2.0 wt% in 400 minutes at 50°C and in 48 minutes at 100°C. The lowest desorption temperature of 2MgH2-0.1Pd and 2MgH2-TiH2 is similar (320°C), whereas that of 2MgH2-TiH2-0.1Pd decreases by 20°C. From XRD it can be observed that PdTi2 and Pd3Ti are produced by mechanical alloying when Pd and TiH2 are added to MgH2. Due to the synergistic effects between Pd and TiH2, 2MgH2-TiH2-0.1Pd has the lowest dehydrogenation temperature. Furthermore, the pressure-composition-temperature (PCT) curve of 2MgH2-TiH2-0.1Pd is measured at different temperatures: 370°C, 350°C, 320°C and 300°C. The plateau pressures are obtained from these PCT curves.
From the different plateau pressures, the enthalpy and entropy in the van't Hoff equation can be solved. For 2MgH2-TiH2-0.1Pd, the enthalpy is 74.9 kJ/mol and the entropy is 122.9 J/(mol·K). Activation means that the hydrogen storage alloy undergoes repeated abs-/desorption processes; it plays an important role in abs-/desorption, shortening the abs-/desorption time because of the increase in surface area. From SEM it is clear that the grains become smaller and the surface rougher.
Keywords: hydrogen storage materials, magnesium hydride, abs-/desorption performance, plateau pressure
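Given the reported ΔH and ΔS, the van't Hoff relation ln(P/P0) = -ΔH/(RT) + ΔS/R fixes the temperature at which the plateau pressure reaches 1 bar, T = ΔH/ΔS. A quick consistency check (illustrative calculation only, using the abstract's values):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def plateau_pressure(dH, dS, T):
    """Van't Hoff plateau pressure in bar: ln(P/P0) = -dH/(R*T) + dS/R."""
    return math.exp(-dH / (R * T) + dS / R)

dH, dS = 74.9e3, 122.9  # J/mol and J/(mol*K), reported for 2MgH2-TiH2-0.1Pd
T_eq = dH / dS          # temperature where the plateau pressure is 1 bar
print(round(T_eq - 273.15, 1))  # 336.3 -> ~336 C, within the measured PCT range
```

At T_eq the exponent vanishes, so the plateau pressure is exactly 1 bar; the resulting ~336°C sits between the 320°C and 350°C PCT isotherms, consistent with the reported desorption behavior.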
Procedia PDF Downloads 266
221 Effects of Nutrient Source and Drying Methods on Physical and Phytochemical Criteria of Pot Marigold (Calendula offiCinalis L.) Flowers
Authors: Leila Tabrizi, Farnaz Dezhaboun
Abstract:
In order to study the effect of plant nutrient source and different drying methods on the physical and phytochemical characteristics of pot marigold (Calendula officinalis L., Asteraceae) flowers, a factorial experiment was conducted based on a completely randomized design with three replications in the Research Laboratory of the University of Tehran in 2010. Different nutrient sources (vermicompost, municipal waste compost, cattle manure, mushroom compost and control), which were applied in a field experiment for flower production, and different drying methods, including microwave (300, 600 and 900 W), oven (60, 70 and 80°C) and natural shade drying at room temperature, were tested. Criteria such as drying kinetics, antioxidant activity, total flavonoid content, total phenolic compounds and total carotenoids of the flowers were evaluated. Results indicated that organic inputs as nutrient source had no significant effect on the quality criteria of pot marigold except total flavonoid content, while drying methods significantly affected the phytochemical criteria. Microwave drying at 300, 600 and 900 W resulted in the highest total flavonoid content, total phenolic compounds and antioxidant activity, respectively, while oven drying gave the lowest values of the phytochemical criteria. Also, the interaction of nutrient source and drying method significantly affected antioxidant activity: the highest antioxidant activity was obtained with the combination of vermicompost and microwave drying at 900 W, whereas vermicompost combined with oven drying at 60°C gave the lowest antioxidant activity.
Based on the drying trends, microwave drying showed a faster drying rate than oven and natural shade drying; increasing the microwave power and oven temperature decreased the flower drying time, and the slope of the moisture-content reduction curve steepened accordingly.
Keywords: drying kinetic, medicinal plant, organic fertilizer, phytochemical criteria
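The drying trend described above can be sketched with the simple Newton (Lewis) thin-layer model MR = exp(-kt), in which a higher rate constant k (more microwave power or oven temperature) steepens the moisture curve and shortens drying. The rate constants below are assumed for illustration, not values fitted to this study's data.

```python
import math

# Newton (Lewis) thin-layer drying model; k values are assumed examples.

def moisture_ratio(k, t):
    """Dimensionless moisture ratio MR = exp(-k*t) after t minutes."""
    return math.exp(-k * t)

def time_to_dry(k, mr_target=0.05):
    """Minutes until the moisture ratio falls to mr_target."""
    return -math.log(mr_target) / k

k_shade, k_oven, k_microwave = 0.002, 0.02, 0.2  # 1/min, assumed
for label, k in [("shade", k_shade), ("oven", k_oven), ("microwave", k_microwave)]:
    print(label, round(time_to_dry(k), 1))
# higher k -> steeper moisture-reduction curve -> shorter drying time
```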
Procedia PDF Downloads 336
220 Suitable Site Selection of Small Dams Using Geo-Spatial Technique: A Case Study of Dadu Tehsil, Sindh
Authors: Zahid Khalil, Saad Ul Haque, Asif Khan
Abstract:
Decision making about identifying suitable sites for any project by considering different parameters is difficult. Using GIS and Multi-Criteria Analysis (MCA) can make it easier, and this technology has proved efficient and adequate in acquiring the desired information. In this study, GIS and MCA were employed to identify suitable sites for small dams in Dadu Tehsil, Sindh. GIS software was used to create all the spatial parameters for the analysis. The parameters derived are slope, drainage density, rainfall, land use/land cover, soil groups, Curve Number (CN) and runoff index, with a spatial resolution of 30 m. The data used to derive these layers include the 30 m resolution SRTM DEM, Landsat 8 imagery, rainfall from the National Centers for Environmental Prediction (NCEP) and soil data from the Harmonized World Soil Database (HWSD). The land use/land cover map is derived from Landsat 8 using supervised classification. Slope, the drainage network and watersheds are delineated by terrain processing of the DEM. The Soil Conservation Service (SCS) method is implemented to estimate surface runoff from rainfall; prior to this, an SCS-CN grid is developed by integrating the soil and land use/land cover rasters. These layers, together with some technical and ecological constraints, are assigned weights on the basis of suitability criteria. The pairwise comparison method, also known as the Analytical Hierarchy Process (AHP), is used as the MCA for assigning weights to each decision element. All the parameters and groups of parameters are integrated using weighted overlay in the GIS environment to produce suitable sites for the dams. The resultant layer is then classified into four classes: best suitable, suitable, moderate and less suitable. This study demonstrates a contribution to decision making about suitable site analysis for small dams using geospatial data with a minimal amount of ground data.
These suitability maps can be helpful for water resource management organizations in determining feasible rainwater harvesting (RWH) structures.
Keywords: remote sensing, GIS, AHP, RWH
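The SCS runoff step mentioned above rests on the standard curve-number equations Q = (P - 0.2S)² / (P + 0.8S), with potential maximum retention S = 25400/CN - 254 (mm). A minimal per-cell sketch; the rainfall depth and CN value are illustrative, not data from the study area:

```python
# SCS curve-number direct runoff for one raster cell; in the study this
# would be applied cell-by-cell to the SCS-CN grid. Inputs are examples.

def scs_runoff(p_mm, cn):
    """Direct runoff depth (mm) for rainfall p_mm on a cell with curve number cn."""
    s = 25400.0 / cn - 254.0  # potential maximum retention (mm)
    ia = 0.2 * s              # initial abstraction
    if p_mm <= ia:
        return 0.0            # all rainfall absorbed before runoff starts
    return (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

print(round(scs_runoff(50.0, 80), 1))  # 13.8 mm of runoff from a 50 mm event
```

The same relation underlies the runoff-index layer: higher CN cells (tighter soils, sparser cover) retain less and yield more runoff, which favors them as contributing areas for a dam site.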
Procedia PDF Downloads 389