Search results for: statistical techniques
1429 Climate Change and Landslide Risk Assessment in Thailand
Authors: Shotiros Protong
Abstract:
Sudden landslides in Thailand have occurred more frequently and more severely during the past decade. It is necessary to focus on the principal parameters used for analysis, such as land cover/land use, rainfall values, soil characteristics, and the digital elevation model (DEM). The combination of intense rainfall and severe monsoons is increasing due to global climate change. Landslide occurrences rise rapidly during intense rainfall, especially in the rainy season in Thailand, which usually starts around mid-May and ends in the middle of October. Rain-triggered landslide hazard analysis is the focus of this research. The combination of geotechnical and hydrological data is used to determine permeability, conductivity, bedding orientation, overburden, and the presence of loose blocks. Regional landslide hazard mapping is developed using the Stability Index Mapping (SINMAP) model implemented in ArcGIS 10.1. Geological and land use data are used to define the probability of landslide occurrences in terms of geotechnical data. The geological data can indicate the shear strength and angle of friction values for soils above given rock types, which supports the general applicability of the approach for landslide hazard analysis. To address the research objectives, the following methods are described in this study: setup and calibration of the SINMAP model, sensitivity analysis of the SINMAP model, geotechnical laboratory testing, landslide assessment at the present calibration, and landslide assessment under future climate simulation scenarios A2 and B2. In terms of hydrological data, average rainfall data (mm/24 h) are used to assess rain-triggered landslide hazard in slope stability mapping. The 1954-2012 period is used as the rainfall baseline for the present calibration. For climate change in Thailand, future climate scenarios are simulated at spatial and temporal scales.
To predict the precipitation impact under the future climate, the Statistical Downscaling Model (SDSM) version 4.2 is used to simulate future change between latitudes 16°26′ and 18°37′ north and longitudes 98°52′ and 103°05′ east. The research allows the mapping of risk parameters for landslide dynamics and indicates the spatial and temporal trends of landslide occurrences. Thus, regional landslide hazard maps are produced under present-day climatic conditions (1954-2012) and under climate change simulations based on GCM scenarios A2 and B2 (2013-2099), related to the threshold rainfall values for the selected study area in Uttaradit province in northern Thailand. Finally, the landslide hazard mapping will be compared and shown by area (km²) for both the present and the future under climate simulation scenarios A2 and B2 in Uttaradit province. Keywords: landslide hazard, GIS, slope stability index (SINMAP), landslides, Thailand
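The SINMAP model described above scores terrain with the infinite-slope factor of safety. A minimal sketch of that core relation (the parameter values and the default density ratio below are illustrative assumptions, not values from the study):

```python
import math

def sinmap_fs(slope_deg, cohesion, phi_deg, wetness, density_ratio=0.5):
    """Infinite-slope factor of safety underlying the SINMAP stability index.

    cohesion:      dimensionless combined root + soil cohesion
    phi_deg:       soil friction angle in degrees
    wetness:       relative saturation (0 = dry, 1 = fully saturated)
    density_ratio: water-to-soil density ratio (~0.5 is an assumed typical value)
    """
    theta = math.radians(slope_deg)
    w = min(max(wetness, 0.0), 1.0)  # clamp wetness to [0, 1]
    resisting = (cohesion
                 + math.cos(theta) * (1.0 - w * density_ratio)
                 * math.tan(math.radians(phi_deg)))
    return resisting / math.sin(theta)
```

A steep, saturated slope yields FS below 1 (potentially unstable), while a gentle, dry slope yields FS above 1, which is how threshold rainfall values translate into hazard classes.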
Procedia PDF Downloads 563
1428 Slope Stability Assessment in Metasedimentary Deposit of an Opencast Mine: The Case of the Dikuluwe-Mashamba (DIMA) Mine in the DR Congo
Authors: Dina Kon Mushid, Sage Ngoie, Tshimbalanga Madiba, Kabutakapua Kakanda
Abstract:
Slope stability assessment is still the biggest challenge in mining activities and civil engineering structures. The slope in an opencast mine frequently crosses multiple weak layers that lead to the instability of the pit. Faults and soft layers throughout the rock increase weathering and erosion rates. Therefore, it is essential to investigate the stability of these complex strata. In the Dikuluwe-Mashamba (DIMA) area, the lithology of the stratum is a set of metamorphic rocks whose parent rocks are sedimentary rocks with a low degree of metamorphism. Thus, due to the composition and metamorphism of the parent rock, the rock formation varies in hardness and softness: when the dolomitic and siliceous content is high, the rock is hard; it is softer when the argillaceous and sandy content is high. Therefore, in the vertical direction the same rock layer alternates between weak and hard layers, and in the horizontal direction it appears as alternating smooth and hard layers. From the structural point of view, the main structures in the mining area are the Dikuluwe dipping syncline and the Mashamba dipping anticline, and the occurrence of rock formations varies greatly. During the folding of the rock formation, stress concentrates in the soft layer, causing the weak layer to break; at the same time, interlayer dislocation occurs. This article aimed to evaluate the stability of the metasedimentary rocks of the Dikuluwe-Mashamba (DIMA) open-pit mine using limit equilibrium and stereographic methods. Based on statistically surveyed structural planes, stereographic projection was used to study the slope's stability and examine the discontinuity orientation data to identify failure zones along the mine. The results revealed that the slope angle is too steep and can easily induce landslides.
The sensitivity analysis of the numerical method showed that the slope angle and groundwater significantly impact the slope safety factor. An increase in the groundwater level substantially reduces the stability of the slope. Among the factors affecting variation of the safety factor, the bulk density of the soil is greater than that of the rock mass, the cohesion of the soil mass is smaller than that of the rock mass, and the friction angle in the rock mass is much larger than that in the soil mass. The analysis showed that the rock mass structure types are mostly scattered and fragmented, the stratum changes considerably, and the variation of rock and soil mechanics parameters is significant. Keywords: slope stability, weak layer, safety factor, limit equilibrium method, stereographic method
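For planar sliding on a single discontinuity, the limit-equilibrium analysis above reduces to a ratio of resisting to driving forces. A hedged sketch of that textbook form (the input values used below are illustrative, not DIMA data):

```python
import math

def planar_fs(weight_kn, dip_deg, cohesion_kpa, area_m2, phi_deg,
              water_force_kn=0.0):
    """Limit-equilibrium factor of safety for planar sliding:
    FS = (c*A + (W*cos(psi) - U) * tan(phi)) / (W*sin(psi)),
    where U is the water uplift force acting on the sliding plane."""
    psi = math.radians(dip_deg)
    resisting = (cohesion_kpa * area_m2
                 + (weight_kn * math.cos(psi) - water_force_kn)
                 * math.tan(math.radians(phi_deg)))
    driving = weight_kn * math.sin(psi)
    return resisting / driving
```

Raising the water force lowers FS, mirroring the finding that a higher groundwater level substantially reduces slope stability.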
Procedia PDF Downloads 258
1427 Anaesthetic Management of Congenitally Corrected Transposition of Great Arteries with Complete Heart Block in a Parturient for Emergency Caesarean Section
Authors: Lokvendra S. Budania, Yogesh K Gaude, Vamsidhar Chamala
Abstract:
Introduction: Congenitally corrected transposition of the great arteries (CCTGA) is a complex congenital heart disease in which there is both atrioventricular and ventriculoarterial discordance, usually accompanied by other cardiovascular malformations. Case Report: A 24-year-old primigravida, a known case of CCTGA at 37 weeks of gestation, was referred to our hospital for safe delivery. Her electrocardiogram showed a heart rate of 40 bpm; echocardiography showed an ejection fraction of 65% and CCTGA. A temporary pacemaker was inserted by the cardiologist in the catheterization laboratory before giving a trial of labour, in view of the complete heart block. She was planned for normal delivery, but emergency Caesarean section was decided upon due to non-reassuring foetal cardiotocography. Pre-op vitals showed a pulse rate of 50 bpm with the temporary pacemaker, blood pressure of 110/70 mmHg, and SpO2 of 99% on room air. The nil per oral period was inadequate. The patency of two peripheral IV cannulae was checked, and a left radial arterial line was secured. Epidural anaesthesia was planned, and the catheter was placed at L2-L3. A test dose was given; anaesthesia was provided with 5 mL + 5 mL of 2% lignocaine with 25 mcg fentanyl, and a further 2.5 mL of 0.5% bupivacaine was given to achieve a sensory level of T6. The Caesarean section was performed, and the baby was delivered. Cautery was avoided during the procedure. IV oxytocin (15 U) was added to 500 mL of Ringer's lactate. Hypotension was treated with phenylephrine boluses. The patient was shifted to the post-operative care unit and later to the high-dependency unit for monitoring. Post-op vitals remained stable. The temporary pacemaker was removed 24 hours after surgery. Her post-operative period was uneventful, and she was discharged from hospital. Conclusion: Rare congenital cardiac disorders require detailed knowledge of the pathophysiology and comorbidities associated with the disease.
Meticulously planned and carefully titrated neuraxial techniques will be beneficial in such cases. Keywords: congenitally corrected transposition of great arteries, complete heart block, emergency LSCS, epidural anaesthesia
Procedia PDF Downloads 129
1426 Digital Transformation: Actionable Insights to Optimize the Building Performance
Authors: Jovian Cheung, Thomas Kwok, Victor Wong
Abstract:
Buildings are entwined with smart city developments. Building performance relies heavily on electrical and mechanical (E&M) systems and services, which account for about 40 percent of global energy use. By bringing technological advancement as well as energy- and operation-efficiency initiatives into buildings, people are enabled to raise building performance and enhance the sustainability of the built environment in their daily lives. Digital transformation in buildings is a profound development that allows the city to leverage the changes and opportunities of digital technologies. To optimize building performance, an intelligent power quality and energy management system has been developed for transforming data into actions. The system is formed by interfacing and integrating legacy metering and Internet of Things technologies in the building and applying big data techniques. It provides the operation and energy profile of a building and actionable insights, which enable building performance to be optimized by raising people's awareness of E&M services and energy consumption, predicting the operation of E&M systems, benchmarking the building performance, and prioritizing asset and energy management opportunities. The intelligent power quality and energy management system comprises four elements, namely the Integrated Building Performance Map, Building Performance Dashboard, Power Quality Analysis, and Energy Performance Analysis. It provides a predictive operation sequence of E&M systems in response to the built environment and building activities. The system collects the live operating conditions of E&M systems over time to identify abnormal system performance, predict failure trends, and alert users before system failure. The actionable insights collected can also be used for system design enhancement in the future.
This paper will illustrate how the intelligent power quality and energy management system provides an operation and energy profile to optimize building performance, and actionable insights to revitalize an existing building into a smart building. The system is driving building performance optimization and supporting the development of Hong Kong into an admirable smart city. Keywords: intelligent buildings, internet of things technologies, big data analytics, predictive operation and maintenance, building performance
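As a toy illustration of the abnormal-performance alerting idea (the real system's detection logic is not described in the abstract; the z-score rule and threshold here are assumptions for illustration only):

```python
from statistics import mean, stdev

def abnormal_reading(history, latest, z_threshold=3.0):
    """Flag a live E&M system reading as abnormal when it deviates from
    the recent baseline by more than z_threshold standard deviations."""
    mu, sigma = mean(history), stdev(history)
    return abs(latest - mu) > z_threshold * sigma
```

A sustained run of flagged readings would then feed the failure-trend prediction and user alerts the system provides.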
Procedia PDF Downloads 153
1425 Exploration of Cone Foam Breaker Behavior Using Computational Fluid Dynamic
Authors: G. St-Pierre-Lemieux, E. Askari Mahvelati, D. Groleau, P. Proulx
Abstract:
Mathematical modeling has become an important tool for the study of foam behavior. Computational fluid dynamics (CFD) can be used to investigate the behavior of foam around foam breakers to better understand the mechanisms leading to the 'destruction' of foam. The focus of this investigation was the simple cone foam breaker, whose performance has been characterized in numerous studies. While the optimal pumping angle is known from the literature, the contributions of pressure drop, shearing, and centrifugal forces to foam syneresis are subject to speculation. This work provides a screening of those factors against changes in the cone angle and foam rheology. The CFD simulation was made with the open-source OpenFOAM toolkit on a full three-dimensional model discretized using hexahedral cells. The geometry was generated using a Python script and then meshed with blockMesh. The OpenFOAM volume of fluid (VOF) method was used (interFoam) to obtain a detailed description of the interfacial forces, and the k-omega SST model was used to calculate the turbulence fields. The cone configuration allows the use of a rotating-wall boundary condition. In each case, a pair of immiscible fluids, foam/air or water/air, was used. The foam was modeled as a shear-thinning (Herschel-Bulkley) fluid. The results were compared to our measurements and to results found in the literature, first by computing the pumping rate of the cone, and second by the liquid break-up at the exit of the cone. A 3D-printed version of the cones, submerged in foam (shaving cream or soap solution) and water at speeds varying between 400 RPM and 1500 RPM, was also used to validate the modeling results by calculating the torque exerted on the shaft. While most of the literature focuses on cone behavior in Newtonian fluids, this work explores its behavior in a shear-thinning fluid, which better reflects the apparent rheology of foam.
Those simulations shed new light on the cone behavior within the foam and allow computation of the shearing, pressure, and velocity fields of the fluid, enabling better evaluation of the efficiency of the cones as foam breakers. This study helps clarify the mechanisms behind foam breaker performance, at least in part, using modern CFD techniques. Keywords: bioreactor, CFD, foam breaker, foam mitigation, OpenFOAM
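For reference, the Herschel-Bulkley law used to model the foam gives an apparent viscosity that falls as the shear rate rises whenever the flow index n is below 1. A small sketch (the coefficients below are placeholders, not the fitted foam rheology from the study):

```python
def herschel_bulkley_viscosity(shear_rate, tau0, consistency_k, flow_index):
    """Apparent viscosity of a Herschel-Bulkley fluid:
    tau = tau0 + K * gamma_dot**n, so mu_app = tau / gamma_dot.
    Shear thinning corresponds to flow_index n < 1."""
    tau = tau0 + consistency_k * shear_rate ** flow_index
    return tau / shear_rate
```

This is why the cone pumps foam differently from a Newtonian fluid: the high-shear region near the rotating wall sees a much lower apparent viscosity than the bulk.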
Procedia PDF Downloads 201
1424 Combination of Unmanned Aerial Vehicle and Terrestrial Laser Scanner Data for Citrus Yield Estimation
Authors: Mohammed Hmimou, Khalid Amediaz, Imane Sebari, Nabil Bounajma
Abstract:
Annual crop production is one of the most important macroeconomic indicators for the majority of countries around the world. This information is valuable, especially for exporting countries, which need a yield estimate before harvest in order to correctly plan the supply chain. When it comes to estimating agricultural yield, especially in arboriculture, conventional methods are mostly applied. In the citrus industry, sale before harvest is widely practiced, which requires an estimate of production while the fruit is on the tree. However, the conventional method, based on sampling surveys of some trees within the field, is still used to perform yield estimation, and the success of this process mainly depends on the expertise of the 'estimator agent'. The present study aims to propose a methodology based on the combination of unmanned aerial vehicle (UAV) images and terrestrial laser scanner (TLS) point clouds to estimate citrus production. During data acquisition, fixed-wing and rotary-wing drones, as well as a terrestrial laser scanner, were tested. A pre-processing step was then performed to generate the point cloud and digital surface model. At the processing stage, a machine vision workflow was implemented to extract points corresponding to fruits from the whole-tree point cloud, cluster them into fruits, and model them geometrically in 3D space. By linking the resulting geometric properties to fruit weight, the yield can be estimated, and the statistical distribution of fruit size can be generated. This latter property, which is information required by citrus-importing countries, cannot be estimated before harvest using the conventional method. Since the terrestrial laser scanner is static, data gathering with this technology can be performed over only some trees. The integration of drone data was therefore considered in order to estimate the yield over a whole orchard.
To achieve that, features derived from the drone digital surface model were linked to the laser scanner yield estimates of some trees to build a regression model that predicts the yield of a tree given its features. Several missions were carried out to collect drone and laser scanner data within citrus orchards of different varieties, testing several data acquisition parameters (flight height, image overlap, flight mission plan). The accuracy of the results obtained by the proposed methodology, in comparison to yield estimation by the conventional method, varies from 65% to 94%, depending mainly on the phenological stage of the studied citrus variety during the data acquisition mission. The proposed approach demonstrates strong potential for early estimation of citrus production and the possibility of extension to other fruit trees. Keywords: citrus, digital surface model, point cloud, terrestrial laser scanner, UAV, yield estimation, 3D modeling
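Once fruits are clustered and modeled geometrically, per-fruit weights follow from the fitted size. A hedged sketch assuming spherical fruit and a nominal pulp density (the actual study links geometric properties to weight empirically, so both assumptions here are illustrative):

```python
import math

def estimate_yield_kg(fruit_radii_m, fruit_density_kg_m3=950.0):
    """Sum per-fruit weights from radii fitted to clustered point-cloud
    blobs, treating each fruit as a sphere of the given density.
    The density value is an assumed illustrative figure."""
    return sum(fruit_density_kg_m3 * (4.0 / 3.0) * math.pi * r ** 3
               for r in fruit_radii_m)
```

The same list of fitted radii also yields the fruit-size distribution that importing countries require.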
Procedia PDF Downloads 141
1423 Impact of Sunflower Oil Supplemented Diet on Performance and Hematological Stress Indicators of Growing-Finishing Pigs Exposed to Hot Environment
Authors: Angela Cristina Da F. De Oliveira, Salma E. Asmar, Norbert P. Battlori, Yaz Vera, Uriel R. Valencia, Tâmara Duarte Borges, Antoni D. Bueno, Leandro Batista Costa
Abstract:
As homeothermic animals, pigs achieve maximum performance when kept at comfortable temperatures, within the range where thermoregulatory effort is minimal (18-20°C). In a stress situation, where energy demand for thermal maintenance is higher, the energy available for productive functions is reduced, generating health imbalances, a drop in productive rates, and welfare problems. The hypothesis of this project is that replacing 5% starch with 5% sunflower oil (SO) in the diet of growing and finishing pigs (Iberic x Duroc) is effective as a nutritional strategy to reduce the negative impacts of thermal stress on performance and animal welfare. Seventy-two crossbred males (51 ± 6.29 kg body weight, BW) were housed according to initial BW in climate-controlled rooms, in collective pens, and exposed to heat stress conditions (30-32°C; 35% to 50% humidity). The experiment lasted 90 days and was carried out in a randomized block design with a 2 x 2 factorial arrangement of two diets (starch or sunflower oil) and two feed intake managements (ad libitum and restriction). The treatments studied were: 1) control diet (5% starch, 0% SO) with ad libitum intake (n = 18); 2) SO diet (replacement of 5% of starch with 5% SO) with ad libitum intake (n = 18); 3) control diet with restricted feed intake (n = 18); or 4) SO diet with restricted feed intake (n = 18). Feed was provided in two phases, 50-100 kg BW for the growing period and 100-140 kg BW for the finishing period. Hematological, biochemical, and growth performance parameters were evaluated in all animals at the beginning of the environmental treatment, at the feed transition (growing to finishing), and at the end of the experiment. After the experimental period, when animals reached a live weight of 130-140 kg, they were slaughtered by carbon dioxide (CO2) stunning.
Data showed no statistical interaction for the growing phase between diet (control x SO) and feed intake management (ad libitum x restriction) on animal performance. In the finishing phase, pigs fed the SO diet with restricted feed intake had the same average daily gain (ADG) as pigs on the control diet with ad libitum feed intake. Furthermore, animals fed the SO diet presented a better feed:gain ratio (p < 0.05) due to reduced feed intake (p < 0.05) when compared with the control group. For hematological and biochemical parameters, animals under heat stress showed an increase in hematocrit, corpuscular volume, urea concentration, creatinine, calcium, alanine aminotransferase, and aspartate aminotransferase (p < 0.05) compared with the beginning of the experiment. These parameters were efficient in characterizing heat stress, although the experimental treatments were not able to reduce the hematological and biochemical stress indicators. In addition, the inclusion of SO in pig diets improved the feed:gain ratio of pigs in the finishing phase, even with restricted feed intake. Keywords: hematological, performance, pigs, welfare
Procedia PDF Downloads 279
1422 Influence of Spelling Errors on English Language Performance among Learners with Dysgraphia in Public Primary Schools in Embu County, Kenya
Authors: Madrine King'endo
Abstract:
This study dealt with the influence of spelling errors on English language performance among learners with dysgraphia in public primary schools in West Embu, Embu County, Kenya. The study aimed to investigate the influence of spelling errors on English language performance among class three pupils with dysgraphia in public primary schools. The objectives of the study were to identify the spelling errors that learners with dysgraphia make when writing English words and to classify those errors. Further, the study sought to establish how the spelling errors affect performance in the language among the study participants, and to suggest remediation strategies that teachers could use to address the errors. The study could provide stakeholders with relevant information on writing skills that could help in developing a responsive curriculum to accommodate the teaching and learning needs of learners with dysgraphia, and ensure that training in teacher training colleges is tailored to the writing needs of pupils with dysgraphia. The study was carried out in Embu County because the researcher did not find, in the related literature, any study on the influence of spelling errors on English language performance among learners with dysgraphia in public primary schools in the area. Moreover, besides being populated enough for the study sample, the area was fairly cosmopolitan, allowing a generalization of the study findings. The study assumed the sampled schools would have class three pupils with dysgraphia who exhibited written spelling errors. The study was guided by two spelling approaches, the connectionist simulation of the spelling process and the orthographic autonomy hypothesis, with a view to explaining how participants with learning disabilities spell written words.
Data were collected through interviews, pupils' exercise books and progress records, and a spelling test made by the researcher based on the spelling scope set for class three pupils by the Ministry of Education in the primary education syllabus. The study relied on random sampling techniques in identifying general and specific participants. Since the study used schoolchildren as participants, voluntary consent was sought from the children themselves, their teachers, and the school head teachers, who were their caretakers in the school setting. Keywords: dysgraphia, writing, language, performance
Procedia PDF Downloads 154
1421 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study
Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming
Abstract:
Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and a control group with dichotomous outcomes. Its popularity is primarily because of its stability and robustness to model misspecification. However, the situation is different for the relative risk and risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely accepted approach to estimate an adjusted relative risk or risk difference when conducting clinical trials. This is partly due to the lack of a comprehensive evaluation of available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of relevant candidate methods for estimating relative risks, representing conditional and marginal estimation approaches. We consider the log-binomial generalised linear model (GLM) with iteratively weighted least squares (IWLS) and model-based standard errors (SEs); log-binomial GLM with convex optimisation and model-based SEs; log-binomial GLM with convex optimisation and permutation tests; modified-Poisson GLM with IWLS and robust SEs; log-binomial generalised estimating equations (GEE) with robust SEs; marginal standardisation with delta-method SEs; and marginal standardisation with permutation-test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial to evaluate these candidate methods. Simulations are replicated 10,000 times for each scenario across all possible combinations of sample sizes (200, 1000, and 5000), outcomes (10%, 50%, and 80%), and covariates (ranging from -0.05 to 0.7) representing weak, moderate, or strong relationships.
Treatment effects (0, -0.5, and 1 on the log scale) will consider null (H0) and alternative (H1) hypotheses to evaluate coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strengths, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks. However, it is unclear which method(s) is the most efficient, preserves the type-I error rate, is robust to model misspecification, or is the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimates may be biased when the outcome distributions are not from marginal binary data. Also, it seems that marginal standardisation and convex optimisation may perform better than the log-binomial GLM with IWLS. Keywords: binary outcomes, statistical methods, clinical trials, simulation study
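For orientation, the simplest member of the family of estimators being compared, the unadjusted relative risk with a delta-method standard error on the log scale, can be sketched as follows (the adjusted estimators evaluated in the study require full model fitting and are not reproduced here):

```python
import math

def relative_risk(events_t, n_t, events_c, n_c, z=1.96):
    """Unadjusted relative risk from a 2x2 table with a delta-method
    (log-scale) confidence interval:
    SE(log RR) = sqrt(1/a - 1/n1 + 1/c - 1/n0)."""
    rr = (events_t / n_t) / (events_c / n_c)
    se_log = math.sqrt(1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c)
    return rr, (rr * math.exp(-z * se_log), rr * math.exp(z * se_log))
```

For example, 30/100 events under treatment versus 15/100 under control gives RR = 2.0 with a CI of roughly (1.15, 3.48); the simulation study asks how well the adjusted analogues of this quantity behave.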
Procedia PDF Downloads 112
1420 Acceleration Techniques of DEM Simulation for Dynamics of Particle Damping
Authors: Masato Saeki
Abstract:
Presented herein is a novel algorithm for calculating the damping performance of particle dampers. The particle damper is a passive vibration control technique and has many practical applications due to its simple design. It consists of granular materials constrained to move between two ends in the cavity of a primary vibrating system. The damping effect results from the exchange of momentum during the impact of granular materials against the wall of the cavity. This damping has the advantage of being independent of the environment. Therefore, particle damping can be applied in extreme temperature environments, where most conventional dampers would fail. Many papers have shown experimentally that the efficiency of particle dampers is high in the case of resonant vibration. In order to use particle dampers effectively, it is necessary to solve the equations of motion for each particle, considering the granularity. The discrete element method (DEM) has been found to be effective for revealing the dynamics of particle damping. In this method, individual particles are assumed to be rigid bodies, and interparticle collisions are modeled by mechanical elements such as springs and dashpots. However, the computational cost is significant, since the equation of motion for each particle must be solved at each time step. In order to improve the computational efficiency of the DEM, new algorithms are needed. In this study, new algorithms are proposed for implementing a high-performance DEM. On the assumption that the behaviors of the granular particles in each divided area of the damper container are the same, the contact force of the primary system with all particles can be taken to be the product of the number of divided damper areas and the contact force of the primary system with the granular materials per divided area. This simplification makes it possible to considerably reduce the calculation time.
The validity of this calculation method was investigated, and the calculated results were compared with experimental ones. This paper also presents the results of experimental studies of the performance of particle dampers. It is shown that the particle radius affects the noise level, and that the particle size and the particle material influence the damper performance. Keywords: particle damping, discrete element method (DEM), granular materials, numerical analysis, equivalent noise level
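The spring-dashpot contact model and the area-lumping acceleration described above can be sketched as follows (the stiffness and damping constants in the example are illustrative, not the study's calibrated values):

```python
def contact_force(overlap, rel_velocity, stiffness, damping):
    """Linear spring-dashpot (Kelvin-Voigt) normal contact force;
    zero when the particle and wall are not in contact."""
    if overlap <= 0.0:
        return 0.0
    return stiffness * overlap + damping * rel_velocity

def lumped_wall_force(per_area_force, n_areas):
    """Acceleration trick from the study: assuming the particles behave
    identically in each divided area of the damper container, the total
    wall force is the per-area contact force times the number of areas."""
    return n_areas * per_area_force
```

Only one representative area then needs its particle equations of motion integrated per time step, which is the source of the reported reduction in calculation time.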
Procedia PDF Downloads 450
1419 “A Watched Pot Never Boils.” Exploring the Impact of Job Autonomy on Organizational Commitment among New Employees: A Comprehensive Study of How Empowerment and Independence Influence Workplace Loyalty and Engagement in Early Career Stages
Authors: Atnafu Ashenef Wondim
Abstract:
In today’s highly competitive business environment, employees are considered a source of competitive advantage. Researchers have examined the effect of job autonomy on organizational commitment and found that superior organizational performance strongly depends on the effort and commitment of employees. The purpose of this study was to explore the relationship between job autonomy and organizational commitment from the newcomer's point of view. The mediating role of employee engagement (physical, emotional, and cognitive) was also examined in the case of Ethiopian commercial banks. An exploratory survey research design with a mixed-method approach, including partial least squares structural equation modeling and fuzzy-set qualitative comparative analysis, was used with a sample of 348 new employees. In-depth interviews using purposive and convenience sampling techniques were conducted with new employees (n = 43). The results confirmed that job autonomy had positive, significant direct effects on physical engagement, emotional engagement, and cognitive engagement (path coeffs. = 0.874, 0.931, and 0.893). The results showed that the employee engagement driver, physical engagement, had a positive, significant influence on affective commitment (path coeff. = 0.187) and normative commitment (path coeff. = 0.512) but no significant effect on continuance commitment. Employee engagement partially mediates the relationship between job autonomy and organizational commitment, supporting indirect effects of job autonomy on affective, continuance, and normative commitment through physical engagement. The findings of this study add new perspectives by positioning the topic within a complex African organizational setting and by expanding the job autonomy and organizational commitment literature, which will benefit future research.
Much of the literature on job autonomy and organizational commitment has been produced within well-established organizational business contexts in Western developed countries. The findings provide fresh information on enablers of job autonomy and organizational commitment implementation that can assist in formulating better policies and strategies to adopt job autonomy and organizational commitment efficiently. Keywords: employee engagement, job autonomy, organizational commitment, social exchange theory
Procedia PDF Downloads 27
1418 Assessment of Influence of Short-Lasting Whole-Body Vibration on Joint Position Sense and Body Balance–A Randomised Masked Study
Authors: Anna Slupik, Anna Mosiolek, Sebastian Wojtowicz, Dariusz Bialoszewski
Abstract:
Introduction: Whole-body vibration (WBV) uses high-frequency mechanical stimuli generated by a vibration plate and transmitted through bone, muscle, and connective tissues to the whole body. Research has shown that long-term vibration-plate training improves neuromuscular facilitation, especially in the afferent neural pathways responsible for the conduction of vibration and proprioceptive stimuli, muscle function, balance, and proprioception. Some researchers suggest that the vibration stimulus briefly inhibits the conduction of afferent signals from proprioceptors and can interfere with the maintenance of body balance. The aim of this study was to evaluate the influence of a single set of exercises associated with whole-body vibration on joint position sense and body balance. Material and methods: The study enrolled 55 people aged 19-24 years. These individuals were randomly divided into a test group (30 persons) and a control group (25 persons). Both groups performed the same set of exercises on a vibration plate. A frequency of 20 Hz and an amplitude of 3 mm were used in the test group. The control group performed exercises on the vibration plate while it was off. All participants were instructed to perform six dynamic exercises lasting 30 seconds each, with a 60-second period of rest between them. The exercises involved large muscle groups of the trunk, pelvis, and lower limbs. Measurements were carried out before and immediately after exercise. Joint position sense (JPS) was measured in the knee joint for the starting position at 45° in an open kinematic chain. JPS error was measured using a digital inclinometer. Balance was assessed in a standing position with both feet on the ground, with the eyes open and closed (each test lasting 30 s). Balance was assessed using Matscan with FootMat 7.0 SAM software. The surface of the ellipse of confidence as well as front-back and right-left sway were measured to assess balance.
Statistical analysis was performed using Statistica 10.0 PL software. Results: There were no significant differences between the groups, either before or after the exercise (p > 0.05). JPS did not change significantly in either the test group (10.7° vs. 8.4°) or the control group (9.0° vs. 8.4°). No significant differences were shown in any of the parameters of the balance tests with the eyes open or closed in either the test or the control group (p > 0.05). Conclusions: 1. No deterioration in proprioception or balance was observed immediately after the vibration stimulus. This suggests that vibration-induced blockage of proprioceptive stimuli conduction has only a short-lasting effect that occurs only as long as the vibration stimulus is present. 2. Short-term use of vibration in treatment does not impair proprioception and seems to be safe for patients with proprioceptive impairment. 3. These results need to be supplemented with an assessment of proprioception during the application of vibration stimuli. Additionally, the impact of the vibration parameters used in the exercises should be evaluated.
Keywords: balance, joint position sense, proprioception, whole body vibration
Procedia PDF Downloads 326
1417 The Impact of the Plagal Cadence on Nineteenth-Century Music
Authors: Jason Terry
Abstract:
Beginning in the mid-nineteenth century, hymns in the Anglo-American tradition often ended with the congregation singing 'amen', most commonly set to a plagal cadence. While the popularity of this tradition is still well known today, this research presents the origins of the custom. In 1861, Hymns Ancient & Modern deepened the convention by concluding each of its hymns with a published plagal-amen cadence. Subsequently, hymnals from a variety of denominations throughout Europe and the United States widely adopted this practice. By the middle of the twentieth century the number of participants singing this cadence had noticeably declined; however, it was not until the 1990s that the plagal-amen cadence all but disappeared from hymnals. Today, it is rare for songs to conclude with the plagal-amen cadence, although instrumentalists have continued to regularly play a plagal cadence underneath the singers' sustained finalis. After examining a variety of music theory treatises, eighteenth-century newspaper articles, and manuscripts and hymnals from the last five centuries, and after conducting interviews with a number of scholars around the world, this study presents the plagal-amen cadence in its historical context. The association of 'amen' with the plagal cadence was already being discussed during the late eighteenth century, and the plagal-amen cadence only grew in attractiveness from that time forward, most notably in the nineteenth and twentieth centuries. Throughout this research, the music of Thomas Tallis, primarily his Preces and Responses, is reasonably shown to be the basis for the high status of the plagal-amen cadence in nineteenth- and twentieth-century society. Tallis's immediate influence was felt among his contemporary English composers as well as posterity, all of whom were well aware of his compositional styles and techniques.
More important, however, was the revival of his music in nineteenth-century England, which had an even greater impact on the plagal-amen tradition. Bearing the historical title of father of English cathedral music, Tallis was favored by the supporters of the Oxford Movement. Thus, owing to society's view of Tallis, the simple IV–I cadence he chose to pair with 'amen' attained a much greater worth in the history of Western music. A musical device such as the once-revered plagal-amen cadence deserves to be studied and understood in a more factual light than has thus far been available to contemporary scholars.
Keywords: amen cadence, plagal-amen cadence, singing hymns with amen, Thomas Tallis
Procedia PDF Downloads 232
1416 Discriminating Between Energy Drinks and Sports Drinks Based on Their Chemical Properties Using Chemometric Methods
Authors: Robert Cazar, Nathaly Maza
Abstract:
Energy drinks and sports drinks are quite popular among young adults and teenagers worldwide. Some concerns regarding their health effects, particularly those of the energy drinks, have been raised based on scientific findings. Differentiating between these two types of drinks by means of their chemical properties is an instructive task, and chemometrics provides the most appropriate strategy for doing so. In this study, a discrimination analysis of energy and sports drinks was carried out applying chemometric methods. A set of eleven samples of commercially available brands of drinks, seven energy drinks and four sports drinks, was collected. Each sample was characterized by eight chemical variables (carbohydrates, energy, sugar, sodium, pH, degrees Brix, density, and citric acid). The data set was standardized and examined by exploratory chemometric techniques such as clustering and principal component analysis. As a preliminary step, variable selection was carried out by inspecting the variable correlation matrix. Some variables were found to be redundant, so they could be safely removed, leaving five variables that are sufficient for this analysis: sugar, sodium, pH, density, and citric acid. Then, hierarchical clustering employing the average-linkage criterion and the Euclidean distance metric was performed. It perfectly separates the two types of drinks, since the resulting dendrogram, cut at the 25% similarity level, sorts the samples into two well-defined groups, one containing the energy drinks and the other the sports drinks. Further assurance of the complete discrimination is provided by the principal component analysis. The projection of the data set onto the first two principal components, which retain 71% of the data variance, makes it possible to visualize the distribution of the samples in the two groups identified in the clustering stage.
Since the first principal component is the discriminating one, inspection of its loadings makes it possible to characterize these groups. The energy-drink group possesses medium to high values of density, citric acid, and sugar; the sports-drink group, on the other hand, exhibits low values of those variables. In conclusion, the application of chemometric methods to a data set comprising some chemical properties of a number of energy and sports drinks provides an accurate, dependable way to discriminate between these two types of beverages.
Keywords: chemometrics, clustering, energy drinks, principal component analysis, sports drinks
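The workflow the abstract describes, standardization, average-linkage hierarchical clustering with Euclidean distances, and projection onto the first two principal components, can be sketched as follows. The drink measurements below are illustrative values only, not the study's data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical measurements (rows: drinks; columns: sugar, sodium,
# pH, density, citric acid). First three rows mimic energy drinks,
# last three mimic sports drinks.
X = np.array([
    [11.0, 10.0, 3.3, 1.04, 0.40],
    [10.5,  9.0, 3.4, 1.05, 0.45],
    [12.0, 11.0, 3.2, 1.05, 0.42],
    [ 6.0, 46.0, 3.0, 1.01, 0.20],
    [ 5.5, 45.0, 2.9, 1.01, 0.18],
    [ 6.5, 44.0, 3.1, 1.02, 0.22],
])

# Standardize each variable (zero mean, unit variance), as in the study
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# Average-linkage hierarchical clustering with Euclidean distances,
# cut so that two clusters remain
tree = linkage(Z, method="average", metric="euclidean")
labels = fcluster(tree, t=2, criterion="maxclust")

# PCA via SVD of the standardized matrix; project onto PC1 and PC2
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = (s**2) / (s**2).sum()   # variance fraction per component
scores = Z @ Vt[:2].T               # sample coordinates on PC1, PC2
```

With well-separated groups like these, the two-cluster cut reproduces the energy/sports split and the first component carries most of the variance, mirroring the 71% retained by the first two components in the study.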
Procedia PDF Downloads 105
1415 Robotic Process Automation in Accounting and Finance Processes: An Impact Assessment of Benefits
Authors: Rafał Szmajser, Katarzyna Świetla, Mariusz Andrzejewski
Abstract:
Robotic process automation (RPA) is a technology in which repeatable business processes are performed by computer programs, robots that simulate the work of a human being. This approach assumes replacing an existing employee with dedicated software (software robots) to support activities that are primarily repeated and uncomplicated and characterized by a low number of exceptions. RPA application is widespread in modern business services, particularly in the areas of finance, accounting and human resources management. By utilizing this technology, the effectiveness of operations increases while workload is reduced and possible errors in the process are minimized, bringing a measurable decrease in the cost of providing services. Regardless of how the use of modern information technology is assessed, there are also some doubts as to whether we should replace human activities when implementing automation in business processes. After the initial awe at the new technological concept, a reflection arises: to what extent does the implementation of RPA increase the efficiency of operations, and is there a business case for implementing it? If the business case is beneficial, which business processes have the greatest potential for RPA? A closer look at these issues was provided by this research, which verified the respondents' views of the perceived advantages resulting from the use of robotization and automation in financial and accounting processes. As a result of an online survey addressed to over 500 respondents from international companies, 162 complete answers were returned from the most important types of organizations in the modern business services industry, i.e. business or IT process outsourcing (BPO/ITO), shared service centers (SSC), consulting/advisory firms and their customers. Answers were provided by respondents holding the following positions in their organizations: members of the board, directors, managers and experts/specialists.
The structure of the survey allowed the respondents to supplement it with additional comments and observations. The results formed the basis for the creation of a business case calculating the tangible benefits associated with the implementation of automation in the selected financial processes. The results of the statistical analyses carried out with regard to revenue growth confirmed the hypothesis that there is a correlation between job position and the perception of the impact of RPA implementation on individual benefits. The second hypothesis (H2), that there is a relationship between the kind of company in the business services industry and the perception of the impact of RPA on individual benefits, was not confirmed. Based on the survey results, the authors performed a simulation of the business case for the implementation of RPA in selected finance and accounting processes. The calculated payback periods were diametrically different, ranging from 2 months for the accounts payable process, with 75% savings, to the extreme case of the taxes process, where implementation and maintenance costs exceed the savings resulting from the use of the robot.
Keywords: automation, outsourcing, business process automation, process automation, robotic process automation, RPA, RPA business case, RPA benefits
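The payback comparison above follows from a simple relation: the one-off implementation cost divided by the monthly savings the robot generates. A minimal sketch, with purely illustrative cost figures (the survey's actual cost data are not reproduced here):

```python
def payback_period_months(implementation_cost, monthly_process_cost, savings_rate):
    """Months until cumulative savings cover the one-off implementation cost.

    savings_rate is the fraction of the monthly process cost that RPA eliminates.
    """
    monthly_savings = monthly_process_cost * savings_rate
    if monthly_savings <= 0:
        return float("inf")  # the robot never pays for itself
    return implementation_cost / monthly_savings

# A 75%-savings accounts payable process (figures are hypothetical)
ap_payback = payback_period_months(30000, 20000, 0.75)
# A taxes process where maintenance wipes out the savings entirely
tax_payback = payback_period_months(30000, 20000, 0.0)
```

Under these assumed figures the accounts payable robot pays back in 2 months, while the taxes robot never does, reproducing the two extremes reported in the abstract.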
Procedia PDF Downloads 136
1414 The Impact of WhatsApp Groups as Supportive Technology in Teaching
Authors: Pinn Tsin Isabel Yee
Abstract:
With the advent of internet technologies, students are increasingly turning toward social media and cross-platform messaging apps such as WhatsApp, Line, and WeChat to support their teaching and learning processes. Although each messaging app has its own features, WhatsApp remains one of the most popular cross-platform apps, allowing fast, simple, secure messaging and free calls anytime, anywhere. With such a plethora of advantages, students can easily assimilate WhatsApp as a supportive technology in their learning process: there can be peer-to-peer learning, and a teacher is able to share knowledge digitally via the creation of WhatsApp groups. Content analysis techniques were utilized to analyze data collected with closed-ended question forms. The study demonstrated that 98.8% of college students (n=80) from the Monash University foundation year agreed that the employment of WhatsApp groups was helpful as a learning tool. Approximately 71.3% disagreed that notifications and alerts from the WhatsApp group were disruptions to their studies; students commented that they could silence the notifications, and hence these would not disturb their flow of thought. In fact, an overwhelming majority of students (95.0%) found it enjoyable to participate in WhatsApp groups for educational purposes. It was a common perception that some students felt pressured to post a reply in such groups, but data analysis showed that 72.5% of students did not feel pressured to comment or reply. Moreover, 93.8% of students were satisfied if their posts were not responded to speedily but were eventually attended to. Generally, 97.5% of students found it useful if their teachers provided their mobile phone numbers to be added to a WhatsApp group. If a teacher posts an explanation or a mathematical working in the group, all students are able to view the post together, as opposed to individual students asking the teacher similar questions separately.
On whether students preferred using Facebook as a learning tool, the respondents were almost evenly divided: 51.3% of students preferred WhatsApp, while 48.8% preferred Facebook as a supportive technology in teaching and learning. Taken altogether, the utilization of WhatsApp groups as a supportive technology in teaching and learning should be implemented in all classes to continuously engage our generation Y students in the ever-changing digital landscape.
Keywords: education, learning, messaging app, technology, WhatsApp groups
Procedia PDF Downloads 156
1413 Understanding the Effects of Lamina Stacking Sequence on Structural Response of Composite Laminates
Authors: Awlad Hossain
Abstract:
Structural weight reduction with improved functionality is one of the targeted desires of engineers, which drives materials and structures to be lighter. One way to achieve this objective is through the replacement of metallic structures with composites. The main advantages of composite materials are that they are lightweight and offer high specific strength and stiffness. Composite materials can be classified in various ways based on fiber type and fiber orientation. Fiber-reinforced composite laminates are prepared by stacking single sheets of continuous fibers impregnated with resin in different orientations to obtain the desired strength and stiffness. This research aims to understand the effects of lamina stacking sequence (LSS) on the structural response of a symmetric composite laminate, defined by [0/60/-60]s. The LSS represents how the layers are stacked together in a composite laminate. The [0/60/-60]s laminate is a composite plate consisting of six layers of fibers, stacked at 0, 60, -60, -60, 60 and 0 degree orientations. This laminate is called symmetric (denoted by the subscript s) because it consists of the same material and has identical fiber orientations above and below the mid-plane. The [0/60/-60]s, [0/-60/60]s, [60/-60/0]s, [-60/60/0]s, [60/0/-60]s, and [-60/0/60]s laminates therefore contain the same plies but in different LSS. In this research, the effects of LSS on the laminate in-plane and bending moduli were investigated first. The laminate moduli dictate the in-plane and bending deformations upon loading. This research also provides the setup and techniques for measuring the in-plane and bending moduli, as well as how the stress distribution was assessed. Then, the laminate was subjected to an in-plane force and a bending moment, and the strain and stress distribution in each ply for different LSS was investigated using the concepts of macromechanics.
Finally, several numerical simulations were conducted using the finite element analysis (FEA) software ANSYS to investigate the effects of LSS on the deformations and stress distribution. The FEA results were also compared to the macromechanics solutions obtained with MATLAB. The outcome of this research helps composite users determine the optimum LSS required to minimize the overall deformations and stresses. It is beneficial to predict the structural response of composite laminates analytically and/or numerically before in-house fabrication.
Keywords: composite, lamina, laminate, lamina stacking sequence, laminate moduli, laminate strength
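The dependence of the laminate moduli on LSS can be illustrated with classical laminate theory: for a symmetric laminate, the extensional stiffness matrix A depends only on which ply angles are present, the coupling matrix B vanishes, and the bending stiffness matrix D depends on where each ply sits through the thickness. A minimal sketch; the lamina properties below are generic carbon/epoxy values, not the paper's material data:

```python
import numpy as np

def qbar(E1, E2, G12, v12, theta_deg):
    """Transformed reduced stiffness of one lamina (plane stress)."""
    v21 = v12 * E2 / E1
    d = 1.0 - v12 * v21
    Q = np.array([[E1 / d,       v12 * E2 / d, 0.0],
                  [v12 * E2 / d, E2 / d,       0.0],
                  [0.0,          0.0,          G12]])
    t = np.radians(theta_deg)
    c, s = np.cos(t), np.sin(t)
    T = np.array([[ c*c, s*s,  2*c*s],
                  [ s*s, c*c, -2*c*s],
                  [-c*s, c*s,  c*c - s*s]])
    R = np.diag([1.0, 1.0, 2.0])  # Reuter matrix
    return np.linalg.inv(T) @ Q @ R @ T @ np.linalg.inv(R)

def abd(angles_deg, ply_t, E1, E2, G12, v12):
    """A, B, D stiffness matrices for a laminate with uniform ply thickness."""
    n = len(angles_deg)
    z = np.linspace(-n * ply_t / 2, n * ply_t / 2, n + 1)  # ply interfaces
    A = np.zeros((3, 3)); B = np.zeros((3, 3)); D = np.zeros((3, 3))
    for k, th in enumerate(angles_deg):
        Qb = qbar(E1, E2, G12, v12, th)
        A += Qb * (z[k + 1] - z[k])
        B += Qb * (z[k + 1]**2 - z[k]**2) / 2
        D += Qb * (z[k + 1]**3 - z[k]**3) / 3
    return A, B, D
```

Comparing [0/60/-60]s against [60/0/-60]s with the same plies shows A unchanged, B essentially zero (symmetry), and D different, which is exactly the LSS sensitivity the study examines.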
Procedia PDF Downloads 6
1412 Contrast Media Effects and Radiation Dose Assessment in Contrast Enhanced Computed Tomography
Authors: Buhari Samaila, Sabiu Abdullahi, Buhari Maidamma
Abstract:
Background: Contrast-enhanced computed tomography (CECT) is a technique that uses contrast media to improve image quality and diagnostic accuracy. It is a widely used imaging modality in medical diagnostics, offering high-resolution images for accurate diagnosis. However, concerns regarding the potential adverse effects of contrast media and radiation dose exposure have prompted ongoing investigation and assessment, making it important to assess the effects of contrast media and radiation dose in CECT procedures. Objective: This study aims to assess the effects of contrast media and radiation dose in CECT procedures. Methods: A comprehensive review of the literature was conducted to identify studies related to contrast media effects and radiation dose assessment in CECT. Relevant data, including location, type of research, objective, method, findings, conclusion, authors, and year of publication, were extracted, analyzed, and reported. Results: The findings revealed that several studies have investigated the impacts of contrast media and radiation doses in CECT procedures, with iodinated contrast agents being the most commonly employed. Adverse effects associated with contrast media administration were reported, including allergic reactions, nephrotoxicity, and thyroid dysfunction, albeit at relatively low incidence rates. Additionally, radiation dose levels varied depending on the imaging protocol and anatomical region scanned. Efforts to minimize radiation exposure through optimization techniques were evident across the studies. Conclusion: Contrast-enhanced computed tomography remains an invaluable tool in medical imaging; however, careful consideration of contrast media effects and radiation dose exposure is imperative.
Healthcare practitioners should weigh the diagnostic benefits against potential risks, employing strategies to mitigate adverse effects and optimize radiation dose levels for patient safety and effective diagnosis. Further research is warranted to enhance the understanding and management of contrast media effects and radiation dose optimization in CECT procedures.
Keywords: CT, contrast media, radiation dose, effect of radiation
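A common element of the dose-optimization efforts mentioned above is estimating effective dose from the scanner-reported dose-length product (DLP) using region-specific conversion coefficients (k-factors). A minimal sketch; the k-factor values below are typical adult coefficients quoted in CT dosimetry guidance and are assumptions here, so they should be checked against current local protocols before any clinical use:

```python
# Approximate adult k-factors in mSv per (mGy*cm); illustrative values,
# verify against the dosimetry guidance used at your institution.
K_FACTORS = {
    "head": 0.0021,
    "chest": 0.014,
    "abdomen_pelvis": 0.015,
}

def effective_dose_msv(dlp_mgy_cm, region):
    """Estimate effective dose (mSv) as DLP times the region's k-factor."""
    return dlp_mgy_cm * K_FACTORS[region]

# e.g. a chest CT with a DLP of 400 mGy*cm
dose = effective_dose_msv(400.0, "chest")
```

This is only a population-level estimate; patient size and scanner model shift the true value, which is why the reviewed studies report such variation across protocols and regions.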
Procedia PDF Downloads 18
1411 Community Resilience in Response to the Population Growth in Al-Thahabiah Neighborhood
Authors: Layla Mujahed
Abstract:
Amman, the capital of Jordan, is the main political, economic, social and cultural center of Jordan and beyond. The city faces a multitude of demographic challenges related to the unstable political situation in the surrounding countries; it hosts regional and local migrants who left their homes to find a better life in the capital. This has resulted in a random and unequal population distribution: some districts have high populations and more pressure on infrastructure and services than other districts. The government works to resolve this challenge in compliance with the City Resilience Framework (CRF) of the 100 Resilient Cities initiative, which Amman joined as a member in December 2014 to work toward achieving four goals: health and welfare, infrastructure and utilities, economy and education, as well as administration and government. Previous research is lacking on Amman's resilience work at the neighborhood scale and on population growth as a resilience challenge. This study therefore focuses on the Al-Thahabiah neighborhood in the Shafa Badran district of Amman. This paper studies the reasons and drivers behind the population growth in this area during the selected period and then proposes strategies to improve resilience work at the neighborhood scale. The methodology comprises primary and secondary data. The primary data consist of interviews with a chief officer in the executive branch of the Greater Amman Municipality and with a resilience officer. The secondary data consist of papers, journals, newspapers, articles and books, together with maps and statistical data describing the infrastructural and social situation at the neighborhood and district levels during the study period. Based upon these data, more detailed information will be found, e.g., the concentration of the population and the infrastructure provided for them. This will help to provide services and infrastructure to other neighborhoods and improve the population distribution.
This study develops an analytical framework to assess urban demographic time series in accordance with the criteria of the CRF, in order to make accurate, detailed projections of the requirements for future development at the neighborhood scale and to organize the human requirements for affordable quality housing, employment, transportation, health and education in this neighborhood, improving the social relations between its inhabitants and the community. This study highlights the localization of resilience work at the neighborhood scale and spreads resilience knowledge, given the shortage of such research in Jordan. Studying resilience work from the perspective of the population growth challenge helps improve the facilities provided to the inhabitants and their quality of life.
Keywords: city resilience framework, demography, population growth, stakeholders, urban resilience
Procedia PDF Downloads 176
1410 Spatio-Temporal Analysis of Land Use Change and Green Cover Index
Authors: Poonam Sharma, Ankur Srivastav
Abstract:
Cities are complex and dynamic systems that constitute a significant challenge to urban planning. The increasing size of the built-up area, owing to growing population pressure and economic growth, has led to massive land use/land cover change, resulting in the loss of natural habitat and thus reducing green cover in urban areas. Urban environmental quality is influenced by several aspects, including a city's geographical configuration, the scale and nature of the human activities occurring there and the environmental impacts generated. Cities and their sustainability are often discussed together, as cities stand confronted with numerous environmental concerns in an increasingly urbanized world and are situated in a mesh of global networks in multiple senses. A rapidly transformed urban setting plays a crucial role in changing the green area of natural habitats. This research paper attempts to examine the pattern of urban growth and to measure the land use/land cover change in Gurgaon in Haryana, India, through the integration of geospatial techniques. Satellite images are used to measure the spatiotemporal changes that have occurred in land use and land cover, resulting in a new cityscape. It has been observed from the analysis that drastically evident changes in land use have occurred, with a massive rise in built-up areas and a decrease in green cover, making the sustainability of the city an important area of concern. The massive increase in built-up area has influenced localised temperatures and heat concentration. To enhance the decision-making process in urban planning, a detailed and real-world depiction of these urban spaces is the need of the hour.
Monitoring indicators of key processes in land use and economic development is essential for evaluating policy measures.
Keywords: cityscape, geospatial techniques, green cover index, urban environmental quality, urban planning
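The abstract does not specify how its green cover index is computed from the satellite images; a common choice with multispectral imagery is NDVI thresholding, sketched below. The band arrays and the 0.3 vegetation cut-off are illustrative assumptions, not the paper's method:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, computed per pixel."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-9)  # epsilon avoids divide-by-zero

def green_cover_fraction(nir, red, threshold=0.3):
    """Fraction of pixels whose NDVI exceeds the vegetation threshold."""
    return float((ndvi(nir, red) > threshold).mean())

# Toy 2x2 scene: two vegetated pixels, two built-up pixels
nir = np.array([[0.80, 0.80], [0.20, 0.25]])
red = np.array([[0.10, 0.15], [0.30, 0.30]])
cover = green_cover_fraction(nir, red)
```

Computing this fraction for images from two dates gives a simple change indicator: a falling green cover fraction alongside a rising built-up area is the pattern the study reports for Gurgaon.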
Procedia PDF Downloads 276
1409 Growth and Differentiation of Mesenchymal Stem Cells on Titanium Alloy Ti6Al4V and Novel Beta Titanium Alloy Ti36Nb6Ta
Authors: Eva Filová, Jana Daňková, Věra Sovková, Matej Daniel
Abstract:
Titanium alloys are biocompatible metals that are widely used in clinical practice as load-bearing implants. Chemical modification may influence cell adhesion, proliferation, and differentiation as well as the stiffness of the material. The aim of the study was to evaluate the adhesion, growth and differentiation of pig mesenchymal stem cells on the novel beta titanium alloy Ti36Nb6Ta compared to the standard medical titanium alloy Ti6Al4V. Discs of Ti36Nb6Ta and Ti6Al4V alloy were sterilized with ethanol, placed in 48-well plates, seeded with pig mesenchymal stem cells at a density of 60×10³/cm² and cultured in Minimum Essential Medium (Sigma) supplemented with 10% fetal bovine serum and penicillin/streptomycin. Cell viability was evaluated using the MTS assay (CellTiter 96® AQueous One Solution Cell Proliferation Assay; Promega) and cell proliferation using the Quant-iT™ dsDNA Assay Kit (Life Technologies). Cells were stained immunohistochemically using a monoclonal antibody against beta-actin and a secondary antibody conjugated with AlexaFluor® 488, and subsequently the spread area of the cells was measured. Cell differentiation was evaluated by an alkaline phosphatase assay using p-nitrophenyl phosphate (pNPP) as a substrate; the reaction was stopped with NaOH, and the absorbance was measured at 405 nm. Osteocalcin, a specific bone marker, was stained immunohistochemically and subsequently visualized using confocal microscopy; the fluorescence intensity was analyzed and quantified. Moreover, the gene expression of the osteogenic markers osteocalcin and type I collagen was evaluated by real-time reverse transcription PCR (qRT-PCR). For statistical evaluation, one-way ANOVA followed by the Student-Newman-Keuls method was used; for qRT-PCR, the nonparametric Kruskal-Wallis test and Dunn's multiple comparison test were used. The absorbance in the MTS assay was significantly higher on the titanium alloy Ti6Al4V compared to the beta titanium alloy Ti36Nb6Ta on days 7 and 14.
Mesenchymal stem cells were well spread on both alloys, but no difference in spread area was found. No differences were observed in the alkaline phosphatase assay, in the fluorescence intensity of osteocalcin, or in the expression of the type I collagen and osteocalcin genes. Higher expression of type I collagen compared to osteocalcin was observed for cells on both alloys. Both the beta titanium alloy Ti36Nb6Ta and the titanium alloy Ti6Al4V supported mesenchymal stem cell adhesion, proliferation and osteogenic differentiation. The novel beta titanium alloy Ti36Nb6Ta is a promising material for bone implantation. The project was supported by the Czech Science Foundation: grant No. 16-14758S, the Grant Agency of the Charles University, grant No. 1246314, and by the Ministry of Education, Youth and Sports NPU I: LO1309.
Keywords: beta titanium, cell growth, mesenchymal stem cells, titanium alloy, implant
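The statistical comparison used above pairs a parametric omnibus test (one-way ANOVA) with its nonparametric counterpart (Kruskal-Wallis); the post-hoc steps (Student-Newman-Keuls, Dunn's test) are omitted in this sketch. The absorbance readings below are illustrative values, not the study's measurements:

```python
from scipy import stats

# Hypothetical day-7 MTS absorbance readings (arbitrary units)
ti6al4v   = [0.82, 0.85, 0.80, 0.84]
ti36nb6ta = [0.70, 0.72, 0.69, 0.71]

# Parametric omnibus test across the two alloy groups
f_stat, p_anova = stats.f_oneway(ti6al4v, ti36nb6ta)

# Nonparametric counterpart, as used for the qRT-PCR data in the study
h_stat, p_kw = stats.kruskal(ti6al4v, ti36nb6ta)
```

With clearly separated groups like these, both tests return p < 0.05, matching the kind of significant day-7/day-14 MTS difference the abstract reports between the alloys.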
Procedia PDF Downloads 315
1408 Gender Differences in Morbid Obese Children: Clinical Significance of Two Diagnostic Obesity Notation Model Assessment Indices
Authors: Mustafa M. Donma, Orkide Donma, Murat Aydin, Muhammet Demirkol, Burcin Nalbantoglu, Aysin Nalbantoglu, Birol Topcu
Abstract:
Childhood obesity is an ever-increasing global health problem, affecting both developed and developing countries. Accurate evaluation of obesity in children requires difficult and detailed investigation. In our study, obesity in children was evaluated using new body fat ratios and indices. Assessment of anthropometric measurements, as well as some ratios, is important for the evaluation of gender differences, particularly during the late periods of obesity. A total of 239 children participated in the study: 168 morbid obese (MO) children (81 girls and 87 boys) and 71 normal-weight (NW) children (40 girls and 31 boys). Informed consent forms signed by the parents were obtained, and the Ethics Committee approved the study protocol. Mean ages (years)±SD calculated for the MO group were 10.8±2.9 years in girls and 10.1±2.4 years in boys; the corresponding values for the NW group were 9.0±2.0 years in girls and 9.2±2.1 years in boys. Mean body mass index (BMI)±SD values for the MO group were 29.1±5.4 kg/m² and 27.2±3.9 kg/m² in girls and boys, respectively. These values for the NW group were calculated as 15.5±1.0 kg/m² in girls and 15.9±1.1 kg/m² in boys. Groups were constituted based upon the BMI percentiles for age and sex recommended by the WHO: children above the 99th percentile were grouped as MO, and children between the 15th and 85th percentiles were considered NW. The anthropometric measurements were recorded and evaluated along with the new ratios, such as the trunk-to-appendicular fat ratio, as well as indices such as Index-I and Index-II. Body fat percentage values were obtained by bioelectrical impedance analysis. Data were entered into a database and analyzed using SPSS/PASW 18 Statistics for Windows statistical software. Increased waist-to-hip circumference (C) ratios and decreased head-to-neck C, height/2-to-waist C and height/2-to-hip C ratios were observed in parallel with the development of obesity (p≤0.001).
The reference value for the height/2-to-hip ratio was detected as approximately 1.0. Index-II, based upon total body fat mass, showed much more significant differences between the groups than Index-I, which is based upon weight. There was no difference between the trunk-to-appendicular fat ratios of NW girls and NW boys (p≥0.05); however, significantly increased values were observed for MO girls in comparison with MO boys (p≤0.05). This parameter showed no difference between the NW and MO states in boys (p≥0.05), whereas a statistically significant increase was noted in MO girls compared to their NW counterparts (p≤0.001). The trunk-to-appendicular fat ratio was the only fat-based parameter that showed a gender difference between the NW and MO groups. This study has revealed that body ratios and formulas based upon body fat tissue are more valuable parameters than those based on weight and height values for the evaluation of morbid obesity in children.
Keywords: anthropometry, childhood obesity, gender, morbid obesity
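The circumference- and fat-based ratios discussed above are simple quotients of the measured quantities. A minimal sketch; the example values are hypothetical, and "appendicular" fat is taken here to mean the combined fat mass of the four limbs:

```python
def waist_to_hip_ratio(waist_cm, hip_cm):
    """Waist circumference divided by hip circumference."""
    return waist_cm / hip_cm

def trunk_to_appendicular_fat_ratio(trunk_fat_kg, limb_fat_kg_total):
    """Trunk fat mass divided by the combined fat mass of all four limbs."""
    return trunk_fat_kg / limb_fat_kg_total

# Hypothetical measurements for one child
whr = waist_to_hip_ratio(70.0, 80.0)
taf = trunk_to_appendicular_fat_ratio(8.0, 10.0)
```

The study's finding is that the second ratio, computed from bioelectrical impedance fat-mass measurements rather than circumferences, is the one that distinguishes MO girls from MO boys.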
Procedia PDF Downloads 324
1407 A Case Study on an Integrated Analysis of Well Control and Blow out Accident
Authors: Yasir Memon
Abstract:
The complexity and challenges of the offshore industry are greater than in the past, and the oil and gas industry is expanding every day by meeting these challenges; more challenging wells, longer and deeper, are being drilled in today's environment. Blowout prevention holds worthy importance in the oil and gas sphere. In past years, when the oil and gas industry was still growing, drilling operations were extremely dangerous: there was no technology to determine reservoir pressure, and drilling was hence a blind operation. A blowout arises when uncontrolled reservoir pressure enters the wellbore. A potential blowout in the oil industry is a danger to both the environment and human life, and it causes losses through environmental damage, penalties from state and national regulators, and lost capital investment. There have been many cases of blowouts in the oil and gas industry that caused damage to both people and the environment. Huge capital investments are being made all over the world to prevent blowouts and keep damage to the lowest possible level. The objective of this study is to promote safety and the good use of resources to assure safety and environmental integrity in all drilling operations. This study shows that human error and management failure are the main causes of blowouts; therefore, proper management, with the wise use of precautions, prevention methods and control techniques, can reduce the probability of a blowout to a minimum level. It also discusses the basic procedures, concepts and equipment involved in well control methods and the various steps used under various conditions. Furthermore, another aim of this work is to highlight the role of management in oil and gas operations.
Moreover, this study analyzes the causes of the blowout of the Macondo well, which occurred in the Gulf of Mexico on April 20, 2010, and delivers recommendations and analysis of various aspects of well control methods. It also lists the mistakes and compromises that British Petroleum and its partners made during drilling and well completion, and shows that the Macondo well disaster happened due to the violation of various safety and development rules. This case study concludes that the Macondo well blowout disaster could have been avoided with proper management of personnel and communication between them, and that, by following safety rules and laws, the environmental damage could have been kept to a minimum.
Keywords: energy, environment, oil and gas industry, Macondo well accident
Procedia PDF Downloads 185
1406 Assessment of Acute Oral Toxicity Studies and Anti-Diabetic Activity of Herbal Mediated Nanomedicine
Authors: Shanker Kalakotla, Krishna Mohan Gottumukkala
Abstract:
Diabetes is a metabolic disorder characterized by hyperglycemia and altered carbohydrate, lipid, and protein metabolism. Nanotechnology is currently a thriving field of research, and lately there has been great interest in nanomedicine and nanopharmacology in the synthesis of silver nanoparticles using natural products. Biological methods have been used to synthesize silver nanoparticles in the presence of medicinally active antidiabetic plants, which motivated us to assess silver nanoparticles biologically synthesized from the seed extract of Psoralea corylifolia using a 1 mM silver nitrate solution. The synthesized herbal mediated silver nanoparticles (HMSNPs) were then subjected to various characterization techniques, namely XRD, SEM, EDX, TEM, DLS, UV-vis, and FT-IR. In the current study, the silver nanoparticles were tested for in-vitro anti-diabetic activity and possible toxic effects in healthy female albino mice following OECD guideline 425. Herbal mediated silver nanoparticles were successfully obtained from the bioreduction of silver nitrate using Psoralea corylifolia extract, and were appropriately characterized and confirmed using UV-vis spectroscopy, XRD, FT-IR, DLS, SEM, and EDX analysis. In the behavioral observations of the study, the female albino mice showed no sedation, respiratory arrest, or convulsions. The test compounds caused no mortality at the dose level tested (2000 mg/kg body weight) by the end of the 14-day observation period and were considered safe. It may be concluded that the LD50 of the HMSNPs was 2000 mg/kg body weight; accordingly, the preferred dose range for HMSNPs falls between 200 and 400 mg/kg.
Further in-vivo pharmacological models and biochemical investigations will clearly elucidate the mechanism of action and will help in projecting the synthesized silver nanoparticles as a therapeutic agent for treating chronic ailments.
Keywords: herbal mediated silver nanoparticles, HMSNPs, toxicity of silver nanoparticles, PTP1B in-vitro anti-diabetic assay, female albino mice, OECD guideline 425
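The step from the LD50 estimate to the 200-400 mg/kg working range can be made explicit. A minimal sketch follows, assuming the common pharmacological convention of screening at roughly one-tenth to one-fifth of the LD50; the fractions themselves are an assumption that reproduces the stated range, not something given in the abstract:

```python
def dose_range_from_ld50(ld50_mg_per_kg, low_frac=0.1, high_frac=0.2):
    """Derive a screening dose range from an acute-toxicity LD50.

    low_frac/high_frac encode the common (assumed) 1/10-to-1/5-of-LD50
    convention for choosing pharmacological test doses.
    """
    return ld50_mg_per_kg * low_frac, ld50_mg_per_kg * high_frac

# LD50 of 2000 mg/kg from the OECD-425 limit test gives 200-400 mg/kg
low, high = dose_range_from_ld50(2000)
```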
Procedia PDF Downloads 271
1405 Use of Misoprostol in Pregnancy Termination in the Third Trimester: Oral versus Vaginal Route
Authors: Saimir Cenameri, Arjana Tereziu, Kastriot Dallaku
Abstract:
Introduction: Intra-uterine death is a common problem in obstetrical practice and can lead to complications if left to resolve spontaneously. The cervix is unprepared, making induction of labor difficult. Misoprostol, an inexpensive synthetic prostaglandin E1 analogue, is considered suitable because of its ability to bring about changes in the cervix that lead to the induction of uterine contractions. Misoprostol is quickly absorbed when taken orally, resulting in a high initial peak serum concentration compared with the vaginal route. The vaginal-route peak serum concentration is lower and declines more gradually, which is associated with many benefits for the patient: fast induction of labor, smaller doses, and fewer (dose-dependent) side effects. The most commonly used regimen has been 50 μg every 4 hours, with a high success rate and limited side effects. Objective: To evaluate the efficiency of oral and vaginal misoprostol in inducing labor, and to compare it with use not following a previously defined protocol. Methods: Participants in this study included patients at U.H.O.G. 'Koco Gliozheni', Tirana, from April 2004 to July 2006, presenting with an indication for induction of labor in the third trimester for pregnancy termination. A total of 37 patients were admitted for labor induction: 26 were randomly assigned according to the protocol to the oral or vaginal route (10 vs. 16), and a control group (11) not subject to the protocol was formed. Oral or vaginal misoprostol was administered at a dose of 50 μg/4 h, while the control group was treated individually by members of the medical staff. The main outcome of interest was the time from induction of labor to birth. The Kruskal-Wallis test was used to compare average age, parity, maternal weight, gestational age, Bishop score, uterine size, and fetal weight between the four groups in the study.
The Fisher exact test was used to compare length of stay and causes across the four groups, and the Mann-Whitney test was used to compare the time to expulsion and the number of doses between the oral and vaginal groups. For all statistical tests, P ≤ 0.05 was considered statistically significant. Results: The four groups were comparable with regard to maternal age and weight, parity, abortion indication, Bishop score, fetal weight, and gestational age. There was a significant difference in the percentage of deliveries within 24 hours. The average time from induction to birth by route (vaginal, oral, according to protocol, not according to protocol) was 10.43 h, 21.10 h, 15.77 h, and 21.57 h, respectively. There was no difference in maternal complications between groups. Conclusions: Use of vaginal misoprostol for inducing labor in the third trimester for termination of pregnancy appears to be more effective than the oral route, and markedly more effective than use not following previously approved protocols, where complications are greater and unjustified.
Keywords: inducing labor, misoprostol, pregnancy termination, third trimester
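The Mann-Whitney comparison of induction-to-birth times between the two routes can be sketched with a plain U-statistic computation. A minimal sketch follows, with hypothetical times in hours rather than the study's raw data:

```python
def mann_whitney_u(sample_a, sample_b):
    """U statistic for sample_a: pairs (a, b) with a < b count 1, ties count 0.5.

    A full two-sided test would compare min(U_a, U_b) against tabled
    critical values; only the statistic itself is computed here.
    """
    u = 0.0
    for a in sample_a:
        for b in sample_b:
            if a < b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

# Hypothetical induction-to-birth times (hours), not the study's data
vaginal = [8.5, 9.0, 10.4, 11.2, 12.0]
oral = [18.0, 20.5, 21.1, 24.8, 25.3]
u_vaginal = mann_whitney_u(vaginal, oral)  # large U: vaginal times consistently shorter
```

Here every vaginal time precedes every oral time, so U reaches its maximum of len(vaginal) * len(oral) = 25.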
Procedia PDF Downloads 184
1404 A Systematic Review of the Psychometric Properties of Augmentative and Alternative Communication Assessment Tools in Adolescents with Complex Communication Needs
Authors: Nadwah Onwi, Puspa Maniam, Azmawanie A. Aziz, Fairus Mukhtar, Nor Azrita Mohamed Zin, Nurul Haslina Mohd Zin, Nurul Fatehah Ismail, Mohamad Safwan Yusoff, Susilidianamanalu Abd Rahman, Siti Munirah Harris, Maryam Aizuddin
Abstract:
Objective: Malaysia has a growing number of individuals with complex communication needs (CCN). Initiating augmentative and alternative communication (AAC) intervention may help individuals with CCN to understand and express themselves optimally and to participate actively in daily activities. AAC is defined as the multimodal use of communication abilities, allowing individuals to use every possible mode to communicate with others through a set of symbols or systems that may include symbols, aids, techniques, and strategies. It is consequently critical to evaluate the deficits in order to inform treatment for AAC intervention; however, no measurement tools for evaluating users with CCN are available locally. Design: A systematic review (SR) was designed to analyze the psychometric properties of AAC assessments for adolescents with CCN published in peer-reviewed journals. Tools were rated by the methodological quality of the studies and the psychometric measurement qualities of each tool. Method: A literature search identified AAC assessment tools with psychometrically robust properties, and a conceptual framework was considered. Two independent reviewers screened the abstracts and full-text articles and reviewed bibliographies for further references. Data were extracted using standardized forms, and study risk of bias was assessed. Result: The review highlights the psychometric properties of AAC assessment tools usable by speech-language therapists in the Malaysian context. The work outlines how systematic review methods may be applied to published material that provides valuable data for initiating the development of Malay-language AAC assessment tools.
Conclusion: The synthesis of evidence has provided a framework for Malaysian speech-language therapists to make informed decisions on AAC intervention within the standard operating procedure of the Ministry of Health, Malaysia.
Keywords: augmentative and alternative communication, assessment, adolescents, complex communication needs
Procedia PDF Downloads 150
1403 Lead Chalcogenide Quantum Dots for Use in Radiation Detectors
Authors: Tom Nakotte, Hongmei Luo
Abstract:
Lead chalcogenide-based (PbS, PbSe, and PbTe) quantum dots (QDs) were synthesized for implementation in radiation detectors. Pb-based materials have long been of interest for gamma- and x-ray detection due to their high absorption cross-section and atomic number. The studies emphasized how to control charge-carrier transport within thin films containing the QDs. The properties of the QDs themselves can be altered by changing the size, shape, composition, and surface chemistry of the dots, while carrier transport within QD films is affected by post-deposition treatment of the films. The QDs were synthesized using colloidal synthesis methods, and films were grown using multiple coating techniques, such as spin coating and doctor blading. Current QD radiation detectors use the QDs as fluorophores in a scintillation detector. Here, the viability of using QDs in solid-state radiation detectors, in which the incident radiation causes a direct electronic response within the QD film, is explored. Achieving high sensitivity and accurate energy quantification in QD radiation detectors requires large carrier mobility and long diffusion lengths in the QD films. Pb chalcogenide-based QDs were synthesized with both traditional oleic acid ligands and more weakly binding oleylamine ligands, allowing in-solution ligand exchange and making single-step deposition of thick films possible. The PbS and PbSe QDs showed better air stability than PbTe. After precipitation, the QDs passivated with the shorter ligand were dispersed in 2,6-difluoropyridine, yielding colloidal solutions with concentrations of 10-100 mg/mL for film processing applications. More concentrated colloidal solutions produce thicker films during spin coating, while an extremely concentrated solution (100 mg/mL) can be used to produce films several micrometers thick by doctor blading.
Film thicknesses of micrometers or even millimeters are needed in radiation detectors for high-energy gamma rays, which are of interest for astrophysics and nuclear security, in order to provide sufficient stopping power.
Keywords: colloidal synthesis, lead chalcogenide, radiation detectors, quantum dots
Procedia PDF Downloads 125
1402 A Risk Assessment Tool for the Contamination of Aflatoxins on Dried Figs Based on Machine Learning Algorithms
Authors: Kottaridi Klimentia, Demopoulos Vasilis, Sidiropoulos Anastasios, Ihara Diego, Nikolaidis Vasileios, Antonopoulos Dimitrios
Abstract:
Aflatoxins are highly poisonous and carcinogenic compounds produced by species of the genus Aspergillus that can infect a variety of agricultural foods, including dried figs. Biological and environmental factors, such as the population, pathogenicity, and aflatoxigenic capacity of the strains, and the topography, soil, and climate parameters of the fig orchards, are believed to have a strong effect on aflatoxin levels. Existing methods for aflatoxin detection and measurement, such as high-performance liquid chromatography (HPLC) and enzyme-linked immunosorbent assay (ELISA), can provide accurate results, but the procedures are usually time-consuming, sample-destructive, and expensive. Predicting aflatoxin levels prior to crop harvest is useful for minimizing the health and financial impact of a contaminated crop. Consequently, there is interest in developing a tool that predicts aflatoxin levels based on topography and soil analysis data of fig orchards. This paper describes the development of a risk assessment tool for aflatoxin contamination of dried figs based on the location and altitude of the fig orchards, the population of the fungus Aspergillus spp. in the soil, and soil parameters such as pH, saturation percentage (SP), electrical conductivity (EC), organic matter, particle size analysis (sand, silt, clay), the concentration of exchangeable cations (Ca, Mg, K, Na), extractable P, and trace elements (B, Fe, Mn, Zn, and Cu), by employing machine learning methods.
In particular, our proposed method integrates three machine learning techniques, i.e., dimensionality reduction of the original dataset (principal component analysis), metric learning (Mahalanobis metric for clustering), and the k-nearest neighbors learning algorithm (KNN), into an enhanced model, with mean performance equal to 85% in terms of the Pearson correlation coefficient (PCC) between observed and predicted values.
Keywords: aflatoxins, Aspergillus spp., dried figs, k-nearest neighbors, machine learning, prediction
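The prediction stage of such a pipeline can be sketched in miniature. The example below uses made-up soil features and a diagonal-covariance simplification of the Mahalanobis metric (the paper's actual model learns a full metric after PCA, which is out of scope here); it predicts an aflatoxin level as the mean over the k nearest orchards:

```python
import math
import statistics

def mahalanobis_diag(x, y, variances):
    # Simplified Mahalanobis distance assuming a diagonal covariance matrix;
    # a learned full-matrix metric, as in the paper, would replace this.
    return math.sqrt(sum((a - b) ** 2 / v for a, b, v in zip(x, y, variances)))

def knn_predict(train_X, train_y, query, k):
    variances = [statistics.pvariance(col) for col in zip(*train_X)]
    dists = sorted((mahalanobis_diag(query, x, variances), y)
                   for x, y in zip(train_X, train_y))
    # Predicted level = mean aflatoxin level of the k nearest orchards
    return sum(y for _, y in dists[:k]) / k

# Hypothetical orchards: (soil pH, electrical conductivity) -> aflatoxin level
train_X = [(6.5, 1.2), (6.6, 1.3), (8.0, 3.0), (8.1, 3.1)]
train_y = [2.0, 2.2, 9.0, 9.4]
pred = knn_predict(train_X, train_y, (6.4, 1.1), k=2)  # query near low-aflatoxin orchards
```

With k=2 the query's nearest neighbors are the two low-aflatoxin orchards, so the prediction is their mean, 2.1.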
Procedia PDF Downloads 182
1401 Knowledge Creation Environment in the Iranian Universities: A Case Study
Authors: Mahdi Shaghaghi, Amir Ghaebi, Fariba Ahmadi
Abstract:
Purpose: The main purpose of the present research is to analyze the knowledge creation environment at an Iranian university (Alzahra University), as a typical university in Iran, using a combination of the i-System and Ba models. This study is necessary for understanding the determinants of knowledge creation at Alzahra University. Methodology: This applied study used a descriptive survey method, combining the i-System and Ba models to analyze the knowledge creation environment at Alzahra University. The i-System consists of five constructs: intervention (input); intelligence, involvement, and imagination (process); and integration (output). The Ba environment has three pillars, namely the infrastructure, the agent, and the information. The integration of the two models yielded 11 constructs: intervention (input); infrastructure-intelligence, agent-intelligence, information-intelligence, infrastructure-involvement, agent-involvement, information-involvement, infrastructure-imagination, agent-imagination, and information-imagination (process); and integration (output). These 11 constructs were incorporated into a 52-statement questionnaire, and the validity and reliability of the questionnaire were examined and confirmed. The statistical population comprised the faculty members of Alzahra University (344 people), from whom 181 participants were selected through stratified random sampling. Descriptive statistics, the binomial test, regression analysis, and structural equation modeling (SEM) were used to analyze the data. Findings: Among the 11 research constructs, the levels of the intervention, information-intelligence, infrastructure-involvement, and agent-imagination constructs were average and not acceptable.
The levels of the infrastructure-intelligence and information-imagination constructs ranged from average to low, while the levels of the agent-intelligence and information-involvement constructs were exactly average. The level of the infrastructure-imagination construct was average to high and thus considered acceptable, and the levels of the agent-involvement and integration constructs were above average and highly acceptable. Furthermore, the regression analysis indicated that only two constructs, viz. information-imagination and agent-involvement, correlate positively and significantly with the integration construct. The structural equation modeling also revealed that the intervention, intelligence, and involvement constructs relate to the integration construct only through the complete mediation of imagination. Discussion and conclusion: The present research suggests that knowledge creation at Alzahra University broadly complies with the combined i-System and Ba models. Contrary to the model, however, the intervention, intelligence, and involvement constructs are not directly related to the integration construct, which seems to have three implications: 1) information sources are not frequently used to assess and identify research biases; 2) problem finding is probably of less concern at the end of studies, at the time of assessment and validation; 3) the involvement of others plays a smaller role in the summarization, assessment, and validation of research.
Keywords: i-System, Ba model, knowledge creation, knowledge management, knowledge creation environment, Iranian universities
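The complete-mediation finding rests on standard regression logic: the predictor's direct effect on the outcome should vanish once the mediator is controlled for. A minimal Baron-Kenny-style sketch of that check (fabricated scores, simple least squares via residualization rather than the study's full SEM) might look like:

```python
def slope(x, y):
    """Simple least-squares regression slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

def residuals(x, y):
    """Residuals of y after regressing out x (with intercept)."""
    b = slope(x, y)
    a = sum(y) / len(y) - b * sum(x) / len(x)
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

# Fabricated scores in which imagination fully mediates intervention -> integration
intervention = [1, 2, 3, 4, 5, 6]
noise = [0.1, -0.1, 0.2, -0.2, 0.1, -0.1]
imagination = [2 * v + e for v, e in zip(intervention, noise)]  # mediator driven by predictor
integration = [3 * m for m in imagination]                      # outcome driven only by mediator

total_effect = slope(intervention, integration)  # clearly nonzero
# Direct effect of intervention controlling for imagination (Frisch-Waugh residualization):
direct_effect = slope(residuals(imagination, intervention),
                      residuals(imagination, integration))      # ~0 under complete mediation
```

Under complete mediation the total effect is substantial while the mediator-adjusted direct effect is essentially zero, mirroring the pattern reported above.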
Procedia PDF Downloads 99
1400 Atomic Scale Storage Mechanism Study of the Advanced Anode Materials for Lithium-Ion Batteries
Authors: Xi Wang, Yoshio Bando
Abstract:
Lithium-ion batteries (LIBs) can deliver high energy storage density and offer long operating lifetimes, but their power density is too low for many important applications. We therefore developed new strategies and fabricated novel, easily synthesized electrodes for fast Li transport, including N-doped graphene-SnO2 sandwich papers, a bicontinuous nanoporous Cu/Li4Ti5O12 electrode, and binder-free N-doped graphene papers. In addition, using advanced in-situ TEM and STEM techniques together with theoretical simulations, we systematically studied their storage mechanisms at the atomic scale, shedding new light on the reasons for the ultrafast lithium storage and high capacity of these advanced anodes. For example, using advanced in-situ TEM, we directly investigated these processes in an individual CuO nanowire anode by constructing a LIB prototype within the TEM. Although they are promising candidates for anodes in LIBs, transition metal oxide anodes based on the so-called conversion mechanism typically suffer from severe capacity fading during the first lithiation-delithiation cycle. We also report atomistic insights into the GN energy storage as revealed by in-situ TEM: the lithiation process on edges and basal planes is directly visualized, the pyrrolic-N "hole" defect and the perturbed solid-electrolyte interface (SEI) configurations are observed, and the charge transfer states of three N-bonding forms are investigated.
In-situ HRTEM experiments together with theoretical calculations provide solid evidence that enlarged edge {0001} spacings and surface "hole" defects lead to improved surface capacitive effects and thus high rate capability, while the high capacity is owing to short-distance orderings at the edges during discharging and to the numerous surface defects; these phenomena could not previously be resolved by standard electron or X-ray diffraction analyses.
Keywords: in-situ TEM, STEM, advanced anode, lithium-ion batteries, storage mechanism
Procedia PDF Downloads 351