Search results for: partial shading conditions
1480 Study of Polychlorinated Dibenzo-P-Dioxins and Dibenzofurans Dispersion in the Environment of a Municipal Solid Waste Incinerator
Authors: Gómez R. Marta, Martín M. Jesús María
Abstract:
The general aim of this paper is to identify the areas of highest concentration of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) around the incinerator through the use of dispersion models. Atmospheric dispersion models are useful tools for estimating and preventing the impact of a particular source's emissions on air quality. These models take into account the factors that influence air pollution, namely the characteristics of the source, the topography of the receiving environment, and the weather conditions, in order to predict pollutant concentrations. After their emission into the atmosphere, PCDD/Fs are deposited on water or land, near to or far from the emission source depending on the size of the associated particles and on the climatology. In this way, they are transferred and mobilized through environmental compartments. The modelling of PCDD/Fs was carried out with the following tools: the Atmospheric Dispersion Modelling System (ADMS) and Surfer. ADMS is a Gaussian plume dispersion model used to assess the air quality impact of industrial facilities, and Surfer is a surface-mapping program used to represent the dispersion of pollutants on a map. For the modelling of emissions, the ADMS software mainly requires the following input parameters: the characterization of the emission sources (source type, height, diameter, release temperature, flow rate, etc.) and meteorological and topographical data (coordinate system). The study area was set at 5 km around the incinerator; the population center nearest to the PCDD/F emission source is approximately 2.5 km away. Data on both the incinerator's PCDD/F emissions and the meteorology of the study area were collected over one year (2013). The study used the averaging periods established by legislation; that is, the output parameters take the current legislation into account.
Once all the ADMS input data described above had been entered, the modelling was run in order to represent the spatial distribution of PCDD/F concentrations and the areas they affect. In general, the dispersion plume follows the direction of the predominant winds (southwest and northeast). Total PCDD/F levels usually found in air samples range from <2 pg/m³ in remote rural areas, through 2-15 pg/m³ in urban areas, to 15-200 pg/m³ in areas near important sources, such as an incinerator. The dispersion maps show that the maximum concentrations are on the order of 10⁻⁸ ng/m³, well below the values considered typical for areas close to an incinerator, as in this case.
Keywords: atmospheric dispersion, dioxin, furan, incinerator
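ADMS implements a Gaussian plume formulation; the ground-level form of the plume equation (with total reflection at the ground) can be sketched as follows. The power-law dispersion coefficients and every numerical value here are illustrative assumptions for a sketch, not the boundary-layer parameterisation that ADMS actually uses.

```python
import math

def plume_concentration(q, u, x, y, stack_height, a=0.08, b=0.06):
    """Ground-level concentration (g/m^3) of a Gaussian plume with
    total reflection at the ground.

    q: emission rate (g/s); u: wind speed (m/s); x: downwind and
    y: crosswind distance (m); stack_height: effective release height (m).
    The sigma power laws below are a crude illustrative fit only.
    """
    sigma_y = a * x ** 0.9   # crosswind spread (m)
    sigma_z = b * x ** 0.85  # vertical spread (m)
    return (q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-y ** 2 / (2.0 * sigma_y ** 2))
            * math.exp(-stack_height ** 2 / (2.0 * sigma_z ** 2)))

# Concentration near the first population center (~2.5 km downwind),
# on and off the plume centerline, for a hypothetical 1 g/s source.
on_axis = plume_concentration(1.0, 3.0, 2500.0, 0.0, 40.0)
off_axis = plume_concentration(1.0, 3.0, 2500.0, 500.0, 40.0)
```

For an elevated release the ground-level maximum occurs some distance downwind, which is why concentration maps over the whole study area, rather than the source location alone, are needed to find the most affected zones.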
Procedia PDF Downloads 217
1479 Non-Invasive Viscosity Determination of Liquid Organic Hydrogen Carriers by Alteration of Temperature and Flow Velocity Using Cavity Based Permittivity Measurement
Authors: I. Wiemann, N. Weiß, E. Schlücker, M. Wensing, A. Kölpin
Abstract:
Chemical storage of hydrogen in liquid organic hydrogen carriers (LOHC) is a very promising alternative to compression or cryogenics. These carriers have a high energy density and at the same time allow efficient and safe storage of hydrogen under ambient conditions and without leakage losses. Another benefit of LOHC is that it can be transported using the infrastructure already available for fossil fuels. Efficient use of LOHC depends on precise process control, which requires a number of sensors to measure all relevant process parameters, for example, the level of hydrogen loading of the carrier. The degree of loading determines the energy content of the storage carrier and simultaneously reflects the modification in the chemical structure of the carrier molecules. This variation can be detected in different physical properties such as viscosity, permittivity, or density; each degree of loading corresponds to different viscosity values. Conventional approaches currently use invasive viscosity measurements or near-line measurements to obtain quantitative information. Avoiding invasive measurements has several significant advantages, and efforts are currently being made to provide a non-invasive measurement method with equal or higher precision. This study investigates a method for determining the viscosity of LOHC. Since the viscosity can be derived from the degree of loading, permittivity is a target parameter, as it is suitable for determining the degree of hydrogenation. This research analyses the influence of common physical properties on permittivity. The permittivity measurement system is based on a cavity resonator, an electromagnetic resonant structure whose resonance frequency depends on its dimensions as well as on the permittivity of the medium inside. For known resonator dimensions, the resonance frequency therefore directly characterizes the permittivity.
In order to determine the dependency of the permittivity on temperature and flow velocity, an experimental setup with a heating device and a flow test bench was designed. By varying the temperature in the range of 293.15 K to 393.15 K and the flow velocity up to 140 mm/s, corresponding shifts in the resonance frequency on the order of hundredths of a GHz were measured.
Keywords: liquid organic hydrogen carriers, measurement, permittivity, viscosity, temperature, flow process
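For an ideal cavity fully filled with a low-loss dielectric, the resonance frequency scales as 1/sqrt(eps_r), so the permittivity follows directly from the frequency shift between the empty and the filled cavity. A minimal sketch under that idealised assumption (a real LOHC sensor additionally needs calibration for partial filling and dielectric losses); the frequency values are hypothetical:

```python
def relative_permittivity(f_empty_hz, f_filled_hz):
    """Relative permittivity estimated from a cavity's resonance shift.

    Assumes the ideal scaling f ~ 1/sqrt(eps_r) for a fully filled,
    low-loss cavity, hence eps_r = (f_empty / f_filled)**2.
    """
    return (f_empty_hz / f_filled_hz) ** 2

# A shift from 2.400 GHz (empty) to 2.380 GHz (filled) -- hypothetical
# values in the "hundredths of a GHz" range reported above.
eps_r = relative_permittivity(2.400e9, 2.380e9)
```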
Procedia PDF Downloads 100
1478 Three-Dimensional Fluid-Structure-Thermal Coupling Dynamics Simulation Model of a Gas-Filled Fluid-Resistance Damper and Experimental Verification
Authors: Wenxue Xu
Abstract:
The fluid-resistance damper is an important damping element for attenuating vehicle vibration: it converts vibration energy into dissipated thermal energy through oil throttling, a typical fluid-solid-heat coupling problem. A complete three-dimensional fluid-structure-thermal coupling dynamics simulation model of a gas-filled fluid-resistance damper was established. The flow-condition-based interpolation (FCBI) method with a direct coupling calculation, and the FCBI-C fluid numerical analysis method with an iterative coupling calculation, are used to obtain the dynamic response of the damper to sinusoidal excitation of the piston rod. The effects of the system parameters (air-chamber inflation pressure, spring compression characteristics, constant-flow-passage cross-sectional area, oil parameters, etc.) and of the excitation parameters (frequency, amplitude, etc.) on the differential pressure characteristics, velocity characteristics, flow characteristics, dynamic response of the valve opening, floating-piston response, and piston-rod output force characteristics are analyzed and compared in detail. Experiments were carried out for some of the simulated conditions. The results show that the node-based FCBI fluid numerical analysis method with direct coupling better guarantees conservation in the flow field calculation and allows a larger calculation step, but requires more memory. If the chamber inflation pressure is too low, the damper will cavitate, while raising the inflation pressure increases the hysteresis of the velocity characteristic and makes the sealing requirements overly strict.
The spring compression characteristics have a great influence on the damping characteristics of the damper, and achieving reasonable damping characteristics requires properly designing the spring compression characteristics. The larger the cross-sectional area of the constant flow channel, the smaller the maximum output force, but the more stable the opening of the valve plate.
Keywords: damper, fluid-structure-thermal coupling, heat generation, heat transfer
Procedia PDF Downloads 144
1477 Reformed Land: Extent of Use and Contribution to Livelihoods in the Waterberg District
Authors: A. J. Netshipale, M. L. Mashiloane, S. J. Oosting, I. J. M. De Boer, E. N. Raidimi
Abstract:
A three-tier land reform programme (land restitution, land redistribution, and land tenure reform) has been implemented over the past two decades in South Africa with the aim of redressing the unjust land-ownership patterns of the past. Land restitution and redistribution sought to make land available for beneficiaries' ownership based on policy guidelines. Attention given to the two sub-programmes focused mostly on land reform itself, with the quantity of land that changed ownership used as the measure of success and little regard for how the beneficiaries use the land for their livelihoods. In the few cases where land use was assessed for the two sub-programmes, the assessment covered a single case or a few selected cases; the current study was intended to shed light on a broader scope. It investigated the extent to which land reform farms were used and the contribution the farms made to the livelihoods of active beneficiaries. Seventy-six farms representing the restitution (16 farms) and redistribution (60 farms) programmes were selected for the land use investigation. Land use data were collected from farm representatives by means of a semi-structured questionnaire. A stratified sample of 87 households (38 for restitution and 49 for redistribution) was selected for the livelihood investigation. Data on income-generating activities and passive income sources were collected from household heads using a semi-structured questionnaire. Additional data were collected through focus group discussions and from stakeholders through key-informant interviews. Under the land redistribution programme, livestock production used the most land per farm on average (45%), out of an average total of 77% of farm land in use. Land restitution transformed crop farms into mixed farming and brought unused farms into use, while land redistribution converted conservation land into agricultural land and also brought unused farms into use.
Livestock production contributed on average 25% to the livelihoods of 48% of the households, whereas crop production contributed on average 31% to the livelihoods of 67% of the households. Government grants made the highest contribution (54% on average) and contributed to the most households (72%). Agriculture was the sole source of livelihood for only three percent of the households. Most households (40%) combined three livelihood sources as their livelihood strategy. It can be concluded that the use of reformed land is mainly influenced by the agro-ecological conditions of the area, and that agriculture cannot be the main source of livelihood for households that benefited from land reform. Land reform policies that accommodate diverse livelihood activities could contribute to sustainable livelihoods.
Keywords: active beneficiaries, households, land reform, land use, livelihoods
Procedia PDF Downloads 198
1476 Nursing Education in Estonia During the Years of Occupation: Paternalism and Ideology
Authors: Merle Talvik, Taimi Tulva, Kristi Puusepp, Ülle Ernits
Abstract:
Background data. In 1940–1941 and 1945–1991, Estonia was occupied by the Soviet Union. Paternalism was a common principle in Soviet social policy, including health care: the Soviet government, not individuals themselves, decided how a person's quality of life was to be achieved. With the help of Soviet ideology, the work culture of nurses was constructed, and the education system was reshaped according to that ideology. The "new period of awakening" was initiated under Gorbachev's perestroika and glasnost (1985–1991), leading to democratization. Aim. This qualitative study aimed to analyze nursing education in Soviet Estonia under conditions of paternalistic orientation and ideological pressure. Method. The research was conducted in 2021 and 2023. Senior nurses (aged 69–87) who had worked for at least 20 years during the Soviet era were surveyed. Thematic interviews were conducted in written form and orally (13 interviewees), followed by a focus group interview (8 interviewees). A thematic content analysis was performed. Results. Nursing is part of a society's culture, and in this sense interviews with nurses provide critical information about the functioning of society and cultural identity at a given time. During the Soviet era, the training of nurses occurred within vocational training institutions. The curricula underwent a shift towards a Soviet-oriented approach: a significant portion of lessons was dedicated to imparting the principles and tenets of Communist-Marxist ideology, so practical subjects and nursing theory were frequently allocated limited space. A paternalistic orientation prevailed in health care: just as the state regulated how to cure, how to spread hygiene, and how to propagate healthy lifestyles, training was also determined by the management of the institution, thereby limiting a person's autonomy to decide what kind of training was needed.
The research is of significant value in the context of the history of nursing, as it helps in understanding the difficulties and complexity of the development of nursing over time. The Soviet era still affects Estonian society today and will continue to do so in the future. Developments of the same type occurred in other post-Soviet countries.
Keywords: Estonian SSR, nursing education, paternalism, senior nurse, Soviet ideology
Procedia PDF Downloads 67
1475 FEM and Experimental Modal Analysis of Computer Mount
Authors: Vishwajit Ghatge, David Looper
Abstract:
Over the last few decades, oilfield service rolling equipment has significantly increased in weight, primarily because of emissions regulations, which require larger and heavier engines, larger cooling systems, and, in some cases, emissions after-treatment systems. Larger engines cause more vibration and shock loads, leading to failure of electronics and control systems. If the engine's vibration frequency matches the system's natural frequency, strong resonance is observed in structural parts and mounts. One such existing automated control equipment system, comprising wire rope mounts used for mounting computers, was designed approximately 12 years ago. It includes an industrial-grade computer to control the system operation. The original computer had a smaller, lighter enclosure. After a few years, a newer computer version was introduced, which was 10 lbm heavier. Some failures of internal computer parts have been documented for cases in which the old mounts were used. Because of the added weight, the two brackets can impact each other under off-road conditions, causing a high shock input to the computer parts. This added failure mode requires validating the existing mount design for the new, heavier computer. This paper discusses the modal finite element method (FEM) analysis and experimental modal analysis conducted to study the effects of vibration on the wire rope mounts and the computer. The existing mount was modeled in ANSYS software, and the resulting mode shapes and frequencies were obtained. The experimental modal analysis was then conducted, and the actual frequency responses were observed and recorded. The results clearly revealed that at the resonance frequency the brackets were colliding and potentially causing damage to computer parts. To solve this issue, spring mounts of different stiffnesses were modeled in ANSYS software, and the resonant frequency was determined.
Increasing the stiffness of the system moved the resonant frequency away from the frequency window in which the engine showed heavy vibration or resonance. After multiple iterations in ANSYS software, the stiffness of the spring mount was finalized and then experimentally validated.
Keywords: experimental modal analysis, FEM modal analysis, frequency, modal analysis, resonance, vibration
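The stiffness-frequency lever used above is, in its simplest form, that of an undamped single-degree-of-freedom system, f = sqrt(k/m) / (2π). A minimal sketch with hypothetical mount stiffness and computer mass values (the actual system is a multi-degree-of-freedom FEM model, not a single spring-mass):

```python
import math

def natural_frequency_hz(stiffness_n_per_m, mass_kg):
    """Undamped single-DOF natural frequency: f = sqrt(k/m) / (2*pi)."""
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2.0 * math.pi)

# Hypothetical values: a 25 kg computer enclosure on mounts of two
# candidate stiffnesses. Quadrupling k doubles the resonant frequency,
# which is how stiffer mounts move resonance out of the engine's
# excitation window.
soft = natural_frequency_hz(1.0e5, 25.0)
stiff = natural_frequency_hz(4.0e5, 25.0)
```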
Procedia PDF Downloads 321
1474 Influence of Wind Induced Fatigue Damage in the Reliability of Wind Turbines
Authors: Emilio A. Berny-Brandt, Sonia E. Ruiz
Abstract:
Steel tubular towers serving as support structures for large wind turbines are subject to several hundred million stress cycles arising from the turbulent nature of the wind. This causes high-cycle fatigue, which can govern tower design. The practice of maintaining the support structure after wind turbines reach their typical 20-year design life has become common, but without quantifying the resulting changes in the reliability of the tower. There are several studies on this topic, but most are based on the S-N curve approach with Miner's rule for damage summation, the de facto standard in the wind industry. However, the qualitative nature of Miner's method makes it desirable to use fracture mechanics to measure the effects of fatigue on the capacity curve of the structure, which is important for evaluating the integrity and reliability of these towers. Temporally and spatially varying wind speed time histories are simulated based on power spectral density and coherence functions. The simulations are then applied to a SAP2000 finite element model, and step-by-step analysis is used to obtain the stress time histories for a range of representative wind speeds expected during service conditions of the wind turbine. The rainflow method is then used to obtain cycle and stress range information from each of these time histories, and a statistical analysis is performed to obtain the distribution parameters of each variable. Monte Carlo simulation is used to evaluate crack growth over time at the tower base using the Paris-Erdogan equation. A nonlinear static pushover analysis is performed to assess the capacity curve of the structure after a number of years. The capacity curves are then used to evaluate the changes in reliability of a steel tower located in Oaxaca, Mexico, where wind energy facilities are expected to grow in the near future.
The results show that fatigue at the tower base can have significant effects on the structural capacity of the wind turbine, especially after the 20-year design life, when the crack growth curve starts behaving exponentially.
Keywords: crack growth, fatigue, Monte Carlo simulation, structural reliability, wind turbines
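The Paris-Erdogan law referenced above, da/dN = C(ΔK)^m with ΔK = Y·Δσ·sqrt(πa), can be integrated numerically in blocks of cycles. The material constants, geometry factor, and stress range below are hypothetical placeholders for a sketch, not the values used for the Oaxaca tower:

```python
import math

def grow_crack(a0_m, delta_sigma_mpa, cycles, c=1.0e-11, m=3.0,
               y=1.12, block=1000):
    """Integrate the Paris-Erdogan law da/dN = C * (dK)**m, with
    dK = Y * dsigma * sqrt(pi * a), forward-Euler in blocks of cycles.

    Units: a in m, dsigma in MPa, dK in MPa*sqrt(m). C, m, Y and the
    stress range are placeholder values, not fitted material data.
    """
    a = a0_m
    n = 0
    while n < cycles:
        dk = y * delta_sigma_mpa * math.sqrt(math.pi * a)
        a += c * dk ** m * block
        n += block
    return a

# A 1 mm initial flaw under a 50 MPa stress range.
a_one = grow_crack(0.001, 50.0, 1_000_000)
a_two = grow_crack(0.001, 50.0, 2_000_000)
```

Because ΔK grows with crack length, the increment per cycle accelerates, which is the "exponential" late-life behaviour noted in the results.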
Procedia PDF Downloads 517
1473 The Accuracy of an 8-Minute Running Field Test to Estimate Lactate Threshold
Authors: Timothy Quinn, Ronald Croce, Aliaksandr Leuchanka, Justin Walker
Abstract:
Many endurance athletes train at or just below an intensity associated with their lactate threshold (LT), and often the heart rate (HR) these athletes use for their LT is above their true LT-HR as measured in a laboratory. Training above the true LT-HR may lead to overtraining and injury. Few athletes have the capability of measuring their LT in a laboratory and instead rely on perception to guide them, as accurate field tests to determine LT are limited. Therefore, the purpose of this study was to determine whether an 8-minute field test could accurately define the HR associated with LT as measured in the laboratory. On Day 1, fifteen male runners (mean±SD; age, 27.8±4.1 years; height, 177.9±7.1 cm; body mass, 72.3±6.2 kg; body fat, 8.3±3.1%) performed a discontinuous treadmill LT/maximal oxygen consumption (LT/VO2max) test using a portable metabolic gas analyzer (Cosmed K4b2) and a lactate analyzer (Analox GL5). The LT (and associated HR) was determined using the 1/+1 method, in which blood lactate increases by 1 mmol·L⁻¹ over baseline followed by an additional 1 mmol·L⁻¹ increase. Days 2 and 3 were randomized, and the athletes performed an 8-minute run either on the treadmill (TM) or on a 160-m indoor track (TR), attempting to cover as much distance as possible while maintaining a high intensity throughout the entire 8 minutes. VO2, HR, ventilation (VE), and respiratory exchange ratio (RER) were measured using the Cosmed system, and the rating of perceived exertion (RPE; 6-20 scale) was recorded every minute. All variables were averaged over the 8 minutes. The total distance covered over the 8 minutes was measured in both conditions. At the completion of the 8-minute runs, blood lactate was measured. Paired-sample t-tests and pairwise Pearson correlations were computed to determine the relationships between the variables measured in the field tests and those obtained in the laboratory at LT. An alpha level of <0.05 was required for statistical significance.
The HR (mean±SD) during the TM (167±9 bpm) and TR (172±9 bpm) tests was strongly correlated with the HR measured during the laboratory LT test (169±11 bpm) (r=0.68, p<0.03 and r=0.88, p<0.001, respectively). Blood lactate values during the TM and TR tests were not different from each other but were strongly correlated with the laboratory LT values (r=0.73, p<0.04 and r=0.66, p<0.05, respectively). VE was significantly greater during the TR (134.8±11.4 L·min⁻¹) than during the TM (123.3±16.2 L·min⁻¹), with moderately strong correlations to the laboratory threshold values (r=0.38, p=0.27 and r=0.58, p=0.06, respectively). VO2 was higher during the TR (51.4 ml·kg⁻¹·min⁻¹) than during the TM (47.4 ml·kg⁻¹·min⁻¹), with correlations to the threshold values of 0.33 (p=0.35) and 0.48 (p=0.13), respectively. The total distance run was significantly greater during the TR (2331.6±180.9 m) than during the TM (2177.0±232.6 m), but the two were strongly correlated with each other (r=0.82, p<0.002). These results suggest that an 8-minute running field test can accurately predict the HR associated with the LT and may be a simple test that athletes and coaches could implement to aid their training.
Keywords: blood lactate, heart rate, running, training
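The pairwise Pearson correlations reported above can be computed directly from paired samples. A minimal stdlib sketch (the HR values below are hypothetical illustrations, not the study's data):

```python
def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of paired samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical paired field-test vs laboratory LT heart rates (bpm).
field_hr = [160, 165, 170, 172, 168, 175]
lab_hr = [158, 162, 169, 170, 165, 171]
r = pearson_r(field_hr, lab_hr)
```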
Procedia PDF Downloads 252
1472 Investigating the Indoor Air Quality of the Respiratory Care Wards
Authors: Yu-Wen Lin, Chin-Sheng Tang, Wan-Yi Chen
Abstract:
Various biological specimens, drugs, and chemicals exist in a hospital, and medical staff and hypersensitive inpatients might be exposed to multiple hazards while they work or stay there. The indoor air quality (IAQ) of the hospital therefore deserves more attention. Respiratory care wards (RCWs) are responsible for caring for patients who cannot breathe spontaneously without ventilators, and the patients in an RCW are easily infected. In other studies, RCWs showed higher bacterial concentrations than other hospital units. This research monitored the IAQ of an RCW and checked compliance with the indoor air quality standards of the Taiwan Indoor Air Quality Act. Meanwhile, the influential factors of IAQ and the impacts of the ventilator modules, with humidifier or with filter, were investigated. The IAQ of two five-bed wards and one nurse station of an RCW in a regional hospital was monitored. Monitoring proceeded for 16 or 24 hours on each sampling day, with a sampling frequency of 20 minutes per hour, and was performed for two days in a row; in total, the IAQ of the RCW was measured for eight days. The concentrations of carbon dioxide (CO₂), carbon monoxide (CO), particulate matter (PM), nitrogen oxides (NOₓ), and total volatile organic compounds (TVOCs), along with relative humidity (RH) and temperature, were measured by direct-reading instruments. Bioaerosol samples were taken hourly. The hourly air change rate (ACH) was calculated by measuring the air ventilation volume. Human activities were recorded during the sampling period. A linear mixed model (LMM) was applied to identify the factors influencing IAQ. The concentrations of CO, CO₂, PM, bacteria, and fungi exceeded the Taiwan IAQ standards. The major factors affecting the concentrations of CO, PM₁, and PM₂.₅ were the location and the number of inpatients.
The significant factors altering the CO₂ and TVOC concentrations were the location and the numbers of in-and-out staff and inpatients. The number of in-and-out staff and the level of activity had statistically significant effects on the PM₁₀ concentrations. The level of activity and the numbers of in-and-out staff and inpatients were the significant factors changing the bacteria and fungi concentrations. Different models of the patients' ventilators did not affect the IAQ significantly. The results of the LMM can be utilized to predict pollutant concentrations under various environmental conditions, and this study would be a valuable reference for the air quality management of RCWs.
Keywords: respiratory care ward, indoor air quality, linear mixed model, bioaerosol
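The hourly air change rate (ACH) mentioned above is the ventilation flow divided by the room volume, with the flow obtainable from a measured face velocity and opening area. A minimal sketch with hypothetical numbers (not the monitored wards' measurements):

```python
def ventilation_flow_m3_per_h(face_velocity_m_s, opening_area_m2):
    """Volumetric ventilation flow from a measured face velocity."""
    return face_velocity_m_s * opening_area_m2 * 3600.0

def air_changes_per_hour(flow_m3_per_h, room_volume_m3):
    """ACH: how many times per hour the room air volume is replaced."""
    return flow_m3_per_h / room_volume_m3

# Hypothetical five-bed ward: 0.5 m/s through a 0.25 m^2 supply grille,
# room volume 150 m^3.
flow = ventilation_flow_m3_per_h(0.5, 0.25)
ach = air_changes_per_hour(flow, 150.0)
```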
Procedia PDF Downloads 107
1471 Rare Internal Organ Trauma in Adolescent Athletes: Insights from a Pancreatic Injury Case Study
Authors: Muhandiram Rallage Ruvini Nisansala Yatigammana, Anuruddhika Kumudu Kumari Rajakaruna Jayathilaka
Abstract:
Sports injuries are common among teenagers and children engaged in organized sports. While most sports injuries are typical, some rare occurrences involve conditions such as eye, dental, cervical, and rare internal organ injuries, including pancreatic injuries. These injuries, especially traumatic pancreatitis, require prompt attention due to their potential for severe and sometimes fatal complications. This case revolves around a real accident involving a 12-year-old girl, Piyumi, who suffered a face-to-face collision during netball practice, resulting in severe abdominal pain. After a medical examination, she was diagnosed with a pancreatic injury, which is rarer in children than in adults. Piyumi had a grade 3 pancreatic injury and underwent non-surgical management, healing successfully without surgery. The study attempts to fill empirical and population gaps by addressing a rarely discussed injury experienced by a 12-year-old female netball player, and it provides an in-depth understanding of pancreatic injury as a rare sports injury. The study's main objective was to investigate the incidence and characteristics of pancreatic trauma among children and adolescents engaged in high-impact sports such as netball. The research adopted a case study strategy, employing interviews as the primary data collection method. Interviews were conducted with Piyumi, her parents, and the two specialist doctors directly involved in her treatment, providing firsthand accounts and insights. By examining the case, the paper arrives at three main conclusions. Firstly, pancreatic damage is uncommon, especially in sport, and proper diagnosis is essential to avoiding health complications, particularly in minors. Secondly, computed tomography (CT) was useful in locating the injury, as such injuries are well visualized on CT images.
Finally, and most importantly, although pancreatic injuries are infrequent, trauma can still occur, particularly in high-impact sports or in accidents involving extreme force or falls. These injuries should be accurately diagnosed and treated promptly.
Keywords: child athlete, pancreatic injury, rare sports injuries, sportswoman
Procedia PDF Downloads 73
1470 The Willingness to Pay of People in Taiwan for Flood Protection Standard of Regions
Authors: Takahiro Katayama, Hsueh-Sheng Chang
Abstract:
Global climate change has increased the extreme rainfall that leads to serious floods around the world. In recent years, urbanization and population growth have also increased the extent of impervious surfaces, resulting in significant losses of life and property during floods, especially in the urban areas of Taiwan. In the past, the primary governmental response to floods was structural flood control, and the only flood protection standards in use were design standards. However, these design standards for flood control facilities are generally calculated from current hydrological conditions; in the face of future extreme events, there is a high probability that existing design standards will be surpassed, causing direct and indirect damage to the public. To cope with the frequent occurrence of floods in recent years, it has been pointed out that Taiwan needs a different standard, called the FPSR (Flood Protection Standard of Regions). The FPSR is mainly used for disaster reduction and is meant to ensure that hydraulic facilities can drain a regional flood promptly for a specific return period. The FPSR can convey a level of flood risk that is useful for land use planning and can reflect the disaster conditions that a region can bear. However, little has been reported on the FPSR and its impacts on the public in Taiwan. Hence, this study proposes a quantitative procedure to evaluate the FPSR. It examines the FPSR of a region, public perceptions of and knowledge about the FPSR, and the public's WTP (willingness to pay) for the FPSR. The research is conducted via a literature review and a questionnaire survey. Firstly, this study reviews the domestic and international research on the FPSR and provides its theoretical framework.
Secondly, the contingent valuation method (CVM) was employed in the survey, using a double-bounded dichotomous-choice, close-ended format to elicit households' WTP for raising the protection level and thereby to understand the social costs. The sample consists of citizens of Taichung City, Taiwan; 700 respondents were selected for this study. The research will continue with the surveys, identify the factors that determine WTP, and provide recommendations for flood adaptation policies in the future.
Keywords: climate change, CVM (Contingent Valuation Method), FPSR (Flood Protection Standard of Regions), urban flooding
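In the double-bounded dichotomous-choice format, the answer to an initial bid and to one follow-up bid (higher after a "yes", lower after a "no") brackets each household's WTP into an interval, on which the likelihood of the CVM model is then built. A sketch of that interval mapping (the bid amounts are hypothetical, not the survey's bid design):

```python
def wtp_interval(first_bid, lower_bid, upper_bid, first_yes, second_yes):
    """Interval known to contain a respondent's WTP in the
    double-bounded dichotomous-choice format.

    After a 'yes' to the first bid the follow-up is upper_bid; after a
    'no' it is lower_bid. float('inf') marks an open upper end.
    """
    if first_yes and second_yes:
        return (upper_bid, float("inf"))
    if first_yes:
        return (first_bid, upper_bid)
    if second_yes:
        return (lower_bid, first_bid)
    return (0.0, lower_bid)

# Hypothetical bids in New Taiwan dollars: 500 first, 1000 up, 250 down.
interval = wtp_interval(500, 250, 1000, True, False)
```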
Procedia PDF Downloads 249
1469 LWD Acquisition of Caliper and Drilling Mechanics in a Geothermal Well, A Case Study in Sorik Marapi Field – Indonesia
Authors: Vinda B. Manurung, Laila Warkhaida, David Hutabarat, Sentanu Wisnuwardhana, Christovik Simatupang, Dhani Sanjaya, Ashadi, Redha B. Putra, Kiki Yustendi
Abstract:
The geothermal drilling environment presents many obstacles that have limited the use of directional drilling and logging-while-drilling (LWD) technologies, such as borehole washout, mud losses, severe vibration, and high temperature. The case study presented in this paper demonstrates a practice for enhancing data logging in geothermal drilling by deploying advanced telemetry and LWD technologies, aiming at continuous improvement in geothermal drilling operations. The case study covers the 12.25-in. hole section of well XX-05 in Pad XX of the Sorik Marapi Geothermal Field. The LWD string consists of electromagnetic (EM) telemetry, pressure while drilling (PWD), vibration (DDSr), and acoustic caliper (ACAL) tools. Through this tool configuration, the operator acquired drilling mechanics and caliper logs in real-time and recorded modes, enabling effective monitoring of wellbore stability. Throughout the real-time acquisition, EM-PPM telemetry provided a data rate three times faster to the surface unit. With the integration of caliper data and drilling mechanics data (vibration and ECD, equivalent circulating density), the borehole conditions were more visible to the directional driller, allowing better control of drilling parameters to minimize vibration and achieve optimum hole cleaning in washed-out or tight formation sequences. After the well reached TD, the recorded data from the caliper sensor indicated an average washout of 8.6% for the entire 12.25-in. interval. Washout intervals were compared with loss occurrences, showing the caliper's potential as an indirect indicator of fractured intervals and validating the fault trend prognosis. This LWD case study has added value to geothermal borehole characterization for both drilling operations and the subsurface. The challenges identified while running LWD in this geothermal environment, such as the effect of tool eccentricity and the impact of vibration, need to be addressed for future improvements.
A perusal of both real-time and recorded drilling mechanics and caliper data has opened various possibilities for maximizing sensor usage in future wells.
Keywords: geothermal drilling, geothermal formation, geothermal technologies, logging-while-drilling, vibration, caliper, case study
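The average-washout figure quoted above follows from a simple normalization of caliper readings against the nominal bit size; a minimal sketch, assuming caliper values in inches against the 12.25-in. bit size (the example readings are illustrative, not the well's log):

```python
def washout_percent(caliper_in, bit_size_in=12.25):
    # Hole enlargement relative to the nominal bit size, in percent
    return 100.0 * (caliper_in - bit_size_in) / bit_size_in

def average_washout(caliper_log, bit_size_in=12.25):
    # Mean washout over a logged interval of caliper readings (inches)
    return sum(washout_percent(c, bit_size_in) for c in caliper_log) / len(caliper_log)
```

In practice the caliper log would be sampled per depth station, and intervals where the average exceeds a chosen cutoff would be flagged for comparison with loss occurrences.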
Procedia PDF Downloads 131
1468 Nurses as Being Participants of Sexual Health of Women
Authors: Malika Turganova, Aigul Abduldayeva
Abstract:
Modern conditions require nursing innovations at the primary ambulatory stage in the health system of Kazakhstan. There is a growing need for nurses who conduct pre-doctor preventive interviews with the female population about reproductive health. We conducted a questionnaire survey of the population of Astana in 2015. Questionnaires were drawn up according to the World Health Organization criteria for sexual health. Of 8000 questionnaires distributed, 3593 respondents agreed to answer the questions anonymously (mM=±2.1). The average age of the women was 37.4±11.2 years (Me=31.7). Analysis of awareness about marriage hygiene revealed that 72.7% of respondents did not receive information about marriage hygiene, and 89.1% of respondents considered it more advisable to receive it before marriage. 45.9% of respondents indicated the internet as their source of information on marriage hygiene issues, 24.5% pointed to friends, and 21.5% to a doctor. Comparing female age groups under and over 40 years old, the proportion of cases in which parents provided information about marriage hygiene was 4.3% (χ2=9.8, p<0.05). The most important factor in preserving women's reproductive health is addressing the problem of unwanted pregnancy, and in this respect, the responsibility lies equally with men and women. Analysis of contraceptive methods by ranking showed the three most frequently used methods: condoms (29.3%), coitus interruptus (18.7%), and hormonal preparations (16.9%).
An additional oral survey of the population showed a low level of informational support of the female population by family physicians and by health care professionals of educational organizations (schools, universities, and colleges) regarding hormonal contraceptives. Females of both age groups tended to believe that hormonal contraceptives cause side effects such as blastoma, cancer, increased body weight, and varicose veins of the lower limbs. Satisfaction with the frequency of sexual relations among the respondents was 57.6%; women under 40 years of age were the most satisfied among the age groups (χ2=5.8, p<0.05).
Keywords: nurse, public health service of Kazakhstan, reproductive and sexual health, trust of population
Procedia PDF Downloads 273
1467 Application of Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Multipoint Optimal Minimum Entropy Deconvolution in Railway Bearings Fault Diagnosis
Authors: Yao Cheng, Weihua Zhang
Abstract:
Although the measured vibration signal contains rich information on machine health conditions, white noise interference and the discrete harmonics coming from the blades, shaft, and gear mesh make the fault diagnosis of rolling element bearings difficult. In order to overcome the interference of useless signals, a new fault diagnosis method combining Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and Multipoint Optimal Minimum Entropy Deconvolution (MOMED) is proposed for the fault diagnosis of high-speed train bearings. First, the CEEMDAN technique is applied to adaptively decompose the raw vibration signal into a series of finite intrinsic mode functions (IMFs) and a residue. Compared with Ensemble Empirical Mode Decomposition (EEMD), CEEMDAN provides an exact reconstruction of the original signal and a better spectral separation of the modes, which improves the accuracy of fault diagnosis. An effective sensitivity index based on the Pearson correlation coefficients between the IMFs and the raw signal is adopted to select the sensitive IMFs that contain bearing fault information. The composite signal of the sensitive IMFs is used for further fault identification. Next, in order to identify the fault information precisely, MOMED is utilized to enhance the periodic impulses in the composite signal. As a non-iterative method, MOMED has better deconvolution performance than classical deconvolution methods such as Minimum Entropy Deconvolution (MED) and Maximum Correlated Kurtosis Deconvolution (MCKD). Third, envelope spectrum analysis is applied to detect the existence of a bearing fault. Simulated bearing fault signals with white noise and discrete harmonic interference are used to validate the effectiveness of the proposed method. Finally, the superiority of the proposed method is further demonstrated on high-speed train bearing fault datasets measured from a test rig.
The analysis results indicate that the proposed method has strong practicability.
Keywords: bearing, complete ensemble empirical mode decomposition with adaptive noise, fault diagnosis, multipoint optimal minimum entropy deconvolution
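The IMF-selection and envelope-spectrum steps described above can be sketched in a few lines; a minimal illustration assuming the IMFs have already been produced by some CEEMDAN implementation (e.g. the PyEMD package), with the Pearson threshold of 0.3 chosen arbitrarily:

```python
import numpy as np
from scipy.signal import hilbert

def select_sensitive_imfs(imfs, raw, threshold=0.3):
    # imfs: array of shape (n_imfs, n_samples), e.g. from a CEEMDAN
    # decomposition. Keep the IMFs whose Pearson correlation with the
    # raw signal exceeds the sensitivity threshold and recombine them.
    corrs = np.array([abs(np.corrcoef(imf, raw)[0, 1]) for imf in imfs])
    return imfs[corrs >= threshold].sum(axis=0)

def envelope_spectrum(x, fs):
    # Amplitude spectrum of the Hilbert envelope; bearing fault
    # frequencies show up as peaks in this spectrum.
    env = np.abs(hilbert(x))
    env -= env.mean()
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs, np.abs(np.fft.rfft(env)) * 2.0 / len(x)
```

The deconvolution step (MOMED) would be applied to the composite signal before the envelope spectrum; it is omitted here because no widely standardized library routine exists for it.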
Procedia PDF Downloads 374
1466 Assessing the Social Impacts of a Circular Economy in the Global South
Authors: Dolores Sucozhañay, Gustavo Pacheco, Paul Vanegas
Abstract:
In the context of sustainable development and the transition towards a sustainable circular economy (CE), evaluating the social dimension remains a challenge; developing a suitable methodology is therefore highly important. First, the change of economic model may cause significant social effects, which today remain unaddressed. Second, at the current level of globalization, CE implementation requires targeting global material cycles and causes social impacts on potentially vulnerable social groups. A promising methodology is Social Life Cycle Assessment (SLCA), which embraces the philosophy of life cycle thinking and provides information complementary to environmental and economic assessments. In this context, the present work uses the updated SLCA Guidelines 2020 to assess the social performance of the recycling system of Cuenca, Ecuador, to exemplify a social assessment method. Like many other developing countries, Ecuador depends heavily on the work of informal waste pickers (recyclers), who, even while contributing to a CE, face harsh socio-economic circumstances, including inappropriate working conditions, social exclusion, and exploitation. Under a Reference Scale approach (Type 1), 12 impact subcategories were assessed through 73 site-specific inventory indicators, using an ascending reference scale ranging from -2 to +2. Findings reveal a social performance below compliance with local and international laws, basic societal expectations, and practices in the recycling sector; only eight and five indicators present a positive score. In addition, a social hotspot analysis depicts collection as the most time-consuming life cycle stage and the one with the most hotspots, mainly related to working hours and health and safety aspects.
This study provides an integrated view of the recyclers’ contributions, challenges, and opportunities within the recycling system while highlighting the relevance of assessing the social dimension of CE practices. It also fosters an understanding of the social impact of CE operations in developing countries, highlights the need for a close north-south relationship in CE, and enables the connection among the environmental, economic, and social dimensions.
Keywords: SLCA, circular economy, recycling, social impact assessment
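Under a Type 1 Reference Scale approach, indicator scores on the -2 to +2 scale are typically aggregated to the subcategory level before hotspot flagging; a minimal sketch, where the subcategory names, scores, and hotspot cutoff are illustrative, not the study's inventory:

```python
from statistics import mean

# Illustrative indicator scores on the -2..+2 reference scale, grouped
# by impact subcategory (names and values are hypothetical).
scores = {
    "health_and_safety": [-2, -1, -1, 0],
    "working_hours": [-2, -2, -1],
    "fair_salary": [-1, 0, 1],
}

def subcategory_scores(scores):
    # Mean indicator score per subcategory; 0 marks basic compliance.
    return {sub: mean(vals) for sub, vals in scores.items()}

def hotspots(scores, cutoff=-1.0):
    # Subcategories at or below the cutoff are flagged as social hotspots.
    return [sub for sub, m in subcategory_scores(scores).items() if m <= cutoff]
```

Real SLCA studies weight indicators and may aggregate further to stakeholder categories; the unweighted mean here is only the simplest possible roll-up.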
Procedia PDF Downloads 151
1465 Shaping Work Engagement through Intra-Organizational Coopetition: Case Study of the University of Zielona Gora in Poland
Authors: Marta Moczulska
Abstract:
One of the most important aspects of human management in an organization is work engagement. In spite of the different perspectives on engagement, it can be seen as expressed in the activity of the individual in performing tasks and in the functioning of the organization. At the same time, it is considered not only in its behavioural but also its cognitive and emotional dimensions. Previous studies were related to the sources, predictors, and determinants of engagement, including organizational ones. Attention was paid to the importance of needs (including belonging, success, development, and a sense of meaningful work), values (such as trust, honesty, respect, and justice), and interpersonal relationships, especially with the supervisor. Taking these into account, together with theories of human action, behaviour in organizations, and interactions, it was recognized that engagement can be shaped through cooperation and competition. It was assumed that to shape work engagement, it is necessary to cooperate and compete simultaneously in order to reduce the weaknesses of each of these activities and strengthen their strengths. Combining cooperation and competition is defined as 'coopetition'. However, research conducted in this field is primarily concerned with relations between companies. Intra-organizational coopetition is mainly considered as competition between organizational branches or units (cross-functional coopetition); less attention is paid to competing groups or individuals. It is also worth noting the ambiguity of the concepts of cooperation and rivalry. Taking into account the terms used and their meanings, different levels of cooperation and forms of competition can be distinguished, and thus several types of intra-organizational coopetition can be identified. The article aims at defining the potential for shaping work engagement through intra-organizational coopetition. The aim of the research was to learn how levels of cooperation under conditions of competition influence engagement.
It is assumed that rivalry (positive competition) between teams (the highest level of cooperation) is the type of coopetition that contributes most to work engagement. Qualitative research will be carried out among students of the University of Zielona Gora carrying out various types of projects. The first research group will consist of students working in groups on one project for three months. The second research group will be composed of students working in groups on several projects over the same period (three months). Work engagement will be determined using the UWES questionnaire. Levels of cooperation will be determined using the author's own research tool. As the research is ongoing, results will be presented in the final paper.
Keywords: competition, cooperation, intra-organizational coopetition, work engagement
Procedia PDF Downloads 146
1464 Molecular Implication of Interaction of Human Enteric Pathogens with Phylloplane of Tomato
Authors: Shilpi, Indu Gaur, Neha Bhadauria, Susmita Goswami, Prabir K. Paul
Abstract:
Cultivation and consumption of organically grown fruits and vegetables have increased severalfold. However, the presence of Human Enteric Pathogens (HEPs) on the surface of organically grown vegetables, causing gastrointestinal diseases, is most likely due to contaminated water and the fecal matter of farm animals. Human Enteric Pathogens are adapted to colonize the human gut but can also colonize plant surfaces. Microbes on the plant surface communicate with each other to establish quorum sensing. This cross-talk is important to study because enteric pathogens on the phylloplane have been reported to mask the beneficial resident bacteria of the plant. In the present study, HEPs and resident bacterial colonizers were identified using 16S rRNA sequencing. Microbial colonization patterns after interaction between Human Enteric Pathogens and natural bacterial residents on the tomato phylloplane were studied. Tomato plants raised under aseptic conditions were inoculated with a mixture of Serratia fonticola and Klebsiella pneumoniae. The molecules involved in the cross-talk between Human Enteric Pathogens and regular bacterial colonizers were isolated and identified using molecular techniques and HPLC. The colonization pattern was studied by the leaf imprint method after 48 hours of incubation. The associated protein-protein interactions in the host cytoplasm were studied by the use of crosslinkers. From treated leaves, the cross-talk molecules and interacting proteins were separated on 1D SDS-PAGE and analyzed by MALDI-TOF-TOF. The study is critical in understanding the molecular aspects of HEP adaptation to the phylloplane. The study revealed that human enteric pathogens interact aggressively among themselves and with resident bacteria, and that HEPs induce the establishment of a signaling cascade through protein-protein interactions in the host cytoplasm.
The study revealed that the adaptation of Human Enteric Pathogens to the phylloplane of Solanum lycopersicum involves the establishment of complex molecular interactions between the microbes and the host, including microbe-microbe interactions leading to the establishment of quorum sensing. The outcome will help in minimizing the HEP load on fresh farm produce, thereby curtailing incidences of food-borne diseases.
Keywords: crosslinkers, human enteric pathogens (HEPs), phylloplane, quorum sensing
Procedia PDF Downloads 279
1463 Anaerobic Co-digestion in Two-Phase TPAD System of Sewage Sludge and Fish Waste
Authors: Rocio López, Miriam Tena, Montserrat Pérez, Rosario Solera
Abstract:
Biotransformation of organic waste into biogas is considered an interesting alternative for the production of clean energy from renewable sources, reducing the volume and organic content of the waste. Anaerobic digestion is considered one of the most efficient technologies to transform waste into fertilizer and biogas in order to obtain electrical energy or biofuel within the concept of the circular economy. Currently, three types of anaerobic processes have been developed on a commercial scale: (1) single-stage processes, where sludge bioconversion is completed in a single chamber; (2) two-stage processes, where the acidogenic and methanogenic stages are separated into two chambers; and (3) temperature-phased (TPAD) processes, which combine a thermophilic pretreatment unit with subsequent mesophilic anaerobic digestion. Two-stage processes can provide hydrogen and methane with easier control of the first- and second-stage conditions, producing higher total energy recovery and substrate degradation than single-stage processes. On the other hand, co-digestion is the simultaneous anaerobic digestion of a mixture of two or more substrates. The technology is similar to anaerobic digestion but is a more attractive option, as it produces increased methane yields due to the positive synergism of the mixtures in the digestion medium, thus increasing the economic viability of biogas plants. The present study focuses on energy recovery by anaerobic co-digestion of sewage sludge and waste from the aquaculture-fishing sector. The valorization is approached through the application of a temperature-phased process, or TPAD (Temperature-Phased Anaerobic Digestion) technology. Moreover, two phases of microorganisms are considered. Thus, the selected process allows the development of a thermophilic acidogenic phase followed by a mesophilic methanogenic phase to obtain hydrogen (H₂) in the first stage and methane (CH₄) in the second stage.
The combination of these technologies makes it possible to unify the advantages of each of these anaerobic digestion processes. To achieve these objectives, a sequential study has been carried out in which the biochemical hydrogen potential (BHP) is tested, followed by a BMP test, which allows checking the feasibility of the two-stage process. The best results obtained were high total and soluble COD removals (59.8% and 82.67%, respectively), as well as production rates of 12 L H₂/kg VS added and 28.76 L CH₄/kg VS added for TPAD.
Keywords: anaerobic co-digestion, TPAD, two-phase, BHP, BMP, sewage sludge, fish waste
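The specific yields and removal percentages reported above follow from simple normalizations of measured gas volume and COD; a minimal sketch, where the example values are illustrative rather than the study's measurements:

```python
def specific_yield_l_per_kg_vs(gas_volume_l, vs_added_kg):
    # Specific gas yield: litres of H2 or CH4 per kg volatile solids (VS) added
    return gas_volume_l / vs_added_kg

def cod_removal_percent(cod_in_mg_l, cod_out_mg_l):
    # Percent removal of (total or soluble) chemical oxygen demand
    return 100.0 * (cod_in_mg_l - cod_out_mg_l) / cod_in_mg_l
```

For example, 6 L of hydrogen from 0.5 kg of VS fed gives the 12 L H₂/kg VS figure quoted for the TPAD configuration.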
Procedia PDF Downloads 156
1462 The Representation of the Medieval Idea of Ugliness in Messiaen's Saint François d’Assise
Authors: Nana Katsia
Abstract:
This paper explores the ways both medieval and medievalist conceptions of ugliness might be linked to the physical and spiritual transformation of the protagonists, and how this is realised through specific musical rhythms, such as the dochmiac rhythm in the opera. As Eco and Henderson note, only one kind of ugliness could be represented in conformity with nature in the Middle Ages without destroying all aesthetic pleasure and, in turn, artistic beauty: namely, a form of ugliness which arouses disgust. Moreover, Eco explores the fact that the enemies of Christ who condemn, martyr, and crucify him are represented as wicked inside. In turn, the representation of inner wickedness and hostility toward God brings with it outward ugliness, coarseness, barbarity, and rage, ultimately resulting in the deformation of the figure. In all these regards, the non-beautiful is represented here as a necessary phase, which is not the case with classical (ancient Greek) concepts of beauty. As we can see, the understanding of disfigurement and ugliness in the Middle Ages was both varied and complex. In the Middle Ages, the disfigurement caused by leprosy (and other skin and bodily conditions) was interpreted, in a somewhat contradictory manner, as both a curse and a gift from God; some saints' lives even have the saint appealing to be inflicted with the disease as part of their mission toward true humility. We shall explore how this 'different concept' of ugliness (non-classical beauty) is represented in Messiaen's opera. According to Messiaen, the Leper and Saint François are the principal characters of the third scene, as both of them will be transformed, and a double miracle will take place in the process. Messiaen mirrors the idea of the true humility of the Saint's life and positions Le Baiser au Lépreux as the culmination of the first act. The Leper's character represents his physical and spiritual disfigurement, which are healed after the miracle.
So, the scene can be viewed as an encounter between beauty and ugliness, and much of it is spent in a study of ugliness. Dochmiac rhythm is one of the most important compositional elements in the opera; it plays a crucial role in creating the dramatic musical narrative and structure of the composition. As such, we shall explore how Messiaen represents the medieval idea of ugliness in the opera through particular musical elements linked to the main protagonists’ spiritual or physical ugliness; why Messiaen makes reference to dochmiac rhythm; and how these create the musical and dramatic context in the opera for the medieval aesthetic category of ugliness.
Keywords: ugliness in music, medieval time, saint françois d’assise, messiaen
Procedia PDF Downloads 146
1461 The Yield of Neuroimaging in Patients Presenting to the Emergency Department with Isolated Neuro-Ophthalmological Conditions
Authors: Dalia El Hadi, Alaa Bou Ghannam, Hala Mostafa, Hana Mansour, Ibrahim Hashim, Soubhi Tahhan, Tharwat El Zahran
Abstract:
Introduction: Neuro-ophthalmological emergencies require prompt assessment and management to avoid vision- or life-threatening sequelae, and some require neuroimaging, most commonly CT and MRI of the brain. These can be over-used when not indicated, and their yield depends on multiple factors relating to the clinical scenario. Methods: A retrospective cross-sectional study was conducted by reviewing the electronic medical records of patients presenting to the Emergency Department (ED) with isolated neuro-ophthalmologic complaints. For each patient, data were collected on the clinical presentation, whether neuroimaging was performed (and which type), and the result of neuroimaging. The performed neuroimaging was analyzed, and its yield was determined. Results: A total of 211 patients were reviewed. The complaints or symptoms at presentation were blurry vision, change in the visual field, transient vision loss, floaters, double vision, eye pain, eyelid droop, headache, dizziness, and others such as nausea or vomiting. In the ED, a total of 126 neuroimaging procedures were performed. Ninety-four imaging studies (74.6%) were normal, while 32 (25.4%) had relevant abnormal findings. Only two symptoms were significantly associated with abnormal imaging: blurry vision (p-value = 0.038) and visual field change (p-value = 0.014), while four physical exam findings were significantly associated with abnormal imaging: visual field defect (p-value = 0.016), abnormal pupil reactivity (p-value = 0.028), afferent pupillary defect (p-value = 0.018), and abnormal optic disc exam (p-value = 0.009). Conclusion: Risk indicators for abnormal neuroimaging in the setting of neuro-ophthalmological emergencies are blurred vision or changes in the visual field on history taking, while visual field irregularities, abnormal pupil reactivity with or without afferent pupillary defect, or abnormal optic discs are risk factors on physical examination.
These findings, when present, should sway the ED physician towards neuroimaging, but individualizing each case remains of utmost importance to prevent time-consuming, resource-draining, and sometimes unnecessary workup. In the end, the study suggests a well-structured, patient-centered algorithm to be followed by ED physicians.
Keywords: emergency department, neuro-ophthalmology, neuroimaging, risk indicators
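The p-values quoted above come from tests of association between a presenting finding and abnormal imaging; a hedged sketch of the computation on a hypothetical 2x2 table (the counts are illustrative, not the study's data), along with the overall yield figure:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table:
# rows = symptom present / absent, columns = abnormal / normal imaging
table = [[12, 28],
         [20, 66]]

chi2, p, dof, expected = chi2_contingency(table)

# Overall diagnostic yield reported in the study: abnormal scans / all scans
yield_percent = 100.0 * 32 / 126
```

A p-value below 0.05 would mark the symptom as significantly associated with abnormal imaging, as for blurry vision and visual field change in the study.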
Procedia PDF Downloads 179
1460 Calcein Release from Liposomes Mediated by Phospholipase A₂ Activity: Effect of Cholesterol and Amphipathic Di and Tri Blocks Copolymers
Authors: Marco Soto-Arriaza, Eduardo Cena-Ahumada, Jaime Melendez-Rojel
Abstract:
Background: Liposomes have been widely used as a model lipid bilayer to study the physicochemical properties of biological membranes and the encapsulation, transport, and release of different molecules. Furthermore, extensive research has focused on improving the efficiency of drug transport, developing tools that improve the release of the encapsulated drug from liposomes. In this context, the enzymatic activity of PLA₂, despite having been shown to be an effective tool to promote the release of drugs from liposomes, is still an open field of research. Aim: The aim of the present study is to explore the effect of cholesterol (Cho) and amphipathic di- and tri-block copolymers on calcein release mediated by the enzymatic activity of PLA₂ in dipalmitoylphosphatidylcholine (DPPC) liposomes under physiological conditions. Methods: Dispersions of DPPC, cholesterol, di-block POE₄₅-PCL₅₂, or tri-block PCL₁₂-POE₄₅-PCL₁₂ were prepared by the extrusion method after five freezing/thawing cycles, in 10 mM phosphate buffer pH 7.4 in the presence of calcein. DPPC liposomes with calcein were centrifuged at 15,000 rpm for 10 min to separate free calcein. Enzymatic activity assays of PLA₂ were performed at 37°C using TBS buffer pH 7.4. The size distribution, polydispersity, Z-potential, and calcein encapsulation of the DPPC liposomes were monitored. Results: PLA₂ activity showed slower calcein release kinetics up to 20 mol% cholesterol, evidencing a minimum at 10 mol% and then a maximum at 18 mol%. Regardless of the percentage of cholesterol, up to 18 mol%, one hundred percent release of calcein was observed. At higher cholesterol concentrations, PLA₂ proved inefficient or not to be involved in calcein release. In assays where copolymers were added at a concentration lower than their cmc, a behavior similar to that observed in the presence of Cho was found, that is, slower calcein release kinetics.
In both experimental approaches, one hundred percent calcein release was observed. PLA₂ was shown to be sensitive to the 4-(4-octadecylphenyl)-4-oxobutenoic acid inhibitor and to calcium, reducing the release of calcein to 0%. Cell viability of HeLa cells decreased by 7% in the presence of DPPC liposomes after 3 hours of incubation, and by 17% and 23% at 5 and 15 hours, respectively. Conclusion: Calcein release from DPPC liposomes mediated by PLA₂ activity depends on the percentage of cholesterol and the presence of copolymers. Both cholesterol up to 20 mol% and copolymers below their cmc could be applied to regulate the release kinetics of antitumoral drugs without inducing cell toxicity per se.
Keywords: amphipathic copolymers, calcein release, cholesterol, DPPC liposome, phospholipase A₂
Procedia PDF Downloads 163
1459 Recycling Waste Product for Metal Removal from Water
Authors: Saidur R. Chowdhury, Mamme K. Addai, Ernest K. Yanful
Abstract:
The research was performed to assess the potential of nickel smelter slag, an industrial waste, as an adsorbent for the removal of metals from aqueous solution. Adsorption of arsenic (As), copper (Cu), lead (Pb), and cadmium (Cd) from aqueous solution was investigated. The slag was obtained from Ni ore at the Vale Inco Ni smelter in Sudbury, Ontario, Canada. Batch experimental studies were conducted to evaluate the removal efficiencies of the slag, which was characterized by surface analytical techniques and found to contain different iron oxide and iron silicate bearing compounds. In this study, the effects of pH, contact time, particle size, competition by other ions, slag dose, and the distribution coefficient were evaluated to determine the optimum adsorption conditions for the slag as an adsorbent for As, Cu, Pb, and Cd. The results showed 95-99% removal of As, Cu, and Pb, and almost 50-60% removal of Cd, when batch experiments were conducted at 5-10 mg/L initial metal concentration, 10 g/L slag dose, 10 hours of contact time, 170 rpm shaking speed, and 25°C. The maximum removal of As, Cu, and Pb was achieved at pH 5, while the maximum removal of Cd was found above pH 7. A column experiment was also conducted to evaluate adsorption depth and service time for metal removal. This study also determined the adsorption capacity, adsorption rate, and mass transfer rate. The maximum adsorption capacity was found to be 3.84 mg/g for As, 4 mg/g for Pb, and 3.86 mg/g for Cu. The adsorption capacities of nickel slag for the four test metals were in the decreasing order Pb > Cu > As > Cd. Modelling of the experimental data with Visual MINTEQ revealed saturation indices of < 0 in all cases, suggesting that the metals at this pH were under-saturated and thus in their aqueous forms. This confirms the absence of precipitation in the removal of these metals at these pHs.
The experimental results also showed that Fe and Ni leaching from the slag during the adsorption process was very minimal, ranging from 0.01 to 0.022 mg/L, indicating the potential of the adsorbent for the treatment industry. The study also revealed that the waste product (Ni smelter slag) can be reused about five times before disposal in a landfill or use as a stabilization material. It highlights recycled slags as a potential reactive adsorbent in the field of remediation engineering and explores the benefits of using renewable waste products for the water treatment industry.
Keywords: adsorption, industrial waste, recycling, slag, treatment
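The batch adsorption capacities and removal efficiencies reported above are computed from initial and equilibrium concentrations with the standard mass-balance relations; a minimal sketch (the example values are illustrative, not the study's measurements):

```python
def adsorption_capacity_mg_per_g(c0_mg_l, ce_mg_l, volume_l, mass_g):
    # q = (C0 - Ce) * V / m: mg of metal adsorbed per g of slag
    return (c0_mg_l - ce_mg_l) * volume_l / mass_g

def removal_efficiency_percent(c0_mg_l, ce_mg_l):
    # Fraction of the initial metal concentration removed from solution
    return 100.0 * (c0_mg_l - ce_mg_l) / c0_mg_l
```

With C0 and Ce measured for each metal at the stated dose (10 g/L), these two quantities give both the percentage removals and the mg/g capacities used to rank Pb > Cu > As > Cd.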
Procedia PDF Downloads 146
1458 Modelling Distress Sale in Agriculture: Evidence from Maharashtra, India
Authors: Disha Bhanot, Vinish Kathuria
Abstract:
This study focuses on the issue of distress sale in the horticulture sector in India, which faces unique challenges given the perishable nature of horticulture crops, seasonal production, and the paucity of post-harvest produce management links. Distress sale, from a farmer's perspective, may be defined as the urgent sale of normal or distressed goods at deeply discounted prices (well below the cost of production), usually under conditions unfavorable to the seller (farmer). Small and marginal farmers, often engaged in subsistence farming, stand to lose substantially if they receive prices lower than expected (typically framed in relation to the cost of production). Distress sale maximizes the price uncertainty of produce, leading to substantial income loss; with increasing input costs of farming, the high variability in harvest prices severely affects farmers' profit margins and thereby their survival. The objective of this study is to model the occurrence of distress sale by tomato cultivators in the Indian state of Maharashtra, against the background of differential access to a set of factors such as capital, irrigation facilities, warehousing, storage and processing facilities, and institutional arrangements for procurement. Data are being collected through a primary survey of over 200 farmers in key tomato-growing areas of Maharashtra, asking for information on the above factors in addition to the cost of cultivation, selling price, time gap between harvesting and selling, the role of middlemen in selling, and other socio-economic variables. Farmers selling their produce far below the cost of production would indicate an occurrence of distress sale. The occurrence of distress sale is then modelled as a function of farm, household, and institutional characteristics.
A Heckman two-stage model is applied to find the probability of a farmer falling into distress sale, as well as to ascertain how the extent of distress sale varies in the presence or absence of various factors. The findings of the study would recommend suitable interventions and promote strategies that help farmers better manage price uncertainties, avoid distress sale, and increase profit margins, with direct implications for poverty.
Keywords: distress sale, horticulture, income loss, India, price uncertainty
Procedia PDF Downloads 243
1457 Reverse Engineering of a Secondary Structure of a Helicopter: A Study Case
Authors: Jose Daniel Giraldo Arias, Camilo Rojas Gomez, David Villegas Delgado, Gullermo Idarraga Alarcon, Juan Meza Meza
Abstract:
Reverse engineering processes are widely used in industry with the main goal of determining the materials and manufacturing processes used to produce a component. Many characterization techniques and computational tools are used to obtain this information. A case study of reverse engineering applied to a secondary sandwich-hybrid structure used in a helicopter is presented. The methodology consists of five main steps, which can be applied to any other similar component: collecting information about the service conditions of the part, disassembly and dimensional characterization, functional characterization, material property characterization, and manufacturing process characterization, making it possible to obtain all the supporting documentation for the traceability of the materials and processes of aeronautical products that ensures their airworthiness. A detailed explanation of each step is provided. The criticality and functionality of each part, state-of-the-art information, and information obtained from interviews with the technical groups of the helicopter's operators were analyzed; 3D optical scanning, standard and advanced materials characterization techniques, and finite element simulation made it possible to obtain all the characteristics of the materials used in the manufacture of the component. It was found that most of the materials are quite common in the aeronautical industry, including Kevlar, carbon, and glass fibers, aluminum honeycomb core, epoxy resin, and epoxy adhesive. The stacking sequence and fiber volume fraction are critical for the mechanical behavior; an acid digestion method was used for their determination. This also helped in identifying the manufacturing technique, which in this case was vacuum bagging. Samples of the material were manufactured and submitted to mechanical and environmental tests.
These results were compared with those obtained during reverse engineering, allowing the conclusion that the materials and manufacturing process were correctly determined. Tooling for the manufacture was designed and built according to the geometry and the requirements of the manufacturing process. The part was manufactured, and the required mechanical and environmental tests were performed. Finally, geometric characterization and non-destructive techniques made it possible to verify the quality of the part.
Keywords: reverse engineering, sandwich-structured composite parts, helicopter, mechanical properties, prototype
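The acid digestion step yields the fiber volume fraction from the specimen masses before and after the matrix is dissolved, together with the constituent densities. As a minimal sketch of that arithmetic (the masses and densities below are illustrative assumptions, not data from the study):

```python
def fiber_volume_fraction(m_composite, m_fiber, rho_composite, rho_fiber):
    """Fiber volume fraction from acid-digestion masses (g) and densities (g/cm^3).

    The matrix is dissolved away, leaving only the fiber mass; volumes follow
    from mass / density, and the fraction is fiber volume over composite volume.
    """
    v_composite = m_composite / rho_composite
    v_fiber = m_fiber / rho_fiber
    return v_fiber / v_composite

# Illustrative numbers (assumed, not from the study): a 2.00 g carbon/epoxy
# specimen of density 1.55 g/cm^3 leaving 1.30 g of fiber at 1.78 g/cm^3.
vf = fiber_volume_fraction(2.00, 1.30, 1.55, 1.78)
print(f"fiber volume fraction = {vf:.1%}")
```
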
Procedia PDF Downloads 418
1456 Seismic Assessment of a Pre-Cast Recycled Concrete Block Arch System
Authors: Amaia Martinez Martinez, Martin Turek, Carlos Ventura, Jay Drew
Abstract:
This study aims to assess the seismic performance of arch and dome structural systems made from easy-to-assemble precast blocks of recycled concrete. These systems have been developed by the Lock Block Ltd. company from Vancouver, Canada, as an extension of their currently used retaining wall system. The characterization of the seismic behavior of these structures is performed by a combination of experimental static and dynamic testing and analytical modeling. For the experimental testing, several tilt tests, as well as a program of shake table testing, were undertaken using small-scale arch models. A suite of earthquakes with different characteristics from important past events was chosen and scaled appropriately for the dynamic testing. Shake table tests applying the ground motions in just one direction (the weak direction of the arch) and in all three directions were conducted and compared. The models were tested with increasing intensity until collapse occurred, which determined the failure level for each earthquake. Since the failure intensity varied with the type of earthquake, a sensitivity analysis of the different parameters was performed, with impulse found to be the dominant factor. In all cases, the arches exhibited the typical four-hinge failure mechanism, which was also reproduced by the analytical model. Experimental testing was also performed with the arches reinforced by a steel band placed over the structure and anchored at both ends of the arch. The models were tested with different pretension levels. The bands were instrumented with strain gauges to measure the force produced by the shaking. These forces were used to develop engineering guidelines for the design of the reinforcement needed for these systems. In addition, an analytical discrete element model was created using the 3DEC software. The blocks were modeled as rigid, with all the properties assigned to the joints, including the contribution of the interlocking shear key between blocks.
The model was calibrated to the experimental static tests and validated against the results obtained from the dynamic tests. The model can then be used to scale the results up to the full-scale structure and to extend them to different configurations and boundary conditions.
Keywords: arch, discrete element model, seismic assessment, shake-table testing
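Scaling past ground motions "properly" for a small-scale model typically follows a similitude law. A minimal sketch, assuming the common convention in which accelerations are preserved and time is compressed by the square root of the geometric scale (the study does not state which law was applied; the record and factors below are illustrative):

```python
import numpy as np

def scale_record(accel, dt, length_scale, intensity=1.0):
    """Scale a ground-motion record for a reduced-scale shake-table model.

    Assumes a simple similitude in which accelerations are preserved and the
    time step is compressed by sqrt(length_scale); the amplitude is then
    multiplied by an intensity factor for incremental tests toward collapse.
    """
    accel_model = intensity * np.asarray(accel, dtype=float)
    dt_model = dt / np.sqrt(length_scale)
    return accel_model, dt_model

# A hypothetical record sampled at 0.02 s, scaled for a 1:4 model and
# amplified by 1.5 as one step of an intensity sweep:
record = np.sin(np.linspace(0.0, 10.0, 500))
scaled, dt_model = scale_record(record, 0.02, 4.0, intensity=1.5)
print(dt_model)  # 0.01
```
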
Procedia PDF Downloads 206
1455 Characteristics of Clinical and Diagnostic Aspects of Benign Diseases of Cervix in Women
Authors: Gurbanova J., Majidova N., Ali-Zade S., Hasanova A., Mikailzade P.
Abstract:
Currently, the problem of oncogynecological diseases is widespread and remains relevant in terms of quantitative growth. It is known that the increase in the number of benign diseases of the cervix leads to the development of precancerous conditions. Benign diseases of the cervix are the most common gynecological problem and are often precursors of malignant neoplasms, especially cervical cancer. According to statistics, benign diseases of the cervix account for 25-45% of all gynecological diseases. Among women's oncogynecological diseases, cervical cancer ranks second in the world after breast cancer and first in mortality among oncological diseases in economically underdeveloped countries. We performed a comprehensive clinical and laboratory examination of 130 women aged 18 to 73 with benign cervical diseases. The study included 59 (38.5%) women of reproductive age, as well as 39 (30%) premenopausal and 41 (31.5%) menopausal patients. A detailed anamnesis was collected from all patients; objective and gynecological examinations were performed, along with laboratory and instrumental examinations (USM, HPV DNA, smear microscopy, and PCR bacteriological testing for sexually transmitted infections), simple and extended colposcopy, and liquid-based and classic PAP smear examinations. As a result of the research, the following nosological forms were found in women with benign diseases of the cervix: non-specific vaginitis in 10 (7.7%) cases; ectopia and endocervicitis in 60 (46.2%); cervical ectropion in 7 (5.4%); cervical polyp in 9 (6.9%); cervical leukoplakia in 15 (11.5%); atrophic vaginitis in 7 (5.4%); condyloma in 12 (9.2%); cervical stenosis in 2 (1.5%); and endometriosis of the cervix in 8 (6.2%) cases (p<0.001), respectively.
Characteristics of the menstrual cycle among the examined women were: a normal cycle in 97 (74.6%) cases; oligomenorrhea in 23 (17.7%); polymenorrhea in 4 (3.1%); and algomenorrhea in 6 (4.6%) cases (p<0.001). Cytological examination showed that the specificity of liquid-based cytology was 76.2%, while that of the traditional PAP test was 70.6%. The overall diagnostic value was calculated to be 86% for liquid-based cytology and 78.5% for the conventional PAP test. Treatment of women with benign diseases of the cervix was carried out by the diathermocoagulation method and with the "FOTEK EA 141M" device. It should be noted that 6 months after treatment with the "FOTEK EA 141M" device, no patient had relapsed, whereas recurrence was found in 23.7% of patients after diathermocoagulation. Thus, it is clear from the above that the study of cervical pathologies, the determination of optimal examinations, and effective treatment methods remain urgent problems in obstetrics and gynecology.
Keywords: cervical cancer, cytological examination, PAP-smear, non-specific vaginitis
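The specificity and overall diagnostic value figures above follow from standard 2x2 confusion-matrix arithmetic. A minimal illustration with hypothetical counts (not the study's raw data, which the abstract does not report):

```python
def specificity(tn, fp):
    """True-negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

def diagnostic_accuracy(tp, tn, fp, fn):
    """Overall diagnostic value: correct calls over all cases."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical counts chosen only to illustrate the arithmetic:
spec = specificity(tn=80, fp=25)                         # 80/105 ~ 0.762
acc = diagnostic_accuracy(tp=92, tn=80, fp=25, fn=3)     # 172/200 = 0.86
print(f"specificity = {spec:.1%}, diagnostic value = {acc:.1%}")
```
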
Procedia PDF Downloads 118
1454 GC-MS-Based Untargeted Metabolomics to Study the Metabolism of Pectobacterium Strains
Authors: Magdalena Smoktunowicz, Renata Wawrzyniak, Malgorzata Waleron, Krzysztof Waleron
Abstract:
Pectobacterium spp. were previously classified in the genus Erwinia, founded in 1917 to unite, at that time, all Gram-negative, fermentative, non-sporulating, peritrichously flagellated plant-pathogenic bacteria. After the work of Waldee (1945), and following the Approved Lists of Bacterial Names and the bacteriology manuals of 1980, they were described under either the genus Erwinia or Pectobacterium. The genus Pectobacterium was formally described in 1998 on the basis of 265 Pectobacterium strains. Currently, there are 21 species of Pectobacterium, including Pectobacterium betavasculorum, described in 2003, which causes soft rot on sugar beet tubers. From the biochemical experiments carried out on this species, it is known that these bacteria are Gram-negative, catalase-positive, oxidase-negative, and facultatively anaerobic; they use gelatin and cause symptoms of soft rot on potato and sugar beet tubers. The very fact of growth on sugar beet may indicate a metabolism characteristic of this species alone. Metabolomics, broadly defined as the biology of metabolic systems, allows comprehensive measurements of metabolites. Metabolomics and genomics are complementary tools for the identification of metabolites and their reactions, and thus for the reconstruction of metabolic networks. The aim of this study was to apply GC-MS-based untargeted metabolomics to study the metabolism of P. betavasculorum under different growing conditions. The metabolomic profiles of the biomass and of the growth media were determined. The following sample preparation protocol was used: 900 µl of a methanol:chloroform:water mixture (10:3:1, v:v:v) was added to 900 µl of biomass from the bottom of the tube and to 900 µl of nutrient medium separated from the bacterial biomass. After centrifugation (13,000 x g, 15 min, 4°C), 300 µL of the obtained supernatants were concentrated in a rotary vacuum concentrator and evaporated to dryness.
Afterwards, a two-step derivatization procedure was performed before the GC-MS analyses. The obtained results were subjected to statistical calculations using both univariate and multivariate tests. The results were then evaluated against the KEGG database to assess which metabolic pathways are activated, and which genes are responsible for them, during the metabolism of the substrates present in the growth environment. The observed metabolic changes, combined with biochemical and physiological tests, may enable pathway discovery, regulatory inference, and an understanding of the homeostatic abilities of P. betavasculorum.
Keywords: GC-MS chromatography, metabolomics, metabolism, Pectobacterium strains, Pectobacterium betavasculorum
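The uni- and multivariate statistics used in such untargeted workflows can be sketched as per-metabolite hypothesis tests paired with an unsupervised projection such as PCA. A minimal numpy-only illustration on synthetic intensity data (not the study's data; a Welch t-statistic and SVD-based PCA stand in for the full statistical pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
# Rows = samples (two growing conditions), columns = metabolite intensities.
group_a = rng.normal(10.0, 1.0, size=(6, 50))
group_b = rng.normal(10.0, 1.0, size=(6, 50))
group_b[:, 0] += 3.0  # metabolite 0 differs between the two conditions

# Univariate: Welch t-statistic per metabolite.
ma, mb = group_a.mean(0), group_b.mean(0)
va, vb = group_a.var(0, ddof=1), group_b.var(0, ddof=1)
t = (ma - mb) / np.sqrt(va / len(group_a) + vb / len(group_b))
print(int(np.argmax(np.abs(t))))  # the spiked metabolite should stand out

# Multivariate: PCA via SVD on the mean-centered combined matrix.
X = np.vstack([group_a, group_b])
Xc = X - X.mean(0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * S  # sample coordinates on the principal components
print(scores.shape)
```
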
Procedia PDF Downloads 79
1453 Characterization, Replication and Testing of Designed Micro-Textures, Inspired by the Brill Fish, Scophthalmus rhombus, for the Development of Bioinspired Antifouling Materials
Authors: Chloe Richards, Adrian Delgado Ollero, Yan Delaure, Fiona Regan
Abstract:
Growing concern about the natural environment has accelerated the search for non-toxic but, at the same time, economically reasonable antifouling materials. Bioinspired surfaces, owing to their nano- and micro-topographical antifouling capabilities, provide a promising approach to the design of novel antifouling surfaces. Biological organisms are known to have highly evolved and complex topographies with demonstrated antifouling potential, e.g., shark skin. Previous studies have examined the antifouling ability of topographic patterns, textures, and roughness scales found on natural organisms. One of the mechanisms used to explain the adhesion of cells to a substrate is called attachment point theory: a fouling organism experiences increased attachment where there are multiple attachment points and reduced attachment where the number of attachment points is decreased. In this study, an attempt was made to characterize the microtopography of the common brill, Scophthalmus rhombus. Scophthalmus rhombus is a small flatfish of the family Scophthalmidae, inhabiting regions from Norway to the Mediterranean and the Black Sea. It resides in shallow sandy and muddy coastal areas, at depths down to around 70-80 meters. Six engineered surfaces (inspired by the brill fish scale) produced by a two-photon polymerization (2PP) process were evaluated for their potential as an antifouling solution for incorporation onto tidal energy blades. The micro-textures were analyzed for their AF potential under both static and dynamic laboratory conditions using two laboratory-grown diatom species, Amphora coffeaeformis and Nitzschia ovalis. The incorporation of a surface topography was observed to disrupt the growth of A. coffeaeformis and N. ovalis cells on the surface in comparison with control surfaces. This work has demonstrated the importance of understanding the cell-surface interaction, in particular topography, for the design of novel antifouling technology.
The study concluded that biofouling can be controlled by physical modification and has, for the first time, contributed significant knowledge to the use of a successful novel bioinspired AF technology based on the brill.
Keywords: attachment point theory, biofouling, Scophthalmus rhombus, topography
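Attachment point theory can be caricatured with simple geometry: the number of contact points depends on how the settling cell's size compares with the texture's feature spacing. A hedged sketch (the dimensions and the contact-counting rule below are illustrative assumptions, not the model used in the study):

```python
def attachment_points(cell_diameter_um, feature_spacing_um):
    """Crude attachment-point count in the spirit of attachment point theory.

    A cell narrower than the groove spacing settles between features and
    touches the floor plus both side walls (three contact regions); a wider
    cell bridges the feature tops, contacting roughly one ridge per period
    spanned. Purely illustrative geometry.
    """
    if cell_diameter_um < feature_spacing_um:
        return 3  # floor + two walls: more attachment, easier fouling
    # Bridging: about one contact per feature period spanned, at least two.
    return max(2, round(cell_diameter_um / feature_spacing_um))

# A 10 um diatom on hypothetical 20 um- vs 4 um-spaced textures:
print(attachment_points(10, 20))  # 3: fits between features, more attachment
print(attachment_points(10, 4))   # 2: bridges ridge tops, fewer attachment points
```

This is the qualitative prediction the theory makes: textures with feature spacing below the settling cell's size reduce the available attachment points and hence fouling.
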
Procedia PDF Downloads 107
1452 Validation of a Placebo Method with Potential for Blinding in Ultrasound-Guided Dry Needling
Authors: Johnson C. Y. Pang, Bo Peng, Kara K. L. Reeves, Allan C. L. Fud
Abstract:
Objective: Dry needling (DN) has long been used as a treatment for various musculoskeletal pain conditions. However, the evidence level of existing studies is low due to methodological limitations: lack of randomization and inappropriate blinding are potentially the main sources of bias. A method that can differentiate clinical results due to the targeted experimental procedure from its placebo effect is needed to enhance the validity of such trials. Therefore, this study aimed to validate a placebo ultrasound (US)-guided DN method for patients with knee osteoarthritis (KOA). Design: This is a randomized controlled trial (RCT). Ninety subjects (25 males and 65 females) aged between 51 and 80 (61.26 ± 5.57) with radiological KOA were recruited and randomly assigned into three groups with a computer program. Group 1 (G1) received real US-guided DN, Group 2 (G2) received placebo US-guided DN, and Group 3 (G3) was the control group. Both G1 and G2 subjects underwent the same US-guided DN procedure, except that the US monitor was turned off in G2, blinding the G2 subjects to the incorporation of faux US guidance. This arrangement created the placebo effect intended to permit comparison of their results with those of subjects who received actual US-guided DN. Outcome measures, including the visual analog scale (VAS) and the Knee injury and Osteoarthritis Outcome Score (KOOS) subscales of pain, symptoms, and quality of life (QOL), were analyzed by repeated-measures analysis of covariance (ANCOVA) for time and group effects. The data regarding the perception of receiving real or placebo US-guided DN were analyzed by the chi-squared test. Missing data were to be handled with the intention-to-treat (ITT) approach if more than 5% of the data were missing. Results: The placebo US-guided DN (G2) subjects had the same perception of receiving real US guidance during the advancement of DN as G1 subjects (p = 0.128).
G1 had significantly greater pain reduction (VAS and KOOS-pain) than G2 and G3 at 8 weeks (both p<0.05), while there was no significant difference between G2 and G3 at 8 weeks (both p>0.05). Conclusion: The method of turning the US monitor off during the application of DN is credible for blinding the participants and allows researchers to incorporate faux US guidance. The validated placebo US-guided DN technique can aid investigations of the short-term pain-reduction effects of US-guided DN for patients with KOA. Acknowledgment: This work was supported by the Caritas Institute of Higher Education [grant number IDG200101].
Keywords: ultrasound-guided dry needling, dry needling, knee osteoarthritis, physiotherapy
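The blinding check amounts to a chi-squared test on a 2x2 contingency table of how many subjects in each group believed the US guidance was real. A self-contained sketch with hypothetical counts (the abstract does not report the raw table):

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic and p-value (df = 1) for the 2x2 table
    [[a, b], [c, d]]: rows are groups, columns are believed-real / believed-placebo."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(stat / 2))  # chi-squared survival function for df = 1
    return stat, p

# Hypothetical counts: 26/30 in G1 and 22/30 in G2 believed the guidance was real.
stat, p = chi2_2x2(26, 4, 22, 8)
print(round(stat, 3), round(p, 3))
```

A non-significant p-value here is the desired outcome: it indicates the two groups' perceptions are statistically indistinguishable, i.e., the blinding is credible.
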
Procedia PDF Downloads 120
1451 Air Handling Units Power Consumption Using Generalized Additive Model for Anomaly Detection: A Case Study in a Singapore Campus
Authors: Ju Peng Poh, Jun Yu Charles Lee, Jonathan Chew Hoe Khoo
Abstract:
The emergence of digital twin technology, a digital replica of the physical world, has improved real-time access to data from sensors about the performance of buildings. This digital transformation has opened up many opportunities to improve the management of a building by using the collected data to monitor consumption patterns and energy leakages. One example is the integration of predictive models for anomaly detection. In this paper, we use a GAM (Generalised Additive Model) for anomaly detection on the Air Handling Unit (AHU) power consumption pattern. There is ample research on the use of GAMs for the prediction of power consumption at the office-building and nationwide levels. However, there is limited illustration of their anomaly detection capabilities, of prescriptive analytics case studies, and of their integration with the latest developments in digital twin technology. In this paper, we applied the general GAM modelling framework to the historical AHU power consumption and cooling load data of the building from Jan 2018 to Aug 2019, from an education campus in Singapore, to train prediction models that, in turn, yield predicted values and ranges. The historical data are seamlessly extracted from the digital twin for modelling purposes. We enhanced the utility of the GAM model by using it to power a real-time anomaly detection system based on the forward-predicted ranges. The magnitude of deviation from the upper and lower bounds of the uncertainty intervals is used to identify anomalous data points, all based on historical data, without explicit intervention from domain experts. Notwithstanding, the domain expert fits in through an optional feedback loop, through which iterative data cleansing is performed. After an anomalously high or low level of power consumption is detected, a set of rule-based conditions is evaluated in real time to help determine the next course of action for the facilities manager.
The performance of the GAM is then compared with other approaches to evaluate its effectiveness. Lastly, we discuss the successful deployment of this approach for the detection of anomalous power consumption patterns, illustrated with real-world use cases.
Keywords: anomaly detection, digital twin, generalised additive model, GAM, power consumption, supervised learning
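The predict-then-threshold loop described above can be illustrated with a deliberately simplified stand-in for the GAM: a per-hour baseline plus symmetric residual bounds. A numpy-only sketch on synthetic data (the baseline, band width, and load profile are illustrative assumptions, not the paper's model):

```python
import numpy as np

def fit_baseline(hours, power):
    """Mean power per hour-of-day bin plus a global residual spread.

    A simple stand-in for the paper's GAM smoothers: per-hour means play the
    role of the fitted additive terms, and the residual standard deviation
    yields symmetric prediction bounds.
    """
    means = np.array([power[hours == h].mean() for h in range(24)])
    resid = power - means[hours]
    return means, resid.std(ddof=1)

def flag_anomalies(hours, power, means, sigma, k=3.0):
    """Flag readings outside the baseline +/- k*sigma band."""
    expected = means[hours]
    return np.abs(power - expected) > k * sigma

# Synthetic day/night load over 30 days with one injected spike:
rng = np.random.default_rng(1)
hours = np.tile(np.arange(24), 30)
power = 50 + 20 * (6 <= hours) * (hours < 20) + rng.normal(0, 1, hours.size)
power[100] += 15  # injected anomaly
means, sigma = fit_baseline(hours, power)
flags = flag_anomalies(hours, power, means, sigma)
print(int(flags.sum()), bool(flags[100]))
```

In the deployed system, the flagged points would then feed the rule-based conditions that decide the facilities manager's next action.
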
Procedia PDF Downloads 154