Search results for: threshold graphs
130 An Experimental Determination of the Limiting Factors Governing the Operation of High-Hydrogen Blends in Domestic Appliances Designed to Burn Natural Gas
Authors: Haiqin Zhou, Robin Irons
Abstract:
The introduction of hydrogen into local networks may, in many cases, require the initial operation of those systems on natural gas/hydrogen blends, either because of a lack of sufficient hydrogen to allow a 100% conversion or because existing infrastructure limits the percentage of hydrogen that can be burned before the end-use technologies are replaced. In many systems, the most numerous end-use technologies are small-scale appliances used for domestic and industrial heating and cooking. In such a scenario, it is important to understand exactly how much hydrogen can be introduced into these appliances before their performance becomes unacceptable and what imposes that limitation. This study explores a range of significantly higher hydrogen blends and a broad range of factors that might limit operability or environmental acceptability. We present tests from a burner designed for space heating and optimized for natural gas as blends with an increasing percentage of hydrogen (starting from 25%) were burned, and we explore the range of parameters that might govern the acceptability of operation. These include gaseous emissions (particularly NOx and unburned carbon), temperature, flame length, stability and general operational acceptability. Results show emissions, temperature, and flame length as a function of thermal load and the percentage of hydrogen in the blend. The relevant application and regulation will ultimately determine the acceptability of these values, so it is important to understand the full operational envelope of the burners in question through the sort of extensive parametric testing we have carried out. The present dataset should represent a useful data source for designers interested in exploring appliance operability. In addition, we present data on two factors that may be absolute in determining allowable hydrogen percentages. The first of these is flame blowback. Our results show that, for our system, the threshold between acceptable and unacceptable performance lies between 60 and 65 mol% hydrogen. Another factor that may limit operation, and which would be important in domestic applications, is the acoustic performance of these burners. We describe a range of operational conditions in which hydrogen blend burners produce a loud and invasive ‘screech’. It will be important for equipment designers and users to find ways to avoid or mitigate this if performance is to be deemed acceptable.
Keywords: blends, operational, domestic appliances, future system operation
Procedia PDF Downloads 23
129 Structural Performance of Mechanically Connected Stone Panels under Cyclic Loading: Application to Aesthetic and Environmental Building Skin Design
Authors: Michel Soto Chalhoub
Abstract:
Building designers in the Mediterranean region and other parts of the world utilize natural stone panels as skin cover on exterior façades. This type of finishing is intended not only for aesthetic reasons but also for environmental ones. Stone has been used in construction since the earliest ages of civilization, and to date some of the most appealing buildings owe their beauty to stone finishing. Stone also provides warmth in winter and freshness in summer, as it moderates heat transfer and absorbs radiation. However, as structural codes became increasingly stringent about the dynamic performance of buildings, it became essential to study the performance of stone panels under cyclic loading, a condition that arises when the building is subjected to wind or earthquakes. The present paper studies the performance of stone panels attached with mechanical connectors when subjected to load reversal. We present a theoretical model that addresses modes of failure in the steel connectors, by yield, and modes of failure in the stone, by fracture. We then provide an experimental set-up and test results for rectangular stone panels of varying thickness. When the building is subjected to an earthquake, the rectangular panels within its structural system undergo shear deformations, which in turn impart stress into the stone cover. Rectangular stone panels, which typically range from 40 cm x 80 cm to 60 cm x 120 cm, need to be designed to withstand transverse loading from the direct application of lateral loads and, simultaneously, in-plane loading (membrane stress) caused by inter-story drift and overall building lateral deflection. Results show correlation between the theoretical model, which we derive from solid mechanics fundamentals, and the experimental results, and they lead to practical design recommendations. We find that for panel thicknesses below a certain threshold, it is more advantageous to utilize structural adhesive materials to connect stone panels to the main structural system of the building. For larger panel thicknesses, it is recommended to utilize mechanical connectors with special detailing to ensure a minimum level of ductility and energy dissipation.
Keywords: solid mechanics, cyclic loading, mechanical connectors, natural stone, seismic, wind, building skin
Procedia PDF Downloads 255
128 The Personal Characteristics of Nurse Managers and the Personal and Professional Factors That Affect Them
Authors: Handan Alan, Ulkü Baykal
Abstract:
Personal characteristics help people understand and recognize both themselves and other people. They are also known to have direct effects on managerial behaviors. Managers’ personalities indicate how they think, perceive reality and relate to others, and they affect their decision-making and problem-solving methods. This descriptive study aims to determine the personal characteristics of nurse managers and the personal and professional factors that affect them, since sufficient data on personal characteristics do not exist despite the focus on leadership and managerial characteristics in nursing. The study population consisted of nurses working in administrative positions at hospitals affiliated with the public hospitals union, research and practice hospitals affiliated with universities, and private hospitals in cities in the Marmara Region. The study sample consisted of nurse managers working in the hospitals that permitted the study to be conducted (excluding private branch hospitals). The data were collected after obtaining the approval of the Clinical Research Ethics Committee of Çanakkale Onsekiz Mart University (approval date: 1.7.2015, Decision No: 2015-01) and written official permissions from the administrations of the hospitals included in the study. The data were collected using the Five Factor Personality Inventory. The data analysis was carried out using means and standard deviations (SD) as descriptive statistics, one-way analysis of variance for multi-group comparisons and the independent samples t-test for two-group comparisons. A significance threshold of p < 0.05 was used to evaluate the findings. The study included 900 nurse managers, who obtained the highest mean score on the conscientiousness dimension (X=4.22±0.35). This dimension was followed by their mean scores on the agreeableness (X=4.06±0.40), intelligence (X=4.05±0.37), extroversion (X=3.50±0.43), and emotional instability (X=2.07±0.53) dimensions. Statistically significant differences in the Five Factor Personality Inventory scores were found according to the independent variables of age, gender, marital status, education level, work institution, professional experience, institutional experience, managerial experience, administrative position, work unit and managerial education (p < 0.05). In conclusion, the nurse managers described themselves as having high conscientiousness, and statistically significant associations were found between the Five Factor Personality Inventory mean scores and their personal and professional characteristics.
Keywords: nurse manager, personality, personal characteristics, professional characteristics
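The group comparisons described above (independent samples t-test for two-group variables such as gender, one-way ANOVA for multi-group variables such as administrative position, significance at p < 0.05) can be sketched as follows; the score values and group sizes are illustrative placeholders, not the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical conscientiousness scores (1-5 Likert means), for illustration only
rng = np.random.default_rng(0)
male = rng.normal(4.1, 0.35, 60)      # two-group variable -> independent samples t-test
female = rng.normal(4.2, 0.35, 840)
t_stat, p_t = stats.ttest_ind(male, female)
print(f"t-test (gender): t={t_stat:.2f}, p={p_t:.3f}, significant={p_t < 0.05}")

# Multi-group variable (e.g., administrative position) -> one-way ANOVA
head_nurse = rng.normal(4.25, 0.3, 500)
supervisor = rng.normal(4.20, 0.3, 300)
director = rng.normal(4.35, 0.3, 100)
f_stat, p_f = stats.f_oneway(head_nurse, supervisor, director)
print(f"ANOVA (position): F={f_stat:.2f}, p={p_f:.3f}, significant={p_f < 0.05}")
```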
Procedia PDF Downloads 256
127 An Analysis of the Recent Flood Scenario (2017) of the Southern Districts of the State of West Bengal, India
Authors: Soumita Banerjee
Abstract:
The State of West Bengal is watered by innumerable rivers, which differ in nature between the northern and southern parts of the state. The southern part of West Bengal is mainly drained by the river Bhagirathi-Hooghly, and its major distributaries and tributaries have divided this major river basin into many sub-basins such as the Ichamati-Bidyadhari, Pagla-Bansloi, Mayurakshi-Babla, Ajay, Damodar and Kangsabati, to name a few. These rivers drain the districts of Bankura, Burdwan, Hooghly, Nadia, Purulia, Birbhum, Midnapore, Murshidabad, North 24-Parganas, Kolkata, Howrah and South 24-Parganas. The southern part of West Bengal has a huge number of flood-prone blocks; the factors responsible for flooding are the shape and size of the catchment area, its steep gradient running from plateau to flat terrain, river bank erosion and siltation, tidal conditions especially in the lower Ganga Basin, and very poor maintenance of the embankments, which are mostly used as communication links. Along with these factors, the DVC (Damodar Valley Corporation) plays an important role both in generating floods (through the release of water) and in controlling the flood situation. In 2017 the whole of Gangetic West Bengal was flooded due to high-intensity, long-duration rainfall and the release of water from the Durgapur Barrage. As most of the rivers are interstate in nature, floods at times also occur when water is released from dams in neighbouring states such as Jharkhand. Other than embankments, there are no structural measures for combatting floods in West Bengal. This paper analyses the reasons behind the 2017 flood situation with the help of climatic data collected from the India Meteorological Department, flood-related data from the Irrigation and Waterways Department, West Bengal, and GPM (Global Precipitation Measurement) data for rainfall analysis. Based on a threshold value derived from the available past flood data, it is possible to predict flood events which may occur in the near future, and with the help of social media such warnings can be spread within a very short span of time to alert the public. On a larger, governmental scale, raising the settlements situated on either bank of the river can yield a better result than building up embankments.
Keywords: dam failure, embankments, flood, rainfall
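A minimal sketch of the threshold idea described above: derive a rainfall threshold from totals recorded during past flood events and flag new accumulations that exceed it. The sample values and the choice of the minimum past flood total as the threshold are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

# Hypothetical 3-day rainfall totals (mm) recorded during past flood events
past_flood_rainfall = np.array([182.0, 210.5, 195.3, 240.1, 205.7])

# One simple choice of threshold: the smallest total that has produced a flood
threshold = past_flood_rainfall.min()

def flood_alert(recent_totals_mm):
    """Return True for each accumulation that meets or exceeds the threshold."""
    return np.asarray(recent_totals_mm) >= threshold

# New satellite/gauge accumulations to screen (illustrative values)
print(threshold, flood_alert([90.0, 150.0, 198.4, 260.2]))
```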
Procedia PDF Downloads 224
126 Designing Electrically Pumped Photonic Crystal Surface Emitting Lasers Based on a Honeycomb Nanowire Pattern
Authors: Balthazar Temu, Zhao Yan, Bogdan-Petrin Ratiu, Sang Soon Oh, Qiang Li
Abstract:
Photonic crystal surface emitting lasers (PCSELs) have recently become an area of active research because of the advantages these lasers have over edge emitting lasers and vertical cavity surface emitting lasers (VCSELs). PCSELs can emit laser beams with high power (from the order of a few milliwatts to watts or even tens of watts) that scales with the emission area, while maintaining single-mode operation even at large emission areas. Most PCSELs reported in the literature are air-hole based, with only a few demonstrations of nanowire-based PCSELs. We previously reported an optically pumped, nanowire-based PCSEL operating in the O band using a honeycomb lattice. Nanowire-based PCSELs have the advantage of being able to grow on a silicon platform without threading dislocations. It is desirable to extend their operating wavelength to the C band to open up more applications, including eye-safe sensing, lidar and long-haul optical communications. In this work we first analyze how the lattice constant, nanowire diameter, nanowire height and side length of the hexagon in the honeycomb pattern can be changed to increase the operating wavelength of the honeycomb-based PCSELs to the C band. Then, as a step towards making our device electrically pumped, we present finite-difference time-domain (FDTD) simulation results with metals on the nanowire. The results for different metals on the nanowire are presented in order to choose the metal which gives the device the best quality factor. The metals under consideration are those which form good ohmic contacts with p-type doped InGaAs, with low contact resistivity and a decent sticking coefficient to the semiconductor. Such metals include tungsten, titanium, palladium and platinum. Using the chosen metal, we demonstrate the impact of the metal thickness, for a given nanowire height, on the quality factor of the device. We also investigate how the height of the nanowire affects the quality factor for a fixed thickness of the metal. Finally, the main steps in making the practical device are discussed.
Keywords: designing nanowire PCSEL, designing PCSEL on silicon substrates, low threshold nanowire laser, simulation of photonic crystal lasers
Procedia PDF Downloads 16
125 Benefits of Monitoring Acid Sulfate Potential of Coffee Rock (Indurated Sand) across Entire Dredge Cycle in South East Queensland
Authors: S. Albert, R. Cossu, A. Grinham, C. Heatherington, C. Wilson
Abstract:
Shipping trends suggest that vessels of increasing size and draught will visit Australian ports, highlighting potential challenges to port infrastructure and requiring optimization of shipping channels to ensure safe passage for vessels. The Port of Brisbane in Queensland, Australia has an 80 km long access shipping channel in which vessels must transit 15 km of relatively shallow coffee rock outcrops (coffee rock being a generic class of indurated sands in which sand grains are bound within an organic clay matrix) towards the northern passage in Moreton Bay. This represents a risk to shipping channel deepening and maintenance programs, as the dredgeability of this material is more challenging due to its high cohesive strength compared with the surrounding marine sands, and it potentially carries a higher acid sulfate risk. In situ assessment of acid sulfate sediment for dredge spoil control is an important tool in mitigating ecological harm. Coffee rock in an anoxic, undisturbed state does not pose any acid sulfate risk; however, when it is disturbed via dredging it is vital to ensure that any iron sulfides present are either insignificant or neutralized. To better understand the potential risk, we examined the reduction potential of coffee rock across the entire dredge cycle in order to accurately portray the true outcome of disturbed acid sulfate sediment in dredging operations in Moreton Bay. In December 2014 a dredge trial was undertaken with a trailing suction hopper dredger. In situ samples collected prior to dredging revealed acid sulfate potential above threshold guidelines, which could lead to expensive dredge spoil management. However, when the potential acid sulfate risk was then monitored in the hopper and the subsequent discharge, both showed that a significant reduction in acid sulfate potential had occurred. Additionally, the acid neutralizing capacity significantly increased due to the inclusion of shell fragments (calcium carbonate) from the dredge target areas. This clearly demonstrates the importance of assessing potential acid sulfate risk across the entire dredging cycle and highlights the need to carefully evaluate sources of acidity.
Keywords: acid sulfate, coffee rock, indurated sand, dredging, maintenance dredging
Procedia PDF Downloads 368
124 Probabilistic Health Risk Assessment of Polycyclic Aromatic Hydrocarbons in Repeatedly Used Edible Oils and Finger Foods
Authors: Suraj Sam Issaka, Anita Asamoah, Abass Gibrilla, Joseph Richmond Fianko
Abstract:
Polycyclic aromatic hydrocarbons (PAHs) are a group of organic compounds that can form in edible oils during repeated frying and accumulate in fried foods. This study assesses the probability of health risks (carcinogenic and non-carcinogenic) due to PAH levels in popular finger foods (bean cakes, plantain chips, doughnuts) fried in edible oils (mixed vegetable, sunflower, soybean) from the Ghanaian market. Employing a probabilistic health risk assessment that considers variability and uncertainty in exposure and risk estimates provides a more realistic representation of potential health risks. Monte Carlo simulations with 10,000 iterations were used to estimate carcinogenic, mutagenic, and non-carcinogenic risks for different age groups (A: 6-10 years, B: 11-20 years, C: 20-70 years), food types (bean cake, plantain chips, doughnut), oil types (soybean, mixed vegetable, sunflower), and frying-oil re-use frequencies (once, twice, thrice). Our results suggest that, for age Group A, doughnuts posed the highest probability of the carcinogenic risk exceeding the acceptable threshold (91.55%), followed by bean cakes (43.87%) and plantain chips (7.72%), as well as the highest probability of unacceptable mutagenic risk (89.2%), followed by bean cakes (40.32%). Among age Group B, doughnuts again had the highest probability of exceeding the carcinogenic risk limit (51.16%) and the mutagenic risk limit (44.27%), while plantain chips exhibited the highest maximum carcinogenic risk. For adults in age Group C, bean cakes had the highest probability of unacceptable carcinogenic (50.88%) and mutagenic (46.44%) risks, though plantain chips showed the highest maximum values for both carcinogenic and mutagenic risks in this age group. Regarding non-carcinogenic risks across the age groups, it was found that children in age Group A who consumed doughnuts had a 68.16% probability of a hazard quotient (HQ) greater than 1, suggesting potential cognitive impairment and lower IQ scores due to early PAH exposure; this group also faced risks from consuming plantain chips and bean cake. For age Group B, the consumption of plantain chips was associated with a 36.98% probability of an HQ greater than 1, indicating a potential risk of reduced lung function. In age Group C, the consumption of plantain chips was linked to a 35.70% probability of an HQ greater than 1, suggesting a potential risk of cardiovascular disease.
Keywords: PAHs, fried foods, carcinogenic risk, non-carcinogenic risk, Monte Carlo simulations
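A minimal sketch of the Monte Carlo approach described above (10,000 iterations, probability that the risk exceeds an acceptance limit). The input distributions, parameter values and the simplified incremental lifetime cancer risk (ILCR) form are illustrative assumptions, not the study's exposure model.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000                                                   # iterations, as in the study

# Illustrative input distributions (not the study's fitted values)
conc = rng.lognormal(mean=np.log(5.0), sigma=0.6, size=N)    # BaP-equivalent conc., ug/kg
intake = rng.triangular(20, 50, 120, size=N) / 1000          # kg food/day
body_weight = rng.normal(25, 4, size=N)                      # kg, child age group
csf = 7.3                                                    # assumed cancer slope factor, (mg/kg-day)^-1
ef, ed, at = 200, 5, 70 * 365                                # days/yr, years, averaging time (days)

# Chronic daily intake (mg/kg-day) and incremental lifetime cancer risk
cdi = conc * 1e-3 * intake * ef * ed / (body_weight * at)
ilcr = cdi * csf

acceptable = 1e-6                                            # common acceptance threshold
print("P(ILCR > 1e-6) =", np.mean(ilcr > acceptable))
print("95th percentile ILCR =", np.percentile(ilcr, 95))
```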
Procedia PDF Downloads 12
123 Assessment of Sediment Control Characteristics of Notches in Different Sediment Transport Regimes
Authors: Chih Ming Tseng
Abstract:
Landslides during typhoons generate substantial amounts of sediment, and subsequent rainfall can trigger various types of sediment transport regimes, such as debris flows, high-concentration sediment-laden flows, and typical river sediment transport. This study aims to investigate the sediment control characteristics of natural notches within different sediment transport regimes. High-resolution digital terrain models were used to establish the relationship between slope gradients and catchment areas, which was then used to delineate distinct sediment transport regimes and analyze the sediment control characteristics of notches within these regimes. The results indicate that the catchments of Aiyuzi Creek, Hossa Creek, and Chushui Creek in the study region can be clearly categorized into three sediment transport regimes based on the slope-area relationship curves: frequently collapsing headwater areas, debris flow zones, and high-concentration sediment-laden flow zones. The threshold for transitioning from the collapse zone to the debris flow zone in the Aiyuzi Creek catchment is lower than in Hossa Creek and Chushui Creek, suggesting that the active collapse processes in the upper reaches of Aiyuzi Creek continuously supply a significant sediment source, making it more susceptible to subsequent debris flow events. Moreover, the analysis of sediment trapping efficiency at notches within different sediment transport regimes reveals that as the notch constriction ratio increases, the sediment accumulation per unit area also increases. The accumulation thickness per unit area in high-concentration sediment-laden flow zones is greater than in debris flow zones, indicating differences in sediment deposition characteristics among the sediment transport regimes. Regarding sediment control rates at notches, there is a generally positive correlation with the notch constriction ratio. During Typhoon Morakot in 2009, the substantial sediment supply from slope failures in the upstream catchment led to an oversupplied sediment transport condition in the river channel. Consequently, sediment control rates were more pronounced during medium and small sediment transport events between 2010 and 2015. However, there were no significant differences in sediment control rates at notches among the different sediment transport regimes. Overall, this research provides valuable insights into the sediment control characteristics of notches under various sediment transport conditions, which can aid in the development of improved sediment management strategies in watersheds.
Keywords: landslide, debris flow, notch, sediment control, DTM, slope-area relation
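A minimal sketch of the slope-area delineation described above: each channel cell is classified by where it falls relative to slope-versus-contributing-area curves. The power-law form and the threshold coefficients are illustrative assumptions; the study derives its regime boundaries from high-resolution DTMs.

```python
import numpy as np

def classify_regime(slope, area_km2, k_upper=0.35, k_lower=0.10, theta=0.4):
    """Classify a channel cell from its local slope (m/m) and drainage area.

    Assumed power-law boundaries S = k * A^(-theta): cells above the upper
    curve are treated as frequently collapsing headwaters, between the two
    curves as debris-flow reaches, and below as high-concentration
    sediment-laden (fluvial) reaches.
    """
    s_upper = k_upper * area_km2 ** (-theta)
    s_lower = k_lower * area_km2 ** (-theta)
    if slope >= s_upper:
        return "headwater collapse zone"
    elif slope >= s_lower:
        return "debris-flow zone"
    return "high-concentration sediment-laden flow zone"

# Illustrative cells: (local slope, drainage area in km^2)
for s, a in [(0.80, 0.2), (0.25, 1.5), (0.02, 12.0)]:
    print(s, a, "->", classify_regime(s, a))
```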
Procedia PDF Downloads 28
122 De Novo Assembly and Characterization of the Transcriptome from the Fluoroacetate Producing Plant, Dichapetalum Cymosum
Authors: Selisha A. Sooklal, Phelelani Mpangase, Shaun Aron, Karl Rumbold
Abstract:
Organically bound fluorine (the C-F bond) is extremely rare in nature. Despite this, the first fluorinated secondary metabolite, fluoroacetate, was isolated from the plant Dichapetalum cymosum (commonly known as Gifblaar). However, the enzyme responsible for fluorination (a fluorinase) in Gifblaar has never been isolated, and very little progress has been made in understanding this process in higher plants. Fluorinated compounds have vast applications in the pharmaceutical, agrochemical and fine chemicals industries. Consequently, an enzyme capable of catalysing C-F bond formation has great potential as an industrial biocatalyst, considering that the field of fluorination is virtually entirely synthetic. As with any biocatalyst, a range of these enzymes is required. Therefore, it is imperative to expand the exploration for novel fluorinases. This study aimed to gain molecular insights into secondary metabolite biosynthesis in Gifblaar using a high-throughput sequencing-based approach. Mechanical wounding studies were performed on Gifblaar leaf tissue in order to induce expression of the fluorinase. The transcriptomes of the wounded and unwounded plants were then sequenced on the Illumina HiSeq platform. A total of 26.4 million short sequence reads were assembled into 77,845 transcripts using Trinity. Overall, 68.6% of transcripts were annotated with gene identities using public databases (SwissProt, TrEMBL, GO, COG, Pfam, EC) with an E-value threshold of 1E-05. Sequences exhibited the greatest homology to the model plant Arabidopsis thaliana (27%). A total of 244 annotated transcripts were found to be differentially expressed between the wounded and unwounded plants. In addition, secondary metabolic pathways present in Gifblaar were successfully reconstructed using Pathway Tools. Due to the lack of genetic information for plant fluorinases, no transcript could be annotated as a fluorinating enzyme. Thus, a local database containing the five existing bacterial fluorinases was created. Fifteen transcripts with homology to partial regions of existing fluorinases were found. In efforts to obtain the full coding sequence of the Gifblaar fluorinase, primers were designed targeting the regions of homology, and genome walking will be performed to amplify the unknown regions. This is the first genetic data available for Gifblaar. It provides novel insights into the mechanisms of metabolite biosynthesis and will allow for the discovery of the first eukaryotic fluorinase.
Keywords: biocatalyst, fluorinase, gifblaar, transcriptome
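A minimal sketch of the annotation-filtering step described above: keeping only homology hits at or below the E-value threshold of 1E-05. The tab-separated input layout (query, subject, identity, E-value) mirrors common BLAST-style tabular output, but the file name and column order used here are assumptions.

```python
import csv

E_VALUE_CUTOFF = 1e-05  # threshold used for annotation in the study

def filter_hits(path):
    """Yield (transcript, subject, evalue) rows that pass the E-value cutoff.

    Assumes a BLAST-like tab-separated file with columns:
    query_id, subject_id, percent_identity, evalue
    """
    with open(path, newline="") as handle:
        for query, subject, identity, evalue in csv.reader(handle, delimiter="\t"):
            if float(evalue) <= E_VALUE_CUTOFF:
                yield query, subject, float(evalue)

# Example usage with a hypothetical results file:
# for transcript, subject, ev in filter_hits("trinity_vs_swissprot.tsv"):
#     print(transcript, subject, ev)
```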
Procedia PDF Downloads 273
121 Climate Change and Economic Performance in Selected Oil-Producing African Countries: A Trend Analysis Approach
Authors: Waheed O. Majekodunmi
Abstract:
Climate change is a real global phenomenon and an unquestionable threat to our quest for a healthy and livable planet. It is now regarded as potentially the most monumental environmental challenge people and the planet will be confronted with over the next centuries. Expectedly, climate change mitigation was one of the central themes of COP 28. Despite contributing the least to climate change, Africa is and remains the hardest hit by its negative consequences, including poor growth performance. Currently, it is hypothesized that the high level of vulnerability and exposure to climate-related disasters, the low adaptive capacity against global warming and the high mitigation costs of climate change across the continent could be linked to the recent abysmal economic performance of African countries, especially oil-producing countries, where greenhouse gas emissions are potentially more prevalent. This paper examines the impact of climate change on the economic performance of selected oil-producing countries in Africa using evidence from Nigeria, Algeria and Angola. The objective of the study is to determine whether or not climate change influences the economic performance of oil-producing countries in Africa by examining the nexus between economic growth and climate-related variables. The study investigates the effect of climate change on the pace of economic growth in African oil-producing countries. To achieve the research objectives, the study utilizes a quantitative approach, using historical and current secondary data sets to determine the relationship between climate-related variables and economic growth variables in the selected countries. The study employs numbers, percentages, tables and trend graphs to explain the trends or common patterns between climate change, economic growth and the determinants of economic growth: governance effectiveness, infrastructure, macroeconomic stability and regulatory efficiency. Results from the empirical analysis show that the trends of economic growth and climate-related variables in the selected oil-producing countries run in opposite directions, as the increasing share of renewable energy sources in total energy consumption and the reduction in greenhouse gas emissions per capita did not translate into higher economic growth. Further findings show that annual surface temperatures in the selected countries do not share similar trends with the food imports ratio and the GDP per capita annual growth rate, suggesting that climate change does not significantly affect agricultural productivity and economic growth in oil-producing countries in Africa. Annual surface temperature was also found not to share a similar pattern with governance effectiveness, macroeconomic stability and regulatory efficiency, reinforcing the claim that some economic growth variables are independent of climate change. The policy implication of this research is that oil-producing African countries need to focus more on improving the macroeconomic environment and streamlining governance and institutional processes to boost their economic performance before considering the adoption of climate change adaptation and mitigation strategies.
Keywords: climate change, climate vulnerability, economic growth, greenhouse gas emissions per capita, oil-producing countries, share of renewable energy in total energy consumption
Procedia PDF Downloads 52
120 Evaluation of Simple, Effective and Affordable Processing Methods to Reduce Phytates in the Legume Seeds Used for Feed Formulations
Authors: N. A. Masevhe, M. Nemukula, S. S. Gololo, K. G. Kgosana
Abstract:
Background and Study Significance: Legume seeds are important in agriculture, as they are used for feed formulations due to their nutrient density, low cost, and easy accessibility. Although they are important sources of energy, proteins, carbohydrates, vitamins, and minerals, they contain abundant quantities of anti-nutritive factors that reduce the bioavailability of nutrients, the digestibility of proteins, and mineral absorption in livestock. However, the removal of these factors is too costly, as it requires expensive state-of-the-art techniques such as high-pressure and thermal processing. Basic Methodology: The aim of the study was to investigate cost-effective methods that can be used to reduce the inherent phytates, as putative antinutrients, in legume seeds. The seeds of Arachis hypogaea, Pisum sativum and Vigna radiata L. were subjected to single processing methods, viz. raw seeds plus dehulling (R+D), soaking plus dehulling (S+D), ordinary cooking plus dehulling (C+D), infusion plus dehulling (I+D), autoclaving plus dehulling (A+D) and microwaving plus dehulling (M+D), and to five combined methods (S+I+D; S+A+D; I+M+D; S+C+D; S+M+D). All the processed seeds were dried, ground into powder, extracted, and analyzed on a microplate reader to determine the percentage of phytates per dry mass of the legume seeds. Phytic acid was used as a positive control, and one-way ANOVA was used to determine significant differences between the means of the processing methods at a threshold of 0.05. Major Findings: The processing methods gave percentage yield ranges of 39.1-96%, 67.4-88.8%, and 70.2-93.8% for V. radiata, A. hypogaea and P. sativum, respectively. Though the raw seeds contained the highest phytate contents, which ranged between 0.508% and 0.527%, as expected, R+D resulted in a slightly lower phytate percentage range of 0.469-0.485%, while the other processing methods resulted in phytate contents below 0.35%. The M+D and S+M+D methods showed low phytate percentage ranges of 0.276-0.296% and 0.272-0.294%, respectively, with the lowest percentage determined for S+M+D of P. sativum. Furthermore, these results were found to be significantly different (p<0.05). Though phytates cause micronutrient deficits, as they chelate important minerals such as calcium, zinc, iron, and magnesium, their reduction may enhance nutrient bioavailability since they cannot be digested by ruminants. Concluding Statement: Although the nutritional analyses of the processed legume seeds are still in progress, the M+D and S+M+D methods, which significantly reduced the phytates in the investigated legume seeds, may be recommended to local farmers and feed-producing industries so as to enhance animal health and production at an affordable cost.
Keywords: anti-nutritive factors, extraction, legume seeds, phytate
Procedia PDF Downloads 28
119 A Prediction Method of Pollutants Distribution Pattern: Flare Motion Using Computational Fluid Dynamics (CFD) Fluent Model with Weather Research Forecast Input Model during Transition Season
Authors: Benedictus Asriparusa, Lathifah Al Hakimi, Aulia Husada
Abstract:
A large amount of energy is wasted by the release of natural gas associated with the oil industry. This release disturbs the environment, particularly the condition of the atmosphere globally, which contributes to global warming. This research presents an overview of the methods employed by researchers at PT. Chevron Pacific Indonesia in the Minas area to determine a new prediction method for measuring and reducing gas flaring and its emissions. The method emphasizes advanced research involving analytical studies, numerical studies, modeling, and computer simulations, amongst other techniques. A flaring system is the controlled burning of natural gas in the course of routine oil and gas production operations. This burning occurs at the end of a flare stack or boom. The combustion process releases greenhouse gases and other emissions such as NO2, CO2, SO2, etc. This condition affects the chemical composition of the air and the environment around the boundary layer, mainly during the transition season. The transition season in Indonesia is very difficult to predict because its pattern results from the interaction of two different air masses. This paper focuses on the 2013 transition season, for which a simulation is needed to establish a new pattern of the pollutant distribution. The paper outlines trends in gas flaring modeling and current developments in predicting the dominant variables of the pollutant distribution. A Fluent model is used to simulate the distribution of pollutant gases coming out of the stack, whereas WRF model output is used to overcome the limitations of the available meteorological data and atmospheric conditions in the study area. Based on the model runs, the most influential factor was wind speed. The goal of the simulation is to predict the new pollutant distribution pattern at the times when the fastest and the slowest winds occur. According to the simulation results, the fastest wind (end of March) moves pollutants in a horizontal direction, and the slowest wind (middle of May) moves pollutants vertically. Moreover, with the flare stack designed in compliance with the EPA Oil and Gas Facility Stack Parameters, pollutant concentrations remain below the NAAQS (National Ambient Air Quality Standards) thresholds.
Keywords: flare motion, new prediction, pollutants distribution, transition season, WRF model
Procedia PDF Downloads 555
118 The Influence of Morphology and Interface Treatment on Organic 6,13-bis (triisopropylsilylethynyl)-Pentacene Field-Effect Transistors
Authors: Daniel Bülz, Franziska Lüttich, Sreetama Banerjee, Georgeta Salvan, Dietrich R. T. Zahn
Abstract:
For the development of electronics, organic semiconductors are of great interest due to their adjustable optical and electrical properties. They are especially interesting for spintronic applications because of their weak spin scattering, which leads to longer spin lifetimes compared to inorganic semiconductors. It has been shown that some organic materials change their resistance if an external magnetic field is applied. Pentacene is one of the materials which exhibit the so-called photoinduced magnetoresistance, which results in a modulation of the photocurrent when the external magnetic field is varied. The soluble derivative of pentacene, 6,13-bis(triisopropylsilylethynyl)-pentacene (TIPS-pentacene), exhibits the same negative magnetoresistance. Aiming for simpler fabrication processes, in this work we compare TIPS-pentacene organic field effect transistors (OFETs) made from solution with those fabricated by thermal evaporation. Because of the different processing, the TIPS-pentacene thin films exhibit different morphologies in terms of crystal size and homogeneity of the substrate coverage. On the other hand, the interface treatment is known to have a strong influence on the threshold voltage, eliminating trap states of the silicon oxide at the gate electrode and thereby changing the electrical switching response of the transistors. Therefore, we investigate the influence of interface treatment using octadecyltrichlorosilane (OTS) or a simple cleaning procedure with acetone, ethanol, and deionized water. The transistors consist of pre-structured OFET substrates including gate, source, and drain electrodes, on top of which TIPS-pentacene dissolved in a mixture of tetralin and toluene is deposited by drop-, spray-, and spin-coating. Thereafter, we keep the sample for one hour at a temperature of 60 °C. For the transistor fabrication by thermal evaporation, the pre-structured OFET substrates are also kept at a temperature of 60 °C during deposition, with a rate of 0.3 nm/min and at a pressure below 10⁻⁶ mbar. The OFETs are characterized by means of optical microscopy in order to determine the overall quality of the sample, i.e., crystal size and coverage of the channel region. The output and transfer characteristics are measured in the dark and under illumination provided by a white light LED in the spectral range from 450 nm to 650 nm with a power density of (8±2) mW/cm2.
Keywords: organic field effect transistors, solution processed, surface treatment, TIPS-pentacene
Procedia PDF Downloads 447
117 Research on Tight Sandstone Oil Accumulation Process of the Third Member of Shahejie Formation in Dongpu Depression, China
Authors: Hui Li, Xiongqi Pang
Abstract:
In recent years, tight oil has become a hot spot for unconventional oil and gas exploration and development in the world. The Dongpu Depression is a typical hydrocarbon-rich basin in the southwest of the Bohai Bay Basin, in which tight sandstone oil and gas have been discovered in deep reservoirs, most of which are buried deeper than 3500 m. The distribution and development characteristics of these deep tight sandstone reservoirs need to be studied. The main source rocks in the study area are the dark mudstone and shale of the middle and lower third sub-member of the Shahejie Formation. The Total Organic Carbon (TOC) content of the source rock is between 0.08% and 11.54%, generally higher than 0.6%, and the value of S1+S2 is between 0.04 and 72.93 mg/g, generally higher than 2 mg/g, so the source rock can be evaluated as middle to good quality overall. The kerogen type of the organic matter is predominantly type II1 and II2. Vitrinite reflectance (Ro) is mostly greater than 0.6%, indicating that the source rock has entered the hydrocarbon generation threshold. The physical properties of the reservoir are poor: most reservoirs have a porosity lower than 12% and a permeability of less than 1×10⁻³ μm². The rocks in this area show great heterogeneity, and some areas have developed sweet spots with high porosity and permeability. According to SEM, thin section images, inclusion tests and other analyses, the reservoir was affected by compaction and cementation during the early diagenesis stage (44-31 Ma). This diagenesis produced tight reservoirs in the Huzhuangji, Pucheng and Weicheng Areas, while the porosity in the Machang, Qiaokou and Wenliu Areas remained above 12%. During middle diagenesis stage A (31-17 Ma), the reservoir porosity in the Machang, Pucheng and Huzhuangji Areas increased due to dissolution; after that, the oil generation window of the source rock was reached for the first phase of hydrocarbon charging (31-23 Ma), which formed conventional oil accumulations in the Machang, Qiaokou, Wenliu and Huzhuangji Areas and unconventional tight reservoirs in the Pucheng and Weicheng Areas. Then came middle diagenesis stage B (17-7 Ma); in this stage, the porosity of the reservoir continued to decrease after the dissolution, and the reservoirs became generally compacted. A second hydrocarbon filling has been in progress since 7 Ma, and most of the pools charged and formed in this process are tight sandstone oil reservoirs. In conclusion, tight sandstone oil was formed in two patterns in the Dongpu Depression, which can be summarized as a 'densification first, then accumulation' pattern and an 'accumulation first, then densification' pattern.
Keywords: accumulation process, diagenesis, Dongpu Depression, tight sandstone oil
Procedia PDF Downloads 116
116 A Review on Silicon Based Induced Resistance in Plants against Insect Pests
Authors: Asim Abbasi, Muhammad Sufyan, Muhammad Kamran, Iqra
Abstract:
The development of resistance in insect pests against various groups of insecticides has prompted the use of alternative integrated pest management approaches. Among these, induced host plant resistance represents an important strategy, as it offers a practical, cheap and long-lasting solution to keep pest populations below the economic threshold level (ETL). Silicon (Si) plays a major role in regulating plant-environment relationships by strengthening the plant's anti-stress mechanisms, helping it cope with environmental extremes and obtain a better yield and a higher-quality end produce. Among biotic stresses, insect herbivory represents one class against which Si provides defense. Silicon in its neutral form (H₄SiO₄) is absorbed by plants via the roots, either through an active process mediated by different transporters located in the plasma membrane of root cells or by a passive process mostly regulated by the transpiration stream, in which it travels via the xylem along with water. Plant tissues, mainly the epidermal cell walls, are the sinks of the absorbed silicon, where it polymerizes in the form of amorphous silica or monosilicic acid. The noteworthy function of this absorbed silicon is to provide structural rigidity to the tissues and strength to the cell walls. Silicon has both direct and indirect effects on insect herbivores. The increased abrasiveness and hardness of epidermal plant tissues and the reduced digestibility resulting from the deposition of Si, primarily as phytoliths within the cuticle layer, are now the best-documented mechanisms by which Si enhances plant resistance to insect herbivores. Moreover, increased Si content in the diet impedes the efficiency with which insects transform consumed food into body mass. The palatability of the food material is also changed by Si application, which deters herbivore feeding. The production of plant defensive compounds such as silica and phenols is likewise amplified by the exogenous application of silicon sources, which reduces the probing time of certain insects. Some studies also highlight the role of silicon at the third trophic level, as it attracts natural enemies of the insects attacking the crop. Hence, the inclusion of Si in pest management approaches can be a healthy and eco-friendly tool in the future.
Keywords: defensive, phytoliths, resistance, stresses
Procedia PDF Downloads 188
115 Comparison of Spiking Neuron Models in Terms of Biological Neuron Behaviours
Authors: Fikret Yalcinkaya, Hamza Unsal
Abstract:
To understand how neurons work, it is necessary to combine experimental studies in neuroscience with numerical simulations of neuron models in a computer environment. In this regard, the simplicity and applicability of spiking neuron modeling functions have been of great interest in computational and numerical neuroscience in recent years. Spiking neuron models can be classified by the various neuronal behaviors they exhibit, such as spiking and bursting. These classifications are important for researchers working in theoretical neuroscience. In this paper, three different spiking neuron models, the Izhikevich, Adaptive Exponential Integrate-and-Fire (AEIF) and Hindmarsh-Rose (HR) models, which are based on first-order differential equations, are discussed and compared. First, the physical meanings, derivatives, and differential equations of each model are provided and simulated in the Matlab environment. Then, by selecting appropriate parameters, the models were examined visually in the Matlab environment, with the aim of demonstrating which model can reproduce well-known biological neuron behaviours such as tonic spiking, tonic bursting, mixed-mode firing, spike frequency adaptation, resonator and integrator. As a result, the Izhikevich model was shown to reproduce regular spiking, chattering (continuous bursting), intrinsically bursting, thalamo-cortical, low-threshold spiking and resonator behaviour. The Adaptive Exponential Integrate-and-Fire model was able to produce firing patterns such as regular spiking, adaptation, initial bursting, regular bursting, delayed spiking, delayed regular bursting, transient spiking and irregular spiking. The Hindmarsh-Rose model showed three different dynamic neuron behaviours: spiking, bursting and chaotic. From these results, the Izhikevich cell model may be preferred due to its ability to reflect the true behavior of the nerve cell, its ability to produce different types of spikes, and its suitability for use in larger-scale brain models. The most important reason for choosing the Adaptive Exponential Integrate-and-Fire model is that it can create rich firing patterns with fewer parameters. The chaotic behaviours of the Hindmarsh-Rose neuron model, like those of some other chaotic systems, are thought to be useful in many scientific and engineering applications such as physics, secure communication and signal processing.
Keywords: Izhikevich, adaptive exponential integrate fire, Hindmarsh Rose, biological neuron behaviours, spiking neuron models
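As an illustration of the simplest of the three formulations, a forward-Euler sketch of the Izhikevich model is given below; the regular-spiking parameter set (a=0.02, b=0.2, c=-65, d=8) and the constant input current are standard textbook values rather than the parameters tuned in this paper, whose own simulations were carried out in Matlab.

```python
import numpy as np

def izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, T=500.0, dt=0.25):
    """Forward-Euler integration of the Izhikevich model (regular-spiking set).

    dv/dt = 0.04*v^2 + 5*v + 140 - u + I
    du/dt = a*(b*v - u), with reset v -> c, u -> u + d when v >= 30 mV.
    """
    n = int(T / dt)
    v, u = -65.0, b * -65.0
    trace, spike_times = np.empty(n), []
    for i in range(n):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:              # spike: record and reset
            trace[i] = 30.0
            spike_times.append(i * dt)
            v, u = c, u + d
        else:
            trace[i] = v
    return trace, spike_times

trace, spikes = izhikevich()
print(f"{len(spikes)} spikes in 500 ms of simulated time")
```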
Procedia PDF Downloads 180
114 The Interaction of Climate Change and Human Health in Italy
Authors: Vito Telesca, Giuseppina A. Giorgio, M. Ragosta
Abstract:
The effects of extreme heat events have been increasing in recent years, and humans are forced to adjust to adverse climatic conditions. The impact of weather on human health has become of public health significance, especially in light of climate change and the rising frequency of devastating weather events (e.g., heat waves and floods). The interest of the scientific community is widely known; in particular, the associations between temperature and mortality are well studied. Weather conditions are natural factors that affect the human organism. Recent works show that the temperature threshold at which an impact is seen varies by geographic area and season. These results suggest that heat warning criteria should consider local thresholds to account for acclimation to the local climatology as well as the seasonal timing of a forecasted heat wave. The problem of so-called 'local warming' is therefore very important. It is preventable with adequate warning tools and effective emergency planning. Since climate change has the potential to increase the frequency of these types of events, improved heat warning systems are urgently needed. This would require better knowledge of the full impact of extreme heat on morbidity and mortality. The majority of researchers who analyze the associations between human health and weather variables investigate the effect of air temperature and of bioclimatic indices. These indices combine air temperature, relative humidity, and wind speed and are very important for determining human thermal comfort. Health impact studies of weather events have shown that prevention is essential to dramatically reduce the impact of heat waves. The Italian summer of 2012 was characterized by high average temperatures (+2.3°C with respect to the 1971-2000 reference period), enough to be considered the second hottest summer since 1800. Italy was the first country in Europe to adopt tools to predict these phenomena 72 hours in advance (the Heat Health Watch Warning System - HHWWS). Furthermore, in Italy heat alert criteria rely on different indices, for example the apparent temperature, the Scharlau index, the thermohygrometric index, etc. This study examines the importance of developing public health policies that protect the most vulnerable people (such as the elderly) from extreme temperatures, highlighting the factors that confer susceptibility.
Keywords: heat waves, Italy, local warming, temperature
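As an example of the bioclimatic indices mentioned above that combine air temperature, relative humidity and wind speed, the sketch below implements one common non-radiative formulation of Steadman's apparent temperature; the coefficients should be treated as indicative, since operational heat-alert systems rely on locally calibrated indices and thresholds.

```python
import math

def apparent_temperature(t_air_c, rel_humidity_pct, wind_ms):
    """Steadman-style apparent temperature (non-radiative form), in deg C.

    AT = Ta + 0.33*e - 0.70*ws - 4.00, where e is the water vapour pressure
    (hPa) estimated from relative humidity and air temperature.
    """
    e = (rel_humidity_pct / 100.0) * 6.105 * math.exp(17.27 * t_air_c / (237.7 + t_air_c))
    return t_air_c + 0.33 * e - 0.70 * wind_ms - 4.00

# Illustrative heat-wave afternoon: 35 deg C, 40% relative humidity, light breeze
print(round(apparent_temperature(35.0, 40.0, 1.5), 1))
```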
Procedia PDF Downloads 243
113 Transition from Linear to Circular Business Models with Service Design Methodology
Authors: Minna-Maari Harmaala, Hanna Harilainen
Abstract:
Estimates of the economic value of transitioning to circular economy models vary, but it has been estimated to represent $1 trillion worth of new business for the global economy. In Europe alone, estimates claim that adopting circular-economy principles could not only have environmental and social benefits but also generate a net economic benefit of €1.8 trillion by 2030. Proponents of a circular economy argue that it offers a major opportunity to increase resource productivity, decrease resource dependence and waste, and increase employment and growth. A circular system could improve competitiveness and unleash innovation. Yet most companies are not capturing these opportunities, and thus even abundant circular opportunities remain uncaptured even though they would seem inherently profitable. Service design, in broad terms, relates to developing an existing or a new service or service concept with emphasis and focus on the customer experience from the onset of the development process. Service design may even mean starting from scratch and co-creating the service concept entirely with the help of customer involvement. Service design methodologies provide a structured way of incorporating customer understanding and involvement into the process of designing better services that resonate better with customer needs. A business model is a depiction of how the company creates, delivers, and captures value, i.e. how it organizes its business. The process of business model development and adjustment or modification is also called business model innovation, and innovating business models has become a part of business strategy. Our hypothesis is that, in addition to linear models still being easier to adopt and often having lower threshold costs, companies lack an understanding of how circular models can be adopted into their business and of how willing and ready customers are to adopt the new circular business models. In our research, we use a robust service design methodology to develop circular economy solutions with two case study companies. The aim of the process is not only to develop the service concepts and portfolio, but also to demonstrate that the willingness to adopt circular solutions exists in the customer base. In addition to service design, we employ business model innovation methods to develop, test, and validate the new circular business models further. The results clearly indicate that among the customer groups there are specific customer personas that are willing to adopt circular solutions and in fact expect companies to take a leading role in the transition towards a circular economy. At the same time, there is a group of indifferent customers, to whom the idea of circularity provides no added value. In addition, the case studies clearly show what changes the adoption of circular economy principles brings to the existing business model and how they can be integrated.
Keywords: business model innovation, circular economy, circular economy business models, service design
Procedia PDF Downloads 135
112 Smart Meters and In-Home Displays to Encourage Water Conservation through Behavioural Change
Authors: Julia Terlet, Thomas H. Beach, Yacine Rezgui
Abstract:
Urbanization, population growth, climate change and the current increase in water demand have made the adoption of innovative demand management strategies crucial for the water industry. Water conservation in urban areas has to be improved by encouraging consumers to adopt more sustainable habits and behaviours. This includes informing and educating them about their households’ water consumption and advising them about ways to achieve significant savings on a daily basis. This paper presents a study conducted in the context of the European FP7 WISDOM project. By integrating innovative Information and Communication Technology (ICT) frameworks, this project aims to achieve a change in water savings. More specifically, behavioural change will be attempted by implementing smart meters and in-home displays in a trial group of selected households within Cardiff (UK). Using this device, consumers will be able to receive feedback and information about their consumption and will also have the opportunity to compare their consumption to that of other consumers and similar households. Following an initial survey, it appeared necessary to implement these in-home displays in a way that matches consumers' motivations to save water. The results demonstrated the importance of various factors influencing people’s daily water consumption. Both the relevant literature on the subject and the results of our survey therefore led us to include a variety of elements within the in-home device. It first appeared crucial to make consumers aware of the economic aspect of water conservation and especially of the significant financial savings that can be achieved by reducing household water consumption in the long term. Likewise, reminding participants of the impact of their consumption on the environment, by making them more aware of water scarcity issues around the world, will help increase their motivation to save water. Additionally, peer pressure and social comparisons with neighbours and other consumers, accentuated by the use of online social networks such as Facebook or Twitter, will likely encourage consumers to reduce their consumption. Participants will also be able to compare their current consumption to their past consumption and to observe the consequences of their efforts to save water through diverse graphs and charts. Finally, including a virtual water game within the display will help the whole household, children and adults alike, to achieve significant reductions by providing them with simple tips and advice to save water on a daily basis. Moreover, by setting daily and weekly goals for them to reach, the game is expected to generate cooperation between family members. Members of each household will indeed be encouraged to work together to reduce their water consumption in different rooms of the house, such as the bathroom, the kitchen, or the toilets. Overall, this study will allow us to understand the elements that attract consumers the most and the features that are most commonly used by the participants. In this way, we intend to determine the main factors influencing water consumption in order to identify the measures that will most encourage water conservation in both the long and short term.
Keywords: behavioural change, ICT technologies, water consumption, water conservation
Procedia PDF Downloads 335
111 Meeting the Energy Balancing Needs in a Fully Renewable European Energy System: A Stochastic Portfolio Framework
Authors: Iulia E. Falcan
Abstract:
The transition of the European power sector towards a clean, renewable energy (RE) system faces the challenge of meeting power demand in times of low wind speed and low solar radiation at a reasonable cost. This is likely to be achieved through a combination of 1) energy storage technologies, 2) development of the cross-border power grid, 3) installed overcapacity of RE and 4) dispatchable power sources, such as biomass. This paper uses NASA-derived hourly data on the weather patterns of sixteen European countries over the past twenty-five years, and load data from the European Network of Transmission System Operators for Electricity (ENTSO-E), to develop a stochastic optimization model. This model aims to understand the synergies between the four classes of technologies mentioned above and to determine the optimal configuration of the energy technology portfolio. While this issue has been addressed before, it was done using deterministic models that extrapolated historic data on weather patterns and power demand and ignored the risk of an unbalanced grid, a risk stemming from both the supply and the demand side. This paper explicitly accounts for the inherent uncertainty in the energy system transition. It articulates two levels of uncertainty: a) the inherent uncertainty in future weather patterns and b) the uncertainty of fully meeting power demand. The first level of uncertainty is addressed by developing probability distributions for future weather data, and thus for the expected power output from RE technologies, rather than assuming known future power output. The second level of uncertainty is operationalized by introducing a Conditional Value at Risk (CVaR) constraint in the portfolio optimization problem. By setting the risk threshold at different levels (1%, 5% and 10%), important insights are revealed regarding the synergies of the different energy technologies, i.e., the circumstances under which they behave as either complements or substitutes to each other. The paper concludes that allowing for uncertainty in expected power output, rather than extrapolating historic data, paints a more realistic picture and reveals important departures from the results of deterministic models. In addition, explicitly acknowledging the risk of an unbalanced grid, and assigning it different thresholds, reveals non-linearity in the cost functions of different technology portfolio configurations. This finding has significant implications for the design of the European energy mix.
Keywords: cross-border grid extension, energy storage technologies, energy system transition, stochastic portfolio optimization
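A minimal sketch of the CVaR idea used above: given a set of weather/demand scenarios, estimate the Value at Risk and Conditional Value at Risk of the unserved energy for a candidate technology portfolio and check it against a risk threshold. The scenario generation and the threshold value here are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

def var_cvar(losses, alpha=0.95):
    """Empirical VaR and CVaR (expected loss in the worst (1-alpha) tail)."""
    losses = np.sort(np.asarray(losses))
    var = np.quantile(losses, alpha)
    tail = losses[losses >= var]
    return var, tail.mean()

# Hypothetical scenarios: unserved energy (GWh) of one candidate portfolio
rng = np.random.default_rng(1)
unserved = np.maximum(0.0, rng.normal(loc=-2.0, scale=3.0, size=5000))

var95, cvar95 = var_cvar(unserved, alpha=0.95)
max_cvar = 1.5   # illustrative risk threshold on tail-average unserved energy
print(f"VaR95={var95:.2f} GWh, CVaR95={cvar95:.2f} GWh, feasible={cvar95 <= max_cvar}")
```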
Procedia PDF Downloads 169
110 Occupational Heat Stress Related Adverse Pregnancy Outcome: A Pilot Study in South India Workplaces
Authors: Rekha S., S. J. Nalini, S. Bhuvana, S. Kanmani, Vidhya Venugopal
Abstract:
Introduction: Pregnant women's occupational heat exposure has been linked to foetal abnormalities and pregnancy complications. Heat in the workplace is expected to lead to Adverse Pregnancy Outcomes (APO), especially in tropical countries where temperatures are rising and workplace cooling interventions are minimal. For effective interventions, in-depth understanding and evidence about occupational heat stress and APO are required. Methodology: Approximately 800 pregnant women in and around Chennai who were employed in jobs requiring moderate to hard labour participated in the cohort research. During the study period (2014-2019), environmental heat exposures were measured using a Questemp WBGT monitor, and heat strain markers, such as Core Body Temperature (CBT) and Urine Specific Gravity (USG), were evaluated using an infrared thermometer and a refractometer, respectively. Self-reported health symptoms were collected using a validated HOTHAPS questionnaire. In addition, a postpartum follow-up with the mothers was carried out to collect APO-related data. Major findings of the study: Approximately 47.3% of pregnant workers had workplace WBGTs above the safe manual work threshold value for moderate/heavy work (average WBGT of 26.6±1.0°C). About 12.5% of the workers had CBT levels above the normal range, and 24.8% had USG levels above 1.020, both of which suggested mild dehydration. Miscarriages (3%), stillbirths/preterm births (3.5%), and low birth weights (8.8%) were the most common adverse outcomes among the pregnant workers. In addition, WBGT exposures above the TLVs during all trimesters were associated with a 2.3-fold increased risk of adverse foetal/maternal outcomes (95% CI: 1.4-3.8), after adjusting for potential confounding variables including age, education, socioeconomic status, history of abortion, stillbirth, preterm birth, LBW, and BMI. The study determined that workplace WBGTs had direct short- and long-term effects on the health of both the mother and the foetus. Despite the study's limited scope, the findings provide valuable insights and highlight the need for future comprehensive cohort studies and extensive data in order to establish effective policies to protect vulnerable pregnant women from the dangers of heat stress and to promote reproductive health.
Keywords: adverse outcome, heat stress, interventions, physiological strain, pregnant women
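The WBGT referred to above is conventionally computed from natural wet-bulb, globe and dry-bulb temperatures (ISO 7243-style weighting); the sketch below applies that weighting and flags readings against a reference limit for moderate work. The 28°C limit and the sample readings are illustrative assumptions, since the applicable TLV depends on workload and acclimatization.

```python
def wbgt_outdoor(t_nwb, t_globe, t_dry):
    """Outdoor (solar load) WBGT: 0.7*Tnwb + 0.2*Tg + 0.1*Tdb, all in deg C."""
    return 0.7 * t_nwb + 0.2 * t_globe + 0.1 * t_dry

def wbgt_indoor(t_nwb, t_globe):
    """Indoor / no solar load WBGT: 0.7*Tnwb + 0.3*Tg."""
    return 0.7 * t_nwb + 0.3 * t_globe

REFERENCE_LIMIT_C = 28.0   # assumed limit for moderate work, acclimatized worker

# Illustrative workplace readings (deg C): natural wet bulb, globe, dry bulb
readings = [(24.5, 35.0, 33.0), (26.8, 38.5, 36.0)]
for t_nwb, t_g, t_db in readings:
    wbgt = wbgt_outdoor(t_nwb, t_g, t_db)
    print(f"WBGT={wbgt:.1f} C, exceeds limit: {wbgt > REFERENCE_LIMIT_C}")
```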
Procedia PDF Downloads 73109 Clustering-Based Computational Workload Minimization in Ontology Matching
Authors: Mansir Abubakar, Hazlina Hamdan, Norwati Mustapha, Teh Noranis Mohd Aris
Abstract:
In order to build a matching pattern for each class correspondence of an ontology, it is necessary to specify a set of attribute correspondences across two corresponding classes by clustering. Clustering reduces the number of potential attribute correspondences considered in the matching activity, which significantly reduces the computational workload; otherwise, all attributes of a class would have to be compared with all attributes of the corresponding class. Most existing ontology matching approaches lack scalable attribute discovery methods, such as cluster-based attribute searching. This makes ontology matching computationally expensive. It is therefore vital in ontology matching to design a scalable element or attribute correspondence discovery method that reduces the number of potential element correspondences during mapping and thereby reduces the computational workload of the matching process as a whole. The objectives of this work are 1) to design a clustering method for discovering similar attribute correspondences and relationships between ontologies, and 2) to discover element correspondences by classifying the elements of each class based on their value features using the K-medoids clustering technique. Discovering attribute correspondences is essential for comparing instances when matching two ontologies. During the matching process, any two instances across two different data sets should be compared on their attribute values, so that they can be judged to be the same or not. Intuitively, any two instances that come from classes across which there is a class correspondence are likely to be identical to each other. Moreover, any two instances that hold more similar attribute values are more likely to be matched than ones with less similar attribute values. Most of the time, similar attribute values exist in the two instances across which there is an attribute correspondence. This work will present how to classify the attributes of each class with K-medoids clustering and then map the clustered groups by their statistical value features. We will also show how to map the attributes of a clustered group to the attributes of the mapped clustered group, generating a set of potential attribute correspondences that can be applied to generate a matching pattern. The K-medoids clustering phase largely reduces the number of non-corresponding attribute pairs considered when comparing instances, as only attribute pairs whose coverage probability reaches 100% and whose attributes exceed the specified threshold are considered as potential attributes for a matching. Using clustering reduces the number of potential element correspondences to be considered during the mapping activity, which in turn reduces the computational workload significantly. Otherwise, every element of a class in the source ontology would have to be compared with every element of the corresponding class in the target ontology. K-medoids can ably cluster the attributes of each class, so that a proportion of non-corresponding attribute pairs is not considered when constructing the matching pattern.Keywords: attribute correspondence, clustering, computational workload, k-medoids clustering, ontology matching
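To make the clustering step concrete, below is a minimal PAM-style k-medoids pass over per-attribute statistical value features; the feature vectors, distance metric and number of clusters are placeholders, and the snippet is a plain NumPy illustration rather than the matching pipeline described above.

```python
# Sketch: k-medoids (PAM-style alternation) over per-attribute "value features".
# Feature rows below are toy stand-ins, e.g. [mean value length, numeric ratio, distinct ratio].
import numpy as np

def k_medoids(X: np.ndarray, k: int, iters: int = 100, seed: int = 0):
    rng = np.random.default_rng(seed)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
    medoids = rng.choice(len(X), size=k, replace=False)
    labels = np.argmin(dist[:, medoids], axis=1)
    for _ in range(iters):
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members):
                # medoid = member minimising total distance to its own cluster
                new_medoids[c] = members[np.argmin(dist[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
        labels = np.argmin(dist[:, medoids], axis=1)                # reassign to new medoids
    return medoids, labels

X = np.array([[4.0, 0.9, 0.2], [4.2, 0.8, 0.3], [12.0, 0.1, 0.9],
              [11.5, 0.0, 0.8], [5.1, 0.7, 0.4]])
medoids, labels = k_medoids(X, k=2)
print("medoid rows:", medoids, "cluster labels:", labels)
```

Only attribute pairs falling into corresponding clusters would then be carried forward as candidate correspondences, which is what trims the comparison workload.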
Procedia PDF Downloads 248108 Dietary Patterns and Hearing Loss in Older People
Authors: N. E. Gallagher, C. E. Neville, N. Lyner, J. Yarnell, C. C. Patterson, J. E. Gallacher, Y. Ben-Shlomo, A. Fehily, J. V. Woodside
Abstract:
Hearing loss is highly prevalent in older people and can reduce quality of life substantially. Emerging research suggests that potentially modifiable risk factors, including risk factors previously related to cardiovascular disease risk, may be associated with a decreased or increased incidence of hearing loss. This has prompted investigation into the possibility that certain nutrients, foods or dietary patterns may also be associated with the incidence of hearing loss. The aim of this study was to determine any associations between dietary patterns and hearing loss in men enrolled in the Caerphilly study. The Caerphilly prospective cohort study began in 1979-1983 with the recruitment of 2512 men aged 45-59 years. Dietary data were collected using a self-administered, semi-quantitative, 56-item food frequency questionnaire (FFQ) at baseline (1979-1983), and a 7-day weighed food intake (WI) in a 30% sub-sample, while unaided pure-tone audiometric thresholds were assessed at 0.5, 1, 2 and 4 kHz between 1984 and 1988. Principal components analysis (PCA) was carried out to determine a posteriori dietary patterns, and multivariate linear and logistic regression models were used to examine associations with hearing level (pure-tone average (PTA) of the frequencies 0.5, 1, 2 and 4 kHz in decibels (dB)) for linear regression and with hearing loss (PTA > 25 dB) for logistic regression. Three dietary patterns were determined using PCA on the FFQ data: Traditional, Healthy, and High sugar/Alcohol avoider. After adjustment for potential confounding factors, both linear and logistic regression analyses showed a significant inverse association between the Healthy pattern and hearing loss (P<0.001), and linear regression analysis showed a significant association between the High sugar/Alcohol avoider pattern and hearing loss (P=0.04). Three similar dietary patterns were determined using PCA on the WI data: Traditional, Healthy, and High sugar/Alcohol avoider. After adjustment for potential confounding factors, logistic regression analyses showed a significant inverse association between the Healthy pattern and hearing loss (P=0.02) and a significant association between the Traditional pattern and hearing loss (P=0.04). A Healthy dietary pattern was thus found to be significantly inversely associated with hearing loss in middle-aged men in the Caerphilly study. Furthermore, a High sugar/Alcohol avoider pattern (FFQ) and a Traditional pattern (WI) were associated with poorer hearing levels. Consequently, the role of dietary factors in hearing loss remains to be fully established and warrants further investigation.Keywords: ageing, diet, dietary patterns, hearing loss
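A compact sketch of this analysis pipeline, deriving pattern scores with PCA and relating them to hearing loss (PTA > 25 dB) by logistic regression, is given below; the synthetic FFQ matrix, covariates and number of components are assumptions for illustration only, not the Caerphilly data.

```python
# Sketch: a posteriori dietary patterns via PCA on FFQ items, then logistic regression
# on hearing loss (PTA > 25 dB). All data below are synthetic placeholders.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_men, n_items = 300, 56
ffq = rng.poisson(3, size=(n_men, n_items)).astype(float)      # weekly intake frequencies
pta = rng.normal(22.0, 8.0, n_men)                             # pure-tone average, dB
hearing_loss = (pta > 25).astype(int)

pattern_scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(ffq))
covariates = rng.normal(size=(n_men, 2))                       # stand-ins for age, BMI, etc.
X = np.column_stack([pattern_scores, covariates])

model = LogisticRegression(max_iter=1000).fit(X, hearing_loss)
odds_ratios = np.exp(model.coef_[0][:3])                       # ORs per unit pattern score
print("Pattern odds ratios:", np.round(odds_ratios, 2))
```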
Procedia PDF Downloads 230107 Developing a Deep Understanding of the Immune Response in Hepatitis B Virus Infected Patients Using a Knowledge Driven Approach
Authors: Hanan Begali, Shahi Dost, Annett Ziegler, Markus Cornberg, Maria-Esther Vidal, Anke R. M. Kraft
Abstract:
Chronic hepatitis B virus (HBV) infection can be treated with nucleot(s)ide analogs (NA), for example, which inhibit HBV replication. However, they have hardly any influence on the functional cure of HBV, which is defined by hepatitis B surface antigen (HBsAg) loss. NA needs to be taken life-long and is not available to all patients worldwide. Additionally, NA-treated patients are still at risk of developing cirrhosis, liver failure, or hepatocellular carcinoma (HCC). Although each patient has the same components of the immune system, immune responses vary between patients. Therefore, a deeper understanding of the immune response against HBV in different patients is necessary to understand the parameters leading to HBV cure and to use this knowledge to optimize HBV therapies. This requires the seamless integration of an enormous amount of diverse and fine-grained data on viral markers, e.g., hepatitis B core-related antigen (HBcrAg) and HBsAg. The data integration system relies on the assumption that profiling human immune systems requires the analysis of various variables (e.g., demographic data, treatments, pre-existing conditions, immune cell response, or HLA typing) rather than only one. However, the values of these variables are collected independently. They are presented in a myriad of formats, e.g., Excel files, textual descriptions, lab book notes, and images of flow cytometry dot plots. Additionally, patients can be identified differently in these analyses. This heterogeneity complicates the integration of variables, as data management techniques are needed to create a unified view in which individual formats and identifiers are transparent when profiling the human immune system. The proposed study (HBsRE) aims at integrating heterogeneous data sets of 87 chronically HBV-infected patients, e.g., clinical data, immune cell response, and HLA typing, with knowledge encoded in biomedical ontologies and open-source databases into a knowledge-driven framework. This new technique enables us to harmonize and standardize heterogeneous datasets in the defined modeling of the data integration system, which will be evaluated in the knowledge graph (KG). KGs are data structures that represent knowledge and data as factual statements using a graph data model. Finally, the analytic data model will be applied on top of the KG in order to develop a deeper understanding of the immune profiles of various patients and to evaluate factors playing a role in a holistic profile of patients with HBsAg loss. Additionally, our objective is to utilize this unified approach to stratify patients for new effective treatments. This study is developed in the context of the project “Transforming big data into knowledge: for deep immune profiling in vaccination, infectious diseases, and transplantation (ImProVIT)”, which brings together a multidisciplinary team of computer scientists, infection biologists, and immunologists.Keywords: chronic hepatitis B infection, immune response, knowledge graphs, ontology
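To illustrate the "factual statements in a graph data model" idea, here is a minimal sketch using rdflib; the namespace, property names and values are invented for illustration and do not reflect the HBsRE schema or the ontologies actually used in the project.

```python
# Sketch: harmonised patient facts as RDF triples, then a SPARQL query over the graph.
# Namespace, properties and literal values are illustrative, not the project's schema.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/hbv/")
g = Graph()
g.bind("ex", EX)

patient = EX["patient_042"]
g.add((patient, RDF.type, EX.ChronicHBVPatient))
g.add((patient, EX.hasTreatment, EX.NucleosideAnalog))
g.add((patient, EX.hbsagLevel, Literal(250.0, datatype=XSD.double)))   # assumed unit: IU/ml
g.add((patient, EX.hlaType, Literal("HLA-A*02:01")))

query = """
PREFIX ex: <http://example.org/hbv/>
SELECT ?p ?level WHERE { ?p ex:hasTreatment ex:NucleosideAnalog ; ex:hbsagLevel ?level . }
"""
for row in g.query(query):
    print(row.p, row.level)   # patients on NA therapy together with their HBsAg level
```

Once clinical tables, flow cytometry read-outs and HLA typings are mapped onto such triples with shared identifiers, the analytic model can be run over one unified graph instead of many disjoint files.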
Procedia PDF Downloads 108106 Cardiothoracic Ratio in Postmortem Computed Tomography: A Tool for the Diagnosis of Cardiomegaly
Authors: Alex Eldo Simon, Abhishek Yadav
Abstract:
This study aimed to evaluate the utility of postmortem computed tomography (PMCT) and heart weight measurements in the assessment of cardiomegaly in cases of sudden death of cardiac origin by comparing the results of these two diagnostic methods. The study retrospectively analyzed PMCT data from 54 cases of sudden natural death and compared the findings with those of the autopsy. The study involved measuring the cardiothoracic ratio (CTR) from coronal CT images and determining the actual cardiac weight by weighing the heart during the autopsy. The inclusion criteria were cases of sudden death suspected to be caused by cardiac pathology, while exclusion criteria included death due to unnatural causes such as trauma or poisoning, diagnosed natural causes of death related to organs other than the heart, and cases of decomposition. Sensitivity, specificity, and diagnostic accuracy were calculated, and receiver operating characteristic (ROC) curves were generated to evaluate the accuracy of using the CTR to detect an enlarged heart. The CTR is a radiological tool used to assess cardiomegaly by measuring the maximum cardiac diameter in relation to the maximum transverse diameter of the chest wall. The clinically used CTR criterion has been modified from 0.50 to 0.57 for use in postmortem settings, where abnormalities can be detected by comparing CTR values to this threshold; a CTR value of 0.57 or higher is suggestive of hypertrophy but not conclusive. Similarly, heart weight is measured during the traditional autopsy, and a cardiac weight greater than 450 grams is defined as hypertrophy. Of the 54 cases evaluated, 22 (40.7%) had a CTR ranging from >0.50 to 0.57, and 12 cases (22.2%) had a CTR greater than 0.57, which was defined as hypertrophy. The mean CTR was 0.52 ± 0.06, and the mean heart weight was 369.4 ± 99.9 grams. Twelve cases were found to have hypertrophy as defined by PMCT, while only 9 cases were identified with hypertrophy at traditional autopsy. The sensitivity of the hypertrophy test was 55.56% (95% CI: 26.66-81.12), the specificity was 84.44% (95% CI: 71.22-92.25), and the diagnostic accuracy was 79.63% (95% CI: 67.1-88.23). A limitation of the study was the low sample size of only 54 cases, which may limit the generalizability of the findings. The comparison of the cardiothoracic ratio with heart weight in this study suggests that PMCT may serve as a screening tool for medico-legal autopsies when performed by forensic pathologists. However, it should be noted that the low sensitivity of the test (55.56%) may limit its diagnostic accuracy, and therefore further studies with larger sample sizes and more diverse populations are needed to validate these findings.Keywords: PMCT, virtopsy, CTR, cardiothoracic ratio
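A short sketch of the comparison described above, classifying cardiomegaly at the 0.57 CTR cut-off, scoring it against the autopsy criterion (heart weight > 450 g) and tracing an ROC curve, is shown below; the measurement arrays are synthetic placeholders, not the study data.

```python
# Sketch: CTR-based classification scored against autopsy heart weight (> 450 g),
# with sensitivity, specificity, accuracy and an ROC curve. Data are illustrative.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(7)
heart_weight = np.concatenate([rng.normal(340, 60, 45),      # non-hypertrophic hearts
                               rng.normal(520, 60, 9)])      # hypertrophic hearts
ctr = 0.35 + 0.0005 * heart_weight + rng.normal(0, 0.02, heart_weight.size)

truth = (heart_weight > 450).astype(int)                     # autopsy definition
pred = (ctr >= 0.57).astype(int)                             # PMCT definition

tp = np.sum((pred == 1) & (truth == 1)); fn = np.sum((pred == 0) & (truth == 1))
tn = np.sum((pred == 0) & (truth == 0)); fp = np.sum((pred == 1) & (truth == 0))
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / truth.size

fpr, tpr, cutoffs = roc_curve(truth, ctr)                    # ROC over all CTR cut-offs
print(f"Se={sensitivity:.1%}  Sp={specificity:.1%}  Acc={accuracy:.1%}  "
      f"AUC={roc_auc_score(truth, ctr):.2f}")
```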
Procedia PDF Downloads 81105 Automatic Differentiation of Ultrasonic Images of Cystic and Solid Breast Lesions
Authors: Dmitry V. Pasynkov, Ivan A. Egoshin, Alexey A. Kolchev, Ivan V. Kliouchkin
Abstract:
In most cases, typical cysts are easily recognized at ultrasonography. The specificity of this method for typical cysts reaches 98%, and it is usually considered the gold standard for typical cyst diagnosis. However, to conclude that a cyst is typical, all of the following features must be present: a clear margin, absence of internal echoes, and dorsal acoustic enhancement. At the same time, not every breast cyst is typical. This is especially characteristic of protein-containing cysts, which may have significant internal echoes. On the other hand, some solid lesions (predominantly malignant) may have a cystic appearance and may be falsely accepted as cysts. We therefore tried to develop an automatic method for differentiating cystic and solid breast lesions. Materials and methods. The input data were digital ultrasound images with 256 gray levels (Medison SA8000SE, Siemens X150, Esaote MyLab C). Identification of the lesion on these images was performed in two steps. In the first step, the region of interest (the lesion contour) was searched for and selected. Selection of this region was carried out using a sigmoid filter whose threshold was calculated from the empirical distribution function of the image brightness and, if necessary, corrected according to the average brightness of the image points with the highest brightness gradient. In the second step, the selected region was assigned to one of the lesion groups based on the statistical characteristics of its brightness distribution. The following characteristics were used: entropy, coefficients of linear and polynomial regression, quantiles of different orders, the average brightness gradient, etc. To determine the decision criterion for assigning a lesion to one of the groups (cystic or solid), training sets of these brightness-distribution characteristics were compiled separately for benign and malignant lesions. To test our approach, we used a set of 217 ultrasonic images of 107 cystic lesions (including 53 atypical ones, difficult to differentiate by eye) and 110 solid lesions. All lesions were cytologically and/or histologically confirmed. Visual identification was performed by a trained specialist in breast ultrasonography. Results. Our system correctly distinguished all 107 (100%) typical cysts, 107 of 110 (97.3%) solid lesions and 50 of 53 (94.3%) atypical cysts. In contrast, by eye it was possible to correctly identify all 107 (100%) typical cysts, 96 of 110 (87.3%) solid lesions and 32 of 53 (60.4%) atypical cysts. Conclusion. The automatic approach significantly surpasses the visual assessment performed by a trained specialist. The difference is especially large for atypical cysts and hypoechoic solid lesions with a clear margin. These data may have clinical significance.Keywords: breast cyst, breast solid lesion, differentiation, ultrasonography
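A minimal sketch of the two-step idea, a sigmoid intensity filter whose threshold comes from the empirical brightness distribution, followed by simple brightness-distribution features of the selected region, is given below; the synthetic image, gain and percentile are assumed values, not the authors' tuned pipeline.

```python
# Sketch: sigmoid filtering with a data-driven threshold, then brightness-distribution
# features (entropy, quantiles, mean gradient) of the selected region. Synthetic image.
import numpy as np

def sigmoid_filter(img: np.ndarray, gain: float = 10.0, percentile: float = 50.0) -> np.ndarray:
    t = np.percentile(img, percentile)            # threshold from the empirical brightness CDF
    return 1.0 / (1.0 + np.exp(-gain * (img.astype(float) - t) / 255.0))

def region_features(img: np.ndarray, mask: np.ndarray) -> dict:
    values = img[mask].astype(float)
    hist, _ = np.histogram(values, bins=256, range=(0, 256), density=True)
    hist = hist[hist > 0]
    gy, gx = np.gradient(img.astype(float))
    return {
        "entropy": float(-np.sum(hist * np.log2(hist))),
        "q25": float(np.percentile(values, 25)),
        "q75": float(np.percentile(values, 75)),
        "mean_gradient": float(np.hypot(gx, gy)[mask].mean()),
    }

img = np.random.default_rng(3).integers(0, 256, size=(128, 128)).astype(np.uint8)
mask = sigmoid_filter(img) > 0.5                  # candidate region of interest
print(region_features(img, mask))
```

In the actual system, a classifier trained on such features for the two lesion groups would supply the decision criterion.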
Procedia PDF Downloads 269104 Analog Railway Signal Object Controller Development
Authors: Ercan Kızılay, Mustafa Demirel, Selçuk Coşkun
Abstract:
Railway signaling systems consist of vital products that regulate railway traffic and provide safe route arrangements and maneuvers of trains. SIL 4 signal lamps are produced by many manufacturers today. There is a need for systems that enable these signal lamps to be controlled by commands from the interlocking. For the safe operation of railway systems from the RAMS perspective, these systems should behave in a fail-safe manner and give error indications to the interlocking system when an unexpected situation occurs. In the past, driving and proving the lamp in relay-based systems was typically done via signaling relays. Today, lamp proving is done by comparing the current values read over the return circuit with lower and upper threshold values. The aim is an analog electronic object controller that can be easily integrated with vital systems and with the signal lamp itself. During the study, the EN 50126 standard approach was considered, and the concept, definition, risk analysis, requirements, architecture, design, and prototyping were performed throughout this study. FMEA (Failure Modes and Effects Analysis) and FTA (Fault Tree Analysis) were used for the safety analysis in accordance with EN 50129. Based on these analyses, a 1oo2D reactive fail-safe hardware design for the controller was investigated. Electromagnetic compatibility (EMC) effects on the functional safety of the equipment, insulation coordination, and over-voltage protection were addressed during hardware design according to the EN 50124 and EN 50122 standards. As vital equipment for railway signaling, railway signal object controllers should be developed according to the EN 50126 and EN 50129 standards, which identify the steps and requirements of development in accordance with the SIL 4 (Safety Integrity Level) target. In conclusion, an analog railway signal object controller was developed: commands from the interlocking system are processed in driver cards, which adjust the voltage level to the desired visibility by means of semiconductors. In addition, prover cards evaluate the current against the upper and lower thresholds. The evaluated values are processed by logic gates arranged as 1oo2D using analog electronic technologies. This logic evaluates the voltage level of the lamp and mitigates the risks of undue dimming.Keywords: object controller, railway electronic, analog electronic, safety, railway signal
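The 1oo2D proving principle can be illustrated behaviourally as follows: two independent channels each compare the measured return-circuit current with lower and upper thresholds, and the output is only reported healthy when both channels agree; otherwise the diagnostic path drives the safe state. The thresholds and current values in the sketch are invented for illustration and do not describe the actual analog hardware.

```python
# Sketch: 1oo2D lamp-proving logic. Two channels check the return-circuit current against
# lower/upper thresholds; disagreement or out-of-range readings lead to a fault (safe) state.
from dataclasses import dataclass

@dataclass
class Channel:
    lower_ma: float
    upper_ma: float

    def prove(self, current_ma: float) -> bool:
        return self.lower_ma <= current_ma <= self.upper_ma

def one_out_of_two_d(ch_a: Channel, ch_b: Channel, i_a: float, i_b: float) -> str:
    ok_a, ok_b = ch_a.prove(i_a), ch_b.prove(i_b)
    if ok_a and ok_b:
        return "LAMP_PROVEN"         # both channels agree: lamp current within limits
    if ok_a != ok_b:
        return "FAULT_DIVERGENT"     # diagnostic (D) path: channel disagreement -> safe state
    return "FAULT_LAMP"              # both channels see an out-of-range current

ch_a = Channel(lower_ma=80.0, upper_ma=220.0)
ch_b = Channel(lower_ma=80.0, upper_ma=220.0)
print(one_out_of_two_d(ch_a, ch_b, 150.0, 151.0))   # LAMP_PROVEN
print(one_out_of_two_d(ch_a, ch_b, 150.0, 20.0))    # FAULT_DIVERGENT
print(one_out_of_two_d(ch_a, ch_b, 10.0, 12.0))     # FAULT_LAMP
```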
Procedia PDF Downloads 99103 Root Cause Analysis of a Catastrophically Failed Output Pin Bush Coupling of a Raw Material Conveyor Belt
Authors: Kaushal Kishore, Suman Mukhopadhyay, Susovan Das, Manashi Adhikary, Sandip Bhattacharyya
Abstract:
In integrated steel plants, conveyor belts are widely used for transferring raw materials from one location to another. An output pin bush coupling attached to a conveyor transferring iron ore fines and fluxes failed after two years of service life. This led to an operational delay of approximately 15 hours. This study is focused on the failure analysis of the coupling and on recommending countermeasures to prevent any such failures in the future. The investigation consisted of careful visual observation, checking of operating parameters, stress calculation and analysis, macro- and micro-fractography, material characterization (chemical and metallurgical analysis), and tensile and impact testing. The fracture originated at an unusually sharp double step. There were multiple corrosion pits near the step that aggravated the situation. The inner contact surface of the coupling revealed differential abrasion that created a macroscopic difference in the height of the component. This pointed towards misalignment of the coupling beyond a threshold limit. In addition to these design and installation issues, the material of the coupling did not meet the quality standards. It was made of grey cast iron with a graphite morphology intermediate between a random distribution (Type A) and a rosette pattern (Type B). This manifested as a marked reduction in the impact toughness and tensile strength of the component. These findings corroborated well with the brittle mode of fracture that might have occurred under minor impact loading while the conveyor belt was being loaded with raw materials from a height. A simulated study was conducted to examine the effect of corrosion pits on the tensile and impact toughness of grey cast iron. It was observed that pitting marginally reduced tensile strength and ductility; however, there was a marked (up to 45%) reduction in impact toughness due to pitting. Thus, it became evident that the failure of the coupling occurred due to a combination of factors: inferior material, misalignment, poor step design and corrosion pitting. Recommendations for life enhancement of the coupling included the use of the tougher SG 500/7 grade, incorporation of a proper fillet radius at the step, correction of the alignment, and application of a corrosion-resistant organic coating to prevent pitting.Keywords: brittle fracture, cast iron, coupling, double step, pitting, simulated impact tests
Procedia PDF Downloads 132102 A Novel Concept of Optical Immunosensor Based on High-Affinity Recombinant Protein Binders for Tailored Target-Specific Detection
Authors: Alena Semeradtova, Marcel Stofik, Lucie Mareckova, Petr Maly, Ondrej Stanek, Jan Maly
Abstract:
Recently, novel strategies based on so-called molecular evolution have been shown to be effective for the production of various peptide ligand libraries with affinities to molecular targets of interest that are comparable to, or even better than, those of monoclonal antibodies. The major advantage of these peptide scaffolds is their low molecular weight and simple structure. This study describes a new immunosensor based on high-affinity binding molecules, using a simple optical system for human serum albumin (HSA) detection as a model. We present a comparison of two variants of recombinant binders based on the albumin-binding domain of protein G (ABD), performed on a micropatterned glass chip. Binding domains may be tailored to any specific target of interest by molecular evolution. Micropatterned glass chips were prepared using UV photolithography on chromium-sputtered glasses. The glass surface was modified with (3-aminopropyl)triethoxysilane and biotin-PEG-acid using EDC/NHS chemistry. Two variants of high-affinity binding molecules were used to detect the target molecule. The first variant is based on the ABD domain fused with the TolA chain; this molecule is biotinylated in vivo, and each molecule contains one biotin and one ABD domain. The second variant is based on a streptavidin molecule and contains four biotin-binding sites and four ABD domains. These high-affinity molecules were immobilized on the chip surface via biotin-streptavidin chemistry. To eliminate nonspecific binding, 1% bovine serum albumin (BSA) or 6% fetal bovine serum (FBS) was used in every step. For both variants, the range of measured concentrations of fluorescently labelled HSA was 0-30 µg/ml. As a control, we performed a simultaneous assay without the high-affinity binding molecules. The fluorescent signal was measured using an inverted fluorescence microscope (Olympus IX 70) with a CoolLED pE-4000 as the light source, the related filters, and a Retiga 2000R camera as the detector. The fluorescent signal from non-modified areas was subtracted from the signal of the fluorescent areas. Results are presented in graphs showing the dependence of the measured grayscale value on the log-scale HSA concentration. For the TolA variant, the limit of detection (LOD) of the optical immunosensor proposed in this study is calculated to be 0.20 µg/ml for HSA detection in 1% BSA and 0.24 µg/ml in 6% FBS. In the case of the streptavidin-based molecule, it was 0.04 µg/ml and 0.07 µg/ml, respectively. The dynamic range of the immunosensor could only be estimated for the TolA variant and was calculated to be 0.49-3.75 µg/ml and 0.73-1.88 µg/ml, respectively. In the case of the streptavidin-based variant, we did not reach surface saturation even at a concentration of 480 µg/ml, so the upper value of the dynamic range was not estimated; the lower value was calculated to be 0.14 µg/ml and 0.17 µg/ml, respectively. Based on the obtained results, it is clear that both variants are useful for creating the bio-recognition layer of immunosensors. For this particular system, the streptavidin-based variant is clearly more useful for biosensing on planar glass surfaces: immunosensors based on this variant would exhibit a better limit of detection and a wider dynamic range.Keywords: high affinity binding molecules, human serum albumin, optical immunosensor, protein G, UV-photolithography
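The abstract does not state how the LOD and dynamic range were derived; one common approach is sketched below, fitting a four-parameter logistic calibration curve and taking the LOD as the concentration whose signal equals the blank plus three standard deviations. The curve model, the LOD criterion and the data points are assumptions, not the measured grayscale values.

```python
# Sketch: 4-parameter logistic fit of signal vs. HSA concentration, with the LOD taken as
# the concentration at signal = blank + 3*SD(blank). Model, criterion and data are assumed.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(c, bottom, top, ec50, hill):
    return bottom + (top - bottom) / (1.0 + (ec50 / np.maximum(c, 1e-9)) ** hill)

conc = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0, 30.0])        # ug/ml
signal = np.array([6, 8, 14, 25, 45, 80, 110, 128, 135], dtype=float)     # grayscale units
blank_mean, blank_sd = 5.0, 0.8                                            # replicate blanks

params, _ = curve_fit(four_pl, conc, signal, p0=[5.0, 140.0, 1.0, 1.0], maxfev=10000)
bottom, top, ec50, hill = params

lod_signal = blank_mean + 3.0 * blank_sd
lod = ec50 / ((top - bottom) / (lod_signal - bottom) - 1.0) ** (1.0 / hill)  # invert the 4PL
print(f"Estimated LOD = {lod:.2f} ug/ml")
```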
Procedia PDF Downloads 367101 Changes in Heavy Metals Bioavailability in Manure-Derived Digestates and Subsequent Hydrochars to Be Used as Soil Amendments
Authors: Hellen L. De Castro e Silva, Ana A. Robles Aguilar, Erik Meers
Abstract:
Digestates are residual by-products, rich in nutrients and trace elements, which can be used as organic fertilisers on soils. However, because these elements are not digested and dry matter is reduced during the anaerobic digestion process, metal concentrations are higher in digestates than in the feedstocks, which might hamper their use as fertilisers under the threshold values of some national policies. Furthermore, there is uncertainty regarding the amount of these elements assimilated by some crops, which might result in their bioaccumulation. Therefore, further processing of the digestate to obtain safe fertilising products has been recommended. This research aims to analyse the effect of applying hydrothermal carbonization to manure-derived digestates as a thermal treatment to reduce the bioavailability of heavy metals in mono- and co-digestates derived from pig manure and maize grown on contaminated land in France. This study examined pig manure collected from a novel stable system (VeDoWs, province of East Flanders, Belgium) that separates the collection of pig urine and feces, resulting in a solid fraction of manure with a high up-concentration of heavy metals and nutrients. Mono-digestion and co-digestion were conducted in semi-continuous reactors for 45 days under mesophilic conditions, after which the digestates were dried at 105 °C for 24 hours. Hydrothermal carbonization was then applied at a 1:10 solid/water ratio, to guarantee controlled experimental conditions, at different temperatures (180, 200, and 220 °C) and residence times (2 h and 4 h). During the process, the pressure was generated autogenously, and the reactor was cooled down after completing each treatment. The solid and liquid phases were separated by vacuum filtration, and the solid phase of each treatment (the hydrochar) was dried and ground for chemical characterization. Different fractions (exchangeable/adsorbed fraction, F1; carbonate-bound fraction, F2; organic matter-bound fraction, F3; and residual fraction, F4) of selected heavy metals (Cd, Cr, Ni, and Cr) were determined in the digestates and derived hydrochars using the modified Community Bureau of Reference (BCR) sequential extraction procedure. The main results indicated a difference in heavy metal fractionation between the digestates and their derived hydrochars; however, the hydrothermal carbonization operating conditions did not have a remarkable effect on heavy metal partitioning among the hydrochars of the proposed treatments. Based on the estimated potential ecological risk assessment, there was a one-level decrease in risk (from considerable to moderate) when comparing the heavy metal partitioning in the digestates and the derived hydrochars.Keywords: heavy metals, bioavailability, hydrothermal treatment, bio-based fertilisers, agriculture
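The abstract does not name the specific risk index; as one concrete illustration of how fraction data translate into a risk class, the sketch below computes the widely used risk assessment code (RAC), taken here as the mobile share (F1 + F2) of the total extracted metal, with concentrations and class boundaries assumed for illustration rather than taken from the study.

```python
# Sketch: risk assessment code (RAC) from BCR-style fractions, with RAC taken as the mobile
# share (F1 + F2) of the total. Concentrations (mg/kg) and class limits are assumed values.
def rac(f1: float, f2: float, f3: float, f4: float) -> float:
    total = f1 + f2 + f3 + f4
    return 100.0 * (f1 + f2) / total if total else 0.0

def rac_class(value: float) -> str:
    for limit, label in [(1.0, "no risk"), (10.0, "low"), (30.0, "medium"), (50.0, "high")]:
        if value <= limit:
            return label
    return "very high"

samples = {                         # (F1, F2, F3, F4) in mg/kg, illustrative only
    "digestate Cd": (0.8, 1.1, 2.0, 1.5),
    "hydrochar Cd": (0.3, 0.6, 2.4, 2.1),
}
for name, fractions in samples.items():
    value = rac(*fractions)
    print(f"{name}: RAC = {value:.1f}% ({rac_class(value)})")
```

Under such a scheme, a shift of metal mass from the exchangeable and carbonate-bound fractions towards the organic and residual fractions after hydrothermal carbonization shows up directly as a lower risk class, mirroring the one-level decrease reported above.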
Procedia PDF Downloads 100