Search results for: water conveyance structure
162 Numerical Analysis of the Computational Fluid Dynamics of Co-Digestion in a Large-Scale Continuous Stirred Tank Reactor
Authors: Sylvana A. Vega, Cesar E. Huilinir, Carlos J. Gonzalez
Abstract:
Co-digestion in anaerobic biodigesters is a technology that improves hydrolysis and increases methane generation. In the present study, three-dimensional computational fluid dynamics (CFD) simulations of agitation in a full-scale Continuous Stirred Tank Reactor (CSTR) biodigester during the co-digestion process are carried out using Ansys Fluent. For this, a rheological study of the substrate is performed, establishing stirrer rotation speeds according to microbial activity and energy ranges. The substrate is organic waste from industrial sources: sanitary water, butcher, fishmonger, and dairy. The rheological behavior curves show that the substrate is a non-Newtonian fluid of the pseudoplastic type, with a solids content of 12%. The simulation incorporates the rheological results, and the full-scale CSTR biodigester is modeled by coupling the continuity equation, the three-dimensional Navier-Stokes equations, the power-law model for non-Newtonian fluids, and three turbulence models: k-ε RNG, k-ε Realizable, and RSM (Reynolds Stress Model), for a 45° tilted-vane impeller. The simulation runs for three minutes, since the aim is to study intermittent mixing with a benefit in energy savings. The results show that the absolute errors of the power number for the k-ε RNG, k-ε Realizable, and RSM models were 7.62%, 1.85%, and 5.05%, respectively, relative to the power numbers obtained from Nagata's analytical-experimental equation. The generalized Reynolds number indicates that the fluid dynamics lie in a transition-turbulent flow regime. Concerning the Froude number, the result indicates there is no need to implement baffles in the biodigester design, and the power number shows a steady trend close to 1.5.
It is observed that the design velocities within the biodigester are approximately 0.1 m/s, which are suitable for the microbial community, allowing it to coexist and feed on the substrate in co-digestion. It is concluded that the model that most accurately predicts the fluid dynamics within the reactor is the k-ε Realizable model. The flow paths obtained are consistent with the referenced literature, where the 45° pitched-blade turbine (PBT) impeller is the right type of agitator to keep particles in suspension and, in turn, increase the dispersion of gas in the liquid phase. If a 24/7 complete mix under stirred agitation is considered, with a plant factor of 80%, an energy consumption of 51,840 kWh/year is estimated. In contrast, intermittent agitation of 3 min every 15 min under the same design conditions reduces energy costs by almost 80%. This is a feasible approach to predicting the energy expenditure of an anaerobic CSTR biodigester. It is recommended to use high mixing intensities at the beginning and end of the joint acetogenesis/methanogenesis phase. The high intensity at the beginning activates the bacteria, and another increase in mixing at the end of the Hydraulic Retention Time period favors the final dispersion of biogas that may be trapped at the biodigester bottom.
Keywords: anaerobic co-digestion, computational fluid dynamics, CFD, net power, organic waste
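The energy comparison reported above can be reproduced with simple arithmetic. The sketch below back-calculates the stirrer power implied by the 51,840 kWh/year figure and applies the 3-min-in-15-min duty cycle; the implied power value is an inference from the reported numbers, not a figure stated in the study.

```python
# Sketch: continuous (24/7) vs intermittent (3 min every 15 min) agitation,
# using the annual figures reported in the abstract.

PLANT_FACTOR = 0.8              # fraction of the year the plant operates
HOURS_PER_YEAR = 8760
ANNUAL_CONTINUOUS_KWH = 51840   # reported consumption for 24/7 mixing

# Implied shaft power of the stirrer (kW) -- an inference, not a study value
implied_power_kw = ANNUAL_CONTINUOUS_KWH / (HOURS_PER_YEAR * PLANT_FACTOR)

# Intermittent schedule: 3 min of mixing every 15 min -> 20% duty cycle
duty_cycle = 3 / 15
annual_intermittent_kwh = ANNUAL_CONTINUOUS_KWH * duty_cycle
savings_pct = (1 - duty_cycle) * 100

print(f"Implied mixer power: {implied_power_kw:.2f} kW")
print(f"Intermittent consumption: {annual_intermittent_kwh:.0f} kWh/year")
print(f"Energy savings: {savings_pct:.0f}%")
```

The 20% duty cycle maps directly onto the "almost 80%" cost reduction quoted in the abstract, which is why the savings figure falls out of the schedule alone.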
Procedia PDF Downloads 115
161 Learning from Dendrites: Improving the Point Neuron Model
Authors: Alexander Vandesompele, Joni Dambre
Abstract:
The diversity in dendritic arborization, as first illustrated by Santiago Ramón y Cajal, has always suggested a role for dendrites in the functionality of neurons. In the past decades, thanks to new recording techniques and optical stimulation methods, it has become clear that dendrites are not merely passive electrical components. They are observed to integrate inputs in a non-linear fashion and to actively participate in computations. Regardless, in simulations of neural networks, dendritic structure and functionality are often overlooked. Especially in a machine learning context, when designing artificial neural networks, point neuron models such as the leaky integrate-and-fire (LIF) model are dominant. These models mimic the integration of inputs at the neuron soma and ignore the existence of dendrites. In this work, the LIF point neuron model is extended with a simple form of dendritic computation. This gives the LIF neuron an increased capacity to discriminate spatiotemporal input sequences, a dendritic functionality observed in other studies. Simulations of the spiking neurons are performed using the BindsNET framework. In the common LIF model, incoming synapses are independent. Here, we introduce a dependency between incoming synapses such that the post-synaptic impact of a spike is determined not only by the weight of the synapse but also by the activity of other synapses. This is a form of short-term plasticity in which synapses are potentiated or depressed by the preceding activity of neighbouring synapses, and a straightforward way to prevent inputs from simply summing linearly at the soma. To implement this, each pair of synapses on a neuron is assigned a variable representing the synaptic relation, which determines the magnitude of the short-term plasticity. These variables can be chosen randomly or, more interestingly, can be learned using a form of Hebbian learning.
We use spike-timing-dependent plasticity (STDP), commonly used to learn synaptic strength magnitudes. If all neurons in a layer receive the same input, they tend to learn the same pattern through STDP. Adding inhibitory connections between the neurons creates a winner-take-all (WTA) network, which causes the different neurons to learn different input sequences. To illustrate the impact of the proposed dendritic mechanism, even without learning, we attach five input neurons to two output neurons. One output neuron is a regular LIF neuron; the other is a LIF neuron with dendritic relationships. The five input neurons are then allowed to fire in a particular order. The membrane potentials are reset, and subsequently the five input neurons are fired in the reversed order. As the regular LIF neuron linearly integrates its inputs at the soma, the membrane potential response to both sequences is similar in magnitude. In the other output neuron, due to the dendritic mechanism, the membrane potential response is different for the two sequences. Hence, the dendritic mechanism improves the neuron's capacity for discriminating spatiotemporal sequences. Dendritic computations improve LIF neurons even if the relationships between synapses are established randomly. Ideally, however, a learning rule is used to improve the dendritic relationships based on input data. Just as it is possible to learn synaptic strengths with STDP, to make a neuron more sensitive to its input, it is possible to learn dendritic relationships with STDP, to make the neuron more sensitive to spatiotemporal input sequences. Feeding structured data to a WTA network with dendritic computation leads to a significantly higher number of discriminated input patterns. Without the dendritic computation, output neurons are less specific and may, for instance, be activated by a sequence in reverse order.
Keywords: dendritic computation, spiking neural networks, point neuron model
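The five-inputs-two-outputs experiment can be illustrated with a toy re-implementation. This is not the BindsNET code used in the study: the time constants, the exponential activity traces, and the upper-triangular relation matrix are all illustrative choices that merely reproduce the qualitative effect (order-insensitive regular LIF, order-sensitive dendritic LIF).

```python
import math

def lif_response(spike_times, weights, relations=None, tau_m=20.0, tau_r=10.0):
    """Peak membrane potential of a leaky integrator receiving one spike
    per synapse. If `relations` is given, relations[i][j] scales the
    impact of a spike at synapse j by the recent activity of synapse i
    (a toy stand-in for the pairwise dendritic mechanism in the text)."""
    n = len(weights)
    v, v_peak = 0.0, 0.0
    trace = [0.0] * n                 # decaying activity trace per synapse
    dt = 1.0                          # ms
    for step in range(100):
        t = step * dt
        v *= math.exp(-dt / tau_m)    # membrane leak
        trace = [x * math.exp(-dt / tau_r) for x in trace]
        for j, ts in enumerate(spike_times):
            if abs(t - ts) < dt / 2:  # synapse j spikes now
                gain = 1.0
                if relations is not None:
                    # potentiation by preceding neighbouring activity
                    gain += sum(relations[i][j] * trace[i] for i in range(n))
                v += weights[j] * gain
                trace[j] += 1.0
        v_peak = max(v_peak, v)
    return v_peak

n = 5
w = [1.0] * n
# synapse i potentiates any later-indexed synapse j (hypothetical choice)
rel = [[0.5 if i < j else 0.0 for j in range(n)] for i in range(n)]
forward = [i * 5.0 for i in range(n)]   # neurons fire in order, 5 ms apart
reverse = forward[::-1]                  # same spikes, reversed order

peak_fwd_reg = lif_response(forward, w)
peak_rev_reg = lif_response(reverse, w)
peak_fwd_den = lif_response(forward, w, rel)
peak_rev_den = lif_response(reverse, w, rel)
```

Because the spike-time multiset is identical in both orders, the regular LIF peak is exactly the same for the two sequences, while the relation matrix makes the dendritic neuron respond more strongly to the forward order.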
Procedia PDF Downloads 133
160 The Optimization of Topical Antineoplastic Therapy Using Controlled Release Systems Based on Amino-Functionalized Mesoporous Silica
Authors: Lacramioara Ochiuz, Aurelia Vasile, Iulian Stoleriu, Cristina Ghiciuc, Maria Ignat
Abstract:
Topical administration of chemotherapeutic agents (e.g., carmustine, bexarotene, mechlorethamine) in the local treatment of cutaneous T-cell lymphoma (CTCL) is accompanied by multiple side effects, such as contact hypersensitivity, pruritus, skin atrophy, or even secondary malignancies. A known method of reducing the side effects of anticancer agents is the development of modified drug release systems using drug encapsulation in biocompatible nanoporous inorganic matrices, such as mesoporous MCM-41 silica. Mesoporous MCM-41 silica is characterized by a large specific surface area, high pore volume, uniform porosity, stable dispersion in aqueous medium, excellent biocompatibility, in vivo biodegradability, and the capacity to be functionalized with different organic groups. Therefore, MCM-41 is an attractive candidate for a wide range of biomedical applications, such as controlled drug release, bone regeneration, and the immobilization of proteins and enzymes. The main advantage of this material lies in its ability to host a large amount of the active substance in a uniform pore system with size adjustable in the mesoscopic range. Silanol groups allow controlled surface functionalization, which in turn controls drug loading and release. This study covers (i) the optimization of amino grafting of the mesoporous MCM-41 silica matrix by co-condensation during synthesis and by post-synthesis grafting with APTES (3-aminopropyltriethoxysilane); (ii) loading the therapeutic agent (carmustine) to obtain modified drug release systems; (iii) determining the profile of in vitro carmustine release from these systems; and (iv) assessing carmustine release kinetics by fitting four mathematical models. The obtained powders were characterized in terms of structure, texture, and morphology, and by thermogravimetric analysis. The concentration of the therapeutic agent in the dissolution medium was determined by an HPLC method. In vitro dissolution tests were performed using an Enhancer cell over a 12-hour interval.
The kinetics of carmustine release from the mesoporous systems was analyzed by fitting to the zero-order, first-order, Higuchi, and Korsmeyer-Peppas models, respectively. Results showed that both types of highly ordered mesoporous silica (amino-grafted by co-condensation or post-synthesis) are thermally stable in aqueous medium. Regarding the degree and efficiency of loading with the therapeutic agent, an increase of around 10% was noticed when the co-condensation method was applied. This result shows that direct co-condensation leads to an even distribution of amino groups on the pore walls, whereas in post-synthesis grafting many amino groups are concentrated near the pore openings and/or on the external surface. In vitro dissolution tests showed an extended carmustine release (more than 86% m/m) both from systems based on silica functionalized directly by co-condensation and from those functionalized after synthesis. Assessment of the release kinetics revealed diffusion-controlled release from all studied systems, as indicated by the fit to the Higuchi model. The results of this study proved that amino-functionalized mesoporous silica may be used as a matrix for optimizing topical anti-cancer therapy by loading carmustine and developing prolonged-release systems.
Keywords: carmustine, silica, controlled release
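The model-fitting step described above can be sketched as follows. The release data here are synthetic placeholders (deliberately chosen to follow square-root-of-time kinetics), not the study's measurements, and the first-order model is omitted because it requires nonlinear fitting.

```python
import math

# Hypothetical cumulative-release data (% released vs. hours), illustrative only
t = [0.5, 1, 2, 4, 6, 8, 10, 12]
q = [18, 25, 35, 50, 61, 70, 78, 86]

def linfit(x, y):
    """Ordinary least squares for y = a*x + b; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

def r_squared(y, y_hat):
    mean = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    ss_tot = sum((a - mean) ** 2 for a in y)
    return 1 - ss_res / ss_tot

models = {}
# Zero-order: Q = k*t  (fit directly against time)
a, b = linfit(t, q)
models["zero-order"] = r_squared(q, [a * ti + b for ti in t])
# Higuchi: Q = k*sqrt(t)  (linear in sqrt(t))
a, b = linfit([math.sqrt(ti) for ti in t], q)
models["Higuchi"] = r_squared(q, [a * math.sqrt(ti) + b for ti in t])
# Korsmeyer-Peppas: Q = k*t^n  (linear on log-log axes)
n_kp, b = linfit([math.log(ti) for ti in t], [math.log(qi) for qi in q])
models["Korsmeyer-Peppas"] = r_squared(q, [math.exp(b) * ti ** n_kp for ti in t])

for name, r2 in models.items():
    print(f"{name}: R^2 = {r2:.4f}")
print(f"Korsmeyer-Peppas release exponent n = {n_kp:.2f}")
```

On data of this shape the Higuchi fit dominates the zero-order fit, and the Korsmeyer-Peppas exponent comes out near 0.5, which is the signature of Fickian diffusion that the abstract reports.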
Procedia PDF Downloads 264
159 Explanation of the Main Components of the Unsustainability of Cooperative Institutions in Cooperative Management Projects to Combat Desertification in South Khorasan Province
Authors: Yaser Ghasemi Aryan, Firoozeh Moghiminejad, Mohammadreza Shahraki
Abstract:
Background: The cooperative institution is considered the first and most essential pillar of strengthening social capital, whose sustainability is the main guarantee of survival and continued participation of local communities in natural resource management projects. The Village Development Group and the Microcredit Fund are two important social and economic institutions in the implementation of the International Project for the Restoration of Degraded Forest Lands (RFLDL) in Sarayan City, South Khorasan Province. Positive lessons learned from the participation of the beneficiaries have made the project's anti-desertification measures more effective. However, the low activity or liquidation of some of these institutions has become an important challenge and concern for the project's executive experts. The current research was carried out with the aim of explaining the main components of the instability of these institutions. Materials and Methods: This research is descriptive-analytical in method and applied in purpose, and information was collected through both documentary and survey methods. The statistical population of the research included all the members of the village development groups and microcredit funds in the target villages of the RFLDL project of Sarayan city; the statistical sample was selected on the basis of the Cochran formula, cross-checked against the Krejcie and Morgan table. After validity was confirmed by expert opinion, the reliability of the researcher-made questionnaire was calculated as 0.83, which is appropriate. Data analysis was done using SPSS software. Results: The identified obstacles to the stability of social and economic networks were classified and prioritized into five groups: socio-cultural, economic, administrative, educational-promotional, and policy-management factors.
Accordingly, among the socio-cultural factors, the items 'not paying attention to the structural characteristics and composition of groups', 'lack of commitment and moral responsibility in some members of the group', and 'lack of a clear pattern for the preservation and survival of groups' had the highest priority; among the administrative factors, 'irregularity in holding group meetings' and 'irregular attendance of members at meetings'; among the economic factors, 'small financial capital of the fund', 'the low amount of the fund's loans', and 'the fund's inability to conclude contracts and attract capital from other sources'; among the educational-promotional factors, 'job training not coinciding with the granting of loans to create jobs' and 'insufficient training for the effective use of loans and job creation'; and among the policy-management factors, 'failure to provide government facilities to support the funds'. Conclusion: In general, the results of this research show that policy-management factors and social factors, especially the structure and composition of social and economic institutions, are the most important obstacles to their sustainability. Therefore, it is suggested that cooperative institutions be formed on the basis of network analysis studies in order to achieve an appropriate composition of members.
Keywords: cooperative institution, social capital, network analysis, participation, Sarayan
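The sampling step above relies on Cochran's formula cross-checked against the Krejcie and Morgan table. A minimal sketch, with a hypothetical population size since the abstract does not report one:

```python
import math

def cochran_sample_size(N, z=1.96, p=0.5, e=0.05):
    """Cochran's sample-size formula with finite-population correction.

    N : population size (here: all members of the village development
        groups and microcredit funds; the value used below is hypothetical)
    z : z-score for the confidence level (1.96 -> 95%)
    p : estimated proportion (0.5 maximizes the required sample)
    e : margin of error
    """
    n0 = (z ** 2) * p * (1 - p) / e ** 2       # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / N)                # finite-population correction
    return math.ceil(n)

# Illustrative run for an assumed population of 500 members
print(cochran_sample_size(500))
```

For a population of 500 this yields a sample of about 218, essentially the 217 listed by Krejcie and Morgan for the same parameters, which is why the two are commonly used as mutual checks.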
Procedia PDF Downloads 55
158 Mangroves in the Douala Area, Cameroon: The Challenges of Open Access Resources for Forest Governance
Authors: Bissonnette Jean-François, Dossa Fabrice
Abstract:
The project focuses on analyzing the spatial and temporal evolution of mangrove forest ecosystems near the city of Douala, Cameroon, in response to increasing human and environmental pressures. The selected study area, located in the Wouri River estuary, combines economic importance with ecological prominence. The study gained valuable insights from semi-structured interviews with resource operators and local officials. Socio-economic data, farmer surveys, and satellite-derived information were analyzed quantitatively in Excel and SPSS, while qualitative data were rigorously classified and correlated with the other sources. ArcGIS and CorelDraw were used to visualize the gradual changes in the various land cover classifications. The research reveals the complex processes that characterize the mangrove ecosystems of Manoka and Cape Cameroon Islands. Unregulated urbanization and the continuous growth of infrastructure have led to a significant increase in land conversion, with negative impacts on natural landscapes and forests. Repeated flooding and coastal erosion have further shaped landscape alterations, fostering the proliferation of water and mudflat areas. The unregulated use of mangrove resources is a significant factor in the degradation of these ecosystems. Activities including the use of wood for smoking fish and for fishing, together with the coastal pollution resulting from the absence of waste collection, have had a significant influence. In addition, forest operators contribute to the degradation of vegetation, exacerbating the harmful impact of invasive species on the ecosystem. Strategic interventions are necessary to guarantee the sustainable management of these ecosystems.
The proposals include advocating sustainable wood exploitation using appropriate techniques, along with regeneration, and enforcing rules to prevent the overexploitation of wood. By implementing these measures, the ecological balance can be preserved, safeguarding the long-term viability of these precious ecosystems. On a conceptual level, this paper uses the framework developed by Elinor Ostrom and her colleagues to investigate the consequences of open access resources, where local actors have not been able to enforce measures to prevent overexploitation of mangrove wood resources. Governmental authorities have demonstrated limited capacity to enforce sustainable management of wood resources and have not been able to establish effective relationships with the local fishing communities or with the communities involved in the purchase of wood. As a result, wood resources in the mangrove areas remain largely open to access, while authorities monitor neither the volumes of wood extracted nor the methods of exploitation. There have been only limited, one-off attempts at forest restoration, with no significant consequence for mangrove forest dynamics.
Keywords: mangroves, forest management, governance, open access resources, Cameroon
Procedia PDF Downloads 63
157 The Perspective of British Politicians on English Identity: Qualitative Study of Parliamentary Debates, Blogs, and Interviews
Authors: Victoria Crynes
Abstract:
The question of England's role in Britain is increasingly relevant due to the ongoing rise in citizens identifying as English. Furthermore, the Brexit referendum was predominantly supported by constituents identifying as English. Few politicians appear to comprehend how Englishness is politically manifested. Politics and the media have depicted English identity as a negative and extremist problem, an inaccurate representation that ignores the breadth of English-identifying citizens. This environment prompts the question, 'How are British politicians addressing the modern English identity question?' Parliamentary debates, political blogs, and interviews are synthesized to establish a more coherent understanding of current political attitudes towards English identity, the perceived nature of English identity, and the political manifestation of English representation and governance. The parliamentary debates analyzed addressed the democratic structure of English governance through topics such as English votes for English laws, devolution, and the union. The blogs examined include party-based, multi-author blogs and independently authored blogs by politicians, which provide a dynamic and up-to-date representation of party and politician viewpoints. Lastly, fourteen semi-structured interviews with British politicians provide a nuanced perspective on how politicians conceptualize Englishness. Interviewee selection was based on three criteria: (i) Members of Parliament (MPs) known for discussing English identity politics, (ii) MPs of strongly English-identifying constituencies, and (iii) MPs with minimal English identity affiliation. Analysis of the parliamentary debates reveals that the discussion of English representation has gained little momentum. Many politicians fail to comprehend who the English are or why they desire greater representation, and believe that increased recognition of the English would disrupt the unity of the UK.
These debates highlight the disconnect of parliament from disenfranchised English towns. A failure to recognize the legitimacy of English identity politics prevents solution-focused debate. The political blogs demonstrate cross-party recognition of growing English disenfranchisement. Dissatisfaction with British politics derives from multiple factors, including economic decline, shifting community structures, and the delay of Brexit. The left-behind communities have seen little response from Westminster, which is often contrasted with the devolved and louder voices of the other UK nations. Many blogs recognize the need for a political response to the English and lament the lack of party-level initiatives. In comparison, the interviews depict an array of local-level initiatives reconnecting MPs with community members. Local efforts include town trips to Westminster, multicultural cooking classes, and English language courses. These efforts begin to rebuild positive local narratives, promote engagement across community sectors, and acknowledge English voices. These interviewees called for large-scale political action. Meanwhile, several interviewees denied the salience of English identity; for them, the term held only extremist narratives. The multi-level analysis reveals continued uncertainty about Englishness within British politics, contrasted with politicians' increased recognition of its salience. It is paramount that politicians increase discussion of English identity politics to avoid further alienation of English citizens and to rebuild trust in the abilities of Westminster.
Keywords: British politics, contemporary identity politics and its impacts, English identity, English nationalism, identity politics
Procedia PDF Downloads 114
156 Evaluation of Polymerisation Shrinkage of Randomly Oriented Micro-Sized Fibre Reinforced Dental Composites Using Fibre-Bragg Grating Sensors and Their Correlation with Degree of Conversion
Authors: Sonam Behl, Raju, Ginu Rajan, Paul Farrar, B. Gangadhara Prusty
Abstract:
Reinforcing dental composites with micro-sized fibres can significantly improve their physio-mechanical properties. Short fibres can be oriented randomly within dental composites, providing quasi-isotropic reinforcing efficiency, unlike unidirectional/bidirectional fibre reinforced composites, which enhance anisotropic properties. Thus, short-fibre reinforced dental composites are becoming popular among practitioners. However, despite their popularity, resin-based dental composites are prone to failure on account of shrinkage during photopolymerisation. The shrinkage in the structure may lead to marginal gap formation, causing secondary caries, thus ultimately inducing failure of the restoration. Traditional methods of evaluating polymerisation shrinkage using strain gauges, density-based measurements, dilatometry, or the bonded-disk technique focus on the average value of volumetric shrinkage, and their results are sensitive to specimen geometry. The present research aims to evaluate the real-time shrinkage strain at selected locations in the material with the help of optical fibre Bragg grating (FBG) sensors. Due to their miniature size (diameter 250 µm), FBG sensors can easily be embedded into small samples of dental composites. Furthermore, an FBG array in the system can map the real-time shrinkage strain at different regions of the composite. Real-time monitoring of shrinkage values may help to optimise the physio-mechanical properties of composites. Previously, FBG sensors have been shown to measure the polymerisation strains of anisotropic (unidirectional or bidirectional) reinforced dental composites correctly. However, very limited work exists to establish the validity of FBG-based sensors for evaluating the volumetric shrinkage of composites reinforced with randomly oriented fibres.
The present study aims to fill this research gap and is focussed on establishing the use of FBG-based sensors for evaluating the shrinkage of dental composites reinforced with randomly oriented fibres. Three groups of specimens were prepared by mixing the resin (80% UDMA/20% TEGDMA) with 55% of silane-treated BaAlSiO₂ particulate fillers, or by adding 5% of micro-sized fibres of diameter 5 µm and length 250/350 µm along with 50% of silane-treated BaAlSiO₂ particulate fillers into the resin. For measurement of polymerisation shrinkage strain, an array of three fibre Bragg grating sensors was embedded at a depth of 1 mm into a circular Teflon mould of diameter 15 mm and depth 2 mm. The results obtained are compared with those of the traditional density-based method for evaluating volumetric shrinkage. The degree of conversion was measured using FTIR spectroscopy (Spotlight 400 FT-IR from PerkinElmer). It is expected that the average polymerisation shrinkage strain values for dental composites reinforced with micro-sized fibres will correlate directly with the measured degree of conversion values, implying that greater conversion of C=C double bonds to C-C single bonds also leads to higher shrinkage strain within the composite. Moreover, it could be established that the photonics approach can assess the shrinkage at any point of interest in the material, suggesting that fibre Bragg grating sensors are a suitable means of measuring real-time polymerisation shrinkage strain for randomly fibre reinforced dental composites as well.
Keywords: dental composite, glass fibre, polymerisation shrinkage strain, fibre-Bragg grating sensors
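The conversion from a measured Bragg-wavelength shift to shrinkage strain follows the standard FBG strain-optic relation Δλ/λ₀ = (1 − pₑ)·ε. The sketch below assumes isothermal conditions and a typical effective photo-elastic coefficient for silica fibre (pₑ ≈ 0.22); neither the coefficient nor the example wavelengths are taken from the study.

```python
def fbg_strain(lam_0_nm, lam_nm, p_e=0.22):
    """Axial strain (in microstrain) from an FBG Bragg-wavelength shift.

    Uses the standard isothermal relation d(lambda)/lambda_0 = (1 - p_e) * eps.
    lam_0_nm : Bragg wavelength before cure (nm)
    lam_nm   : Bragg wavelength during/after cure (nm)
    p_e      : effective photo-elastic coefficient (~0.22 for silica, assumed)
    """
    return (lam_nm - lam_0_nm) / lam_0_nm / (1 - p_e) * 1e6

# Illustrative numbers (not from the study): a 1550 nm grating whose peak
# shifts down by 0.5 nm during cure indicates compressive shrinkage strain
strain_ue = fbg_strain(1550.0, 1549.5)
print(f"strain = {strain_ue:.1f} microstrain")
```

A negative result corresponds to the compressive strain imposed on the grating as the surrounding resin shrinks; in practice a temperature term must also be compensated, e.g. with a strain-free reference grating.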
Procedia PDF Downloads 154
155 Spatial Transformation of Heritage Area as the Impact of Tourism Activity (Case Study: Kauman Village, Surakarta City, Central Java, Indonesia)
Authors: Nafiah Solikhah Thoha
Abstract:
One area with the spatial character of a heritage area is Kauman village. Kauman village in the city of Surakarta, Central Java, Indonesia, was formed in 1757 by Paku Buwono III, king of the Kasunanan kingdom (Mataram Kingdom), for the kingdom's courtiers and the scholars of the madrasa. The spatial character of Kauman village was influenced by Islamic planning and the socio-cultural rules of the Kasunanan Kingdom. As in traditional settlements influenced by Islamic planning, the Grand Mosque binds the whole area together. The circulation pattern forms a labyrinthine network of narrow streets that end at the Grand Mosque. Outdoor space can be used for circulation, and social activity is dominated by movement on foot from one place to another. Dead ends (the fina/cul-de-sac) are generally passable only on foot, by bicycle, or by motorcycle, while the through streets (main and branch) can be traversed by motor vehicles. Kauman village has no public road that penetrates the area to serve as a link between one part of the outside world and another. The hierarchy of paths in Kauman village reflects how important each space is. At first, the women of Kauman made handmade batik for their own use. In 2005, many residents turned traditional batik into a commercial product, and the programme named 'Batik Tourism Village of Kauman' was developed. That programme has affected the spatial structure. This study aimed to explore the influence of the tourism programme on spatial transformations. The factors studied are the organization of space, circulation patterns, the hierarchy of space, and orientation, through a descriptive-evaluative approach. Based on the study, tourism activity engenders transformations at the scale of the area (macro), the residential block (meso), and the home (micro).
First, the Grand Mosque and the madrasa (religious school) act as binding zones, and the tangle of roads forming the structure of the area has developed into a link with the areas outside Kauman; the organization of space in the residences of batik entrepreneurs, at first purely residential, has developed into a combination of residence, batik workshop, and showroom. Second, the circulation pattern forms a labyrinthine network ending at the Grand Mosque. Third, the hierarchy of public space (the shari), semi-public space, and private space (the fina/cul-de-sac) no longer serves to shield women; it functions only as a hierarchy of circulation paths. Fourth, the orientation of building clusters no longer follows the qibla direction or an axis oriented to the cosmos but is influenced by the new showroom function, which requires orientation towards the main road. Kauman grew as an appropriate area for its community. During its development, the settlement's functions have changed according to community activities, especially economic activities. The new function of the area as a tourism area affects the spatial pattern of Kauman village. Spatial practice and activity, as local wisdom passed down for generations, carry a holistic meaning encompassing socio-cultural sustainability, economics, and the heritage area. By reviewing the local wisdom and the way of life of that society, we can learn how to apply the culture as education for the sustainability of the heritage area.
Keywords: impact of tourism, Kauman village, spatial transformation, sustainable heritage area
Procedia PDF Downloads 430
154 National Digital Soil Mapping Initiatives in Europe: A Review and Some Examples
Authors: Dominique Arrouays, Songchao Chen, Anne C. Richer-De-Forges
Abstract:
Soils are at the crossroads of many issues, such as food and water security, sustainable energy, climate change mitigation and adaptation, biodiversity protection, and human health and well-being. They deliver many ecosystem services that are essential to life on Earth, and so there is a growing demand for soil information at national and global scales. Unfortunately, many countries do not have detailed soil maps, and, where they exist, these maps are generally based on more or less complex and often non-harmonized soil classifications. An estimate of their uncertainty is also often missing. Thus, they are not easy to understand and often not properly used by end-users. There is therefore an urgent need to provide end-users with spatially exhaustive grids of essential soil properties, together with an estimate of their uncertainty. One way to achieve this is digital soil mapping (DSM). The concept of DSM relies on the hypothesis that soils and their properties are not randomly distributed but depend on the main soil-forming factors: climate, organisms, relief, parent material, time (age), and position in space. All these forming factors can be approximated using exhaustive spatial products such as climatic grids, remote sensing products or vegetation maps, digital elevation models, geological or lithological maps, the spatial coordinates of soil information, etc. Thus, DSM generally relies on models calibrated with existing observed soil data (point observations or maps) and so-called 'ancillary covariates' derived from other available spatial products. The model is then generalized over grids where the soil parameters are unknown in order to predict them, and the prediction performance is validated using various methods. With the growing demand for soil information at national and global scales and the increase in available spatial covariates, national and continental DSM initiatives are continuously multiplying.
This short review illustrates the main national and continental advances in Europe, the diversity of the approaches and databases used, the validation techniques, and the main scientific and other issues. Examples from several countries illustrate the variety of products delivered during the last ten years. The scientific production on this topic is continuously increasing, and new models and approaches are developed at an incredible speed. Most digital soil mapping (DSM) products rely mainly on machine learning (ML) prediction models and/or the use of pedotransfer functions (PTF), in which calibration data come from soil analyses performed in laboratories or from existing conventional maps. However, some scientific issues remain to be solved, as well as political and legal ones related, for instance, to data sharing and to differing laws in different countries. Other issues relate to communication with end-users and to education, especially on the use of uncertainty. Overall, progress is very substantial, and the willingness of institutes and countries to join their efforts is increasing. Harmonization issues remain, mainly due to differences in classifications or in laboratory standards between countries, but numerous initiatives are ongoing at the EU level and also at the global level. All this progress is scientifically stimulating and promises tools to improve and monitor soil quality at the country, EU, and global levels.
Keywords: digital soil mapping, global soil mapping, national and European initiatives, global soil mapping products, mini-review
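The calibrate-then-generalize workflow that DSM relies on can be sketched in a few lines. Everything below is synthetic and illustrative: the covariates (elevation, NDVI), the assumed soil-organic-carbon relationship, and the k-nearest-neighbour learner standing in for the random forests and other ML models typically used in national DSM products.

```python
import math
import random

random.seed(0)

def true_soc(elev, ndvi):
    # Hypothetical soil-forming relationship: SOC rises with vegetation
    # cover and falls with elevation (illustrative only)
    return 2.0 + 3.0 * ndvi - 0.001 * elev

# Exhaustive covariate "grid": every cell has elevation (m) and NDVI
grid = [(random.uniform(0, 1000), random.uniform(0, 1)) for _ in range(400)]

# Sparse calibration data: soil observed at 50 sampled cells
obs = [(e, n, true_soc(e, n)) for e, n in random.sample(grid, 50)]

def predict_knn(elev, ndvi, k=5):
    """k-NN prediction in covariate space (covariates rescaled so both
    contribute comparably to the distance)."""
    ranked = sorted(obs, key=lambda o: math.hypot((o[0] - elev) / 1000,
                                                  o[1] - ndvi))
    return sum(o[2] for o in ranked[:k]) / k

# Generalize the calibrated model over the whole grid
soc_map = [predict_knn(e, n) for e, n in grid]

# Validation step: RMSE against the (here known) true surface
rmse = math.sqrt(sum((p - true_soc(e, n)) ** 2
                     for p, (e, n) in zip(soc_map, grid)) / len(grid))
print(f"grid cells mapped: {len(soc_map)}, RMSE: {rmse:.3f}")
```

Real DSM workflows differ mainly in scale and machinery (random forests or similar learners, dozens of covariates, cross-validation, and per-cell uncertainty estimates), but the calibrate/generalize/validate structure is the same.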
Procedia PDF Downloads 184
153 Nephrotoxicity and Hepatotoxicity Induced by Chronic Aluminium Exposure in Rats: Impact of Nutrients Combination versus Social Isolation and Protein Malnutrition
Authors: Azza A. Ali, Doaa M. Abd El-Latif, Amany M. Gad, Yasser M. A. Elnahas, Karema Abu-Elfotuh
Abstract:
Background: Exposure to aluminium (Al) has increased recently. It is found in food products, food additives, drinking water, cosmetics, and medicines. Chronic consumption of Al causes oxidative stress and has been implicated in several chronic disorders. The liver is considered the major site for detoxification, while the kidney is involved in the elimination of toxic substances and is a target organ of metal toxicity. Social isolation (SI) or protein malnutrition (PM) also causes oxidative stress and has a negative impact on Al-induced nephrotoxicity as well as hepatotoxicity. Coenzyme Q10 (CoQ10) is a powerful intracellular antioxidant with mitochondrial membrane stabilizing ability; wheat grass is a natural product with antioxidant, anti-inflammatory, and other protective activities; and cocoa is also a potent antioxidant that can protect against many diseases. They provide different degrees of protection from the impact of oxidative stress. Objective: To study the impact of social isolation together with protein malnutrition on nephro- and hepatotoxicity induced by chronic Al exposure in rats, as well as to investigate the postulated protection using a combination of CoQ10, wheat grass, and cocoa. Methods: Eight groups of rats were used; four served as protected groups and four as unprotected. All groups except the control received AlCl3 (70 mg/kg, IP) daily for five weeks. The Al-toxicity model groups were divided into Al-toxicity alone, SI-associated PM (10% casein diet), and Al-associated SI&PM groups. Protection was induced by oral co-administration of a combination of CoQ10 (200 mg/kg), wheat grass (100 mg/kg), and cocoa powder (24 mg/kg) together with Al. 
Biochemical changes in total bilirubin, lipids, cholesterol, triglycerides, glucose, proteins, creatinine, and urea, as well as alanine aminotransferase (ALT), aspartate aminotransferase (AST), alkaline phosphatase (ALP), and lactate dehydrogenase (LDH), were measured in the serum of all groups. Specimens of kidney and liver were used for assessment of oxidative parameters (MDA, SOD, TAC, NO), inflammatory mediators (TNF-α, IL-6β, nuclear factor kappa B (NF-κB), caspase-3), and DNA fragmentation, in addition to evaluation of histopathological changes. Results: SI together with PM severely enhanced the nephro- and hepatotoxicity induced by chronic Al exposure. The CoQ10, wheat grass, and cocoa combination showed clear protection against the hazards of Al exposure, either alone or when associated with SI&PM. Their protection was indicated by the significant decrease in Al-induced elevations in total bilirubin, lipids, cholesterol, triglycerides, glucose, creatinine, and urea levels, as well as in ALT, AST, ALP, and LDH. Liver and kidney of the treated groups also showed a significant decrease in MDA, NO, TNF-α, IL-6β, NF-κB, caspase-3, and DNA fragmentation, together with a significant increase in total proteins, SOD, and TAC. The biochemical results were confirmed by the histopathological examinations. Conclusion: SI together with PM represents a risk factor enhancing the nephro- and hepatotoxicity induced by Al in rats. The CoQ10, wheat grass, and cocoa combination provides clear protection against nephro- and hepatotoxicity, as well as the consequent degenerations induced by chronic Al exposure, even when associated with the risk of SI together with PM. Keywords: aluminum, nephrotoxicity, hepatotoxicity, isolation and protein malnutrition, coenzyme Q10, wheatgrass, cocoa, nutrients combinations
Procedia PDF Downloads 249
152 Genetically Engineered Crops: Solution for Biotic and Abiotic Stresses in Crop Production
Authors: Deepak Loura
Abstract:
Production and productivity of several crops in the country continue to be adversely affected by biotic (e.g., insect pests and diseases) and abiotic (e.g., water, temperature, and salinity) stresses. Over-dependence on pesticides and other chemicals is economically non-viable for the resource-poor farmers of our country. Further, pesticides can potentially affect human and environmental safety. While traditional breeding techniques and proper management strategies continue to play a vital role in crop improvement, we need to judiciously use biotechnology approaches for the development of genetically modified crops addressing critical problems in the improvement of crop plants for sustainable agriculture. Modern biotechnology can help to increase crop production, reduce farming costs, and improve food quality and the safety of the environment. Genetic engineering is a new technology that allows plant breeders to produce plants with new gene combinations through genetic transformation of crop plants for the improvement of agronomic traits. Advances in recombinant DNA technology have made it possible to transfer genes between widely divergent species to develop genetically modified or genetically engineered plants. Plant genetic engineering provides the means to harness useful genes and alleles from indigenous microorganisms to enrich the gene pool for developing genetically modified (GM) crops that have inbuilt (inherent) resistance to insect pests, diseases, and abiotic stresses. Plant biotechnology has made significant contributions over the past 20 years in the development of genetically engineered or genetically modified crops with multiple benefits. A variety of traits have been introduced into genetically engineered crops, including (i) herbicide resistance, (ii) pest resistance, (iii) viral resistance, (iv) slow ripening of fruits and vegetables, (v) fungal and bacterial resistance, (vi) abiotic stress tolerance (drought, salinity, temperature, flooding, etc.), 
(vii) quality improvement (starch, protein, and oil), (viii) value addition (vitamins, micro- and macro-elements), (ix) pharmaceutical and therapeutic proteins, and (x) edible vaccines. Multiple genes in transgenic crops can be useful in developing durable disease resistance and a broad insect-control spectrum, and could lead to potential cost-saving advantages for farmers. The development of transgenics to produce high-value pharmaceuticals and edible vaccines is also in progress, although much more research and development work is required before commercially viable products become available. In addition, marker-assisted selection (MAS) is now routinely used to enhance the speed and precision of plant breeding. Newer technologies need to be developed and deployed for enhancing and sustaining agricultural productivity. There is a need to optimize the use of biotechnology in conjunction with conventional technologies to achieve higher productivity with fewer resources. Therefore, genetic modification/engineering of crop plants assumes greater importance, which demands the development and adoption of newer technology for the genetic improvement of crops and increased crop productivity. Keywords: biotechnology, plant genetic engineering, genetically modified, biotic, abiotic, disease resistance
Procedia PDF Downloads 71
151 Sensorless Machine Parameter-Free Control of Doubly Fed Reluctance Wind Turbine Generator
Authors: Mohammad R. Aghakashkooli, Milutin G. Jovanovic
Abstract:
The brushless doubly-fed reluctance generator (BDFRG) is an emerging, medium-speed alternative to the conventional wound-rotor slip-ring doubly-fed induction generator (DFIG) in wind energy conversion systems (WECS). It can provide competitive overall performance and the similarly low failure rates of a typically 30%-rated back-to-back power electronics converter over 2:1 speed ranges, but with the following important reliability and cost advantages over the DFIG: maintenance-free operation afforded by its brushless structure; 50% synchronous speed with the same number of rotor poles (allowing the use of a more compact and more efficient two-stage gearbox instead of a vulnerable three-stage one); and superior grid integration properties, including simpler protection for low-voltage ride-through compliance of the fractional converter due to the comparatively higher leakage inductances and lower fault currents. Vector-controlled pulse-width-modulated converters generally feature a much lower total harmonic distortion than hysteresis counterparts with variable switching rates and as such have been the predominant choice for BDFRG (and DFIG) wind turbines. Eliminating the shaft position sensor often required for control implementation in this case would be desirable to address the associated reliability issues. This fact has largely motivated the recent growing research on sensorless methods and the development of various rotor position and/or speed estimation techniques for this purpose. The main limitation of all the observer-based control approaches for grid-connected wind power applications of the BDFRG reported in the open literature is the requirement for pre-commissioning procedures and prior knowledge of the machine inductances, which are usually difficult to identify accurately by off-line testing. The model reference adaptive system (MRAS) based sensorless vector control scheme presented here overcomes this shortcoming. 
The true machine parameter independence of the proposed field-oriented algorithm, offering robust, inherently decoupled real and reactive power control of the grid-connected winding, is achieved by on-line estimation of the inductance ratio on which the underlying MRAS observer of rotor angular velocity and position relies. Such an observer configuration is more practical to implement and clearly preferable to the existing machine parameter dependent solutions, especially bearing in mind that, with very few modifications, it can be adapted for commercial DFIGs, with immediately obvious further industrial benefits and prospects for this work. The excellent encoder-less controller performance with maximum power point tracking in the base speed region will be demonstrated by realistic simulation studies using large-scale BDFRG design data and verified by experimental results on a small laboratory prototype of the WECS emulation facility. Keywords: brushless doubly fed reluctance generator, model reference adaptive system, sensorless vector control, wind energy conversion
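The MRAS principle behind such observers, a measured plant output compared against an adjustable model whose parameter estimate is driven by the output error, can be illustrated on a toy system. The sketch below is not the BDFRG algorithm of the paper: it is a minimal first-order example with an invented plant parameter standing in for the quantity to be estimated, using a Lyapunov-derived adaptation law.

```python
# Toy MRAS: the plant y' = -a*y + u has an unknown parameter a (standing
# in for the quantity the observer must track, e.g. rotor speed). An
# adjustable model runs in parallel with the estimate a_hat, and the
# output error drives the Lyapunov-derived adaptation law
#   a_hat' = gamma * e * y,  e = y_hat - y,
# which makes V = e**2/2 + (a_hat - a)**2/(2*gamma) non-increasing.
import math

a_true = 2.0          # unknown plant parameter
a_hat = 0.5           # initial guess in the adjustable model
gamma = 10.0          # adaptation gain
dt, T = 1e-3, 60.0
y = y_hat = 0.0

t = 0.0
while t < T:
    u = 1.0 + 0.5 * math.sin(t)         # persistently exciting input
    y += dt * (-a_true * y + u)         # "real" plant (measured output)
    y_hat += dt * (-a_hat * y_hat + u)  # adjustable model
    e = y_hat - y                       # output error
    a_hat += dt * gamma * e * y         # adaptation law
    t += dt
# After the transient, a_hat has converged close to a_true and e ~ 0.
```

The real scheme estimates the rotor angular velocity, position, and the inductance ratio of a multi-winding machine rather than a single scalar, but the error-driven adaptation structure is the same.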
Procedia PDF Downloads 62
150 Financial Policies in the Process of Global Crisis: Case Study of Kosovo
Authors: Shpetim Rezniqi
Abstract:
The current crisis has swept the world, hitting hardest the most developed countries, those which account for most of the world's gross product and enjoy a high standard of living. Even non-experts can describe the consequences of the crisis that are already visible, but how far the crisis will go is impossible to predict. Even the greatest experts can only conjecture, and their views diverge widely, yet they agree on one thing: the devastating effects of this crisis will be more severe than ever before and cannot be predicted. For a long time, the world was dominated by the economic theory of free market laws, with the belief that the market is the regulator of all economic problems. Like river water, the market would flow to find the best course and the necessary solution. Hence the calls for fewer market barriers and less state intervention: the market itself was seen as economically self-regulating. The free market economy became the model of global economic development and progress; it transcended national barriers and became the law of development of the entire world economy. Globalization and global market freedom were the principles of development and international cooperation. All international organizations, such as the World Bank, and the economically powerful states laid down the free market economy and the elimination of state intervention as principles of development and cooperation. The less the state intervened, the more freedom of action there was; this was the leading international principle. We now live in an era of financial tragedy. Financial markets, and banking in particular, are in a dire state: US stock markets fell about 40%, making this one of the darkest moments since the 1920s. 
It is rivaled only by the collapse of Wall Street stocks in 1929, the technological collapse of 2000, the crisis of 1973 after the Yom Kippur war, when the price of oil quadrupled, and the famous collapse of 1937/38, when Europe was on the brink of World War II. In 2000, even though it seemed as if the end of the world was around the corner, the world economy survived almost intact; of course, there were small recessions in the United States, Europe, and Japan. The situation was much more difficult in the crises of the 1930s and the 1970s, yet the world pulled through. The recent financial crisis, however, has all the signs of being much sharper and of having more consequences. The decline in stock prices is largely a byproduct of what is really happening: financial markets began their dance of death with the credit crisis, which came as a result of the large increase in real estate prices and household debt. These last two phenomena can be matched very well with the excesses of the 1920s, a period during which people spent as if there were no tomorrow. The word recession is no longer avoided, and no longer comes as a sudden surprise. The more the financial markets melt, the greater the risk of a problematic economy for years to come. Thus, for example, the banking crisis in Japan proved to be much more severe than initially expected, partly because the assets on which many loans were based, especially land, kept falling in value; land prices in Japan have now been falling for about 15 years (Adri Nurellari, published in the newspaper "Classifieds"). At this moment, it is still difficult to assess the extent to which the crisis has affected the economy and what its consequences will be. What we know is that many banks will need to reduce lending for some time to come, but lending is a bank's primary function, and this means huge losses. Keywords: globalisation, finance, crisis, recommendation, bank, credits
Procedia PDF Downloads 389
149 A Copula-Based Approach for the Assessment of Severity of Illness and Probability of Mortality: An Exploratory Study Applied to Intensive Care Patients
Authors: Ainura Tursunalieva, Irene Hudson
Abstract:
Continuous improvement of both the quality and safety of health care is an important goal in Australia and internationally. The intensive care unit (ICU) receives patients with a wide variety and severity of illnesses. Accurately identifying patients at risk of developing complications or dying is crucial to increasing healthcare efficiency. Thus, it is essential for clinicians and researchers to have a robust framework capable of evaluating the risk profile of a patient, and ICU scoring systems provide such a framework. The Acute Physiology and Chronic Health Evaluation III (APACHE III) and the Simplified Acute Physiology Score II (SAPS II) are ICU scoring systems frequently used for assessing the severity of acute illness. These scoring systems collect multiple risk factors for each patient, including physiological measurements, and then render the assessment outcomes of the individual risk factors into a single numerical value; a higher score is related to a more severe patient condition. Furthermore, the Mortality Probability Model II (MPM II) uses logistic regression based on independent risk factors to predict a patient's probability of mortality. An important, overlooked limitation of SAPS II and MPM II is that they do not, to date, include interaction terms between a patient's vital signs. This is a prominent oversight, as there is likely an interplay among vital signs: the co-existence of certain conditions may pose a greater health risk than when these conditions exist independently. One barrier to including such interaction terms in predictive models is the dimensionality issue, as it becomes difficult to use variable selection. We propose an innovative scoring system which takes into account the dependence structure among a patient's vital signs, such as systolic and diastolic blood pressures, heart rate, pulse interval, and peripheral oxygen saturation. Copulas will capture the dependence among normally distributed and skewed variables, as some of the vital sign distributions are skewed. 
The estimated dependence parameter will then be incorporated into the traditional scoring systems to adjust the points allocated for the individual vital sign measurements. The same dependence parameter will also be used to create an alternative copula-based model for predicting a patient's probability of mortality. The new copula-based approach will accommodate not only a patient's trajectories of vital signs but also the joint dependence probabilities among the vital signs. We hypothesise that this approach will produce more stable assessments and lead to more time-efficient and accurate predictions. We will use two data sets: (1) 250 ICU patients admitted once to the Chui Regional Hospital (Kyrgyzstan) and (2) the agitation-sedation profiles of 37 ICU patients collected by the Hunter Medical Research Institute (Australia). Both the traditional scoring approach and our copula-based approach will be evaluated using the Brier score to indicate overall model performance, the concordance (or c) statistic to indicate discriminative ability (the area under the receiver operating characteristic (ROC) curve), and goodness-of-fit statistics for calibration. We will also report discrimination and calibration values and provide visualizations of the copulas and of high-dimensional regions of risk interrelating two or three vital signs in so-called higher-dimensional ROCs. Keywords: copula, intensive care unit scoring system, ROC curves, vital sign dependence
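As a minimal illustration of the kind of dependence parameter such a proposal would feed into a scoring system, the sketch below fits a Gaussian copula to two hypothetical vital-sign series by inverting the Spearman-rho relation rho_s = (6/pi)*arcsin(rho/2). The data, marginal parameters, and the choice of a Gaussian copula are assumptions made for the example; the study itself may use other copula families.

```python
import math, random

def ranks(xs):
    """Rank observations (1-based); ties are not specially handled
    because the vital signs are continuous measurements."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(xs, ys):
    """Spearman's rho = Pearson correlation of the ranks."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

def gaussian_copula_rho(xs, ys):
    """Dependence parameter of a Gaussian copula, obtained by inverting
    the relation rho_s = (6/pi)*asin(rho/2)."""
    return 2.0 * math.sin(math.pi * spearman(xs, ys) / 6.0)

# Hypothetical paired vital signs: systolic BP and heart rate with
# built-in dependence of 0.7 (for illustration only).
random.seed(1)
sbp, hr = [], []
for _ in range(2000):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    sbp.append(120 + 15 * z1)
    hr.append(80 + 10 * (0.7 * z1 + math.sqrt(1 - 0.49) * z2))

rho = gaussian_copula_rho(sbp, hr)  # recovers roughly 0.7
```

Working through ranks makes the estimate invariant to the (possibly skewed) marginal distributions, which is why the abstract's point about skewed vital signs matters: the copula separates the dependence structure from the marginals.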
Procedia PDF Downloads 152
148 Genotoxic Effect of Tricyclic Antidepressant Drug “Clomipramine Hydrochloride” on Somatic and Germ Cells of Male Mice
Authors: Samia A. El-Fiky, Fouad A. Abou-Zaid, Ibrahim M. Farag, Naira M. El-Fiky
Abstract:
Clomipramine hydrochloride is one of the most widely used tricyclic antidepressant drugs in Egypt. This drug contains two benzene rings in its chemical structure, and benzene is considered a toxic and clastogenic agent. The present study was therefore designed to assess the genotoxic effect of clomipramine hydrochloride on somatic and germ cells in mice. Three dose levels, 0.195 (low), 0.26 (medium), and 0.65 (high) mg/kg b.wt., were used. Seven groups of male mice were utilized in this work. The first group served as a control. In the remaining six groups, each of the above doses was orally administered to two groups, one of them treated for 5 days and the other given the same dose for 30 days. At the end of the experiments, the animals were sacrificed for cytogenetic and sperm examination as well as histopathological investigations using hematoxylin and eosin (H and E) stains and electron microscopy. The sperm studies were confined to the 5-day treatment with the different dose levels, and the ultrastructural investigation by electron microscope was restricted to the 30-day treatment. The results of the dose-dependent effect of clomipramine showed that treatment with the three different doses induced increases in the frequencies of chromosome aberrations in bone marrow and spermatocyte cells as compared to the control. In addition, the mitotic and meiotic activities of somatic and germ cells declined. Treatments with the medium or high doses were more effective in inducing significant increases in chromosome aberrations and significant decreases in cell division than treatment with the low dose, and the effect of the high dose was more pronounced than that of the medium dose. Moreover, the results of the time-dependent effect of clomipramine showed that treatment with the different dose levels for 30 days led to significantly greater increases in genetic aberrations than treatment for 5 days. 
Sperm examinations revealed that treatment with clomipramine at the different dose levels caused a significant increase in sperm shape abnormalities and a significant decrease in sperm count as compared to the control. The adverse effects on sperm shape and count were more evident with the medium and high doses than with the low dose; the group of mice treated with the high dose had the highest rate of sperm shape abnormalities and the lowest sperm count compared to mice receiving the medium dose. In the histopathological investigation, hematoxylin and eosin staining showed that use of the low dose of clomipramine for 5 or 30 days caused few pathological changes in liver tissue, whereas the medium and high doses for 5 or 30 days induced more severe damage; treatment with the high dose for 30 days produced the most severe pathological changes in hepatic cells. Moreover, ultrastructural examination revealed that mice treated with the low dose of clomipramine showed few differences in liver histological architecture compared to the control group, and these differences were confined to cytoplasmic inclusions, whereas prominent pathological changes in nuclei as well as dilation of the rough endoplasmic reticulum (rER) were observed in mice treated with the medium or high doses of the drug. In conclusion, the present study adds evidence that treatment with medium or high doses of clomipramine has genotoxic effects on the somatic and germ cells of mice as unwanted side effects. However, the low dose (especially for the short time of 5 days) can be utilized as a therapeutic dose, as it caused proportions of genetic, sperm, and histopathological changes relatively similar to those found in the normal control. Keywords: chromosome aberrations, clomipramine, mice, histopathology, sperm abnormalities
Procedia PDF Downloads 521
147 Computer-Integrated Surgery of the Human Brain, New Possibilities
Authors: Ugo Galvanetto, Pirto G. Pavan, Mirco Zaccariotto
Abstract:
The discipline of computer-integrated surgery (CIS) will provide equipment able to improve the efficiency of healthcare systems and, more importantly, clinical results. Surgeons and machines will cooperate in new ways that will extend surgeons' ability to train, plan, and carry out surgery. Patient-specific CIS of the brain requires several steps: 1 – Fast generation of brain models. Based on image recognition of MR images and equipped with artificial intelligence, image recognition techniques should differentiate among all brain tissues and segment them. After that, automatic mesh generation should create the mathematical model of the brain in which the various tissues (white matter, grey matter, cerebrospinal fluid, …) are clearly located in the correct positions. 2 – Reliable and fast simulation of the surgical process. Computational mechanics will be the crucial aspect of the entire procedure. New algorithms will be used to simulate the mechanical behaviour of cutting through cerebral tissues. 3 – Real-time provision of visual and haptic feedback. A sophisticated human-machine interface based on ergonomics and psychology will provide the feedback to the surgeon. The present work addresses point 2 in particular. Modelling the cutting of soft tissue in a structure as complex as the human brain is an extremely challenging problem in computational mechanics. The finite element method (FEM), which accurately represents complex geometries and accounts for material and geometrical nonlinearities, is the most widely used computational tool to simulate the mechanical response of soft tissues. However, the main drawback of FEM lies in the theory on which it is based, classical continuum mechanics, which assumes matter is a continuum with no discontinuities. FEM must resort to complex tools such as pre-defined cohesive zones, external phase-field variables, and demanding remeshing techniques to include discontinuities. 
However, all approaches to equipping FEM with the capability to describe material separation, such as interface elements with cohesive zone models, X-FEM, element erosion, and phase-field, have drawbacks that make them unsuitable for surgery simulation. Interface elements require a priori knowledge of crack paths. The use of X-FEM in 3D is cumbersome. Element erosion does not conserve mass. The phase-field approach adopts a diffusive crack model instead of describing the true tissue separation typical of surgical procedures. Modelling discontinuities, so difficult for computational approaches based on classical continuum mechanics, is instead easy for novel computational methods based on peridynamics (PD). PD is a non-local theory of mechanics formulated without the use of spatial derivatives. Its governing equations are valid at points or surfaces of discontinuity, and it is therefore especially suited to describing crack propagation and fragmentation problems. Moreover, PD does not require any criterion to decide the direction of crack propagation or the conditions for crack branching or coalescence; in PD-based computational methods, cracks develop spontaneously in the way that is most convenient from an energy point of view. Therefore, in PD computational methods, crack propagation in 3D is as easy as it is in 2D, a remarkable advantage with respect to all other computational techniques. Keywords: computational mechanics, peridynamics, finite element, biomechanics
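The claim that PD governing equations remain valid at surfaces of discontinuity can be shown in a toy setting. The sketch below is a deliberately minimal 1D bond-based illustration with invented parameters: bond stretch stays well-defined across a prescribed displacement jump (an opened cut), and the bonds that exceed a critical stretch, the ones that would break, localize at the crack with no crack-path input or remeshing.

```python
# Minimal 1D bond-based peridynamics sketch: nodes interact through
# bonds within a horizon; a bond whose stretch exceeds a critical value
# breaks. A displacement jump (crack opening) needs no special
# treatment: stretch stays well-defined across the discontinuity.
n, dx = 40, 1.0
horizon = 3.0 * dx           # each node "sees" neighbours within 3*dx
s_crit = 0.05                # critical bond stretch (toy value)
x = [i * dx for i in range(n)]

# Prescribed displacement field with a jump at the bar's midpoint,
# mimicking an opened cut through the tissue.
crack_at = n // 2 * dx
u = [0.0 if xi < crack_at else 0.5 for xi in x]

broken = []
for i in range(n):
    for j in range(i + 1, n):
        xi = x[j] - x[i]                 # reference bond vector
        if abs(xi) > horizon:
            continue                     # outside the horizon: no bond
        eta = u[j] - u[i]                # relative displacement
        stretch = (abs(xi + eta) - abs(xi)) / abs(xi)
        if stretch > s_crit:
            broken.append((i, j))

# Every broken bond straddles the crack; bonds away from it are intact.
```

A full PD solver would integrate the non-local equations of motion and let the jump emerge from the loading rather than prescribing it, but the point stands: the stretch measure, and hence the bond-breaking criterion, needs no derivatives and no special treatment at the discontinuity.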
Procedia PDF Downloads 80
146 Non-Mammalian Pattern Recognition Receptor from Rock Bream (Oplegnathus fasciatus): Genomic Characterization and Transcriptional Profile upon Bacterial and Viral Inductions
Authors: Thanthrige Thiunuwan Priyathilaka, Don Anushka Sandaruwan Elvitigala, Bong-Soo Lim, Hyung-Bok Jeong, Jehee Lee
Abstract:
Toll-like receptors (TLRs) are a phylogenetically conserved family of pattern recognition receptors that participate in host immune responses against various pathogens and pathogen-derived mitogens. TLR21, a non-mammalian type, is largely restricted to fish species, although it can occasionally be identified in avians and amphibians. Herein, this study was carried out to identify and characterize TLR21 from rock bream (Oplegnathus fasciatus), designated RbTLR21, at the transcriptional and genomic levels. The full-length cDNA and genomic sequence of RbTLR21 were identified using a previously constructed cDNA sequence database and BAC library, respectively, and the identified RbTLR21 sequence was characterized using several bioinformatics tools. A quantitative real-time PCR (qPCR) experiment was conducted to determine the tissue-specific expressional distribution of RbTLR21. Further, the transcriptional modulation of RbTLR21 upon stimulation with Streptococcus iniae (S. iniae), rock bream iridovirus (RBIV), and Edwardsiella tarda (E. tarda) was analyzed in spleen tissues. The complete coding sequence of RbTLR21 was 2919 bp in length, encoding a protein of 973 amino acid residues with a molecular mass of 112 kDa and a theoretical isoelectric point of 8.6. The predicted protein sequence resembled a typical TLR domain architecture, including a C-terminal ectodomain with 16 leucine-rich repeats, a transmembrane domain, a cytoplasmic TIR domain, and a signal peptide of 23 amino acid residues. Moreover, protein folding pattern prediction of RbTLR21 exhibited a well-structured and folded ectodomain, transmembrane domain, and cytoplasmic TIR domain. According to pairwise sequence analysis, RbTLR21 showed the closest homology with orange-spotted grouper (Epinephelus coioides) TLR21, with 76.9% amino acid identity. Furthermore, our phylogenetic analysis revealed that RbTLR21 shares a close evolutionary relationship with its ortholog from Danio rerio. 
The genomic structure of RbTLR21 consisted of a single exon, similar to its zebrafish ortholog. Several putative transcription factor binding sites were also identified in the 5ʹ flanking region of RbTLR21. RbTLR21 was ubiquitously expressed in all the tissues we tested, with relatively high expression levels in spleen, liver, and blood. Upon induction with rock bream iridovirus, RbTLR21 expression was upregulated in the early phase of the post-induction period, even though its expression level fluctuated in the latter phase. After Edwardsiella tarda injection, RbTLR21 transcripts were upregulated throughout the experiment. Similarly, Streptococcus iniae induction elicited significant upregulation of RbTLR21 mRNA expression in the spleen tissues. Collectively, our findings suggest that RbTLR21 is indeed a homolog of TLR21 family members and may be involved in host immune responses against bacterial and DNA viral infections. Keywords: rock bream, toll like receptor 21 (TLR21), pattern recognition receptor, genomic characterization
Procedia PDF Downloads 542
145 The Relevance of Community Involvement in Flood Risk Governance Towards Resilience to Groundwater Flooding. A Case Study of Project Groundwater Buckinghamshire, UK
Authors: Claude Nsobya, Alice Moncaster, Karen Potter, Jed Ramsay
Abstract:
Flood Risk Governance (FRG) has shifted away from traditional approaches that relied solely on centralized decision-making and structural flood defenses, towards integrated flood risk management measures that involve various actors and stakeholders. This new approach emphasizes people-centered approaches, including adaptation and learning. The shift to a diversity of FRG approaches has been identified as a significant factor in enhancing resilience, where resilience refers to a community's ability to withstand, absorb, recover, adapt, and potentially transform in the face of flood events. It is argued that if FRG merely focused on conventional 'fighting the water' flood defense, communities would not be resilient. The move to these people-centered approaches also implies that communities will be more involved in FRG. It is suggested that effective flood risk governance influences resilience through meaningful community involvement, and effective community engagement is vital in shaping community resilience to floods. Successful community participation not only uses context-specific indigenous knowledge but also develops a sense of ownership and responsibility; through capacity development initiatives, it can also raise awareness, and all of this helps in building resilience. Recent Flood Risk Management (FRM) projects have thus had increasing community involvement, with varied conceptualizations of such community engagement in the academic literature on FRM. For overland floods, there is a substantial body of literature on flood risk governance and management. Yet groundwater flooding has received little attention despite its unique qualities, such as its persistence for weeks or months, slow onset, and near-invisibility. In particular, there has been little study of how successful community involvement in flood risk governance may improve community resilience to groundwater flooding. 
This paper focuses on a case study of a flood risk management project in the United Kingdom. Buckinghamshire Council is leading Project Groundwater, one of 25 significant initiatives sponsored by England's Department for Environment, Food and Rural Affairs (DEFRA) Flood and Coastal Resilience Innovation Programme. DEFRA awarded Buckinghamshire Council and other councils £150 million to collaborate with communities and implement innovative methods to increase resilience to groundwater flooding. Based on a literature review, this paper proposes a new paradigm for effective community engagement in Flood Risk Governance (FRG). This study contends that effective community participation can have an impact on the various resilience capacities identified in the literature, including social, institutional, physical, natural, human, and economic capital. In the case of social capital, for example, successful community engagement can influence social capital through the process of social learning as well as through developing social networks and trust values, which are vital in influencing communities' capacity to resist, absorb, recover, and adapt. The study examines community engagement in Project Groundwater using surveys with local communities and documentary analysis to test this notion. The outcomes of the study will inform community involvement activities in Project Groundwater and may shape DEFRA policies and guidelines for community engagement in FRM. Keywords: flood risk governance, community, resilience, groundwater flooding
Procedia PDF Downloads 70
144 Genomic and Proteomic Variability in Glycine Max Genotypes in Response to Salt Stress
Authors: Faheema Khan
Abstract:
To investigate the ability of sensitive and tolerant genotypes of Glycine max to adapt to a saline environment in the field, we examined growth performance, water relations, and the activities of antioxidant enzymes in relation to photosynthetic rate, chlorophyll a fluorescence, photosynthetic pigment concentration, protein, and proline in plants exposed to salt stress. Ten soybean genotypes (Pusa-20, Pusa-40, Pusa-37, Pusa-16, Pusa-24, Pusa-22, BRAGG, PK-416, PK-1042, and DS-9712) were selected and grown hydroponically. After 3 days of proper germination, the seedlings were transferred to Hoagland’s solution (Hoagland and Arnon 1950). The growth chamber was maintained at a photosynthetic photon flux density of 430 μmol m⁻² s⁻¹, 14 h of light, 10 h of dark, and a relative humidity of 60%. The nutrient solution was bubbled with sterile air and changed on alternate days. Ten-day-old seedlings were given seven levels of salt in the form of NaCl, viz., T1 = 0 mM, T2 = 25 mM, T3 = 50 mM, T4 = 75 mM, T5 = 100 mM, T6 = 125 mM, and T7 = 150 mM NaCl. The investigation showed that genotypes Pusa-24, PK-416, and Pusa-20 appeared to be the most salt-sensitive, as inferred from their significantly reduced length, fresh weight, and dry weight in response to NaCl exposure. Pusa-37 appeared to be the most tolerant genotype, since no significant effect of NaCl treatment on growth was found. We observed a greater decline in photosynthetic variables such as photosynthetic rate, chlorophyll fluorescence, and chlorophyll content in the salt-sensitive genotype (Pusa-24) than in the salt-tolerant Pusa-37 under high salinity. Numerous primers obtained from Operon Technologies were verified on the ten soybean genotypes, among which 30 RAPD primers showed high polymorphism and genetic variation. Jaccard’s similarity coefficient was calculated for each pairwise comparison between cultivars, and a similarity coefficient matrix was constructed. 
The closer varieties in a cluster behaved similarly in their response to salinity tolerance. Intra-clustering within the two clusters precisely grouped the 10 genotypes into sub-clusters, as expected from the physiological findings. The salt-tolerant genotype Pusa-37 was further analysed by two-dimensional gel electrophoresis to examine the differential expression of proteins under high salt stress. In the present study, 173 protein spots were identified. Of these, 40 proteins responsive to salinity were either up- or down-regulated in Pusa-37. Proteomic analysis of the salt-tolerant genotype (Pusa-37) led to the detection of proteins involved in a variety of biological processes, such as protein synthesis (12%), redox regulation (19%), primary and secondary metabolism (25%), and disease- and defence-related processes (32%). In conclusion, the soybean plants in our study responded to salt stress by changing their protein expression pattern. The photosynthetic, biochemical, and molecular results showed that there is variability in salt-tolerance behaviour among soybean genotypes: Pusa-24 is the salt-sensitive and Pusa-37 the salt-tolerant genotype. Moreover, this study gives new insights into the salt-stress response in soybean and demonstrates the power of combined genomic and proteomic approaches in plant biology, which could ultimately help identify the regulatory switches (gene/s) controlling salt tolerance in crop plants and their possible role in defence mechanisms. Keywords: glycine max, salt stress, RAPD, genomic and proteomic variability
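The pairwise Jaccard coefficient used to build the similarity matrix can be sketched in a few lines; the genotype names and binary band patterns below are invented for illustration, not the study's RAPD data.

```python
# Hypothetical sketch: Jaccard similarity between two genotypes scored for
# presence (1) / absence (0) of RAPD bands across the same set of primers.
def jaccard(a, b):
    """Jaccard coefficient J = M11 / (M11 + M10 + M01) for binary band scores."""
    both = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    either = sum(1 for x, y in zip(a, b) if x == 1 or y == 1)
    return both / either if either else 0.0

# Invented band patterns for two cultivars (1 = band present)
pusa24 = [1, 0, 1, 1, 0, 1]
pusa37 = [1, 1, 0, 1, 0, 1]
print(round(jaccard(pusa24, pusa37), 2))  # shared bands / union of bands
```

Computing this coefficient for every cultivar pair fills the similarity matrix from which the clusters are derived.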
Procedia PDF Downloads 423
143 Numerical Prediction of Width Crack of Concrete Dapped-End Beams
Authors: Jatziri Y. Moreno-Martinez, Arturo Galvan, Xavier Chavez Cardenas, Hiram Arroyo
Abstract:
Several methods have been utilized to study the prediction of cracking of concrete structures under loading. Finite element analysis is an alternative that shows good results. The aim of this work was the numerical study of crack width in reinforced concrete beams with dapped ends, which are frequently found in bridge girders and precast concrete construction. Properly restricting cracking is an important aspect of dapped-end design; it has been observed that cracks exceeding the allowable widths are unacceptable in environments aggressive to reinforcing steel. To simulate the crack width, the discrete crack approach was adopted by means of a Cohesive Zone Model (CZM), using a function to represent the crack opening. Two dapped-end specimens were constructed and tested in the Laboratory of Structures and Materials of the Engineering Institute of UNAM. The first case considers reinforcement based on hangers as well as vertical and horizontal rings; in the second case, 50% of the vertical stirrups in the dapped end were replaced by an equivalent area (vertically projected) of diagonal bars. The loading protocol consisted of applying symmetrical loading up to the service load. The models were built in the software package ANSYS v. 16.2. The concrete was modeled using three-dimensional solid elements (SOLID65) capable of cracking in tension and crushing in compression. A Drucker-Prager yield surface was used to include plastic deformations. The reinforcement was introduced with a smeared approach. Interface delamination was modeled by traditional fracture mechanics methods such as the nodal release technique, adopting softening relationships between tractions and separations, which in turn introduce a critical fracture energy that is also the energy required to break apart the interface surfaces. This technique is the basis of the CZM. 
The interface surfaces of the materials are represented by surface-to-surface contact elements (CONTA173) with initially bonded contact. The Mode I-dominated bilinear CZM model assumes that the separation of the material interface is dominated by the displacement jump normal to the interface. The crack opening was characterized by the maximum normal contact stress, the contact gap at the completion of debonding, and the maximum equivalent tangential contact stress. The contact elements were placed at the re-entrant corner of the dapped end. To validate the proposed approach, the results obtained with this procedure were compared with the experimental tests. A good correlation between the experimental and numerical load-displacement curves was found, and the numerical models also allowed the load-crack width curves to be obtained. In both cases, the proposed model confirmed its capability to predict the maximum crack width, with an error of ±30%. Finally, the orientation of the crack is fundamental to the prediction of crack width. The results regarding the crack width can be considered good from a practical point of view: both the load-displacement curve of the test and the location of the crack were reproduced favorably. Keywords: cohesive zone model, dapped-end beams, discrete crack approach, finite element analysis
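The bilinear Mode I traction-separation law underlying this CZM can be sketched as follows; the stress and separation values are assumed for illustration, not the calibrated ANSYS inputs used in the study.

```python
# Illustrative bilinear Mode I cohesive law (not ANSYS's exact implementation):
# traction rises linearly to sigma_max at delta_0, then softens to zero at the
# critical separation delta_c, where the interface is fully debonded.
def bilinear_traction(delta, sigma_max, delta_0, delta_c):
    """Normal traction as a function of the normal displacement jump delta."""
    if delta <= 0:
        return 0.0
    if delta < delta_0:
        return sigma_max * delta / delta_0          # elastic (bonded) branch
    if delta < delta_c:
        return sigma_max * (delta_c - delta) / (delta_c - delta_0)  # softening
    return 0.0                                       # fully debonded

# Assumed parameters: max normal stress (MPa), separations (mm)
sigma_max, delta_0, delta_c = 3.0, 0.01, 0.1
# Critical fracture energy = area under the triangle = 0.5 * sigma_max * delta_c
G_c = 0.5 * sigma_max * delta_c
```

The area under this curve is the critical fracture energy mentioned in the text, which links the contact-stress and debonding-gap parameters.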
Procedia PDF Downloads 167
142 Sorbitol Galactoside Synthesis Using β-Galactosidase Immobilized on Functionalized Silica Nanoparticles
Authors: Milica Carević, Katarina Banjanac, Marija ĆOrović, Ana Milivojević, Nevena Prlainović, Aleksandar Marinković, Dejan Bezbradica
Abstract:
Nowadays, the growing awareness of the beneficial effects of functional food on human health has directed due attention to research aimed at obtaining new products with improved physiological and physicochemical characteristics. Different approaches to the synthesis of valuable bioactive compounds have therefore been proposed. β-Galactosidase, for example, although mainly utilized as a hydrolytic enzyme, has proved to be a promising tool for these purposes. Namely, under particular conditions, such as high lactose concentration, elevated temperatures, and low water activity, transfer of a galactose moiety to the free hydroxyl group of an alternative acceptor (e.g., different sugars, alcohols, or aromatic compounds) can generate a wide range of potentially interesting products. Up to now, galacto-oligosaccharides and lactulose have attracted the most attention due to their inherent prebiotic properties. The goal of this study was to obtain a novel product, sorbitol galactoside, using a similar reaction mechanism, namely the transgalactosylation reaction catalyzed by β-galactosidase from Aspergillus oryzae. By using a sugar alcohol (sorbitol) as the alternative acceptor, a diverse mixture of potential prebiotics is produced, giving the product more favorable functional features. Nevertheless, the introduction of an alternative acceptor into the reaction mixture adds to the complexity of the reaction scheme, since several potential reaction pathways are introduced. Therefore, a thorough optimization using the response surface method (RSM) was performed to gain insight into the influence of the individual parameters (lactose concentration, sorbitol-to-lactose molar ratio, enzyme concentration, NaCl concentration, and reaction time), as well as their mutual interactions, on product yield and productivity. 
In view of product yield maximization, the obtained model predicted an optimal lactose concentration of 500 mM, a sorbitol-to-lactose molar ratio of 9, an enzyme concentration of 0.76 mg/ml, an NaCl concentration of 0.8 M, and a reaction time of 7 h. From the aspect of productivity, the optimum substrate molar ratio was found to be 1, while the values of the other factors coincide. In order to additionally improve enzyme efficiency and enable its reuse and potential continuous application, β-galactosidase was immobilized onto tailored silica nanoparticles. Non-porous fumed silica nanoparticles (FNS) were chosen on the basis of their biocompatibility and non-toxicity, as well as their advantageous mechanical and hydrodynamic properties. However, in order to achieve better compatibility between the enzyme and the carrier, the silica surface was modified with an amino-functional organosilane (3-aminopropyltrimethoxysilane, APTMS). The obtained support with amino functional groups (AFNS) enabled high enzyme loadings and, more importantly, extremely high expressed activities: approximately 230 mg protein/g and 2100 IU/g, respectively. Moreover, this immobilized preparation showed high affinity towards sorbitol galactoside synthesis. Therefore, the findings of this study could provide a valuable contribution to the efficient production of physiologically active galactosides in immobilized enzyme reactors. Keywords: β-galactosidase, immobilization, silica nanoparticles, transgalactosylation
Procedia PDF Downloads 301
141 Developing an Integrated Clinical Risk Management Model
Authors: Mohammad H. Yarmohammadian, Fatemeh Rezaei
Abstract:
Introduction: Improving patient safety is one of the main priorities in healthcare systems, so clinical risk management in organizations has become increasingly significant. Although several tools have been developed for clinical risk management, each has its own limitations. Aims: This study aims to develop a comprehensive tool in which the limitations of each risk assessment and management tool are compensated by the advantages of the others. Methods: The procedure comprised two main stages: development of an initial model through meetings with professors and a literature review, followed by implementation and verification of the final model. Subjects and Methods: This is a combined quantitative-qualitative study. For the qualitative dimension, focus groups with an inductive approach were used. To evaluate the results of the qualitative study, quantitative assessment was conducted in two parts of the fourth phase and in the seventh phase of the research. Purposive and stratified sampling of the various teams responsible for the selected process was conducted in the operating room. The final model was verified in eight phases through the application of activity breakdown structure, failure mode and effects analysis (FMEA), healthcare risk priority number (RPN), root cause analysis (RCA), fault tree (FT) analysis, and the Eindhoven Classification Model (ECM). The model was applied to patients admitted to a day-clinic ward of a public hospital for surgery from October 2012 to June. Statistical Analysis Used: Qualitative data were analysed through content analysis, and quantitative analysis was done through checklists and edited RPN tables. Results: After verification of the final model in eight steps, the patient admission process for surgery was mapped in five main phases by the focus discussion group (FDG) members. Then, following the FMEA methodology, 85 failure modes, along with their causes, effects, and preventive capabilities, were set out in tables. 
The tables developed to calculate the RPN index contain three criteria for severity, two for probability, and two for preventability. Three failure modes were above the determined significant-risk limit (RPN > 250). After a 3-month period, patient misidentification incidents were the most frequently reported events. The RPN criteria of the misidentification events were compared, and the RPN numbers of the three reported misidentification events could be determined against the scores predicted in the previous phase. Root causes identified through the fault tree were categorized with the ECM. The wrong-side surgery event was selected by the focus discussion group for a proposed improvement action; the most important cause was a lack of planning of the number and priority of surgical procedures. After prioritization of the suggested interventions, a computerized registration system within the health information system (HIS) was adopted to prepare the action plan in the final phase. Conclusion: The complexity of the healthcare industry requires risk managers to have a multifaceted vision. Applying only a retrospective or a prospective tool for risk management therefore does not work, and each organization must provide the conditions for the potential application of these methods. The results of this study showed that the integrated clinical risk management model can be used in hospitals as an efficient tool to improve clinical governance. Keywords: failure modes and effects analysis, risk management, root cause analysis, model
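The RPN screening step can be sketched as the standard FMEA product of scores against the study's threshold of 250; the failure modes and the severity/probability/preventability scores below are invented for illustration, not the study's worksheet values.

```python
# Hypothetical FMEA worksheet: RPN = severity * probability * preventability.
# All entries and scores are invented; only the threshold (250) comes from
# the study.
failure_modes = [
    # (failure mode, severity, probability, preventability)
    ("patient misidentification", 9, 7, 5),
    ("wrong-side surgery",        10, 4, 7),
    ("delayed lab results",        5, 6, 4),
]

THRESHOLD = 250  # significant-risk limit used in the study

# Keep only failure modes whose RPN exceeds the limit
significant = [(name, s * p * d)
               for name, s, p, d in failure_modes
               if s * p * d > THRESHOLD]
```

Only the modes exceeding the threshold are carried forward to root cause analysis, which mirrors how the study narrowed 85 failure modes down to three.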
Procedia PDF Downloads 249
140 Generating Biogas from Municipal Kitchen Waste: An Experience from Gaibandha, Bangladesh
Authors: Taif Rocky, Uttam Saha, Mahobul Islam
Abstract:
With rapid urbanisation in Bangladesh, waste management remains one of the core challenges. Turning municipal waste into biogas for mass usage is a solution that Bangladesh needs to adopt urgently. Practical Action, with its commitment to challenging poverty through technological justice, has piloted such an idea in Gaibandha. The initiative received immense success and drew the attention of policy makers and practitioners. We believe biogas from waste can contribute substantially to meeting the country's growing demand for energy, now and in the future. Practical Action has field-based experience in promoting small-scale and innovative technologies and a proven track record in integrated solid waste management. We utilized this experience to promote waste-to-biogas at the end-user level. In 2011, we piloted a project on waste-to-biogas in Gaibandha, a northern secondary town of Bangladesh. With resources and support from UNICEF, and with our own innovation funds, we established a complete chain for converting waste into renewable energy and organic fertilizer. Biogas is produced from municipal solid waste, which is properly collected, transported, and segregated by private entrepreneurs. The project has two major focuses: diversification of biogas end use and establishing a public-private partnership business model. The project benefits include recycling of wastes, improved institutional (municipal) capacity, livelihoods from improved services, and direct income from the project. Project risks include changes of municipal leadership, traditional mindsets, access to decision making, and land availability. We have observed several outcomes from the initiative; upscaling it will certainly contribute to a cleaner, healthier urban environment and to urban poverty reduction: - It reduces the unsafe disposal of wastes, improving the cleanliness and environment of the town. 
- It makes the drainage system effective, reducing the adverse impact of waterlogging or flooding.
- It improves public health through better management of wastes.
- It promotes the use of biogas in place of firewood/coal, which creates smoke and indoor air pollution in kitchens, with long-term impacts on the health of women and children.
- It reduces greenhouse gas emissions through the anaerobic recycling of wastes and contributes to a sustainable urban environment.
- It promotes the concept of agroecology through the use of bio-slurry/compost, which contributes to food security.
- It creates green jobs in the waste value chain, contributing to poverty alleviation among the urban extreme poor.
- It improves municipal governance through inclusive waste services and functional partnership with the private sector.
- It contributes to the implementation of the 3R (Reduce, Reuse, Recycle) Strategy and to employment creation for the extreme poor, helping achieve the targets set in Vision 2021 by the Government of Bangladesh. Keywords: kitchen waste, secondary town, biogas, segregation
Procedia PDF Downloads 222
139 Lessons Learnt from Industry: Achieving Net Gain Outcomes for Biodiversity
Authors: Julia Baker
Abstract:
Development plays a major role in stopping biodiversity loss, but the 'silo species' protection of legislation (where certain species are protected while many are not) means that development can be 'legally compliant' and still result in biodiversity loss. 'Net Gain' (NG) policies can help overcome this by making it an absolute requirement that development causes no overall loss of biodiversity and brings a benefit. However, offsetting biodiversity losses in one location with gains elsewhere is controversial, because people suspect 'offsetting' to be an easy way for developers to buy their way out of conservation requirements. Yet the good practice principles (GPP) of offsetting provide several advantages over existing legislation for protecting biodiversity from development. This presentation describes the learning from implementing NG approaches based on the GPP. It concerns major upgrades of the UK's transport networks, which involved removing vegetation in order to construct and safely operate new infrastructure. While low-lying habitats could be retained, trees and other habitats disrupting the running or safety of the transport networks could not be. Consequently, achieving NG within the transport corridor was not possible, and offsetting was required. The first lessons learnt concerned obtaining a commitment from business leaders to go beyond legislative requirements and deliver NG, and the institutional change necessary to embed the GPP within daily operations. These issues can only be addressed when the challenges that biodiversity poses for business are overcome. These challenges included: biodiversity cannot be measured easily, unlike other sustainability factors such as carbon and water that have metrics for target-setting and measuring progress; and the mindset that biodiversity costs money without generating cash in return, the opposite of carbon or waste, for example, where people can see how 'sustainability' actions save money. 
The challenges were overcome by presenting the GPP of NG as a cost-efficient solution to specific, critical risks facing the business that also boosts industry recognition, and by using government-issued NG metrics to develop business-specific toolkits charting NG progress while ensuring that NG decision-making was based on rich ecological data. Institutional change was best achieved by supporting, mentoring, and training sustainability/environmental managers, so that these 'frontline' staff could embed the GPP within the business. The second learning came from implementing the GPP where business partnered with local governments, wildlife groups, and landowners to support their priorities for nature conservation, and where these partners had a say in decisions about where and how best to achieve NG. Through this inclusive approach, offsetting contributed towards conservation priorities when all collaborated to manage the trade-offs between:
- Delivering ecologically equivalent offsets or compensating for losses of one type of biodiversity by providing another.
- Achieving NG locally to the development while contributing towards national conservation priorities through landscape-level planning.
- Not just protecting the extent and condition of existing biodiversity but 'doing more'.
The multi-sector collaborations identified practical, workable solutions to 'in perpetuity' management. Key, however, was strengthening the linkages between biodiversity measures implemented for development and the conservation work undertaken by local organizations, so that developers support NG initiatives that really count. Keywords: biodiversity offsetting, development, nature conservation planning, net gain
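The government-issued NG metrics mentioned above score habitats in "biodiversity units". A heavily simplified sketch of that style of calculation is shown below; the multiplier values are invented, and the real metric also applies strategic significance, risk, and time-discounting factors not modelled here.

```python
# Simplified sketch in the spirit of a biodiversity-unit metric:
# units = area * distinctiveness * condition. All multiplier values are
# invented for illustration; this is not the official calculation.
def biodiversity_units(area_ha, distinctiveness, condition):
    return area_ha * distinctiveness * condition

baseline = biodiversity_units(2.0, 4, 2)  # habitat lost to construction
offset   = biodiversity_units(3.0, 4, 2)  # habitat created elsewhere
net_gain = offset - baseline
print(net_gain)  # → 8.0 units gained overall
```

Expressing losses and gains in a common unit is what lets a toolkit chart NG progress and verify that offsets at least balance the on-site losses.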
Procedia PDF Downloads 195
138 Modelling Pest Immigration into Rape Seed Crops under Past and Future Climate Conditions
Authors: M. Eickermann, F. Ronellenfitsch, J. Junk
Abstract:
Oilseed rape (Brassica napus L.) is one of the most important crops throughout Europe, but pressure from pest insects and pathogens can substantially reduce yields, so pesticide use in this crop is considerable. In addition, climate change can interact with the phenology of the host plant and its pests and put additional pressure on the yield. Next to the pollen beetle, Meligethes aeneus L., the seed-damaging pests cabbage seed weevil (Ceutorhynchus obstrictus Marsham) and brassica pod midge (Dasineura brassicae Winn.) have the greatest economic impact on yield. While females of C. obstrictus infest oilseed rape by depositing single eggs into young pods, females of D. brassicae use this local damage to the pod for their own oviposition, depositing batches of 20-30 eggs. Without a prior infestation by the cabbage seed weevil, significant yield reduction by the brassica pod midge can be ruled out. Based on long-term, multi-site field experiments, a comprehensive data set on pest migration into crops of B. napus has been built up over the last ten years. Five observational test sites, situated in different climatic regions of Luxembourg, were monitored twice a week from February until the end of May. Pest migration was recorded using yellow water pan-traps, and the insects caught were identified in the laboratory according to species-specific identification keys. By combining the pest observations with the corresponding meteorological observations, models to predict the migration periods of the seed-damaging pests could be set up. This approach is the basis for a computer-based decision support tool to assist farmers in identifying the appropriate time for pesticide application. 
In addition, the derived algorithms of the decision support tool can be combined with climate change projections in order to assess the future potential threat posed by the seed-damaging pest species. Regional climate change effects for Luxembourg have been studied intensively in recent years: significant changes towards wetter winters and drier summers, as well as a prolongation of the vegetation period caused mainly by higher spring temperatures, have been reported. We used the COSMO-CLM model to perform a time-slice experiment for Luxembourg with a spatial resolution of 1.3 km. Three ten-year time slices were calculated: the reference time span (1991-2000), the near future (2041-2050), and the far future (2091-2100). Our results projected a significant shift of pest migration to an earlier onset in the year, together with a prolongation of the possible migration period. Because D. brassicae depends on prior oviposition by C. obstrictus to infest its host plant successfully, the future dependencies of both pest species will be assessed. On this basis, the future risk potential of both seed-damaging pests is calculated and their status as pest species characterized. Keywords: CORDEX projections, decision support tool, Brassica napus, pests
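Models of this kind are often driven by accumulated temperature. A minimal thermal-sum (degree-day) onset sketch is shown below; the base temperature, threshold, and temperature series are invented, not the study's calibrated values.

```python
# Hedged sketch of a degree-day onset model, a common approach for predicting
# spring pest migration. t_base and threshold are assumed values, not the
# parameters fitted in the study.
def onset_day(daily_mean_temps, t_base=5.0, threshold=100.0):
    """Return the first day (1-indexed) on which degree-days accumulated
    above t_base reach the migration threshold, or None if never reached."""
    dd = 0.0
    for day, t in enumerate(daily_mean_temps, start=1):
        dd += max(0.0, t - t_base)   # only warmth above the base counts
        if dd >= threshold:
            return day
    return None

temps = [4, 6, 8, 10, 12, 14, 15, 16, 15, 14] * 5  # toy spring series (°C)
print(onset_day(temps))
```

Feeding such a model with projected instead of observed temperatures is what allows the earlier migration onset under the future time slices to be quantified.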
Procedia PDF Downloads 382
137 Electrochemical Activity of NiCo-GDC Cermet Anode for Solid Oxide Fuel Cells Operated in Methane
Authors: Kamolvara Sirisuksakulchai, Soamwadee Chaianansutcharit, Kazunori Sato
Abstract:
Solid Oxide Fuel Cells (SOFCs) are considered among the most efficient large-scale power generators for household and industrial applications. The efficiency of a cell depends mainly on the electrochemical reactions in the anode. Anode materials have been studied intensely in order to achieve higher kinetic rates of the redox reactions and lower internal resistance. Recent studies have introduced efficient cermet (ceramic-metallic) materials, which combine fuel oxidation with oxide conduction. This expands the reactive site, known as the triple-phase boundary (TPB), and thus increases overall performance. In this study, the bimetallic catalyst Ni₀.₇₅Co₀.₂₅Oₓ was combined with Gd₀.₁Ce₀.₉O₁.₉₅ (GDC) to be used as a cermet anode (NiCo-GDC) for an anode-supported SOFC. Ni₀.₇₅Co₀.₂₅Oₓ was synthesized by ball-milling NiO and Co₃O₄ powders in ethanol, followed by calcination at 1000 °C. Gd₀.₁Ce₀.₉O₁.₉₅ was prepared by a urea co-precipitation method: precursors of Gd(NO₃)₃·6H₂O and Ce(NO₃)₃·6H₂O were dissolved in distilled water with the addition of urea and subsequently heated; the heated mixture was filtered, rinsed thoroughly, dried, and calcined at 800 °C and 1500 °C, respectively. The two powders were combined, followed by pelletization and sintering at 1100 °C, to form the anode support layer, onto which the electrolyte and cathode layers were fabricated. The electrochemical performance was measured from 800 °C down to 600 °C in H₂ and from 750 °C down to 600 °C in CH₄. The maximum power density at 750 °C in H₂ was 13% higher than in CH₄; the difference in performance was due to higher polarization resistances, as confirmed by the impedance spectra. According to the standard enthalpies, the dissociation energy of the C-H bonds in CH₄ is slightly higher than that of the H-H bond in H₂, so the dissociation of CH₄ could be the cause of the additional resistance within the anode material. 
The results at lower temperatures showed a descending trend of power density consistent with the increased polarization resistance, due to lower conductivity as the temperature decreases. Long-term stability was measured at 750 °C in CH₄, monitored at 12-hour intervals. The maximum power density tended to increase gradually with time while the resistances were maintained, suggesting enhanced stability from charge-transfer activity in the doped ceria due to the Ce⁴⁺ ↔ Ce³⁺ transition at low oxygen partial pressure and high temperature. However, the power density started to drop after 60 h, and the cell potential also dropped from 0.3249 V to 0.2850 V. These phenomena were confirmed by shifted impedance spectra indicating a higher ohmic resistance. Observation by FESEM and EDX mapping suggests degradation due to mass transport of ions in the electrolyte, while the anode microstructure was still maintained. In summary, electrochemical tests and a 60-h stability test were achieved with the NiCo-GDC cermet anode. No coke deposition was detected after operation in CH₄, confirming the superior properties of the bimetallic cermet anode over typical Ni-GDC. Keywords: bimetallic catalyst, ceria-based SOFCs, methane oxidation, solid oxide fuel cell
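The maximum power density reported throughout is obtained from the measured polarization (current-voltage) curve as the peak of the product j·V. A short sketch with invented curve points (not the cell's measured data) illustrates the calculation:

```python
# Illustrative maximum-power-density calculation from a polarization curve.
# The (current density mA/cm^2, cell voltage V) pairs are invented, not the
# NiCo-GDC measurements.
curve = [(0.0, 1.05), (50, 0.95), (100, 0.85), (150, 0.74), (200, 0.60), (250, 0.42)]

# Power density (mW/cm^2) at each point = current density * voltage
powers = [j * v for j, v in curve]

# Operating point that maximizes the power density
j_peak, v_peak = max(curve, key=lambda jv: jv[0] * jv[1])
print(max(powers))  # peak power density for this toy curve
```

Repeating this at each temperature (and over the 12-hour stability intervals) yields the power-density trends discussed in the text.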
Procedia PDF Downloads 154
136 Charcoal Traditional Production in Portugal: Contribution to the Quantification of Air Pollutant Emissions
Authors: Cátia Gonçalves, Teresa Nunes, Inês Pina, Ana Vicente, C. Alves, Felix Charvet, Daniel Neves, A. Matos
Abstract:
The production of charcoal relies on rudimentary technologies using traditional brick kilns. Charcoal is produced under pyrolysis conditions: the chemical structure of the biomass is broken down at high temperature in the absence of air. The yields of the pyrolysis products (charcoal, pyroligneous extract, and flue gas) depend on various parameters, including temperature, time, pressure, kiln design, and wood characteristics such as moisture content. The activity is recognized for its inefficiency and high pollution levels, but it is poorly characterized. It is nonetheless widespread and a vital economic activity in certain regions of Portugal, playing a relevant role in the management of woody residues. The location of the units determines the biomass used for charcoal production. The Portalegre district, in the Alto Alentejo region (Portugal), is a good example: essentially rural, with a predominantly farming, agricultural, and forestry profile, and with a significant charcoal production activity. In this district, a recent inventory identified almost 50 charcoal production units, equivalent to more than 450 kilns, of which 80% appear to be in operation. A field campaign was designed with the objective of determining the composition of the emissions released during a charcoal production cycle. A total of 30 particulate matter samples and 20 gas samples in Tedlar bags were collected. Particulate and gas sampling was performed in parallel, twice in the morning and twice in the afternoon, alternating the inlet heads (PM₁₀ and PM₂.₅) on the particulate sampler. The gas and particulate samples were collected in the plume, as close as possible to the chimney emission point. The biomass (dry basis) used in the carbonization process was a mixture of cork oak (77 wt.%), holm oak (7 wt.%), stumps (11 wt.%), and charred wood (5 wt.%) from previous carbonization processes. 
A cylindrical batch kiln (80 m³), 4.5 m in diameter and 5 m in height, was used in this study. The composition of the gases was determined by gas chromatography, while the particulate samples (PM₁₀, PM₂.₅) were subjected, after prior gravimetric determination, to different analytical techniques (thermo-optical transmission, ion chromatography, HPAE-PAD, and GC-MS after solvent extraction) to study their organic and inorganic constituents. The charcoal production cycle presents widely varying operating conditions, which are reflected in the composition of the gases and particles produced and emitted throughout the process. The concentrations of PM₁₀ and PM₂.₅ in the plume ranged between 0.003 and 0.293 g m⁻³ and between 0.004 and 0.292 g m⁻³, respectively. Total carbon, inorganic ions, and sugars account, on average, for 65% and 56%, 2.8% and 2.3%, and 1.27% and 1.21% of PM₁₀ and PM₂.₅, respectively. The organic fraction studied so far includes more than 30 aliphatic compounds and 20 PAHs. The emission factors of particulate matter for charcoal production in the traditional kiln were 33 g/kg (wooddb) for PM₁₀ and 27 g/kg (wooddb) for PM₂.₅. With the data obtained in this study, it is possible to fill the lack of information about the environmental impact of traditional charcoal production in Portugal. Acknowledgment: The authors thank FCT - Portuguese Science Foundation, I.P. and the Ministry of Science, Technology and Higher Education of Portugal for financial support within the scope of the projects CHARCLEAN (PCIF/GVB/0179/2017) and CESAM (UIDP/50017/2020 + UIDB/50017/2020). Keywords: brick kilns, charcoal, emission factors, PAHs, total carbon
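An emission factor of the kind reported above relates the total pollutant mass released over a cycle to the dry wood carbonized. The sketch below shows one simple way to estimate it from a mean plume concentration; all input numbers are invented for illustration (chosen only to land near the reported 33 g/kg), and the study's actual mass-balance procedure may differ.

```python
# Hedged sketch of an emission-factor estimate:
# EF (g/kg) = mean plume concentration * total flue-gas volume / dry wood mass.
# All values below are assumed, not the campaign's measurements.
pm_conc_g_m3 = 0.15        # mean PM10 concentration in the plume (g m^-3)
flue_volume_m3 = 2.2e6     # total flue-gas volume over the cycle (m^3)
wood_mass_kg_db = 10000.0  # dry-basis wood charge of the kiln (kg)

ef_g_per_kg = pm_conc_g_m3 * flue_volume_m3 / wood_mass_kg_db
print(ef_g_per_kg)  # g of PM10 emitted per kg of dry wood carbonized
```

Expressing emissions per kilogram of feedstock is what makes results from kilns of different sizes, and from different fuels, directly comparable.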
Procedia PDF Downloads 142
135 Expanded Polyurethane Foams and Waterborne-Polyurethanes from Vegetable Oils
Authors: A. Cifarelli, L. Boggioni, F. Bertini, L. Magon, M. Pitalieri, S. Losio
Abstract:
Nowadays, growing environmental awareness and the dwindling of fossil resources push the polyurethane (PU) industry towards renewable polymers with a low carbon footprint to replace feedstocks from petroleum sources. The main challenge in this field is to replace high-performance fossil-fuel products with novel synthetic polymers derived from 'green monomers'. Bio-polyols from plant oils have attracted significant industrial interest and major attention in scientific research due to their availability and biodegradability. Triglycerides rich in unsaturated fatty acids, such as soybean oil (SBO) and linseed oil (ELO), are particularly interesting because their structures and functionalities can be tuned by chemical modification to obtain polymeric materials with the desired final properties. Unfortunately, their use is still limited by processing and performance problems, because a high functionality and OH number of the polyols result in a high cross-linking density of the resulting PUs. The main aim of this study is to evaluate soy- and linseed-based polyols as precursors of prepolymers for the production of polyurethane foams (PUFs) or waterborne polyurethanes (WPUs) used as coatings. An effective reaction route is employed for its simplicity and economic impact. Indeed, the bio-polyols were synthesized by a two-step method: epoxidation of the double bonds in the vegetable oils and solvent-free ring-opening of the oxirane with organic acids. No organic solvents were used. Acids with different moieties (aliphatic or aromatic) and different hydrocarbon backbone lengths can be used to customize polyols with different functionalities. The ring-opening reaction requires fine tuning of the experimental conditions (time, temperature, molar ratio of carboxylic acid to epoxy groups) to control the acidity value of the end product as well as the amount of residual starting materials.
Besides, a Lewis base catalyst is used to favor the ring-opening of internal epoxy groups of the epoxidized oil and to minimize the formation of cross-linked structures, in order to achieve less viscous and more processable polyols with narrower polydispersity indices (molecular weight lower than 2000 g/mol). The functionality of the optimized polyols is tuned from 2 to 4 per molecule. The obtained polyols are characterized by GPC, NMR (¹H, ¹³C), and FT-IR spectroscopy to evaluate molecular masses, molecular mass distributions, microstructures, and linkage pathways. Several polyurethane foams have been prepared by the prepolymer method, blending conventional synthetic polyols with the new bio-polyols from soybean and linseed oils without using organic solvents. The compatibility of the bio-polyols with commercial polyols and diisocyanates is demonstrated. The influence of the bio-polyols on the foam morphology (cellular structure, interconnectivity), density, and mechanical and thermal properties has been studied. Moreover, bio-based WPUs have been synthesized by well-established processing technology. In this synthesis, a portion of the commercial polyols is substituted by the new bio-polyols, and the properties of the coatings on leather substrates have been evaluated to determine coating hardness, abrasion resistance, impact resistance, gloss, chemical resistance, flammability, durability, and adhesive strength.
Keywords: bio-polyols, polyurethane foams, solvent free synthesis, waterborne-polyurethanes
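As a hedged aside, the standard polyol relation OH number = 56100 · f / Mn (in mg KOH/g, where 56100 is the molar mass of KOH, 56.1 g/mol, expressed in mg/mol) connects the functionality and molecular-weight figures quoted above. The sketch below applies it with illustrative values, not data from the study.

```python
# Illustrative relation from standard polyurethane chemistry (values assumed):
# OH number [mg KOH/g] = 56100 * functionality / Mn,
# where 56100 = molar mass of KOH (56.1 g/mol) in mg/mol.

def oh_number(functionality: float, mn_g_per_mol: float) -> float:
    """Hydroxyl number (mg KOH per g polyol) from functionality and Mn."""
    return 56100.0 * functionality / mn_g_per_mol

# A trifunctional bio-polyol with Mn below 2000 g/mol:
print(oh_number(3, 1683))  # 100.0 mg KOH/g
```

Higher functionality at the same Mn raises the OH number, which is why tuning f between 2 and 4 while capping Mn controls the cross-linking density of the final PU.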
Procedia PDF Downloads 131
134 Thermally Conductive Polymer Nanocomposites Based on Graphene-Related Materials
Authors: Alberto Fina, Samuele Colonna, Maria del Mar Bernal, Orietta Monticelli, Mauro Tortello, Renato Gonnelli, Julio Gomez, Chiara Novara, Guido Saracco
Abstract:
Thermally conductive polymer nanocomposites are of high interest for several applications, including low-temperature heat recovery, heat exchangers in corrosive environments, and heat management in electronics and flexible electronics. In this paper, the preparation of thermally conductive nanocomposites exploiting graphene-related materials is addressed, along with their thermal characterization. In particular, the correlations of (1) the chemical and physical features of the nanoflakes and (2) the processing conditions with the heat conduction properties of the nanocomposites are studied. Polymers are heat insulators; therefore, the inclusion of conductive particles is the typical solution to obtain sufficient thermal conductivity. In addition to traditional microparticles such as graphite and ceramics, several nanoparticles, including carbon nanotubes and graphene, have been proposed for use in polymer nanocomposites. Indeed, thermal conductivities for both carbon nanotubes and graphene have been reported in the wide range of about 1500 to 6000 W/mK, although this property may decrease dramatically depending on the size, the number of layers, the density of topological and re-hybridization defects, and the presence of impurities. Different synthetic techniques have been developed, including mechanical cleavage of graphite, epitaxial growth on SiC, chemical vapor deposition, and liquid-phase exfoliation. However, the industrial scale-up of graphene, defined as an individual, single-atom-thick sheet of hexagonally arranged sp²-bonded carbons, still remains very challenging. For large-scale bulk applications in polymer nanocomposites, graphene-related materials such as multilayer graphenes (MLG), reduced graphene oxide (rGO), or graphite nanoplatelets (GNP) are currently the most interesting graphene-based materials.
In this paper, different types of graphene-related materials were characterized for their chemical/physical properties as well as for the thermal properties of individual flakes. Two selected rGOs were annealed at 1700°C in vacuum for 1 h to reduce the defectiveness of the carbon structure. The thermal conductivity increase of individual flakes with annealing was assessed via scanning thermal microscopy. Graphene nanopapers were prepared from both conventional and annealed rGO flakes. Characterization of the nanopapers evidenced a five-fold increase in the in-plane thermal diffusivity for annealed nanoflakes compared to pristine ones, demonstrating the importance of reducing structural defectiveness to maximize heat dissipation performance. Both pristine and annealed rGO were used to prepare polymer nanocomposites by melt reactive extrusion. A two- to three-fold increase in the thermal conductivity of the nanocomposite was observed for high-temperature-treated rGO compared to untreated rGO, evidencing the importance of using low-defectivity nanoflakes. Furthermore, the study of different processing parameters (time, temperature, shear rate) during the preparation of poly(butylene terephthalate) nanocomposites evidenced a clear correlation with the dispersion and fragmentation of the GNP nanoflakes, which in turn affected the thermal conductivity performance. A thermal conductivity of about 1.7 W/mK, i.e., one order of magnitude higher than that of the pristine polymer, was obtained with 10 wt.% of annealed GNPs, which is in line with state-of-the-art nanocomposites prepared by more complex and less upscalable in situ polymerization processes.
Keywords: graphene, graphene-related materials, scanning thermal microscopy, thermally conductive polymer nanocomposites
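The abstract reports measured conductivities; as a hedged illustration of why flake quality and dispersion matter so much, the sketch below applies the classical Maxwell effective-medium estimate (a dilute spherical-filler approximation, not a model used by the authors) with assumed values for a PBT-like matrix and graphene-like flakes.

```python
# Hedged illustration (not the authors' model): classical Maxwell
# effective-medium estimate for a dilute dispersion of spherical fillers,
# with assumed values for a PBT-like matrix and graphene-like flakes.

def maxwell_keff(k_matrix: float, k_filler: float, phi: float) -> float:
    """Effective thermal conductivity (W/mK) at filler volume fraction phi."""
    num = k_filler + 2.0 * k_matrix + 2.0 * phi * (k_filler - k_matrix)
    den = k_filler + 2.0 * k_matrix - phi * (k_filler - k_matrix)
    return k_matrix * num / den

# 5 vol% of 1500 W/mK flakes in a ~0.25 W/mK polymer matrix:
print(maxwell_keff(0.25, 1500.0, 0.05))  # ~0.29 W/mK
```

The modest predicted gain shows that, for idealized isolated spherical inclusions, the intrinsic filler conductivity barely matters; the much larger experimental enhancements rely on high-aspect-ratio, low-defect flakes and their dispersion and network formation.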
Procedia PDF Downloads 268
133 Neologisms and Word-Formation Processes in Board Game Rulebook Corpus: Preliminary Results
Authors: Athanasios Karasimos, Vasiliki Makri
Abstract:
This research focuses on the design and development of the first text corpus based on Board Game Rulebooks (BGRC), with direct application to the morphological analysis of neologisms and tendencies in word-formation processes. Corpus linguistics is a dynamic field that examines language through the lens of vast collections of texts. These corpora consist of diverse written and spoken materials, ranging from literature and newspapers to transcripts of everyday conversations. By morphologically analyzing these extensive datasets, morphologists can gain valuable insights into how language functions and evolves, as such datasets reflect the byproducts of inflection, derivation, blending, clipping, compounding, and neology. This entails scrutinizing how words are created, modified, and combined to convey meaning in a corpus of challenging, creative, and straightforward texts that include rules, examples, tutorials, and tips. Board games teach players how to strategize, consider alternatives, and think flexibly, which are critical elements in language learning. Their rulebooks reflect not only their weight (complexity) but also the language properties of each genre and subgenre of these games. Board games are a captivating realm where strategy, competition, and creativity converge. Beyond the excitement of gameplay, board games also spark the art of word creation. Word games, like Scrabble, Codenames, Bananagrams, Wordcraft, Alice in the Wordland, and Once Upon a Time, challenge players to construct words from a pool of letters, thus encouraging linguistic ingenuity and vocabulary expansion. These games foster a love for language, motivating players to unearth obscure words and devise clever combinations.
On the other hand, designers and creators produce rulebooks in which they convey their joy of discovering the hidden potential of language, igniting the imagination and playing with the beauty of words, making these games a delightful fusion of linguistic exploration and leisurely amusement. In this research, more than 150 rulebooks in English from all types of modern board games, either language-independent or language-dependent, are used to create the BGRC. A representative sample of each genre (family, party, worker placement, deckbuilding, dice and chance games, strategy, eurogames, thematic, and role-playing, among others) was selected based on the score from BoardGameGeek, the size of the texts, and the level of complexity (weight) of the game. A morphological model with morphological networks, multi-word expressions, and word-creation mechanics based on the complexity of the textual structure, difficulty, and board game category will be presented. By enabling the identification of patterns, trends, and variations in word formation and other morphological processes, this research aspires to make use of this creative yet strict text genre so as to (a) give invaluable insight into the morphological creativity and innovation that (re)shape the lexicon of the English language and (b) test morphological theories. Overall, it is shown that corpus linguistics empowers us to explore the intricate tapestry of language, and morphology in particular, revealing its richness, flexibility, and adaptability in the ever-evolving landscape of human expression.
Keywords: board game rulebooks, corpus design, morphological innovations, neologisms, word-formation processes
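A minimal sketch of how neologism candidates could be flagged in a rulebook corpus such as the BGRC: tokenize the text and keep words absent from a reference lexicon. The tiny lexicon, the sample sentence, and the function name are all hypothetical; a real pipeline would use a full dictionary plus lemmatization before flagging.

```python
# Minimal sketch (hypothetical lexicon and sample sentence): flag neologism
# candidates as alphabetic tokens missing from a reference lexicon.
# A real pipeline would use a full dictionary and lemmatization.

import re
from collections import Counter

def neologism_candidates(text: str, lexicon: set) -> Counter:
    """Count lowercase tokens (hyphenated forms allowed) not in the lexicon."""
    tokens = re.findall(r"[a-z]+(?:-[a-z]+)*", text.lower())
    return Counter(t for t in tokens if t not in lexicon)

lexicon = {"each", "player", "draws", "a", "card", "then", "are",
           "gained", "per", "rating"}
rules = "Each player draws a card, then megacredits are gained per terraform rating."
print(neologism_candidates(rules, lexicon).most_common(2))
# [('megacredits', 1), ('terraform', 1)]
```

Aggregating such counts over all 150+ rulebooks, grouped by genre and weight, would surface exactly the kind of compounding and blending patterns the abstract sets out to study.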
Procedia PDF Downloads 99