Search results for: transmission coefficient – Quasiperiodic superlattices- singularly localized and extended states- structural parameters- Laser with modulated wavelength
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20099


449 Family Income and Parental Behavior: Maternal Personality as a Moderator

Authors: Robert H. Bradley, Robert F. Corwyn

Abstract:

There is abundant research showing that socio-economic status is implicated in parenting. However, additional factors such as family context, parent personality, parenting history and child behavior also help determine how parents enact the role of caregiver. Each of these factors not only helps determine how a parent will act in a given situation, but each can serve to moderate the influence of the other factors. Personality has long been studied as a factor that influences parental behavior, but it has almost never been considered as a moderator of family contextual factors. For this study, relations between three maternal personality characteristics (agreeableness, extraversion, neuroticism) and four aspects of parenting (harshness, sensitivity, stimulation, learning materials) were examined when children were 6 months, 36 months, and 54 months old and again at 5th grade. Relations between these three aspects of personality and the overall home environment were also examined. A key concern was whether maternal personality characteristics moderated relations between household income and the four aspects of parenting and between household income and the overall home environment. The data for this study were taken from the NICHD Study of Early Child Care and Youth Development (NICHD SECCYD). The total sample consisted of 1364 families living in ten different sites in the United States. However, the samples analyzed included only those with complete data on all four parenting outcomes (i.e., sensitivity, harshness, stimulation, and provision of learning materials), income, maternal education and all three measures of personality (i.e., agreeableness, neuroticism, extraversion) at each age examined. Results from hierarchical regression analysis showed that mothers high in agreeableness were more likely to demonstrate sensitivity and stimulation as well as provide more learning materials to their children but were less likely to manifest harshness. 
Maternal agreeableness also consistently moderated the effects of low income on parental behavior. Mothers high in extraversion were more likely to provide stimulation and learning materials, with extraversion serving as a moderator of low income on both. By contrast, mothers high in neuroticism were less likely to demonstrate positive aspects of parenting and more likely to manifest negative aspects (e.g., harshness). Neuroticism also served to moderate the influence of low income on parenting, especially for stimulation and learning materials. The most consistent effects of parent personality were on the overall home environment, with significant main and interaction effects observed in 11 of the 12 models tested. These findings suggest that it may behoove professionals who work with parents living in adverse circumstances to consider parental personality in order to better target prevention or intervention efforts aimed at supporting parents in acting in ways that benefit children.
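The hierarchical moderation analysis described above can be sketched numerically: fit a model with main effects only, then add the income-by-personality interaction term and check whether fit improves. The sketch below uses synthetic data with invented coefficients (none of the values come from the NICHD SECCYD data); it only illustrates the mechanics of testing moderation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1364  # sample size matching the study

# Synthetic, standardized predictors (hypothetical values)
income = rng.normal(size=n)   # household income
agree = rng.normal(size=n)    # maternal agreeableness
noise = rng.normal(scale=0.5, size=n)

# Invented "true" model in which agreeableness buffers the income effect
sensitivity = 0.4 * income + 0.3 * agree - 0.2 * income * agree + noise

def r2(X, y):
    """Ordinary least squares R-squared."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# Hierarchical steps: main effects first, then the interaction term
X_main = np.column_stack([np.ones(n), income, agree])
X_full = np.column_stack([X_main, income * agree])

print(f"R2, main effects only:   {r2(X_main, sensitivity):.3f}")
print(f"R2, with income x agree: {r2(X_full, sensitivity):.3f}")
# A meaningful R2 increase at the second step indicates moderation.
```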

Keywords: home environment, household income, learning materials, personality, sensitivity, stimulation

Procedia PDF Downloads 208
448 Investigations on the Application of Avalanche Simulations: A Survey Conducted among Avalanche Experts

Authors: Korbinian Schmidtner, Rudolf Sailer, Perry Bartelt, Wolfgang Fellin, Jan-Thomas Fischer, Matthias Granig

Abstract:

This study focuses on the evaluation of snow avalanche simulations, based on a survey carried out among avalanche experts. In recent decades, the application of avalanche simulation tools has gained recognition within the realm of hazard management. Traditionally, avalanche runout models were used to predict extreme avalanche runout and prepare avalanche maps. This has changed rather dramatically with the application of numerical models. For safety applications such as road safety, simulation tools are now being coupled with real-time meteorological measurements to predict frequent avalanche hazard. That places new demands on model accuracy and requires the simulation of physical processes that previously could be ignored. These simulation tools are based on a deterministic description of the avalanche movement, allowing certain quantities of the avalanche flow (e.g. pressure, velocities, flow heights, runout lengths) to be predicted. Because of the highly variable regimes of flowing snow, no uniform rheological law describing the motion of an avalanche is known. Therefore, analogies are drawn to the fluid-dynamical laws of other materials. To transfer these constitutive laws to snow flows, certain assumptions and adjustments have to be imposed. Beyond these limitations, there are high uncertainties regarding the initial and boundary conditions. Further challenges arise when implementing the underlying flow model equations in an algorithm executable by a computer. This implementation is constrained by the choice of adequate numerical methods and their computational feasibility. Hence, model development is compelled to introduce further simplifications and the related uncertainties. In light of these issues, many questions arise about avalanche simulations: their assets and drawbacks, their potential for improvement, and their application in practice.
To address these questions, a survey was conducted among experts in the field of avalanche science (e.g. researchers, practitioners, engineers) from various countries. In the questionnaire, special attention is paid to the experts' opinions regarding the influence of certain variables on the simulation result, their uncertainty, and the reliability of the results. Furthermore, it was tested to what degree a simulation result influences decision making in a hazard assessment. A discrepancy was found between the large uncertainty of the simulation input parameters and the relatively high reliability attributed to the results. This contradiction can be explained by taking into account how the experts employ the simulations. The credibility of the simulations is the result of a rather thorough simulation study, in which different assumptions are tested and the results of different flow models are compared, along with the use of supplemental data such as chronicles, field observations, and silent witnesses, which are regarded as essential for the hazard assessment and for validating simulation results. As the importance of avalanche simulations within hazard management grows along with their further development, studies focusing on modeling practice could contribute to a better understanding of how knowledge of the avalanche process can be gained by running simulations.

Keywords: expert interview, hazard management, modeling, simulation, snow avalanche

Procedia PDF Downloads 321
447 Sizing Residential Solar Power Systems Based on Site-Specific Energy Statistics

Authors: Maria Arechavaleta, Mark Halpin

Abstract:

In the United States, costs of solar energy systems have declined to the point that they are viable options for most consumers. However, there are no consistent procedures for specifying sufficient systems. The factors that must be considered are energy consumption, potential solar energy production, and cost. The traditional method of specifying solar energy systems is based on assumed daily levels of available solar energy and average amounts of daily energy consumption. The mismatches between energy production and consumption are usually mitigated using battery energy storage systems, and energy use is curtailed when necessary. The main consumer decision that drives the total system cost is how much unserved (or curtailed) energy is acceptable. Of course, additional solar conversion equipment can be installed to provide greater peak energy production, and extra energy storage capability can be added to mitigate longer-lasting periods of low solar energy production. Each option increases total cost and provides a benefit which is difficult to quantify accurately. An approach to quantify the cost-benefit of adding additional resources, either production or storage or both, based on the statistical concepts of loss-of-energy probability and expected unserved energy, is presented in this paper. Relatively simple calculations, based on site-specific energy availability and consumption data, can be used to show the value of each additional increment of production or storage. With this incremental benefit-cost information, consumers can select the best overall performance combination for their application at a cost they are comfortable paying. The approach is based on a statistical analysis of energy consumption and production characteristics over time. The characteristics are in the form of curves, with each point on a curve representing an energy consumption or production value over a period of time; a one-minute period is used for the work in this paper.
These curves are measured at the consumer location under the conditions that exist at the site, and the duration of the measurements is a minimum of one week. While greater accuracy could be obtained with longer recording periods, the examples in this paper are based on a single week for demonstration purposes. The weekly consumption and production curves are overlaid on each other and the mismatches are used to size the battery energy storage system. Loss-of-energy probability and expected unserved energy indices are calculated in addition to the total system cost. These indices allow the consumer to recognize and quantify the benefit (typically a reduction in energy consumption curtailment) available for a given increase in cost. Consumers can then make informed decisions that are accurate for their location and conditions and consistent with their available funds.
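A minimal sketch of the loss-of-energy probability and expected unserved energy calculation described above, assuming minute-level curves and a simple lossless battery model; the load and production numbers below are invented for illustration and are not from the paper.

```python
# Loss-of-energy probability (LOEP) and expected unserved energy (EUE)
# from paired minute-level consumption/production curves, with a simple
# lossless battery model. All numbers are illustrative, not measured data.

def reliability_indices(consumption, production, storage_kwh):
    """Walk the paired curves once, charging/discharging the battery."""
    soc = storage_kwh          # state of charge, start full (kWh)
    unserved = 0.0             # total energy not served (kWh)
    deficit_minutes = 0
    for load, gen in zip(consumption, production):
        net = gen - load       # surplus (+) or deficit (-) this minute
        if net >= 0:
            soc = min(storage_kwh, soc + net)   # charge, clipped at capacity
        else:
            draw = min(soc, -net)               # discharge what we can
            soc -= draw
            if -net - draw > 0:                 # load not fully served
                unserved += -net - draw
                deficit_minutes += 1
    return deficit_minutes / len(consumption), unserved

# One illustrative "day": flat 0.5 kWh/min load, solar production of
# 1.2 kWh/min between minutes 360 and 1080, zero otherwise.
load = [0.5] * 1440
gen = [1.2 if 360 <= m < 1080 else 0.0 for m in range(1440)]

for cap in (100.0, 200.0):
    loep, eue = reliability_indices(load, gen, cap)
    print(f"storage {cap:5.1f} kWh -> LOEP {loep:.3f}, EUE {eue:.1f} kWh")
```

Each storage increment reduces LOEP and EUE, so its cost can be weighed against the quantified benefit, as the abstract proposes.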

Keywords: battery energy storage systems, loss of load probability, residential renewable energy, solar energy systems

Procedia PDF Downloads 230
446 Hydro-Mechanical Characterization of PolyChlorinated Biphenyls Polluted Sediments in Interaction with Geomaterials for Landfilling

Authors: Hadi Chahal, Irini Djeran-Maigre

Abstract:

This paper focuses on the hydro-mechanical behavior of polychlorinated biphenyl (PCB) polluted sediments when stored in landfills, and on the interaction between PCBs and geosynthetic clay liners (GCL) with respect to the hydraulic performance of the liner and the overall performance and stability of landfills. A European decree, adopted into French regulation, forbids the reintroduction into rivers of contaminated dredged sediments containing more than 0.64 mg/kg Σ7 PCBs. At these concentrations, sediments are considered hazardous and a remediation process must be adopted to prevent the release of PCBs into the environment. Dredging and landfilling polluted sediments is considered an eco-environmental remediation solution. French regulations authorize the storage of PCB-contaminated materials with less than 50 mg/kg in municipal solid waste facilities. Contaminant migration via leachate may be possible. The interactions between PCB-contaminated sediments and the GCL barrier present at the bottom of a landfill for security confinement are not known. Moreover, the hydro-mechanical behavior of stored sediments may affect the performance and the stability of the landfill. In this article, a hydro-mechanical characterization of the polluted sediment is presented. This characterization makes it possible to predict the behavior of the sediment at the storage site. Chemical testing showed that the concentration of PCBs in sediment samples is between 1.7 and 2.0 mg/kg. Physical characterization showed that the sediment is an organic silty sand soil (65% silt, 27% sand, 8% organic matter) characterized by a high plasticity index (Ip = 37%). Permeability tests using a permeameter and a filter press showed that sediment permeability is on the order of 10^-9 m/s. Compressibility tests showed that the sediment is a very compressible soil, with Cc = 0.53 and Cα = 0.0086. In addition, the effects of PCBs on the swelling behavior of bentonite were studied and the hydraulic performance of the GCL in interaction with PCBs was examined.
Swelling tests showed that PCBs do not affect the swelling behavior of bentonite. Permeability tests were conducted on a 1.0 m pilot-scale experiment simulating a storage facility. PCB-contaminated sediments were placed directly over a passive barrier containing a GCL to study the influence of the direct contact of polluted sediment leachate with the GCL. An automatic water system was designed to simulate precipitation. Effluent quantity and quality were examined. The sediment settlements and the water level in the sediment were monitored. The results showed that desiccation affected the behavior of the sediment in the pilot test and that laboratory tests alone are not sufficient to predict the behavior of the sediment in a landfill facility. Furthermore, the concentration of PCBs in the sediment leachate was very low (< 0.013 µg/L), and the permeability of the GCL was affected by other components present in the sediment leachate. Desiccation and cracking were the main parameters that affected the hydro-mechanical behavior of the sediment in the pilot test. In order to reduce these effects, the polluted sediment should be stored at a water content below its shrinkage limit (w = 39%). We also propose to conduct other pilot tests with the maximum concentration of PCBs allowed in municipal solid waste facilities, 50 mg/kg.
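The compressibility indices reported above (Cc = 0.53, Cα = 0.0086) can be plugged into the classical one-dimensional settlement formulas to anticipate the behavior of the stored sediment. The sketch below applies them; the layer thickness, initial void ratio, stresses and times are assumed values for illustration only, not data from the study.

```python
import math

# Primary consolidation plus secondary (creep) compression of a stored
# sediment layer, using the reported indices Cc = 0.53 and Ca = 0.0086.
Cc, Ca = 0.53, 0.0086
H = 5.0         # layer thickness, m (assumed)
e0 = 1.5        # initial void ratio (assumed for a soft organic soil)
sigma0 = 50.0   # initial effective vertical stress, kPa (assumed)
sigmaf = 150.0  # final effective vertical stress after loading, kPa (assumed)

# Primary consolidation settlement, normally consolidated soil
s_primary = Cc / (1 + e0) * H * math.log10(sigmaf / sigma0)

# Secondary compression between end of primary (t_p) and time t
t_p, t = 1.0, 10.0  # years (assumed)
s_secondary = Ca / (1 + e0) * H * math.log10(t / t_p)

print(f"primary settlement:   {s_primary:.3f} m")
print(f"secondary settlement: {s_secondary:.3f} m")
```

Even this rough sketch shows why such a compressible sediment (Cc = 0.53) raises concerns about landfill stability: settlements of several decimetres are plausible for a thick layer.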

Keywords: geosynthetic clay liners, landfill, polychlorinated biphenyl, polluted dredged materials

Procedia PDF Downloads 121
445 Train Timetable Rescheduling Using Sensitivity Analysis: Application of Sobol, Based on Dynamic Multiphysics Simulation of Railway Systems

Authors: Soha Saad, Jean Bigeon, Florence Ossart, Etienne Sourdille

Abstract:

Developing better solutions for train rescheduling problems has been drawing the attention of researchers for decades. Most research in this field deals with minor incidents that affect a large number of trains due to cascading effects. It focuses on timetables, rolling stock and crew duties, but does not take infrastructure limits into account. The present work addresses electric infrastructure incidents that limit the power available for train traction, and hence the transportation capacity of the railway system. Rescheduling is needed in order to optimally share the available power among the different trains. We propose a rescheduling process based on dynamic multiphysics railway simulations that include the mechanical and electrical properties of all the system components and calculate physical quantities such as train speed profiles, voltage along the catenary lines, temperatures, etc. The optimization problem to solve has a large number of continuous and discrete variables, several output constraints due to physical limitations of the system, and a high computation cost. Our approach includes a sensitivity analysis phase in order to analyze the behavior of the system and support the decision-making process and/or a more precise optimization. This approach is a quantitative method based on simulation statistics of the dynamic railway system, considering a predefined range of variation of the input parameters. Three important settings are defined. Factor prioritization detects the input variables that contribute most to the variation of the outputs. Factor fixing then identifies the input variables that do not influence the outputs and can therefore be fixed. Lastly, factor mapping is used to study which ranges of input values lead to model realizations that correspond to feasible solutions according to defined criteria or objectives. Generalized Sobol indices are used for factor prioritization and factor fixing.
The approach is tested in the case of a simple railway system, with nominal traffic running on a single track line. The considered incident is the loss of a feeding power substation, which limits the available power and the train speed. Rescheduling is needed, and the variables to be adjusted are the train departure times, the train speed reduction at a given position, and the number of trains (with cancellation of some trains if needed). The results show that the spacing between train departure times is the most critical variable, contributing to more than 50% of the variation of the model outputs. In addition, we identify the reduced range of variation of this variable which guarantees that the output constraints are respected. Optimal solutions are extracted according to different potential objectives: minimizing the traveling time, the train delays, the traction energy, etc. A Pareto front is also built.
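First-order Sobol indices of the kind used above for factor prioritization can be estimated with a simple pick-and-freeze (Saltelli-type) Monte Carlo scheme. The sketch below applies one to an invented surrogate of a rescheduling output; the delay function and parameter ranges are illustrative assumptions, not the paper's multiphysics model.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 20000  # Monte Carlo sample size

# Invented surrogate of a rescheduling output (total delay), dominated by
# the spacing between train departures -- for illustration only.
def delay(spacing, speed_cut, n_trains):
    return 10.0 / spacing + 0.5 * speed_cut + 0.1 * n_trains * speed_cut

names = ["spacing", "speed_cut", "n_trains"]
lo, hi = [2.0, 0.0, 5.0], [10.0, 1.0, 15.0]   # assumed parameter ranges
A = rng.uniform(lo, hi, size=(N, 3))          # two independent sample blocks
B = rng.uniform(lo, hi, size=(N, 3))
yA, yB = delay(*A.T), delay(*B.T)
var_y = yA.var()

S = {}
for i, name in enumerate(names):
    ABi = B.copy()
    ABi[:, i] = A[:, i]        # freeze factor i from A, resample the rest
    yABi = delay(*ABi.T)
    # Saltelli-type pick-and-freeze estimator of the first-order index
    S[name] = float(np.mean(yA * (yABi - yB)) / var_y)
    print(f"S_{name}: {S[name]:.2f}")
```

In this toy setup the departure spacing dominates the output variance, echoing the paper's finding that it contributes more than 50% of the variation.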

Keywords: optimization, rescheduling, railway system, sensitivity analysis, train timetable

Procedia PDF Downloads 395
444 Disaggregating Communities and the Making of Factional States: Evidence from Joint Forest Management in Sundarban, India

Authors: Amrita Sen

Abstract:

In the face of a growing insurgent movement and the perceived failure of the state and the market in sustainable resource management, a range of decentralized forest management policies was formulated in the last two decades, which recognized the need for community representation within the statutory methods of forest management. This recognition rested on the virtues of ecological sustainability and traditional environmental knowledge, which the forest-dependent communities were considered principally to hold. The present study, in the light of empirical insights, reflects on the contemporary disjunctions between the preconceived communitarian ethic in environmentalism and the lived reality of forest-based life-worlds. Many of the popular as well as dominant ideologies which have historically shaped the conceptual and theoretical understanding of sociology need further perusal in the context of the emerging contours of empirical knowledge, which lend opportunities for substantive reworking and analysis. The image of the community appears to be one of those concepts: an identity which has long defined perspectives and processes associated with people living together harmoniously in small physical spaces. Through an ethnographic account of the implementation of Joint Forest Management (JFM) in a forest fringe village in Sundarban, the study explores the ways in which the idea of 'community' is transformed through the process of state-making, rendering necessary its departure from the standard, conventional definition of homogeneity and internal equity. The study calls attention to the anthropology of micro-politics, disaggregating an essentially constructivist anthropology of 'collective identities', which can make political mobilizations visible within the seemingly culturalist production of communities.
The two critical questions that the paper seeks to ask in this context are: how is the 'local' constituted within community-based conservation practices? And, within the efforts of collaborative forest management, how accurately does the depiction of 'indigenous environmental knowledge' conform to its ascribed role in sustainable conservation practices? Reflecting on the execution of JFM in Sundarban, the study critically explores the ways in which the state ceases to be 'trans-national' and interacts with rural life-worlds through its local factions. Simultaneously, the study attempts to articulate the scope of constructing a competing representation of community, shaped by increasing political negotiations and bureaucratic alignments, which strains against the usual preoccupations with tradition, primordiality and non-material culture, as well as the amorphous construction of indigeneity.

Keywords: community, environmentalism, JFM, state-making, identities, indigenous

Procedia PDF Downloads 192
443 Effect Of Selected Food And Nutrition Environments On Prevalence Of Cardio-Metabolic Risk Factors With Emphasis On Worksite Environment In Urban Delhi

Authors: Deepa Shokeen, Bani Tamber Aeri

Abstract:

Food choice is a complex process influenced by the interplay of multiple factors, including the physical, socio-cultural and economic factors comprising macro- or micro-level food environments. While a clear understanding of the relationship between what we eat and the environmental context in which these food choices are made is still needed, it has now been shown that food environments do play a significant role in the obesity epidemic and in increasing cardio-metabolic risk factors. Evidence from other countries indicates that the food environment may strongly influence the prevalence of obesity and cardio-metabolic risk factors among young adults. In the Indian context, although data do indicate associations between sedentary lifestyle, stress and faulty diets, very little evidence supports the role of the food environment in influencing cardio-metabolic health among employed adults. Thus, this research is required to establish how different environments affect different individuals, since individuals interact with the environment on a number of levels. Methodology: The objective of the present study is to assess the effect of selected food and nutrition environments, with emphasis on the worksite environment, and to analyse their impact on the food choices and dietary behaviour of the employees (25-45 years of age) of the organizations under study. In the proposed study, an attempt will be made to randomly select various worksite environments from Delhi and the NCR. The study will be conducted in two phases. In phase I, information will be obtained on the employees' socio-demographic profile and the various factors influencing their food choices, including the most commonly consumed foods and the most frequently visited eating outlets in and around the workplace. Data will also be gathered on anthropometry (height, weight, waist circumference), biochemical parameters (lipid profile and fasting glucose), blood pressure and dietary intake.
Based on the findings of phase I, a list of the most frequently visited eating outlets in and around the workplace will be prepared in phase II. These outlets will then be subjected to a nutrition environment assessment survey (NEMS). On the basis of the information gathered from phase I and phase II, the influence of selected food and nutrition environments on food choice, dietary behaviour and the prevalence of cardio-metabolic risk factors among employed adults will be assessed. Expected outcomes: The proposed study will try to ascertain the impact of selected food and nutrition environments on the food choices and dietary intake of working adults, as it is important to learn how these food environments influence the eating perceptions and health behavior of adults. In addition, anthropometry, blood pressure and biochemical assessment of the subjects will be done to assess the prevalence of cardio-metabolic risk factors. If the findings indicate that the work environment, where most of these young adults spend the productive hours of their day, influences their health, then perhaps steps may be needed to make these environments more conducive to health.

Keywords: food and nutrition environment, cardio-metabolic risk factors, India, worksite environment

Procedia PDF Downloads 276
442 Landslide Hazard Assessment Using Physically Based Mathematical Models in Agricultural Terraces at Douro Valley in North of Portugal

Authors: C. Bateira, J. Fernandes, A. Costa

Abstract:

The Douro Demarcated Region (DDR) is a Port wine production region. In the NE of Portugal, the strong incision of the Douro valley has produced very steep slopes, organized in agricultural terraces, which have experienced an intense and deep transformation in order to allow the mechanization of the work. The old terrace system, based on vertical stone wall support structures, was replaced by terraces with earth embankments, which have experienced considerable instability. This terrace instability has important economic and financial consequences for the agricultural enterprises. This paper presents and develops cartographic tools to assess the embankment instability and identify the areas prone to instability. The priority in this evaluation is the use of physically based mathematical models and the development of a validation process based on an inventory of past embankment instability. We used the shallow landslide stability model (SHALSTAB), based on physical parameters such as cohesion (c'), friction angle (φ), hydraulic conductivity, soil depth, soil specific weight (ρ), slope angle (α) and contributing areas computed by the Multiple Flow Direction (MFD) method. A terraced area can be analysed by these models only if we have very detailed information representative of the terrain morphology, on which the slope angle and the contributing areas depend. We can achieve that purpose using digital elevation models (DEM) of high resolution (pixels with a 40 cm side), resulting from a set of photographs taken from a flight at 100 m altitude with a pixel resolution of 12 cm. The slope angle results from this DEM. On the other hand, the MFD contributing area models the internal flow and is an important element in defining the spatial variation of soil saturation. That internal flow is based on the DEM. This is supported by the statement that the interflow, although not coincident with the superficial flow, shows important similarities to it.
Electrical resistivity monitoring values were related to the MFD contributing areas built from a DEM of 1 m resolution and revealed a consistent correlation. That analysis, performed on the area, showed a good correlation, with R² of 0.72 and 0.76 at 1.5 m and 2 m depth, respectively. Considering that, a DEM with 1 m resolution was taken as the basis to model the real internal flow. Thus, we assumed that the contributing area at 1 m resolution modelled by MFD is representative of the internal flow of the area. In order to solve this problem, we used a set of generalized DEMs to build the contributing areas used in SHALSTAB. Those DEMs, with several resolutions (1 m and 5 m), were built from a set of photographs with 50 cm resolution taken from a flight at 5 km altitude. Using this combination of maps, we modelled several final maps of terrace instability and performed a validation process with the contingency matrix. The best final instability map combines the slope map from a DEM of 40 cm resolution and an MFD map from a DEM of 1 m resolution, with a True Positive Rate (TPR) of 0.97, a False Positive Rate (FPR) of 0.47, an Accuracy (ACC) of 0.53, a Precision (PPV) of 0.0004 and a TPR/FPR ratio of 2.06.
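The validation indices quoted above follow directly from a binary contingency matrix of predicted versus observed instability cells. The sketch below shows the computation; the pixel counts are invented so as to yield rates of the same order as those reported (high TPR with very low precision is typical when unstable cells are a tiny fraction of the map).

```python
# Validation metrics from a binary contingency matrix, as used to compare
# predicted instability maps with a landslide inventory. The counts below
# are hypothetical; the paper reports only the resulting rates.

def contingency_metrics(tp, fp, fn, tn):
    tpr = tp / (tp + fn)              # true positive rate (sensitivity)
    fpr = fp / (fp + tn)              # false positive rate
    acc = (tp + tn) / (tp + fp + fn + tn)
    ppv = tp / (tp + fp)              # precision
    return {"TPR": tpr, "FPR": fpr, "ACC": acc, "PPV": ppv,
            "TPR/FPR": tpr / fpr}

# Hypothetical pixel counts: unstable cells are a tiny fraction of the
# map, which is why precision can be very low even when TPR is high.
m = contingency_metrics(tp=97, fp=470_000, fn=3, tn=530_000)
for k, v in m.items():
    print(f"{k}: {v:.4f}")
```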

Keywords: agricultural terraces, cartography, landslides, SHALSTAB, vineyards

Procedia PDF Downloads 173
441 A Qualitative Study of Inclusive Growth through Microfinance in India

Authors: Amit Kumar Bardhan, Barnali Nag, Chandra Sekhar Mishra

Abstract:

Microfinance is considered one of the key drivers of financial inclusion and pro-poor financial growth. Microfinance in India became popular through the Self Help Group (SHG) movement initiated by NABARD. In terms of outreach and loan portfolio, the SHG Bank Linkage Programme (SHG-BLP) has emerged as the largest microfinance initiative in the world. The success of financial inclusion lies in the successful implementation of SHG-BLP. SHGs are generally promoted by social welfare organisations such as NGOs, welfare societies, government agencies and co-operatives, and even banks are involved in SHG formation. Thus, the pro-poor implementation of the scheme largely depends on the credibility of the SHG Promoting Institutions (SHPIs). The rural poor lack education, skills and financial literacy and hence need continuous support and proper training right from planning to implementation. In this study, we have made an attempt to inspect the reasons behind the low penetration of SHG financing to the poorest of the poor from both demand and supply side perspectives. Banks, SHPIs and SHGs are the three key stakeholders in SHG-BLP programmes. All of them have a vital role in programme implementation. The objective of this paper is to find out the drivers and hurdles in the path of financial inclusion through SHG-BLP and the role of SHPIs in reaching out to the ultra poor. We try to address questions like 'what are the challenges faced by SHPIs in targeting the poor?' and 'what are the factors behind the low credit linkage of SHGs?' Our work is based on a qualitative study of SHG programmes in semi-urban towns in the states of West Bengal and Odisha in India. Data were collected through unstructured questionnaires and in-depth interviews with members of SHGs, SHPIs and designated banks. The study provides some valuable insights about the programme and a comprehensive view of the problems and challenges faced by SHGs, SHPIs and banks.
On the basis of our understanding from the survey, some findings and policy recommendations that seem relevant are: the increasing level of non-performing assets (NPA) of commercial banks and wilful default in expectation of loan waivers and subsidies are the prime reasons behind the low rate of credit linkage of SHGs. Regular changes in SHG schemes and the lack of incentives for post-linkage follow-up result in dysfunctional SHGs. Government schemes are mostly focused on the creation of SHGs and less on livelihood promotion. As a result, in spite of the increasing year-on-year trend in the number of SHGs promoted, there is no real impact on welfare growth. Government and other SHPIs should focus on resource-based SHG promotion rather than only increasing the number of SHGs.

Keywords: financial inclusion, inclusive growth, microfinance, Self-Help Group (SHG), Self-Help Group Promoting Institution (SHPI)

Procedia PDF Downloads 214
440 Plastic Pollution: Analysis of the Current Legal Framework and Perspectives on Future Governance

Authors: Giorgia Carratta

Abstract:

Since the beginning of mass production, plastic items have been crucial in our daily lives. Thanks to their physical and chemical properties, plastic materials have proven almost irreplaceable in a number of economic sectors such as packaging, automotive, building and construction, textile, and many others. At the same time, the disruptive consequences of plastic pollution have been progressively brought to light in all environmental compartments. The overaccumulation of plastics in the environment, and its adverse effects on habitats, wildlife, and (most likely) human health, represents a call for action to decision-makers around the globe. From a regulatory perspective, plastic production is an unprecedented challenge at all levels of governance. At the international level, the design of new legal instruments, the amendment of existing ones, and the coordination among the several relevant policy areas requires considerable effort. Under the pressure of both increasing scientific evidence and a concerned public opinion, countries seem to slowly move towards the discussion of a new international ‘plastic treaty.’ However, whether, how, and with which scopes such instrument would be adopted is still to be seen. Additionally, governments are establishing regional-basedstrategies, prone to consider the specificities of the plastic issue in a certain geographical area. Thanks to the new Circular Economy Action Plan, approved in March 2020 by the European Commission, EU countries are slowly but steadily shifting to a carbon neutral, circular economy in the attempt to reduce the pressure on natural resources and, parallelly, facilitate sustainable economic growth. In this context, the EU Plastic Strategy is promising to change the way plastic is designed, produced, used, and treated after consumption. In fact, only in the EU27 Member States, almost 26 million tons of plastic waste are generated herein every year, whose 24,9% is still destined to landfill. 
Positive effects of the Strategy also include more effective protection of the environment, especially the marine environment, a reduction of greenhouse gas emissions, a reduced need for imported fossil energy sources, and more sustainable production and consumption patterns. As promising as it may sound, the road ahead is still long. The need to implement these measures in domestic legislation makes their outcome difficult to predict at the moment. An analysis of the current international and European Union legal framework on plastic pollution, binding and voluntary instruments included, could serve to detect ‘blind spots’ in the current governance as well as to facilitate the development of policy interventions along the plastic value chain, where they appear most needed.

Keywords: environmental law, European union, governance, plastic pollution, sustainability

Procedia PDF Downloads 105
439 The Pore–Scale Darcy–Brinkman–Stokes Model for the Description of Advection–Diffusion–Precipitation Using Level Set Method

Authors: Jiahui You, Kyung Jae Lee

Abstract:

Hydraulic fracturing fluid (HFF) is widely used in shale reservoir production. HFF contains diverse chemical additives, which result in the dissolution and precipitation of minerals through multiple chemical reactions. In this study, a new pore-scale Darcy–Brinkman–Stokes (DBS) model coupled with the Level Set Method (LSM) is developed to address the microscopic phenomena occurring during the iron–HFF interaction, by numerically describing mass transport, chemical reactions, and pore structure evolution. The new model is developed on OpenFOAM, an open-source platform for computational fluid dynamics. Here, the DBS momentum equation is used to solve for velocity by accounting for the fluid-solid mass transfer; an advection-diffusion equation is used to compute the distribution of injected HFF and iron. The reaction-induced pore evolution is captured by applying the LSM, where the solid-liquid interface is updated by solving the level set distance function and reinitialized to a signed distance function. A smoothed Heaviside function then gives a smooth solid-liquid interface over a narrow band with a fixed thickness. The stated equations are discretized by the finite volume method, while the re-initialization equation is discretized by the central difference method. A Gauss linear upwind scheme is used to solve the level set distance function, and the Pressure-Implicit with Splitting of Operators (PISO) method is used to solve the momentum equation. The numerical result is compared with the 1-D analytical solution of the fluid-solid interface for reaction-diffusion problems. Sensitivity analysis is conducted with various Damkohler numbers (DaII) and Peclet numbers (Pe). We categorize the Fe (III) precipitation into three patterns as a function of DaII and Pe: symmetrical smoothed growth, unsymmetrical growth, and dendritic growth. 
Pe and DaII significantly affect the location of precipitation, which is critical in determining the injection parameters of hydraulic fracturing. When DaII<1, the precipitation occurs uniformly on the solid surface in both upstream and downstream directions. When DaII>1, the precipitation occurs mainly on the solid surface in the upstream direction. When Pe>1, Fe (II) is transported deep into the pores and precipitates inside them. When Pe<1, the precipitation of Fe (III) occurs mainly on the solid surface in the upstream direction, and it is easily precipitated inside small pore structures. The porosity-permeability relationship is subsequently presented. This pore-scale model allows high confidence in the description of Fe (II) dissolution, transport, and Fe (III) precipitation. The model shows fast convergence and requires a low computational load. The results can provide reliable guidance for injecting HFF in shale reservoirs so as to avoid clogging and wellbore pollution. Understanding Fe (III) precipitation and Fe (II) release and transport behaviors gives rise to a highly efficient hydraulic fracturing project.
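The interface-capturing step described above can be illustrated with a minimal one-dimensional level set sketch. This is a reconstruction for illustration only, not the authors' OpenFOAM code: the grid, the uniform front speed, and the helper names are all assumptions.

```python
import numpy as np

def smoothed_heaviside(phi, eps):
    # Smoothed Heaviside over a narrow band of half-width eps, giving a
    # diffuse solid-liquid interface as described in the abstract.
    return np.where(phi < -eps, 0.0,
           np.where(phi > eps, 1.0,
                    0.5 * (1 + phi / eps + np.sin(np.pi * phi / eps) / np.pi)))

# 1-D grid with a planar interface at x = 0.5; phi is a signed distance
# function (negative in the solid, positive in the fluid).
x = np.linspace(0.0, 1.0, 101)
phi = x - 0.5

# Advance the level set with a uniform normal speed v (a precipitation
# front), first-order in time: d(phi)/dt + v * |grad phi| = 0.
v, dt = 0.1, 0.1
phi = phi - dt * v * np.abs(np.gradient(phi, x))

# Re-initialize to a signed distance function: locate the zero crossing
# (phi is monotonic here) and rebuild distances from it.
x0 = np.interp(0.0, phi, x)
phi = x - x0
```

With these values the interface moves from x = 0.5 to x = 0.51 in one step; in 2-D or 3-D the re-initialization would instead be solved iteratively, as in the abstract's central-difference treatment.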

Keywords: reactive transport, shale, kerogen, precipitation

Procedia PDF Downloads 162
438 Interdisciplinary Evaluations of Children with Autism Spectrum Disorder in a Telehealth Arena

Authors: Janice Keener, Christine Houlihan

Abstract:

Over the last several years, there has been an increase in children identified as having Autism Spectrum Disorder (ASD). Specialists across several disciplines, including mental health and medical professionals, have been tasked with ensuring accurate and timely evaluations for children with suspected ASD. Due to the nature of the ASD symptom presentation, an interdisciplinary assessment and treatment approach best addresses the needs of the whole child. During the unprecedented COVID-19 pandemic, clinicians were faced with how to continue interdisciplinary assessments in a telehealth arena. Instruments that were previously used to assess ASD in person were no longer appropriate due to the safety restrictions. For example, the Autism Diagnostic Observation Schedule requires examiners and children to be in very close proximity to each other; if masks or face shields are worn, the evaluation is rendered invalid. Similar issues arose with the various cognitive measures that are used to assess children, such as the Wechsler intelligence scales and the Differential Ability Scales. Thus, the need arose to identify measures that can be administered safely and accurately under these guidelines. The incidence of ASD continues to rise over time. Currently, the Centers for Disease Control and Prevention estimates that 1 in 59 children meet the criteria for a diagnosis of ASD. The reasons for this increase are likely manifold, including changes in diagnostic criteria, public awareness of the condition, and other environmental and genetic factors. The rise in the incidence of ASD has led to a greater need for diagnostic and treatment services across the United States. The uncertainty of the diagnostic process can lead to an increased level of stress for families of children with suspected ASD. Along with this increase, there is a need for diagnostic clarity to avoid both under- and over-identification of this condition. 
Interdisciplinary assessment is ideal for children with suspected ASD, as it allows for an assessment of the whole child over the course of time and across multiple settings. Clinicians such as psychologists and developmental pediatricians play important roles in the initial evaluation of autism spectrum disorder. An ASD assessment may consist of several types of measures; standardized checklists, structured interviews, and direct assessments such as the ADOS-2 are just a few examples. With the advent of telehealth, clinicians were asked to continue providing meaningful interdisciplinary assessments via an electronic platform and, in a sense, to go to the family home, evaluate the clinical symptom presentation remotely, and confidently make an accurate diagnosis. This poster presentation will review the benefits, limitations, and interpretation of these various instruments. The role of other medical professionals will also be addressed, including medical providers, speech pathology, and occupational therapy.

Keywords: autism spectrum disorder assessments, interdisciplinary evaluations, tele-assessment with autism spectrum disorder, diagnosis of autism spectrum disorder

Procedia PDF Downloads 207
437 Developing a Deep Understanding of the Immune Response in Hepatitis B Virus Infected Patients Using a Knowledge Driven Approach

Authors: Hanan Begali, Shahi Dost, Annett Ziegler, Markus Cornberg, Maria-Esther Vidal, Anke R. M. Kraft

Abstract:

Chronic hepatitis B virus (HBV) infection can be treated, for example, with nucleos(t)ide analogs (NA), which inhibit HBV replication. However, they have hardly any influence on the functional cure of HBV, which is defined by hepatitis B surface antigen (HBsAg) loss. NA need to be taken life-long, which is not feasible for all patients worldwide. Additionally, NA-treated patients are still at risk of developing cirrhosis, liver failure, or hepatocellular carcinoma (HCC). Although each patient has the same components of the immune system, immune responses vary between patients. Therefore, a deeper understanding of the immune response against HBV in different patients is necessary to understand the parameters leading to HBV cure and to use this knowledge to optimize HBV therapies. This requires seamless integration of an enormous amount of diverse and fine-grained data on viral markers, e.g., hepatitis B core-related antigen (HBcrAg) and hepatitis B surface antigen (HBsAg). The data integration system relies on the assumption that profiling human immune systems requires the analysis of various variables (e.g., demographic data, treatments, pre-existing conditions, immune cell response, or HLA-typing) rather than only one. However, the values of these variables are collected independently. They are presented in a myriad of formats, e.g., Excel files, textual descriptions, lab book notes, and images of flow cytometry dot plots. Additionally, patients can be identified differently across these analyses. This heterogeneity complicates the integration of variables, as data management techniques are needed to create a unified view in which individual formats and identifiers are transparent when profiling the human immune systems. 
The proposed study (HBsRE) aims at integrating heterogeneous data sets of 87 chronically HBV-infected patients, e.g., clinical data, immune cell response, and HLA-typing, with knowledge encoded in biomedical ontologies and open-source databases into a knowledge-driven framework. This new technique enables us to harmonize and standardize heterogeneous datasets through the defined modeling of the data integration system, which will be evaluated in a knowledge graph (KG). KGs are data structures that represent knowledge and data as factual statements using a graph data model. Finally, the analytic data model will be applied on top of the KG in order to develop a deeper understanding of the immune profiles of various patients and to evaluate the factors that play a role in a holistic profile of patients with HBsAg level loss. Additionally, our objective is to utilize this unified approach to stratify patients for new effective treatments. This study is developed in the context of the project “Transforming big data into knowledge: for deep immune profiling in vaccination, infectious diseases, and transplantation (ImProVIT)”, which comprises a multidisciplinary team of computer scientists, infection biologists, and immunologists.
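The harmonization-into-triples idea can be sketched in a few lines: records arriving in different formats and under different patient identifiers are mapped to unified IDs and stored as (subject, predicate, object) factual statements. All identifiers, property names, and values below are invented for illustration and are not taken from the HBsRE data.

```python
# Two heterogeneous sources referring to the same patient under
# different identifiers (hypothetical values).
clinical = {"P01": {"HBsAg_IU_per_ml": 250.0, "treatment": "NA"}}
hla = [("patient-01", "HLA-A*02:01")]  # same patient, other ID scheme

# Identifier harmonization: both local IDs map to one unified ID.
ID_MAP = {"P01": "patient:01", "patient-01": "patient:01"}

# Build the knowledge graph as a set of factual triples.
triples = set()
for pid, record in clinical.items():
    for prop, value in record.items():
        triples.add((ID_MAP[pid], "hbv:" + prop, value))
for pid, allele in hla:
    triples.add((ID_MAP[pid], "hbv:hasHLAType", allele))

def objects_of(triples, subject, predicate):
    """Simple KG lookup: all objects linked to a subject by a predicate."""
    return {o for s, p, o in triples if s == subject and p == predicate}
```

In practice a framework of this kind would use an RDF store and ontology terms rather than ad hoc strings, but the unified-view principle is the same.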

Keywords: chronic hepatitis B infection, immune response, knowledge graphs, ontology

Procedia PDF Downloads 104
436 Increase in the Shelf Life of Anchovy (Engraulis ringens) from Flaying then Bleeding in a Sodium Citrate Solution

Authors: Santos Maza, Enzo Aldoradin, Carlos Pariona, Eliud Arpi, Maria Rosales

Abstract:

The objective of this study was to investigate the effect of flaying and then bleeding anchovy (Engraulis ringens) immersed in a sodium citrate solution. Anchovy is a pelagic fish that readily deteriorates due to its high content of polyunsaturated fatty acids. As such, within the Peruvian food industry, the shelf life of frozen anchovy is set at 6 months; this short duration is a barrier to its use for direct human consumption. Thus, almost all anchovy captured by the fishing industry is eventually used in the production of fishmeal. We offer an alternative to this typical production process in order to increase shelf life. In the present study, 100 kg of anchovies were captured and immediately mixed with ice on board, maintaining high sensory quality (e.g., a blue color on the back) and arriving for processing less than 2 h after capture. Anchovies with a fat content of 3% were immediately flayed (i.e., reducing subcutaneous fat), beheaded, gutted and bled (i.e., removing hemoglobin) by immersion in water (control) or in a solution of 2.5% sodium citrate (treatment), then frozen at -30 °C for 8 h in 2 kg batches. Glazing and storage at -25 °C for 14 months completed the experimental protocol. The peroxide value (PV), acidity (A), fatty acid profile (FAP), thiobarbituric acid reactive substances (TBARS), heme iron (HI), pH and sensory attributes of the samples were evaluated monthly. The results of the PV, TBARS, A, pH and sensory analyses displayed significant differences (p<0.05) between treatment and control samples, with the sodium citrate-treated samples showing better preservation. Specifically, at the beginning of the study, flayed, beheaded, gutted and bled anchovies displayed a low fat content (1.5%) with moderate amounts of PV, A and TBARS, and were not rejected by sensory analysis. HI values and FAP displayed varying behavior; however, the HI results did not reveal a decreasing trend. 
This result indicates that iron levels were maintained as HI and did not convert into non-heme iron, which is known to be the primary catalyst of lipid oxidation in fish. According to the FAP results, polyunsaturated fatty acids (PUFA) were the most abundant, followed by saturated fatty acids (SFA) and then monounsaturated fatty acids (MUFA). According to sensory analysis, the shelf life of flayed, beheaded and gutted anchovy (control and treatment) was 14 months. This shelf life was reached at laboratory level because high-quality anchovies were used and immediately flayed, beheaded, gutted, bled and frozen. Therefore, it is possible to maintain the shelf life of anchovies for a long time. Overall, this method displayed a large increase in shelf life relative to that commonly seen for anchovies in this industry. However, these results should be extrapolated to industrial scales to propose better processing conditions and improve the quality of anchovy for direct human consumption.

Keywords: sodium citrate solution, heme iron, polyunsaturated fatty acids, shelf life of frozen anchovy

Procedia PDF Downloads 289
435 Reducing Falls in Memory Care through Implementation of the Stopping Elderly Accidents, Deaths, and Injuries Program

Authors: Cory B. Lord

Abstract:

Falls among the elderly population have become an area of concern in healthcare today. The negative impacts of falls include increased morbidity, mortality, and financial burdens for both patients and healthcare systems. Falls in the United States are reported at an annual rate of 36 million among those aged 65 and older. Each year, one out of four people in this age group will suffer a fall, with 20% of these falls causing injury. The setting for this Doctor of Nursing Practice (DNP) project was a memory care unit in an assisted living community, as these facilities house cognitively impaired older adults. These communities lack fall prevention programs; therefore, the need exists to add to the body of knowledge to positively impact this population. The objective of this project was to reduce fall rates through the implementation of the Centers for Disease Control and Prevention (CDC) STEADI (stopping elderly accidents, deaths, and injuries) program. The DNP project was a quality improvement pilot study with a pre- and post-test design. The program was implemented in the memory care setting over 12 weeks. The project included an educational session for staff and a fall risk assessment with appropriate resident referrals. The three aims of the DNP project were to reduce fall rates among the elderly aged 65 and older who reside in the memory care unit, to increase staff knowledge of STEADI fall prevention measures after an educational session, and to assess the willingness of memory care unit staff to adopt an evidence-based fall prevention program. The Donabedian model was used as the guiding conceptual framework for this quality improvement pilot study. Fall rate data for the 12 months before the intervention were evaluated and compared to post-intervention fall rates. The educational session comprised a pre- and post-test to assess staff knowledge of the fall prevention program and the willingness of staff to adopt it. 
The overarching goal was to reduce falls in the elderly population who live in memory care units. The results of the study showed that, on average, the fall rate during the implementation period of STEADI (μ = 6.79) was significantly lower than that of the prior 12 months (μ = 9.50) (p = 0.02, α = 0.05). Mean staff knowledge scores improved from pre-test (μ = 77.74%) to post-test (μ = 87.42%) (p = 0.00, α = 0.05) after the education session. Willingness to adopt a fall prevention program was scored at 100%. In summation, implementing the STEADI fall prevention program can assist in reducing fall rates for residents aged 65 and older who reside in a memory care setting.
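A pre/post comparison of this kind can be illustrated with a two-sample test on monthly fall counts. The abstract does not name the test used or report the raw data, so both the choice of Welch's t statistic and the numbers below are assumptions made purely for illustration.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    # Welch's two-sample t statistic (does not assume equal variances);
    # a positive value here means group a had the higher mean rate.
    se2 = variance(a) / len(a) + variance(b) / len(b)
    return (mean(a) - mean(b)) / math.sqrt(se2)

# Hypothetical monthly fall counts (invented, not the study's data):
# twelve pre-intervention months vs. the shorter implementation period.
pre = [10, 9, 11, 8, 10, 9, 10, 9, 11, 8, 10, 9]
post = [7, 6, 8, 6, 7]
t_stat = welch_t(pre, post)
```

The statistic would then be compared against the t distribution at α = 0.05 to obtain a p-value, mirroring the significance test reported above.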

Keywords: dementia, elderly, falls, STEADI

Procedia PDF Downloads 123
434 Supercritical Water Gasification of Organic Wastes for Hydrogen Production and Waste Valorization

Authors: Laura Alvarez-Alonso, Francisco Garcia-Carro, Jorge Loredo

Abstract:

Population growth and industrial development imply an increase in energy demands and in the problems caused by greenhouse gas emissions, which has inspired the search for clean sources of energy. Hydrogen (H₂) is expected to play a key role in the world’s energy future by replacing fossil fuels. The properties of H₂ make it a green fuel that does not generate pollutants and supplies sufficient energy for power generation, transportation, and other applications. Supercritical Water Gasification (SCWG) represents an attractive alternative for the recovery of energy from wastes. SCWG allows conversion of a wide range of raw materials into a fuel gas with a high content of hydrogen and light hydrocarbons through their treatment at conditions above the critical point of water (temperature of 374°C and pressure of 221 bar). Methane, used as a transport fuel, is another important gasification product. The range of gases and energy forms that can be produced, depending on the kind of material gasified and the type of technology used to process it, shows the flexibility of SCWG. This feature allows it to be integrated with several industrial processes, as well as power generation systems or waste-to-energy production systems. The final aim of this work is to study which conditions and equipment are the most efficient and advantageous for exploring the possibilities of obtaining streams rich in H₂ from oily wastes, which represent a major problem both for the environment and human health throughout the world. In this paper, the relative complexity of the technology needed for feasible gasification process cycles is discussed with particular reference to the different feedstocks that can be used as raw material, different reactors, and energy recovery systems. 
For this purpose, a review of the current status of SCWG technologies has been carried out by means of different classifications based on key features such as the feed treated or the type of reactor and other apparatus. This analysis makes it possible to improve the technology’s efficiency through the study of model calculations and their comparison with experimental data, the establishment of kinetics for chemical reactions, the analysis of how the main reaction parameters affect the yield and composition of products, and the determination of the most common problems and risks that can occur. The results of this work show that SCWG is a promising method for the production of both hydrogen and methane. The most significant design choices are the reactor type and process cycle, which can be conveniently adapted according to waste characteristics. Regarding the future of the technology, the design of SCWG plants is still to be optimized to include energy recovery systems in order to reduce the costs of equipment and operation derived from the high temperature and pressure conditions that are necessary to bring water to the SC state, as well as to find solutions to prevent corrosion and clogging of reactor components.

Keywords: hydrogen production, organic wastes, supercritical water gasification, system integration, waste-to-energy

Procedia PDF Downloads 143
433 Variation of Warp and Binder Yarn Tension across the 3D Weaving Process and its Impact on Tow Tensile Strength

Authors: Reuben Newell, Edward Archer, Alistair McIlhagger, Calvin Ralph

Abstract:

Modern industry has developed a need for innovative 3D composite materials due to their attractive material properties. Composite materials are composed of a fibre reinforcement encased in a polymer matrix. The fibre reinforcement consists of warp, weft and binder yarns or tows woven together into a preform. The mechanical performance of a composite material is largely controlled by the properties of the preform. As a result, the bulk of recent textile research has been focused on the design of high-strength preform architectures, while studies looking at optimisation of the weaving process itself have largely been neglected. It has been reported that yarns experience varying levels of damage during weaving, resulting in filament breakage and ultimately compromised composite mechanical performance. The weaving parameters involved in causing this yarn damage are not fully understood. Recent studies indicate that poor yarn tension control may be an influencing factor: as tension is increased, the yarn-to-yarn and yarn-to-weaving-equipment interactions are heightened, maximising damage. The correlation between yarn tension variation and weaving damage severity has never been adequately researched or quantified. A novel study is therefore needed which assesses the influence of tension variation on the mechanical properties of woven yarns. This study has sought to quantify the variation of yarn tension throughout weaving and to link the impact of tension to weaving damage. Multiple yarns were randomly selected, and their tension was measured across the creel and shedding stages of weaving using a hand-held tension meter. Sections of the same yarns were subsequently cut from the loom and tensile tested. A comparison was made between the tensile strength of pristine and tensioned yarns to determine the induced weaving damage. 
Yarns from bobbins at the rear of the creel were under the least tension (0.5-2.0 N) compared to yarns positioned at the front of the creel (1.5-3.5 N). This increase in tension has been linked to the sharp turn in the yarn path between the bobbins at the front of the creel and the creel I-board. Creel yarns under the lower tension suffered a 3% loss of tensile strength, compared to 7% for the more highly tensioned yarns. During shedding, the tension on the yarns was higher than in the creel. The upper shed yarns were exposed to a lower tension (3.0-4.5 N) than the lower shed yarns (4.0-5.5 N). Shed yarns under the lower tension suffered a 10% loss of tensile strength, compared to 14% for the more highly tensioned yarns. Interestingly, the most severely damaged yarn was exposed to both the largest creel and shedding tensions. This study confirms for the first time that yarns under a greater level of tension suffer an increased amount of weaving damage. Significant variation of yarn tension has been identified across the creel and shedding stages of weaving. This leads to variation in mechanical properties across the woven preform and ultimately the final composite part. The outcome of this study highlights the need for optimised yarn tension control during preform manufacture to minimise yarn-induced weaving damage.

Keywords: optimisation of preform manufacture, tensile testing of damaged tows, variation of yarn weaving tension, weaving damage

Procedia PDF Downloads 230
432 High Speed Motion Tracking with Magnetometer in Nonuniform Magnetic Field

Authors: Jeronimo Cox, Tomonari Furukawa

Abstract:

Magnetometers have become more popular in inertial measurement units (IMUs) for their ability to correct estimations using the earth's magnetic field. Accelerometer- and gyroscope-based packages fail with dead-reckoning errors accumulated over time. Localization in robotic applications with magnetometer-inclusive IMUs has become popular as a way to track the odometry of slower-speed robots. With high-speed motions, the accumulated error increases over smaller periods of time, making them difficult to track with an IMU. Tracking a high-speed motion is especially difficult with limited observability: visual obstruction of the motion leaves motion-tracking cameras unusable, and when motions are too dynamic for estimation techniques reliant on the observability of the gravity vector, the use of magnetometers is further justified. As available magnetometer calibration methods are limited by the assumption that background magnetic fields are uniform, estimation in nonuniform magnetic fields is problematic. Hard iron distortion is a distortion of the magnetic field by other objects that produce magnetic fields. This kind of distortion is often observed as the offset of the center of data points from the origin when a magnetometer is rotated. The magnitude of hard iron distortion depends on proximity to distortion sources. Soft iron distortion is more related to the scaling of the axes of magnetometer sensors, so hard iron distortion is the greater contributor to attitude estimation error with magnetometers. Indoor environments or spaces inside ferrite-based structures, such as building reinforcements or a vehicle, often cause distortions with proximity. As positions correlate to areas of distortion, methods of magnetometer localization include the production of spatial maps of the magnetic field and the collection of distortion signatures to better aid location tracking. 
The goal of this paper is to compare magnetometer methods that do not need pre-produced magnetic field maps, as mapping the magnetic field in some spaces can be costly and inefficient. Dynamic measurement fusion is used to track the motion of a multi-link system. Conventional calibration by data collection of rotation at a static point, real-time estimation of calibration parameters at each time step, and the use of two magnetometers to determine local hard iron distortion are compared to confirm the robustness and accuracy of each technique. With opposite-facing magnetometers, hard iron distortion can be accounted for regardless of position, rather than being assumed constant under positional change. The motion measured is a repeatable planar motion of a two-link system connected by revolute joints. The links are translated on a moving base to impulse rotation of the links. The joints are equipped with absolute encoders, and the motion is recorded with cameras to enable ground-truth comparison with each of the magnetometer methods. While the two-magnetometer method accounts for local hard iron distortion, it fails where the magnetic field direction in space is inconsistent.
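One plausible reading of the opposite-facing two-magnetometer idea is that co-located sensors with opposed axes see the external field with opposite sign while sharing the local hard iron offset. Under that assumption (ours, not spelled out in the abstract), the sum and difference of the two readings separate field from bias:

```python
import numpy as np

def split_field_and_bias(m1, m2_in_m1_frame):
    """Assume two co-located magnetometers with opposed axes:
        m1 = B + b   and   m2 (expressed in m1's frame) = -B + b,
    where B is the external field and b the shared local hard iron
    offset.  Averaging recovers the bias; half the difference the field."""
    bias = 0.5 * (m1 + m2_in_m1_frame)
    field = 0.5 * (m1 - m2_in_m1_frame)
    return field, bias

# Illustrative values in microtesla (invented): a true field and a
# position-dependent hard iron offset.
B = np.array([20.0, 0.0, -45.0])
b = np.array([5.0, -3.0, 2.0])
field, bias = split_field_and_bias(B + b, -B + b)
```

This also shows why the method fails when the field direction varies across the small separation between sensors: the two B terms then no longer cancel cleanly.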

Keywords: motion tracking, sensor fusion, magnetometer, state estimation

Procedia PDF Downloads 80
431 Covariate-Adjusted Response-Adaptive Designs for Semi-Parametric Survival Responses

Authors: Ayon Mukherjee

Abstract:

Covariate-adjusted response-adaptive (CARA) designs use the available responses to skew the treatment allocation in a clinical trial towards the treatment found at an interim stage to be best for a given patient's covariate profile. Extensive research has been done on various aspects of CARA designs with the patient responses assumed to follow a parametric model. However, the range of application for such designs is limited in real-life clinical trials, where the responses rarely fit a particular parametric form. On the other hand, robust estimates of the covariate-adjusted treatment effects are obtained under the parametric assumption. To balance these two requirements, designs are developed which are free from distributional assumptions about the survival responses, relying only on the assumption of proportional hazards for the two treatment arms. The proposed designs are developed by deriving two types of optimum allocation designs, and also by using a distribution function to link the past allocation, covariate and response histories to the present allocation. The optimal designs are based on biased coin procedures, with a bias towards the better treatment arm. These are the doubly-adaptive biased coin design (DBCD) and the efficient randomized adaptive design (ERADE). The treatment allocation proportions for these designs converge to the expected target values, which are functions of the Cox regression coefficients that are estimated sequentially. These expected target values are derived from constrained optimization problems and are updated as information accrues with the sequential arrival of patients. The design based on the link function is derived using the distribution function of a probit model whose parameters are adjusted based on the covariate profile of the incoming patient. 
To apply such designs, the treatment allocation probabilities are sequentially modified based on the treatment allocation history, the response history, previous patients’ covariates and the covariates of the incoming patient. Given this information, an expression is obtained for the conditional probability of allocating a patient to a treatment arm. Based on simulation studies, it is found that the ERADE is preferable to the DBCD when the main aim is to minimize the variance of the observed allocation proportion and to maximize the power of the Wald test for a treatment difference. However, the former procedure, being discrete, tends to be slower in converging towards the expected target allocation proportion. The link-function-based design achieves the highest skewness of patient allocation to the best treatment arm and is thus ethically the best design. Other comparative merits of the proposed designs have been highlighted, and their preferred areas of application are discussed. It is concluded that the proposed CARA designs can be considered suitable alternatives to traditional balanced randomization designs in survival trials in terms of the power of the Wald test, provided that response data are available during the recruitment phase of the trial to enable adaptations to the designs. Moreover, the proposed designs enable more patients to be treated with the better treatment during the trial, thus making the designs more ethically attractive to patients. An existing clinical trial has been redesigned using these methods.
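For the DBCD, allocation probabilities are commonly computed with the Hu–Zhang forcing function from the standard DBCD literature; its use here is our assumption, as the abstract does not give the exact form:

```python
def dbcd_probability(x, rho, gamma=2.0):
    """Hu-Zhang allocation function for the doubly-adaptive biased coin
    design: x is the current observed proportion allocated to treatment A,
    rho the current estimate of the target allocation (here, a function of
    the sequentially estimated Cox coefficients), and gamma controls how
    strongly the design forces x back towards rho."""
    num = rho * (rho / x) ** gamma
    den = num + (1 - rho) * ((1 - rho) / (1 - x)) ** gamma
    return num / den
```

When the observed proportion falls below the target, the next patient is allocated to that arm with probability above the target, and vice versa, which is the convergence behaviour described above.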

Keywords: censored response, Cox regression, efficiency, ethics, optimal allocation, power, variability

Procedia PDF Downloads 160
430 Existential and Possessive Constructions in Modern Standard Arabic: Two Strategies Reflecting the Ontological (Non-)Autonomy of Located or Possessed Entities

Authors: Fayssal Tayalati

Abstract:

Although languages use very divergent constructional strategies, all existential constructions appear to invariably involve an implicit or explicit locative constituent. This locative constituent either surfaces as a true locative phrase or is realized as a possessor noun phrase. However, while much research focuses on the supposed underlying syntactic relation of locative and possessive existential constructions, not much is known about possible semantic factors that could govern the choice between these constructions. The main question that we address in this talk concerns the choice between the two related constructions in Modern Standard Arabic (MSA). Although both are used to express the existence of something somewhere, we can distinguish three contexts: First, for some types of entities, only the existential locative (EL) construction is possible (e.g. (1a) ṯammata raǧulun fī l-ḥadīqati vs. (1b) *(kāna) ladā l-ḥadīqati raǧulun). Second, for other types of entities, only the possessive construction is possible (e.g. (2a) ladā ṭ-ṭawilati šaklun dāʾiriyyun vs. (2b) *ṯammata šaklun dāʾiriyyun ladā/fī ṭ-ṭawilati). Finally, for still other entities, both constructions can be found (e.g. (3a) ṯammata ḥubbun lā yūṣafu ladā ǧārī li-zawǧati-hi and (3b) ladā ǧārī ḥubbun lā yūṣafu li-zawǧati-hi). The data, covering a range of ontologically different entities (concrete objects, events, body parts, dimensions, essential qualities, feelings, etc.), show that the choice between the existential locative and the possessive constructions is closely linked to the conceptual autonomy of the existential theme with respect to its location or to the whole that it is a part of. The construction with ṯammata is the only possible one to express the existence of a fully autonomous (i.e. non-dependent) entity (concrete objects (e.g. 1) and abstract objects such as events, especially the ones that Grimshaw called ‘simple events’). 
The possessive construction with (kāna) ladā is the only one used to express the existence of fully non-autonomous (i.e. fully dependent on a whole) entities (body parts, dimensions (e.g. 2), essential qualities). The two constructions alternate when the existential theme is conceptually dependent but separable from the whole, either because it has an existence independent of the given whole (spare parts of an object), or because it acquires a relative autonomy in speech through a modifier (accidental qualities, feelings (e.g. 3a, 3b), psychological states, among some other kinds of themes). In this case, the modifier expresses an approximate boundary on a scale and provides relative autonomy to the entity. Finally, we will show that kinship terms (e.g. son), which at first sight may seem to constitute counterexamples to our hypothesis, are nonetheless accounted for by it. The ontological (non-)autonomy of located or possessed entities is also reflected by morpho-syntactic properties, among them the use and choice of determiners, pluralization and the behavior of entities in the context of associative anaphora.

Keywords: existence, possession, autonomous entities, non-autonomous entities

Procedia PDF Downloads 345
429 Cobb Angle Measurement from Coronal X-Rays Using Artificial Neural Networks

Authors: Andrew N. Saylor, James R. Peters

Abstract:

Scoliosis is a complex 3D deformity of the thoracic and lumbar spines, clinically diagnosed by measurement of a Cobb angle of 10 degrees or more on a coronal X-ray. The Cobb angle is the angle made by the lines drawn along the proximal and distal endplates of the respective proximal and distal vertebrae comprising the curve. Traditionally, Cobb angles are measured manually using either a marker, straight edge, and protractor or image measurement software. The task of measuring the Cobb angle can also be represented by a function taking the spine geometry rendered using X-ray imaging as input and returning the approximate angle. Although the form of such a function may be unknown, it can be approximated using artificial neural networks (ANNs). The performance of ANNs is affected by many factors, including the choice of activation function and network architecture; however, the effects of these parameters on the accuracy of scoliotic deformity measurements are poorly understood. Therefore, the objective of this study was to systematically investigate the effect of ANN architecture and activation function on Cobb angle measurement from the coronal X-rays of scoliotic subjects. The data set for this study consisted of 609 coronal chest X-rays of scoliotic subjects divided into 481 training images and 128 test images. These data, which included labeled Cobb angle measurements, were obtained from the SpineWeb online database. In order to normalize the input data, each image was resized using bi-linear interpolation to a size of 500 × 187 pixels, and the pixel intensities were scaled to be between 0 and 1. A fully connected (dense) ANN with a fixed cost function (mean squared error), batch size (10), and learning rate (0.01) was developed using Python Version 3.7.3 and TensorFlow 1.13.1. 
The activation functions (sigmoid, hyperbolic tangent [tanh], or rectified linear units [ReLU]), number of hidden layers (1, 3, 5, or 10), and number of neurons per layer (10, 100, or 1000) were varied systematically to generate a total of 36 network conditions. Stochastic gradient descent with early stopping was used to train each network. Three trials were run per condition, and the final mean squared errors and mean absolute errors were averaged to quantify the network response for each condition. The best-performing network used ReLU neurons, three hidden layers, and 100 neurons per layer. The average mean squared error of this network was 222.28 ± 30 degrees², and the average mean absolute error was 11.96 ± 0.64 degrees. It is also notable that, while most of the networks performed similarly, the networks using ReLU neurons, 10 hidden layers, and 1000 neurons per layer, and those using tanh neurons, one hidden layer, and 10 neurons per layer performed markedly worse, with average mean squared errors greater than 400 degrees² and average mean absolute errors greater than 16 degrees. From the results of this study, it can be seen that the choice of ANN architecture and activation function has a clear impact on Cobb angle inference from coronal X-rays of scoliotic subjects.
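The condition grid and the two scoring metrics described above can be sketched in plain Python; the angle values below are hypothetical and serve only to illustrate how a trained network's predictions would be scored:

```python
# The 3 activations x 4 depths x 3 widths grid described in the study.
activations = ("sigmoid", "tanh", "relu")
hidden_layers = (1, 3, 5, 10)
neurons_per_layer = (10, 100, 1000)
conditions = [(a, l, n) for a in activations
              for l in hidden_layers
              for n in neurons_per_layer]
assert len(conditions) == 36

# Error metrics used to score each trained network.
def mean_squared_error(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mean_absolute_error(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical labeled vs. predicted Cobb angles (degrees).
true_angles = [12.0, 35.5, 48.0, 22.5]
pred_angles = [14.0, 30.5, 50.0, 20.5]
print(mean_squared_error(true_angles, pred_angles))   # 9.25
print(mean_absolute_error(true_angles, pred_angles))  # 2.75
```

Averaging these two metrics over three trials per condition, as the study does, then reduces each of the 36 conditions to a single pair of scores.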

Keywords: scoliosis, artificial neural networks, Cobb angle, medical imaging

Procedia PDF Downloads 127
428 Enhanced Field Emission from Plasma Treated Graphene and 2D Layered Hybrids

Authors: R. Khare, R. V. Gelamo, M. A. More, D. J. Late, Chandra Sekhar Rout

Abstract:

Graphene emerges as a promising material for various applications ranging from complementary integrated circuits to optically transparent electrodes for displays and sensors. The excellent conductivity and atomically sharp edges of its unique two-dimensional structure make graphene a propitious field emitter. Graphene analogues of other 2D layered materials have emerged in materials science and nanotechnology due to the enriched physics and novel enhanced properties they present. There are several advantages of using 2D nanomaterials in field-emission-based devices, including a thickness of only a few atomic layers, a high aspect ratio (the ratio of lateral size to sheet thickness), excellent electrical properties, extraordinary mechanical strength and ease of synthesis. Furthermore, the presence of edges can enhance the tunneling probability for the electrons in layered nanomaterials, similar to that seen in nanotubes. Here we report the electron emission properties of multilayer graphene and the effect of plasma (CO2, O2, Ar and N2) treatment. The plasma-treated multilayer graphene shows enhanced field emission behavior with a low turn-on field of 0.18 V/μm and a high emission current density of 1.89 mA/cm² at an applied field of 0.35 V/μm. Further, we report field emission studies of layered WS2/RGO and SnS2/RGO composites. The turn-on field required to draw a field emission current density of 1 μA/cm² is found to be 3.5, 2.3 and 2 V/μm for WS2, RGO and the WS2/RGO composite, respectively. The enhanced field emission behavior observed for the WS2/RGO nanocomposite is attributed to a high field enhancement factor of 2978, which is associated with the surface protrusions of the single-to-few-layer-thick sheets of the nanocomposite. The highest current density of ~800 μA/cm² is drawn at an applied field of 4.1 V/μm from a few layers of the WS2/RGO nanocomposite.
Furthermore, first-principles density functional calculations suggest that the enhanced field emission may also be due to an overlap of the electronic structures of WS2 and RGO, where graphene-like states are dumped in the region of the WS2 fundamental gap. Similarly, the turn-on field required to draw an emission current density of 1 μA/cm² is significantly lower (almost half the value) for the SnS2/RGO nanocomposite (2.65 V/μm) compared to pristine SnS2 nanosheets (4.8 V/μm). The field enhancement factor β (~3200 for SnS2 and ~3700 for the SnS2/RGO composite) was calculated from Fowler-Nordheim (FN) plots and indicates emission from the nanometric geometry of the emitter. The plot of field emission current versus time shows overall good emission stability for the SnS2/RGO emitter. The DFT calculations reveal that the enhanced field emission properties of the SnS2/RGO composites arise from a substantial lowering of the work function of SnS2 when supported by graphene, in response to p-type doping of the graphene substrate. Graphene and its 2D analogue materials thus emerge as potential candidates for future field emission applications.
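The extraction of a field enhancement factor β from a Fowler-Nordheim plot can be sketched as follows; the work function and the fitted slope used here are illustrative assumptions, not values from the study:

```python
# In FN theory, ln(J/E^2) vs 1/E is linear with slope = -B * phi^1.5 / beta,
# so the enhancement factor is beta = -B * phi^1.5 / slope.
B = 6830.0     # FN constant, V * eV^(-3/2) * um^(-1)
phi = 5.0      # assumed emitter work function in eV (hypothetical)
slope = -23.0  # hypothetical slope fitted from the FN plot, in um/V

beta = -B * phi ** 1.5 / slope
print(round(beta))  # 3320, of the same order as the reported ~3200-3700
```

A larger (less negative) slope magnitude thus corresponds to a smaller β, which is why sharp nanometric protrusions, which steepen the local field, show up as high enhancement factors.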

Keywords: graphene, layered material, field emission, plasma, doping

Procedia PDF Downloads 359
427 Examining Historically Defined Periods in Autobiographical Memories for Transitional Events

Authors: Khadeeja Munawar, Shamsul Haque

Abstract:

We examined the plausibility of transition theory, which suggests that memories of transitional events, events that give rise to a significant and persistent change in the fabric of daily life, are organized around historically defined autobiographical periods (H-DAPs). 141 Pakistani older adults each retrieved 10 autobiographical memories (AMs) in response to 10 cue words. As the history of Pakistan is dominated by various political and nationwide transitional events, it was expected that the participants would recall memories with H-DAP references. The content analysis revealed that only 0.7% of memories contained H-DAP references and 0.4% of memories mentioned major transitional events such as war or natural disaster. There was a vivid reminiscence bump between 10 and 20 years of age in the lifespan distribution of AMs. Social-focused AMs accounted for 67.9% of the memories. Significantly more self-focused memories were reported by individuals who endorsed themselves as conservatives. Only a few H-DAPs were reported, although the history of Pakistan is dominated by numerous political, historical and nationwide transitional events. Memories within and outside of the bump period were mostly positive. The participants rarely used historically or politically significant events or periods to date the memories elicited. Intense and nationally (as well as regionally) significant historical/political events spanned decades of the participants' lives, but these events did not produce H-DAPs. The findings contradict previous studies on H-DAPs and transition theory. The dominance of social-focused AMs in the present study is in line with past studies comparing the memories of collectivist and individualist cultures (i.e., European Americans vs. Asian, African and Latin-American cultures).
Past empirical evidence shows that conservative values and beliefs are adopted as a coping strategy to feel secure in the face of danger, when the future is dominated by uncertainty, and to connect with like-minded others. In the present study, conservative political ideology appears to assist the participants in living a stable life amidst their complex social worlds. The reminiscence bump, as well as the dominance of positive memories within and outside the bump period, is in line with the narrative/identity account, which states that events and experiences during adolescence and early adulthood are assimilated into a person's lifelong narratives. Hence these events are used as identity markers and are more easily recalled later in life. Also, in keeping with socioemotional selectivity theory and the positivity effect, the participants evaluated past events more positively as they grew older, and the intensity of negative emotions decreased with time.

Keywords: autobiographical memory, historically defined autobiographical periods, narrative/identity account, Pakistan, reminiscence bump, SMS framework, transition theory

Procedia PDF Downloads 229
426 Assessment of Soil Quality Indicators in Rice Soil of Tamil Nadu

Authors: Kaleeswari R. K., Seevagan L.

Abstract:

Soil quality in an agroecosystem is influenced by the cropping system and by water and soil fertility management. A valid soil quality index would help to assess the soil and crop management practices required for desired productivity and soil health. Soil quality indices also provide an early indication of soil degradation and of needed remedial and rehabilitation measures. Imbalanced fertilization and inadequate organic carbon dynamics deteriorate soil quality in an intensive cropping system. The rice soil ecosystem is different from other arable systems, since rice is grown under submergence, which requires a different set of key soil attributes for enhancing soil quality and productivity. Assessment of a soil quality index involves indicator selection, indicator scoring and aggregation of the scores into one index. The most appropriate indicators to evaluate soil quality can be selected by establishing the minimum data set, which can be screened by linear and multiple regression, factor analysis and score functions. This investigation was carried out in intensive rice-cultivating regions (having >1.0 lakh hectares) of Tamil Nadu, viz., Thanjavur, Thiruvarur, Nagapattinam, Villupuram, Thiruvannamalai, Cuddalore and Ramanathapuram districts. In each district, an intensive rice-growing block was identified. In each block, two sampling grids (10 × 10 sq. km) were used, with a sampling depth of 10–15 cm. Using GIS coordinates, soil sampling was carried out at various locations in the study area. The numbers of soil sampling points were 41, 28, 28, 32, 37, 29 and 29 in Thanjavur, Thiruvarur, Nagapattinam, Cuddalore, Villupuram, Thiruvannamalai and Ramanathapuram districts, respectively. Principal Component Analysis (PCA) is a data reduction tool used to select potential indicators. A principal component is a linear combination of different variables that represents the maximum variance of the dataset.
Principal components with eigenvalues equal to or higher than 1.0 were taken as the minimum data set. Principal Component Analysis was used to select the representative soil quality indicators in rice soils based on factor loading values and contribution percentages. Variables showing significant differences within the production system were used for the preparation of the minimum data set. Each principal component explained a certain amount of variation (%) in the total dataset, and this percentage provided the weight for its variables. The final PCA-based soil quality equation is SQI = Σᵢ (Wᵢ × Sᵢ), where Sᵢ is the score for the subscripted variable and Wᵢ is the weighting factor derived from PCA. Higher index scores indicate better soil quality. Soil respiration, soil available nitrogen and potentially mineralizable nitrogen were identified as soil quality indicators in the rice soils of the Cauvery Delta zone, covering the Thanjavur, Thiruvarur and Nagapattinam districts. Soil available phosphorus could be used as a soil quality indicator for the rice soils of the Cuddalore district. In rain-fed rice ecosystems on coastal sandy soil, DTPA-Zn could be used as an effective soil quality indicator. Among the soil parameters selected by Principal Component Analysis, Microbial Biomass Nitrogen could be used as a quality indicator for the rice soils of the Villupuram district. The Cauvery Delta zone has a better SQI than the other intensive rice-growing zones of Tamil Nadu.
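The additive index SQI = Σᵢ (Wᵢ × Sᵢ) can be illustrated with a short sketch; the indicator scores and PCA-derived weights below are hypothetical placeholders, not values from the study:

```python
# Each entry maps an indicator to (score S_i on a 0-1 scale, weight W_i from
# the % variance its principal component explains). Values are illustrative.
indicators = {
    "soil_respiration":       (0.80, 0.35),
    "available_nitrogen":     (0.60, 0.30),
    "mineralizable_nitrogen": (0.70, 0.20),
    "available_phosphorus":   (0.50, 0.15),
}

# Weighted sum over the minimum data set; higher means better soil quality.
sqi = sum(s * w for s, w in indicators.values())
print(round(sqi, 3))  # 0.675
```

With real data, the weights would come from the variance explained by each retained principal component (eigenvalue ≥ 1.0) and the scores from the indicator scoring functions.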

Keywords: soil quality index, soil attributes, soil mapping, rice soil

Procedia PDF Downloads 83
425 Solution Thermodynamics, Photophysical and Computational Studies of TACH2OX, a C-3 Symmetric 8-Hydroxyquinoline: Abiotic Siderophore Analogue of Enterobactin

Authors: B. K. Kanungo, Monika Thakur, Minati Baral

Abstract:

8-Hydroxyquinoline (8HQ) is experiencing a renaissance due to its utility as a building block in metallosupramolecular chemistry and the versatile use of its derivatives in various fields of analytical chemistry, materials science, and pharmaceutics. It forms stable complexes with a variety of metal ions. Assembly of more than one such unit into a polydentate chelator enhances its coordinating ability and related properties through the chelate effect, resulting in a high stability constant. Keeping the above in view, a nonadentate chelator, N-[3,5-bis(8-hydroxyquinoline-2-amido)cyclohexyl]-8-hydroxyquinoline-2-carboxamide (TACH2OX), containing a central cis,cis-1,3,5-triaminocyclohexane appended to three 8-hydroxyquinoline units at the 2-position through amide linkages, was developed, and its solution thermodynamics, photophysical properties and Density Functional Theory (DFT) studies were undertaken. The synthesis of TACH2OX was carried out by condensation of cis,cis-1,3,5-triaminocyclohexane (TACH) with 8‐hydroxyquinoline‐2‐carboxylic acid. The brown-colored solid was fully characterized through melting point, infrared, nuclear magnetic resonance, electrospray ionization mass and electronic spectroscopy. In solution, TACH2OX forms protonated complexes below pH 3.4, which consecutively deprotonate to generate a trinegative ion as the pH rises. Nine protonation constants, ranging from 2.26 to 7.28, were obtained for the ligand. The interaction of the chelator with two trivalent metal ions, Fe3+ and Al3+, was studied in aqueous solution at 298 K. The metal-ligand (ML) formation constants obtained by potentiometric and spectrophotometric methods agree with each other. Protonated and hydrolyzed species were also detected in the system.
The in-silico studies of the ligand as well as of the complexes, including their protonated and deprotonated species, assessed by the density functional theory technique, gave an accurate correlation with each observed property, such as the protonation constants, stability constants, and the infrared, NMR, electronic absorption and emission spectral bands. The nature of the electronic and emission spectral bands, in terms of number and type, was ascertained from a time-dependent density functional theory study and the natural transition orbitals (NTOs). The global reactivity index parameters were used to compare the reactivity of the ligand and the complex molecules. The natural bonding orbital (NBO) analysis successfully described the structure and bonding of the metal-ligand complexes, specifying the percentage contribution of atomic orbitals in the creation of molecular orbitals. The high values obtained for the metal-ligand formation constants indicate that the newly synthesized chelator is very powerful. The minimum-energy molecular modeling structure of the ligand suggests that TACH2OX firmly coordinates to the metal ion in a tripodal fashion as a hexa-coordinated chelate displaying distorted octahedral geometry, binding through three sets of N, O-donor atoms present in each pendant arm of the central tris-cyclohexaneamine tripod.
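The stepwise deprotonation with rising pH can be illustrated with the Henderson-Hasselbalch relation for a single acidic site; treating the reported protonation constants as pKa values of independent sites is a simplifying assumption made only for this sketch:

```python
# Fraction of a single acidic site that is deprotonated at a given pH.
def frac_deprotonated(pka, ph):
    return 1.0 / (1.0 + 10 ** (pka - ph))

# At pH = pKa the site is half deprotonated; the reported constants span
# 2.26 to 7.28, so deprotonation proceeds stepwise as the pH rises.
print(round(frac_deprotonated(7.28, 7.28), 2))  # 0.5
print(round(frac_deprotonated(2.26, 7.28), 3))  # ~1.0 (essentially complete)
```

This is consistent with the observation that protonated complexes dominate below pH 3.4 and a trinegative ion is generated as the pH rises.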

Keywords: complexes, DFT, formation constant, TACH2OX

Procedia PDF Downloads 147
424 Behavioral Patterns of Adopting Digitalized Services (E-Sport versus Sports Spectating) Using Agent-Based Modeling

Authors: Justyna P. Majewska, Szymon M. Truskolaski

Abstract:

The growing importance of digitalized services in the so-called new economy, including the e-sports industry, has been observed recently. Various demographic and technological changes lead consumers to modify their needs, not regarding the services themselves but the method of their application (attracting customers, forms of payment, new content, etc.). In the case of leisure related to competitive spectating activities, there is a growing need to participate in events whose content is not a sports competition but a computer game challenge: e-sport. The literature in this area so far focuses on determining the number of e-sport fans with elements of simple statistical description (mainly concerning demographic characteristics such as age, gender and place of residence). Meanwhile, the development of the industry is influenced by a combination of many different, intertwined demographic, personality and psychosocial characteristics of customers, as well as the characteristics of their environment. Therefore, there is a need for a deeper recognition of the determinants of the behavioral patterns of customers selecting digitalized services, which, in the absence of available large data sets, can be achieved by using econometric simulations: multi-agent modeling. The cognitive aim of the study is to reveal internal and external determinants of the behavioral patterns of customers, taking into account various variants of economic development (the pace of digitization and technological development, socio-demographic changes, etc.). In the paper, an agent-based model with heterogeneous agents (characteristics of the customers themselves and of their environment) was developed, which allowed identifying a three-stage development scenario: i) initial interest, ii) standardization, and iii) full professionalization. The probabilities governing the transition process were estimated using the Method of Simulated Moments.
The estimation of the agent-based model parameters and the sensitivity analysis reveal crucial factors that have driven the rising trend in e-sport spectating and, in a wider perspective, the development of digitalized services. Among the psychosocial characteristics of customers, these are the level of familiarization with the rules of games and sports disciplines, active and passive participation history, and individual perception of challenging activities. Environmental factors include the general reception of games, the number and recognition level of community builders, and the level of technological development of streaming and community-building platforms. However, the crucial factor underlying the good predictive power of the model is the level of professionalization. In the initial interest phase, the entry barriers for new customers are high; they decrease during the standardization phase and increase again in the phase of full professionalization, when new customers perceive the participation history as inaccessible. In this case, customers are prone to switch to new methods of service application; in the case of e-sport vs. sports, to new content and more modern methods of its delivery. In a wider context, the findings in the paper support the idea of a life cycle of services regarding the methods of their application, from 'traditional' to digitalized.
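A minimal agent-based sketch of the three-stage scenario (initial interest, standardization, full professionalization) is shown below; the transition probabilities are illustrative placeholders, whereas the paper estimates them with the Method of Simulated Moments:

```python
import random

random.seed(1)

STAGES = ("initial interest", "standardization", "full professionalization")
# Hypothetical per-period probabilities of an agent moving to the next stage.
P_NEXT = {0: 0.30, 1: 0.15}

def step(population):
    """Advance each agent one period; an agent may move one stage forward."""
    return [s + 1 if s < 2 and random.random() < P_NEXT[s] else s
            for s in population]

population = [0] * 1000  # all agents start in the initial-interest phase
for _ in range(20):      # simulate 20 periods
    population = step(population)

counts = {STAGES[s]: population.count(s) for s in range(3)}
print(counts)  # most agents end up in the full-professionalization stage
```

A heterogeneous version would draw per-agent probabilities from the psychosocial and environmental characteristics listed above instead of using the two shared constants.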

Keywords: agent-based modeling, digitalized services, e-sport, spectator motives

Procedia PDF Downloads 170
423 An Assessment of Involuntary Migration in India: Understanding Issues and Challenges

Authors: Rajni Singh, Rakesh Mishra, Mukunda Upadhyay

Abstract:

India is among the nations born out of a partition that led to one of the greatest forced migrations of the past century. The Indian subcontinent was partitioned into two nation-states, namely India and Pakistan, leading to an unexampled mass displacement of about 20 million people in the subcontinent as a whole. This exemplifies the socio-political form of displacement, but there are other identified causes of human displacement, viz., natural calamities, development projects and people-trafficking and smuggling. Although forced migrations are rare in incidence, they are mostly region-specific, and a very small percentage of the population appears to be affected by them. However, when this percentage is translated into volume, the real impact created by such migration can be realized. Forced migration is thus an issue affecting the lives of many people and requires to be addressed with proper intervention. Forced or involuntary migration decimates people's assets, taking from them their most basic resources, and makes them migrate without planning or intention. This in most cases proves to be a burden on the resources of the destination. Thus, questions arise regarding the protection and safeguards for these migrants, who need help at the place of destination. This brings the human security dimension of forced migration into the picture. The present study is an analysis of a sample of 1501 persons surveyed by the National Sample Survey Organisation (NSSO) in India, which identifies three reasons for forced migration: natural disaster, social/political problems and displacement by development projects. It was observed that, of the total forced migrants, about four-fifths were internally displaced persons.
However, there was also a huge inflow of such migrants to the country from across the borders, the major contributing countries being Bangladesh, Pakistan, Sri Lanka, the Gulf countries and Nepal. Among the three reasons for involuntary migration, social and political problems are the most prominent in displacing huge masses of population; this is also the reason for which the share of international migrants relative to the internally displaced is higher compared to the other two factors. Second to political and social problems, natural calamities displaced a large portion of the involuntary migrants. The present paper examines the factors that increase people's vulnerability to forced migration. On perusing the background characteristics of the migrants, it was seen that those who were economically weak and socially fragile were more susceptible to migration. Therefore, insight into this fragile group of society is required so that government policies can benefit them in the most efficient and targeted manner.

Keywords: involuntary migration, displacement, natural disaster, social and political problem

Procedia PDF Downloads 351
422 Biosurfactants Produced by Antarctic Bacteria with Hydrocarbon Cleaning Activity

Authors: Claudio Lamilla, Misael Riquelme, Victoria Saez, Fernanda Sepulveda, Monica Pavez, Leticia Barrientos

Abstract:

Biosurfactants are compounds synthesized by microorganisms that show various chemical structures, including glycolipids, lipopeptides, polysaccharide-protein complexes, phospholipids, and fatty acids. These molecules have attracted attention in recent years due to their amphipathic nature, which allows their application in various activities related to emulsification, foaming, detergency, wetting, dispersion and solubilization of hydrophobic compounds. Microorganisms that produce biosurfactants are ubiquitous, present not only in water, soil, and sediments but also under extreme conditions of pH, salinity or temperature such as those of Antarctic ecosystems. It is therefore of interest to study biosurfactant-producing bacterial strains isolated from Antarctic environments, with the potential to be used in various biotechnological processes. The objective of this research was to characterize biosurfactants produced by bacterial strains isolated from Antarctic environments, with potential use in biotechnological processes for the cleaning of sites contaminated with hydrocarbons. The samples were collected from soils and sediments in the South Shetland Islands and the Antarctic Peninsula during the Antarctic Research Expedition INACH 2016, from both pristine and human-occupied (influenced) areas. Bacterial isolation was performed on solid R2A, M1 and LB media. The selection of biosurfactant-producing strains was done by hemolysis tests on blood agar plates (5%) and blue agar (CTAB). From 280 isolates, 10 bacterial strains were found to produce biosurfactants after stimulation with different carbon sources. These bacteria were identified using the 16S rDNA taxonomic marker with the universal primers 27F-1492R.
Biosurfactant production was carried out in 250 ml flasks using Bushnell Haas liquid culture medium enriched with different carbon sources (olive oil, glucose, glycerol, and hexadecane) for seven days under constant stirring at 20°C. Each cell-free supernatant was characterized by physicochemical parameters including drop collapse, emulsification and oil displacement, as well as stability at different temperatures, salinities, and pH values. In addition, the surface tension of each supernatant was quantified using a tensiometer. The strains with the highest activity were selected, and biosurfactant production was scaled up in six liters of culture medium. Biosurfactants were extracted from the supernatants with chloroform:methanol (2:1). These biosurfactants were tested against crude oil and motor oil to evaluate their displacement (detergency) activity. The physicochemical characterization of the 10 supernatants showed that 80% of them produced drop collapse, 60% were stable at different temperatures, and 90% had detergency activity on motor and olive oil. The biosurfactants obtained from two bacterial strains showed high dispersion activity on crude oil and motor oil, with halos larger than 10 cm. We can conclude that bacteria isolated from Antarctic soils and sediments provide high-quality biological material for the production of biosurfactants, with potential applications in the biotechnological industry, especially in areas contaminated with hydrocarbons such as petroleum.

Keywords: antarctic, bacteria, biosurfactants, hydrocarbons

Procedia PDF Downloads 277
421 Influence of Protein Malnutrition and Different Stressful Conditions on Aluminum-Induced Neurotoxicity in Rats: Focus on the Possible Protection Using Epigallocatechin-3-Gallate

Authors: Azza A. Ali, Asmaa Abdelaty, Mona G. Khalil, Mona M. Kamal, Karema Abu-Elfotuh

Abstract:

Background: Aluminium (Al) is a neurotoxic environmental pollutant that can cause diseases such as dementia, Alzheimer's disease, and Parkinsonism. It is widely used in antacid drugs as well as in food additives and toothpaste. Stress has been linked to cognitive impairment; social isolation (SI) may exacerbate memory deficits, while protein malnutrition (PM) increases oxidative damage in the cortex, hippocampus and cerebellum. The risk of cognitive decline may be lowered by maintaining social connections. Epigallocatechin-3-gallate (EGCG) is the most abundant catechin in green tea and has antioxidant, anti-inflammatory and anti-atherogenic effects as well as health-promoting effects in the CNS. Objective: To study the influence of different stressful conditions, such as social isolation and electric shock (EC), and of an inadequate nutritional condition (PM) on neurotoxicity induced by Al in rats, as well as to investigate the possible protective effect of EGCG under these stressful and PM conditions. Methods: Rats were divided into two major groups: a protected group, treated daily during the three weeks of the experiment with EGCG (10 mg/kg, IP), and a non-treated group. The protected and non-protected groups each included five subgroups as follows: one normal control receiving saline and four Al toxicity groups injected daily for three weeks with AlCl3 (70 mg/kg, IP). Of the latter, one served as the Al toxicity model, two were subjected to different stresses, either isolation as a mild stressful condition (SI-associated Al toxicity model) or electric shock as a high stressful condition (EC-associated Al toxicity model), and the last was maintained on a 10% casein diet (PM-associated Al toxicity model). Isolated rats were housed individually in cages covered with black plastic. Biochemical changes in the brain, namely acetylcholinesterase (ACHE), Aβ, brain-derived neurotrophic factor (BDNF), inflammatory mediators (TNF-α, IL-1β) and oxidative parameters (MDA, SOD, TAC), were estimated for all groups.
Histopathological changes in different brain regions were also evaluated. Results: Rats exposed to Al for three weeks showed brain neurotoxicity and neuronal degeneration. Both mild (SI) and high (EC) stressful conditions, as well as inadequate nutrition (PM), enhanced Al-induced neurotoxicity and brain neuronal degeneration; the enhancement induced by stress, especially its higher condition (EC), was more pronounced than that of the inadequate nutritional condition (PM), as indicated by the significant increase in Aβ, ACHE, MDA, TNF-α and IL-1β together with the significant decrease in SOD, TAC and BDNF. On the other hand, EGCG showed more pronounced protection against the hazards of Al in both stressful conditions (SI and EC) than in PM. The protective effects of EGCG were indicated by the significant decrease in Aβ, ACHE, MDA, TNF-α and IL-1β together with the increase in SOD, TAC and BDNF, and were confirmed by brain histopathological examinations. Conclusion: Neurotoxicity and brain neuronal degeneration induced by Al were more severe with stress than with PM. EGCG can protect against Al-induced brain neuronal degeneration in all conditions. Consequently, administration of EGCG, together with socialization and adequate protein nutrition, is advised, especially on excessive Al exposure, to avoid severe neuronal toxicity.

Keywords: environmental pollution, aluminum, social isolation, protein malnutrition, neuronal degeneration, epigallocatechin-3-gallate, rats

Procedia PDF Downloads 386
420 Changes in Physicochemical Characteristics of a Serpentine Soil and in Root Architecture of a Hyperaccumulating Plant Cropped with a Legume

Authors: Ramez F. Saad, Ahmad Kobaissi, Bernard Amiaud, Julien Ruelle, Emile Benizri

Abstract:

Agromining is a new technology that establishes agricultural systems on ultramafic soils in order to produce valuable metal compounds such as nickel (Ni), with the final aim of restoring the soil's agricultural functions. However, ultramafic soils are characterized by low fertility levels, which can limit the yields of hyperaccumulators and metal phytoextraction. The objectives of the present work were to test whether the association of a hyperaccumulating plant (Alyssum murale) with a Fabaceae (Vicia sativa var. Prontivesa) could induce changes in the physicochemical characteristics of a serpentine soil and in the root architecture of the hyperaccumulating plant, and thus lead to more efficient agromining practices through soil quality improvement. Based on standard agricultural systems consisting of the association of legumes with another crop such as wheat or rape, a three-month rhizobox experiment was carried out to study the effect of co-cropping (Co) or rotation (Ro) of a hyperaccumulating plant (Alyssum murale) with a legume (Vicia sativa), with incorporation of the legume biomass into the soil, in comparison with mineral fertilization (FMo), on the structure and physicochemical properties of an ultramafic soil and on root architecture. All parameters measured on Alyssum murale grown in the co-cropping system (biomass, C and N contents, and Ni uptake) showed the highest values, followed by mineral fertilization and rotation (Co > FMo > Ro), except for root nickel yield, for which rotation outperformed mineral fertilization (Ro > FMo). The rhizosphere soil of Alyssum murale in co-cropping had larger soil particle sizes and better aggregate stability than the other treatments. Using geostatistics, co-cropped Alyssum murale showed a greater spatial distribution of root surface area. Moreover, co-cropping and rotation induced lower soil DTPA-extractable nickel concentrations than the other treatments, but higher pH values.
Alyssum murale co-cropped with a legume showed higher biomass production, improved soil physical characteristics, and enhanced nickel phytoextraction. This study showed that the introduction of a legume into Ni agromining systems could improve the dry biomass yields of the hyperaccumulating plant and, consequently, the yields of Ni. This strategy can decrease the need to apply fertilizers and thus minimize the risk of nitrogen leaching and groundwater pollution. Co-cropping of Alyssum murale with the legume showed a clear tendency to increase nickel phytoextraction and plant biomass in comparison with the rotation treatment and the fertilized monoculture. In addition, co-cropping improved soil physical characteristics and soil structure through larger and more stable aggregates. It is, therefore, reasonable to conclude that the use of legumes in Ni-agromining systems could be a good strategy for reducing chemical inputs and restoring soil agricultural functions. Improving the agromining system by replacing inorganic fertilizers could simultaneously be a safe way of rehabilitating degraded soils and a method of restoring soil quality and functions, leading to the recovery of ecosystem services.

Keywords: plant association, legumes, hyperaccumulating plants, ultramafic soil physicochemical properties

Procedia PDF Downloads 159