Search results for: direct loss
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6558

1098 The Effects of Human Activities on Plant Diversity in Tropical Wetlands of Lake Tana (Ethiopia)

Authors: Abrehet Kahsay Mehari

Abstract:

Aquatic plants provide the physical structure of wetlands and increase their habitat complexity and heterogeneity, and as such have a profound influence on other biota. In this study, we investigated how human disturbance activities influenced the species richness and community composition of aquatic plants in the wetlands of Lake Tana, Ethiopia. Twelve wetlands were selected: four lacustrine, four river mouths, and four riverine papyrus swamps. Data on aquatic plants, environmental variables, and human activities were collected during the dry and wet seasons of 2018. A linear mixed effect model and a distance-based redundancy analysis (db-RDA) were used to relate aquatic plant species richness and community composition, respectively, to human activities and environmental variables. A total of 113 aquatic plant species, belonging to 38 families, were identified across all wetlands during the dry and wet seasons. Emergent species had the maximum area covered, at 73.45%, and attained the highest relative abundance, followed by amphibious and other forms. The mean taxonomic richness of aquatic plants was significantly lower in wetlands with high overall human disturbance scores than in wetlands with low overall human disturbance scores. Moreover, taxonomic richness showed a negative correlation with livestock grazing, tree plantation, and sand mining. The community composition also varied across wetlands with varying levels of human disturbance and was primarily driven by turnover (i.e., replacement of species) rather than nestedness (i.e., loss of species). Distance-based redundancy analysis revealed that livestock grazing, tree plantation, sand mining, waste dumping, and crop cultivation were significant predictors of variation in aquatic plant community composition in the wetlands. Linear mixed effect models and distance-based redundancy analysis also revealed that water depth, turbidity, conductivity, pH, sediment depth, and temperature were important drivers of variation in aquatic plant species richness and community composition. Papyrus swamps had the highest species richness and supported distinct plant communities. Conservation efforts should therefore focus on these habitats, and measures should be taken to restore the highly disturbed and species-poor wetlands near the river mouths.
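As an illustration of the modeling step described above, the following is a minimal sketch, assuming a long-format table with hypothetical column names ("richness", "disturbance_score", "water_depth", "turbidity", "wetland", "season"); the authors' exact model terms are not given in the abstract.

```python
# Minimal sketch of a linear mixed effect model for species richness,
# with wetland as a random intercept for repeated (seasonal) sampling.
# Column names and the input file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("lake_tana_plants.csv")  # hypothetical input file

model = smf.mixedlm(
    "richness ~ disturbance_score + water_depth + turbidity + season",
    data=df,
    groups=df["wetland"],  # random intercept per wetland
)
result = model.fit()
print(result.summary())
```

The distance-based RDA used for community composition is typically run in R's vegan package (capscale/dbrda); it has no direct statsmodels equivalent.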

Keywords: species richness, community composition, aquatic plants, wetlands, Lake Tana, human disturbance activities

Procedia PDF Downloads 119
1097 Recommendations to Improve Classification of Grade Crossings in Urban Areas of Mexico

Authors: Javier Alfonso Bonilla-Chávez, Angélica Lozano

Abstract:

In North America, more than 2,000 people die annually in accidents related to railroad tracks. In 2020, collisions at grade crossings were the main cause of deaths related to railway accidents in Mexico. Railway networks are in constant interaction with motor vehicle users, cyclists, and pedestrians, mainly at grade crossings, where vulnerability and the risk of accidents are greatest. Usually, accidents at grade crossings are directly related to risky behavior and non-compliance with regulations by motorists, cyclists, and pedestrians, especially in developing countries. Around the world, countries classify these crossings in different ways. In Mexico, according to their dangerousness (high, medium, or low), types A, B, and C have been established, with a different type of audible and visual signaling and gates, as well as horizontal and vertical signage, recommended for each. This classification is based on a weighting, but regrettably, it is not explained how the weight values were obtained. A review of the variables and of the current approach to grade crossing classification is required, since it is inadequate for some crossings. In contrast, North America (USA and Canada) and European countries use a broader classification, so that each crossing is addressed more precisely and equipment costs are adjusted. The lack of a proper classification could lead to cost overruns in equipment and deficient operation. To exemplify the lack of a good classification, six crossings are studied, three located in rural areas of Mexico and three in Mexico City. These cases show the need to improve the current regulations, improve the existing infrastructure, and implement technological systems, including informative signs bearing the nomenclature of the crossing involved and a direct telephone line for reporting emergencies. Such an implementation is unaffordable for most municipal governments. Also, an inventory of the most dangerous grade crossings in urban and rural areas must be compiled. An approach for improving the classification of grade crossings is then suggested; an illustrative scoring sketch follows below. This approach must be based on design criteria, characteristics of adjacent roads or intersections that can influence traffic flow through the crossing, accidents involving motorized and non-motorized vehicles, land use and land management, type of area, and the services and economic activities in the zone where the grade crossing is located. An expanded classification of grade crossings in Mexico could reduce accidents and improve the efficiency of the railroad.
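To make the call for a transparent weighting concrete, here is a hedged sketch of a multi-criteria scoring classifier; the criteria, weights, and A/B/C thresholds are hypothetical placeholders, not the official Mexican values (which, as noted above, are not published).

```python
# Hypothetical weighted scoring of a grade crossing into classes A (high),
# B (medium), or C (low) dangerousness. All inputs normalized to 0-1.
from typing import Dict

WEIGHTS: Dict[str, float] = {
    "train_traffic": 0.25,       # trains per day (normalized)
    "road_traffic": 0.25,        # vehicles/pedestrians per day (normalized)
    "accident_history": 0.20,    # accidents in recent years (normalized)
    "sight_distance": 0.15,      # 1 = poor visibility, 0 = good
    "land_use_intensity": 0.15,  # schools, commerce, etc. near the crossing
}

def classify_crossing(scores: Dict[str, float]) -> str:
    """Return a dangerousness class from a weighted sum of criteria."""
    total = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
    if total >= 0.66:
        return "A"
    if total >= 0.33:
        return "B"
    return "C"

print(classify_crossing({
    "train_traffic": 0.8, "road_traffic": 0.9, "accident_history": 0.5,
    "sight_distance": 0.7, "land_use_intensity": 0.6,
}))  # -> "A"
```

Publishing the weights and thresholds, unlike the current regulation, would let each municipality audit and recalibrate the classification.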

Keywords: accidents, grade crossing, railroad, traffic safety

Procedia PDF Downloads 104
1096 Study of Variation in Linear Growth and Other Parameters of Male Albino Rats on Exposure to Chronic Multiple Stress after Birth

Authors: Potaliya Pushpa, Kataria Sushma, D. S. Chowdhary, Dadhich Abhilasha

Abstract:

Introduction: Stress is a nonspecific response of the body to a stressor or triggering stimulus. Chronic stress exposure contributes to various remarkable alterations of growth and development. The collective effects of stressors lead to several changes that are physical, physiological, and behavioral in nature. Objective: To understand, in an animal model, how various chronic stresses affect somatic body growth, as this can be useful for effective stress treatment and the prevention of stress-related illnesses. Material and Method: By selective fostering, colonies of only male pups were made, and 102 male albino rats were studied. They were divided into two groups, Control and Stressed. The experimental group was exposed to four major types of stress (maternal deprivation, restraint stress, electric foot shock, and noise stress) affecting emotional, physical, and physiological activities. Exposure lasted from birth to the 17th week of life. Roentgenographs were taken in two planes, dorso-ventral and lateral, and then measured for each rat. Various parameters were observed at specific intervals. The parameters recorded were body weight and, for linear growth, the summation of cranial length, head-rump length, and tail length. Behavioral changes were also observed. Result: Multiple chronic stresses resulted in a loss of approximately 25% of mean body weight. The maximal difference between the control and stressed groups was found on day 119 (87.81 g). Linear growth showed retardation, which statistical analysis found to be significant in the stressed group. Cranial length and head-rump length showed the maximum difference after maternal deprivation stress. After maternal deprivation (day 21) and electric foot shock (day 101), maximum differences of 0.39 cm and 0.47 cm, respectively, were found in the cranial length of the two groups. Electric foot shock had a considerable impact on tail length. Noise stress affected behavior more than physical growth. Conclusion: Collectively, the study showed that chronic stress reduced not only body weight in albino rats but also total linear size, thus affecting overall growth and development.

Keywords: stress, microscopic anatomy, macroscopic anatomy, chronic multiple stress, birth

Procedia PDF Downloads 263
1095 Genetic Advance versus Environmental Impact toward Sustainable Protein, Wet Gluten and Zeleny Sedimentation in Bread and Durum Wheat

Authors: Gordana Branković, Dejan Dodig, Vesna Pajić, Vesna Kandić, Desimir Knežević, Nenad Đurić

Abstract:

The quality properties of wheat grain are influenced by genotype, environmental conditions, and genotype × environment interaction (GEI). The increasing demand for more nutritious wheat products will direct future breeding programmes. Therefore, the aim of this investigation was to determine: i) the variability of protein content (PC), wet gluten content (WG), and Zeleny sedimentation volume (ZS); ii) the components of variance, broad-sense heritability (hb²), and expected genetic advance as percent of mean (GAM) for PC, WG, and ZS; iii) the correlations between PC, WG, ZS, and the most important agronomic traits; in order to assess the expected breeding success versus the environmental impact for these quality traits. The plant material consisted of 30 genotypes of bread wheat (Triticum aestivum L. ssp. aestivum) and durum wheat (Triticum durum Desf.). The trials were sown at three test locations in Serbia: Rimski Šančevi, Zemun Polje, and Padinska Skela, during 2010-2011 and 2011-2012. The experiments were set up as a randomized complete block design with four replications. Each plot consisted of five rows of 1 m² (5 × 0.2 m × 1 m). PC, WG, and ZS were determined by near-infrared spectroscopy (NIRS) with the Infraneo analyser (Chopin Technologies, France). PC, WG, and ZS in bread wheat were in the ranges 13.4-16.4%, 22.8-30.3%, and 39.4-67.1 mL, respectively, and in durum wheat, in the ranges 15.3-18.1%, 28.9-36.3%, and 37.4-48.3 mL, respectively. The dominant component of variance for PC, WG, and ZS in bread wheat was genotype, with genetic variance/GEI variance (VG/VG×E) ratios of 3.2, 2.9, and 1.0, respectively; in durum wheat it was GEI, with VG/VG×E ratios of 0.70, 0.69, and 0.49, respectively. The hb² and GAM values for PC, WG, and ZS in bread wheat were 94.9% and 12.6%, 93.7% and 18.4%, and 86.2% and 28.1%, respectively, and in durum wheat, 80.7% and 7.6%, 79.7% and 10.2%, and 74% and 11.2%, respectively. The most consistent statistically significant correlations across the six environments, for bread wheat, were between PC and spike length (-0.312 to -0.637); PC, WG, ZS and grain number per spike (-0.320 to -0.620; -0.369 to -0.567; -0.301 to -0.378, respectively); and PC and grain thickness (0.338 to 0.566). For durum wheat, they were between PC, WG, ZS and yield (-0.290 to -0.690; -0.433 to -0.753; -0.297 to -0.660, respectively); PC and plant height (-0.314 to -0.521); PC, WG and spike length (-0.298 to -0.597; -0.293 to -0.627, respectively); PC, WG and grain thickness (0.260 to 0.575; 0.269 to 0.498, respectively); and PC, WG and grain vitreousness (0.278 to 0.665; 0.357 to 0.690, respectively). Breeding success can be anticipated for ZS in bread wheat due to coupled high values of hb² and GAM, suggesting the existence of additive gene effects, and also for WG in bread wheat, due to very high hb² and medium-high GAM. The small to medium negative correlations between PC, WG, ZS, and yield or yield components indicate the difficulty of selecting simultaneously for high quality and yield, depending on whether the linkage of particular genetic arrangements can be broken by recombination.
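For reference, standard formulations of broad-sense heritability (from variance components) and genetic advance as percent of mean are shown below; the exact estimators used by the authors are not stated in the abstract, and the selection intensity k = 2.06 (corresponding to 5% selection) is an assumption.

```latex
% Broad-sense heritability with r replications and e environments:
h_b^2 = \frac{\sigma_G^2}{\sigma_G^2 + \sigma_{G \times E}^2/e + \sigma_\varepsilon^2/(re)}
% Genetic advance as percent of mean, with selection intensity k (k = 2.06 at 5% selected):
\mathrm{GAM} = \frac{k \, h_b^2 \, \sigma_P}{\bar{x}} \times 100
```

High hb² together with high GAM, as reported for ZS in bread wheat, indicates that the phenotypic superiority of selected lines is largely heritable and that selection should produce a worthwhile shift of the population mean.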

Keywords: bread and durum wheat, genetic advance, protein and wet gluten content, Zeleny sedimentation volume

Procedia PDF Downloads 248
1094 Experimental Study on Heat and Mass Transfer of Humidifier for Fuel Cell

Authors: You-Kai Jhang, Yang-Cheng Lu

Abstract:

The major contributions of this study are threefold: the design of a new planar-membrane humidifier for the Proton Exchange Membrane Fuel Cell (PEMFC), an index to measure the effectiveness (εT) of that humidifier, and an air compressor system for replicating planar-membrane humidifier experiments. The PEMFC, as a kind of renewable energy technology, has become more and more important in recent years due to its reliability and durability. To maintain the efficiency of the fuel cell, the membrane of a PEMFC needs to be kept well hydrated; how to maintain proper membrane humidity is thus one of the key issues in optimizing a PEMFC. We developed a new humidifier to recycle water vapor from the cathode air outlet so as to maintain the moisture content of the cathode air inlet of a PEMFC. By measuring parameters such as the dry-side air outlet dew point temperature, dry-side air inlet temperature and humidity, wet-side air inlet temperature and humidity, and the differential pressure between the dry side and wet side, we calculated indices including the dew point approach temperature (DPAT), water flux (J), water recovery ratio (WRR), effectiveness (εT), and differential pressure (ΔP); hedged reference definitions are given below. Using these indices, we discuss six topics: sealing effect, flow rate effect, flow direction effect, channel effect, temperature effect, and humidity effect. Gas cylinders are used as the air supply in many humidifier studies, but a gas cylinder depletes quickly during experiments at a 1 kW air flow rate, which makes replication difficult. In order to ensure highly stable air quality and better replication of experimental data, this study designs an air supply system to overcome this difficulty. The experimental results show that the best (lowest) rate of pressure loss of the humidifier is 0.133×10³ Pa(g)/min at a torque of 25 N·m. Humidifier performance is best at air flow rates of 30-40 LPM. The counter-flow configuration humidifies the dry-side inlet air more effectively than the parallel-flow configuration. From the performance measurements of channel plates with various rib widths, it is found that the narrower the rib width, the better the humidifier performance. Increasing the channel width at the same hydraulic diameter (Dh) yields a higher εT and a lower ΔP. Moreover, increasing the dry-side air inlet temperature or humidity leads to a lower εT, and when the dry-side air inlet temperature exceeds 50°C, the effect becomes even more pronounced.
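The abstract does not define the performance indices; the following definitions, common in the membrane-humidifier literature, are offered as a hedged reference and may differ in detail from the authors' formulations.

```latex
% Dew point approach temperature: wet-side inlet dew point minus dry-side outlet dew point
\mathrm{DPAT} = T_{dp,\,wet\,in} - T_{dp,\,dry\,out}
% Water recovery ratio: vapor gained by the dry stream over vapor entering on the wet side
\mathrm{WRR} = \frac{\dot{m}_{v,\,dry\,out} - \dot{m}_{v,\,dry\,in}}{\dot{m}_{v,\,wet\,in}}
```

A smaller DPAT and a WRR closer to unity both indicate more effective moisture transfer across the membrane.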

Keywords: PEM fuel cell, water management, membrane humidifier, heat and mass transfer, humidifier performance

Procedia PDF Downloads 170
1093 How to Reach Net Zero Emissions? On the Permissibility of Negative Emission Technologies and the Danger of Moral Hazards

Authors: Hanna Schübel, Ivo Wallimann-Helmer

Abstract:

In order to reach the goal of the Paris Agreement of not overshooting 1.5°C of warming above pre-industrial levels, various countries, including the UK and Switzerland, have committed themselves to net zero emissions by 2050. The employment of negative emission technologies (NETs) is very likely to be necessary for meeting these national objectives as well as other internationally agreed climate targets. NETs are methods of removing carbon from the atmosphere and are thus a means of addressing climate change. They range from afforestation to technological measures such as direct air capture and carbon storage (DACCS), where CO2 is captured from the air and stored underground. As with all so-called geoengineering technologies, the development and deployment of NETs are often subject to moral hazard arguments. Because these technologies could be perceived as an alternative to mitigation efforts, so the argument goes, they are potentially a dangerous distraction from the main target of mitigating emissions. We think that this is a dangerous argument to make, as it may hinder the development of NETs, which are an essential element of net zero emission targets. In this paper, we argue that the moral hazard argument is only problematic if we do not reflect upon which levels of emissions are at stake in meeting net zero emissions. In response to the moral hazard argument, we develop an account of which levels of emissions in given societies should be mitigated rather than targeted by NETs, and which levels of emissions can legitimately be a target of NETs. For this purpose, we define four different levels of emissions: the current level of individual emissions, the level individuals emit in order to appear in public without shame, the level of a fair share of individual emissions in the global budget, and finally the baseline of net zero emissions. At each level of emissions, different subjects are assigned responsibilities if societies and/or individuals are committed to the target of net zero emissions. We argue that emissions within one's fair share do not demand individual mitigation efforts. The same holds with regard to individuals and the baseline level of emissions necessary to appear in public in their societies without shame; individuals are only under a duty to reduce their emissions if they exceed this baseline level. This is different for whole societies. Societies in which appearing in public without shame demands more emissions than the individual fair share are under a duty to foster emission reductions and may not legitimately achieve them by introducing NETs. NETs are legitimate for reducing emissions only below the level of fair shares and for reaching net zero emissions. Since access to NETs for achieving net zero emissions demands technology not affordable to individuals, there are also no full individual responsibilities to achieve net zero emissions; this is mainly a responsibility of societies as a whole.

Keywords: climate change, mitigation, moral hazard, negative emission technologies, responsibility

Procedia PDF Downloads 115
1092 The Implementation of a Nurse-Driven Palliative Care Trigger Tool

Authors: Sawyer Spurry

Abstract:

Problem: Palliative care providers at an academic medical center in Maryland stated that medical intensive care unit (MICU) patients are often referred late in their hospital stay. The MICU has performed well below the hospital quality performance metric, which specifies that 80% of patients who expire with expected outcomes should have received a palliative care consult within 48 hours of admission. Purpose: The purpose of this quality improvement (QI) project is to increase palliative care utilization in the MICU through the implementation of a Nurse-Driven Palliative Trigger Tool to prompt the need for a specialty palliative care consult. Methods: MICU nursing staff and providers received education concerning the implications of underused palliative care services and the literature supporting the use of nurse-driven palliative care tools as a means of increasing utilization of palliative care. A MICU population-specific set of palliative trigger criteria (the Palliative Care Trigger Tool) was formulated by the QI implementation team, the palliative care team, and the patient care services department. Nursing staff were asked to assess patients daily for the presence of palliative triggers using the Palliative Care Trigger Tool and to present findings during bedside rounds. MICU providers were asked to consult palliative medicine, given the presence of palliative triggers, following interdisciplinary rounds. Rates of palliative consult, given the presence of triggers, were collected via electronic medical record data pull, de-identified, and recorded in the data collection tool. Preliminary Results: Over 140 MICU registered nurses were educated on the palliative trigger initiative, along with 8 nurse practitioners, 4 intensivists, 2 pulmonary critical care fellows, and 2 palliative medicine physicians. Over 200 patients were admitted to the MICU and screened for palliative triggers during the 15-week implementation period. Primary outcomes showed an increase in palliative care consult rates for patients presenting with triggers, a decreased mean time from admission to palliative consult, and increased recognition of unmet palliative care needs by MICU nurses and providers. Conclusions: The findings of this QI project suggest a positive correlation between utilizing palliative care trigger criteria and decreased time to palliative care consult. The direct outcomes of effective palliative care include decreased length of stay, healthcare costs, and moral distress, as well as improved symptom management and quality of life (QOL).

Keywords: palliative care, nursing, quality improvement, trigger tool

Procedia PDF Downloads 190
1091 A Study on the Effect of the Work-Family Conflict on Work Engagement: A Mediated Moderation Model of Emotional Exhaustion and Positive Psychological Capital

Authors: Sungeun Hyun, Sooin Lee, Gyewan Moon

Abstract:

Work-family conflict (WFC) has been an active research area for the past several decades. WFC harms individuals and organizations and is ultimately expected to impose the cost of losses on the company in the long run. Research on WFC has mainly focused on its effects on organizational effectiveness and job attitudes, using variables such as job satisfaction, organizational commitment, and turnover intention. This study differs from previous research in its choice of consequence variable: we selected the positive job attitude of work engagement as a consequence of WFC. The primary purpose of this research is to identify the negative effects of WFC, and it started from the recognition that research on the direct influence of WFC on work engagement is lacking. Based on conservation of resources (COR) theory and the job demands-resources (JD-R) model, an empirical study model examining the negative effects of WFC, with emotional exhaustion as the link between WFC and work engagement, was suggested and validated. We also analyzed how much positive psychological capital may buffer the negative effects arising from WFC within this relationship, and we verified a mediated moderation model in which positive psychological capital moderates the indirect effect of WFC on work engagement through emotional exhaustion. Data were collected using questionnaires distributed to 500 employees engaged in manufacturing, services, finance, IT, education services, and other sectors, of which 389 were used in the statistical analysis. The data were analyzed with SPSS 21.0, the SPSS PROCESS macro, and AMOS 21.0; hierarchical regression analysis and the bootstrapping method were used for hypothesis testing. Results showed that all hypotheses were supported. First, WFC had a negative effect on work engagement; specifically, work interference with family (WIF) had more negative effects than family interference with work (FIW). Second, emotional exhaustion was found to mediate the relationship between WFC and work engagement. Third, positive psychological capital was shown to moderate the relationship between WFC and emotional exhaustion. Fourth, in the integrated test of mediated moderation, positive psychological capital was demonstrated to buffer the relationships among WFC, emotional exhaustion, and work engagement. Across all hypothesis tests, WIF showed more negative effects than FIW. Finally, we discuss the theoretical and practical implications for research on and management of WFC, and we note the limitations and future directions of this research.
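A minimal sketch of the mediated moderation test, in the spirit of the SPSS PROCESS macro but written in Python, is shown below; the column names ("wfc", "psycap", "exhaustion", "engagement") and the simplified two-equation structure are assumptions, not the authors' full specification.

```python
# Bootstrap of the conditional indirect effect WFC -> exhaustion -> engagement,
# with the first path moderated by positive psychological capital (psycap).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical input (N = 389 usable responses)

lo, hi = df["psycap"].quantile([0.16, 0.84])  # "low" and "high" moderator levels
rng = np.random.default_rng(0)
boot = {"low": [], "high": []}
for _ in range(2000):
    s = df.sample(len(df), replace=True, random_state=int(rng.integers(1 << 31)))
    a_model = smf.ols("exhaustion ~ wfc * psycap", data=s).fit()  # moderated a-path
    b = smf.ols("engagement ~ exhaustion + wfc", data=s).fit().params["exhaustion"]
    for name, w in (("low", lo), ("high", hi)):
        a = a_model.params["wfc"] + a_model.params["wfc:psycap"] * w
        boot[name].append(a * b)

for name, vals in boot.items():
    print(name, np.percentile(vals, [2.5, 97.5]))  # 95% bootstrap CI
```

If the interval at low psycap is more strongly negative than at high psycap, the buffering pattern reported above is reproduced.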

Keywords: emotional exhaustion, positive psychological capital, work engagement, work-family conflict

Procedia PDF Downloads 218
1090 Oleic Acid Enhances Hippocampal Synaptic Efficacy

Authors: Rema Vazhappilly, Tapas Das

Abstract:

Oleic acid is a cis-unsaturated fatty acid and is known to be a partially essential fatty acid due to its limited endogenous synthesis during pregnancy and lactation. Previous studies have demonstrated the role of oleic acid in neuronal differentiation and brain phospholipid synthesis. This evidence indicates a major role for oleic acid in learning and memory. Interestingly, oleic acid has been shown to enhance hippocampal long-term potentiation (LTP), the physiological correlate of long-term synaptic plasticity. However, the effect of oleic acid on short-term synaptic plasticity has not been investigated. Short-term potentiation (STP) is the physiological correlate of short-term synaptic plasticity, which is the key underlying molecular mechanism of short-term memory and neuronal information processing. STP in the hippocampal CA1 region is known to require the activation of N-methyl-D-aspartate receptors (NMDARs), and NMDAR-dependent hippocampal STP as a potential mechanism for short-term memory has been a subject of intense interest for the past few years. Therefore, in the present study, the effect of oleic acid on NMDAR-dependent hippocampal STP was determined in mouse hippocampal slices (in vitro) using a multi-electrode array system. STP was induced by weak tetanic stimulation (one train of 100 Hz stimulation for 0.1 s) of the Schaffer collaterals of the CA1 region of the hippocampus in slices treated with different concentrations of oleic acid in the presence or absence of the NMDAR antagonist D-AP5 (30 µM). Oleic acid at 20 µM (mean increase in fEPSP amplitude ≈ 135% vs. control 100%; P < 0.001) and 30 µM (mean increase in fEPSP amplitude ≈ 280% vs. control 100%; P < 0.001) significantly enhanced STP following weak tetanic stimulation. A lower oleic acid concentration of 10 µM did not modify the hippocampal STP induced by weak tetanic stimulation. The hippocampal STP induced by weak tetanic stimulation was completely blocked by D-AP5 (30 µM) in both oleic acid-treated and control hippocampal slices, leading to the conclusion that the hippocampal STP elicited by weak tetanic stimulation and enhanced by oleic acid was NMDAR-dependent. Together, these findings suggest that oleic acid may enhance short-term memory and neuronal information processing through the modulation of NMDAR-dependent hippocampal short-term synaptic plasticity. In conclusion, this study suggests a possible role for oleic acid in preventing short-term memory loss and impaired neuronal function throughout development.

Keywords: oleic acid, short-term potentiation, memory, field excitatory post synaptic potentials, NMDA receptor

Procedia PDF Downloads 330
1089 An Autonomous Space Debris-Removal System for Effective Space Missions

Authors: Shriya Chawla, Vinayak Malhotra

Abstract:

Space exploration has seen an exponential rise in the past two decades. The world has started probing alternatives for efficient and resourceful sustenance, along with the utilization of advanced technology, viz., satellites orbiting the Earth. Space propulsion forms the core of space exploration. Of all the issues encountered, space debris has increasingly threatened space exploration and propulsion. Past efforts have left disastrous space debris fragments orbiting the Earth at speeds of up to several kilometres per second. Debris is well known as a potential source of damage to future missions, with immense losses of resources and risk to human life, and a huge amount of money is invested in active research on it. Appreciable work has been done in the past on active space debris-removal technologies such as harpoons, nets, and drag sails, with the primary emphasis laid on confined removal. Recently, the RemoveDEBRIS spacecraft was used for servicing and capturing cargo ships. Airbus designed and planned the debris-catching net experiment aboard the spacecraft, which represents the largest payload deployed from the space station. However, the magnitude of the issue suggests that active space debris-removal technologies, such as harpoons and nets, still would not be enough, necessitating a better and more effective space debris removal system. Techniques based on diverting the path of the debris or of the spacecraft to avert damage have seen minimal usage owing to limited prediction capability. The present work focuses on an active hybrid space debris removal system and is motivated by the need for safer and more efficient space missions. The specific objectives of the work are 1) to thoroughly analyse existing and conventional debris removal techniques, their working, effectiveness, and limitations under varying conditions, and 2) to understand the role of key controlling parameters in the coupled operation of debris capture and removal. The system utilizes the latest autonomous technology available, with an adaptable structural design for operation under varying conditions. The design retains the advantages of most of the existing technologies while removing their disadvantages, and is likely to enhance the probability of effective space debris removal. At present, a systematic theoretical study is being carried out to thoroughly observe the effects of pseudo-random debris occurrences and to arrive at an optimal design with better features and control.

Keywords: space exploration, debris removal, space crafts, space accidents

Procedia PDF Downloads 162
1088 Market Solvency Capital Requirement Minimization: How Non-linear Solvers Provide Portfolios Complying with Solvency II Regulation

Authors: Abraham Castellanos, Christophe Durville, Sophie Echenim

Abstract:

In this article, a portfolio optimization problem is solved in a Solvency II context: it illustrates how advanced optimization techniques can help to tackle complex operational pain points around the monitoring, control, and stability of the Solvency Capital Requirement (SCR). The market SCR of a portfolio is calculated as a combination of SCR sub-modules; these sub-modules are the results of stress tests on interest rate, equity, property, credit, and FX factors, as well as concentration on counterparties (the standard aggregation is recalled below). The market SCR is non-convex and non-differentiable, which does not make it a natural candidate as an optimization criterion. In the SCR formulation, correlations between sub-modules are fixed, whereas risk-driven portfolio allocation is usually driven by the dynamics of the actual correlations. Implementing a portfolio construction approach that is efficient from both a regulatory and an economic standpoint is therefore not straightforward. Moreover, the challenge for insurance portfolio managers is not only to achieve a minimal SCR to reduce non-invested capital but also to ensure the stability of the SCR. Some optimizations have already been performed in the literature by simplifying the standard formula into a quadratic function, but to our knowledge, this is the first time that the standard formula of the market SCR is used directly in an optimization problem. Two solvers are combined: a bundle algorithm for convex non-differentiable problems, and a BFGS (Broyden-Fletcher-Goldfarb-Shanno)-SQP (Sequential Quadratic Programming) algorithm to cope with non-convex cases. A market SCR minimization is then performed with historical data. This approach results in a significant reduction of the capital requirement compared to a classical Markowitz approach based on historical volatility. A comparative analysis of different optimization models (equi-risk-contribution portfolio, minimum-volatility portfolio, and minimum value-at-risk portfolio) is performed, and the impact of these strategies on risk measures, including the market SCR and its sub-modules, is evaluated. A lack of diversification of the market SCR is observed, especially for equities. This was expected, since the market SCR strongly penalizes this type of financial instrument. It is shown that this direct effect of the regulation can be attenuated by implementing constraints in the optimization process or by minimizing the market SCR together with the historical volatility, proving the value of a portfolio construction approach that can incorporate such features. The results are further explained by the market SCR modelling.
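For context, the market SCR aggregation under the Solvency II standard formula, the non-smooth objective that the solvers above must handle, combines the sub-module charges through a prescribed correlation matrix:

```latex
\mathrm{SCR}_{mkt} = \sqrt{\sum_{i,j} \rho_{ij}\, \mathrm{SCR}_i\, \mathrm{SCR}_j}
```

Here the SCR_i are the stress-based charges of the sub-modules listed above (interest rate, equity, property, credit/spread, FX, concentration) and ρ is the regulatory correlation matrix; some sub-modules, such as interest rate, take the worse of two stress scenarios, which is one source of the non-convexity and non-differentiability noted above.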

Keywords: financial risk, numerical optimization, portfolio management, solvency capital requirement

Procedia PDF Downloads 113
1087 Origins of the Tattoo: Decoding the Ancient Meanings of Terrestrial Body Art to Establish a Connection between the Natural World and Humans Today

Authors: Sangeet Anand

Abstract:

Body art and tattooing have been practiced as forms of self-expression for centuries, and this study analyzes the pertinence of tattoo culture in our everyday lives and in the ancient past. Individuals of different cultures represent the ideas, practices, and elements of their cultures through symbolic representation. These symbols come in all shapes and sizes, ranging from something as simple as the makeup you put on every day to something more permanent, such as a tattoo. In the long run, the individuals who choose to display art on their bodies are seeking to express their individuality. In addition, these visuals are ultimately a reflection of what our own cultures deem beautiful, important, and powerful to the human eye. They make us known to the world and give us a plausible identity in an ever-changing world. We have lived through and seen a rise in hippie culture today, and the bodily decoration displayed by this fad has made it seem as though body art is a relatively new visual language. Quite to the contrary, it is not. Through cultural symbolic exploration, we can answer key questions about ideas that have been raised for centuries. Through careful, in-depth interviews, this study takes a broad subject matter (art and symbolism) and distills it into a deeper philosophical connection between the world and its past. The basic methodologies used in this sociocultural study include interview questionnaires and textual analysis, which encompass a subject and interviewer as well as source material. The major findings of this study include a distinct connection between cultural heritage and the day-to-day likings of an individual. The participant studied during this project demonstrated a clear passion for hobbies that were practiced even by her ancestors. We can conclude, through these findings, that there is a deeper cultural connection between modern-day humans, the first humans, and their surrounding environments. Our symbols today are a direct reflection of the elements of nature that our human ancestors were exposed to, and, through cultural acceptance, we can adorn ourselves with these representations to help others identify our pasts. Body art embraces different aspects of different cultures and holds significance, tells stories, and persists, even as the human population rapidly integrates. With this pattern, our human descendants will continue to represent their cultures and identities in the future. Body art is an integral element in understanding how and why people identify with certain aspects of life over others, and it broadens the scope for conducting more cross-cultural analysis.

Keywords: natural, symbolism, tattoo, terrestrial

Procedia PDF Downloads 103
1086 Hepatoprotective Action of Emblica officinalis Linn. against Radiation and Lead Induced Changes in Swiss Albino Mice

Authors: R. K. Purohit

Abstract:

Ionizing radiation induces cellular damage through direct ionization of DNA and other cellular targets and indirectly via reactive oxygen species, which may include effects from epigenetic changes. The need of the hour, therefore, is to search for an ideal radioprotector that could minimize the deleterious and damaging effects caused by ionizing radiation. Radioprotectors are agents which reduce radiation effects on cells when applied prior to radiation exposure. The aim of this study was to assess the efficacy of Emblica officinalis in reducing radiation- and lead-induced changes in mouse liver. For the present experiment, healthy male Swiss albino mice (6-8 weeks old) were selected and maintained under standard conditions of temperature and light. Fruit extract of Emblica was fed orally at a dose of 0.01 ml/animal/day. The animals were divided into seven groups according to treatment: lead acetate solution as drinking water (group II), exposure to 3.5 or 7.0 Gy gamma radiation (group III), or a combined treatment of radiation and lead acetate (group IV). The animals of the experimental groups were administered Emblica extract for seven days prior to radiation or lead acetate treatment (groups V, VI, and VII, respectively). Animals from all groups were sacrificed by cervical dislocation at post-treatment intervals of 1, 2, 4, 7, 14, and 28 days. After sacrifice, pieces of liver were taken out, and some were kept at -20°C for biochemical assays. The histopathological changes included cytoplasmic degranulation, vacuolation, hyperaemia, and pycnotic and crenated nuclei. The changes observed in the control groups were compared with those in the respective experimental groups. An increase in total proteins, glycogen, acid phosphatase activity, alkaline phosphatase activity, and RNA was observed up to day 14 in the non-drug-treated groups and up to day 7 in the Emblica-treated groups; thereafter, the values declined up to day 28 without reaching normal. Cholesterol and DNA showed a decreasing trend up to day 14 in the non-drug-treated groups and up to day 7 in the drug-treated groups; thereafter, the values rose up to day 28. The changes in the biochemical parameters were found to be dose-dependent, and after the combined treatment of radiation and lead acetate, a synergistic effect was observed. The liver of Emblica-treated animals exhibited less severe damage than that of non-drug-treated animals at all corresponding intervals. An earlier and faster recovery was also noticed in Emblica-pretreated animals. Thus, it appears that Emblica is potent enough to check lead- and radiation-induced hepatic lesions in Swiss albino mice.

Keywords: radiation, lead, emblica, mice, liver

Procedia PDF Downloads 318
1085 Personality Composition in Senior Management Teams: The Importance of Homogeneity in Dynamic Managerial Capabilities

Authors: Shelley Harrington

Abstract:

As a result of increasingly dynamic business environments, the creation and fostering of dynamic capabilities [those capabilities that enable sustained competitive success, despite dynamism, through the awareness and reconfiguration of internal and external competencies], supported by organisational learning [itself a dynamic capability], has gained considerable momentum in the research arena. Presenting findings funded by the Economic and Social Research Council, this paper investigates the extent to which Senior Management Team (SMT) personality (at the trait and facet level) is associated with the creation of dynamic managerial capabilities at the team level and with effective organisational learning/knowledge sharing within the firm. In doing so, this research highlights the importance of micro-foundations in organisational psychology and specifically in dynamic capabilities, a field which to date has largely ignored the importance of psychology in understanding these important and necessary capabilities. Using a direct measure of personality (NEO PI-3) at the trait and facet level across 32 high-technology and finance firms in the UK, their CEOs (N=32), and their complete SMTs [N=212], a new measure of dynamic managerial capabilities at the team level was created and statistically validated for use within the work. A quantitative methodology was employed, with regression and gap analysis used to establish personality empirically as a micro-foundation of dynamic capabilities; a sketch of the gap computation follows below. This study found that personality homogeneity within the SMT was required to strengthen the dynamic managerial capabilities of sensing, seizing, and transforming, which in turn was required for strong organisational learning at middle management level [N=533]. In particular, it was found that the greater the difference [t-score gaps] between the personality profiles of a Chief Executive Officer (CEO) and their complete, collective SMT, the lower the resulting self-reported dynamic managerial capabilities. For example, the larger the difference between a CEO's level of dutifulness, a facet contributing to the definition of conscientiousness, and their SMT's level of dutifulness, the lower the reported level of transforming, a capability fundamental to strategic change in a dynamic business environment. This in turn directly questions recent trends, particularly in upper echelons research, highlighting the need for heterogeneity within teams. In doing so, it successfully positions personality as a micro-foundation of dynamic capabilities, thus contributing to recent discussions within the strategic management field calling for the need to explore dynamic capabilities empirically at such a level.
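A hedged sketch of the t-score gap computation described above is given below; the file layout and column names are hypothetical, and the study's actual gap analysis may differ in detail.

```python
# Mean absolute NEO PI-3 facet gap between each CEO and their SMT's mean profile,
# related to team-level dynamic managerial capability (DMC) scores.
import pandas as pd

ceo = pd.read_csv("ceo_facets.csv", index_col="firm")   # one row per firm: facet t-scores
smt = pd.read_csv("smt_facets.csv")                     # one row per SMT member, "firm" column

smt_mean = smt.groupby("firm").mean(numeric_only=True)  # team mean profile per firm
gap = (ceo - smt_mean).abs().mean(axis=1).rename("personality_gap")

dmc = pd.read_csv("dmc_scores.csv", index_col="firm")["transforming"]
print(gap.to_frame().join(dmc).corr())  # the study's finding implies a negative correlation
```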

Keywords: dynamic managerial capabilities, senior management teams, personality, dynamism

Procedia PDF Downloads 263
1084 2D Titanium, Vanadium Carbide MXene, and Polyaniline Heterostructures for Electrochemical Energy Storage

Authors: Ayomide A Sijuade, Nafiza Anjum

Abstract:

The rising demand for clean and sustainable energy solutions has led the market to create effective energy storage technologies. In this study, we look at the possibility of using a heterostructure made of polyaniline (PANI), titanium carbide (Ti₃C₂), and vanadium carbide (V₂C) for energy storage devices. V₂C is a two-dimensional transition metal carbide with remarkable mechanical properties and electrical conductivity. Ti₃C₂ has solid thermal conductivity and mechanical strength. PANI, on the other hand, is a conducting polymer with customizable electrical characteristics and environmental stability. The heterostructure of V₂C, Ti₃C₂, and PANI is created by layer-by-layer assembly, allowing precise control of film thickness and interface quality. Structural and morphological characterization is carried out using X-ray diffraction, scanning electron microscopy, and atomic force microscopy. The heterostructure's electrochemical performance is then assessed for energy storage applications: electrochemical experiments, such as cyclic voltammetry and galvanostatic charge-discharge tests, examine the heterostructure's charge storage capacity, cycling stability, and rate performance (the standard capacitance expressions are noted below). Comparing the heterostructure to the individual components reveals better energy storage capabilities. V₂C, Ti₃C₂, and PANI synergize to increase specific capacitance, boost charge storage, and prolong cycling stability. The heterostructure's unique arrangement of 2D materials and conducting polymers promotes effective ion diffusion and charge transfer processes, improving the effectiveness of energy storage. The heterostructure also exhibits remarkable electrochemical stability, which minimizes capacity loss after repeated cycling; the longevity and long-term dependability of energy storage systems depend on this quality. By examining the potential of V₂C, Ti₃C₂, and PANI heterostructures, the results of this study advance energy storage technology. These materials' specialized integration and design show potential for use in hybrid energy storage systems, lithium-ion batteries, and supercapacitors. Overall, this research clarifies the development of high-performance energy storage devices utilizing V₂C, Ti₃C₂, and PANI heterostructures, opening the door to effective, long-lasting, and eco-friendly energy storage solutions to satisfy the demands of the modern world.
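As a hedged reference for the electrochemical tests mentioned, the specific capacitance is commonly extracted as follows (I: current, ν: scan rate, ΔV: potential window, m: active mass, Δt: discharge time); the authors' exact evaluation procedure is not stated in the abstract.

```latex
% From cyclic voltammetry (CV):
C_{sp} = \frac{\int I \,\mathrm{d}V}{2\, m\, \nu\, \Delta V}
% From galvanostatic charge-discharge (GCD):
C_{sp} = \frac{I\, \Delta t}{m\, \Delta V}
```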

Keywords: MXenes, energy storage materials, conductive polymers, composites

Procedia PDF Downloads 48
1083 Flash Flood in Gabes City (Tunisia): Hazard Mapping and Vulnerability Assessment

Authors: Habib Abida, Noura Dahri

Abstract:

Flash floods are among the most serious natural hazards, with disastrous environmental and human impacts. They are associated with exceptional rain events characterized by short duration, very high intensity, rapid flow, and small spatial extent. Flash floods happen very suddenly and are difficult to forecast. They generally cause damage to agricultural crops, property, and infrastructure, and may even result in the loss of human lives. The city of Gabes (south-eastern Tunisia) has been exposed to numerous damaging floods because of its mild topography, clay soil, high urbanization rate, and erratic rainfall distribution. The risks associated with this situation are expected to increase further in the future because of climate change, which is deemed responsible for the increasing frequency and severity of this natural hazard. A major flooding event hit the region on June 2nd, 2014, causing human deaths and major material losses. It resulted in the stagnation of storm water in the numerous low-lying zones of the study area, thereby endangering human health and causing disastrous environmental impacts. The characterization of flood risk in the Gabes watershed (south-eastern Tunisia) is therefore considered an important step for flood management. The Analytical Hierarchy Process (AHP) method, coupled with Monte Carlo simulation and a geographic information system, was applied to delineate and characterize flood-prone areas (a sketch of the AHP weighting step is given below). A spatial database was developed based on the geological map, a digital elevation model, land use, and rainfall data in order to evaluate the different factors susceptible to affect flood analysis. The results obtained were validated against remote sensing data for the zones that showed very high flood hazard during the extreme rainfall event of June 2014 that hit the study basin. Moreover, a survey was conducted in different areas of the city in order to understand and explore the different causes of this disaster, its extent, and its consequences.
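A minimal sketch of the AHP weighting step is shown below; the pairwise judgment matrix is a hypothetical example, not the study's actual matrix, and the paper's Monte Carlo step would resample such judgments to propagate uncertainty into the weights.

```python
# AHP factor weights via the principal eigenvector, with Saaty's consistency check.
import numpy as np

factors = ["slope", "soil", "land_use", "rainfall"]  # hypothetical hazard factors
A = np.array([  # Saaty-scale pairwise comparisons: A[i, j] = importance of i over j
    [1.0, 3.0, 2.0, 0.5],
    [1/3, 1.0, 0.5, 0.25],
    [0.5, 2.0, 1.0, 1/3],
    [2.0, 4.0, 3.0, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = len(A)
ci = (eigvals[k].real - n) / (n - 1)  # consistency index
cr = ci / 0.90                        # random index RI = 0.90 for n = 4 (Saaty)
print(dict(zip(factors, weights.round(3))), f"CR = {cr:.3f}")  # CR < 0.1 is acceptable
```

The resulting weights are then applied to the rasterized factor layers in the GIS to produce the hazard map.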

Keywords: analytical hierarchy process, flash floods, Gabes, remote sensing, Tunisia

Procedia PDF Downloads 104
1082 Simons, Ehrlichs and the Case for Polycentricity – Why Growth-Enthusiasts and Growth-Sceptics Must Embrace Polycentricity

Authors: Justus Enninga

Abstract:

Enthusiasts and skeptics of economic growth do not have much in common in their preferences for institutional arrangements that solve ecological conflicts. This paper argues that agreement between the two opposing schools can be found in the Bloomington School's concept of polycentricity. Growth-enthusiasts, who will be referred to as Simons after the economist Julian Simon, and growth-skeptics, named Ehrlichs after the ecologist Paul R. Ehrlich, both profit from a governance structure in which many officials and decision structures are assigned limited and relatively autonomous prerogatives to determine, enforce, and alter legal relationships. The paper advances this argument in four steps. First, it clarifies what Simons and Ehrlichs mean when they talk about growth, and what the arguments for and against growth-enhancing or degrowth policies are for each side. Secondly, the paper advances the concept of polycentricity as first introduced by Michael Polanyi and later refined for the study of governance by the Bloomington School of institutional analysis around the Nobel Prize laureate Elinor Ostrom. The Bloomington School defines polycentricity as a non-hierarchical, institutional, and cultural framework that makes possible the coexistence of multiple centers of decision making with different objectives and values, and that sets the stage for an evolutionary competition between the complementary ideas and methods of those different decision centers. In the third and fourth parts, it is shown how the concept of polycentricity is of crucial importance for growth-enthusiasts and growth-skeptics alike. The shorter third part surveys the literature on growth-enhancing policies and argues that large parts of the literature already accept that polycentric forms of governance, like markets, the rule of law, and federalism, are an important part of economic growth. Part four delves into the more nuanced question of why a stagnant steady-state economy, or even an economy that de-grows, would still find polycentric governance desirable. While the majority of degrowth proposals follow a top-down approach requiring direct governmental control, a contrasting bottom-up approach is advanced. A decentralized, polycentric approach is desirable because it allows for the utilization of tacit information dispersed in society and provides an institutionalized discovery process for new solutions to the problem of ecological collective action, no matter whether one belongs to the Simons or the Ehrlichs in a green political economy.

Keywords: degrowth, green political theory, polycentricity, institutional robustness

Procedia PDF Downloads 181
1081 Estimation of Scour Using a Coupled Computational Fluid Dynamics and Discrete Element Model

Authors: Zeinab Yazdanfar, Dilan Robert, Daniel Lester, S. Setunge

Abstract:

Scour has been identified as the most common threat to bridge stability worldwide. Traditionally, scour around bridge piers is calculated using empirical approaches that have considerable limitations and are difficult to generalize. The multi-physics nature of scour, which involves turbulent flow, soil mechanics, and solid-fluid interactions, cannot be captured by simple empirical equations developed from limited laboratory data. These limitations can be overcome by direct numerical modeling of the coupled hydro-mechanical scour process, which provides a robust prediction of bridge scour and valuable insights into the scour process. Several numerical models have been proposed in the literature for bridge scour estimation, including Eulerian flow models and coupled Euler-Lagrange models incorporating an empirical sediment transport description. However, the contact forces between particles and the flow-particle interaction have not been taken into consideration. Incorporating collisional and frictional forces between soil particles, as well as the effect of flow-driven forces on particles, will facilitate accurate modeling of the complex nature of scour. In this study, a coupled Computational Fluid Dynamics and Discrete Element Model (CFD-DEM) has been developed to simulate the scour process; it directly models the hydro-mechanical interactions between the sediment particles and the flowing water. This approach obviates the need for an empirical description, as the fundamental fluid-particle and particle-particle interactions are fully resolved. The sediment bed is simulated as a dense pack of particles whose frictional and collisional forces are calculated, whilst the turbulent fluid flow is modeled using a Reynolds-Averaged Navier-Stokes (RANS) approach; a simplified sketch of the coupling loop is given below. The CFD-DEM model is validated against experimental data in order to assess its reliability. The modeling results reveal the criticality of particle impact in the assessment of scour depth, which, to the authors' best knowledge, has not been considered in previous studies. The results of this study open new perspectives on the assessment of scour depth and timing, which is key to managing the failure risk of bridge infrastructure.
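To indicate the structure of such a coupling, the following is a deliberately simplified one-dimensional sketch of a CFD-DEM step: the fluid solver supplies local velocities (stubbed here), drag is applied to each particle, and a linear-spring contact force is integrated. All parameters are illustrative; a real implementation would use a RANS fluid solve, full contact models, and three dimensions.

```python
# Toy CFD-DEM coupling loop: drag from the fluid plus DEM contact forces.
import numpy as np

def drag_force(u_fluid, v_particle, d, mu=1e-3):
    """Stokes-regime drag on a sphere of diameter d (valid only at low Re)."""
    return 3.0 * np.pi * mu * d * (u_fluid - v_particle)

def dem_step(pos, vel, forces, mass, dt):
    """Explicit (symplectic Euler) update of particle states."""
    vel = vel + forces / mass * dt
    pos = pos + vel * dt
    return pos, vel

pos = np.array([0.00, 0.01])   # two particles near a pier (m)
vel = np.zeros(2)              # m/s
d, rho_s = 1e-3, 2650.0        # particle diameter (m), sediment density (kg/m^3)
mass = rho_s * np.pi / 6 * d**3
dt = 1e-5

for step in range(1000):
    u = np.array([0.5, 0.5])   # local fluid velocities from the CFD solver (stub)
    f = drag_force(u, vel, d)
    gap = pos[1] - pos[0] - d  # linear-spring contact if particles overlap
    if gap < 0:
        fc = 1e4 * (-gap)      # contact stiffness k = 1e4 N/m (illustrative)
        f += np.array([-fc, fc])
    pos, vel = dem_step(pos, vel, f, mass, dt)

print(pos, vel)
```

In a full model, the momentum exchanged with the particles is also fed back into the fluid equations, closing the two-way coupling.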

Keywords: bridge scour, discrete element method, CFD-DEM model, multi-phase model

Procedia PDF Downloads 129
1080 Influence of Freeze-Thaw Cycles on Protein Integrity and Quality of Chicken Meat

Authors: Nafees Ahmed, Nur Izyani Kamaruzman, Saralla Nathan, Mohd Ezharul Hoque Chowdhury, Anuar Zaini Md Zain, Iekhsan Othman, Sharifah Binti Syed Hassan

Abstract:

Meat quality is always subject to consumer scrutiny when purchasing from retail markets, particularly regarding the mislabeling of frozen-thawed meat as fresh. Various physiological and biochemical changes influence the quality of meat, and as a major component of muscle tissue, proteins play a major role in muscle foods. In the meat industry, freezing is the most common form of storage of meat products. Repeated cycles of freezing and thawing are common in restaurants, kitchens, and retail outlets, and can also occur during transportation or storage. Temperature fluctuation is responsible for physical, chemical, and biochemical changes. Repeated freeze-thaw cycles degrade the quality of meat by stimulating lipid oxidation and surface discoloration. The shelf life of meat is usually determined by its appearance, texture, color, flavor, microbial activity, and nutritive value, and is influenced by frozen storage and subsequent thawing. The main deterioration of frozen meat during storage is due to protein changes. Due to the large price differences between fresh and frozen-thawed meat, it is of great interest to consumers to know whether a meat product is truly fresh or not. Researchers have mainly focused on reducing the moisture loss caused by freeze-thaw cycles. The water-holding capacity (WHC) of muscle proteins and reduced water content are key quality parameters of meat that ultimately change its color and texture. However, there has been limited progress towards understanding the actual mechanisms behind the meat quality changes under freeze-thaw cycles. Furthermore, the effect of the freeze-thaw process on the integrity of proteins has been largely ignored. In this paper, we have studied the effect of freeze-thawing on the physicochemical changes of chicken meat protein. We assessed the quality of meat by pH, spectroscopic measurements, and Western blot. Our results showed that an increasing number of freeze-thaw cycles causes changes in pH. Measurements of absorbance (UV-visible and IR) indicated the degradation of proteins. The expression of various proteins (CREB, AKT, MAPK, GAPDH, and their phosphorylated forms) was examined by Western blot. These results indicate that repeated freeze-thaw cycles are responsible for protein deterioration, thus decreasing the nutritive value of meat and compromising the acceptability of these products under Islamic Sharia.

Keywords: chicken meat, freeze-thaw, halal, protein, western blot

Procedia PDF Downloads 406
1079 The Role of Nickel on the High-Temperature Corrosion of Model Alloys (Stainless Steels) before and after Breakaway Corrosion at 600°C: A Microstructural Investigation

Authors: Imran Hanif, Amanda Persdotter, Sedigheh Bigdeli, Jesper Liske, Torbjorn Jonsson

Abstract:

Renewable fuels such as biomass/waste for power production are an attractive alternative to fossil fuels in order to achieve CO₂-neutral power generation. However, their combustion results in the release of corrosive species, which puts high demands on the corrosion resistance of the alloys used in the boiler. Stainless steels containing nickel and/or nickel-containing coatings are regarded as suitable corrosion-resistant materials, especially in the superheater regions. However, the corrosive environment in the boiler, caused by the presence of water vapour and reactive alkali, very rapidly breaks down the primary protection, i.e., the Cr-rich oxide scale formed on stainless steels. The lifetime of the components therefore relies on the properties of the oxide scale formed after breakaway, i.e., the secondary protection. The aim of the current study is to investigate the role of varying nickel content (0-82%) in the high-temperature corrosion of model alloys with 18% Cr (Fe in balance) under laboratory conditions mimicking industrial conditions at 600°C. The influence of nickel is investigated on both the primary protection and, especially, the secondary protection, i.e., the scale formed after breakaway, during the oxidation/corrosion process in dry O₂ (primary protection) and in more aggressive environments containing H₂O, K₂CO₃, and KCl (secondary protection). All the investigated alloys experience a very rapid loss of the primary protection, i.e., the Cr-rich (Cr,Fe)₂O₃, and the formation of secondary protection in the aggressive environments. The microstructural investigation showed that the secondary protection of all the alloys has a very similar microstructure in all the aggressive environments, consisting of an outward-growing iron oxide and an inward-growing spinel oxide, (Fe,Cr,Ni)₃O₄. The oxidation kinetics revealed that it is possible to influence the protectiveness of the scale formed after breakaway (secondary protection) through the amount of nickel in the alloy. The difference in oxidation kinetics of the secondary protection is linked to the microstructure and chemical composition of the complex spinel oxide. The detailed microstructural investigations were carried out using analytical techniques such as electron backscatter diffraction (EBSD) and energy-dispersive X-ray spectroscopy (EDS) in scanning and transmission electron microscopes, and the results are compared with thermodynamic calculations performed with the Thermo-Calc software.

Keywords: breakaway corrosion, EBSD, high-temperature oxidation, SEM, TEM

Procedia PDF Downloads 135
1078 Comprehensive Longitudinal Multi-omic Profiling in Weight Gain and Insulin Resistance

Authors: Christine Y. Yeh, Brian D. Piening, Sarah M. Totten, Kimberly Kukurba, Wenyu Zhou, Kevin P. F. Contrepois, Gucci J. Gu, Sharon Pitteri, Michael Snyder

Abstract:

Three million deaths worldwide are attributed to obesity. However, the biomolecular mechanisms that link adiposity to subsequent disease states are poorly understood. Insulin resistance characterizes approximately half of obese individuals and is a major cause of obesity-mediated diseases such as Type II diabetes, hypertension, and other cardiovascular diseases. This study makes use of longitudinal, quantitative, high-throughput multi-omics (genomics, epigenomics, transcriptomics, glycoproteomics, etc.) methodologies on blood samples to develop multigenic and multi-analyte signatures associated with weight gain and insulin resistance. Participants underwent a 30-day period of weight gain via excessive caloric intake followed by a 60-day period of restricted dieting and return to baseline weight. Blood samples were taken at three time points per patient: baseline, peak weight, and post weight loss. Patients were characterized as either insulin resistant (IR) or insulin sensitive (IS) before having their samples processed via longitudinal multi-omic technologies. This comparative study revealed a wealth of biomolecular changes associated with weight gain, identified using machine learning, clustering, and network analysis methods. Pathways of interest included those involved in lipid remodeling, acute inflammatory response, and glucose metabolism. Some of these biomolecules returned to baseline levels as the patients returned to normal weight, whilst some remained elevated. IR patients exhibited key differences in inflammatory response regulation compared to IS patients at all time points. These signatures suggest differential metabolism and inflammatory pathways between IR and IS patients. Biomolecular differences associated with weight gain and insulin resistance were identified at various levels: gene expression, epigenetic change, transcriptional regulation, and glycosylation. This study not only contributed new biology that could be of use in preventing or predicting obesity-mediated diseases, but also matured novel biomedical informatics technologies to produce and process data on many comprehensive omics levels.
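One common way to group analytes by their longitudinal behavior across the three time points is to standardize each trajectory and cluster the result. A minimal sketch with scikit-learn, using random placeholder values rather than the study's omics measurements:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hedged sketch: clustering analyte trajectories over three time points
# (baseline, peak weight, post weight loss). Data are random placeholders,
# not the study's measurements.
rng = np.random.default_rng(0)
trajectories = rng.normal(size=(500, 3))  # 500 analytes x 3 time points

scaled = StandardScaler().fit_transform(trajectories)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scaled)
print("analytes per cluster:", np.bincount(labels))
```

Clusters whose trajectories return to baseline versus those that stay elevated after weight loss would then be separated by inspecting each cluster's mean profile.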

Keywords: insulin resistance, multi-omics, next generation sequencing, proteogenomics, type ii diabetes

Procedia PDF Downloads 425
1077 Molecular Dynamics Simulations on Richtmyer-Meshkov Instability of Li-H2 Interface at Ultra High-Speed Shock Loads

Authors: Weirong Wang, Shenghong Huang, Xisheng Luo, Zhenyu Li

Abstract:

Material mixing processes and related dynamic issues under extreme compression conditions have attracted growing attention over the last ten years because of their engineering appeal in inertial confinement fusion (ICF) and hypervelocity aircraft development. However, models and methods that can handle fully coupled turbulent material mixing and complex fluid evolution in the high-energy-density regime are still lacking. In macroscopic hydrodynamics, three numerical methods, namely direct numerical simulation (DNS), large eddy simulation (LES), and the Reynolds-averaged Navier-Stokes (RANS) equations, have reached a relatively acceptable consensus in the low-energy-density regime. In the high-energy-density regime, however, they cannot be applied directly because of dissociation, ionization, and dramatic changes in the equation of state and thermodynamic properties, which may render the governing equations invalid in some coupled situations. At the micro/meso scale, by contrast, methods based on molecular dynamics (MD) and Monte Carlo (MC) models have proved to be promising and effective ways to investigate such issues. In this study, both classical MD and first-principle-based electron force field MD (eFF-MD) methods are applied to investigate the Richtmyer-Meshkov instability (RMI) of a metal lithium and gaseous hydrogen (Li-H2) interface at shock loading speeds ranging from 3 km/s to 30 km/s. It is found that: 1) the classical MD method, based on predefined potential functions, has limits in its application to extreme conditions, since it cannot simulate the ionization process and its potential functions are not suitable for all conditions, while the eFF-MD method can correctly simulate the ionization process owing to its 'ab initio' character; 2) because of computational cost, the eFF-MD results are also influenced by the simulation domain dimensions, boundary conditions, relaxation time choices, etc., and a series of tests was conducted to determine the optimized parameters; 3) ionization induced by strong shock compression has important effects on the evolution of the RMI at the Li-H2 interface, indicating a new micromechanism of RMI in the high-energy-density regime.
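For context, the core of a classical MD code of the kind compared against eFF-MD here is a symplectic time integrator such as velocity Verlet. A minimal one-dimensional sketch, with a harmonic force standing in for a real interatomic potential (all values illustrative):

```python
# Hedged sketch: velocity-Verlet integration, the core loop of classical MD.
# A 1D harmonic force stands in for a real interatomic potential; values
# are illustrative, not the authors' simulation setup.
def force(x, k=1.0):
    return -k * x

m, dt, steps = 1.0, 0.01, 1000
x, v = 1.0, 0.0
f = force(x)
for _ in range(steps):
    x += v * dt + 0.5 * (f / m) * dt ** 2    # position update
    f_new = force(x)
    v += 0.5 * (f + f_new) / m * dt          # velocity update with averaged force
    f = f_new

energy = 0.5 * m * v ** 2 + 0.5 * x ** 2     # should stay near 0.5 for k=1
print(f"x={x:.4f}, v={v:.4f}, total energy={energy:.6f}")
```

In eFF-MD the same kind of loop applies, but the state additionally carries electron positions and wave-packet widths, which is what allows ionization to emerge rather than being baked into a fixed potential.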

Keywords: first-principle, ionization, molecular dynamics, material mixture, Richtmyer-Meshkov instability

Procedia PDF Downloads 223
1076 Pomegranates Attenuate Cognitive and Behavioural Deficits and Reduce Inflammation in a Transgenic Mouse Model of Alzheimer's Disease

Authors: M. M. Essa, S. Subash, M. Akbar, S. Al-Adawi, A. Al-Asmi, G. J. Guillemein

Abstract:

Objective: Transgenic (Tg) mice carrying an amyloid precursor protein (APP) gene mutation develop extracellular amyloid beta (Aβ) deposition in the brain and severe memory and behavioural deficits with age. These mice serve as an important animal model for testing the efficacy of novel drug candidates for the treatment and management of symptoms of Alzheimer's disease (AD). Several reports have suggested that oxidative stress is the underlying cause of Aβ neurotoxicity in AD. Pomegranates contain very high levels of antioxidants and have several medicinal properties that may be useful for improving the quality of life of AD patients. In this study, we investigated the effect of dietary supplementation with Omani pomegranate extract on memory, anxiety, and learning skills, along with inflammation, in an AD mouse model carrying the double Swedish APP mutation (APPsw/Tg2576). Methods: The experimental groups of APP-transgenic mice were fed custom-mix diets (pellets) containing 4% pomegranate from the age of 4 months. We assessed spatial memory and learning ability, psychomotor coordination, and anxiety-related behavior in Tg and wild-type mice at the ages of 4-5 months and 18-19 months using the Morris water maze test, rotarod test, elevated plus maze test, and open field test. Inflammatory parameters were also analysed. Results: APPsw/Tg2576 mice fed a standard chow diet without pomegranate showed significant memory deficits, increased anxiety-related behavior, and severe impairment in spatial learning ability, position discrimination learning ability, and motor coordination, along with increased inflammation, compared to wild-type mice on the same diet at the age of 18-19 months. In contrast, APPsw/Tg2576 mice fed a diet containing 4% pomegranate showed significant improvements in memory, learning, locomotor function, and anxiety-related behavior, with reduced inflammatory markers, compared to APPsw/Tg2576 mice fed the standard chow diet. Conclusion: Our results suggest that dietary supplementation with pomegranate may slow the progression of cognitive and behavioural impairments in AD. The exact mechanism is still unclear, and further extensive research is needed.
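Behavioral endpoints such as Morris water maze escape latency are typically compared between diet groups with a simple two-sample test. A minimal SciPy sketch on placeholder latencies, not the study's measurements:

```python
from scipy import stats

# Hedged sketch: comparing escape latencies (s) between diet groups.
# Values are illustrative placeholders, not the study's data.
control_diet = [52.1, 48.7, 55.3, 60.2, 49.9, 58.4]
pomegranate_diet = [35.2, 40.1, 33.8, 38.9, 42.0, 36.5]

t_stat, p_value = stats.ttest_ind(control_diet, pomegranate_diet)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```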

Keywords: Alzheimer's disease, pomegranates, Oman, cognitive decline, memory loss, anxiety, inflammation

Procedia PDF Downloads 526
1075 Patterns of Eosinophilia in Cardiac Patients and Its Association with Endomyocardial Disease Presenting to a Tertiary Care Hospital in Peshawar

Authors: Rashid Azeem

Abstract:

Introduction: Eosinophilia, which can be categorized as mild, moderate, or severe on the basis of increasing eosinophil counts, can be responsible for a wide range of cardiac manifestations, varying from simple myocarditis to a severe state like endomyocardial fibrosis. Eosinophils are involved in the pathogenesis of a variety of cardiovascular disorders such as Loeffler endocarditis, eosinophilic granulomatosis with polyangiitis (EGPA), and hypereosinophilic syndrome (HES). Among them, HES carries an incidence of cardiac involvement between 48% and 75% and is a main cause of cardiac morbidity and mortality due to eosinophilic involvement. Aims and objectives: The aim of this study was to determine the frequency of eosinophilia in cardiac patients and to ascertain the evidence of endomyocardial disease in eosinophilic patients in a cardiology institution. Material and Methods: This cross-sectional analytical study was conducted in the Hematology Department of the Peshawar Institute of Cardiology after approval from the hospital ethics and research committee. All 70 patients underwent a detailed history and clinical examination. Investigations such as CBC, chest X-ray, ECG, echocardiography, and angiography were used to monitor the patients' clinical status. Data were analyzed using SPSS version 25 and MS Excel. Results: Of the 70 patients in our study, 66 (94%) showed evidence of cardiac manifestations. We observed a number of abnormal ECG patterns in cardiac patients presenting with eosinophilia, such as T wave changes, loss of R wave, sinus bradycardia with LVH strain, and ST segment abnormality. Abnormal echocardiographic findings included valvular abnormalities (45.7%), regional wall motion abnormalities (2.8%), and isolated ventricular dysfunction (21.4%), while 10% of patients had normal echocardiography. We further noted abnormal coronary angiography findings in cardiac patients with eosinophilia, ranging from single-vessel to multi-vessel occlusions. Conclusions: Eosinophils are involved in the pathogenesis of a variety of cardiovascular disorders that can be detected by various diagnostic means, and the severity of disease increases with time and with increasing eosinophil count, ranging from simple myocarditis to a fatal condition like endomyocardial fibrosis. Thus, an increased eosinophil count as a laboratory parameter in cardiac patients may be a sign of endomyocardial damage, which can help cardiologists intervene more aggressively than with the routine approach to a cardiac patient.
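A small helper for the mild/moderate/severe categorization mentioned above might look like the following; the thresholds (500, 1,500, and 5,000 cells/µL) are commonly cited hematology cut-offs and are an assumption here, not values stated by the authors:

```python
# Hedged sketch: categorizing an absolute eosinophil count (cells/uL).
# Thresholds are commonly cited cut-offs, assumed for illustration only.
def categorize_eosinophilia(count_per_ul: float) -> str:
    if count_per_ul < 500:
        return "normal"
    elif count_per_ul < 1500:
        return "mild"
    elif count_per_ul < 5000:
        return "moderate"
    return "severe"

for count in (350, 900, 2400, 7800):
    print(count, "->", categorize_eosinophilia(count))
```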

Keywords: eosinophilia, endomyocardial fibrosis, cardiac, hypereosinophilic syndrome

Procedia PDF Downloads 61
1074 Evaluation of Low Temperature as Treatment Tool for Eradication of Mediterranean Fruit Fly (Ceratitis capitata) in Artificial Diet

Authors: Farhan J. M. Al-Behadili, Vineeta Bilgi, Miyuki Taniguchi, Junxi Li, Wei Xu

Abstract:

The Mediterranean fruit fly (Ceratitis capitata) is one of the most destructive pests of fruits and vegetables. The Medfly originated in Africa, has spread to many countries, and is currently an endemic pest in Western Australia. It has been recorded from over 300 plant species, including fruits, vegetables, and nuts; its main hosts include blueberries, citrus, stone fruit, pome fruits, peppers, tomatoes, and figs. Global trade in fruits and other farm-fresh products suffers damage from this pest, which has prompted the need to develop more effective ways to control it. The available quarantine treatment technologies mainly include chemical treatment (e.g., fumigation) and non-chemical treatments (e.g., cold, heat, and irradiation). In recent years, with the loss of several chemicals, it has become even more important to rely on non-chemical postharvest control technologies (i.e., heat, cold, and irradiation) to control fruit flies. Cold treatment is one of the most promising directions in postharvest treatment because it is free of chemical residues, mitigates or kills the pest population, maintains fruit firmness, and prolongs storage time. It can also be applied to fruits after packing and 'in transit' during lengthy sea transport for export. However, only limited systematic study of the cold treatment of Medfly stages in artificial diets has been reported, and such work is critical to provide a scientific basis for comparison with previous research on plant products and for designing effective cold treatments for exported plant products. The overall purpose of this study was to evaluate and understand the responses of the different Medfly life stages to cold treatment. The long-term goal was to optimize current postharvest treatments and develop more environmentally friendly, cost-effective, and efficient treatments for controlling the Medfly. Cold treatments with different exposure times were evaluated for the eradication of Medfly reared on a carrot diet, with mortality as the key response studied, including the effect of exposure time on the mean mortality of each Medfly stage.
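Exposure-time mortality data of this kind are often summarized by fitting a logistic dose-response curve and reporting lethal-time estimates such as LT50. A minimal SciPy sketch, using placeholder mortality values rather than the study's results:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch: logistic fit of mortality vs exposure time and LT50 estimate.
# Data points are illustrative placeholders, not the study's results.
def logistic(t, lt50, slope):
    return 1.0 / (1.0 + np.exp(-slope * (t - lt50)))

exposure_days = np.array([1, 2, 4, 6, 8, 10], dtype=float)
mortality = np.array([0.05, 0.15, 0.45, 0.75, 0.92, 0.99])

(lt50, slope), _ = curve_fit(logistic, exposure_days, mortality, p0=[4.0, 1.0])
print(f"estimated LT50 = {lt50:.2f} days, slope = {slope:.2f}")
```

Fitting a separate curve per life stage (egg, larva, pupa, adult) would expose which stage is most cold-tolerant and therefore sets the required treatment duration.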

Keywords: cold treatment, fruit fly, Ceratitis capitata, carrot diet, temperature effects

Procedia PDF Downloads 222
1073 The Emergence of Memory at the Nanoscale

Authors: Victor Lopez-Richard, Rafael Schio Wengenroth Silva, Fabian Hartmann

Abstract:

Memcomputing is a computational paradigm that combines information processing and storage on the same physical platform. Key elements for this topic are devices with an inherent memory, such as memristors, memcapacitors, and meminductors. Despite the widespread emergence of memory effects in various solid systems, a clear understanding of the basic microscopic mechanisms that trigger them remains a puzzling task. We report basic ingredients of the theory of solid-state transport, intrinsic to a wide range of mechanisms, as sufficient conditions for a memristive response that point to the natural emergence of memory. This emergence should be discernible under an adequate set of driving inputs, as highlighted by our theoretical predictions, and general common trends can thus be listed that become the rule and not the exception, with contrasting signatures according to symmetry constraints, either built in or induced by external factors at the microscopic level. Explicit analytical figures of merit for the memory modulation of the conductance are presented, unveiling very concise and accessible correlations between general intrinsic microscopic parameters such as relaxation times, activation energies, and efficiencies (encountered throughout various fields in physics) and external drives: voltage pulses, temperature, illumination, etc. These building blocks of memory can be extended to a vast universe of materials and devices, with combinations of parallel and independent transport channels, providing an efficient and unified physical explanation for a wide class of resistive memory devices that have emerged in recent years. Its simplicity and practicality have also allowed a direct correlation with reported experimental observations, with the potential of pointing out optimal driving configurations. The main methodological tools combine three quantum transport approaches, namely a Drude-like model, the Landauer-Buttiker formalism, and field-effect transistor emulators, with the microscopic characterization of nonequilibrium dynamics. Both qualitative and quantitative agreement with available experimental responses is provided to validate the main hypothesis. This analysis also sheds light on the basic universality of the complex natural impedances of systems out of equilibrium and might help pave the way for new trends in the area of memory formation as well as its technological applications.
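The memristive response discussed above is most easily illustrated with the simplest textbook memristor model, the HP linear-drift model, which is used here as a hedged stand-in for the authors' more general transport treatment; all parameter values are illustrative:

```python
import numpy as np

# Hedged sketch: HP linear-drift memristor model under a sinusoidal drive,
# illustrating a pinched current-voltage hysteresis loop. Parameters are
# illustrative, not the authors' figures of merit.
R_on, R_off, D, mu = 100.0, 16e3, 10e-9, 1e-14   # ohm, ohm, m, m^2/(V*s)
dt, steps = 1e-4, 20000
w = 0.1 * D                                       # initial doped-region width

t = np.arange(steps) * dt
v = np.sin(2 * np.pi * 1.0 * t)                   # 1 Hz, 1 V amplitude drive
i = np.zeros(steps)
for n in range(steps):
    R = R_on * (w / D) + R_off * (1 - w / D)      # state-dependent resistance
    i[n] = v[n] / R
    w += mu * (R_on / D) * i[n] * dt              # linear drift of the state
    w = min(max(w, 0.0), D)                       # keep state within bounds

print(f"max |i| = {np.abs(i).max():.3e} A, final w/D = {w / D:.3f}")
```

Plotting i against v would show the pinched hysteresis loop that is the standard fingerprint of a memristive device; relaxation times and activation energies of the kind the abstract names would enter through the state equation in more realistic models.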

Keywords: memories, memdevices, memristors, nonequilibrium states

Procedia PDF Downloads 91
1072 Exploring Coping Strategies among Caregivers of Children Who Have Survived Cancer

Authors: Noor Ismael, Somaya Malkawi, Sherin Al Awady, Taleb Ismael

Abstract:

Background/Significance: Cancer is a serious health condition that affects individuals' quality of life during and after its course. Children who have survived cancer and their caregivers may deal with residual physical, cognitive, or social disabilities. There is little research on caregivers' health and wellbeing after cancer, and to the authors' best knowledge, there is no specific research on how caregivers cope with everyday stressors after cancer. Therefore, this study aimed to explore the coping strategies that caregivers of children who have survived cancer utilize to overcome everyday stressors. Methods: This study utilized a descriptive survey design. The sample consisted of 103 caregivers who visited the health and wellness clinic at a national cancer center (additional demographics are presented in the results). The sample included caregivers of children who had been off cancer treatments for at least two years at the beginning of data collection. The institutional review board approved this study. Caregivers who agreed to participate completed the survey, which collected caregiver-reported demographic information and the Brief COPE, a measure of how frequently caregivers engage in certain coping strategies. The Brief COPE consists of 14 coping subscales: self-distraction, active coping, denial, substance use, use of emotional support, use of instrumental support, behavioral disengagement, venting, positive reframing, planning, humor, acceptance, religion, and self-blame. Data analyses included calculating subscale scores for the fourteen coping strategies and analyzing frequencies of demographics and coping strategies. Results: Of the 103 caregivers who participated in this study, 62% were mothers, 80% were married, 45% had finished high school, 50% did not work outside the house, and 60% had low family income. Results showed that religious coping (66%) and acceptance (60%) were the most utilized coping strategies, followed by positive reframing (45%), active coping (44%), and planning (43%). The least utilized coping strategies in our sample were humor (5%), behavioral disengagement (8%), and substance use (10%). Conclusions: Caregivers of children who have survived cancer mostly utilize religious coping and acceptance in dealing with everyday stressors. Because these coping strategies, unlike active coping and planning, do not directly address stressors, it is important to support caregivers in choosing and implementing effective coping strategies. Knowing from our results that some caregivers may use substance use as a coping strategy, which has negative health effects on caregivers and their children, direct interventions should target these caregivers and their families.
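For reference, Brief COPE subscale scores are computed by summing the two items that make up each of the 14 subscales. A minimal scoring sketch, where the item-to-subscale mapping shown is an illustrative placeholder rather than the published scoring key:

```python
# Hedged sketch: scoring Brief COPE subscales as the sum of their two items.
# The item numbers in this mapping are placeholders, not the published key.
subscale_items = {
    "self-distraction": (1, 2),
    "active coping": (3, 4),
    "denial": (5, 6),
    # ... the remaining 11 subscales would follow the same two-item pattern
}

def score_brief_cope(responses: dict) -> dict:
    """responses maps item number -> rating (1-4)."""
    return {name: responses[a] + responses[b]
            for name, (a, b) in subscale_items.items()}

example = {1: 3, 2: 2, 3: 4, 4: 4, 5: 1, 6: 1}
print(score_brief_cope(example))
```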

Keywords: caregivers, cancer, stress, coping

Procedia PDF Downloads 166
1071 Fillet Chemical Composition of Sharpsnout Seabream (Diplodus puntazzo) from Wild and Cage-Cultured Conditions

Authors: Oğuz Taşbozan, Celal Erbaş, Şefik Surhan Tabakoğlu, Mahmut Ali Gökçe

Abstract:

Polyunsaturated fatty acids (PUFAs), and particularly the levels and ratios of ω-3 and ω-6 fatty acids, are important for biological functions in humans and are recognized as essential components of the human diet. From many points of view, consumers wonder how the nutritional composition of fish raised under culture conditions compares with that of fish caught from the wild. Therefore, the aim of this study was to investigate the chemical composition of cage-cultured and wild sharpsnout seabream, an economically important fish species preferred by consumers in Turkey. Wild fish were caught, and cultured fish were obtained from commercial cage-culture companies. Eight fish were obtained for each group, with average weights of 245.8±13.5 g for cultured and 149.4±13.3 g for wild samples. All samples were stored in a freezer (-18 °C), and analyses were carried out in triplicate using homogenized, boneless fish fillets. Proximate compositions (protein, ash, moisture, and lipid) were determined. The fatty acid composition was analyzed with a Clarus 500 GC with autosampler (PerkinElmer, USA). Statistically significant differences in proximate composition were found between cage-cultured and wild samples of sharpsnout seabream. The saturated fatty acid (SFA), monounsaturated fatty acid (MUFA), and PUFA amounts of cultured and wild sharpsnout seabream were also significantly different. The ω3/ω6 ratio was higher in the cultured group, and the protein and lipid levels of the cultured samples were significantly higher than those of their wild counterparts. One reason for this is that cultured fish are exposed to continuous feeding, which has a direct effect on their body lipid content. The fatty acid composition of fish differs depending on a variety of factors, including species, diet, environmental factors, and whether they are farmed or wild. The higher levels of MUFA in the cultured fish may be explained by the high content of monoenoic fatty acids in the feed of cultured fish, as in some other species. The ω3/ω6 ratio is a good index for comparing the relative nutritional value of fish oils, and in our study, the cultured sharpsnout seabream appears to be nutritionally superior in terms of ω3/ω6. Acknowledgement: This work was supported by the Scientific Research Project Unit of the University of Cukurova, Turkey, under grant no FBA-2016-5780.
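Reporting conventions such as ΣSFA, ΣMUFA, ΣPUFA and the ω3/ω6 ratio reduce to simple sums over the fatty acid profile. A minimal sketch with placeholder percentages rather than the study's GC results:

```python
# Hedged sketch: computing SFA/MUFA/PUFA sums and the omega-3/omega-6 ratio
# from a fatty acid profile (% of total fatty acids). Values are placeholders.
profile = {"16:0": 18.2, "18:0": 4.1,            # saturated
           "16:1n-7": 5.3, "18:1n-9": 22.4,      # monounsaturated
           "18:2n-6": 9.8, "20:4n-6": 1.9,       # omega-6 PUFA
           "20:5n-3": 7.6, "22:6n-3": 12.1}      # omega-3 PUFA

SFA, MUFA = {"16:0", "18:0"}, {"16:1n-7", "18:1n-9"}
N3, N6 = {"20:5n-3", "22:6n-3"}, {"18:2n-6", "20:4n-6"}

sfa = sum(profile[k] for k in SFA)
mufa = sum(profile[k] for k in MUFA)
n3, n6 = sum(profile[k] for k in N3), sum(profile[k] for k in N6)
pufa = n3 + n6
print(f"SFA={sfa:.1f}%, MUFA={mufa:.1f}%, PUFA={pufa:.1f}%, w3/w6={n3 / n6:.2f}")
```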

Keywords: Diplodus puntazzo, cage cultured, PUFA, fatty acid

Procedia PDF Downloads 261
1070 Terrestrial Laser Scans to Assess Aerial LiDAR Data

Authors: J. F. Reinoso-Gordo, F. J. Ariza-López, A. Mozas-Calvache, J. L. García-Balboa, S. Eddargani

Abstract:

The quality of a DEM may depend on several factors, such as the data source, the capture method, the processing used to derive it, and the cell size of the DEM. The two most important capture methods for producing regional-sized DEMs are photogrammetry and LiDAR; DEMs covering entire countries have been obtained with these methods. The quality of these DEMs has traditionally been evaluated by national cartographic agencies through point-based sampling focused on the vertical component. For this type of evaluation, there are standards such as the NMAS and the ASPRS Positional Accuracy Standards for Digital Geospatial Data. However, it seems more appropriate to carry out this evaluation with a method that takes into account the surface nature of the DEM, so that the sampling is surface-based rather than point-based. This work is part of the research project "Functional Quality of Digital Elevation Models in Engineering", in which it is necessary to control the quality of a DEM whose data source is an experimental LiDAR flight with a density of 14 points per square meter, which we call the Point Cloud Product (PCpro). The present work describes the data capture on the ground and the postprocessing tasks needed to obtain the point cloud used as the reference (PCref) against which the PCpro quality is evaluated. Each PCref consists of a 50x50 m patch obtained by registering scans from 4 different stations. The study area was the Spanish region of Navarra, which covers 10,391 km2; 30 homogeneously distributed patches were necessary to sample the entire surface. The patches were captured using a Leica BLK360 terrestrial laser scanner mounted on a pole reaching heights of up to 7 meters; the scanner was mounted inverted so that the characteristic shadow circle that appears when the scanner is in the direct position is avoided. To ensure that the accuracy of the PCref is greater than that of the PCpro, the georeferencing of the PCref was carried out with real-time GNSS, with a positioning accuracy better than 4 cm; this is much better than the altimetric mean square error estimated for the PCpro (<15 cm). The kind of DEM of interest corresponds to the bare earth, so it was necessary to apply a filter to eliminate vegetation and auxiliary elements such as poles, tripods, etc. After the postprocessing tasks, the PCref is ready to be compared with the PCpro using different techniques: cloud-to-cloud, or DEM-to-DEM after a resampling process.
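The cloud-to-cloud comparison mentioned at the end essentially reduces to nearest-neighbor distances from each evaluated point to the reference cloud. A minimal sketch with SciPy's KD-tree, on random placeholder coordinates rather than the Navarra data:

```python
import numpy as np
from scipy.spatial import cKDTree

# Hedged sketch: cloud-to-cloud comparison as nearest-neighbor distances
# from each evaluated point (PCpro) to the reference cloud (PCref).
# Coordinates are random placeholders, not the Navarra patches.
rng = np.random.default_rng(42)
pc_ref = rng.uniform(0, 50, size=(20_000, 3))              # 50x50 m patch
pc_pro = pc_ref + rng.normal(0, 0.10, size=pc_ref.shape)   # evaluated cloud

dists, _ = cKDTree(pc_ref).query(pc_pro, k=1)
print(f"mean = {dists.mean():.3f} m, RMSE = {np.sqrt((dists ** 2).mean()):.3f} m")
```

Aggregating these per-patch statistics over the 30 patches would give a surface-based quality figure instead of the traditional point-based one.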

Keywords: data quality, DEM, LiDAR, terrestrial laser scanner, accuracy

Procedia PDF Downloads 97
1069 Identification of Suitable Sites for Rainwater Harvesting in Salt Water Intruded Area by Using Geospatial Techniques in Jafrabad, Amreli District, India

Authors: Pandurang Balwant, Ashutosh Mishra, Jyothi V., Abhay Soni, Padmakar C., Rafat Quamar, Ramesh J.

Abstract:

Seawater intrusion into coastal aquifers has become one of the major environmental concerns. Although it is a natural phenomenon, it can be induced by anthropogenic activities such as excessive exploitation of groundwater and seacoast mining. The geological and hydrogeological conditions, including groundwater heads and groundwater pumping patterns in coastal areas, also influence the magnitude of seawater intrusion. The problem can, however, be remediated by preventive measures such as rainwater harvesting and artificial recharge. The present study is an attempt to identify suitable sites for rainwater harvesting in a salt-intrusion-affected area near the coastal aquifer of Jafrabad town, Amreli district, Gujarat, India. The physico-chemical water quality results show that, of the 25 groundwater samples collected from the study area, most contained high concentrations of Total Dissolved Solids (TDS) with major fractions of Na and Cl ions. The Cl/HCO3 ratio was also found to be greater than 1, which indicates saltwater contamination in the study area. A geophysical survey was conducted at nine sites within the study area to explore the extent of seawater contamination. From the inverted resistivity sections, low-resistivity zones (<3 Ohm m) associated with seawater contamination were demarcated in the north and south block pits of the NCJW mines and at Mitiyala village, Lotpur, and Lunsapur village, at depths of 33 m, 12 m, 40 m, 37 m, and 24 m, respectively. Geospatial techniques combined with the Analytical Hierarchy Process (AHP), considering hydrogeological factors, geographical features, drainage pattern, water quality, and the geophysical results for the study area, were used to identify potential zones for rainwater harvesting. A rainwater harvesting suitability model was developed in ArcGIS 10.1, and a rainwater harvesting suitability map for the study area was generated. AHP in combination with weighted overlay analysis is an appropriate method for identifying rainwater harvesting potential zones. The suitability map can further be utilized as a guidance map for the development of rainwater harvesting infrastructure in the study area, either for artificial groundwater recharge facilities or for the direct use of harvested rainwater.
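The AHP weighting step mentioned above derives factor weights from a pairwise comparison matrix via its principal eigenvector and checks a consistency ratio. A minimal sketch, assuming an illustrative 3-factor comparison matrix rather than the study's actual judgments:

```python
import numpy as np

# Hedged sketch: AHP weights from a pairwise comparison matrix (principal
# eigenvector) plus Saaty's consistency ratio. The matrix below is an
# illustrative example, not the study's judgments.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])   # e.g., drainage vs land use vs water quality

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                       # normalized priority vector

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)           # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]            # Saaty's random index values
print("weights:", np.round(weights, 3), " CR:", round(ci / ri, 3))
```

A consistency ratio below 0.10 is the conventional threshold for accepting the judgments; the resulting weights then feed the weighted overlay in the GIS.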

Keywords: analytical hierarchy process, groundwater quality, rainwater harvesting, seawater intrusion

Procedia PDF Downloads 168