Search results for: structural equation modeling semiparametric
214 Assessment of Neurodevelopmental Needs in Duchenne Muscular Dystrophy
Authors: Mathula Thangarajh
Abstract:
Duchenne muscular dystrophy (DMD) is a severe form of X-linked muscular dystrophy caused by mutations in the dystrophin gene, resulting in progressive skeletal muscle weakness. Boys with DMD also have significant cognitive disabilities. The intelligence quotient of boys with DMD is approximately one standard deviation below that of their peers. Detailed neuropsychological testing has demonstrated that boys with DMD have a global developmental impairment, with verbal memory and visuospatial skills most significantly affected. Furthermore, total brain volume and gray matter volume are lower in children with DMD than in age-matched controls. These results are suggestive of a significant structural and functional compromise to the developing brain as a result of absent dystrophin protein expression. There is also some genetic evidence that mutations in the 3’ end of the DMD gene are associated with more severe neurocognitive problems. Our working hypothesis is that (i) boys with DMD do not make gains in neurodevelopmental skills compared to typically developing children and (ii) women carriers of DMD mutations may have subclinical cognitive deficits. We also hypothesize that there may be an intergenerational vulnerability of cognition, with boys of DMD-carrier mothers being more affected cognitively than boys of non-carrier mothers. The objectives of this study are to: 1. assess neurodevelopment in boys with DMD at four time points and perform a baseline neuroradiological assessment; 2. assess cognition in biological mothers of DMD participants at baseline; 3. assess the possible correlation between DMD mutation and cognitive measures. This study also explores functional brain abnormalities in people with DMD by examining how regional and global connectivity of the brain underlies executive function deficits in DMD.
Such research can contribute to a better holistic understanding of the cognitive alterations due to DMD and could potentially allow clinicians to create better-tailored treatment plans for the DMD population. There are four study visits for each participant (baseline, 2-4 weeks, 1 year, 18 months). At each visit, the participant completes the NIH Toolbox Cognition Battery, a validated psychometric measure recommended by the NIH Common Data Elements for use in DMD. Visits 1, 3, and 4 also involve the administration of the BRIEF-2, ABAS-3, PROMIS/NeuroQoL, PedsQL Neuromuscular Module 3.0, Draw a Clock Test, and an optional fMRI scan with the N-back matching task. We expect to enroll 52 children with DMD, 52 mothers of children with DMD, and 30 healthy control boys. This study began in 2020 during the height of the COVID-19 pandemic, with subsequent delays in recruitment because of travel restrictions. However, we have persevered and continued to recruit new participants for the study. We partnered with the Muscular Dystrophy Association (MDA), which helped advertise the study to interested families. Since then, families from across the country have contacted us about their interest in the study. We plan to continue to enroll a diverse population of DMD participants to contribute toward a better understanding of Duchenne muscular dystrophy.
Keywords: neurology, Duchenne muscular dystrophy, muscular dystrophy, cognition, neurodevelopment, x-linked disorder, DMD, DMD gene
Procedia PDF Downloads 99
213 “It’s All in Your Head”: Epistemic Injustice, Prejudice, and Power in the Modern Healthcare System
Authors: David Tennison
Abstract:
Epistemic injustice, an injustice done to a person specifically in their capacity as a “knower”, is a subtle form of discrimination, yet its effects can be as dehumanizing and damaging as more overt forms. The lens of epistemic injustice has, in recent years, been fruitfully applied to the field of healthcare, examining questions of agency, power, credibility, and belief in doctor-patient interactions. Contested illness patients (e.g., those with illnesses lacking scientific consensus, such as fibromyalgia (FM), Myalgic Encephalomyelitis/Chronic Fatigue Syndrome (ME/CFS), and Long Covid) face higher levels of scrutiny than other patient groups and are often disbelieved or dismissed when their ailments cannot be easily imaged or tested for, an experience often encapsulated by the expression “it’s all in your head”. Using the case study of FM, the trials of contested illness patients in healthcare can be conceptualized in terms of epistemic injustice, and what is going wrong in these doctor-patient relationships can be effectively diagnosed. This case study also helps reveal epistemic dysfunction (structural epistemic issues embedded in the healthcare system), how this relates to stigma and identity-based prejudice, and how the healthcare system upholds existing societal hierarchies and disenfranchises the most vulnerable. In the modern landscape, where cases of these chronic illnesses are not only on the rise but future pandemics threaten to add to their number, this conversation is crucial for the well-being of patients and providers. This presentation will cover what epistemic injustice is and how it can be applied to the politics of the doctor-patient interaction on a micro level and to the politics of the healthcare system more broadly. Contested illnesses will be explored in terms of how the “contested” label causes the patient to experience disease stigma and lowers their credibility in healthcare and across other aspects of life.
This will be explored in tandem with a discussion of existing identity-based prejudice in the healthcare system and how social identities (such as those of gender, race, and socioeconomic status) intersect with the contested illness label. The effects of epistemic injustice, which include worsening patients’ mental health symptoms and potentially disenfranchising them from the healthcare system altogether, will be presented alongside the ethical quandaries this poses for providers. Finally, issues with the way healthcare appointments and the modern NHS function will be explored in terms of epistemic injustice, and solutions to improve doctor-patient communication and patient care will be discussed. The relationship between contested illness patients and healthcare providers is notoriously poor, and while this can mean frustration or feelings of unfulfillment for providers, the negative effects for patients are much more severe. The purpose of this research, then, is to highlight these issues and suggest ways to improve the healthcare experience for these patients, along with improving doctor-patient communication and mending the doctor-patient relationship in a tangible and realistic way. This research also aims to provoke important conversations about belief and hierarchy in medical settings and how these aspects intersect with identity prejudices.
Keywords: epistemic injustice, fibromyalgia, contested illnesses, chronic illnesses, doctor-patient relationships, philosophy of medicine
Procedia PDF Downloads 60
212 Element Distribution and REE Dispersal in Sandstone-Hosted Copper Mineralization within Oligo-Miocene Strata, NE Iran: Insights from Lithostratigraphy and Mineralogy
Authors: Mostafa Feiz, Mohammad Safari, Hossein Hadizadeh
Abstract:
The Chalpo copper area is located in northeastern Iran, within the structural zone of central Iran and the back-arc basin of Sabzevar. This sedimentary basin, filled with detrital Oligo-Miocene sediments, is named the Nasr-Chalpo-Sangerd (NCS) basin. The sedimentary layers in this basin originated mainly from Upper Cretaceous ophiolitic rocks and intermediate to mafic post-ophiolitic volcanic rocks, deposited as a nonconformity. The mineralized sandstone layers in the Chalpo area include leached zones (5 to 8 meters thick) and mineralized lenses 0.5 to 0.7 meters thick. Ore minerals include primary sulfide minerals, such as chalcocite, chalcopyrite, and pyrite, as well as secondary minerals, such as covellite, digenite, malachite, and azurite, formed in three stages: primary, syn-depositional, and supergene. The main agents controlling mineralization in this area are the permeability of the host rocks, the presence of fault zones as conduits for copper-bearing oxidized solutions, and significant amounts of plant fossils, which create a reducing environment for the deposition of the mineralized layers. Mass-change calculations on the copper-bearing layers and primary sandstone layers indicate that Pb, As, Cd, Te, and Mo are enriched in the mineralized zones, whereas SiO₂, TiO₂, Fe₂O₃, V, Sr, and Ba are depleted. The combination of geological, stratigraphic, and geochemical studies suggests that the source of the copper may have been the underlying red strata, which contained hornblende, plagioclase, biotite, alkali feldspar, and labile minerals. Dehydration and hydrolysis of these minerals during diagenesis caused the leaching of copper and associated elements by circulating fluids, which formed an oxidizing hydrothermal solution.
Copper and silver in this oxidizing solution might have moved upwards through the basin fault zones and been deposited in reducing environments in the sandstone layers rich in organic matter. Copper in these solutions was probably carried by chloride complexes. The collision of oxidized and reduced solutions caused the deposition of Cu and Ag, whereas some elements that are stable in oxidizing environments (e.g., Fe₂O₃, TiO₂, SiO₂, REEs) become unstable under reduced conditions. Therefore, the copper-bearing sandstones in the study area are depleted in these elements as a result of the leaching process. The results indicate that during the mineralization stage, LREEs and MREEs were depleted, whereas Cu, Ag, and S were enriched. Based on field evidence, it seems that the circulation of connate fluids in the red-bed strata, produced by diagenetic processes, encountering reduced facies formed earlier by abundant fossil-plant debris in the sandstones, is the best model for the precipitation of the copper sulfide minerals.
Keywords: Chalpo, Oligo-Miocene red beds, sandstone-hosted copper mineralization, mass change, LREEs and MREEs
Procedia PDF Downloads 28
211 Strategies for the Optimization of Ground Resistance in Large Scale Foundations for Optimum Lightning Protection
Authors: Oibar Martinez, Clara Oliver, Jose Miguel Miranda
Abstract:
In this paper, we discuss the standard improvements that can be made to reduce the earth resistance in difficult terrains for optimum lightning protection, their practical limitations, and how the modeling can be refined for accurate diagnostics and ground resistance minimization. Ground resistance minimization can be approached in three different ways: burying vertical electrodes connected in parallel, burying horizontal conductive plates or meshes, or modifying the terrain itself, either by replacing the terrain material in a large volume or by adding earth-enhancing compounds. The use of vertical electrodes connected in parallel poses several practical limitations. In order to prevent loss of effectiveness, it is necessary to keep a minimum distance between electrodes, typically around five times the electrode length. Otherwise, the overlapping of the local equipotential lines around each electrode reduces the efficiency of the configuration. The addition of parallel electrodes reduces the resistance and facilitates the measurement, but the basic parallel-resistor formula of circuit theory will always underestimate the final resistance. Numerical simulation of the equipotential lines around the electrodes overcomes this limitation. The resistance of a single electrode will always be proportional to the soil resistivity. Electrodes are usually installed with a backfilling material of high conductivity, which increases the effective diameter. However, the improvement is marginal, since the electrode diameter enters the ground-resistance estimate only through a logarithmic term. Substances used for efficient chemical treatment must be environmentally friendly and must feature stability, high hygroscopicity, low corrosivity, and high electrical conductivity. A number of earth enhancement materials are commercially available. Many consist of carbon-based materials or clays like bentonite.
These materials can also be used as backfilling materials to reduce the resistance of an electrode. Chemical treatment of soil has environmental issues. Some products contain copper sulfate or other copper-based compounds, which may not be environmentally friendly. Carbon-based compounds are relatively inexpensive and have very low resistivities, but they also have corrosion issues: typically, the carbon can corrode and destroy a copper electrode in around five years. These compounds also raise potential environmental concerns. Some earthing enhancement materials contain cement, which, after installation, acquires properties very close to those of concrete. This prevents the earthing enhancement material from leaching into the soil. After analyzing different configurations, we conclude that a buried conductive ring with vertical electrodes connected at regular intervals should be the optimum baseline solution for the grounding of a large structure installed on a high-resistivity terrain. To show this, a practical example is given in which we simulate the ground resistance of a conductive ring buried in a terrain with a resistivity in the range of 1 kOhm·m.
Keywords: grounding improvements, large scale scientific instrument, lightning risk assessment, lightning standards
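The two quantitative points in the abstract above, the logarithmic dependence of rod resistance on electrode diameter and the optimism of the naive parallel-resistor estimate, can be illustrated with a short sketch. It uses the standard Dwight formula for a single driven rod; the soil resistivity, rod length, and diameter are assumed values chosen for a high-resistivity terrain, not figures from the study.

```python
import math

def rod_resistance(rho, length, diameter):
    """Dwight formula for one vertical ground rod: R = rho/(2*pi*L) * (ln(8L/d) - 1)."""
    return rho / (2 * math.pi * length) * (math.log(8 * length / diameter) - 1)

rho = 1000.0        # soil resistivity, ohm*m (assumed high-resistivity terrain)
L, d = 3.0, 0.016   # rod length and diameter, m (assumed)

R1 = rod_resistance(rho, L, d)
print(f"single rod: {R1:.0f} ohm")        # ~335 ohm

# Doubling the diameter (e.g. with conductive backfill) helps only marginally,
# because d enters through the logarithm.
print(f"doubled diameter: {rod_resistance(rho, L, 2 * d):.0f} ohm")

# The circuit-theory estimate R1/n for n parallel rods is always optimistic:
# it ignores the overlap of equipotential shells between nearby rods.
for n in (2, 4, 8):
    print(f"{n} rods, naive parallel bound: {R1 / n:.0f} ohm (true value is higher)")
```

In this sketch, doubling the diameter lowers the resistance by only about 11%, which matches the abstract's remark that backfilling improves things only through a logarithmic function.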
Procedia PDF Downloads 140
210 Evaluation of Nanoparticle Application to Control Formation Damage in Porous Media: Laboratory and Mathematical Modelling
Authors: Gabriel Malgaresi, Sara Borazjani, Hadi Madani, Pavel Bedrikovetsky
Abstract:
Suspension-colloidal flow in porous media occurs in numerous engineering fields, such as industrial water treatment, the disposal of industrial wastes into aquifers with the propagation of contaminants, and low-salinity water injection into petroleum reservoirs. The main effects are particle mobilization and capture by the porous rock, which can cause pore plugging and permeability reduction, known as formation damage. Various factors, such as fluid salinity, pH, temperature, and rock properties, affect particle detachment. Formation damage is particularly unfavorable near injection and production wells. One way to control formation damage is pre-treatment of the rock with nanoparticles. Adsorption of nanoparticles on fines and rock surfaces alters the zeta potential of the surfaces and enhances the attachment force between the rock and fine particles. The main objective of this study is to develop a two-stage mathematical model for (1) flow and adsorption of nanoparticles on the rock in the pre-treatment stage and (2) fines migration and permeability reduction during water production after the pre-treatment. The model accounts for adsorption and desorption of nanoparticles, fines migration, and the kinetics of particle capture. The system of equations allows for an exact solution. The non-self-similar wave-interaction problem was solved by the method of characteristics. The analytical model is new in two ways: first, it accounts for the specific boundary and initial conditions describing the injection of nanoparticles and production from the pre-treated porous media; second, it contains the effect of nanoparticle sorption hysteresis. The derived analytical model contains explicit formulae for the concentration fronts along with the pressure drop. The solution is used to determine the optimal injection concentration of nanoparticles to avoid formation damage. The mathematical model was validated via an innovative laboratory program.
The laboratory study includes two sets of core-flood experiments: (1) production of water without nanoparticle pre-treatment; (2) pre-treatment of a similar core with nanoparticles followed by water production. Positively charged alumina nanoparticles with an average particle size of 100 nm were used for the rock pre-treatment. The core was saturated with the nanoparticles and then flushed with low-salinity water; the pressure drop across the core and the outlet fines concentration were monitored and used for model validation. The results of the analytical modeling showed a significant reduction in the outlet fines concentration and formation damage. This observation was in good agreement with the core-flood data. The exact solution accurately describes fine-particle breakthrough and evaluates the positive effect of nanoparticles on formation damage. We show that the adsorbed nanoparticle concentration strongly affects the permeability of the porous media. For the laboratory case presented, the reduction of permeability after 1 PVI of production in the pre-treated scenario is 50% lower than in the reference case. The main outcome of this study is a validated mathematical model to evaluate the effect of nanoparticles on formation damage.
Keywords: nano-particles, formation damage, permeability, fines migration
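The claim that the adsorbed nanoparticle concentration controls permeability can be sketched with a classical formation-damage relation, in which permeability declines with the retained fines concentration through a formation-damage coefficient. This is a generic textbook relation, not the study's own system of equations, and all numbers below (initial permeability, damage coefficient, retained concentrations) are invented for illustration rather than fitted values.

```python
def damaged_permeability(k0, sigma, beta):
    """Classical formation-damage relation k = k0 / (1 + beta*sigma), where sigma is
    the concentration of fines captured in the pore space and beta is the
    formation-damage coefficient."""
    return k0 / (1.0 + beta * sigma)

k0 = 100.0               # initial permeability, mD (assumed)
beta = 50.0              # formation-damage coefficient (assumed)
sigma_reference = 0.04   # retained fines after some production, no pre-treatment (assumed)
sigma_treated = 0.02     # pre-treatment holds fines on grain surfaces, so less capture (assumed)

k_ref = damaged_permeability(k0, sigma_reference, beta)
k_treated = damaged_permeability(k0, sigma_treated, beta)
print(f"reference: {k_ref:.1f} mD, pre-treated: {k_treated:.1f} mD")
```

The sketch shows the qualitative mechanism only: halving the captured-fines concentration noticeably softens the permeability decline, which is the effect the pre-treatment experiments quantify.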
Procedia PDF Downloads 623
209 The Temperature Degradation Process of Siloxane Polymeric Coatings
Authors: Andrzej Szewczak
Abstract:
The effect of high temperatures on polymer coatings is an important field of research into their properties. Polymers, as materials with numerous favorable features (chemical resistance, ease of processing and recycling, corrosion resistance, low density and weight), are currently among the most widely used modern building materials, found, among others, in resin concrete, plastic parts, and hydrophobic coatings. Unfortunately, polymers also have disadvantages, one of which limits their usage: low resistance to high temperatures, together with brittleness. This applies in particular to thin and flexible polymeric coatings applied to other materials, such as steel and concrete, which degrade under varying thermal conditions. Research into improving this state of affairs includes modification of the polymer composition, structure, conditioning conditions, and the polymerization reaction. At present, ways are sought to reflect the actual environmental conditions in which the coating will operate after it has been applied to another material. These studies are difficult because of the need to adopt a proper model of the polymer's operation and to determine the phenomena occurring during temperature fluctuations. For this reason, alternative methods are being developed, taking into account rapid modeling and the simulation of the actual operating conditions of polymeric coating materials. Temperature influence in the environment is typically prolonged in duration. Studies therefore typically involve measuring the variation of one or more physical and mechanical properties of such a coating over time. Based on these results, it is possible to determine the effects of temperature loading and develop methods for improving the coatings' properties. This paper contains a description of stability studies of silicone coatings deposited on the surface of a ceramic brick.
The brick’s surface was hydrophobized with two types of inorganic polymers: a nano-polymer preparation based on dialkyl siloxanes (series 1-5) and an aqueous solution of silicon compounds (series 6-10). In order to enhance the stability of the film formed on the brick’s surface and make it resistant to variable temperature and humidity loading, nano-silica was added to the polymer. The right combination of the liquid polymer phase and the solid nano-silica phase was obtained by disintegrating the mixture by sonication. The changes in viscosity and surface tension of the polymers were determined, as these are the basic rheological parameters affecting the state and durability of the polymer coating. The coatings created on the brick surfaces were then subjected to a temperature loading of 100 °C and to moisture by total immersion in water, in order to determine any water absorption changes caused by damage and degradation of the polymer film. The effect of moisture and temperature was determined by measuring (at a specified number of cycles) changes in the surface hardness (using the Vickers method) and the absorption of individual samples. As a result, the degradation process of the polymer coatings, related to changes in their durability over time, was determined.
Keywords: silicones, siloxanes, surface hardness, temperature, water absorption
Procedia PDF Downloads 243
208 Teleconnection between El Nino-Southern Oscillation and Seasonal Flow of the Surma River and Possibilities of Long Range Flood Forecasting
Authors: Monika Saha, A. T. M. Hasan Zobeyer, Nasreen Jahan
Abstract:
The El Nino-Southern Oscillation (ENSO) is the interaction between the atmosphere and ocean in the tropical Pacific, which causes inconsistent warm/cold weather in the tropical central and eastern Pacific Ocean. Due to the impact of climate change, ENSO events have become stronger in recent times, and it is therefore very important to study the influence of ENSO in climate studies. Bangladesh, lying in a low deltaic floodplain, experiences the worst consequences of flooding every year. To reduce the catastrophe of severe flooding events, non-structural measures such as flood forecasting can help in taking adequate precautions and steps. Forecasting seasonal flow with a longer lead time of several months is a key component of flood damage control and water management. The objective of this research is to identify the strength of the teleconnection between ENSO and the flow of the Surma River and to examine the potential for long-lead flood forecasting in the wet season. The Surma is one of the major rivers of Bangladesh and is part of the Surma-Meghna river system. In this research, sea surface temperature (SST) has been used as the ENSO index, and the lead time is at least a few months, which is greater than the basin response time. The teleconnection has been assessed by correlation analysis between the July-August-September (JAS) flow of the Surma and the SST of the Nino 4 region for the corresponding months. The cumulative frequency distribution of the standardized JAS flow of the Surma has also been determined as part of assessing the possible teleconnection. Discharge data of the Surma River from 1975 to 2015 are used in this analysis, and a remarkable increase in the correlation coefficient between flow and ENSO has been observed from 1985. From the cumulative frequency distribution of the standardized JAS flow, it has been observed that in any year the JAS flow has approximately a 50% probability of exceeding the long-term average JAS flow.
During an El Nino year (the warm episode of ENSO) this probability of exceedance drops to 23%, while in a La Nina year (the cold episode of ENSO) it increases to 78%. Discriminant analysis, known as 'categoric prediction', has been performed to assess the possibilities of long-lead flood forecasting. It categorizes the flow data (high, average, and low) based on the classification of predicted SST (warm, normal, and cold). From the discriminant analysis, it has been found that for the Surma River, the probability of a high flood in the cold period is 75% and the probability of a low flood in the warm period is 33%. A synoptic parameter, the forecasting index (FI), has also been calculated to judge forecast skill and to compare different forecasts. This study will help the concerned authorities and stakeholders take long-term water resources decisions and formulate policies on river basin management, which will reduce possible damage to life, agriculture, and property.
Keywords: El Nino-Southern Oscillation, sea surface temperature, Surma river, teleconnection, cumulative frequency distribution, discriminant analysis, forecasting index
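The conditional exceedance probabilities quoted above (about 50% overall, 23% in El Nino years, 78% in La Nina years) amount to counting, within each ENSO phase, the years whose JAS flow exceeds the long-term mean. A minimal sketch of that computation, on an invented record (the study itself uses observed Surma discharge for 1975-2015):

```python
import statistics

# Invented (year, JAS flow in m3/s, ENSO phase) record, for illustration only.
record = [
    (1988, 4200, "la_nina"), (1997, 2100, "el_nino"), (1998, 4500, "la_nina"),
    (2002, 2500, "el_nino"), (2007, 4100, "la_nina"), (2009, 2300, "el_nino"),
    (2011, 3900, "la_nina"), (2015, 2200, "el_nino"), (2005, 3400, "neutral"),
    (2013, 2900, "neutral"),
]

long_term_mean = statistics.mean(flow for _, flow, _ in record)

def exceedance(phase):
    """Fraction of years in `phase` whose JAS flow exceeds the long-term mean."""
    flows = [flow for _, flow, p in record if p == phase]
    return sum(flow > long_term_mean for flow in flows) / len(flows)

for phase in ("el_nino", "neutral", "la_nina"):
    print(f"P(JAS flow > mean | {phase}) = {exceedance(phase):.2f}")
```

In this toy record the probabilities come out as 0.00, 0.50, and 1.00; with 41 years of real discharge data they land at the intermediate values the study reports.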
Procedia PDF Downloads 156
207 Testing Two Actors Contextual Interaction Theory in a Multi Actors Context: Case of COVID-19 Disease Prevention and Control Policy
Authors: Muhammad Fayyaz Nazir, Ellen Wayenberg, Shahzadaah Faahed Qureshi
Abstract:
Introduction: The study draws on the constructs of Contextual Interaction Theory (CIT) to explore the role of policy actors in implementing the COVID-19 Disease Prevention and Control (DP&C) policy. The study analyzes the role of healthcare workers' contextual factors, such as cognition, motives, and resources, and their interactions in implementing Social Distancing (SD). In this way, we test a two-actor policy implementation theory, i.e., the CIT, in a three-actor context. Methods: Data were collected through document analysis and semi-structured interviews. For this qualitative study design, interviews with questions on cognition, motives, and resources were conducted with the healthcare workers involved in implementing SD in the local context in Multan, Pakistan. The possible interactions resulting from the contextual factors of the policy actors, i.e., the healthcare workers, were identified through a framework analysis protocol guided by CIT and supported by the trustworthiness criterion and data saturation. Results: This inquiry resulted in theory application, addition, and enrichment. The theoretical application in the three-actor context illustrates the different levels of motives, cognition, and resources of healthcare workers: senior administrators, managers, and healthcare professionals. The senior administrators working in the National Command and Operations Center (NCOC), Provincial Technical Committees (PTCs), and District Covid Teams (DCTs) played their role with high motivation. They were fully informed about the policy and moderately resourceful. The policy implementers, i.e., the healthcare managers working on implementing SD within their respective hospitals, played their role with high motivation and were fully informed about the policy. However, they lacked the resources required to implement SD. The target medical and allied healthcare professionals were moderately motivated but lacked resources and information.
The interaction resulted in cooperation and the need for learning to manage future healthcare crises. However, the lack of resources created opposition to the implementation of SD. Objectives of the Study: The study aimed to apply a two-actor theory in a multi-actor context. We take this as an opportunity to qualitatively test the theory in the novel situation of the COVID-19 pandemic and to pave the way for its quantitative application by designing a survey instrument, so that implementation researchers can apply CIT through multivariate analyses or higher-order statistical modeling. Conclusion: Applying a two-actor implementation theory to a complex case of healthcare intervention in a three-actor context is, to the best of our knowledge, unique work that has not been done before. The work will thus contribute to policy implementation studies by applying, extending, and enriching an implementation theory in the novel case of the COVID-19 pandemic, ultimately filling a gap in the implementation literature. Policy institutions and other low- or middle-income countries can learn from this research and improve SD implementation by working on the variables with weak significance levels.
Keywords: COVID-19, disease prevention and control policy, implementation, policy actors, social distancing
Procedia PDF Downloads 59
206 Investigation of Software Integration for Simulations of Buoyancy-Driven Heat Transfer in a Vehicle Underhood during Thermal Soak
Authors: R. Yuan, S. Sivasankaran, N. Dutta, K. Ebrahimi
Abstract:
This paper investigates the software capability and computer-aided engineering (CAE) method for modelling the transient heat-transfer process occurring in the vehicle underhood region during the vehicle thermal soak phase. Heat retention from the soak period is beneficial to the cold start, with reduced friction loss, for the second 14 °C worldwide harmonized light-duty vehicle test procedure (WLTP) cycle, and therefore provides benefits for both CO₂ emission reduction and fuel economy. When a vehicle undergoes the soak stage, the airflow and the associated convective heat transfer around and inside the engine bay are driven by the buoyancy effect. This effect, along with thermal radiation and conduction, is the key factor in the thermal simulation of the engine bay for obtaining accurate fluid and metal temperature cool-down trajectories and predicting the temperatures at the end of the soak period. Method development has been investigated in this study on a light-duty passenger vehicle using a coupled aerodynamic-heat transfer thermal transient modelling method for the full vehicle under 9 hours of thermal soak. The 3D underhood flow dynamics were solved as an inherently transient problem by the Lattice-Boltzmann Method (LBM) using the PowerFlow software. This was further coupled with heat transfer modelling using the PowerTHERM software provided by Exa Corporation. The particle-based LBM is capable of accurately handling extremely complicated transient flow behavior on complex surface geometries. The detailed thermal modelling, including heat conduction, radiation, and buoyancy-driven heat convection, was solved in an integrated manner by PowerTHERM. The 9-hour cool-down period was simulated and compared with the vehicle testing data for the key fluid (coolant, oil) and metal temperatures.
The developed CAE method was able to predict the cool-down behaviour of the key fluids and components in agreement with the experimental data and also visualised the air leakage paths and thermal retention around the engine bay. The cool-down trajectories of the key components obtained for the 9-hour thermal soak period provide vital information and a basis for the further development of reduced-order modelling studies in future work. This allows a fast-running model to be developed and further embedded in the holistic study of vehicle energy modelling and thermal management. It is also found that the buoyancy effect plays an important part in the first stage of the 9-hour soak, and the flow development during this stage is vital to accurately predicting the heat transfer coefficients for heat retention modelling. The developed method has demonstrated software integration for simulating buoyancy-driven heat transfer in a vehicle underhood region during thermal soak with satisfactory accuracy and efficient computing time. The CAE method developed will allow the integration of the design of engine encapsulations for improving fuel consumption and reducing CO₂ emissions in a timely and robust manner, aiding the development of low-carbon transport technologies.
Keywords: ATCT/WLTC driving cycle, buoyancy-driven heat transfer, CAE method, heat retention, underhood modeling, vehicle thermal soak
Procedia PDF Downloads 154
205 Exploring Bio-Inspired Catecholamine Chemistry to Design Durable Anti-Fungal Wound Dressings
Authors: Chetna Dhand, Venkatesh Mayandi, Silvia Marrero Diaz, Roger W. Beuerman, Seeram Ramakrishna, Rajamani Lakshminarayanan
Abstract:
Insect cuticle sclerotization, the remarkable substrate-independent bioadhesion of mussels, and the tanning of leather are some catechol(amine)-mediated natural processes. Chemical considerations point toward a mechanism initiated by the formation of quinone moieties from the respective catechol(amine)s via oxidation; the subsequent nucleophilic addition of amino acids/proteins/peptides to these quinones leads to the development of strongly cross-linked, water-resistant proteinaceous structures. Inspired by this remarkable chemistry of catechol(amine)s toward amino acids/proteins/peptides, we attempted to design highly stable, water-resistant antifungal wound dressing mats with exceptional durability, using collagen (protein), dopamine (catecholamine), and antifungal drugs (amphotericin B and caspofungin) as the key materials. The electrospinning technique has been used to fabricate the desired nanofibrous mats, including collagen (COLL), COLL/dopamine (COLL/DP), and calcium-incorporated COLL/DP (COLL-DP-Ca2+). The prepared protein-based scaffolds have been characterized by microscopic investigations (SEM, TEM, and AFM), structural analysis (FT-IR), mechanical properties, water wettability characteristics, and aqueous stability. The biocompatibility of these scaffolds has been analyzed with dermal fibroblast cells using the MTS assay, Cell TrackerTM Green CMFDA, and confocal imaging. As the best-performing sample, the COLL-DP-Ca2+ scaffold was selected for incorporating two antifungal drugs, namely caspofungin (peptide-based) and amphotericin B (non-peptide-based). The antifungal efficiency of the designed mats has been evaluated against eight diverse fungal strains employing different microbial assays, including disc diffusion, cell-viability assay, time-kill kinetics, etc. To confirm the durability of these mats in terms of their antifungal activity, drug leaching studies have been performed and monitored using the disc diffusion assay each day.
An ex-vivo fungal infection model was also developed and used to validate the antifungal efficacy of the designed wound dressings. The results clearly reveal dopamine-mediated crosslinking within the COLL-antifungal scaffolds, which leads to highly stable, mechanically tough, biocompatible wound dressings with a zone of inhibition of ≥ 2 cm for almost all the investigated fungal strains. The leaching studies and the ex-vivo model confirmed the durability of these wound dressings for more than 3 weeks and support their suitability for commercialization. A model is also proposed to elucidate the chemical mechanism behind the development of these exceptionally robust antifungal wound dressings.
Keywords: catecholamine chemistry, electrospinning technique, antifungals, wound dressings, collagen
Procedia PDF Downloads 378
204 Delineation of Different Geological Interfaces Beneath the Bengal Basin: Spectrum Analysis and 2D Density Modeling of Gravity Data
Authors: Md. Afroz Ansari
Abstract:
The Bengal basin is a spectacular example of a peripheral foreland basin formed by the convergence of the Indian plate with the Eurasian and Burmese plates. The basin is bounded on three sides (north, west and east) by fault-controlled tectonic features and opens to the south, where the rivers drain into the Bay of Bengal. The Bengal basin, in the eastern part of the Indian subcontinent, constitutes the largest fluvio-deltaic to shallow-marine sedimentary basin in the world today. This continental basin, coupled with the offshore Bengal Fan under the Bay of Bengal, forms the biggest sediment dispersal system. The continental basin continuously receives sediments from the two major rivers Ganga and Brahmaputra (known as Jamuna in Bengal), from the Meghna (emerging from the confluence of the Ganga and Brahmaputra), and from a large number of small, rain-fed tributaries originating from the eastern Indian Shield. The drained sediments are ultimately delivered into the Bengal Fan. The significance of the present study is to delineate the variations in sediment thickness, crustal structure and mantle lithosphere throughout the onshore-offshore Bengal basin. In the present study, the different crustal/geological units and the shallower mantle lithosphere were delineated by analyzing Bouguer Gravity Anomaly (BGA) data along two long traverses: South-North (running from the Bengal Fan across the offshore-onshore transition of the Bengal basin and intersecting the Main Frontal Thrust of the India-Himalaya collision zone in the Sikkim-Bhutan Himalaya) and West-East (running from the Peninsular Indian Shield across the Bengal basin to the Chittagong-Tripura Fold Belt). The BGA map was derived from TOPEX data after applying the Bouguer correction and all terrain corrections.
The anomaly map was compared with the available ground gravity data in the western Bengal basin and the Indian subcontinent to check the consistency of the data used. Initially, the anisotropy associated with the thicknesses of the different crustal units, the crustal interfaces and the Moho boundary was estimated through spectral analysis of the gravity data with varying window sizes over the study area. The 2D density sections along the traverses were finalized after a number of iterations with acceptable root mean square (RMS) errors. The estimated thicknesses of the different crustal units and the dips of the Moho boundary along both profiles are consistent with earlier results. The results were further corroborated by examining the earthquake database and focal mechanism solutions for a better understanding of the geodynamics. The earthquake data were taken from the catalogue of the US Geological Survey, and the focal mechanism solutions were compiled from the Harvard Centroid Moment Tensor Catalogue. Concentrations of seismic events at different depth levels are not uncommon; the occurrence of earthquakes may be due to stress accumulation resulting from resistance on three sides.
Keywords: anisotropy, interfaces, seismicity, spectrum analysis
Procedia PDF Downloads 274
203 Language and Power Relations in Selected Political Crisis Speeches in Nigeria: A Critical Discourse Analysis
Authors: Isaiah Ifeanyichukwu Agbo
Abstract:
Human speech is capable of serving many purposes. Power and control are not always exercised overtly through linguistic acts; they may be enacted and exercised in the myriad taken-for-granted actions of everyday life. Domination, power control, discrimination and mind control exist in human speech and may lead to asymmetrical power relations. In discourse, persuasive and manipulative linguistic acts serve to establish solidarity and identification with the 'we group' and to polarize against the 'they group'. Political discourse is crafted to defend and promote problematic narratives of outright controversial events in a nation's history, thereby sustaining domination, marginalization, manipulation, inequality and injustice, often without the dominated and marginalized groups being aware of them. Such discourses are designed and positioned to serve the political and social needs of their producers. Political crisis speeches in Nigeria, as in other countries, concentrate on projecting a positive self-image, de-legitimizing political opponents, reframing accusations to one's advantage, redefining problematic terms and adopting reversal strategies. In most cases, the people are unaware of the hidden ideological positions encoded in the text. Few studies have adopted the frameworks of critical discourse analysis and systemic functional linguistics to investigate this situation in Nigerian political crisis speeches. In this paper, we focus on analyzing the linguistic, semantic and ideological elements in selected political crisis speeches in Nigeria to investigate whether they create and sustain unequal power relations and manipulative tendencies, from the perspectives of Critical Discourse Analysis (CDA) and Systemic Functional Linguistics (SFL). Critical Discourse Analysis unpacks both opaque and transparent structural relationships of power dominance, power relations and control as manifested in language.
Critical discourse analysis emerged from a critical theory of language study which sees the use of language as a form of social practice in which social relations are reproduced or contested and different interests are served. Systemic functional linguistics relates the structure of texts to their function. Fairclough's model of CDA and Halliday's systemic functional approach to language study are adopted in this paper. This paper probes into language use that perpetuates inequalities. The study demystifies the hidden implicature of the selected political crisis speeches and reveals the existence of information that is not made explicit in what the political actors actually say. The analysis further reveals the ideological configurations present in the texts. These ideological standpoints are the basis for naturalizing implicit ideologies and hegemonic influence in the texts. The analyses of the texts further uncovered the linguistic and discursive strategies deployed by text producers to manipulate unsuspecting members of the public, both mentally and conceptually, in order to enact, sustain and maintain unhealthy power relations at times of crisis in Nigerian political history.
Keywords: critical discourse analysis, language, political crisis, power relations, systemic functional linguistics
Procedia PDF Downloads 346
202 Forest Fire Burnt Area Assessment in a Part of West Himalayan Region Using Differenced Normalized Burnt Ratio and Neural Network Approach
Authors: Sunil Chandra, Himanshu Rawat, Vikas Gusain, Triparna Barman
Abstract:
Forest fires are a recurrent phenomenon in the Himalayan region owing to the presence of vulnerable forest types, topographical gradients, climatic conditions, and anthropogenic pressure. The present study focuses on the identification of forest fire-affected areas in a small part of the West Himalayan region using the differenced normalized burnt ratio (dNBR) method and spectral unmixing methods. The study area has rugged terrain with sub-tropical pine forest, montane temperate forest, and sub-alpine forest and scrub. The major cause of fires in this region is anthropogenic: human-induced fires are set to obtain fresh leaves, to scare wild animals away from agricultural crops, for grazing within reserved forests, and for cooking and other purposes. The fires caused by these practices affect a large area on the ground, necessitating its precise estimation for further management and policy making. In the present study, two approaches have been used for the burnt area analysis. The first uses the dNBR index, computed from burn ratio values generated using the Short-Wave Infrared (SWIR) and Near Infrared (NIR) bands of Sentinel-2 imagery. The results of the dNBR have been compared with the outputs of the spectral unmixing methods. It has been found that the dNBR produces good results in fire-affected areas with a homogeneous forest stratum and slopes of less than 5 degrees. However, in rugged terrain where the landscape is strongly influenced by topographical variation, vegetation type and tree density, the results may be heavily affected by topography, complexity in tree composition, fuel load composition, and soil moisture.
Hence, such variations in the factors influencing burnt area assessment may not be effectively captured by a dNBR approach, which is commonly followed for burnt area assessment over large areas. The second approach attempted in the present study therefore uses a spectral unmixing method in which each individual pixel is tested before an information class is assigned to it. The method uses a neural network approach based on Sentinel-2 bands. The training and testing data are generated from the Sentinel-2 data and the national field inventory, and are further used for generating outputs using machine learning tools. The analysis of the results indicates that fire-affected regions and their severity can be better estimated using spectral unmixing methods, which have the capability to resolve noise in the data and can classify each individual pixel into the precise burnt/unburnt class.
Keywords: categorical data, log linear modeling, neural network, shifting cultivation
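The dNBR computation the abstract describes can be sketched in a few lines. This is an illustrative example only, not the study's code: the reflectance values are invented, and the 0.27 burnt/unburnt cut-off is an assumed severity threshold of the kind commonly used with dNBR, not one reported by the authors.

```python
import numpy as np

def nbr(nir, swir):
    # Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR).
    # For Sentinel-2 this is typically computed from band 8 (NIR)
    # and band 12 (SWIR).
    return (nir - swir) / (nir + swir)

# Hypothetical pre- and post-fire reflectance for two pixels:
# the first is burnt, the second unchanged.
pre_nir  = np.array([0.45, 0.40]); pre_swir  = np.array([0.15, 0.18])
post_nir = np.array([0.20, 0.38]); post_swir = np.array([0.30, 0.19])

# differenced NBR: a drop in NBR after the fire gives a positive dNBR
dnbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)
burnt = dnbr > 0.27  # assumed severity threshold
```

Healthy vegetation is bright in NIR and dark in SWIR, so burning lowers NBR; the per-pixel difference map is what the abstract then compares against the spectral unmixing output.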
Procedia PDF Downloads 56
201 Health and Greenhouse Gas Emission Implications of Reducing Meat Intakes in Hong Kong
Authors: Cynthia Sau Chun Yip, Richard Fielding
Abstract:
High meat, and especially red meat, intakes are significantly and positively associated with a multiple burden of diseases and with high greenhouse gas (GHG) emissions. This study investigated population meat intake patterns in Hong Kong. It quantified the burden of disease and GHG emission outcomes by modeling adjustments of Hong Kong population meat intakes to recommended healthy levels. It compared age- and sex-specific population meat, fruit and vegetable intakes, obtained from a population survey of adults aged 20 years and over in Hong Kong in 2005-2007, against the intake recommendations in the Modelling System to Inform the Revision of the Australian Guide to Healthy Eating (AGHE-2011-MS) technical document. The study found that meat and meat-alternative intakes, especially red meat intakes among Hong Kong males aged 20 years and over, are significantly higher than recommended. Red meat intakes among females aged 50-69 years and other meat and alternative intakes among those aged 20-59 years are also higher than recommended. Taking the 2005-07 age- and sex-specific population meat intakes as baselines, three counterfactual scenarios were established in which Hong Kong adult population meat intakes were adjusted to the AGHE-2011-MS and pre-2011 AGHE recommendations by the year 2030. The resulting energy intake gaps were substituted with additional legume, fruit and vegetable intakes. Cradle-to-ready-to-eat lifecycle assessment modelling was used to quantify the GHG emission outcomes associated with Hong Kong meat intakes, and a comparative risk assessment burden-of-disease model was used to quantify the health outcomes. The study found that adjusting meat intakes to recommended levels could reduce Hong Kong's GHG emissions by 17%-44% compared with baseline meat intake emissions, and could prevent 2,519 to 7,012 premature deaths in males and 53 to 1,342 in females, as well as a multiple burden of diseases, compared with the baseline meat intake scenario.
Whereas previous co-benefit studies compared lump-sum meat intake reductions and outcome measures across entire populations, using emission factors and relative risks from individual studies, this study used age- and sex-specific input and output measures, with emission factors and relative risks obtained from high-quality meta-analyses and meta-reviews respectively, and took government dietary recommendations into account. The evaluations in this study are therefore of better quality and more reflective of real-life practices. Going beyond previous co-benefit studies, this study pinpointed age-, sex- and meat-type-specific intervention points and leverages. Comparison with similar studies in Australia also showed that intervention points and leverages can differ among populations of different geographic and cultural backgrounds, and that globalization also globalizes the emission effects of meat consumption. More region- and culture-specific evaluations are recommended to promote more sustainable meat consumption and enhance global food security.
Keywords: burden of diseases, greenhouse gas emissions, Hong Kong diet, sustainable meat consumption
Procedia PDF Downloads 312
200 Simulation and Thermal Evaluation of Containers Using PCM in Different Weather Conditions of Chile: Energy Savings in Lightweight Constructions
Authors: Paula Marín, Mohammad Saffari, Alvaro de Gracia, Luisa F. Cabeza, Svetlana Ushak
Abstract:
Climate control is an important contributor to the energy consumption of buildings and the associated expenses, both during installation and during operation. The climate control of a building depends on several factors, among them location, orientation, architectural elements and the energy sources used. To study the thermal behaviour of a building set-up, the present study uses the energy simulation program EnergyPlus. In recent years, energy simulation programs have become important tools for evaluating the thermal and energy performance of buildings and facilities. In addition, finding new forms of passive conditioning in buildings is critical for energy saving, and the use of phase change materials (PCMs) for heat storage applications has grown in importance due to their high efficiency. The climatic conditions of northern Chile, namely high solar radiation, extreme temperature fluctuations ranging from -10°C to 30°C (in the city of Calama) and few cloudy days during the year, are well suited to exploiting solar energy through passive systems in buildings. Moreover, the extensive mining activity in northern Chile encourages the use of large numbers of containers to house workers during shifts. These containers are built with lightweight construction systems and require heating at night and cooling during the day, increasing HVAC electricity consumption. The use of PCM can improve thermal comfort and reduce energy consumption. The objective of this study was to evaluate the thermal and energy performance of containers of 2.5×2.5×2.5 m3 located in four cities of Chile: Antofagasta, Calama, Santiago, and Concepción.
The lightweight envelopes typically used in these building prototypes were evaluated, with a container without PCM as the reference building and a container with PCM-enhanced envelopes as the test case; both have a door and a window in the same wall, oriented in one of two directions, north or south. To capture the thermal response of the containers across the seasons, the simulations covered a period of one year. The results show that, for all four cities studied, higher energy savings are obtained when the door and window face north, because of the higher incidence of solar radiation. The comparison of HVAC consumption and percentage energy savings for the north-facing configuration is summarised. The simulation results show that in the city of Antofagasta 47% of the heating energy could be saved, while in the cities of Calama and Concepción the biggest savings are in cooling, since the PCM eliminates almost all of the cooling demand. Based on the simulation results, four containers have now been constructed with the same structural characteristics as in the simulations, that is, containers with and without PCM, each with a door and window in one wall. Two of these containers will be placed in Antofagasta and two in a copper mine near Calama, and all of them will be monitored for a period of one year. The simulation results will be validated against the experimental measurements and reported in the future.
Keywords: energy saving, lightweight construction, PCM, simulation
Procedia PDF Downloads 287
199 Conceptual Methods of Mitigating Matured Urban Tree Roots Surviving in Conflicts Growth within Built Environment: A Review
Authors: Mohd Suhaizan Shamsuddin
Abstract:
Urbanization degrades environmental quality and puts pressure on the growth and development of matured urban trees in a changing environment. The roots of matured urban trees struggle as they spread among existing infrastructure, resulting in large-scale damage to structures and declining growth. Much physiological decline and damage is caused by the presence and installation of infrastructure within and near the root zone. Attempts to retain both the matured urban tree and the infrastructure as service providers can end in damage and death, respectively, and more is then spent on fixing both, or on removing matured urban trees, which is risky for the future environment, when mitigation methods that would reduce these problems go unconsidered. This paper aims to explain mitigation practices that reduce the conflicts between settling matured urban tree roots and infrastructure while keeping modified urban soil at an optimum level. Three categories capture the conflicts of matured urban tree-root growth within and near infrastructure: limited soil space, poor soil structure, and soil-space barrier installation and maintenance. For limited soil space, six methods were identified that help tree roots survive: soil volume/mounding, soil replacement/amendment in radial trenches, soil spacing with root bridges, root tunneling, raising or diverting walkways/pavements, and suspended pavement. These limited-soil-space measures address inadequate root volume and the settling of spreading roots, and involve modifying the construction soil medium where barriers exist or are installed along root trails or zones. This enables tree roots to spread, to find adequate resources (nutrients, water and oxygen) and space, and to provide a stable root anchorage as the matured tree grows larger.
For poor soil structure, three methods were identified to mitigate problems of soil materials and insufficient soil voids: skeletal soil, structural soil, and soil cells. Mitigating poor soil structure means altering existing structures or introducing new ones by modifying the quantities and ratios of materials, allowing more voids beneath the surface for root spreading while accounting for the load-bearing function of foot and vehicle traffic above. For soil-space barrier installation and maintenance, root barrier installation and root pruning are recommended to sustain both the infrastructure and tree roots growing in limited spaces. In conclusion, these methods attempt to mitigate the problems encountered at a particular place where conflicts between tree roots and infrastructure exist. A combined method is the best way to alleviate the conflicts, since the recognized conflicts are between tree roots and man-made structures in modified urban soil. These methods are the ones most worth considering for sustaining the lifespan and growth of matured urban trees in the urban environment.
Keywords: urban tree-roots, limited soil spaces, poor soil structures, soil space barrier and maintenance
Procedia PDF Downloads 200
198 Pushover Analysis of a Typical Bridge Built in Central Zone of Mexico
Authors: Arturo Galvan, Jatziri Y. Moreno-Martinez, Daniel Arroyo-Montoya, Jose M. Gutierrez-Villalobos
Abstract:
Bridges are among the most seismically vulnerable structures in highway transportation systems. The general process for assessing the seismic vulnerability of a bridge involves the evaluation of its overall capacity and demand. One of the most common procedures for obtaining this capacity is pushover analysis of the structure. Typically, bridge capacity is assessed using non-linear static methods or non-linear dynamic analyses. The non-linear dynamic approaches use step-by-step numerical solutions to assess the capacity, with the inconvenience of considerable computing time. In this study, a non-linear static analysis ('pushover analysis') was performed to predict the collapse mechanism of a typical bridge built in the central zone of Mexico (Celaya, Guanajuato). The bridge superstructure consists of three simply supported spans with a total length of 76 m: 22 m for each end span and 32 m for the central span. The deck width is 14 m and the concrete slab depth is 18 cm. The bridge is supported on frames of five piers with hollow box-shaped sections, each 7.05 m high and 1.20 m in diameter. The numerical model was created using commercial software considering linear and non-linear elements. In all cases, the piers were represented by frame-type elements with geometrical properties obtained from the structural project and construction drawings of the bridge. The deck was modeled with a mesh of rectangular thin shell (plate bending and stretching) finite elements. Moment-curvature analysis was performed for the pier sections, considering in each pier the effect of confined concrete and its reinforcing steel. In this way, plastic hinges were defined at the base of the piers to carry out the pushover analysis. In addition, time history analyses were performed using 19 accelerograms of real earthquakes registered in Guanajuato.
In this way, the displacements produced in the bridge were determined. Finally, pushover analysis was applied through displacement control at the piers to obtain the overall capacity of the bridge up to failure. It was concluded that the lateral deformation of the piers under a critical earthquake for this zone is almost imperceptible, owing to the geometry and reinforcement demanded by current design standards, and that the displacement capacity is excessive by comparison. According to the analysis, the frames built with five piers increase the rigidity in the transverse direction of the bridge. It is therefore proposed to reduce these frames from five piers to three, maintaining the same geometrical characteristics and the same reinforcement in each pier, as well as the same mechanical properties of the materials (concrete and reinforcing steel). A pushover analysis of this configuration showed that the bridge would continue to exhibit adequate seismic behavior, at least for the 19 accelerograms considered in this study. In this way, material, construction, time and labor costs would be reduced for this study case.
Keywords: collapse mechanism, moment-curvature analysis, overall capacity, push-over analysis
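The idea behind a pushover capacity curve with a base plastic hinge can be sketched for a single cantilever pier. This is a minimal illustration, not the study's model: only the 7.05 m pier height comes from the abstract, while the flexural stiffness `EI`, the yield moment `My`, and the elastic-perfectly-plastic hinge idealization are assumptions for the sketch.

```python
# Idealized pushover curve for one cantilever pier with an
# elastic-perfectly-plastic plastic hinge at its base.
H = 7.05       # pier height in m (from the abstract)
EI = 2.0e6     # flexural stiffness in kN·m^2 (assumed)
My = 5.0e3     # yield moment of the base hinge in kN·m (assumed)

def base_shear(disp):
    """Base shear (kN) at a given top displacement (m)."""
    # elastic branch of a tip-loaded cantilever: V = 3·EI·d / H^3
    v_elastic = 3 * EI * disp / H**3
    # once the base moment reaches My, shear is capped at V_y = My / H
    v_yield = My / H
    return min(v_elastic, v_yield)

# sampling the curve at increasing displacements traces the
# bilinear capacity curve used to read off the overall capacity
curve = [(d, base_shear(d)) for d in (0.01, 0.05, 0.10, 0.20)]
```

Summing such per-pier curves (and letting hinges yield one by one) is, in essence, how a displacement-controlled pushover builds the overall capacity curve of the bent.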
Procedia PDF Downloads 153
197 Flood Risk Assessment, Mapping, Finding the Flood Vulnerability Level of the Study Area and Prioritizing the Study Area of Khinch District Using a Multi-Criteria Decision-Making Model
Authors: Muhammad Karim Ahmadzai
Abstract:
Floods are natural phenomena and an integral part of the water cycle. The majority of them result from climatic conditions, but they are also affected by the geology and geomorphology of the area, its topography and hydrology, the water permeability of the soil and the vegetation cover, as well as by all kinds of human activities and structures. From the moment human lives are at risk and significant economic impact is recorded, this natural phenomenon becomes a natural disaster. Flood management is now a key issue at regional and local levels around the world, affecting human lives and activities. The majority of floods cannot be fully predicted, but it is feasible to reduce their risks through appropriate management plans and constructions. The aim of this case study is to identify and map areas of flood risk in the Khinch District of Panjshir Province, Afghanistan, specifically in the area of Peshghore, which has suffered numerous damages. The main purpose of this study is to evaluate the contribution of remote sensing technology and Geographic Information Systems (GIS) in assessing the susceptibility of this region to flood events. Panjshir faces seasonal floods, and human interventions on the streams have caused further flooding: stream beds have been encroached upon to build houses and hotels or converted into roads, causing flooding after every heavy rainfall. The streams crossing settlements and areas of intense tourist development have been heavily modified by humans as the pressure for real estate development land grows. In particular, several areas in Khinch face a high risk of extensive flooding. This study concentrates on the construction of a flood susceptibility map of the study area by combining vulnerability elements using the Analytical Hierarchy Process (AHP). The Analytic Hierarchy Process, normally called AHP, is a powerful yet simple method for making decisions.
It is commonly used for project prioritization and selection. AHP lets you capture your strategic goals as a set of weighted criteria that you then use to score projects. In this study, the method is used to assign a weight to each criterion that contributes to flood events. After processing a digital elevation model (DEM), important secondary data were extracted, such as the slope map, the flow direction and the flow accumulation. Together with additional thematic information (land use and land cover, topographic wetness index, precipitation, normalized difference vegetation index, elevation, river density, distance from river, distance to road, and slope), these led to the final flood risk map. Finally, based on this map, priority protection areas and villages were identified, and structural and non-structural measures were proposed to minimize the impacts of floods on residential and agricultural areas.
Keywords: flood hazard, flood risk map, flood mitigation measures, AHP analysis
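The AHP weighting step described above can be sketched numerically: criterion weights are taken from the principal eigenvector of a pairwise comparison matrix, and a consistency ratio checks the judgments. This is an illustrative sketch only; the three criteria and the pairwise judgments below are invented for the example, not taken from the study.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three flood criteria,
# e.g. slope vs. distance-to-river vs. land cover (Saaty's 1-9 scale,
# reciprocal entries below the diagonal).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

# criterion weights = normalized principal eigenvector of A
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# consistency check: CI = (lambda_max - n) / (n - 1),
# CR = CI / RI, with RI = 0.58 for n = 3; CR < 0.1 is acceptable
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.58
```

In a GIS workflow, each thematic layer (slope, distance from river, land cover, and so on) is then multiplied by its weight `w[i]` and the weighted layers are summed to produce the susceptibility map.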
Procedia PDF Downloads 119
196 Music Piracy Revisited: Agent-Based Modelling and Simulation of Illegal Consumption Behavior
Authors: U. S. Putro, L. Mayangsari, M. Siallagan, N. P. Tjahyani
Abstract:
The National Collective Management Institute (LKMN) in Indonesia stated that legal music products amounted to about 77.552.008 units while illegal music products amounted to about 22.0688.225 units in 1996, and these numbers have worsened every year since. Consequently, Indonesia was named one of the countries with the highest piracy levels in 2005. This study models people's decisions toward unlawful behavior, music content piracy in particular, using agent-based modeling and simulation (ABMS). The actors in the model constructed in this study are classified as legal consumers, illegal consumers, and neutral consumers. The decision toward piracy among the actors is a manifestation of the social norm, whose attributes are social pressure, peer pressure, social approval, and perceived prevalence of piracy. These influencing attributes fluctuate with the majority behavior in the agent's surroundings, i.e. its social network. Two main interventions are undertaken in the model, campaign and peer influence, leading to four scenarios in the simulation: a positively-framed descriptive norm message, a negatively-framed descriptive norm message, a positively-framed injunctive norm message with benefits, and a negatively-framed injunctive norm message with costs. Using NetLogo, the model is simulated in 30 runs with 10,000 iterations per run. The initial number of agents was set to 100, with a 95:5 proportion of illegal consumption, based on data indicating that 95% of music industry sales are pirated. The finding of this study is that the negatively-framed descriptive norm message has a worse, reversed effect on music piracy. The study shows that selecting a context-based campaign is the key to reducing the intention toward music piracy as unlawful behavior by increasing compliance awareness.
In the Indonesian context, the majority of people have actively engaged in music piracy as unlawful behavior, so people think that this illegal act is common. Providing information about how widespread and large the problem is could therefore encourage illegal consumption instead. The positively-framed descriptive norm message scenario works best at reducing music piracy numbers, as it focuses on supporting positive behavior and on the right perception of the phenomenon. Music piracy is not merely an economic but rather a social phenomenon, due to the underlying motivation of the actors, which has shifted toward community sharing. The indication of a misconception of value co-creation in the context of music piracy in Indonesia is also discussed. This study contributes theoretically by showing that understanding how social norms configure the behavioral decision-making process is essential to breaking down the phenomenon of unlawful behavior in the music industry. In practice, the study proposes that reward-based and context-based strategies are the most relevant for stakeholders in the music industry. Furthermore, the findings may generalize well beyond the music piracy context. As an emerging body of work that systematically constructs the backstage of how law and social factors affect the decision-making process, it will be interesting to see how the model performs in other decision-behavior situations.
Keywords: music piracy, social norm, behavioral decision-making, agent-based model, value co-creation
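The descriptive-norm mechanism described above can be sketched as a toy agent-based simulation. The study itself was built in NetLogo; this Python sketch only illustrates the core loop, and the peer-sample size, update rule, and campaign factor are all assumed parameters, not the paper's.

```python
import random

def run(n_agents=100, p_illegal=0.95, steps=2000, campaign=0.0, seed=1):
    """Toy descriptive-norm ABM: returns the final fraction of illegal consumers."""
    rng = random.Random(seed)
    n_ill = int(n_agents * p_illegal)
    agents = [1] * n_ill + [0] * (n_agents - n_ill)  # 1 = illegal consumer
    for _ in range(steps):
        i = rng.randrange(n_agents)
        # the agent observes 10 random peers (its "social network" here)
        peers = rng.sample(range(n_agents), 10)
        prevalence = sum(agents[j] for j in peers) / 10
        # adoption of piracy follows the perceived prevalence (descriptive
        # norm), attenuated by a campaign factor in [0, 1]
        agents[i] = 1 if rng.random() < prevalence * (1 - campaign) else 0
    return sum(agents) / n_agents

baseline = run()                      # no intervention
with_campaign = run(campaign=1.0)     # maximally effective campaign
```

Starting from the 95:5 split mentioned in the abstract, agents copy the perceived majority, which is why framing a message as "piracy is widespread" can backfire: it raises the very prevalence signal that drives adoption.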
Procedia PDF Downloads 188
195 Mining in Peru and Local Governance: Assessing the Contribution of CSR Projects
Authors: Sandra Carrillo Hoyos
Abstract:
Mining activities in South America have grown significantly during the last decades, given the abundance of natural resources, the governmental policies implemented to incentivize foreign investment, and the boom in international prices for metals and oil between 2002 and 2008. While this context allowed the region to occupy a leading position among the top mineral producers in the world, it has also meant an increase in socio-environmental conflicts, which have generated costs and negative impacts not only for the companies but especially for governments and local communities. During the last decade, the mining sector in Peru has faced social resistance from a large number of communities, which began organizing actions against the implementation of large investment projects. The dissatisfaction has resulted in persistent socio-environmental conflicts associated with mining activities, some of which have never been resolved. In order to prevent these socio-environmental conflicts and obtain a social license from local communities, most mining companies have developed diverse initiatives within the framework of corporate social responsibility (CSR) policies and practices. This paper assesses the mining sector's contribution to local development management over the last decade, as part of CSR strategies as well as the policies promoted by the Peruvian State. The assessment found that, initially, these initiatives were based on a philanthropic approach and were reactions to pressure from local stakeholders, aimed at maintaining the consent to operate from the surrounding communities and thereby creating a harmonious atmosphere for operations.
Due to the weak presence of the State, such practices have increased the expectations of communities regarding the participation of mining companies in solving structural development problems, especially those related to primary needs, infrastructure, education, and health. In other words, this paper analyzes to what extent these initiatives have promoted local empowerment for development planning and integrated management of natural resources from a territorial approach. From this perspective, the analysis demonstrates that, while the design and planning of social investment initiatives have improved due to the sector's sustainability approach, many companies have developed actions beyond their competence during this process. In some cases, the referenced actions have generated dependency among communities, even though this relationship has not exempted the companies from conflict situations with unfortunate consequences. Furthermore, the social programs developed have not necessarily had a significant impact on improving the quality of life of the affected populations. In fact, the regions with the greatest mining resources and investment face a situation of poverty and high dependency on mining production. In spite of the revenues derived from the mining industry, local governments have not been able to translate the royalties into sustainable development opportunities. For this reason, this paper suggests some challenges for the mining sector's contribution to local development, based on best practices and lessons learned from a benchmarking of leading mining companies.
Keywords: corporate social responsibility, local development, mining, socio-environmental conflict
Procedia PDF Downloads 408
194 Improvement in the Photocatalytic Activity of Nanostructured Manganese Ferrite-Type of Materials by Mechanochemical Activation
Authors: Katerina Zaharieva, Katya Milenova, Zara Cherkezova-Zheleva, Alexander Eliyas, Boris Kunev, Ivan Mitov
Abstract:
The synthesized nanosized manganese ferrite-type samples have been tested as photocatalysts in the reaction of oxidative degradation of the model contaminant Reactive Black 5 (RB5) dye in aqueous solutions under UV irradiation. As is known, this azo dye is applied in the textile-coloring industry and is discharged into waterways, causing pollution. The co-precipitation procedure has been used for the synthesis of the manganese ferrite-type materials: Sample 1 - Mn0.25Fe2.75O4, Sample 2 - Mn0.5Fe2.5O4 and Sample 3 - MnFe2O4, from 0.03M aqueous solutions of MnCl2•4H2O, FeCl2•4H2O and/or FeCl3•6H2O and 0.3M NaOH in appropriate amounts. The mechanochemical activation of the co-precipitated ferrite-type samples has been performed in argon (Samples 1 and 2) or in air atmosphere (Sample 3) for 2 hours at a milling speed of 500 rpm. The mechanochemical treatment has been carried out in a high-energy planetary ball mill, type PM 100, Retsch, Germany. The mass ratio between balls and powder was 30:1. As a result, mechanochemically activated Sample 4 - Mn0.25Fe2.75O4, Sample 5 - Mn0.5Fe2.5O4 and Sample 6 - MnFe2O4 have been obtained. The synthesized manganese ferrite-type photocatalysts have been characterized by the X-ray diffraction method and Moessbauer spectroscopy. The registered X-ray diffraction patterns and Moessbauer spectra of the co-precipitated ferrite-type materials show the presence of manganese ferrite and an additional akaganeite phase. The presence of manganese ferrite and small amounts of iron phases is established in the mechanochemically treated samples. The calculated average crystallite size of the manganese ferrites varies within the range 7 – 13 nm. This result is confirmed by the Moessbauer study. The registered spectra show superparamagnetic behavior of the prepared materials at room temperature. The photocatalytic investigations have been made using polychromatic UV-A light lamp (Sylvania BLB, 18 W) illumination with a wavelength maximum at 365 nm.
The intensity of light irradiation upon the manganese ferrite-type photocatalysts was 0.66 mW.cm-2. The photocatalytic reaction of oxidative degradation of RB5 dye was carried out in a semi-batch slurry photocatalytic reactor with 0.15 g of ferrite-type powder and 150 ml of 20 ppm dye aqueous solution, under magnetic stirring at a rate of 400 rpm and a continuously fed air flow. The samples achieved adsorption-desorption equilibrium in the dark period for 30 min, and then the UV light was turned on. At regular time intervals, aliquot parts of the suspension were taken out and centrifuged to separate the powder from the solution. The residual concentrations of dye were established by a UV-Vis absorbance single-beam spectrophotometer CamSpec M501 (UK), measuring in the wavelength region from 190 to 800 nm. The photocatalytic measurements determined that the apparent pseudo-first-order rate constants, calculated from the linear slopes obtained by fitting to a first-order kinetic equation, increase in the following order: Sample 3 (1.1×10-3 min-1) < Sample 1 (2.2×10-3 min-1) < Sample 2 (3.3×10-3 min-1) < Sample 4 (3.8×10-3 min-1) < Sample 6 (11×10-3 min-1) < Sample 5 (15.2×10-3 min-1). The mechanochemically activated manganese ferrite-type photocatalyst samples show a significantly higher degree of oxidative degradation of RB5 dye after 120 minutes of UV light illumination in comparison with the co-precipitated ferrite-type samples: Sample 5 (92%) > Sample 6 (91%) > Sample 4 (63%) > Sample 2 (53%) > Sample 1 (42%) > Sample 3 (15%). Summarizing the obtained results, we conclude that mechanochemical activation leads to a significant enhancement of the degree of oxidative degradation of the RB5 dye and of the photocatalytic activity of the tested manganese ferrite-type catalyst samples under our experimental conditions.
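As a side note on the analysis step, fitting an apparent pseudo-first-order rate constant from concentration-time data can be sketched as below; the concentration values used here are illustrative placeholders, not the measured data from this study.

```python
import numpy as np

# Hypothetical RB5 concentration readings (ppm) at sampling times (min);
# illustrative values only, chosen to give k near the reported magnitudes.
t = np.array([0.0, 30.0, 60.0, 90.0, 120.0])
c = np.array([20.0, 12.8, 8.2, 5.2, 3.3])

# Pseudo-first-order kinetics: ln(C0/C) = k * t, so k is the least-squares
# slope of ln(C0/C) versus t through the origin.
y = np.log(c[0] / c)
k_app = float(np.sum(t * y) / np.sum(t * t))  # apparent rate constant, min^-1

# Degree of degradation after the final sampling time (here 120 min).
degradation = float((c[0] - c[-1]) / c[0] * 100.0)  # percent removed
```

With these invented readings the fitted constant comes out on the order of 15×10-3 min-1, i.e., the same scale as the values reported for the activated samples.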
The mechanochemically activated Mn0.5Fe2.5O4 ferrite-type material displays the highest photocatalytic activity (15.2×10-3 min-1) and degree of oxidative degradation of the RB5 dye (92%) compared to the other synthesized samples. In particular, a significant improvement in the degree of oxidative degradation of RB5 dye (91%) has been determined for the mechanochemically treated MnFe2O4 ferrite-type sample, which has the highest extent of substitution of iron ions by manganese ions, relative to the co-precipitated MnFe2O4 sample (15%). The mechanochemically activated manganese ferrite-type samples show good photocatalytic properties in the reaction of oxidative degradation of RB5 azo dye in aqueous solutions and could find potential application for dye removal from wastewaters originating from the textile industry.
Keywords: nanostructured manganese ferrite-type materials, photocatalytic activity, Reactive Black 5, water treatment
Procedia PDF Downloads 347
193 Investigation of Processing Conditions on Rheological Features of Emulsion Gels and Oleogels Stabilized by Biopolymers
Authors: M. Sarraf, J. E. Moros, M. C. Sánchez
Abstract:
Oleogels are self-standing systems that are able to trap edible liquid oil in a three-dimensional network, and they help to reduce fat content by structuring oil through the crystallization of oleogelators. There are different ways to achieve oleogelation and oil structuring, including direct dispersion, structured biphasic systems, oil sorption, and the indirect (emulsion-template) method. The selection of processing conditions, as well as the composition of the oleogel, is essential to obtain a stable oleogel with characteristics suitable for its purpose. In this sense, polysaccharides are among the ingredients widely used in food products to produce oleogels and emulsions. Basil seed gum (BSG), obtained from Ocimum basilicum, is a new native polysaccharide for the food industry, exhibiting high viscosity and pseudoplastic behavior because of its high molecular weight. Also, proteins can stabilize oil in water due to the presence of amino and carboxyl moieties that result in surface activity. Whey proteins are widely used in the food industry because they are available, cheap ingredients with nutritional and functional characteristics, acting as emulsifiers and gelling agents with thickening and water-binding capacity. In general, the interaction of proteins and polysaccharides has a significant effect on food structures and their stability, such as the texture of dairy products, by controlling the interactions in macromolecular systems. Using edible oleogels for oil structuring helps the targeted delivery of a component trapped in the structural network. Therefore, the development of an efficient oleogel is essential in the food industry. A complete understanding of the important factors, such as the oil phase ratio, the processing conditions, and the biopolymer concentrations that affect the formation and stability of the emulsion, can provide crucial information for the production of a suitable oleogel.
In this research, the effects of the oil concentration and of the pressure used in the manufacture of the emulsion prior to obtaining the oleogel have been evaluated through the analysis of droplet size and the rheological properties of the obtained emulsions and oleogels. The results show that emulsions prepared in the high-pressure homogenizer (HPH) at higher pressure values have smaller droplet sizes and higher uniformity in the size distribution curve. On the other hand, in relation to the rheological characteristics of the emulsions and oleogels obtained, the predominantly elastic character of the systems must be noted, as they present values of the storage modulus higher than those of the loss modulus, also showing an important plateau zone, typical of structured systems. In the same way, analysis of steady-state viscous flow tests on both emulsions and oleogels confirms that, once again, the pressure used in the homogenizer is an important factor for obtaining emulsions with adequate droplet size and the subsequent oleogel. Thus, various routes for trapping oil inside a biopolymer matrix with adjustable mechanical properties could be applied to create the three-dimensional network needed to absorb the oil and form the oleogel.
Keywords: basil seed gum, particle size, viscoelastic properties, whey protein
Procedia PDF Downloads 66
192 Optimizing Data Transfer and Processing in Multi-Cloud Environments for Big Data Workloads
Authors: Gaurav Kumar Sinha
Abstract:
In an era defined by the proliferation of data and the utilization of cloud computing environments, the efficient transfer and processing of big data workloads across multi-cloud platforms have emerged as critical challenges. This research paper embarks on a comprehensive exploration of the complexities associated with managing and optimizing big data in a multi-cloud ecosystem. The foundation of this study is rooted in the recognition that modern enterprises increasingly rely on multiple cloud providers to meet diverse business needs, enhance redundancy, and reduce vendor lock-in. As a consequence, managing data across these heterogeneous cloud environments has become intricate, necessitating innovative approaches to ensure data integrity, security, and performance. The primary objective of this research is to investigate strategies and techniques for enhancing the efficiency of data transfer and processing in multi-cloud scenarios. It recognizes that big data workloads are characterized by their sheer volume, variety, velocity, and complexity, making traditional data management solutions insufficient for harnessing the full potential of multi-cloud architectures. The study commences by elucidating the challenges posed by multi-cloud environments in the context of big data. These challenges encompass data fragmentation, latency, security concerns, and cost optimization. To address these challenges, the research explores a range of methodologies and solutions. One of the key areas of focus is data transfer optimization. The paper delves into techniques for minimizing data movement latency, optimizing bandwidth utilization, and ensuring secure data transmission between different cloud providers. It evaluates the applicability of dedicated data transfer protocols, intelligent data routing algorithms, and edge computing approaches in reducing transfer times. Furthermore, the study examines strategies for efficient data processing across multi-cloud environments.
It acknowledges that big data processing requires distributed and parallel computing capabilities that span across cloud boundaries. The research investigates containerization and orchestration technologies, serverless computing models, and interoperability standards that facilitate seamless data processing workflows. Security and data governance are paramount concerns in multi-cloud environments. The paper explores methods for ensuring data security, access control, and compliance with regulatory frameworks. It considers encryption techniques, identity and access management, and auditing mechanisms as essential components of a robust multi-cloud data security strategy. The research also evaluates cost optimization strategies, recognizing that the dynamic nature of multi-cloud pricing models can impact the overall cost of data transfer and processing. It examines approaches for workload placement, resource allocation, and predictive cost modeling to minimize operational expenses while maximizing performance. Moreover, this study provides insights into real-world case studies and best practices adopted by organizations that have successfully navigated the challenges of multi-cloud big data management. It presents a comparative analysis of various multi-cloud management platforms and tools available in the market.
Keywords: multi-cloud environments, big data workloads, data transfer optimization, data processing strategies
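The workload placement idea discussed above can be sketched minimally as a cost-minimizing assignment; the provider names and per-GB prices below are invented for illustration and are not drawn from any real cloud price list.

```python
# Toy multi-cloud placement: assign each workload to the provider that
# minimizes compute cost plus egress cost for moving its input data from
# the cloud where the data currently lives. All names/rates are assumptions.
COMPUTE_COST = {"cloud_a": 0.045, "cloud_b": 0.050, "cloud_c": 0.040}  # $/GB processed
EGRESS_COST = {"cloud_a": 0.09, "cloud_b": 0.08, "cloud_c": 0.11}      # $/GB moved out

def place(workloads):
    """workloads: list of (name, data_home, size_gb) -> {name: (provider, cost)}."""
    plan = {}
    for name, home, size in workloads:
        best = None
        for provider, cpu_rate in COMPUTE_COST.items():
            # Moving data costs egress from its home cloud; staying is free.
            transfer = 0.0 if provider == home else EGRESS_COST[home] * size
            cost = cpu_rate * size + transfer
            if best is None or cost < best[1]:
                best = (provider, cost)
        plan[name] = best
    return plan

plan = place([("etl", "cloud_a", 500.0), ("train", "cloud_c", 200.0)])
```

Even this toy version shows the trade-off the paper describes: a cheaper compute rate elsewhere is only worth taking when it outweighs the egress charge for moving the data.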
Procedia PDF Downloads 69
191 Scenario-Based Scales and Situational Judgment Tasks to Measure the Social and Emotional Skills
Authors: Alena Kulikova, Leonid Parmaksiz, Ekaterina Orel
Abstract:
Social and emotional skills are considered by modern researchers as predictors of a person's success, both in specific areas of activity and in life as a whole. The popularity of this scientific direction has ensured the emergence of a large number of practices aimed at developing and evaluating socio-emotional skills. Assessment of social and emotional development is carried out at the national level, as well as at the level of individual regions and institutions. Although many of the existing social and emotional skills assessment tools are quite convenient and reliable, more and more new technologies and task formats are appearing that improve the basic characteristics of such tools. Thus, the goal of the current study is to develop a tool for assessing social and emotional skills such as emotion recognition, emotion regulation, empathy, and a culture of self-care. To develop the tool, a Rasch-Gutman scenario-based approach was used. This approach has shown its reliability and merit for measuring various complex constructs: parental involvement; teacher practices that support cultural diversity and equity; willingness to participate in the life of the community after psychiatric rehabilitation; educational motivation; and others. To assess emotion recognition, we used a situational judgment task based on the OCC (Ortony, Clore, and Collins) theory of emotions. The main advantage of these two approaches compared to classical Likert scales is that they reduce social desirability in answers. A field test to check the psychometric properties of the developed instrument was conducted. The instrument was developed for the presidential autonomous non-profit organization "Russia - Land of Opportunity" for nationwide soft skills assessment among higher education students. The sample for the field test consisted of 500 students aged from 18 to 25 (mean = 20; standard deviation 1.8), 71% female.
67% of the students were only studying and not currently working. The sample also comprised 500 employed adults aged from 26 to 65 (mean = 42.5; SD 9), 57% female. Analysis of the psychometric characteristics of the scales was carried out using Item Response Theory (IRT) methods. A one-parameter Rating Scale Model (RSM) and the Graded Response Model (GRM) of modern test theory were applied. The GRM is a polytomous extension of the dichotomous two-parameter model of modern test theory (2PL), based on the cumulative logit function for modeling the probability of a correct answer. The validity of the developed scales was assessed using correlation analysis and a multitrait-multimethod matrix (MTMM). The developed instrument showed good psychometric quality and can be used by HR specialists or educational management. The detailed results of the psychometric study of the quality of the instrument, including the functioning of the tasks of each scale, will be presented, and the results of the validity study by MTMM analysis will be discussed.
Keywords: social and emotional skills, psychometrics, MTMM, IRT
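The cumulative logit formulation behind the GRM mentioned above can be illustrated with a minimal sketch; the item parameters below are arbitrary example values, not estimates from this study.

```python
import math

def grm_probs(theta, a, b):
    """Graded Response Model category probabilities for one item.

    theta: person ability; a: item discrimination; b: ordered category
    thresholds. P(X >= k) is a logistic (cumulative logit) in theta, and
    category probabilities are differences of adjacent cumulative terms.
    """
    def p_star(bk):  # P(X >= k) under the 2PL-style cumulative logit
        return 1.0 / (1.0 + math.exp(-a * (theta - bk)))

    cum = [1.0] + [p_star(bk) for bk in b] + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(b) + 1)]

# Example: a 4-category item (3 thresholds) for a person slightly above average.
probs = grm_probs(theta=0.5, a=1.2, b=[-1.0, 0.0, 1.0])
```

Because the thresholds are ordered, the cumulative terms decrease and the differences are guaranteed to be proper probabilities summing to one.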
Procedia PDF Downloads 76
190 Logic of Appearance vs Explanatory Logic: A Systemic Functional Linguistics Approach to the Evolution of Communicative Strategies in the European Union Institutional Discourse
Authors: Antonio Piga
Abstract:
The issue of European cultural identity became a prominent topic of discussion among political actors in the wake of the unsuccessful referenda held in France and the Netherlands in May and June 2005. The 'period of reflection' announced by the European Council at the end of June 2005 provided an opportunity for the implementation of several initiatives and programmes designed to 'bridge the gap' between the EU institutions and their citizens. Specific programmes were designed with the objective of enhancing the European Commission's external communication of its activities. Subsequently, further plans for democracy, debate, and dialogue were devised with the objective of fostering open and extensive discourse between EU institutions and citizens. Further documentation on communication policy emphasised the necessity of developing linguistic techniques to re-engage disenchanted or uninformed citizens with the European project. It was observed that the European Union is perceived as a 'faceless' entity, which is attributed to the absence of a distinct public identity vis-à-vis its institutions. This contribution presents an analysis of a collection of informative publications about the European Union entitled "Europe on the Move". This collection of booklets provides comprehensive information about the European Union, including its historical origins, core values, and historical development, as well as its achievements, strategic objectives, policies, and operational procedures. The theoretical framework adopted for the longitudinal linguistic analysis of EU discourse is that of Systemic Functional Linguistics (SFL). In more detail, this study considers two basic systems of relations between clauses: firstly, the degree of interdependency (or taxis) and, secondly, the logico-semantic relation of expansion.
The former refers to the structural markers of grammatical relations between clauses within sentences, namely paratactic, hypotactic and embedded relations. The latter pertains to the various logico-semantic relationships existing between the primary and secondary members of the clause nexus. These relationships concern how the secondary clause expands the primary clause, which may be achieved by (a) elaborating it, (b) extending it or (c) enhancing it. This study examines the impact of the European Commission's post-referendum communication methods on the portrayal of Europe, its role in facilitating the EU institutional process, and its articulation of a specific EU identity linked to distinct values. The research reveals that the language employed by the EU is evidently grounded in an explanatory logic, elucidating the rationale behind its institutionalised acts. Nevertheless, the minimal use of hypotaxis in the post-referendum booklets, coupled with the inconsistent yet increasing ratio of parataxis to hypotaxis, may suggest a potential shift towards a logic of appearance, characterised by a predominant reliance on coordination and on additive and elaborative logico-semantic relations.
Keywords: systemic functional linguistics, logic of appearance, explanatory logic, interdependency, logico-semantic relation
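The parataxis-to-hypotaxis ratio underlying such a diachronic comparison is straightforward to compute once clause nexuses are annotated; the tag sequences below are invented stand-ins for annotated booklet data, shown only to make the measure concrete.

```python
from collections import Counter

# Hypothetical clause-nexus annotations for two texts: "para" = paratactic,
# "hypo" = hypotactic, following the SFL taxis distinction described above.
earlier_booklet = ["hypo", "para", "hypo", "hypo", "para", "hypo"]
later_booklet = ["para", "para", "hypo", "para", "para", "hypo", "para"]

def taxis_ratio(tags):
    """Ratio of paratactic to hypotactic clause nexuses in one text."""
    counts = Counter(tags)
    return counts["para"] / counts["hypo"]

r_earlier = taxis_ratio(earlier_booklet)
r_later = taxis_ratio(later_booklet)
shift_to_parataxis = r_later > r_earlier  # rising ratio = more coordination
```

A rising ratio across booklets is what would operationalize the drift towards coordination that the abstract associates with a logic of appearance.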
Procedia PDF Downloads 12
189 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference
Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade
Abstract:
In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion for which the limit in probability is identical to that of the normalized log-likelihood. This includes common special cases such as AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and compared to the bootstrap. Both methods give coverages close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC invoking Lindeberg-Feller type conditions for triangular arrays and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparison performance metrics hold as for the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics.
Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is accounting for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI). We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference on one parameter at a time and a small number of candidate models, this works well, whence the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected.
Keywords: model selection inference, generalized information criteria, post-model selection, asymptotic theory
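The central computational step, an upper quantile for the minimum of a jointly Gaussian vector of GIC statistics, can be sketched by Monte Carlo in place of the exact multivariate Gaussian integration the paper performs with the R package mvtnorm; the mean vector and covariance matrix below are assumed illustrative values, not quantities derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative asymptotic joint law for three candidate models' GIC
# statistics: means and a positive correlation structure induced by
# fitting all models to the same data (values are assumptions).
mu = np.array([0.0, 0.3, 0.8])
cov = np.array([[1.0, 0.6, 0.5],
                [0.6, 1.0, 0.6],
                [0.5, 0.6, 1.0]])

# Monte Carlo stand-in for the multivariate Gaussian integration step:
# draw joint GIC vectors, take the minimum over models, and read off an
# upper quantile, i.e., the upper endpoint of the uncertainty band.
draws = rng.multivariate_normal(mu, cov, size=200_000)
minima = draws.min(axis=1)
q95 = float(np.quantile(minima, 0.95))
```

Models whose GIC values fall below such a quantile would remain inside the uncertainty band, which is the "how far up the ranked list to look" question the abstract opens with.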
Procedia PDF Downloads 90
188 Wetting Characterization of High Aspect Ratio Nanostructures by Gigahertz Acoustic Reflectometry
Authors: C. Virgilio, J. Carlier, P. Campistron, M. Toubal, P. Garnier, L. Broussous, V. Thomy, B. Nongaillard
Abstract:
The wetting efficiency of microstructures or nanostructures patterned on Si wafers is a real challenge in integrated circuit manufacturing. In fact, bad or non-uniform wetting during wet processes limits chemical reactions and can lead to incomplete etching or cleaning inside the patterns and to device defectivity. This issue is increasingly important with transistor size shrinkage and mainly concerns high aspect ratio structures. Deep Trench Isolation (DTI) structures enabling pixel isolation in imaging devices are subject to this phenomenon. While low-frequency acoustic reflectometry is a well-known method for non-destructive testing applications, we have recently shown that it is also well suited to nanostructure wetting characterization in a higher frequency range. In this paper, we present a high-frequency acoustic reflectometry characterization of DTI wetting through a confrontation of experimental and modeling results. The proposed acoustic method is based on the evaluation of the reflection of a longitudinal acoustic wave generated by a 100 µm diameter ZnO piezoelectric transducer sputtered on the silicon wafer backside using MEMS technologies. The transducers have been fabricated to work at 5 GHz, corresponding to a wavelength of 1.7 µm in silicon. The DTI structures studied, manufactured on the wafer frontside, are crossing trenches 200 nm wide and 4 µm deep (aspect ratio of 20) etched into the Si wafer. In that case, the acoustic signal reflection occurs at the bottom and at the top of the DTI, enabling its characterization by monitoring the electrical reflection coefficient of the transducer. A Finite Difference Time Domain (FDTD) model has been developed to predict the behavior of the emitted wave. The model shows that the separation of the reflected echoes (top and bottom of the DTI) from different acoustic modes is possible at 5 GHz. A good correspondence between experimental and theoretical signals is observed.
The model enables the identification of the different acoustic modes. The evaluation of DTI wetting is then performed by focusing on the first reflected echo, obtained through the reflection at the Si bottom interface, where wetting efficiency is crucial. The reflection coefficient is measured with different water/ethanol mixtures (tunable surface tension) deposited on the wafer frontside. Two cases are studied: with and without PFTS hydrophobic treatment. In the untreated surface case, acoustic reflection coefficient values with water show that liquid imbibition is partial. In the treated surface case, the acoustic reflection is total with water (no liquid in the DTI). Impalement of the liquid occurs for a specific surface tension, but it is still partial for pure ethanol. The DTI bottom shape and local pattern collapse of the trenches can explain these incomplete wetting phenomena. The sensitivity of this high-frequency acoustic method, coupled with an FDTD propagative model, thus enables the local determination of the wetting state of a liquid on real structures. Partial wetting states for non-hydrophobic surfaces or low surface tension liquids are then detectable with this method.
Keywords: wetting, acoustic reflectometry, gigahertz, semiconductor
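The core of an acoustic FDTD scheme like the one the authors use can be sketched in one dimension; this is not their code, and the grid size, pulse shape, and sound speed (a rough value for longitudinal waves in silicon, with density normalized to 1) are assumptions chosen only to illustrate leapfrog pressure/velocity updates and echo inversion at a free boundary.

```python
import numpy as np

c = 8433.0             # approx. longitudinal sound speed in Si, m/s (assumed)
dx = 0.1e-6            # 0.1 um cells
dt = 0.5 * dx / c      # Courant number 0.5 -> stable time step
n = 400                # 40 um of material

p = np.zeros(n)        # pressure field
v = np.zeros(n + 1)    # particle velocity on a staggered grid
p[40:60] = np.hanning(20)  # smooth initial pressure pulse

for _ in range(1200):
    # Leapfrog updates (density normalized to 1): velocity from the
    # pressure gradient, then pressure from the velocity divergence.
    v[1:-1] += (dt / dx) * (p[1:] - p[:-1])
    p += (c ** 2) * (dt / dx) * (v[1:] - v[:-1])
    # Pressure-release (free) ends: echoes return with inverted sign.
    p[0] = p[-1] = 0.0
```

Tracking when the reflected pulse returns to a monitoring cell is the 1-D analogue of separating the top and bottom DTI echoes in the measurement described above.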
Procedia PDF Downloads 327
187 Seismic Analysis of Vertical Expansion Hybrid Structure by Response Spectrum Method Concern with Disaster Management and Solving the Problems of Urbanization
Authors: Gautam, Gurcharan Singh, Mandeep Kaur, Yogesh Aggarwal, Sanjeev Naval
Abstract:
The present ground reality of human suffering shows the evidence of wrong decisions taken to shape civilization and of irresponsibility throughout history. A strong positive will and the right sense of responsibility make the right civilizational structure, which affects both itself and the whole world. The present suffering of humanity reflects the failure of past decisions taken to shape a true culture with the right social structure of society, due to the unplanned system of Indian civilization, whose rapid population growth leaves it unable to face all kinds of problems and makes society the sufferer. India still suffers from disasters such as earthquakes, floods, droughts, and tsunamis, and countless disaster deaths have occurred from the beginning of humanity to the present time. In this research paper, our focus is on a disaster-resistant structure that offers a solution for densely populated urban areas through a high vertical expansion HYBRID STRUCTURE. Our effort is to analyse the reinforced concrete hybrid structure in different seismic zones; these concrete frames were analyzed using the response spectrum method to calculate and compare seismic displacement and drift. Seismic analysis by this method is generally based on the dynamic analysis of the building. The analysis results show that the reinforced concrete building in seismic Zone V has the maximum peak story shear, base shear, drift, and node displacement as compared to the analytical results for the reinforced concrete building in seismic Zones III and IV. These results indicate the need to follow structural drawings strictly at the construction site to make a HYBRID STRUCTURE. The case study deals with a 10-story vertical expansion hybrid frame structure in different zones, i.e., Zone III, Zone IV and Zone V, having columns of 0.45 x 0.36 m and beams of 0.6 x 0.36 m
with a total height of 30 m; to make the structure more stable, bracing techniques such as mega bracing and V-shaped bracing shall be applied. If such structural drawings and efforts are followed by builders and contractors, lives can be saved during earthquake disasters such as that at Bhuj (Gujarat State, India) on 26th January 2001, which resulted in more than 19,000 deaths. This kind of disaster-resistant structure has the capability to solve the problems of densely populated city areas through the utilization of area in a vertical expansion hybrid structure. We request the Government of India to make new plans and implement them to save lives from future disasters, rather than pursuing unnecessary development plans such as bullet trains.
Keywords: history, irresponsibility, unplanned social structure, humanity, hybrid structure, response spectrum analysis, drift, node displacement
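The zone-wise comparison above can be made concrete with the design horizontal seismic coefficient Ah = (Z/2)(I/R)(Sa/g), the formulation of IS 1893 (Part 1) commonly applied to Indian seismic Zones III-V; the zone factors below follow that code, while the importance factor, response reduction factor, and spectral value are assumed here purely for illustration.

```python
# Zone factors Z per IS 1893 (Part 1) for the zones compared in the study.
ZONE_FACTOR = {"III": 0.16, "IV": 0.24, "V": 0.36}

def ah(zone, importance=1.0, r_factor=5.0, sa_over_g=2.5):
    """Design horizontal seismic coefficient Ah = (Z/2) * (I/R) * (Sa/g).

    importance, r_factor, and sa_over_g are illustrative assumptions
    (ordinary building, special moment-resisting frame, short-period range).
    """
    return (ZONE_FACTOR[zone] / 2.0) * (importance / r_factor) * sa_over_g

coeffs = {zone: ah(zone) for zone in ZONE_FACTOR}
# Base shear Vb = Ah * W scales linearly with Ah, so for the same building
# weight W the demand ordering is Zone V > Zone IV > Zone III.
```

Since the zone factor for Zone V is 2.25 times that of Zone III, the same frame attracts proportionally larger base shear, consistent with the peak story shear, drift, and node displacement trends reported above.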
Procedia PDF Downloads 211
186 Gender, Agency, and Health: An Exploratory Study Using an Ethnographic Material for Illustrative Reasons
Authors: S. Gustafsson
Abstract:
The aim of this paper is to explore the connection between gender, agency, and health on personal and social levels over time. The use of gender as an analytical tool for health research has been shown to be useful to explore thoughts and ideas that are taken for granted, which have relevance for health. The paper highlights the following three issues. There are multiple forms of femininity and masculinity. Agency and social structure are closely related and referred to in this paper as 'gender agency'. Gender is illuminated as a product of history but also treated as a social factor and a producer of history. As a prominent social factor in the process of shaping living conditions, gender is highlighted as being significant for understanding health. To make health explicit as a dynamic and complex concept and not merely the opposite of disease requires a broader alliance with feminist theory and a post-Bourdieusian framework. A personal story, included with other ethnographic material about women’s networking in rural Sweden, is used as an empirical illustration. Ethnographic material was chosen for its ability to illustrate historical, local, and cultural ways of doing gendered and capitalized health. New concepts characterize ethnography, exemplified in this study by 'processes of transformation'. The semi-structured interviews followed an interview guide drafted with reference to the background theory of gender. The interviews lasted about an hour and were recorded and transcribed verbatim. The transcribed interviews and the author’s field notes formed the basis for the writing up of this paper. Initially, the participants' interests in weaving, sewing, and various handicrafts became obvious foci for networking activities and seemed at first to shape compliance with patriarchy, which generally does the opposite of promoting health. However, a significant event disrupted the stability of this phenomenon. 
What was permissible for the women began to crack, and new spaces opened up. By exploiting these new spaces, the participants found opportunities to try out alternatives to emphasized femininity. Over time, they began combining feminized activities with degrees of masculinity, as leadership became part of the activities. In response, masculine enactment was gradually transformed and became increasingly gender neutral. As the tasks became more gender neutral, the activities assumed a more formal character, and the women stretched the limits of their capacity by enacting gender agency, a process the participants referred to as 'personal growth' and described as health promoting. What was described in terms of 'personal growth' can be interpreted as the effect of a raised status. Participation in women’s networking strengthened the participants’ structural position; more specifically, it was the gender-neutral position that was rewarded. To clarify the connection between gender, agency, and health on personal and social levels over time, the concept of 'processes of transformation' is used. This concept is suggested as a dynamic equivalent to habitus. Health is thus seen as resulting from situational access to social recognition, prestige, capital assets, and, not least, meanings of gender.
Keywords: a cross-gender bodily hexis, gender agency, gender as analytical tool, processes of transformation
Procedia PDF Downloads 159
185 Modeling Taxane-Induced Peripheral Neuropathy Ex Vivo Using Patient-Derived Neurons
Authors: G. Cunningham, E. Cantor, X. Wu, F. Shen, G. Jiang, S. Philips, C. Bales, Y. Xiao, T. R. Cummins, J. C. Fehrenbacher, B. P. Schneider
Abstract:
Background: Taxane-induced peripheral neuropathy (TIPN) is the most devastating survivorship issue for patients receiving taxane therapy. Dose reductions due to TIPN in the curative setting lead to inferior outcomes for African American patients, as prior research has shown that this group is more susceptible to developing severe neuropathy. The mechanistic underpinnings of TIPN, however, have not been fully elucidated. While it would be appealing to use primary tissue to study the development of TIPN, procuring nerves from patients is not realistically feasible, as nerve biopsies are painful and may result in permanent damage. Our laboratory has therefore investigated paclitaxel-induced neuronal morphological and molecular changes using an ex vivo model of human induced pluripotent stem cell (iPSC)-derived neurons. Methods: iPSCs are undifferentiated, indefinitely self-renewing cells that can be generated from a patient’s somatic cells, such as peripheral blood mononuclear cells (PBMCs). We successfully reprogrammed PBMCs into iPSCs using the Erythroid Progenitor Reprogramming Kit (STEMCELL Technologies™); pluripotency was verified by flow cytometry. iPSCs were then induced into neurons using a differentiation protocol that bypasses the neural progenitor stage and uses selected small-molecule modulators of key signaling pathways (SMAD, Notch, and FGFR1 inhibition, and Wnt activation). Results: Flow cytometry revealed that expression of the core pluripotency transcription factors Nanog, Oct3/4, and Sox2 in our iPSCs overlapped with that of the commercially obtained pluripotent cell line UCSD064i-20-2. Trilineage differentiation of the iPSCs was confirmed by immunofluorescent imaging with germ-layer-specific markers: Nestin and Pax6 for ectoderm, Ncam and Brachyury for mesoderm, and Sox17 and FoxA2 for endoderm. The sensory neuron markers β-III tubulin and Peripherin were used to stain the cells and confirm the maturity of the iPSC-derived neurons.
Patch-clamp electrophysiology and calcitonin gene-related peptide (CGRP) release data supported the functionality of the induced neurons and indicated the timing at which downstream assays could be performed (week 4 post-induction). We also performed a cell viability assay and fluorescence-activated cell sorting (FACS) using four cell-surface markers (CD184, CD44, CD15, and CD24) to select a neuronal population. At least 70% of the cells in the isolated neuron population were viable. Conclusion: These iPSC-derived neurons recapitulate mature neuronal phenotypes and demonstrate functionality, and thus represent a patient-derived ex vivo neuronal model for investigating the molecular mechanisms of clinical TIPN.
Keywords: chemotherapy, iPSC-derived neurons, peripheral neuropathy, taxane, paclitaxel
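The marker-based selection and viability readout described above can be sketched computationally. The following minimal Python illustration is a hedged sketch: the abstract names the four surface markers (CD184, CD44, CD15, CD24) but not the gating logic, so the gating rule, the per-event data, and the function names here are hypothetical, not the study's actual strategy.

```python
# Hypothetical sketch of FACS-style gating and viability calculation.
# Marker names come from the abstract; the gating rule (CD24+ and negative
# for CD184/CD44/CD15) and the example events are illustrative assumptions.

def gate_neurons(events):
    """Select events matching a hypothetical neuronal surface-marker profile."""
    return [
        e for e in events
        if e["CD24"] and not (e["CD184"] or e["CD44"] or e["CD15"])
    ]

def viability(population):
    """Fraction of gated events that are viable (e.g., viability-dye negative)."""
    if not population:
        return 0.0
    return sum(e["viable"] for e in population) / len(population)

# Toy event table: three events match the neuronal profile, one does not.
events = [
    {"CD184": False, "CD44": False, "CD15": False, "CD24": True,  "viable": True},
    {"CD184": False, "CD44": False, "CD15": False, "CD24": True,  "viable": True},
    {"CD184": False, "CD44": False, "CD15": False, "CD24": True,  "viable": False},
    {"CD184": True,  "CD44": False, "CD15": False, "CD24": False, "viable": True},
]

neurons = gate_neurons(events)
print(f"{viability(neurons):.0%} viable in gated population")  # 2 of 3 gated
```

In a real analysis the per-event marker values would be continuous fluorescence intensities thresholded against controls rather than booleans; the boolean form is used here only to keep the gating arithmetic visible.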
Procedia PDF Downloads 122