Search results for: social influence and social identity

220 Modern Architecture and the Scientific World Conception

Authors: Sean Griffiths

Abstract:

Introduction: This paper examines the expression of ‘objectivity’ in architecture in the context of the post-war rejection of this concept. It aims to re-examine the question in light of the assault on truth characterizing contemporary culture and of the unassailable truth of the climate emergency. The paper analyses the search for objective truth as it was prosecuted in the Modern Movement in the early 20th century, looking at the extent to which this quest was successful in contributing to the development of a radically new, politically informed architecture and the extent to which its particular interpretation of objectivity limited that development. The paper studies the influence of the Vienna Circle philosophers Rudolf Carnap and Otto Neurath on the pedagogy of the Bauhaus and the architecture of the Neue Sachlichkeit in Germany. Their logical positivism sought to determine objective truths through empirical analysis, expressed in an austere formal language as part of a ‘scientific world conception’ which would overcome metaphysics and unverifiable mystification. These ideas, and the concurrent prioritizing of measurement as the determinant of environmental quality, became key influences on the socially driven architecture constructed in the 1920s and 30s by Bauhaus architects in numerous German cities. Methodology: The paper reviews the history of the early Modern Movement and summarizes accounts of the relationship between the Vienna Circle and the Bauhaus. It looks at key differences in the approaches Neurath and Carnap took to the achievement of their shared philosophical and political aims. It analyses how the adoption of Carnap’s foundationalism influenced the architectural language of modern architecture and compares, through a close reading of the structure of Neurath’s ‘protocol sentences,’ the latter’s alternative approach, speculating on the possibility that its adoption offered a different direction of travel for Modern Architecture. Findings: The paper finds that the adoption of Carnap’s foundationalism, while helping Modern Architecture forge a new visual language, ultimately limited its development and is implicated in its failure to escape the very metaphysics against which it had set itself. It speculates that Neurath’s relational, language-based approach to the issue of establishing objectivity has its architectural corollary in processes of revision and renovation, which offer new ways in which an ‘objective’ language of architecture might be developed in a manner that is more responsive to our present-day crisis. Conclusion: The philosophical principles of the Vienna Circle and the architects of the Modern Movement had much in common. Both contributed to radical historical departures which sought to instantiate a scientific world conception in their respective fields, which would attempt to banish mystification and metaphysics and would align itself with socialism. However, in adopting Carnap’s foundationalism as the theoretical basis for the new architecture, Modern Architecture not only failed to escape metaphysics but arguably closed off new avenues of development to itself. The adoption of Neurath’s more open-ended and interactive approach to objectivity offers possibilities for new conceptions of the expression of objectivity in architecture that might be better tailored to the multiple crises we face today.

Keywords: Bauhaus, logical positivism, Neue Sachlichkeit, rationalism, Vienna Circle

Procedia PDF Downloads 53
219 Influence Study of the Molar Ratio between Solvent and Initiator on the Reaction Rate of Polyether Polyols Synthesis

Authors: María José Carrero, Ana M. Borreguero, Juan F. Rodríguez, María M. Velencoso, Ángel Serrano, María Jesús Ramos

Abstract:

Flame retardants are incorporated in different materials in order to reduce the risk of fire, either by providing increased resistance to ignition or by acting to slow down combustion and thereby delay the spread of flames. In this work, polyether polyols with fire retardant properties were synthesized because of their wide application in polyurethane formulation. The combustion of polyurethanes depends primarily on the thermal properties of the polymer, the presence of impurities and formulation residue in the polymer, as well as the supply of oxygen. There are many types of flame retardants; most of them are phosphorus compounds of different nature and functionality. The addition of these compounds is the most common method for the incorporation of flame retardant properties. The employment of glycerol phosphate sodium salt as initiator for the polyol synthesis allows obtaining polyols with phosphate groups in their structure. However, some of the critical points of the use of glycerol phosphate salt are the lower reactivity of the salt and the necessity of a solvent (dimethyl sulfoxide, DMSO). Thus, the main aim of the present work was to determine the amount of solvent needed to achieve good solubility of the initiator salt. Although the anionic polymerization mechanism of polyether formation is well known, it seems convenient to clarify the role that DMSO plays at the starting point of the polymerization process. The catalyst deprotonates the hydroxyl groups of the initiator and, as a result, two water molecules and the glycerol phosphate alkoxide are formed. This alkoxide, together with DMSO, has to form a homogeneous mixture in which the initiator (solid) and the propylene oxide (PO) are soluble enough to interact with each other. The addition rate of PO increased when the solvent/initiator ratio was increased, which also shortened the initiation step. Furthermore, the molecular weight of the polyol decreased when higher solvent/initiator ratios were used, which revealed that a larger amount of salt was activated, initiating more chains of lower length but allowing more phosphate molecules to react and increasing the percentage of phosphorus in the final polyol. However, the final phosphorus content was lower than the theoretical one because only a fraction of the salt was activated. On the other hand, glycerol phosphate disodium salt was still partially insoluble at the studied DMSO proportions; thus, the recovery and reuse of this part of the salt for the synthesis of new flame retardant polyols was evaluated. With the recovered salt, the rate of addition of PO remained the same as with the commercial salt, but a shorter induction period was observed because the recovered salt presents a higher amount of deprotonated hydroxyl groups. Besides, according to molecular weight, polydispersity index, FT-IR spectrum and thermal stability, there were no differences between the two synthesized polyols. Thus, it is possible to use the recovered glycerol phosphate disodium salt in the same way as the commercial one.

Keywords: DMSO, fire retardants, glycerol phosphate disodium salt, recovered initiator, solvent

Procedia PDF Downloads 253
218 Investigation of the Trunk Inclination Positioning Angle on Swallowing and Respiratory Function

Authors: Hsin-Yi Kathy Cheng, Yan-Ying JU, Wann-Yun Shieh, Chin-Man Wang

Abstract:

Although the coordination of swallowing and respiration has been discussed widely, the influence of the positioning angle on swallowing and respiration during feeding has rarely been investigated. This study aimed to investigate the timing and coordination of swallowing and respiration at different seat inclination angles, with liquid and pudding bolus, in order to provide suggestions and guidelines for the design and development of a feedback-controlled seat angle adjustment device for back-adjustable wheelchairs. Twenty-six participants aged between 15 and 30 years without any signs of swallowing difficulty were included. The combination of seat inclinations and food types was randomly assigned, with three repetitions of each combination. The trunk inclination angle was adjusted using a commercialized positioning wheelchair. A total of 36 swallows were performed, with at least 30 seconds of rest between swallows. We used a self-developed wearable device to measure the submandibular muscle surface EMG, the movement of the thyroid cartilage, and the respiratory status of the nasal cavity. Our program auto-analyzed the onset and offset times, and the excursion and strength of the thyroid cartilage during its movement; the coordination between breathing and swallowing was also analyzed. Variables measured included the EMG duration (DsEMG), swallowing apnea duration (SAD), total excursion time (TET), duration of the 2nd deflection, FSR amplitude, onset latency, DsEMG onset, DsEMG offset, FSR onset, and FSR offset. These measurements were taken at four seat inclination angles (5°, 15°, 30°, 45°) and with three food contents (1 ml water, 10 ml water, and 5 ml pudding bolus) for each subject. The data collected for the different contents were compared. Descriptive statistics were used to describe the basic features of the data. Repeated-measures ANOVAs were used to analyze the differences in the dependent variables across the different seat inclination and food content combinations. The results indicated significant differences across seat inclinations, mostly between 5° and 45°, in all variables except FSR amplitude. They also indicated significant differences among food contents for almost all variables. A significant interaction between seat inclination and food content was only found for FSR offset. In summary, the current results indicate that it is easier for a subject to lean backward during swallowing than to sit upright, and that swallowing water is easier than swallowing pudding. The results of this study can serve as clinical guidance for proper feeding positions (such as wheelchair back angle adjustment) with different food contents. The same protocol can be applied to elderly participants or participants with physical disabilities. The ergonomic data would also provide references for assistive technology professionals and practitioners in device design and development.
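
A compact sketch of the repeated-measures analysis described above is given below, assuming the statsmodels library and simulated data; the column names, effect sizes and single observation per cell are hypothetical choices for illustration, not the study's dataset.

```python
# Sketch of a repeated-measures ANOVA (4 seat angles x 3 food contents per subject)
# using statsmodels on simulated data; 'dsemg' is a stand-in dependent variable.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
angles = [5, 15, 30, 45]
foods = ["water_1ml", "water_10ml", "pudding_5ml"]

rows = []
for subject in range(1, 27):                      # 26 participants
    for angle in angles:
        for food in foods:
            # simulated DsEMG duration with small angle and food effects
            value = 1.0 + 0.005 * angle + (0.2 if food == "pudding_5ml" else 0.0)
            rows.append({"subject": subject, "angle": angle, "food": food,
                         "dsemg": value + rng.normal(0, 0.1)})

df = pd.DataFrame(rows)
res = AnovaRM(df, depvar="dsemg", subject="subject", within=["angle", "food"]).fit()
print(res.anova_table)    # F and p-values for angle, food and their interaction
```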

Keywords: swallowing, positioning, assistive device, disability

Procedia PDF Downloads 48
217 Teleconnection between El Nino-Southern Oscillation and Seasonal Flow of the Surma River and Possibilities of Long Range Flood Forecasting

Authors: Monika Saha, A. T. M. Hasan Zobeyer, Nasreen Jahan

Abstract:

El Nino-Southern Oscillation (ENSO) is the interaction between the atmosphere and the ocean in the tropical Pacific which causes irregular warm/cold conditions in the tropical central and eastern Pacific Ocean. Due to the impact of climate change, ENSO events have become stronger in recent times, and it is therefore very important to study the influence of ENSO in climate studies. Bangladesh, lying in a low deltaic floodplain, experiences severe consequences of flooding every year. To reduce the damage of severe flooding events, non-structural measures such as flood forecasting can be helpful in taking adequate precautions and steps. Forecasting seasonal floods with a longer lead time of several months is a key component of flood damage control and water management. The objective of this research is to identify the strength of the teleconnection between ENSO and the river flow of the Surma and to examine the potential for long-lead flood forecasting in the wet season. The Surma is one of the major rivers of Bangladesh and is part of the Surma-Meghna river system. In this research, sea surface temperature (SST) has been used as the ENSO index, and the lead time is at least a few months, which is greater than the basin response time. The teleconnection has been assessed through correlation analysis between the July-August-September (JAS) flow of the Surma and the SST of the Nino 4 region for the corresponding months. The cumulative frequency distribution of the standardized JAS flow of the Surma has also been determined as part of assessing the teleconnection. Discharge data of the Surma river from 1975 to 2015 are used in this analysis, and a remarkable increase in the correlation between flow and ENSO has been observed from 1985 onwards. From the cumulative frequency distribution of the standardized JAS flow, it was found that in any year the JAS flow has approximately a 50% probability of exceeding the long-term average JAS flow. During El Nino years (the warm episode of ENSO), this probability of exceedance drops to 23%, while in La Nina years (the cold episode of ENSO) it increases to 78%. Discriminant analysis, known as 'categoric prediction', has been performed to assess the possibilities of long-lead flood forecasting. It helps to categorize the flow data (high, average and low) based on the classification of the predicted SST (warm, normal and cold). From the discriminant analysis, it has been found that for the Surma river the probability of a high flood in the cold period is 75% and the probability of a low flood in the warm period is 33%. A synoptic parameter, the forecasting index (FI), has also been calculated to judge the forecast skill and to compare different forecasts. This study will help the concerned authorities and stakeholders to take long-term water resources decisions and formulate policies on river basin management, which will reduce possible damage to life, agriculture, and property.
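
A minimal sketch of this kind of teleconnection analysis is shown below, assuming hypothetical input files for the JAS flow and Nino 4 SST series; the SST thresholds used to classify warm and cold episodes are illustrative choices, not those of the study.

```python
# Correlate JAS flow with Nino 4 SST and compute conditional exceedance
# probabilities for ENSO warm/cold episodes (placeholder file names).
import numpy as np
from scipy import stats

jas_flow = np.loadtxt("surma_jas_flow.txt")      # mean JAS discharge per year
nino4_sst = np.loadtxt("nino4_jas_sst.txt")      # mean JAS Nino 4 SST anomaly per year

r, p = stats.pearsonr(nino4_sst, jas_flow)
print(f"correlation r = {r:.2f}, p-value = {p:.3f}")

# Standardize the flow and estimate the probability of exceeding the long-term
# mean, conditioned on the ENSO phase (illustrative +/-0.5 degC thresholds).
z_flow = (jas_flow - jas_flow.mean()) / jas_flow.std()
warm = nino4_sst > 0.5
cold = nino4_sst < -0.5

for label, mask in [("all years", np.ones_like(warm, bool)), ("El Nino", warm), ("La Nina", cold)]:
    p_exceed = np.mean(z_flow[mask] > 0)
    print(f"{label}: P(JAS flow > long-term mean) = {p_exceed:.0%}")
```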

Keywords: El Nino-Southern Oscillation, sea surface temperature, Surma River, teleconnection, cumulative frequency distribution, discriminant analysis, forecasting index

Procedia PDF Downloads 118
216 Hybrid Data-Driven Drilling Rate of Penetration Optimization Scheme Guided by Geological Formation and Historical Data

Authors: Ammar Alali, Mahmoud Abughaban, William Contreras Otalvora

Abstract:

Optimizing the drilling process for cost and efficiency requires optimization of the rate of penetration (ROP). ROP is the measurement of the speed at which the wellbore is created, in units of feet per hour, and it is the primary indicator of drilling efficiency. Maximization of the ROP can indicate fast and cost-efficient drilling operations; however, high ROPs may induce unintended events, which may lead to nonproductive time (NPT) and higher net costs. The proposed ROP optimization solution is a hybrid, data-driven system that aims to improve the drilling process, maximize the ROP, and minimize NPT. The system consists of two phases: (1) utilizing existing geological and drilling data to train a prior model, and (2) making real-time adjustments of the controllable dynamic drilling parameters [weight on bit (WOB), rotary speed (RPM), and pump flow rate (GPM)] that directly influence the ROP. During the first phase of the system, geological and historical drilling data are aggregated. Then, the top-rated wells, in terms of high ROP instances, are identified. Those wells are filtered based on NPT incidents, and a cross-plot is generated for the controllable dynamic drilling parameters per ROP value. Subsequently, the parameter values (WOB, GPM, RPM) are calculated as a conditioned mean based on physical distance, following the Inverse Distance Weighting (IDW) interpolation methodology. The first phase concludes by producing a model of drilling best practices from the offset wells, prioritizing the optimum ROP value. This phase is performed before drilling commences. Starting with the model produced in phase one, the second phase runs an automated drill-off test, delivering adjustments in real time. Those adjustments are made by directing the driller to deviate two of the controllable parameters (WOB and RPM) by a small percentage (0-5%), following the Constrained Random Search (CRS) methodology. These minor incremental variations reveal new drilling conditions not explored before through offset wells. The data are then consolidated into a heat map as a function of ROP. A more optimal ROP performance is identified through the heat map and incorporated into the model. The validation process involved the selection of a planned well in an onshore oil field with hundreds of offset wells. The first-phase model was built by utilizing the data points from the top-performing historical wells (20 wells). The model allows drillers to enhance decision-making by leveraging existing data and blending it with live data in real time. An empirical relationship between the controllable dynamic parameters and ROP was derived using Artificial Neural Networks (ANN). The adjustments resulted in an improvement in ROP efficiency of over 20%, translating into at least a 10% saving in drilling costs. The novelty of the proposed system lies in its ability to integrate historical data, calibrate based on geological formations, and run real-time global optimization through CRS. Those factors position the system to work for any newly drilled well in a developing field.
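
The conditioned-mean step based on Inverse Distance Weighting lends itself to a short illustration. The sketch below is a generic IDW estimator with toy offset-well data, not the authors' implementation; the well coordinates and parameter values are hypothetical.

```python
# Estimate drilling parameters (WOB, RPM, GPM) at a target location as a
# distance-weighted mean of offset-well records (generic IDW, toy data).
import numpy as np

def idw_parameters(target_xy, offset_xy, offset_params, power=2.0):
    """offset_xy: (n, 2) well coordinates; offset_params: (n, 3) [WOB, RPM, GPM]."""
    d = np.linalg.norm(offset_xy - target_xy, axis=1)
    if np.any(d == 0):                        # target coincides with an offset point
        return offset_params[np.argmin(d)]
    w = 1.0 / d**power
    return (w[:, None] * offset_params).sum(axis=0) / w.sum()

# Toy example: three top-performing offset wells filtered for high ROP and no NPT.
offset_xy = np.array([[0.0, 0.0], [1.0, 0.5], [0.3, 1.2]])
offset_params = np.array([[25.0, 120.0, 800.0],   # WOB (klb), RPM, GPM
                          [28.0, 130.0, 820.0],
                          [22.0, 110.0, 780.0]])
print(idw_parameters(np.array([0.5, 0.5]), offset_xy, offset_params))
```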

Keywords: drilling optimization, geological formations, machine learning, rate of penetration

Procedia PDF Downloads 101
215 Potential Assessment and Techno-Economic Evaluation of Photovoltaic Energy Conversion System: A Case of Ethiopia Light Rail Transit System

Authors: Asegid Belay Kebede, Getachew Biru Worku

Abstract:

The Earth and its inhabitants face an existential threat as a result of severe manmade actions. Global warming and climate change have been the most apparent manifestations of this threat throughout the world, with increasingly intense heat waves, temperature rises, flooding, sea-level rise, ice sheet melting, and so on. One of the major contributors to this disaster is the ever-increasing production and consumption of energy, which is still primarily fossil-based and emits billions of tons of hazardous GHGs. The transportation industry is recognized as the biggest actor in terms of emissions, accounting for 24% of direct CO2 emissions and being one of the few worldwide sectors where CO2 emissions are still growing. Rail transportation, which includes everything from light rail transit to high-speed rail services, is regarded as one of the most efficient modes of transportation, accounting for 9% of total passenger travel and 7% of total freight transit. Nonetheless, there is still room for improvement in the transportation sector, which might be achieved by incorporating alternative and/or renewable energy sources. As a result of this rapidly changing global energy situation and rapidly dwindling fossil fuel supplies, we were driven to analyze the possibility of using renewable energy sources for traction applications. Even a small achievement in energy conservation or harnessing might significantly influence the total railway system and has the potential to transform the railway sector like never before. The paper therefore begins by assessing the potential for photovoltaic (PV) power generation on train rooftops and existing infrastructure such as railway depots, passenger stations, traction substation rooftops, and accessible land along rail lines. For this purpose, a method based on the Google Earth system (using Helioscopes software) is developed to assess the PV potential along rail lines and on train station roofs. The Addis Ababa light rail transit system (AA-LRTS) is used as a case study. The case study examines the electricity-generating potential and economic performance of photovoltaics installed on the AA-LRTS. As a consequence, the overall capacity of solar systems on all stations, including train rooftops, reaches 72.6 MWh per day, with an annual energy output of 10.6 GWh. Over a 25-year lifespan, the overall CO2 emission reduction and total profit from PV-AA-LRTS can reach 180,000 tons and 892 million Ethiopian birr, respectively. The PV-AA-LRTS has a 200% return on investment. All PV stations have a payback time of less than 13 years, and the price of solar-generated power is less than $0.08/kWh, which can compete with the benchmark price of coal-fired electricity. Our findings indicate that the PV-AA-LRTS has tremendous potential, with both energy and economic advantages.
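
As a rough illustration of the techno-economic indicators reported above (payback time, cost of generated electricity), a simple calculation is sketched below. Only the 10.6 GWh annual output is taken from the abstract; the capital cost, O&M cost and tariff are assumed placeholder values, not figures from the study.

```python
# Illustrative payback and levelized-cost arithmetic for a PV system of this size.
annual_energy_kwh = 10.6e6      # annual PV output reported for the system (10.6 GWh)
capex = 9.0e6                   # assumed capital cost, USD (hypothetical)
opex_per_year = 0.1e6           # assumed O&M cost, USD/year (hypothetical)
tariff = 0.10                   # assumed value of electricity, USD/kWh (hypothetical)
lifetime_years = 25

annual_revenue = annual_energy_kwh * tariff
simple_payback = capex / (annual_revenue - opex_per_year)
lcoe = (capex + opex_per_year * lifetime_years) / (annual_energy_kwh * lifetime_years)

print(f"simple payback ~ {simple_payback:.1f} years, LCOE ~ ${lcoe:.3f}/kWh")
```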

Keywords: sustainable development, global warming, energy crisis, photovoltaic energy conversion, techno-economic analysis, transportation system, light rail transit

Procedia PDF Downloads 60
214 C-Coordinated Chitosan Metal Complexes: Design, Synthesis and Antifungal Properties

Authors: Weixiang Liu, Yukun Qin, Song Liu, Pengcheng Li

Abstract:

Plant diseases can cause the death of crops, with great economic losses. In particular, these diseases are usually caused by pathogenic fungi. Metal fungicides are a type of pesticide that has the advantages of low cost, a broad antimicrobial spectrum and a strong sterilization effect. However, the frequent and widespread application of traditional metal fungicides has caused serious problems such as environmental pollution, outbreaks of mites and phytotoxicity. Therefore, it is critically necessary to discover new organic metal fungicide alternatives that have a low metal content, low toxicity, and little influence on mites. Chitosan, the second most abundant natural polysaccharide after cellulose, has been proved to have broad-spectrum antifungal activity against a variety of fungi. However, the use of chitosan has been limited due to its poor solubility and weaker antifungal activity compared with commercial fungicides. Therefore, in order to improve the water solubility and antifungal activity, many researchers have grafted active groups onto chitosan. The aim of the present work was to combine free metal ions with chitosan to prepare more potent antifungal chitosan derivatives. Thus, based on a condensation reaction, a chitosan derivative bearing an aminopyridine group was prepared and subsequently coordinated with cupric ions, zinc ions and nickel ions to synthesize chitosan metal complexes. Calculations by density functional theory (DFT) show that the copper ions and nickel ions underwent dsp2 hybridization, the zinc ions underwent sp3 hybridization, and all of them are coordinated by the carbon atom in the p-π conjugate group and the oxygen atoms in the acetate ion. The antifungal properties of the chitosan metal complexes against Phytophthora capsici (P. capsici), Gibberella zeae (G. zeae), Fusarium oxysporum (F. oxysporum) and Botrytis cinerea (B. cinerea) were also assayed. In addition, a plant toxicity experiment was carried out. The experiments indicated that the derivatives have significantly enhanced antifungal activity after metal ion complexation compared with the original chitosan. It was shown that 0.20 mg/mL of O-CSPX-Cu can inhibit the growth of P. capsici by 100% and 0.20 mg/mL of O-CSPX-Ni can inhibit the growth of B. cinerea by 87.5%. In general, their activities are better than those of the positive control oligosaccharides. The combination with the pyridine formyl groups seems to favor biological activity. Additionally, the ligand fashion was precisely analyzed, and the results confirmed that the carbon atoms of the p-π conjugate group and the oxygen atoms of the acetate ion are involved in the coordination of the metal ions. A phytotoxicity assay of O-CSPX-M was also conducted; unlike traditional metal fungicides, the metal complexes were not significantly toxic to the leaves of wheat. O-CSPX-Zn can even increase the chlorophyll content in wheat leaves at 0.40 mg/mL. This is mainly because chitosan itself promotes plant growth and counteracts the phytotoxicity of metal ions. The chitosan derivatives described here may lend themselves to future application studies in crop protection.

Keywords: coordination, chitosan, metal complex, antifungal properties

Procedia PDF Downloads 292
213 Identification of ω-3 Fatty Acids Using GC-MS Analysis in Extruded Spelt Product

Authors: Jelena Filipovic, Marija Bodroza-Solarov, Milenko Kosutic, Nebojsa Novkovic, Vladimir Filipovic, Vesna Vucurovic

Abstract:

Spelt wheat is a suitable raw material for extruded products such as pasta, special types of bread and other products with altered nutritional characteristics compared to conventional wheat products. During the process of extrusion, spelt is exposed to high temperature and high pressure, while the raw material is also mechanically treated by shear forces. Spelt wheat can be grown without the use of pesticides in harsh ecological conditions and in marginal areas of cultivation, so it can be used for organic and health-safe food. Pasta is one of the most popular foodstuffs, and its consumption has been observed to rise. Pasta quality depends mainly on the properties of the flour raw materials, especially protein content and quality, while starch properties are of lesser importance. Pasta is characterized by significant amounts of complex carbohydrates, low sodium and total fat, fiber, minerals, and essential fatty acids, and its nutritional value can be improved with additional functional components. Over the past few decades, wheat pasta has been successfully formulated using different ingredients to cater to health-conscious consumers who prefer a product rich in protein, healthy lipids and other health benefits. Flaxseed flour is used in the production of bakery and pasta products that have the properties of functional foods. However, it should be ensured that food products retain their technological and sensory quality despite the added flaxseed. Flaxseed contains important substances such as vitamins and mineral elements, and it is also an excellent source of fiber and one of the best sources of ω-3 fatty acids and lignin. In this paper, the quality and identification of an extruded spelt product with added flaxseed, which contributes positively to the nutritive and technological changes of the product, are investigated. ω-3 fatty acids are polyunsaturated essential fatty acids, and they must be taken with food to satisfy the recommended daily intake. Flaxseed flour was added in quantities of 10 g/100 g and 20 g/100 g of sample on a farina basis. It is shown that ω-3 fatty acids in pasta can be clearly distinguished from other fatty acids by gas chromatography with mass spectrometry. The addition of flaxseed flour influences the chemical composition of the pasta. The addition of flaxseed flour to spelt pasta in a quantity of 20 g/100 g significantly increases the share of ω-3 fatty acids, which results in an improved ω-6/ω-3 ratio of 1:2.4 and completely satisfies the minimum daily needs of ω-3 essential fatty acids (3.8 g/100 g) recommended by the FDA. Flaxseed flour influenced the pasta quality by increasing the hardness (2377.8 ± 13.3; 2874.5 ± 7.4; 3076.3 ± 5.9) and the work of shear (102.6 ± 11.4; 150.8 ± 11.3; 165.0 ± 18.9) and by changing the adhesiveness (11.8 ± 20.6; 9.98 ± 0.12; 7.1 ± 12.5) of the final product. The presented data point to good technological quality of spelt pasta with flaxseed and show that GC-MS analysis can be used in quality control for flaxseed identification. Acknowledgment: The research was financed by the Ministry of Education and Science of the Republic of Serbia (Project No. III 46005).

Keywords: GC-MS analysis, ω-3 fatty acids, flax seed, spelt wheat, daily needs

Procedia PDF Downloads 133
212 Mediating Role of 'Investment Recovery' and 'Competitiveness' on the Impact of Green Supply Chain Management Practices over Firm Performance: An Empirical Study Based on Textile Industry of Pakistan

Authors: Mehwish Jawaad

Abstract:

Purpose: The concept of green supply chain management (GrSCM) in the academic and research field is still thought to be in the development stage, especially in Asian emerging economies. The purpose of this paper is to contribute significantly to the first wave of empirical investigation of GrSCM practices and firm performance measures in Pakistan. The aim of this research is to develop a more holistic approach to investigating the impact of green supply chain management practices (eco-design, internal environmental management systems, green distribution, green purchasing and cooperation with customers) on multiple dimensions of firm performance measures (economic performance, environmental performance and operational performance), with a mediating role of investment recovery and competitiveness. This paper also serves as an initiative to identify whether the relationship between investment recovery and firm performance measures is mediated by competitiveness. Design/Methodology/Approach: This study is based on survey data collected from 272 ISO 14001-certified textile firms based in Lahore, Faisalabad, and Karachi which are involved in spinning, dyeing, printing or bleaching. A theoretical model was developed incorporating the constructs representing the green activities and firm performance measures of a firm. The data were analyzed using partial least squares structural equation modeling. Senior and mid-level managers provided the data, reflecting the degree to which their organizations deal with both internal and external stakeholders to improve the environmental sustainability of their supply chain. Findings: Of the 36 proposed hypotheses, 20 are considered valid and significant. The statistical results reveal that GrSCM practices positively impact environmental performance, followed by economic and operational performance. Investment recovery acts as a strong mediator between intra-organizational green activities and performance outcomes. The relationship between reverse logistics and performance outcomes is significantly mediated by competitiveness. The pressure originating from customers exerts a significant positive influence on the firm to adopt green practices, consequently leading to higher outcomes. Research Contribution/Originality: Underpinned by resource dependence theory, and as part of the first wave of investigation of the impact of green supply chains on performance outcomes in Pakistan, this study intends to make a prominent mark in the field of research. Investment recovery and competitiveness together are tested as mediators for the first time in this arena. Managerial implications: Practitioners are provided with a framework for assessing the synergistic impact of GrSCM practices on performance. Regular upgrading of accreditations and audit programs is the need of the hour. Making processes leaner through the sale of excess inventories and scrap helps the firm to work more efficiently and productively.
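
To make the mediation logic concrete, the sketch below bootstraps an indirect (a×b) effect on simulated data. It is only an OLS-based illustration of mediation testing under assumed effect sizes, not the PLS-SEM analysis used in the study, and the variable names are hypothetical.

```python
# Bootstrap test of an indirect effect: predictor -> mediator -> outcome.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 272                                            # sample size matching the survey
grscm = rng.normal(size=n)                         # predictor (GrSCM practices)
recovery = 0.5 * grscm + rng.normal(size=n)        # mediator (investment recovery)
performance = 0.4 * recovery + 0.2 * grscm + rng.normal(size=n)   # outcome

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                          # x -> m
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]    # m -> y | x
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                    # resample with replacement
    boot.append(indirect_effect(grscm[idx], recovery[idx], performance[idx]))

ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
print(f"bootstrap 95% CI for the indirect effect: [{ci_lo:.3f}, {ci_hi:.3f}]")
```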

Keywords: economic performance, environmental performance, green supply chain management practices, operational performance, sustainability, textile sector of Pakistan

Procedia PDF Downloads 199
211 Mixed Monolayer and PEG Linker Approaches to Creating Multifunctional Gold Nanoparticles

Authors: D. Dixon, J. Nicol, J. A. Coulter, E. Harrison

Abstract:

The ease with which they can be functionalized, combined with their excellent biocompatibility, makes gold nanoparticles (AuNPs) ideal candidates for various applications in nanomedicine. Indeed, several promising treatments are currently undergoing human clinical trials (CYT-6091 and Auroshell). A successful nanoparticle treatment must first evade the immune system, then accumulate within the target tissue, before entering the diseased cells and delivering the payload. In order to create a clinically relevant drug delivery system, contrast agent or radiosensitizer, it is generally necessary to functionalize the AuNP surface with multiple groups, e.g., polyethylene glycol (PEG) for enhanced stability, targeting groups such as antibodies, peptides for enhanced internalization, and therapeutic agents. Creating such complex systems and characterizing their biological response remains a challenge. The two commonly used methods to attach multiple groups to the surface of AuNPs are the creation of a mixed monolayer and the binding of groups to the AuNP surface using a bi-functional PEG linker. While some excellent in-vitro and animal results have been reported for both approaches, further work is necessary to directly compare the two methods. In this study, AuNPs capped with both PEG and a receptor-mediated endocytosis (RME) peptide were prepared using both the mixed monolayer and the PEG linker approaches. The PEG linker used was SH-PEG-SGA, which has a thiol at one end for AuNP attachment and an NHS ester at the other to bind to the peptide. The work builds upon previous studies carried out at the University of Ulster which investigated AuNP synthesis, the influence of PEG on stability in a range of media, and intracellular payload release. 18-19 nm citrate-capped AuNPs were prepared using the Turkevich method via the sodium citrate reduction of boiling 0.01 wt% chloroauric acid. To produce PEG-capped AuNPs, the required amount of PEG-SH (5000 Mw) or SH-PEG-SGA (3000 Mw, Jenkem Technologies) was added, and the solution was stirred overnight at room temperature. The RME (sequence: CKKKKKKSEDEYPYVPN, Biomatik) co-functionalised samples were prepared by adding the required amount of peptide to the PEG-capped samples and stirring overnight. The appropriate amounts of PEG-SH and RME peptide were added to the AuNPs to produce a mixed monolayer consisting of approximately 50% PEG and 50% RME peptide. The PEG linker samples were first fully capped with bi-functional PEG before being capped with the RME peptide. An increase in diameter from 18-19 nm for the 'as synthesized' AuNPs to 40-42 nm after PEG capping was observed via DLS. The presence of PEG and RME peptide on both the mixed monolayer and PEG linker co-functionalized samples was confirmed by both FTIR and TGA. Bi-functional PEG linkers allow the entire AuNP surface to be capped with PEG, enabling in-vitro stability to be achieved using a lower molecular weight PEG. The approach also allows the entire outer surface to be coated with peptide or other biologically active groups, whilst also offering the promise of enhanced biological availability. The effect of mixed monolayer versus PEG linker attachment on both stability and non-specific protein corona interactions was also studied.

Keywords: nanomedicine, gold nanoparticles, PEG, biocompatibility

Procedia PDF Downloads 311
210 Contribution to the Understanding of the Hydrodynamic Behaviour of Aquifers of the Taoudéni Sedimentary Basin (South-eastern Part, Burkina Faso)

Authors: Kutangila Malundama Succes, Koita Mahamadou

Abstract:

In the context of climate change and demographic pressure, groundwater has emerged as an essential and strategic resource whose sustainability relies on good management. The accuracy and relevance of the decisions made in managing these resources depend on the availability and quality of the scientific information on which they rely. It is therefore urgent to improve the state of knowledge on groundwater in order to ensure sustainable management. This study is conducted for the particular case of the aquifers of the transboundary Taoudéni sedimentary basin in its Burkinabe part. Indeed, Burkina Faso (and the Sahel region in general), marked by low rainfall, has experienced episodes of severe drought, which have justified the use of groundwater as the primary source of water supply. This study aims to improve knowledge of the hydrogeology of this area in order to achieve sustainable management of transboundary groundwater resources. The methodological approach first describes the lithological units in terms of the extension and succession of the different layers. Secondly, the hydrodynamic behavior of these units is studied through the analysis of the spatio-temporal variations of the piezometric levels. The data consist of 692 static level measurement points and 8 observation wells distributed across the area and capturing five of the identified geological formations. Monthly piezometric level records are available for each observation well and cover the period from 1989 to 2020. The temporal analysis of the piezometry, carried out in comparison with the rainfall records, revealed a general upward trend in piezometric levels throughout the basin. The reaction of the groundwater generally occurs with a delay of 1 to 2 months relative to the rainfall of the rainy season. Indeed, the peaks of the piezometric level generally occur between September and October in reaction to the rainfall peaks between July and August. Low groundwater levels are observed between May and July. This relatively slow reaction of the aquifer is observed in all wells. The influence of the geological setting, through the structure and hydrodynamic properties of the layers, was deduced. The spatial analysis reveals that piezometric levels vary between 166 and 633 m, with a trend indicating flow that generally goes from southwest to northeast and recharge areas located towards the southwest and northwest. There is a quasi-concordance between the hydrogeological basins and the overlying hydrological basins, as well as a bimodal flow with one component following the topography and another, deeper, significant component controlled by the regional SW-NE gradient. This latter component may represent flows directed from the high reliefs towards the springs of Nasso. In the spring area (Kou basin), the maximum average storage variation, calculated by the Water Table Fluctuation (WTF) method, varies between 35 and 48.70 mm per year for 2012-2014.
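
The Water Table Fluctuation method mentioned at the end reduces to a simple product of specific yield and seasonal head rise. The sketch below illustrates this arithmetic with an assumed specific yield and head rise; neither value is taken from the study, they merely reproduce the order of magnitude reported (35-48.7 mm/year).

```python
# WTF method: annual storage change = specific yield x seasonal water-table rise.
specific_yield = 0.02        # assumed Sy of the aquifer (dimensionless, illustrative)
head_rise_m = 2.0            # assumed seasonal rise of the water table (m, illustrative)

storage_change_mm = specific_yield * head_rise_m * 1000.0
print(f"annual storage change ~ {storage_change_mm:.0f} mm")   # ~40 mm, same order as reported
```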

Keywords: hydrodynamic behaviour, Taoudéni basin, piezometry, water table fluctuation

Procedia PDF Downloads 41
209 Ethical Decision-Making by Healthcare Professionals during Disasters: Izmir Province Case

Authors: Gulhan Sen

Abstract:

Disasters can result in many deaths and injuries. In these difficult times, accessible resources are limited, the demand and supply balance is distorted, and urgent interventions are needed. The disproportion between accessible resources and intervention capacity makes triage a necessity at every stage of disaster response. Healthcare professionals, who are in charge of triage, have to evaluate swiftly and make ethical decisions about which patients need priority and urgent intervention given the limited available resources. For such critical times in disaster triage, 'doing the greatest good for the greatest number of casualties' is adopted as a code of practice. However, there is no guide for healthcare professionals about ethical decision-making during disasters, and this study is expected to be used as a source in the preparation of such a guide. This study aimed to examine whether the qualifications of healthcare professionals in Izmir related to disaster triage were adequate and whether these qualifications influence their capacity to make ethical decisions. A survey developed by the researcher was used for data collection. The survey included two parts. In part one, 14 questions solicited information about the socio-demographic characteristics and knowledge levels of the respondents on the ethical principles of disaster triage and the allocation of scarce resources. Part two included four disaster scenarios adapted from the existing literature, and respondents were asked to make ethical triage decisions based on the provided scenarios. The survey was completed by 215 healthcare professionals working in Emergency-Medical Stations, National Medical Rescue Teams and Search-Rescue-Health Teams in Izmir. The data were analyzed with SPSS software. The chi-square test, Mann-Whitney U test, Kruskal-Wallis test and linear regression analysis were utilized. According to the results, 51.2% of the participants had an inadequate knowledge level of the ethical principles of disaster triage and the allocation of scarce resources. It was also found that participants did not tend to make ethical decisions in the four disaster scenarios, which included ethical dilemmas. They remained caught in ethical dilemmas concerning performing cardio-pulmonary resuscitation, managing limited resources and making decisions about letting patients die. Results also showed that participants who had more experience in disaster triage teams were more likely to make ethical decisions on disaster triage than those with little or no experience in disaster triage teams (p < 0.01). Moreover, as their knowledge level of the ethical principles of disaster triage and the allocation of scarce resources increased, their tendency to make ethical decisions also increased (p < 0.001). In conclusion, an inadequate knowledge level of ethical principles and a lack of experience affect healthcare professionals' ethical decision-making during disasters. The results of this study therefore suggest that more training on disaster triage should be provided in the pre-impact phase of disasters. In addition, the ethical dimension of disaster triage should be included in the syllabi of the ethics classes in vocational training for healthcare professionals. Drills, simulations, and board exercises can be used to improve the ethical decision-making abilities of healthcare professionals. Disaster scenarios in which ethical dilemmas are faced should be prepared for such applied training programs.
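
For illustration, the sketch below reproduces the kind of group comparisons reported (Mann-Whitney U and Kruskal-Wallis) using scipy on simulated decision scores; the study itself used SPSS, and the group sizes and score distributions here are invented.

```python
# Nonparametric comparison of ethical-decision scores by triage-team experience.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
no_experience = rng.normal(2.0, 1.0, 80)       # hypothetical score samples
some_experience = rng.normal(2.4, 1.0, 75)
experienced = rng.normal(2.9, 1.0, 60)

u, p_u = stats.mannwhitneyu(no_experience, experienced)                 # two groups
h, p_kw = stats.kruskal(no_experience, some_experience, experienced)    # three groups
print(f"Mann-Whitney U p = {p_u:.4f}, Kruskal-Wallis p = {p_kw:.4f}")
```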

Keywords: disaster triage, medical ethics, ethical principles of disaster triage, ethical decision-making

Procedia PDF Downloads 224
208 Investigations on the Application of Avalanche Simulations: A Survey Conducted among Avalanche Experts

Authors: Korbinian Schmidtner, Rudolf Sailer, Perry Bartelt, Wolfgang Fellin, Jan-Thomas Fischer, Matthias Granig

Abstract:

This study focuses on the evaluation of snow avalanche simulations, based on a survey that has been carried out among avalanche experts. In the last decades, the application of avalanche simulation tools has gained recognition within the realm of hazard management. Traditionally, avalanche runout models were used to predict extreme avalanche runout and prepare avalanche maps. This has changed rather dramatically with the application of numerical models. For safety applications such as road safety, simulation tools are now being coupled with real-time meteorological measurements to predict frequent avalanche hazard. That places new demands on model accuracy and requires the simulation of physical processes that previously could be ignored. These simulation tools are based on a deterministic description of the avalanche movement, allowing the prediction of certain quantities of the avalanche flow (e.g., pressure, velocities, flow heights, runout lengths). Because of the highly variable regimes of the flowing snow, no uniform rheological law describing the motion of an avalanche is known. Therefore, analogies to the fluid-dynamical laws of other materials are invoked. To transfer these constitutive laws to snow flows, certain assumptions and adjustments have to be imposed. Besides these limitations, there exist high uncertainties regarding the initial and boundary conditions. Further challenges arise when implementing the underlying flow model equations in an algorithm executable by a computer. This implementation is constrained by the choice of adequate numerical methods and their computational feasibility. Hence, the model development is compelled to introduce further simplifications and the related uncertainties. In light of these issues, many questions arise about avalanche simulations: their assets and drawbacks, the potential for improvement, and their application in practice. To address these questions, a survey among experts in the field of avalanche science (e.g., researchers, practitioners, engineers) from various countries has been conducted. In the questionnaire, special attention is paid to the experts' opinions regarding the influence of certain variables on the simulation result, their uncertainty, and the reliability of the results. Furthermore, it was tested to what degree a simulation result influences decision-making in a hazard assessment. A discrepancy was found between the large uncertainty of the simulation input parameters and the relatively high reliability attributed to the results. This contradiction can be explained by taking into account how the experts employ the simulations. The credibility of the simulations is the result of a rather thorough simulation study, in which different assumptions are tested and the results of different flow models are compared, along with the use of supplemental data such as historical records, field observations and silent witnesses, which are regarded as essential for the hazard assessment and for sanctioning simulation results. As the importance of avalanche simulations within hazard management grows, along with their further development, studies focusing on the way models are employed could contribute to a better understanding of how knowledge of the avalanche process can be gained by running simulations.

Keywords: expert interview, hazard management, modeling, simulation, snow avalanche

Procedia PDF Downloads 298
207 Biomaterials Solutions to Medical Problems: A Technical Review

Authors: Ashish Thakur

Abstract:

This technical paper was written with a view to focusing on biomaterials and their various applications in modern industries. The author tries to elaborate not only on medical applications but, in fact, on plenty of applications in other industries. The scope of the research area covers the wide range of physical, biological and chemical sciences that underpin the design of biomaterials and the clinical disciplines in which they are used. A biomaterial is now defined as a substance that has been engineered to take a form which, alone or as part of a complex system, is used to direct, by control of interactions with components of living systems, the course of any therapeutic or diagnostic procedure. Biomaterials are invariably in contact with living tissues. Thus, interactions between the surface of a synthetic material and the biological environment must be well understood. This paper reviews the benefits and challenges associated with surface modification of metals in biomedical applications. The paper also elaborates on how the surface characteristics of metallic biomaterials, such as surface chemistry, topography, surface charge, and wettability, influence protein adsorption and the subsequent cell behavior in terms of adhesion, proliferation, and differentiation at the biomaterial–tissue interface. The paper also highlights the various techniques required for the surface modification and coating of metallic biomaterials, including physicochemical and biochemical surface treatments and calcium phosphate and oxide coatings. In this review, attention is focused on biomaterial-associated infections, from which the need for anti-infective biomaterials originates. Biomaterial-associated infections differ markedly in epidemiology, aetiology and severity, depending mainly on the anatomic site, on the time of biomaterial application, and on the depth of the tissues harbouring the prosthesis. Here, the diversity and complexity of the different scenarios in which medical devices are currently utilised are explored, providing an overview of the emblematic application fields and of the requirements for anti-infective biomaterials. In addition, the paper introduces nanomedicine and the use of both natural and synthetic polymeric biomaterials, focuses on specific current polymeric nanomedicine applications and research, and concludes with the challenges of nanomedicine research. Infection is currently regarded as the most severe and devastating complication associated with the use of biomaterials. Osteoporosis is a worldwide disease with a very high prevalence in humans older than 50. Its main clinical consequences are bone fractures, which often lead to patient disability or even death. A number of commercial biomaterials are currently used to treat osteoporotic bone fractures, but most of these have not been specifically designed for that purpose. Many drug- or cell-loaded biomaterials have been proposed in research laboratories, but very few have received approval for commercial use. Polymeric nanomaterial-based therapeutics play a key role in the field of medicine in treatment areas such as drug delivery, tissue engineering, cancer, diabetes, and neurodegenerative diseases. Advantages of the use of polymers over other materials for nanomedicine include increased functionality, design flexibility, improved processability, and, in some cases, biocompatibility.

Keywords: nanomedicine, tissue, infections, biomaterials

Procedia PDF Downloads 239
206 Hydro-Mechanical Characterization of Polychlorinated Biphenyl-Polluted Sediments in Interaction with Geomaterials for Landfilling

Authors: Hadi Chahal, Irini Djeran-Maigre

Abstract:

This paper focuses on the hydro-mechanical behavior of polychlorinated biphenyl (PCB) polluted sediments when stored in landfills and on the interaction between PCBs and geosynthetic clay liners (GCLs) with respect to the hydraulic performance of the liner and the overall performance and stability of landfills. A European decree, adopted into French regulation, forbids reintroducing into rivers contaminated dredged sediments containing more than 0.64 mg/kg Σ7 PCBs. At these concentrations, sediments are considered hazardous and a remediation process must be adopted to prevent the release of PCBs into the environment. Dredging and landfilling polluted sediments is considered an eco-environmental remediation solution. French regulations authorize the storage of PCB-contaminated components with less than 50 mg/kg in municipal solid waste facilities. Contaminant migration via leachate may be possible. The interactions between PCB-contaminated sediments and the GCL barrier present at the bottom of a landfill for secure confinement are not known. Moreover, the hydro-mechanical behavior of the stored sediments may affect the performance and the stability of the landfill. In this article, a hydro-mechanical characterization of the polluted sediment is presented. This characterization makes it possible to predict the behavior of the sediment at the storage site. Chemical testing showed that the concentration of PCBs in the sediment samples is between 1.7 and 2.0 mg/kg. Physical characterization showed that the sediment is an organic silty sand (silt = 65%, sand = 27%, OM = 8%) characterized by a high plasticity index (Ip = 37%). Permeability tests using a permeameter and a filter press showed that the sediment permeability is on the order of 10⁻⁹ m/s. Compressibility tests showed that the sediment is a very compressible soil, with Cc = 0.53 and Cα = 0.0086. In addition, the effects of PCBs on the swelling behavior of bentonite were studied, and the hydraulic performance of the GCL in interaction with PCBs was examined. Swelling tests showed that PCBs do not affect the swelling behavior of bentonite. Permeability tests were conducted in a 1.0 m pilot-scale experiment simulating a storage facility. PCB-contaminated sediments were placed directly over a passive barrier containing a GCL to study the influence of the direct contact of polluted sediment leachate with the GCL. An automatic water system was designed to simulate precipitation. Effluent quantity and quality were examined. The sediment settlements and the water level in the sediment were monitored. The results showed that desiccation affected the behavior of the sediment in the pilot test and that laboratory tests alone are not sufficient to predict the behavior of the sediment in a landfill facility. Furthermore, the concentration of PCBs in the sediment leachate was very low (< 0.013 µg/L), and the permeability of the GCL was affected by other components present in the sediment leachate. Desiccation and cracks were the main parameters that affected the hydro-mechanical behavior of the sediment in the pilot test. In order to reduce these effects, the polluted sediment should be stored at a water content below its shrinkage limit (w = 39%). We also propose to conduct further pilot tests with the maximum concentration of PCBs allowed in municipal solid waste facilities, 50 mg/kg.
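
To give a feel for how the reported compressibility parameters translate into deformation of a stored sediment layer, a standard one-dimensional consolidation estimate is sketched below. Only Cc = 0.53 and Cα = 0.0086 come from the abstract; the void ratio, layer thickness, stress levels and times are assumed values for illustration.

```python
# One-dimensional primary and secondary settlement estimate for a sediment layer.
import math

Cc, C_alpha = 0.53, 0.0086
e0 = 1.5                        # assumed initial void ratio
H = 2.0                         # assumed layer thickness (m)
sigma0, sigma1 = 50.0, 150.0    # assumed initial/final vertical effective stress (kPa)
t_p, t = 1.0, 10.0              # assumed end of primary consolidation and design time (years)

primary = H * Cc / (1 + e0) * math.log10(sigma1 / sigma0)
secondary = H * C_alpha / (1 + e0) * math.log10(t / t_p)
print(f"primary settlement ~ {primary*100:.0f} cm, secondary creep ~ {secondary*100:.1f} cm")
```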

Keywords: geosynthetic clay liners, landfill, polychlorinated biphenyl, polluted dredged materials

Procedia PDF Downloads 96
205 Train Timetable Rescheduling Using Sensitivity Analysis: Application of Sobol, Based on Dynamic Multiphysics Simulation of Railway Systems

Authors: Soha Saad, Jean Bigeon, Florence Ossart, Etienne Sourdille

Abstract:

Developing better solutions for train rescheduling problems has been drawing the attention of researchers for decades. Most research in this field deals with minor incidents that affect a large number of trains due to cascading effects. Such studies focus on timetables, rolling stock and crew duties, but do not take into account infrastructure limits. The present work addresses electric infrastructure incidents that limit the power available for train traction, and hence the transportation capacity of the railway system. Rescheduling is needed in order to optimally share the available power among the different trains. We propose a rescheduling process based on dynamic multiphysics railway simulations that include the mechanical and electrical properties of all the system components and calculate physical quantities such as the train speed profiles, the voltage along the catenary lines, temperatures, etc. The optimization problem to solve has a large number of continuous and discrete variables, several output constraints due to physical limitations of the system, and a high computational cost. Our approach includes a phase of sensitivity analysis in order to analyze the behavior of the system and support the decision-making process and/or a more precise optimization. This approach is a quantitative method based on simulation statistics of the dynamic railway system, considering a predefined range of variation of the input parameters. Three important settings are defined. Factor prioritization detects the input variables that contribute the most to the variation of the outputs. Factor fixing then identifies the input variables that do not influence the outputs and can therefore be fixed. Lastly, factor mapping is used to study which ranges of input values lead to model realizations that correspond to feasible solutions according to defined criteria or objectives. Generalized Sobol indices are used for factor prioritization and factor fixing. The approach is tested in the case of a simple railway system, with nominal traffic running on a single-track line. The considered incident is the loss of a feeding power substation, which limits the available power and the train speed. Rescheduling is needed, and the variables to be adjusted are the trains' departure times, the train speed reduction at a given position, and the number of trains (cancellation of some trains if needed). The results show that the spacing between train departure times is the most critical variable, contributing to more than 50% of the variation of the model outputs. In addition, we identify the reduced range of variation of this variable which guarantees that the output constraints are respected. Optimal solutions are extracted according to different potential objectives: minimizing the traveling time, the train delays, the traction energy, etc. The Pareto front is also built.
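
A minimal sketch of Sobol-based factor prioritization and factor fixing of the kind described above is shown below, assuming the SALib Python package and a cheap analytic stand-in for the multiphysics railway simulator; the variable names, bounds and surrogate function are illustrative only.

```python
# Sobol sensitivity analysis with SALib on a surrogate "delay" model.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["departure_spacing_s", "speed_reduction_kmh", "n_trains"],
    "bounds": [[120, 600], [0, 40], [4, 12]],
}

X = saltelli.sample(problem, 1024)            # quasi-random input samples

def surrogate_delay(x):                        # placeholder for the railway simulation
    spacing, dv, n = x
    return 5000.0 / spacing + 0.5 * dv + 2.0 * n + 0.01 * dv * n

Y = np.array([surrogate_delay(x) for x in X])
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], Si["S1"])))   # first-order indices (factor prioritization)
print(dict(zip(problem["names"], Si["ST"])))   # total-order indices (factor fixing)
```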

Keywords: optimization, rescheduling, railway system, sensitivity analysis, train timetable

Procedia PDF Downloads 379
204 Using Business Interactive Games to Improve Management Skills

Authors: Nuno Biga

Abstract:

Continuous process improvement is a permanent challenge for the managers of any organization. Lean management means that efficiency gains can be obtained through a systematic framework able to explore synergies between processes and eliminate waste of time and other resources. Leadership in organizations determines the efficiency of teams through its influence on collaborators, their motivation, and the consolidation of a feeling of (group) ownership. 'Organizational health' depends on the leadership style, which is directly influenced by the intrinsic characteristics of each personality and by leadership ability (leadership competencies). Therefore, it is important that managers can correct in advance any deviation from the expected exercise of leadership. Top management teams must act as regulatory agents of leadership within the organization, ensuring the monitoring of actions and the alignment of managers in accordance with the humanist standards anchored in a visible Code of Ethics and Conduct. This article is built around an innovative model of 'Business Interactive Games' (BI GAMES) that simulates a real-life management environment. It shows that the strategic management of operations depends on a complex set of variables, endogenous and exogenous to the intervening agents, that require specific skills and a set of critical processes to monitor. BI GAMES are designed for each management reality and have already been applied successfully in several contexts over the last five years, comprising both educational and enterprise settings. Results from these experiences are used to demonstrate how serious games in working living labs contributed to improving the organizational environment by focusing on the evaluation of players' (agents') skills, the empowerment of their capabilities, and the critical factors that create value in each context. The implementation of the BI GAMES simulator highlights that leadership skills are decisive for the performance of teams, regardless of the sector of activity and the specificities of each organization whose operation is being simulated. The players in BI GAMES can be managers or employees in different roles in the organization, or students in a learning context. They interact with each other and are asked to make decisions/choices in the presence of several options for the follow-up operation, for example, when the costs and benefits are not fully known but depend on the actions of external parties (e.g., subcontracted enterprises and actions of regulatory bodies). Each team must evaluate the resources used/needed in each operation, identify bottlenecks in the system of operations, assess the performance of the system through a set of key performance indicators, and set a coherent strategy to improve efficiency. Through gamification and the serious games approach, organizational managers will be able to confront the scientific approach to strategic decision-making with their real-life approach based on the experiences undertaken. Considering that each BI GAMES team has a leader (chosen by draw), the performance of this player has a direct impact on the results obtained. Leadership skills are thus put to the test during the simulation of the functioning of each organization, allowing conclusions to be drawn at the end of the simulation, including their discussion amongst participants.

Keywords: business interactive games, gamification, management empowerment skills, simulation living labs

Procedia PDF Downloads 86
203 Global Supply Chain Tuning: Role of National Culture

Authors: Aleksandr S. Demin, Anastasiia V. Ivanova

Abstract:

Purpose: The current economy tends to increase the influence of digital technologies and diminish the human role in management. However, it is impossible to deny that a person still leads a business with his or her own set of values and priorities. The article aims to connect the peculiarities of national culture with the characteristics of the supply chain, using the quantitative measures of national culture obtained by scholars of comparative management (Hofstede, House, and others). Design/Methodology/Approach: The research is based on secondary data from cross-country comparisons produced by Prof. Hofstede and by the GLOBE project. These data are used to design different aspects of the supply chain at both the cross-functional and the inter-organizational level. The connection between a range of general principles (role assignment, customer service prioritization, coordination of supply chain partners) and principles of comparative management (acknowledgment of the national peculiarities of the country in which the company operates) is shown through economic and mathematical models, mainly linear programming models; a minimal sketch of such a model is given below. Findings: The combination of the team management wheel concept, the business processes of the global supply chain, and national culture characteristics lets a transnational corporation form a supply chain crew balanced in costs, functions, and personality. To elaborate an effective customer service policy and logistics strategy for the distribution of goods and services in the country under review, two approaches are offered. The first approach relies exclusively on the customer’s interest in the place of operation, while the second also takes into account the position of the transnational corporation and its previous experience in order to reconcile organizational and national cultures. It is advised to assess the effect of integration practice on the achievement of a specific supply chain goal in a specific location via the type of correlation (positive, negative, or none) and the value of the national culture indices. Research Limitations: The models developed are intended to be used by transnational companies and business firms located in several nationally different areas. Some of the inputs used to illustrate the application of the methods are simulated, so the numerical results should be used with caution. Practical Implications: The research can be of great interest to supply chain managers who are responsible for the engineering of global supply chains in a transnational corporation and for further activities in doing business in the international arena. The methods, tools, and approaches suggested can also be used by top managers searching for new sources of competitiveness and may be suitable for all staff members who are interested in national culture traits. Originality/Value: The elaborated methods of decision-making with regard to the national environment provide a mathematical and economic basis for finding a comprehensive solution.
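
A minimal sketch of the kind of linear programming model mentioned above, assuming invented costs, a single service-hours requirement and a hypothetical Hofstede power-distance adjustment; it is not the authors' model.

```python
# Illustrative sketch (not the authors' model): a linear programme that
# allocates customer-service effort between a local partner and the
# corporation's own team, with the corporate cost coefficient penalised
# by the cultural distance between host and home country. All index
# values and costs below are invented for illustration.
from scipy.optimize import linprog

# Hypothetical Hofstede power-distance scores (0-100 scale).
pdi_host, pdi_home = 77, 40
cultural_distance = abs(pdi_host - pdi_home) / 100.0  # 0..1

base_cost = [22.0, 30.0]  # cost per service hour: local partner, corporate team
# Assume cultural distance raises coordination cost for the corporate team only.
cost = [base_cost[0], base_cost[1] * (1 + cultural_distance)]

# Constraints: at least 1000 service hours in total, and at most 60% of the
# hours may be outsourced to the local partner.
A_ub = [[-1.0, -1.0],   # -(x_local + x_corp) <= -1000  ->  total >= 1000
        [0.4, -0.6]]    #  x_local <= 0.6 * (x_local + x_corp)
b_ub = [-1000.0, 0.0]

res = linprog(c=cost, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print(res.x, res.fun)   # hours per channel and minimum total cost
```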

Keywords: logistics integration, logistics services, multinational corporation, national culture, team management, service policy, supply chain management

Procedia PDF Downloads 85
202 Optimizing Stormwater Sampling Design for Estimation of Pollutant Loads

Authors: Raja Umer Sajjad, Chang Hee Lee

Abstract:

Stormwater runoff is the leading contributor to pollution of receiving waters. In response, an efficient stormwater monitoring program is required to quantify and eventually reduce stormwater pollution. The overall goals of stormwater monitoring programs primarily include the identification of high-risk dischargers and the development of total maximum daily loads (TMDLs). The challenge in developing a better monitoring program is to reduce the variability in flux estimates due to sampling errors; the success of a monitoring program depends mainly on the accuracy of these estimates. Apart from sampling errors, manpower and budgetary constraints also influence the quality of the estimates. This study attempted to develop an optimum stormwater monitoring design considering both cost and the quality of the estimated pollutant flux. Three years of stormwater monitoring data (2012–2014) from a mixed land use area located within the Geumhak watershed, South Korea, were evaluated. The regional climate is humid, and precipitation is usually well distributed throughout the year. The investigation of a large number of water quality parameters is time-consuming and resource-intensive. In order to identify a suite of easy-to-measure parameters to act as surrogates, Principal Component Analysis (PCA) was applied; a minimal sketch of this screening step is given below. Means, standard deviations, coefficients of variation (CV) and other simple statistics were computed using the multivariate statistical analysis software SPSS 22.0. The implication of sampling time on monitoring results, the number of samples required during a storm event, and the impact of the seasonal first flush were also identified. Based on the observations derived from the PCA biplot and the correlation matrix, total suspended solids (TSS) was identified as a potential surrogate for turbidity, total phosphorus and heavy metals such as lead, chromium, and copper, whereas Chemical Oxygen Demand (COD) was identified as a surrogate for organic matter. The CVs among the monitored water quality parameters were found to be high (ranging from 3.8 to 15.5). This suggests that using a grab sampling design to estimate mass emission rates in the study area can lead to errors due to large variability. The TSS discharge load calculation error was found to be only 2% between two different sample size approaches, i.e., 17 samples per storm event and 6 equally distributed samples per storm event. Both seasonal first flush and event first flush phenomena were observed for most water quality parameters in the study area. Samples taken at the initial stage of a storm event generally overestimate the mass emissions; however, it was found that collecting a grab sample after the initial hour of the storm event more closely approximates the mean concentration of the event. It was concluded that site- and regional-climate-specific interventions can be made to optimize the stormwater monitoring program in order to make it more effective and economical.
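
A minimal sketch of the surrogate-parameter screening step described above, assuming hypothetical column names and an input file; the study's actual analysis was performed in SPSS 22.0.

```python
# Minimal sketch (assumed column names and file, not the study data):
# standardise the water quality variables, run a PCA, and inspect the
# loadings, correlations and coefficients of variation to see which
# parameters cluster with TSS and COD as potential surrogates.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# df is assumed to hold one row per stormwater sample, e.g. columns:
# ["TSS", "turbidity", "TP", "Pb", "Cr", "Cu", "COD", "BOD"]
df = pd.read_csv("storm_samples.csv")  # hypothetical file name

X = StandardScaler().fit_transform(df.values)
pca = PCA(n_components=2).fit(X)

loadings = pd.DataFrame(pca.components_.T,
                        index=df.columns, columns=["PC1", "PC2"])
print(loadings.round(2))                 # parameters loading together on PC1/PC2
print(df.corr()["TSS"].round(2))         # Pearson correlations against TSS
print((df.std() / df.mean()).round(2))   # coefficient of variation per parameter
```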

Keywords: first flush, pollutant load, stormwater monitoring, surrogate parameters

Procedia PDF Downloads 213
201 Biosocial Determinants of Maternal and Child Health in Northeast India: A Case Study

Authors: Benrithung Murry

Abstract:

This paper highlights the biosocial determinants of health-seeking behavior in tribal population groups of northeast India, focusing on maternal and child health. The northeastern region of India is a conglomeration of several ethnic groups, most of which are scheduled as tribal groups. A total of 750 ever-married women of reproductive age (15-49 years) from three tribal groups of Nagaland, India were interviewed using a pre-tested and modified maternal health schedule. Data pertaining to the reproductive performance of the mothers and the health status of their children were collected from 12 villages of Dimapur district, Nagaland, India. The sample comprises 212 Angami women, 267 Ao women, and 271 Sumi women, all belonging to tribal populations of Northeast India. The sex ratios for ages 15-49 years in these three populations are 1018.18, 1086.69, and 1106.92, respectively. About 90% of the families in the study are nuclear families, and about 10% of households fall below the poverty line as per the cutoffs for India. The female literacy level in these population groups is higher than the national average of 65.46%; however, about 30% of all married women are not engaged in any sort of earnings. Total fertility rates in these populations are alarming (Total Fertility Rate ≥ 6) and far from the replacement fertility level, while infant mortality rates are found to be much lower than the national average of 34 per 1000. The perception and practice of maternal health care in this region is unimpressive despite the availability of medical amenities. Only 3% of mothers in the study reported four antenatal checkups during their last two pregnancies. Other mothers reported one to three antenatal checkups, but about 25% of them never visited a doctor during the entire pregnancy period. About 15% of mothers never took a tetanus injection, while 40% of mothers never took iron-folic acid supplements during pregnancy. Almost half of all women and their husbands do not use birth control measures even for the spacing of children, which has an immense impact on prenatal mortality, mainly due to deliberate abortions: prenatal mortality among the Angami, Ao and Sumi populations is 44.88, 31.88 and 54.98 per 1000 live births, respectively. The steep decline in fertility levels in most countries is a consequence of the increasing use of modern methods of contraception. However, among users of birth control measures in these populations, it is seen that most couples use them only after they have the desired number of children, so their use has no substantial influence in reducing fertility. It is also seen that the majority of the children were only partially vaccinated. With many child deliveries taking place at home, many newborns are not administered the polio vaccine at birth. Two-thirds of all children do not have complete basic immunization against polio, diphtheria, tetanus, pertussis, bacillus (BCG) and hepatitis, among others. Certain adherence to traditional beliefs and customs, apart from socio-economic factors, is believed to operate in these populations and determines their health-seeking behavior. While a more in-depth study combining biological, socio-cultural, economic, and genetic factors is suggested, there is an urgent need for intervention in these populations to combat the poor maternal and child health status.

Keywords: case study, health behavior, mother and child, northeast india

Procedia PDF Downloads 111
200 Developing a Deep Understanding of the Immune Response in Hepatitis B Virus Infected Patients Using a Knowledge Driven Approach

Authors: Hanan Begali, Shahi Dost, Annett Ziegler, Markus Cornberg, Maria-Esther Vidal, Anke R. M. Kraft

Abstract:

Chronic hepatitis B virus (HBV) infection can be treated, for example, with nucleos(t)ide analogues (NA), which inhibit HBV replication. However, they have hardly any influence on the functional cure of HBV, which is defined by hepatitis B surface antigen (HBsAg) loss. NAs need to be taken life-long and are not available for all patients worldwide. Additionally, NA-treated patients are still at risk of developing cirrhosis, liver failure, or hepatocellular carcinoma (HCC). Although each patient has the same components of the immune system, immune responses vary between patients. Therefore, a deeper understanding of the immune response against HBV in different patients is necessary to understand the parameters leading to HBV cure and to use this knowledge to optimize HBV therapies. This requires seamless integration of an enormous amount of diverse and fine-grained data on viral markers, e.g., hepatitis B core-related antigen (HBcrAg) and hepatitis B surface antigen (HBsAg). The data integration system relies on the assumption that profiling human immune systems requires the analysis of various variables (e.g., demographic data, treatments, pre-existing conditions, immune cell response, or HLA typing) rather than only one. However, the values of these variables are collected independently. They are presented in a myriad of formats, e.g., Excel files, textual descriptions, lab book notes, and images of flow cytometry dot plots. Additionally, patients can be identified differently across these analyses. This heterogeneity complicates the integration of variables, as data management techniques are needed to create a unified view in which individual formats and identifiers are transparent when profiling the human immune systems. The proposed study (HBsRE) aims at integrating heterogeneous data sets of 87 chronically HBV-infected patients, e.g., clinical data, immune cell response, and HLA typing, with knowledge encoded in biomedical ontologies and open-source databases into a knowledge-driven framework. This new technique enables us to harmonize and standardize heterogeneous datasets within the defined model of the data integration system, which will be evaluated in the knowledge graph (KG). KGs are data structures that represent knowledge and data as factual statements using a graph data model; a minimal sketch of this representation is given below. Finally, the analytic data model will be applied on top of the KG in order to develop a deeper understanding of the immune profiles of various patients and to evaluate factors playing a role in a holistic profile of patients with HBsAg loss. Additionally, our objective is to utilize this unified approach to stratify patients for new effective treatments. This study is developed in the context of the project “Transforming big data into knowledge: for deep immune profiling in vaccination, infectious diseases, and transplantation (ImProVIT)”, which brings together a multidisciplinary team of computer scientists, infection biologists, and immunologists.
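
A minimal sketch, using the rdflib library with invented identifiers and values, of how clinical variables can be expressed as factual statements (triples) and queried once heterogeneous sources share one graph model; it is not the HBsRE framework itself.

```python
# Minimal sketch (invented namespace, identifiers and values): each clinical
# variable becomes a triple in a knowledge graph, so data from different
# sources can be harmonised under one graph data model and queried together.
from rdflib import Graph, Literal, Namespace, RDF, XSD

EX = Namespace("http://example.org/hbv/")   # hypothetical namespace
g = Graph()
g.bind("ex", EX)

patient = EX["patient_001"]                 # pseudonymised identifier
g.add((patient, RDF.type, EX.ChronicHBVPatient))
g.add((patient, EX.hasTreatment, EX.NucleosideAnalogue))
g.add((patient, EX.hbsagLevel_IUml, Literal(250.0, datatype=XSD.double)))
g.add((patient, EX.hlaType, Literal("HLA-A*02:01")))

# A SPARQL query over the unified graph: patients under NA therapy with a
# quantified HBsAg level.
q = """
PREFIX ex: <http://example.org/hbv/>
SELECT ?p ?level WHERE {
  ?p ex:hasTreatment ex:NucleosideAnalogue ;
     ex:hbsagLevel_IUml ?level .
}
"""
for row in g.query(q):
    print(row.p, row.level)
```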

Keywords: chronic hepatitis B infection, immune response, knowledge graphs, ontology

Procedia PDF Downloads 88
199 Key Findings on Rapid Syntax Screening Test for Children

Authors: Shyamani Hettiarachchi, Thilini Lokubalasuriya, Shakeela Saleem, Dinusha Nonis, Isuru Dharmaratne, Lakshika Udugama

Abstract:

Introduction: Late identification of language difficulties in children could result in long-term negative consequences for communication, literacy and self-esteem. This highlights the need for early identification and intervention for speech, language and communication difficulties. Speech and language therapy is a relatively new profession in Sri Lanka, and at present, there are no formal standardized screening tools to assess language skills in Sinhala-speaking children. The development and validation of a short, accurate screening tool to enable the identification of children with syntactic difficulties in Sinhala is a current need. Aims: 1) To develop test items for a Sinhala Syntactic Structures (S3 Short Form) test for children aged between 3;0 and 5;0 years 2) To validate the test of Sinhala Syntactic Structures (S3 Short Form) on children aged between 3;0 and 5;0 years Methods: The Sinhala Syntactic Structures (S3 Short Form) was devised based on the Renfrew Action Picture Test. As Sinhala contains post-positions, in contrast to English, the principles of the Renfrew Action Picture Test were followed to obtain an information score and a grammar score, but the test devised reflected the linguistic specificity and complexity of Sinhala, and the pictures were in keeping with the culture of the country. This included the dative case marker ‘to give something to her’ (/ejɑ:ʈə/ meaning ‘to her’), the instrumental case marker ‘to get something from’ (/ejɑ:gən/ meaning ‘from him’ or /gɑhən/ meaning ‘from the tree’), possessive nouns (/ɑmmɑge:/ meaning ‘mother’s’ or /gɑhe:/ meaning ‘of the tree’ or /male:/ meaning ‘of the flower’) and plural markers (/bɑllɑ:/ bɑllo:/ meaning ‘dog/dogs’, /mɑlə/mɑl/ meaning ‘flower/flowers’, /gɑsə/gɑs/ meaning ‘tree/trees’ and /wɑlɑ:kulə/wɑlɑ:kulu/ meaning ‘cloud/clouds’). The picture targets included socio-culturally appropriate scenes of the Sri Lankan New Year celebration, an elephant procession and the Buddhist ‘Wesak’ ceremony. The test was piloted with a group of 60 participants and the necessary changes made. In phase 1, the test was administered to 100 Sinhala-speaking children aged between 3;0 and 5;0 years in one district. In phase 2, reported here, the test was administered to another 100 Sinhala-speaking children aged between 3;0 and 5;0 years in three districts. In phase 2, the selection of the test items was assessed via measures of content validity, test-retest reliability and inter-rater reliability. The age of acquisition of each syntactic structure was determined using the content and grammar scores, which were statistically analysed using t-tests and one-way ANOVAs; a minimal sketch of these analyses is given below. Results: High percentage agreement was found for content validity, for test-retest reliability (Pearson correlation measures) and for inter-rater reliability. As predicted, there was a statistically significant influence of age on the production of syntactic structures at p<0.05. Conclusions: As the test items elicited the information and syntactic structures expected, the test could be used as a quick syntactic screening tool with preschool children.
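
A minimal sketch of the statistics named above (a one-way ANOVA for the age effect and a Pearson correlation for test-retest reliability), using invented scores rather than study data.

```python
# Illustrative sketch (synthetic scores, invented age bands): a one-way
# ANOVA across three age bands and a Pearson correlation between first
# and second administrations of the screening test.
from scipy.stats import f_oneway, pearsonr

# Hypothetical grammar scores for three age bands (e.g. 3;0-3;11, 4;0-4;5, 4;6-5;0).
band_1 = [12, 14, 11, 13, 15]
band_2 = [16, 18, 17, 15, 19]
band_3 = [21, 20, 22, 19, 23]
f_stat, p_value = f_oneway(band_1, band_2, band_3)
print(f"age effect: F={f_stat:.2f}, p={p_value:.4f}")   # significant if p < 0.05

# Test-retest reliability on a subsample scored twice.
first_run  = [12, 16, 21, 14, 18, 20]
second_run = [13, 15, 22, 14, 17, 21]
r, p = pearsonr(first_run, second_run)
print(f"test-retest: r={r:.2f}, p={p:.4f}")
```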

Keywords: Sinhala, screening, syntax, language

Procedia PDF Downloads 320
198 Variation of Warp and Binder Yarn Tension across the 3D Weaving Process and its Impact on Tow Tensile Strength

Authors: Reuben Newell, Edward Archer, Alistair McIlhagger, Calvin Ralph

Abstract:

Modern industry has developed a need for innovative 3D composite materials due to their attractive material properties. Composite materials are composed of a fibre reinforcement encased in a polymer matrix. The fibre reinforcement consists of warp, weft and binder yarns or tows woven together into a preform. The mechanical performance of a composite material is largely controlled by the properties of the preform. As a result, the bulk of recent textile research has been focused on the design of high-strength preform architectures. Studies looking at optimisation of the weaving process have largely been neglected. It has been reported that yarns experience varying levels of damage during weaving, resulting in filament breakage and ultimately compromised composite mechanical performance. The weaving parameters involved in causing this yarn damage are not fully understood. Recent studies indicate that poor yarn tension control may be an influencing factor. As tension is increased, the yarn-to-yarn and yarn-to-weaving-equipment interactions are heightened, maximising damage. The correlation between yarn tension variation and weaving damage severity has never been adequately researched or quantified. A novel study is needed which assesses the influence of tension variation on the mechanical properties of woven yarns. This study has sought to quantify the variation of yarn tension throughout weaving and to link the impact of tension to weaving damage. Multiple yarns were randomly selected, and their tension was measured across the creel and shedding stages of weaving using a hand-held tension meter. Sections of the same yarns were subsequently cut from the loom and tensile tested. A comparison was made between the tensile strength of pristine and tensioned yarns to determine the induced weaving damage; a minimal sketch of this comparison is given below. Yarns from bobbins at the rear of the creel were under the least tension (0.5-2.0 N) compared to yarns positioned at the front of the creel (1.5-3.5 N). This increase in tension has been linked to the sharp turn in the yarn path between the bobbins at the front of the creel and the creel I-board. Creel yarns under the lower tension suffered a 3% loss of tensile strength, compared to 7% for the more highly tensioned yarns. During shedding, the tension on the yarns was higher than in the creel. The upper shed yarns were exposed to a lower tension (3.0-4.5 N) compared to the lower shed yarns (4.0-5.5 N). Shed yarns under the lower tension suffered a 10% loss of tensile strength, compared to 14% for the more highly tensioned yarns. Interestingly, the most severely damaged yarn was exposed to both the largest creel and shedding tensions. This study confirms for the first time that yarns under a greater level of tension suffer an increased amount of weaving damage. Significant variation of yarn tension has been identified across the creel and shedding stages of weaving. This leads to a variance of mechanical properties across the woven preform and ultimately the final composite part. The outcome of this study highlights the need for optimised yarn tension control during preform manufacture to minimise yarn-induced weaving damage.
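
A minimal sketch of the pristine-versus-tensioned comparison, using invented strength values. The two-sample t-test is added here as one plausible way to test the difference; the abstract itself reports only percentage losses.

```python
# Sketch (invented strength values, not the study data): percentage strength
# loss between pristine and tensioned yarn sections, plus a two-sample
# t-test as one optional significance check for the comparison.
import numpy as np
from scipy.stats import ttest_ind

pristine  = np.array([41.2, 40.8, 42.0, 41.5, 40.9])   # N, hypothetical
tensioned = np.array([36.1, 35.4, 36.8, 35.9, 36.3])   # e.g. lower-shed yarns

loss_pct = 100 * (pristine.mean() - tensioned.mean()) / pristine.mean()
t_stat, p_value = ttest_ind(pristine, tensioned)
print(f"strength loss: {loss_pct:.1f}%  (t={t_stat:.2f}, p={p_value:.4f})")
```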

Keywords: optimisation of preform manufacture, tensile testing of damaged tows, variation of yarn weaving tension, weaving damage

Procedia PDF Downloads 207
197 Evaluation of Sustained Improvement in Trauma Education Approaches for the College of Emergency Nursing Australasia Trauma Nursing Program

Authors: Pauline Calleja, Brooke Alexander

Abstract:

In 2010 the College of Emergency Nursing Australasia (CENA) undertook sole administration of the Trauma Nursing Program (TNP) across Australia. The original TNP was developed from recommendations by the Review of Trauma and Emergency Services-Victoria. While participant and faculty feedback about the program was positive, issues were identified that are common to industry training programs in Australia. These issues included didactic approaches, with many lectures and little interaction/activity for participants. Participants were not necessarily encouraged to undertake deep learning due to the teaching and learning principles underpinning the course, and thus participants described having to learn by rote and gaining only a surface understanding of principles that were not always applied to their working context. In Australia, a trauma or emergency nurse may work in variable contexts that impact on practice, especially where resources influence the scope and capacity of hospitals to provide trauma care. In 2011, a program review was undertaken, resulting in major changes to the curriculum, teaching, learning and assessment approaches. The aim was to improve learning, with a greater emphasis on pre-program preparation for participants, the learning environment, and the clinically applicable, contextualized outcomes participants experienced. Previously, if participants wished to undertake assessment, they were given a take-home examination. The assessment had poor uptake and return and provided no rigor, since it was not invigilated. A new assessment structure was enacted with an invigilated examination during course hours. These changes were implemented in early 2012 with great improvement in both faculty and participant satisfaction. This presentation reports on a comparison of participant evaluations collected from courses post implementation in 2012 and in 2015 to evaluate whether the positive changes were sustained. Methods: Descriptive statistics were applied in analyzing the evaluations. Since all questions had more than 20% of cells with a count of <5, Fisher’s Exact Test was used to identify significance (p < 0.05) between groups; a minimal sketch of this test is given below. Results: A total of fourteen group evaluations were included in this analysis, seven CENA TNP groups from 2012 and seven from 2015 (randomly chosen). A total of 173 participant evaluations were collated (n = 81 from 2012 and n = 92 from 2015). All course evaluations were anonymous, and nine of the original 14 questions were applicable for this evaluation. All questions were rated by participants on a five-point Likert scale. While all items showed improvement from 2012 to 2015, significant improvement was noted in two items. These concerned the content being delivered in a way that met participant learning needs and satisfaction with the length and pace of the program. Evaluation of written comments supports these results. Discussion: The aim of redeveloping the CENA TNP was to improve learning and satisfaction for participants. These results demonstrate that the initial improvements of 2012 were maintained and, in two essential areas, significantly improved upon. Changes that increased participant engagement, support and contextualization of course materials were essential for the CENA TNP’s evolution.
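
A minimal sketch of Fisher's Exact Test on a single evaluation item, with invented counts and the five-point Likert scale collapsed to two categories, since SciPy's fisher_exact operates on 2x2 tables.

```python
# Illustrative sketch (invented counts): Fisher's Exact Test comparing the
# 2012 and 2015 cohorts on one item, with the Likert scale collapsed to
# "agree" vs "other" so the table is 2x2.
from scipy.stats import fisher_exact

#                agree  other
table = [[58, 23],        # 2012 cohort (n = 81)
         [81, 11]]        # 2015 cohort (n = 92)
odds_ratio, p_value = fisher_exact(table)
print(f"OR={odds_ratio:.2f}, p={p_value:.4f}")   # significant if p < 0.05
```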

Keywords: emergency nursing education, industry training programs, teaching and learning, trauma education

Procedia PDF Downloads 243
196 Designing Agile Product Development Processes by Transferring Mechanisms of Action Used in Agile Software Development

Authors: Guenther Schuh, Michael Riesener, Jan Kantelberg

Abstract:

Due to the fugacity of markets and the reduction of product lifecycles, manufacturing companies from high-wage countries are nowadays faced with the challenge of placing more innovative products on the market within ever shorter development times. At the same time, volatile customer requirements have to be satisfied in order to differentiate successfully from market competitors. One potential approach to address these challenges is provided by agile values and principles. These agile values and principles have already proved their success within software development projects in the form of management frameworks like Scrum or concrete procedure models such as Extreme Programming or Crystal Clear. Those models lead to significant improvements regarding quality, costs and development time and are therefore used within most software development projects. Motivated by this success within the software industry, manufacturing companies have tried to transfer agile mechanisms of action to the development of hardware products. Though first empirical studies show similar effects in the agile development of hardware products, no comprehensive procedure model for the design of development iterations has yet been developed for hardware development, due to the differing constraints of the domains. For this reason, this paper focusses on the design of agile product development processes by transferring mechanisms of action used in agile software development towards product development. This is conducted by decomposing the individual systems 'product development' and 'agile software development' into relevant elements and then symbiotically composing the elements of both systems with respect to the design of agile product development processes. In a first step, existing product development processes are described following existing approaches of systems theory. By analyzing existing case studies from industrial companies as well as academic approaches, characteristic objectives, activities and artefacts are identified within a target, action and object system. In the second partial model, mechanisms of action are derived from existing procedure models of agile software development. These mechanisms of action are classified into a superior strategy level, a system level comprising characteristic, domain-independent activities and their cause-effect relationships, and an activity-based element level. Within the third partial model, the influence of the identified agile mechanisms of action on the characteristic system elements of product development processes is analyzed. For this purpose, the target, action and object systems of product development are compared with the strategy, system and element levels of the agile mechanisms of action by using graph theory; a minimal sketch of such a mapping is given below. Furthermore, the necessity of activities within an iteration can be determined by defining activity-specific degrees of freedom. Based on this analysis, agile product development processes are designed in the form of different types of iterations in a final step. By defining iteration-differentiating characteristics and their interdependencies, a logic is developed for the configuration of activities, their form of execution, and the relevant artefacts for a specific iteration. Furthermore, characteristic types of iteration for agile product development are identified.
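
A minimal sketch of the graph-based comparison, with invented mechanism and element names: agile mechanisms of action are linked to the product development elements they influence, and a simple query shows which elements remain uncovered.

```python
# Sketch (invented node names, not the authors' partial models): a directed
# graph from agile mechanisms of action to the product development system
# elements they influence; graph queries then reveal uncovered elements.
import networkx as nx

G = nx.DiGraph()
# Hypothetical agile mechanisms of action (strategy/system level).
mechanisms = ["timeboxed_iteration", "continuous_customer_feedback",
              "incremental_delivery"]
# Hypothetical elements of the product development object/action system.
elements = ["requirements_definition", "prototype_build",
            "design_validation", "tooling_procurement"]

G.add_edges_from([
    ("timeboxed_iteration", "prototype_build"),
    ("continuous_customer_feedback", "requirements_definition"),
    ("continuous_customer_feedback", "design_validation"),
    ("incremental_delivery", "design_validation"),
])

influenced = {e for m in mechanisms for e in G.successors(m)}
print("not covered by any agile mechanism:", set(elements) - influenced)
```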

Keywords: activity-based process model, agile mechanisms of action, agile product development, degrees of freedom

Procedia PDF Downloads 176
195 Study of Biomechanical Model for Smart Sensor Based Prosthetic Socket Design System

Authors: Wei Xu, Abdo S. Haidar, Jianxin Gao

Abstract:

A prosthetic socket is the component that connects the residual limb of an amputee with an artificial prosthesis. It is widely recognized as the most critical component in determining the comfort of a patient wearing the prosthesis in his/her daily activities. Through the socket, the body weight and its associated dynamic load are distributed and transmitted to the prosthesis during walking, running or climbing. In order to achieve a well-fitting socket for an individual amputee, it is essential to obtain the biomechanical properties of the residual limb. In current clinical practice, this is achieved by a touch-and-feel approach, which is highly subjective. Although there have been significant advancements in prosthetic technologies such as microprocessor-controlled knee and ankle joints in the last decade, progress in designing a comfortable socket has been rather limited. This means that the current process of socket design is still very time-consuming and highly dependent on the expertise of the prosthetist. Supported by state-of-the-art sensor technologies and numerical simulations, a new socket design system is being developed to help prosthetists achieve rapid design of comfortable sockets for above-knee amputees. This paper reports the research work related to establishing biomechanical models for socket design. Through numerical simulation using the finite element method, comprehensive relationships between pressure on the residual limb and socket geometry were established. This allowed local topological adjustment of the socket so as to optimize the pressure distribution across the residual limb. When the full body weight of a patient is exerted on the residual limb, high pressures and shear forces occur between the residual limb and the socket. During the numerical simulations, various hyperelastic models, namely Ogden, Yeoh and Mooney-Rivlin, were used, and their effectiveness in representing the biomechanical properties of the soft tissues of the residual limb was evaluated; a minimal sketch of one such model is given below. This also involved reverse engineering, which resulted in an optimal representative model under compression testing. To validate the simulation results, a range of silicone models were fabricated. They were tested by an indentation device which yielded the force-displacement relationships. Comparison of the results obtained from FEA simulations and experimental tests showed that the Ogden model did not fit the soft tissue indentation data well, while the Yeoh model gave the best representation of the soft tissue mechanical behavior under indentation. Compared with the hyperelastic models, the elastic model also showed significant errors. In addition, the normal and shear stress distributions on the surface of the soft tissue model were obtained. The effect of friction in compression testing and the influence of soft tissue stiffness and testing boundary conditions were also analyzed. All of this has contributed to the overall goal of designing a well-fitting socket for individual above-knee amputees.
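
A minimal sketch of the Yeoh model used in the comparison, with illustrative coefficients rather than the fitted values from the study: the uniaxial Cauchy stress of an incompressible Yeoh solid.

```python
# Minimal sketch (illustrative coefficients, not the study's fitted values):
# uniaxial Cauchy stress predicted by the incompressible Yeoh model,
# W = C10*(I1-3) + C20*(I1-3)^2 + C30*(I1-3)^3.
import numpy as np

def yeoh_uniaxial_stress(stretch, c10, c20, c30):
    """Cauchy stress for uniaxial loading of an incompressible Yeoh solid."""
    lam = np.asarray(stretch, dtype=float)
    i1 = lam**2 + 2.0 / lam                        # first strain invariant
    dw_di1 = c10 + 2*c20*(i1 - 3) + 3*c30*(i1 - 3)**2
    return 2.0 * (lam**2 - 1.0/lam) * dw_di1       # sigma = 2(lam^2 - 1/lam) dW/dI1

# Hypothetical soft-tissue-like coefficients (kPa) and compressive stretches.
stretches = np.linspace(1.0, 0.7, 7)               # compression: lambda < 1
print(yeoh_uniaxial_stress(stretches, c10=4.0, c20=1.5, c30=0.5))
```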

Keywords: above knee amputee, finite element simulation, hyperelastic model, prosthetic socket

Procedia PDF Downloads 177
194 Outdoor Thermal Comfort Strategies: The Case of Cool Facades

Authors: Noelia L. Alchapar, Cláudia C. Pezzuto, Erica N. Correa

Abstract:

Mitigating urban overheating is key to achieving the environmental and energy sustainability of cities. The management of the optical properties of the materials that make up the urban envelope -roofing, pavement, and facades- constitutes a profitable and effective tool to improve the urban microclimate and rehabilitate urban areas. Each material that makes up the urban envelope has a different capacity to reflect the received solar radiation, which alters the fraction of solar radiation absorbed by the city. However, the paradigm of increasing solar reflectance in all areas of the city, without distinguishing their relative position within the urban canyon, can cause serious problems of overheating and discomfort among its inhabitants. The hypothesis that supports the research postulates that not all reflective technologies that contribute to urban radiative cooling favor the thermal comfort conditions of pedestrians in equal measure. The objective of this work is to determine to what degree the management of the optical properties of facades modifies outdoor thermal comfort, given that the mitigation potential of materials with high reflectance on facades is strongly conditioned by geographical variables and by the geometric characteristics of the urban profile, the aspect ratio (H/W). The research was carried out in two climatic contexts, the city of Mendoza, Argentina and the city of Campinas, Brazil, corresponding to the Köppen climate classes BWk and Cwa, respectively. In each city an area with comparable urban morphology patterns was selected; both areas are located in regions with low horizontal building density and residential zoning. The microclimatic conditions were monitored during the summer period with fixed temperature and humidity sensors inside the street canyons. The microclimate was simulated in ENVI-met V5. A grid resolution of 3.5 x 3.5 x 3.5 m was used for both cities, giving a domain of 145 x 145 x 30 grid cells. Based on the validated theoretical model, ten scenarios were simulated, modifying the height of the buildings and the solar reflectivity of the facades. The facade solar reflectivity levels were low (0.3) and high (0.75), and the building density scenarios ranged from one to five storeys. The performance of the study scenarios was assessed by comparing air temperature, physiological equivalent temperature (PET), and the Universal Thermal Climate Index (UTCI). As a result, it is observed that the behavior of the materials of urban outdoor space depends on complex interactions; many urban environmental factors have an influence, including construction characteristics, urban morphology, geographic location, and local climate. The role of the vertical urban envelope is decisive for the reduction of urban overheating. One of the causes of thermal gain is the multiple reflections within the urban canyon, which affect not only the air temperature but also pedestrian thermal comfort; a back-of-the-envelope sketch of this effect is given below. One of the main findings of this work points to the remarkable importance of considering both urban warming and the thermal comfort of pedestrians in urban mitigation strategies.
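
A back-of-the-envelope sketch, with assumed irradiance and view-factor values, of why a more reflective facade can increase the radiant load on a pedestrian inside the canyon.

```python
# First-order sketch (assumed values, not the ENVI-met simulation): the
# shortwave flux reflected from a sunlit facade towards a pedestrian grows
# with facade albedo, raising the radiant load in the canyon.
incident_solar = 800.0              # W/m2 on the sunlit facade, assumed
view_factor_person_facade = 0.3     # assumed pedestrian-to-facade view factor

for albedo in (0.3, 0.75):          # low vs. high facade reflectivity scenarios
    reflected_to_person = albedo * incident_solar * view_factor_person_facade
    print(f"albedo {albedo:.2f}: ~{reflected_to_person:.0f} W/m2 "
          "of reflected shortwave reaching the pedestrian")
```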

Keywords: materials facades, solar reflectivity, thermal comfort, urban cooling

Procedia PDF Downloads 66
193 The Role of Two Macrophyte Species in Mineral Nutrient Cycling in Human-Impacted Water Reservoirs

Authors: Ludmila Polechonska, Agnieszka Klink

Abstract:

Biogeochemical studies of macrophytes shed light on element bioavailability, transfer through food webs and possible effects on the biota, and provide a basis for their practical application in aquatic monitoring and remediation. Measuring the accumulation of elements in plants can provide time-integrated information about the presence of chemicals in aquatic ecosystems. The aim of the study was to determine and compare the contents of micro- and macroelements in two cosmopolitan macrophytes, the submerged Ceratophyllum demersum (hornwort) and the free-floating Hydrocharis morsus-ranae (European frog-bit), in order to assess their bioaccumulation potential, the element stock accumulated in each plant, and their role in nutrient cycling in small water reservoirs. Sampling sites were designated in 25 oxbow lakes in urban areas in Lower Silesia (SW Poland). At each sampling site, fresh whole plants of C. demersum and H. morsus-ranae were collected from 1 x 1 m squares where the species coexisted. European frog-bit was separated into leaves, stems and roots. For biomass measurement, all plants growing on 1 square meter were collected, dried and weighed. At the same time, water samples were collected from each reservoir and their pH and EC were determined. Water samples were filtered and acidified, and plant samples were digested in concentrated nitric acid. Next, the content of Ca, Cu, Fe, K, Mg, Mn, Ni and Zn was determined using atomic absorption spectrometry (AAS). Statistical analysis showed that C. demersum and the organs of H. morsus-ranae differed significantly in metal content (Kruskal-Wallis ANOVA, p<0.05). Contents of Cu, Mn, Ni and Zn were higher in hornwort, while European frog-bit contained more Ca, Fe, K and Mg. Bioaccumulation Factors (BCF = content in plant / concentration in water) showed a similar pattern of metal bioaccumulation: microelements were more intensively accumulated by hornwort and macroelements by frog-bit; a minimal sketch of this calculation is given below. Based on the BCF values, both species may be positively evaluated as good accumulators of Cu, Fe, Mn, Ni and Zn. However, the distribution of metals in H. morsus-ranae was uneven: the majority of the studied elements were retained in the roots, which may indicate the existence of physiological barriers developed for dealing with toxicity. Some of the Ca and K was actively transported to the stems, but only Mg to the leaves. Although the biomass of C. demersum was two times greater than the biomass of H. morsus-ranae, the element off-take was greater only for Cu, Mn, Ni and Zn. Nevertheless, it can be stated that despite a relatively small biomass compared to other macrophytes, both species may have an influence on the removal of trace elements from aquatic ecosystems and, as they serve as food for some animals, also on the incorporation of toxic elements into food chains. There was a significant positive correlation between the content of Mn and Fe in water and in the roots of H. morsus-ranae (R=0.51 and R=0.60, respectively), as well as between the Cu concentration in water and in C. demersum (R=0.41) (Spearman rank correlation, p<0.05). The high bioaccumulation rates and the correlations between plant and water element concentrations point to the possible use of these species as passive biomonitors of aquatic pollution.
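
A minimal sketch of the two metrics used above, with invented concentrations: the bioaccumulation factor and a Spearman rank correlation between water and plant concentrations.

```python
# Sketch (invented concentrations, not the study data): the bioaccumulation
# factor BCF = content in plant / concentration in water, and a Spearman
# rank correlation between water and plant concentrations of one element.
import numpy as np
from scipy.stats import spearmanr

water_cu = np.array([3.1, 5.4, 2.2, 6.8, 4.0])      # ug/L, hypothetical
plant_cu = np.array([12.5, 21.0, 9.8, 30.2, 15.9])  # ug/g dry weight, hypothetical

bcf = plant_cu / water_cu
rho, p_value = spearmanr(water_cu, plant_cu)
print("BCF per site:", np.round(bcf, 1))
print(f"Spearman rho={rho:.2f}, p={p_value:.4f}")
```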

Keywords: aquatic plants, bioaccumulation, biomonitoring, macroelements, phytoremediation, trace metals

Procedia PDF Downloads 156
192 Informalization and Feminization of Labour Force in the Context of Globalization of Production: Case Study of Women Migrant Workers in Kinfra Apparel Park of India

Authors: Manasi Mahanty

Abstract:

In the current phase of globalization, the mobility of capital facilitates the outsourcing and subcontracting of production processes to developing economies for a cheap and flexible labour force. In this process, globalized production networks operate at multiple locations within the nation. Under the new quota regime of the globalization period, Indian manufacturing exporters came under the influence of corporate buyers and large retailers from the importing countries. As part of this process, the garment manufacturing sector is expected to create huge employment opportunities and to expand the country’s export market. In line with these expectations, the apparel and garment industries mostly target female migrant workers for hire, with the purpose of establishing more flexible industrial relations through the casual nature of the employment contract. This leads to increasing women’s participation in the labour market as well as a rise in precarious forms of female paid employment. In this context, the main objective of the paper is to understand the wider dynamics of the globalization of production and its link with informalization, the feminization of the labour force and the country’s internal migration process. For this purpose, the study examines the changing labour relations in the KINFRA Apparel Park at Kerala’s Special Economic Zone, which operates under the ‘Apparel Parks for Export’ (APE) scheme of the Government of India. The present study was based on both quantitative and qualitative analysis. In the first part, secondary data were collected from the source location (SEAM centre) and the destination (KINFRA Park). The official figures and data were discussed and analyzed in order to find out the various dimensions of labour relations under the globalization of production. In the second part, a primary survey was conducted to make a comparative analysis of local and migrant female workers. The study was executed with 100 workers in total. Local workers comprised 53% of the sample, whereas workers from outside the state made up 47%. Personal interviews with management staff and workers were also conducted to collect information regarding the organisational structure, the nature and mode of recruitment, the work environment, etc. The study shows an enormous presence of rural women migrant workers in the KINFRA Apparel Park. A Public Private Partnership (PPP)-arranged migration system, Skills for Employment in Apparel Manufacturing (SEAM), was found, through which young women and girls are sent to work in the garment factories of Kerala’s KINFRA International Apparel Park under the guise of apprenticeship-based recruitment. The study concludes that such arrangements try to avoid standard employment relationships and strengthen the informalization, casualization and contractualization of work. In this process, the recruitment of women migrant workers is considered the best option for the employers of private industries, as such workers can be more easily hired and fired.

Keywords: female migration, globalization, informalization, KINFRA apparel park

Procedia PDF Downloads 311
191 Accelerated Carbonation of Construction Materials by Using Slag from Steel and Metal Production as Substitute for Conventional Raw Materials

Authors: Karen Fuchs, Michael Prokein, Nils Mölders, Manfred Renner, Eckhard Weidner

Abstract:

The high CO₂ emissions and energy consumption associated with the production of sand-lime bricks are of great concern. In particular, the production of quicklime from limestone and the energy consumed for hydrothermal curing contribute to high CO₂ emissions. Hydrothermal curing is carried out under a saturated steam atmosphere at about 15 bar and 200°C for 12 hours. Therefore, we are investigating the opportunity to replace quicklime and sand in the production of building materials with different types of slag, a calcium-rich waste from steel production. We are also investigating the possibility of substituting conventional hydrothermal curing with CO₂ curing. Six different slags (Linz-Donawitz (LD), ferrochrome (FeCr), ladle (LS), stainless steel (SS), ladle furnace (LF), electric arc furnace (EAF)) provided by "thyssenkrupp MillServices & Systems GmbH" were ground at "Loesche GmbH". Cylindrical blocks with a diameter of 100 mm were pressed at 12 MPa. The composition of the blocks varied between pure slag and mixtures of slag and sand. The effects of pressure, temperature, and time on the CO₂ curing process were studied in a 2-liter high-pressure autoclave. Pressures between 0.1 and 5 MPa, temperatures between 25 and 140°C, and curing times between 1 and 100 hours were considered. The quality of the CO₂-cured blocks was determined by measuring the compressive strength at "Ruhrbaustoffwerke GmbH & Co. KG." The degree of carbonation was determined by total inorganic carbon (TIC) and X-ray diffraction (XRD) measurements. The pH trends across the cross-section of the blocks were monitored using phenolphthalein as a liquid pH indicator. The parameter set that yielded the best-performing material was tested on all slag types. In addition, the method was scaled up to steel slag-based building blocks (240 mm x 115 mm x 60 mm) provided by "Ruhrbaustoffwerke GmbH & Co. KG" and CO₂-cured in a 20-liter high-pressure autoclave. The results show that CO₂ curing of building blocks consisting of pure wetted LD slag leads to severe cracking of the cylindrical specimens: the high CO₂ uptake causes the specimens to expand. However, if LD slag is used only proportionally, replacing the quicklime completely and the sand in part, dimensionally stable bricks with high compressive strength are produced. The tests to determine the optimum pressure and temperature showed 2 MPa and 50°C to be promising parameters for the CO₂ curing process. At these parameters, and after 3 h, the compressive strength of LD slag blocks reached the highest average value of almost 50 N/mm², more than double that of conventional sand-lime bricks. Longer CO₂ curing times did not result in higher compressive strengths. XRD and TIC measurements confirmed the formation of carbonates. All tested slag-based bricks show higher compressive strengths than conventional sand-lime bricks. However, the type of slag has a significant influence on the compressive strength values. The results of the tests in the 20-liter plant agreed well with the results of the 2-liter tests. With its comparatively moderate operating conditions, the CO₂ curing process has a high potential for saving CO₂ emissions.

Keywords: CO₂ curing, carbonation, CCU, steel slag

Procedia PDF Downloads 77