Search results for: social influence and social identity
242 Energy Metabolism and Mitochondrial Biogenesis in Muscles of Rats Subjected to Cold Water Immersion
Authors: Mateusz Bosiacki, Anna Lubkowska, Dariusz Chlubek, Irena Baranowska-Bosiacka
Abstract:
Exposure to cold temperatures can be considered a stressor that can lead to adaptive responses. The present study hypothesized the possibility of a positive effect of cold water exercise on mitochondrial biogenesis and muscle energy metabolism in aging rats. The purpose of this study was to evaluate the effects of cold water exercise on energy status, purine compounds, and mitochondrial biogenesis in the muscles of aging rats as indicators of the effects of cold water exercise and their usefulness in monitoring adaptive changes. The study was conducted on 64 aging rats of both sexes, 15 months old at the time of the experiment. The rats (male and female separately) were randomly assigned to the following study groups: control - sedentary animals; 5°C groups - animals training by swimming in cold water at 5°C; 36°C groups - animals training by swimming in water at a thermally comfortable temperature. The study was conducted with the approval of the Local Ethical Committee for Animal Experiments. The animals in the experiment were subjected to swimming training for 9 weeks. During the first week of the study, the duration of the first swimming training was 2 minutes (on the first day), increasing daily by 0.5 minutes up to 4 minutes on the fifth day of the first week. From the second to the eighth week, the swimming training was 4 minutes per day, five days a week. At the end of the study, forty-eight hours after the last swim training, the animals were dissected. In the skeletal muscle tissue of the thighs of the rats, we determined the concentrations of ATP, ADP, AMP and Ado (HPLC), PGC-1a protein expression (Western blot), and PGC1A, Mfn1, Mfn2, Opa1, and Drp1 gene expression (qRT-PCR). The study showed that swimming in water at a thermally comfortable temperature improved the energy metabolism of the aging rat muscles by increasing the metabolic rate (increase in ATP, ADP, TAN, AEC) and enhancing mitochondrial fusion (increase in mRNA expression of the regulatory proteins Mfn1 and Mfn2). Cold water swimming improved muscle energy metabolism in aging rats by increasing the rate of muscle energy metabolism (increase in ATP, ADP, TAN, AEC concentrations) and enhancing mitochondrial biogenesis and dynamics (increase in the mRNA expression of the fusion-regulating factors Mfn1, Mfn2, and Opa1, and of the factor regulating mitochondrial fission, Drp1). The concentration of high-energy compounds and the expression of proteins regulating mitochondrial dynamics in the muscle may be a useful indicator in monitoring adaptive changes occurring in aging muscles under the influence of exercise in cold water. It represents a short-term adaptation to changing environmental conditions and has a beneficial effect on maintaining the bioenergetic capacity of muscles in the long term. Conclusion: exercise in cold water can exert positive effects on the energy metabolism, biogenesis and dynamics of mitochondria in aging rat muscles. Enhancement of mitochondrial dynamics under cold water exercise conditions can improve mitochondrial function and optimize the bioenergetic capacity of mitochondria in aging rat muscles.
Keywords: cold water immersion, adaptive responses, muscle energy metabolism, aging
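The abstract reports TAN and AEC alongside the measured nucleotides without restating how they are derived. The sketch below uses the conventional definitions (total adenine nucleotides and Atkinson's adenylate energy charge) as an assumption; the concentrations are placeholders, not data from the study.

```python
# Illustrative calculation of total adenine nucleotides (TAN) and adenylate
# energy charge (AEC) from nucleotide concentrations (standard definitions,
# assumed here; the values are hypothetical, not taken from the study).

def adenylate_indices(atp, adp, amp):
    """Return (TAN, AEC) for concentrations given in the same units (e.g. umol/g)."""
    tan = atp + adp + amp              # total adenine nucleotide pool
    aec = (atp + 0.5 * adp) / tan      # Atkinson's energy charge, ranges 0..1
    return tan, aec

tan, aec = adenylate_indices(atp=5.2, adp=1.1, amp=0.3)
print(f"TAN = {tan:.2f} umol/g, AEC = {aec:.2f}")
```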
Procedia PDF Downloads 81
241 Strategic Planning Practice in a Global Perspective: The Case of Guangzhou, China
Authors: Shuyi Xie
Abstract:
A vital city in south China since ancient times, Guangzhou has been losing its leading role among the rising neighboring cities, especially Hong Kong and Shenzhen, since the late 1980s, with the overloaded infrastructure and deteriorating urban environment in its old inner city. Fortunately, the local municipality considered the new expansion of its administrative area in 2000 a great opportunity to solve a series of alarming urban problems. Thus, for the first time, strategic planning was introduced to China to provide a more convincing and scientific basis for a better urban future. Differing from traditional Chinese planning practices, which rigidly and dogmatically focused on future blueprints, the strategic planning of Guangzhou proceeded from analyzing practical challenges and opportunities towards establishing reasonable development objectives and proposing corresponding strategies. Moreover, the municipality took the pioneering step of inviting five planning institutions to submit proposals. Among these, the paper focuses on the one proposed by the China Academy of Urban Planning & Design, from its theoretical basis through problem definition and analysis to the planning results, since it was closest to the subsequent municipal decisions and had the most far-reaching influence on the later practices of other Chinese cities. In particular, it demonstrated an innovative exploration of the role played by the urban development rate in deciding urban growth patterns (‘Spillover-reverberation’ or ‘Leapfrog’), which ultimately established an unprecedented paradigm for deciding an appropriate future urban spatial structure, including its specific location, function and scale. Besides the proposal itself, this article highlights the role of interactions among actors, as well as among proposals, subsequent discussions, summaries and municipal decisions, especially the establishment of a rolling dynamic evaluation system for periodic reviews of implementation, the first such attempt in China. Undoubtedly, the strategic planning of Guangzhou has brought considerable benefits, especially in opening up strategic thinking for many Chinese cities in the following years by establishing a flexible and dynamic planning mechanism that highlighted the interactions among multiple actors, with innovative and effective tools, methodologies and perspectives on regional, objective-oriented and comparative analysis. However, compared with some developed countries, strategic planning in China has only just started and has relied heavily on empirical studies rather than scientific analysis. Moreover, it still faces some controversy, for instance, the gap between institutional proposals, final municipal decisions and implemented results, due to the lack of legal constraints. Also, how to improve public involvement in China's strongly top-down administrative system is another urgent task. In the future, despite these weaknesses, experiences and lessons from previous international practices, combined with specific Chinese conditions and domestic practice, should promote the further advance of strategic planning in China.
Keywords: evaluation system, global perspective, Guangzhou, interactions, strategic planning, urban growth patterns
Procedia PDF Downloads 390
240 Assessment of Tidal Influence in Spatial and Temporal Variations of Water Quality in Masan Bay, Korea
Abstract:
Slack-tide sampling was carried out at seven stations at high and low tides over a tidal cycle, in summer (July, August, September) and fall (October) 2016, to determine the differences in water quality according to tides in Masan Bay. The data were analyzed by Pearson correlation and factor analysis. The mixing state of all the water quality components investigated is well explained by the correlation with salinity (SAL). Turbidity (TURB), dissolved silica (DSi), nitrite and nitrate nitrogen (NNN) and total nitrogen (TN), which find their way into the bay from the streams and have no internal source and sink reaction, showed a strong negative correlation with SAL at low tide, indicating the property of conservative mixing. On the contrary, in summer and fall, dissolved oxygen (DO), hydrogen sulfide (H2S) and chemical oxygen demand with KMnO4 (CODMn) of the surface and bottom water, which were sensitive to an internal source and sink reaction, showed no significant correlation with SAL at high and low tides. The remaining water quality parameters showed a conservative or a non-conservative mixing pattern depending on the mixing characteristics at high and low tides, determined by the functional relationship between the changes of the flushing time and the changes of the characteristics of the water quality components of the end-members in the bay. Factor analysis performed on the concentration difference data sets between high and low tides helped in identifying the principal latent variables for them. The concentration differences varied spatially and temporally. Principal factor (PF) score plots for each monitoring situation showed that the variations were strongly associated with the monitoring sites. At sampling station 1 (ST1), temperature (TEMP), SAL, DSi, TURB, NNN and TN of the surface water in summer; TEMP, SAL, DSi, DO, TURB, NNN, TN, reactive soluble phosphorus (RSP) and total phosphorus (TP) of the bottom water in summer; TEMP, pH, SAL, DSi, DO, TURB, CODMn, particulate organic carbon (POC), ammonia nitrogen (AMN), NNN, TN and fecal coliform (FC) of the surface water in fall; and TEMP, pH, SAL, DSi, H2S, TURB, CODMn, AMN, NNN and TN of the bottom water in fall commonly showed up as the most significant parameters, with large concentration differences between high and low tides. At other stations, the significant parameters differed according to the spatial and temporal variations of the mixing pattern in the bay. In fact, no estuary always maintains steady-state flow conditions. The mixing regime of an estuary might change at any time from linear to non-linear, due to the change of flushing time according to the combination of hydrogeometric properties, inflow of freshwater and tidal action. Furthermore, the change of end-member conditions due to internal sinks and sources makes the occurrence of concentration differences inevitable. Therefore, when investigating the water quality of an estuary, it is necessary to adopt a sampling method that takes the tide into account in order to obtain representative average water quality data.
Keywords: conservative mixing, end-member, factor analysis, flushing time, high and low tide, latent variables, non-conservative mixing, slack-tide sampling, spatial and temporal variations, surface and bottom water
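A minimal sketch of the salinity-correlation screening described above follows: a parameter whose low-tide concentrations correlate strongly and negatively with salinity is treated as conservatively mixed, otherwise as non-conservative. The column names, data and the -0.7 correlation threshold are assumptions for illustration, not the study's values.

```python
# Screen water-quality parameters for conservative vs non-conservative mixing
# by correlating their low-tide concentrations with salinity (synthetic data).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
salinity = np.sort(rng.uniform(10, 32, 28))                   # low-tide salinity (psu)
params = {
    "TURB": 50 - 1.4 * salinity + rng.normal(0, 2, 28),       # river-borne, dilutes seaward
    "DO":   rng.normal(6.5, 1.0, 28),                         # driven by internal sources/sinks
}

for name, conc in params.items():
    r, p = pearsonr(salinity, conc)
    mixing = "conservative" if (r < -0.7 and p < 0.05) else "non-conservative"
    print(f"{name}: r = {r:+.2f} (p = {p:.3f}) -> {mixing} mixing")
```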
Procedia PDF Downloads 130
239 Selling Electric Vehicles: Experiences from Car Salesmen in Sweden
Authors: Jens Hagman, Jenny Janhager Stier, Ellen Olausson, Anne Y. Faxer, Ana Magazinius
Abstract:
Sweden has the second highest electric vehicle (plug-in hybrid and battery electric vehicle) sales per capita in Europe, but in relation to sales of internal combustion engine vehicles, electric vehicle sales are still minuscule (< 4%). Much research effort has been placed on various technical and user-focused barriers and enablers for the adoption of electric vehicles. Less effort has been placed on investigating the retail (dealership-customer) sales process of vehicles in general and electric vehicles in particular. Arguably, no one ought to be better informed about the needs and desires of potential electric vehicle buyers than car salesmen, given their daily encounters with customers at the dealership. The aim of this paper is to explore the conditions of selling electric vehicles from a car salesman's perspective. This includes identifying barriers and enablers for electric vehicle sales originating from internal (dealership and brand) and external (customer, government) sources. In this interview study, five car brands (manufacturers) that sell both electric and internal combustion engine vehicles have been investigated. A total of 15 semi-structured interviews have been conducted (three per brand, in rural and urban settings and at different dealerships). Initial analysis reveals several barriers and enablers, experienced by car salesmen, which influence electric vehicle sales. Examples of barriers identified by car salesmen are: -Electric vehicles earn car salesmen less commission on average compared to internal combustion engine vehicles. -It takes more time to sell and deliver an electric vehicle than an internal combustion engine vehicle. -Current leasing contracts entail relatively low second-hand value estimations for electric vehicles and thus a high leasing fee, which negatively affects the attractiveness of electric vehicles for private consumers in particular. -The high purchase price discourages many consumers from considering electric vehicles. -The level of education and knowledge about electric vehicles differs between car salesmen, which could affect their self-confidence in meeting well-prepared and question-prone electric vehicle buyers. Examples of identified enablers are: -Company car tax regulation promotes sales of electric vehicles; in particular, plug-in hybrid electric vehicles are sold extensively to companies (up to 95% of sales). -The low operating cost of electric vehicles, such as fuel and service, is an advantage when understood by consumers. -The drive performance of electric vehicles (quick, silent and fun to drive) is attractive to consumers. -Environmental aspects are considered important for certain consumer groups. -Fast technological improvements, such as increased range, are opening up a wider market for electric vehicles. -For one of the brands, attractive private lease campaigns have proved effective in promoting sales. This paper gives insights into an important but often overlooked aspect of the diffusion of electric vehicles (and durable products in general): the interaction between car salesmen and customers at the critical acquisition moment, extracted through interviews with multiple car salesmen. The results illuminate untapped potential for sellers (salesmen, dealerships and brands) to mitigate sales barriers and strengthen sales enablers and thus become more important actors in the electric vehicle diffusion process.
Keywords: customer barriers, electric vehicle promotion, sales of electric vehicles, interviews with car salesmen
Procedia PDF Downloads 229
238 Iron-Metal-Organic Frameworks: Potential Application as Theranostics for Inhalable Therapy of Tuberculosis
Authors: Gabriela Wyszogrodzka, Przemyslaw Dorozynski, Barbara Gil, Maciej Strzempek, Bartosz Marszalek, Piotr Kulinowski, Wladyslaw Piotr Weglarz, Elzbieta Menaszek
Abstract:
MOFs (Metal-Organic Frameworks) belong to a new group of porous materials with a hybrid organic-inorganic construction. Their structure is a network consisting of metal cations or clusters (acting as metallic centers, or nodes) and organic linkers between the nodes. The interest in MOFs is primarily associated with the use of their well-developed surface area and large porosity. The possibility of building MOFs from biocompatible components allows their use as potential drug carriers. Furthermore, forming the MOF structure from cations possessing paramagnetic properties (e.g. iron cations) allows their use as MRI (Magnetic Resonance Imaging) contrast agents. The concept of particles that combine the ability to carry an active substance with imaging properties has been called theranostics (from the combination of the words therapy and diagnostics). By building the MOF structure from iron cations, it is possible to use them as theranostic agents and to monitor the distribution of the active substance in real time after administration. In this study, the iron-MOF Fe-MIL-101-NH2 was chosen, consisting of iron clusters at the nodes of the structure and amino-terephthalic acid as the linker. The aim of the study was to investigate the possibility of applying Fe-MIL-101-NH2 as an inhalable theranostic particulate system for the first-line anti-tuberculosis antibiotic, isoniazid. The drug content incorporated into Fe-MIL-101-NH2 was evaluated by a dissolution study using a spectrophotometric method. Results showed an isoniazid encapsulation efficiency of ca. 12.5 wt%. The possibility of applying Fe-MIL-101-NH2 as an MRI contrast agent was demonstrated by magnetic resonance tomography. Fe-MIL-101-NH2 effectively shortened T1 and T2 relaxation times (increasing the R1 and R2 relaxation rates) linearly with the concentration of suspended material. Images obtained using a multi-echo magnetic resonance imaging sequence revealed the possibility of using Fe-MIL-101-NH2 as a positive or negative contrast agent depending on the applied repetition time. MOF micronization via ultrasound was evaluated by XRD, nitrogen adsorption, FTIR and SEM imaging and did not influence the crystal shape and size. Ultrasonication broke up the aggregates and yielded very homogeneous-looking SEM images. MOF cytotoxicity was evaluated in an in vitro test with the highly sensitive resazurin-based reagent PrestoBlue™ on the L929 fibroblast cell line. After 24 h, no inhibition of cell proliferation was observed. All results demonstrated the potential of iron-MOFs as an isoniazid carrier and as an MRI contrast agent in the inhalation treatment of tuberculosis. Acknowledgments: The authors gratefully acknowledge the National Science Center Poland for providing financial support, grant no 2014/15/B/ST5/04498.
Keywords: imaging agents, metal-organic frameworks, theranostics, tuberculosis
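The linear dependence of the relaxation rates on concentration mentioned above is commonly summarized as a relaxivity, the slope of R1 = 1/T1 versus concentration. The sketch below shows that fit on invented T1 values and concentrations; it illustrates the relationship only and does not reproduce the study's measurements.

```python
# Estimate a relaxivity r1 from hypothetical T1 measurements at several
# concentrations of suspended material, assuming R1 = R1_0 + r1 * C.
import numpy as np

conc = np.array([0.0, 0.1, 0.2, 0.4, 0.8])                 # mg/mL (hypothetical)
t1_ms = np.array([2800.0, 2100.0, 1680.0, 1200.0, 760.0])  # hypothetical T1 (ms)

r1_obs = 1000.0 / t1_ms                        # relaxation rate R1 in s^-1
slope, intercept = np.polyfit(conc, r1_obs, 1) # slope = relaxivity r1
print(f"r1 ≈ {slope:.2f} s^-1 per mg/mL, R1(0) ≈ {intercept:.2f} s^-1")
```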
Procedia PDF Downloads 251
237 Edmonton Urban Growth Model as a Support Tool for the City Plan Growth Scenarios Development
Authors: Sinisa J. Vukicevic
Abstract:
Edmonton is currently one of the youngest North American cities and has achieved significant growth over the past 40 years. This strong urban shift requires a new approach to how the city is envisioned, planned, and built. This approach is evidence-based scenario development, and an urban growth model was a key support tool in framing Edmonton's development strategies, developing urban policies, and assessing policy implications. The urban growth model has been developed using the Metronamica software platform. The Metronamica land use model evaluated the dynamics of land use change under the influence of key development drivers (population and employment), zoning, land suitability, and land and activity accessibility. The model was designed following the Big City Moves ideas: become greener as we grow, develop a rebuildable city, ignite a community of communities, foster a healing city, and create a city of convergence. The Big City Moves were converted into three development scenarios: ‘Strong Central City’, ‘Node City’, and ‘Corridor City’. Each scenario has a narrative story that expresses the scenario's high-level goal, its approach to residential and commercial activities, its transportation vision, and its employment and environmental principles. Land use demand was calculated for each scenario according to specific density targets. Spatial policies were analyzed according to their level of importance within the policy set definition for the specific scenario, but also through the policy measures. The model was calibrated so as to reproduce the known historical land use pattern. For the calibration, we used 2006 and 2011 land use data. The validation was done independently, which means we used data not used for the calibration; the model was validated with 2016 data. In general, the modeling process contains three main phases: ‘from qualitative storyline to quantitative modelling’, ‘model development and model run’, and ‘from quantitative modelling to qualitative storyline’. The model also incorporates five spatial indicators: distance from residential to work, distance from residential to recreation, distance to the river valley, urban expansion and habitat fragmentation. The major findings of this research can be looked at from two perspectives: the planning perspective and the technology perspective. The planning perspective evaluates the model as a tool for scenario development. Using the model, we explored the land use dynamics influenced by different sets of policies. The model enables a direct comparison between the three scenarios. We explored the similarities and differences of the scenarios and their quantitative indicators: land use change, population change (and spatial allocation), job allocation, density (population, employment, and dwelling units), habitat connectivity, proximity to objects of interest, etc. From the technology perspective, the model showed one very important characteristic: its flexibility. The direction of policy testing changed many times during the consultation process, and the model's flexibility in applying all these changes was highly appreciated. The model satisfied our needs as a scenario development and evaluation tool, but also as a communication tool during the consultation process.
Keywords: urban growth model, scenario development, spatial indicators, Metronamica
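As an illustration of the proximity-type indicators listed above (for example, distance from residential to work), the sketch below computes a mean residential-to-work distance on a toy land-use raster using a distance transform. The class codes, raster and cell size are assumptions; this is not Metronamica's internal implementation.

```python
# Mean distance from residential cells to the nearest work/employment cell
# on a small synthetic land-use raster (0 = other, 1 = residential, 2 = work).
import numpy as np
from scipy.ndimage import distance_transform_edt

RES, WORK = 1, 2
CELL_SIZE_M = 100.0                      # assumed raster resolution

land_use = np.array([
    [1, 1, 0, 0, 2],
    [1, 0, 0, 2, 2],
    [0, 0, 1, 0, 0],
    [1, 0, 0, 0, 2],
])

# Euclidean distance (in cells) from every cell to the nearest WORK cell.
dist_to_work = distance_transform_edt(land_use != WORK) * CELL_SIZE_M

mean_dist = dist_to_work[land_use == RES].mean()
print(f"Mean residential-to-work distance: {mean_dist:.0f} m")
```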
Procedia PDF Downloads 95
236 A Mixed-Methods Design and Implementation Study of ‘the Attach Project’: An Attachment-Based Educational Intervention for Looked after Children in Northern Ireland
Authors: Hannah M. Russell
Abstract:
‘The Attach Project’ (TAP) is an educational intervention aimed at improving educational and socio-emotional outcomes for children who are looked after. TAP is underpinned by Attachment Theory and is adapted from Dyadic Developmental Psychotherapy (DDP), which is a treatment for children and young people impacted by complex trauma and disorders of attachment. TAP has been implemented in primary schools in Northern Ireland throughout the 2018/19 academic year. During this time, a design and implementation study has been conducted to assess the promise of effectiveness for the future dissemination and ‘scaling-up’ of the programme for a larger, randomised control trial. TAP has been designed specifically for implementation in a school setting and is comprised of a whole-school element and a more individualised Key Adult-Key Child pairing. This design and implementation study utilises a mixed-methods research design consisting of quantitative, qualitative, and observational measures, with stakeholder input and involvement considered an integral component. The use of quantitative measures, such as self-report questionnaires prior to and eight months following the implementation of TAP, enabled the analysis of the strength and direction of relations between the various components of the programme, as well as the influence of implementation factors. The use of qualitative measures, incorporating semi-structured interviews and focus groups, enabled the assessment of implementation factors, the identification of implementation barriers, and potential methods of addressing these issues. Observational measures facilitated the continual development and improvement of ‘TAP training’ for school staff. Preliminary findings have provided evidence of promise for the effectiveness of TAP and indicate the potential benefits of introducing this type of attachment-based intervention across other educational settings. This type of intervention could benefit not only children who are looked after but all children who may be impacted by complex trauma or disorders of attachment. Furthermore, findings from this study demonstrate that it is possible for children to form a secondary attachment relationship with a significant adult in school. However, various implementation factors which should be addressed were identified throughout the study, such as the necessity of protected time being introduced to facilitate the development of a positive Key Adult-Key Child relationship. Furthermore, additional ‘re-cap’ training is required in future dissemination of the programme, to maximise ‘attachment friendly practice’ in the whole staff team. Qualitative findings have also indicated that there is a general opinion across school staff that this type of Key Adult-Key Child pairing could be more effective if it were introduced as soon as children begin primary school. This research has provided ample evidence for the need to introduce relationally based interventions in schools, to help ensure that children who are looked after, or who are impacted by complex trauma or disorders of attachment, can thrive in the school environment. In addition, this research has facilitated the identification of important implementation factors and barriers to implementation, which can be addressed prior to the ‘scaling-up’ of TAP for a robust, randomised controlled trial.
Keywords: attachment, complex trauma, educational interventions, implementation
Procedia PDF Downloads 194
235 Assessment of On-Site Solar and Wind Energy at a Manufacturing Facility in Ireland
Authors: A. Sgobba, C. Meskell
Abstract:
The feasibility of on-site electricity production from solar and wind and the resulting load management for a specific manufacturing plant in Ireland are assessed. The industry sector accounts directly and indirectly for a high percentage of electricity consumption and global greenhouse gas emissions; therefore, it will play a key role in emission reduction and control. Manufacturing plants, in particular, are often located in non-residential areas since they require open spaces for production machinery, parking facilities for the employees, appropriate routes for supply and delivery, special connections to the national grid, and because of other environmental impacts. Since they have larger spaces compared to commercial sites in urban areas, they represent an appropriate case study for evaluating the technical and economic viability of energy system integration with low power density technologies, such as solar and wind, for on-site electricity generation. The available open space surrounding the analysed manufacturing plant can be efficiently used to produce a discrete quantity of energy, instantaneously and locally consumed. Therefore, transmission and distribution losses can be reduced. The use of storage is not required due to the high and almost constant electricity consumption profile. The energy load of the plant is identified through the analysis of gas and electricity consumption, both internally monitored and reported on the bills. These data are not often recorded and available to third parties since manufacturing companies usually keep track only of the overall energy expenditures. The solar potential is modelled for a period of 21 years based on global horizontal irradiation data; the hourly direct and diffuse radiation and the energy produced by the system at the optimum pitch angle are calculated. The model is validated using the PVWatts and SAM tools. Wind speed data are available for the same period at a one-hour time step at a height of 10 m. Since the hub of a typical wind turbine reaches a higher altitude, complementary data for a different location at 50 m have been compared, and a model for estimating the wind speed at the required height and location is defined. The Weibull statistical distribution is used to evaluate the wind energy potential of the site. The results show that solar and wind energy are, as expected, generally decoupled. Based on the real case study, the percentage of load covered every hour by on-site generation (Level of Autonomy, LA) and the resulting electricity bought from the grid (Expected Energy Not Supplied, EENS) are calculated. The economic viability of the project is assessed through the Net Present Value, and the influence the main technical and economic parameters have on NPV is presented. Since the results show that the analysed renewable sources cannot provide enough electricity, integration with a cogeneration technology is studied. Finally, the benefit to energy system integration of wind, solar and a cogeneration technology is evaluated and discussed.
Keywords: demand, energy system integration, load, manufacturing, national grid, renewable energy sources
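The sketch below illustrates two of the metrics defined above, the Level of Autonomy (LA) and the Expected Energy Not Supplied (EENS), on synthetic hourly series, together with a Weibull fit of the kind used for the wind resource. The generation curve, load level and Weibull parameters are assumptions, not the plant's data.

```python
# Level of Autonomy (LA) and Expected Energy Not Supplied (EENS) on a toy year
# of hourly data, plus a Weibull re-fit of the synthetic wind-speed series.
import numpy as np
from scipy.stats import weibull_min

hours = 24 * 365
wind_speed = weibull_min.rvs(c=2.0, scale=7.0, size=hours, random_state=1)  # m/s

# Re-fit the Weibull shape/scale from the "measured" series (location fixed at 0).
c_hat, _, scale_hat = weibull_min.fit(wind_speed, floc=0)
print(f"Weibull shape k ≈ {c_hat:.2f}, scale A ≈ {scale_hat:.2f} m/s")

load_kw = np.full(hours, 900.0)                        # near-constant plant load (assumed)
onsite_kw = np.clip(40.0 * wind_speed**2, 0, 2000.0)   # toy on-site output curve

la = np.minimum(onsite_kw, load_kw).sum() / load_kw.sum()
eens_mwh = np.maximum(load_kw - onsite_kw, 0).sum() / 1000.0
print(f"LA ≈ {la:.1%}, EENS ≈ {eens_mwh:,.0f} MWh/year")
```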
Procedia PDF Downloads 129
234 Modern Architecture and the Scientific World Conception
Authors: Sean Griffiths
Abstract:
Introduction: This paper examines the expression of ‘objectivity’ in architecture in the context of the post-war rejection of this concept. It aims to re-examine the question in light of the assault on truth characterizing contemporary culture and of the unassailable truth of the climate emergency. The paper analyses the search for objective truth as it was prosecuted in the Modern Movement in the early 20th century, looking at the extent to which this quest was successful in contributing to the development of a radically new, politically-informed architecture and the extent to which its particular interpretation of objectivity limited that development. The paper studies the influence of the Vienna Circle philosophers Rudolph Carnap and Otto Neurath on the pedagogy of the Bauhaus and the architecture of the Neue Sachlichkeit in Germany. Their logical positivism sought to determine objective truths through empirical analysis, expressed in an austere formal language as part of a ‘scientific world conception’ which would overcome metaphysics and unverifiable mystification. These ideas, and the concurrent prioritizing of measurement as the determinant of environmental quality, became key influences in the socially-driven architecture constructed in the 1920s and 30s by Bauhaus architects in numerous German cities. Methodology: The paper reviews the history of the early Modern Movement and summarizes accounts of the relationship between the Vienna Circle and the Bauhaus. It looks at key differences in the approaches Neurath and Carnap took to the achievement of their shared philosophical and political aims. It analyses how the adoption of Carnap's foundationalism influenced the architectural language of modern architecture and compares, through a close reading of the structure of Neurath's ‘protocol sentences’, the latter's alternative approach, speculating on the possibility that its adoption offered a different direction of travel for Modern Architecture. Findings: The paper finds that the adoption of Carnap's foundationalism, while helping Modern Architecture forge a new visual language, ultimately limited its development and is implicated in its failure to escape the very metaphysics against which it had set itself. It speculates that Neurath's relational, language-based approach to the issue of establishing objectivity has its architectural corollary in the process of revision and renovation, which offers new ways an ‘objective’ language of architecture might be developed in a manner that is more responsive to our present-day crisis. Conclusion: The philosophical principles of the Vienna Circle and of the architects of the Modern Movement had much in common. Both contributed to radical historical departures which sought to instantiate a scientific world conception in their respective fields, which would attempt to banish mystification and metaphysics and would align itself with socialism. However, in adopting Carnap's foundationalism as the theoretical basis for the new architecture, Modern Architecture not only failed to escape metaphysics but arguably closed off new avenues of development to itself. The adoption of Neurath's more open-ended and interactive approach to objectivity offers possibilities for new conceptions of the expression of objectivity in architecture that might be more tailored to the multiple crises we face today.
Keywords: Bauhaus, logical positivism, Neue Sachlichkeit, rationalism, Vienna Circle
Procedia PDF Downloads 87
233 Influence Study of the Molar Ratio between Solvent and Initiator on the Reaction Rate of Polyether Polyols Synthesis
Authors: María José Carrero, Ana M. Borreguero, Juan F. Rodríguez, María M. Velencoso, Ángel Serrano, María Jesús Ramos
Abstract:
Flame retardants are incorporated in different materials in order to reduce the risk of fire, either by providing increased resistance to ignition or by acting to slow down combustion and thereby delay the spread of flames. In this work, polyether polyols with fire-retardant properties were synthesized due to their wide application in polyurethane formulation. The combustion of polyurethanes is primarily dependent on the thermal properties of the polymer, the presence of impurities and formulation residue in the polymer, as well as the supply of oxygen. There are many types of flame retardants; most of them are phosphorous compounds of different nature and functionality. The addition of these compounds is the most common method for the incorporation of flame-retardant properties. The use of glycerol phosphate sodium salt as initiator for the polyol synthesis allows obtaining polyols with phosphate groups in their structure. However, some of the critical points of the use of the glycerol phosphate salt are the lower reactivity of the salt and the necessity of a solvent (dimethyl sulfoxide, DMSO). Thus, the main aim of the present work was to determine the amount of solvent needed to achieve good solubility of the initiator salt. Although the anionic polymerization mechanism of polyether formation is well known, it seems convenient to clarify the role that DMSO plays at the starting point of the polymerization process. The catalyst deprotonates the hydroxyl groups of the initiator and, as a result, two water molecules and the glycerol phosphate alkoxide are formed. This alkoxide, together with DMSO, has to form a homogeneous mixture in which the initiator (solid) and the propylene oxide (PO) are soluble enough to mutually interact. The addition rate of PO increased when the studied solvent/initiator ratios were increased, and it was observed that this also made the initiation step shorter. Furthermore, the molecular weight of the polyol decreased when higher solvent/initiator ratios were used, which revealed that more salt was activated, initiating a larger number of shorter chains but allowing more phosphate molecules to react and increasing the percentage of phosphorus in the final polyol. However, the final phosphorus content was lower than the theoretical one because only a fraction of the salt was activated. On the other hand, the glycerol phosphate disodium salt was still partially insoluble at the studied DMSO proportions; thus, the recovery and reuse of this part of the salt for the synthesis of new flame-retardant polyols was evaluated. With the recovered salt, the rate of addition of PO remained the same as with the commercial salt, but a shorter induction period was observed because the recovered salt presents a higher amount of deprotonated hydroxyl groups. Besides, according to the molecular weight, polydispersity index, FT-IR spectrum and thermal stability, there were no differences between the two synthesized polyols. Thus, it is possible to use the recovered glycerol phosphate disodium salt in the same way as the commercial one.
Keywords: DMSO, fire retardants, glycerol phosphate disodium salt, recovered initiator, solvent
Procedia PDF Downloads 278
232 Investigation of the Trunk Inclination Positioning Angle on Swallowing and Respiratory Function
Authors: Hsin-Yi Kathy Cheng, Yan-Ying JU, Wann-Yun Shieh, Chin-Man Wang
Abstract:
Although the coordination of swallowing and respiration has been discussed widely, the influence of the positioning angle on swallowing and respiration during feeding has rarely been investigated. This study aimed to investigate the timing and coordination of swallowing and respiration at different seat inclination angles, with liquid and bolus, to provide suggestions and guidelines for the design and development of a feedback-controlled seat angle adjustment device for the back-adjustable wheelchair. Twenty-six participants aged 15-30 years without any signs of swallowing difficulty were included. The combination of seat inclinations and food types was randomly assigned, with three repetitions of each combination. The trunk inclination angle was adjusted by a commercialized positioning wheelchair. A total of 36 swallows were done, with at least 30 seconds of rest between each swallow. We used a self-developed wearable device to measure the submandibular muscle surface EMG, the movement of the thyroid cartilage, and the respiratory status of the nasal cavity. Our program automatically analyzed the onset and offset durations, the excursion and strength of the thyroid cartilage during movement, and the coordination between breathing and swallowing. Variables measured include the EMG duration (DsEMG), swallowing apnea duration (SAD), total excursion time (TET), duration of the 2nd deflection, FSR amplitude, onset latency, DsEMG onset, DsEMG offset, FSR onset, and FSR offset. These measurements were done at four seat inclination angles (5°, 15°, 30°, 45°) and with three food contents (1 ml water, 10 ml water, and 5 ml pudding bolus) for each subject. The data collected for the different contents were compared. Descriptive statistics were used to describe the basic features of the data. Repeated-measures ANOVAs were used to analyze the differences in the dependent variables across the different seat inclination and food content combinations. The results indicated significant differences for seat inclination, mostly between 5° and 45°, in all variables except FSR amplitude. They also indicated significant differences between food contents for almost all variables. A significant interaction between seat inclination and food content was only found for FSR offset. The same protocol will be applied to participants with disabilities. In summary, the current results indicated that it is easier for a subject to swallow when leaning backward than when sitting upright, and that swallowing water is easier than swallowing pudding. The results of this study would serve as clinical guidance for proper feeding positions (such as wheelchair back angle adjustment) with different food contents, and the same protocol can be applied to elderly participants or participants with physical disabilities. The ergonomic data would also provide references for assistive technology professionals and practitioners in device design and development.
Keywords: swallowing, positioning, assistive device, disability
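A hedged sketch of the repeated-measures design described above (four seat-inclination angles by three food contents, within subjects) is shown below using statsmodels' AnovaRM on simulated data; the dependent variable, effect sizes and noise level are assumptions for illustration only.

```python
# Two-way repeated-measures ANOVA (angle x food, within subjects) on toy data.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(42)
angles = [5, 15, 30, 45]
foods = ["water_1ml", "water_10ml", "pudding_5ml"]

rows = []
for subj in range(1, 27):                      # 26 participants
    for angle in angles:
        for food in foods:
            # toy swallowing-apnea duration (s): longer for pudding, slight angle effect
            sad = (0.9 + 0.002 * angle
                   + (0.2 if food == "pudding_5ml" else 0.0)
                   + rng.normal(0, 0.08))
            rows.append({"subject": subj, "angle": angle, "food": food, "SAD": sad})

df = pd.DataFrame(rows)
res = AnovaRM(df, depvar="SAD", subject="subject", within=["angle", "food"]).fit()
print(res)
```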
Procedia PDF Downloads 72
231 Teleconnection between El Nino-Southern Oscillation and Seasonal Flow of the Surma River and Possibilities of Long Range Flood Forecasting
Authors: Monika Saha, A. T. M. Hasan Zobeyer, Nasreen Jahan
Abstract:
El Nino-Southern Oscillation (ENSO) is the interaction between the atmosphere and the ocean in the tropical Pacific, which causes irregularly alternating warm and cold conditions in the tropical central and eastern Pacific Ocean. Due to the impact of climate change, ENSO events are becoming stronger in recent times, and therefore it is very important to study the influence of ENSO in climate studies. Bangladesh, lying in a low deltaic floodplain, experiences the worst consequences of flooding every year. To reduce the catastrophe of severe flooding events, non-structural measures such as flood forecasting can be helpful in taking adequate precautions and steps. Forecasting seasonal floods with a longer lead time of several months is a key component of flood damage control and water management. The objective of this research is to identify the possible strength of the teleconnection between ENSO and the river flow of the Surma and to examine the potential for long-lead flood forecasting in the wet season. The Surma is one of the major rivers of Bangladesh and is part of the Surma-Meghna river system. In this research, sea surface temperature (SST) has been considered as the ENSO index, and the lead time is at least a few months, which is greater than the basin response time. The teleconnection has been assessed by correlation analysis between the July-August-September (JAS) flow of the Surma and the SST of the Nino 4 region for the corresponding months. The cumulative frequency distribution of the standardized JAS flow of the Surma has also been determined as part of assessing the possible teleconnection. Discharge data of the Surma River from 1975 to 2015 are used in this analysis, and a remarkably increased correlation coefficient between flow and ENSO has been observed from 1985 onwards. From the cumulative frequency distribution of the standardized JAS flow, it has been found that in any year the JAS flow has approximately a 50% probability of exceeding the long-term average JAS flow. During El Nino years (the warm episode of ENSO), this probability of exceedance drops to 23%, while in La Nina years (the cold episode of ENSO) it increases to 78%. Discriminant analysis, which is known as ‘categoric prediction’, has been performed to identify the possibilities of long-lead flood forecasting. It has helped to categorize the flow data (high, average and low) based on the classification of predicted SST (warm, normal and cold). From the discriminant analysis, it has been found that for the Surma River the probability of a high flood in the cold period is 75% and the probability of a low flood in the warm period is 33%. A synoptic parameter, the forecasting index (FI), has also been calculated to judge the forecast skill and to compare different forecasts. This study will help the concerned authorities and stakeholders to take long-term water resources decisions and formulate policies on river basin management, which will reduce possible damage to life, agriculture, and property.
Keywords: El Nino-Southern Oscillation, sea surface temperature, Surma River, teleconnection, cumulative frequency distribution, discriminant analysis, forecasting index
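A minimal sketch of the screening steps described above follows: correlating standardized JAS flow with the concurrent Nino 4 SST anomaly and comparing exceedance probabilities in warm and cold years. The series, the assumed inverse flow-SST relationship, and the ±0.5 °C episode thresholds are placeholders, not the study's data or criteria.

```python
# Teleconnection screening on synthetic annual series (1975-2015, 41 years).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
n_years = 41
sst_anom = rng.normal(0, 0.8, n_years)                           # JAS Nino-4 SST anomaly (degC)
jas_flow = 3000 - 600 * sst_anom + rng.normal(0, 500, n_years)   # m3/s, inverse link assumed

z_flow = (jas_flow - jas_flow.mean()) / jas_flow.std(ddof=1)     # standardized JAS flow
r, p = pearsonr(sst_anom, jas_flow)
print(f"Correlation ENSO vs JAS flow: r = {r:+.2f} (p = {p:.3f})")

warm, cold = sst_anom > 0.5, sst_anom < -0.5                     # crude warm/cold split
for label, mask in [("El Nino", warm), ("La Nina", cold), ("all years", np.ones(n_years, bool))]:
    p_exceed = (z_flow[mask] > 0).mean()
    print(f"P(JAS flow > long-term mean | {label}) ≈ {p_exceed:.0%}")
```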
Procedia PDF Downloads 153
230 Hybrid Data-Driven Drilling Rate of Penetration Optimization Scheme Guided by Geological Formation and Historical Data
Authors: Ammar Alali, Mahmoud Abughaban, William Contreras Otalvora
Abstract:
Optimizing the drilling process for cost and efficiency requires the optimization of the rate of penetration (ROP). ROP is the measurement of the speed at which the wellbore is created, in units of feet per hour. It is the primary indicator for measuring drilling efficiency. Maximization of the ROP can indicate fast and cost-efficient drilling operations; however, high ROPs may induce unintended events, which may lead to nonproductive time (NPT) and higher net costs. The proposed ROP optimization solution is a hybrid, data-driven system that aims to improve the drilling process, maximize the ROP, and minimize NPT. The system consists of two phases: (1) utilizing existing geological and drilling data to train the model prior to drilling, and (2) real-time adjustment of the controllable dynamic drilling parameters [weight on bit (WOB), rotary speed (RPM), and pump flow rate (GPM)] that directly influence the ROP. During the first phase of the system, geological and historical drilling data are aggregated. Then, the top-rated wells, in terms of high achieved ROP, are distinguished. Those wells are filtered based on NPT incidents, and a cross-plot is generated for the controllable dynamic drilling parameters per ROP value. Subsequently, the parameter values (WOB, GPM, RPM) are calculated as a conditioned mean based on physical distance, following the Inverse Distance Weighting (IDW) interpolation methodology. The first phase is concluded by producing a model of drilling best practices from the offset wells, prioritizing the optimum ROP value. This phase is performed before the commencement of drilling. Starting with the model produced in phase one, the second phase runs an automated drill-off test, delivering live adjustments in real time. Those adjustments are made by directing the driller to deviate two of the controllable parameters (WOB and RPM) by a small percentage (0-5%), following the Constrained Random Search (CRS) methodology. These minor incremental variations reveal new drilling conditions not explored before through offset wells. The data are then consolidated into a heat map as a function of ROP. More optimal ROP performance is identified through the heat map and incorporated into the model. The validation process involved the selection of a planned well in an onshore oil field with hundreds of offset wells. The first-phase model was built by utilizing the data points from the top-performing historical wells (20 wells). The model allows drillers to enhance decision-making by leveraging existing data and blending it with live data in real time. An empirical relationship between the controllable dynamic parameters and ROP was derived using Artificial Neural Networks (ANN). The adjustments improved ROP efficiency by over 20%, translating to at least a 10% saving in drilling costs. The novelty of the proposed system lies in its ability to integrate historical data, calibrate based on geological formations, and run real-time global optimization through CRS. Those factors position the system to work for any newly drilled well in a developing field.
Keywords: drilling optimization, geological formations, machine learning, rate of penetration
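The sketch below illustrates the Inverse Distance Weighting step described above: estimating a conditioned mean of (WOB, RPM, GPM) from offset wells, weighted by their physical distance to the planned well. The distances, parameter values and power exponent are invented for demonstration and are not field data.

```python
# Inverse-distance-weighted (IDW) conditioned mean of offset-well drilling parameters.
import numpy as np

def idw_mean(distances, values, power=2.0, eps=1e-9):
    """Weight each offset well by 1/distance**power and average its parameters."""
    w = 1.0 / (np.asarray(distances, dtype=float) + eps) ** power
    return (w[:, None] * np.asarray(values, dtype=float)).sum(axis=0) / w.sum()

# Distance (km) from the planned well to each top-rated offset well, and the
# (WOB, RPM, GPM) each of them used over the depth interval of interest.
dist_km = [1.2, 2.8, 0.9, 4.1]
params = [
    [18.0, 120.0, 650.0],
    [20.0, 130.0, 660.0],
    [22.0, 140.0, 680.0],
    [24.0, 150.0, 700.0],
]

wob, rpm, gpm = idw_mean(dist_km, params)
print(f"Conditioned set-points: WOB ≈ {wob:.1f} klb, RPM ≈ {rpm:.0f}, GPM ≈ {gpm:.0f}")
```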
Procedia PDF Downloads 131
229 Potential Assessment and Techno-Economic Evaluation of Photovoltaic Energy Conversion System: A Case of Ethiopia Light Rail Transit System
Authors: Asegid Belay Kebede, Getachew Biru Worku
Abstract:
The Earth and its inhabitants have faced an existential threat as a result of severe manmade actions. Global warming and climate change have been the most apparent manifestations of this threat throughout the world, with increasingly intense heat waves, temperature rises, flooding, sea-level rise, ice sheet melting, and so on. One of the major contributors to this disaster is the ever-increasing production and consumption of energy, which is still primarily fossil-based and emits billions of tons of hazardous GHG. The transportation industry is recognized as the biggest actor in terms of emissions, accounting for 24% of direct CO2 emissions and being one of the few worldwide sectors where CO2 emissions are still growing. Rail transportation, which includes everything from light rail transit to high-speed rail services, is regarded as one of the most efficient modes of transportation, accounting for 9% of total passenger travel and 7% of total freight transit. Nonetheless, there is still room for improvement in the transportation sector, which could be achieved by incorporating alternative and/or renewable energy sources. As a result of these rapidly changing global energy situations and rapidly dwindling fossil fuel supplies, we were driven to analyze the possibility of renewable energy sources for traction applications. Even a small achievement in energy conservation or harnessing might significantly influence the total railway system and has the potential to transform the railway sector like never before. The paper therefore begins by assessing the potential for photovoltaic (PV) power generation on train rooftops and existing infrastructure such as railway depots, passenger stations, traction substation rooftops, and accessible land along rail lines. To this end, a method based on the Google Earth system (using Helioscopes software) is developed to assess the PV potential along rail lines and on train station roofs. As an example, the Addis Ababa light rail transit system (AA-LRTS) is utilized. The case study examines the electricity-generating potential and economic performance of photovoltaics installed on AA-LRTS. As a consequence, the overall capacity of the solar systems at all stations, including train rooftops, reaches 72.6 MWh per day, with an annual power output of 10.6 GWh. Over a 25-year lifespan, the overall CO2 emission reduction and total profit from PV-AA-LRTS can reach 180,000 tons and 892 million Ethiopian birr, respectively. The PV-AA-LRTS has a 200% return on investment. All PV stations have a payback time of less than 13 years, and the price of solar-generated power is less than $0.08/kWh, which can compete with the benchmark price of coal-fired electricity. Our findings indicate that PV-AA-LRTS has tremendous potential, with both energy and economic advantages.
Keywords: sustainable development, global warming, energy crisis, photovoltaic energy conversion, techno-economic analysis, transportation system, light rail transit
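The payback and return figures quoted above come from a discounted cash-flow type of analysis. The sketch below shows a generic NPV and simple-payback calculation of that kind; the capital cost, annual saving and discount rate are placeholders and do not reproduce the study's numbers.

```python
# Generic NPV and simple payback over a 25-year project life (illustrative inputs).

def npv_and_payback(capex, annual_saving, years=25, discount_rate=0.08):
    """Return (NPV, simple payback year) for a single up-front cost and flat savings."""
    npv, cumulative, payback = -capex, -capex, None
    for year in range(1, years + 1):
        npv += annual_saving / (1 + discount_rate) ** year    # discounted saving
        cumulative += annual_saving                           # undiscounted running total
        if payback is None and cumulative >= 0:
            payback = year
    return npv, payback

npv, payback = npv_and_payback(capex=9.0e6, annual_saving=1.1e6)  # hypothetical currency units
print(f"NPV ≈ {npv / 1e6:.1f} M, simple payback ≈ {payback} years")
```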
Procedia PDF Downloads 76
228 C-Coordinated Chitosan Metal Complexes: Design, Synthesis and Antifungal Properties
Authors: Weixiang Liu, Yukun Qin, Song Liu, Pengcheng Li
Abstract:
Plant diseases can cause the death of crops, with great economic losses. In particular, those diseases are usually caused by pathogenic fungi. Metal fungicides are a type of pesticide that has the advantages of low cost, a broad antimicrobial spectrum and a strong sterilization effect. However, the frequent and wide application of traditional metal fungicides has caused serious problems such as environmental pollution, outbreaks of mites and phytotoxicity. Therefore, it is critically necessary to discover new organic metal fungicide alternatives that have a low metal content, low toxicity, and little influence on mites. Chitosan, the second most abundant natural polysaccharide next to cellulose, has been proven to have broad-spectrum antifungal activity against a variety of fungi. However, the use of chitosan has been limited due to its poor solubility and weaker antifungal activity compared with commercial fungicides. Therefore, in order to improve the water solubility and antifungal activity, many researchers have grafted active groups onto chitosan. The aim of the present work was to combine free metal ions with chitosan to prepare more potent antifungal chitosan derivatives. Thus, based on a condensation reaction, a chitosan derivative bearing an amino pyridine group was prepared and subsequently coordinated with cupric ions, zinc ions and nickel ions to synthesize chitosan metal complexes. Calculations by density functional theory (DFT) show that the copper ions and nickel ions underwent dsp2 hybridization, the zinc ions underwent sp3 hybridization, and all of them are coordinated by the carbon atom in the p-π conjugated group and the oxygen atoms in the acetate ion. The antifungal properties of the chitosan metal complexes against Phytophthora capsici (P. capsici), Gibberella zeae (G. zeae), Fusarium oxysporum (F. oxysporum) and Botrytis cinerea (B. cinerea) were also assayed. In addition, a plant toxicity experiment was carried out. The experiments indicated that the derivatives have significantly enhanced antifungal activity after metal ion complexation compared with the original chitosan. It was shown that 0.20 mg/mL of O-CSPX-Cu inhibited the growth of P. capsici by 100% and 0.20 mg/mL of O-CSPX-Ni inhibited the growth of B. cinerea by 87.5%. In general, their activities are better than those of the positive control oligosaccharides. The combination with the pyridine formyl groups seems to favor biological activity. Additionally, the coordination mode was precisely analyzed, and the results confirmed that the copper ions and nickel ions underwent dsp2 hybridization, the zinc ions underwent sp3 hybridization, and that the carbon atoms of the p-π conjugated group and the oxygen atoms of the acetate ion are involved in the coordination of the metal ions. A phytotoxicity assay of O-CSPX-M was also conducted; unlike traditional metal fungicides, the metal complexes were not significantly toxic to the leaves of wheat. O-CSPX-Zn can even increase the chlorophyll content in wheat leaves at 0.40 mg/mL. This is mainly because chitosan itself promotes plant growth and counteracts the phytotoxicity of metal ions. The chitosan derivatives described here may lend themselves to future applicative studies in crop protection.
Keywords: coordination, chitosan, metal complex, antifungal properties
Procedia PDF Downloads 316
227 Identification of ω-3 Fatty Acids Using GC-MS Analysis in Extruded Spelt Product
Authors: Jelena Filipovic, Marija Bodroza-Solarov, Milenko Kosutic, Nebojsa Novkovic, Vladimir Filipovic, Vesna Vucurovic
Abstract:
Spelt wheat is a suitable raw material for extruded products such as pasta, special types of bread and other products with altered nutritional characteristics compared to conventional wheat products. During the process of extrusion, spelt is exposed to high temperature and high pressure, during which the raw material is also mechanically treated by shear forces. Spelt wheat grows without the use of pesticides in harsh ecological conditions and in marginal areas of cultivation, so it can be used for organic and health-safe food. Pasta is one of the most popular foodstuffs, and its consumption has been observed to rise. Pasta quality depends mainly on the properties of the flour raw materials, especially the protein content and its quality, while starch properties are of lesser importance. Pasta is characterized by significant amounts of complex carbohydrates, low sodium and total fat, fiber, minerals, and essential fatty acids, and its nutritional value can be improved with additional functional components. Over the past few decades, wheat pasta has been successfully formulated using different ingredients to cater to health-conscious consumers who prefer a product rich in protein, healthy lipids and other health benefits. Flaxseed flour is used in the production of bakery and pasta products that have the properties of functional foods. However, it should be taken into account that the food products must retain their technological and sensory quality despite the added flaxseed. Flaxseed contains important substances in its composition, such as vitamins and mineral elements, and it is also an excellent source of fiber and one of the best sources of ω-3 fatty acids and lignin. In this paper, the quality and identification of an extruded spelt product with the addition of flaxseed, which contributes positively to the nutritional and technological changes of the product, are investigated. ω-3 fatty acids are polyunsaturated essential fatty acids, and they must be taken with food to satisfy the recommended daily intake. Flaxseed flour was added in quantities of 10 g/100 g and 20 g/100 g of sample, based on farina. It is shown that the presence of ω-3 fatty acids in pasta can be clearly distinguished from other fatty acids by gas chromatography with mass spectrometry. The addition of flaxseed flour influences the chemical composition of the pasta. The addition of flaxseed flour to spelt pasta in the quantity of 20 g/100 g significantly increases the share of ω-3 fatty acids, which results in an improved ω-6/ω-3 ratio of 1:2.4 and completely satisfies the minimum daily needs of ω-3 essential fatty acids (3.8 g/100 g) recommended by the FDA. Flaxseed flour influenced the pasta quality by increasing the hardness (2377.8 ± 13.3; 2874.5 ± 7.4; 3076.3 ± 5.9) and the work of shear (102.6 ± 11.4; 150.8 ± 11.3; 165.0 ± 18.9) and by changing the adhesiveness (11.8 ± 20.6; 9.98 ± 0.12; 7.1 ± 12.5) of the final product. The presented data point to good indicators of the technological quality of spelt pasta with flaxseed and show that GC-MS analysis can be used in quality control for flaxseed identification. Acknowledgment: The research was financed by the Ministry of Education and Science of the Republic of Serbia (Project No. III 46005).
Keywords: GC-MS analysis, ω-3 fatty acids, flax seed, spelt wheat, daily needs
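As a small worked example of the ratio reported above, the snippet below computes an ω-6/ω-3 ratio from fatty-acid contents of the kind obtained by GC-MS; the two values are hypothetical, chosen only so that the result lands near the reported 1:2.4.

```python
# Illustrative ω-6/ω-3 ratio from assumed fatty-acid contents (g/100 g of sample).
omega6 = 2.0          # e.g. linoleic acid (hypothetical value)
omega3 = 4.8          # e.g. alpha-linolenic acid from flaxseed (hypothetical value)

ratio = omega6 / omega3
print(f"ω-6/ω-3 ratio ≈ 1:{1 / ratio:.1f}")   # prints 1:2.4 for these inputs
```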
Procedia PDF Downloads 161
226 Mediating Role of 'Investment Recovery' and 'Competitiveness' on the Impact of Green Supply Chain Management Practices over Firm Performance: An Empirical Study Based on Textile Industry of Pakistan
Authors: Mehwish Jawaad
Abstract:
Purpose: The concept of GrSCM (Green Supply Chain Management) in the academic and research field is still thought to be in the development stage, especially in Asian emerging economies. The purpose of this paper is to contribute significantly to the first wave of empirical investigation of GrSCM practices and firm performance measures in Pakistan. The aim of this research is to develop a more holistic approach towards investigating the impact of Green Supply Chain Management practices (Ecodesign, Internal Environmental Management Systems, Green Distribution, Green Purchasing and Cooperation with Customers) on multiple dimensions of firm performance measures (Economic Performance, Environmental Performance and Operational Performance), with a mediating role of Investment Recovery and Competitiveness. This paper also serves as an initiative to identify whether the relationship between Investment Recovery and Firm Performance Measures is mediated by Competitiveness. Design/Methodology/Approach: This study is based on survey data collected from 272 ISO 14001 certified textile firms based in Lahore, Faisalabad, and Karachi that are involved in spinning, dyeing, printing or bleaching. A theoretical model was developed incorporating the constructs representing the green activities and firm performance measures of a firm. The data were analyzed using Partial Least Squares Structural Equation Modeling. Senior and mid-level managers provided the data, reflecting the degree to which their organizations deal with both internal and external stakeholders to improve the environmental sustainability of their supply chain. Findings: Of the 36 proposed hypotheses, 20 are considered valid and significant. The statistical results reveal that GrSCM practices positively impact Environmental Performance, followed by Economic and Operational Performance. Investment Recovery acts as a strong mediator between intra-organizational green activities and performance outcomes. The influence of Reverse Logistics on outcomes is significantly mediated by Competitiveness. The pressure originating from customers exerts a significant positive influence on the firm to adopt green practices, consequently leading to higher outcomes. Research Contribution/Originality: Underpinned by resource dependence theory, and as part of the first wave of investigation of the impact of green supply chains on performance outcomes in Pakistan, this study intends to make a prominent mark in the field of research. Investment Recovery and Competitiveness together are tested as mediators for the first time in this arena. Managerial implications: Practitioners are provided with a framework for assessing the synergistic impact of GrSCM practices on performance. Regular upgrading of accreditations and audit programs is the need of the hour. Making processes leaner through the sale of excess inventories and scrap helps the firm work more efficiently and productively.
Keywords: economic performance, environmental performance, green supply chain management practices, operational performance, sustainability, textile sector of Pakistan
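To make the mediation logic concrete, the sketch below runs a simplified product-of-coefficients check (GrSCM practices → Investment Recovery → performance) on simulated data with ordinary least squares. This is an illustration of the concept only, not the PLS-SEM procedure used in the study; the variable names, effect sizes and data are invented (the sample size merely echoes the 272 surveyed firms).

```python
# Simplified OLS-based mediation check: indirect effect = a * b.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 272                                          # matches the survey sample size
grscm = rng.normal(0, 1, n)                      # composite GrSCM practice score (X)
invest_rec = 0.5 * grscm + rng.normal(0, 1, n)   # mediator: investment recovery (M)
performance = 0.3 * grscm + 0.4 * invest_rec + rng.normal(0, 1, n)  # outcome (Y)

a = sm.OLS(invest_rec, sm.add_constant(grscm)).fit().params[1]                     # X -> M
b = sm.OLS(performance,
           sm.add_constant(np.column_stack([grscm, invest_rec]))).fit().params[2]  # M -> Y | X
print(f"Indirect (mediated) effect a*b ≈ {a * b:.2f}")
```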
Procedia PDF Downloads 224225 Mixed Monolayer and PEG Linker Approaches to Creating Multifunctional Gold Nanoparticles
Authors: D. Dixon, J. Nicol, J. A. Coulter, E. Harrison
Abstract:
The ease with which they can be functionalized, combined with their excellent biocompatibility, makes gold nanoparticles (AuNPs) ideal candidates for various applications in nanomedicine. Indeed, several promising treatments are currently undergoing human clinical trials (CYT-6091 and Auroshell). A successful nanoparticle treatment must first evade the immune system, then accumulate within the target tissue, before entering the diseased cells and delivering the payload. In order to create a clinically relevant drug delivery system, contrast agent or radiosensitizer, it is generally necessary to functionalize the AuNP surface with multiple groups, e.g. polyethylene glycol (PEG) for enhanced stability, targeting groups such as antibodies, peptides for enhanced internalization, and therapeutic agents. Creating such complex systems and characterizing their biological response remains a challenge. The two commonly used methods to attach multiple groups to the surface of AuNPs are the creation of a mixed monolayer, or binding groups to the AuNP surface using a bi-functional PEG linker. While some excellent in-vitro and animal results have been reported for both approaches, further work is necessary to directly compare the two methods. In this study, AuNPs capped with both PEG and a receptor-mediated endocytosis (RME) peptide were prepared using both the mixed monolayer and PEG linker approaches. The PEG linker used was SH-PEG-SGA, which has a thiol at one end for AuNP attachment and an NHS ester at the other to bind to the peptide. The work builds upon previous studies carried out at the University of Ulster which have investigated AuNP synthesis, the influence of PEG on stability in a range of media, and intracellular payload release. 18-19 nm citrate-capped AuNPs were prepared using the Turkevich method via the sodium citrate reduction of boiling 0.01 wt% chloroauric acid. To produce PEG-capped AuNPs, the required amount of PEG-SH (5000 Mw) or SH-PEG-SGA (3000 Mw, Jenkem Technologies) was added, and the solution stirred overnight at room temperature. The RME (sequence: CKKKKKKSEDEYPYVPN, Biomatik) co-functionalised samples were prepared by adding the required amount of peptide to the PEG-capped samples and stirring overnight. The appropriate amounts of PEG-SH and RME peptide were added to the AuNPs to produce a mixed monolayer consisting of approximately 50% PEG and 50% RME. The PEG linker samples were first fully capped with bi-functional PEG before being capped with RME peptide. An increase in diameter from 18-19 nm for the ‘as synthesized’ AuNPs to 40-42 nm after PEG capping was observed via DLS. The presence of PEG and RME peptide on both the mixed monolayer and PEG linker co-functionalized samples was confirmed by both FTIR and TGA. Bi-functional PEG linkers allow the entire AuNP surface to be capped with PEG, enabling in-vitro stability to be achieved using a lower molecular weight PEG. The approach also allows the entire outer surface to be coated with peptide or other biologically active groups, whilst also offering the promise of enhanced biological availability. The effect of mixed monolayer versus PEG linker attachment on both stability and non-specific protein corona interactions was also studied.Keywords: nanomedicine, gold nanoparticles, PEG, biocompatibility
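For readers estimating reagent quantities in comparable experiments, the sketch below illustrates one common back-of-the-envelope calculation for thiol-PEG capping: surface area per particle divided by an assumed PEG footprint. All numerical inputs are assumptions for illustration and do not reproduce the study's protocol.

```python
# Rough sketch (all numbers are assumptions, not the authors' protocol): estimating how
# much thiol-PEG is needed to cap a batch of citrate AuNPs, from particle diameter,
# particle number concentration and an assumed PEG footprint on the gold surface.
import math

d_nm = 19.0                    # core diameter (nm), as in the abstract
np_conc_per_mL = 7e11          # assumed particle number concentration
volume_mL = 50.0               # assumed batch volume
footprint_nm2 = 0.35           # assumed area occupied by one PEG-SH chain
peg_mw = 5000.0                # PEG-SH molecular weight used in the study
N_A = 6.022e23                 # Avogadro constant

area_per_particle = math.pi * d_nm ** 2                 # sphere surface area, nm^2
peg_per_particle = area_per_particle / footprint_nm2    # chains for nominal full coverage
total_chains = peg_per_particle * np_conc_per_mL * volume_mL
mass_mg = total_chains / N_A * peg_mw * 1e3             # grams -> mg

print(f"~{peg_per_particle:.0f} PEG chains per particle, "
      f"~{mass_mg:.2f} mg PEG-SH for nominal coverage (commonly added in excess)")
```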
Procedia PDF Downloads 339224 Contribution to the Understanding of the Hydrodynamic Behaviour of Aquifers of the Taoudéni Sedimentary Basin (South-eastern Part, Burkina Faso)
Authors: Kutangila Malundama Succes, Koita Mahamadou
Abstract:
In the context of climate change and demographic pressure, groundwater has emerged as an essential and strategic resource whose sustainability relies on good management. The accuracy and relevance of decisions made in managing these resources depend on the availability and quality of the scientific information on which they rely. It is, therefore, urgent to improve the state of knowledge on groundwater to ensure sustainable management. This study addresses the particular case of the aquifers of the transboundary sedimentary basin of Taoudéni in its Burkinabe part. Indeed, Burkina Faso (and the Sahel region in general), marked by low rainfall, has experienced episodes of severe drought, which have justified the use of groundwater as the primary source of water supply. This study aims to improve knowledge of the hydrogeology of this area in order to achieve sustainable management of transboundary groundwater resources. The methodological approach first describes the lithological units in terms of the extension and succession of the different layers. Secondly, the hydrodynamic behavior of these units is studied through the analysis of spatio-temporal variations in piezometry. The data consist of 692 static water level measurement points and 8 observation wells distributed across the area and tapping five of the identified geological formations. Monthly piezometric level records are available for each observation well and cover the period from 1989 to 2020. The temporal analysis of piezometry, carried out in comparison with rainfall records, revealed a general upward trend in piezometric levels throughout the basin. The reaction of the groundwater generally occurs with a delay of 1 to 2 months relative to the rainfall of the rainy season. Indeed, the peaks of the piezometric level generally occur between September and October, in reaction to the rainfall peaks between July and August. Low groundwater levels are observed between May and July. This relatively slow reaction of the aquifer is observed in all wells. The influence of the geological setting, through the structure and hydrodynamic properties of the layers, was deduced. The spatial analysis reveals that piezometric levels vary between 166 and 633 m, with a trend indicating flow that generally goes from southwest to northeast, with the recharge areas located towards the southwest and northwest. There is a quasi-concordance between the hydrogeological basins and the overlying hydrological basins, as well as a bimodal flow with one component following the topography and another significant, deeper component controlled by the regional SW-NE gradient. This latter component may represent flows directed from the high reliefs towards the springs of Nasso. In the spring area (Kou basin), the maximum average storage variation, calculated by the Water Table Fluctuation (WTF) method, varies between 35 and 48.70 mm per year for 2012-2014.Keywords: hydrodynamic behaviour, taoudeni basin, piezometry, water table fluctuation
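The Water Table Fluctuation (WTF) calculation mentioned above reduces to multiplying an aquifer's specific yield by the seasonal water-table rise. The sketch below illustrates this with an invented monthly head series and an assumed specific yield; it is not the study's dataset.

```python
# Minimal sketch of the Water Table Fluctuation (WTF) calculation: the annual change in
# groundwater storage is the specific yield times the seasonal water-table rise.
# The head series and specific yield below are illustrative assumptions, not project data.

specific_yield = 0.02                       # assumed aquifer specific yield (-)
monthly_head_m = [402.10, 402.05, 402.00, 401.95, 401.92, 401.95,
                  402.30, 403.60, 404.05, 403.90, 403.40, 402.80]  # one hydrological year

rise_m = max(monthly_head_m) - min(monthly_head_m)    # seasonal water-table rise
delta_storage_mm = specific_yield * rise_m * 1000.0   # storage variation in mm of water

print(f"water-table rise: {rise_m:.2f} m -> storage variation ~ {delta_storage_mm:.1f} mm/yr")
```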
Procedia PDF Downloads 65223 Ethical Decision-Making by Healthcare Professionals during Disasters: Izmir Province Case
Authors: Gulhan Sen
Abstract:
Disasters can result in many deaths and injuries. In these difficult times, accessible resources are limited, the demand and supply balance is distorted, and there is a need to make urgent interventions. The disproportion between accessible resources and intervention capacity makes triage a necessity at every stage of disaster response. Healthcare professionals, who are in charge of triage, have to evaluate swiftly and make ethical decisions about which patients need priority and urgent intervention given the limited available resources. For such critical times in disaster triage, 'doing the greatest good for the greatest number of casualties' is adopted as a code of practice. But there is no guide for healthcare professionals about ethical decision-making during disasters, and this study is expected to be used as a source in the preparation of such a guide. This study aimed to examine whether the qualifications of healthcare professionals in Izmir related to disaster triage were adequate and whether these qualifications influence their capacity to make ethical decisions. The researcher used a survey developed for data collection. The survey included two parts. In part one, 14 questions solicited information about the socio-demographic characteristics and knowledge levels of the respondents on the ethical principles of disaster triage and allocation of scarce resources. Part two included four disaster scenarios adapted from the existing literature, and respondents were asked to make ethical decisions in triage based on the provided scenarios. The survey was completed by 215 healthcare professionals working in Emergency-Medical Stations, National Medical Rescue Teams and Search-Rescue-Health Teams in Izmir. The data were analyzed with SPSS software. The Chi-Square Test, Mann-Whitney U Test, Kruskal-Wallis Test and Linear Regression Analysis were utilized. According to the results, 51.2% of the participants had an inadequate knowledge level of the ethical principles of disaster triage and allocation of scarce resources. It was also found that participants did not tend to make ethical decisions on the four disaster scenarios, which included ethical dilemmas. They remained caught in ethical dilemmas concerning performing cardio-pulmonary resuscitation, managing limited resources and making end-of-life decisions. Results also showed that participants who had more experience in disaster triage teams were more likely to make ethical decisions on disaster triage than those with little or no experience in disaster triage teams (p < 0.01). Moreover, as their knowledge level of the ethical principles of disaster triage and allocation of scarce resources increased, their tendency to make ethical decisions also increased (p < 0.001). In conclusion, having an inadequate knowledge level of ethical principles and being inexperienced affect ethical decision-making during disasters. The results of this study therefore suggest that more training on disaster triage should be provided in the pre-impact phase of disasters. In addition, the ethical dimension of disaster triage should be included in the syllabi of the ethics classes in vocational training for healthcare professionals. Drills, simulations, and tabletop exercises can be used to improve the ethical decision-making abilities of healthcare professionals. Disaster scenarios in which ethical dilemmas are faced should be prepared for such applied training programs.Keywords: disaster triage, medical ethics, ethical principles of disaster triage, ethical decision-making
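For illustration only, the kind of nonparametric comparisons reported above (Mann-Whitney U, Kruskal-Wallis) can be run as in the sketch below; the scores are simulated and the group sizes are assumptions, not the survey data.

```python
# Minimal sketch of the kind of tests reported (Mann-Whitney U, Kruskal-Wallis) applied
# to hypothetical ethical-decision scores; the data here are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
experienced = rng.integers(2, 5, size=60)      # scenario scores, experienced responders (assumed)
inexperienced = rng.integers(1, 4, size=155)   # scenario scores, little/no experience (assumed)

u, p_u = stats.mannwhitneyu(experienced, inexperienced, alternative="greater")
print(f"Mann-Whitney U = {u:.0f}, p = {p_u:.4f}")   # does experience raise ethical-decision scores?

# Kruskal-Wallis across three professional groups (hypothetical scores)
g1, g2, g3 = rng.normal(3, 1, 70), rng.normal(2.7, 1, 75), rng.normal(2.9, 1, 70)
h, p_h = stats.kruskal(g1, g2, g3)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p_h:.4f}")
```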
Procedia PDF Downloads 245222 Effects of AI-driven Applications on Bank Performance in West Africa
Authors: Ani Wilson Uchenna, Ogbonna Chikodi
Abstract:
This study examined the impact of artificial intelligence-driven applications on banks' performance in West Africa, using Nigeria and Ghana as case studies. Specifically, the study examined the extent to which the deployment of smart automated teller machines impacts banks' net worth within the reference period in Nigeria and Ghana. It ascertained the impact of point of sale services on banks' net worth within the reference period in Nigeria and Ghana. Thirdly, it verified the extent to which web pay services can influence banks' performance in Nigeria and Ghana and, finally, determined the impact of mobile pay services on banks' performance in Nigeria and Ghana. The study used automated teller machines (ATM), point of sale services (POS), mobile pay services (MOP) and web pay services (WBP) as proxies for the explanatory variables, while bank net worth was used as the explained variable. The data for this study were sourced from the Central Bank of Nigeria (CBN) Statistical Bulletin as well as the Bank of Ghana (BoG) Statistical Bulletin, the Ghana payment systems oversight annual report and the World Development Indicators (WDI). Furthermore, the mixed order of integration observed from the panel unit root test results justified the use of the autoregressive distributed lag (ARDL) approach to data analysis, which the study adopted. While the cointegration test showed the existence of cointegration among the studied variables, the bounds test result confirmed the presence of a long-run relationship among the series. Again, the ARDL error correction estimate established a satisfactory speed of adjustment (13.92%) from long-run disequilibrium back to the short-run dynamic relationship. The study found that automated teller machines (ATM) had a statistically significant impact on the bank net worth (BNW) of Nigeria and Ghana, point of sale services (POS) had a statistically significant impact on bank net worth within the study period, and mobile pay services were statistically significant in impacting changes in the bank net worth of the countries of study, while web pay services (WBP) had no statistically significant impact on the bank net worth of the countries of reference. The study concluded that artificial intelligence-driven applications have a significant and positive impact on bank performance, with the exception of web pay, which had a negative impact on bank net worth. The study recommended that the management of banks in both Nigeria and Ghana should encourage more investment in AI-powered smart ATMs aimed at delivering more secure banking services in order to increase revenue, discourage excessive queuing in the banking hall, reduce fraud and minimize errors in processing transactions. Banks within the scope of this study should leverage modern technologies to check the excesses of private POS operators in order to build more confidence among potential customers. Government should turn mobile pay services into a counter-terrorism tool by ensuring that restrictions limiting over-the-counter withdrawals to a minimum amount are maintained and by placing sanctions on withdrawals above that limit.Keywords: artificial intelligence (ai), bank performance, automated teller machines (atm), point of sale (pos)
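A minimal sketch of the error-correction logic behind the ARDL approach is given below, using simulated series and ordinary least squares; it is not the study's model or data, and the variable names are invented. The coefficient on the lagged error-correction term corresponds to the "speed of adjustment" quoted above.

```python
# Simplified sketch of an error-correction step in the spirit of the ARDL approach
# described above, using simulated series; names and data are assumptions, not the
# study's dataset.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
T = 80
atm = np.cumsum(rng.normal(size=T))                          # I(1) proxy for ATM usage
bnw = 0.6 * atm + np.cumsum(rng.normal(scale=0.3, size=T))   # simulated bank net worth

# Step 1: long-run (cointegrating) regression
long_run = sm.OLS(bnw, sm.add_constant(atm)).fit()
ect = long_run.resid                                         # error-correction term

# Step 2: short-run dynamics with the lagged error-correction term
d_bnw, d_atm, ect_lag = np.diff(bnw), np.diff(atm), ect[:-1]
ecm = sm.OLS(d_bnw, sm.add_constant(np.column_stack([d_atm, ect_lag]))).fit()
print(ecm.params)   # coefficient on ect_lag is the speed of adjustment (expected negative)
```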
Procedia PDF Downloads 7221 Sustainable Pavements with Reflective and Photoluminescent Properties
Authors: A.H. Martínez, T. López-Montero, R. Miró, R. Puig, R. Villar
Abstract:
An alternative to mitigate the heat island effect is to pave streets and sidewalks with pavements that reflect incident solar energy, keeping their surface temperature lower than conventional pavements. The “Heat island mitigation to prevent global warming by designing sustainable pavements with reflective and photoluminescent properties (RELUM) Project” has been carried out with this intention in mind. Its objective has been to develop bituminous mixtures for urban pavements that help in the fight against global warming and climate change, while improving the quality of life of citizens. The technology employed has focused on the use of reflective pavements, using bituminous mixes made with synthetic bitumens and light pigments that provide high solar reflectance. In addition to this advantage, the light surface colour achieved with these mixes can improve visibility, especially at night. In parallel and following the latter approach, an appropriate type of treatment has also been developed on bituminous mixtures to make them capable of illuminating at night, giving rise to photoluminescent applications, which can reduce energy consumption and increase road safety due to improved night-time visibility. The work carried out consisted of designing different bituminous mixtures in which the nature of the aggregate was varied (porphyry, granite and limestone) and also the colour of the mixture, which was lightened by adding pigments (titanium dioxide and iron oxide). The reflectance of each of these mixtures was measured, as well as the temperatures recorded throughout the day, at different times of the year. The results obtained make it possible to propose bituminous mixtures whose characteristics can contribute to the reduction of urban heat islands. Among the most outstanding results is the mixture made with synthetic bitumen, white limestone aggregate and a small percentage of titanium dioxide, which would be the most suitable for urban surfaces without road traffic, given its high reflectance and the greater temperature reduction it offers. With this solution, a surface temperature reduction of 9.7°C is achieved at the beginning of the night in the summer season with the highest radiation. As for luminescent pavements, paints with different contents of strontium aluminate and glass microspheres have been applied to asphalt mixtures, and the luminance of all the applications designed has been measured by exciting them with electric bulbs that simulate the effect of sunlight. The results obtained at this stage confirm the ability of all the designed dosages to emit light for a certain time, varying according to the proportions used. Not only the effect of the strontium aluminate and microsphere content has been observed, but also the influence of the colour of the base on which the paint is applied; the lighter the base, the higher the luminance. Ongoing studies are focusing on the evaluation of the durability of the designed solutions in order to determine their lifetime.Keywords: heat island, luminescent paints, reflective pavement, temperature reduction
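To illustrate why a higher-reflectance pavement stays cooler, the sketch below solves a generic steady-state surface energy balance for two albedo values; the irradiance, convection coefficient and emissivity are textbook-style assumptions, not measurements from the RELUM project.

```python
# Illustrative steady-state surface energy balance: absorbed solar energy equals
# convective plus radiative losses. All inputs (irradiance, convection coefficient,
# emissivity, albedos) are generic assumptions, not project measurements.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/m^2K^4

def surface_temp(albedo, G=900.0, T_air=303.0, h=15.0, eps=0.9):
    """Solve (1-albedo)*G = h*(Ts-T_air) + eps*sigma*(Ts^4 - T_air^4) by bisection."""
    lo, hi = T_air, T_air + 80.0
    for _ in range(60):
        Ts = 0.5 * (lo + hi)
        losses = h * (Ts - T_air) + eps * SIGMA * (Ts**4 - T_air**4)
        if losses < (1 - albedo) * G:
            lo = Ts
        else:
            hi = Ts
    return Ts

dark, light = surface_temp(albedo=0.10), surface_temp(albedo=0.40)
print(f"dark pavement ~{dark - 273.15:.1f} C, reflective ~{light - 273.15:.1f} C, "
      f"reduction ~{dark - light:.1f} C")
```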
Procedia PDF Downloads 30220 Investigations on the Application of Avalanche Simulations: A Survey Conducted among Avalanche Experts
Authors: Korbinian Schmidtner, Rudolf Sailer, Perry Bartelt, Wolfgang Fellin, Jan-Thomas Fischer, Matthias Granig
Abstract:
This study focuses on the evaluation of snow avalanche simulations, based on a survey that was carried out among avalanche experts. In the last decades, the application of avalanche simulation tools has gained recognition within the realm of hazard management. Traditionally, avalanche runout models were used to predict extreme avalanche runout and prepare avalanche maps. This has changed rather dramatically with the application of numerical models. For safety regulations, such as road safety, simulation tools are now being coupled with real-time meteorological measurements to predict frequent avalanche hazard. That places new demands on model accuracy and requires the simulation of physical processes that previously could be ignored. These simulation tools are based on a deterministic description of the avalanche movement, allowing certain quantities (e.g. pressure, velocities, flow heights, runout lengths, etc.) of the avalanche flow to be predicted. Because of the highly variable regimes of the flowing snow, no uniform rheological law describing the motion of an avalanche is known. Therefore, analogies to the fluid dynamical laws of other materials are drawn. To transfer these constitutive laws to snow flows, certain assumptions and adjustments have to be imposed. Besides these limitations, there are high uncertainties regarding the initial and boundary conditions. Further challenges arise when implementing the underlying flow model equations in an algorithm executable by a computer. This implementation is constrained by the choice of adequate numerical methods and their computational feasibility. Hence, model development is compelled to introduce further simplifications and the related uncertainties. In the light of these issues, many questions arise about avalanche simulations: their assets and drawbacks, the potential for improvements, as well as their application in practice. To address these questions, a survey was conducted among experts in the field of avalanche science (e.g. researchers, practitioners, engineers) from various countries. In the questionnaire, special attention is drawn to the experts' opinions regarding the influence of certain variables on the simulation result, their uncertainty and the reliability of the results. Furthermore, it was tested to which degree a simulation result influences the decision-making for a hazard assessment. A discrepancy was found between the large uncertainty of the simulation input parameters and the relatively high reliability attributed to the results. This contradiction can be explained by taking into account how the experts employ the simulations. The credibility of the simulations is the result of a rather thorough simulation study, in which different assumptions are tested and the results of different flow models are compared, along with the use of supplemental data such as chronicles, field observations and silent witnesses, among others, which are regarded as essential for the hazard assessment and for sanctioning simulation results. As the importance of avalanche simulations within hazard management grows along with their further development, studies focusing on the modeling practice could contribute to a better understanding of how knowledge of the avalanche process can be gained by running simulations.Keywords: expert interview, hazard management, modeling, simulation, snow avalanche
Procedia PDF Downloads 326219 Biomaterials Solutions to Medical Problems: A Technical Review
Authors: Ashish Thakur
Abstract:
This technical paper was written with a view to focusing on biomaterials and their various applications in modern industries. The author tries to elaborate not only the medical applications but also the many applications in other industries. The scope of the research area covers the wide range of physical, biological and chemical sciences that underpin the design of biomaterials and the clinical disciplines in which they are used. A biomaterial is now defined as a substance that has been engineered to take a form which, alone or as part of a complex system, is used to direct, by control of interactions with components of living systems, the course of any therapeutic or diagnostic procedure. Biomaterials are invariably in contact with living tissues. Thus, interactions between the surface of a synthetic material and the biological environment must be well understood. This paper reviews the benefits and challenges associated with surface modification of metals in biomedical applications. The paper also elaborates how the surface characteristics of metallic biomaterials, such as surface chemistry, topography, surface charge, and wettability, influence protein adsorption and subsequent cell behavior in terms of adhesion, proliferation, and differentiation at the biomaterial–tissue interface. The paper also highlights various techniques required for the surface modification and coating of metallic biomaterials, including physicochemical and biochemical surface treatments and calcium phosphate and oxide coatings. In this review, attention is focused on biomaterial-associated infections, from which the need for anti-infective biomaterials originates. Biomaterial-associated infections differ markedly in epidemiology, aetiology and severity, depending mainly on the anatomic site, on the time of biomaterial application, and on the depth of the tissues harbouring the prosthesis. Here, the diversity and complexity of the different scenarios where medical devices are currently utilised are explored, providing an overview of the emblematic fields of application and of the requirements for anti-infective biomaterials. In addition to this, the paper introduces nanomedicine and the use of both natural and synthetic polymeric biomaterials, focuses on specific current polymeric nanomedicine applications and research, and concludes with the challenges of nanomedicine research. Infection is currently regarded as the most severe and devastating complication associated with the use of biomaterials. Osteoporosis is a worldwide disease with a very high prevalence in humans older than 50. Its main clinical consequences are bone fractures, which often lead to patient disability or even death. A number of commercial biomaterials are currently used to treat osteoporotic bone fractures, but most of these have not been specifically designed for that purpose. Many drug- or cell-loaded biomaterials have been proposed in research laboratories, but very few have received approval for commercial use. Polymeric nanomaterial-based therapeutics play a key role in the field of medicine in treatment areas such as drug delivery, tissue engineering, cancer, diabetes, and neurodegenerative diseases. Advantages in the use of polymers over other materials for nanomedicine include increased functionality, design flexibility, improved processability, and, in some cases, biocompatibility.Keywords: nanomedicine, tissue, infections, biomaterials
Procedia PDF Downloads 264218 Hydro-Mechanical Characterization of PolyChlorinated Biphenyls Polluted Sediments in Interaction with Geomaterials for Landfilling
Authors: Hadi Chahal, Irini Djeran-Maigre
Abstract:
This paper focuses on the hydro-mechanical behavior of polychlorinated biphenyl (PCB) polluted sediments when stored in landfills and on the interaction between PCBs and geosynthetic clay liners (GCL) with respect to the hydraulic performance of the liner and the overall performance and stability of landfills. A European decree, adopted into French regulation, forbids the reintroduction into rivers of contaminated dredged sediments containing more than 0.64 mg/kg of the sum of 7 PCBs. At these concentrations, sediments are considered hazardous, and a remediation process must be adopted to prevent the release of PCBs into the environment. Dredging and landfilling polluted sediments is considered an eco-environmental remediation solution. French regulations authorize the storage of PCB-contaminated materials with less than 50 mg/kg in municipal solid waste facilities. Contaminant migration via leachate may be possible. The interactions between PCB-contaminated sediments and the GCL barrier present at the bottom of a landfill for secure confinement are not known. Moreover, the hydro-mechanical behavior of the stored sediments may affect the performance and the stability of the landfill. In this article, a hydro-mechanical characterization of the polluted sediment is presented. This characterization makes it possible to predict the behavior of the sediment at the storage site. Chemical testing showed that the concentration of PCBs in the sediment samples is between 1.7 and 2.0 mg/kg. Physical characterization showed that the sediment is an organic silty sand soil (silt 65%, sand 27%, organic matter 8%) characterized by a high plasticity index (Ip = 37%). Permeability tests using a permeameter and a filter press showed that the sediment permeability is in the order of 10⁻⁹ m/s. Compressibility tests showed that the sediment is a very compressible soil, with Cc = 0.53 and Cα = 0.0086. In addition, the effects of PCBs on the swelling behavior of bentonite were studied, and the hydraulic performance of the GCL in interaction with PCBs was examined. Swelling tests showed that PCBs do not affect the swelling behavior of bentonite. Permeability tests were conducted on a 1.0 m pilot-scale experiment simulating a storage facility. PCB-contaminated sediments were placed directly over a passive barrier containing a GCL to study the influence of direct contact between polluted sediment leachate and the GCL. An automatic watering system was designed to simulate precipitation. The effluent quantity and quality were examined. The sediment settlements and the water level in the sediment were monitored. The results showed that desiccation affected the behavior of the sediment in the pilot test and that laboratory tests alone are not sufficient to predict the behavior of the sediment in a landfill facility. Furthermore, the concentration of PCBs in the sediment leachate was very low (< 0.013 µg/l), and the permeability of the GCL was affected by other components present in the sediment leachate. Desiccation and cracks were the main factors that affected the hydro-mechanical behavior of the sediment in the pilot test. In order to reduce these effects, the polluted sediment should be stored at a water content below its shrinkage limit (w = 39%). We also propose to conduct other pilot tests with the maximum concentration of PCBs allowed in municipal solid waste facilities of 50 mg/kg.Keywords: geosynthetic clay liners, landfill, polychlorinated biphenyl, polluted dredged materials
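As an illustration of how the compressibility parameters reported above can be used, the sketch below estimates the primary consolidation settlement of a stored sediment layer with the classical compression-index formula; the void ratio, layer thickness and stresses are assumed values, not project data.

```python
# Minimal sketch of a primary-consolidation settlement estimate for a stored sediment
# layer, using the compression index reported above (Cc = 0.53). The initial void ratio,
# layer thickness and stress increment are assumed values for illustration only.
import math

Cc = 0.53          # compression index from the abstract
e0 = 1.6           # assumed initial void ratio of the dredged sediment
H = 2.0            # assumed thickness of the sediment lift, m
sigma0 = 20.0      # assumed initial effective vertical stress, kPa
dsigma = 60.0      # assumed stress increase from overlying waste, kPa

# s = Cc / (1 + e0) * H * log10(sigma_final / sigma_initial)
settlement = Cc / (1 + e0) * H * math.log10((sigma0 + dsigma) / sigma0)
print(f"estimated primary consolidation settlement ~ {settlement * 100:.0f} cm")
```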
Procedia PDF Downloads 123217 Train Timetable Rescheduling Using Sensitivity Analysis: Application of Sobol, Based on Dynamic Multiphysics Simulation of Railway Systems
Authors: Soha Saad, Jean Bigeon, Florence Ossart, Etienne Sourdille
Abstract:
Developing better solutions for train rescheduling problems has been drawing the attention of researchers for decades. Most research in this field deals with minor incidents that affect a large number of trains due to cascading effects. It focuses on timetables, rolling stock and crew duties, but does not take into account infrastructure limits. The present work addresses electric infrastructure incidents that limit the power available for train traction, and hence the transportation capacity of the railway system. Rescheduling is needed in order to optimally share the available power among the different trains. We propose a rescheduling process based on dynamic multiphysics railway simulations that include the mechanical and electrical properties of all the system components and calculate physical quantities such as the train speed profiles, the voltage along the catenary lines, temperatures, etc. The optimization problem to solve has a large number of continuous and discrete variables, several output constraints due to physical limitations of the system, and a high computation cost. Our approach includes a phase of sensitivity analysis in order to analyze the behavior of the system and support the decision-making process and/or a more precise optimization. This approach is a quantitative method based on simulation statistics of the dynamic railway system, considering a predefined range of variation of the input parameters. Three important settings are defined. Factor prioritization detects the input variables that contribute the most to the variation of the outputs. Factor fixing then allows calibrating the input variables which do not influence the outputs. Lastly, factor mapping is used to study which ranges of input values lead to model realizations that correspond to feasible solutions according to defined criteria or objectives. Generalized Sobol indices are used for factor prioritization and factor fixing. The approach is tested in the case of a simple railway system, with nominal traffic running on a single-track line. The considered incident is the loss of a feeding power substation, which limits the available power and the train speed. Rescheduling is needed, and the variables to be adjusted are the train departure times, the train speed reduction at a given position and the number of trains (cancellation of some trains if needed). The results show that the spacing between train departure times is the most critical variable, contributing to more than 50% of the variation of the model outputs. In addition, we identify the reduced range of variation of this variable which guarantees that the output constraints are respected. Optimal solutions are extracted according to different potential objectives: minimizing the traveling time, the train delays, the traction energy, etc. A Pareto front is also built.Keywords: optimization, rescheduling, railway system, sensitivity analysis, train timetable
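A minimal sketch of the factor-prioritization step with Sobol indices is shown below; it assumes the SALib package and substitutes a toy delay function for the dynamic multiphysics simulator, with invented variable names and bounds.

```python
# Sketch of factor prioritization with Sobol indices, in the spirit of the approach
# described above. It assumes the SALib package and replaces the railway simulator with
# a toy delay model; variable names, bounds and the model itself are invented.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["headway_s", "speed_reduction", "n_trains"],
    "bounds": [[120, 600], [0.0, 0.5], [4, 12]],
}

X = saltelli.sample(problem, 1024)            # quasi-random sample of rescheduling inputs

def toy_delay_model(x):                       # stand-in for the multiphysics simulation
    headway, speed_red, n_trains = x
    return n_trains * speed_red * 300 + 5e4 / headway

Y = np.apply_along_axis(toy_delay_model, 1, X)
Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:16s} S1={s1:.2f}  ST={st:.2f}")   # which inputs drive output variance
```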
Procedia PDF Downloads 399216 Using Business Interactive Games to Improve Management Skills
Authors: Nuno Biga
Abstract:
Continuous process improvement is a permanent challenge for the managers of any organization. Lean management means that efficiency gains can be obtained through a systematic framework able to explore synergies between processes and eliminate waste of time and other resources. Leadership in an organization determines the efficiency of the teams through its influence on collaborators, their motivation, and the consolidation of a feeling of (group) ownership. The 'organization health' depends on the leadership style, which is directly influenced by the intrinsic characteristics of each personality and by leadership ability (leadership competencies). Therefore, it is important that managers can correct in advance any deviation from the expected exercise of leadership. Top management teams must assume the role of regulatory agents of leadership within the organization, ensuring the monitoring of actions and the alignment of managers in accordance with the humanist standards anchored in a visible Code of Ethics and Conduct. This article is built around an innovative model of 'Business Interactive Games' (BI GAMES) that simulates a real-life management environment. It shows that the strategic management of operations depends on a complex set of variables, endogenous and exogenous to the intervening agents, that require specific skills and a set of critical processes to monitor. BI GAMES are designed for each management reality and have already been applied successfully in several contexts over the last five years, comprising educational and enterprise settings. Results from these experiences are used to demonstrate how serious games in working living labs contributed to improving the organizational environment by focusing on the evaluation of the players' (agents') skills, empowering their capabilities, and identifying the critical factors that create value in each context. The implementation of the BI GAMES simulator highlights that leadership skills are decisive for the performance of teams, regardless of the sector of activity and the specificities of each organization whose operation is intended to be simulated. The players in BI GAMES can be managers or employees in different roles in the organization, or students in a learning context. They interact with each other and are asked to decide/make choices in the presence of several options for the follow-up operation, for example, when the costs and benefits are not fully known but depend on the actions of external parties (e.g., subcontracted enterprises and actions of regulatory bodies). Each team must evaluate the resources used/needed in each operation, identify bottlenecks in the system of operations, assess the performance of the system through a set of key performance indicators, and set a coherent strategy to improve efficiency. Through gamification and the serious games approach, organizational managers are able to confront the scientific approach to strategic decision-making with their real-life approach based on experience. Considering that each BI GAMES team has a leader (chosen by draw), the performance of this player has a direct impact on the results obtained. Leadership skills are thus put to the test during the simulation of the functioning of each organization, allowing conclusions to be drawn at the end of the simulation, including a discussion amongst participants.Keywords: business interactive games, gamification, management empowerment skills, simulation living labs
Procedia PDF Downloads 112215 Global Supply Chain Tuning: Role of National Culture
Authors: Aleksandr S. Demin, Anastasiia V. Ivanova
Abstract:
Purpose: The current economy tends to increase the influence of digital technologies and diminish the human role in management. However, it is impossible to deny that a person still leads a business with his or her own set of values and priorities. The article presented aims to incorporate the peculiarities of national culture and the characteristics of the supply chain, using the quantitative values of national culture obtained by scholars of comparative management (Hofstede, House, and others). Design/Methodology/Approach: The research conducted is based on secondary data in the field of cross-country comparison obtained by Prof. Hofstede and in the GLOBE project. These data are used to design different aspects of the supply chain at both the cross-functional and inter-organizational levels. The connection between a range of principles in general (role assignment, customer service prioritization, coordination of supply chain partners) and in comparative management (acknowledgment of the national peculiarities of the country in which the company operates) is shown through economic and mathematical models, mainly linear programming models. Findings: The combination of the team management wheel concept, the business processes of the global supply chain, and the national culture characteristics lets a transnational corporation form a supply chain crew balanced in costs, functions, and personality. To elaborate an effective customer service policy and logistics strategy for the distribution of goods and services in the country under review, two approaches are offered. The first approach relies exclusively on the customer's interest in the place of operation, while the second one takes into account the position of the transnational corporation and its previous experience in order to reconcile both organizational and national cultures. It is advised to assess the effect of integration practice on the achievement of a specific supply chain goal in a specific location via the type of correlation (positive, negative, none) and the value of the national culture indices. Research Limitations: The models developed are intended to be used by transnational companies and business firms located in several nationally different areas. Some of the inputs used to illustrate the application of the methods offered are simulated. That is why the numerical measurements should be used with caution. Practical Implications: The research can be of great interest to supply chain managers who are responsible for the engineering of global supply chains in a transnational corporation and for further activities in doing business in the international arena. In addition, the methods, tools, and approaches suggested can be used by top managers searching for new sources of competitiveness and can be suitable for all staff members who are keen on the topic of national culture traits. Originality/Value: The elaborated methods of decision-making with regard to the national environment provide a mathematical and economic basis for finding a comprehensive solution.Keywords: logistics integration, logistics services, multinational corporation, national culture, team management, service policy, supply chain management
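As an illustration of the linear programming models mentioned above, the sketch below solves a small role-to-team assignment problem whose cost blends an operational cost with a hypothetical cultural-mismatch penalty; all figures and names are invented for illustration, not taken from the article.

```python
# Illustrative linear-programming sketch: assigning supply-chain roles to regional teams
# with a cost that blends an operational cost and a (hypothetical) cultural mismatch
# penalty derived from national-culture indices. All numbers are invented.
import numpy as np
from scipy.optimize import linprog

ops_cost = np.array([[4, 6, 5],     # rows: roles (planning, distribution, service)
                     [7, 3, 6],     # cols: regional teams
                     [5, 5, 4]], float)
culture_gap = np.array([[0.2, 0.8, 0.5],   # assumed normalized distance between a role
                        [0.6, 0.1, 0.7],   # profile and each team's culture indices
                        [0.4, 0.5, 0.2]])
cost = (ops_cost + 3.0 * culture_gap).ravel()   # weighted objective, flattened row-major

n = 3
A_eq, b_eq = [], []
for r in range(n):                               # each role assigned exactly once
    row = np.zeros(n * n); row[r * n:(r + 1) * n] = 1; A_eq.append(row); b_eq.append(1)
for t in range(n):                               # each team gets exactly one role
    row = np.zeros(n * n); row[t::n] = 1; A_eq.append(row); b_eq.append(1)

res = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=[(0, 1)] * (n * n))
print(res.x.reshape(n, n).round(2))              # assignment matrix (LP relaxation is integral here)
```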
Procedia PDF Downloads 106214 Optimizing Stormwater Sampling Design for Estimation of Pollutant Loads
Authors: Raja Umer Sajjad, Chang Hee Lee
Abstract:
Stormwater runoff is the leading contributor to the pollution of receiving waters. In response, an efficient stormwater monitoring program is required to quantify and eventually reduce stormwater pollution. The overall goals of stormwater monitoring programs primarily include the identification of high-risk dischargers and the development of total maximum daily loads (TMDLs). The challenge in developing a better monitoring program is to reduce the variability in flux estimates due to sampling errors; however, the success of a monitoring program mainly depends on the accuracy of the estimates. Apart from sampling errors, manpower and budgetary constraints also influence the quality of the estimates. This study attempted to develop an optimum stormwater monitoring design considering both the cost and the quality of the estimated pollutant flux. Three years of stormwater monitoring data (2012–2014) from a mixed land use site located within the Geumhak watershed, South Korea, were evaluated. The regional climate is humid, and precipitation is usually well distributed through the year. The investigation of a large number of water quality parameters is time-consuming and resource-intensive. In order to identify a suite of easy-to-measure parameters to act as surrogates, Principal Component Analysis (PCA) was applied. Means, standard deviations, coefficients of variation (CV) and other simple statistics were computed using the multivariate statistical analysis software SPSS 22.0. The implications of sampling time for monitoring results, the number of samples required during a storm event and the impact of the seasonal first flush were also identified. Based on the observations derived from the PCA biplot and the correlation matrix, total suspended solids (TSS) was identified as a potential surrogate for turbidity, total phosphorus and heavy metals such as lead, chromium, and copper, whereas Chemical Oxygen Demand (COD) was identified as a surrogate for organic matter. The CVs among the different monitored water quality parameters were found to be high (ranging from 3.8 to 15.5). This suggests that the use of a grab sampling design to estimate the mass emission rates in the study area can lead to errors due to the large variability. The TSS discharge load calculation error was found to be only 2% with two different sample size approaches, i.e. 17 samples per storm event and 6 equally distributed samples per storm event. Both seasonal first flush and event first flush phenomena were observed for most water quality parameters in the study area. Samples taken at the initial stage of a storm event generally overestimate the mass emissions; however, it was found that collecting a grab sample after the initial hour of a storm event more closely approximates the mean concentration of the event. It was concluded that site- and regional-climate-specific interventions can be made to optimize the stormwater monitoring program in order to make it more effective and economical.Keywords: first flush, pollutant load, stormwater monitoring, surrogate parameters
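For readers unfamiliar with the load calculation underlying such monitoring designs, the sketch below computes an event mean concentration (EMC) and event pollutant load from paired flow and concentration samples; the hydrograph and TSS values are invented for illustration, not the watershed data.

```python
# Minimal sketch of how an event mean concentration (EMC) and event pollutant load are
# computed from paired flow and concentration samples; the hydrograph and TSS values
# below are invented for illustration only.
import numpy as np

t_min = np.array([0, 15, 30, 45, 60, 90, 120])                    # time since start of runoff
q_m3s = np.array([0.02, 0.15, 0.40, 0.30, 0.18, 0.08, 0.03])      # discharge, m^3/s
tss_mgL = np.array([180, 420, 350, 220, 150, 90, 60])             # TSS concentration, mg/L

dt_s = np.gradient(t_min) * 60.0                       # local time step per sample, s
volume_m3 = np.sum(q_m3s * dt_s)                       # event runoff volume
load_kg = np.sum(q_m3s * dt_s * tss_mgL) / 1000.0      # mg/L * m^3 = g, then g -> kg
emc_mgL = 1000.0 * load_kg / volume_m3                 # flow-weighted event mean concentration

cv = np.std(tss_mgL, ddof=1) / np.mean(tss_mgL)        # within-event variability grab samples miss
print(f"volume={volume_m3:.0f} m^3, load={load_kg:.1f} kg, EMC={emc_mgL:.0f} mg/L, CV={cv:.2f}")
```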
Procedia PDF Downloads 240213 Biosocial Determinants of Maternal and Child Health in Northeast India: A Case Study
Authors: Benrithung Murry
Abstract:
This paper highlights the biosocial determinants of health-seeking behavior in tribal population groups of northeast India, focusing on maternal and child health. The northeastern region of India is a conglomeration of several ethnic groups, most of which are scheduled as tribal groups. A total of 750 ever-married women of reproductive age (15-49 years) were interviewed from three tribal groups of Nagaland, India, using a pre-tested and modified maternal health schedule. Data pertaining to the reproductive performance of the mothers and the health status of their children were collected from 12 villages of Dimapur district, Nagaland, India. The sample comprises 212 Angami women, 267 Ao women, and 271 Sumi women, all belonging to tribal populations of northeast India. The sex ratios for ages 15-49 years in these three populations are 1018.18, 1086.69, and 1106.92, respectively. About 90% of the families in the study are nuclear families, and about 10% of households fall below the poverty line as per the cutoffs for India. The female literacy level in these population groups is higher than the national average of 65.46%; however, about 30% of all married women are not engaged in any form of earning. The total fertility rates of these populations are alarming (Total Fertility Rate ≥ 6) and far from the replacement fertility level, while infant mortality rates are found to be much lower than the national average of 34 per 1000. The perception and practice of maternal health in this region are unimpressive despite the availability of medical amenities. Only 3% of mothers in the study reported four antenatal checkups during their last two pregnancies. Other mothers reported one to three antenatal checkups, but about 25% of them never visited a doctor during the entire pregnancy period. About 15% of mothers never took a tetanus injection, while 40% of mothers never took iron-folic supplements during pregnancy. Almost half of all women and their husbands do not use birth control measures even for the spacing of children, which has an immense impact on prenatal mortality, mainly due to deliberate abortions: the prenatal mortality among the Angami, Ao and Sumi populations is 44.88, 31.88 and 54.98 per 1000 live births, respectively. The steep decline in fertility levels in most countries is a consequence of the increasing use of modern methods of contraception. However, among users of birth control measures in these populations, it is seen that most couples use them only after they have the desired number of children; thus their use has no substantial influence in reducing fertility. It is also seen that the majority of the children were only partially vaccinated. With many child deliveries taking place at home, many newborns are not administered polio vaccine at birth. Two-thirds of all children do not have complete basic immunization against polio, diphtheria, tetanus, pertussis, bacillus (BCG), and hepatitis, among others. Certain adherence to traditional beliefs and customs, apart from the socio-economic factors, is believed to be operating in these populations and determines their health-seeking behavior. While a more in-depth study combining biological, socio-cultural, economic, and genetic factors is suggested, there is an urgent need for intervention in these populations to combat the poor maternal and child health status.Keywords: case study, health behavior, mother and child, northeast india
Procedia PDF Downloads 129