Search results for: MEMS capacitive pressure sensor
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5453

173 Globalisation and Diplomacy: How Can Small States Improve the Practice of Diplomacy to Secure Their Foreign Policy Objectives?

Authors: H. M. Ross-McAlpine

Abstract:

Much of what is written on diplomacy, globalization and the global economy addresses the changing nature of relationships between major powers. While the most dramatic and influential changes have resulted from these developing relationships, the world is not, on deeper inspection, governed neatly by major powers. Due to advances in technology, the shifting balance of power and a changing geopolitical order, small states have the ability to exercise a greater influence than ever before. Increasingly interdependent and ever complex, our world is too delicate to be handled by a mighty few. The pressure of global change requires small states to adapt their diplomatic practices and diversify their strategic alliances and relationships. The nature and practice of diplomacy must be re-evaluated in light of the pressures resulting from globalization. This research examines how small states can best secure their foreign policy objectives. Small state theory is used as a foundation for exploring the case study of New Zealand. The research draws on secondary sources to evaluate the existing theory in relation to modern practices of diplomacy. As New Zealand lacks the economic and military power required to play an active, influential role in international affairs, what strategies does it use to exert influence? Furthermore, New Zealand lies in a remote corner of the Pacific and is geographically isolated from its nearest neighbors; how does this affect its security and trade priorities? The findings note a significant shift since the 1970s in New Zealand’s diplomatic relations. This shift is arguably a direct result of globalization, regionalism and a growing independence from traditional bilateral relationships. The need to source predictable trade, investment and technology is an essential driving force for New Zealand’s diplomatic relations. A lack of hard power aligns New Zealand’s prosperity with a secure, rules-based international system that increases the likelihood of a stable and secure global order. New Zealand’s diplomacy and prosperity have been intrinsically reliant on its reputation. A vital component of New Zealand’s diplomacy is preserving a reputation for integrity and global responsibility. It is the use of this soft power that facilitates the influence that New Zealand enjoys on the world stage. To weave a comprehensive network of successful diplomatic relationships, New Zealand must maintain a reputation of international credibility. Globalization has substantially influenced the practice of diplomacy for New Zealand. The current world order places economic and military might in the hands of a few, subsequently requiring smaller states to use other means to secure their interests. There are clear strategies evident in New Zealand’s diplomatic practice that draw attention to how other smaller states might best secure their foreign policy objectives. While these findings are limited, as with all case study research, there is value in applying them to other small states struggling to secure their interests in the wake of rapid globalization.

Keywords: diplomacy, foreign policy, globalisation, small state

Procedia PDF Downloads 398
172 Reduction of Specific Energy Consumption in Microfiltration of Bacillus velezensis Broth by Air Sparging and Turbulence Promoter

Authors: Jovana Grahovac, Ivana Pajcin, Natasa Lukic, Jelena Dodic, Aleksandar Jokic

Abstract:

To obtain purified biomass to be used in plant pathogen biocontrol or as a soil biofertilizer, it is necessary to eliminate residual broth components at the end of the fermentation process. The main drawback of membrane separation techniques is permeate flux decline due to membrane fouling. Fouling mitigation measures increase the pressure drop along the membrane channel due to the increased resistance to flow of the feed suspension, thus increasing the hydraulic power drop. At the same time, these measures lead to an increase in the permeate flux due to the reduced resistance of the filtration cake on the membrane surface. Because of these opposing effects, the energy efficiency of fouling mitigation measures is limited, and the justification for their application is provided by information on the reduction of specific energy consumption compared to a case without any measures employed. In this study, the influence of a static mixer (Kenics) and air sparging (two-phase flow) on the reduction of specific energy consumption (ER) was investigated. Cultivation of Bacillus velezensis was carried out in a 3-L bioreactor (Biostat® Aplus) containing 2 L working volume with two parallel Rushton turbines and without internal baffles. Cultivation was carried out at 28 °C and 150 rpm with an aeration rate of 0.75 vvm for 96 h. The experiments were carried out in a conventional cross-flow microfiltration unit. During the experiments, permeate and retentate were recycled back to the broth vessel to simulate a continuous process. The single-channel ceramic membrane (TAMI Deutschland) used had a nominal pore size of 200 nm, a length of 250 mm and an inner/external diameter of 6/10 mm. The useful membrane channel surface was 4.33×10⁻³ m². Air sparging was provided by pressurized air connected to the feed tube through a three-way valve and a simple T-connector without a diffuser. The different approaches to flux improvement are compared in terms of energy consumption. The reduction of specific energy consumption compared to microfiltration without fouling mitigation is around 49% and 63% for the two-phase flow and the static mixer, respectively. In the case of a combination of these two fouling mitigation methods, ER is 60%, i.e., slightly lower than for the turbulence promoter alone. The reason for this result is that the flux increase is driven mainly by the presence of the Kenics static mixer, while sparging increases the energy used during microfiltration. When the combined method is compared with the turbulence promoter alone, ER is negative (-7%), which can be explained by the increased power consumption for air flow with only a moderate contribution to the flux increase. Another confirmation of this can be found by comparing the energy consumption of the combined method with that of the two-phase flow alone; in this instance, the energy reduction (ER) is 22%, which demonstrates that the turbulence promoter is more efficient than two-phase flow. The antimicrobial activity of Bacillus velezensis biomass against phytopathogenic isolates of Xanthomonas campestris was preserved under the different fouling reduction methods.
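
As a rough illustration of the comparison reported above, the sketch below computes a specific energy consumption from hydraulic power and permeate flow and the corresponding relative reduction (ER); the function names and all numerical values are illustrative assumptions, not the study's measured data.

```python
# Minimal sketch of the specific energy comparison described above.
# All numerical values are illustrative placeholders, not measured data.

def specific_energy(delta_p, q_feed, flux, area, p_air=0.0):
    """Energy spent per unit permeate volume, J/m^3.

    delta_p : axial pressure drop along the membrane channel, Pa
    q_feed  : feed (retentate-side) volumetric flow rate, m^3/s
    flux    : permeate flux, m^3/(m^2 s)
    area    : membrane area, m^2
    p_air   : extra power for air sparging (two-phase flow), W
    """
    hydraulic_power = delta_p * q_feed      # W
    permeate_flow = flux * area             # m^3/s
    return (hydraulic_power + p_air) / permeate_flow

def energy_reduction(e_reference, e_method):
    """ER in percent, relative to microfiltration without fouling mitigation."""
    return 100.0 * (e_reference - e_method) / e_reference

# Hypothetical operating points (placeholders only):
e_none   = specific_energy(delta_p=40e3, q_feed=1.0e-5, flux=1.0e-5, area=4.33e-3)
e_mixer  = specific_energy(delta_p=55e3, q_feed=1.0e-5, flux=2.8e-5, area=4.33e-3)
e_sparge = specific_energy(delta_p=45e3, q_feed=1.0e-5, flux=1.9e-5, area=4.33e-3, p_air=0.05)

print(f"ER static mixer:   {energy_reduction(e_none, e_mixer):.0f} %")
print(f"ER two-phase flow: {energy_reduction(e_none, e_sparge):.0f} %")
```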

Keywords: Bacillus velezensis, microfiltration, static mixer, two-phase flow

Procedia PDF Downloads 118
171 A Comparative Study on the Influencing Factors of Urban Residential Land Prices Among Regions

Authors: Guo Bingkun

Abstract:

With the rapid development of China's social economy and the continuous improvement of urbanization levels, people's living standards have undergone tremendous changes, and more and more people are gathering in cities. The demand for urban residents' housing has been greatly released in the past decade. The demand for housing and the related construction land required for urban development has brought huge pressure to urban operations, and land prices have risen rapidly in the short term. On the other hand, a comparison of the eastern and western regions of China shows great differences in urban socioeconomics and land prices among the eastern, central and western regions. Judging from the current overall market development, after more than ten years of housing market reform and development, the quality of housing and land use efficiency in Chinese cities have been greatly improved. However, the current contradiction between land demand for urban socio-economic development and land supply, especially for urban residential land, has not been effectively alleviated. Since land is closely linked to all aspects of society, changes in land prices are affected by many complex factors. Therefore, this paper studies the factors that may affect urban residential land prices, compares them among eastern, central and western cities, and identifies the main factors that determine the level of urban residential land prices. This paper provides guidance for urban managers in formulating land policies and alleviating the contradiction between land supply and demand, offers distinct ideas for improving urban planning, and promotes the improvement of urban management. The research focuses on residential land prices. Generally, the indicators for measuring land prices mainly include benchmark land prices, land price level values, parcel land prices, etc.; however, considering the requirements of data continuity and representativeness, this paper uses residential land price level values to reflect the status of urban residential land prices. First, based on existing research at home and abroad, the paper considers both land supply and land demand and, based on basic theoretical analysis, determines factors that may affect urban residential land prices, such as urban expansion, taxation, land reserves, population, and land benefits, and correspondingly selects representative indicators. Secondly, using conventional econometric analysis methods, a model of the factors affecting urban residential land prices was established, the relationships between the influencing factors and residential land prices and their intensities were quantitatively analyzed, and the differences and similarities in the impact on urban residential land prices among the eastern, central and western regions were compared. The results show that the main factors affecting China's urban residential land prices are urban expansion, land use efficiency, taxation, population size, and residents' consumption. The main reasons for the difference in residential land prices among the eastern, central and western regions are differences in urban expansion patterns, industrial structures, urban carrying capacity and real estate development investment.
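
A minimal sketch of the kind of econometric model described above is given below; the input file, column names and region labels are hypothetical, and the actual study's specification may differ.

```python
# Sketch of a land-price factor regression of the kind described above.
# Column names are hypothetical; `df` is assumed to hold city-level observations.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("residential_land_prices.csv")  # assumed input file

predictors = ["urban_expansion", "land_use_efficiency", "taxation",
              "population_size", "residents_consumption"]

results = {}
for region in ["east", "central", "west"]:
    sub = df[df["region"] == region]
    X = sm.add_constant(sub[predictors])          # intercept + factor indicators
    y = sub["residential_land_price"]
    results[region] = sm.OLS(y, X).fit()          # ordinary least squares fit
    print(region, results[region].params.round(3).to_dict())
```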

Keywords: urban housing, urban planning, housing prices, comparative study

Procedia PDF Downloads 50
170 Radiation Stability of Structural Steel in the Presence of Hydrogen

Authors: E. A. Krasikov

Abstract:

As the service life of an operating nuclear power plant (NPP) increases, the potential for misunderstanding the degradation of aging components must receive more attention. Integrity assurance analysis contributes to the effective maintenance of adequate plant safety margins. In essence, the reactor pressure vessel (RPV) is the key structural component determining the NPP lifetime. Environmentally induced cracking in the stainless steel corrosion-preventing cladding of RPVs has been recognized as one of the technical problems in the maintenance and development of light-water reactors. Extensive cracking leading to failure of the cladding was found after 13000 net hours of operation in the JPDR (Japan Power Demonstration Reactor). Some of the cracks reached the base metal and further penetrated into the RPV in the form of localized corrosion. Failures of reactor internal components in both boiling water reactors and pressurized water reactors have increased after the accumulation of relatively high neutron fluences (5×10²⁰ cm⁻², E > 0.5 MeV). Therefore, in the case of cladding failure, the problem arises of hydrogen (as a corrosion product) embrittlement of irradiated RPV steel because of exposure to the coolant. At present, when notable progress in plasma physics has been achieved, practical energy utilization from fusion reactors (FR) is determined by the state of materials science problems. The latter include not only the routine problems of nuclear engineering but also a number of entirely new problems connected with extreme conditions of material operation: irradiation environment, hydrogenation, thermocycling, etc. Limited data suggest that the combined effect of these factors is more severe than any one of them alone. To clarify the possible influence of these in-service synergistic phenomena on the properties of FR structural materials, we have studied the interaction between hydrogen and irradiated steel, including alternating hydrogenation and heat treatment (annealing). Available information indicates that the life of the first wall could be extended by means of periodic in-place annealing. The effects of neutron fluence and irradiation temperature on steel/hydrogen interactions (adsorption, desorption, diffusion, mechanical properties at different loading velocities, post-irradiation annealing) were studied. The experiments clearly reveal that the higher the neutron fluence and the lower the irradiation temperature, the more hydrogen-radiation defects occur, with corresponding effects on the steel mechanical properties. Hydrogen accumulation analyses and thermal desorption investigations were performed to prove the evidence of hydrogen trapping at irradiation defects. Extremely high susceptibility to hydrogen embrittlement was observed in specimens that had been irradiated at relatively low temperature; however, the susceptibility decreases with increasing irradiation temperature. To evaluate methods for assessing and predicting the RPV's residual lifetime, more work should be done on the interaction between hydrogen and irradiated metal in order to monitor the status of irradiated materials more reliably.

Keywords: hydrogen, radiation, stability, structural steel

Procedia PDF Downloads 273
169 Hydrocarbons and Diamondiferous Structures Formation in Different Depths of the Earth Crust

Authors: A. V. Harutyunyan

Abstract:

The results of investigating rocks at high pressures and temperatures have revealed the intervals of changes in seismic wave velocities and density, as well as some processes taking place in the rocks. In serpentinized rocks, abrupt changes in seismic waves and density have been recorded as a consequence of dehydration. Hydrogen-bearing components are released, which combine with carbon-bearing components; as a result, hydrocarbons are formed, and the investigated samples are melted. Geofluids and hydrocarbons then migrate into the upper horizons of the Earth crust along deep faults, where their differentiation and accumulation take place in the jointed rocks of the faults and in layers with collecting properties. Under the majority of hydrocarbon deposits, magmatic centers and deep faults are recorded at a certain depth. The investigation results on serpentinized rocks, together with numerous geological-geophysical factual data, make it clear that hydrocarbons are mainly formed both in the offshore parts of the oceans and at different depths of the continental crust. Experiments have also shown that the dehydration of serpentinized rocks is accompanied by an explosion with an instantaneous increase in pressure and temperature and melting of the studied rocks. According to numerous publications, hydrocarbons and diamonds are formed in the upper part of the mantle, at depths of 200-400 km, and as a consequence of geodynamic processes, they rise to the upper horizons of the Earth crust through narrow channels. However, the genesis of metamorphogenic diamonds and of the diamonds found in lava streams formed within the Earth crust remains unclear. Since super-high pressures and temperatures arise during dehydration, it is assumed that diamond crystals are formed from carbon-containing components present in the dehydration zone. It can also be assumed that, besides the explosion at dehydration, secondary explosions of the released hydrogen take place. The process is naturally accompanied by seismic phenomena, causing earthquakes of different magnitudes on the surface. As for the diamondiferous kimberlites, it is well known that the majority of them are located within ancient shields and platforms and are not necessarily connected with deep faults. The kimberlites are formed at shallow locations of dehydrated masses in the Earth crust. Kimberlites are younger than the ancient rocks they contain, which include serpentinized basites and ultrabasites, relicts of the paleo-oceanic crust. Sometimes, diamonds containing water and hydrocarbons are found, showing their simultaneous genesis. So, according to the new concept put forward, geofluids, hydrocarbons and diamonds are formed simultaneously from serpentinized rocks as a consequence of their dehydration at different depths of the Earth crust. Based on the concept proposed by us, we suggest discussing the following: the genesis of gigantic hydrocarbon deposits located in the offshore areas of the oceans (North American, Gulf of Mexico, Cuanza-Cameroonian, East Brazilian, etc.) as well as in the continental parts of different mainlands (Canadian-Arctic, Caspian, East Siberian, etc.), and the genesis of metamorphogenic diamonds and of diamonds in lava streams (Guinea-Liberian, Kokchetav, Canadian, Kamchatka-Tolbachik, etc.).

Keywords: dehydration, diamonds, hydrocarbons, serpentinites

Procedia PDF Downloads 341
168 Effect of Noise at Different Frequencies on Heart Rate Variability - Experimental Study Protocol

Authors: A. Bortkiewcz, A. Dudarewicz, P. Małecki, M. Kłaczyński, T. Wszołek, Małgorzata Pawlaczyk-Łuszczyńska

Abstract:

Low-frequency noise (LFN) has been recognized as a special environmental pollutant. It is usually considered a broadband noise with dominant content at low frequencies from 10 Hz to 250 Hz. A growing body of data shows that LFN differs in nature from other environmental noises that are at comparable levels but not dominated by low-frequency components. The primary and most frequent adverse effect of LFN exposure is annoyance. Moreover, some recent investigations showed that LFN at relatively low A-weighted sound pressure levels (40−45 dB) occurring in office-like areas could adversely affect mental performance, especially in highly sensitive subjects. It is well documented that high-frequency noise disturbs various types of human functions; however, there are very few data on the impact of LFN on well-being and health, including the cardiovascular system. Heart rate variability (HRV) is a sensitive marker of autonomic regulation of the circulatory system. Walker and co-workers found that LFN has a significantly more negative impact on cardiovascular response than exposure to high-frequency noise and that changes in HRV parameters resulting from LFN exposure tend to persist over time. The negative reactions of the cardiovascular system in response to LFN generated by wind turbines (20-200 Hz) were confirmed by Chiu. The scientific aim of the study is to assess the relationship between the spectral-temporal characteristics of LFN and the activity of the autonomic nervous system, considering the subjective assessment of annoyance, sensitivity to this type of noise, and cognitive and general health status. The study will be conducted on 20 male students in a special, acoustically prepared, constantly supervised room. Each person will be tested 4 times (4 sessions), under conditions of non-exposure (sham) and exposure to wind turbine noise recorded at a distance of 250 meters from the turbine, with different frequencies and frequency ranges: acoustic band 20 Hz-20 kHz, infrasound band 5-20 Hz, and acoustic band + infrasound band. The order of the sessions will be randomized. Each session will last 1 h, with a 2-3 day break between sessions to exclude the possibility of an earlier session influencing the results of the next one. Before the first exposure, a questionnaire will be administered covering noise sensitivity, general health status (GHQ questionnaire), hearing status and sociodemographic data. Before each of the 4 exposures, subjects will complete a brief questionnaire on their mood and sleep quality the night before the test. After the test, the subjects will be asked about any discomfort and subjective symptoms during the exposure. Before the test begins, Holter ECG monitoring equipment will be installed. HRV will be analyzed from the ECG recordings, including time- and frequency-domain parameters. The tests will always be performed in the morning (9:00-12:00) to avoid the influence of the diurnal rhythm on HRV results. Students will perform psychological tests (Vienna Test System) 15 minutes before the end of each session.
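
For orientation, the sketch below computes common time-domain HRV parameters from a series of RR intervals; it is not the study's analysis pipeline (RR extraction from the Holter ECG and the frequency-domain LF/HF analysis are assumed to happen elsewhere), and the RR values shown are placeholders.

```python
# Sketch of time-domain HRV parameters from RR intervals (in milliseconds).
import numpy as np

def time_domain_hrv(rr_ms):
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)                              # successive RR differences
    return {
        "mean_rr": rr.mean(),                       # ms
        "sdnn": rr.std(ddof=1),                     # overall variability, ms
        "rmssd": np.sqrt(np.mean(diff ** 2)),       # short-term variability, ms
        "pnn50": 100.0 * np.mean(np.abs(diff) > 50.0),  # % of diffs > 50 ms
    }

# Illustrative RR series (placeholder values):
rr_example = [812, 845, 790, 830, 860, 805, 825, 840]
print(time_domain_hrv(rr_example))
```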

Keywords: neurovegetative control, heart rate variability (HRV), cognitive processes, low frequency noise

Procedia PDF Downloads 80
167 Denitrogenation of Diesel Hydrocarbons Using a Triethanolamine-Glycerol Deep Eutectic Solvent

Authors: Hocine Sifaoui

Abstract:

The manufacture and marketing of gasoline and diesel without aromatic compounds, particularly nitrogen and sulfur heteroaromatics, is a main objective of researchers and the petrochemical industry in order to meet the requirements of environmental protection. This work is part of this line of research; to this end, a triethanolamine/glycerol (TEoA:Gly) deep eutectic solvent (DES) was used to remove two model nitrogen compounds, pyridine and quinoline, from n-decane. Experimentally, two liquid-liquid equilibrium systems {n-decane + pyridine/quinoline + DES} were measured at 298.15 K and 1.01 bar using the equilibrium cell method. This study aims to evaluate the potential of this DES as a sustainable alternative to organic solvents for the denitrogenation of petroleum feedstocks by liquid-liquid extraction. The DES was prepared by the heating method. Accurately weighed triethanolamine as the hydrogen bond acceptor (HBA) and glycerol as the hydrogen bond donor (HBD) were placed in a round-bottomed flask. An Ohaus Adventurer balance with a precision of ±0.0001 g was used for weighing the HBA and HBD. The mixture was then stirred and heated at 343.15 K under atmospheric pressure using a rotary evaporator, and the preparation was complete when a clear and homogeneous liquid was obtained. To evaluate the equilibrium behaviour of the pseudo-ternary systems {n-decane + pyridine or quinoline + DES}, mixtures were prepared with the nitrogen compound (pyridine or quinoline) at varying mass percentages in n-decane, along with a fixed (2:1) ratio between the n-decane and DES phases. Defined amounts of these three components were precisely weighed to obtain mixtures within the biphasic region, followed by vigorous stirring at 400 rpm using an Avantor VWR KS 4000 shaker for 4 hours at 298.15 K and overnight settling to attain thermodynamic equilibrium, evidenced by phase separation. Aliquots from the upper, n-decane-rich phase and the lower, DES-rich phase were carefully weighed, and the mass of each sample was precisely recorded for quantification by gas chromatography (GC). The DES content was calculated by mass balance after analysing the composition of the other species (n-decane, pyridine or quinoline). All samples were diluted with pure ethanol before GC analysis. Distribution ratios and selectivities toward pyridine and quinoline were also determined at the same phase ratios. The consistency and reliability of the experimental data were verified and validated by the Othmer-Tobias and Bachman correlations. The experimental results show that the highest value of the distribution ratio (D = 7.08) was obtained for pyridine extraction, and the highest selectivity (S = 801.4) was obtained for quinoline extraction. The experimental liquid-liquid equilibrium data of these ternary systems were correlated using the Non-Random Two-Liquid (NRTL) and COnductor-like Screening MOdel for Real Solvents (COSMO-RS) models; good agreement with the experimental data was observed for both systems. The performance of this DES was compared to those of ionic liquids and organic solvents reported in the literature.
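
A minimal sketch of how the distribution ratio and selectivity reported above follow from tie-line compositions; the compositions used here are placeholders, not the measured data.

```python
# Sketch of the distribution ratio and selectivity calculation used to
# characterize the extraction; numbers below are illustrative placeholders.

def distribution_ratio(x_extract, x_raffinate):
    """D = solute composition in the DES-rich (extract) phase divided by
    its composition in the n-decane-rich (raffinate) phase."""
    return x_extract / x_raffinate

def selectivity(d_solute, d_diluent):
    """S = D(solute) / D(n-decane): how preferentially the DES takes up
    the nitrogen compound relative to the diluent."""
    return d_solute / d_diluent

# Hypothetical tie-line compositions (mass fractions):
d_quinoline = distribution_ratio(x_extract=0.120, x_raffinate=0.020)
d_decane = distribution_ratio(x_extract=0.0003, x_raffinate=0.040)
print("D =", round(d_quinoline, 2), " S =", round(selectivity(d_quinoline, d_decane), 1))
```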

Keywords: pyridine, quinoline, n-decane, deep eutectic solvent

Procedia PDF Downloads 3
166 Freight Time and Cost Optimization in Complex Logistics Networks, Using a Dimensional Reduction Method and K-Means Algorithm

Authors: Egemen Sert, Leila Hedayatifar, Rachel A. Rigg, Amir Akhavan, Olha Buchel, Dominic Elias Saadi, Aabir Abubaker Kar, Alfredo J. Morales, Yaneer Bar-Yam

Abstract:

The complexity of providing timely and cost-effective distribution of finished goods from industrial facilities to customers makes effective operational coordination difficult, yet effectiveness is crucial for maintaining customer service levels and sustaining a business. Logistics planning becomes increasingly complex with growing numbers of customers, varied geographical locations, the uncertainty of future orders, and sometimes extreme competitive pressure to reduce inventory costs. Linear optimization methods become cumbersome or intractable due to a large number of variables and nonlinear dependencies involved. Here we develop a complex systems approach to optimizing logistics networks based upon dimensional reduction methods and apply our approach to a case study of a manufacturing company. In order to characterize the complexity in customer behavior, we define a “customer space” in which individual customer behavior is described by only the two most relevant dimensions: the distance to production facilities over current transportation routes and the customer's demand frequency. These dimensions provide essential insight into the domain of effective strategies for customers; direct and indirect strategies. In the direct strategy, goods are sent to the customer directly from a production facility using box or bulk trucks. In the indirect strategy, in advance of an order by the customer, goods are shipped to an external warehouse near a customer using trains and then "last-mile" shipped by trucks when orders are placed. Each strategy applies to an area of the customer space with an indeterminate boundary between them. Specific company policies determine the location of the boundary generally. We then identify the optimal delivery strategy for each customer by constructing a detailed model of costs of transportation and temporary storage in a set of specified external warehouses. Customer spaces help give an aggregate view of customer behaviors and characteristics. They allow policymakers to compare customers and develop strategies based on the aggregate behavior of the system as a whole. In addition to optimization over existing facilities, using customer logistics and the k-means algorithm, we propose additional warehouse locations. We apply these methods to a medium-sized American manufacturing company with a particular logistics network, consisting of multiple production facilities, external warehouses, and customers along with three types of shipment methods (box truck, bulk truck and train). For the case study, our method forecasts 10.5% savings on yearly transportation costs and an additional 4.6% savings with three new warehouses.
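
The sketch below illustrates the kind of clustering step described above: customers are placed in the two-dimensional customer space (distance to production facilities vs. demand frequency) and grouped with k-means so that cluster centroids can suggest candidate warehouse profiles; the feature ranges, sample size and number of clusters are assumptions for illustration, not the company's data.

```python
# Sketch of k-means clustering in the 2-D "customer space".
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
customers = np.column_stack([
    rng.uniform(50, 1500, 200),   # distance to nearest facility, km (placeholder)
    rng.uniform(1, 52, 200),      # order frequency per year (placeholder)
])

scaler = StandardScaler().fit(customers)           # put both axes on one scale
X = scaler.transform(customers)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Centroids mapped back to physical units suggest candidate warehouse
# locations/profiles to evaluate against the cost model.
centroids = scaler.inverse_transform(kmeans.cluster_centers_)
print(np.round(centroids, 1))
```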

Keywords: logistics network optimization, direct and indirect strategies, K-means algorithm, dimensional reduction

Procedia PDF Downloads 140
165 Globalization of Pesticide Technology and Sustainable Agriculture

Authors: Gagandeep Kaur

Abstract:

The pesticide industry is a major supplier of agricultural inputs. The use of pesticides controls weeds, fungal diseases, etc., which cause yield losses in agricultural production. In the agribusiness and agrichemical industries, globalization of markets, competition and innovation are the dominant trends. Innovation in the agrichemical industry is limited by its tradition of increasing the productivity of agro-systems through generic, universally applicable technologies. The marketing of agricultural technology needs to deal with various trends, such as locally organized forces that envision regionalized sustainable agriculture in the future. Agricultural production has changed dramatically over the past century. Before the Second World War, agricultural production was characterized by low inputs of money, high labor, mixed farming and low yields. Although mineral fertilizers were already applied in the second half of the 19th century, most of the crops were restricted by local climatic, geological and ecological conditions. After the Second World War, in the period of reconstruction, political and socioeconomic pressure changed the nature of agricultural production. For a growing population, food security at low prices and securing farmer income at acceptable levels became political priorities. The current European Common Agricultural Policy aims to reduce overproduction, liberalize world trade and protect landscapes and natural habitats. Farmers have to increase the quality of their production, and they have to control costs because of increased competition from the world market. Pesticides should be more effective at lower application doses, less toxic and should not pose a threat to groundwater. There is a major debate taking place about how and whether to mitigate the intensive use of pesticides. This debate is about the future of agriculture, namely sustainable agriculture, which is possible by moving away from conventional agriculture. Conventional agriculture is characterized by high inputs and high yields, and the use of pesticides in conventional agriculture implies crop production on a wide scale. Moving away from conventional agriculture is possible through the gradual adoption of less disturbing and less polluting agricultural practices at the level of the cropping system. A healthy environment for crop production in the future requires the maintenance of its chemical, physical and biological properties; it is also necessary to minimize the emission of volatile compounds into the atmosphere. Companies are limiting themselves to a particular interpretation of sustainable development, characterized by technological optimism and production maximization. The main objective of the paper is therefore to present the trends in the pesticide industry and in agricultural production in the era of globalization. The second objective is to analyze sustainable agriculture. Pesticide companies seem to have identified biotechnology as a promising alternative and supplement to the conventional business of selling pesticides. The agricultural sector is in the process of transforming its conventional mode of operation; some experts suggest that farmers move towards precision farming, while others suggest engaging in organic farming. The methodology of the paper is historical and analytical, and both primary and secondary sources are used.

Keywords: globalization, pesticides, sustainable development, organic farming

Procedia PDF Downloads 99
164 An Indispensable Parameter in Lipid Ratios to Discriminate between Morbid Obesity and Metabolic Syndrome in Children: High Density Lipoprotein Cholesterol

Authors: Orkide Donma, Mustafa M. Donma

Abstract:

Obesity is a low-grade inflammatory disease and may lead to health problems such as hypertension, dyslipidemia and diabetes. It is also associated with important risk factors for cardiovascular diseases. This requires the detailed evaluation of obesity, particularly in children. The aim of this study is to elucidate the potential associations between lipid ratios and obesity indices and to introduce those with discriminating features among children with obesity and metabolic syndrome (MetS). A total of 408 children (aged between six and eighteen years) participated in the study. Informed consent forms were obtained from the participants and their parents, and Ethical Committee approval was obtained. Anthropometric measurements such as weight and height, as well as waist, hip, head and neck circumferences and body fat mass, were taken. Systolic and diastolic blood pressure values were recorded. Body mass index (BMI), diagnostic obesity notation model assessment index-II (D2 index), waist-to-hip and head-to-neck ratios were calculated. Total cholesterol, triglycerides, high-density lipoprotein cholesterol (HDL-C) and low-density lipoprotein cholesterol (LDL-C) analyses were performed in blood samples drawn from 110 children with normal body weight, 164 morbidly obese (MO) children and 134 children with MetS. Age- and sex-adjusted BMI percentiles tabulated by the World Health Organization were used to classify the groups: normal body weight, MO and MetS. The 15th-to-85th percentiles were used to define normal body weight children, and children whose values were above the 99th percentile were described as MO. MetS criteria were defined. Data were evaluated statistically by SPSS Version 20, and the degree of statistical significance was accepted as p≤0.05. Mean±standard deviation values of BMI for normal body weight children, MO children and those with MetS were 15.7±1.1, 27.1±3.8 and 29.1±5.3 kg/m², respectively. The corresponding values for the D2 index were 3.4±0.9, 14.3±4.9 and 16.4±6.7. Both BMI and the D2 index were capable of discriminating the groups from one another (p≤0.01). As far as the other obesity indices were concerned, waist-to-hip and head-to-neck ratios did not exhibit any statistically significant difference between the MO and MetS groups (p≥0.05). The D2 index was correlated with the triglycerides-to-HDL-C ratio in the normal body weight and MO groups (r=0.413, p≤0.01 and r=0.261, p≤0.05, respectively). Total cholesterol-to-HDL-C and LDL-C-to-HDL-C ratios showed statistically significant differences between normal body weight and MO as well as between MO and MetS (p≤0.05). The only group in which these two ratios were significantly correlated with the waist-to-hip ratio was the MetS group (r=0.332 and r=0.334, p≤0.01, respectively). The lack of correlation between the D2 index and the triglycerides-to-HDL-C ratio was another important finding in the MetS group. In this study, parameters and ratios whose associations with increased cardiovascular risk or cardiac death have previously been defined were evaluated along with obesity indices in children with morbid obesity and MetS, and their profiles during childhood were investigated. Aside from the nature of the correlation between the D2 index and the triglycerides-to-HDL-C ratio, the total cholesterol-to-HDL-C and LDL-C-to-HDL-C ratios, along with their correlations with the waist-to-hip ratio, showed that a combination of obesity-related parameters predicts better than a single parameter and appears to be helpful for discriminating MO children from the MetS group.
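
As an illustration of the ratio and correlation analysis described above (performed with SPSS in the study), the following sketch shows an equivalent computation; the input file and column names are hypothetical.

```python
# Sketch of lipid-ratio computation and group-wise correlation with the D2 index.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("pediatric_lipids.csv")  # assumed input: one row per child

df["tg_hdl"] = df["triglycerides"] / df["hdl_c"]
df["tc_hdl"] = df["total_cholesterol"] / df["hdl_c"]
df["ldl_hdl"] = df["ldl_c"] / df["hdl_c"]

for group, sub in df.groupby("group"):           # e.g. normal weight, MO, MetS
    r, p = pearsonr(sub["d2_index"], sub["tg_hdl"])
    print(f"{group}: r={r:.3f}, p={p:.3f}")
```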

Keywords: children, lipid ratios, metabolic syndrome, obesity indices

Procedia PDF Downloads 159
163 The Origins of Representations: Cognitive and Brain Development

Authors: Athanasios Raftopoulos

Abstract:

In this paper, an attempt is made to explain the evolution or development of human’s representational arsenal from its humble beginnings to its modern abstract symbols. Representations are physical entities that represent something else. To represent a thing (in a general sense of “thing”) means to use in the mind or in an external medium a sign that stands for it. The sign can be used as a proxy of the represented thing when the thing is absent. Representations come in many varieties, from signs that perceptually resemble their representative to abstract symbols that are related to their representata through conventions. Relying the distinction among indices, icons, and symbols, it is explained how symbolic representations gradually emerged from indices and icons. To understand the development or evolution of our representational arsenal, the development of the cognitive capacities that enabled the gradual emergence of representations of increasing complexity and expressive capability should be examined. The examination of these factors should rely on a careful assessment of the available empirical neuroscientific and paleo-anthropological evidence. These pieces of evidence should be synthesized to produce arguments whose conclusions provide clues concerning the developmental process of our representational capabilities. The analysis of the empirical findings in this paper shows that Homo Erectus was able to use both icons and symbols. Icons were used as external representations, while symbols were used in language. The first step in the emergence of representations is that a sensory-motor purely causal schema involved in indices is decoupled from its normal causal sensory-motor functions and serves as a representation of the object that initially called it into play. Sensory-motor schemes are tied to specific contexts of the organism-environment interactions and are activated only within these contexts. For a representation of an object to be possible, this scheme must be de-contextualized so that the same object can be represented in different contexts; a decoupled schema loses its direct ties to reality and becomes mental content. The analysis suggests that symbols emerged due to selection pressures of the social environment. The need to establish and maintain social relationships in ever-enlarging groups that would benefit the group was a sufficient environmental pressure to lead to the appearance of the symbolic capacity. Symbols could serve this need because they can express abstract relationships, such as marriage or monogamy. Icons, by being firmly attached to what can be observed, could not go beyond surface properties to express abstract relations. The cognitive capacities that are required for having iconic and then symbolic representations were present in Homo Erectus, which had a language that started without syntactic rules but was structured so as to mirror the structure of the world. This language became increasingly complex, and grammatical rules started to appear to allow for the construction of more complex expressions required to keep up with the increasing complexity of social niches. This created evolutionary pressures that eventually led to increasing cranial size and restructuring of the brain that allowed more complex representational systems to emerge.

Keywords: mental representations, iconic representations, symbols, human evolution

Procedia PDF Downloads 59
162 The Establishment of Primary Care Networks (England, UK) Throughout the COVID-19 Pandemic: A Qualitative Exploration of Workforce Perceptions

Authors: Jessica Raven Gates, Gemma Wilson-Menzfeld, Professor Alison Steven

Abstract:

In 2019, the Primary Care system in the UK National Health Service (NHS) was subject to reform and restructuring. Primary Care Networks (PCNs) were established, which aligned with a trend towards integrated care both within the NHS and internationally. The introduction of PCNs brought groups of GP practices in a locality together, to operate as a network, build on existing services and collaborate at a larger scale. PCNs were expected to bring a range of benefits to patients and address some of the workforce pressures in the NHS, through an expanded and collaborative workforce. The early establishment of PCNs was disrupted by the emerging COVID-19 pandemic. This study, set in the context of the pandemic, aimed to explore experiences of the PCN workforce, and their perceptions of the establishment of PCNs. Specific objectives focussed on examining factors perceived as enabling or hindering the success of a PCN, the impact on day-to-day work, the approach to implementing change, and the influence of the COVID-19 pandemic upon PCN development. This study is part of a three-phase PhD project that utilized qualitative approaches and was underpinned by social constructionist philosophy. Phase 1: a systematic narrative review explored the provision of preventative healthcare services in UK primary settings and examined facilitators and barriers to delivery as experienced by the workforce. Phase 2: informed by the findings of phase 1, semi-structured interviews were conducted with fifteen participants (PCN workforce). Phase 3: follow-up interviews were conducted with original participants to examine any changes to their experiences and perceptions of PCNs. Three main themes span across phases 2 and 3 and were generated through a Framework Analysis approach: 1) working together at scale, 2) network infrastructure, and 3) PCN leadership. Findings suggest that through efforts to work together at scale and collaborate as a network, participants have broadly accepted the concept of PCNs. However, the workforce has been hampered by system design and system complexity. Operating against such barriers has led to a negative psychological impact on some PCN leaders and others in the PCN workforce. While the pandemic undeniably increased pressure on healthcare systems around the world, it also acted as a disruptor, offering a glimpse into how collaboration in primary care can work well. Through the integration of findings from all phases, a new theoretical model has been developed, which conceptualises the findings from this Ph.D. study and demonstrates how the workforce has experienced change associated with the establishment of PCNs. The model includes a contextual component of the COVID-19 pandemic and has been informed by concepts from Complex Adaptive Systems theory. This model is the original contribution to knowledge of the PhD project, alongside recommendations for practice, policy and future research. This study is significant in the realm of health services research, and while the setting for this study is the UK NHS, the findings will be of interest to an international audience as the research provides insight into how the healthcare workforce may experience imposed policy and service changes.

Keywords: health services research, qualitative research, NHS workforce, primary care

Procedia PDF Downloads 60
161 Positive Incentives to Reduce Private Car Use: A Theory-Based Critical Analysis

Authors: Rafael Alexandre Dos Reis

Abstract:

Research has shown a substantial increase in the participation of conventionally fuelled vehicles (CFVs) in the urban transport modal split. The reasons for this unsustainable reality are multiple, ranging from economic interventions to individual behaviour. The development and delivery of positive incentives for the adoption of more environmentally friendly modes of transport is an emerging strategy to help tackle the problem of excessive use of conventionally fuelled vehicles. The efficiency of this approach, like that of other information-based schemes, can benefit from knowledge of its potential impacts on the theoretical constructs of multiple behaviour change theories. The goal of this research is to critically analyse theories of behaviour that are relevant to transport research and the impacts of positive incentives on the theoretical determinants of behaviour, strengthening the current body of evidence about the benefits of this approach. The main method involves a literature review on two topics: the current theories of behaviour that have empirical support in transport research, and past or ongoing positive incentive programs that had an impact on car use reduction. The reviewed programs of positive incentives were the following: TravelSmart®; Spitsmijden®; Incentives for Singapore Commuters® (INSINC); COMMUTEGREENER®; MOVESMARTER®; STREETLIFE®; SUPERHUB®; SUNSET® and the EMPOWER® project. The theories analysed were the Theory of Planned Behaviour (TPB), the Norm Activation Model (NAM), Social Learning Theory (SLT), the Theory of Interpersonal Behaviour (TIB), Goal-Setting Theory (GST) and the Value-Belief-Norm Theory (VBN). After reviewing the theoretical constructs of each of the theories and their influence on car use, it can be concluded that positive incentive schemes impact behaviour change in the following ways: changing individuals' attitudes through informational incentives; increasing feelings of moral obligation to reduce the use of CFVs; increasing the perceived social pressure to engage in more sustainable mobility behaviours, for example through comparison mechanisms in social media; increasing the perceived control of behaviour through informational and training incentives; increasing personal norms with reinforcing information; providing tools for self-monitoring and self-evaluation; providing real experiences in alternative modes to the car; making the observation of others' car use reduction possible; informing about the consequences of behaviour and emphasizing the individual's responsibility to society and the environment; increasing the perception of the consequences of car use for an individual's valued objects; increasing the perceived ability to reduce threats to the environment; helping to establish goals to reduce car use; giving personalized feedback on the goal; increasing feelings of commitment to the goal; and reducing the perceived complexity of using alternatives to the car. It is notable that this emerging technique of delivering positive incentives is systematically connected to causal determinants of travel behaviour. The preliminary results of the reviewed programs show how positive incentives might strengthen these determinants and help in the process of behaviour change.

Keywords: positive incentives, private car use reduction, sustainable behaviour, voluntary travel behaviour change

Procedia PDF Downloads 341
160 Smart Meters and In-Home Displays to Encourage Water Conservation through Behavioural Change

Authors: Julia Terlet, Thomas H. Beach, Yacine Rezgui

Abstract:

Urbanization, population growth, climate change and the current increase in water demand have made the adoption of innovative demand management strategies crucial to the water industry. Water conservation in urban areas has to be improved by encouraging consumers to adopt more sustainable habits and behaviours. This includes informing and educating them about their households’ water consumption and advising them about ways to achieve significant savings on a daily basis. This paper presents a study conducted in the context of the European FP7 WISDOM Project. By integrating innovative Information and Communication Technologies (ICT) frameworks, this project aims at achieving a change in water savings. More specifically, behavioural change will be attempted by implementing smart meters and in-home displays in a trial group of selected households within Cardiff (UK). Using this device, consumers will be able to receive feedback and information about their consumption but will also have the opportunity to compare their consumption to the consumption of other consumers and similar households. Following an initial survey, it appeared necessary to implement these in-home displays in a way that matches consumer's motivations to save water. The results demonstrated the importance of various factors influencing people’s daily water consumption. Both the relevant literature on the subject and the results of our survey therefore led us to include within the in-home device a variety of elements. It first appeared crucial to make consumers aware of the economic aspect of water conservation and especially of the significant financial savings that can be achieved by reducing their household’s water consumption on the long term. Likewise, reminding participants of the impact of their consumption on the environment by making them more aware of water scarcity issues around the world will help increasing their motivation to save water. Additionally, peer pressure and social comparisons with neighbours and other consumers, accentuated by the use of online social networks such as Facebook or Twitter, will likely encourage consumers to reduce their consumption. Participants will also be able to compare their current consumption to their past consumption and to observe the consequences of their efforts to save water through diverse graphs and charts. Finally, including a virtual water game within the display will help the whole household, children and adults, to achieve significant reductions by providing them with simple tips and advice to save water on a daily basis. Moreover, by setting daily and weekly goals for them to reach, the game will expectantly generate cooperation between family members. Members of each household will indeed be encouraged to work together to reduce their water consumption within different rooms of the house, such as the bathroom, the kitchen, or the toilets. Overall, this study will allow us to understand the elements that attract consumers the most and the features that are most commonly used by the participants. In this way, we intend to determine the main factors influencing water consumption in order to identify the measures that will most encourage water conservation in both the long and short term.

Keywords: behavioural change, ICT technologies, water consumption, water conservation

Procedia PDF Downloads 337
159 Identification of ω-3 Fatty Acids Using GC-MS Analysis in Extruded Spelt Product

Authors: Jelena Filipovic, Marija Bodroza-Solarov, Milenko Kosutic, Nebojsa Novkovic, Vladimir Filipovic, Vesna Vucurovic

Abstract:

Spelt wheat is a suitable raw material for extruded products such as pasta, special types of bread and other products with altered nutritional characteristics compared to conventional wheat products. During the extrusion process, spelt is exposed to high temperature and high pressure, while the raw material is also mechanically treated by shear forces. Spelt wheat is grown without the use of pesticides in harsh ecological conditions and in marginal areas of cultivation, so it can be used for organic and health-safe food. Pasta is a highly popular foodstuff, and its consumption has been observed to rise. Pasta quality depends mainly on the properties of the flour raw materials, especially protein content and quality, while starch properties are of lesser importance. Pasta is characterized by significant amounts of complex carbohydrates, low sodium and total fat, fiber, minerals, and essential fatty acids, and its nutritional value can be improved with additional functional components. Over the past few decades, wheat pasta has been successfully formulated using different ingredients to cater to health-conscious consumers who prefer a product rich in protein, healthy lipids and other health benefits. Flaxseed flour is used in the production of bakery and pasta products that have the properties of functional foods. However, it should be ensured that food products retain their technological and sensory quality despite the added flaxseed. Flaxseed contains important substances such as vitamins and mineral elements, and it is also an excellent source of fiber and one of the best sources of ω-3 fatty acids and lignans. In this paper, the quality of an extruded spelt product with the addition of flaxseed, which contributes positively to the nutritional and technological changes of the product, is investigated, together with the identification of its fatty acids. ω-3 fatty acids are polyunsaturated essential fatty acids and must be taken with food to satisfy the recommended daily intake. Flaxseed flour was added in quantities of 10 g/100 g and 20 g/100 g of sample on a farina basis. It is shown that the presence of ω-3 fatty acids in pasta can be clearly distinguished from other fatty acids by gas chromatography with mass spectrometry. The addition of flaxseed flour influences the chemical composition of the pasta. The addition of flaxseed flour to spelt pasta in the quantity of 20 g/100 g significantly increases the share of ω-3 fatty acids, which results in an improved ω-6/ω-3 ratio of 1:2.4 and completely satisfies the minimum daily needs of ω-3 essential fatty acids (3.8 g/100 g) recommended by the FDA. Flaxseed flour influenced pasta quality by increasing the hardness (2377.8 ± 13.3; 2874.5 ± 7.4; 3076.3 ± 5.9) and work of shear (102.6 ± 11.4; 150.8 ± 11.3; 165.0 ± 18.9) and decreasing the adhesiveness (11.8 ± 20.6; 9.98 ± 0.12; 7.1 ± 12.5) of the final product. The presented data point to good indicators of the technological quality of spelt pasta with flaxseed and show that GC-MS analysis can be used in quality control for flaxseed identification. Acknowledgment: The research was financed by the Ministry of Education and Science of the Republic of Serbia (Project No. III 46005).
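
A minimal sketch of the ω-6/ω-3 ratio check described above; the fatty acid contents used are placeholder values, not the reported composition.

```python
# Sketch of the omega-6 : omega-3 ratio calculation from GC-MS quantification.
# Contents below are illustrative placeholders (g per 100 g of product).

def omega_ratio(omega6_g, omega3_g):
    """Return the omega-6 : omega-3 ratio normalized to 1 on the omega-6 side."""
    return 1.0, omega3_g / omega6_g

omega6 = 1.6   # e.g. linoleic acid content (placeholder)
omega3 = 3.9   # e.g. alpha-linolenic acid content (placeholder)

n6, n3 = omega_ratio(omega6, omega3)
print(f"omega-6 : omega-3 = {n6:.0f} : {n3:.1f}")
print("meets the 3.8 g/100 g daily-need benchmark:", omega3 >= 3.8)
```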

Keywords: GC-MS analysis, ω-3 fatty acids, flax seed, spelt wheat, daily needs

Procedia PDF Downloads 163
158 Neighborhood-Scape as a Methodology for Enhancing Gulf Region Cities' Quality of Life: Case of Doha, Qatar

Authors: Eman AbdelSabour

Abstract:

Sustainability is increasingly being considered a critical aspect in shaping the urban environment and serves as a basis for innovation in global urban development. Currently, different models and structures affect how the criteria that define a sustainable city are interpreted. There is a collective need to shift the growth path towards a more durable one by presenting different suggestions for multi-scale initiatives. The global rise in urbanization has led to increased demand and pressure for better urban planning choices and scenarios for more sustainable urban alternatives. The need for an assessment tool at the urban scale was prompted by the trend towards increasingly sustainable urban development (SUD). The neighborhood scale is being addressed by a growing research community, since it seems to be a pertinent scale through which economic, environmental, and social impacts can be addressed. Although neighborhood design is a comparatively old practice, it was only in the early years of the 21st century that environmentalists and planners started developing sustainability assessment at the neighborhood level. Through this, urban reality can be considered at a larger scale, whereby themes that go beyond the size of a single building can be addressed, while the scale still stays small enough that concrete measures can be analyzed. Neighborhood assessment tools play a crucial role in helping neighborhood sustainability approaches perform and fulfill their objectives through a set of themes and criteria. These tools are also known as neighborhood assessment tools, district assessment tools, and sustainable community rating tools. The primary focus of research has been on sustainability from the economic and environmental aspects, whereas social and cultural issues are rarely addressed. Therefore, this research is based on Doha, Qatar, and the current urban conditions of its neighborhoods are discussed in this study. The research problem focuses on spatial features in relation to socio-cultural aspects. This study is outlined in three parts. The first section comprises a review of the latest use of wellbeing assessment methods to enhance the decision process for retrofitting the physical features of neighborhoods. The second section discusses urban settlement development, regulations and the decision-making process; an analysis of urban development policy with reference to neighborhood development is also included, along with a historical review of the urban growth of the neighborhoods as the atoms of the city system in Doha. The last part involves developing quantified indicators of subjective well-being through a participatory approach. Additionally, GIS will be utilized as a tool for visualizing the apparent Quality of Life (QoL) that needs to be developed in the neighborhood area as part of the assessment approach. Envisaging the present QoL situation in Doha's neighborhoods is a step towards improving current conditions; neighborhood function involves many day-to-day activities of the residents, due to which these areas are considered dynamic.

Keywords: neighborhood, subjective wellbeing, decision support tools, Doha, retrofitting

Procedia PDF Downloads 138
157 Mediating Role of 'Investment Recovery' and 'Competitiveness' on the Impact of Green Supply Chain Management Practices over Firm Performance: An Empirical Study Based on Textile Industry of Pakistan

Authors: Mehwish Jawaad

Abstract:

Purpose: The concept of GrSCM (Green Supply Chain Management) in the academic and research field is still thought to be in the development stage, especially in Asian emerging economies. The purpose of this paper is to contribute significantly to the first wave of empirical investigation of GrSCM practices and firm performance measures in Pakistan. The aim of this research is to develop a more holistic approach to investigating the impact of Green Supply Chain Management practices (Ecodesign, Internal Environmental Management Systems, Green Distribution, Green Purchasing and Cooperation with Customers) on multiple dimensions of firm performance measures (Economic Performance, Environmental Performance and Operational Performance), with a mediating role of Investment Recovery and Competitiveness. This paper also serves as an initiative to identify whether the relationship between Investment Recovery and firm performance measures is mediated by Competitiveness. Design/Methodology/Approach: This study is based on survey data collected from 272 ISO 14001-certified textile firms based in Lahore, Faisalabad, and Karachi that are involved in spinning, dyeing, printing or bleaching. A theoretical model was developed incorporating the constructs representing the green activities and firm performance measures of a firm. The data were analyzed using Partial Least Squares Structural Equation Modeling. Senior and mid-level managers provided the data, reflecting the degree to which their organizations deal with both internal and external stakeholders to improve the environmental sustainability of their supply chain. Findings: Of the 36 proposed hypotheses, 20 are considered valid and significant. The statistical results reveal that GrSCM practices positively impact Environmental Performance, followed by Economic and Operational Performance. Investment Recovery acts as a strong mediator between intra-organizational green activities and performance outcomes. The relationship between Reverse Logistics and outcomes is significantly mediated by Competitiveness. The pressure originating from customers exerts a significant positive influence on the firm to adopt green practices, consequently leading to higher outcomes. Research Contribution/Originality: Underpinned by resource dependence theory, and as part of the first wave of investigation into the impact of green supply chains on performance outcomes in Pakistan, this study intends to make a prominent mark in the field of research. Investment Recovery and Competitiveness together are tested as mediators for the first time in this arena. Managerial implications: Practitioners are provided with a framework for assessing the synergistic impact of GrSCM practices on performance. Regular upgrading of accreditations and audit programs is the need of the hour. Making processes leaner through the sale of excess inventories and scrap helps the firm work more efficiently and productively.

Keywords: economic performance, environmental performance, green supply chain management practices, operational performance, sustainability, textile sector of Pakistan

Procedia PDF Downloads 226
156 Contribution to the Understanding of the Hydrodynamic Behaviour of Aquifers of the Taoudéni Sedimentary Basin (South-eastern Part, Burkina Faso)

Authors: Kutangila Malundama Succes, Koita Mahamadou

Abstract:

In the context of climate change and demographic pressure, groundwater has emerged as an essential and strategic resource whose sustainability relies on good management. The accuracy and relevance of decisions made in managing these resources depend on the availability and quality of the scientific information on which they must rely. It is therefore urgent to improve the state of knowledge on groundwater to ensure its sustainable management. This study addresses the particular case of the aquifers of the transboundary Taoudéni sedimentary basin in its Burkinabe part. Indeed, Burkina Faso (and the Sahel region in general), marked by low rainfall, has experienced episodes of severe drought, which have justified the use of groundwater as the primary source of water supply. The study aims to improve knowledge of the hydrogeology of this area in order to achieve sustainable management of transboundary groundwater resources. The methodological approach first described the lithological units in terms of the extension and succession of the different layers. Secondly, the hydrodynamic behavior of these units was studied through the analysis of spatio-temporal variations in piezometric levels. The data consist of 692 static water level measurement points and 8 observation wells distributed across the area and tapping five of the identified geological formations. Monthly piezometric level records are available for each observation well and cover the period from 1989 to 2020. The temporal analysis of piezometric levels, carried out in comparison with rainfall records, revealed a general upward trend in piezometric levels throughout the basin. The response of the groundwater generally occurs with a delay of 1 to 2 months relative to the rainfall of the rainy season: the peaks of the piezometric level generally occur between September and October in reaction to the rainfall peaks between July and August, while low groundwater levels are observed between May and July. This relatively slow response of the aquifer is observed in all wells, and the influence of the geological setting, through the structure and hydrodynamic properties of the layers, was deduced from it. The spatial analysis reveals that piezometric levels vary between 166 and 633 m, with a trend indicating flow that generally runs from southwest to northeast and recharge areas located towards the southwest and northwest. There is a quasi-concordance between the hydrogeological basins and the overlying hydrological basins, as well as a bimodal flow with one component following the topography and another significant, deeper component controlled by the regional SW-NE gradient. This latter component may represent flows directed from the high reliefs towards the springs of Nasso. In the spring area (Kou basin), the maximum average storage variation, calculated by the Water Table Fluctuation (WTF) method, varies between 35 and 48.70 mm per year for 2012-2014.
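As an illustration of the Water Table Fluctuation (WTF) calculation referred to above, the following minimal sketch estimates a storage change as specific yield times the seasonal water-table rise; the specific yield and the monthly piezometric levels are assumed values, not the study's data.

```python
# Minimal sketch of the Water Table Fluctuation (WTF) method: groundwater storage
# change = specific yield (Sy) x water-table rise. The Sy value and the monthly
# piezometric levels below are assumptions for illustration only.

specific_yield = 0.05  # assumed Sy for the aquifer material (dimensionless)

# Hypothetical monthly piezometric levels (m above datum) over one recharge season.
levels_m = [420.10, 420.08, 420.05, 420.12, 420.35, 420.52, 420.48, 420.40]

# Seasonal rise estimated from the lowest pre-recharge level to the seasonal peak.
rise_m = max(levels_m) - min(levels_m)
storage_change_mm = specific_yield * rise_m * 1000  # convert m of water to mm

print(f"water-table rise: {rise_m:.2f} m")
print(f"estimated storage change: {storage_change_mm:.1f} mm/year")
```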

Keywords: hydrodynamic behaviour, Taoudéni basin, piezometry, water table fluctuation

Procedia PDF Downloads 65
155 Optimization of Biogas Production Using Co-Digestion Feedstocks via Anaerobic Technology

Authors: E Tolufase

Abstract:

The demand for, high costs of, and health implications of using energy derived from hydrocarbon compounds have necessitated the continuous search for alternative sources of energy. The world energy market faces several challenges: depletion of fossil fuel reserves, population explosion, lack of energy security, and the pressures of economic growth and urbanization; in addition, some rural areas in Nigeria still depend largely on wood, charcoal, kerosene, and petrol, among others, as their sources of energy. The need to overcome these shortfalls in energy supply and demand, as well as the risks of global climate change due to greenhouse gas emissions and other pollutants from fossil fuel combustion, has brought much attention to efficiently harnessing renewable energy sources. Among the renewable energy resources, biogas is a very promising clean energy technology for power production, vehicle use, and domestic use; optimization of biogas yield and quality is therefore imperative. Hence, this study investigated the yield and quality of biogas using low-cost bio-digesters and a combination of various feedstocks, referred to as co-digestion. A batch (discontinuous) bio-digester type was used because it is cheap, simple, and appropriate for the different substrates used to obtain the desired results. Three substrates were used: cow dung, chicken droppings, and lemon grass, digested in five separate 21-litre digesters, A, B, C, D, and E; the gas collection system was designed using locally available materials. For single digestion, cow dung, chicken droppings, and lemon grass were placed in bio-digesters A, B, and C, respectively, while the three substrates were co-digested in mixing ratios of 7:1:2 in digester D and 5:3:2 in digester E. The respective feedstock materials were collected locally, digested, and analyzed in accordance with standard procedures. They were pre-fermented for a period of 10 days before being introduced into the digesters and then digested for a retention period of 28 days. The physicochemical parameters, namely pressure, temperature, pH, the volume of the gas collection system, and the volume of biogas produced, were closely monitored and recorded daily. The pH and temperature ranged from 6.0 to 8.0 and from 22°C to 35°C, respectively. For the single substrates, bio-digester A (cow dung only) produced a total biogas volume of 0.1607 m³ (an average of 0.0054 m³ daily), B (chicken droppings) produced 0.1722 m³ (an average of 0.0057 m³ daily), and C (lemon grass) produced 0.1035 m³ (an average of 0.0035 m³ daily). For the co-digested substrates, bio-digester D produced a total of 0.2007 m³ (an average of 0.0067 m³ daily) and bio-digester E produced 0.1991 m³ (an average of 0.0066 m³ daily). The results show that combining different substrates gave higher yields than any single feedstock and that the mixing ratio also played a role in the yield improvement: bio-digesters D and E contained the same substrates mixed in different ratios, but a higher yield was observed in D, with a mixing ratio of 7:1:2, than in E, with a ratio of 5:3:2. Therefore, co-digestion of substrates and mixing proportions are important factors for optimizing biogas production.
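The reported cumulative volumes lend themselves to a quick arithmetic comparison of co-digestion against the best single substrate; the short sketch below reproduces that comparison using only the totals quoted in the abstract.

```python
# Quick arithmetic sketch comparing the reported cumulative biogas volumes
# (m^3 over the retention period) to quantify the gain from co-digestion.
totals_m3 = {
    "A (cow dung)": 0.1607,
    "B (chicken droppings)": 0.1722,
    "C (lemon grass)": 0.1035,
    "D (co-digestion 7:1:2)": 0.2007,
    "E (co-digestion 5:3:2)": 0.1991,
}

single_substrates = ("A (cow dung)", "B (chicken droppings)", "C (lemon grass)")
best_single = max(totals_m3[k] for k in single_substrates)

for label in ("D (co-digestion 7:1:2)", "E (co-digestion 5:3:2)"):
    gain = (totals_m3[label] - best_single) / best_single * 100
    print(f"{label}: {gain:.1f}% more biogas than the best single substrate (B)")
```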

Keywords: anaerobic, batch, biogas, biodigester, digestion, fermentation, optimization

Procedia PDF Downloads 30
154 The United States Film Industry and Its Impact on Latin American Identity Rationalizations

Authors: Alfonso J. García Osuna

Abstract:

Background and Significance: The objective of this paper is to analyze the inception and development of identity archetypes in early XX century Latin America, to explore their roots in United States culture, to discuss the influences that came to bear upon Latin Americans as the United States began to export images of standard identity paradigms through its film industry, and to survey how these images evolved and impacted Latin Americans’ ideas of national distinctiveness from the early 1900s to the present. Therefore, the general hypothesis of this work is that United States film in many ways influenced national identity patterning in its neighbors, especially in those nations closest to its borders, Cuba and Mexico. Very little research has been done on the social impact of the United States film industry on the country’s southern neighbors. From a historical perspective, the US’s influence has been examined as the projection of political and economic power, that is to say, that American influence is seen as a catalyst to align the forces that the US wants to see wield the power of the State. But the subtle yet powerful cultural influence exercised by film, the eminent medium for exporting ideas and ideals in the XX century, has not been significantly explored. Basic Methodologies and Description: Gramscian Marxist theory underpins the study, where it is argued that film, as an exceptional vehicle for culture, is an important site of political and social struggle; in this context, it aims to show how United States capitalist structures of power not only use brute force to generate and maintain control of overseas markets, but also promote their ideas through artistic products such as film in order to infiltrate the popular culture of subordinated peoples. In this same vein, the work of neo-Marxist theoreticians of popular culture is employed in order to contextualize the agency of subordinated peoples in the process of cultural assimilations. Indication of the Major Findings of the Study: The study has yielded much data of interest. The salient finding is that each particular nation receives United States film according to its own particular social and political context, regardless of the amount of pressure exerted upon it. An example of this is the unmistakable dissimilarity between Cuban and Mexican reception of US films. The positive reception given in Cuba to American film has to do with the seamless acceptance of identity paradigms that, for historical reasons discussed herein, were incorporated into the national identity grid quite unproblematically. Such is not the case with Mexico, whose express rejection of identity paradigms offered by the United States reflects not only past conflicts with the northern neighbor, but an enduring recognition of the country’s indigenous roots, one that precluded such paradigms. Concluding Statement: This paper is an endeavor to elucidate the ways in which US film contributed to the outlining of Latin American identity blueprints, offering archetypes that would be accepted or rejected according to each nation’s particular social requirements, constraints and ethnic makeup.

Keywords: film studies, United States, Latin America, identity studies

Procedia PDF Downloads 300
153 Investigations on the Application of Avalanche Simulations: A Survey Conducted among Avalanche Experts

Authors: Korbinian Schmidtner, Rudolf Sailer, Perry Bartelt, Wolfgang Fellin, Jan-Thomas Fischer, Matthias Granig

Abstract:

This study focuses on the evaluation of snow avalanche simulations, based on a survey carried out among avalanche experts. In recent decades, the application of avalanche simulation tools has gained recognition within the realm of hazard management. Traditionally, avalanche runout models were used to predict extreme avalanche runout and prepare avalanche maps. This has changed rather dramatically with the application of numerical models. For safety regulations, such as road safety, simulation tools are now being coupled with real-time meteorological measurements to predict frequent avalanche hazard. That places new demands on model accuracy and requires the simulation of physical processes that previously could be ignored. These simulation tools are based on a deterministic description of avalanche movement, allowing the prediction of certain quantities of the avalanche flow (e.g. pressure, velocities, flow heights, runout lengths). Because of the highly variable regimes of flowing snow, no uniform rheological law describing the motion of an avalanche is known. Therefore, analogies are drawn to the fluid dynamical laws of other materials, and transferring these constitutive laws to snow flows requires certain assumptions and adjustments. Beyond these limitations, there are high uncertainties regarding the initial and boundary conditions. Further challenges arise when implementing the underlying flow model equations in an algorithm executable by a computer: this implementation is constrained by the choice of adequate numerical methods and their computational feasibility, compelling model developers to introduce further simplifications and the related uncertainties. In light of these issues, many questions arise about avalanche simulations, their assets and drawbacks, their potential for improvement, and their application in practice. To address these questions, a survey among experts in the field of avalanche science (e.g. researchers, practitioners, engineers) from various countries was conducted. In the questionnaire, special attention is paid to the experts' opinions regarding the influence of certain variables on the simulation result, their uncertainty, and the reliability of the results. Furthermore, it was tested to what degree a simulation result influences decision-making in a hazard assessment. A discrepancy was found between the large uncertainty of the simulation input parameters and the relatively high reliability attributed to the results. This contradiction can be explained by taking into account how the experts employ the simulations: the credibility of the simulations is the result of a rather thorough simulation study in which different assumptions are tested and the results of different flow models are compared, along with the use of supplemental data such as chronicles, field observations, and silent witnesses, which are regarded as essential for the hazard assessment and for sanctioning simulation results. As the importance of avalanche simulations within hazard management grows along with their further development, studies focusing on how modeling is carried out could contribute to a better understanding of how knowledge of the avalanche process can be gained by running simulations.

Keywords: expert interview, hazard management, modeling, simulation, snow avalanche

Procedia PDF Downloads 327
152 Information and Communication Technology Skills of Finnish Students in Particular by Gender

Authors: Antero J. S. Kivinen, Suvi-Sadetta Kaarakainen

Abstract:

Digitalization touches every aspect of contemporary society, changing the way we live our everyday lives. Contemporary society is sometimes described as a knowledge society, including the unprecedented amount of information people face daily. The tools for managing this information flow are ICT skills, which comprise both the technical skills and the reflective skills needed to manage incoming information. Schools are therefore under constant pressure to revise what and how they teach. In the latest Programme for International Student Assessment (PISA), girls outperformed boys in all Organization for Economic Co-operation and Development (OECD) member countries, and the gender gap between girls and boys is widest in Finland. This paper presents results from the Comprehensive Schools in the Digital Age project of RUSE, University of Turku. The project is connected with the Finnish Government's analysis, assessment and research activities. First, this paper examines gender differences in the ICT skills of Finnish upper comprehensive school students. Second, it explores how these differences change when students proceed to upper secondary and vocational education. ICT skills are measured using a performance-based ICT skill test. Data are collected in three phases: January-March 2017 (upper comprehensive schools, n=5455), September-December 2017 (upper secondary and vocational schools, n~3500), and January-March 2018 (upper comprehensive schools). Upper comprehensive school students are aged 15-16, and upper secondary and vocational school students 16-18. The test is divided into six categories: basic operations, productivity software, social networking and communication, content creation and publishing, applications, and requirements for ICT study programs. Students also filled in a survey about their ICT usage and the study materials they use at school and at home. Cronbach's alpha was used to estimate the reliability of the ICT skill test, and statistical differences between genders were examined using two-tailed independent samples t-tests. Results from the first data collection in upper comprehensive schools show that there is no statistically significant difference in total ICT skill test scores between genders (boys 10.24 and girls 10.64, the maximum being 36). Although there was no gender difference in total test scores, there are differences in the six categories mentioned above: girls obtain better scores on school-related and social-networking test subjects, while boys perform better on more technically oriented subjects. Test scores on basic operations are quite low for both groups. This can perhaps be partly explained by the fact that the test was taken on computers, whereas the majority of students' ICT usage involves smartphones and tablets. Against this background, it is important to analyze the reasons for these differences further. In the context of the ongoing digitalization of everyday life, and especially working life, the main purpose of these analyses is to find out how to guarantee adequate ICT skills for all students.
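As a hedged sketch of the two statistics named above (Cronbach's alpha for test reliability and the two-tailed independent samples t-test for the gender difference), the following example uses randomly generated placeholder scores; only the group means and the overall sample size echo the abstract, while the gender split and score spreads are assumed.

```python
# Hedged sketch of the two statistics named in the abstract: Cronbach's alpha for
# the reliability of the six-category ICT-skill test and a two-tailed independent
# samples t-test for the gender difference in total scores. All score data below
# are randomly generated placeholders, not the project's data.
import numpy as np
from scipy import stats

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = test items/categories."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

rng = np.random.default_rng(1)

# Six correlated category scores driven by one latent skill factor (placeholder model).
latent_skill = rng.normal(size=(5455, 1))
category_scores = latent_skill + rng.normal(scale=0.8, size=(5455, 6))
print(f"Cronbach's alpha: {cronbach_alpha(category_scores):.2f}")

# Gender comparison of total scores; group means follow the abstract, spreads and
# group sizes are assumed.
boys = rng.normal(loc=10.24, scale=4.0, size=2700)
girls = rng.normal(loc=10.64, scale=4.0, size=2755)
t_stat, p_value = stats.ttest_ind(boys, girls)  # two-tailed by default
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```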

Keywords: basic education, digitalization, gender differences, ICT-skills, upper comprehensive education, upper secondary education, vocational education

Procedia PDF Downloads 135
151 Flood Risk Assessment and Mapping: Finding the Flood Vulnerability Level of the Study Area and Prioritizing the Khinch District Using a Multi-Criteria Decision-Making Model

Authors: Muhammad Karim Ahmadzai

Abstract:

Floods are natural phenomena and an integral part of the water cycle. The majority of them are the result of climatic conditions, but they are also affected by the geology and geomorphology of the area, topography and hydrology, the water permeability of the soil and the vegetation cover, as well as by all kinds of human activities and structures. However, from the moment that human lives are at risk and significant economic impact is recorded, this natural phenomenon becomes a natural disaster. Flood management is now a key issue at regional and local levels around the world, affecting human lives and activities. The majority of floods cannot be fully predicted, but it is feasible to reduce their risks through appropriate management plans and structures. The aim of this case study is to identify and map areas of flood risk in the Khinch District of Panjshir Province, Afghanistan, specifically in the area of Peshghore, where floods have caused extensive damage. The main purpose of the study is to evaluate the contribution of remote sensing technology and Geographic Information Systems (GIS) in assessing the susceptibility of this region to flood events. Panjshir faces seasonal floods, and human interventions on streams have aggravated them: stream beds that have been encroached upon to build houses and hotels, or converted into roads, cause flooding after every heavy rainfall. The streams crossing settlements and areas of intense tourism development have been heavily modified by humans as the pressure for real estate development land grows. In particular, several areas in Khinch face a high risk of extensive flooding. This study concentrates on the construction of a flood susceptibility map of the study area by combining vulnerability elements using the Analytic Hierarchy Process (AHP). AHP is a powerful yet simple decision-making method, commonly used for project prioritization and selection: it captures strategic goals as a set of weighted criteria that are then used to score alternatives. Here, the method is used to provide weights for each criterion that contributes to flood events. After processing a digital elevation model (DEM), important secondary data were extracted, such as the slope map, the flow direction, and the flow accumulation. Together with additional thematic information (land use and land cover, topographic wetness index, precipitation, Normalized Difference Vegetation Index, elevation, river density, distance from river, distance to road, and slope), these led to the final flood risk map. Finally, based on this map, the priority protection areas and villages were identified, and structural and non-structural measures were proposed to minimize the impacts of floods on residential and agricultural areas.
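To make the AHP weighting step concrete, a minimal sketch follows that derives priority weights and a consistency ratio from a pairwise comparison matrix; the three criteria and the Saaty-scale judgements are illustrative assumptions, not the study's full set of criteria.

```python
# Hedged AHP sketch: deriving criterion weights and a consistency ratio from a
# pairwise comparison matrix. The criteria and judgements are illustrative only.
import numpy as np

criteria = ["slope", "distance_from_river", "land_use"]
# Saaty-scale pairwise judgements (row criterion compared against column criterion).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                # principal eigenvector = priority weights

n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)         # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}[n]
cr = ci / ri                            # consistency ratio, acceptable if < 0.10

for c, w in zip(criteria, weights):
    print(f"{c}: weight {w:.3f}")
print(f"CR = {cr:.3f} ({'consistent' if cr < 0.10 else 'revise judgements'})")
```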

Keywords: flood hazard, flood risk map, flood mitigation measures, AHP analysis

Procedia PDF Downloads 119
150 The Pore–Scale Darcy–Brinkman–Stokes Model for the Description of Advection–Diffusion–Precipitation Using Level Set Method

Authors: Jiahui You, Kyung Jae Lee

Abstract:

Hydraulic fracturing fluid (HFF) is widely used in shale reservoir production. HFF contains diverse chemical additives, which result in the dissolution and precipitation of minerals through multiple chemical reactions. In this study, a new pore-scale Darcy–Brinkman–Stokes (DBS) model coupled with the Level Set Method (LSM) is developed to address the microscopic phenomena occurring during the iron–HFF interaction by numerically describing mass transport, chemical reactions, and pore structure evolution. The model is developed in OpenFOAM, an open-source platform for computational fluid dynamics. The DBS momentum equation is used to solve for velocity while accounting for fluid-solid mass transfer, and an advection-diffusion equation is used to compute the distribution of the injected HFF and iron. The reaction-induced pore evolution is captured by applying the LSM, in which the solid-liquid interface is updated by solving the level set distance function and re-initialized to a signed distance function. A smoothed Heaviside function then gives a smoothed solid-liquid interface over a narrow band of fixed thickness. The stated equations are discretized by the finite volume method, while the re-initialization equation is discretized by the central difference method. A Gauss linear upwind scheme is used to solve the level set distance function, and the Pressure-Implicit with Splitting of Operators (PISO) method is used to solve the momentum equation. The numerical result is compared with the 1-D analytical solution of the fluid-solid interface for reaction-diffusion problems. A sensitivity analysis is conducted over a range of Damköhler numbers (DaII) and Péclet numbers (Pe). We categorize the Fe(III) precipitation into three patterns as a function of DaII and Pe: symmetrical smoothed growth, unsymmetrical growth, and dendritic growth. Pe and DaII significantly affect the location of precipitation, which is critical for determining the injection parameters of hydraulic fracturing. When DaII<1, precipitation occurs uniformly on the solid surface in both the upstream and downstream directions; when DaII>1, precipitation occurs mainly on the solid surface in the upstream direction. When Pe>1, Fe(II) is transported deep into the pores and precipitates inside them; when Pe<1, Fe(III) precipitation occurs mainly on the solid surface in the upstream direction and readily takes place inside the small pore structures. The porosity-permeability relationship is subsequently presented. This pore-scale model allows high confidence in the description of Fe(II) dissolution and transport and Fe(III) precipitation. The model shows fast convergence and requires a low computational load. The results can provide reliable guidance for injecting HFF in shale reservoirs while avoiding clogging and wellbore pollution. Understanding Fe(III) precipitation and Fe(II) release and transport behaviors supports the design of highly efficient hydraulic fracturing projects.
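A minimal sketch of the smoothed Heaviside treatment described above follows; it is not the authors' OpenFOAM solver, and the grid, band width, and blended property values are assumptions chosen only to show how fluid and solid properties can be blended across the narrow band around the level-set interface.

```python
# Hedged sketch (not the authors' OpenFOAM solver): a smoothed Heaviside of the
# signed-distance level-set function, used to blend fluid/solid properties over a
# narrow band of fixed half-width eps around the interface.
import numpy as np

def smoothed_heaviside(phi, eps):
    """phi: signed distance (negative in solid, positive in fluid); eps: half band width."""
    return np.where(phi < -eps, 0.0,
           np.where(phi > eps, 1.0,
                    0.5 * (1 + phi / eps + np.sin(np.pi * phi / eps) / np.pi)))

# 1-D example: a solid grain occupying |x| < 0.2 in a unit domain.
x = np.linspace(-0.5, 0.5, 101)
phi = np.abs(x) - 0.2                    # signed distance to the grain surface
H = smoothed_heaviside(phi, eps=0.05)    # ~0 in solid, ~1 in fluid, smooth in between

# Blend a property (e.g. the permeability entering the DBS drag term) across the interface.
k_fluid, k_solid = 1.0, 1e-8
k_eff = k_solid + (k_fluid - k_solid) * H
print(f"effective property at the cell nearest the interface: "
      f"{k_eff[np.argmin(np.abs(phi))]:.3g}")
```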

Keywords: reactive transport, shale, kerogen, precipitation

Procedia PDF Downloads 165
149 Study of Formation and Evolution of Disturbance Waves in Annular Flow Using Brightness-Based Laser-Induced Fluorescence (BBLIF) Technique

Authors: Andrey Cherdantsev, Mikhail Cherdantsev, Sergey Isaenkov, Dmitriy Markovich

Abstract:

In annular gas-liquid flow, liquid flows as a film along the pipe walls, sheared by a high-velocity gas stream. The film surface is covered by large-scale disturbance waves, which affect pressure drop and heat transfer in the system and are necessary for the entrainment of liquid droplets from the film surface into the core of the gas stream. Disturbance waves are highly complex, and their properties are affected by numerous parameters. One such aspect is flow development, i.e., the change of flow properties with distance from the inlet. In the present work, this question is studied using the brightness-based laser-induced fluorescence (BBLIF) technique. This method enables simultaneous measurements of local film thickness at a large number of points with a high sampling frequency. In the present experiments, the first 50 cm of upward and downward annular flow in a vertical pipe of 11.7 mm i.d. is studied with a temporal resolution of 10 kHz and a spatial resolution of 0.5 mm. Thus, the spatio-temporal evolution of the film surface can be investigated, including scenarios of formation, acceleration, and coalescence of disturbance waves. The behaviour of the disturbance wave velocity as a function of the phase flow rates and downstream distance was investigated. Besides measuring the wave properties, the goal of the work was to investigate the interrelation between disturbance wave properties and integral characteristics of the flow, such as interfacial shear stress and the flow rate of the dispersed phase. In particular, it was shown that the initial acceleration of disturbance waves, defined by the value of the shear stress, decays linearly with downstream distance. This lack of acceleration, which may even turn into deceleration, is related to liquid entrainment: the flow rate of the dispersed phase grows linearly with downstream distance, and during entrainment events liquid is extracted directly from the disturbance waves, reducing their mass, their area of interaction with the gas shear and, hence, their velocity. The passing frequency of disturbance waves at each downstream position was measured automatically with a new algorithm that identifies the characteristic lines of individual disturbance waves, and scenarios of coalescence of individual disturbance waves were identified. The transition from the initial high-frequency Kelvin-Helmholtz waves appearing at the inlet to highly nonlinear disturbance waves of lower frequency was studied near the inlet using a 3D realisation of the BBLIF method in the same cylindrical channel and in a rectangular duct with a cross-section of 5 mm by 50 mm. It was shown that the initial waves are generally two-dimensional but are promptly broken into localised three-dimensional wavelets, and that coalescence of these wavelets leads to the formation of quasi-two-dimensional disturbance waves. Using cross-correlation analysis, the loss and restoration of the two-dimensionality of the film surface with downstream distance were studied quantitatively. It was shown that all these processes occur closer to the inlet at higher gas velocities.
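As a hedged illustration of the cross-correlation analysis mentioned above, the sketch below estimates a wave velocity from two film-thickness records separated by a known streamwise distance; the signals, the probe spacing, and the wave shape are synthetic placeholders, with only the 10 kHz sampling rate taken from the abstract.

```python
# Hedged sketch of how a disturbance-wave velocity can be estimated from two
# film-thickness time records measured a known distance apart; the signals below
# are synthetic placeholders, not BBLIF data.
import numpy as np

fs = 10_000.0        # sampling frequency, Hz (matches the 10 kHz temporal resolution)
dx = 0.01            # streamwise separation of the two measurement points, m (assumed)
t = np.arange(0.0, 0.5, 1 / fs)

true_velocity = 2.0               # m/s, used only to build the synthetic signals
delay = dx / true_velocity        # time the wave needs to travel between the points
rng = np.random.default_rng(2)

def wave(tt):
    """A single wave front passing the probe, modelled as a Gaussian hump."""
    return np.exp(-((tt - 0.25) / 0.005) ** 2)

h1 = wave(t) + 0.02 * rng.normal(size=t.size)           # upstream film thickness
h2 = wave(t - delay) + 0.02 * rng.normal(size=t.size)   # downstream, delayed copy

# Cross-correlate and convert the lag of the correlation peak into a velocity.
xcorr = np.correlate(h2 - h2.mean(), h1 - h1.mean(), mode="full")
lag_samples = int(np.argmax(xcorr)) - (t.size - 1)
estimated_velocity = dx / (lag_samples / fs)
print(f"estimated wave velocity: {estimated_velocity:.2f} m/s")
```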

Keywords: annular flow, disturbance waves, entrainment, flow development

Procedia PDF Downloads 252
148 Innocent Victims and Immoral Women: Sex Workers in the Philippines through the Lens of Mainstream Media

Authors: Sharmila Parmanand

Abstract:

This paper examines dominant media representations of prostitution in the Philippines and interrogates sex workers’ interactions with the media establishment. This analysis of how sex workers are constituted in media, often as both innocent victims and immoral actors, contributes to an understanding of public discourse on sex work in the Philippines, where decriminalisation has recently been proposed and sex workers are currently classified as potential victims under anti-trafficking laws but also as criminals under the penal code. The first part is an analysis of media coverage of two prominent themes on prostitution: first, raid and rescue operations conducted by law enforcement; and second, prostitution on military bases and tourism hotspots. As a result of pressure from activists and international donors, these two themes often define the policy conversations on sex work in the Philippines. The discourses in written and televised news reports and documentaries from established local and international media sources that address these themes are explored through content analysis. Conclusions are drawn based on specific terms commonly used to refer to sex workers, how sex workers are seen as performing their cultural roles as mothers and wives, how sex work is depicted, associations made between sex work and public health, representations of clients and managers and ‘rescuers’ such as the police, anti-trafficking organisations, and faith-based groups, and which actors are presumed to be issue experts. Images of how prostitution is used as a metaphor for relations between the Philippines and foreign nations are also deconstructed, along with common tropes about developing world female subjects. In general, sex workers are simultaneously portrayed as bad mothers who endanger their family’s morality but also as long-suffering victims who endure exploitation for the sake of their children. They are also depicted as unclean, drug-addicted threats to public health. Their managers and clients are portrayed as cold, abusive, and sometimes violent, and their rescuers as moral and altruistic agents who are essential for sex workers’ rehabilitation and restoration as virtuous citizens. The second part explores sex workers’ own perceptions of their interactions with media, through interviews with members of the Philippine Sex Workers Collective, a loose organisation of sex workers around the Philippines. They reveal that they are often excluded by media practitioners and that they do not feel that they have space for meaningful self-revelation about their work when they do engage with journalists, who seem to have an overt agenda of depicting them as either victims or women of loose morals. In their assessment, media narratives do not necessarily reflect their lived experiences, and in some cases, coverage of rescues and raid operations endangers their privacy and instrumentalises their suffering. Media representations of sex workers may produce subject positions such as ‘victims’ or ‘criminals’ and legitimize specific interventions while foreclosing other ways of thinking. Further, in light of media’s power to reflect and shape public consciousness, it is a valuable academic and political project to examine whether sex workers are able to assert agency in determining how they are represented.

Keywords: discourse analysis, news media, sex work, trafficking

Procedia PDF Downloads 397
147 Multiparticulate SR Formulation of Dexketoprofen Trometamol by Wurster Coating Technique

Authors: Bhupendra G. Prajapati, Alpesh R. Patel

Abstract:

The aim of this research work is to develop a sustained-release multi-particulate dosage form of dexketoprofen trometamol, the pharmacologically active isomer of ketoprofen. Since the objective is to utilize the active enantiomer at a minimal dose and administration frequency, the development of an extended-release multi-particulate dosage form for better patient compliance was explored. Drug-loaded and sustained-release coated pellets were prepared on the fluidized bed coating principle in a Wurster coater. Microcrystalline cellulose (MCC) as core pellets, povidone as binder, and talc as anti-tacking agent were selected for drug loading, while Kollicoat SR 30D as sustained-release polymer, triethyl citrate as plasticizer, and micronized talc as anti-adherent were used in the sustained-release coating. The binder optimization trials for drug loading showed an increase in process efficiency with increasing binder concentration: povidone K30 concentrations of 5 and 7.5% w/w with respect to the drug amount gave more than 90% process efficiency, while a higher amount of rejects (agglomerates) was observed for the drug layering trial batch with 7.5% binder. For drug loading, the optimum povidone concentration was therefore selected as 5% of the drug substance quantity, since this trial showed good process feasibility and good adhesion of the drug onto the MCC pellets. A talc concentration of 2% w/w with respect to the total drug layering solids showed better anti-tacking properties, removing unwanted static charge and reducing agglomerate formation during spraying. The optimized drug-loaded pellets were coated for sustained release at levels from 16 to 28% w/w, and the results suggested that a 22% w/w coating weight gain is necessary to obtain the required drug release profile. Three critical process parameters of the Wurster sustained-release coating were further statistically optimized for the desired quality target product profile attributes, such as agglomerate formation, process efficiency, and drug release profile, using a central composite design (CCD) in Minitab software. The results show that the derived design space, consisting of 1.0 to 1.2 bar atomization air pressure, 7.8 to 10.0 g/min spray rate, and 29-34°C product bed temperature, gave the pre-defined drug product quality attributes. Scanning microscopy images also indicated that the optimized batch pellets had a very narrow particle size distribution and a smooth surface, which are ideal properties for a reproducible drug release profile. The study also showed that the optimized dexketoprofen trometamol pellet formulation retains its quality attributes when administered with a common vehicle, either a liquid (water) or a semisolid food (apple sauce). Conclusion: Sustained-release multi-particulates were successfully developed for dexketoprofen trometamol, which may improve the acceptability and palatability of the dosage form for better patient compliance.
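To make the design-of-experiments step tangible, a minimal sketch follows that builds a face-centered central composite design for the three coating parameters; the factor ranges are taken from the reported design space purely for illustration, while the face-centered geometry and the number of centre points are assumptions, since the actual design was generated in Minitab.

```python
# Hedged sketch of a face-centered central composite design (CCD) for the three
# Wurster coating parameters named in the abstract. Ranges are reused from the
# reported design space for illustration; point counts and geometry are assumed.
from itertools import product

# factor name: (low, high) actual values
factors = {
    "atomization_air_pressure_bar": (1.0, 1.2),
    "spray_rate_g_per_min": (7.8, 10.0),
    "product_bed_temperature_C": (29.0, 34.0),
}
names = list(factors)

def decode(coded):
    """Map coded levels (-1, 0, +1) to actual factor settings."""
    actual = []
    for name, level in zip(names, coded):
        low, high = factors[name]
        centre, half = (low + high) / 2, (high - low) / 2
        actual.append(round(centre + level * half, 3))
    return actual

runs = []
runs += list(product((-1, 1), repeat=3))          # 8 factorial (corner) points
for i in range(3):                                # 6 axial, face-centred points
    for level in (-1, 1):
        point = [0, 0, 0]
        point[i] = level
        runs.append(tuple(point))
runs += [(0, 0, 0)] * 3                           # centre points (replicate count assumed)

for coded in runs:
    print(coded, "->", dict(zip(names, decode(coded))))
print(f"total runs: {len(runs)}")
```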

Keywords: dexketoprofen trometamol, pellets, fluid bed technology, central composite design

Procedia PDF Downloads 136
146 Development of Mesoporous Gel Based Nonwoven Structure for Thermal Barrier Application

Authors: R. P. Naik, A. K. Rakshit

Abstract:

In recent years, with the rapid development of science and technology, people have increasing requirements for clothing with new functions, which creates opportunities for the further development and incorporation of new technologies along with novel materials. In this context, textiles act as media of fast heat loss or fast heat radiation as far as the comfort performance of textile articles is concerned. The microstructure and texture of textiles play a vital role in determining the heat-moisture comfort level of the human body, because clothing serves as a barrier to the outside environment and as a transporter of heat and moisture from the body to the surrounding environment, keeping a thermal balance between the body heat produced and the body heat lost. The main bottlenecks that prevent textile materials from succeeding as thermal insulation materials can be enumerated as follows. First, a high loft or bulkiness of the material is required to provide a predetermined amount of insulation by ensuring sufficient trapping of air. Second, the insulation is undermined by forced convection, and such convective heat loss cannot be prevented by the textile material alone. Third, textiles alone cannot reach a thermal conductivity lower than that of air (0.025 W/m·K); nanofibers can perhaps do so, but their mass production and cost-effectiveness remain a problem. Finally, such high-loft thermal insulation materials become heavy and difficult to manage, especially when they have to be carried on the body. The proposed work aims at developing lightweight, effective thermal insulation textiles in combination with nanoporous silica gel, which provides the fundamental basis for optimizing material properties to achieve good performance of the clothing system. A flexible nonwoven/SiO₂-gel composite fabric with an intact monolith was successfully developed by reinforcing a thermally bonded nonwoven fabric with SiO₂ gel via sol-gel processing. The ambient pressure drying method was chosen for silica gel preparation to allow cost-effective manufacturing. The structure of the nonwoven/SiO₂-gel composites was analyzed, and the transfer properties were measured. The effects of the structure and the fibre on the thermal properties of the SiO₂-gel composites were evaluated, and the samples were tested against untreated samples of the same GSM in order to study the effect of the SiO₂-gel application on various properties of the nonwoven fabric. The nonwoven fabric composites reinforced with aerogel, which showed an intact monolith structure, were also analyzed for their surface structure, the functional groups present, and their microscopic images. The developed product shows a significant reduction in pore size and air permeability compared with the conventional nonwoven fabric, and the composite made from polyester fibre with lower GSM shows the lowest thermal conductivity. The results obtained were statistically analyzed using STATISTICA-6 software for their level of significance: univariate tests of significance for the various parameters were carried out to obtain the p-values, and the regression summary for the dependent variable was also studied to obtain the correlation coefficients.

Keywords: silica-gel, heat insulation, nonwoven fabric, thermal barrier clothing

Procedia PDF Downloads 112
145 Prevalence and Molecular Characterization of Extended-Spectrum β-Lactamase- and Carbapenemase-Producing Enterobacterales from Tunisian Seafood

Authors: Mehdi Soula, Yosra Mani, Estelle Saras, Antoine Drapeau, Raoudha Grami, Mahjoub Aouni, Jean-Yves Madec, Marisa Haenni, Wejdene Mansour

Abstract:

Multi-resistance to antibiotics in gram-negative bacilli, and particularly in Enterobacteriaceae, has become frequent in hospitals in Tunisia. However, data on antibiotic-resistant bacteria in aquatic products are scarce. The aims of this study are to estimate the proportion of ESBL- and carbapenemase-producing Enterobacterales in seafood (clams and fish) in Tunisia and to characterize the collected isolates at the molecular level. Two types of seafood were sampled in unrelated markets in four different regions of Tunisia: 641 pieces of farmed fish and 1075 Mediterranean clams divided into 215 pools of 5 pieces each. Once purchased, all samples were incubated in tubes containing peptone salt broth for 24 to 48 h at 37°C. After incubation, overnight cultures were isolated on selective MacConkey agar plates supplemented with either imipenem or cefotaxime, identified using API 20E test strips (bioMérieux, Marcy-l’Étoile, France), and confirmed by MALDI-TOF MS. Antimicrobial susceptibility was determined by the disk diffusion method on Mueller-Hinton agar plates, and the results were interpreted according to CA-SFM 2021. ESBL-producing Enterobacterales were detected using the Double Disc Synergy Test (DDST). Carbapenem resistance was detected using an ertapenem disk and confirmed using the ROSCO KPC/MBL and OXA-48 Confirm Kits (ROSCO Diagnostica, Taastrup, Denmark). DNA was extracted using a NucleoSpin Microbial DNA extraction kit (Macherey-Nagel, Hoerdt, France) according to the manufacturer’s instructions. Resistance genes were determined using the CGE online tools, and the replicon content and plasmid formulae were identified from the WGS data using PlasmidFinder 2.0.1 and pMLST 2.0. From farmed fish, nine ESBL-producing strains (9/641, 1.4%) were isolated and identified as E. coli (n=6) and K. pneumoniae (n=3). Among the 215 pools of 5 clams analyzed, 18 ESBL-producing isolates were identified, including 14 E. coli and 4 K. pneumoniae, corresponding to a low isolation rate of 1.6% (18/1075). In fish, the ESBL phenotype was due to the presence of the blaCTX-M-15 gene in all nine isolates, and no carbapenemase gene was identified. In clams, the predominant ESBL determinant was blaCTX-M-1 (n=6/18), and carbapenemase genes (blaNDM-1 and blaOXA-48) were detected in only 3 isolates, all K. pneumoniae. Replicon typing of the strains carrying the ESBL and carbapenemase genes revealed that the major plasmid type carrying ESBL genes was IncF (42.3%, n=11/26). In all, our results suggest that seafood can be a reservoir of multi-drug resistant bacteria, most probably of human origin but also favored by antibiotic selection pressure. These findings raise the concern that seafood bought for consumption may serve as a potential reservoir of AMR genes and pose a serious threat to public health.

Keywords: ESBL, carbapenemase, Enterobacterales, Tunisian seafood

Procedia PDF Downloads 110
144 International Trade, Manufacturing and Employment: The First Two Decades of South African Democracy

Authors: Phillip F. Blaauw, Anna M. Pretorius

Abstract:

South Africa re-entered the international economy in the early 1990s, after Apartheid, at a time when globalisation was gathering momentum. Globalisation led to a more open economy, increased export volumes, and a changed export mix, with manufactured goods gaining ground relative to mining products. After 21 years of democracy, South African researchers and policymakers need to evaluate the impact of international trade on the level of employment and the compensation of employees in the South African manufacturing industry. This is important given the consistently high levels of unemployment in South Africa, and this evaluation is the aim of the paper. Two complementary approaches are utilised. First, the 27 subdivisions of the South African manufacturing industry are classified according to capital/labour ratios, and trends in employment levels and employee compensation for these categories are identified by comparing levels in 1995 with those in 2014. The supplementary empirical approach consists of cross-sectional and panel data regressions for the same period, with the aim of explaining the observed changes in employment and employee compensation levels between 1995 and 2014. The first part of the empirical approach revealed that, over the 20-year period, the intermediate capital intensive, labour intensive and ultra-labour intensive manufacturing industries all showed massive declines in overall employment. Only three of the 19 industries in these classifications showed marginal overall employment gains, and the only meaningful gains were recorded in three of the eight capital intensive manufacturing industries. The overall performance of the South African manufacturing industry is therefore dismal at best. The same picture emerges for the skilled section of the intermediate capital intensive, labour intensive and ultra-labour intensive manufacturing industries: 18 of the 19 industries displayed declines even for the skilled section of the labour force. The formal regression analysis supplements these results. Real production growth is a statistically significant (95 per cent confidence level) explanatory variable of the overall employment level for the period under consideration, albeit with a small positive coefficient. The variables with the most significant negative relationship with changes in overall employment were the dummy variables for intermediate capital intensive and labour intensive manufacturing goods. Disaggregating the overall changes in employment further by skill level revealed that skilled employment in particular responded negatively to increases in the ratio of imported to local inputs for manufacturing. The dummy variable for the labour intensive sectors remained negative and statistically significant, indicating that the labour intensive sectors of South African manufacturing remain vulnerable to the loss of employment opportunities. Whereas the first period (1995 to 2001) after the opening of the South African economy brought positive changes for skilled employment, continued increases in imported inputs displaced some of the skilled labour as well, putting further pressure on an economy with already high and persistent unemployment. Given the negative outlook for the world commodity cycle and a stagnant local manufacturing sector, the challenge for policymakers has become even more pronounced after South Africa’s political coming of age.
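As a hedged sketch of the kind of regression described above (not the authors' dataset or exact specification), the following example regresses a synthetic employment-change series on real production growth, the imported-to-local input ratio, and capital/labour-intensity dummies.

```python
# Hedged sketch (synthetic data, not the authors' dataset) of the kind of regression
# described in the abstract: change in manufacturing employment explained by real
# production growth, the imported-to-local input ratio and intensity-class dummies.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 27 * 20                              # 27 sub-industries, 20 annual observations
intensity = rng.choice(["capital", "intermediate", "labour", "ultra_labour"], size=n)
production_growth = rng.normal(2.0, 3.0, size=n)          # % per year (assumed)
import_input_ratio = rng.uniform(0.1, 0.8, size=n)        # imported/local inputs (assumed)

# Synthetic employment change built only to mimic the reported signs of the effects.
employment_change = (0.3 * production_growth
                     - 4.0 * import_input_ratio
                     - 2.0 * np.isin(intensity, ["intermediate", "labour"]).astype(float)
                     + rng.normal(0, 2.5, size=n))

df = pd.DataFrame({"employment_change": employment_change,
                   "production_growth": production_growth,
                   "import_input_ratio": import_input_ratio,
                   "intensity": intensity})

model = smf.ols("employment_change ~ production_growth + import_input_ratio + C(intensity)",
                data=df).fit()
print(model.summary().tables[1])   # coefficient estimates and p-values
```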

Keywords: capital/labour ratios, employment, employee compensation, manufacturing

Procedia PDF Downloads 221