Search results for: collaborative problem solving
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8473

343 SME Internationalisation and Its Financing: An Exploratory Study That Analyses Government Support and Funding Mechanisms for Irish and Scottish International SMEs

Authors: L. Spencer, S. O’ Donohoe

Abstract:

Much of the research to date on internationalisation relates to large firms, with much less known about how small and medium-sized enterprises (SMEs) engage in internationalisation. Given the crucial role of SMEs in contributing to economic growth, there is now an emphasis on the need for SMEs to internationalise. Yet little is known about how SMEs undertake and finance such expansion and whether or not internationalisation actually hinders or helps them in securing finance. The purpose of this research is to explore the internationalisation process for SMEs, the sources of funding used in financing this expansion and the support received from state agencies in assisting their overseas expansion. A conceptual framework has been devised which marries the two strands of literature together (internationalisation and financing the firm). The exploratory nature of this research dictates that the most appropriate methodology was to use semi-structured interviews with SME owners, bank representatives and support agencies. In essence, a triangulated approach to the research problem facilitates assessment of the perceptions and experiences of firms, the state and the financial institutions. Our sample is drawn from SMEs operating in Ireland and Scotland, two small but very open economies where SMEs are the dominant form of organisation. The sample includes a range of industry sectors. Key findings to date suggest some SMEs are born global; others are born-again global, whilst a significant cohort can be classed as traditional internationalisers. Unsurprisingly, there is a strong industry effect, with firms in the high-tech sector more likely to be faster internationalisers than those in the traditional manufacturing sectors. Owner-managers' own funds are deemed key to financing initial internationalisation, lending support to the financial growth life cycle model, albeit more important for the faster internationalisers than for the slower cohort, who are more likely to deploy external sources, especially bank finance. Retained earnings remain the predominant source of ongoing financing for internationalising firms, but trade credit is often used and invoice discounting is utilised quite frequently. In terms of lending, asset-based lending backed by personal guarantees appears paramount for securing bank finance. Whilst a lack of diversified sources of funding for internationalising SMEs was found in both jurisdictions, there appears to be no evidence to suggest that internationalisation impedes firms in securing finance. Finally, state supports were cited as important to the internationalisation process; in particular, those provided by Enterprise Ireland were deemed very valuable. Considering the paucity of studies to date on SME internationalisation, and in particular the funding mechanisms deployed, this study seeks to contribute to the body of knowledge in both the international business and finance disciplines.

Keywords: funding, government support, international pathways, modes of entry

Procedia PDF Downloads 245
342 Exploring Disengaging and Engaging Behavior of Doctoral Students

Authors: Salome Schulze

Abstract:

The delay of students in completing their dissertations is a worldwide problem. At the University of South Africa where this research was done, only about a third of the students complete their studies within the required period of time. This study explored the reasons why the students interrupted their studies, and why they resumed their research at a later stage. If this knowledge could be utilised to improve the throughput of doctoral students, it could have significant economic benefits for institutions of higher education while at the same time enhancing their academic prestige. To inform the investigation, attention was given to key theories concerning the learning of doctoral students, namely the situated learning theory, the social capital theory and the self-regulated learning theory, based on the social cognitive theory of learning. Ten students in the faculty of Education were purposefully selected on the grounds of their poor progress, or of having been in the system for too long. The collection of the data was in accordance with a Finnish study, since the two studies had the same aims, namely to investigate student engagement and disengagement. Graphic elicitation interviews, based on visualisations, were considered appropriate for collecting the data. This method could stimulate the reflection and recall of the participants’ ‘stories’ with very little input from the interviewer. The interviewees were requested to visualise, on paper, their journeys as doctoral students from the time when they first registered. They were to indicate the significant events that occurred and which facilitated their engagement or disengagement. In the interviews that followed, they were requested to elaborate on these motivating or challenging events by explaining when and why they occurred, and what prompted them to resume their studies. The interviews were tape-recorded and transcribed verbatim. Information-rich data were obtained containing visual metaphors. The data indicated that when the students suffered a period of disengagement, it was sometimes related to a lack of self-regulated learning, in particular, a lack of autonomy, and the inability to manage their time effectively. When the students felt isolated from the academic community of practice, disengagement also occurred. This included poor guidance by their supervisors, which accordingly deprived them of significant social capital. The study also revealed that situational factors at home or at work were often the main reasons for the students’ procrastinating behaviour. The students, however, remained in the system. They were motivated towards a renewed engagement with their studies if they were self-regulated learners, and if they felt a connectedness with the academic community of practice because of positive relationships with their supervisors and participation in the activities of the community (e.g., workshops or conferences). In support of their learning, networking with significant others who were sources of information provided the students with the necessary social capital. Generally, institutions of higher education cannot address the students’ personal issues directly, but they can deal with key institutional factors in order to improve the throughput of doctoral students. It is also suggested that graphic elicitation interviews be used more often in social research that investigates the learning and development of the students.

Keywords: doctoral students, engaging and disengaging experiences, graphic elicitation interviews, student procrastination

Procedia PDF Downloads 193
341 Stress Perception, Social Supports and Family Function among Military Inpatients with Adjustment Disorders in Taiwan

Authors: Huey-Fang Sun, Wei-Kai Weng, Mei-Kuang Chao, Hui-Shan Hsu, Tsai-Yin Shih

Abstract:

Psycho-social stress is an important factor in mental illness, and the presence of emotional and behavioral symptoms in response to an identifiable event is the central feature of adjustment disorders. However, whether patients with adjustment disorders have been raised in families with poor family function and social support and have higher stress perception than their peers who experienced a similar stressful environment remains unknown. The specific aims of the study were to investigate the correlation among family function, social support and the level of stress perception, and to test the hypothesis that military patients with adjustment disorders would have lower family function, lower social support and higher stress perception than healthy colleagues recruited in the same cohort for military service, given their common exposure to similar stressful environments. Methods: The study was conducted in four hospitals in the northern part of Taiwan from July 1, 2015 to June 30, 2017, and a matched case-control design was used. Potential patient participants were psychiatric inpatients who served in the military during the study period and met the diagnosis of adjustment disorders. Patients who had previously been admitted to a psychiatric ward or who were illiterate were excluded. A healthy military control sample matched by the same military service unit, gender, and recruited cohort was invited to participate in the study as well. In total, 74 participants (37 patients and 37 controls) completed the consent forms and filled out the research questionnaires. Questionnaires used in the study included the Perceived Stress Scale (PSS) as a measure of stress perception, the Family APGAR as a measure of family function, and the Multidimensional Scale of Perceived Social Support (MSPSS) as a measure of social support. Pearson correlation analysis and t-tests were applied for statistical analysis. Results: The analysis showed that the PSS level was significantly negatively correlated with the three social support subscales (family subscale, r = -.37, P < .05; friend subscale, r = -.38, P < .05; significant other subscale, r = -.39, P < .05). A negative correlation between PSS level and Family APGAR only reached a borderline significant level (P = .06). The t-test results for PSS scores, Family APGAR levels, and the three subscale scores of the MSPSS between patient and control participants were all significantly different (P < .001, P < .05, P < .05, P < .05, P < .05, respectively), and the patient participants had higher stress perception scores, lower social support and lower family function scores than the healthy controls. Conclusions: Our study suggested that family function and social support were negatively correlated with patients’ subjective stress perception. Military patients with adjustment disorders tended to have higher stress perception and lower family function and social support than military peers who remained healthy and continued serving in their units.

Keywords: adjustment disorders, family function, social support, stress perception

Procedia PDF Downloads 193
340 Geomorphology and Flood Analysis Using Light Detection and Ranging

Authors: George R. Puno, Eric N. Bruno

Abstract:

The natural landscape of the Philippine archipelago, combined with the current realities of climate change, makes the country vulnerable to flood hazards. Flooding has become a recurring natural disaster in the country, resulting in loss of lives and property. The Musimusi is among the rivers that have exhibited inundation, particularly in the inhabited floodplain portion of the watershed. During such events, rescue operations and the distribution of relief goods become a problem due to the lack of high-resolution flood maps to help the local government unit identify the most affected areas. In the attempt to minimize the impact of flooding, hydrologic modelling with high-resolution mapping is becoming more challenging and important. This study focused on the analysis of flood extent as a function of different geomorphologic characteristics of the Musimusi watershed. The methods include the delineation of morphometric parameters in the Musimusi watershed using Geographic Information System (GIS) and geometric calculation tools. A Digital Terrain Model (DTM), one of the derivatives of Light Detection and Ranging (LiDAR) technology, was used to determine the extent of river inundation, involving the application of the Hydrologic Engineering Center River Analysis System (HEC-RAS) and Hydrologic Modeling System (HEC-HMS) models. The digital elevation model (DEM) from Synthetic Aperture Radar (SAR) was used to delineate the watershed boundary and river network. Datasets like mean sea level, river cross section, river stage, discharge and rainfall were also used as input parameters. Curve number (CN), vegetation, and soil properties were calibrated based on the existing condition of the site. Results showed that the drainage density of the watershed is low, which indicates highly permeable subsoil and thick vegetative cover. The watershed’s elongation ratio value of 0.9 implies that the floodplain portion of the watershed is susceptible to flooding. The bifurcation ratio value of 2.1 indicates a higher risk of flooding in localized areas of the watershed. The circularity ratio value (1.20) indicates that the basin is circular in shape, with high runoff discharge and low subsoil permeability. The heavy rainfall of 167 mm brought by Typhoon Seniang on December 29, 2014 was characterized as high intensity and long duration and, with a return period of 100 years, produced an outflow of 316 m³s⁻¹. A portion of the floodplain zone (1.52%) suffered inundation, with a maximum depth of 2.76 m. The information generated in this study is helpful to the local disaster risk reduction management council in monitoring the affected sites for more appropriate decisions so that the cost of rescue operations and relief goods distribution is minimized.
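
As an illustration of the morphometric indices named in this abstract (drainage density, elongation ratio, bifurcation ratio, circularity ratio), the sketch below computes them from basin measurements using the standard definitions. The input values are placeholders, not the Musimusi watershed data.

```python
# Illustrative computation of common basin morphometric indices; input values
# are placeholders, not the Musimusi data reported in the abstract.
import math

def drainage_density(total_stream_length_km, basin_area_km2):
    """Total stream length per unit basin area (km/km^2)."""
    return total_stream_length_km / basin_area_km2

def elongation_ratio(basin_area_km2, basin_length_km):
    """Schumm: diameter of a circle with the basin area divided by basin length."""
    return 2.0 * math.sqrt(basin_area_km2 / math.pi) / basin_length_km

def mean_bifurcation_ratio(stream_counts_by_order):
    """Mean of N_u / N_{u+1} over successive stream orders."""
    ratios = [stream_counts_by_order[i] / stream_counts_by_order[i + 1]
              for i in range(len(stream_counts_by_order) - 1)]
    return sum(ratios) / len(ratios)

def circularity_ratio(basin_area_km2, basin_perimeter_km):
    """Miller: 4*pi*A / P^2 (equals 1.0 for a perfectly circular basin)."""
    return 4.0 * math.pi * basin_area_km2 / basin_perimeter_km ** 2

if __name__ == "__main__":
    print(drainage_density(total_stream_length_km=85.0, basin_area_km2=60.0))
    print(elongation_ratio(basin_area_km2=60.0, basin_length_km=10.0))
    print(mean_bifurcation_ratio([34, 16, 8, 4, 2]))
    print(circularity_ratio(basin_area_km2=60.0, basin_perimeter_km=35.0))
```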

Keywords: flooding, geomorphology, mapping, watershed

Procedia PDF Downloads 230
339 Reducing the Computational Cost of a Two-way Coupling CFD-FEA Model via a Multi-scale Approach for Fire Determination

Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Kevin Tinkham, Ella Quigley

Abstract:

Structural integrity is a key performance parameter for cladding products, especially concerning fire performance. Cladding products such as PIR-based sandwich panels are tested rigorously, in line with industrial standards. Physical fire tests are necessary to ensure the customer's safety but can give little information about critical behaviours that can help develop new materials. Numerical modelling is a tool that can help investigate a fire's behaviour further by replicating the fire test. However, fire is an interdisciplinary problem: it is a chemical reaction that behaves as a fluid and impacts structural integrity. An analysis using Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) is needed to capture all aspects of a fire performance test. One method is a two-way coupling analysis that imports the updated changes in thermal data, due to the fire's behaviour, to the FEA solver in a series of iterations. In our recent work with Tata Steel U.K. using a two-way coupling methodology to determine fire performance, it has been shown that a program called FDS-2-Abaqus can predict a BS 476-22 furnace test with a degree of accuracy. The test demonstrated the fire performance of Tata Steel U.K.'s Trisomet product, a polyisocyanurate (PIR) based sandwich panel used for cladding. Previous works demonstrated the limitations of the current version of the program, the main limitation being the computational cost of modelling three Trisomet panels, totalling an area of 9 m². The computational cost increases substantially with the intention to scale up to an LPS 1181-1 test, which includes a total panel surface area of 200 m². The FDS-2-Abaqus program is developed further within this paper to overcome this obstacle and better accommodate Tata Steel U.K. PIR sandwich panels. The new developments aim to reduce the computational cost and error margin compared to experimental data. One avenue explored is a multi-scale approach in the form of Reduced Order Modeling (ROM). The approach allows the user to include refined details of the sandwich panels, such as the overlapping joints, without a computationally costly mesh size. Comparative studies will be made between the new implementations and the previous study completed using the original FDS-2-Abaqus program. Validation of the study will come from physical experiments in line with governing body standards such as BS 476-22 and LPS 1181-1. The physical experimental data includes the panels' gas and surface temperatures and mechanical deformation. Conclusions are drawn, noting the impact of the new implementations and discussing the feasibility of scaling up further to a whole warehouse.
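
The abstract names a multi-scale ROM approach but does not state which reduction technique is used in FDS-2-Abaqus; a common choice is proper orthogonal decomposition (POD) of field snapshots. The sketch below is only an illustration of that general idea under this assumption, with synthetic snapshot data standing in for coupled thermal fields.

```python
# Minimal POD sketch, assuming snapshot-based reduction of a thermal field;
# not the authors' implementation. Snapshot data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical snapshot matrix: each column is the nodal temperature field of
# one coupling iteration / time step (n_nodes x n_snapshots).
n_nodes, n_snapshots = 5000, 40
snapshots = rng.normal(size=(n_nodes, n_snapshots))

# Centre the data and take the thin SVD; left singular vectors are POD modes.
mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean_field, full_matrices=False)

# Keep enough modes to capture, e.g., 99% of the snapshot energy.
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1
basis = U[:, :r]                      # reduced basis (n_nodes x r)

# A full-order field is approximated by its projection onto the reduced basis,
# so the coupled solvers only exchange r coefficients instead of n_nodes values.
new_field = rng.normal(size=(n_nodes, 1))
coeffs = basis.T @ (new_field - mean_field)
reconstruction = mean_field + basis @ coeffs
print(r, np.linalg.norm(new_field - reconstruction) / np.linalg.norm(new_field))
```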

Keywords: fire testing, numerical coupling, sandwich panels, thermo fluids

Procedia PDF Downloads 79
338 Acute Antihyperglycemic Activity of a Selected Medicinal Plant Extract Mixture in Streptozotocin Induced Diabetic Rats

Authors: D. S. N. K. Liyanagamage, V. Karunaratne, A. P. Attanayake, S. Jayasinghe

Abstract:

Diabetes mellitus is an ever-increasing global health problem which causes disability and untimely death. Current treatments using synthetic drugs have caused numerous adverse effects as well as complications, leading to research efforts in search of safe and effective alternative treatments for diabetes mellitus. Even though there are traditional Ayurvedic remedies which are effective, due to a lack of scientific exploration, they have not been proven to be beneficial for common use. Hence, the aim of this study was to evaluate a traditional remedy made of a mixture of plant components, namely leaves of Murraya koenigii L. Spreng (Rutaceae), cloves of Allium sativum L. (Amaryllidaceae), fruits of Garcinia quaesita Pierre (Clusiaceae) and seeds of Piper nigrum L. (Piperaceae), used for the treatment of diabetes. We report herein the preliminary results of the in vivo study of the anti-hyperglycaemic activity of the extracts of the above plant mixture in Wistar rats. A mixture made of equal weights (100 g) of the above-mentioned medicinal plant parts was extracted into cold water, hot water (3 h reflux) and a water:acetone mixture (1:1) separately. Male Wistar rats were divided into six groups that received different treatments. Diabetes mellitus was induced by intraperitoneal administration of streptozotocin at a dose of 70 mg/kg in groups two, three, four, five and six. Group one (N=6) served as the healthy untreated control and group two (N=6) as the diabetic untreated control; both groups received distilled water. Cold water, hot water, and water:acetone plant extracts were orally administered to diabetic rats in groups three, four and five, respectively, at doses of 0.5 g/kg (n=6), 1.0 g/kg (n=6) and 1.5 g/kg (n=6) for each group. Glibenclamide (0.5 mg/kg) was administered to diabetic rats in group six (N=6), which served as the positive control. The acute anti-hyperglycemic effect was evaluated over a four-hour period using the total area under the curve (TAUC) method. The results of the test groups of rats were compared with the diabetic untreated control. The TAUC values of healthy and diabetic rats were 23.16 ± 2.5 mmol/L·h and 58.31 ± 3.0 mmol/L·h, respectively. A significant dose-dependent improvement in acute anti-hyperglycaemic activity was observed for the water:acetone extract (25%), hot water extract (20%), and cold water extract (15%) compared to the diabetic untreated control rats in terms of glucose tolerance (P < 0.05). Therefore, the results suggest that the plant mixture has a potent antihyperglycemic effect, thus validating its use in Ayurvedic medicine for the management of diabetes mellitus. Future studies will be focused on the determination of the long-term in vivo anti-diabetic mechanisms and isolation of the bioactive compounds responsible for the anti-diabetic activity.
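
For readers unfamiliar with the TAUC measure used above, the sketch below computes a total area under a glucose-time curve by the trapezoidal rule. The time points and glucose values are hypothetical, not the study data.

```python
# Minimal sketch of a total-area-under-the-curve (TAUC) calculation for a
# glucose tolerance curve; values below are hypothetical placeholders.
import numpy as np

time_h = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])          # hours after glucose load
glucose_mmol_l = np.array([5.2, 9.8, 8.6, 7.1, 6.0, 5.5])  # blood glucose (mmol/L)

# Trapezoidal rule: sum of mean segment heights times segment widths -> mmol/L*h
tauc = float(np.sum((glucose_mmol_l[1:] + glucose_mmol_l[:-1]) / 2.0 * np.diff(time_h)))
print(f"TAUC = {tauc:.2f} mmol/L*h")
```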

Keywords: acute antihyperglycemic activity, herbal mixture, oral glucose tolerance test, Sri Lankan medicinal plant extracts

Procedia PDF Downloads 179
337 Microgrid Design Under Optimal Control With Batch Reinforcement Learning

Authors: Valentin Père, Mathieu Milhé, Fabien Baillon, Jean-Louis Dirion

Abstract:

Microgrids offer potential solutions to meet the need for local grid stability and increase the autonomy of isolated networks through the integration of intermittent renewable energy production and storage facilities. In such a context, sizing production and storage for a given network is a complex task, highly dependent on input data such as the power load profile and renewable resource availability. This work aims at developing an operating cost computation methodology for different microgrid designs based on the use of deep reinforcement learning (RL) algorithms to tackle the optimal operation problem in stochastic environments. RL is a data-based sequential decision control method based on Markov decision processes that enables the consideration of random variables for control at a chosen time scale. Agents trained via RL constitute a promising class of Energy Management Systems (EMS) for the operation of microgrids with energy storage. Microgrid sizing (or design) is generally performed by minimizing investment costs and operational costs arising from the EMS behavior. The latter might include economic aspects (power purchase, facilities aging), social aspects (load curtailment), and ecological aspects (carbon emissions). Sizing variables are related to major constraints on the optimal operation of the network by the EMS. In this work, an islanded-mode microgrid is considered. Renewable generation is done with photovoltaic panels; an electrochemical battery ensures short-term electricity storage. The controllable unit is a hydrogen tank that is used as a long-term storage unit. The proposed approach focuses on the transfer of agent learning for the near-optimal operating cost approximation with deep RL for each microgrid size. Like most data-based algorithms, the training step in RL requires substantial computation time. The objective of this work is thus to study the potential of Batch-Constrained Q-learning (BCQ) for the optimal sizing of microgrids and especially to reduce the computation time of operating cost estimation in several microgrid configurations. BCQ is an off-line RL algorithm that is known to be data-efficient and can learn better policies than on-line RL algorithms on the same buffer. The general idea is to use the learned policies of agents trained in similar environments to constitute a buffer. The latter is used to train BCQ, and thus the agent learning can be performed without updates during interaction sampling. A comparison between online RL and the presented method is performed based on the score per environment and on the computation time.
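
To make the batch-constrained idea behind BCQ concrete, the toy sketch below restricts the Bellman backup to actions actually observed in the offline buffer for each state, in a tabular setting. The paper itself uses a deep BCQ variant on microgrid operation data; the states, actions and transitions here are synthetic placeholders.

```python
# Toy tabular illustration of batch-constrained Q-learning: the backup only
# maximises over actions seen in the buffer for that state. Not the authors'
# deep implementation; all data below are synthetic.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)
n_states, n_actions = 10, 4
gamma, lr, n_epochs = 0.95, 0.1, 200

# Synthetic offline buffer of (s, a, r, s') transitions, e.g. gathered from
# EMS agents trained on similar microgrid configurations.
buffer = [(int(rng.integers(n_states)), int(rng.integers(n_actions)),
           float(rng.normal()), int(rng.integers(n_states))) for _ in range(2000)]

# Actions seen in the buffer for each state define the allowed action set.
seen = defaultdict(set)
for s, a, _, _ in buffer:
    seen[s].add(a)

Q = np.zeros((n_states, n_actions))
for _ in range(n_epochs):
    for s, a, r, s2 in buffer:
        allowed = list(seen[s2]) or list(range(n_actions))  # fall back if unseen
        target = r + gamma * max(Q[s2, b] for b in allowed)
        Q[s, a] += lr * (target - Q[s, a])

# Greedy policy restricted to in-buffer actions.
policy = {s: max(acts, key=lambda a: Q[s, a]) for s, acts in seen.items() if acts}
print(policy)
```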

Keywords: batch-constrained reinforcement learning, control, design, optimal

Procedia PDF Downloads 123
336 Influence of Counter-Face Roughness on the Friction of Bionic Microstructures

Authors: Haytam Kasem

Abstract:

The problem of quick and easily reversible attachment has become of great importance in different fields of technology. For this reason, during the last decade, a new field of adhesion science has emerged. It is essentially inspired by animals and insects which, during their natural evolution, have developed remarkable biological attachment systems allowing them to adhere to and run on walls and ceilings with uneven surfaces. Potential applications of engineering bio-inspired solutions include climbing robots, handling systems for wafers in nanofabrication facilities, and mobile sensor platforms, to name a few. However, despite the efforts to apply bio-inspired patterned adhesive surfaces to the biomedical field, they are still in the early stages compared with their conventional uses in the other industries mentioned above. In fact, there are some critical issues that still need to be addressed for the wide usage of bio-inspired patterned surfaces as advanced biomedical platforms. For example, the surface durability and long-term stability of surfaces with high adhesive capacity should be improved, as should the friction and adhesion capacities of these bio-inspired microstructures when contacting rough surfaces. One of the well-known prototypes for bio-inspired attachment systems is the biomimetic wall-shaped hierarchical microstructure for gecko-like attachment. Although the physical background of these attachment systems is widely understood, the influence of counter-face roughness and its relationship with the friction force generated when sliding against a wall-shaped hierarchical microstructure have yet to be fully analyzed and understood. To elucidate the effect of counter-face roughness on the friction of the biomimetic wall-shaped hierarchical microstructure, we replicated the isotropic topography of 12 different surfaces using replicas made of the same epoxy material. The different counter-faces were fully characterized under a 3D optical profilometer to measure roughness parameters. The friction forces generated by the spatula-shaped microstructure in contact with the tested counter-faces were measured on a home-made tribometer and compared with the friction forces generated by the spatulae in contact with a smooth reference. It was found that classical roughness parameters, such as average roughness Ra and others, could not be utilized to explain topography-related variation in friction force. This led us to develop an integrated roughness parameter obtained by combining several parameters: the mean asperity radius of curvature (R), the asperity density (η), the deviation of asperity heights (σ) and the mean asperity slope angle (SDQ). This new integrated parameter is capable of explaining the variation in the friction measurement results. Based on the experimental results, we also developed and validated an analytical model to predict the variation of the friction force as a function of the counter-face roughness parameters and the applied normal load.
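
As an illustration of the asperity-level descriptors combined in the integrated parameter (mean asperity radius of curvature, asperity density, deviation of asperity heights, mean slope), the sketch below extracts analogous quantities from a one-dimensional synthetic profile. The study used 3D profilometry and its own combination rule, neither of which is reproduced here.

```python
# Sketch of extracting asperity-level roughness descriptors from a 1-D profile;
# the profile is synthetic and the method is illustrative only.
import numpy as np

rng = np.random.default_rng(2)
dx = 1e-6                                   # sampling step (m)
x = np.arange(0, 2e-3, dx)
z = 1e-6 * np.convolve(rng.normal(size=x.size), np.ones(25) / 25, mode="same")

# Asperities are taken as strict local maxima of the profile.
peaks = np.flatnonzero((z[1:-1] > z[:-2]) & (z[1:-1] > z[2:])) + 1

eta = peaks.size / (x[-1] - x[0])           # asperity density (1/m)
sigma = np.std(z[peaks])                    # deviation of asperity heights (m)

# Radius of curvature at each peak from the discrete second derivative.
z2 = (z[peaks - 1] - 2 * z[peaks] + z[peaks + 1]) / dx**2
R = np.mean(1.0 / np.abs(z2))               # mean asperity radius (m)

slope_rms = np.sqrt(np.mean(np.gradient(z, dx) ** 2))   # RMS profile slope

print(f"eta={eta:.3e} 1/m, sigma={sigma:.3e} m, R={R:.3e} m, slope={slope_rms:.3e}")
```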

Keywords: friction, bio-mimetic micro-structure, counter-face roughness, analytical model

Procedia PDF Downloads 239
335 Study on Electromagnetic Plasma Acceleration Using Rotating Magnetic Field Scheme

Authors: Takeru Furuawa, Kohei Takizawa, Daisuke Kuwahara, Shunjiro Shinohara

Abstract:

In the field of space propulsion, electric propulsion systems have been developed because their fuel efficiency is much higher than that of conventional chemical ones. However, practical electric propulsion systems, e.g., ion engines, suffer from short lifetimes due to damage to the plasma generation and acceleration electrodes. A helicon plasma thruster is proposed as a long-lifetime electric thruster whose electrodes are not in direct contact with the plasma. In this system, both the generation and the acceleration of a dense plasma are executed by antennas from outside the discharge tube. Development of the helicon plasma thruster has been conducted under the Helicon Electrodeless Advanced Thruster (HEAT) project. Our helicon plasma thruster has two important processes. First, we generate a dense source plasma using a helicon wave with an excitation frequency between the ion and electron cyclotron frequencies, fci and fce, respectively, applied from outside the discharge using a radio frequency (RF) antenna. The helicon plasma source can provide a high density (~10¹⁹ m⁻³), a high ionization ratio (up to several tens of percent), and a high particle generation efficiency. Second, in order to achieve high thrust and specific impulse, we accelerate the dense plasma by the axial Lorentz force fz given by the product of the induced azimuthal current jθ and the static radial magnetic field Br, expressed as fz = jθ Br. The HEAT project has proposed several kinds of electrodeless acceleration schemes, and in our particular case, a Rotating Magnetic Field (RMF) method has been extensively studied. The RMF scheme was originally developed as a concept to maintain the Field Reversed Configuration (FRC) in magnetically confined fusion research. Here, RMF coils are expected to generate jθ due to a nonlinear effect, as described below. First, the rotating magnetic field Bω is generated by two pairs of RMF coils with AC currents, which have a phase difference of 90 degrees between the pairs. Due to Faraday's law, an axial electric field is induced. Second, an axial current is generated through Ohm's law by the effects of electron-ion and electron-neutral collisions. Third, an azimuthal electric field is generated by the nonlinear term, together with a retarding torque generated again by the collision effects. Then, the azimuthal current jθ is generated as jθ = −nₑ e r (2π fRMF). Finally, the axial Lorentz force fz for plasma acceleration is generated. Here, jθ is proportional to nₑ and to the RMF coil current frequency fRMF when Bω fully penetrates the plasma. Our previous study achieved a 19% increase in ion velocity using a 5 MHz, 50 A RMF coil power supply. In this presentation, we will show the improvement in ion velocity using a lower frequency and higher current supplied by the RMF power supply. In conclusion, helicon high-density plasma production and electromagnetic acceleration by the RMF scheme, under the concept of an electrodeless condition, have been successfully executed.
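
For reference, the acceleration relations stated in the abstract can be written compactly as follows (a restatement of the quantities defined above, not a new derivation):

```latex
% Axial Lorentz force density from the induced azimuthal current and the
% static radial field, with the azimuthal current driven at the RMF frequency.
\begin{align}
  f_z      &= j_\theta \, B_r, \\
  j_\theta &= -\, n_e \, e \, r \,\omega_{\mathrm{RMF}}
            = -\, n_e \, e \, r \,\bigl(2\pi f_{\mathrm{RMF}}\bigr).
\end{align}
```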

Keywords: electric propulsion, electrodeless thruster, helicon plasma, rotating magnetic field

Procedia PDF Downloads 261
334 An Improved Atmospheric Correction Method with Diurnal Temperature Cycle Model for MSG-SEVIRI TIR Data under Clear Sky Condition

Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yonggang Qian, Ning Wang

Abstract:

Knowledge of land surface temperature (LST) is of crucial importance in energy balance studies and environmental modeling. Satellite thermal infrared (TIR) imagery is the primary source for retrieving LST at regional and global scales. Because the radiance received by TIR sensors combines contributions from the atmosphere and the land surface, atmospheric correction has to be performed to remove the effects of atmospheric transmittance and upwelling radiance. The Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG) provides measurements every 15 minutes in 12 spectral channels covering the visible to infrared spectrum at fixed view angles with a 3 km pixel size at nadir, offering new and unique capabilities for LST and LSE measurements. However, due to its high temporal resolution, atmospheric correction cannot be performed with radiosonde profiles or reanalysis data alone, since these profiles are not available at all SEVIRI TIR image acquisition times. To solve this problem, a two-part six-parameter semi-empirical diurnal temperature cycle (DTC) model has been applied to the temporal interpolation of ECMWF reanalysis data. Because the DTC model is underdetermined with ECMWF data at only four synoptic times (UTC 00:00, 06:00, 12:00, 18:00) per day for each location, several approaches are adopted in this study. It is well known that the atmospheric transmittance and upwelling radiance have a relationship with water vapour content (WVC). With the aid of simulated data, the relationship could be determined under each viewing zenith angle for each SEVIRI TIR channel. Thus, the atmospheric transmittance and upwelling radiance are preliminarily removed with the aid of the instantaneous WVC, which is retrieved from the brightness temperatures in SEVIRI channels 5, 9 and 10, and a group of brightness temperatures for the surface-leaving radiance (Tg) is acquired. Subsequently, a group of the six parameters of the DTC model is fitted with these Tg by a Levenberg-Marquardt least squares algorithm (denoted as DTC model 1). Although the retrieval error of WVC and the approximate relationships between WVC and atmospheric parameters would induce some uncertainties, this would not significantly affect the determination of the three parameters td, ts and β (β is the angular frequency, td is the time at which Tg reaches its maximum, and ts is the starting time of attenuation) in the DTC model. Furthermore, due to the large fluctuation in temperature and the inaccuracy of the DTC model around sunrise, SEVIRI measurements from two hours before sunrise to two hours after sunrise are excluded. With td, ts, and β known, a new DTC model (denoted as DTC model 2) is fitted again with the Tg at UTC times 05:57, 11:57, 17:57 and 23:57, which are atmospherically corrected with ECMWF data. A new group of the six DTC model parameters is then generated, and subsequently the Tg at any given time is acquired. Finally, this method is applied successfully to SEVIRI data in channel 9. The results show that the proposed method can be performed reasonably without additional assumptions, and the Tg derived with the improved method is much more consistent with that from radiosonde measurements.
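
To illustrate the kind of fit described above, the sketch below fits a two-part, six-parameter DTC-style model (cosine daytime branch, exponential night-time decay) with a Levenberg-Marquardt least-squares solver. The abstract does not give the exact model equations, so this functional form, the parameter names and the synthetic "Tg" samples are illustrative assumptions only.

```python
# Sketch of fitting a two-part, six-parameter diurnal temperature cycle model
# with Levenberg-Marquardt; the model form and data below are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def dtc(t, T0, Ta, tm, ts, width, k):
    """Piecewise DTC: cosine before attenuation start ts, exponential decay after."""
    t = np.asarray(t, dtype=float)
    day = T0 + Ta * np.cos(np.pi / width * (t - tm))
    T_ts = T0 + Ta * np.cos(np.pi / width * (ts - tm))   # value at the switch time
    night = T0 + (T_ts - T0) * np.exp(-(t - ts) / k)
    return np.where(t < ts, day, night)

# Synthetic surface-leaving brightness temperatures (hours UTC, kelvin).
t_obs = np.array([0.0, 3.0, 6.0, 9.0, 12.0, 13.5, 15.0, 18.0, 21.0, 23.5])
T_obs = dtc(t_obs, 290.0, 12.0, 13.0, 17.5, 11.0, 4.0)
T_obs = T_obs + np.random.default_rng(3).normal(scale=0.3, size=t_obs.size)

p0 = [288.0, 10.0, 12.0, 18.0, 10.0, 3.0]          # initial guess
popt, pcov = curve_fit(dtc, t_obs, T_obs, p0=p0, method="lm")
print(dict(zip(["T0", "Ta", "tm", "ts", "width", "k"], np.round(popt, 2))))
```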

Keywords: atmosphere correction, diurnal temperature cycle model, land surface temperature, SEVIRI

Procedia PDF Downloads 268
333 Analysis of the Perception of Medical Professionalism by Specialists of Family Medicine in Kazakhstan

Authors: Nurgul A. Abenova, Gaukhar S. Dilmagambetova, Lazzat M. Zhamaliyeva

Abstract:

Professionalism is a core competency that all medical students must achieve throughout their studies. Clinical knowledge, good communication skills and an understanding of ethics form the basis of professionalism. Patients, medical societies and accrediting organizations expect future specialists to be professionals in their field, which in turn leads to the best clinical results. Currently, there are no studies devoted to medical professionalism in the Republic of Kazakhstan. As a result, medical education in the Kazakhstani system has a limited perception of the concept of professionalism compared to many Western medical schools. Thus, the primary purpose of this study is to analyze the perception of medical professionalism among residents and teachers of family medicine at the West Kazakhstan Marat Ospanov Medical University. A qualitative research method was used, based on content analysis methodology. A focus group discussion was held with 60 residents and 12 family medicine teachers to gather participants' views and experiences in the field of medical professionalism. The received information was processed using the MAXQDA-2020 software package. Respondents were selected for the study based on their age, gender, and educational level. The results of the survey confirmed the respondents’ acknowledgment of the basic attributes of professionalism, such as medical knowledge and skills (more than 40% of the answers), personal and moral qualities of the doctor (more than 25% of the answers), respect for the interests of the patient (15% of the answers), and the relationship between the doctor and the patient and among professionals themselves (15% of responses). Another important discovery of the survey was that residents are five times more likely to define the doctor-patient relationship by a “respect for the interests of the patient” model in comparison with teachers of family medicine, who primarily reported responsibility and collegiality to be the basis for the development of professionalism and traditionally view the doctor-patient relationship as formed on the basis of paternalism, defined by a high degree of control over patients. This significant difference demonstrates a rift among specialists in the field of family medicine, which causes many problems. For example, professional family doctors nowadays regularly face burnout due to many reasons and factors that force them to abandon their jobs. In addition, elements of professionalism such as reflective skills, time management and feedback collection were presented to the least extent (less than 1%) by both groups, which differs from the perception of Western medical schools and is a significant issue that needs to be solved. The qualitative nature of our study provides a detailed understanding of medical professionalism in the context of the Central Asian healthcare system, revealing many aspects that are inferior to their Western medical school counterparts, and suggests a solution: to teach the attributes and skills required for medical professionalism at all stages of the medical education of family doctors.

Keywords: family medicine, family doctors, medical professionalism, medical education

Procedia PDF Downloads 141
332 The Inclusive Human Trafficking Checklist: A Dialectical Measurement Methodology

Authors: Maria C. Almario, Pam Remer, Jeff Resse, Kathy Moran, Linda Theander Adam

Abstract:

The identification of victims of human trafficking and consequential service provision is characterized by a significant disconnection between the estimated prevalence of this issue and the number of cases identified. This poses a tremendous problem for human rights advocates, as it prevents data collection, information sharing, allocation of resources and opportunities for international dialogue. The current paper introduces the Inclusive Human Trafficking Checklist (IHTC) as a measurement methodology with theoretical underpinnings derived from dialectic theory. The presence of human trafficking in a person’s life is conceptualized as a dynamic and dialectic interaction between vulnerability and exploitation. The current paper explores the operationalization of exploitation and vulnerability, evaluates the metric qualities of the instrument, evaluates whether there are differences in assessment based on the participant’s profession, level of knowledge, and training, and assesses whether users of the instrument perceive it as useful. A total of 201 participants were asked to rate three vignettes predetermined by experts as qualifying as a human trafficking case or not. The participants were placed in three conditions: business as usual, and utilization of the IHTC with and without training. The results revealed a statistically significant level of agreement between the experts’ diagnosis and the application of the IHTC, with an improvement of 40% in identification when compared with the business-as-usual condition. While there was an improvement in identification in the group with training, the difference was found to have a small effect size. Participants who utilized the IHTC showed an increased ability to identify elements of identity-based vulnerabilities as well as elements of fraud, which, according to the results, are distinctive variables in cases of human trafficking. In terms of perceived utility, the results revealed higher mean scores for the groups utilizing the IHTC when compared to the business-as-usual condition. These findings suggest that the IHTC improves appropriate identification of cases and that it is perceived as a useful instrument. The application of the IHTC as a multidisciplinary instrument that can be utilized in legal and human services settings is discussed as a pivotal piece in helping victims restore their sense of dignity and advocate for legal, physical and psychological reparations. It is noteworthy that this study was conducted with a sample in the United States and later re-tested in Colombia. The implications of the instrument for treatment conceptualization and intervention in human trafficking cases are discussed as opportunities for enhancement of victim well-being, restoration engagement and activism. With the idea that what is personal is also political, we believe that careful observation and data collection in specific cases can inform new areas of human rights activism.

Keywords: exploitation, human trafficking, measurement, vulnerability, screening

Procedia PDF Downloads 330
331 Low-Temperature Poly-Si Nanowire Junctionless Thin Film Transistors with Nickel Silicide

Authors: Yu-Hsien Lin, Yu-Ru Lin, Yung-Chun Wu

Abstract:

This work demonstrates ultra-thin poly-Si (polycrystalline silicon) nanowire junctionless thin-film transistors (NWs JL-TFT) with nickel silicide contacts. For the nickel silicide film, a two-step annealing process is designed to form an ultra-thin, uniform and low-sheet-resistance (Rs) Ni silicide film. The NWs JL-TFT with nickel silicide contact exhibits good electrical properties, including a high driving current (>10⁷), a subthreshold slope of 186 mV/dec., and low parasitic resistance. In addition, this work also compares the electrical characteristics of the NWs JL-TFT with nickel silicide and non-silicide contacts. Nickel silicide techniques are widely used for high-performance devices as devices scale down, due to the source/drain sheet resistance issue. Therefore, the self-aligned silicide (salicide) technique is presented to reduce the series resistance of the device. Nickel silicide has several advantages, including a low-temperature process, low silicon consumption, no bridging failure, smaller mechanical stress, and smaller contact resistance. The junctionless thin-film transistor (JL-TFT) is fabricated simply by heavily doping the channel and source/drain (S/D) regions simultaneously. Owing to the special doping profile, the JL-TFT has some advantages, such as a lower thermal budget that allows easier integration with high-k/metal-gate processes than conventional MOSFETs (Metal Oxide Semiconductor Field-Effect Transistors), a longer effective channel length than conventional MOSFETs, and avoidance of complicated source/drain engineering. To solve the turn-off problem of the JL-TFT, an ultra-thin body (UTB) structure is needed to reach a fully depleted channel region in the off-state. On the other hand, the drive current (Iᴅ) declines as transistor features are scaled. Therefore, this work demonstrates ultra-thin poly-Si nanowire junctionless thin-film transistors with nickel silicide contacts. This work investigates the low-temperature formation of the nickel silicide layer by physical vapor deposition (PVD) of a 15 nm Ni layer on the poly-Si substrate. Notably, two-step annealing is used to form an ultra-thin, uniform and low-sheet-resistance (Rs) Ni silicide film. The first step promoted Ni diffusion through a thin interfacial amorphous layer. Then, the unreacted metal was lifted off after the first step. The second step was annealing to lower the sheet resistance and firmly merge the phase. The ultra-thin poly-Si nanowire junctionless thin-film transistor (NWs JL-TFT) with nickel silicide contact is demonstrated, which reveals a high driving current (>10⁷), a subthreshold slope of 186 mV/dec., and low parasitic resistance. In the silicide film analysis, the second annealing step was applied to lower the sheet resistance and firmly merge the silicide phase. In short, the NWs JL-TFT with nickel silicide contact has exhibited competitive short-channel behavior and improved drive current.

Keywords: poly-Si, nanowire, junctionless, thin-film transistors, nickel silicide

Procedia PDF Downloads 237
330 The Problem of the Use of Learning Analytics in Distance Higher Education: An Analytical Study of the Open and Distance University System in Mexico

Authors: Ismene Ithai Bras-Ruiz

Abstract:

Learning Analytics (LA) is employed by universities not only as a tool but as a specialized field to support students and professors. However, not all academic programs apply LA with the same goal or use the same tools. In fact, LA is formed by five main fields of study (academic analytics, action research, educational data mining, recommender systems, and personalized systems). These fields can help not just to inform academic authorities about the situation of the program, but also to detect at-risk students, professors with needs, or general problems. The highest level applies Artificial Intelligence techniques to support learning practices. LA has adopted different techniques: statistics, ethnography, data visualization, machine learning, natural language processing, and data mining. It is expected that each academic program decides which field it wants to utilize on the basis of its academic interests as well as its capacities related to professors, administrators, systems, logistics, data analysts, and academic goals. The Open and Distance University System (SUAYED in Spanish) of the National Autonomous University of Mexico (UNAM) has been working for forty years as an alternative to traditional programs; one of its main supports has been the use of new information and communications technologies (ICT). Today, UNAM has one of the largest networks of distance higher education programs, with twenty-six academic programs in different faculties. This situation means that every faculty works with heterogeneous populations and academic problems. In this sense, every program has developed its own Learning Analytics techniques to address academic issues. In this context, an investigation was carried out to determine the situation of the application of LA across the academic programs in the different faculties. The premise of the study was that not all the faculties have utilized advanced LA techniques, and it is probable that they do not know which field of study is closest to their program goals. Consequently, not all the programs know about LA, but this does not mean they do not work with LA in a veiled or less explicit sense. It is very important to know the degree of knowledge about LA for two reasons: 1) it allows the work of the administration to improve the quality of teaching to be appreciated, and 2) it shows whether it is possible to introduce other LA techniques. For this purpose, three instruments were designed to determine the experience and knowledge of LA. These were applied to ten faculty coordinators and their staff; thirty members were consulted (academic secretaries, systems managers or data analysts, and program coordinators). The final report showed that almost all the programs work with basic statistical tools and techniques; this helps the administration only to know what is happening inside the academic program, but they are not ready to move up to the next level, which means applying Artificial Intelligence or Recommender Systems to reach a personalized learning system. This situation is not related to knowledge of LA, but to the clarity of the long-term goals.

Keywords: academic improvements, analytical techniques, learning analytics, personnel expertise

Procedia PDF Downloads 128
329 A Strategy to Reduce Salt Intake: The Use of a Seasoning Obtained from Wine Pomace

Authors: María Luisa Gonzalez-SanJose, Javier Garcia-Lomillo, Raquel Del Pino, Miriam Ortega-Heras, Maria Dolores Rivero-Perez, Pilar Muñiz-Rodriguez

Abstract:

One of the most worrying problems related to the diet of Western societies is high salt intake. In Spain, salt intake is almost twice that recommended by the World Health Organization (WHO). Many negative health effects of high sodium intake have been described, hypertension and cardiovascular and coronary diseases being among the most important. Due to this fact, governments and other institutions are working on the gradual reduction of this consumption. Meat products have been described as the main processed products that bring salt to the diet, followed by snacks and savory crackers. Fortunately, the food industry has also become aware of this problem and is working intensely; in recent years it has attempted to reduce the salt content in processed products and is developing special lines with low sodium content. It is important to consider that processed foods are the main source of sodium in Western countries. One possible strategy to reduce the salt content in food is to find substitutes that can emulate its taste properties without adding much sodium, or products that mask or replace salty sensations with other flavors and aromas. In this sense, multiple products have been proposed and used until now. Potassium salts produce similar salty sensations without adding sodium; however, their intake should also be limited for health reasons. Furthermore, some potassium salts show some bitter notes. Other alternatives are the use of flavor enhancers, spices, aromatic herbs, sea-plant-derived products, etc. Wine pomace is rich in potassium salts and contains organic acids and other flavor-active substances; therefore, it could be an interesting raw material from which to obtain derived products useful as alternative ‘seasonings’. Considering previous comments, the main aim of this study was to evaluate the possible use of a natural seasoning, made from red wine pomace, in two different foods, crackers and burgers. The seasoning was made in the food technology pilot plant of the University of Burgos, where the studied crackers and patties were also made. Different members of the University (students, teaching and administrative personnel) tasted the products, and a trained panel evaluated saltiness intensity. In addition to potassium, the seasoning contains significant levels of dietary fiber and phenolic compounds, which also makes it interesting as a functional ingredient. Both burgers and crackers made with the seasoning showed better taste than those without salt. Obviously, they showed a lower sodium content than the normal formulations and were richer in potassium, antioxidants and fiber. They also showed lower Na/K ratios. All these facts correlate with ‘healthier’ products, especially for people with hypertension and other coronary dysfunctions.

Keywords: healthy foods, low salt, seasoning, wine pomace

Procedia PDF Downloads 274
328 Promoting Incubation Support to Youth Led Enterprises: A Case Study from Bangladesh to Eradicate Hazardous Child Labour through Microfinance

Authors: Md Maruf Hossain Koli

Abstract:

The issue of child labor is enormous and cannot be ignored in Bangladesh. The problem of child exploitation is a socio-economic reality of Bangladesh. This paper indicates the causes and consequences of hazardous child labor in Bangladesh and the possibilities of using microfinance as a remedy. Poverty is one of the main reasons for children to become child laborers. It is an indication of economic vulnerability, an inadequate law-enforcement system, and cultural and social inequities, along with an inaccessible and low-quality educational system. An attempt is made in this paper to explore and analyze the child labor scenario in Bangladesh and to explain the holistic intervention of BRAC, the largest nongovernmental organization in the world, to address child labor through promoting incubation support to youth-led enterprises. A combination of research methods was used to write this paper. These include non-reactive observation in the form of literature review and desk studies, as well as reactive observation such as site visits and semi-structured interviews. Hazardous child labor is a multi-dimensional and complex issue. This paper was guided by the following research questions to better understand the current context of hazardous child labor in Bangladesh, especially in Dhaka city. The author attempted to figure out why child labor should be considered a development issue, why child labor in Bangladesh is not being reduced at the expected pace, and, finally, what could be a sustainable solution to eradicate this situation. One of the most challenging characteristics of child labor is that it interrupts a child’s education and cognitive development, hence limiting the building of human capital and fostering the intergenerational reproduction of poverty and social exclusion. Children who work full-time and do not attend school cannot develop the necessary skills. This leads them and future generations to remain in poor socio-economic conditions, as they do not get better-paying jobs. The vicious cycle of poverty will be reproduced and will slow down sustainable development. The outcome of the research suggests that most of the parents send their children to work to help increase family income. In addition, most of the youth engaged in hazardous work want to get training, mentoring and easy access to finance to start their own businesses. The BRAC intervention, which includes classroom and on-the-job training, tailored mentoring, health support, and access to microfinance and insurance, helps them to establish startups. This intervention is working in developing business and management capacity through public-private partnerships and technical consulting. It supports entrepreneurs, improves working conditions within micro, small and medium enterprises, and strengthens value chains, focusing on youth and children engaged in hazardous child labor.

Keywords: child labour, enterprise development, microfinance, youth entrepreneurship

Procedia PDF Downloads 128
327 Mathematical Modelling of Bacterial Growth in Products of Animal Origin in Storage and Transport: Effects of Temperature, Use of Bacteriocins and pH Level

Authors: Benjamin Castillo, Luis Pastenes, Fernando Cordova

Abstract:

Pathogen growth in animal-source foods is a common problem in the food industry, causing monetary losses due to the spoiling of products or food intoxication outbreaks in the community. In this sense, the quality of the product is reflected by the population of deteriorating agents present in it, which are mainly bacteria. The factors which are likely associated with freshness in animal-source foods are temperature and processing, storage, and transport times. However, the level of deterioration of products depends, in turn, on the characteristics of the bacterial population causing the decomposition or spoiling, such as pH level and toxins. Knowing the growth dynamics of the agents that are involved in product contamination allows monitoring for more efficient processing. This means better quality and reasonable costs, along with a better estimation of the necessary time and temperature intervals for transport and storage in order to preserve product quality. The objective of this project is to design a secondary model that allows measuring the impact of temperature on bacterial growth and the competition involving pH adequacy and the release of bacteriocins, in order to describe this phenomenon and, thus, estimate food product half-life with the least possible risk of deterioration or spoiling. To achieve this objective, the authors propose the analysis of a three-dimensional system of ordinary differential equations which includes: logistic bacterial growth extended by the inhibitory action of bacteriocins, including the effect of the medium pH; change in the medium pH level through an adaptation of the Luedeking-Piret kinetic model; and bacteriocin concentration modeled similarly to the pH level. These three dimensions are influenced by the temperature at all times. This differential system is then expanded, taking into consideration variable temperature and the concentration of pulsed bacteriocins, which represent characteristics inherent to the modelled situations, such as transport and storage, as well as the incorporation of substances that inhibit bacterial growth. The main results show that temperature changes in an early stage of transport increase the bacterial population significantly more than if the temperature had increased during the final stage. On the other hand, the incorporation of bacteriocins, as in other investigations, proved to be efficient in the short and medium term since, although the population of bacteria decreased, once the bacteriocins were depleted or degraded over time, the bacteria eventually returned to their regular growth rate. The efficacy of the bacteriocins at low temperatures decreased slightly, which is consistent with the fact that their natural degradation rate also decreased. In summary, the implementation of the mathematical model allowed the simulation of a set of possible bacteria present in animal-based products, along with their properties, in various transport and storage situations, which led us to conclude that, for inhibiting bacterial growth, the optimum is the combination of low constant temperatures and the initial use of bacteriocins.
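
The sketch below integrates a three-state system of the type described: logistic bacterial growth inhibited by bacteriocin, a Luedeking-Piret-type equation for the change in medium pH, and a bacteriocin balance, with temperature entering through the growth rate. The equations and parameter values are illustrative only, not the authors' model.

```python
# Minimal sketch of a three-state growth/pH/bacteriocin model under a
# time-varying storage temperature; all equations and parameters are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def temperature(t):                      # storage/transport temperature profile (deg C)
    return 4.0 if t < 24.0 else 12.0     # e.g. cold chain broken after 24 h

def rhs(t, y, mu_max, K, k_inh, a, b, k_pH, k_deg):
    N, pH, B = y                                     # cells, medium pH, bacteriocin conc.
    T = temperature(t)
    mu = mu_max * np.exp(0.1 * (T - 4.0))            # crude temperature dependence
    growth = mu * N * (1.0 - N / K) - k_inh * B * N  # logistic growth minus inhibition
    dpH = -(a * growth + b * N) * k_pH               # Luedeking-Piret-type acidification
    dB = -k_deg * B                                  # bacteriocin degradation
    return [growth, dpH, dB]

y0 = [1e3, 6.5, 5.0]                                 # initial N (CFU/g), pH, bacteriocin
params = (0.25, 1e8, 0.02, 1e-9, 1e-10, 1.0, 0.05)
sol = solve_ivp(rhs, (0.0, 72.0), y0, args=params, max_step=0.5)

N_end, pH_end, B_end = sol.y[:, -1]
print(f"t={sol.t[-1]:.0f} h: N={N_end:.3e}, pH={pH_end:.2f}, bacteriocin={B_end:.2f}")
```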

Keywords: bacterial growth, bacteriocins, mathematical modelling, temperature

Procedia PDF Downloads 135
326 An Overview of Bioinformatics Methods to Detect Novel Riboswitches Highlighting the Importance of Structure Consideration

Authors: Danny Barash

Abstract:

Riboswitches are RNA genetic control elements that were originally discovered in bacteria and provide a unique mechanism of gene regulation. They work without the participation of proteins and are believed to represent ancient regulatory systems on the evolutionary timescale. One of the biggest challenges in riboswitch research is that many are found in prokaryotes, but only a small percentage of known riboswitches have been found in certain eukaryotic organisms. The few examples of eukaryotic riboswitches were identified using sequence-based bioinformatics search methods that include some slight structural considerations. These pattern-matching methods were the first ones to be applied for the purpose of riboswitch detection, and they can also be programmed very efficiently using a data structure called affix arrays, making them suitable for genome-wide searches of riboswitch patterns. However, they are limited in their ability to detect harder-to-find riboswitches that deviate from the known patterns. Several methods have been developed since then to tackle this problem. The one most commonly used by practitioners is Infernal, which relies on Hidden Markov Models (HMMs) and Covariance Models (CMs). Profile Hidden Markov Models were also implemented in the pHMM Riboswitch Scanner web application, independently of Infernal. Other computational approaches that have been developed include RMDetect, which uses 3D structural modules, and RNAbor, which utilizes the Boltzmann probability of structural neighbors. We have tried to incorporate more sophisticated secondary structure considerations based on RNA folding prediction using several strategies. The first idea was to utilize window-based methods in conjunction with folding predictions by energy minimization. The moving-window approach is heavily geared towards secondary structure considerations, with sequence treated as a constraint. However, the method cannot be used genome-wide due to its high cost, because each folding prediction by energy minimization in the moving window is computationally expensive, so that scanning is feasible only in the vicinity of genes of interest. The second idea was to remedy the inefficiency of the previous approach by constructing a pipeline that consists of inverse RNA folding considering RNA secondary structure, followed by a BLAST search that is sequence-based and highly efficient. This approach, which relies on inverse RNA folding in general and our own in-house fragment-based inverse RNA folding program called RNAfbinv in particular, shows the capability to find attractive candidates that are missed by Infernal and other standard methods used for riboswitch detection. We demonstrate attractive candidates found by both the moving-window approach and the inverse RNA folding approach performed together with BLAST. We conclude that structure-based methods like the two strategies outlined above hold considerable promise in detecting riboswitches and other conserved RNAs of functional importance in a variety of organisms.
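
To illustrate the moving-window idea mentioned above, the sketch below folds fixed-length windows in the vicinity of a gene of interest and flags windows whose minimum-free-energy (MFE) structure is unusually stable, using the ViennaRNA Python bindings (RNA.fold). The window size, step, energy threshold and example sequence are placeholders; this is not the authors' exact pipeline.

```python
# Sliding-window MFE folding scan; parameters and sequence are illustrative.
import RNA  # ViennaRNA Python bindings

def scan_region(seq, window=150, step=25, mfe_per_nt_cutoff=-0.35):
    """Yield (start, mfe, structure) for windows with MFE per nt below the cutoff."""
    seq = seq.upper().replace("T", "U")
    for start in range(0, max(1, len(seq) - window + 1), step):
        sub = seq[start:start + window]
        structure, mfe = RNA.fold(sub)
        if mfe / len(sub) <= mfe_per_nt_cutoff:      # stable, riboswitch-like window
            yield start, mfe, structure

if __name__ == "__main__":
    # Hypothetical upstream region of a gene of interest.
    upstream = "AGGAACACUCAGGGGAGCUUCGGCUCCCGUGAAACCCGGCAACCUGUCGCGCAG" * 4
    for start, mfe, db in scan_region(upstream):
        print(start, round(mfe, 1), db)
```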

Keywords: riboswitches, RNA folding prediction, RNA structure, structure-based methods

Procedia PDF Downloads 234
325 Interface Fracture of Sandwich Composite Influenced by Multiwalled Carbon Nanotube

Authors: Alak Kumar Patra, Nilanjan Mitra

Abstract:

Higher strength-to-weight ratio is the main advantage of sandwich composite structures. Interfacial delamination between the face sheet and the core is a major problem in these structures. Many research works are devoted to improving the interfacial fracture toughness of composites, the majority of them dealing with nano and laminated composites. Work on the influence of a multiwalled carbon nanotube (MWCNT) dispersed resin system on the interface fracture of glass-epoxy PVC core sandwich composites is extremely limited. A finite element study is followed by an experimental investigation of the interface fracture toughness of glass-epoxy (G/E) PVC core sandwich composites with and without MWCNT. Results demonstrate an improvement in the interface fracture toughness values (Gc) of samples with certain percentages of MWCNT. In addition, dispersion of MWCNT in epoxy resin through sonication, followed by mixing of the hardener and vacuum resin infusion (VRI), the technology used in this study, is an easy and cost-effective methodology in comparison to previously adopted methods limited to laminated composites. The study also identifies the optimum weight percentage of MWCNT addition in the resin system for the maximum gain in interfacial fracture toughness. The results agree with the finite element study, high-resolution transmission electron microscopy (HRTEM) analysis, and fracture micrographs from field emission scanning electron microscopy (FESEM). The interface fracture toughness (Gc) of the DCB sandwich samples is calculated using the compliance calibration (CC) method with a modification due to shear. The compliance (C) vs. crack length (a) data of the modified sandwich DCB specimens are fitted to a power function of crack length. The mean value of the exponent n calculated from the plots of the experimental results is 2.22, which differs from the value (n = 3) prescribed in ASTM D5528-01 for mode I fracture toughness of laminated composites (the basis for the modified compliance calibration method). Differentiating C with respect to crack length a and substituting it into the expression for Gc provides its value. The research demonstrates an improvement of 14.4% in peak load carrying capacity and 34.34% in interface fracture toughness Gc for samples with 1.5 wt% MWCNT (weight % taken with respect to the weight of resin) in comparison to samples without MWCNT. The paper focuses on the significant improvement in experimentally determined interface fracture toughness of sandwich samples with MWCNT over samples without MWCNT, achieved using the much simpler method of sonication. Good dispersion of MWCNT was observed in HRTEM for 1.5 wt% MWCNT addition in comparison to the other MWCNT percentages. FESEM studies have also demonstrated good dispersion and fiber bridging of MWCNT in the resin system. Ductility is also observed to be higher for samples with MWCNT than for samples without.
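
The data-reduction step described above (a power-law compliance fit and the Irwin-Kies relation) can be sketched as follows. The crack lengths, compliances, and critical loads below are made-up placeholders; only the comment on the exponent refers to the value reported in the abstract.

```python
# Sketch of the compliance-calibration reduction described above: fit the
# measured compliance C(a) to a power law C = k * a**n and evaluate
# G_C = Pc**2 / (2 * b) * dC/da at each critical load (Irwin-Kies relation).
# The numbers below are made-up placeholders, not the paper's measurements.
import numpy as np

a = np.array([30.0, 40.0, 50.0, 60.0])           # crack lengths, mm
C = np.array([0.040, 0.075, 0.125, 0.190])       # compliance, mm/N
Pc = np.array([95.0, 80.0, 70.0, 62.0])          # critical loads, N
b = 25.0                                         # specimen width, mm

# power-law fit in log space: log C = log k + n * log a
n, log_k = np.polyfit(np.log(a), np.log(C), 1)
k = np.exp(log_k)
print(f"fitted exponent n = {n:.2f}")            # the abstract reports a mean n of 2.22

dCda = n * k * a ** (n - 1)                      # derivative of the fitted compliance
Gc = Pc ** 2 * dCda / (2.0 * b)                  # units: N/mm; multiply by 1000 for J/m^2
print("G_C estimates (N/mm):", np.round(Gc, 3))
```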

Keywords: carbon nanotube, epoxy resin, foam, glass fibers, interfacial fracture, sandwich composite

Procedia PDF Downloads 303
324 Developing a Tissue-Engineered Aortic Heart Valve Based on an Electrospun Scaffold

Authors: Sara R. Knigge, Sugat R. Tuladhar, Alexander Becker, Tobias Schilling, Birgit Glasmacher

Abstract:

Commercially available mechanical and biological heart valve prostheses both tend to fail in the long term due to thrombosis, calcific degeneration, infection, or immunogenic rejection. Moreover, these prostheses are non-viable and do not grow with the patient, which is a problem for young patients. As a result, patients often need to undergo redo operations. Tissue-engineered (TE) heart valves based on degradable electrospun fiber scaffolds represent a promising approach to overcoming these limitations. Such scaffolds need sufficient mechanical properties to withstand the hydrodynamic stress of intracardiac hemodynamics. Additionally, the scaffolds should be colonized by autologous or homologous cells to facilitate the in vivo remodeling of the scaffolds into a viable structure. This study investigates how the process parameters of electrospinning and degradation affect the mechanical properties of electrospun scaffolds made of the FDA-approved, biodegradable polymer polycaprolactone (PCL). Fiber mats were produced from a PCL/tetrafluoroethylene solution by electrospinning. The electrospinning process was varied in terms of scaffold thickness, fiber diameter, fiber orientation, and fiber interconnectivity. The morphology of the fiber mats was characterized with a scanning electron microscope (SEM). The mats were degraded in different solutions (cell culture media, SBF, PBS and 10 M NaOH solution). At different time points of degradation (2, 4 and 6 weeks), tensile and cyclic loading tests were performed. Fresh porcine pericardium and heart valves served as controls for the mechanical assessment. The progression of polymer degradation was quantified by SEM and differential scanning calorimetry (DSC). Primary human aortic endothelial cells (HAECs) and human induced pluripotent stem cell-derived endothelial cells (iPSC-ECs) were seeded on the fiber mats to investigate the cell colonization potential. The results showed that both the electrospinning parameters and the degradation significantly influenced the mechanical properties. The fiber orientation in particular has a considerable impact and leads to pronounced anisotropic behavior of the scaffold. Preliminary results showed that the polymer became markedly more brittle over time; however, the embrittlement could initially only be detected in the mechanical tests, as neither morphological nor thermodynamic changes were significantly detectable in the SEM and DSC investigations. Live/dead staining and SEM imaging of the cell-seeded scaffolds showed that HAECs and iPSC-ECs were able to grow on the surface of the polymer. In summary, this study's results indicate a promising approach to the development of a TE aortic heart valve based on an electrospun scaffold.

Keywords: electrospun scaffolds, long-term polymer degradation, mechanical behavior of electrospun PCL, tissue engineered aortic heart valve

Procedia PDF Downloads 144
323 Subcutaneous Isosulfan Blue Administration May Interfere with Pulse Oximetry

Authors: Esra Yuksel, Dilek Duman, Levent Yeniay, Sezgin Ulukaya

Abstract:

Sentinel lymph node biopsy (SLNB) is a minimally invasive technique with lower morbidity for axillary staging of breast cancer. Isosulfan blue stain is frequently used in SLNB and regarded as safe. The present case report describes a severe decrease in SpO2 following isosulfan blue administration, the accompanying skin and urine discoloration, and the inconsistency of these findings with the clinical picture in a 67-year-old, 77 kg, ASA II female patient who underwent SLNB under general anesthesia. Ten minutes after the surgeons administered 10 ml of 1% isosulfan blue subcutaneously to the hemodynamically stable patient, SpO2 first fell from 99% to 87% and then, within minutes, to 75% despite 100% oxygen support. Meanwhile, blood pressure and EtCO2 monitoring were unremarkable. After confirming that the anesthesia machine was working normally, that airway pressure had not increased, and that the endotracheal tube was placed correctly, a blood sample was taken for arterial gas analysis. A severe increase in MetHb concentration was suspected, since SpO2 remained at 75% although the inspired oxygen concentration was 100%, and a solution of 2500 mg ascorbic acid in 500 ml of 5% dextrose was given intravenously until the arterial blood gas results were obtained. However, the arterial blood gas results were as follows: pH 7.54, PaCO2 23.3 mmHg, PaO2 281 mmHg, SaO2 99%, and MetHb 2.7%. Biochemical analysis revealed a blood MetHb concentration of 2%. Since the arterial blood gas parameters were good, the patient was hemodynamically stable, and the methemoglobin concentration was not markedly elevated, she was extubated after surgery once she was relaxed, cooperative, and breathing adequately. Despite the absence of respiratory or neurological distress, SpO2 rose only to 85% within 2 hours of extubation with 5 L/min oxygen support via face mask in the operating room. At that time, the skin of the upper part of her body in particular had turned blue, most noticeably on the face. The plasma of the blood taken for biochemical analysis was blue, and the urine draining through the urinary catheter placed in the intensive care unit was also blue. Twelve hours after oxygen inhalation at 5 L/min via mask, the SpO2 reached 90%. During monitoring in the intensive care unit on the first postoperative day, the patient's facial and urine color was still blue, SpO2 was 92%, and arterial blood gas levels were as follows: pH 7.44, PaO2 76.1 mmHg, PaCO2 38.2 mmHg, SaO2 99%, and MetHb 1%. During monitoring on the ward on the second postoperative day, SpO2 was 95% without oxygen support, and her facial and urine color had returned to normal. The patient was discharged on the third day without any problem. In conclusion, SLNB is a less invasive alternative to axillary dissection; however, false pulse oximeter readings due to pigment interference are a rare complication of this procedure. Arterial blood gas analysis should be used to confirm any fall in the SpO2 reading during monitoring.

Keywords: isosulfan blue, pulse oximetry, SLNB, methemoglobinemia

Procedia PDF Downloads 315
322 The Rite of Jihadification in ISIS Modified Video Games: Mass Deception and Dialectic of Religious Regression in Technological Progression

Authors: Venus Torabi

Abstract:

ISIS, the terrorist organization, modified two video games, ARMA III and Grand Theft Auto 5 (2013), as means of online recruitment and ideological propaganda. The urge to study the mechanism at work, whether it has been successful or not, drives (digital) humanities experts to explore how codes of terror, Islamic ideology, and recruitment strategies are incorporated into the ludic mechanics of video games. Another aspect of the significance lies in the fact that this is a latent problem that has not been fully addressed in an interdisciplinary framework prior to this study, to the best of the researcher's knowledge. Therefore, due to the complexity of the subject, the present paper draws on game studies as well as philosophical and religious perspectives to form the methodology of the research. As a contextualized epistemology of such exploitation of video games, the core argument builds on the notion of the 'culture industry' proposed by Theodor W. Adorno and Max Horkheimer in Dialectic of Enlightenment (2002). This article posits that the ideological underpinnings of ISIS's cause, corroborated by the action-bound mechanics of the video games, are in line with an adherence to Islamic eschatology as a furnishing ground and an excuse for exercising terrorism. It is an account of ISIS's modification of the video games, tools of technological progression, to practice online radicalization. Dialectically, this practice is packaged in a rhetoric of recognizing a religious myth (the advent of a savior), a hallmark of regression. The study puts forth that ISIS's wreaking havoc on the world, both in reality and within action video games, negotiates a process of self-assertion in the players of such games (by assuming oneself a member of the terrorists) that leads to self-annihilation. It tries to unfold how ludic mod video games are misused as tools of mass deception towards ethnic cleansing in reality, in line with the distorted eschatological myth. To conclude, this study posits video games as a new avenue of mass deception within the framework of the culture industry. Yet this emerges as a two-edged sword of mass deception in ISIS's modification of video games. It shows that ISIS is not only trying to hijack minds through online and ludic recruitment; it potentially deceives Muslim communities, or those prone to radicalization, into believing that its terrorist practices are preparing the world for the advent of a religious savior based on Islamic eschatology. This is to claim that the harsh actions of the video games potentially sow the seeds of terrorist propaganda in players' minds and numb them to violence. The real world becomes an extension of that harsh virtual environment in a ludic/actual continuum, an extension that contributes to the mass deception mechanism of the terrorists in a clandestine manner.

Keywords: culture industry, dialectic, ISIS, islamic eschatology, mass deception, video games

Procedia PDF Downloads 137
321 Fodder Production and Livestock Rearing in Relation to Climate Change and Possible Adaptation Measures in Manaslu Conservation Area, Nepal

Authors: Bhojan Dhakal, Naba Raj Devkota, Chet Raj Upreti, Maheshwar Sapkota

Abstract:

A study was conducted to determine the production potential and nutrient composition of the most commonly available fodder trees, and their variability with altitude, to help meet the dry matter requirement during the winter lean period. The study was carried out from March to June 2012 in the Lho and Prok Village Development Committees of the Manaslu Conservation Area (MCA), located in the Gorkha district of Nepal. A further objective of the research was to understand the impact of climate change on livestock production by linking it with feed availability. The study was conducted in two parts, social and biological. Accordingly, a household (HH) survey was conducted to collect primary data from 70 HHs, focusing on the respondents' perceptions of the impacts of climatic variability on feeding management. The second part assessed the yield potential and nutrient composition of the four most commonly available fodder trees (M. azedirach, M. alba, F. roxburghii, F. nemoralis) within two altitude ranges (1500-2000 masl and 2000-2500 masl), using an RCB design with a 2*4 factorial combination of treatments, each replicated four times. Results revealed that the majority of farmers perceived the change in climatic phenomena to have become more severe within the past five years. Farmers were using different adaptation strategies such as collecting forage from the jungle, reducing unproductive animals, utilizing fodder trees, and feeding crop by-products during periods of feed scarcity. Ranking of the different fodder trees on the basis of indigenous knowledge and experience revealed that F. roxburghii was the most preferred fodder tree species in terms of overall preferability (index value 0.72), whereas M. azedirach ranked highest for growth and productivity (index value 0.77), and F. roxburghii ranked highest for adoptability (index value 0.69) and palatability (index value 0.69). Similarly, the differences in fresh yield and dry matter yield of each fodder tree were significant (P < 0.01) between the altitudes and among the species. Yield analysis revealed that the highest dry matter (DM) yield (28 kg/tree) was obtained for F. roxburghii, although it remained statistically similar (P > 0.05) to the other treatments. On the other hand, most of the nutrient parameters, namely ether extract (EE), acid detergent lignin (ADL), acid detergent fibre (ADF), cell wall digestibility (CWD), relative digestibility (RD), total digestible nutrients (TDN), and calcium (Ca), differed highly significantly (P < 0.01) among the treatments. This indicates the scope for introducing productive and nutritious fodder tree species even at high altitude to help reduce the fodder scarcity problem during winter. The findings also revealed the scope for promoting all available local fodder tree species, as their crude protein contents were similar.
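
For readers interested in the statistical layout, a hypothetical sketch of the 2 x 4 factorial analysis within a randomized complete block design is given below using statsmodels. The data frame, column names, and yield values are invented; only the model terms follow the design described above.

```python
# Hypothetical sketch of the 2 (altitude) x 4 (species) factorial analysis in a
# randomized complete block design, using statsmodels. The data frame layout,
# column names, and numbers are invented for illustration; only the model
# structure (block + altitude * species) follows the design described above.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "block":    [b for b in range(1, 5) for _ in range(8)],
    "altitude": (["1500-2000"] * 4 + ["2000-2500"] * 4) * 4,
    "species":  ["M. azedirach", "M. alba", "F. roxburghii", "F. nemoralis"] * 8,
    "dm_yield": [24, 18, 28, 15, 20, 16, 26, 13,   # made-up DM yields, kg/tree
                 25, 19, 27, 14, 21, 15, 27, 12,
                 23, 17, 29, 16, 19, 14, 25, 13,
                 26, 18, 28, 15, 20, 16, 26, 14],
})

model = smf.ols("dm_yield ~ C(block) + C(altitude) * C(species)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))   # tests altitude, species, and their interaction
```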

Keywords: fodder trees, yield potential, climate change, nutrient composition

Procedia PDF Downloads 310
320 The Dilemma of Translanguaging Pedagogy in a Multilingual University in South Africa

Authors: Zakhile Somlata

Abstract:

In the context of international linguistic and cultural diversity, all languages can be used for all purposes. Africa in general, and South Africa in particular, is no exception to being a multilingual and multicultural society. The multilingual and multicultural nature of South African society has a direct bearing on the heterogeneity of South African universities in general. Universities, as centres of research, innovation, and transformation of the entire society, should be at the forefront of promoting multilingualism. The universities in South Africa had been using English, and to a certain extent Afrikaans, as the only academic languages during colonialism and the apartheid regime. The democratic breakthrough of 1994 brought linguistic relief in South Africa. The Constitution of the Republic of South Africa recognizes 11 official languages that should enjoy parity of esteem for the realization of multilingualism. The elevation of the nine previously marginalized indigenous African languages to academic languages in higher education is central to multilingualism. It is high time that an Afrocentric model, instead of a Eurocentric one, underpinned the education system in South Africa at all levels. Almost all South African universities have language policies that seek to promote access and success of students through multilingualism, but the main dilemma is the implementation of these policies. This study responds to two objectives: (i) to evaluate how selected institutions use language policies for the accessibility and success of students, and (ii) to study how selected universities integrate African languages for both academic and administrative purposes. This paper reflects the language policy practices in one selected University of Technology (UoT) in South Africa. The UoT has its own language policy, which depicts the linguistic diversity of the institution and its commitment to promoting multilingualism. Translanguaging pedagogy, which accommodates the use of minority languages in the teaching and learning process, plays a pivotal role in promoting multilingualism. This research paper employs a mixed methods (quantitative and qualitative) approach. Qualitative data have been collected from key informants (insiders and experts), while quantitative data have been collected from a cohort of third-year students. A mixed methods approach with a convergent parallel design allows the data to be collected and analysed separately but with a comparison of the results. Language development initiatives are discussed within the framework of language policy and policy implementation strategies. Theoretically, this paper is rooted in the notions of language as a problem, language as a right, and language as a resource. The findings demonstrate that, despite the institution being multilingual, the marginalization of African languages as academic languages is perpetuated. The findings further display the hegemony of English. The maintenance of the status quo compromises the promotion of multilingualism, the Africanization of higher education, and the intellectualization of indigenous African languages in South Africa under a democratic dispensation.

Keywords: afro-centric model, hegemony of English, language as a resource, translanguaging pedagogy

Procedia PDF Downloads 193
319 Planckian Dissipation in Bi₂Sr₂Ca₂Cu₃O₁₀₋δ

Authors: Lalita, Niladri Sarkar, Subhasis Ghosh

Abstract:

Since the discovery of high temperature superconductivity (HTSC) in cuprates, several aspects of this phenomenon have fascinated the physics community. The most debated one is the linear temperature dependence of the normal state resistivity over a wide range of temperature, in violation of Fermi liquid theory. The linear-in-T resistivity (LITR) is the signature of a strongly correlated metallic state, known as the "strange metal", attributed to non-Fermi-liquid (NFL) physics. The proximity of superconductivity to LITR suggests that there may be an underlying common origin. The LITR has been shown to be due to an otherwise unknown dissipative phenomenon, restricted by quantum mechanics and commonly known as "Planckian dissipation", a term first coined by Zaanen; the associated inelastic scattering time τ is given by 1/τ = αkBT/ℏ, where ℏ, kB and α are the reduced Planck constant, the Boltzmann constant, and a dimensionless constant of order unity, respectively. Since the first report, experimental support for α ~ 1 has been appearing in the literature. Several striking issues remain to be resolved if we wish to find out, or at least get a clue towards, the microscopic origin of maximal dissipation in cuprates. (i) The universality of α ~ 1: recently, doubts have been raised in some cases. (ii) So far, Planckian dissipation has been demonstrated in overdoped cuprates, but if the proximity to quantum criticality is important, then Planckian dissipation should also be observed in optimally doped and marginally underdoped cuprates; the link between Planckian dissipation and quantum criticality still remains an open problem. (iii) The validity of Planckian dissipation in all cuprates is an important issue. Here, we report a reversible change in the superconducting behavior of the high temperature superconductor Bi2Sr2Ca2Cu3O10+δ (Bi-2223) under dynamic doping induced by photo-excitation. Two doped Bi-2223 samples, with x = 0.16 (optimally doped) and x = 0.145 (marginally doped), have been used for this investigation. It is found that steady-state photo-excitation converts magnetic Cu2+ ions to nonmagnetic Cu1+ ions, which reduces the superconducting transition temperature (Tc) by suppressing the superfluid density. In Bi-2223, one would expect the maximum suppression of Tc at the charge transfer gap; we have observed that the suppression of Tc sets in at 2 eV, which is the charge transfer gap in Bi-2223. We attribute this to the Cu-3d9 (Cu2+) to Cu-3d10 (Cu+) transition, known as the d9 − d10 L transition: photoexcitation turns some Cu ions in the CuO2 planes into spinless, non-magnetic potential perturbations, as Zn2+ does in the CuO2 plane of Zn-doped cuprates. The resistivity varies linearly with temperature with or without photo-excitation. Tc can be varied by almost 40 K by photoexcitation, and superconductivity can be destroyed completely by introducing ≈ 2% of Cu1+ ions for this range of doping. With this controlled variation of Tc and resistivity, a detailed investigation has been carried out to reveal Planckian dissipation in underdoped to optimally doped Bi-2223. The most important aspect of this investigation is that Tc could be varied dynamically and reversibly, so that the LITR and the associated Planckian dissipation can be studied over a wide range of Tc without changing the doping chemically.
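
The quoted Planckian bound can be turned into a quick order-of-magnitude estimate: combining the Drude resistivity with 1/τ = αkBT/ℏ gives α = (ℏne²/m*kB)·dρ/dT. The sketch below evaluates this with generic cuprate-like numbers; the carrier density, effective mass, and resistivity slope are assumptions, not the Bi-2223 values measured in this study.

```python
# Sketch of the standard estimate of the Planckian coefficient alpha from a
# linear-in-T resistivity slope: with rho = m* / (n e^2 tau) and
# 1/tau = alpha * k_B * T / hbar, one gets alpha = (hbar n e^2 / (m* k_B)) * drho/dT.
# The carrier density, effective mass, and slope below are illustrative numbers.
from scipy.constants import hbar, k as k_B, e, m_e

n = 3.0e27          # carriers per m^3, assumed
m_star = 4.0 * m_e  # effective mass, assumed
drho_dT = 0.6e-8    # resistivity slope, ohm*m per K, assumed (= 0.6 microohm*cm per K)

alpha = hbar * n * e**2 / (m_star * k_B) * drho_dT
print(f"estimated Planckian coefficient alpha ~ {alpha:.2f}")  # order unity for these inputs

# the corresponding scattering rate at, e.g., 150 K
T = 150.0
rate = alpha * k_B * T / hbar
print(f"1/tau at {T:.0f} K ~ {rate:.2e} s^-1")
```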

Keywords: linear resistivity, HTSC, Planckian dissipation, strange metal

Procedia PDF Downloads 60
318 Teachers' and Learners' Experiences of Learners' Writing in English First Additional Language

Authors: Jane-Francis A. Abongdia, Thandiswa Mpiti

Abstract:

There is an international concern to develop children's literacy skills. In many parts of the world, becoming fluent in a second language is essential for gaining meaningful access to education, the labour market, and broader social functioning. In spite of efforts in this direction, the problem still continues: the level of English language proficiency is far from satisfactory, and for many learners these goals remain unattainable. The issue is more complex in South Africa, as learners are immersed in a second language (L2) curriculum. South Africa is a prime example of a country facing the dilemma of how to effectively equip a majority of its population with English as a second language or first additional language (FAL). Given the multilingual nature of South Africa, with eleven official languages, and the position and power of English, the study investigates teachers' and learners' experiences of the writing of learners from isiXhosa and Afrikaans backgrounds in English First Additional Language (EFAL). Moreover, possible causes of writing difficulties and teachers' practices for writing are explored. The theoretical and conceptual framework for the study is provided by constructivist and sociocultural theories. In exploring these issues, a qualitative approach using semi-structured interviews, classroom observations, and document analysis was adopted, and the data were analysed through critical discourse analysis (CDA). The study identified a weak correlation between teachers' beliefs and their actual teaching practices: although the teachers believe that writing is as important as listening, speaking, reading, grammar, and vocabulary, and that it needs regular practice, the data reveal that they fail to put these beliefs into practice. Moreover, the data revealed that learners were hindered by their home language, in that when they did not know a word they would write either the isiXhosa or the Afrikaans equivalent. Code-switching seems to have instilled a sense of "dependence on translations", whereby some learners would not even try to answer English questions but would wait for the teacher to translate the questions into isiXhosa or Afrikaans before attempting to give answers. The findings of the study show a marked improvement in the writing performance of learners who used the process approach to writing. These findings demonstrate the need to assist teachers in shifting away from focusing only on learners' performance (testing and grading) towards a stronger emphasis on the process of writing. The study concludes that the process approach to writing could enable teachers to focus on the various stages of the writing process, which can give learners more freedom to experiment with their language proficiency. It would require that teachers develop a deeper understanding of the process/genre approaches to teaching writing advocated by CAPS. All in all, the study shows that both learners and teachers face numerous challenges relating to writing, which means that more work still needs to be done in this area. The present study argues that teachers teaching EFAL learners should approach writing as a critical and core aspect of learners' education, and that learners should be exposed to intensive writing activities throughout their school years.

Keywords: constructivism, English second language, language of learning and teaching, writing

Procedia PDF Downloads 218
317 Spatio-Temporal Variation of Gaseous Pollutants and the Contribution of Particulate Matters in Chao Phraya River Basin, Thailand

Authors: Samart Porncharoen, Nisa Pakvilai

Abstract:

Elevated levels of air pollutants in regional atmospheric environments are a significant problem affecting human health in Thailand, particularly in the Chao Phraya River Basin. Of concern are ambient air pollution issues such as particulate matter and gaseous pollutants, and more specifically air pollution along the river. A spatio-temporal study of air pollution in this real environment can therefore provide more accurate air quality data for formulating environmental policy in river basins. To inform such a policy, a study was conducted over the period January-December 2015 to continually collect measurements of various pollutants at both urban and rural locations in the Chao Phraya River Basin. This study investigated air pollutants in diverse environments along the Chao Phraya River Basin, Thailand, in 2015. Multivariate analysis techniques such as principal component analysis (PCA) and path analysis were utilised to classify air pollution at the surveyed locations. Measurements were collected in both urban and rural areas to establish whether significant differences existed between the two types of location in terms of air pollution levels. Meteorological parameters and pollutant concentrations were collected continually from a Thai Pollution Control Department monitoring station over the period January-December 2015. Of interest to this study were the readings of SO2, CO, NOx, O3, and PM10. Results showed daily arithmetic mean concentrations of SO2, CO, NOx, O3, and PM10 of 3±1 ppb, 0.5±0.5 ppm, 30±21 ppb, 19±16 ppb, and 40±20 ug/m3 at the urban location (Bangkok). During the same period, the corresponding readings in the rural area (Ayutthaya) were 1±0.5 ppb, 0.1±0.05 ppm, 25±17 ppb, 30±21 ppb, and 35±10 ug/m3, respectively. This shows that the Bangkok sites were located in highly polluted environments dominated by vehicle emissions. Further, the results were analysed to ascertain whether significant seasonal variation existed in the measurements. It was found that levels of both gaseous pollutants and particulate matter were higher in the dry season than in the wet season. More broadly, the results show that pollutant levels were highest at locations along the Chao Phraya River Basin known to have a large number of vehicles and extensive biomass burning. This pattern suggests that the principal pollutants came from these anthropogenic sources. This study contributes to the body of knowledge on ambient air pollution, such as particulate matter and gaseous pollutants, and more specifically on air pollution along the Chao Phraya River Basin. Further, this study is one of the first to utilise continuous mobile monitoring along a river in order to obtain accurate measurements during the data collection period. Overall, the results of this study can be used to formulate environmental policy in river basins in order to reduce the physical effects on human health.
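
A minimal sketch of the PCA step is given below with scikit-learn, using synthetic daily readings drawn around the urban and rural means and standard deviations quoted above. The sample size, random seed, and standardisation choice are assumptions, and this is not the authors' dataset.

```python
# Illustrative sketch of the PCA step described above, applied to a small
# synthetic table of daily pollutant readings (SO2, CO, NOx, O3, PM10).
# Standardising before PCA is a common choice because the species are
# reported in different units (ppb, ppm, ug/m3); this is not the authors' data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# columns: SO2 (ppb), CO (ppm), NOx (ppb), O3 (ppb), PM10 (ug/m3)
urban = rng.normal([3, 0.5, 30, 19, 40], [1, 0.5, 21, 16, 20], size=(60, 5))
rural = rng.normal([1, 0.1, 25, 30, 35], [0.5, 0.05, 17, 21, 10], size=(60, 5))
X = np.vstack([urban, rural])

X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=2).fit(X_std)

print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 2))
print("PC1 loadings (SO2, CO, NOx, O3, PM10):", np.round(pca.components_[0], 2))
```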

Keywords: air pollution, Chao Phraya river basin, meteorology, seasonal variation, principal component analysis

Procedia PDF Downloads 285
316 Analysis of Overall Thermo-Elastic Properties of Random Particulate Nanocomposites with Various Interphase Models

Authors: Lidiia Nazarenko, Henryk Stolarski, Holm Altenbach

Abstract:

In the paper, a (hierarchical) approach to the analysis of the thermo-elastic properties of random composites with interphases is outlined and illustrated. It is based on a statistical homogenization method, the method of conditional moments, combined with the recently introduced notion of the energy-equivalent inhomogeneity, which is extended here to include thermal effects. After an exposition of the general principles, the approach is applied to the investigation of the effective thermo-elastic properties of a material with randomly distributed nanoparticles. The basic idea of the equivalent inhomogeneity is to replace the inhomogeneity and its surrounding interphase by a single equivalent inhomogeneity with a constant stiffness tensor and coefficient of thermal expansion, combining the thermal and elastic properties of both. The equivalent inhomogeneity is then perfectly bonded to the matrix, which allows composites with interphases to be analyzed using techniques devised for problems without interphases. From the mechanical viewpoint, the definition of the equivalent inhomogeneity is based on Hill's energy equivalence principle, applied to the problem consisting only of the original inhomogeneity and its interphase. It is more general than the definitions proposed in the past in that, conceptually and practically, it allows inhomogeneities of various shapes and various interphase models to be considered. This is illustrated for spherical particles with two interphase models, the Gurtin-Murdoch material surface model and the spring layer model. The resulting equivalent inhomogeneities are subsequently used to determine the effective thermo-elastic properties of randomly distributed particulate composites. The effective stiffness tensor and coefficient of thermal expansion of the material with the equivalent inhomogeneities so defined are determined by the method of conditional moments. Closed-form expressions for the effective thermo-elastic parameters of a composite consisting of a matrix and randomly distributed spherical inhomogeneities are derived for the bulk and shear moduli as well as for the coefficient of thermal expansion. The dependence of the effective parameters on the interphase properties is included in the resulting expressions, exhibiting analytically the nature of size effects in nanomaterials. As a numerical example, an epoxy matrix with randomly distributed spherical glass particles is investigated. The dependence of the effective bulk and shear moduli, as well as of the effective thermal expansion coefficient, on the particle volume fraction (for different nanoparticle radii) and on the nanoparticle radius (for a fixed volume fraction of nanoparticles) is compared for the different interphase models and discussed in the context of other theoretical predictions. Possible applications of the proposed approach to short-fiber composites with various types of interphases are discussed.
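
The closed-form conditional-moments expressions themselves are not reproduced here. As a simpler stand-in that shows how effective bulk and shear moduli and the thermal expansion coefficient of a particulate composite can be evaluated, the sketch below uses the classical two-phase Hashin-Shtrikman/Mori-Tanaka estimates together with Levin's relation, with generic epoxy/glass inputs. It ignores the interphase and therefore shows no size effect, which is precisely what the energy-equivalent inhomogeneity of the paper is designed to capture.

```python
# Stand-in illustration only: the paper derives closed-form effective properties
# with the method of conditional moments and an energy-equivalent inhomogeneity
# that absorbs the interphase. This sketch instead evaluates the classical
# two-phase Hashin-Shtrikman / Mori-Tanaka estimates for spherical particles
# (no interphase, hence no size effect) plus Levin's relation for the thermal
# expansion coefficient, with generic epoxy/glass input values (assumed).
def effective_properties(K1, G1, a1, K2, G2, a2, f2):
    """Matrix = phase 1, spherical particles = phase 2, particle volume fraction f2."""
    f1 = 1.0 - f2
    # Hashin-Shtrikman / Mori-Tanaka effective bulk and shear moduli
    K_eff = K1 + f2 / (1.0 / (K2 - K1) + 3.0 * f1 / (3.0 * K1 + 4.0 * G1))
    G_eff = G1 + f2 / (1.0 / (G2 - G1)
                       + 6.0 * f1 * (K1 + 2.0 * G1) / (5.0 * G1 * (3.0 * K1 + 4.0 * G1)))
    # Levin's relation links the effective CTE to the effective bulk modulus
    a_eff = (f1 * a1 + f2 * a2
             + (a1 - a2) / (1.0 / K1 - 1.0 / K2)
             * (1.0 / K_eff - (f1 / K1 + f2 / K2)))
    return K_eff, G_eff, a_eff

# generic inputs: epoxy matrix and glass particles (GPa, 1/K); assumed values
K_m, G_m, a_m = 4.0, 1.2, 60e-6
K_p, G_p, a_p = 43.0, 30.0, 8e-6
for f in (0.1, 0.2, 0.3):
    K, G, a = effective_properties(K_m, G_m, a_m, K_p, G_p, a_p, f)
    print(f"f = {f:.1f}: K_eff = {K:.2f} GPa, G_eff = {G:.2f} GPa, alpha_eff = {a*1e6:.1f}e-6/K")
```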

Keywords: effective properties, energy equivalence, Gurtin-Murdoch surface model, interphase, random composites, spherical equivalent inhomogeneity, spring layer model

Procedia PDF Downloads 185
315 Devulcanization of Waste Rubber Using Thermomechanical Method Combined with Supercritical CO₂

Authors: L. Asaro, M. Gratton, S. Seghar, N. Poirot, N. Ait Hocine

Abstract:

Rubber waste disposal is an environmental problem, and much research is centred in particular on the management of discarded tires. Despite the different ways of handling used tires, the most common is to deposit them in a landfill, creating stocks of tires. These stocks pose a fire hazard and provide a habitat for rodents, mosquitoes, and other pests, causing health and environmental problems. Because of the three-dimensional network structure of rubbers and their specific composition, which includes several additives, their recycling is a standing technological challenge. The technique that can break down the crosslink bonds in the rubber is called devulcanization. Strictly, devulcanization can be defined as a process in which the poly-, di-, and mono-sulfidic bonds formed during vulcanization are totally or partially broken. In recent years, supercritical carbon dioxide (scCO₂) has been proposed as a green devulcanization atmosphere, because it is chemically inactive, nontoxic, nonflammable, and inexpensive. Its critical point is easily reached (31.1 °C and 7.38 MPa), and residual scCO₂ in the devulcanized rubber can be easily and rapidly removed by releasing the pressure. In this study, thermomechanical devulcanization of ground tire rubber (GTR) was performed in a twin screw extruder under diverse operating conditions. Supercritical CO₂ was added in different quantities to promote the devulcanization. Temperature, screw speed, and the quantity of CO₂ were the parameters varied during the process. The devulcanized rubber was characterized by its devulcanization percent and by its crosslink density determined by swelling in toluene. Infrared spectroscopy (FTIR) and gel permeation chromatography (GPC) were also carried out, and the results were related to the Mooney viscosity. The results showed that the crosslink density decreases as the extruder temperature and speed increase and, as expected, the soluble fraction increases with both parameters. The Mooney viscosity of the devulcanized rubber decreases as the extruder temperature increases, and the values reached were in good correlation (R = 0.96) with the soluble fraction. In order to analyze whether the devulcanization was caused by main-chain or crosslink scission, Horikx's theory was used. The results showed that all tests fall on the curve that corresponds to sulfur bond scission, which indicates that devulcanization occurred successfully without degradation of the rubber. In the spectra obtained by FTIR, it was observed that none of the characteristic peaks of the GTR were modified by the different devulcanization conditions. This was expected because, due to the low sulfur content (~1.4 phr) and the multiphasic composition of the GTR, it is very difficult to evaluate the devulcanization by this technique. The lowest crosslink density was reached with 1 cm³/min of CO₂, and the power consumed in that process was also near the minimum. These results encourage us to carry out further analyses to better understand the effect of the different conditions on the devulcanization process. The analysis is currently being extended to monophasic rubbers such as ethylene propylene diene monomer rubber (EPDM) and natural rubber (NR).
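
A brief sketch of the swelling-based characterization mentioned above (Flory-Rehner crosslink density from toluene swelling and the resulting devulcanization percent) is given below. The rubber volume fractions and the interaction parameter are illustrative assumptions, not the measured values of this study.

```python
# Sketch of the swelling-based crosslink density calculation commonly used with
# toluene swelling (Flory-Rehner equation) and of the devulcanization percent
# derived from it. Input values (swollen rubber volume fraction, chi parameter)
# are illustrative placeholders, not the measurements of this study.
import math

def crosslink_density(v_r, chi=0.39, V_s=106.3):
    """Flory-Rehner estimate: mol of effective network chains per cm^3 of rubber.
    v_r : volume fraction of rubber in the swollen gel
    chi : rubber-toluene interaction parameter (assumed)
    V_s : molar volume of toluene, cm^3/mol
    """
    return -(math.log(1.0 - v_r) + v_r + chi * v_r**2) / (
        V_s * (v_r ** (1.0 / 3.0) - v_r / 2.0))

nu_before = crosslink_density(0.30)   # untreated GTR, assumed swelling result
nu_after = crosslink_density(0.22)    # devulcanized sample, assumed
devulc_percent = (1.0 - nu_after / nu_before) * 100.0

print(f"crosslink density before: {nu_before:.2e} mol/cm^3")
print(f"crosslink density after:  {nu_after:.2e} mol/cm^3")
print(f"devulcanization percent:  {devulc_percent:.0f} %")
```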

Keywords: devulcanization, recycling, rubber, waste

Procedia PDF Downloads 385
314 The Joy of Painless Maternity: The Reproductive Policy of the Bolsheviks in the 1930s

Authors: Almira Sharafeeva

Abstract:

In the Soviet Union of the 1930s, motherhood was seen as a natural need of women. The masculine Bolshevik state did not see the emancipated woman as free from her maternal burden. In order to support the idea of "joyful motherhood", a medical discourse on the anesthesia of childbirth emerged. In March 1935, at the IX Congress of Obstetricians and Gynecologists, the People's Commissar of Public Health of the RSFSR, G. N. Kaminsky, raised the issue of anesthesia of childbirth. From that year onward, medical, literary, and artistic publications began, with remarkable frequency, to publish articles and studies devoted to the issue, and the goal of anesthetizing all childbirths in the USSR was proclaimed. These publications were often filled with anti-German and anti-capitalist propaganda, through which the advantages of socialism over capitalism and Nazism were demonstrated. At congresses, in journals, and at institute meetings, doctors' discussions of obstetric anesthesia were accompanied by discussions of shortening the duration of labor, the prevention of disease, the admission of nurses to the procedure, and the proper behavior of women during childbirth. Drawing on articles from medical periodicals of the 1930s, brochures, and documents from the collections of the Institute of Obstetrics and Gynecology of the Academy of Medical Sciences of the USSR (TsGANTD SPb) and the Department of Obstetrics and Gynecology of the NKZ USSR (GARF), this paper shows how the advantages of the Soviet system and the socialist way of life were constructed through the problem of childbirth pain relief, how childbirth pain relief in the USSR was related to the foreign policy situation, and how projects for labor pain relief were related to the state's anti-abortion policy. The study also attempts to answer the question of why anesthesia of childbirth did not become widespread in the USSR and how, through this medical procedure, the Soviet authorities tried to take control of a female function, childbirth, that was not available to men. Considering this subject from the perspective of gender studies and the social history of medicine, it is productive to use the term "biopolitics". Michel Foucault and Antonio Negri wrote that biopolitics takes under its wing the control and management of hygiene, nutrition, fertility, sexuality, and contraception. The central issue of biopolitics is the reproduction of the population. It includes strategies for intervening in collective existence in the name of life and health, and modes of subjectivation by which individuals are made to work on themselves. The Soviet state, through intervention in the reproductive lives of its citizens, sought to realize its goals of population growth, which was necessary to demonstrate the benefits of living in the Soviet Union and to train a pool of builders of socialism. The woman's body was seen as the object over which the socialist experiment of reproductive policy was conducted.

Keywords: labor anesthesia, biopolitics of stalinism, childbirth pain relief, reproductive policy

Procedia PDF Downloads 70