Search results for: state space description
491 Symphony of Healing: Exploring Music and Art Therapy’s Impact on Chemotherapy Patients with Cancer
Authors: Sunidhi Sood, Drashti Narendrakumar Shah, Aakarsh Sharma, Nirali Harsh Panchal, Maria Karizhenskaia
Abstract:
Cancer is a global health concern, causing a significant number of deaths, with chemotherapy being a standard treatment method. However, chemotherapy often induces side effects that profoundly impact the physical and emotional well-being of patients, lowering their overall quality of life (QoL). This research aims to investigate the potential of music and art therapy as holistic adjunctive therapy for cancer patients undergoing chemotherapy, offering non-pharmacological support. This is achieved through a comprehensive review of existing literature with a focus on the following themes: stress and anxiety alleviation, emotional expression and coping skill development, transformative changes, and pain management with mood upliftment. A systematic search was conducted using Medline, Google Scholar, and St. Lawrence College Library, considering original, peer-reviewed research papers published from 2014 to 2023. The review solely incorporated studies focusing on the impact of music and art therapy on the health and overall well-being of cancer patients undergoing chemotherapy in North America. The findings from 16 studies involving pediatric oncology patients, females affected by breast cancer, and general oncology patients show that music and art therapies significantly reduce anxiety (standardized mean difference: -1.10) and improve perceived stress (median change: -4.0) and overall quality of life in cancer patients undergoing chemotherapy. Furthermore, music therapy has demonstrated the potential to decrease anxiety, depression, and pain during infusion treatments (average changes in resilience scale: 3.4 and 4.83 for instrumental and vocal music therapy, respectively). These findings call for the integration of music and art therapy into supportive care programs for cancer patients undergoing chemotherapy.
Moreover, it provides guidance to healthcare professionals and policymakers, facilitating the development of patient-centered strategies for cancer care in Canada. Further research is needed in collaboration with qualified therapists to examine its applicability and to explore and evaluate patients' perceptions and expectations in order to optimize the therapeutic benefits and overall patient experience. In conclusion, integrating music and art therapy into cancer care promises to substantially enhance the well-being and psychosocial state of patients undergoing chemotherapy. However, due to the small sample sizes in existing studies, further research is needed to bridge the knowledge gap and ensure a comprehensive, patient-centered approach, ultimately enhancing the quality of life (QoL) for individuals facing the challenges of cancer treatment.
Keywords: anxiety, cancer, chemotherapy, depression, music and art therapy, pain management, quality of life
Procedia PDF Downloads 76
490 Conditions That Brought Bounce-Back in Southern Europe: An Inter-Temporal and Cross-National Analysis on Female Labour Force Participation with Fuzzy Set Qualitative Comparative Analysis
Authors: A. Onur Kutlu, H. Tolga Bolukbasi
Abstract:
Since the 1990s, governments, international organizations and scholars have drawn increasing attention to the significance of women in the labour force. While advanced industrial countries in North Western Europe and North America managed to increase female labour force participation (FLFP) in the early post-World War II period, the emerging economies of the 1970s were only able to increase FLFP a decade later. Among these areas, Southern Europe features a wave of remarkable bounce-backs in FLFP. However, despite striking similarities between the features of Southern Europe and those of Turkey, Turkey has not been able to pull women into the labour force. Despite a host of institutional similarities, Turkey has failed to reach the level of her Southern European neighbours. This paper addresses the puzzle of why Turkey lags behind her Southern European neighbours in FLFP. There are signs that FLFP is currently reaching a critical threshold at a time when structural factors may allow an upward trend. The constellation of conditions which may bring rising FLFP in Turkey, however, is not known. In order to gain analytical leverage from similar transitions in countries that share similar labour market and welfare state regime characteristics, this paper first identifies the conditions in Southern Europe that brought rising FLFP, so as to explore the prospects for Turkey. Second, this paper takes these variables into fuzzy set Qualitative Comparative Analysis (fsQCA) as conditions which can potentially explain the outcome of rising FLFP in Portugal, Spain, Italy, Greece and Turkey. The purpose here is to identify any causal pathways that may lead to rising FLFP in Southern Europe. In order to do so, this study analyses two time periods in all cases, which represent different periods for different countries.
The first period is identified on the basis of low FLFP and the second on the basis of the transition to significantly higher FLFP. Third, the conditions are treated following the standard procedures in fsQCA, which yield an equifinal solution: two distinct paths to higher levels of FLFP in Southern Europe, each of which may potentially increase FLFP in Turkey. Based on this analysis, this paper proposes that there exist two distinct paths leading to higher levels of FLFP in Southern Europe. Among these paths, the salience of left parties emerges as a sufficient condition. In cases where this condition was not present, a second path combining enlarging service sector employment, increased tertiary education among women and increased childcare enrolment rates led to increasing FLFP.
Keywords: female labour force participation, fsQCA, Southern Europe, Turkey
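The sufficiency claims behind such fsQCA paths rest on Ragin's two standard set-theoretic measures, consistency and coverage. A minimal plain-Python sketch of both; the fuzzy-set membership scores below are hypothetical illustrations, not the study's calibrated data:

```python
# Sketch of fsQCA sufficiency measures (Ragin's consistency and coverage)
# on hypothetical fuzzy-set membership scores. The condition/outcome values
# are invented for illustration only.

def consistency(condition, outcome):
    # Degree to which cases with the condition also show the outcome:
    # sum(min(x, y)) / sum(x). Values near 1.0 indicate a near-sufficient condition.
    return sum(min(x, y) for x, y in zip(condition, outcome)) / sum(condition)

def coverage(condition, outcome):
    # Degree to which the outcome is accounted for by the condition:
    # sum(min(x, y)) / sum(y).
    return sum(min(x, y) for x, y in zip(condition, outcome)) / sum(outcome)

# Hypothetical memberships: "salience of left parties" vs. "rising FLFP",
# one score per country-period case.
left_parties = [0.9, 0.8, 0.2, 0.7, 0.1]
rising_flfp  = [0.8, 0.9, 0.4, 0.8, 0.3]

print(consistency(left_parties, rising_flfp))  # high -> near-sufficient
print(coverage(left_parties, rising_flfp))
```

In published fsQCA work a consistency threshold (often around 0.8 or higher) is applied before a condition or path is reported as sufficient, which is the kind of test the two paths above would have passed.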
Procedia PDF Downloads 327
489 Numerical Investigation of Flow Boiling within Micro-Channels in the Slug-Plug Flow Regime
Authors: Anastasios Georgoulas, Manolia Andredaki, Marco Marengo
Abstract:
The present paper investigates the hydrodynamics and heat transfer characteristics of slug-plug flows under saturated flow boiling conditions within circular micro-channels. Numerical simulations are carried out using an enhanced version of the open-source CFD solver ‘interFoam’ of the OpenFOAM CFD Toolbox. The proposed user-defined solver is based on the Volume of Fluid (VOF) method for interface advection, and the mentioned enhancements include the implementation of a smoothing process for spurious current reduction, the coupling with heat transfer and phase change, as well as the incorporation of conjugate heat transfer to account for transient solid conduction. In all of the cases considered in the present paper, a single-phase simulation is initially conducted until a quasi-steady state is reached with respect to the hydrodynamic and thermal boundary layer development. Then, a predefined and constant frequency of successive vapour bubbles is patched upstream at a certain distance from the channel inlet. The proposed numerical simulation set-up can capture the main hydrodynamic and heat transfer characteristics of slug-plug flow regimes within circular micro-channels. In more detail, the present investigation is focused on exploring the interaction between subsequent vapour slugs with respect to their generation frequency, the hydrodynamic characteristics of the liquid film between the generated vapour slugs and the channel wall, as well as of the liquid plug between two subsequent vapour slugs. The proposed investigation is carried out for three different working fluids and three different values of applied heat flux in the heated part of the considered micro-channel. The post-processing and analysis of the results indicate that the dynamics of the evolving bubbles in each case are influenced by both the upstream and downstream bubbles in the generated sequence. In each case, a slip velocity between the vapour bubbles and the liquid slugs is evident.
In most cases, interfacial waves that significantly reduce the liquid film thickness appear close to the bubble tail. Finally, in accordance with previous investigations, vortices identified in the liquid slugs between two subsequent vapour bubbles can significantly enhance the convection heat transfer between the liquid regions and the heated channel walls. The overall results of the present investigation can enhance the present understanding by providing better insight into the complex heat transfer mechanisms underpinning saturated boiling within micro-channels in the slug-plug flow regime.
Keywords: slug-plug flow regime, micro-channels, VOF method, OpenFOAM
Procedia PDF Downloads 267
488 Machine Learning for Disease Prediction Using Symptoms and X-Ray Images
Authors: Ravija Gunawardana, Banuka Athuraliya
Abstract:
Machine learning has emerged as a powerful tool for disease diagnosis and prediction. The use of machine learning algorithms has the potential to improve the accuracy of disease prediction, thereby enabling medical professionals to provide more effective and personalized treatments. This study focuses on developing a machine-learning model for disease prediction using symptoms and X-ray images. The importance of this study lies in its potential to assist medical professionals in accurately diagnosing diseases, thereby improving patient outcomes. Respiratory diseases are a significant cause of morbidity and mortality worldwide, and chest X-rays are commonly used in the diagnosis of these diseases. However, accurately interpreting X-ray images requires significant expertise and can be time-consuming, making it difficult to diagnose respiratory diseases in a timely manner. By incorporating machine learning algorithms, we can significantly enhance disease prediction accuracy, ultimately leading to better patient care. The study utilized the Mask R-CNN algorithm, which is a state-of-the-art method for object detection and segmentation in images, to process chest X-ray images. The model was trained and tested on a large dataset of patient information, which included both symptom data and X-ray images. The performance of the model was evaluated using a range of metrics, including accuracy, precision, recall, and F1-score. The results showed that the model achieved an accuracy rate of over 90%, indicating that it was able to accurately detect and segment regions of interest in the X-ray images. In addition to X-ray images, the study also incorporated symptoms as input data for disease prediction. The study used three different classifiers, namely Random Forest, K-Nearest Neighbor and Support Vector Machine, to predict diseases based on symptoms. These classifiers were trained and tested using the same dataset of patient information as the X-ray model. 
The results showed promising accuracy rates for predicting diseases using symptoms, with ensemble learning techniques significantly improving the accuracy of disease prediction. The model developed in this study has the potential to assist medical professionals in diagnosing respiratory diseases more accurately and efficiently. However, it is important to note that the accuracy of the model can be affected by several factors, including the quality of the X-ray images, the size of the dataset used for training, and the complexity of the disease being diagnosed. In conclusion, the study demonstrated the potential of machine learning algorithms for disease prediction using symptoms and X-ray images. The use of these algorithms can improve the accuracy of disease diagnosis, ultimately leading to better patient care. Further research is needed to validate the model's accuracy and effectiveness in a clinical setting and to expand its application to other diseases.
Keywords: K-nearest neighbor, mask R-CNN, random forest, support vector machine
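The ensemble step over the three symptom classifiers can be as simple as hard majority voting over their per-patient predictions. A self-contained sketch; the disease labels, predictions, and ground truth below are hypothetical placeholders, not the study's outputs:

```python
from collections import Counter

# Sketch of hard majority voting across three classifiers (Random Forest,
# K-Nearest Neighbor, SVM in the study). All labels below are invented.

def majority_vote(*prediction_lists):
    # For each patient, take the most common label across the classifiers.
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*prediction_lists)]

def accuracy(predicted, actual):
    # Fraction of patients whose predicted label matches the true label.
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

rf    = ["flu", "covid", "tb", "flu", "covid"]   # hypothetical RF predictions
knn   = ["flu", "covid", "flu", "flu", "tb"]     # hypothetical KNN predictions
svm   = ["flu", "tb", "tb", "flu", "covid"]      # hypothetical SVM predictions
truth = ["flu", "covid", "tb", "flu", "covid"]   # hypothetical ground truth

ensemble = majority_vote(rf, knn, svm)
print(ensemble)
print(accuracy(ensemble, truth))
```

Note how the vote can be correct even where an individual classifier errs (e.g. the third patient here), which is the intuition behind the improved ensemble accuracy reported above.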
Procedia PDF Downloads 157
487 Exploration into Bio Inspired Computing Based on Spintronic Energy Efficiency Principles and Neuromorphic Speed Pathways
Authors: Anirudh Lahiri
Abstract:
Neuromorphic computing, inspired by the intricate operations of biological neural networks, offers a revolutionary approach to overcoming the limitations of traditional computing architectures. This research proposes the integration of spintronics with neuromorphic systems, aiming to enhance computational performance, scalability, and energy efficiency. Traditional computing systems, based on the Von Neumann architecture, struggle with scalability and efficiency due to the segregation of memory and processing functions. In contrast, the human brain exemplifies high efficiency and adaptability, processing vast amounts of information with minimal energy consumption. This project explores the use of spintronics, which utilizes the electron's spin rather than its charge, to create more energy-efficient computing systems. Spintronic devices, such as magnetic tunnel junctions (MTJs) manipulated through spin-transfer torque (STT) and spin-orbit torque (SOT), offer a promising pathway to reducing power consumption and enhancing the speed of data processing. The integration of these devices within a neuromorphic framework aims to replicate the efficiency and adaptability of biological systems. The research is structured into three phases: an exhaustive literature review to build a theoretical foundation, laboratory experiments to test and optimize the theoretical models, and iterative refinements based on experimental results to finalize the system. The initial phase focuses on understanding the current state of neuromorphic and spintronic technologies. The second phase involves practical experimentation with spintronic devices and the development of neuromorphic systems that mimic synaptic plasticity and other biological processes. The final phase focuses on refining the systems based on feedback from the testing phase and preparing the findings for publication. The expected contributions of this research are twofold. 
Firstly, it aims to significantly reduce the energy consumption of computational systems while maintaining or increasing processing speed, addressing a critical need in the field of computing. Secondly, it seeks to enhance the learning capabilities of neuromorphic systems, allowing them to adapt more dynamically to changing environmental inputs and thus better mimic the human brain's functionality. The integration of spintronics with neuromorphic computing could revolutionize how computational systems are designed, making them more efficient, faster, and more adaptable. This research aligns with the ongoing pursuit of energy-efficient and scalable computing solutions, marking a significant step forward in the field of computational technology.
Keywords: material science, biological engineering, mechanical engineering, neuromorphic computing, spintronics, energy efficiency, computational scalability, synaptic plasticity
Procedia PDF Downloads 48
486 Correlation of Unsuited and Suited 5ᵗʰ Female Hybrid III Anthropometric Test Device Model under Multi-Axial Simulated Orion Abort and Landing Conditions
Authors: Christian J. Kennett, Mark A. Baldwin
Abstract:
As several companies are working towards returning American astronauts back to space on US-made spacecraft, NASA developed a human flight certification-by-test and analysis approach due to the cost-prohibitive nature of extensive testing. This process relies heavily on the quality of analytical models to accurately predict crew injury potential specific to each spacecraft and under dynamic environments not tested. As the prime contractor on the Orion spacecraft, Lockheed Martin was tasked with quantifying the correlation of analytical anthropometric test devices (ATDs), also known as crash test dummies, against test measurements under representative impact conditions. Multiple dynamic impact sled tests were conducted to characterize Hybrid III 5th ATD lumbar, head, and neck responses with and without a modified shuttle-era advanced crew escape suit (ACES) under simulated Orion landing and abort conditions. Each ATD was restrained via a 5-point harness in a mockup Orion seat fixed to a dynamic impact sled at the Wright Patterson Air Force Base (WPAFB) Biodynamics Laboratory in the horizontal impact accelerator (HIA). ATDs were subject to multiple impact magnitudes, half-sine pulse rise times, and XZ - ‘eyeballs out/down’ or Z-axis ‘eyeballs down’ orientations for landing or an X-axis ‘eyeballs in’ orientation for abort. Several helmet constraint devices were evaluated during suited testing. Unique finite element models (FEMs) were developed of the unsuited and suited sled test configurations using an analytical 5th ATD model developed by LSTC (Livermore, CA) and deformable representations of the seat, suit, helmet constraint countermeasures, and body restraints. Explicit FE analyses were conducted using the non-linear solver LS-DYNA. 
Head linear and rotational acceleration, head rotational velocity, upper neck force and moment, and lumbar force time histories were compared between test and analysis using the enhanced error assessment of response time histories (EEARTH) composite score index. The EEARTH rating, paired with the correlation and analysis (CORA) corridor rating, provided a composite ISO score that was used to assess model correlation accuracy. NASA occupant protection subject matter experts established an ISO score of 0.5 or greater as the minimum expectation for correlating analytical and experimental ATD responses. Unsuited 5th ATD head X, Z, and resultant linear accelerations, head Y rotational accelerations and velocities, neck X and Z forces, and lumbar Z forces all showed consistent ISO scores above 0.5 in the XZ impact orientation, regardless of peak g-level or rise time. Upper neck Y moments were near or above the 0.5 score for most of the XZ cases. Similar trends were found in the XZ and Z-axis suited tests despite the addition of several different countermeasures for restraining the helmet. For the X-axis ‘eyeballs in’ loading direction, only resultant head linear acceleration and lumbar Z-axis force produced ISO scores above 0.5, whether unsuited or suited. The analytical LSTC 5th ATD model showed good correlation across multiple head, neck, and lumbar responses in both the unsuited and suited configurations when loaded in the XZ ‘eyeballs out/down’ direction. Upper neck moments were consistently the most difficult to predict, regardless of impact direction or test configuration.
Keywords: impact biomechanics, manned spaceflight, model correlation, multi-axial loading
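EEARTH and CORA themselves combine several weighted components (magnitude, phase, shape, and corridor terms) and are not reproduced here. The toy sketch below uses only a zero-lag normalized cross-correlation between a test pulse and a simulated pulse, to illustrate the general idea of scoring test/analysis agreement between time histories, and also why a shape-only score is insufficient on its own: it cannot see a pure amplitude error, which is exactly what the magnitude components of composite metrics are for. The half-sine pulses are invented, not test data:

```python
import math

# Toy comparison of a "test" vs. "simulated" time history. This is NOT the
# EEARTH or CORA formulation used in the study; it is only a zero-lag
# normalized cross-correlation, shown to illustrate the scoring concept.

def zero_lag_correlation(test, sim):
    # 1.0 means identical shape up to an overall scale factor.
    num = sum(t * s for t, s in zip(test, sim))
    den = math.sqrt(sum(t * t for t in test) * sum(s * s for s in sim))
    return num / den

# Hypothetical half-sine acceleration pulses (like the impact pulses above);
# the simulated pulse is 10% low in peak amplitude.
test_pulse = [math.sin(math.pi * i / 20) for i in range(21)]
sim_pulse  = [0.9 * math.sin(math.pi * i / 20) for i in range(21)]

# ~1.0: shapes match perfectly, so the 10% amplitude error is invisible here.
print(zero_lag_correlation(test_pulse, sim_pulse))
```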
Procedia PDF Downloads 114
485 Quantum Mechanics as a Limiting Case of Relativistic Mechanics
Authors: Ahmad Almajid
Abstract:
The idea of unifying quantum mechanics with general relativity is still a dream for many researchers, as physics offers only two paths, no more: Einstein's path, which is mainly based on particle mechanics, and the path of Paul Dirac and others, which is based on wave mechanics. The incompatibility of the two approaches is due to the radical difference in the initial assumptions and the mathematical nature of each approach. Logical thinking in modern physics leads us to two problems: - In quantum mechanics, despite its success, the problem of measurement and the problem of wave function interpretation remain obscure. - In special relativity, despite the success of the equivalence of rest mass and energy, the fact that the energy becomes infinite at the speed of light is contrary to logic, because the speed of light is not infinite and the mass of the particle is not infinite either. These contradictions arise from the overlap of relativistic and quantum mechanics in the neighborhood of the speed of light, and in order to solve these problems, one must understand well how to move from relativistic mechanics to quantum mechanics, or rather, how to unify them in a way different from Dirac's method, in order to go along with God or Nature, since, as Einstein said, "God doesn't play dice." From De Broglie's hypothesis about wave-particle duality, Léon Brillouin's definition of the new proper time was deduced, and thus the quantum Lorentz factor was obtained. Finally, using the Euler-Lagrange equation, we arrive at new equations in quantum mechanics. In this paper, the two problems in modern physics mentioned above are solved; it can be said that this new approach to quantum mechanics will enable us to unify it with general relativity quite simply. If experiments prove the validity of the results of this research, we will in the future be able to transport matter at speeds close to the speed of light.
Finally, this research yielded three important results: 1- the Lorentz quantum factor; 2- Planck energy as a limiting case of Einstein energy; 3- real quantum mechanics, in which new equations for quantum mechanics match and exceed Dirac's equations; these equations have been reached in a completely different way from Dirac's method. These equations show that quantum mechanics is a limiting case of relativistic mechanics. At the Solvay Conference in 1927, the debate about quantum mechanics between Bohr, Einstein, and others reached its climax; while Bohr suggested that if particles are not observed, they are in a probabilistic state, Einstein made his famous claim ("God does not play dice"). Thus, Einstein was right, especially when he did not accept the principle of indeterminacy in quantum theory, although experiments support quantum mechanics. However, the results of our research indicate that God really does not play dice; when the electron disappears, it turns into amicable particles or an elastic medium, according to the above equations. Likewise, Bohr was also right when he indicated that there must be a science like quantum mechanics to monitor and study the motion of subatomic particles, but the picture in front of him was blurry and not clear, so he resorted to the probabilistic interpretation.
Keywords: Lorentz quantum factor, Planck's energy as a limiting case of Einstein's energy, real quantum mechanics, new equations for quantum mechanics
Procedia PDF Downloads 79
484 Missed Opportunities for Immunization of Under-Five Children in Calabar South County, Cross River State, Nigeria: The Way Forward
Authors: Celestine Odigwe, Epoke Lincoln, Rhoda-Dara Ephraim
Abstract:
Background: Immunization against the childhood killer diseases is the cardinal strategy all over the world for the prevention of these diseases in under-five children. These diseases include tuberculosis, measles, polio, tetanus, diphtheria, pertussis, yellow fever, hepatitis B, and Haemophilus influenzae type B. 6.9 million children die before their fifth birthday; 80% of the world's deaths in children under 5 years occur in 25 countries, most in Africa and Asia, and 2 million children can be saved each year with routine immunization. Therefore, failure to achieve total immunization coverage puts several children at risk. Aim: The aim of the study was to ascertain the prevalence of, and investigate the various reasons and causes why, several under-five children in a suburb of Calabar Municipal County fail to get the required immunizations as and when due, and possibly the consequences, so that efforts can be re-directed towards the solution of the problems so identified. Methods: The study was a community-based cross-sectional study. The respondents were the mothers/guardians of the sampled children, who were all aged 0-59 months. To be eligible for recruitment into the study, the parent or guardian was required to give informed consent and reside within Calabar South County with his/her children aged 0-59 months. We calculated our sample size using the Leslie-Kish formula and used a two-staged sampling method, first to ballot for the wards to be involved and then to select four of the most populated ones in the wards chosen. Data collection was by interviewer-administered structured questionnaire (Appendix I). Data collected were entered and analyzed using the Statistical Package for the Social Sciences (SPSS) Version 20. Percentages were calculated and represented using charts and tables. Results: The number of children sampled was 159. We found that 150 were fully immunized and 9 were not; the prevalence of missed opportunity from the study was 32%.
The reasons for missed opportunities were varied, ranging from false contraindications and logistical problems resulting in very poor access roads to health facilities, to poor organization of health centers together with negative health worker attitudes. Some of the consequences of these missed opportunities were increased susceptibility to vaccine-preventable diseases, resurgence of the above diseases, and increased morbidity and mortality of children aged less than 5 years. Conclusion: We found that ignorance on the part of both parents/guardians and healthcare staff, together with infrastructural inadequacies in the county, such as roads and poor electric power supply for storage of vaccines, were hugely responsible for most missed opportunities for immunization. The details of these, suggestions for improvement, and the way forward are discussed.
Keywords: missed opportunity, immunization, under five, Calabar South
Procedia PDF Downloads 326
483 Innovative Food Related Modification of the Day-Night Task Demonstrates Impaired Inhibitory Control among Patients with Binge-Purge Eating Disorder
Authors: Sigal Gat-Lazer, Ronny Geva, Dan Ramon, Eitan Gur, Daniel Stein
Abstract:
Introduction: Eating disorders (ED) are common psychopathologies which involve distorted body image and eating disturbances. Binge-purge eating disorders (B/P ED) are characterized by repetitive events of binge eating followed by purges. Patients with B/P ED may be seen as impulsive, especially in relation to food stimuli and affective conditions. The current study included an innovative modification of the day-night task targeted at assessing inhibitory control among patients with B/P ED. Methods: This prospective study included 50 patients with B/P ED during the acute phase of illness (T1), upon their admission to a specialized ED department in a tertiary center. 34 patients repeated the study towards discharge to ambulatory care (T2). Treatment effect was evaluated by BMI and by emotional questionnaires regarding depression and anxiety, namely the Beck Depression Inventory and the State Trait Anxiety Inventory. The control group included 36 healthy controls with matched demographic parameters who performed both T1 and T2 assessments. The current modification is based on the emotional day-night task (EDNT), which involves five emotional stimuli added to the sun and moon pictures presented to participants. In the current study, we designed the food-emotional modification of the day-night task (F-EDNT) with food stimuli of an egg and a banana, which resemble the sun and moon, respectively, in five emotional states (angry, sad, happy, scrambled and neutral). During this computerized task, participants were instructed to press the “day” button in response to moon and banana stimuli and the “night” button when sun and egg were presented. Accuracy (A) and reaction time (RT) were evaluated and compared between the EDNT and F-EDNT as a reflection of participants’ inhibitory control. Results: Patients with B/P ED had significantly improved BMI, depression and anxiety scores at T2 compared to T1 (all p<0.001).
Task performance was similar among patients and controls in the EDNT, without significant A or RT differences at both T1 and T2. On the F-EDNT during T1, B/P ED patients had significantly reduced accuracy in four of the five emotional conditions compared to controls: angry (73±25% vs. 84±15%, respectively), sad (69±25% vs. 80±18%, respectively), happy (73±24% vs. 82±18%, respectively) and scrambled (74±24% vs. 84±13%, respectively; all p<0.05). Additionally, patients’ RT to food stimuli was significantly faster compared to neutral ones, in both the sad and neutral emotional conditions (356±146 vs. 400±141 and 378±124 vs. 412±116 msec, respectively, p<0.05). These significant differences between groups as a function of stimulus type were diminished at T2. Conclusion: Having to process food-related content, in particular in an emotional context, seems to be impaired in patients with B/P ED during the acute phase of their illness and elicits greater impulsivity. Innovative modifications using such procedures seem to be sensitive to patients’ illness phase and thus may be implemented during screening and follow-up throughout the clinical management of these patients.
Keywords: binge purge eating disorders, day night task modification, eating disorders, food related stimulations
Procedia PDF Downloads 382
482 Overcoming Reading Barriers in an Inclusive Mathematics Classroom with Linguistic and Visual Support
Authors: A. Noll, J. Roth, M. Scholz
Abstract:
The importance of written language in a democratic society is non-controversial. Students with physical, learning, cognitive or developmental disabilities often have difficulties in understanding information which is presented in written language only. These students face obstacles in diverse domains. In order to reduce such barriers in educational as well as in out-of-school areas, access to written information must be facilitated. Readability can be enhanced by linguistic simplifications such as the application of easy-to-read language. Easy-to-read language is intended to help people with disabilities participate socially and politically in society. The authors of its guidelines state, for example, that only short, simple words should be used, whereas complex sentences should be avoided. So far, these guidelines have not been empirically validated. Another way to reduce reading barriers is the use of visual support, for example, symbols. A symbol conveys, in contrast to a photo, a single idea or concept. Little empirical data exists about the use of symbols to foster the readability of texts. Nevertheless, a positive influence can be assumed, e.g., because of the multimedia principle, which indicates that people learn better from words and pictures than from words alone. A qualitative interview and eye-tracking study conducted by the authors gives cause for the assumption that, besides the illustration of single words, the visualization of complete sentences may be helpful. Thus, the effect of photos which illustrate the content of complete sentences is also investigated in this study. This leads to the main research question: does the use of easy-to-read language and/or enriching text with symbols or photos facilitate pupils’ comprehension of learning tasks? The sample consisted of students with learning difficulties (N = 144) and students without SEN (N = 159).
The students worked on the tasks, which dealt with the introduction of fractions, individually. While experimental group 1 received a linguistically simplified version of the tasks, experimental group 2 worked with a variant which was linguistically simplified and in which, furthermore, the keywords of the tasks were visualized by symbols. Experimental group 3 worked on exercises which were simplified by easy-to-read language and in which the content of whole sentences was illustrated by photos. Experimental group 4 received a non-simplified version. The participants’ reading ability and IQ were assessed beforehand to build four comparable groups. There is a significant effect of the different settings on the students’ results, F(3, 140) = 2.932, p = 0.036. A post-hoc analysis with multiple comparisons shows that this significance results from the difference between experimental groups 3 and 4. The students in the easy-to-read language plus photos group worked on the exercises significantly more successfully than the students in the group with no simplifications. Further results, which refer, among others, to the influence of the students' reading ability, will be presented at ICERI 2018.
Keywords: inclusive education, mathematics education, easy-to-read language, photos, symbols, special educational needs
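An F statistic of the reported form (here F(3, 140), i.e. four groups) is computed from between-group and within-group variance. A self-contained sketch of one-way ANOVA on invented task scores for four small, equal-sized groups; the data are placeholders, not the study's:

```python
# Hand-computed one-way ANOVA F statistic, the kind of test behind the
# group comparison reported above. The four score lists are invented.

def one_way_anova_f(groups):
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-groups sum of squares (df = k - 1)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares (df = N - k)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between = len(groups) - 1
    df_within = n_total - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

groups = [
    [12, 14, 11, 13],  # hypothetical: easy-to-read only
    [13, 15, 12, 14],  # hypothetical: easy-to-read + symbols
    [16, 17, 15, 18],  # hypothetical: easy-to-read + photos
    [10, 11, 9, 12],   # hypothetical: unsimplified
]
print(one_way_anova_f(groups))  # compared against F(df_between, df_within) for the p-value
```

The resulting F value would then be compared against the F distribution with (3, 12) degrees of freedom here, just as the study compares its statistic against F(3, 140).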
Procedia PDF Downloads 155
481 Analyzing Spatio-Structural Impediments in the Urban Trafficscape of Kolkata, India
Authors: Teesta Dey
Abstract:
Integrated transport development with proper traffic management leads to the sustainable growth of any urban sphere. Appropriate mass transport planning is essential for populous cities in third-world countries like India. The exponential growth of motor vehicles on an unplanned road network is now a common feature of major urban centres in India, and Kolkata, the third largest mega city in India, is no exception. The imbalance between demand and supply of unplanned transport services in this city is manifested in the high economic and environmental costs borne by the associated society. With the passage of time, the growth and extent of passenger demand for rapid urban transport have outstripped proper infrastructural planning and caused severe transport problems across the urban realm. Hence Kolkata stands out in the world as one of the most crisis-ridden metropolises. The urban transport crisis of this city involves severe traffic congestion, disparity in mass transport services across changing peripheral land uses, route overlapping, lowered travel speed and faulty implementation of governmental plans, mostly induced by the rapid growth of private vehicles on limited road space with a huge carbon footprint. Therefore the paper critically analyzes the extant road network pattern for improving regional connectivity and accessibility, assesses the degree of congestion, identifies the deviation from the demand-supply balance and finally evaluates the emerging alternative transport options promoted by the government. For this purpose, linear, nodal and spatial transport networks have been assessed based on selected indices, viz. Road Degree, Traffic Volume, Shimbel Index, Direct Bus Connectivity, Average Travel and Waiting Time Indices, Route Variety, Service Frequency, Bus Intensity, Concentration Analysis, Delay Rate, Quality of Traffic Transmission, Lane Length Duration Index and Modal Mix. 
A total of 20 Traffic Intersection Points (TIPs) have been selected for the measurement of nodal accessibility. Critical Congestion Zones (CCZs) are delineated based on one-km buffer zones around each TIP for congestion pattern analysis. A total of 480 bus routes are assessed to identify deficiencies in network planning. Apart from bus services, the combined effects of other mass and para-transit modes, comprising metro rail, auto, cab and ferry services, are also analyzed. Based on a systematic random sampling method, the perceptions of 1,500 daily urban passengers were studied to check the ground realities. The outcome of this research identifies the spatial disparity among the 15 boroughs of the city, with severe route overlapping and congestion problems. Mass transport services based in North and Central Kolkata exceed the transport strength of south and peripheral Kolkata. Faulty infrastructural conditions, service inadequacy, economic loss and workers’ inefficiency are the most dominant reasons behind the defective mass transport network plan. Hence there is an urgent need to revive the extant road-based mass transport system of this city through a holistic management approach: upgrading traffic infrastructure, designing new roads, better cooperation among different mass transport agencies, better coordination of transport and changing land use policies, a large increase in funding and, finally, general passengers’ awareness.
Keywords: carbon footprint, critical congestion zones, direct bus connectivity, integrated transport development
Procedia PDF Downloads 273
480 Predicting the Exposure Level of Airborne Contaminants in Occupational Settings via the Well-Mixed Room Model
Authors: Alireza Fallahfard, Ludwig Vinches, Stephane Halle
Abstract:
In the workplace, the exposure level of airborne contaminants should be evaluated due to health and safety issues. This can be done by numerical models or experimental measurements, but the numerical approach is useful when it is challenging to perform experiments. One of the simplest models is the well-mixed room (WMR) model, which has shown its usefulness in predicting inhalation exposure in many situations. However, since the WMR model is limited to gases and vapors, it cannot be used to predict exposure to aerosols. The main objective is to modify the WMR model to expand its application to exposure scenarios involving aerosols. To reach this objective, the standard WMR model has been modified to consider the deposition of particles by gravitational settling and by Brownian and turbulent deposition. Three deposition models were implemented in the model. The time-dependent concentrations of airborne particles predicted by the model were compared to the results of experiments conducted in a 0.512 m³ chamber. Polystyrene particles of 1, 2, and 3 µm in aerodynamic diameter were generated with a nebulizer under two air change rates (ACH). The well-mixed condition and the chamber ACH were determined by the tracer gas decay method. The mean friction velocity on the chamber surfaces, one of the input variables for the deposition models, was determined by computational fluid dynamics (CFD) simulation. For the experimental procedure, the particles were generated until reaching the steady-state condition (emission period). Then generation stopped, and concentration measurements continued until reaching the background concentration (decay period). The results of the tracer gas decay tests revealed that the ACHs of the chamber were 1.4 and 3.0 and that the well-mixed condition was achieved. The CFD results showed that the average mean friction velocities and their standard deviations for the lowest and highest ACH were (8.87 ± 0.36) × 10⁻² m/s and (8.88 ± 0.38) × 10⁻² m/s, respectively. 
The numerical results indicated that the difference between the deposition rates predicted by the three deposition models was less than 2%. The experimental and numerical aerosol concentrations were compared for the emission and decay periods. In both periods, the prediction accuracy of the modified model improved in comparison with the classic WMR model; however, there is still a difference between the actual and predicted values. In the emission period, the modified WMR results closely follow the experimental data, but the model significantly overestimates the experimental results during the decay period. This finding is mainly due to an underestimation of the deposition rate in the model and to uncertainty related to the measurement devices and the particle size distribution. Comparing the experimental and numerical deposition rates revealed that the actual particle deposition rate is significant, but the rate given by the deposition mechanisms considered in the model was ten times lower than the experimental value. Thus, particle deposition is significant, will affect the airborne concentration in occupational settings, and should be considered in airborne exposure prediction models. The role of other removal mechanisms should be investigated.
Keywords: aerosol, CFD, exposure assessment, occupational settings, well-mixed room model, zonal model
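The modified balance described above — the classic well-mixed room equation with an added first-order deposition loss — can be sketched as follows. The generation rate and the deposition loss rate `beta` are hypothetical placeholders, not values from the study; only the chamber volume and one ACH value are taken from the text:

```python
import math

def wmr_concentration(t_h, gen_rate, volume, ach, beta, c0=0.0):
    """Particle concentration C(t) from a well-mixed room balance with
    an added first-order deposition loss:
        dC/dt = G/V - (ach + beta) * C
    t_h in hours, ach and beta in 1/h, G in particles/h, V in m^3.
    Returns the analytic solution for a constant generation rate G."""
    k = ach + beta                        # total first-order loss, 1/h
    c_ss = gen_rate / (volume * k)        # steady-state concentration
    return c_ss + (c0 - c_ss) * math.exp(-k * t_h)

# Chamber volume and ACH from the study; gen_rate and beta hypothetical.
c_end = wmr_concentration(t_h=2.0, gen_rate=1e6, volume=0.512,
                          ach=3.0, beta=0.5)               # emission period
c_bg = wmr_concentration(t_h=1.0, gen_rate=0.0, volume=0.512,
                         ach=3.0, beta=0.5, c0=c_end)      # decay period
```

The decay period (generation off) reduces to a pure exponential with rate `ach + beta`, which is why an underestimated deposition term makes the model overestimate the decaying concentration.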
Procedia PDF Downloads 103
479 Vortex Flows under Effects of Buoyant-Thermocapillary Convection
Authors: Malika Imoula, Rachid Saci, Renee Gatignol
Abstract:
A numerical investigation is carried out to analyze vortex flows in a free-surface cylinder driven by independently rotating and differentially heated boundaries. As a basic uncontrolled isothermal flow, we consider configurations which exhibit steady axisymmetric toroidal-type vortices at the free surface, under given rates of uniform rotation of the bottom disk and for selected aspect ratios of the enclosure. In the isothermal case, we show that sidewall differential rotation constitutes an effective kinematic means of flow control: the reverse-flow regions may be suppressed under very weak co-rotation rates, while an enhancement of the vortex patterns is observed under weak counter-rotation. In this latter case, however, high rates of counter-rotation considerably reduce the strength of the meridian flow and confine it to a narrow layer on the bottom disk, while the remaining bulk flow is diffusion-dominated and controlled by the sidewall rotation. The main control parameters in this case are the rotational Reynolds number, the cavity aspect ratio and the rotation rate ratio. The study then proceeded to consider the sensitivity of the vortex pattern, within the Boussinesq approximation, to a small temperature gradient set between the ambient fluid and a thin axial rod mounted on the cavity axis. Two additional parameters are introduced, namely the Richardson number Ri and the Marangoni number Ma (or the thermocapillary Reynolds number). Results revealed that reducing the rod length induces the formation of on-axis bubbles instead of toroidal structures. Besides, the stagnation characteristics are significantly altered under the combined effects of buoyant-thermocapillary convection. Buoyancy, induced under sufficiently high Ri, was shown to predominate over the thermocapillary motion, causing the enhancement (suppression) of breakdown when the rod is warmer (cooler) than the ambient fluid. 
However, over small ranges of Ri, the sensitivity of the flow to surface tension gradients was clearly evidenced, and results showed its full control over the occurrence and location of breakdown. In particular, a detailed timewise evolution of the flow indicated that weak thermocapillary motion was sufficient to prevent the formation of toroidal patterns: the latter detach from the surface and undergo considerable size reduction while moving towards the bulk flow before vanishing. Further calculations revealed that the pattern reappears with increasing time as a steady bubble type on the rod. However, in the absence of the central rod, and also in the case of small rod length l, the flow evolved into a steady state without any breakdown.
Keywords: buoyancy, cylinder, surface tension, toroidal vortex
Procedia PDF Downloads 360
478 Environmental Literacy of Teacher Educators in Colleges of Teacher Education in Israel
Authors: Tzipi Eshet
Abstract:
The importance of environmental education as part of a national strategy to promote the environment is recognized around the world. Lecturers at colleges of teacher education have considerable responsibility, directly and indirectly, for the environmental literacy of students who will end up teaching in the school system. This study examined whether lecturers in colleges of teacher education and teacher training in Israel are able and willing to develop environmental literacy among their students. Capability and readiness are assessed by evaluating the level of the dimensions of environmental literacy, which include knowledge of environmental issues, attitudes related to the environmental agenda and 'green' patterns of behavior in everyday life. The survey included 230 lecturers from 22 state colleges, coming from various sectors (secular, religious, and Arab), different academic fields and different personal backgrounds. Firstly, the results show that the higher the commitment to environmental issues, the lower the satisfaction with the current situation. In general, the respondents show positive environmental attitudes in all categories examined; they feel that they can personally influence the responsible environmental behavior of others and are able to internalize environmental education in schools and colleges, and they report positive environmental behavior. There are no significant differences between lecturers of different background characteristics when it comes to behavior patterns that generate personal income (e.g., returning bottles for deposit). Women show more responsible environmental behavior than men. Jewish lecturers, in most categories, show more responsible behavior than Druze and Arab lecturers; however, when it comes to attitudes, Arab and Druze lecturers have a better sense of their ability to influence the environmental agenda. The knowledge test, which included 15 questions, was mostly based on basic environmental issues. 
The average score was adequate: 83.6. Science lecturers' environmental literacy is significantly higher than that of the other lecturers. The larger their environmental knowledge base, the more environmental their attitudes and the more responsible they feel toward the environment. It can be concluded from the research findings that knowledge is a fundamental basis for developing environmental literacy: environmental knowledge has a positive effect on the development of environmental commitment, which is reflected in attitudes and behavior. This conclusion is probably also true of the general public. Hence, there is great importance in expanding knowledge of environmental issues among the general public, and among teacher educators in particular. From the open questions in the survey, it is evident that most of the lecturers are interested in the subject and understand the need to integrate environmental issues into the colleges, either directly, by teaching courses on the environment, or indirectly, by integrating environmental issues into different professions and by asking the students to set an example (for instance, avoiding unnecessary printing and keeping the environment clean). The curriculum at colleges should include a variety of options for the development and enhancement of the environmental literacy of student teachers, but first there must be a focus on bringing their teachers to a high literacy level so they can meet the difficult and important task they face.
Keywords: colleges of teacher education, environmental literacy, environmental education, teacher educators
Procedia PDF Downloads 285
477 A Brazilian Study Applied to the Regulatory Environmental Issues of Nanomaterials
Authors: Luciana S. Almeida
Abstract:
Nanotechnology has revolutionized the world of science and technology, bringing great expectations due to its great potential for application in the most varied industrial sectors. However, the same characteristics that make nanoparticles interesting from the point of view of technological application may be undesirable when they are released into the environment. The small size of nanoparticles facilitates their diffusion and transport in the atmosphere, water, and soil, and facilitates their entry into and accumulation in living cells. The main objective of this study is to evaluate the environmental regulatory process for nanomaterials in the Brazilian scenario. Three specific objectives were outlined. The first is to carry out a global scientometric study, on a research platform, with the purpose of identifying the main lines of study of nanomaterials in the environmental area. The second is to verify, by means of a literature review, how environmental agencies in other countries have been working on this issue. And the third is to carry out an assessment of the Brazilian Nanotechnology Draft Law 6741/2013 with the state environmental agencies, with the aim of identifying the agencies' knowledge of the subject and the resources available in the country for the implementation of the policy. A questionnaire will be used as the tool for this evaluation, to identify the operational elements and build indicators, through the Environment of Evaluation Application, a computational application developed for creating questionnaires. At the end, the need to propose changes to the Draft Law of the National Nanotechnology Policy will be assessed. Initial studies related to the first specific objective have already shown that Brazil stands out in the production of scientific publications in the area of nanotechnology, although only a minority are studies focused on environmental impact. 
Regarding the general panorama in other countries, some findings have also emerged. The United States has included the nanoforms of substances in an existing EPA (Environmental Protection Agency) program, the TSCA (Toxic Substances Control Act). The European Union issued a draft document amending Regulation 1907/2006 of the European Parliament and Council to cover the nanoforms of substances. Both programs are based on the study and identification of environmental risks associated with nanomaterials, taking the product life cycle into consideration. In relation to Brazil, and regarding the third specific objective, it is notable that the country does not have any regulations applicable to nanostructures, although a Draft Law is in progress. In this document, it is possible to identify some requirements related to the environment, such as environmental inspection and licensing, industrial waste management, notification of accidents and application of sanctions. However, it is not known whether these requirements are sufficient for the prevention of environmental impacts and whether national environmental agencies will know how to apply them correctly. This study is intended to serve as a basis for future actions regarding environmental management applied to the use of nanotechnology in Brazil.
Keywords: environment, management, nanotechnology, politics
Procedia PDF Downloads 123
476 Effect of Particle Size Variations on the Tribological Properties of Porcelain Waste Added Epoxy Composites
Authors: B. Yaman, G. Acikbas, N. Calis Acikbas
Abstract:
Epoxy-based materials have advantages in tribological applications due to their unique properties, such as light weight, self-lubrication capacity and wear resistance. On the other hand, their usage is often limited by their low load-bearing capacity and low thermal conductivity. In this study, the aim is to improve the tribological and also the mechanical properties of epoxy by reinforcing it with ceramic-based porcelain waste. It is well known that the reuse or recycling of waste materials leads to reductions in production costs, ease of manufacturing, energy savings, etc. From this perspective, epoxy and epoxy matrix composites containing 60 wt% porcelain waste with two different particle size ranges, below 90 µm and 150-250 µm, were fabricated, and the effect of filler particle size on the mechanical and tribological properties was investigated. The microstructural characterization was carried out by scanning electron microscopy (SEM), and phase analysis was performed by X-ray diffraction (XRD). The Archimedes principle was used to measure the density and porosity of the samples. Hardness was measured on the Shore D scale, and bending tests were performed. Microstructural investigations indicated that the porcelain particles were homogeneously distributed and that no agglomerations occurred in the epoxy resin. Mechanical test results showed that hardness and bending strength increased with increasing particle size, related to the low porosity content and good embedding in the matrix. The tribological behavior of these composites was evaluated in terms of friction, wear rates and wear mechanisms in ball-on-disk contact, under dry rotational sliding at room temperature, against a WC ball with a diameter of 3 mm. Wear tests were carried out at room temperature (23-25°C) with a humidity of 40 ± 5% under dry-sliding conditions. The wear track radius was set to 5 mm, at a linear speed of 30 cm/s, for the geometry used in this study. 
In all the experiments, a constant test load of 3 N was applied at a frequency of 8 Hz over a total wear distance of 400 m. The friction coefficient of the samples was recorded online from the variation in the tangential force; the steady-state CoFs ranged between 0.29 and 0.32. The dimensions of the wear tracks (depth and width) were measured as two-dimensional profiles by a stylus profilometer, and the wear volumes were calculated by integrating these 2D cross-sectional areas along the wear track. Specific wear rates were computed by dividing the wear volume by the applied load and the sliding distance. According to the experimental results, the use of porcelain waste in the fabrication of epoxy resin composites can be suggested as a potential route, since it allows improved mechanical and tribological properties while also reducing production cost.
Keywords: epoxy composites, mechanical properties, porcelain waste, tribological properties
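The specific wear rate described above (wear volume divided by applied load and sliding distance) is a one-line Archard-type calculation. A minimal sketch, using the 3 N load and 400 m distance from the study and a hypothetical wear volume, since the measured volumes are not given in the abstract:

```python
def specific_wear_rate(wear_volume_mm3, load_n, distance_m):
    """Archard-type specific wear rate k = V / (F * s),
    returned in mm^3 / (N * m)."""
    return wear_volume_mm3 / (load_n * distance_m)

# Load and sliding distance from the study; wear volume hypothetical.
k = specific_wear_rate(wear_volume_mm3=0.06, load_n=3.0, distance_m=400.0)
# k is about 5e-05 mm^3/(N*m)
```

The wear volume itself would come from the profilometer measurements, e.g. the mean track cross-section multiplied by the track circumference.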
Procedia PDF Downloads 195
475 Monitoring and Evaluation of Web-Services Quality and Medium-Term Impact on E-Government Agencies' Efficiency
Authors: A. F. Huseynov, N. T. Mardanov, J. Y. Nakhchivanski
Abstract:
This practical research aims to improve the management quality and efficiency of public administration agencies providing e-services. The monitoring system developed will provide a continuous review of websites' compliance with the selected indicators, their evaluation based on those indicators, and a ranking of services according to the quality criteria. The responsible departments in the government agencies were surveyed; the questionnaire covers issues of management and feedback, the e-services provided, and the application of information systems. By analyzing the main affecting factors and barriers, recommendations will be given that lead to the relevant decisions to strengthen the state agencies' competencies in the management and provision of their services. Component 1: E-services monitoring system. Three separate monitoring activities are proposed to be executed in parallel. First, continuous tracing of e-government sites using a built-in web-monitoring program; this program generates several quantitative values which are mainly related to the technical characteristics and the performance of the websites. Second, expert assessment of e-government sites in accordance with two general criteria. Criterion 1: technical quality of the site. Criterion 2: usability/accessibility (load, see, use). Each high-level criterion is in turn subdivided into several sub-criteria, such as the fonts and the color of the background (is it readable?), W3C coding standards, availability of robots.txt and the site map, the search engine, the feedback/contact mechanisms and the security mechanisms. Third, an online survey of the users/citizens: a small group of questions embedded in the e-service websites. The questionnaires comprise information concerning navigation, users' experience with the website (whether it was positive or negative), etc. 
Automated monitoring of websites on its own cannot capture the whole evaluation process and should therefore be seen as a complement to experts' manual web evaluations. All of the separate results were integrated to provide the complete evaluation picture. Component 2: Assessment of the efficiency of agencies/departments in providing e-government services. The relevant indicators to evaluate the efficiency and the effectiveness of e-services were identified; the survey was conducted in all the governmental organizations (ministries, committees and agencies) that provide electronic services for citizens or businesses; and the quantitative and qualitative measures cover the following areas of activity: e-governance, e-services, feedback from the users, and the information systems at the agencies' disposal. Main results: 1. The software program and the set of indicators for internet site evaluation have been developed, and the results of pilot monitoring have been presented. 2. The (internal) efficiency of the e-government agencies has been evaluated based on the survey results, with practical recommendations related to human potential, the information systems used and the e-services provided.
Keywords: e-government, web-site monitoring, survey, internal efficiency
Procedia PDF Downloads 305
474 Polarimetric Study of System Gelatin / Carboxymethylcellulose in the Food Field
Authors: Sihem Bazid, Meriem El Kolli, Aicha Medjahed
Abstract:
Proteins and polysaccharides are the two types of biopolymers most frequently used in the food industry to control the mechanical properties, structural stability and organoleptic properties of products. The textural and structural properties of blends of these two types of polymer depend on their interactions and their ability to form organized structures. From an industrial point of view, a better understanding of protein/polysaccharide mixtures is an important issue, since they are already heavily involved in processed food. It is in this context that we have chosen to work on a model system composed of a mixture of a fibrous protein (gelatin) and an anionic polysaccharide (sodium carboxymethylcellulose). Gelatin, one of the most popular biopolymers, is widely used in food, pharmaceutical, cosmetic and photographic applications because of its unique functional and technological properties. Sodium carboxymethylcellulose (NaCMC) is an anionic linear polysaccharide derived from cellulose. It is an important industrial polymer with a wide range of applications, and its functional properties can be modified by the presence of proteins with which it might interact. Another factor that may govern the interactions in protein-polysaccharide mixtures is the triple helix of gelatin. Collagen's complex synthesis results in an extracellular assembly with several levels of organization: collagen can be in a soluble state or associate into fibrils, which can in turn associate into fibers, and each level corresponds to an organization recognized by the cellular and metabolic system. The formation of a gelatin gel proceeds via the triple-helical refolding of denatured collagen chains. This gel has been the subject of numerous studies, and it is now known that its properties depend only on the rate of helicity, i.e., the proportion of triple helices forming the network. Chemical modification of this system is quite well controlled. 
Observing the dynamics of the triple helix may therefore be relevant to understanding the interactions involved in protein-polysaccharide mixtures. Gelatin is central to many industrial processes; understanding and analyzing the molecular dynamics induced by the triple helix during the transitions of gelatin can have great economic importance in all fields, and especially in food. The goal is to understand the possible mechanisms involved depending on the nature of the mixtures obtained. From a fundamental point of view, it is clear that the protective effect of NaCMC on gelatin and the conformational changes of the α helix are strongly influenced by the nature of the medium. Our goal is to minimize α-helix structural changes as far as possible, in order to keep gelatin more stable and protect it against the denaturation that occurs during conversion processes in the food industry. In order to study the nature of the interactions and assess the properties of the mixtures, polarimetry was used to monitor the optical parameters and to assess the rate of helicity of gelatin.
Keywords: gelatin, sodium carboxymethylcellulose, gelatin-NaCMC interaction, rate of helicity, polarimetry
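One common way polarimetry is used to estimate a rate of helicity is to treat the measured specific rotation as a linear mix of a fully coiled and a fully helical (collagen-like) limit. The sketch below assumes this two-state picture; the observed rotation, path length, concentration and the two reference rotations are illustrative placeholders, not results from this study:

```python
def specific_rotation(alpha_deg, path_dm, conc_g_per_ml):
    """Specific rotation [a] = alpha / (l * c), with the path length l
    in dm and the concentration c in g/mL, as in standard polarimetry."""
    return alpha_deg / (path_dm * conc_g_per_ml)

def helicity_fraction(sr_obs, sr_coil, sr_helix):
    """Linear two-state estimate of the helix fraction from the
    observed specific rotation and the coil/helix reference limits."""
    return (sr_obs - sr_coil) / (sr_helix - sr_coil)

# Illustrative numbers only (not measured values from this study)
sr = specific_rotation(alpha_deg=-0.50, path_dm=1.0, conc_g_per_ml=0.002)
chi = helicity_fraction(sr_obs=sr, sr_coil=-137.0, sr_helix=-360.0)
# chi is roughly 0.51 for these placeholder inputs
```

In practice the coil and helix reference rotations would be calibrated for the wavelength and temperature used, and chi tracked over time to follow the coil-helix transition in the gelatin/NaCMC mixtures.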
Procedia PDF Downloads 314
473 Analysis of Shrinkage Effect during Mercerization on Himalayan Nettle, Cotton and Cotton/Nettle Yarn Blends
Authors: Reena Aggarwal, Neha Kestwal
Abstract:
The Himalayan nettle (Girardinia diversifolia) has been used for centuries as a fibre and food source by Himalayan communities. Himalayan nettle is a natural cellulosic fibre that can be handled in the same way as other cellulosic fibres. The Uttarakhand Bamboo and Fibre Development Board, based in Uttarakhand, India, is working extensively with the nettle fibre to explore its potential for textile production in the region. The fibre is a potential resource for rural enterprise development in some high-altitude pockets of the state, and traditionally the plant fibre is used for making domestic products like ropes and sacks. Himalayan nettle is an unconventional natural fibre with functional characteristics of shrink resistance and a degree of pathogen and fire resistance, and it can blend nicely with other fibres. Most importantly, it generates mainly organic waste and leaves residues that are 100% biodegradable. The fabrics may potentially be reused or remanufactured and can also be used as a source of cellulose feedstock for regenerated cellulosic products. Being naturally biodegradable, the fibre can be composted if required. Though many research and training activities in the craft clusters of Uttarkashi, Chamoli and Bageshwar in Uttarakhand are directed towards fibre extraction and processing techniques, such as retting and degumming, very little has been done to analyse crucial properties of nettle fibre, such as shrinkage and wash fastness. These properties are crucial for obtaining the desired fibre quality for the further processing of yarn making and weaving and for developing these fibres into fine saleable products. This research is therefore focused on on-field experiments on shrinkage properties, conducted on cotton, nettle and cotton/nettle blended yarn samples. The objective of the study was to analyze the scope of the blended fibre for development into wearable fabrics. 
For the study, after conducting initial fibre length and fineness testing, cotton and nettle fibres were mixed in a 60:40 ratio, and five varieties of yarn were spun in an open-end spinning mill, with yarn counts of 3s, 5s, 6s, 7s and 8s. Samples of 100% nettle and 100% cotton fibres in 8s count were also developed for the study. All six varieties of yarn were subjected to shrinkage testing, and the results were critically analyzed as per ASTM method D2259. It was observed that 100% nettle had the least shrinkage, 3.36%, while pure cotton had a shrinkage of approximately 13.6%: yarns made of 100% cotton exhibit four times more shrinkage than 100% nettle. The results also show that cotton/nettle blended yarns exhibit lower shrinkage than 100% cotton yarn. It was thus concluded that as the proportion of nettle in the samples increases, the shrinkage decreases. These results are very important for the people of Uttarakhand, who want to commercially exploit the abundant nettle fibre for generating sustainable employment.
Keywords: Himalayan nettle, sustainable, shrinkage, blending
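The shrinkage percentages above follow from the usual percentage-of-original-length calculation used in yarn shrinkage testing. A minimal sketch; the skein lengths below are hypothetical, chosen only to reproduce the reported extremes:

```python
def shrinkage_percent(length_before, length_after):
    """Yarn shrinkage as a percentage of the original length,
    as used in skein-shrinkage calculations (e.g. ASTM D2259)."""
    return (length_before - length_after) / length_before * 100.0

# Hypothetical skein lengths reproducing the reported extremes
nettle = shrinkage_percent(500.0, 483.2)   # about 3.36 % (100% nettle)
cotton = shrinkage_percent(500.0, 432.0)   # about 13.6 % (100% cotton)
```

With these figures the cotton-to-nettle ratio is roughly 4:1, matching the abstract's statement that 100% cotton shrinks about four times as much as 100% nettle.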
Procedia PDF Downloads 242
472 Disaster Capitalism, Charter Schools, and the Reproduction of Inequality in Poor, Disabled Students: An Ethnographic Case Study
Authors: Sylvia Mac
Abstract:
This ethnographic case study examines disaster capitalism, neoliberal market-based school reforms, and disability through the lens of Disability Studies in Education. More specifically, it explores neoliberalism and special education at a small urban charter school in a large city in California and the (re)production of social inequality. The study uses the sociology of special education to examine the ways in which special education is used to sort and stratify disabled students. At a time when rhetoric surrounding public schools is framed in catastrophic and dismal language in order to justify the privatization of public education, small urban charter schools must be examined to learn whether they are living up to their promise or acting as another way to maintain economic and racial segregation. The study concludes that neoliberal contexts threaten successful inclusive education and normalize poor, disabled students’ continued low achievement and poor post-secondary outcomes. Participants included three special education students, the special education teacher, the special education assistant, a regular education teacher, and the two founders and charter writers. The school claimed to have a push-in model of special education in which all special education students were fully included in the general education classroom; although the model was presented as fully inclusive, some special education students also attended a pull-out class called Study Skills. The study found that inclusion and neoliberalism are conflicting ideologies that cannot co-exist: successful inclusive environments cannot thrive under the influence of neoliberal education policies such as efficiency and cost-cutting. Additionally, the push for students to join the global knowledge economy means that more and more low attainers are further marginalized and kept in poverty. 
At this school, neoliberal ideology eclipsed the promise of inclusive education for special education students. This case study has shown the need for inclusive education to be interrogated through lenses that consider macro factors, such as neoliberal ideology in public education, as well as the emerging global knowledge economy and increasing income inequality. Barriers to inclusion inside the school, such as teachers’ attitudes, teacher preparedness, and school infrastructure, paint only part of the picture. Inclusive education is also threatened by a neoliberal ideology that shifts responsibility from the state to the individual. This ideology is dangerous because it reifies stereotypes of disabled students as lazy, needy drains on already dwindling budgets. If these stereotypes persist, inclusive education will have a difficult time succeeding. In order to more fully examine the ways in which inclusive education can become truly emancipatory, we need more analysis of the relationship between neoliberalism, disability, and special education.
Keywords: case study, disaster capitalism, inclusive education, neoliberalism
Procedia PDF Downloads 223
471 The Community Stakeholders’ Perspectives on Sexual Health Education for Young Adolescents in Western New York, USA: A Qualitative Descriptive Study
Authors: Sadandaula Rose Muheriwa Matemba, Alexander Glazier, Natalie M. LeBlanc
Abstract:
In the United States, up to 10% of girls and 22% of boys aged 10-14 years have had sex, 5% of them had their first sex before 11 years, and the age of first sexual encounter is reported to be as young as 8 years. Over 4,000 adolescent girls aged 10-14 years become pregnant every year, and 2.6% of the abortions in 2019 were among adolescents below 15 years. Despite these negative outcomes, little research has been conducted to understand the sexual health education offered to young adolescents ages 10-14. Early sexual health education is one of the most effective strategies to help lower the rate of early pregnancies, HIV infections, and other sexually transmitted infections. Such knowledge is necessary to inform best practices for supporting the healthy sexual development of young adolescents and preventing adverse outcomes. This qualitative descriptive study was conducted to explore community stakeholders’ experiences in sexual health education for young adolescents ages 10-14 and ascertain the young adolescents’ sexual health support needs. Maximum variation purposive sampling was used to recruit a total sample of 13 community stakeholders, including health education teachers, members of youth-based organizations, and Adolescent Clinic providers, in Rochester, New York State, in the United States of America, from April to June 2022. Data were collected through semi-structured individual in-depth interviews and were analyzed using MAXQDA following a conventional content analysis approach. Triangulation, team analysis, and respondent validation were also employed to enhance study rigor. The participants were predominantly female (92.3%) and comprised Caucasians (53.8%), Black/African Americans (38.5%), and Indian-Americans (7.7%), with ages ranging from 23-59. 
Four themes emerged: the perceived need for early sexual health education, preferred timing to initiate sexual health conversations, perceived age-appropriate content for young adolescents, and initiating sexual health conversations with young adolescents. The participants described both encouraging and concerning experiences. Most participants were concerned that young adolescents are living in a sexually driven environment and are not given the sexual health education they need, even though they are open to learning sexual health materials. There was consensus on the need to initiate sexual health conversations early, at 4 years or younger, to standardize sexual health education in schools, and to make age-appropriate sexual health education progressive. These results show that early sexual health education is essential if young adolescents are to delay sexual debut and prevent early pregnancies, and if the goal of ending the HIV epidemic is to be achieved. However, research is needed on a larger scale to understand how best to implement sexual health education among young adolescents and to inform interventions for implementing contextually relevant sexuality education for this population. These findings call for increased multidisciplinary efforts in promoting early sexual health education for young adolescents.
Keywords: community stakeholders’ perspectives, sexual development, sexual health education, young adolescents
Procedia PDF Downloads 80
470 The Effects of the New Silk Road Initiatives and the Eurasian Union to the East-Central-Europe’s East Opening Policies
Authors: Tamas Dani
Abstract:
The author’s research explores the geo-economic role and importance of some small and medium-sized states, reviewing their adaptation strategies in foreign trade and foreign affairs during the shift to a multipolar world, against an international background. On this basis, the paper analyses the recent years and the future of ‘Opening towards the East’ foreign economic policies in East-Central Europe and, in parallel, ‘Western’ foreign economic policies from Asia, such as the Chinese One Belt One Road new silk route plans (so far largely an infrastructural development plan serving international trade and investment aims). An open question today is whether these ideas will reshape global trade. How do the new silk road initiatives and the Eurasian Union reflect the effects of globalization? It is worth analysing how Central and Eastern European countries opened towards Asia, why China is the focus of the opening policies of many countries, and why China could be seen as the ‘winner’ of the world economic crisis after 2008. The research is based on the following methodologies: national and international literature, policy documents and related design documents, complemented by the processing of international databases and statistics, and live interviews with leaders from East-Central European countries’ companies and public administration, diplomats, and international traders. The results are also illustrated by maps and graphs. As a major finding, the research will establish whether state decision-makers have enough room for manoeuvre to strengthen foreign economic relations. The hypothesis is that countries in East-Central Europe have a real chance to diversify their foreign trade relations and focus beyond their traditional partners. This essay focuses on the opportunities of East-Central-European countries in diversifying foreign trade relations towards China and Russia in terms of ‘Eastern Openings’. 
It examines the effects of the new silk road initiatives and the Eurasian Union on Hungary’s economy, with a comparative outlook on other East-Central European countries, and explores common regional cooperation opportunities in this area. The essay concentrates on the changing trade relations between East-Central-Europe and China as well as Russia, and tries to analyse the effects of the new silk road initiatives and the Eurasian Union as well. The conclusion shows how cooperation is necessary for the East-Central European countries if they want non-asymmetric trade with Russia, China, or particular Chinese regions (Pearl River Delta, Hainan, …). For the East-Central European nations, this cooperation can take the form of the Visegrad 4 Cooperation (V4), the Central and Eastern European Countries format (CEEC16), or the 3 SEAS Cooperation (or BABS – Baltic, Adriatic, Black Seas Initiative).
Keywords: China, East-Central Europe, foreign trade relations, geoeconomics, geopolitics, Russia
Procedia PDF Downloads 183
469 Financial Modeling for Net Present Benefit Analysis of Electric Bus and Diesel Bus and Applications to NYC, LA, and Chicago
Authors: Jollen Dai, Truman You, Xinyun Du, Katrina Liu
Abstract:
Transportation is one of the leading sources of greenhouse gas (GHG) emissions. Thus, to meet the 2015 Paris Agreement, all countries must adopt a different and more sustainable transportation system. From bikes to Maglev, the world is slowly shifting to sustainable transportation. To develop a useful public transit system, a sustainable network of buses must be implemented. As of now, only a handful of cities have adopted a detailed plan to implement a full fleet of e-buses by the 2030s, with Shenzhen in the lead. Every change requires a detailed plan and a focused analysis of its impacts. In this report, the economic and financial implications have been taken into consideration to develop a well-rounded 10-year plan for New York City. We also apply the same financial model to two other cities, LA and Chicago. We picked NYC, Chicago, and LA to conduct the comparative NPB analysis since they are all big metropolitan cities with complex transportation systems. All three cities have started action plans to achieve a full e-bus fleet in the coming decades. Moreover, their energy carbon footprints and energy prices are very different, and these are the key factors in the benefits of electric buses. Using TCO (Total Cost of Ownership) financial analysis, we developed a model to calculate NPB (Net Present Benefit) and compare EBS (electric buses) to DBS (diesel buses). We have considered all essential aspects in our model: initial investment, including the cost of a bus, charger, and installation; government funds (federal, state, local); labor cost; energy (electricity or diesel) cost; maintenance cost; insurance cost; health and environmental benefits; and V2G (vehicle-to-grid) benefits. We see about $1,400,000 in benefits over the 12-year lifetime of an EBS compared to a DBS, provided government funds offset 50% of the EBS purchase cost. 
With the government subsidy, an EBS starts to generate positive cash flow in the 5th year and can pay back its investment in 5 years. Note that our model counts environmental and health benefits: every year, $50,000 is counted as health benefits per bus. Besides health benefits, the most significant benefits come from energy cost savings and maintenance savings, which are about $600,000 and $200,000, respectively, over the 12-year life cycle. Using linear regression under given budget limitations, we then designed an optimal three-phase process to replace all NYC buses with electric buses in 10 years, i.e., by 2033. The process minimizes the total cost over the years while achieving the lowest environmental cost. The overall benefit of replacing all DBS with EBS in NYC is over $2.1 billion by 2033. For LA and Chicago, the benefits of electrifying the current bus fleets are $1.04 billion and $634 million by 2033, respectively. All NPB analyses and the algorithm to optimize the phased electrification process are implemented in Python code and can be shared.
Keywords: financial modeling, total cost of ownership, net present benefits, electric bus, diesel bus, NYC, LA, Chicago
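The NPB comparison described in this abstract can be sketched as a simple discounted cash flow model. All figures below (purchase prices, subsidy share, annual savings, discount rate) are illustrative assumptions, not the authors' calibrated city-specific inputs; the per-bus $50,000 health benefit is the only number taken from the abstract.

```python
def npb_ebs_vs_dbs(
    ebs_price=750_000,             # assumed e-bus + charger + installation cost (USD)
    dbs_price=300_000,             # assumed diesel bus cost (USD)
    subsidy=0.5,                   # government fund share of the EBS purchase cost
    annual_energy_saving=50_000,   # illustrative energy cost saving per year
    annual_maint_saving=17_000,    # illustrative maintenance saving per year
    annual_health_benefit=50_000,  # per-bus health benefit, as in the abstract
    discount_rate=0.03,
    lifetime=12,                   # years, matching the 12-year life cycle
):
    """Net present benefit of one electric bus relative to one diesel bus."""
    # Incremental upfront cost after the subsidy offsets part of the EBS price
    upfront = ebs_price * (1 - subsidy) - dbs_price
    annual = annual_energy_saving + annual_maint_saving + annual_health_benefit
    # Discount each year's net benefit back to present value
    pv_benefits = sum(annual / (1 + discount_rate) ** t for t in range(1, lifetime + 1))
    return pv_benefits - upfront
```

With these placeholder numbers the EBS shows a positive NPB, and removing the subsidy reduces but does not eliminate it; the abstract's ~$1.4M figure depends on city-specific energy prices and V2G revenue.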
Procedia PDF Downloads 52
468 The Effect of Improvement Programs in the Mean Time to Repair and in the Mean Time between Failures on Overall Lead Time: A Simulation Using the System Dynamics-Factory Physics Model
Authors: Marcel Heimar Ribeiro Utiyama, Fernanda Caveiro Correia, Dario Henrique Alliprandini
Abstract:
The importance of the correct allocation of improvement programs has attracted growing interest in recent years. Because their resources are limited, companies must ensure that financial resources are directed to the correct workstations in order to be effective and to survive strong competition. However, to the best of our knowledge, the literature on the allocation of improvement programs does not analyze this problem in depth when the flow shop process has two capacity constrained resources. This is a research gap studied in depth in this work. The purpose of this work is to identify the best strategy for allocating improvement programs in a flow shop with two capacity constrained resources. Data were collected from a flow shop process with seven workstations in an industrial control and automation company, which processes 13,690 units on average per month. The data were used to conduct a simulation with the System Dynamics-Factory Physics model. The main variables considered, due to their importance for lead time reduction, were the mean time between failures and the mean time to repair. Lead time reduction was the output measure of the simulations. Ten different strategies were created: (i) focused time to repair improvement; (ii) focused time between failures improvement; (iii) distributed time to repair improvement; (iv) distributed time between failures improvement; (v) focused time to repair and time between failures improvement; (vi) distributed time to repair and time between failures improvement; (vii) hybrid time to repair improvement; (viii) hybrid time between failures improvement; (ix) time to repair improvement directed towards the two capacity constrained resources; (x) time between failures improvement directed towards the two capacity constrained resources. The ten strategies tested are variations of the three main strategies for improvement programs, named focused, distributed, and hybrid. 
Several comparisons of the effects of the ten strategies on lead time reduction were performed. The results indicated that for the flow shop analyzed, the focused strategies delivered the best results. When it is not possible to make a large investment in the capacity constrained resources, companies should use hybrid approaches. An important contribution to the academic literature is the hybrid approach, which proposes a new way to direct improvement efforts. In addition, the study of a flow shop with two strongly capacity constrained resources (more than 95% utilization) is an important contribution to the literature, as are the allocation problem with two CCRs and the possibility of floating capacity constrained resources. The results provided the best improvement strategies considering the different strategies for allocating improvement programs and the different positions of the capacity constrained resources. Finally, both the hybrid time to repair improvement and the hybrid time between failures improvement strategies delivered better results than the respective distributed strategies. The main limitations of this study concern the particular flow shop analyzed. Future work can investigate different flow shop configurations, such as a varying number of workstations, a different number of products, or different positions of the two capacity constrained resources.
Keywords: allocation of improvement programs, capacity constrained resource, hybrid strategy, lead time, mean time to repair, mean time between failures
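The mechanism by which MTTR and MTBF drive lead time can be illustrated with the standard Factory Physics relations: availability A = MTBF/(MTBF + MTTR) inflates the effective process time, and queue time then follows Kingman's VUT approximation. The single-station view and all numbers below are illustrative simplifications, not the authors' seven-station System Dynamics-Factory Physics model.

```python
def availability(mtbf, mttr):
    """Fraction of time a workstation is up: A = MTBF / (MTBF + MTTR)."""
    return mtbf / (mtbf + mttr)

def kingman_queue_time(t0, mtbf, mttr, rate, ca2=1.0, ce2=1.0):
    """Approximate queue time at one failure-prone station.

    t0       : natural process time per unit (hours)
    rate     : arrival rate (units per hour)
    ca2, ce2 : squared coefficients of variation of arrivals and
               effective process times (assumed ~1 here for simplicity)
    """
    a = availability(mtbf, mttr)
    te = t0 / a            # effective process time inflated by downtime
    u = rate * te          # utilization of the station
    if u >= 1.0:
        return float("inf")  # unstable: demand exceeds effective capacity
    # Kingman / VUT equation: Variability x Utilization x Time
    return ((ca2 + ce2) / 2) * (u / (1 - u)) * te
```

Because queue time grows as u/(1-u), the same MTTR or MTBF improvement pays off far more at a capacity constrained resource running near 95% utilization than at a lightly loaded station, which is the intuition behind the focused strategies.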
Procedia PDF Downloads 124
467 Domestic Violence Indicators and Coping Styles among Iranian, Pakistan and Turkish Married Women: A Cultural Study
Authors: Afsaneh Ghanbari Panah, Elyaz Bornak, Shiva Ghadiri Karizi, Amna Ahmad, Burcu Yildirim
Abstract:
This study explores domestic violence (DV) and coping strategies among married women in Iran, Pakistan, and Turkey. DV is a universal issue characterized by physical, psychological, or economic abuse by male family members towards female partners. The study aims to examine the prevalence of DV and the coping mechanisms employed by women in these three countries. The research highlights the significant impact of DV globally, transcending cultural, social, and economic boundaries. Despite the lack of comprehensive state-sponsored reports on Violence Against Women (VAW) in South Asia, fragmented reports by non-governmental agencies indicate high rates of self-reported intimate partner violence (IPV), including sexual violence, across these regions. The study emphasizes the urgent need for effective measures to address VAW, as existing laws often exclude unregistered and unmarried intimate partners. Coping mechanisms play a crucial role in responding to and managing the consequences of DV. The study defines coping as cognitive and behavioral responses to environmental stressors. Common coping strategies identified in the literature include spirituality, temporary or permanent separation, silence, submission, minimizing violence, denial, and seeking external support. Understanding these coping mechanisms is crucial for developing effective prevention and management strategies. The study presents findings from Iran, Pakistan, and Turkey, indicating varying prevalence rates of different forms of violence. Turkish respondents reported higher rates of emotional, physical, economic, and sexual violence, while Iranian respondents reported high levels of psychological, physical, and sexual violence. In Karachi, Pakistan, physical, sexual, and psychological violence were prevalent among women. The study highlights the importance of cross-cultural research and the need to consider individual and collective coping mechanisms in different societal contexts. 
Factors such as personal ideologies, political agendas, and economic stability influence societal support and cultural acceptance of IPV. To develop sustainable strategies, an in-depth exploration of coping mechanisms is necessary. In conclusion, this comparative study provides insights into DV and coping strategies among married women in Iran, Pakistan, and Turkey. The findings underscore the urgent need for comprehensive measures to address VAW, considering cultural, social, and economic factors. By understanding the prevalence and coping mechanisms employed by women, policymakers can develop effective interventions to support DV survivors and prevent further violence.
Keywords: domestic violence, coping styles, cultural study, violence against women
Procedia PDF Downloads 55
466 Time-Domain Nuclear Magnetic Resonance as a Potential Analytical Tool to Assess Thermisation in Ewe's Milk
Authors: Alessandra Pardu, Elena Curti, Marco Caredda, Alessio Dedola, Margherita Addis, Massimo Pes, Antonio Pirisi, Tonina Roggio, Sergio Uzzau, Roberto Anedda
Abstract:
Some of the artisanal cheese products of European countries certificated as PDO (Protected Designation of Origin) are made from raw milk. To recognise potential frauds (e.g. pasteurisation or thermisation of milk intended for raw milk cheese production), the alkaline phosphatase (ALP) assay is currently applied, but only for pasteurisation, and it is known to have notable limitations for validating the ALP enzymatic state in non-bovine milk. Frauds considerably impact customers and certificating institutions, sometimes damaging the product image and causing potential economic losses for cheesemakers. Robust, validated, and unambiguous analytical methods are therefore needed to allow food control and security organisms to recognise a potential fraud. In an attempt to develop a new reliable method to overcome this issue, Time-Domain Nuclear Magnetic Resonance (TD-NMR) spectroscopy has been applied in the described work. Daily fresh milk was analysed raw (680.00 µL in each 10-mm NMR glass tube), at least in triplicate. Thermally treated samples were also produced by putting each NMR tube of fresh raw milk in water pre-heated at temperatures from 68°C up to 72°C for up to 3 min, with continuous agitation, and quench-cooling it to 25°C in a water and ice bath. Raw and thermally treated samples were analysed in terms of 1H T2 transverse relaxation times with a CPMG sequence (recycle delay: 6 s, interpulse spacing: 0.05 ms, 8000 data points), and quasi-continuous distributions of T2 relaxation times were obtained by CONTIN analysis. In line with previous data collected by high-field NMR techniques, a decrease in the spin-spin relaxation constant T2 of the predominant 1H population was detected in heat-treated milk as compared to raw milk. The decrease in the T2 parameter is consistent with changes in chemical exchange and diffusive phenomena, likely associated with changes in milk protein (i.e. 
whey proteins and casein) arrangement promoted by heat treatment. Furthermore, the experimental data suggest that the molecular alterations are strictly dependent on the specific heat treatment conditions (temperature/time). Such molecular variations in milk, which are likely transferred to cheese during cheesemaking, highlight the possibility of extending the TD-NMR technique directly to cheese, to develop a method for assessing fraud related to the use of a milk thermal treatment in PDO raw milk cheese. The results suggest that TD-NMR assays might pave the way to the detailed characterisation of heat treatments of milk.
Keywords: cheese fraud, milk, pasteurisation, TD-NMR
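The CPMG measurement described here can be sketched with a synthetic monoexponential echo-train decay and a simple log-linear T2 estimate. This is a stand-in for the CONTIN inversion actually used (which recovers a full quasi-continuous T2 distribution rather than a single value); the T2 value below is invented for illustration, while the interpulse spacing and number of data points follow the sequence parameters given in the abstract.

```python
import numpy as np

def simulate_cpmg(t2, tau=0.05e-3, n_echoes=8000, amplitude=1.0):
    """Synthetic monoexponential CPMG echo train.

    tau is the interpulse spacing (0.05 ms, as in the sequence described);
    the k-th echo occurs at time 2 * tau * k.
    """
    t = 2 * tau * np.arange(1, n_echoes + 1)
    return t, amplitude * np.exp(-t / t2)

def estimate_t2(t, signal):
    """Log-linear fit: ln S(t) = ln A - t/T2, so T2 = -1/slope."""
    slope, _ = np.polyfit(t, np.log(signal), 1)
    return -1.0 / slope
```

On noiseless data the fit recovers T2 exactly; heat-treated milk would show up as a shorter recovered T2 for the dominant 1H population.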
Procedia PDF Downloads 243
465 Spexin and Fetuin A in Morbid Obese Children
Authors: Mustafa M. Donma, Orkide Donma
Abstract:
Spexin, expressed in the central nervous system, has attracted much interest for its roles in feeding behavior, obesity, diabetes, energy metabolism, and cardiovascular function. Fetuin A is known as a negative acute phase reactant synthesized in the liver; so far, it has been a major concern of many studies in numerous clinical states. The relationships between the concentrations of spexin as well as fetuin A and the risk for cardiovascular diseases (CVDs) have also been investigated. Eosinophils, suggested to be associated with the development of CVDs, have been introduced as early indicators of cardiometabolic complications. Patients with an elevated platelet count, associated with a hypercoagulable state in the body, are also more liable to CVDs. In this study, the aim is to examine the profiles of spexin and fetuin A alongside the variations detected in eosinophil and platelet counts in morbidly obese children. Thirty-four children with a normal body mass index (N-BMI) and fifty-one morbidly obese (MO) children participated in the study. Written informed consent forms were obtained prior to the study. The institutional ethics committee approved the study protocol. Age- and sex-adjusted BMI percentile tables prepared by the World Health Organization were used to classify healthy and obese children. The mean ages ± SEM of the children were 9.3 ± 0.6 years and 10.7 ± 0.5 years in the N-BMI and MO groups, respectively. Anthropometric measurements of the children were taken. Body mass index values were calculated from weight and height. Blood samples were obtained after an overnight fast. Routine hematologic and biochemical tests were performed. Within this context, fasting blood glucose (FBG), insulin (INS), triglyceride (TRG), and high density lipoprotein-cholesterol (HDL-C) concentrations were measured. Homeostatic model assessment for insulin resistance (HOMA-IR) values were calculated. Spexin and fetuin A levels were determined by enzyme-linked immunosorbent assay. 
Data were evaluated from the statistical point of view. Statistically significant differences were found between groups in terms of BMI, fat mass index, INS, HOMA-IR, and HDL-C. In the MO group, all of these parameters increased while HDL-C decreased. Elevated counts in the MO group were detected for eosinophils (p<0.05) and platelets (p>0.05). Fetuin A levels decreased in the MO group (p>0.05); however, the decrease in spexin levels in this group was statistically significant (p<0.05). In conclusion, these results suggest that the increases in eosinophils and platelets behave as cardiovascular risk factors. Decreased fetuin A behaved as a risk factor consistent with the increased risk for cardiovascular problems associated with the severity of obesity. Along with increased eosinophils, increased platelets, and decreased fetuin A, decreased spexin was the parameter that best reflects a possible participation in the early development of CVD risk in MO children.
Keywords: cardiovascular diseases, eosinophils, fetuin A, pediatric morbid obesity, platelets, spexin
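The HOMA-IR values mentioned above follow the standard formulation: fasting glucose (mg/dL) times fasting insulin (µU/mL), divided by 405. A minimal sketch, with the example values invented for illustration:

```python
def homa_ir(fbg_mg_dl, insulin_uu_ml):
    """Homeostatic model assessment for insulin resistance.

    Standard formulation for glucose in mg/dL; for glucose in mmol/L
    the denominator is 22.5 instead of 405.
    """
    return (fbg_mg_dl * insulin_uu_ml) / 405.0
```

For example, a fasting glucose of 90 mg/dL with fasting insulin of 10 µU/mL gives HOMA-IR ≈ 2.22.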
Procedia PDF Downloads 193
464 Destination Nollywood: A Newspaper Analysis of the Connections between Film and Tourism in Nigeria, 2012-2022
Authors: E. S. Martens, E. E. Onwuliri
Abstract:
Over the past three decades, Nigeria’s film industry has become a global powerhouse, releasing hundreds of films annually and even monthly. Nollywood, a portmanteau of Nigeria and Hollywood as well as Bollywood that was coined by New York Times journalist Norimitsu Onishi in 2002, came to mark the plenitude of filmmaking happening in Lagos from the early 1990s onwards. Following the success of the 1992 straight-to-VHS film Living in Bondage, the Nigerian film industry experienced a popular home video boom that gained a huge following in Nigeria, across Africa, and among the global African diaspora. In fact, with an estimated worth of $6.4 billion as of 2021, Nollywood is nowadays considered the world’s second-largest film industry and even the largest in terms of output and popularity. Producing about 2,500 films annually and reaching an estimated audience of over 200 million people worldwide, Nollywood has not only seemingly surpassed Hollywood but also Bollywood with regard to production and consumption size. Due to its commercial success and cultural impact from the early 2010s, Nollywood has often been heralded as a potential driver of Africa’s tourism industry. In its 2012 Global Trends Report, the World Travel Market forecasted an increase in GDP in Africa due to tourism in Nollywood filming locations. Additionally, it was expected that the rising popularity of Nollywood would significantly contribute to growth in the leisure sector, drawing both film enthusiasts and business travelers intrigued by the expanding significance of the Nigerian film industry. Still, despite much talk about the potential impact of Nollywood on Nigerian tourism in the past 10 years or so, relatively little is known about Nollywood’s association with film tourism and the existing connections between Nigeria’s film and tourism industries more generally. 
Already well over a decade ago, it was observed that there is a lack of research examining the extent to which film tourism related to Nollywood has been generated in Africa, and to date this is still largely the case. This paper, then, seeks to discuss the reported connections between Nollywood and tourism and to review the efforts and opportunities related to Nollywood film tourism as suggested in Nigeria’s public domain. Based on a content analysis of over 50 newspaper articles and other materials available online, such as websites, blogs, and forums, this paper explores the practices and discourses surrounding Nollywood’s connections with tourism in Nigeria and across Africa over the past ten years. The analysis shows that, despite the high expectations, film tourism related to Nollywood has remained limited. Despite growing government attention to and support for Nollywood and its potential for tourism, most state initiatives in this direction have not (yet) materialized, and it very much remains to be seen to what extent ‘Destination Nollywood’ can really come to fruition as long as the structural issues underlying the development of Nigerian film (and) tourism are not sufficiently addressed.
Keywords: film tourism, Nigerian cinema, Nollywood, tourist destination
Procedia PDF Downloads 52
463 Financial Analysis of the Foreign Direct Investment in Mexico
Authors: Juan Peña Aguilar, Lilia Villasana, Rodrigo Valencia, Alberto Pastrana, Martin Vivanco, Juan Peña C
Abstract:
Each year a growing number of companies enter Mexico in search of domestic market share. These activities, including retail, long-distance and local telephony, raw materials and energy, and particularly the financial sector, have managed to significantly increase their weight in FDI flows into Mexico. However, it should be considered whether these FDI trends are positive for the Mexican economy, and whether these activities increase Mexican exports in the medium term, as well as their share in GDP, gross fixed capital formation, and employment. In general, it is stressed that these activities have, by far, been unable to generate significant linkages with the rest of the economy, a process that has not been favored by the neutral or horizontal competitiveness policies aimed at these activities. Since the nineties, foreign direct investment (FDI) has shown remarkable dynamism, internationally, in Latin America, and in Mexico. Mexico was the leading recipient of FDI in Latin America during 1990-1995, before being displaced by Brazil, as FDI increased from levels below 1% of GDP during the eighties to around 3% of GDP during the nineties. Its impact has been significant not only from a macroeconomic perspective; it has also allowed the generation of a new industrial production structure and organization, parallel to a significant modernization of a segment of the economy. The case of Mexico is also particularly interesting and relevant because, until 1993, FDI had focused on the purchase of state assets during the privatization process. This paper aims to present FDI flows in Mexico and analyze the different business strategies that have been touched and encouraged by FDI. On the one hand, it briefly discusses regulatory issues and the source and recipient sectors of FDI. 
Furthermore, the paper presents in more detail the impacts and changes that FDI has generated in the Mexican economy. The macroeconomic context and the later legislative changes that resulted in the current regulations around FDI in Mexico are also examined, including aspects of the North American Free Trade Agreement (NAFTA). It is worth noting that foreign investment cannot be considered only from the perspective of the receiving economic units. Instead, these flows also reflect the strategic interests of transnational corporations (TNCs) and other companies seeking access to markets and increased competitiveness of their production and global distribution networks, among other reasons. Similarly, it is important to note that foreign investment in its various forms is critically dependent on historical and temporal aspects. Thus, the same functionality can vary significantly depending on the specific characteristics of both the receptor units and the sources of FDI, including macroeconomic, institutional, industrial organization, and social aspects, among others.
Keywords: foreign direct investment (FDI), competitiveness, neoliberal regime, globalization, gross domestic product (GDP), NAFTA, macroeconomics
Procedia PDF Downloads 451
462 Dynamic EEG Desynchronization in Response to Vicarious Pain
Authors: Justin Durham, Chanda Rooney, Robert Mather, Mickie Vanhoy
Abstract:
The psychological construct of empathy involves understanding a person’s cognitive perspective and experiencing the other person’s emotional state. Deciphering emotional states is conducive to interpreting vicarious pain. Observing others' physical pain activates neural networks related to the actual experience of pain itself. The study addresses empathy as a nonlinear dynamic process of simulation by which individuals understand the mental states of others and experience vicarious pain, exhibiting self-organized criticality. Such criticality follows from a combination of neural networks with an excitatory feedback loop generating bistability to resonate permutated empathy. Cortical networks exhibit diverse patterns of activity, including oscillations, synchrony, and waves; however, the temporal dynamics of the neurophysiological activities underlying empathic processes remain poorly understood. Mu rhythms are EEG oscillations with dominant frequencies of 8-13 Hz that become synchronized when the body is relaxed with eyes open and the sensorimotor system is idle; thus, mu rhythm synchrony is expected to be highest in baseline conditions. When the sensorimotor system is activated, either by performing or simulating action, mu rhythms become suppressed or desynchronized, and thus they should be suppressed while observing video clips of painful injuries if previous research on mirror system activation holds. Twelve undergraduates contributed EEG data and survey responses on empathy and psychopathy scales in addition to watching consecutive video clips of sports injuries. Participants watched a blank, black image on a computer monitor before and after observing a video of consecutive sports injury incidents. Each video condition lasted five minutes. A BIOPAC MP150 recorded EEG signals from sensorimotor and thalamocortical regions related to a complex neural network called the ‘pain matrix’. 
Physical and social pain are activated in this network to resonate vicarious pain responses when processing empathy. Five single-electrode EEG locations were applied to regions measuring sensorimotor electrical activity in microvolts (μV) to monitor mu rhythms. EEG signals were sampled at a rate of 200 Hz. Mu rhythm desynchronization was measured in the 8-13 Hz band at electrode sites F3 and F4. Data for each participant’s mu rhythms were analyzed via Fast Fourier Transformation (FFT) and multifractal time series analysis.
Keywords: desynchronization, dynamical systems theory, electroencephalography (EEG), empathy, multifractal time series analysis, mu waveform, neurophysiology, pain simulation, social cognition
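The 8-13 Hz analysis at a 200 Hz sampling rate can be sketched with a discrete Fourier transform: mu-band power is computed per condition, and desynchronization is expressed as the relative power drop from baseline to the observation condition. The synthetic sinusoidal signals and the event-related desynchronization (ERD) percentage below are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

FS = 200  # sampling rate in Hz, as in the recordings described

def mu_band_power(signal, fs=FS, band=(8.0, 13.0)):
    """Mean spectral power in the mu band, via the real FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return power[mask].mean()

def erd_percent(baseline, condition, fs=FS):
    """Event-related desynchronization: % mu-power drop vs. baseline."""
    p_base = mu_band_power(baseline, fs)
    p_cond = mu_band_power(condition, fs)
    return 100.0 * (p_base - p_cond) / p_base
```

A positive ERD value would correspond to mu suppression during the injury clips, the pattern predicted by mirror-system accounts.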
Procedia PDF Downloads 284