138 Importance of Different Spatial Parameters in Water Quality Analysis within Intensive Agricultural Area
Authors: Marina Bubalo, Davor Romić, Stjepan Husnjak, Helena Bakić
Abstract:
Even though European Council Directive 91/676/EEC, known as the Nitrates Directive, was adopted in 1991, the issue of water quality preservation in areas of intensive agricultural production still persists all over Europe. High nitrate nitrogen concentrations in surface water and groundwater originating from diffuse sources are one of the most important environmental problems in modern intensive agriculture. The fate of nitrogen in soil, surface water, and groundwater in agricultural areas is mostly affected by anthropogenic activity (i.e., agricultural practice) and by hydrological and climatological conditions. The aim of this study was to identify the impact of land use, soil type, soil vulnerability to pollutant percolation, and natural aquifer vulnerability on nitrate occurrence in surface water and groundwater within an intensive agricultural area. The study was set in Varaždin County (northern Croatia), which is under the significant influence of the large rivers Drava and Mura; consequently, the entire area is dominated by alluvial soil with a shallow active profile, mainly on a gravel base. The negative agricultural impact on water quality in this area is evident; therefore, half of the selected county is part of the delineated nitrate vulnerable zones (NVZ). Data on water quality were collected from 7 surface water and 8 groundwater monitoring stations in the county. A recent study of the area also included a detailed inventory of agricultural production and fertilizer use, with the aim of producing a new agricultural land use database as one of the dominant parameters. The analysis of this database, done using ArcGIS 10.1, showed that 52.7% of the total county area is agricultural land and that 59.2% of the agricultural land is used for intensive agricultural production. On the other hand, 56% of the soil within the county is classified as vulnerable to pollutant percolation. The situation is similar with natural aquifer vulnerability: the northern part of the county ranges from high to very high aquifer vulnerability. Statistical analysis of the water quality data was done using SPSS 13.0. Cluster analysis grouped both the surface water and the groundwater stations into two groups according to nitrate nitrogen concentrations. The mean nitrate nitrogen concentration in surface water group 1 ranges from 4.2 to 5.5 mg/l and in surface water group 2 from 24 to 42 mg/l. The results are similar, but evidently higher, in groundwater samples; the mean nitrate nitrogen concentration in group 1 ranges from 3.9 to 17 mg/l and in group 2 from 36 to 96 mg/l. ANOVA confirmed the statistical significance of this grouping of stations. The previously listed parameters (land use, soil type, etc.) were used in factorial correspondence analysis (FCA) to detect the importance of each stated parameter for local water quality. Since the stated parameters mostly cannot be altered, there is an obvious necessity for more precise and better-adapted land management in such conditions.
Keywords: agricultural area, nitrate, factorial correspondence analysis, water quality
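A minimal sketch of the two-group clustering and ANOVA workflow described above, using hypothetical per-station mean nitrate-nitrogen values (the study itself used SPSS 13.0; the numbers and the k-means choice here are illustrative assumptions):

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.stats import f_oneway

# Hypothetical mean NO3-N concentrations (mg/l) at 7 surface water stations
surface = np.array([4.2, 4.8, 5.5, 24.0, 31.0, 42.0, 5.1])

# Two-group clustering, analogous to the study's cluster analysis
centroids, labels = kmeans2(surface.reshape(-1, 1), 2, minit="++")
group1, group2 = surface[labels == 0], surface[labels == 1]
print("group means:", group1.mean(), group2.mean())

# One-way ANOVA comparing the resulting groups
f_stat, p_value = f_oneway(group1, group2)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```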
137 Applying Image Schemas and Cognitive Metaphors to Teaching/Learning Italian Preposition a in Foreign/Second Language Context
Authors: Andrea Fiorista
Abstract:
The learning of prepositions is a quite problematic aspect of foreign language instruction, and Italian is certainly not an exception. In their prototypical function, prepositions express schematic relations between two entities in a highly abstract, typically image-schematic way. In other terms, prepositions encode concepts such as directionality, the collocation of objects in space and time and, in Cognitive Linguistics' terms, the position of a trajector with respect to a landmark. Learners of different native languages may conceptualize them differently, implying that they are expected to operate a recategorization (or create new categories) fitting the target language. However, most current Italian Foreign/Second Language handbooks and didactic grammars do not help learners carry out this task, as they tend to provide partial and idiosyncratic descriptions, with the consequence that learners try to memorize them, most of the time without success. In their prototypical meaning, prepositions are used to specify precise topographical positions in the physical environment, which become less and less accurate as they radiate out from what might be termed a concrete prototype. Accordingly, the present study aims to elaborate a cognitive and conceptually well-grounded analysis of some extended uses of the Italian preposition a, in order to propose effective pedagogical solutions for the teaching/learning process. Image schemas, cognitive metaphors, and embodiment represent efficient cognitive tools for a task like this. Indeed, while learning the merely spatial use of the preposition a (e.g., Sono a Roma = I am in Rome; vado a Roma = I am going to Rome, …) is quite straightforward, it is more complex when a appears in constructions such as verbs of motion + a + infinitive (e.g., Vado a studiare = I am going to study), the inchoative periphrasis (e.g., Tra poco mi metto a leggere = In a moment I will start reading), and the causative construction (e.g., Lui mi ha mandato a lavorare = He sent me to work). The study reports data from a Focus on Form teaching intervention, in which a basic cognitive schema is used to help teachers and students respectively explain and understand the extended uses of a. The educational material employed translates Cognitive Linguistics' theoretical assumptions, such as image schemas and cognitive metaphors, into simple images or proto-scenes easily comprehensible to learners. Illustrative material, indeed, is supposed to make metalinguistic contents more accessible. Moreover, the concept of embodiment is pedagogically applied through activities including motion and learners' bodily involvement. It is expected that replacing rote learning with a methodology that gives grammatical elements a proper meaning makes the learning process more effective both in the short and the long term.
Keywords: cognitive approaches to language teaching, image schemas, embodiment, Italian as FL/SL
136 The Analysis of Gizmos Online Program as Mathematics Diagnostic Program: A Story from an Indonesian Private School
Authors: Shofiayuningtyas Luftiani
Abstract:
Some private schools in Indonesia have started integrating the online program Gizmos into the teaching-learning process. Gizmos was developed to supplement the existing curriculum by being integrated into instructional programs. The program has features based on inquiry-based simulation, in which students explore using a worksheet while teachers use the teacher guidelines to direct and assess students' performance. In this study, the discussion of Gizmos highlights its features as an assessment medium for mathematics learning for secondary school students. The discussion is based on a case study and a literature review from the Indonesian context. The purpose of applying Gizmos as an assessment medium refers to diagnostic assessment. As part of the diagnostic assessment, teachers review the student exploration sheet, analyze the students' difficulties in particular, and consider the findings when planning the future learning process. This assessment becomes important since the teacher needs data about students' persistent weaknesses. Additionally, the program also helps to build students' understanding through its interactive simulation. Currently, the assessment over-emphasizes the students' answers in the worksheet against the provided answer keys, while students exercise their skills in translating the question, doing the simulation, and answering the question. However, the assessment should involve multiple perspectives on and sources of students' performance, since the teacher should adjust the instructional programs to the complexity of students' learning needs and styles. Consequently, an approach to improving the assessment components is selected to challenge the current assessment. The purpose of this challenge is to involve not only cognitive diagnosis but also the analysis of skills and errors. Concerning the selected setting for this diagnostic assessment, which develops a combination of cognitive diagnosis, skills analysis, and error analysis, the teachers should create an assessment rubric. The rubric plays an important role as a guide providing a set of criteria for the assessment. Without a precise rubric, the teacher risks documenting and following up data about students at risk of failure ineffectively. Furthermore, teachers who employ Gizmos for diagnostic assessment might encounter some obstacles. Based on the conditions of assessment in the selected setting, the obstacles involve time constraints, reluctance to take on a higher teaching burden, and the students' behavior. Consequently, the teacher who chooses Gizmos with these approaches has to plan, implement, and evaluate the assessment. The main point of this assessment is not the result of the students' worksheet. Rather, the diagnostic assessment is a two-stage process: prompting and then effectively following up both individual weaknesses and those of the learning process. Ultimately, the discussion of Gizmos as a medium of diagnostic assessment refers to the effort to improve the mathematical learning process.
Keywords: diagnostic assessment, error analysis, Gizmos online program, skills analysis
135 Air–Water Two-Phase Flow Patterns in PEMFC Microchannels
Authors: Ibrahim Rassoul, A. Serir, E-K. Si Ahmed, J. Legrand
Abstract:
The acronym PEM refers to Proton Exchange Membrane or, alternatively, Polymer Electrolyte Membrane. Due to their high efficiency, low operating temperature (30–80 °C), and rapid evolution over the past decade, PEMFCs are increasingly emerging as a viable alternative clean power source for automotive and stationary applications. Before PEMFCs can be employed to power automobiles and homes, several key technical challenges must be properly addressed. One technical challenge is elucidating the mechanisms underlying water transport in, and removal from, PEMFCs. On one hand, sufficient water is needed in the polymer electrolyte membrane (PEM) to maintain sufficiently high proton conductivity. On the other hand, too much liquid water present in the cathode can cause "flooding" (that is, pore space filled with excessive liquid water) and hinder the transport of the oxygen reactant from the gas flow channel (GFC) to the three-phase reaction sites. The experimental transparent fuel cell used in this work was designed to represent the actual full-scale fuel cell geometry. Depending on the operating conditions, a number of flow regimes may appear in the microchannel: droplet flow, blockage by a liquid water bridge/plug (concave and convex forms), slug/plug flow, and film flow. Some of these flow patterns are new, while others have already been observed in PEMFC microchannels. An algorithm in MATLAB was developed to automatically determine the flow structure (e.g., slug, droplet, plug, film) of detected liquid water in the test microchannels and yield information pertaining to the distribution of water among the different flow structures. A video processing algorithm was developed to automatically detect dynamic and static liquid water present in the gas channels and generate relevant quantitative information. This software gives the user a more precise and systematic way to obtain measurements from images of small objects. The void fractions are also determined based on image analysis. The aim of this work is to provide a comprehensive characterization of two-phase flow in an operating fuel cell, which can be used towards the optimization of water management, informs design guidelines for gas delivery microchannels for fuel cells, and is essential in the design and control of diverse applications. The approach will combine numerical modeling with experimental visualization and measurements.
Keywords: polymer electrolyte fuel cell, air-water two phase flow, gas diffusion layer, microchannels, advancing contact angle, receding contact angle, void fraction, surface tension, image processing
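A simplified sketch of how a void fraction can be estimated from a grayscale channel image by thresholding, in the spirit of the image-analysis step described above (the study used a MATLAB video-processing algorithm; the threshold and the synthetic frame below are illustrative assumptions):

```python
import numpy as np

def void_fraction(frame: np.ndarray, threshold: float = 0.5) -> float:
    """Fraction of pixels classified as gas (bright) rather than liquid (dark)."""
    return (frame > threshold).mean()

# Synthetic grayscale frame: a dark liquid slug occupying ~30% of the channel
frame = np.ones((64, 256))
frame[:, :77] = 0.1  # liquid region
print(f"void fraction ≈ {void_fraction(frame):.2f}")  # ≈ 0.70
```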
134 Analyzing the Participation of Young People in Politics: An Exploratory Study Applied on Motivation in Croatia
Authors: Valentina Piric, Maja Martinovic, Zoran Barac
Abstract:
The application of marketing to the domain of politics has become relevant in recent times. With this article, the authors wanted to explore the issue of current political engagement among young people in Croatia. The question is what makes young people (aged 18-30) politically active in young democracies such as that of the Republic of Croatia. Therefore, the objective of this study was to discover the real or hidden motivations behind the decision to actively participate in politics among young members of the two largest political parties in the country – the Croatian Democratic Union and the Social Democratic Party of Croatia. The study expected to find that the motivation for political engagement of young people is often connected with the possible achievement of individual goals and egoistic needs such as self-acceptance, social success, financial success, prestige, reputation, status, recognition from others, etc. It was also expected that, due to the poor economic and social situation in the country, young people feel an increasing disconnection from politics. Additionally, the authors expected to find that there is huge potential to engage young people in the political life of the country through a proper and more interactive use of marketing communication campaigns and social media platforms, with an emphasis on highly ethical motives of political activity and their benefits to society. All respondents included in the quantitative survey (sample size N=100) are active in one of the two largest political parties in Croatia. The sampling and distribution of the survey occurred in the field in September 2016. The results of the survey demonstrate that in Croatia, the way young people feel about politics and act accordingly is in fact similar to what the theory describes. The research findings reveal that young people are politically active; however, the challenge is to find a way to motivate even more young people in Croatia to actively participate in the political and democratic processes in the country and to encourage them to see additional benefits of this practice, related not only to their individual motives but more to the well-being of Croatia as a country and of every member of society. The research also discovered huge potential for political marketing communication, especially related to interactive social media. It is possible that social media channels have a stronger influence on the decision-making process among young people than reference groups do. The level of interest in politics among young Croatians varies; some of them are almost indifferent, whilst others express a serious interest in different ways to actively contribute to the political life of the country, defining participation in the political life of their country almost as a moral obligation. However, additional observations and further research need to be conducted to get a clearer and more precise picture of the interest in politics among young people in Croatia and their social potential.
Keywords: Croatia, marketing communication, motivation, politics, young people
133 Attention and Memory in the Music Learning Process in Individuals with Visual Impairments
Authors: Lana Burmistrova
Abstract:
Introduction: The influence of visual impairments on several cognitive processes used in the music learning process is an increasingly important area in special education and cognitive musicology. Many children have several visual impairments due to refractive errors and irreversible inhibitors. However, based on compensatory neuroplasticity and functional reorganization, congenitally blind (CB) and early blind (EB) individuals use several areas of the occipital lobe to perceive and process auditory and tactile information. CB individuals have greater memory capacity and memory reliability, and fewer false-memory mechanisms are used while executing several tasks; they have better working memory (WM) and short-term memory (STM). Blind individuals use several strategies while executing tactile and working-memory n-back tasks: a verbalization strategy (mental recall), a tactile strategy (tactile recall), and combined strategies. Methods and design: The aim of the pilot study was to substantiate similar tendencies while executing the attention, memory, and combined auditory tasks constructed for this study in blind and sighted individuals, and to investigate the attention, memory, and combined mechanisms used in the music learning process. For this study, eight (n=8) blind and eight (n=8) sighted individuals aged 13-20 were chosen. All respondents had more than five years of music performance and music learning experience. In the attention task, all respondents had to identify pitch changes in tonal and randomized melodic pairs. The memory task was based on the mismatch negativity (MMN) proportion theory: 80 percent standard (unchanged) and 20 percent deviant (changed) stimuli (sequences). Every sequence was named (na-na, ra-ra, za-za) and several items (pencil, spoon, tealight) were assigned to each sequence. Respondents had to recall the sequences, associate them with the items, and detect possible changes. While executing the combined task, all respondents had to focus attention on the pitch changes and had to detect and describe these during the recall. Results and conclusion: The results support specific features in CB and EB individuals, and similarities between late blind (LB) and sighted individuals. While executing the attention and memory tasks, it was possible to observe a tendency of CB and EB individuals to use more precise execution tactics and more advanced periodic memory while focusing on auditory and tactile stimuli. While executing the memory and combined tasks, CB and EB individuals used passive working memory to recall standard sequences, active working memory to recall deviant sequences, and combined strategies. Based on the observation results, the assessment of blind respondents, and recording specifics, the following attention and memory correlations were identified: reflective attention and STM, reflective attention and periodic memory, auditory attention and WM, tactile attention and WM, auditory-tactile attention and STM. The results and the summary of findings highlight the attention and memory features used in the music learning process in the context of blindness, and the tendency of the several attention and memory types to correlate depending on the task, strategy, and individual features.
Keywords: attention, blindness, memory, music learning, strategy
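A minimal sketch of how an 80/20 standard/deviant (MMN-style) stimulus sequence like the one described above can be generated; the sequence labels and trial count are illustrative assumptions:

```python
import random

def oddball_sequence(n_trials: int = 100, deviant_ratio: float = 0.2,
                     standard: str = "na-na", deviant: str = "na-NA"):
    """Shuffled trial list with the requested proportion of deviants."""
    n_deviant = int(n_trials * deviant_ratio)
    trials = [deviant] * n_deviant + [standard] * (n_trials - n_deviant)
    random.shuffle(trials)
    return trials

seq = oddball_sequence()
print(seq[:10], "| deviants:", seq.count("na-NA"))  # 20 deviants in 100 trials
```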
132 Applicability of Polyisobutylene-Based Polyurethane Structures in Biomedical Disciplines: Some Calcification and Protein Adsorption Studies
Authors: Nihan Nugay, Nur Cicek Kekec, Kalman Toth, Turgut Nugay, Joseph P. Kennedy
Abstract:
In recent years, polyurethane structures have been paving the way for elastomer usage in biology, human medicine, and biomedical application areas. Polyurethanes combining high oxidative and hydrolytic stability with excellent mechanical properties are in focus, since these properties extend the usage of PUs, especially for implantable medical device applications such as cardiac assist. Currently, unique polyurethanes consisting of polyisobutylene soft segments and conventional hard segments, named PIB-based PUs, are developed with precise NCO/OH stoichiometry (~1.05) to obtain PIB-based PUs with enhanced properties (i.e., tensile stress increased from ~11 to ~26 MPa and elongation from ~350 to ~500%). Static and dynamic mechanical properties were optimized by examining stress-strain graphs, self-organization and crystallinity (XRD) traces, rheological (DMA, creep) profiles, and thermal (TGA, DSC) responses. An annealing procedure was applied to the PIB-based PUs. Annealed PIB-based PU shows ~26 MPa tensile strength, ~500% elongation, and ~77 Microshore hardness with excellent hydrolytic and oxidative stability. The surface characteristics were examined with AFM and contact angle measurements. Annealed PIB-based PU exhibits greater segregation of the individual segments and higher surface hydrophobicity; thus annealing significantly enhances hydrolytic and oxidative stability by shielding the carbamate bonds with inert PIB chains. Given the improved surface and microstructure characteristics, greater efforts were focused on analyzing protein adsorption and calcification profiles. In biomedical applications, especially cardiological implantations, protein adsorption onto polymeric heart valves is undesirable, since protein adsorption from blood serum is followed by platelet adhesion and subsequent thrombus formation. The protein adsorption character of PIB-based PU was examined by applying the Bradford assay in fibrinogen and bovine serum albumin solutions. Like protein adsorption, calcium deposition on heart valves is very harmful, because vascular calcification has been attributed to activation of osteogenic mechanisms in the vascular wall, loss of inhibitory factors, enhanced bone turnover, and irregularities in mineral metabolism. The calcium deposition on the films was characterized by incubating samples in simulated body fluid solution and examining SEM images and XPS profiles. PIB-based PUs are significantly more resistant to hydrolytic-oxidative degradation, protein adsorption, and calcium deposition than ElastEonTM E2A, a commercially available PDMS-based PU widely used for biomedical applications.
Keywords: biomedical application, calcification, polyisobutylene, polyurethane, protein adsorption
131 Analysis of Superconducting and Optical Properties in Atomic Layer Deposition and Sputtered Thin Films for Next-Generation Single-Photon Detectors
Authors: Nidhi Choudhary, Silke A. Peeters, Ciaran T. Lennon, Dmytro Besprozvannyy, Harm C. M. Knoops, Robert H. Hadfield
Abstract:
Superconducting Nanowire Single Photon Detectors (SNSPDs) have become leading devices in quantum optics and photonics, known for their exceptional efficiency in detecting single photons from ultraviolet to mid-infrared wavelengths with minimal dark counts, low noise, and reduced timing jitter. Recent advancements in materials science focus attention on refractory metal thin films such as NbN and NbTiN to enhance the optical properties and superconducting performance of SNSPDs, opening the way for next-generation detectors. These films have been deposited by several different techniques, such as atomic layer deposition (ALD), PlasmaPro advanced plasma processing (ASP), and magnetron sputtering. The fabrication flexibility of these techniques enables precise control over morphology, crystallinity, stoichiometry, and optical properties, which is crucial for optimizing SNSPD performance. Hence, it is imperative to study the optical and superconducting properties of these materials across a wide range of wavelengths. This study provides a comprehensive analysis of the optical and superconducting properties of some important materials in this category (NbN, NbTiN) prepared by different deposition methods. Using variable angle spectroscopic ellipsometry (VASE), we measured the refractive index, extinction coefficient, and absorption coefficient across a wide wavelength range (200-1700 nm) with a view to enhancing light confinement for optical communication devices. The critical temperature and sheet resistance were measured using a four-probe method in a custom-built, cryogen-free cooling system with a Sumitomo RDK-101D cold head and CNA-11C compressor. Our results indicate that ALD-deposited NbN shows a higher refractive index and extinction coefficient in the near-infrared region (~1500 nm) than sputtered NbN of the same thickness. Further, the optical properties of ASP-deposited NbTiN were analyzed at different substrate bias voltages and different thicknesses. This analysis indicates that the maximum values of the refractive index and extinction coefficient were observed for substrate biasing of 50-80 V across the studied bias range of 0 V - 150 V. The optical properties of sputtered NbN films were also investigated as a function of the substrate temperature during deposition (100 °C-500 °C). We find that the higher the substrate temperature during deposition, the higher the refractive index and extinction coefficient. Of all our superconducting thin films, ALD-deposited NbN possesses the highest critical temperature (~12 K), compared to sputtered (~8 K) and ASP-deposited (~5 K) films.
Keywords: optical communication, thin films, superconductivity, atomic layer deposition (ALD), niobium nitride (NbN), niobium titanium nitride (NbTiN), SNSPD, superconducting detector, photon-counting
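A small worked example of the standard relation between the extinction coefficient k extracted from VASE data and the absorption coefficient, α = 4πk/λ; the k value used below is an illustrative assumption, not a measured result from this study:

```python
import math

def absorption_coefficient(k: float, wavelength_nm: float) -> float:
    """Absorption coefficient alpha = 4*pi*k/lambda, returned in 1/cm."""
    wavelength_cm = wavelength_nm * 1e-7
    return 4 * math.pi * k / wavelength_cm

# Hypothetical extinction coefficient for NbN near 1550 nm
alpha = absorption_coefficient(k=2.5, wavelength_nm=1550)
print(f"alpha ≈ {alpha:.2e} cm^-1")  # ≈ 2.0e+05 cm^-1
```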
130 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence
Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang
Abstract:
Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves the small scales to be modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in the LES of engineering flows and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. Firstly, we analyze the influence of subfilter-scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress, due to the insufficient resolution of the SFS dynamics. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving the Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Additionally, the exploration further extends to filter anisotropy to address its impact on the SFS dynamics and LES accuracy. Employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in the LES filters are evaluated. The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of the DSM and DMM deteriorate, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. The findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for the LES of turbulence.
Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence
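A one-dimensional sketch of the basic operation behind deconvolution-type SFS modeling: filter a resolved field with a spectral Gaussian filter, then approximately invert the filter with van Cittert iterations. The grid size, filter width, and iteration count are illustrative choices, not parameters from the study:

```python
import numpy as np

n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(8 * x)           # "unfiltered" field

# Spectral Gaussian filter, transfer function G(k) = exp(-k^2 Delta^2 / 24)
k = 2 * np.pi * np.fft.fftfreq(n, d=2 * np.pi / n)
delta = 4 * (2 * np.pi / n)                   # filter width: 4 grid spacings
G = np.exp(-(k ** 2) * delta ** 2 / 24.0)

def gfilter(f):
    return np.real(np.fft.ifft(G * np.fft.fft(f)))

u_bar = gfilter(u)                            # filtered (resolved) field

# Van Cittert iterations: u* <- u* + (u_bar - G u*)
u_star = u_bar.copy()
for _ in range(5):
    u_star = u_star + (u_bar - gfilter(u_star))

print("max error, filtered    :", np.abs(u - u_bar).max())
print("max error, deconvolved :", np.abs(u - u_star).max())
```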
129 Characterization of Thin Woven Composites Used in Printed Circuit Boards by Combining Numerical and Experimental Approaches
Authors: Gautier Girard, Marion Martiny, Sebastien Mercier, Mohamad Jrad, Mohamed-Slim Bahi, Laurent Bodin, Francois Lechleiter, David Nevo, Sophie Dareys
Abstract:
The reliability of electronic devices has always been of the highest interest for Aero-MIL and space applications. In any electronic device, the Printed Circuit Board (PCB), providing interconnection between components, is key to reliability. During the last decades, PCB technologies evolved to sustain and/or fulfill increased original equipment manufacturers' requirements and specifications: higher densities and better performance, faster time to market and longer lifetimes, newer materials and mixed buildups. From the very beginning of the PCB industry until recently, qualification, experiments, and trial and error were the most popular methods to assess system (PCB) reliability. Nowadays, OEMs, PCB manufacturers, and scientists are working closely together to develop predictive models for PCB reliability and lifetime. To achieve that goal, it is fundamental to characterize the base materials (laminates, electrolytic copper, …) precisely, in order to understand failure mechanisms and simulate PCB aging under environmental constraints, for example by means of the finite element method. The laminates are woven composites and thus have an orthotropic behaviour. The in-plane properties can be measured by combining classical uniaxial testing with digital image correlation. Nevertheless, the out-of-plane properties cannot be evaluated in this way due to the thickness of the laminate (a few hundred microns). It has to be noted that knowledge of the out-of-plane properties is fundamental to investigating the lifetime of high-density printed circuit boards. A homogenization method combining analytical and numerical approaches has been developed in order to obtain the complete elastic orthotropic behaviour of a woven composite from its precise 3D internal structure and its experimentally measured in-plane elastic properties. Since the mechanical properties of the resin surrounding the fibres are unknown, an inverse method is proposed to estimate them. The methodology has been applied to one laminate used in hyperfrequency space applications in order to obtain its elastic orthotropic behaviour at different temperatures in the range [-55 °C; +125 °C]. Next, numerical simulations of a plated through hole in a double-sided PCB are performed. Results show the major influence of the out-of-plane properties, and of their temperature dependency, on the lifetime of a printed circuit board. Acknowledgements: The support of the French ANR agency through the Labcom program ANR-14-LAB7-0003-01, and the support of CNES, Thales Alenia Space, and Cimulec, are acknowledged.
Keywords: homogenization, orthotropic behaviour, printed circuit board, woven composites
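A drastically simplified one-dimensional illustration of the inverse idea described above: given a measured in-plane composite modulus and known fibre properties, solve for the unknown resin modulus. The study's actual method is a full 3D homogenization of the woven architecture; the rule-of-mixtures model and all numbers below are assumptions for illustration only:

```python
from scipy.optimize import brentq

E_FIBRE = 73.0      # GPa, typical E-glass fibre modulus (assumed)
V_FIBRE = 0.45      # fibre volume fraction (assumed)
E_MEASURED = 35.0   # GPa, in-plane modulus from tensile test + DIC (assumed)

def voigt_modulus(e_matrix: float) -> float:
    """Rule-of-mixtures (Voigt) estimate of the longitudinal modulus."""
    return V_FIBRE * E_FIBRE + (1 - V_FIBRE) * e_matrix

# Inverse step: find the resin modulus that reproduces the measurement
e_matrix = brentq(lambda e: voigt_modulus(e) - E_MEASURED, 0.1, 20.0)
print(f"estimated resin modulus ≈ {e_matrix:.2f} GPa")
```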
128 Validation of Global Ratings in Clinical Performance Assessment
Authors: S. J. Yune, S. Y. Lee, S. J. Im, B. S. Kam, S. Y. Baek
Abstract:
This study aimed to determine the reliability of clinical performance assessments, which have been emphasized by ability-based education, and of professors' overall assessment methods. We addressed the following problems. First, we tried to find out whether there is a difference in what are considered to be the main variables affecting the clinical performance test, according to the evaluator's length of service and amount of evaluation experience. Second, we examined the relationship among the global rating score (G), the analytic global rating score (Gc), and the sum of the analytic checklist scores (C). What are the main factors affecting clinical performance assessments in relation to the number of times an evaluator has administered evaluations and the length of their service? What is the relationship between the overall assessment score and the analytic checklist score? How do the analytic global rating score (Gc), comprising six components in the OSCE (aseptic practice, precision, systemic approach, proficiency, successfulness, and attitude) and four components in the CPX sub-domains, and the task-specific analytic checklist score sum (C) affect the professor's overall global rating score (G)? We studied 75 professors who attended a 2016 Bugyeoung Consortium clinical skills performance test evaluating third- and fourth-year medical students at the Pusan National University Medical School in South Korea (39 professors in the OSCE, 36 in the CPX; all consented to participate in our study). Each evaluator used three forms: a task-specific analytic checklist, a subsequent analytic global rating scale with six sub-domains, and an overall global scale. After the evaluation, the professors responded to a questionnaire on the important factors of clinical performance assessment. The data were analyzed by frequency analysis, correlation analysis, and hierarchical regression analysis using SPSS 21.0. Their understanding of the overall assessment was analyzed by dividing the subjects into groups based on experience. As a result, they considered ‘precision’ most important in the overall OSCE assessment, and ‘precise and accurate physical examination’, ‘systemic approach to taking patient history’, and ‘diagnostic skill capability’ in the overall CPX assessment. For the OSCE, there was no clear difference of opinion about the main factors, but for the CPX there was. The analytic global rating scale score, the overall rating scale score, and the analytic checklist score had meaningful mutual correlations. According to the regression analysis results, the task-specific checklist score sum had the greatest effect on the overall global rating. Professors regarded the task-specific analytic checklist score sum as best reflecting the overall OSCE test score, followed by aseptic practice, precision, systemic approach, proficiency, successfulness, and attitude on the subsequent analytic global rating scale. For the CPX, the subsequent analytic global rating scale score, the overall global rating scale score, and the task-specific checklist score had meaningful mutual correlations. These findings support the validity of professors' global ratings in clinical performance assessment.
Keywords: global rating, clinical performance assessment, medical education, analytic checklist
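A sketch of the G ~ Gc + C relationship examined above, using synthetic scores in place of the study's SPSS 21.0 data; all values and coefficients below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 75                                           # one row per evaluator
C = rng.normal(70, 10, n)                        # checklist score sum
Gc = 0.6 * C + rng.normal(0, 5, n)               # analytic global rating
G = 0.1 * Gc + 0.8 * C + rng.normal(0, 3, n)     # overall global rating

# Pairwise correlations among G, Gc, and C
print(np.corrcoef([G, Gc, C]).round(2))

# Least-squares regression of G on Gc and C
X = np.column_stack([np.ones(n), Gc, C])
beta, *_ = np.linalg.lstsq(X, G, rcond=None)
print("intercept, b_Gc, b_C =", beta.round(2))
```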
127 Development and Validation of a Turbidimetric Bioassay to Determine the Potency of Ertapenem Sodium
Authors: Tahisa M. Pedroso, Hérida R. N. Salgado
Abstract:
The microbiological turbidimetric assay allows the determination of the potency of a drug by measuring the turbidity (absorbance) caused by the inhibition of microorganisms by ertapenem sodium. Ertapenem sodium (ERTM), a synthetic antimicrobial agent of the carbapenem class, shows action against Gram-negative, Gram-positive, aerobic, and anaerobic microorganisms. Turbidimetric assays are described in the literature for some antibiotics, but no such method is described for ertapenem. The objective of the present study was to develop and validate a simple, sensitive, precise, and accurate microbiological turbidimetric assay to quantify injectable ertapenem sodium as an alternative to the physicochemical methods described in the literature. Several preliminary tests were performed to choose the following parameters: Staphylococcus aureus ATCC 25923, IAL 1851, 8% inoculum, BHI culture medium, and an aqueous solution of ertapenem sodium. Sterile BHI culture medium (10.0 mL) was distributed into 20 tubes. Then 0.2 mL of the standard and test solutions was added to the respective tubes (S1, S2, and S3; T1, T2, and T3), and 0.8 mL of inoculated culture medium was transferred to each tube, according to the 3 x 3 parallel-lines assay design. The tubes were incubated in a Marconi MA 420 shaker at a temperature of 35.0 °C ± 2.0 °C for 4 hours. After this period, the growth of the microorganisms was stopped by the addition of 0.5 mL of 12% formaldehyde solution to each tube. The absorbance was determined in a Quimis Q-798DRM spectrophotometer at a wavelength of 530 nm. An analytical curve was constructed to obtain the equation of the line by the least-squares method, and linearity and parallelism were assessed by ANOVA. The specificity of the method was proven by comparing the responses obtained for the standard and the finished product. The precision was checked by repeating the determination of ertapenem sodium on three days. The accuracy was determined by a recovery test. The robustness was determined by comparing the results obtained when varying the wavelength, the brand of culture medium, and the volume of culture medium in the tubes. Statistical analysis showed that there is no deviation from linearity in the analytical curves of the standard and test samples. The correlation coefficients were 0.9996 and 0.9998 for the standard and test samples, respectively. The specificity was confirmed by comparing the absorbance of the reference substance and the test samples. The values obtained for intraday, interday, and between-analyst precision were 1.25%, 0.26%, and 0.15%, respectively. The amount of ertapenem sodium found in the analyzed samples, 99.87%, is consistent. The accuracy was proven by the recovery test, with a value of 98.20%. The parameters varied did not affect the analysis of ertapenem sodium, confirming the robustness of the method. The turbidimetric assay is more versatile, faster, and easier to apply than the agar diffusion assay. The method is simple, rapid, and accurate and can be used in the routine quality control analysis of formulations containing ertapenem sodium.
Keywords: ertapenem sodium, turbidimetric assay, quality control, validation
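A sketch of the least-squares analytical curve and the interpolation step used in turbidimetric assays like the one above; the concentrations and absorbances below are invented for illustration, not data from the study:

```python
import numpy as np

# Standard curve: log10 concentration (µg/mL) vs absorbance at 530 nm;
# turbidity falls as the antimicrobial concentration rises
log_conc = np.log10([2.0, 4.0, 8.0])         # assumed standard levels
absorbance = np.array([0.52, 0.41, 0.30])

slope, intercept = np.polyfit(log_conc, absorbance, 1)

def sample_concentration(sample_absorbance: float) -> float:
    """Interpolate a sample concentration from the standard curve."""
    return 10 ** ((sample_absorbance - intercept) / slope)

print(f"sample ≈ {sample_concentration(0.40):.2f} µg/mL")
```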
126 Development and Validation of a Rapid Turbidimetric Assay to Determine the Potency of Cefepime Hydrochloride in Powder Injectable Solution
Authors: Danilo F. Rodrigues, Hérida Regina N. Salgado
Abstract:
Introduction: The emergence of microorganisms resistant to a large number of clinically approved antimicrobials has been increasing, which restricts the options for the treatment of bacterial infections. As a strategy, drugs with high antimicrobial activity are in evidence. One such class of antimicrobials, the cephalosporins, stands out; its fourth generation includes cefepime (CEF), a semi-synthetic product which has activity against various aerobic Gram-positive (e.g., oxacillin-resistant Staphylococcus aureus) and Gram-negative (e.g., Pseudomonas aeruginosa) bacteria. There are few studies in the literature regarding the development of microbiological methodologies for the analysis of this antimicrobial, so research in this area is highly relevant to optimize the analysis of this drug in industry and ensure the quality of the marketed product. The development of microbiological methods for the analysis of antimicrobials has gained strength in recent years and has been highlighted in comparison with physicochemical methods, especially because such methods make it possible to determine the bioactivity of the drug against a microorganism. In this context, the aim of this work was the development and validation of a microbiological method for the quantitative analysis of CEF in lyophilized powder for injectable solution by turbidimetric assay. Method: Staphylococcus aureus ATCC 6538 IAL 2082 was used as the test microorganism, and the culture medium chosen was Casoy broth. The test was performed using temperature control (35.0 °C ± 2.0 °C) and incubated for 4 hours in a shaker. The readings of the results were made at a wavelength of 530 nm with a spectrophotometer. The turbidimetric microbiological method was validated by determining the following parameters: linearity, precision (repeatability and intermediate precision), accuracy, and robustness, according to ICH guidelines. Results and discussion: Among the parameters evaluated for method validation, linearity showed suitable results in the statistical analyses, with correlation coefficients (r) of 0.9990 for the CEF reference standard and 0.9997 for the CEF sample. The precision presented the following values: 1.86% (intraday), 0.84% (interday), and 0.71% (between analysts). The accuracy of the method was proven through the recovery test, where the mean value obtained was 99.92%. The robustness was verified by varying the volume of culture medium, the brand of culture medium, the incubation time in the shaker, and the wavelength. The potency of CEF present in the samples of lyophilized powder for injectable solution was 102.46%. Conclusion: The turbidimetric microbiological method proposed for the quantification of CEF in lyophilized powder for injectable solution proved to be fast, linear, precise, accurate, and robust, in accordance with all the requirements, and can be used in the routine analysis of quality control in the pharmaceutical industry as an option for microbiological analysis.
Keywords: cefepime hydrochloride, quality control, turbidimetric assay, validation
125 An Absolute Femtosecond Rangefinder for Metrological Support in Coordinate Measurements
Authors: Denis A. Sokolov, Andrey V. Mazurkevich
Abstract:
In the modern world, there is an increasing demand for highly precise measurements in various fields, such as aircraft manufacturing, shipbuilding, and rocket engineering. This has resulted in the development of measuring instruments that are capable of measuring the coordinates of objects within a range of up to 100 meters, with an accuracy of up to one micron. The calibration process for such optoelectronic measuring devices (trackers and total stations) involves comparing their measurement results to a reference measurement on a linear or spatial basis. The reference used in such measurements could be a reference base or a reference rangefinder with the capability to measure angle increments (EDM). The base would serve as a set of reference points for this purpose. The concept of the EDM for replicating the unit of measurement has been implemented on a mobile platform, which allows for angular changes in the direction of the laser radiation in two planes. To determine the distance to an object, a high-precision interferometer of our own design is employed. The laser radiation travels to corner reflectors, which form a spatial reference with precisely known positions. When the femtosecond pulses from the reference arm and the measuring arm coincide, an interference signal is created, repeating at the frequency of the laser pulses. The distance between reference points determined by the interference signals is calculated in accordance with recommendations from the International Bureau of Weights and Measures for the indirect measurement of the time of passage of light, according to the definition of the meter. This distance is D/2 = c/(2nF), approximately 2.5 meters, where c is the speed of light in a vacuum, n is the refractive index of the medium, and F is the repetition frequency of the femtosecond pulses. The achieved Type A uncertainty of the measurement of distances to reflectors 64 m away (N·D/2, where N is an integer), spaced at a distance of 1 m relative to each other, does not exceed 5 microns. The angular uncertainty is calculated theoretically, since standard high-precision ring encoders will be used and are not a focus of research in this study. The Type B uncertainty components are not taken into account either, as the components that contribute most do not depend on the selected coordinate measuring method. This technology is being explored in the context of laboratory applications under controlled environmental conditions, where it is possible to achieve an advantage in terms of accuracy. In general, the EDM tests showed high accuracy, and theoretical calculations and experimental studies on an EDM prototype have shown that the Type A uncertainty of distance measurements to reflectors can be less than 1 micrometer. The results of this research will be utilized to develop a highly accurate mobile absolute rangefinder designed for the calibration of high-precision laser trackers and laser rangefinders, as well as other equipment, using a 64-meter laboratory comparator as a reference.
Keywords: femtosecond laser, pulse correlation, interferometer, laser absolute range finder, coordinate measurement
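A quick numerical check of the quoted relation D/2 = c/(2nF): back-solving from the stated ~2.5 m gives the implied pulse repetition frequency. The refractive index of air and the inferred F are assumptions, not figures given in the abstract:

```python
c = 299_792_458.0    # speed of light in vacuum, m/s
n = 1.00027          # approximate refractive index of air (assumed)

d_half = 2.5                   # m, stated distance between reference points
f_rep = c / (2 * n * d_half)   # implied pulse repetition frequency
print(f"F ≈ {f_rep / 1e6:.1f} MHz")          # ≈ 60 MHz

# And back again, reproducing the stated distance
print(f"D/2 = {c / (2 * n * f_rep):.3f} m")  # 2.500 m
```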
124 Real-Space Mapping of Surface Trap States in CIGSe Nanocrystals Using 4D Electron Microscopy
Authors: Riya Bose, Ashok Bera, Manas R. Parida, Anirudhha Adhikari, Basamat S. Shaheen, Erkki Alarousu, Jingya Sun, Tom Wu, Osman M. Bakr, Omar F. Mohammed
Abstract:
This work reports the visualization of charge carrier dynamics on the surface of copper indium gallium selenide (CIGSe) nanocrystals in real space and time using four-dimensional scanning ultrafast electron microscopy (4D S-UEM), and correlates it with the optoelectronic properties of the nanocrystals. The surface of the nanocrystals plays a key role in controlling their applicability for light emitting and light harvesting purposes. Typically for quaternary systems like CIGSe, which have many attributes desirable for optoelectronic applications, the relative abundance of surface trap states acting as non-radiative recombination centres for charge carriers remains a major bottleneck preventing further advancement and commercial exploitation of devices based on these nanocrystals. Though ultrafast spectroscopic techniques allow the presence of picosecond carrier trapping channels to be determined, because of the relatively large penetration depth of the laser beam, the information obtained comes mainly from the bulk of the nanocrystals. Selective mapping of such ultrafast dynamical processes on the surfaces of nanocrystals remains a key challenge, so far out of reach of purely optical time-resolved laser techniques. In S-UEM, an optical pulse generated from a femtosecond (fs) laser system is used to generate electron packets from the tip of the scanning electron microscope, instead of the continuous electron beam used in the conventional setup. This pulse is synchronized with another optical excitation pulse that initiates carrier dynamics in the sample. The principle of S-UEM is to detect the secondary electrons (SEs) generated in the sample, which are emitted from the first few nanometers of the top surface. Constructed at different time delays between the optical and electron pulses, these SE images give direct and precise information about the carrier dynamics on the surface of the material of interest. In this work, we report the selective mapping of the surface dynamics of CIGSe nanocrystals in real space and time by applying 4D S-UEM. We show that the trap states can be considerably passivated by ZnS shelling of the nanocrystals, and that the carrier dynamics can be significantly slowed down. We also compare and discuss the S-UEM kinetics with the carrier dynamics obtained from conventional ultrafast time-resolved techniques. Additionally, a direct effect of the trap state removal can be observed in the enhanced photoresponse of the nanocrystals after shelling. Direct observation of surface dynamics will not only provide a profound understanding of the photo-physical mechanisms on nanocrystals' surfaces but will also make it possible to unlock their full potential for light emitting and harvesting applications.
Keywords: 4D scanning ultrafast microscopy, charge carrier dynamics, nanocrystals, optoelectronics, surface passivation, trap states
123 Statistical Comparison of Ensemble Based Storm Surge Forecasting Models
Authors: Amin Salighehdar, Ziwen Ye, Mingzhe Liu, Ionut Florescu, Alan F. Blumberg
Abstract:
Storm surge is an abnormal rise of the water level caused by a storm. Accurate prediction of storm surge is a challenging problem. Researchers have developed various ensemble modeling techniques to combine several individual forecasts to produce an overall, presumably better, forecast. Some simple ensemble modeling techniques exist in the literature. For instance, Model Output Statistics (MOS) and running mean-bias removal are widely used techniques in the storm surge prediction domain. However, these methods have some drawbacks. For instance, MOS is based on multiple linear regression, and it needs a long period of training data. To overcome the shortcomings of these simple methods, researchers have proposed more advanced methods. For instance, ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecasting. This application creates a better forecast of sea level using a combination of several instances of Bayesian Model Averaging (BMA). An ensemble dressing method is based on identifying the best member forecast and using it for prediction. Our contribution in this paper can be summarized as follows. First, we investigate whether ensemble models perform better than any single forecast. For this, we need to identify the single best forecast, and we present a methodology based on a simple Bayesian selection method to select it. Second, we present several new and simple ways to construct ensemble models, using correlation and standard deviation as weights in combining different forecast models. Third, we use these ensembles and compare them with several existing models from the literature to forecast storm surge level. We then investigate whether developing a complex ensemble model is indeed needed. To achieve this goal, we use a simple average (one of the simplest and most widely used ensemble models) as a benchmark. Predicting the peak level of surge during a storm, as well as the precise time at which this peak level takes place, is crucial; thus, we develop a statistical platform to compare the performance of the various ensemble methods. This statistical analysis is based on the root mean square error of the ensemble forecast during the testing period and on the magnitude and timing of the forecasted peak surge compared to the actual peak and its time. In this work, we analyze four hurricanes: hurricanes Irene and Lee in 2011, hurricane Sandy in 2012, and hurricane Joaquin in 2015. Since hurricane Irene developed at the end of August 2011 and hurricane Lee started just after Irene at the beginning of September 2011, in this study we consider them as a single contiguous hurricane event. The data set used for this study was generated by the New York Harbor Observing and Prediction System (NYHOPS). We find that even the simplest possible way of creating an ensemble produces results superior to any single forecast. We also show that the ensemble models we propose generally have better performance than the simple average ensemble technique.
Keywords: Bayesian learning, ensemble model, statistical analysis, storm surge prediction
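A sketch of the two weighting ideas described above: a plain average of ensemble members versus weights derived from each member's correlation with observations, compared by RMSE. The surge series and member error levels are synthetic stand-ins, not NYHOPS data:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 48, 97)                      # hours
obs = 1.5 * np.exp(-((t - 30.0) ** 2) / 40.0)   # synthetic surge peak (m)

# Three synthetic member forecasts with different noise levels
members = np.stack([obs + rng.normal(0, s, t.size) for s in (0.05, 0.15, 0.30)])

def rmse(forecast):
    return np.sqrt(np.mean((forecast - obs) ** 2))

simple_avg = members.mean(axis=0)               # benchmark ensemble

# Correlation-based weights, normalized to sum to one
corr = np.array([np.corrcoef(m, obs)[0, 1] for m in members])
weights = corr / corr.sum()
weighted = weights @ members

print(f"RMSE, simple average : {rmse(simple_avg):.3f} m")
print(f"RMSE, corr-weighted  : {rmse(weighted):.3f} m")
```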
122 A Demonstration of How to Employ and Interpret Binary IRT Models Using the New IRT Procedure in SAS 9.4
Authors: Ryan A. Black, Stacey A. McCaffrey
Abstract:
Over the past few decades, great strides have been made towards improving the science of measuring psychological constructs. Item Response Theory (IRT) has been the foundation upon which statistical models have been derived to increase both precision and accuracy in psychological measurement. These models are now being used widely to develop and refine tests intended to measure an individual's level of academic achievement, aptitude, and intelligence. Recently, the field of clinical psychology has adopted IRT models to measure psychopathological phenomena such as depression, anxiety, and addiction. Because advances in IRT measurement models are being made so rapidly across various fields, it has become quite challenging for psychologists and other behavioral scientists to keep abreast of the most recent developments, much less learn how to employ these models and decide which are the most appropriate to use in their line of work. In the same vein, IRT measurement models vary greatly in complexity in several interrelated ways, including but not limited to the number of item-specific parameters estimated in a given model, the function which links the expected response and the predictor, response option formats, and dimensionality. As a result, inferior methods (a.k.a. Classical Test Theory methods) continue to be employed in efforts to measure psychological constructs, despite evidence showing that IRT methods yield more precise and accurate measurement. To increase the use of IRT methods, this study endeavors to provide a comprehensive overview of binary IRT models, that is, measurement models employed on test data consisting of binary response options (e.g., correct/incorrect, true/false, agree/disagree). Specifically, this study will cover binary IRT models from the most basic, the 1-parameter logistic (1-PL) model dating back over 50 years, up to the most recent and complex, the 4-parameter logistic (4-PL) model. Binary IRT models will be defined mathematically, and the interpretation of each parameter will be provided. Next, all four binary IRT models will be employed on two sets of data: (1) simulated data of N=500,000 subjects who responded to four dichotomous items, and (2) a pilot analysis of real-world data collected from a sample of approximately 770 subjects who responded to four self-report dichotomous items pertaining to emotional consequences of alcohol use. The real-world data were based on responses collected on items administered to subjects as part of a scale-development study (NIDA Grant No. R44 DA023322). The IRT analyses conducted on both the simulated data and the real-world pilot data will provide a clear demonstration of how to construct, evaluate, and compare binary IRT measurement models. All analyses will be performed using the new IRT procedure in SAS 9.4. SAS code to generate the simulated data and analyses will be available upon request to allow for replication of results.
Keywords: instrument development, item response theory, latent trait theory, psychometrics
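A minimal sketch of the four-parameter logistic (4-PL) item response function that this family of models builds up to; fixing c = 0 and d = 1 recovers the 2-PL, and additionally fixing a = 1 recovers the 1-PL. The parameter values below are illustrative, not estimates from the study:

```python
import math

def p_endorse(theta: float, a: float = 1.0, b: float = 0.0,
              c: float = 0.0, d: float = 1.0) -> float:
    """4-PL probability of a correct/endorsed response to a binary item.

    theta: latent trait level; a: discrimination; b: difficulty;
    c: lower asymptote (guessing); d: upper asymptote (slipping).
    """
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))

# An examinee slightly above average on a hard, guessable item
print(f"P = {p_endorse(theta=0.5, a=1.2, b=1.0, c=0.2, d=0.98):.3f}")  # ≈ 0.476
```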
121 Synthesis and Characterization of pH-Sensitive Graphene Quantum Dot-Loaded Metal-Organic Frameworks for Targeted Drug Delivery and Fluorescent Imaging
Authors: Sayed Maeen Badshah, Kuen-Song Lin, Abrar Hussain, Jamshid Hussain
Abstract:
Liver cancer is a significant global health issue, ranking fifth in incidence and second in mortality. Effective therapeutic strategies are urgently needed to combat this disease, particularly in regions with high prevalence. This study focuses on developing and characterizing fluorescent organometallic frameworks as distinct drug delivery carriers with potential applications in both the treatment and biological imaging of liver cancer. This work introduces two distinct organometallic frameworks: the cake-shaped GQD@NH₂-MIL-125 and the cross-shaped M8U6/FM8U6. The GQD@NH₂-MIL-125 framework is particularly noteworthy for its high fluorescence, making it an effective tool for biological imaging. X-ray diffraction (XRD) analysis revealed specific diffraction peaks at 6.81° (011), 9.76° (002), and 11.69° (121), with an additional significant peak at 26° (2θ) corresponding to the carbon material. Morphological analysis using field emission scanning electron microscopy (FE-SEM) and transmission electron microscopy (TEM) demonstrated that the framework has a front particle size of 680 nm and a side particle size of 55±5 nm. High-resolution TEM (HR-TEM) images confirmed the successful attachment of graphene quantum dots (GQDs) onto the NH₂-MIL-125 framework. Fourier-transform infrared (FT-IR) spectroscopy identified crucial functional groups within the GQD@NH₂-MIL-125 structure, including O-Ti-O metal bonds within the 500 to 700 cm⁻¹ range, and N-H and C-N bonds at 1,646 cm⁻¹ and 1,164 cm⁻¹, respectively. BET isotherm analysis further revealed a specific surface area of 338.1 m²/g and an average pore size of 46.86 nm. This framework also demonstrated UV-active properties, as identified by UV-visible spectra, and its photoluminescence (PL) spectra showed an emission peak around 430 nm when excited at 350 nm, indicating its potential as a fluorescent drug delivery carrier. In parallel, the cross-shaped M8U6/FM8U6 frameworks were synthesized and characterized using X-ray diffraction, which identified distinct peaks at 2θ = 7.4 (111), 8.5 (200), 9.2 (002), 10.8 (002), 12.1 (220), 16.7 (103), and 17.1 (400). FE-SEM, HR-TEM, and TEM analyses revealed particle sizes of 350±50 nm for M8U6 and 200±50 nm for FM8U6. These frameworks, synthesized from terephthalic acid (H₂BDC), displayed notable vibrational bonds, such as C=O at 1,650 cm⁻¹, Fe-O in MIL-88 at 520 cm⁻¹, and Zr-O in UIO-66 at 482 cm⁻¹. BET analysis showed specific surface areas of 740.1 m²/g with a pore size of 22.92 nm for M8U6 and 493.9 m²/g with a pore size of 35.44 nm for FM8U6. Extended X-ray absorption fine structure (EXAFS) spectra confirmed the stability of Ti-O bonds in the frameworks, with bond lengths of 2.026 Å for MIL-125, 1.962 Å for NH₂-MIL-125, and 1.817 Å for GQD@NH₂-MIL-125. These findings highlight the potential of these organometallic frameworks for enhanced liver cancer therapy through precise drug delivery and imaging, representing a significant advancement in nanomaterial applications in biomedical science.
Keywords: liver cancer cells, metal organic frameworks, Doxorubicin (DOX), drug release
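A small worked example applying Bragg's law, d = λ/(2 sin θ), to the 2θ peak positions quoted above; the Cu Kα wavelength is an assumption, since the abstract does not state the X-ray source:

```python
import math

WAVELENGTH = 1.5406  # Å, Cu K-alpha radiation (assumed)

def d_spacing(two_theta_deg: float) -> float:
    """Interplanar spacing in Å from a 2-theta peak position in degrees."""
    theta = math.radians(two_theta_deg / 2.0)
    return WAVELENGTH / (2.0 * math.sin(theta))

for peak, hkl in [(6.81, "(011)"), (9.76, "(002)"), (11.69, "(121)")]:
    print(f"2θ = {peak:5.2f}°  {hkl}: d ≈ {d_spacing(peak):.2f} Å")
```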
Procedia PDF Downloads 5
120 The Role of Piceatannol in Counteracting Glyceraldehyde-3-Phosphate Dehydrogenase Aggregation and Nuclear Translocation
Authors: Joanna Gerszon, Aleksandra Rodacka
Abstract:
In the pathogenesis of neurodegenerative diseases such as Alzheimer's disease and Parkinson's disease, protein and peptide aggregation processes play a vital role, contributing to the formation of intracellular and extracellular protein deposits. One of the major components of these deposits is oxidatively modified glyceraldehyde-3-phosphate dehydrogenase (GAPDH). Therefore, the purpose of this research was to answer the question of whether piceatannol, a stilbene derivative, counteracts and/or slows down oxidative stress-induced GAPDH aggregation. The study also aimed to determine whether this naturally occurring compound prevents the unfavorable nuclear translocation of GAPDH in hippocampal cells. Isothermal titration calorimetry (ITC) analysis indicated that one molecule of GAPDH can bind up to 8 molecules of piceatannol (7.3 ± 0.9). As a consequence of piceatannol binding to the enzyme, a loss of activity was observed. In parallel with GAPDH inactivation, changes in zeta potential and a loss of free thiol groups were noted. Nevertheless, the ligand-protein binding does not influence the secondary structure of GAPDH. Precise molecular docking analysis of the interactions inside the active center suggested that these effects are due to piceatannol's ability to form a covalent bond with the nucleophilic cysteine residue (Cys149), which is directly involved in the catalytic reaction. Molecular docking also showed that up to 11 ligand molecules can be bound to the dehydrogenase simultaneously. Taking the obtained data into consideration, the influence of piceatannol on the level of GAPDH aggregation induced by excessive oxidative stress was examined. The applied methods (thioflavin-T binding-dependent fluorescence as well as microscopy methods – transmission electron microscopy, Congo red staining) revealed that piceatannol significantly diminishes the level of GAPDH aggregation. Finally, studies involving a cellular model (Western blot analyses of nuclear and cytosolic fractions and confocal microscopy) indicated that piceatannol-GAPDH binding prevents GAPDH from the nuclear translocation induced by excessive oxidative stress in hippocampal cells. In consequence, it counteracts cell apoptosis. These studies demonstrate that, by binding with GAPDH, piceatannol blocks the cysteine residue and counteracts the oxidative modifications that induce oligomerization and GAPDH aggregation, and that it prevents hippocampal cells from apoptosis by retaining GAPDH in the cytoplasm. All these findings provide new insight into the role of the piceatannol-GAPDH interaction and present a potential therapeutic strategy for some neurological disorders related to GAPDH aggregation. This work was supported by the National Science Centre, Poland (grant number 2017/25/N/NZ1/02849).
Keywords: glyceraldehyde-3-phosphate dehydrogenase, neurodegenerative disease, neuroprotection, piceatannol, protein aggregation
Procedia PDF Downloads 164
119 Perception of Tactile Stimuli in Children with Autism Spectrum Disorder
Authors: Kseniya Gladun
Abstract:
Tactile stimulation of the dorsal side of the wrist can have a strong impact on our attitude toward physical objects, eliciting pleasant or unpleasant impressions. This study explored different aspects of tactile perception to investigate atypical touch sensitivity in children with autism spectrum disorder (ASD). The study included 40 children with ASD and 40 healthy children aged 5 to 9 years. We recorded rsEEG (sampling rate of 250 Hz) during 20 min using an EEG amplifier "Encephalan" (Medicom MTD, Taganrog, Russian Federation) with 19 AgCl electrodes placed according to the International 10–20 System. The electrodes placed on the left and right mastoids served as joint references under unipolar montage. EEG was registered from 19 sites: frontal (Fp1-Fp2; F3-F4), temporal anterior (T3-T4), temporal posterior (T5-T6), parietal (P3-P4), and occipital (O1-O2). Subjects were passively touched with 4 types of tactile stimuli on the left wrist, presented with a velocity of about 3–5 cm per second. The stimulus materials and procedure were chosen for being the most "pleasant," "rough," "prickly" and "recognizable": a soft cosmetic brush ("pleasant"), a rough shoe brush ("rough"), a Wartenberg pinwheel roller ("prickly"), and, as the cognitive tactile stimulation, letters traced with a finger (mostly from the patient's name; "recognizable"). To designate the moments of stimulus onset and offset, we marked when each touch began and ended; the stimulation was manual, and synchronization was not precise enough for event-related measures. EEG epochs were cleaned of eye movements by an ICA-based algorithm in the EEGLAB plugin for MatLab 7.11.0 (MathWorks Inc.). Muscle artifacts were cut out by manual data inspection. The response to tactile stimuli differed significantly between children with ASD and healthy children, and also depended on the type of tactile stimulus and the severity of ASD. Alpha rhythm amplitude increased in the parietal region only in response to the pleasant stimulus; for the other stimulus types ("rough," "prickly," "recognizable"), no amplitude difference was observed. Correlation dimension D2 was higher in healthy children compared to children with ASD (main effect, ANOVA). In the ASD group, D2 was lower for pleasant and unpleasant stimuli compared to the background in the right parietal area. Hilbert transform analysis revealed changes in the frequency of the theta rhythm only for rough tactile stimulation, compared with healthy participants, in the right parietal area. Children with autism spectrum disorders and healthy children thus responded to tactile stimulation differently, with a specific frequency distribution of the alpha and theta bands in the right parietal area. Our data therefore support the hypothesis that rsEEG may serve as a sensitive index of altered neural activity caused by ASD. Children with autism have difficulty in distinguishing emotional tactile stimuli ("pleasant," "rough," "prickly" and "recognizable").
Keywords: autism, tactile stimulation, Hilbert transform, pediatric electroencephalography
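As an illustration of the Hilbert-transform step described above, the sketch below band-passes a single EEG channel to the theta band and extracts the instantaneous amplitude and frequency from the analytic signal. The original analysis was done in MatLab/EEGLAB; this Python/SciPy version, including the filter design (4th-order Butterworth), is an assumed stand-in, with only the 250 Hz sampling rate taken from the abstract.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 250.0  # Hz, sampling rate reported in the abstract

def theta_envelope(eeg, fs=FS, band=(4.0, 8.0)):
    """Band-pass one EEG channel to the theta band and return the
    instantaneous amplitude and frequency via the analytic signal."""
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, eeg)          # zero-phase band-pass
    analytic = hilbert(filtered)            # analytic signal
    amplitude = np.abs(analytic)            # instantaneous amplitude
    # instantaneous frequency from the unwrapped phase derivative
    inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
    return amplitude, inst_freq

# Toy example: 10 s of synthetic data for one parietal channel (e.g., P4)
t = np.arange(0, 10, 1 / FS)
signal = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)
amp, freq = theta_envelope(signal)
print(amp.mean(), freq.mean())
```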
Procedia PDF Downloads 250
118 The Impact of Anxiety on the Access to Phonological Representations in Beginning Readers and Writers
Authors: Regis Pochon, Nicolas Stefaniak, Veronique Baltazart, Pamela Gobin
Abstract:
Anxiety is known to have an impact on working memory. In reasoning or memory tasks, individuals with anxiety tend to show longer response times and poorer performance. Furthermore, there is a memory bias for negative information in anxiety. Given the crucial role of working memory in lexical learning, anxious students may encounter more difficulties in learning to read and spell. Anxiety could even affect an earlier learning process, that is, the activation of phonological representations, which are decisive for learning to read and write. The aim of this study is to compare the access to phonological representations of beginning readers and writers according to their level of anxiety, using an auditory lexical decision task. Eighty students aged 6 to 9 years completed the French version of the Revised Children's Manifest Anxiety Scale and were then divided into four anxiety groups according to their total score (Low, Median-Low, Median-High and High). Two sets of eighty-one stimuli (words and non-words) were presented auditorily to these students by means of a laptop computer. Stimulus words were selected according to their emotional valence (positive, negative, neutral). Students had to decide as quickly and accurately as possible whether the presented stimulus was a real word or not (lexical decision). Response times and accuracy were recorded automatically on each trial. It was anticipated that there would be a) longer response times for the Median-High and High anxiety groups in comparison with the two other groups, b) faster response times for negative-valence words in comparison with positive- and neutral-valence words only for the Median-High and High anxiety groups, c) lower response accuracy for the Median-High and High anxiety groups in comparison with the two other groups, and d) better response accuracy for negative-valence words in comparison with positive- and neutral-valence words only for the Median-High and High anxiety groups. Concerning response times, our results showed no difference between the four groups. Furthermore, within each group, average response times were very close regardless of emotional valence. However, group differences appeared when considering the error rates. The Median-High and High anxiety groups made significantly more errors in lexical decision than the Median-Low and Low groups. Better response accuracy, however, was not found for negative-valence words in comparison with positive- and neutral-valence words in the Median-High and High anxiety groups. Thus, these results showed lower response accuracy for the above-median anxiety groups than for the below-median groups, but without specificity for negative-valence words. This study suggests that anxiety can negatively impact lexical processing in young students. Although lexical processing speed seems preserved, the accuracy of this processing may be altered in students with a moderate or high level of anxiety. This finding has important implications for the prevention of reading and spelling difficulties. Indeed, during these learnings, if anxiety affects the access to phonological representations, anxious students could be disturbed when they have to match phonological representations with new orthographic representations, because of less efficient lexical representations. This study should be continued in order to specify the impact of anxiety on basic school learning.
Keywords: anxiety, emotional valence, childhood, lexical access
Procedia PDF Downloads 286
117 The Practise of Hand Drawing as a Premier Form of Representation in Architectural Design Teaching: The Case of FAUP
Authors: Rafael Santos, Clara Pimenta Do Vale, Barbara Bogoni, Poul Henning Kirkegaard
Abstract:
In the last decades, the relevance of hand drawing has decreased in the scope of architectural education. However, some schools continue to recognize its decisive role, not only in architectural design teaching but in the whole of architectural training. This paper presents the results of research on the following problem: the practice of hand drawing as a premier form of representation in architectural design teaching. The research took as its object the educational model of the Faculty of Architecture of the University of Porto (FAUP) and was guided by three main objectives: to identify the circumstances that promoted hand drawing as a form of representation in FAUP's model; to characterize the types of hand drawing and their role in that model; and to determine the particularities of hand drawing as a premier form of representation in architectural design teaching. Methodologically, the research was conducted according to a qualitative embedded single-case study design. The object – i.e., the educational model – was approached in the FAUP case considering its Context and three embedded units of analysis: the educational Purposes, Principles and Practices. In order to guide the procedures of data collection and analysis, a Matrix for Characterization (MCC) was developed. As a methodological tool, the MCC made it possible to relate the three embedded units of analysis to the three main sources of evidence where the object manifests itself: the professors, expressing how the model is Assumed; the architectural design classes, expressing how the model is Achieved; and the students, expressing how the model is Acquired. The main research methods used were naturalistic and participatory observation, in-person interviews, and documentary and bibliographic review. The results reveal that the educational model of FAUP – following the model of the former Porto School – was largely due to the methodological foundations created with the hand drawing teaching-learning processes. In the absence of a culture of explicit theoretical elaboration or systematic research, hand drawing was the support for the continuity of the school, an expression of a unified thought about what the reflection and practice of architecture should be. As a form of representation, hand drawing plays a transversal role in the entire educational model, since its purposes are not limited to the conception of architectural design – it is also a means for perception, analysis and synthesis. Regarding architectural design teaching, there seems to be an understanding of three complementary dimensions of didactics: the instrumental, methodological and propositional dimensions. At FAUP, hand drawing is recognized as the common denominator among these dimensions, according to the idea of the "globality of drawing". It is expected that the knowledge base developed in this research may make three main contributions: to the maintenance and valorisation of FAUP's model; through the precise description of the methodological procedures, to similar studies by transferability; and, through the critical and objective framing of the problem underlying hand drawing in architectural design teaching, to the broader discussion concerning contemporary challenges in architectural education.
Keywords: architectural design teaching, architectural education, forms of representation, hand drawing
Procedia PDF Downloads 130
116 Evaluation of the Boiling Liquid Expanding Vapor Explosion Thermal Effects in Hassi R'Mel Gas Processing Plant Using Fire Dynamics Simulator
Authors: Brady Manescau, Ilyas Sellami, Khaled Chetehouna, Charles De Izarra, Rachid Nait-Said, Fati Zidani
Abstract:
During a fire in an oil and gas refinery, several thermal accidents can occur and cause serious damage to people and the environment. Among these accidents, the BLEVE (Boiling Liquid Expanding Vapor Explosion) is the most frequently observed and remains a major concern for risk decision-makers. It corresponds to a violent vaporization of explosive nature following the rupture of a vessel containing a liquid at a temperature significantly higher than its normal boiling point at atmospheric pressure. Its effects on the environment generally appear in three ways: blast overpressure, radiation from the fireball if the liquid involved is flammable, and fragment hazards. In order to estimate the potential damage that would be caused by such an explosion, risk decision-makers often use quantitative risk analysis (QRA). This analysis is a rigorous and advanced approach that requires reliable data in order to obtain a good estimate and control of risks. However, in most cases, the data used in QRA are obtained from empirical correlations. These empirical correlations generally overestimate BLEVE effects because they are based on simplifications and do not take into account real parameters such as geometry effects. Considering that these risk analyses are based on an assessment of BLEVE effects on human life and plant equipment, more precise and reliable data should be provided. From this point of view, CFD modeling of BLEVE effects appears as a solution to the limitations of the empirical laws. In this context, the main objective is to develop a numerical tool to predict BLEVE thermal effects using the CFD code FDS version 6. Simulations are carried out with a mesh size of 1 m. The fireball source is modeled as a vertical release of hot fuel over a short time. The modeling of fireball dynamics is based on single-step combustion using an EDC model coupled with the default LES turbulence model. Fireball characteristics (diameter, height, heat flux and lifetime) issued from the large-scale BAM experiment are used to demonstrate the ability of FDS to simulate the various steps of the BLEVE phenomenon from ignition up to total burnout. The influence of release parameters such as the injection rate and the radiative fraction on the fireball heat flux is also presented. Predictions are very encouraging and show good agreement with BAM experiment data. In addition, a numerical study is carried out on an operational propane accumulator in an Algerian gas processing plant of the SONATRACH company located in the Hassi R'Mel gas field (the largest gas field in Algeria).
Keywords: BLEVE effects, CFD, FDS, fireball, LES, QRA
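For comparison with the CFD predictions discussed above, QRA practitioners commonly estimate fireball size and duration from empirical correlations of the CCPS/TNO type. The sketch below uses one widely quoted set of relations (D_max = 5.8·M^(1/3); t = 0.45·M^(1/3) for M < 30,000 kg, else t = 2.6·M^(1/6)); these are generic literature values, not the correlations used in this study, and the 10-tonne release is purely illustrative.

```python
def fireball_characteristics(mass_kg):
    """Commonly quoted empirical BLEVE fireball correlations (CCPS/TNO-style).

    mass_kg: flammable mass involved in the fireball.
    Returns (maximum fireball diameter in m, fireball duration in s).
    """
    d_max = 5.8 * mass_kg ** (1.0 / 3.0)
    if mass_kg < 30000.0:
        duration = 0.45 * mass_kg ** (1.0 / 3.0)
    else:
        duration = 2.6 * mass_kg ** (1.0 / 6.0)
    return d_max, duration

# Illustrative value only: a hypothetical 10-tonne propane release
d, t = fireball_characteristics(10000.0)
print(f"D_max = {d:.0f} m, duration = {t:.1f} s")
```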
Procedia PDF Downloads 184
115 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System
Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee
Abstract:
This work demonstrates a web crawler-based, generalized, end-to-end open-domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question, with the aim of finding an exact and correct answer in the form of a number, a noun, a short phrase, or a brief piece of text for the user's question. Analyzing the question, searching the relevant documents, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web. The value of K can be calibrated on the basis of a trade-off between time and accuracy. This is followed by a passage-ranking process, using a model trained on the MS MARCO dataset of 500K queries, to extract the most relevant text passage and shorten the lengthy documents. Further, a QA system is used to extract the answers from the shortened documents based on the query and return the top 3 answers. For the evaluation of such systems, accuracy is judged by the exact match between predicted answers and gold answers. But automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date. Hence, correct answers predicted by the system are often judged incorrect according to the automated metrics. One such scenario arises from the original Google Natural Questions (GNQ) dataset, which was collected and made available in 2016. The use of any such dataset proves to be inefficient with respect to questions that have time-varying answers. For illustration, consider the query "Where will the next Olympics be?" The gold answer for this query as given in the GNQ dataset is "Tokyo". Since the dataset was collected in 2016, and the next Olympics after 2016 were the 2020 Games in Tokyo, this was absolutely correct. But if the same question is asked in 2022, then the answer is "Paris, 2024". Consequently, any evaluation based on the GNQ dataset will be incorrect. Such erroneous predictions are usually given to human evaluators for further validation, which is quite expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric using the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset was used, and the system achieved an accuracy of 78% on a test dataset comprising 100 QA pairs. This test data was automatically extracted using an analysis-based approach from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the potential to develop into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be directed towards establishing the efficacy of the above system for a larger set of time-dependent QA pairs.
Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation
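Since the abstract does not give the exact form of the proposed metric, the following is only a hypothetical sketch of how a timestamp-aware exact-match check over the top-n answers could look; the function name, the validity-window representation, and the matching rule are all assumptions introduced for illustration.

```python
from datetime import datetime

def time_aware_match(predicted_answers, gold_answers_by_period, now=None):
    """Hypothetical sketch of a time-aware exact-match check.

    gold_answers_by_period maps (start_year, end_year) validity windows to
    the answer that is correct during that window. A prediction counts as
    correct if any of the top-n answers matches the gold answer whose
    window contains the evaluation timestamp.
    """
    now = now or datetime.utcnow()
    for (start, end), gold in gold_answers_by_period.items():
        if start <= now.year < end:
            return any(gold.lower() in p.lower() for p in predicted_answers)
    return False  # no gold answer is valid at the evaluation time

# Usage: the Olympics example from the abstract
gold = {(2016, 2021): "Tokyo", (2021, 2025): "Paris"}
print(time_aware_match(["Paris, 2024", "Los Angeles"], gold,
                       now=datetime(2022, 6, 1)))  # True
```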
Procedia PDF Downloads 100
114 Analysis of Influencing Factors on Infield-Logistics: A Survey of Different Farm Types in Germany
Authors: Michael Mederle, Heinz Bernhardt
Abstract:
The management of machine fleets or autonomous vehicle control will considerably increase efficiency in future agricultural production. Entire process chains in particular, e.g. harvesting complexes with several interacting combine harvesters, grain carts, and removal trucks, offer considerable optimization potential. Organization and pre-planning make these efficiency reserves accessible. One way to achieve this is to optimize infield path planning. Autonomous machinery especially requires precise specifications of infield logistics in order to be navigated effectively and operated in a process-optimized manner in the fields, whether individually or in machine complexes. In the past, much theoretical optimization has been done regarding infield logistics, mainly based on field geometry. However, there are reasons why farmers often do not apply the infield strategy suggested by mathematical route-planning tools. To make computational optimization more useful for farmers, this study focuses on these influencing factors through expert interviews. As a result, practice-oriented navigation, not only to the field but also within the field, becomes possible. The survey study is intended to cover the entire range of German agriculture: rural mixed farms with simple technical equipment are considered, as well as large agricultural cooperatives that farm thousands of hectares using track guidance and various other electronic assistance systems. First results show that farm managers using guidance systems increasingly attune their infield logistics to direction-giving obstacles such as power lines. In consequence, they can avoid inefficient boom flips during plant protection work with the sprayer. Livestock farmers rather focus on the application of organic manure, with its specific requirements concerning road conditions, landscape terrain, or field access points. The cultivation of sugar beets makes great demands on infield patterns because of its particularities, such as the row-crop system or high logistics demands. Furthermore, several machines working in the same field simultaneously influence each other, regardless of whether or not they are of the same type. Specific infield strategies are always based on the interaction of several different influences and decision criteria. Single working steps like tillage, seeding, plant protection, or harvest mostly cannot each be considered individually; the entire production process has to be taken into consideration to determine the right infield logistics. One long-term objective of this examination is to integrate the identified influences on infield strategies as decision criteria into an infield navigation tool. In this way, path planning will become more practical for farmers, which is a basic requirement for automatic vehicle control and increased process efficiency.
Keywords: autonomous vehicle control, infield logistics, path planning, process optimizing
Procedia PDF Downloads 232
113 Doctor-Patient Interaction in an L2: Pragmatic Study of a Nigerian Experience
Authors: Ayodele James Akinola
Abstract:
This study investigated the use of English in doctor-patient interaction in a university teaching hospital in a southwestern state of Nigeria, with the aim of identifying the role of communication in an L2, patterns of communication, discourse strategies, pragmatic acts, and the contexts that shape the interaction. Jacob Mey's notion of pragmatic acts, complemented by Emanuel and Emanuel's model of the doctor-patient relationship, provided the theoretical standpoint. Data comprising 7 audio-recorded doctor-patient interactions were collected from a university hospital in Oyo State, Nigeria. Interactions involving the use of English were purposively selected. These were supplemented with patients' case notes and interviews conducted with doctors. Transcription followed a modified version of Arminen's notation for conversation analysis. In the study, interaction in English between doctors and patients showed a preponderance of direct translation, code-mixing and code-switching, Nigerianisms, and the use of cultural worldviews to express medical experience. Irrespective of these, three patterns of communication, namely the paternalistic, interpretive, and deliberative, were identified. These were exhibited through varying discourse strategies. The paternalistic model reflected slightly casual conversational conventions and registers. These were achieved through the pragmemic activities of situated speech acts and psychological and physical acts, via patients' quarrel-induced acts, controlled and managed through doctors' shared situation knowledge. All these produced the practs of empathising, pacifying, promising and instructing. The patients' practs in the paternalistic model were explaining, provoking, associating and greeting. The interpretive model revealed the use of adjacency pairs, formal turn-taking, precise detailing, institutional talk and dialogic strategies. Through the activities of speech, prosody and physical acts, the practs of declaring, alerting and informing were utilised by doctors, while the patients exploited the practs of adapting, requesting and selecting. The negotiating conversational strategy of the deliberative model featured in the speech, prosody and physical acts. In this model, the practs of suggesting, teaching, persuading and convincing were utilised by the doctors. The patients deployed the practs of questioning, demanding, considering and deciding. The contextual variables revealed that other patterns (such as the phatic and informative) are also used, and they coalesce in the hospital within the situational and psychological contexts. However, the paternalistic model was predominantly employed by doctors with over six years in practice, while the interpretive, informative and deliberative models were found among registrars and others with fewer than six years of medical practice. Doctors' experience, patients' peculiarities, and shared cultural knowledge influenced doctor-patient communication in the study.
Keywords: pragmatics, communication pattern, doctor-patient interaction, Nigerian hospital situation
Procedia PDF Downloads 178
112 Influence of Kneading Conditions on the Textural Properties of Alumina Catalysts Supports for Hydrotreating
Authors: Lucie Speyer, Vincent Lecocq, Séverine Humbert, Antoine Hugon
Abstract:
Mesoporous alumina is commonly used as a catalyst support for the hydrotreating of heavy petroleum cuts. The fabrication process usually involves the synthesis of the boehmite AlOOH precursor, a kneading-extrusion step, and a calcination in order to obtain the final alumina extrudates. Alumina is described as a complex porous medium, generally consisting of agglomerates of aggregated nanocrystallites. Its porous texture directly influences active phase deposition and mass transfer, and hence the catalytic properties. It is therefore easy to see that each step of the fabrication of the supports plays a role in the building of their porous network and has to be well understood to optimize the process. The synthesis of boehmite by precipitation of aluminum salts has been extensively studied in the literature, and parameters such as temperature or pH are known to influence the size and shape of the crystallites and the specific surface area of the support. The calcination step, through the topotactic transition from boehmite to alumina, determines the final properties of the support and can tune the surface area, pore volume and pore diameters relative to those of boehmite. However, the kneading-extrusion step has been the subject of very few studies. It generally consists of two steps: an acid, then a basic kneading, in which the boehmite powder is introduced into a mixer and successively mixed with an acid and then a base solution to form an extrudable paste. During the acid kneading, the induced positive charges on the hydroxyl surface groups of boehmite create an electrostatic repulsion which tends to separate the aggregates and even, depending on the conditions, the crystallites. The basic kneading, by reducing the surface charges, leads to a flocculation phenomenon and can control the reforming of the overall structure. The separation and reassembling of the particles constituting the boehmite paste have a quite obvious influence on the textural properties of the material. In this work, we focus on the influence of the kneading step on alumina catalyst supports. Starting from an industrial boehmite, extrudates were prepared using various kneading conditions. The samples were studied by nitrogen physisorption in order to analyze the evolution of the textural properties, and by synchrotron small-angle X-ray scattering (SAXS), a more original method which brings information about the agglomeration and aggregation state of the samples. The coupling of physisorption and SAXS enables a precise description of the samples, as well as accurate monitoring of their evolution as a function of the kneading conditions. These conditions are found to have a strong influence on the pore volume and pore size distribution of the supports. A mechanism for the evolution of the texture during the kneading step is proposed, which could be attractive for optimizing the texture of the supports and thus their catalytic performances.
Keywords: alumina catalyst support, kneading, nitrogen physisorption, small-angle X-ray scattering
Procedia PDF Downloads 252
111 A Case Study on Problems Originated from Critical Path Method Application in a Governmental Construction Project
Authors: Mohammad Lemar Zalmai, Osman Hurol Turkakin, Cemil Akcay, Ekrem Manisali
Abstract:
In public construction projects, determining the contract period in the award phase is one of the most important factors. The contract period establishes the baseline for creating the cash flow curve and progress payment planning in the post-award phase. An overestimated project duration causes losses for both the owner and the contractor. Therefore, it is essential to base construction project duration on reliable forecasting. In Turkey, schedules are usually built using the bar chart (Gantt) schedule, especially by governmental construction agencies, and the usage of these schedules is mostly limited to bidding purposes. Although the bar-chart schedule is useful in some cases, it lacks logical connections between activities; it is harder to identify the activities that affect the project's total duration more than others, especially in large, complex projects. In this study, a construction schedule is prepared with the Critical Path Method (CPM), which addresses the above-mentioned discrepancies. CPM is a simple and effective method that displays project time and critical paths, showing the results of forward and backward calculations while considering the logical relationships between activities; it is a powerful tool for planning and managing all kinds of construction projects and is a very convenient method for the construction industry. CPM provides a much more useful and precise approach than the traditional bar-chart diagrams that form the basis of construction planning and control. CPM has two main application utilities in the construction field. The first is obtaining the project duration, in what is called an as-planned schedule, which includes as-planned activity durations with relationships between subsequent activities. The second utility applies during project execution: each activity is tracked, and its duration is recorded in order to obtain the as-built schedule, which serves as the black box of the project. The latter is more useful for delay analysis and conflict resolution. These features of CPM have made it popular around the world; however, it has not yet been extensively used in Turkey. In this study, a real construction project is investigated as a case study; CPM-based scheduling is used to establish both the as-planned and as-built schedules. Problems that emerged during the construction phase are identified and categorized, and solutions are suggested. Two scenarios were considered. In the first scenario, project progress was tracked and managed with CPM, based on real-time data. In the second scenario, project progress was supposedly tracked under the assumption that a Gantt chart was used. The S-curves of the two scenarios are plotted and interpreted. Comparing the results, possible faults of the latter scenario are highlighted, and solutions are suggested. The importance of CPM implementation is emphasized, and it is proposed that preparing construction schedules based on CPM be made mandatory for public construction project contracts.
Keywords: as-built, case-study, critical path method, Turkish government sector projects
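The forward and backward calculations mentioned above are straightforward to implement. The sketch below is a minimal activity-on-node CPM pass on an illustrative four-activity network (not the case-study project); it assumes activities are supplied in topological order, and activities with zero total float lie on the critical path.

```python
def cpm(activities):
    """Minimal CPM forward/backward pass on an activity-on-node network.

    activities: dict mapping name -> (duration, [predecessor names]),
    listed in topological order. Returns name -> (early_start,
    early_finish, late_start, late_finish, total_float).
    """
    # Forward pass: early start = max of predecessors' early finishes
    es, ef = {}, {}
    for name, (dur, preds) in activities.items():
        es[name] = max((ef[p] for p in preds), default=0)
        ef[name] = es[name] + dur
    project_end = max(ef.values())
    # Backward pass: late finish = min of successors' late starts
    ls, lf = {}, {}
    for name in reversed(list(activities)):
        succs = [s for s, (_, ps) in activities.items() if name in ps]
        lf[name] = min((ls[s] for s in succs), default=project_end)
        ls[name] = lf[name] - activities[name][0]
    return {n: (es[n], ef[n], ls[n], lf[n], ls[n] - es[n]) for n in activities}

# Illustrative network (durations in days); critical path here is A-B-D
net = {"A": (3, []), "B": (5, ["A"]), "C": (2, ["A"]), "D": (4, ["B", "C"])}
for act, vals in cpm(net).items():
    print(act, vals)
```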
Procedia PDF Downloads 117
110 Precise Determination of the Residual Stress Gradient in Composite Laminates Using a Configurable Numerical-Experimental Coupling Based on the Incremental Hole Drilling Method
Authors: A. S. Ibrahim Mamane, S. Giljean, M.-J. Pac, G. L’Hostis
Abstract:
Fiber-reinforced composite laminates are particularly subject to residual stresses due to their heterogeneity and the complex chemical, mechanical and thermal mechanisms that occur during their processing. Residual stresses are now well known to cause damage accumulation, shape instability, and behavior disturbance in composite parts. Many works exist in the literature on techniques for minimizing residual stresses, mainly in thermosetting and thermoplastic composites. To study in depth the influence of processing mechanisms on the formation of residual stresses, and to minimize them by establishing a reliable correlation, it is essential to be able to measure the residual stress profile in the composite very precisely. Residual stresses are important data to consider when sizing composite parts and predicting their behavior. The incremental hole drilling method is very effective for measuring the residual stress gradient in composite laminates. This method is semi-destructive and consists of drilling a hole incrementally through the thickness of the material and measuring the relaxation strains around the hole for each increment using three strain gauges. These strains are then converted into residual stresses using a matrix of coefficients. These coefficients, called calibration coefficients, depend on the diameter of the hole and the dimensions of the gauges used. The reliability of the incremental hole drilling method depends on the accuracy with which the calibration coefficients are determined. These coefficients are calculated using a finite element model; the samples' features and the experimental conditions must be considered in the simulation. Any mismatch can lead to inadequate calibration coefficients, thus introducing errors in the residual stresses. Several calibration coefficient correction methods exist for isotropic materials, but there is a lack of information on this subject concerning composite laminates. In this work, a Python program was developed to automatically generate the appropriate finite element model. This model allowed us to perform a parametric study to assess the influence of experimental errors on the calibration coefficients. The results highlighted the sensitivity of the calibration coefficients to the considered errors and gave an order of magnitude of the precision required of the experimental device to obtain reliable measurements. On the basis of these results, improvements to the experimental device were proposed. Furthermore, a numerical method was proposed to correct the calibration coefficients for different types of materials, including thick composite parts for which the analytical approach is too complex. This method consists of taking the experimental errors into account in the simulation. Accurate measurement of the experimental errors (such as the eccentricity of the hole, the angular deviation of the gauges from their theoretical position, or errors in increment depth) is therefore necessary. The aim is to determine the residual stresses more precisely and to expand the validity domain of the incremental hole drilling technique.
Keywords: fiber reinforced composites, finite element simulation, incremental hole drilling method, numerical correction of the calibration coefficients, residual stresses
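The core inversion step named above — converting measured relaxation strains into stresses through the calibration-coefficient matrix — amounts to a least-squares solve per depth increment. The sketch below is generic, and all numerical values are hypothetical placeholders; the actual coefficients must come from the finite element model described in the abstract.

```python
import numpy as np

def stresses_from_strains(C, eps):
    """Recover residual stresses from relaxation strains via the
    calibration-coefficient matrix C (strain per unit stress), i.e.
    solve eps = C @ sigma in the least-squares sense.

    C:   (m, k) calibration matrix from the finite element model
    eps: (m,) measured relaxation strains for one depth increment
    """
    sigma, *_ = np.linalg.lstsq(C, eps, rcond=None)
    return sigma

# Hypothetical numbers for a single increment: three gauges,
# three unknown stress components (sigma_x, sigma_y, tau_xy)
C = np.array([[-1.2e-6, -0.3e-6,  0.0],
              [-0.7e-6, -0.7e-6,  0.9e-6],
              [-0.3e-6, -1.2e-6,  0.0]])   # strain per MPa (assumed)
eps = np.array([120e-6, 80e-6, 60e-6])     # measured strains (assumed)
print(stresses_from_strains(C, eps))       # stresses in MPa
```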
Procedia PDF Downloads 131
109 Embodied Neoliberalism and the Mind as Tool to Manage the Body: A Descriptive Study Applied to Young Australian Amateur Athletes
Authors: Alicia Ettlin
Abstract:
Amid the rise of neoliberalism to the leading economic policy model in Western societies in the 1980s, people have internalised a neoliberal way of thinking whereby the human body has become an entity that can, and needs to, be precisely managed through free yet rational decision-making processes. The neoliberal citizen has consequently become an entrepreneur of the self who is free, independent, rational, productive and responsible for themselves, their health and wellbeing, as well as their appearance. The focus on individuals as entrepreneurs who manage their bodies through the rationally thinking mind has, however, been increasingly criticised for viewing the social actor as 'disembodied': a detached social actor whose powerful mind governs over the passive body. On the other hand, the discourse around embodiment seeks to connect rational decision-making processes to the dominant neoliberal discourse, creating an embodied understanding that the body, just like other areas of people's lives, can and should be shaped, monitored and managed through cognitive and rational thinking. This perspective offers an understanding of the body in its connections with the social environment that reaches beyond debates around mind-body binary thinking. Hence, following this argument, body management should not be thought of as either solely guided by embodied discourses or as merely falling into a mind-body dualism, but rather, simultaneously and inseparably, as both at once. A descriptive, qualitative analysis of semi-structured in-depth interviews conducted with young Australian amateur athletes between the ages of 18 and 24 has shown that most participants are interested in measuring and managing their bodies to create self-knowledge and self-improvement. The participants connected self-improvement to weight loss, muscle gain, or simply staying fit and healthy. Self-knowledge refers to body measurements including weight, BMI or body fat percentage. Self-management and self-knowledge, which rely on one another for rational and well-thought-out decisions, are both characteristic values of the neoliberal doctrine. A neoliberal way of thinking about and looking after the body was also connected by many participants to rewarding themselves for their discipline, hard work or the achievement of specific body management goals (e.g. eating chocolate for reaching the daily step count goal). A few participants, however, showed resistance against these neoliberal values and, in particular, against the precise monitoring and management of the body with the help of self-tracking devices. Ultimately, however, it seems that most participants have internalised the dominant discourses around self-responsibility and, by association, a sense of duty to discipline their bodies in normative ways. Even those who indicated resistance against body work and body management practices that follow neoliberal thinking and measurement systems are aware of, and have internalised, the concept of the rationally operating mind that needs to, or should, decide how to look after the body in terms of both health and appearance ideals. The discussion around the collected data thereby shows that embodiment and mind/body dualism constitute two connected concepts, rather than two separate or opposing ones.
Keywords: dualism, embodiment, mind, neoliberalism
Procedia PDF Downloads 162