Search results for: applied chemistry
7819 Landslide Vulnerability Assessment in Context with Indian Himalayan
Authors: Neha Gupta
Abstract:
Landslide vulnerability is considered a crucial parameter for the assessment of landslide risk. The term vulnerability is defined as the degree of damage to elements at risk across different dimensions, i.e., physical, social, economic, and environmental. The Himalaya region is very prone to multiple hazards such as floods, forest fires, earthquakes, and landslides. The increasing fatality rates and losses of infrastructure and economic assets due to landslides in the Himalaya region call for an assessment of vulnerability. This study presents a methodology to measure three vulnerability dimensions, i.e., social, physical, and environmental vulnerability, in one framework. Such a combined assessment has rarely been carried out, and no such approach had been applied in the Indian scenario. The methodology was applied in an area of the east Sikkim Himalaya, India. The physical vulnerability comprises a building footprint layer extracted from remote sensing data and Google Earth imagery. The social vulnerability was assessed using population density based on land use. The land use map was derived from a high-resolution satellite image, and for the environmental vulnerability assessment, NDVI, forest, agricultural land, and distance from the river were assessed from remote sensing data and a DEM. The classes of social, physical, and environmental vulnerability were normalized on a scale of 0 (no loss) to 1 (total loss) to obtain a homogeneous dataset. Multi-Criteria Analysis (MCA) was then used to assign individual weights to each dimension and integrate them into one framework. The final vulnerability was further classified into four classes from very low to very high.
Keywords: landslide, multi-criteria analysis, MCA, physical vulnerability, social vulnerability
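The normalize-weight-combine step described above is compact to illustrate. A minimal Python sketch follows; the weights and class breaks are assumptions for illustration, since the abstract does not report the actual MCA weights:

```python
import numpy as np

def normalize(layer):
    """Min-max normalize a vulnerability layer to the 0 (no loss) - 1 (loss) scale."""
    return (layer - layer.min()) / (layer.max() - layer.min())

def combined_vulnerability(physical, social, environmental, weights=(0.4, 0.3, 0.3)):
    """Weighted linear combination (simple MCA) of the three normalized dimensions,
    classified into four classes: 0 = very low ... 3 = very high."""
    layers = [normalize(l) for l in (physical, social, environmental)]
    v = sum(w * l for w, l in zip(weights, layers))
    return np.digitize(v, bins=[0.25, 0.5, 0.75])
```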
Procedia PDF Downloads 301
7818 Validation of SWAT Model for Prediction of Water Yield and Water Balance: Case Study of Upstream Catchment of Jebba Dam in Nigeria
Authors: Adeniyi G. Adeogun, Bolaji F. Sule, Adebayo W. Salami, Michael O. Daramola
Abstract:
Estimation of water yield and water balance in a river catchment is critical to the sustainable management of water resources at the watershed level in any country. Therefore, in the present study, the Soil and Water Assessment Tool (SWAT) interfaced with a Geographical Information System (GIS) was applied to predict the water balance and water yield of a catchment area in Nigeria. The catchment, with an area of 12,992 km², is located upstream of the Jebba hydropower dam in the north-central part of Nigeria. Observed flow data were collected and compared with flow simulated using SWAT. The agreement between the two data sets was evaluated using statistical measures such as the Nash-Sutcliffe Efficiency (NSE) and the coefficient of determination (R²). The model output shows good agreement between observed and simulated flow, as indicated by NSE and R² values greater than 0.7 for both the calibration and validation periods. The calibrated model predicted a total of 42,733 mm of water as the water yield potential of the basin for the simulation period 1985 to 2010. This performance suggests that SWAT could be a promising tool for predicting water balance and water yield in the sustainable management of water resources, and that it could be applied to other basins in Nigeria as a decision support tool for sustainable water management.
Keywords: GIS, modeling, sensitivity analysis, SWAT, water yield, watershed level
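The two goodness-of-fit measures named above are simple to reproduce. A minimal Python sketch (the observed and simulated flows are placeholder arrays, not the study's data):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit; values below 0 mean the
    simulation is worse than simply using the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r_squared(obs, sim):
    """Coefficient of determination from the Pearson correlation."""
    return float(np.corrcoef(obs, sim)[0, 1] ** 2)

obs = np.array([12.0, 30.5, 25.1, 8.4])   # placeholder observed flows
sim = np.array([11.2, 28.9, 26.0, 9.1])   # placeholder simulated flows
print(nse(obs, sim), r_squared(obs, sim))
```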
Procedia PDF Downloads 439
7817 DWT-SATS Based Detection of Image Region Cloning
Authors: Michael Zimba
Abstract:
A duplicated image region may be subjected to a number of attacks, such as noise addition, compression, reflection, rotation, and scaling, with the intention of either merely matching it to its targeted neighborhood or preventing its detection. In this paper, we present an effective and robust method for detecting duplicated regions, including those affected by such attacks. To reduce the dimension of the image, the proposed algorithm first performs a discrete wavelet transform (DWT) of the suspicious image. Most existing copy-move image forgery (CMIF) detection algorithms operating in the DWT domain extract only the low-frequency sub-band of the DWT, leaving valuable information in the other three sub-bands; the proposed algorithm instead extracts features from all four sub-bands simultaneously. The extracted features are not only a more accurate representation of image regions but are also robust to additive noise, JPEG compression, and affine transformation. Furthermore, principal component analysis-eigenvalue decomposition (PCA-EVD) is applied to reduce the dimension of the features. The extracted features are then sorted using the computationally efficient radix sort algorithm. Finally, same affine transformation selection (SATS), a duplication verification method, is applied to detect the duplicated regions. The proposed algorithm is not only fast but also more robust to attacks than related CMIF detection algorithms. Experimental results show high detection rates.
Keywords: affine transformation, discrete wavelet transform, radix sort, SATS
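The four-sub-band feature extraction can be illustrated compactly. A hedged Python sketch using PyWavelets is shown below; the Haar wavelet and an 8-pixel block size are assumptions, and the PCA-EVD reduction, sorting, and SATS verification stages of the actual algorithm are omitted:

```python
import numpy as np
import pywt

def block_features(image, block=8):
    """Per-block feature vectors drawn from all four DWT sub-bands (cA, cH, cV, cD),
    rather than from the approximation sub-band alone."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), "haar")
    subbands = np.stack([cA, cH, cV, cD])            # shape: 4 x H/2 x W/2
    feats, positions = [], []
    h, w = cA.shape
    for i in range(h - block + 1):
        for j in range(w - block + 1):
            feats.append(subbands[:, i:i + block, j:j + block].reshape(-1))
            positions.append((i, j))
    return np.array(feats), positions                # next: PCA-EVD, sort, SATS
```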
Procedia PDF Downloads 230
7816 Analyzing Microblogs: Exploring the Psychology of Political Leanings
Authors: Meaghan Bowman
Abstract:
Microblogging has become increasingly popular for commenting on current events, spreading gossip, and encouraging individualism, which favors its low-context communication channel. These social media (SM) platforms allow users to express opinions while interacting with a wide range of populations. Hashtags allow immediate identification of like-minded individuals worldwide on a vast array of topics. The output of the analytic tool Linguistic Inquiry and Word Count (LIWC), a program that associates psychological meaning with the frequency of use of specific words, may suggest the nature of individuals' internal states and general sentiments. When applied to groupings of SM posts unified by a hashtag, such information can be helpful to community leaders during periods in which public opinion forms in parallel with the unfolding of political, economic, or social events. This is especially true when outcomes stand to impact the well-being of the group. Here, we applied the online tools Google Translate and the University of Texas's LIWC to a 90-posting sample from a corpus of Colombian Spanish microblogs. On translated disjoint sets, identified by hashtag as being authored by advocates of voting "No," advocates of voting "Yes," and entities refraining from hashtag use, we observed that LIWC's Tone feature distinguishes among the categories and that the word "peace" carries particular significance due to its frequency of use in the data.
Keywords: Colombia peace referendum, FARC, hashtags, linguistics, microblogging, social media
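The underlying idea of LIWC-style analysis, scoring text by the share of words falling into psychologically meaningful categories, can be sketched simply. The toy lexicon below is hypothetical (LIWC's dictionaries are proprietary) and stands in only to show the mechanics:

```python
import re
from collections import Counter

# Hypothetical stand-in categories; real LIWC uses large validated dictionaries.
POSITIVE = {"peace", "hope", "agree", "yes"}
NEGATIVE = {"war", "fear", "reject", "no"}

def tone(post):
    """Crude LIWC-like tone score: (positive - negative) share of all words."""
    words = re.findall(r"\w+", post.lower())
    counts = Counter(words)
    pos = sum(counts[w] for w in POSITIVE)
    neg = sum(counts[w] for w in NEGATIVE)
    return (pos - neg) / max(len(words), 1)

print(tone("We vote yes for peace, not war"))
```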
Procedia PDF Downloads 107
7815 Role of Chloride Ions on The Properties of Electrodeposited ZnO Nanostructures
Authors: L. Mentar, O. Baka, M. R. Khelladi, A. Azizi
Abstract:
Zinc oxide (ZnO), a transparent semiconductor with a wide band gap of 3.4 eV and a large exciton binding energy of 60 meV at room temperature, is one of the most promising materials for a wide range of modern applications. With the development of film growth technologies and intense recent interest in nanotechnology, several varieties of ZnO nanostructured materials have been synthesized almost exclusively by thermal evaporation methods, particularly chemical vapor deposition (CVD), which generally requires a high growth temperature above 550 °C. In contrast, wet chemistry techniques such as hydrothermal synthesis and electro-deposition are promising alternatives for synthesizing ZnO nanostructures at significantly lower temperatures (below 200 °C). In this study, the electro-deposition method was used to produce ZnO nanostructures on fluorine-doped tin oxide (FTO)-coated conducting glass substrates from a chloride bath. We present the influence of KCl concentration on the electro-deposition process and on the morphological, structural, and optical properties of the ZnO nanostructures. The deposition potentials of ZnO were determined using cyclic voltammetry. From Mott-Schottky measurements, the flat-band potential and the donor density of the ZnO nanostructures were determined. Field emission scanning electron microscopy (FESEM) images showed different sizes and morphologies of the nanostructures depending on the Cl⁻ concentration. Well-defined hexagonal grains are observed for the nanostructures deposited at 0.1 M KCl. An X-ray diffraction (XRD) study confirms the wurtzite phase of the ZnO nanostructures, with a preferred orientation along the (002) plane normal to the substrate surface. UV-visible spectra showed significant optical transmission (~80%), which decreased at low Cl⁻ concentrations. The energy band gap values have been estimated to be between 3.52 and 3.80 eV.
Keywords: Cl-, electro-deposition, FESEM, Mott-Schottky, XRD, ZnO
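The donor density and flat-band potential mentioned above follow from a linear fit of the Mott-Schottky relation, 1/C² = (2 / (e ε_r ε₀ N_D)) (E − E_fb − kT/e). A hedged Python sketch of that fit (the relative permittivity ε_r ≈ 8.5 for ZnO is an assumed literature value, not a figure from the paper, and the small kT/e term is neglected):

```python
import numpy as np

E_CHARGE = 1.602e-19   # elementary charge (C)
EPS0 = 8.854e-12       # vacuum permittivity (F/m)

def mott_schottky_fit(E, C, eps_r=8.5):
    """Donor density N_D (m^-3) and flat-band potential E_fb (V) from a linear
    fit of 1/C^2 vs applied potential E; C is the space-charge capacitance
    per unit electrode area (F/m^2)."""
    y = 1.0 / np.asarray(C, float) ** 2
    slope, intercept = np.polyfit(np.asarray(E, float), y, 1)
    n_d = 2.0 / (E_CHARGE * eps_r * EPS0 * slope)   # positive slope: n-type
    e_fb = -intercept / slope                       # kT/e term neglected
    return n_d, e_fb
```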
Procedia PDF Downloads 289
7814 Correlation Results Based on Magnetic Susceptibility Measurements by in-situ and Ex-Situ Measurements as Indicators of Environmental Changes Due to the Fertilizer Industry
Authors: Nurin Amalina Widityani, Adinda Syifa Azhari, Twin Aji Kusumagiani, Eleonora Agustine
Abstract:
Fertilizer industry activities contribute to environmental changes, which have become one of the notable problems of this era of globalization. Parameters that identify changes in the environment can be drawn from the aspects of physics, chemistry, and biology. One aspect that can be assessed quickly and efficiently to describe environmental change is physics, in particular the value of magnetic susceptibility (χ). The rock magnetism method can be used as a proxy indicator of environmental change, seen from the value of magnetic susceptibility; the method is based on magnetic susceptibility studies to measure and classify the degree of pollutant elements that cause changes in the environment. This research was conducted in the area around a fertilizer plant, with five coring points on each track, each coring point reaching a depth of 15 cm. Magnetic susceptibility measurements were performed both in-situ and ex-situ. In-situ measurements were carried out directly with the SM30 instrument by placing it on the soil surface at each measurement point to obtain the magnetic susceptibility. Ex-situ measurements were performed in the laboratory with the Bartington MS2B susceptibility meter on coring samples taken every 5 cm. The in-situ measurements show that the magnetic susceptibility at the surface varies, with the lowest values at the second and fifth points (-0.81) and the highest value at the third point (0.345). The ex-situ measurements reveal the variation of magnetic susceptibility with depth at each coring point. At a depth of 0-5 cm, the highest XLF = 494.8 (×10⁻⁸ m³/kg) is at the third point, while the lowest XLF = 187.1 (×10⁻⁸ m³/kg) is at the first. At a depth of 6-10 cm, the highest XLF is at the second point, at 832.7 (×10⁻⁸ m³/kg), while the lowest is at the first point, at 211 (×10⁻⁸ m³/kg). At a depth of 11-15 cm, the highest XLF = 857.7 (×10⁻⁸ m³/kg) is at the second point, whereas the lowest XLF = 83.3 (×10⁻⁸ m³/kg) is at the fifth point. Based on the in-situ and ex-situ measurements, the highest magnetic susceptibility values from the surface samples are at the third point.
Keywords: magnetic susceptibility, fertilizer plant, Bartington MS2B, SM30
Procedia PDF Downloads 342
7813 Determination of Optimum Conditions for the Leaching of Oxidized Copper Ores with Ammonium Nitrate
Authors: Javier Paul Montalvo Andia, Adriana Larrea Valdivia, Adolfo Pillihuaman Zambrano
Abstract:
The most common lixiviant in the leaching of copper minerals is H₂SO₄; however, the current situation requires more environmentally friendly reagents, and in certain cases reagents with lower consumption, since undesirable gangue minerals such as muscovite or kaolinite can make the process unfeasible. The present work studied the leaching of an oxidized copper mineral in an aqueous solution of ammonium nitrate in order to obtain the optimum conditions for leaching the copper contained in malachite ore from Peru. The copper ore studied comes from a deposit in southern Peru and was characterized by X-ray diffraction, inductively coupled plasma emission spectrometry (ICP-OES), and atomic absorption spectrophotometry (AAS). The experiments were carried out in a 600 mL batch reactor in which temperature, pH, ammonium nitrate concentration, particle size, and stirring speed were controlled according to the experimental plan. The sample solutions were analyzed for copper by AAS. A simulation in the HSC Chemistry 6.0 program showed that the predominance of copper compounds in a Cu-H₂O aqueous system is altered by the presence of ammonium complexes, the thermodynamically most stable compound being Cu(NH₃)₄²⁺, which predominates in the pH range from 8.5 to 10 at a temperature of 25 °C. The optimum conditions for copper leaching of the malachite ore were a stirring speed of 600 rpm, an ammonium nitrate concentration of 4 M, a particle diameter of 53 µm, and a temperature of 62 °C. These results showed that copper leaching increases with increasing ammonium concentration, stirring rate, and temperature, and with decreasing particle diameter. Finally, the recovery of copper under optimum conditions was above 80%.
Keywords: ammonium nitrate, malachite, copper oxide, leaching
Procedia PDF Downloads 189
7812 Simulation of a Three-Link, Six-Muscle Musculoskeletal Arm Activated by Hill Muscle Model
Authors: Nafiseh Ebrahimi, Amir Jafari
Abstract:
The study of humanoid characters is of great interest to researchers in the fields of robotics and biomechanics. One might want to know the forces and torques required to move a limb from an initial position to a desired destination. Inverse dynamics is a helpful method for computing the forces and torques for an articulated body limb; it enables us to know the joint torques required to rotate a link between two positions. Our goal in this study was to control a human-like articulated manipulator for the specific task of path tracking. For this purpose, the human arm was modeled as a three-link planar manipulator activated by the Hill muscle model. Applying a proportional controller, the forces and torques applied to the joints were calculated by inverse dynamics, and the joint and muscle force trajectories were then computed and presented. More precisely, the kinematics of the muscle-joint space was formulated, defining the relationship between the muscle lengths and the geometry of the links and joints. Second, the kinematics of the links was introduced to calculate the position of the end-effector in terms of the geometry. We then modeled the Hill muscle dynamics and, after calculating the joint torques, applied them to the dynamics of the three-link manipulator obtained from the inverse dynamics to compute the joint states and to find and control the location of the manipulator's end-effector. The results show that the human arm model was successfully controlled to follow the designated elliptical path precisely.
Keywords: arm manipulator, Hill muscle model, six-muscle model, three-link model
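The kinematic core of such a controller can be sketched briefly. The Python fragment below shows forward kinematics and a proportional task-space controller for a planar three-link arm, mapped to joint torques via the Jacobian transpose; the link lengths and gain are assumptions, and this sketch deliberately omits the Hill muscle dynamics and full inverse dynamics of the actual study:

```python
import numpy as np

L = np.array([0.30, 0.28, 0.15])   # assumed link lengths (m)

def fk(q):
    """End-effector (x, y) of a planar three-link arm with joint angles q."""
    a = np.cumsum(q)                       # absolute link angles
    return np.array([np.sum(L * np.cos(a)), np.sum(L * np.sin(a))])

def jacobian(q):
    a = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(a[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(a[i:]))
    return J

def control_torque(q, target, kp=40.0):
    """Proportional task-space control mapped to joint torques: tau = J^T Kp e."""
    return jacobian(q).T @ (kp * (target - fk(q)))
```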
Procedia PDF Downloads 142
7811 A Basic Modeling Approach for the 3D Protein Structure of Insulin
Authors: Daniel Zarzo Montes, Manuel Zarzo Castelló
Abstract:
Proteins play a fundamental role in biology, but their structure is complex, and it is a challenge for teachers to explain conceptually the differences between their primary, secondary, tertiary, and quaternary structures. On the other hand, there are currently many computer programs for visualizing the 3D structure of proteins, but they require advanced training and knowledge. Moreover, it is difficult to visualize the sequence of amino acids in these models and how the protein conformation is reached. Given this drawback, a simple and instructive procedure is proposed for teaching protein structure to undergraduate and graduate students. For this purpose, insulin has been chosen because it is a protein consisting of 51 amino acids, a relatively small number. The methodology consists of using plastic atom models, which are frequently used in organic chemistry and biochemistry to explain the chirality of biomolecules. For didactic purposes, when the aim is to teach the biochemical foundations of proteins, a manipulative system seems convenient, starting from the chemical structure of the amino acids. It has the advantage that the bonds between amino acids can be conveniently rotated, following the pattern marked by the 3D models. First, the 51 amino acids were modeled and then linked according to the sequence of this protein. Next, the three disulfide bonds that characterize the stability of insulin were established, and then the alpha-helix structure was formed. To reach the tertiary 3D conformation of the protein, different interactive models available on the Internet were visualized. In conclusion, the proposed methodology seems very suitable for biology and biochemistry students because they can learn the fundamentals of protein modeling by means of a manipulative procedure as a basis for understanding the functionality of proteins. This methodology would be useful for a biology or biochemistry laboratory practice at either the pre-graduate or university level.
Keywords: protein structure, 3D model, insulin, biomolecule
Procedia PDF Downloads 55
7810 A Theoretical Analysis of Air Cooling System Using Thermal Ejector under Variable Generator Pressure
Authors: Mohamed Ouzzane, Mahmoud Bady
Abstract:
In the current energy and environmental context, research is seeking clean and energy-efficient systems for the cooling industry. In this regard, the ejector represents one of the promising solutions. The thermal ejector is a passive component used for thermal compression in refrigeration and cooling systems, usually activated by waste or solar heat. The present study introduces a theoretical analysis of a cooling system that uses gas-ejector thermal compression. A theoretical model is developed and applied to the design and simulation of the ejector, as well as the whole cooling system. Besides the conservation equations of mass, energy, and momentum, the gas dynamic equations, equations of state, isentropic relations, and some appropriate assumptions are applied to simulate the flow and mixing in the ejector. This model, coupled with the equations of the other components (condenser, evaporator, pump, and generator), is used to analyze profiles of pressure and velocity (Mach number) and to evaluate the cycle cooling capacity. A FORTRAN program was developed to carry out the investigation. Properties of refrigerant R134a are calculated using real gas equations. Among the many parameters, the generator pressure is thought to be the cornerstone of the cycle and is hence considered the key parameter in this investigation. Results show that the generator pressure has a great effect on the ejector and on the whole cooling system. At high generator pressures, strong shock waves are created inside the ejector, which lead to a significant condenser pressure at the ejector exit. Additionally, at higher generator pressures, the designed system can deliver cooling capacity at high condensing pressure (hot season).
Keywords: air cooling system, refrigeration, thermal ejector, thermal compression
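The isentropic and shock relations at the heart of such a model are standard one-dimensional gas dynamics. A minimal sketch follows; the specific-heat ratio gamma ≈ 1.12 is an assumed representative value for R134a vapour, not a figure from the paper, and a full implementation would use real-gas properties as the authors do:

```python
def isentropic_ratios(mach, gamma=1.12):
    """Stagnation-to-static temperature and pressure ratios at a given Mach number."""
    t_ratio = 1.0 + 0.5 * (gamma - 1.0) * mach ** 2       # T0 / T
    p_ratio = t_ratio ** (gamma / (gamma - 1.0))          # p0 / p
    return t_ratio, p_ratio

def mach_after_normal_shock(m1, gamma=1.12):
    """Downstream Mach number across a normal shock, such as the strong shocks
    that appear in the ejector at high generator pressure."""
    num = 1.0 + 0.5 * (gamma - 1.0) * m1 ** 2
    den = gamma * m1 ** 2 - 0.5 * (gamma - 1.0)
    return (num / den) ** 0.5
```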
Procedia PDF Downloads 160
7809 Non-Linear Assessment of Chromatographic Lipophilicity and Model Ranking of Newly Synthesized Steroid Derivatives
Authors: Milica Karadzic, Lidija Jevric, Sanja Podunavac-Kuzmanovic, Strahinja Kovacevic, Anamarija Mandic, Katarina Penov Gasi, Marija Sakac, Aleksandar Okljesa, Andrea Nikolic
Abstract:
The present paper deals with the prediction of the chromatographic lipophilicity of newly synthesized steroid derivatives. The prediction was achieved using in silico generated molecular descriptors and quantitative structure-retention relationship (QSRR) methodology with an artificial neural network (ANN) approach. The chromatographic lipophilicity of the investigated compounds was expressed as the retention factor value log k. For QSRR modeling, a feedforward back-propagation ANN with a gradient descent learning algorithm was applied. The generated ANN models were ranked using the novel sum of ranking differences (SRD) method. The aim was to identify the most consistent QSRR model and to reveal the similarities or dissimilarities between the models. In this study, SRD was performed with the average log k values as references. An excellent correlation between the experimentally observed log k values and the values predicted by the ANN was obtained, with a correlation coefficient higher than 0.9890. The statistical results show that the established ANN models can be applied for the required purpose. This article is based upon work from COST Action (TD1305), supported by COST (European Cooperation in Science and Technology).
Keywords: artificial neural networks, liquid chromatography, molecular descriptors, steroids, sum of ranking differences
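A QSRR model of this kind can be prototyped quickly. The sketch below fits a small feedforward network with gradient-descent learning to placeholder descriptor data; the architecture, hyperparameters, and data are assumptions for illustration, not the authors' settings:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((30, 5))        # placeholder: 30 compounds x 5 molecular descriptors
y = rng.random(30)             # placeholder: measured retention factor log k

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(5,), solver="sgd",   # gradient-descent learning
                 learning_rate_init=0.01, max_iter=5000, random_state=0),
)
model.fit(X, y)
print("r =", np.corrcoef(y, model.predict(X))[0, 1])      # training-set correlation
```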
Procedia PDF Downloads 319
7808 Applied of LAWA Classification for Assessment of the Water by Nutrients Elements: Case Oran Sebkha Basin
Authors: Boualla Nabila
Abstract:
The increasing demand for water, whether for drinking water supply or for agricultural and industrial use, requires a very thorough hydrochemical study to better protect and manage this resource. Oran is a city with relatively poor water quality, and its growing population may put further stress on natural waters by impairing their quality. A water sampling campaign covering 55 points at different levels of the aquifer system was carried out for chemical analyses of nutrient elements. The results allowed us to approach the contamination problem using the largely uniform nationwide LAWA approach (Länderarbeitsgruppe Wasser), based on the EU CIS guidance, which was applied for the identification of pressures and impacts, allowing for easy comparison. Groundwater samples were also analyzed for physico-chemical parameters such as pH, sodium, potassium, calcium, magnesium, chloride, sulphate, carbonate, and bicarbonate. The analytical results obtained in this hydrochemical study were interpreted using the Durov diagram. Based on these representations, the anomaly of high groundwater salinity observed in the Oran Sebkha basin was explained by the high chloride concentration and by the presence of an inverse cation exchange reaction. The Durov diagram revealed that the groundwater has evolved from Ca-HCO₃ recharge water through mixing with pre-existing groundwater to give mixed waters of Mg-SO₄ and Mg-Cl types, which eventually reached a final stage of evolution represented by a Na-Cl water type.
Keywords: contamination, water quality, nutrient elements, approach LAWA, Durov diagram
Procedia PDF Downloads 276
7807 Digitalisation of the Railway Industry: Recent Advances in the Field of Dialogue Systems: Systematic Review
Authors: Andrei Nosov
Abstract:
This paper discusses the development directions of dialogue systems within the digitalisation of the railway industry, where technologies based on conversational AI are already being applied or could potentially be applied. Conversational AI is one of the most popular natural language processing (NLP) tasks, as it has great prospects for real-world applications today. At the same time, it is a challenging task, as it involves many areas of NLP that rely on complex computations and deep insights from linguistics and psychology. In this review, we focus on dialogue systems and their implementation in the railway domain. We comprehensively review state-of-the-art research on dialogue systems and analyse it from three perspectives: the type of problem to be solved, the type of model, and the type of system. In particular, from the perspective of the types of tasks to be solved, we discuss characteristics and applications; this helps in understanding how to prioritise tasks. In terms of the types of models, we give an overview that will allow researchers to become familiar with how to apply them in dialogue systems. By analysing the types of dialogue systems, we propose an unconventional approach, in contrast to colleagues who traditionally contrast goal-oriented dialogue systems with open-domain systems; our view focuses on considering retrieval-based and generative approaches. Furthermore, the work comprehensively presents evaluation methods and datasets for dialogue systems in the railway domain to pave the way for future research. Finally, some possible directions for future research are identified based on recent results.
Keywords: digitalisation, railway, dialogue systems, conversational AI, natural language processing, natural language understanding, natural language generation
Procedia PDF Downloads 63
7806 Design of Digital IIR Filter Using Opposition Learning and Artificial Bee Colony Algorithm
Authors: J. S. Dhillon, K. K. Dhaliwal
Abstract:
In almost all digital filtering applications, digital infinite impulse response (IIR) filters are preferred over finite impulse response (FIR) filters because they provide much better performance and lower computational cost, and have smaller memory requirements for similar magnitude specifications. However, digital IIR filters are generally multimodal with respect to the filter coefficients, and therefore reliable methods that can provide globally optimal solutions are required. The artificial bee colony (ABC) algorithm is one such recently introduced meta-heuristic optimization algorithm. In some cases, however, it searches the solution space insufficiently, resulting in a weak exchange of information, and hence fails to return better solutions. To overcome this deficiency, an opposition-based learning strategy is incorporated into ABC, and a modified version called the oppositional artificial bee colony (OABC) algorithm is proposed in this paper. Duplication of members is avoided during the run, which also augments the exploration ability. The developed algorithm is then applied to the design of optimal and stable digital IIR filter structures, where the design of low-pass (LP) and high-pass (HP) filters is carried out. Fuzzy theory is applied to maximize satisfaction of the minimum magnitude error and stability constraints. To check the effectiveness of OABC, the results are compared with some well-established filter design techniques, and it is observed that in most cases OABC returns better or at least comparable results.
Keywords: digital infinite impulse response filter, artificial bee colony optimization, opposition based learning, digital filter design, multi-parameter optimization
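The opposition-based learning step that distinguishes OABC from plain ABC is compact: for every candidate x in [lo, hi], the opposite point is lo + hi − x, and the fitter of each pair is kept. A hedged sketch of this step applied at initialisation (the full ABC loop and the paper's duplication-avoidance mechanism are omitted):

```python
import numpy as np

def opposition_init(pop_size, dim, lo, hi, fitness, seed=0):
    """Opposition-based initialisation: evaluate each random solution and its
    opposite (lo + hi - x), then keep the best pop_size of the combined pool.
    Minimisation is assumed."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, size=(pop_size, dim))
    pool = np.vstack([x, lo + hi - x])
    fit = np.apply_along_axis(fitness, 1, pool)
    return pool[np.argsort(fit)[:pop_size]]

# Example: 10 candidate coefficient vectors for a 4-parameter filter
pop = opposition_init(10, 4, -1.0, 1.0, fitness=lambda c: np.sum(c ** 2))
```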
Procedia PDF Downloads 478
7805 Task Validity in Neuroimaging Studies: Perspectives from Applied Linguistics
Authors: L. Freeborn
Abstract:
Recent years have seen an increasing number of neuroimaging studies related to language learning as imaging techniques such as fMRI and EEG have become more widely accessible to researchers. By using a variety of structural and functional neuroimaging techniques, these studies have already made considerable progress in terms of our understanding of neural networks and processing related to first and second language acquisition. However, the methodological designs employed in neuroimaging studies to test language learning have been questioned by applied linguists working within the field of second language acquisition (SLA). One of the major criticisms is that tasks designed to measure language learning gains rarely have a communicative function, and seldom assess learners’ ability to use the language in authentic situations. This brings the validity of many neuroimaging tasks into question. The fundamental reason why people learn a language is to communicate, and it is well-known that both first and second language proficiency are developed through meaningful social interaction. With this in mind, the SLA field is in agreement that second language acquisition and proficiency should be measured through learners’ ability to communicate in authentic real-life situations. Whilst authenticity is not always possible to achieve in a classroom environment, the importance of task authenticity should be reflected in the design of language assessments, teaching materials, and curricula. Tasks that bear little relation to how language is used in real-life situations can be considered to lack construct validity. This paper first describes the typical tasks used in neuroimaging studies to measure language gains and proficiency, then analyses to what extent these tasks can validly assess these constructs.
Keywords: neuroimaging studies, research design, second language acquisition, task validity
Procedia PDF Downloads 138
7804 Efficiency and Equity in Italian Secondary School
Authors: Giorgia Zotti
Abstract:
This research comprehensively investigates the multifaceted interplay among school performance, individual backgrounds, and regional disparities within the landscape of Italian secondary education. Leveraging data gleaned from the INVALSI 2021-2022 database, the analysis meticulously scrutinizes two fundamental distributions of educational achievement: the standardized INVALSI test scores and official grades in Italian and Mathematics, focusing specifically on final-year secondary school students in Italy. The study initially employs Data Envelopment Analysis (DEA) to assess school performance. This methodology involves constructing a production function encompassing inputs (hours spent at school) and outputs (INVALSI scores in Italian and Mathematics, along with official grades in Italian and Mathematics). The DEA approach is applied in both of its versions, traditional and conditional; the latter incorporates environmental variables such as school type, size, demographics, technological resources, and socio-economic indicators. Additionally, the analysis delves into regional disparities by leveraging the Theil index, providing insights into disparities within and between regions. Moreover, within the framework of inequality of opportunity theory, the study quantifies the inequality of opportunity in students' educational achievements. The methodology applied is the parametric approach in its ex-ante version, considering diverse circumstances such as parental education and occupation, gender, school region, birthplace, and language spoken at home. A Shapley decomposition is then applied to understand how much each circumstance affects the outcomes. The outcomes of this comprehensive investigation unveil pivotal determinants of school performance, notably highlighting the influence of school type (Liceo) and socioeconomic status. The research unveils regional disparities, elucidating instances where specific schools outperform others in official grades compared to INVALSI scores, shedding light on the intricate nature of regional educational inequalities. Furthermore, it emphasizes a heightened inequality of opportunity within the distribution of INVALSI test scores in contrast to official grades, underscoring pronounced disparities at the student level. This analysis provides insights for policymakers, educators, and stakeholders, fostering a nuanced understanding of the complexities within Italian secondary education.
Keywords: inequality, education, efficiency, DEA approach
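The Theil index used for the regional analysis decomposes exactly into within- and between-group components. A minimal Python sketch (scores are assumed to be positive; `groups` is a region label per student):

```python
import numpy as np

def theil(x):
    """Theil T index of a positive-valued outcome vector (0 = perfect equality)."""
    x = np.asarray(x, float)
    r = x / x.mean()
    return float(np.mean(r * np.log(r)))

def theil_decomposition(x, groups):
    """Total Theil T = sum(share_g * T_g) + sum(share_g * log(mean_g / mean))."""
    x, groups = np.asarray(x, float), np.asarray(groups)
    mu, total = x.mean(), x.sum()
    within = between = 0.0
    for g in np.unique(groups):
        xg = x[groups == g]
        share = xg.sum() / total          # group share of the total outcome
        within += share * theil(xg)
        between += share * np.log(xg.mean() / mu)
    return within, between
```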
Procedia PDF Downloads 75
7803 The Selective Reduction of Morita-Baylis-Hillman Adduct-Derived Ketones Using Various Ketoreductase Enzyme Preparations
Authors: Nompumelelo P. Mathebula, Roger A. Sheldon, Daniel P. Pienaar, Moira L. Bode
Abstract:
The preparation of enantiopure Morita-Baylis-Hillman (MBH) adducts remains a challenge in organic chemistry. MBH adducts are highly functionalised compounds which act as key intermediates in the preparation of compounds of medicinal importance. MBH adducts are prepared in racemic form by reacting various aldehydes and activated alkenes in the presence of DABCO. Enantiopure MBH adducts can be obtained by employing enzymatic kinetic resolution (EKR). This technique has been successfully demonstrated in our group, amongst others, using lipases in either hydrolysis or transesterification reactions. As these methods allow at most 50% yield of the desired enantiomer, our interest grew in exploring other enzymatic methods for the synthesis of enantiopure MBH adducts where, theoretically, 100% of the desired enantiomer could be obtained. Dehydrogenase enzymes can be employed on prochiral substrates to obtain optically pure compounds by reducing carbon-carbon double bonds or the carbonyl groups of ketones. Ketoreductases have historically been used to obtain enantiopure secondary alcohols on an industrial scale; they are NAD(P)H-dependent enzymes and thus require a nicotinamide cofactor. This project focuses on employing ketoreductase enzymes to selectively reduce ketones derived from MBH adducts in order to obtain these adducts in enantiopure form. Results obtained from this study will be reported. Good enantioselectivity was observed using a range of different ketoreductases; however, the reactions were complicated by the formation of an unexpected by-product, which was characterised using single-crystal X-ray crystallography. Methods to minimise by-product formation are currently being investigated.
Keywords: ketoreductase, Morita-Baylis-Hillman, selective reduction, x-ray crystallography
Procedia PDF Downloads 66
7802 Pressure-Controlled Dynamic Equations of the PFC Model: A Mathematical Formulation
Authors: Jatupon Em-Udom, Nirand Pisutha-Arnond
Abstract:
The phase-field-crystal (PFC) approach is a density-functional-type material model with atomic resolution on a diffusive timescale. Spatially, the model incorporates the periodic nature of crystal lattices and can naturally exhibit elasticity, plasticity, and crystal defects such as grain boundaries and dislocations. Temporally, the model operates on a diffusive timescale, which bypasses the need to resolve prohibitively small atomic-vibration time steps. The PFC model has been used to study many material phenomena, such as grain growth, elastic and plastic deformation, and solid-solid phase transformations. In this study, pressure-controlled dynamic equations for the PFC model were developed to simulate a single-component system under externally applied pressure; these coupled equations are important for studies of deformable systems, such as those under constant pressure. The formulation is based on non-equilibrium thermodynamics and the thermodynamics of crystalline solids. To obtain the equations, the entropy variation around the equilibrium point was derived; the resulting driving forces and fluxes around equilibrium were then obtained and rewritten in terms of conventional thermodynamic quantities. These dynamic equations differ from recently proposed equations and should provide a more rigorous description of the system dynamics under externally applied pressure.
Keywords: driving forces and flux, evolution equation, non equilibrium thermodynamics, Onsager’s reciprocal relation, phase field crystal model, thermodynamics of single-component solid
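For orientation, the standard constant-volume PFC baseline from which such pressure-controlled formulations depart can be written as follows (a common dimensionless form from the literature; the paper's coupled pressure-controlled equations extend this and are not reproduced here):

```latex
F[n] \;=\; \int \left[ \frac{n}{2}\,\Bigl(r + \bigl(1 + \nabla^2\bigr)^{2}\Bigr)\, n \;+\; \frac{n^{4}}{4} \right] \mathrm{d}\mathbf{r},
\qquad
\frac{\partial n}{\partial t} \;=\; \nabla^{2} \frac{\delta F}{\delta n},
```

where n is the scaled atomic density field and r is a temperature-like control parameter; the conserved gradient-flow dynamics drive the density toward a periodic (crystalline) or uniform (liquid) minimiser of F.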
Procedia PDF Downloads 305
7801 Testing Nature Based Solutions for Air Quality Improvement: Aveiro Case Study
Authors: A. Ascenso, C. Silveira, B. Augusto, S. Rafael, S. Coelho, J. Ferreira, A. Monteiro, P. Roebeling, A. I. Miranda
Abstract:
Innovative nature-based solutions (NBSs) can provide answers to the challenges that urban areas currently face due to urban densification and extreme weather conditions. The effects of NBSs are well recognized and include improved quality of life, better mental and physical health, and improved air quality, among others. Part of the work developed within the scope of the UNaLab project, which aims to guide cities in developing and implementing their own co-created NBSs, intends to assess the impacts of NBSs on air quality, using the city of Eindhoven as a case study. The state-of-the-art online air quality modelling system WRF-CHEM was applied to simulate meteorological and concentration fields over the study area with a spatial resolution of 1 km² for the year 2015. The baseline simulation (without NBSs) was validated by comparing the model results with monitoring data retrieved from the Eindhoven air quality database, showing adequate model performance. In addition, land use changes were applied in a set of simulations to assess the effects of different types of NBSs. Finally, these simulations were compared with the baseline scenario, and the impacts of the NBSs were assessed. Reductions in pollutant concentrations, namely NOx and PM, were found after the application of the NBSs in the Eindhoven study area. The present work is particularly important for supporting public planners and decision makers in understanding the effects of their actions and in planning more sustainable cities for the future.
Keywords: air quality, modelling approach, nature based solutions, urban area
Procedia PDF Downloads 238
7800 Assessment of Mediation of Community-Based Disputes in Selected Barangays of Batangas City
Authors: Daisyree S. Arrieta
Abstract:
The purpose of this study was to assess the mediation process applied to community-based disputes in selected barangays of Batangas City, namely Barangay Sta. Rita Karsada, Barangay Bolbok, and Barangay Alangilan. The researcher initially speculated that the procedures required under Republic Act No. 7160 were not religiously followed and satisfied by the Lupong Tagapamayapa members in most of the barangays in the subject locality, which prompted this investigation. In this study, the subject barangays and their Lupon members still resorted to mediation processes to amicably settle conflicts among community members. It can also be appreciated that the Lupong Tagapamayapa members are aware of the purpose and processes required in the mediation of cases brought before them. However, the manner in which they conduct these mediation processes seems to depend on the general characteristics of their respective barangays and of the people situated therein. It is also very noticeable that the strategies applied by the Lupon members depend on the ways and means by which the parties in dispute may arrive at agreements and conciliations. The researcher concludes that the Lupong Tagapamayapa members in Barangay Sta. Rita Karsada, Barangay Bolbok, and Barangay Alangilan are aware of and are applying the objectives and procedures of mediation. Also, the success or failure of the mediation processes applied by the Lupong Tagapamayapa members to community-based disputes brought before them is generally attributable to the attitude and perspective of the parties in dispute towards the entire process of mediation, and not to the capacity or capability of the Lupon members to bring them to amicable settlement. In view of the above, the researcher humbly recommends the following: (1) that the composition of the Lupong Tagapamayapa should include individuals from various sectors of the barangay; (2) that the Lupong Tagapamayapa members should undergo various trainings that may enhance their capability to mediate any type of community-based dispute, at the expense of the barangay fund or budget; (3) that the Punong Barangay and the Sangguniang Pambarangay, at their own discretion, should allocate a budget that consistently provides regular honoraria for the Lupong Tagapamayapa members; (4) that the Punong Barangay and the Sangguniang Pambarangay should provide an ideal venue for the hearing of community-based disputes; (5) that the City/Municipal Governments should allocate the necessary financial assistance to the barangays under their jurisdiction for honing eligible Lupong Tagapamayapa members; and (6) that the Punong Barangay and other officials should initiate a series of information campaigns so that their constituents are informed of the objectives, advantages, and procedures of mediation.
Keywords: amicable settlement, community-based disputes, dispute resolution, mediation
Procedia PDF Downloads 380
7799 Thermally Stable Nanocrystalline Aluminum Alloys Processed by Mechanical Alloying and High Frequency Induction Heat Sintering
Authors: Hany R. Ammar, Khalil A. Khalil, El-Sayed M. Sherif
Abstract:
As-received metal powders were used to synthesize bulk nanocrystalline Al, Al-10%Cu, and Al-10%Cu-5%Ti alloys using mechanical alloying and high frequency induction heat sintering (HFIHS). The current study investigated the influence of milling time and ball-to-powder weight ratio (BPR) on the microstructural constituents and mechanical properties of the processed materials. Powder consolidation was carried out using high frequency induction heat sintering, in which the processed metal powders were sintered into a dense and strong bulk material. The sintering conditions applied in this process were as follows: heating rate of 350 °C/min, sintering time of 4 minutes, sintering temperature of 400 °C, applied pressure of 750 kgf/cm² (100 MPa), and cooling rate of 400 °C/min; the process was carried out under a vacuum of 10⁻³ Torr. The powders and the bulk samples were characterized using XRD and FEGSEM techniques. The mechanical properties were evaluated at temperatures of 25 °C, 100 °C, 200 °C, 300 °C, and 400 °C to study the thermal stability of the processed alloys. The bulk nanocrystalline Al, Al-10%Cu, and Al-10%Cu-5%Ti alloys displayed extremely high hardness values, even at elevated temperatures. The Al-10%Cu-5%Ti alloy displayed the highest hardness values at room and elevated temperatures, which is related to the presence of Ti-containing phases such as Al₃Ti and AlCu₂Ti; these phases are thermally stable and retain high hardness at elevated temperatures up to 400 °C.
Keywords: nanocrystalline aluminum alloys, mechanical alloying, hardness, elevated temperatures
Procedia PDF Downloads 454
7798 Fluoranthene Removal in Wastewater Using Biological and Physico-Chemical Methods
Authors: Angelica Salmeron Alcocer, Deifilia Ahuatzi Chacon, Felipe Rodriguez Casasola
Abstract:
Polycyclic aromatic hydrocarbons (PAHs) are produced both naturally (forest fires, volcanic eruptions) and by human activity (burning of fossil fuels). Concern about PAHs is due to their toxic, mutagenic, and carcinogenic effects, which pose a potential risk to human health and ecology. They are considered the most toxic components of oil and are highly hydrophobic, so they are easily deposited in soil, air, and water. One method of removing PAHs from contaminated soil uses surfactants such as Tween 80, which has been reported to be less toxic and to increase PAH solubility more than other surfactants. Fluoranthene is a PAH with the molecular formula C₁₆H₁₀; its name derives from the fluorescence it presents under UV light. In this paper, a study of the removal of fluoranthene solubilized with Tween 80 in synthetic wastewater was performed using a microbial community (isolated from soil of coffee plantations in the state of Veracruz, Mexico) and the Fenton oxidation method. The microbial community was able to use both Tween 80 and fluoranthene as carbon sources for growth. When the biological treatment was applied in batch culture, 100% of the fluoranthene was mineralized; however, this only occurred at an initial concentration of 100 ppm. As the initial fluoranthene concentration increases, removal efficiencies decay and the degradation time increases due to the accumulation of by-products that are more toxic or less biodegradable. However, when Fenton oxidation was applied prior to the biological treatment, fluoranthene removal improved, with the compound consumed approximately 2.4 times faster.
Keywords: fluoranthene, polycyclic aromatic hydrocarbons, biological treatment, fenton oxidation
Procedia PDF Downloads 239
7797 Carbon Blacks: A Broad Type of Carbon Materials with Different Electrocatalytic Activity to Produce H₂O₂
Authors: Alvaro Ramírez, Martín Muñoz-Morales, Ester López- Fernández, Javier Llanos, C. Ania
Abstract:
Carbon blacks are value-added materials typically produced through the incomplete combustion or thermal decomposition of hydrocarbons. Traditionally, they have been used as catalysts in many different applications, but in the last decade their potential in green chemistry has gained significant attention. Among these applications, the electrochemical production of H₂O₂ has attracted interest because of its high oxidant capacity and its industrial relevance as a bleaching agent. Carbon blacks are commonly used in this application in a catalytic ink that is drop-cast onto supporting electrodes and acts as the catalyst for the electrochemical production of H₂O₂ through the oxygen reduction reaction (ORR). However, the different structural and electrochemical behaviours of each type of carbon black influence its applications. In this line, the term 'carbon black' has to be considered a generic name that does not guarantee any particular physicochemical properties unless a further description is given. In fact, the specific surface area (SSA), surface functional groups, porous structure, and electrocatalytic effects seem very important for electrochemical applications, and considerable differences were found during the analysis of four types of carbon blacks. Thus, the aim of this work is to evaluate the influence of SSA, porous structure, oxygen functional groups, and structural defects in order to differentiate among these carbon blacks (e.g., Vulcan XC72, Superior Graphite Co, Printex XE2, and Prolabo) for H₂O₂ production via the ORR, using carbon paper as the electrode support, with improved selectivity and efficiency. The results indicate that the number and size of pores, along with the surface functional groups, are key parameters that significantly affect the overall process efficiency.
Keywords: carbon blacks, oxygen reduction reaction, hydrogen peroxide, porosity, surface functional groups
Procedia PDF Downloads 44
7796 Enhancement of Natural Convection Heat Transfer within Closed Enclosure Using Parallel Fins
Authors: F. A. Gdhaidh, K. Hussain, H. S. Qi
Abstract:
A 3D numerical study of natural convection heat transfer in a water-filled cavity was carried out for a single-phase liquid cooling system using an array of parallel plate fins mounted on one wall of the cavity. The heat source, representing a computer CPU with dimensions of 37.5×37.5 mm, is mounted on a substrate, and a cold plate installed on the opposite vertical end of the enclosure is used as the heat sink. The air flow inside the computer case is created by an exhaust fan; a turbulent air flow is assumed, and the k-ε model is applied. The fins are installed on the substrate to enhance the heat transfer. The applied power ranges between 15 and 40 W. To determine the thermal behaviour of the cooling system, the effects of the heat input and the number of parallel plate fins were investigated. The results illustrate that as the fin number increases, the maximum heat source temperature decreases. However, when the fin number increases beyond a critical value, the temperature starts to rise because the fins are too closely spaced, which obstructs the water flow. The introduction of parallel plate fins reduces the maximum heat source temperature by 10% compared to the case without fins. The cooling system maintains the maximum chip temperature at 64.68 °C at a heat input of 40 W, which is much lower than the recommended chip limit temperature of 85 °C, and hence the performance of the CPU is enhanced.
Keywords: chips limit temperature, closed enclosure, natural convection, parallel plate, single phase liquid
Procedia PDF Downloads 265
7795 Alleviation of Salt Stress Effects on Solanum lycopersicum (L.) Plants Grown in a Saline Soil by Foliar Spray with Salicylic Acid
Authors: Saad Howladar
Abstract:
Salinity stress is one of the major abiotic stresses restricting plant growth and crop productivity in different regions of the world, especially in arid and semi-arid regions, including Saudi Arabia. The tomato plant has proven to be moderately sensitive to salt stress. Therefore, two field experiments were conducted using tomato plants (Hybrid 6130) to evaluate the effect of four concentrations of salicylic acid (SA; 0, 20, 40, and 60 µM), applied as a foliar spray, in improving plant tolerance of saline soil conditions. Tomato plant growth, yield, osmoprotectants, chlorophyll fluorescence, and ionic contents were determined. The results of this study showed that the growth and yield components and physiological attributes of water-sprayed plants (the control) grown under saline soil conditions were negatively impacted. However, under the adverse conditions of salinity, SA-treated plants showed enhanced growth and yield components compared to the control. Free proline, soluble sugars, chlorophyll fluorescence, relative water content, membrane stability index, and nutrient contents (e.g., N, P, K⁺, and Ca²⁺) were also significantly improved, while Na⁺ content was significantly reduced in SA-treated tomato plants. SA at 40 µM was the best treatment, which could be recommended for salt-stressed tomato plants to enable them to tolerate the adverse conditions of saline soils.
Keywords: tomatoes, salt stress, chlorophyll fluorescence, dehydration tolerance, osmoprotectants
Procedia PDF Downloads 110
7794 Emptiness Downlink and Uplink Proposal Using Space-Time Equation Interpretation
Authors: Preecha Yupapin and Somnath
Abstract:
From the emptiness, vibration induces the fractal, and the strings are formed, from which the first elementary particle groups, known as quarks, were established. The neutrino and the electron are created by them, and more elementary particles and life are formed by organic and inorganic substances. The universe is constructed in this way, from which the multi-universe has formed in the same manner. The model assumes that intense energy has escaped from the singularity cone of the multi-universes. Initially, the single mass-energy is confined, after which it is disturbed by the space-time distortion. It splits into an entangled pair, from which circular motion is established. We consider one side of the entangled pair, where the fusion energy of the strong coupling force has formed. The growth of the fusion energy exhibits quantum physics phenomena, in which the particle moves along the circumference at a speed faster than light. This introduces the wave-particle duality aspect, which saturates at the stopping point. It re-runs again and again without limitation, which can be interpreted as the universe being created and expanding. The Bose-Einstein condensate (BEC) is released through the singularity by the wormhole, where it condenses to become a mass comparable in size to the Sun, which then orbits the Sun. The uncertainty principle is applied, whereby breath control follows the uncertainty condition ΔpΔx = ΔEΔt ~ ℏ. The flow of air in and out of the body via the nose applies momentum and energy control with respect to movement and time, the target being that the distortion of space-time will have vanished. Finally, the body is clean and can go to the next procedure, where the mind can escape from the body at the speed of light. However, the borderline between contemplation and being an Arahant is a vacuum, which will be explained.
Keywords: space-time, relativity, enlightenment, emptiness
Procedia PDF Downloads 67
7793 Healthcare Social Entrepreneurship: A Positive Theory Applied to the Case of YOU Foundation in Nepal
Authors: Simone Rondelli, Damiano Rondelli, Bishesh Poudyal, Juan Jose Cabrera-Lazarini
Abstract:
One of the main obstacles for Social Entrepreneurship is finding a business model that is financially sustainable; in other words, one in which the captured value generates enough cash flow to ensure business continuity and reinvestment for growth. Providing health services in poor countries to uninsured populations affected by high-cost chronic diseases is no exception to this challenge. As a prime example, cancer has become a high-impact global disease, not only because of its high morbidity but also because of its financial impact on both patients' families and health services in underdeveloped countries. Therefore, it is relevant to find a Social Entrepreneurship model that provides affordable treatment for this disease while maintaining healthy finances, not only for the patient but also for the organization providing the treatment. Using the methodology of Constructive Research, this paper applied a Positive Theory and four business models of Social Entrepreneurship to the case of a Private Foundation whose mission is to address the challenge described above. It was found that the Foundation analyzed in this case is organized as an Embedded Business Model and complies with the four propositions of the Positive Theory considered. It is recommended that this Private Foundation explore implementing the Integrated Business Model to ensure more robust sustainability in the long term, as it evolves into a scalable model that can attract investors interested in contributing to expanding this initiative globally.
Keywords: affordable treatment, global healthcare, social entrepreneurship theory, sustainable business model
Procedia PDF Downloads 145
7792 A New Method Separating Relevant Features from Irrelevant Ones Using Fuzzy and OWA Operator Techniques
Authors: Imed Feki, Faouzi Msahli
Abstract:
The selection of relevant parameters from a high-dimensional process operation setting space is a problem frequently encountered in industrial process modelling. This paper presents a method for selecting the most relevant fabric physical parameters for each sensory quality feature. The proposed relevancy criterion has been developed using two approaches. The first utilizes a fuzzy sensitivity criterion, exploiting from experimental data the relationship between the physical parameters and all the sensory quality features for each evaluator; an OWA aggregation procedure is then applied to aggregate the ranking lists provided by the different evaluators. In the second approach, another panel of experts provides ranking lists of physical features according to their professional knowledge. By applying OWA and a fuzzy aggregation model, the data-sensitivity-based ranking list and the knowledge-based ranking list are combined using our proposed percolation technique to determine the final ranking list. The key issue of the proposed percolation technique is to filter the relevant features automatically and objectively by creating a gap between the scores of relevant and irrelevant parameters. It permits the automatic generation of thresholds, which can effectively reduce the human subjectivity and arbitrariness of manually chosen thresholds. For a specific sensory descriptor, the threshold is defined systematically by iteratively aggregating (n times) the ranking lists generated by the OWA and fuzzy models, according to a specific algorithm. Having applied the percolation technique to a real example of a well-known finished textile product, stonewashed denim, usually considered the most important quality criterion in jeans evaluation, we separated the relevant physical features from the irrelevant ones for each sensory descriptor. The originality and performance of the proposed relevant feature selection method are shown by the variability in the number of physical features in the set of selected relevant parameters: instead of selecting identical numbers of features with a predefined threshold, the proposed method can adapt to the specific nature of the complex relations between sensory descriptors and physical features, proposing lists of relevant features of different sizes for different descriptors. To obtain more reliable results for the selection of relevant physical features, the percolation technique was applied to combine the fuzzy global relevancy and OWA global relevancy criteria in order to clearly distinguish the scores of the relevant physical features from those of the irrelevant ones.
Keywords: data sensitivity, feature selection, fuzzy logic, OWA operators, percolation technique
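The OWA aggregation used to fuse the evaluators' ranking lists is straightforward to illustrate: the weights apply to rank positions of the sorted scores, not to particular evaluators. A minimal sketch (the scores and weight vector are invented for illustration):

```python
import numpy as np

def owa(scores, weights):
    """Ordered weighted averaging: sort the scores in descending order and take
    the weighted sum; weights attach to rank positions, not to criteria."""
    ordered = np.sort(np.asarray(scores, float))[::-1]
    weights = np.asarray(weights, float)
    assert np.isclose(weights.sum(), 1.0)
    return float(ordered @ weights)

# Three evaluators' relevancy scores for one physical parameter
print(owa([0.9, 0.4, 0.7], [0.5, 0.3, 0.2]))   # 0.9*0.5 + 0.7*0.3 + 0.4*0.2 = 0.74
```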
Procedia PDF Downloads 605
7791 Festival Gamification: Conceptualization and Scale Development
Authors: Liu Chyong-Ru, Wang Yao-Chin, Huang Wen-Shiung, Tang Wan-Ching
Abstract:
Although gamification has attracted attention and been applied in the tourism industry, little literature on it can be found in tourism academia. To contribute knowledge on festival gamification, it is therefore essential to start by establishing a Festival Gamification Scale (FGS). This study defines festival gamification as the extent to which a festival involves game elements and game mechanisms. Based on self-determination theory, this study developed an FGS using a multi-study method. In study one, five FGS dimensions were derived from a literature review, followed by twelve in-depth interviews; a total of 296 statements were extracted from the interviews and later narrowed down to 33 items under six dimensions. In study two, 226 survey responses were collected at a cycling festival for exploratory factor analysis, resulting in twenty items under five dimensions. In study three, 253 survey responses were obtained at a marathon festival for confirmatory factor analysis, resulting in the final sixteen items under five dimensions; results of criterion-related validity testing then confirmed the positive effects of these five dimensions on flow experience. In study four, to examine the model extension of the developed five-dimensional, 16-item FGS, whose dimensions are relatedness, mastery, competence, fun, and narratives, a cross-validation analysis was performed using 219 survey responses from a religious festival. For the tourism academy, the FGS could be further applied in other sub-fields such as destinations, theme parks, cruise trips, or resorts. The FGS serves as a starting point for examining the mechanism by which festival gamification changes tourists' attitudes and behaviors. Future studies could follow up on the FGS by testing outcomes of festival gamification or examining moderating effects that enhance those outcomes. On the other hand, although the FGS has been tested at cycling, marathon, and religious festivals, all of the research settings are in Taiwan; cultural differences in the FGS are a further direction for contributing knowledge on festival gamification. This study also offers several valuable practical implications. First, the FGS could be used in tourist surveys to evaluate the extent of a festival's gamification. Based on the results of an FGS performance assessment, festival management organizations and festival planners could learn the relative scores across FGS dimensions and plan future improvements to the festival's gamification. Second, the FGS could be applied in positioning a gamified festival: festival management organizations and planners could first consider the features and type of their festival, and then gamify it by investing resources in the key FGS dimensions.
Keywords: festival gamification, festival tourism, scale development, self-determination theory
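As a flavor of the exploratory step in this kind of scale development, below is a minimal Python sketch of pruning survey items by their factor loadings. It uses scikit-learn's FactorAnalysis rather than whatever software the authors used, the response matrix is random noise standing in for real survey data, and the 0.4 loading cutoff is a common rule of thumb, not a value taken from the study.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Invented stand-in data: 226 respondents x 33 Likert items,
# mirroring the sample sizes mentioned in the abstract.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(226, 33)).astype(float)

fa = FactorAnalysis(n_components=5)  # five hypothesised FGS dimensions
fa.fit(responses)
loadings = fa.components_.T          # shape: (items, factors)

# Keep an item only if its strongest absolute loading clears the cutoff.
keep = np.abs(loadings).max(axis=1) >= 0.4
print(f"{int(keep.sum())} of {responses.shape[1]} items retained")
```

With pure noise, few or no items survive the cutoff; with real responses, items that load cleanly on one dimension are kept and the rest are dropped, which is how 33 candidate items could shrink toward a final sixteen.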
Procedia PDF Downloads 147
7790 The Existential in a Practical Phenomenology Research: A Study on the Political Participation of Young Women
Authors: Amanda Aliende da Matta, Maria del Pilar Fogueiras Bertomeu, Valeria de Ormaechea Otalora, Maria Paz Sandin Esteban, Miriam Comet Donoso
Abstract:
This communication presents questions about the existentials in research on the political participation of young women. The study follows a qualitative methodology, in particular the applied hermeneutic phenomenology (AHP) method, and the general objective of the research is to give an account of the experience of political participation as a young woman. The study participants are women aged 18 to 35 who have experience of political participation. The data collection techniques are the descriptive story and the phenomenological interview. Hermeneutic phenomenology as a research approach is based on phenomenological philosophy and applied hermeneutics; its ultimate objective is to gain access to the meaning structures of lived experience by appropriating them, clarifying them, and reflectively making them explicit. Human experiences are always lived through existentials: fundamental themes that are useful in exploring meaningful aspects of our lifeworlds. Everyone experiences the world through the existentials of lived relationships, the lived body, lived space, lived time, and lived things, so phenomenological research also tacitly asks about them. Four main existentials prove especially helpful as guides for reflection in the research process: relationship, body, space, and time. In our case, for example, we may ask how these four existentials can guide us in exploring the structures of meaning in the lived experience of political participation as a young woman. The study is not yet finished, as we are currently conducting a phenomenological thematic analysis of the collected stories of lived experience, but we have already identified some text fragments that show the existentials in the participants' experiences, transcribed below. 1) Relationality, the experienced I-Other: how relationships are experienced in narratives about political participation as a young woman. One example: "As we had known each other for a long time, we understood each other with our eyes; we were all a little bit on the same page, thinking the same thing." 2) Corporeality, the lived body: how the lived body is experienced in activities of political participation as a young woman. Examples: "My blood was boiling, but it was not the time to throw anything in their face; we had to look for solutions."; "I had a lump in my throat and I wanted to cry." 3) Spatiality, the lived space: how lived space is experienced in activities of political participation as a young woman. One example: "And the feeling I got when I saw [it] was like watching everybody going into a mousetrap." 4) Temporality, lived time: how lived time is experienced in activities of political participation as a young woman. One example: "Then, there were also meetings that went on forever…"
Keywords: applied hermeneutic phenomenology, existentials, hermeneutics, phenomenology, political participation
Procedia PDF Downloads 94