Search results for: nonlinear phenomena
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2326

136 Nondestructive Inspection of Reagents under High Attenuated Cardboard Box Using Injection-Seeded THz-Wave Parametric Generator

Authors: Shin Yoneda, Mikiya Kato, Kosuke Murate, Kodo Kawase

Abstract:

In recent years, there have been numerous attempts to smuggle narcotic drugs and chemicals by concealing them in international mail. Combatting this requires a non-destructive technique that can identify such illicit substances in mail. Terahertz (THz) waves can pass through a wide variety of materials, and many chemicals show specific frequency-dependent absorption, known as a spectral fingerprint, in the THz range. It is therefore reasonable to investigate non-destructive mail inspection techniques that use THz waves. In this work, we attempted to identify reagents under highly attenuating shielding materials using an injection-seeded THz-wave parametric generator (is-TPG). Our THz spectroscopic imaging system based on the is-TPG consisted of two nonlinear crystals for the emission and detection of THz waves. A micro-chip Nd:YAG laser and a continuous-wave tunable external-cavity diode laser were used as the pump and seed sources, respectively. The pump and seed beams were injected into the LiNbO₃ crystal under the noncollinear phase-matching condition to generate a high-power THz wave. The emitted THz wave irradiated the sample, which was raster-scanned by an x-z stage while the frequency was varied, yielding multispectral images. The transmitted THz wave was then focused onto another crystal for detection and up-converted to a near-infrared detection beam through nonlinear optical parametric effects, and the detection-beam intensity was measured with an infrared pyroelectric detector. Identifying reagents inside a cardboard box was initially difficult because of high noise levels. In this work, we therefore introduce improvements for noise reduction and image clarification: the intensity of the near-infrared detection beam was correctly converted to the intensity of the THz wave, and a Gaussian spatial filter was applied for a clearer THz image. Through these improved analysis methods, we succeeded in identifying reagents hidden in a 42-mm-thick cardboard box filled with several obstacles, which attenuate the signal by 56 dB at 1.3 THz. Using this system, THz spectroscopic imaging was possible for saccharides and may also be applied to cases where illicit drugs are hidden in the box or multiple reagents are mixed together. Moreover, THz spectroscopic imaging could be achieved through even thicker obstacles by introducing an NIR detector with higher sensitivity.
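The post-processing step mentioned above, converting the detection-beam readings into THz intensities and then applying a Gaussian spatial filter, can be sketched as follows. The array layout and the filter width are illustrative assumptions, not values taken from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_multispectral_images(thz_images, sigma_px=1.5):
    """Apply a Gaussian spatial filter to each frequency slice of a THz image stack.

    thz_images : array of shape (n_freq, ny, nx) holding the THz intensity recovered
                 from the detection beam at each raster position and frequency
    sigma_px   : filter width in pixels (illustrative value)
    """
    return np.stack([gaussian_filter(img, sigma=sigma_px) for img in thz_images])
```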

Keywords: nondestructive inspection, principal component analysis, terahertz parametric source, THz spectroscopic imaging

Procedia PDF Downloads 156
135 Associated Problems with the Open Dump Site and Its Possible Solutions

Authors: Pangkaj Kumar Mahanta, Md. Rafizul Islam

Abstract:

The rapid growth of the population causes a substantial increase in household waste all over the world, and waste management is becoming one of the most challenging problems of the present day. The most environmentally friendly final disposal process for waste is sanitary landfilling, which is practiced in most developing countries. However, in Southeast Asia, most final disposal points are open dump sites. Owing to the lack of proper waste management and monitoring, the surrounding environment is polluted more by an open dump site than by a sanitary landfill. Khulna is the third-largest metropolitan city in Bangladesh, with a population of around 1.5 million, and produces approximately 450 tons of municipal solid waste per day. The municipal solid waste of Khulna city is disposed of at the Rajbandh open dump site. The surrounding air is polluted by the gas produced at the open dump site. The site also produces leachate, which contains various heavy metals such as cadmium (Cd), chromium (Cr), lead (Pb), manganese (Mn), mercury (Hg), and strontium (Sr). Leachate pollutes the soil and the groundwater of the open dump site, as well as the surrounding area, through seepage. Moreover, during the rainy season, surface water is polluted by leachate runoff, and plastic waste flowing out of the open dump site through various pathways pollutes the nearby environment. The health risk assessment associated with heavy metals was carried out by computing the chronic daily intake (CDI), hazard quotient (HQ), and hazard index (HI) via different exposure pathways following the USEPA guidelines. For ecological risk, the potential contamination index (Cp), contamination factor (CF), pollution load index (PLI), numerical integrated contamination factor (NICF), enrichment factor (EF), ecological risk index (ER), and potential ecological risk index (PERI) were computed. The health and ecological risk assessment results reveal that some heavy metals pose strong health and ecological risks. In addition, children face higher health risks from several heavy metals than adults for all exposure pathways and media. Converting the open dump site into a sanitary landfill with a proper management system can reduce the problems associated with an open dump site. In a sanitary landfill, the produced gas is managed properly to protect the surrounding atmosphere from pollution. The seepage of leachate can be minimized by installing a compacted clay layer (CCL) as a base liner and by collecting the leachate, protecting the underlying soil and the surrounding water bodies. Another important component of a sanitary landfill is the conversion of plastic waste to energy, which minimizes plastic pollution in the landfill area and in the surrounding soil and water bodies. In addition, bio-waste can be composted to reduce its volume and to make proper use of the landfill area.
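The non-carcinogenic indices cited above follow the generic USEPA forms CDI = (C x IR x EF x ED) / (BW x AT), HQ = CDI / RfD, and HI = sum of HQs over metals and pathways. A minimal sketch for a single ingestion pathway is given below; every numerical value is a placeholder, not data from the study:

```python
def chronic_daily_intake(conc, intake_rate, exp_freq, exp_duration, body_weight, avg_time):
    """CDI (mg/kg/day) for an ingestion pathway: (C * IR * EF * ED) / (BW * AT)."""
    return (conc * intake_rate * exp_freq * exp_duration) / (body_weight * avg_time)

def hazard_quotient(cdi, rfd):
    """HQ = CDI / RfD; HQ > 1 flags potential non-carcinogenic risk."""
    return cdi / rfd

# Illustrative numbers only (not the study's data): Cd via soil ingestion, child receptor.
cdi_cd = chronic_daily_intake(conc=1.2,           # mg Cd per kg soil
                              intake_rate=200e-6,  # kg soil ingested per day
                              exp_freq=350,        # exposure days per year
                              exp_duration=6,      # years
                              body_weight=15,      # kg
                              avg_time=6 * 365)    # days
hq_cd = hazard_quotient(cdi_cd, rfd=1e-3)          # reference dose, example value
hi = hq_cd                                         # HI sums the HQs of all metals/pathways
print(f"CDI = {cdi_cd:.2e} mg/kg/day, HQ = {hq_cd:.2f}")
```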

Keywords: ecological risk, health risk, open dump site, sanitary landfill

Procedia PDF Downloads 173
134 Artificial Intelligence-Aided Extended Kalman Filter for Magnetometer-Based Orbit Determination

Authors: Gilberto Goracci, Fabio Curti

Abstract:

This work presents a robust, light, and inexpensive algorithm to perform autonomous orbit determination using onboard magnetometer data in real time. Magnetometers are low-cost and reliable sensors typically available on a spacecraft for attitude determination purposes, thus representing an interesting choice for real-time orbit determination without adding sensors to the spacecraft itself. Magnetic field measurements can be exploited by Extended/Unscented Kalman Filters (EKF/UKF) for orbit determination to make up for GPS outages, yielding errors of a few kilometers in position and tens of meters per second in velocity. While this level of accuracy shows that Kalman filtering represents a solid baseline for autonomous orbit determination, it is not enough to provide a reliable state estimate in the absence of GPS signals. This work combines the solidity and reliability of the EKF with the versatility of a Recurrent Neural Network (RNN) architecture to further increase the precision of the state estimate. Deep learning models can, in fact, capture nonlinear relations between the inputs, in this case the magnetometer data and the EKF state estimates, and the targets, namely the true position and velocity of the spacecraft. The model has been pre-trained on Sun-synchronous orbits (SSO) up to 2126 kilometers of altitude with different initial conditions and noise levels to cover a wide range of realistic scenarios. The orbits have been propagated considering J2-level dynamics, and the geomagnetic field has been modeled using the International Geomagnetic Reference Field (IGRF) coefficients up to the 13th order. The training of the module can be completed offline using the expected orbit of the spacecraft, greatly reducing the onboard computational burden. Once the spacecraft is launched, the model can use the GPS signal, when available, to fine-tune its parameters on the actual orbit in real time and work autonomously during GPS outages. The proposed module is therefore versatile, as it can be applied to any mission operating in SSO, while the training is completed, and eventually fine-tuned, on the specific orbit, increasing performance and reliability. The results of this study show an increase of one order of magnitude in the precision of the state estimate with respect to the EKF alone. Tests on simulated and real data will be shown.
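A minimal sketch of the EKF-plus-RNN idea follows: an LSTM takes the magnetometer components and the EKF position/velocity estimates at each time step and learns a residual correction to the state. The layer sizes and the residual formulation are assumptions made here for illustration, not the architecture reported by the authors:

```python
import torch
import torch.nn as nn

class EKFCorrectionRNN(nn.Module):
    """Refine EKF orbit estimates from magnetometer data (illustrative sketch).

    Input per time step: 3 magnetometer components + 6 EKF states (position, velocity).
    Output per time step: corrected 6-element state, learned as a residual on the EKF.
    """
    def __init__(self, hidden_size=64):
        super().__init__()
        self.rnn = nn.LSTM(input_size=9, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 6)

    def forward(self, mag_and_ekf_seq):          # (batch, time, 9)
        features, _ = self.rnn(mag_and_ekf_seq)
        ekf_state = mag_and_ekf_seq[..., 3:]     # the 6 EKF states
        return ekf_state + self.head(features)   # residual correction
```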

Keywords: artificial intelligence, extended Kalman filter, orbit determination, magnetic field

Procedia PDF Downloads 80
133 Nondestructive Monitoring of Atomic Reactions to Detect Precursors of Structural Failure

Authors: Volodymyr Rombakh

Abstract:

This article substantiates the possibility of detecting the precursors of catastrophic destruction of a structure or device and stopping operation before it occurs. Damage to solids results from breaking the bonds between atoms, which requires energy. Modern theories of strength and fracture assume that this energy is due to stress. However, in a letter to W. Thomson (Lord Kelvin) dated December 18, 1856, J.C. Maxwell provided evidence that elastic energy cannot destroy solids. He proposed an equation for estimating a deformable body's energy as the sum of two energies: the first term does not change under symmetrical compression, while the second corresponds to distortion without compression. Both types of energy are represented in the equation as quadratic functions of strain, and Maxwell repeatedly wrote that it is strain, not stress, that matters. Furthermore, he noted that the nature of the energy causing the distortion was unknown to him. An article devoted to theories of elasticity was published in 1850. Maxwell tried to express mechanical properties with the help of optics, which became possible only after the creation of quantum mechanics. However, Maxwell's work on elasticity is not cited in the theories of strength and fracture, and the authors of these theories and their associates are still trying to describe the phenomena they observe on the basis of classical mechanics. The study of Faraday's experiments and of Maxwell's and Rutherford's ideas made it possible to discover a previously unknown region of electromagnetic radiation. The properties of the photons emitted in this reaction are fundamentally different from those of photons emitted in nuclear reactions or caused by electron transitions in an atom. Such photons are released during all processes in the universe, including from plants and organs under natural conditions; their penetrating power in metal is millions of times greater than that of gamma rays, and yet they are not invasive. This apparent contradiction arises because the chaotic motion of protons is accompanied by chaotic emission of photons in time and space: such photons are not coherent, and the energy of a solitary photon is insufficient to break the bond between atoms, one stage of which is ionization. Photographs registered the rail deformation caused by 113 cars, while a Geiger counter did not. The author's studies show that the cause of damage to a solid is the breakage of bonds between a finite number of atoms due to the stimulated emission of metastable atoms. The guarantee of a structure's reliability is the ratio of the energy dissipation rate to the energy accumulation rate, not strength, which is not a physical parameter since it can be neither measured nor calculated. Continuous control of this ratio is possible because metastable atoms spontaneously emit photons. The article presents example calculations of the destruction energy and photographs attributed to the action of photons emitted during the atomic-proton reaction.

Keywords: atomic-proton reaction, precursors of man-made disasters, strain, stress

Procedia PDF Downloads 67
132 Wrestling with Religion: A Theodramatic Exploration of Morality in Popular Culture

Authors: Nicholas Fieseler

Abstract:

The nature of religion implicit in popular culture is relevant both in and out of the university. The traditional rules-based conception of religion and the ethical systems that emerge from it do not necessarily convey the behavior of daily life as it exists apart from spaces deemed sacred. This paper proposes to examine the religion implicit in the popular culture phenomenon of professional wrestling and how that affects the understanding of popular religion. Pro wrestling, while frequently dismissed, offers a unique way to re-examine religion in popular culture. A global phenomenon, pro wrestling occupies a distinct space in numerous countries and presents a legitimate reflection of human behavior cross-culturally on a scale few other phenomena can equal. Given its global viewership of millions, it should be recognized as a significant means of interpreting the human attraction to violence and its association with religion in general. Hans Urs von Balthasar's theory of theodrama will be used to interrogate the inchoate religion within pro wrestling. While Balthasar developed theodrama within the confines of Christian theology, theodrama contains remarkable versatility in its potential utility. Since theodrama re-envisions reality as drama, the actions of every human actor on the stage contribute to the play's development, and all action contains some transcendent value. It is in this sense that even the "low brow" activity of pro wrestling may be understood in religious terms. Moreover, a pro wrestling storyline acts as a play within a play: the struggles in a pro wrestling match reflect human attitudes toward life as it exists in the sacred and profane realms. The indistinct lines separating traditionally good (face) from traditionally bad (heel) wrestlers mirror the moral ambiguity through which many people interpret life. This blurred distinction between good and bad, and large segments of an audience's embrace of heel wrestlers, reveal the ethical constraints that guide the everyday values of pro wrestling spectators, a moral ambivalence that is often overlooked by traditional religious systems and has hitherto been neglected in the academic literature on pro wrestling. The significance of interpreting the religion implicit in pro wrestling through a theodramatic lens extends beyond pro wrestling specifically and can inform the examination of the religion implicit in popular culture in general. The use of theodrama mitigates the rigid separation often ascribed to areas deemed sacred/profane or transcendent/immanent, enabling a re-evaluation of religion and ethical systems as practiced in popular culture. The theodramatic approach will be applied by treating the pro wrestling match as a literary text that reflects the society from which it emerges. This analysis will also reveal the complex nature of religion in popular culture and provide new directions for the academic study of religion. This project consciously bridges the academic and popular realms. The goal of the research is not only to add to the academic literature on implicit religion in popular culture but also to publish it in a form that speaks to those outside the standard academic audiences for such work.

Keywords: ethics, popular religion, professional wrestling, theodrama

Procedia PDF Downloads 124
131 Implicit U-Net Enhanced Fourier Neural Operator for Long-Term Dynamics Prediction in Turbulence

Authors: Zhijie Li, Wenhui Peng, Zelong Yuan, Jianchun Wang

Abstract:

Turbulence is a complex phenomenon that plays a crucial role in various fields, such as engineering, atmospheric science, and fluid dynamics. Predicting and understanding its behavior over long time scales have been challenging tasks. Traditional methods, such as large-eddy simulation (LES), have provided valuable insights but are computationally expensive. In the past few years, machine learning methods have experienced rapid development, leading to significant improvements in computational speed. However, ensuring stable and accurate long-term predictions remains a challenging task for these methods. In this study, we introduce the implicit U-Net enhanced Fourier neural operator (IU-FNO) as a solution for stable and efficient long-term predictions of the nonlinear dynamics of three-dimensional (3D) turbulence. The IU-FNO model combines implicit recurrent Fourier layers to deepen the network and incorporates the U-Net architecture to accurately capture small-scale flow structures. We evaluate the performance of the IU-FNO model through extensive large-eddy simulations of three types of 3D turbulence: forced homogeneous isotropic turbulence (HIT), a temporally evolving turbulent mixing layer, and decaying homogeneous isotropic turbulence. The results demonstrate that the IU-FNO model outperforms other FNO-based models, including the vanilla FNO, implicit FNO (IFNO), and U-Net enhanced FNO (U-FNO), as well as the dynamic Smagorinsky model (DSM), in predicting various turbulence statistics. Specifically, the IU-FNO model exhibits improved accuracy in predicting the velocity spectrum, probability density functions (PDFs) of vorticity and velocity increments, and instantaneous spatial structures of the flow field. Furthermore, the IU-FNO model addresses the stability issues encountered in long-term predictions, which were limitations of previous FNO models. In addition to its superior performance, the IU-FNO model offers faster computational speed compared to traditional large-eddy simulations using the DSM model. It also demonstrates generalization capabilities to higher Taylor-Reynolds numbers and unseen flow regimes, such as decaying turbulence. Overall, the IU-FNO model presents a promising approach for long-term dynamics prediction in 3D turbulence, providing improved accuracy, stability, and computational efficiency compared to existing methods.
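At the core of all FNO-type models, including the IU-FNO described above, is a spectral convolution: the input field is transformed to Fourier space, the lowest modes are multiplied by learned weights, and the result is transformed back. The 2-D, single-layer sketch below illustrates that idea only; the authors' model is 3-D, recurrent, and U-Net enhanced:

```python
import torch
import torch.nn as nn

class SpectralConv2d(nn.Module):
    """Simplified 2-D Fourier layer (spectral convolution with truncated modes)."""
    def __init__(self, channels, modes):
        super().__init__()
        scale = 1.0 / channels
        # Learnable complex weights for the retained low-frequency modes.
        self.weights = nn.Parameter(
            scale * torch.randn(channels, channels, modes, modes, dtype=torch.cfloat))
        self.modes = modes

    def forward(self, x):                        # x: (batch, channels, nx, ny)
        x_ft = torch.fft.rfft2(x)                # to Fourier space
        out_ft = torch.zeros_like(x_ft)
        m = self.modes
        # Keep only the lowest m x m modes and mix channels with the learned weights.
        out_ft[:, :, :m, :m] = torch.einsum(
            "bixy,ioxy->boxy", x_ft[:, :, :m, :m], self.weights)
        return torch.fft.irfft2(out_ft, s=x.shape[-2:])  # back to physical space
```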

Keywords: data-driven, Fourier neural operator, large eddy simulation, fluid dynamics

Procedia PDF Downloads 49
130 Molecular Dynamics Study of Ferrocene in Low and Room Temperatures

Authors: Feng Wang, Vladislav Vasilyev

Abstract:

Ferrocene (Fe(C₅H₅)₂, i.e., di-cyclopentadienyl iron (FeCp₂) or Fc) is a unique example of 'wrong but seminal' in the history of chemistry. It has significant applications in a number of areas such as homogeneous catalysis, polymer chemistry, molecular sensing, and nonlinear optical materials. However, the 'molecular carousel' has been a notoriously difficult case and the subject of long debate regarding its conformation and properties. Ferrocene is a dynamic molecule; understanding its dynamical properties is therefore essential to understanding its conformational properties. In the present study, molecular dynamics (MD) simulations are performed. In the simulations, we use five geometrical parameters to define the overall conformation of Fc, and everything else is treated as thermal noise. Three parameters define the relative geometry of the two Cp planes: d, the distance between the two Cp planes; α, the tilt angle of the Cp planes; and δ, the carousel-like rotation angle of one Cp plane relative to the other. Two further parameters position the Fe atom between the two Cp rings: d1, the Fe-Cp1 distance, and d2, the Fe-Cp2 distance. Our preliminary MD simulations show that the five parameters behave differently. The distances of Fe to the two Cp planes are independent and practically identical, without correlation. The relative orientation of the two Cp planes, α, indicates that they are most likely not parallel; rather, they tilt at a small angle, α ≠ 0°. The mean plane dihedral angle δ ≠ 0°; moreover, δ is neither 0° nor 36°, indicating that under these conditions Fc is neither in a perfect eclipsed structure nor in a perfect staggered structure. The simulations show that when the temperature is above 80 K, the conformers are virtually in free rotation. A very interesting result from the MD simulation concerns the five C-Fe bond distances within the same Cp ring: they are, surprisingly, not identical but fall into three groups of 2, 2, and 1. We describe the motion of the pentagon formed by the five carbon atoms as 'turtle swimming', as shown in the dynamical animation video: Fe-C(1) and Fe-C(2), which are identical, correspond to 'the turtle's back legs'; Fe-C(3) and Fe-C(4), which are also identical, to 'the turtle's front paws'; and Fe-C(5) to 'the turtle's head'. Such a 'turtle swimming' analogy may help explain the singly substituted derivatives of Fc. Again, the mean Fe-C distance obtained from the MD simulation is larger than the quantum mechanically calculated Fe-C distances for eclipsed and staggered Fc, with a larger deviation with respect to the eclipsed Fc than to the staggered Fc. The same trend is obtained for the five Fe-C-H angles within the same Cp ring. The simulated mean IR spectrum at 7 K shows split spectral peaks at approximately 470 cm⁻¹ and 488 cm⁻¹, in excellent agreement with the quantum mechanically calculated gas-phase IR spectrum for eclipsed Fc. As the temperature increases above 80 K, the clearly split IR spectrum becomes a single very broad peak. Preliminary MD results will be presented.
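The five conformational parameters named above can be extracted from each MD frame by fitting a plane to each Cp ring. The sketch below shows one way to do this; the convention used for the rotation angle δ (measured between the first carbon of each ring) is an assumption for illustration and may differ from the authors' exact definition:

```python
import numpy as np

def plane_fit(ring_xyz):
    """Best-fit plane of a Cp ring: returns its centroid and unit normal (via SVD)."""
    centroid = ring_xyz.mean(axis=0)
    _, _, vt = np.linalg.svd(ring_xyz - centroid)
    return centroid, vt[-1]

def ferrocene_parameters(cp1_xyz, cp2_xyz, fe_xyz):
    """d, alpha, delta, d1, d2 from Cartesian coordinates of one MD frame.

    cp1_xyz, cp2_xyz : (5, 3) carbon positions of each Cp ring
    fe_xyz           : (3,) iron position; angles are returned in degrees.
    """
    c1, n1 = plane_fit(cp1_xyz)
    c2, n2 = plane_fit(cp2_xyz)
    axis = (c2 - c1) / np.linalg.norm(c2 - c1)
    d = np.linalg.norm(c2 - c1)                                  # ring-ring separation
    alpha = np.degrees(np.arccos(np.clip(abs(np.dot(n1, n2)), 0.0, 1.0)))  # tilt angle
    # delta: carousel rotation, angle between in-plane projections of one C per ring.
    v1 = cp1_xyz[0] - c1; v1 -= np.dot(v1, axis) * axis
    v2 = cp2_xyz[0] - c2; v2 -= np.dot(v2, axis) * axis
    cosd = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    delta = np.degrees(np.arccos(np.clip(cosd, -1.0, 1.0)))
    d1 = abs(np.dot(fe_xyz - c1, n1))                            # Fe to Cp1 plane
    d2 = abs(np.dot(fe_xyz - c2, n2))                            # Fe to Cp2 plane
    return d, alpha, delta, d1, d2
```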

Keywords: ferrocene conformation, molecular dynamics simulation, conformer orientation, eclipsed and staggered ferrocene

Procedia PDF Downloads 191
129 Comprehensive Analysis of Electrohysterography Signal Features in Term and Preterm Labor

Authors: Zhihui Liu, Dongmei Hao, Qian Qiu, Yang An, Lin Yang, Song Zhang, Yimin Yang, Xuwen Li, Dingchang Zheng

Abstract:

Premature birth, defined as birth before 37 completed weeks of gestation, is a leading cause of neonatal morbidity and mortality and has long-term adverse consequences for health. It has recently been reported that the worldwide preterm birth rate is around 10%. Existing measurement techniques for diagnosing preterm delivery include the tocodynamometer, ultrasound, and fetal fibronectin. However, they are subjective or suffer from high measurement variability and inaccurate diagnosis and prediction of preterm labor. Electrohysterography (EHG), based on recording uterine electrical activity with electrodes attached to the maternal abdomen, is a promising method to assess uterine activity and diagnose preterm labor. The purpose of this study is to analyze the differences in EHG signal features between term and preterm labor. A freely accessible database was used, containing 300 signals acquired in two groups of pregnant women who delivered at term (262 cases) and preterm (38 cases). EHG signals from 38 term-labor and 38 preterm-labor recordings were preprocessed with band-pass Butterworth filters of 0.08–4 Hz. EHG signal features were then extracted, comprising classical time-domain descriptors including root mean square and zero-crossing number; spectral parameters including peak frequency, mean frequency, and median frequency; wavelet packet coefficients; autoregressive (AR) model coefficients; and nonlinear measures including the maximal Lyapunov exponent, sample entropy, and correlation dimension. Their statistical significance for distinguishing the two groups of recordings was assessed. The results showed that the mean frequency of preterm labor was significantly smaller than that of term labor (p < 0.05). Five AR model coefficients showed significant differences between term and preterm labor. The maximal Lyapunov exponent of early preterm recordings (time of recording < the 26th week of gestation) was significantly smaller than that of early term recordings. The sample entropy of late preterm recordings (time of recording > the 26th week of gestation) was significantly smaller than that of late term recordings. There was no significant difference in the other features between the term and preterm labor groups. Any future work on classification should therefore focus on using multiple techniques, with the mean frequency, AR coefficients, maximal Lyapunov exponent, and sample entropy being among the prime candidates. Even if these methods are not yet ready for clinical practice, they provide the most promising indicators for preterm labor.
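A minimal sketch of the preprocessing and a few of the listed features is given below. The filter order and the Welch segment length are assumptions for illustration; the study specifies only the 0.08-4 Hz pass band and the feature set:

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

def ehg_features(signal, fs):
    """Band-pass filter an EHG trace and return a small subset of the study's features."""
    # Band-pass Butterworth filter, 0.08-4 Hz pass band as in the abstract
    # (4th order chosen here for illustration).
    b, a = butter(4, [0.08, 4.0], btype="bandpass", fs=fs)
    x = filtfilt(b, a, signal)

    rms = np.sqrt(np.mean(x ** 2))                        # root mean square
    zero_crossings = int(np.sum(np.diff(np.signbit(x)) != 0))

    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 4096))   # power spectral density
    peak_freq = f[np.argmax(pxx)]
    mean_freq = np.sum(f * pxx) / np.sum(pxx)
    cum = np.cumsum(pxx)
    median_freq = f[np.searchsorted(cum, cum[-1] / 2)]

    return {"rms": rms, "zero_crossings": zero_crossings,
            "peak_freq": peak_freq, "mean_freq": mean_freq, "median_freq": median_freq}
```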

Keywords: electrohysterogram, feature, preterm labor, term labor

Procedia PDF Downloads 542
128 Ultra-Tightly Coupled GNSS/INS Based on High Degree Cubature Kalman Filtering

Authors: Hamza Benzerrouk, Alexander Nebylov

Abstract:

In classical GNSS/INS integration designs, the loosely coupled approach uses the GNSS-derived position and velocity as the measurement vector. This design is suboptimal from the standpoint of coping with GNSS outliers and outages. The tightly coupled GNSS/INS navigation filter mixes the GNSS pseudoranges and inertial measurements and obtains the vehicle navigation state as the final navigation solution. The ultra-tightly coupled GNSS/INS design combines the I (in-phase) and Q (quadrature) accumulator outputs of the GNSS receiver signal tracking loops and the INS navigation filter function into a single Kalman filter variant (EKF, UKF, SPKF, CKF, or HCKF). The EKF and UKF are the most widely used nonlinear filters in the literature and are well adapted to inertial navigation state estimation when integrated with GNSS signal outputs. In this paper, we propose to move a step forward with more accurate filters and modern approaches, namely Cubature and High-Degree Cubature Kalman Filtering. Building on previous results on state estimation for INS/GNSS integration, the Cubature Kalman Filter (CKF) and the High-Degree Cubature Kalman Filter (HCKF) are the references for the recently developed generalized cubature-rule-based Kalman Filter (GCKF). High-degree cubature rules are the kernel of the new solution, providing more accurate estimation with less computational complexity than the Gauss-Hermite Quadrature Kalman Filter (GHQKF), which is not selected in this work because of its limited real-time applicability in high-dimensional state spaces. In the ultra-tightly (deeply) coupled GNSS/INS system, a dynamics EKF is used with transition matrix factorization together with GNSS block processing, which is described in detail in the paper; the intermediate frequency (IF) is assumed to be available through correlator samples at a rate of 500 Hz in the presented approach. GNSS (GPS+GLONASS) measurements are assumed available, and the modern SPKF and Cubature Kalman Filter (CKF) are compared with new versions of the CKF, called high-order CKFs, based on spherical-radial cubature rules developed to the fifth degree in this work. The estimation accuracy of the high-degree CKF is expected to be comparable to that of the GHKF; the state estimation results are then observed and discussed for different initialization parameters. The results show more accurate navigation state estimation and a more robust GNSS receiver when the ultra-tightly coupled approach is applied using the High-Degree Cubature Kalman Filter.
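For reference, the standard third-degree spherical-radial cubature rule underlying the CKF uses 2n equally weighted points placed at ±√n along the columns of a square root of the covariance; the higher-degree rules discussed above extend this with additional points and non-uniform weights. A minimal sketch of the third-degree rule (not the fifth-degree rule developed in the paper):

```python
import numpy as np

def cubature_points(mean, cov):
    """Third-degree spherical-radial cubature points and weights (standard CKF rule)."""
    n = len(mean)
    sqrt_cov = np.linalg.cholesky(cov)                            # cov = L @ L.T
    unit = np.sqrt(n) * np.concatenate([np.eye(n), -np.eye(n)])   # (2n, n) unit points
    points = mean + unit @ sqrt_cov.T                             # map onto the state distribution
    weights = np.full(2 * n, 1.0 / (2 * n))
    return points, weights

def cubature_transform(f, mean, cov):
    """Propagate mean and covariance through a nonlinear function f with the cubature rule."""
    pts, w = cubature_points(mean, cov)
    fx = np.array([f(p) for p in pts])
    mean_out = w @ fx
    diff = fx - mean_out
    cov_out = (diff.T * w) @ diff
    return mean_out, cov_out
```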

Keywords: GNSS, INS, Kalman filtering, ultra tight integration

Procedia PDF Downloads 262
127 The Impact of Economic Status on Health Status in the Context of Bangladesh

Authors: Md. S. Sabuz

Abstract:

Bangladesh, a South Asian developing country, has achieved a remarkable breakthrough in health indicators during the last four decades despite immense income inequality. This achievement coexists with the exclusion of marginalized people from health care facilities, and the persistence of this exclusion of the disadvantaged remains troubling. Exclusion arises from occupational inferiority, pay and wage differences, educational backwardness, and gender disparity through to urban-rural complexity, and it prevents the unprivileged from seeking and obtaining health services. Evidence from Bangladesh shows that many sick people prefer to die at home without securing medical services because in the past they were not treated well; not because the medical facilities were inadequate or antediluvian, but because their socio-economic class meant they received harsh treatment. Furthermore, the government and policymakers have placed enormous emphasis on infrastructural development and achieving health indicators instead of ensuring quality services and the inclusion of people from all spheres. Therefore, it is high time to address these issues and highlight the impact of economic status on health status from a sociological perspective. The objective of this study is to consider ways of assessing and exploring the impact of economic status, for instance occupational status and pay and wage variables, on health status in the context of Bangladesh. The hypotheses are that a significant number of factors affecting economic status ultimately shape health status, with acute income inequality as a prominent factor. Illiteracy, gender disparity, remoteness, distrust of services, high costs, superstition, etc. are the dominant indicators behind the economic factors influencing health status. The chosen methodology combines qualitative and quantitative approaches to accomplish the research objectives. Secondary sources of data will be used to conduct the study. Surveys will be conducted among people who have used the health care facilities and people from different socio-economic and cultural backgrounds. Focus group discussions will be conducted to acquire data from citizens of different cultures and regions. The findings show that 48% of people from disadvantaged communities have been deprived of proper health care facilities. The general reasons behind this are the high cost of medicines and other equipment, and a significant number of people are unaware of the appropriate facilities. It was found that socio-economic variables are the main influential factors driving both the economic dimension and health status. Above all, regional variables and gender dimensions have an enormous effect on determining the health status of an individual or community. Amidst many positive achievements, for example a decrease in the child mortality rate and an increase in child immunization programs, the inclusion of all classes of people in health care facilities has been overshadowed in Bangladesh. This phenomenon, along with other socio-economic and cultural phenomena, significantly diminishes the quality and inclusiveness of people's health status.

Keywords: cultural context of health, economic status, gender and health, rural health care

Procedia PDF Downloads 192
126 Procedure for Monitoring the Process of Behavior of Thermal Cracking in Concrete Gravity Dams: A Case Study

Authors: Adriana de Paula Lacerda Santos, Bruna Godke, Mauro Lacerda Santos Filho

Abstract:

Several dams around the world have already collapsed, causing environmental, social, and economic damage. The concern to avoid future disasters has stimulated the creation of a great number of laws and rules in many countries. In Brazil, Law 12.334/2010 was created, establishing the National Policy on Dam Safety. Overall, this policy requires dam owners to invest in the maintenance of their structures and to improve their monitoring systems in order to provide faster and more straightforward responses in the case of increased risk. As monitoring tools, visual inspections provide a comprehensive assessment of the structures' performance, while auscultation instrumentation adds specific information on operational or behavioral changes, providing an alarm when a performance indicator exceeds acceptable limits. These limits can be set using statistical methods based on the relationship between an instrument's measurements and other variables, such as reservoir level, time of year, or the measurements of other instruments. Besides the design parameters (uplift of the foundation, displacements, etc.), dam instrumentation can also be used to monitor the behavior of defects and damage manifestations. Specifically in concrete gravity dams, one of the main causes of cracking is the concrete volumetric change generated by thermal phenomena associated with the construction process of these structures. Based on this, the goal of this research is to propose a process for monitoring the behavior of thermal cracking in concrete gravity dams through instrumentation data analysis and the establishment of control values. Block B-11 of the Governor José Richa Dam Power Plant was selected as the case study; it presents a cracking process that was identified even before the filling of the reservoir in August 1998, and crack meters and surface thermometers were installed for its monitoring. Although these instruments were installed in May 2004, the research was restricted to the last 4.5 years (June 2010 to November 2014), when all the instruments were calibrated and producing reliable data. The adopted method is based on simple linear correlation procedures to understand the interactions among the instruments' time series, verifying the response times between them. Scatter plots were drafted from the best correlations, which supported the definition of the control limit values. Among the conclusions, it is shown that there is a strong or very strong correlation between ambient temperature and the crack meter and flow meter measurements. Based on the results of the statistical analysis, it was possible to develop a tool for monitoring the behavior of the cracks in the case study. The goal of the research, to develop a proposal for a process for monitoring the behavior of thermal cracking in concrete gravity dams, was thus fulfilled.
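The statistical core of the proposed monitoring process, a linear correlation between ambient temperature and a crack-meter series plus control limits around the fitted relation, can be sketched as below. The ±k·s band is one common way to set alert limits; the width factor k is a design choice and is not a value taken from the study:

```python
import numpy as np
from scipy import stats

def crack_control_limits(ambient_temp, crack_opening, k=2.0):
    """Correlate a crack-meter series with ambient temperature and build control limits.

    ambient_temp, crack_opening : paired time series from the dam instrumentation.
    Returns the Pearson correlation, the expected opening from the linear fit, and
    upper/lower limits at k residual standard deviations around that fit.
    """
    temp = np.asarray(ambient_temp, dtype=float)
    crack = np.asarray(crack_opening, dtype=float)
    r, p = stats.pearsonr(temp, crack)
    slope, intercept, *_ = stats.linregress(temp, crack)
    expected = intercept + slope * temp
    resid_std = np.std(crack - expected, ddof=2)
    return {"pearson_r": r, "p_value": p, "expected": expected,
            "upper_limit": expected + k * resid_std,
            "lower_limit": expected - k * resid_std}
```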

Keywords: concrete gravity dam, dams safety, instrumentation, simple linear correlation

Procedia PDF Downloads 270
125 Influence of Temperature and Immersion on the Behavior of a Polymer Composite

Authors: Quentin C.P. Bourgogne, Vanessa Bouchart, Pierre Chevrier, Emmanuel Dattoli

Abstract:

This study presents experimental and theoretical work conducted on a polyphenylene sulfide reinforced with 40 wt% short glass fibers (PPS GF40) and on its matrix. Thermoplastics are widely used in the automotive industry to reduce the weight of automotive parts, and the replacement of metallic parts by thermoplastics is now reaching under-the-hood parts, near the engine. In this area, the parts are subjected to high temperatures and are immersed in cooling liquid. This liquid is composed of water and glycol and can affect the mechanical properties of the composite. The aim of this work was thus to quantify the evolution of the mechanical properties of the thermoplastic composite as a function of temperature and liquid aging effects, in order to develop a reliable design of parts. An experimental campaign in tensile mode was carried out at different temperatures and for various glycol proportions in the cooling liquid, under monotonic and cyclic loadings, on a neat and a reinforced PPS. The results of these tests highlighted some of the main physical phenomena occurring during such loadings under harsh hydrothermal conditions. Indeed, the tests showed that temperature and cooling-liquid aging can affect the mechanical behavior of the material in several ways. The more water the cooling liquid contains, the more the mechanical behavior is affected; PPS showed a higher sensitivity to absorption than to the chemical aggressiveness of the cooling liquid, which explains this dominant sensitivity. Two kinds of behavior were noted: an elasto-plastic type below the glass transition temperature and a visco-pseudo-plastic one above it. It was also shown that viscosity is the leading phenomenon above the glass transition temperature for PPS and can also be important below this temperature, mostly under cyclic conditions and when the stress rate is low. Finally, it was observed that loading this composite at high temperatures diminishes the benefits of the fibers. A new phenomenological model was then built to take these experimental observations into account. This new model allows the prediction of the evolution of mechanical properties as a function of the loading environment, with fewer parameters than in previous studies. It was also shown that the presented approach enables the description and prediction of the mechanical response with very good accuracy (2% average error at worst) over a wide range of hydrothermal conditions. A temperature-humidity equivalence principle was identified for PPS, allowing aging effects to be considered within the proposed model. Finally, a limit on the accuracy achievable by any model using this data set was determined by applying an artificial-intelligence-based model, allowing a comparison between artificial-intelligence-based models and phenomenological ones.

Keywords: aging, analytical modeling, mechanical testing, polymer matrix composites, sequential model, thermomechanical

Procedia PDF Downloads 98
124 Migrant Women English Instructors' Transformative Workplace Learning Experiences in Post-Secondary English Language Programs in Ontario, Canada

Authors: Justine Jun

Abstract:

This study aims to reveal migrant women English instructors' workplace learning experiences in Canadian post-secondary institutions in Ontario. Although many scholars have conducted research on internationally educated teachers and their professional and employment challenges, few studies have recorded migrant women English language instructors' professional learning and support experiences in post-secondary English language programs in Canada. This study employs a qualitative research paradigm, with Mezirow's Transformative Learning Theory as an essential lens for the researcher to explain, analyze, and interpret the research data. It is a collaborative research project: the researcher and participants cooperatively create photographic or other artwork data responding to the research questions. Photovoice and arts-informed methodology are the main data collection methods. Research participants engage in the study as co-researchers and inquire into their own workplace learning experiences, actively utilizing their critical self-reflective and dialogic skills. Co-researchers individually select the forms of artwork they prefer to engage with to represent the transformative workplace learning experiences of Canadian workplace culture that they underwent while working with colleagues and administrators. Once the co-researchers generate their cultural artifacts as research data, they collaboratively interpret their artworks with the researcher and other volunteer co-researchers. Co-researchers jointly investigate the themes emerging from the artworks and interpret the meanings of their own and others' workplace learning experiences embedded in the artworks through interactive one-on-one or group interviews. The migrant women English instructor participants examine and answer the following research questions: (1) What have they learned about their workplace culture, and how do they explain their learning experiences? (2) How transformative have their learning experiences been at work? (3) How have their colleagues and administrators influenced their transformative learning? (4) What kind of support have they received, what supports have been valuable to them, and what changes would they like to see? (5) What have their learning experiences transformed? (6) What has this arts-informed research process transformed? The study findings have implications for the English language instructor support currently practiced in post-secondary English language programs in Ontario, Canada, especially for migrant women English instructors. This research is an empirical doctoral study in progress. It urgently addresses the research problem that few studies have investigated migrant English instructors' professional learning and support issues in the workplace, specifically those of English instructors working with adult learners in Canada. While appropriate social and professional support for migrant English instructors is required throughout the country, the present workplace realities in Ontario's English language programs need to be heard soon. For that purpose, the conceptualization of this study is crucial: it makes the investigation of under-researched social phenomena among under-represented instructors, namely workplace learning and support, viable and rigorous. This paper demonstrates the robust theorization of English instructors' workplace experiences using Mezirow's Transformative Learning Theory in the English language teacher education field.

Keywords: English teacher education, professional learning, transformative learning theory, workplace learning

Procedia PDF Downloads 108
123 Kinetic Evaluation of Sterically Hindered Amines under Partial Oxy-Combustion Conditions

Authors: Sara Camino, Fernando Vega, Mercedes Cano, Benito Navarrete, José A. Camino

Abstract:

Carbon capture and storage (CCS) technologies should play a relevant role in the move towards low-carbon systems in the European Union by 2030. Partial oxy-combustion emerges as a promising CCS approach to mitigating anthropogenic CO₂ emissions. Its advantages with respect to other CCS technologies rely on the production of a flue gas with a higher CO₂ concentration than that provided by conventional air-firing processes. The presence of more CO₂ in the flue gas increases the driving force in the separation process and hence might lead to further reductions in the energy requirements of the overall CO₂ capture process. A more CO₂-concentrated flue gas should enhance CO₂ capture by chemical absorption in terms of solvent kinetics and CO₂ cyclic capacity. These affect the performance of the overall CO₂ absorption process by reducing the solvent flow rate required for a given CO₂ removal efficiency. Lower solvent flow rates decrease the reboiler duty during the regeneration stage and also reduce the equipment size and pumping costs. Moreover, R&D activities in this field are focused on novel solvents and blends that provide lower CO₂ absorption enthalpies and therefore lower energy penalties associated with solvent regeneration. In this respect, sterically hindered amines are considered potential solvents for CO₂ capture: their molecular structure gives them a low energy requirement during the regeneration process, but their absorption kinetics are slow and must be promoted by blending with faster solvents such as monoethanolamine (MEA) and piperazine (PZ). In this work, the kinetic behavior of two sterically hindered amines was studied under partial oxy-combustion conditions and compared with MEA, using a lab-scale semi-batch reactor. The CO₂ content of the synthetic flue gas was varied from 15% v/v, representative of conventional coal combustion, to 60% v/v, the maximum CO₂ concentration allowable for an optimal partial oxy-combustion operation. The first solvent, 2-amino-2-methyl-1-propanol (AMP), showed a hybrid behavior with fast kinetics and a low enthalpy of CO₂ absorption. The second solvent was isophoronediamine (IF), which has a steric hindrance in one of its amino groups; its free amino group increases its cyclic capacity. In general, a higher CO₂ concentration in the flue gas accelerated the CO₂ absorption, producing higher CO₂ absorption rates, and the CO₂ loading also reached higher values in the experiments using the more CO₂-concentrated flue gas. The steric hindrance gives this solvent a hybrid behavior between fast and slow kinetic solvents. The kinetic rates observed in all the experiments carried out with AMP were higher than those of MEA but lower than those of IF. The kinetic enhancement experienced by AMP at high CO₂ concentration is slightly over 60%, compared with 70%–80% for IF. AMP also improved its CO₂ absorption capacity by 24.7% from 15% v/v to 60% v/v, almost double the improvement achieved by MEA. In the IF experiments, the CO₂ loading increased by around 10% from 15% v/v to 60% v/v CO₂, and it changed from 1.10 to 1.34 moles of CO₂ per mole of solvent, an increase of more than 20%. This hybrid kinetic behavior makes AMP and IF promising solvents for partial oxy-combustion applications.

Keywords: absorption, carbon capture, partial oxy-combustion, solvent

Procedia PDF Downloads 167
122 Reconceptualizing Evidence and Evidence Types for Digital Journalism Studies

Authors: Hai L. Tran

Abstract:

In the digital age, evidence-based reporting is touted as a best practice for seeking the truth and keeping the public well-informed. Journalists are expected to rely on evidence to demonstrate the validity of a factual statement and lend credence to an individual account. Evidence can be obtained from various sources, and owing to the rich supply of evidence types available, the definition of this important concept varies semantically. To promote clarity and understanding, it is necessary to break down the various types of evidence and categorize them in a more coherent, systematic way. There is a wide array of devices that digital journalists deploy as proof to back up or refute a truth claim. Evidence can take various formats, including verbal and visual materials. Verbal evidence encompasses quotes, soundbites, talking heads, testimonies, voice recordings, anecdotes, and statistics communicated through written or spoken language. There are instances where evidence is simply non-verbal, such as when natural sounds are provided without any verbalized words. On the other hand, other language-free items exhibited in photos, video footage, data visualizations, infographics, and illustrations can serve as visual evidence. Moreover, there are different sources from which evidence can be cited. Supporting materials, such as public or leaked records and documents, data, research studies, surveys, polls, or reports compiled by governments, organizations, and other entities, are frequently included as informational evidence. Proof can also come from human sources via interviews, recorded conversations, public and private gatherings, or press conferences. Expert opinions, eye-witness insights, insider observations, and official statements are some common examples of testimonial evidence. Digital journalism studies tend to make broad references when comparing qualitative and quantitative forms of evidence, while limited effort has been undertaken to distinguish between sister terms such as "data," "statistical," and "base-rate" on one side of the spectrum and "narrative," "anecdotal," and "exemplar" on the other. The present study seeks to develop an evidence taxonomy, which classifies evidence through the quantitative-qualitative juxtaposition and in a hierarchical order from broad to specific. According to this scheme, data, statistics, and base rate belong to the quantitative evidence group, whereas narrative, anecdote, and exemplar fall into the qualitative evidence group. The taxonomical classification then arranges data versus narrative at the top of the hierarchy of evidence types, followed by statistics versus anecdote and base rate versus exemplar. This research reiterates the central role of evidence in how journalists describe and explain social phenomena and issues. By defining the various types of evidence and delineating their logical connections, it helps remove a significant degree of conceptual inconsistency, ambiguity, and confusion in digital journalism studies.

Keywords: evidence, evidence forms, evidence types, taxonomy

Procedia PDF Downloads 40
121 Seismic Behavior of Existing Reinforced Concrete Buildings in California under Mainshock-Aftershock Scenarios

Authors: Ahmed Mantawy, James C. Anderson

Abstract:

Numerous cases of earthquakes (main shocks) followed by aftershocks have been recorded in California. In 1992, a pair of strong earthquakes occurred within three hours of each other in Southern California: the first shock occurred near the community of Landers and was assigned a magnitude of 7.3; the second occurred near the city of Big Bear, about 20 miles west of the initial shock, and was assigned a magnitude of 6.2. In the same year, a series of three earthquakes occurred over two days in the Cape Mendocino area of Northern California; the main shock was assigned a magnitude of 7.0, while the second and third shocks were both assigned a value of 6.6. This paper investigates the effect of a main shock accompanied by aftershocks of significant intensity on reinforced concrete (RC) frame buildings, capturing nonlinear behavior using the PERFORM-3D software. A 6-story building in San Bruno and a 20-story building in North Hollywood were selected for the study, as both have RC moment-resisting frame systems. The buildings are also instrumented at multiple floor levels as part of the California Strong Motion Instrumentation Program (CSMIP). Both buildings have recorded responses during past events such as the Loma Prieta and Northridge earthquakes, which were used in verifying the response parameters of the numerical models in PERFORM-3D. The verification of the numerical models shows good agreement between the calculated and recorded response values. Different scenarios of a main shock followed by a series of aftershocks, taken from real cases in California, were then applied to the building models in order to investigate the structural behavior of the moment-resisting frame system. The behavior was evaluated in terms of the lateral floor displacements, the ductility demands, and the inelastic behavior at critical locations. The analysis results showed that permanent displacements may occur due to plastic deformation during the main shock, which can lead to higher displacements during aftershocks. Also, the inelastic response at plastic hinges during the main shock can change the hysteretic behavior during the aftershocks. Higher ductility demands can also occur when buildings are subjected to trains of ground motions rather than to individual ground motions. A general conclusion is that the occurrence of aftershocks following an earthquake can lead to increased damage within the elements of RC frame buildings. Current code provisions for seismic design do not consider the probability of significant aftershocks when designing a new building in zones of high seismic activity.

Keywords: reinforced concrete, existing buildings, aftershocks, damage accumulation

Procedia PDF Downloads 266
120 Analysis of Tilting Cause of a Residential Building in Durres by the Use of Cptu Test

Authors: Neritan Shkodrani

Abstract:

On November 26, 2019, an earthquake hit the central western part of Albania. It was assessed as Mw 6.4, and its epicenter was located offshore northwest of Durrës, about 7 km north of the city. In this paper, the consequences of settlements of very soft soils are discussed for the case of a residential building, referred to as the "K Building", which showed significant tilting after the earthquake. The K Building is an RC framed building with 12+1 (basement) stories and a floor area of 21,000 m²; its construction was completed in 2012. The building, located in the city of Durrës, suffered severe non-structural damage during the November 26, 2019 Durrës earthquake sequence. During the on-site inspections immediately after the earthquake, the general condition of the building, the presence of observable settlements of the ground, and the crack pattern in the structure were determined, and a damage inspection was performed. It was significant to note that the K Building presented a tilt that might be attributed, as was believed at the beginning, partly to the failure of the columns of the ground floor and partly to liquefaction phenomena, but it did not collapse. At first it was not clear whether the foundation had undergone a bearing capacity failure or had failed because of soil liquefaction. Geotechnical soil investigations using CPTU tests were executed, and their data are used to evaluate the bearing capacity, the consolidation settlement of the mat foundation, and soil liquefaction, since these were believed to be the main reasons for the tilting of the building. The geotechnical soil investigation consisted of five static cone penetration tests with pore pressure measurement (piezocone tests). They reached penetration depths of 20.0 m to 30.0 m and clearly showed the presence of very soft and organic soils in the soil profile of the site. Geotechnical CPT-based analyses of bearing capacity, consolidation, and secondary settlement were applied, and the results are reported for each test. These results showed very small values of allowable bearing capacity and very high values of consolidation and secondary settlement. Liquefaction analysis based on the CPTU data and the characteristics of the ground shaking of the mentioned earthquake showed the possibility of liquefaction for some layers of the considered soil profile, but the estimated vertical settlements are small and clearly show that the main reason for the tilting of the building was not related to the consequences of liquefaction; it was instead an existing settlement caused by the applied bearing pressure of the building. All the CPTU tests were carried out in August 2021, almost two years after the November 26, 2019 Durrës earthquake, when the building itself had been demolished. After the removal of the mat foundation in September 2021, it was possible to carry out CPTU tests even within the footprint of the existing building, which made it possible to observe the effects of the long-term applied foundation bearing pressure on the consolidation of the considered soil profile.
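One simple way to see why the CPTU profile points to consolidation settlement under the building's bearing pressure is a one-dimensional estimate using a constrained modulus derived from the corrected cone resistance, M = alpha * (qt - sigma_v0), summed layer by layer. The sketch below uses made-up numbers for a soft-soil profile and a generic alpha; it is not the analysis procedure applied in the study:

```python
import numpy as np

def cpt_settlement(qt_mpa, sigma_v0_kpa, delta_sigma_kpa, layer_thickness_m, alpha_m=5.0):
    """1-D consolidation settlement from CPTU data (rough sketch).

    Constrained modulus per layer: M = alpha_m * (qt - sigma_v0); settlement is the
    sum of delta_sigma / M * thickness over the profile. alpha_m is an illustrative value.
    """
    qt = np.asarray(qt_mpa) * 1000.0                    # MPa -> kPa
    m = alpha_m * (qt - np.asarray(sigma_v0_kpa))       # constrained modulus, kPa
    strain = np.asarray(delta_sigma_kpa) / m
    return float(np.sum(strain * np.asarray(layer_thickness_m)))   # metres

# Example with placeholder numbers for a soft-soil profile (not the site data):
s = cpt_settlement(qt_mpa=[0.8, 1.0, 1.5],              # corrected cone resistance
                   sigma_v0_kpa=[30, 60, 100],           # in-situ vertical stress
                   delta_sigma_kpa=[120, 100, 80],       # stress increase from the building
                   layer_thickness_m=[3.0, 4.0, 5.0])
print(f"estimated consolidation settlement = {s:.2f} m")
```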

Keywords: bearing capacity, cone penetration test, consolidation settlement, secondary settlement, soil liquefaction, etc

Procedia PDF Downloads 80
119 Organic Light Emitting Devices Based on Low Symmetry Coordination Structured Lanthanide Complexes

Authors: Zubair Ahmed, Andrea Barbieri

Abstract:

The need to reduce energy consumption has prompted a considerable research effort to develop alternative energy-efficient lighting systems to replace conventional light sources (i.e., incandescent and fluorescent lamps). Organic light-emitting device (OLED) technology offers the distinctive possibility of fabricating large-area flat devices by vacuum or solution processing. Lanthanide β-diketonate complexes, owing to the unique photophysical properties of Ln(III) ions, have been explored as emitting layers in OLED displays and in solid-state lighting (SSL) in order to achieve high efficiency and color purity. For such applications, an excellent photoluminescence quantum yield (PLQY) and stability are the two key points, which can be achieved simply by selecting the proper organic ligands around the Ln ion in the coordination sphere. Regarding strategies to enhance the PLQY, the most common is the suppression of radiationless deactivation pathways due to the presence of high-frequency oscillators (e.g., O-H, C-H groups) around the Ln centre. Recently, a different approach to maximize the PLQY of Ln(β-DKs) has been proposed, named 'Escalate Coordination Anisotropy' (ECA). It is based on the assumption that coordinating the Ln ion with different ligands will break the centrosymmetry of the molecule, leading to less forbidden transitions (loosening the constraints of the Laporte rule). OLEDs based on such complexes exist, but with low efficiency and stability. To obtain efficient devices, new Ln complexes with enhanced PLQYs and stabilities need to be developed. For this purpose, Ln complexes, both visible- and NIR-emitting, with various coordination structures based on fluorinated and non-fluorinated β-diketones and O/N-donor neutral ligands were synthesized using a one-step in situ method. In this method, the β-diketones, base, LnCl₃·nH₂O, and neutral ligands were mixed in a 3:3:1:1 molar ratio in ethanol, which gave air- and moisture-stable complexes. They were then characterized by means of elemental analysis, NMR spectroscopy, and single-crystal X-ray diffraction. Thereafter, their photophysical properties were studied to select the best complexes for the fabrication of stable and efficient OLEDs. Finally, OLEDs were fabricated and investigated using these complexes as emitting layers, along with other organic layers such as NPB (N,N′-di(1-naphthyl)-N,N′-diphenyl-(1,1′-biphenyl)-4,4′-diamine; hole-transporting layer), BCP (2,9-dimethyl-4,7-diphenyl-1,10-phenanthroline; hole blocker), and Alq₃ (electron-transporting layer). The layers were sequentially deposited under a high-vacuum environment by thermal evaporation onto ITO glass substrates. Moreover, co-deposition techniques were used to improve charge transport in the devices and to avoid quenching phenomena. The devices show strong electroluminescence at 612, 998, 1064, and 1534 nm, corresponding to the ⁵D₀ → ⁷F₂ (Eu), ²F₅/₂ → ²F₇/₂ (Yb), ⁴F₃/₂ → ⁴I₉/₂ (Nd), and ⁴I₁₃/₂ → ⁴I₁₅/₂ (Er) transitions. All the fabricated devices show good efficiency as well as stability.

Keywords: electroluminescence, lanthanides, paramagnetic NMR, photoluminescence

Procedia PDF Downloads 97
118 Wound Healing Process Studied on DC Non-Homogeneous Electric Fields

Authors: Marisa Rio, Sharanya Bola, Richard H. W. Funk, Gerald Gerlach

Abstract:

Cell migration, wound healing and regeneration are some of the physiological phenomena in which electric fields (EFs) have proven to have an important function. Physiologically, cells experience electrical signals in the form of transmembrane potentials, ion fluxes through protein channels, as well as electric fields at their surface. As soon as a wound is created, the disruption of the epithelial layers generates an electric field of ca. 40-200 mV/mm, directing cell migration towards the wound site and starting the healing process. In vitro electrotaxis experiments have shown that cells respond to DC EFs by polarizing and migrating towards one of the poles (cathode or anode). A standard electrotaxis experiment consists of an electrotaxis chamber where cells are cultured, a DC power source, and agar salt bridges that help delay toxic products from the electrodes from reaching the cell surface. The electric field strengths used in such experiments are uniform and homogeneous. In contrast, the endogenous electric fields around a wound tend to be multi-field and non-homogeneous. In this study, we present a custom device that enables electrotaxis experiments in non-homogeneous DC electric fields. Its main feature is the replacement of conventional metallic electrodes, separated from the electrotaxis channel by agarose gel bridges, with electrolyte-filled microchannels. The connection to the DC source is made by Ag/AgCl electrodes, encased in agarose gel and placed at the end of each microfluidic channel. An SU-8 membrane closes the fluidic channels and simultaneously serves as the single connection from each of them to the central electrotaxis chamber. The electric field distribution and current density were numerically simulated with the steady-state electric conduction module of ANSYS 16.0. Simulation data confirm the application of non-homogeneous EFs of physiological strength. To validate the biocompatibility of the device, the cellular viability of the photoreceptor-derived 661W cell line was assessed. The cells did not show any signs of apoptosis, damage or detachment during stimulation. Furthermore, immunofluorescence staining, namely vinculin and actin labelling, allowed the assessment of adhesion efficiency and orientation of the cytoskeleton, respectively. Cellular motility in the presence and absence of applied DC EFs was verified. The movement of individual cells was tracked for the duration of the experiments, confirming the EF-induced, cathodally directed motility of the studied cell line. The in vitro monolayer wound assay, or "scratch assay", is a standard protocol to quantitatively assess cell migration in vitro. It encompasses the growth of a confluent cell monolayer followed by the mechanical creation of a scratch representing a wound. Wound dynamics was then monitored over time and compared between control and applied-field conditions to quantify cell population motility.
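As a hedged illustration of how scratch-assay wound dynamics can be quantified (the abstract does not specify the image-analysis pipeline used), a minimal sketch computes the percentage of wound closure from binary masks of the cell-free area at two time points; the masks and the 6 h interval are invented for the example.

```python
import numpy as np

def wound_closure_percent(mask_t0, mask_t):
    """Percentage of the initial cell-free (wound) area closed at time t.

    mask_t0, mask_t: boolean arrays where True marks the cell-free region.
    """
    a0 = np.count_nonzero(mask_t0)
    at = np.count_nonzero(mask_t)
    return 100.0 * (a0 - at) / a0

# Toy masks: the wound (True) shrinks from a 40-pixel to a 24-pixel wide stripe
img_shape = (200, 200)
mask_t0 = np.zeros(img_shape, dtype=bool); mask_t0[:, 80:120] = True
mask_t6 = np.zeros(img_shape, dtype=bool); mask_t6[:, 88:112] = True
print(f"Closure after 6 h: {wound_closure_percent(mask_t0, mask_t6):.1f} %")
```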

Keywords: DC non-homogeneous electric fields, electrotaxis, microfluidic biochip, wound healing

Procedia PDF Downloads 250
117 Shameful Heroes of Queer Cinema: A Critique of Mumbai Police (2013) and My Life Partner (2014)

Authors: Payal Sudhan

Abstract:

Bollywood and other local film industries in India make a range of commercial films that attract vast viewership. Love, heroism, action, adventure, revenge, etc., are some of the dearest themes chosen by filmmakers of popular film industries across the world. However, sexuality has become an issue to address within the cinema, and such films feature in small numbers compared to other themes. One can easily assume that homosexuality is unlikely to be a favorite theme in Indian popular cinema. This does not mean that there are absolutely no films made on the issues of homosexuality; there have been several attempts. Earlier, some movies depicted homosexual (gay) characters as comedians, a practice which continued until the beginning of the 21st century. The study aims to explore how modern homophobia and stereotypes are represented in films and how they affect the portrayal of homosexuality in recent Malayalam cinema. The study primarily focuses on Mumbai Police (2013) and My Life Partner (2014) and tries to explain social space, the idea of a cure, and criminality. The first film selected for analysis, Mumbai Police (2013), is a crime thriller. The nonlinear narration of the movie reveals, towards the end, the murderer of ACP Aryan IPS, who was shot dead at a public meeting. In the end, the culprit is the investigating officer, ACP Antony Moses, himself a close friend and colleague of the victim. Much to one's curiosity, the primary cause turns out to be a sexual relationship Antony has had. My Life Partner can generically be classified as a drama. The movie puts forth male bonding and visibly riddles the notions of love and sex between Kiran and his roommate Richard. Running along the same track, the film deals with a different 'event': the exclusive celebration of male bonding. The socio-cultural background of the cinema is heterosexual, and the elements of this heterosexual social setup meet the ends of the diplomacy of Malayalam queer visual culture. The film reveals the lives of two gay men who are humiliated by the larger heterosexual society; in the end, Kiran dies because of extreme humiliation. The paper is a comparative and cultural analysis of the two movies, My Life Partner and Mumbai Police. I try to bring all the points of comparison together and explain the similarities and differences, and how one movie differs from the other. Thus, my attempt here is to explain how stereotypes and homophobia, along with other related issues, are represented in these two movies.

Keywords: queer cinema, homophobia, malayalam cinema, queer films

Procedia PDF Downloads 201
116 Development of Metal-Organic Frameworks-Type Hybrid Functionalized Materials for Selective Uranium Extraction

Authors: Damien Rinsant, Eugen Andreiadis, Michael Carboni, Daniel Meyer

Abstract:

Different types of materials have been developed for solid/liquid uranium extraction processes, such as functionalized organic polymers, hybrid silica or inorganic adsorbents. In general, these materials exhibit a moderate affinity for uranyl ions and poor selectivity against impurities like iron, vanadium or molybdenum. Moreover, the lack of structural organization of these materials generates ion diffusion issues inside the material. Therefore, the aim of our study is to develop efficient and organized materials that are stable in the acidic media encountered in uranium extraction processes. Metal organic frameworks (MOFs) are hybrid crystalline materials consisting of an inorganic part (cluster or metal ions) and tailored organic linkers connected via coordination bonds. These hierarchical materials have exceptional surface area, thermal stability and a large variety of tunable structures. However, due to the reversibility of the constitutive coordination bonds, MOFs have moderate stability in strongly complexing or acidic media. Only a few of them are known to be stable in aqueous media, and only one example is described in strongly acidic media. However, these conditions are very often encountered in the environmental remediation of mine wastewaters. To tackle the challenge of developing MOFs adapted for uranium extraction from acid mine waters, we have investigated the stability of several materials. To ensure good stability, we have synthesized and characterized different materials based on highly coordinated metal clusters, such as LnOFs and zirconium-based materials. Among the latter, the UiO family shows great stability in sulfuric acid media, even in the presence of 1.4 M sodium sulfate at pH 2. However, the stability in phosphoric media is reduced due to the high affinity between zirconium and phosphate ligands. Based on these results, we have developed a tertiary-amine-functionalized MOF, denoted UiO-68-NMe2, particularly adapted to the extraction of the anionic uranyl(VI) sulfate complexes mainly present in acid mine solutions. The adsorption capacity of the material has been determined upon varying the total sulfate concentration, contact time and uranium concentration. The extraction tests put in evidence different phenomena due to the complexity of the extraction media and the interaction between the MOF and the sulfate anion. Finally, the extraction mechanisms and the interaction between uranyl and the MOF structure have been investigated. The functionalized material UiO-68-NMe2 has been characterized in the presence and absence of uranium by FT-IR, UV and Raman techniques. Moreover, the stability of the protonated amino-functionalized MOF has been evaluated. The synthesis, characterization and evaluation of this type of hybrid material, particularly adapted to uranium extraction in sulfuric acid media by an anionic exchange mechanism, pave the way for the development of metal organic frameworks functionalized with other chelating motifs, such as bifunctional ligands showing enhanced affinity and selectivity for uranium in acidic and complexing media. Work in this direction is currently in progress.
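The adsorption-capacity measurements mentioned above are commonly summarised by fitting an isotherm model; the sketch below fits a Langmuir isotherm with illustrative numbers, which is an assumption for illustration only, since the abstract does not state which model the authors used.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, q_max, k_l):
    """Langmuir isotherm: adsorbed amount q as a function of equilibrium concentration ce."""
    return q_max * k_l * ce / (1.0 + k_l * ce)

# Illustrative equilibrium data (ce in mg/L, qe in mg/g), not taken from the paper
ce = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])
qe = np.array([12.0, 21.0, 38.0, 52.0, 61.0, 66.0])

# Fit the maximum capacity q_max and affinity constant K_L
(q_max, k_l), _ = curve_fit(langmuir, ce, qe, p0=(70.0, 0.02))
print(f"Fitted q_max = {q_max:.1f} mg/g, K_L = {k_l:.3f} L/mg")
```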

Keywords: extraction, MOF, ligand, uranium

Procedia PDF Downloads 136
115 Welfare Dynamics and Food Prices' Changes: Evidence from Landholding Groups in Rural Pakistan

Authors: Lubna Naz, Munir Ahmad, G. M. Arif

Abstract:

This study analyzes the static and dynamic welfare impacts of food price changes for various landholding groups in Pakistan. The study uses three classifications of land ownership, the landless, small landowners and large landowners, for the analysis. It draws on the Pakistan Rural Household Survey (PRHS), a panel survey of rural households from the two largest provinces (Sindh and Punjab) of Pakistan conducted by the Pakistan Institute of Development Economics, Islamabad, and uses all three waves (2001, 2004 and 2010) of the PRHS. This research work makes three important contributions to the literature. First, this study uses the Quadratic Almost Ideal Demand System (QUAIDS) to estimate demand functions for eight food groups: cereals, meat, milk and milk products, vegetables, cooking oil, pulses and other food. The study estimates the food demand functions with Nonlinear Seemingly Unrelated Regression (NLSUR) and employs a Lagrange Multiplier test on the coefficient of the squared expenditure term to determine whether the squared expenditure term should be included. Test results support the inclusion of the squared expenditure term in the food demand model for each of the landholding groups (landless, small landowners and large landowners). The study tests for endogeneity and uses a control function for its correction. The problem of observed zero expenditure is dealt with through a two-step procedure. Second, it defines low-price and high-price periods based on a literature review, and uses elasticity coefficients from QUAIDS to analyze the static and dynamic welfare effects of food price changes across periods (first- and second-order Taylor approximations of the expenditure function are used). The study estimates the compensating variation (CV), the money-metric loss from food price changes, for the landless, small and large landowners. Third, this study compares the findings on the welfare implications of food price changes based on QUAIDS with earlier research in Pakistan, which used other specifications of the demand system. The findings indicate that the dynamic welfare impacts of food price changes are lower than the static welfare impacts for all landholding groups. The static and dynamic welfare impacts of food price changes are highest for the landless. The study suggests that the government should extend social security nets to the landless poor, and in particular to the vulnerable landless (those without livestock), to redress the short-term impact of food price increases. In addition, the government should stabilize food prices, and particularly cereal prices, in the long run.
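A minimal sketch of the first- and second-order Taylor approximations of compensating variation from budget shares, log price changes, and price elasticities is given below; the three-good setup and all numbers are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

def compensating_variation(w, dlnp, eps):
    """First- and second-order approximations of CV as a share of total expenditure.

    w    : budget shares (n,)
    dlnp : log price changes (n,)
    eps  : matrix of compensated price elasticities (n, n)
    """
    first_order = w @ dlnp
    second_order = first_order + 0.5 * dlnp @ (np.diag(w) @ eps) @ dlnp
    return first_order, second_order

# Illustrative 3-good example (cereals, milk, other food), values are assumed
w = np.array([0.30, 0.20, 0.50])             # budget shares
dlnp = np.array([0.15, 0.10, 0.05])          # price increases of 15%, 10%, 5%
eps = np.array([[-0.4, 0.1, 0.2],
                [0.1, -0.5, 0.3],
                [0.1, 0.1, -0.3]])           # compensated elasticities (assumed)

fo, so = compensating_variation(w, dlnp, eps)
print(f"CV ≈ {fo:.3%} (1st order), {so:.3%} (2nd order) of household expenditure")
```

The second-order term captures substitution away from goods whose relative prices rise, which is why it typically lowers the estimated welfare loss relative to the first-order approximation.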

Keywords: QUAIDS, Lagrange multiplier, NLSUR, Taylor approximation

Procedia PDF Downloads 347
114 Spatial and Temporal Variability of Meteorological Drought Including Atmospheric Circulation in Central Europe

Authors: Andrzej Wałęga, Marta Cebulska, Agnieszka Ziernicka-Wojtaszek, Wojciech Młocek, Agnieszka Wałęga, Tommaso Caloiero

Abstract:

Drought is one of the natural phenomena influencing many aspects of human activities, like food production, agriculture, industry, and the ecological conditions of the environment. In the area of the Polish Carpathians, there are periods with a deficit of rainwater and an increasing frequency of dry months, especially in the cold half of the year. The aim of this work is a spatial and temporal analysis of drought, expressed as the SPI, in a heterogeneous area of the Polish Carpathians and the highland region in the central part of Europe, based on long-term precipitation data. Also, to the best of our knowledge, for the first time in this work, drought characteristics analyzed via the SPI are discussed against the calendar of atmospheric circulation. The study region is the Upper Vistula Basin, located in the southern and south-eastern part of Poland. In this work, monthly precipitation from 56 rainfall stations was analysed from 1961 to 2022. The 3-, 6-, 9-, and 12-month Standardized Precipitation Indices (SPI) were used as indicators of meteorological drought. For the 3-month SPI, the main climatic mechanisms determining extreme droughts were identified based on the calendar of synoptic circulations. The Mann-Kendall test was used to detect trends in extreme droughts. Statistically significant trends of the SPI were observed at 52.7% of the analyzed stations, and in most cases a positive trend was observed. Statistically significant trends were more frequently observed at stations located in the western part of the analyzed region. Long-term droughts, represented by the 12-month SPI, occurred at all stations but not in all years. Short-term droughts (3-month SPI) were most frequent in the winter season, droughts on the 6- and 9-month scales in winter and spring, and 12-month droughts in winter and autumn, respectively. The spatial distribution of drought was highly diverse. The most intensive drought occurred in 1984, with the 6-month SPI covering 98% of the analyzed region and the 9- and 12-month SPI covering 90% of the entire region. Droughts exhibit a seasonal pattern, with a dominant 10-year periodicity for all analyzed variants of the SPI. Additionally, Fourier analysis revealed a 2-year periodicity for the 3-, 6-, and 9-month SPI and a 31-year periodicity for the 12-month SPI. The results provide insights into the typical climatic conditions in Poland, with strong seasonality in precipitation. The study highlighted that short-term extreme droughts, represented by the 3-month SPI, are often caused by anticyclonic situations with high-pressure wedges Ka and Wa, and anticyclonic West, as observed in 52.3% of cases. These findings are crucial for understanding the spatial and temporal variability of short- and long-term extreme droughts in Central Europe, particularly for the agriculture sector dominant in the northern part of the analyzed region, where drought frequency is highest.
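A minimal sketch of how a 3-month SPI series can be derived, by fitting a gamma distribution to 3-month accumulated precipitation and transforming to the standard normal, is shown below; it simplifies the standard procedure (no month-by-month fitting, no zero-precipitation correction) and uses synthetic data rather than the station records of the study.

```python
import numpy as np
from scipy import stats

def spi(precip_monthly, window=3):
    """Standardized Precipitation Index from a monthly precipitation series.

    Simplification: a single gamma fit over the whole accumulated series
    (a full implementation fits each calendar month separately and treats
    zero totals explicitly).
    """
    precip = np.asarray(precip_monthly, dtype=float)
    # Rolling accumulation over the chosen window (3, 6, 9 or 12 months)
    accum = np.convolve(precip, np.ones(window), mode="valid")
    shape, loc, scale = stats.gamma.fit(accum, floc=0)
    cdf = stats.gamma.cdf(accum, shape, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)

# Illustrative synthetic record (mm/month), not the station data of the study
rng = np.random.default_rng(0)
monthly_precip = rng.gamma(shape=2.0, scale=30.0, size=240)
spi3 = spi(monthly_precip, window=3)
print("Months with extreme drought (SPI-3 <= -2):", int(np.sum(spi3 <= -2)))
```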

Keywords: atmospheric circulation, drought, precipitation, SPI, the Upper Vistula Basin

Procedia PDF Downloads 49
113 State, Public Policies, and Rights: Public Expenditure and Social and Welfare Policies in America, as Opposed to Argentina

Authors: Mauro Cristeche

Abstract:

This paper approaches the intervention of the American state in the social arena and the modeling of the rights system from the standpoint of the Argentinian experience, by observing the characteristics of its federal budgetary system, the evolution of social public spending and welfare programs in recent years, labor and poverty statistics, and the changes in the labor market structure. The analysis seeks to combine different methodologies and sources: in-depth interviews with specialists, analysis of theoretical and mass-media material, and statistical sources. Among the results, it can be mentioned that the tendency toward state interventionism (what has been called the 'nationalization of social life') is quite evident in the United States and manifests itself in multiple forms. The bibliography consulted and the experts interviewed point to this increase in the presence of the state in historical terms (beyond short-term setbacks), in terms of increases in public spending, fiscal pressure, public employment, protective and control mechanisms, the extension of welfare policies to poor sectors, etc. In fact, despite the significant differences between the two countries, the United States and Argentina show common patterns of behavior with respect to the aforementioned phenomena. On the other hand, the dissimilarities are also important. Some of them are determined by each country's own political history. The influence of political parties on the economic model seems more decisive in the United States than in Argentina, where the tendency toward state interventionism is more stable. The centrality of health spending is evident in the United States, while in Argentina the discussion is more concentrated on the social security system and public education. The biggest problem of the labor market in the United States is the disqualification of workers as a consequence of technological development, while in Argentina it is a result of the weakness of the labor market itself. Another big difference is the huge American public spending on defense. The more federal character of the American state, as opposed to the centralized Argentine state, is also a factor of differential analysis. American public employment (around 10%) is considerably lower than the Argentinian (around 18%). The social statistics show differences, but inequality and poverty have been growing as a trend in recent decades in both countries: according to official rates, poverty stands at 14% in the United States and 33% in Argentina. American public spending is important (welfare spending and total public spending represent around 12% and 34% of GDP, respectively), but somewhat lower than the Latin American or European average. In both cases, the tendency toward underemployment and unemployment linked to disqualification has not assumed serious gravity. Probably one of the most important aspects of the analysis is that private initiative and public intervention are much more intertwined in the United States, which makes state intervention more 'fuzzy', while in Argentina the difference is clearer. Finally, the power of capital accumulation in the United States and, more specifically, of its industrial and services sectors, which continue to be the engine of the economy, marks great differences from Argentina, whose economy is supported by its agro-industrial sector and its public sector.

Keywords: state intervention, welfare policies, labor market, system of rights, United States of America

Procedia PDF Downloads 111
112 Enhancing Engineering Students' Educational Experience: Studying Hydrostatic Pumps Association System in Fluid Mechanics Laboratories

Authors: Alexandre Daliberto Frugoli, Pedro Jose Gabriel Ferreira, Pedro Americo Frugoli, Lucio Leonardo, Thais Cavalheri Santos

Abstract:

Laboratory classes in engineering courses are essential for students to be able to integrate theory with practical reality by handling equipment and observing experiments. In investigating physical phenomena, students can learn about the complexities of science. Over the past years, universities in developing countries have been reducing the course load of engineering programs, in accordance with cost-cutting agendas. Quality education is an object of study for researchers and requires educators and educational administrators able to demonstrate that institutions can provide great learning opportunities at reasonable costs. Didactic test benches are indispensable equipment in educational activities related to the study of turbo-hydraulic pumps and pumping facilities, but they have a high cost and require long class time due to measurement and equipment adjustment times. In order to overcome these obstacles, and in line with the professional objectives of an engineer, GruPEFE - UNIP (Research Group in Physics Education for Engineering - Universidade Paulista) has developed a multi-purpose stand for the discipline of fluid mechanics which allows the study of velocity and flow meters, head losses and pump association. In this work, results obtained for the association of hydraulic pumps in series and in parallel are presented and discussed, mainly analyzing the repeatability of the experimental procedures and their agreement with theory. For the association in series, two identical pumps were used, connecting the discharge of one pump to the suction of the next, allowing the fluid to receive the power of all machines in the association. The characteristic curve of the set is obtained from the curves of each of the pumps by adding the heads corresponding to the same flow rates. The same pumps were associated in parallel. In this association, the discharge piping is common to the two machines. The characteristic curve of the set was obtained by adding, for each value of H (head), the flow rates of each pump. For the tests, the input and output pressure of each pump were measured. For each set-up there were three sets of measurements, varying the flow rate in the range from 6.0 to 8.5 m³/h. For both associations, the results showed excellent repeatability, with variations of less than 10% between sets of measurements, and also good agreement with theory. This variation is consistent with the instrumental uncertainty. Thus, the results validate the use of the fluid mechanics bench designed for didactic purposes. As future work, a digital acquisition system is being developed, using differential sensors for extremely low pressures (approximately 2 to 2000 Pa) and the Arduino microcontroller.
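The construction of the characteristic curves of the two associations follows directly from the rule stated above (heads add at equal flow in series, flows add at equal head in parallel); a minimal sketch with an invented quadratic pump curve, not the bench's measured curve, is given below.

```python
import numpy as np

def pump_head(q):
    """Illustrative characteristic curve of one pump: head H [m] vs flow q [m^3/h]."""
    return 25.0 - 0.18 * q**2

q = np.linspace(0.0, 10.0, 101)            # flow rates [m^3/h]
h_single = pump_head(q)

# Series association: same flow through both pumps, heads add
h_series = 2.0 * h_single

# Parallel association: same head across both pumps, flows add
h_grid = np.linspace(0.0, h_single.max(), 101)
q_one = np.interp(h_grid, h_single[::-1], q[::-1])   # invert the monotonic pump curve
q_parallel = 2.0 * q_one

print(f"At q = 7 m3/h: single pump H = {pump_head(7.0):.1f} m, series H = {2 * pump_head(7.0):.1f} m")
```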

Keywords: engineering education, fluid mechanics, hydrostatic pumps association, multi-purpose stand

Procedia PDF Downloads 202
111 An Adaptive Decomposition for the Variability Analysis of Observation Time Series in Geophysics

Authors: Olivier Delage, Thierry Portafaix, Hassan Bencherif, Guillaume Guimbretiere

Abstract:

Most observation data sequences in geophysics can be interpreted as resulting from the interaction of several physical processes at several time and space scales. As a consequence, measurement time series in geophysics often have characteristics of non-linearity and non-stationarity, thereby exhibiting strong fluctuations at all time scales, and require a time-frequency representation to analyze their variability. Empirical Mode Decomposition (EMD) is a relatively new technique and forms part of a more general signal processing method called the Hilbert-Huang transform. This analysis method turns out to be particularly suitable for non-linear and non-stationary signals; it consists in decomposing a signal in an auto-adaptive way into a sum of oscillating components named IMFs (Intrinsic Mode Functions), and thereby acts as a bank of bandpass filters. The advantages of the EMD technique are that it is entirely data driven and that it provides the principal variability modes of the dynamics represented by the original time series. However, the main limiting factor is the frequency resolution, which may give rise to the mode mixing phenomenon, where the spectral contents of some IMFs overlap each other. To overcome this problem, J. Gilles proposed an alternative entitled the Empirical Wavelet Transform (EWT), which consists in building a bank of filters from the segmentation of the Fourier spectrum of the original signal. The method used is based on the idea utilized in the construction of both Littlewood-Paley and Meyer's wavelets. The heart of the method lies in the segmentation of the Fourier spectrum based on local maxima detection, in order to obtain a set of non-overlapping segments. Because it is linked to the Fourier spectrum, the frequency resolution provided by EWT is higher than that provided by EMD and therefore allows the mode-mixing problem to be overcome. On the other hand, while the EWT technique is able to detect the frequencies involved in the fluctuations of the original time series, EWT does not allow the detected frequencies to be associated with a specific mode of variability, as EMD does. Because EMD is closer to the observation of physical phenomena than EWT, we propose here a new technique called EAWD (Empirical Adaptive Wavelet Decomposition), based on the coupling of the EMD and EWT techniques, which uses the spectral density content of the IMFs to optimize the segmentation of the Fourier spectrum required by EWT. In this study, the EMD and EWT techniques are described, then the EAWD technique is presented. A comparison of the results obtained respectively by the EMD, EWT and EAWD techniques on time series of total ozone columns recorded at Reunion Island over the 1978-2019 period is discussed. This study was carried out as part of the SOLSTYCE project, dedicated to the characterization and modeling of the underlying dynamics of time series issued from complex systems in atmospheric sciences.
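To illustrate the first stage of the EAWD idea, the sketch below decomposes a synthetic two-tone signal with EMD and reads off the dominant frequency of each IMF, which is the spectral information EAWD uses to guide the EWT segmentation; the PyEMD package and the synthetic signal are assumptions of this example, not part of the study.

```python
import numpy as np
from PyEMD import EMD   # assumed third-party package (pip install EMD-signal)

# Synthetic signal: a slow annual-like cycle plus a faster oscillation and noise
fs = 12.0                                  # 12 samples per "year"
t = np.arange(0, 40, 1.0 / fs)
signal = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 3.3 * t) \
         + 0.1 * np.random.default_rng(1).standard_normal(t.size)

imfs = EMD()(signal)                       # auto-adaptive decomposition into IMFs

# Dominant frequency of each IMF: candidate boundaries for the EWT segmentation
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
for k, imf in enumerate(imfs):
    spectrum = np.abs(np.fft.rfft(imf))
    print(f"IMF {k + 1}: dominant frequency ≈ {freqs[np.argmax(spectrum)]:.2f} cycles/year")
```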

Keywords: adaptive filtering, empirical mode decomposition, empirical wavelet transform, filter banks, mode-mixing, non-linear and non-stationary time series, wavelet

Procedia PDF Downloads 113
110 Popular Modern Devotional Prints: The Construction of Identity between the Visual and Viewer in Public Interaction Spaces

Authors: Muhammad Asghar, Muhammad Ali, Farwah Batool

Abstract:

Despite the general belief in Islam that figural representations should be avoided, particularly as propagated by the Deobandis, a religious group influenced by Salafi and Wahhabi ideas, public interaction spaces such as shops and offices are decorated with popular, mass-produced, modern devotional prints. This study focuses on popular visual culture and its display in public interaction places such as shops, and discusses how people establish relationships with images. The method adopted was basically ethnographic: to describe as precisely and completely as possible the phenomena to be studied, using the language and conceptual categories of the interlocutors themselves. This study has been enriched by ethnographic field research conducted from October to December 2015 in the major cities of Punjab and, through brief forays, their surroundings, where we explored how looking upon images performs religious identity within public space. The study examines the patterns of aesthetics and taste in the shops of, especially, common people whose sensibilities have not been refined or influenced by exposure to any narrative or fine arts. Furthermore, it is our intention to question general beliefs and opinions in the context of popular practices, that is, the way in which people relate to these prints. The interpretations and analyses presented in this study illuminate how people create meaning through the display of such items of material culture in the immediate settings of their spaces. This study also seeks to demonstrate how popular Islam is practiced, transformed and understood through the display of popular representations: figures of piety like Sufi saints, or their shrines, are important to many believers and thus occupy important places in their shops. The findings are supported by empirical evidence and based on interviews with shopkeepers, owners and office employees. Looking upon these popular modern devotional prints keeps people's reverence for the personages alive. Because of their sacred themes, they effect a relationship between the saint and the beholders, as well as serving to symbolize and reinforce belief, since they become powerful loci of emotional attachment. Collectively, such devotional prints satisfy a local taste and help people establish contact with God through the saints' intercession, in order to receive protection and benediction and help with spiritual, mental and material problems. By putting all these facets of belief together, we gain insight into both the subjective and cognitive role that icons of saints play in the lives of believers. Their veneration through ingeniously contrived modern means of production makes a significant contribution to an understanding of how such imagery promotes a powerful belief in Sufi saints, which ultimately indicates how popular Islam is practiced and understood at the grassroots level.

Keywords: ethnographic field research, popular visual culture, protected space, religious identity

Procedia PDF Downloads 207
109 Effects of Bipolar Plate Coating Layer on Performance Degradation of High-Temperature Proton Exchange Membrane Fuel Cell

Authors: Chen-Yu Chen, Ping-Hsueh We, Wei-Mon Yan

Abstract:

Over the past few centuries, human requirements for energy have been met by burning fossil fuels. However, exploiting this resource has led to global warming and innumerable environmental issues. Thus, finding alternative solutions to the growing demand for energy has recently been driving the development of low-carbon and even zero-carbon energy sources. Wind power and solar energy are good options, but they have the problem of unstable power output due to unpredictable weather conditions. To overcome this problem, a reliable and efficient energy storage sub-system is required in future distributed-power systems. Among all kinds of energy storage technologies, the fuel cell system with hydrogen storage is a promising option because it is suitable for large-scale and long-term energy storage. The high-temperature proton exchange membrane fuel cell (HT-PEMFC) with metallic bipolar plates is a promising fuel cell system because an HT-PEMFC can tolerate a higher CO concentration and the utilization of metallic bipolar plates can reduce the cost of the fuel cell stack. However, the operating life of metallic bipolar plates is a critical issue because of corrosion. As a result, in this work, we apply different coating layers to the metal surface and investigate the protection performance of the coating layers. The tested bipolar plates include uncoated SS304 bipolar plates, titanium nitride (TiN) coated SS304 bipolar plates and chromium nitride (CrN) coated SS304 bipolar plates. The results show that the TiN coated SS304 bipolar plate has the lowest contact resistance and through-plane resistance and the best cell performance and operating life among all tested bipolar plates. The long-term in-situ fuel cell tests show that the HT-PEMFC with TiN coated SS304 bipolar plates has the lowest performance decay rate, followed by the CrN coated SS304 bipolar plate, with the uncoated SS304 bipolar plate showing the worst performance decay rate. The performance decay rates with TiN coated SS304, CrN coated SS304 and uncoated SS304 bipolar plates are 5.324×10⁻³ % h⁻¹, 4.513×10⁻² % h⁻¹ and 7.870×10⁻² % h⁻¹, respectively. In addition, the EIS results indicate that the uncoated SS304 bipolar plate has the highest growth rate of ohmic resistance, whereas the ohmic resistance with the TiN coated SS304 bipolar plates only increases slightly with time. The growth rates of ohmic resistance with TiN coated SS304, CrN coated SS304 and uncoated SS304 bipolar plates are 2.85×10⁻³ h⁻¹, 3.56×10⁻³ h⁻¹, and 4.33×10⁻³ h⁻¹, respectively. On the other hand, the charge transfer resistances with these three bipolar plates all increase with time, but the growth rates are all similar. In addition, the effective catalyst surface areas with all bipolar plates do not change significantly with time. Thus, it is inferred that the major reason for the performance degradation is the increase in ohmic resistance over time, which is associated with the corrosion and oxidation phenomena on the surface of the stainless steel bipolar plates.
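The reported resistance growth rates can be recovered from a simple linear fit of the EIS ohmic resistance against operating time; the sketch below shows the idea with invented readings, not the measured values of the study.

```python
import numpy as np

# Illustrative EIS readings: ohmic resistance [ohm cm^2] vs operating time [h]
# (assumed values, not the measurements reported in the paper)
time_h = np.array([0, 100, 200, 300, 400, 500])
r_ohmic = np.array([0.110, 0.112, 0.115, 0.117, 0.120, 0.122])

# Relative growth rate: slope of R(t)/R(0) against time, expressed in h^-1
slope, intercept = np.polyfit(time_h, r_ohmic / r_ohmic[0], 1)
print(f"Relative ohmic-resistance growth rate ≈ {slope:.2e} h^-1")
```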

Keywords: coating layer, high-temperature proton exchange membrane fuel cell, metallic bipolar plate, performance degradation

Procedia PDF Downloads 261
108 Seafloor and Sea Surface Modelling in the East Coast Region of North America

Authors: Magdalena Idzikowska, Katarzyna Pająk, Kamil Kowalczyk

Abstract:

Seafloor topography is a fundamental issue in geological, geophysical, and oceanographic studies. Single-beam or multibeam sonars attached to the hulls of ships are used to emit a hydroacoustic signal from transducers and reproduce the topography of the seabed. This solution provides relevant accuracy and spatial resolution. Bathymetric data from ship surveys are provided by the National Centers for Environmental Information of the National Oceanic and Atmospheric Administration. Unfortunately, most of the seabed is still unidentified, as there are still many gaps to be explored between ship survey tracks. Moreover, such measurements are very expensive and time-consuming. One solution is the raster bathymetric models shared by the General Bathymetric Chart of the Oceans. The products offered are a compilation of different sets of data, raw or processed. Measurements of gravity anomalies also serve as indirect data for the development of bathymetric models. Some forms of seafloor relief (e.g., seamounts) increase the force of the Earth's pull, leading to changes in the sea surface. Based on satellite altimetry data, sea surface height and marine gravity anomalies can be estimated, and based on the anomalies, it is possible to infer the structure of the seabed. The main goal of the work is to create regional bathymetric models and models of the sea surface in the area of the east coast of North America, a region of seamounts and undulating seafloor. The research includes an analysis of the methods and techniques used, an evaluation of the interpolation algorithms applied, model densification, and the creation of grid models. The data used are raster bathymetric models in NetCDF format, survey data from multibeam soundings in MB-System format, and satellite altimetry data from the Copernicus Marine Environment Monitoring Service. The methodology includes data extraction, processing, mapping, and spatial analysis. Visualization of the results was carried out with Geographic Information System tools. The result is an extension of the state of knowledge on the quality and usefulness of the data used for seabed and sea surface modeling, and on the accuracy of the generated models. Sea level is averaged over time and space (excluding waves, tides, etc.); its changes, along with knowledge of the topography of the ocean floor, inform us indirectly about the volume of the entire ocean. The true shape of the ocean surface is further varied by such phenomena as tides, differences in atmospheric pressure, wind systems, thermal expansion of water, or phases of ocean circulation. Depending on the location of the point, the greater the depth, the lower the trend of sea level change. Studies show that combining data sets from different sources and with different accuracies can affect the quality of sea surface and seafloor topography models.
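A minimal sketch of the gridding step used to turn scattered soundings into a raster bathymetric model is given below; the interpolation method and the synthetic soundings are assumptions for illustration, since the study evaluates several algorithms rather than prescribing one.

```python
import numpy as np
from scipy.interpolate import griddata

# Illustrative scattered soundings: longitude, latitude, depth [m] (synthetic, not survey data)
rng = np.random.default_rng(42)
lon = rng.uniform(-65.0, -60.0, 500)
lat = rng.uniform(38.0, 42.0, 500)
depth = -3000.0 + 800.0 * np.sin(lon) * np.cos(lat) + rng.normal(0.0, 50.0, 500)

# Regular grid at 0.05 degree resolution
grid_lon, grid_lat = np.meshgrid(np.arange(-65.0, -60.0, 0.05),
                                 np.arange(38.0, 42.0, 0.05))

# Interpolate scattered depths onto the grid (linear here; cubic or kriging are alternatives)
grid_depth = griddata((lon, lat), depth, (grid_lon, grid_lat), method="linear")
print("Grid shape:", grid_depth.shape, "| mean depth:", np.nanmean(grid_depth).round(1), "m")
```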

Keywords: seafloor, sea surface height, bathymetry, satellite altimetry

Procedia PDF Downloads 58
107 Sustainable Transition of Universal Design for Learning-Based Teachers’ Latent Profiles from Contact to Distance Education

Authors: Alvyra Galkienė, Ona Monkevičienė

Abstract:

The full participation of all pupils in the overall educational process is defined by the concept of inclusive education, which is gradually evolving in education policy and practice. It includes the full participation of all pupils in a shared learning experience and educational practices that address barriers to learning. Inclusive education applying the principles of Universal Design for Learning (UDL), which include promoting students' involvement in learning processes, guaranteeing a deep understanding of the analysed phenomena, initiating self-directed learning, and using e-tools to create a barrier-free environment, is a prerequisite for the personal success of each pupil. However, the sustainability of quality education is affected by the transformation of education systems. This was particularly evident during the forced transition from contact to distance education in the COVID-19 pandemic. Research Problem: The transformation of the educational environment from a real to a virtual one and the loss of traditional forms of educational support highlighted the need for new research revealing the individual profiles of teachers using UDL-based learning and the pathways of sustainable transfer of successful practices to non-conventional learning environments. Research Methods: In order to identify individual latent teacher profiles that encompass the essential components of UDL-based inclusive teaching and direct leadership of students' learning, the quantitative analysis software Mplus was used for latent profile analysis (LPA). In order to reveal proven, i.e., sustainable, pathways for the transit of the components of UDL-based inclusive learning to distance learning, latent profile transit analysis (LPTA) was carried out, also in Mplus. An online self-report questionnaire was used for data collection. It consisted of blocks of questions designed to reveal the experiences of subject teachers in contact and distance learning settings. 1432 Lithuanian, Latvian, and Estonian subject teachers took part in the survey. Research Results: The LPA revealed eight latent teacher profiles with different characteristics of UDL-based inclusive education or traditional teaching under contact teaching conditions. Only 4.1% of the subject teachers had a profile characterised by a sustained UDL approach to teaching: promoting pupils' self-directed learning; empowering pupils' engagement, understanding, independent action, and expression; promoting pupils' e-inclusion; and reducing the teacher's direct supervision of the students. The other teacher profiles were characterised by limited UDL-based inclusive education, either due to the lack of one or more of its components or to the predominance of direct teacher guidance. The LPTA allowed us to highlight the following transit paths of teacher profiles under the extreme conditions of the transition from contact to distance education: teachers staying in the same profile of UDL-based inclusive education (sustainable transit), teachers jumping to other profiles (unsustainable transit in the case of barriers), and teachers from other profiles moving to this profile (ongoing transit, taking advantage of the new possibilities in the teaching process).
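Latent profile analysis of the kind run in Mplus is, in essence, a finite Gaussian mixture over continuous indicator variables; the rough sketch below reproduces that idea with scikit-learn and synthetic indicator scores, which is an assumption for illustration and not the authors' toolchain or data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Illustrative indicator scores for the UDL components (synthetic, not the survey data):
# columns = engagement, understanding, self-directed learning, e-inclusion, direct guidance
rng = np.random.default_rng(7)
group_a = rng.normal([4.5, 4.4, 4.2, 4.0, 2.0], 0.3, size=(60, 5))   # UDL-oriented profile
group_b = rng.normal([2.5, 2.8, 2.2, 2.0, 4.5], 0.3, size=(340, 5))  # teacher-directed profile
scores = np.vstack([group_a, group_b])

# Compare solutions with 1-4 latent profiles via BIC, as is usual in LPA model selection
for k in range(1, 5):
    gmm = GaussianMixture(n_components=k, covariance_type="diag", random_state=0).fit(scores)
    print(f"{k} profiles: BIC = {gmm.bic(scores):.1f}")
```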

Keywords: distance education, latent teacher profiles, sustainable transit, UDL

Procedia PDF Downloads 72