Search results for: ultra-low frequency waves
3021 Wideband Performance Analysis of C-FDTD Based Algorithms in the Discretization Impoverishment of a Curved Surface
Authors: Lucas L. L. Fortes, Sandro T. M. Gonçalves
Abstract:
This work analyzes the wideband performance, under mesh discretization impoverishment, of the Conformal Finite-Difference Time-Domain (C-FDTD) approaches developed by Raj Mittra, Supriyo Dey, and Wenhua Yu for the Finite-Difference Time-Domain (FDTD) method. These approaches are a simple and efficient way to optimize the scattering simulation of curved surfaces for dielectric and Perfect Electric Conducting (PEC) structures in the FDTD method, since curved surfaces otherwise require dense meshes to reduce the error introduced by surface staircasing. Defined in this work as D-FDTD-Diel and D-FDTD-PEC, these approaches are well known in the literature, but the improvement upon their application has not been broadly quantified for wide frequency bands and poorly discretized meshes. Both approaches improve the accuracy of the simulation without requiring dense meshes, making it possible to exploit poorly discretized meshes that reduce simulation time and computational expense while retaining a desired accuracy. However, their application presents limitations regarding the degree of mesh impoverishment and the desired frequency range. Therefore, the goal of this work is to explore both the wideband and the mesh impoverishment performance of these approaches, to give a wider insight into these aspects of FDTD applications. The D-FDTD-Diel approach consists of modifying the electric field update in the cells intersected by the dielectric surface, taking into account the amount of dielectric material along the mesh cell edges. By accounting for these intersections, D-FDTD-Diel improves accuracy at the cost of computational preprocessing, which is a fair trade-off, since the update modification is quite simple.
Likewise, the D-FDTD-PEC approach consists of modifying the magnetic field update, taking into account the PEC curved surface intersections within the mesh cells and, for a PEC structure in vacuum, the air portion filling the intersected cells when updating the magnetic field values. As with D-FDTD-Diel, D-FDTD-PEC provides better accuracy at the cost of computational preprocessing, although with the drawback of having to meet stability criterion requirements. The algorithms are formulated and applied to a PEC and a dielectric spherical scattering surface with meshes at different levels of discretization, with Polytetrafluoroethylene (PTFE) as the dielectric, a very common material in coaxial cables and connectors for radiofrequency (RF) and wideband applications. The accuracy of the algorithms is quantified, showing how the approaches' wideband performance drops along with the mesh impoverishment. The benefits in computational efficiency, simulation time, and accuracy are also shown and discussed according to the desired frequency range, showing that poorly discretized FDTD meshes can be exploited more efficiently while retaining the desired accuracy. The results provide a broader insight into the limitations of the C-FDTD approaches in poorly discretized and wide-frequency-band simulations of dielectric and PEC curved surfaces, limitations which are not clearly defined or detailed in the literature and are therefore a novelty. These approaches are also expected to be applied to the modeling of curved RF components for wideband and high-speed communication devices in future work.
Keywords: accuracy, computational efficiency, finite difference time-domain, mesh impoverishment
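As a rough illustration of the D-FDTD-Diel idea described above, the edge update can be sketched as a 1-D FDTD electric-field step whose permittivity is length-weighted by the dielectric fill fraction of each cell edge. This is a minimal sketch of the weighting principle only, not the authors' full conformal scheme; the function names and the 1-D reduction are our own.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def effective_permittivity(fill_fraction, eps_r_diel):
    """Length-weighted effective permittivity of a cell edge that is
    partially inside a dielectric (fill_fraction in [0, 1])."""
    return EPS0 * (fill_fraction * eps_r_diel + (1.0 - fill_fraction))

def update_e_field(e, h, dt, dx, eps_eff):
    """Conformal-style E-field update: the curl of H is scaled by a
    per-edge effective permittivity instead of a uniform epsilon."""
    e[1:-1] += dt / (eps_eff[1:-1] * dx) * (h[1:] - h[:-1])
    return e
```

A fully vacuum edge (fill fraction 0) recovers the ordinary free-space update, and a fully filled edge recovers the bulk dielectric update; intermediate fractions interpolate between the two.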
Procedia PDF Downloads 134
3020 A Systematic Review Investigating the Use of EEG Measures in Neuromarketing
Authors: A. M. Byrne, E. Bonfiglio, C. Rigby, N. Edelstyn
Abstract:
Introduction: Neuromarketing employs numerous methodologies when investigating products and advertisement effectiveness. Electroencephalography (EEG), a non-invasive measure of electrical activity from the brain, is commonly used in neuromarketing. EEG data can be considered using time-frequency (TF) analysis, where changes in the frequency of brainwaves are calculated to infer participants' mental states, or event-related potential (ERP) analysis, where changes in amplitude are observed in direct response to a stimulus. This presentation discusses the findings of a systematic review of EEG measures in neuromarketing. A systematic review summarises the evidence on a research question, using explicit measures to identify, select, and critically appraise relevant research papers. This systematic review identifies which EEG measures are the most robust predictors of customer preference and purchase intention. Methods: Search terms identified 174 papers that used EEG in combination with marketing-related stimuli. Publications were excluded if they were written in a language other than English or were not published as journal articles (e.g., book chapters). The review investigated which TF effect (e.g., theta-band power) and ERP component (e.g., N400) most consistently reflected preference and purchase intention. Machine-learning prediction was also investigated, along with the use of EEG combined with physiological measures such as eye-tracking. Results: Frontal alpha asymmetry was the most reliable TF signal, where an increase in activity over the left side of the frontal lobe indexed a positive response to marketing stimuli, while an increase in activity over the right side indexed a negative response. The late positive potential, a positive amplitude increase around 600 ms after stimulus presentation, was the most reliable ERP component, reflecting the conscious emotional evaluation of marketing stimuli.
However, each measure showed mixed results when related to preference and purchase behaviour. Predictive accuracy was greatly improved through machine-learning algorithms such as deep neural networks, especially when combined with eye-tracking or facial expression analyses. Discussion: This systematic review provides a novel catalogue of the most effective use of each EEG measure commonly used in neuromarketing. Exciting findings to emerge are the identification of frontal alpha asymmetry and the late positive potential as markers of preferential responses to marketing stimuli. Machine-learning algorithms achieved predictive accuracies as high as 97%, and future research should therefore focus on machine-learning prediction when using EEG measures in neuromarketing.
Keywords: EEG, ERP, neuromarketing, machine-learning, systematic review, time-frequency
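Frontal alpha asymmetry, the most reliable TF signal identified by the review, is conventionally computed as the log-ratio of alpha-band (8-13 Hz) power between right and left frontal electrodes. A minimal sketch, assuming raw periodogram power estimates and illustrative channel inputs (the reviewed studies vary in their exact electrode pairs and power estimators):

```python
import numpy as np

def band_power(signal, fs, f_lo=8.0, f_hi=13.0):
    """Mean power in a frequency band via the FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()

def frontal_alpha_asymmetry(left, right, fs):
    """FAA = ln(right alpha power) - ln(left alpha power).
    Alpha power is inversely related to cortical activity, so positive
    FAA indexes relatively greater left-frontal activation, the pattern
    the review links to a positive response to marketing stimuli."""
    return np.log(band_power(right, fs)) - np.log(band_power(left, fs))
```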
Procedia PDF Downloads 111
3019 Milk Protein Genetic Variation and Haplotype Structure in Sudanese Indigenous Dairy Zebu Cattle
Authors: Ammar Said Ahmed, M. Reissmann, R. Bortfeldt, G. A. Brockmann
Abstract:
Milk protein genetic variants are of interest for characterizing domesticated mammalian species and breeds, and for studying associations with economic traits. The aim of this work was to analyze milk protein genetic variation in the Sudanese native cattle breeds, whose numbers have been gradually declining over recent years due to breed substitution and indiscriminate crossbreeding. The genetic variation at three milk protein genes, αS1-casein (CSN1S1), αS2-casein (CSN1S2), and ƙ-casein (CSN3), was investigated in 250 animals belonging to five Bos indicus cattle breeds of Sudan (Butana, Kenana, White-nile, Erashy, and Elgash). Allele-specific primers were designed for five SNPs to determine the CSN1S1 variants B and C, the CSN1S2 variants A and B, and the CSN3 variants A, B, and H. Allele and haplotype frequencies and genetic distances (D) were calculated, and the phylogenetic tree was constructed. All breeds were found to be polymorphic for the studied genes. The CSN1S1*C variant was found very frequently (>0.63) in all analyzed breeds, with the highest frequency (0.82) in White-nile cattle. The CSN1S2*A variant (0.77) and the CSN3*A variant (0.79) had the highest frequencies in Kenana cattle. Eleven haplotypes in the casein gene cluster were inferred. Six of these haplotypes occurred in all breeds, with remarkably different frequencies. The estimated D ranged from 0.004 to 0.049. The most distant breeds were White-nile and Kenana (D = 0.0479). The results presented contribute to the genetic knowledge of indigenous cattle and can be used for proper definition and classification of the Sudanese cattle breeds, as well as for breeding, utilization, and the potential development of conservation strategies for local breeds.
Keywords: milk protein, genetic variation, casein haplotype, Bos indicus
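Allele frequencies and genetic distances of the kind reported above (D = 0.004 to 0.049) can be computed as sketched below. This is a generic single-locus sketch using Nei's standard genetic distance; the paper's exact estimator and its multi-locus averaging are not specified in the abstract.

```python
import math

def allele_frequencies(genotypes):
    """Allele frequencies from a list of diploid genotypes,
    e.g. [('B', 'C'), ('C', 'C'), ...]."""
    counts = {}
    for a1, a2 in genotypes:
        counts[a1] = counts.get(a1, 0) + 1
        counts[a2] = counts.get(a2, 0) + 1
    total = 2 * len(genotypes)
    return {allele: n / total for allele, n in counts.items()}

def nei_distance(p, q):
    """Nei's standard genetic distance between two populations, given
    allele-frequency dicts p and q for one locus:
    D = -ln( Jxy / sqrt(Jx * Jy) )."""
    alleles = set(p) | set(q)
    jxy = sum(p.get(a, 0.0) * q.get(a, 0.0) for a in alleles)
    jx = sum(f * f for f in p.values())
    jy = sum(f * f for f in q.values())
    return -math.log(jxy / math.sqrt(jx * jy))
```

Identical frequency profiles give D = 0, and D grows as the profiles diverge, which is the sense in which White-nile and Kenana are reported as the most distant pair.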
Procedia PDF Downloads 437
3018 Interaction with Earth’s Surface in Remote Sensing
Authors: Spoorthi Sripad
Abstract:
Remote sensing is a powerful tool for acquiring information about the Earth's surface without direct contact, relying on the interaction of electromagnetic radiation with various materials and features. This paper explores the fundamental principle of "Interaction with Earth's Surface" in remote sensing, shedding light on the intricate processes that occur when electromagnetic waves encounter different surfaces. The absorption, reflection, and transmission of radiation generate distinct spectral signatures, allowing for the identification and classification of surface materials. The paper delves into the significance of the visible, infrared, and thermal infrared regions of the electromagnetic spectrum, highlighting how their unique interactions contribute to a wealth of applications, from land cover classification to environmental monitoring. The discussion encompasses the types of sensors and platforms used to capture these interactions, including multispectral and hyperspectral imaging systems. By examining real-world applications, such as land cover classification and environmental monitoring, the paper underscores the critical role of understanding the interaction with the Earth's surface for accurate and meaningful interpretation of remote sensing data.
Keywords: remote sensing, earth's surface interaction, electromagnetic radiation, spectral signatures, land cover classification, archeology and cultural heritage preservation
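The identification of surface materials from their spectral signatures can be illustrated with a minimum-distance classifier over per-band reflectances. The signature values below are rough textbook-style figures for green, red, and near-infrared bands, not measured data, and the class set is purely illustrative.

```python
import numpy as np

# Illustrative mean reflectances in (green, red, near-infrared) bands.
# Vegetation reflects strongly in the NIR; water absorbs almost everything.
SIGNATURES = {
    "vegetation": np.array([0.10, 0.08, 0.50]),
    "water":      np.array([0.05, 0.03, 0.01]),
    "bare_soil":  np.array([0.15, 0.20, 0.25]),
}

def classify_pixel(reflectance):
    """Assign a pixel to the class whose spectral signature is nearest
    in Euclidean distance (a minimum-distance classifier)."""
    return min(SIGNATURES,
               key=lambda c: np.linalg.norm(reflectance - SIGNATURES[c]))
```

Multispectral and hyperspectral systems differ mainly in how many such bands each pixel carries; the same nearest-signature idea extends to hundreds of bands.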
Procedia PDF Downloads 59
3017 The Usage of Negative Emotive Words in Twitter
Authors: Martina Katalin Szabó, István Üveges
Abstract:
In this paper, the usage of negative emotive words is examined on the basis of a large Hungarian Twitter database via NLP methods. The data are analysed from a gender point of view, as well as for changes in language usage over time. The term negative emotive word refers to those words that, on their own, without context, have semantic content that can be associated with negative emotion, but in particular cases may function as intensifiers (e.g. rohadt jó ’damn good’) or as a sentiment expression with positive polarity despite their negative prior polarity (e.g. brutális, ahogy ez a férfi rajzol ’it’s awesome (lit. brutal) how this guy draws’). Based on the findings of several authors, the same phenomenon can be found in other languages, so it is probably a language-independent feature. For the present analysis, 67,783 tweets were collected: 37,818 tweets (19,580 written by females and 18,238 written by males) from 2016 and 48,344 (18,379 written by females and 29,965 written by males) from 2021. The goal of the research was to compile two datasets comparable from the viewpoint of semantic change as well as of gender specificities. An exhaustive lexicon of Hungarian negative emotive intensifiers was also compiled (containing 214 words). After basic preprocessing steps, tweets were processed by ‘magyarlanc’, a toolkit written in Java for the linguistic processing of Hungarian texts. Then, the frequency and collocation features of all these words in our corpus were automatically analyzed (via the analysis of the parts of speech and sentiment values of the co-occurring words). Finally, the results for all four subcorpora were compared. Some of the main outcomes of our analyses are as follows: there are almost four times fewer cases in the male corpus than in the female corpus in which a negative emotive intensifier modifies a negative polarity word in the tweet (e.g., damn bad).
At the same time, male authors used these intensifiers more frequently to modify a positive polarity or a neutral word (e.g., damn good and damn big). The results also pointed out that, in contrast to female authors, male authors used these words much more frequently as positive polarity words themselves (e.g., brutális, ahogy ez a férfi rajzol ’it’s awesome (lit. brutal) how this guy draws’). We also observed that male authors use significantly fewer types of emotive intensifiers than female authors, and the frequency distribution of the words is more balanced in the female corpus. As for changes in language usage over time, some notable differences in the frequency and collocation features of the words examined were identified: some of the words collocate with more positive words in the second subcorpus than in the first, which points to a semantic change of these words over time.
Keywords: gender differences, negative emotive words, semantic changes over time, twitter
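The collocation analysis described above, counting the polarity of the word each negative emotive intensifier modifies, can be sketched as follows. The mini-lexicons are hypothetical stand-ins for the 214-word intensifier lexicon and for the sentiment values obtained from the 'magyarlanc' output; the real pipeline works on lemmatised, POS-tagged tokens rather than raw adjacency.

```python
from collections import Counter

# Hypothetical mini-lexicons for illustration only.
INTENSIFIERS = {"rohadt", "brutális", "durva"}
POLARITY = {"jó": "positive", "rossz": "negative", "nagy": "neutral"}

def collocation_profile(tokenised_tweets):
    """For each intensifier occurrence, count the polarity class of the
    word it immediately precedes (a crude proxy for 'modifies')."""
    profile = Counter()
    for tokens in tokenised_tweets:
        for word, nxt in zip(tokens, tokens[1:]):
            if word in INTENSIFIERS:
                profile[POLARITY.get(nxt, "unknown")] += 1
    return profile
```

Comparing such profiles across the four subcorpora (female/male x 2016/2021) is the kind of contrast the abstract reports, e.g. the near fourfold difference in intensifier + negative-word combinations.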
Procedia PDF Downloads 205
3016 An Inquiry of the Impact of Flood Risk on Housing Market with Enhanced Geographically Weighted Regression
Authors: Lin-Han Chiang Hsieh, Hsiao-Yi Lin
Abstract:
This study aims to determine the impact of the disclosure of a flood potential map on housing prices. The disclosure is supposed to mitigate market failure by reducing information asymmetry. On the other hand, opponents argue that the official disclosure of simulated results will only create unnecessary disturbances in the housing market. This study identifies the impact of the disclosure of the flood potential map by comparing the hedonic price of flood potential before and after the disclosure. The flood potential map used in this study was published by the Taipei municipal government in 2015 and is the result of a comprehensive simulation based on geographical, hydrological, and meteorological factors. The residential property sales data for 2013 to 2016 are used in this study, collected from the actual sales price registration system of the Department of Land Administration (DLA). The results show that the impact of flood potential on the residential real estate market is statistically significant both before and after the disclosure, but the trend is clearer after the disclosure, suggesting that the disclosure does have an impact on the market. The results also show that the impact of flood potential differs by the severity and frequency of precipitation: the negative impact of a relatively mild, high-frequency flood potential is stronger than that of a heavy, low-probability flood potential. This indicates that home buyers are more concerned with the frequency than with the intensity of flooding. Another contribution of this study is methodological. The classic hedonic price analysis with OLS regression suffers from two spatial problems: the endogeneity problem caused by omitted spatially related variables, and the heterogeneity concern arising from the presumption that regression coefficients are spatially constant. These two problems are seldom considered in a single model.
This study deals with the endogeneity and heterogeneity problems together by combining the spatial fixed-effect model and geographically weighted regression (GWR). A series of studies applying GWR indicates that the hedonic price of certain environmental assets varies spatially. Since the endogeneity problem is usually not considered in typical GWR models, it is arguable that omitted spatially related variables might bias the results of GWR models. By combining the spatial fixed-effect model and GWR, this study concludes that the effect of the flood potential map is highly sensitive to location, even after controlling for spatial autocorrelation at the same time. The main policy implication of this result is that it is improper to determine the potential benefit of a flood prevention policy by simply multiplying the hedonic price of flood risk by the number of houses: the effect of flood prevention might vary dramatically by location.
Keywords: flood potential, hedonic price analysis, endogeneity, heterogeneity, geographically-weighted regression
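The GWR component can be sketched as a locally weighted least-squares fit with a Gaussian distance kernel, which is what makes the estimated hedonic coefficients vary over space. This shows the kernel-weighting idea only; the paper's combined spatial fixed-effect formulation is not reproduced here, and the function name and bandwidth handling are our own.

```python
import numpy as np

def gwr_coefficients(X, y, coords, point, bandwidth):
    """Local WLS estimate at `point`, with Gaussian kernel weights
    w_i = exp(-(d_i / bandwidth)^2 / 2) on observation distances.
    Solves (X' W X) beta = X' W y for the local coefficients."""
    d = np.linalg.norm(coords - point, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    Xw = X * w[:, None]           # rows of X scaled by their weights
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)
```

Evaluating this at every sales location produces a surface of local flood-risk coefficients, which is how a GWR analysis exhibits the location sensitivity the study reports.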
Procedia PDF Downloads 290
3015 A Thermographic and Energy Based Approach to Define High Cycle Fatigue Strength of Flax Fiber Reinforced Thermoset Composites
Authors: Md. Zahirul Islam, Chad A. Ulven
Abstract:
Fiber-reinforced polymer matrix composites have a wide range of applications in the automotive, aerospace, and sports utility sectors, among others, due to their high specific strength and stiffness as well as reduced weight. In addition to those favorable properties, composites composed of natural fibers and bio-based resins (i.e., biocomposites) offer eco-friendliness and biodegradability. However, the applications of biocomposites are limited by the lack of knowledge about their long-term reliability under fluctuating loads. In order to explore the long-term reliability of flax fiber reinforced composites under fluctuating loads through the high cycle fatigue strength (HCFS), fatigue tests were conducted on unidirectional flax fiber reinforced thermoset composites at different percentages of the ultimate tensile strength (UTS) with a loading frequency of 5 Hz. The change in temperature of the sample during cyclic loading was captured using an IR camera. Initially, the temperature increased rapidly, but after a certain time it stabilized. A mathematical model was developed to predict fatigue life from the stabilized-temperature data. Stabilized temperature and dissipated energy per cycle were plotted against applied stress. Both showed bilinear behavior, and the intersections of those curves were used to determine the HCFS. The HCFS of the unidirectional flax fiber reinforced composite is around 45% of UTS for a loading frequency of 5 Hz. Unlike fatigue life, the stabilized-temperature and dissipated-energy based models are convenient for defining HCFS, as they show little variation from sample to sample.
Keywords: energy method, fatigue, flax fiber reinforced composite, HCFS, thermographic approach
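The bilinear-intersection step used above to locate the HCFS can be sketched as a two-segment least-squares fit of the stabilized temperature (or dissipated energy) against applied stress, with the HCFS taken at the intersection of the two fitted lines. The split-search procedure below is a generic reconstruction, not the authors' exact fitting routine.

```python
import numpy as np

def _fit(x, y):
    """Least-squares line fit returning (slope, intercept) and the SSE."""
    coef, res, *_ = np.polyfit(x, y, 1, full=True)
    return coef, (res[0] if len(res) else 0.0)

def hcfs_from_bilinear(stress, response):
    """Fit two straight lines to (stress, response), trying every
    interior split point, keep the split with the least total squared
    error, and return the stress at which the two lines intersect."""
    best = None
    for k in range(2, len(stress) - 1):
        (a1, b1), r1 = _fit(stress[:k], response[:k])
        (a2, b2), r2 = _fit(stress[k:], response[k:])
        if best is None or r1 + r2 < best[0]:
            best = (r1 + r2, a1, b1, a2, b2)
    _, a1, b1, a2, b2 = best
    # Intersection of y = a1*x + b1 and y = a2*x + b2.
    return (b2 - b1) / (a1 - a2)
```

Running this on stabilized-temperature data whose slope changes sharply near 45% of UTS would recover an HCFS estimate at that knee, which is the behavior the abstract describes.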
Procedia PDF Downloads 106
3014 Finite Element Model to Investigate the Dynamic Behavior of Ring-Stiffened Conical Shell Fully and Partially Filled with Fluid
Authors: Mohammadamin Esmaeilzadehazimi, Morteza Shayan Arani, Mohammad Toorani, Aouni Lakis
Abstract:
This study uses a hybrid finite element method to predict the dynamic behavior of both fully and partially-filled truncated conical shells stiffened with ring stiffeners. The method combines classical shell theory and the finite element method, and employs displacement functions derived from exact solutions of Sanders' shell equilibrium equations for conical shells. The shell-fluid interface is analyzed by utilizing the velocity potential, Bernoulli's equation, and impermeability conditions to determine an explicit expression for fluid pressure. The equations of motion presented in this study apply to both conical and cylindrical shells. This study presents the first comparison of the method applied to ring-stiffened shells with other numerical and experimental findings. Vibration frequencies for conical shells with various boundary conditions and geometries in a vacuum and filled with water are compared with experimental and numerical investigations, achieving good agreement. The study thoroughly investigates the influence of geometric parameters, stiffener quantity, semi-vertex cone angle, level of water filled in the cone, and applied boundary conditions on the natural frequency of fluid-loaded ring-stiffened conical shells, and draws some useful conclusions. The primary advantage of the current method is its use of a minimal number of finite elements while achieving highly accurate results.
Keywords: finite element method, fluid–structure interaction, conical shell, natural frequency, ring-stiffener
Procedia PDF Downloads 78
3013 Efficacy of Learning: Digital Sources versus Print
Authors: Rahimah Akbar, Abdullah Al-Hashemi, Hanan Taqi, Taiba Sadeq
Abstract:
As technology continues to develop, teaching curricula in both schools and universities have begun adopting a more computer/digital-based approach to the transmission of knowledge and information, as opposed to the more old-fashioned use of textbooks. This gives rise to the question: are there any differences in learning from a digital source versus learning from a printed source, such as a textbook? More specifically, which medium of information results in better long-term retention? A review of the confounding factors implicated in understanding the relationship between learning from the two different mediums was conducted. Alongside this, a 4-week cohort study involving 76 first-year English Language female students was performed, whereby the participants were divided into two groups. Group A studied material from a paper source (referred to as the Print Medium), and Group B studied material from a digital source (Digital Medium). The dependent variables were memory recall, graded on a 4-point scale, and the total frequency of item repetition. The study was facilitated by the spaced-repetition software SuperMemo. Results showed that, contrary to prevailing evidence, the Digital Medium group showed no statistically significant differences in the shift from Remember (episodic) to Know (semantic) when all confounding factors were accounted for. The shift from Random Guess and Familiar to Remember occurred faster in the Digital Medium than in the Print Medium.
Keywords: digital medium, print medium, long-term memory recall, episodic memory, semantic memory, SuperMemo, forgetting index, frequency of repetitions, total time spent
Procedia PDF Downloads 289
3012 The Study of Solar Activity during Sun Eclipse and Its Relation to Earthquake
Authors: Hanieh Sadat Jannesari, Rahelehossadat Abtahi, Kourosh Bamzadeh, Alireza Nadimi
Abstract:
The earthquake is one of the most devastating natural hazards; hundreds of thousands of people have lost their lives as a result of earthquakes. Experts have tried to use precursors to identify an earthquake before it occurs, in order to alert and save people, and part of this work relates to solar activity and earthquakes. The purpose of this article is to investigate solar activity during the solar eclipse as a precursor for pre-earthquake awareness. The data for this article are derived from the Influences and USGS daily data centers. During solar activity, electric interactions between the solar wind and the celestial bodies are formed, and gravitational lenses are then formed. If an eclipse also occurs during this event, the waves dispersed in space (in accordance with Einstein's theory of general relativity), on contact with plasma-gravitational lenses in space, will move in a straight line toward the earth. In addition to forming the focal point, these gravitational lenses reflect the source image either at their focal length or farther away. The image reflected onto the earth by ionized particles, in the form of energy transmission lines, can cause material collapse and earthquakes. In this study, the correlation between solar wind interactions with the celestial bodies during solar eclipses and the locations of large earthquakes is about 76%.
Keywords: earthquake, plasma-gravitational lens, solar eclipse, solar spots
Procedia PDF Downloads 26
3011 Predictive Factors of Healthcare-Associated Infections and Antibiotic Use Patterns: A Cross-Sectional Survey at the Charles Nicolle Hospital of Tunis
Authors: Nouira Mariem, Ennigrou Samir
Abstract:
Background and aims: Healthcare-associated infections (HAI) represent a major public health problem worldwide and one of the most serious adverse events in health care. The objectives of our study were to estimate the prevalence of HAI at the Charles Nicolle Hospital (CNH), to identify the main associated factors, and to estimate the frequency of antibiotic use. Methods: This was a cross-sectional study at the CNH with a single visit per department (October-December 2018). All patients present in the wards for more than 48 hours were included. Patients from the outpatient consultation, emergency, and dialysis departments were not included. The site definitions of infections proposed by the Centers for Disease Control and Prevention (CDC) were used. Only clinically and/or microbiologically confirmed active HAIs were included. Results: A total of 318 patients were included, with a mean age of 52 years and a sex ratio (female/male) of 1.05. A total of 41 patients had one or more active HAIs, corresponding to a prevalence of 13.1% (95% CI: 9.3%-16.9%). The most frequent site infections were urinary tract infections and pneumonia. Multivariate analysis among adult patients (>=18 years) (n=261) revealed that infection on admission (p=0.01), alcoholism (p=0.01), high blood pressure (p=0.008), having at least one invasive device inserted (p=0.004), and a history of recent surgery (p=0.03) significantly increased the risk of HAIs. More than one in three patients (35.4%) were under antibiotics on the day of the survey, of whom more than half (57.4%) were under two or more types of antibiotics. Conclusion: The prevalence of HAIs and antibiotic prescriptions at the CNH were considerably high.
An infection prevention and control committee should be established, and an antibiotic stewardship program with continuous monitoring through repeated prevalence surveys should be developed, to effectively limit the frequency of these infections.
Keywords: prevalence, healthcare associated infection, antibiotic, Tunisia
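The reported prevalence and its 95% confidence interval correspond to the standard Wald interval for a proportion, sketched below; small differences from the published 9.3%-16.9% bounds may come from rounding or from a correction not described in the abstract.

```python
import math

def prevalence_ci(cases, n, z=1.96):
    """Point prevalence with a Wald 95% confidence interval:
    p +/- z * sqrt(p * (1 - p) / n), clipped to [0, 1]."""
    p = cases / n
    half = z * math.sqrt(p * (1.0 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)
```

With the abstract's counts (41 active HAIs among 318 patients), this gives a prevalence of about 12.9% with bounds near 9.2% and 16.6%, close to the reported figures.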
Procedia PDF Downloads 82
3010 Free Vibration Analysis of Timoshenko Beams at Higher Modes with Central Concentrated Mass Using Coupled Displacement Field Method
Authors: K. Meera Saheb, K. Krishna Bhaskar
Abstract:
Complex structures used in many fields of engineering are made up of simple structural elements like beams, plates, etc. These structural elements sometimes carry concentrated masses at discrete points, and when subjected to a severe dynamic environment, they tend to vibrate with large amplitudes. The frequency-amplitude relationship is essential in determining the response of these structural elements subjected to dynamic loads. For Timoshenko beams, the effects of shear deformation and rotary inertia must be considered to evaluate the fundamental linear and nonlinear frequencies. A commonly used method for solving vibration problems is the energy method, or a finite element analogue of the same. In the present Coupled Displacement Field method, the number of undetermined coefficients is reduced to half when compared to the well-known Rayleigh-Ritz method, which significantly simplifies the procedure for solving the vibration problem. This is accomplished by using a coupling equation derived from the static equilibrium of the shear-flexible structural element. The prime objective of the present paper is to study, in detail, the effect of a central concentrated mass on the large amplitude free vibrations of uniform shear-flexible beams. Accurate closed-form expressions for the linear frequency parameter of uniform shear-flexible beams with a central concentrated mass were developed, and the results are presented in digital form.
Keywords: coupled displacement field, coupling equation, large amplitude vibrations, moderately thick plates
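For intuition about the effect of a central concentrated mass, a Rayleigh-type estimate for a simply supported Euler-Bernoulli beam can be sketched as below. This deliberately neglects the shear deformation and rotary inertia retained in the paper's Timoshenko model, and the effective-mass factor 17/35 comes from the static deflection shape; it is an illustration, not the paper's closed-form result.

```python
import math

def fundamental_frequency_hz(E, I, L, beam_mass, central_mass):
    """Rayleigh-type estimate for a simply supported Euler-Bernoulli
    beam with a concentrated mass M at midspan:
        omega^2 ~ k / (M + (17/35) * m_beam),  k = 48 E I / L^3.
    Adding central mass lowers the frequency; M = 0 recovers the bare
    beam to within about 1% of the exact first frequency."""
    k = 48.0 * E * I / L ** 3
    omega = math.sqrt(k / (central_mass + 17.0 / 35.0 * beam_mass))
    return omega / (2.0 * math.pi)
```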
Procedia PDF Downloads 226
3009 Harnessing Earth's Electric Field and Transmission of Electricity
Authors: Vaishakh Medikeri
Abstract:
Energy is the most basic characteristic of every particle in this Universe. Since the birth of life on this planet, living beings have undertaken a quest to analyze, understand, and harness the precious resources of nature. In this quest, one of the greatest undertakings is the harnessing of naturally available energy. Scientists around the globe have discovered many ways to harness freely available energy, but even today we speak of a "power crisis". Nikola Tesla once said, "Nature has stored up in this universe infinite energy". Energy is everywhere around us in unlimited quantities, all of it waiting to be harnessed. In this paper, a method is proposed to harness the earth's electric field and to transmit the stored electric energy using strong magnetic and electric fields. Near the surface of the earth there is an electric field of about 120 V/m. This electric field is used to charge a capacitor with high capacitance. The stored DC energy is then passed through a device that converts it into AC. The AC so produced is passed through a step-down transformer to magnify the current, and then through an RLC circuit. The current can then be transmitted wirelessly using the principle of resonant inductive coupling. The proposed apparatus can be placed in most required locations, and any circuit tuned to the frequency of the transmitted current can receive the energy. This new source of renewable energy would be of great importance if implemented, since the apparatus is not costly and can be situated in most required locations; the receiver is simply an RLC circuit tuned to the resonant frequency of the transmitted energy.
By using the proposed apparatus, energy losses can be reduced to a very large extent.
Keywords: capacitor, inductive resonant coupling, RLC circuit, transmission of electricity
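The quantities involved above, the energy stored in the charged capacitor and the resonant frequency to which the receiving RLC circuit must be tuned, follow from the standard formulas E = CV^2/2 and f0 = 1/(2*pi*sqrt(LC)); the component values in the test are illustrative, not taken from the paper.

```python
import math

def stored_energy_j(capacitance_f, voltage_v):
    """Energy stored in a capacitor: E = 1/2 * C * V^2 (joules)."""
    return 0.5 * capacitance_f * voltage_v ** 2

def resonant_frequency_hz(inductance_h, capacitance_f):
    """Resonant frequency of the receiving LC/RLC tank:
    f0 = 1 / (2 * pi * sqrt(L * C)); the receiver must be tuned to
    this frequency for resonant inductive coupling to transfer energy."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))
```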
Procedia PDF Downloads 373
3008 Effects of Local Ground Conditions on Site Response Analysis Results in Hungary
Authors: Orsolya Kegyes-Brassai, Zsolt Szilvágyi, Ákos Wolf, Richard P. Ray
Abstract:
Local ground conditions have a substantial influence on the seismic response of structures. Their inclusion in seismic hazard assessment and structural design can be realized at different levels of sophistication. However, response results based on more advanced calculation methods, e.g. nonlinear or equivalent linear site analysis, tend to show significant discrepancies when compared to simpler approaches. This project's main objective was to compare results from several 1-D response programs to Eurocode 8 design spectra. Data from in-situ site investigations were used to assess local ground conditions at several locations in Hungary. After a discussion of the in-situ measurements and calculation methods used, a comprehensive evaluation of all major contributing factors to site response is given. While the Eurocode spectra should account for local ground conditions based on soil classification, there is a wide variation in the peak ground acceleration determined from 1-D analyses versus Eurocode. Results show that the current Eurocode 8 design spectra may not be conservative enough to account for local ground conditions typical of Hungary.
Keywords: 1-D site response analysis, multichannel analysis of surface waves (MASW), seismic CPT, seismic hazard assessment
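Soil classification in Eurocode 8 is driven largely by the time-averaged shear-wave velocity of the top 30 m, Vs30, which can be computed from the MASW and seismic CPT profiles mentioned above. A minimal sketch (ground types A-D by Vs30 only; the special classes S1/S2 and the alternative N_SPT/cu criteria are omitted):

```python
def vs30(layer_thicknesses_m, shear_wave_velocities_ms):
    """Time-averaged shear-wave velocity over the top 30 m:
    Vs30 = 30 / sum(h_i / v_i), with layers truncated at 30 m depth."""
    depth, travel_time = 0.0, 0.0
    for h, v in zip(layer_thicknesses_m, shear_wave_velocities_ms):
        h = min(h, 30.0 - depth)  # clip the last layer at 30 m
        travel_time += h / v
        depth += h
        if depth >= 30.0:
            break
    return 30.0 / travel_time

def ec8_ground_type(vs30_value):
    """Eurocode 8 ground types A-D from Vs30 (m/s)."""
    if vs30_value > 800:
        return "A"
    if vs30_value > 360:
        return "B"
    if vs30_value > 180:
        return "C"
    return "D"
```

The coarseness of this four-band classification, as opposed to the full measured profile used by the 1-D analyses, is one reason the code spectra and the site-specific results can diverge.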
Procedia PDF Downloads 246
3007 Measurement of Magnetic Properties of Grain-Oriented Electrical Steels at Low and High Fields Using a Novel Single Sheet Tester
Authors: Nkwachukwu Chukwuchekwa, Joy Ulumma Chukwuchekwa
Abstract:
Magnetic characteristics of grain-oriented electrical steel (GOES) are usually measured at the high flux densities typical of its applications in power transformers. There are limited magnetic data at low flux densities, which are relevant for the characterization of GOES for applications in metering instrument transformers and in low-frequency magnetic shielding for magnetic resonance imaging medical scanners. Magnetic properties such as coercivity, the B-H loop, AC relative permeability, and specific power loss of conventional grain-oriented (CGO) and high-permeability grain-oriented (HGO) electrical steels were measured and compared at high and low flux densities at power magnetising frequency. 40 strips (20 CGO and 20 HGO, 305 mm x 30 mm x 0.27 mm) from one supplier were tested. The HGO and CGO strips had average grain sizes of 9 mm and 4 mm, respectively. Each strip was singly magnetised under sinusoidal peak flux density from 8.0 mT to 1.5 T at a magnetising frequency of 50 Hz. The novel single sheet tester comprises a personal computer with LabVIEW version 8.5 from National Instruments (NI) installed, an NI 4461 data acquisition (DAQ) card, an impedance matching transformer to match the 600 Ω minimum load impedance of the DAQ card with the 5 to 20 Ω low impedance of the magnetising circuit, and a 4.7 Ω shunt resistor. A double vertical yoke made of GOES, 290 mm long and 32 mm wide, is used. A 500-turn secondary winding, about 80 mm in length, was wound around a plastic former, 270 mm x 40 mm, housing the sample, while a 100-turn primary winding, covering the entire length of the plastic former, was wound over the secondary winding. A standard Epstein strip to be tested is placed between the yokes. The magnetising voltage was generated by the LabVIEW program through a voltage output from the DAQ card.
The voltage drop across the shunt resistor and the secondary voltage were acquired by the card for calculation of magnetic field strength and flux density respectively. A feedback control system implemented in LabVIEW was used to control the flux density and to make the induced secondary voltage waveforms sinusoidal, so as to obtain repeatable and comparable measurements. The low-noise NI 4461 card, with 24-bit resolution, a 204.8 kHz sampling rate and a 92 kHz bandwidth, was chosen to take the measurements and minimise the influence of thermal noise. In order to reduce environmental noise, the yokes, sample and search coil carrier were placed in a noise-shielding chamber. HGO was found to have better magnetic properties in both the high and low magnetisation regimes. This is attributed to the larger grain size of HGO and the higher grain-to-grain misorientation of CGO. HGO is therefore better than CGO in both low and high magnetic field applications. Keywords: flux density, electrical steel, LabVIEW, magnetization
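The two acquired voltages map onto H and B in the standard way: H from the magnetising current (shunt voltage over shunt resistance) scaled by primary turns and magnetic path length, and B from the time integral of the induced secondary voltage. A minimal sketch of that calculation, assuming an illustrative magnetic path length (the turns, shunt value, and strip cross-section come from the abstract; the path length is an assumption, not a value from the paper):

```python
# Hedged sketch, not the authors' LabVIEW code: recovering H and B from the
# two acquired voltages described above.
N_PRIMARY = 100          # primary turns (from the abstract)
N_SECONDARY = 500        # secondary turns (from the abstract)
R_SHUNT = 4.7            # shunt resistor, ohms (from the abstract)
AREA = 30e-3 * 0.27e-3   # strip cross-section, m^2 (305 x 30 x 0.27 mm strip)
PATH_LEN = 0.29          # ASSUMED magnetic path length, m (~yoke length)

def field_strength(v_shunt):
    """H = N1 * I / l_m, with the magnetising current I taken from the
    voltage drop across the shunt resistor."""
    current = v_shunt / R_SHUNT
    return N_PRIMARY * current / PATH_LEN

def flux_density(v_secondary, dt):
    """B(t) = (1 / (N2 * A)) * integral of the induced secondary voltage,
    integrated here with a simple rectangular rule over the samples."""
    b, out = 0.0, []
    for v in v_secondary:
        b += v * dt / (N_SECONDARY * AREA)
        out.append(b)
    return out
```

In the actual rig this integration runs on the sampled waveform from the DAQ card inside the feedback loop; the sketch only shows the arithmetic.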
Procedia PDF Downloads 291
3006 Uncovering Underwater Communication for Multi-Robot Applications via CORSICA
Authors: Niels Grataloup, Micael S. Couceiro, Manousos Valyrakis, Javier Escudero, Patricia A. Vargas
Abstract:
This paper benchmarks the underwater communication technologies that could be integrated into a swarm of underwater robots by proposing an underwater robot simulator named CORSICA (Cross platfORm wireleSs communICation simulator). Underwater exploration relies increasingly on the use of mobile robots, called Autonomous Underwater Vehicles (AUVs). These robots are able to reach goals in harsh underwater environments without resorting to human divers. The introduction of swarm robotics in these scenarios would facilitate the accomplishment of complex tasks at lower cost. However, swarm robotics requires the implementation of communication systems to be operational, and swarm behaviour is non-deterministic. Inter-robot communication is one of the key challenges in swarm robotics, especially in underwater scenarios, as communication must cope with severe restrictions and perturbations. This paper starts by presenting a list of the underwater propagation models of acoustic and electromagnetic waves; it also reviews existing transmitters embedded in current robots and simulators. It then proposes CORSICA, which allows validating choices of protocol and communication strategy, whether for robot-robot or human-robot interactions. The paper finishes with a presentation of possible integrations according to the literature review, and the potential to bring CORSICA to an industrial level. Keywords: underwater simulator, robot-robot underwater communication, swarm robotics, transceiver and communication models
Procedia PDF Downloads 300
3005 Perception of Public Transport Quality of Service among Regular Private Vehicle Users in Five European Cities
Authors: Juan de Ona, Esperanza Estevez, Rocío de Ona
Abstract:
Urban traffic levels can be reduced by drawing travelers away from private vehicles over to using public transport. This modal change can be achieved by either introducing restrictions on private vehicles or by introducing measures which increase people’s satisfaction with public transport. For public transport users, quality of service affects customer satisfaction, which, in turn, influences behavioral intentions towards the service. This paper intends to identify the main attributes which influence the perception private vehicle users have of the public transport services provided in five European cities: Berlin, Lisbon, London, Madrid and Rome. Ordinal logit models have been applied to an online panel survey with a sample size of 2,500 regular private vehicle users (approximately 500 inhabitants per city). To achieve a comprehensive analysis and to deal with heterogeneity in perceptions, 15 models have been developed for the entire sample and 14 user segments. The results show differences between the cities and among the segments. Madrid was taken as the reference city, and the results indicate that its inhabitants are satisfied with public transport and that the most important public transport service attributes for private vehicle users are frequency, speed and intermodality. Frequency is an important attribute for all the segments, while speed and intermodality are important for most of the segments. An analysis by segments has identified attributes which, although not important in most cases, are relevant for specific segments. This study also points out important differences between the five cities. Findings from this study can be used to develop policies and recommendations for persuading private vehicle users to switch to public transport. Keywords: service quality, satisfaction, public transportation, private vehicle users, car users, segmentation, ordered logit
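The ordinal logit models used here treat satisfaction as an ordered categorical outcome: the probability of each category is a difference of logistic CDFs evaluated at estimated thresholds minus a linear predictor built from the attribute scores. A minimal sketch of that core computation, with made-up coefficients and thresholds (the study's estimates are not reproduced here):

```python
import math

# Illustrative core of an ordered logit model: category probabilities from a
# linear predictor and a set of ordered thresholds. All numbers are invented.

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def category_probs(xb, thresholds):
    """P(Y = j) = F(theta_j - xb) - F(theta_{j-1} - xb), where F is the
    logistic CDF and the thetas are the ordered category cut points."""
    cdf = [logistic(t - xb) for t in thresholds]  # P(Y <= j) for interior j
    cdf = [0.0] + cdf + [1.0]                     # pad with the two extremes
    return [hi - lo for lo, hi in zip(cdf, cdf[1:])]

# Example: linear predictor from three attribute scores (frequency, speed,
# intermodality) with ASSUMED coefficients and thresholds.
xb = 0.8 * 1.0 + 0.5 * 0.5 + 0.3 * 0.2
probs = category_probs(xb, thresholds=[-1.0, 0.5, 2.0])
```

A fitted model estimates the coefficients and thresholds by maximum likelihood; the sketch only shows how the fitted parameters turn attribute ratings into satisfaction-category probabilities.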
Procedia PDF Downloads 117
3004 The Relationship of Socioeconomic Status and Levels of Delinquency among Senior High School Students with Secured Attachment to Their Mothers
Authors: Aldrin Avergas, Quennie Mariel Peñaranda, Niña Karen San Miguel, Alexis Katrina Agustin, Peralta Xusha Mae, Maria Luisa Sison
Abstract:
The research is entitled “The Relationship of Socioeconomic Status and Levels of Delinquency among Senior High School Students with Secured Attachment to their Mothers”. The researchers explored the relationship between socioeconomic status and delinquent tendencies among grade 11 students. The objective of the research is to discover whether delinquent behavior has a relationship with the current socio-economic status of an adolescent student who has a warm relationship with their mother. The researchers utilized three questionnaires to measure the three variables of the study, namely: (1) 1SEC 2012: The New Philippines Socioeconomic Classification System, used to establish the current socioeconomic status of the respondents; (2) the Self-Reported Delinquency – Problem Behavior Frequency Scale, used to determine each individual's frequency of engaging in delinquent behavior; and (3) the Inventory of Parent and Peer Attachment Revised (IPPA-R), used to determine the attachment style of the respondents. The researchers utilized a quantitative research design, specifically correlational research. The study concluded that there is no significant relationship between socioeconomic status and delinquency despite the fact that the participants had secured attachment to their mothers; hence, this research implies that delinquency is not just a problem for students belonging to the lower socio-economic status, and that even a warm and close relationship with their mothers is not sufficient for these students to be completely free from engaging in delinquent acts. There must be other factors (such as peer pressure, emotional quotient, or self-esteem) that might be contributing to delinquent behaviors. Keywords: adolescents, delinquency, high school students, secured attachment style, socioeconomic status
Procedia PDF Downloads 186
3003 Genetics of Atopic Dermatitis: Role of Cytokines Genes Polymorphisms
Authors: Ghaleb Bin Huraib, Fahad Al Harthi, Misbahul Arfin, Abdulrahman Al-Asmari
Abstract:
Atopic dermatitis (AD), also known as atopic eczema, is a chronic inflammatory skin disease characterized by severe itching and recurrent relapsing eczema-like skin lesions, affecting up to 20% of children and 10% of adults in industrialized countries. AD is a complex multifactorial disease, and its exact etiology and pathogenesis have not been fully elucidated. The aim of this study was to investigate the impact of gene polymorphisms of the T helper cell subtype Th1 and Th2 cytokines interferon-gamma (IFN-γ), interleukin-6 (IL-6) and transforming growth factor (TGF)-β1 on AD susceptibility in a Saudi cohort. One hundred four unrelated patients with AD and 195 healthy controls were genotyped for the IFN-γ (874A/T), IL-6 (174G/C) and TGF-β1 (509C/T) polymorphisms using ARMS-PCR and PCR-RFLP techniques. The frequencies of genotypes AA and AT of IFN-γ (874A/T) differed significantly among patients and controls (P 0.001). The genotype AT was increased while genotype AA was decreased in AD patients as compared to controls. AD patients also had a higher frequency of T-containing genotypes (AT+TT) than controls (P = 0.001). The frequencies of alleles T and A were statistically different in patients and controls (P = 0.04). The frequencies of genotype GG and allele G of IL-6 (174G/C) were significantly higher, while those of genotype GC and allele C were lower, in AD patients than in controls. There was no significant difference in the frequencies of alleles and genotypes of the TGF-β1 (509C/T) polymorphism between the patient and control groups. These results show that susceptibility to AD is influenced by the presence or absence of particular genotypes of the IFN-γ (874A/T) and IL-6 (174G/C) polymorphisms.
It is concluded that the T allele and T-containing genotypes (AT+TT) of the IFN-γ (874A/T) polymorphism and the G allele and GG genotype of the IL-6 (174G/C) polymorphism confer susceptibility to AD in Saudis. On the other hand, the TGF-β1 (509C/T) polymorphism may not be associated with AD risk in the Saudi population; however, further studies with larger sample sizes are required to confirm these findings. Keywords: atopic dermatitis, interferon-γ, interleukin-6, transforming growth factor-β1, polymorphism
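The allele-frequency comparisons behind results like these reduce to counting alleles from genotype counts (each homozygote contributes two copies of one allele, each heterozygote one of each) and testing a 2x2 case-control table. A minimal sketch with invented counts (not the study's data):

```python
# Hedged sketch of a case-control allele comparison: allele counts from
# genotype counts, plus a Pearson chi-square statistic for the 2x2 table.
# All counts below are invented for illustration.

def allele_counts(n_AA, n_AT, n_TT):
    """Each AA genotype contributes two A alleles, each AT one of each."""
    return {"A": 2 * n_AA + n_AT, "T": 2 * n_TT + n_AT}

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

cases = allele_counts(n_AA=30, n_AT=60, n_TT=14)   # hypothetical patients
ctrls = allele_counts(n_AA=90, n_AT=80, n_TT=25)   # hypothetical controls
chi2 = chi_square_2x2(cases["T"], cases["A"], ctrls["T"], ctrls["A"])
```

The ARMS-PCR and PCR-RFLP work supplies the genotype counts; the statistics above are the generic downstream step, not a reconstruction of the paper's exact analysis.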
Procedia PDF Downloads 118
3002 Family Cohesion, Social Networks, and Cultural Differences in Latino and Asian American Help Seeking Behaviors
Authors: Eileen Y. Wong, Katherine Jin, Anat Talmon
Abstract:
Background: Help-seeking behaviors are highly contingent on socio-cultural factors such as ethnicity. Both Latino and Asian Americans underutilize mental health services compared to their White American counterparts. This difference may be related to the composition of one’s social support system, which includes family cohesion and social networks. Previous studies have found that Latino families are characterized by higher levels of family cohesion and social support, and that Asian American families with greater family cohesion exhibit lower levels of help-seeking behaviors. While both are broadly considered collectivist communities, within-culture variability is also significant. Therefore, this study aims to investigate the relationship of help-seeking behaviors in the two cultures with levels of family cohesion and strength of social network. We also consider such relationships in light of previous traumatic events and diagnoses, particularly post-traumatic stress disorder (PTSD), to understand whether clinically diagnosed individuals differ in their strength of network and help-seeking behaviors. Method: An adult sample (N = 2,990) from the National Latino and Asian American Study (NLAAS) provided data on participants’ social network, family cohesion, likelihood of seeking professional help, and DSM-IV diagnoses. T-tests compared Latino American (n = 1,576) and Asian American respondents (n = 1,414) on strength of social network, level of family cohesion, and likelihood of seeking professional help. Linear regression models were used to estimate the probability of help-seeking behavior based on ethnicity, PTSD diagnosis, and strength of social network. Results: Help-seeking behavior was significantly associated with family cohesion and strength of social network.
It was found that higher frequency of expressing one’s feelings with family significantly predicted lower levels of help-seeking behaviors (β = -.072, p = .017), while higher frequency of spending free time with family significantly predicted higher levels of help-seeking behaviors (β = .129, p = .002) in the Asian American sample. Subjective importance of family relations compared to that of one’s peers also significantly predicted higher levels of help-seeking behaviors (β = .095, p = .011) in the Asian American sample. Frequency of sharing one’s problems with relatives significantly predicted higher levels of help-seeking behaviors (β = .113, p < .01) in the Latino American sample. A PTSD diagnosis did not have any significant moderating effect. Conclusion: Considering the underutilization of mental health services in Latino and Asian American minority groups, it is crucial to understand ways in which help-seeking behavior can be encouraged. Our findings suggest that different dimensions within family cohesion and social networks have differential impacts on help-seeking behavior. Given the multifaceted nature of family cohesion and its cultural relevance, the implications of our findings for theory and practice will be discussed. Keywords: family cohesion, social networks, Asian American, Latino American, help-seeking behavior
Procedia PDF Downloads 68
3001 Pros and Cons of Distance Learning in Europe and Perspective for the Future
Authors: Aleksandra Ristic
Abstract:
The Coronavirus Disease 2019 pandemic hit Europe in February 2020, and infections took place in four waves. It left consequences and demanded changes for the future. More than half of European countries responded quickly by declaring a state of emergency and introducing various containment measures that have had a major impact on individuals’ lives in recent years. Public life was largely closed down by limiting access to and/or closing public institutions and services, including educational institutions. Classroom teaching was converted to distance learning. In the research, we used a quantitative study to analyze various factors of distance learning that influenced pupils in different segments: teachers’ availability, family support, fully online conference learning, successful distance learning, time for themselves, reliable sources, teachers’ feedback, online class participation, motivation, and teachers’ communication, along with a theoretical review of the importance of digital skills, the e-learning index, worldwide comparisons of past e-learning, and digital education plans for Europe. We have gathered recommendations and distance learning solutions to improve the learning process by strengthening teachers and creating more tiered strategies for setting and achieving learning goals with the children. Keywords: availability, digital skills, distance learning, resources
Procedia PDF Downloads 102
3000 Testing a Dose-Response Model of Intergenerational Transmission of Family Violence
Authors: Katherine Maurer
Abstract:
Background and purpose: Violence that occurs within families is a global social problem. Children who are victims of or witnesses to family violence are at risk of many negative effects, both proximally and distally. One of the most disconcerting long-term effects occurs when child victims become adult perpetrators: the intergenerational transmission of family violence (ITFV). Early identification of those children most at risk for ITFV is needed to inform interventions to prevent future family violence perpetration and victimization. Only about 25-30% of child family violence victims become perpetrators of adult family violence (either child abuse, partner abuse, or both). Prior research has primarily been conducted using dichotomous measures of exposure (yes/no) to predict ITFV, given the low incidence rate in community samples. It is often assumed that exposure to greater amounts of violence predicts greater risk of ITFV. However, no previous longitudinal study with a community sample has tested a dose-response model of exposure to physical child abuse and parental physical intimate partner violence (IPV) using count data on the frequency and severity of violence to predict adult ITFV. The current study used advanced statistical methods to test whether increased childhood exposure predicts greater risk of ITFV. Methods: The study utilized three panels of prospective data from a cohort of 15-year-olds (N=338) from the Project on Human Development in Chicago Neighborhoods longitudinal study. The data comprised a stratified probability sample of seven ethnic/racial categories and three socio-economic status levels. Structural equation modeling was employed to test a hurdle regression model of dose-response to predict ITFV. A version of the Conflict Tactics Scale was used to measure physical violence victimization, witnessing of parental IPV, and young adult IPV perpetration and victimization.
Results: Consistent with previous findings, past-12-month incidence rates for the severity and frequency of interpersonal violence were highly skewed. While rates of parental and young adult IPV were about 40%, an unusually high rate of physical child abuse (57%) was reported. For the vast majority of respondents, the number of acts of violence, whether minor or severe, was in the 1-3 range over the past 12 months. Reported frequencies of more than five times in the past year were rare, with less than 10% reporting more than six acts of minor or severe physical violence. As expected, minor acts of violence were much more common than acts of severe violence. Overall, the regression analyses were not significant for the dose-response model of ITFV. Conclusions and implications: The results of the dose-response model were not significant due to a lack of power in the final sample (N=338). Nonetheless, the value of the approach was confirmed for future research, given the bi-modal nature of the distributions, which suggests that in the context of both child physical abuse and physical IPV there are at least two classes when frequency of acts is considered. Taking frequency into account in predictive models may help to better understand the relationship of exposure to ITFV outcomes. Further testing using hurdle regression models is suggested. Keywords: intergenerational transmission of family violence, physical child abuse, intimate partner violence, structural equation modeling
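A hurdle model of the kind tested here splits the count outcome into two parts: a binary "hurdle" for whether any violence occurred at all, and a zero-truncated count distribution for how many acts occurred given at least one. A minimal probability-mass sketch, assuming a logit hurdle and a zero-truncated Poisson count part with illustrative parameters (the study fitted its model within an SEM framework, which this does not reproduce):

```python
import math

# Minimal hurdle-count sketch: P(Y=0) comes from the hurdle part alone;
# positive counts come from a Poisson renormalised over k >= 1.
# p_any and lam below are illustrative, not estimates from the study.

def hurdle_pmf(k, p_any, lam):
    """P(Y = k) under a logit hurdle + zero-truncated Poisson count part."""
    if k == 0:
        return 1.0 - p_any
    poisson_k = math.exp(-lam) * lam ** k / math.factorial(k)
    truncated = poisson_k / (1.0 - math.exp(-lam))  # renormalise over k >= 1
    return p_any * truncated
```

This two-part structure is what lets the model capture the bi-modal pattern in the results: the hurdle absorbs the large mass at zero while the truncated count part fits the 1-3 range where most reported acts fall.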
Procedia PDF Downloads 243
2999 Wrist Pain, Technological Device Used, and Perceived Academic Performance Among the College of Computer Studies Students
Authors: Maquiling Jhuvie Jane R., Ojastro Regine B., Peroja Loreille Marie B., Pinili Joy Angela, Salve Genial Gail M., Villavicencio Marielle Irene B., Yap Alther Francis Garth B.
Abstract:
Introduction: This study investigated the impact of prolonged device usage on wrist pain and perceived academic performance among college students in Computer Studies. The research aims to explore the correlation between the frequency of technological device use and the incidence of wrist pain, as well as how this pain affects students' academic performance. The study seeks to provide insights that could inform interventions to promote better musculoskeletal health, and thereby improved academic performance, among students engaged in intensive technology use. Method: The study utilized a descriptive-correlational and comparative design, focusing on bona fide students from Silliman University’s College of Computer Studies during the second semester of 2023-2024. Participants were recruited through a survey sent via school email, with responses collected until March 30, 2024. Data were gathered using a password-protected device and Google Forms, ensuring restricted access to raw data. The demographic profile was summarized, and the prevalence of wrist pain and device usage were analyzed using percentages and weighted means. Statistical analyses included Spearman’s rank correlation coefficient to assess the relationship between wrist pain and device usage, and an independent t-test to evaluate differences in academic performance based on the presence of wrist pain. Alpha was set at 0.05. Results: The study revealed that 40% of College of Computer Studies students experience wrist pain, i.e., 2 out of every 5 students. Laptops and desktops were the most frequently used devices for academic work, with a weighted mean of 4.511, while mobile phones and tablets received lower means of 4.183 and 1.911, respectively. The average academic performance score among students was 29.7, classified as ‘Good Performance.’ Notably, there was no significant relationship between the frequency of device usage and wrist pain, as indicated by p-values exceeding 0.05.
However, a significant difference in perceived academic performance was observed: students without wrist pain scored an average of 30.39 compared to 28.72 for those with wrist pain, with a p-value of 0.0134 confirming this distinction. Conclusion: The study revealed that about 40% of students in the College of Computer Studies experience wrist pain, but there is no significant link between device usage and pain occurrence. However, students without wrist pain demonstrated better perceived academic performance than those with pain, suggesting that wrist health may impact academic success. These findings imply that physical therapy practices in the Philippines should focus on preventive strategies and ergonomic education to improve student health and performance. Keywords: wrist pain, frequency of use of technological devices, perceived academic performance, physical therapy
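Spearman's rank correlation, the main association test used above, is simply the Pearson correlation computed on the ranks of the two variables, with tied observations assigned their average rank. A small self-contained sketch of that computation (data would be the per-student device-use and pain scores; nothing here reproduces the study's dataset):

```python
# Illustrative pure-Python Spearman's rank correlation: rank both variables
# (average ranks for ties), then take the Pearson correlation of the ranks.

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                            # extend the run of tied values
        avg = (i + j) / 2.0 + 1.0             # average rank for the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    rx, ry = ranks(xs), ranks(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because it works on ranks, the coefficient is appropriate for the ordinal frequency-of-use ratings collected in the survey, where a plain Pearson correlation would assume interval-scaled data.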
Procedia PDF Downloads 14
2998 Optimization of the Self-Recognition Direct Digital Radiology Technology by Applying the Density Detector Sensors
Authors: M. Dabirinezhad, M. Bayat Pour, A. Dabirinejad
Abstract:
In 2020, the Self-Recognition Direct Digital Radiology (SDDR) technology was introduced to solve some of the deficiencies of direct digital radiology. SDDR is an invention capable of capturing dental images without human intervention, and it was invented by the authors of this paper. Adjusting the radiology wave dose is part of the tasks of dentists, radiologists, and dental nurses during the radiology photography process. In this paper, an improvement is added that enables SDDR to set a suitable radiology wave dose according to the density and age of each patient automatically. Separate sensors will be included in the sensor package to use ultrasonic waves to detect the density of the teeth and adjust the wave dose accordingly. This facilitates the process of dental photography in terms of time and enhances the accuracy of choosing the correct wave dose for each patient separately. Since radiology waves are well known to trigger diseases such as cancer, choosing the most suitable wave dose can help to decrease the side effects on human health; in other words, it decreases the exposure for patients. On the other hand, due to the time saved, less energy is consumed, and saving energy helps to decrease the environmental impact as well. Keywords: dental direct digital imaging, environmental impacts, SDDR technology, wave dose
Procedia PDF Downloads 194
2997 Development of an EEG-Based Real-Time Emotion Recognition System on Edge AI
Authors: James Rigor Camacho, Wansu Lim
Abstract:
Over the last few years, the development of new wearable and processing technologies has accelerated in order to harness physiological data such as electroencephalograms (EEGs) for EEG-based applications. EEG has been demonstrated to be a source of emotion recognition signals with the highest classification accuracy among physiological signals. However, when emotion recognition systems are used for real-time classification, the training unit is frequently left to run offline or in the cloud rather than working locally on the edge. That strategy has hampered research, and the full potential of using an edge AI device has yet to be realized. Edge AI devices are high-performance computers that can process complex algorithms. They are capable of collecting, processing, and storing data on their own, and can also run complicated algorithms such as localization, detection, and recognition in real-time applications, making them powerful embedded devices. The NVIDIA Jetson series, specifically the Jetson Nano device, was used in the implementation. The cEEGrid, which is integrated into the open-source brain-computer interface platform (OpenBCI), is used to collect EEG signals. An EEG-based real-time emotion recognition system on edge AI is proposed in this paper. To perform graphical spectrogram categorization of EEG signals and to predict emotional states based on input data properties, machine learning-based classifiers were used. The EEG signals were analyzed using the K-Nearest Neighbor (KNN) technique, a supervised learning method, until the emotional state was identified. In EEG signal processing, each EEG signal received in real time is translated from the time domain to the frequency domain using the Fast Fourier Transform (FFT) technique, allowing the frequency bands in each EEG signal to be observed. To appropriately represent the variance of each EEG frequency band, the power density, standard deviation, and mean are calculated and employed.
The next stage is to identify the features that have been chosen to predict emotion in EEG data using the K-Nearest Neighbors (KNN) technique. Arousal and valence datasets are used to train the parameters defined by the KNN technique. Because classification and recognition of specific classes, as well as emotion prediction, are conducted both online and locally on the edge, the KNN technique increased the performance of the emotion recognition system on the NVIDIA Jetson Nano. Finally, this implementation aims to bridge the research gap on cost-effective and efficient real-time emotion recognition using a resource-constrained hardware device like the NVIDIA Jetson Nano. EEG-based emotion identification on the cutting edge of AI can be employed in applications that rapidly expand its use in research and industry. Keywords: edge AI device, EEG, emotion recognition system, supervised learning algorithm, sensors
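The processing chain described above (frequency-domain transform, summary features, KNN vote) can be sketched in a few dozen lines. This is a hedged toy version, not the authors' Jetson implementation: a naive DFT stands in for the FFT, the features are the mean, standard deviation, and total spectral power named in the abstract, and the classifier is a plain k-nearest-neighbour majority vote.

```python
import math
from collections import Counter

# Toy sketch of the pipeline: naive DFT power spectrum -> summary features
# -> KNN majority vote. A real system would use an FFT library and proper
# per-band power limits; everything here is illustrative.

def power_spectrum(signal):
    n = len(signal)
    powers = []
    for k in range(n // 2):  # one-sided spectrum
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        powers.append((re * re + im * im) / n)
    return powers

def features(signal):
    mean = sum(signal) / len(signal)
    std = (sum((x - mean) ** 2 for x in signal) / len(signal)) ** 0.5
    return [mean, std, sum(power_spectrum(signal))]

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs; majority vote of the
    k nearest training points by Euclidean distance."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda fv_lab: dist(fv_lab[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

In the paper's setting, `train` would hold feature vectors from the arousal and valence datasets; on the edge device the same prediction step runs on each incoming windowed EEG signal.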
Procedia PDF Downloads 105
2996 Comparative Study of Free Vibrational Analysis and Modes Shapes of FSAE Car Frame Using Different FEM Modules
Authors: Rajat Jain, Himanshu Pandey, Somesh Mehta, Pravin P. Patil
Abstract:
Formula SAE cars are student-designed and fabricated formula prototype cars, designed according to SAE INTERNATIONAL design rules, which compete in various national and international events. This paper presents an FEM-based comparative study of the free vibration analysis and mode shapes of a formula prototype car chassis frame. Tubing sections of different diameters, as per the design rules, are designed in such a manner that the desired strength can be achieved. The natural frequencies of the first five modes were determined using the finite element analysis method. SOLIDWORKS is used for designing the frame structure, and SOLIDWORKS SIMULATION and ANSYS WORKBENCH 16.2 are used for the modal analysis. Mode shape results from ANSYS and SOLIDWORKS were compared. Fixed-fixed boundary conditions are used for fixing the A-arm wishbones. The simulation results were compared for the validation of the study. The first five modes were compared, and the results were found to be within the permissible limits. The AISI 4130 (chromoly, chromium-molybdenum steel) material is used, and the chassis frame is discretized with a fine QUAD mesh followed by fixed-fixed boundary conditions. The natural frequencies of the chassis frame lie between 53.92 and 125.5 Hz as per the ANSYS results, which is within the permissible limits. The study concludes with a lightweight and compact chassis frame that does not compromise strength. This design allows the fabrication of a compact, dynamically stable, simple, and lightweight tubular chassis frame with high strength and extremely safe driver ergonomics. Keywords: FEM, modal analysis, formula SAE cars, chassis frame, Ansys
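What the FEM modal solvers compute is, at heart, the generalized eigenvalue problem det(K - ω²M) = 0 for the assembled stiffness and mass matrices; the natural frequencies are f = ω/(2π). The chassis model needs the full finite element mesh, but the idea can be sketched on a 2-DOF lumped spring-mass chain, where the determinant is a quadratic in ω² and solvable in closed form. All parameter values here are illustrative:

```python
import math

# 2-DOF sketch of a modal solve, not the chassis model: ground -k1- m1 -k2- m2,
# so K = [[k1+k2, -k2], [-k2, k2]] and M = diag(m1, m2).
# det(K - w^2 M) = m1*m2*w^4 - (m1*k2 + m2*(k1+k2))*w^2 + k1*k2 = 0.

def natural_frequencies_2dof(m1, m2, k1, k2):
    """Return the two natural frequencies in Hz, lowest first."""
    a = m1 * m2
    b = -(m1 * k2 + m2 * (k1 + k2))
    c = k1 * k2
    disc = math.sqrt(b * b - 4 * a * c)
    w2 = sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])  # omega^2 roots
    return [math.sqrt(w) / (2 * math.pi) for w in w2]            # f = w / 2*pi
```

ANSYS and SOLIDWORKS SIMULATION do the same thing at scale: assemble K and M from the meshed tube geometry and material (AISI 4130) properties, apply the fixed-fixed constraints, and extract the lowest eigenpairs, whose eigenvectors are the mode shapes compared in the paper.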
Procedia PDF Downloads 347
2995 Detection of the Effectiveness of Training Courses and Their Limitations Using CIPP Model (Case Study: Isfahan Oil Refinery)
Authors: Neda Zamani
Abstract:
The present study aimed to investigate the effectiveness of training courses and their limitations using the CIPP model. The investigation was done at Isfahan Refinery as a case study. In terms of purpose, the present paper is applied research; in terms of data gathering, it is descriptive field-survey research. The population of the study included participants in training courses, their supervisors, and experts of the training department. Probability-proportional-to-size (PPS) sampling was used. The sample comprised 195 participants in training courses, 30 supervisors, and 11 individuals from the training experts’ group. To collect data, a questionnaire designed by the researcher and a semi-structured interview were used. The content validity of the instrument was confirmed by training management experts, and reliability was established with a Cronbach’s alpha of 0.92. To analyze the data, descriptive statistics (tables, frequency, frequency percentage, and mean) and inferential statistics (Mann-Whitney and Wilcoxon tests, and the Kruskal-Wallis test to determine the significance of differences among the groups’ opinions) were applied. Results of the study indicated that all groups, i.e., participants, supervisors, and training experts, absolutely believe in the importance of training courses; however, participants in training courses regard content, teacher, atmosphere and facilities, training process, managing process, and product to be at a relatively appropriate level. The supervisors also regard the output to be at a relatively appropriate level, but training experts regard content, teacher, and managing processes to be at an appropriate, higher-than-average level. Keywords: training courses, limitations of training effectiveness, CIPP model, Isfahan oil refinery company
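The reliability figure reported above, Cronbach's alpha, is computed as α = k/(k-1) · (1 - Σσ²ᵢ/σ²ₜ), where k is the number of items, σ²ᵢ are the per-item score variances, and σ²ₜ is the variance of respondents' total scores. A minimal sketch over a respondents-by-items matrix (the response data here are invented, not the study's questionnaire responses):

```python
# Illustrative Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances)
# / variance of total scores), computed over a respondents x items matrix.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(rows):
    """rows: one list of item scores per respondent."""
    k = len(rows[0])
    items = list(zip(*rows))                 # transpose to per-item columns
    item_var = sum(variance(list(col)) for col in items)
    total_var = variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_var / total_var)
```

Perfectly correlated items give α = 1, and values near the study's 0.92 indicate high internal consistency of the questionnaire.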
Procedia PDF Downloads 75
2994 Low Frequency Ultrasonic Degassing to Reduce Void Formation in Epoxy Resin and Its Effect on the Thermo-Mechanical Properties of the Cured Polymer
Authors: A. J. Cobley, L. Krishnan
Abstract:
The demand for multi-functional lightweight materials in sectors such as automotive, aerospace, and electronics is growing, and for this reason fibre-reinforced epoxy polymer composites are being widely utilized. The fibre reinforcing material is mainly responsible for the strength and stiffness of the composites, whilst the main role of the epoxy polymer matrix is to distribute the applied load among the fibres as well as to protect the fibres from harmful environmental conditions. The superior properties of fibre-reinforced composites are achieved by combining the best properties of both constituents. Although factors such as the chemical nature of the epoxy and how it is cured will have a strong influence on the properties of the epoxy matrix, the method of mixing and degassing the resin can also have a significant impact. The production of a fibre-reinforced epoxy polymer composite will usually begin with the mixing of the epoxy pre-polymer with a hardener and accelerator. Mechanical methods of mixing are often employed for this stage, but such processes naturally introduce air into the mixture, which, if it becomes entrapped, will lead to voids in the subsequent cured polymer. Therefore, degassing is normally utilised after mixing, often by placing the epoxy resin mixture in a vacuum chamber. Although this is reasonably effective, it is an additional process stage, and if a method of mixing could be found that degassed the resin mixture at the same time, this would lead to shorter production times, more effective degassing and fewer voids in the final polymer. In this study the effects of four different methods for mixing and degassing the pre-polymer with hardener and accelerator were investigated. The first two methods were manual stirring and magnetic stirring, both followed by vacuum degassing. The other two techniques were ultrasonic mixing/degassing using a 40 kHz ultrasonic bath and a 20 kHz ultrasonic probe.
The cured cast resin samples were examined under scanning electron microscope (SEM), optical microscope, and Image J analysis software to study morphological changes, void content and void distribution. Three point bending test and differential scanning calorimetry (DSC) were also performed to determine the thermal and mechanical properties of the cured resin. It was found that the use of the 20 kHz ultrasonic probe for mixing/degassing gave the lowest percentage voids of all the mixing methods in the study. In addition, the percentage voids found when employing a 40 kHz ultrasonic bath to mix/degas the epoxy polymer mixture was only slightly higher than when magnetic stirrer mixing followed by vacuum degassing was utilized. The effect of ultrasonic mixing/degassing on the thermal and mechanical properties of the cured resin will also be reported. The results suggest that low frequency ultrasound is an effective means of mixing/degassing a pre-polymer mixture and could enable a significant reduction in production times. Keywords: degassing, low frequency ultrasound, polymer composites, voids
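Image-analysis void-content measurements of the kind done here with Image J reduce to thresholding a micrograph and taking the fraction of pixels classified as void. A toy sketch on a tiny grey-level grid (the "image" and threshold are made up; real analysis works on calibrated micrographs):

```python
# Illustrative void-content measurement: on a thresholded micrograph, void
# content is the fraction of pixels darker than the chosen grey threshold.

def void_fraction(image, threshold):
    """image: 2D list of grey levels; pixels below the threshold count as
    void, everything else as resin."""
    void = total = 0
    for row in image:
        for px in row:
            total += 1
            if px < threshold:
                void += 1
    return void / total

# Hypothetical 3x3 patch: two dark (void) pixels among bright resin pixels.
sample = [[200, 210, 40],
          [195, 30, 205],
          [220, 215, 208]]
pct_voids = 100.0 * void_fraction(sample, threshold=100)
```

Comparing this percentage across the four mixing/degassing methods is what ranks the 20 kHz probe lowest in voids in the study.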
2993 Computational Analysis and Daily Application of the Key Neurotransmitters Involved in Happiness: Dopamine, Oxytocin, Serotonin, and Endorphins
Authors: Hee Soo Kim, Ha Young Kyung
Abstract:
Happiness and pleasure result from dopamine, oxytocin, serotonin, and endorphin levels in the body. In order to increase the levels of these four neurochemicals, it is important to associate daily activities with their corresponding neurochemical releases. This includes setting goals, maintaining social relationships, laughing frequently, and exercising regularly. The likelihood of experiencing happiness increases when all four neurochemicals are released at optimal levels. Achieving happiness is important because it improves health, productivity, and the ability to overcome adversity. To understand how emotions are processed, electrical brain waves, brain structure, and neurochemicals must be analyzed. This research uses Chemcraft and Avogadro to determine the theoretical and chemical properties of the four neurochemical molecules. Each molecule's thermodynamic stability is calculated to assess its efficiency. The study found that among dopamine, oxytocin, serotonin, and alpha-, beta-, and gamma-endorphin, beta-endorphin has the lowest optimized energy, 388.510 kJ/mol. Beta-endorphin, a neurotransmitter involved in mitigating pain and stress, is thus the most thermodynamically stable and efficient molecule involved in the process of happiness. Through examining such properties of happiness neurotransmitters, the science of happiness is better understood.
Keywords: happiness, neurotransmitters, positive psychology, dopamine, oxytocin, serotonin, endorphins
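The comparison step described above, selecting the most thermodynamically stable molecule as the one with the lowest optimized energy, can be sketched as follows. Only the beta-endorphin value (388.510 kJ/mol) is given in the abstract; every other energy below is a placeholder for illustration, not a computed result.

```python
# Optimized energies in kJ/mol. Only beta-endorphin's value comes from the
# abstract; the rest are hypothetical placeholders.
energies = {
    "dopamine": 500.0,          # placeholder
    "oxytocin": 600.0,          # placeholder
    "serotonin": 450.0,         # placeholder
    "alpha-endorphin": 420.0,   # placeholder
    "beta-endorphin": 388.510,  # reported in the abstract
    "gamma-endorphin": 410.0,   # placeholder
}

# Lowest optimized energy -> most thermodynamically stable candidate.
most_stable = min(energies, key=energies.get)
print(most_stable)  # beta-endorphin
```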
2992 Infant and Young Child-Feeding Practices in Mongolia
Authors: Otgonjargal Damdinbaljir
Abstract:
Background: Infant feeding practices play a major role in determining the nutritional status of children and are associated with household socioeconomic and demographic factors. In 2010, Mongolia used the WHO 2008 edition of Indicators for Assessing Infant and Young Child Feeding Practices for the first time. Objective: To evaluate the feeding status of infants and young children under 2 years old in Mongolia. Materials and Methods: The study used cluster random sampling. Data on breastfeeding and complementary feeding of 350 infants and young children aged 0-23 months in 21 provinces across the 4 economic regions of the country and the capital, Ulaanbaatar, were collected through questionnaires. Feeding status was analyzed according to the WHO 2008 edition of Indicators for Assessing Infant and Young Child Feeding Practices. Analysis of data: Survey data were analysed using PASW Statistics 18.0 and EPI INFO 2000. For calculation of overall measures for the entire survey sample, analyses were stratified by region. Age-specific feeding patterns were described using frequencies, proportions, and survival analysis. Logistic regression was performed with feeding practice as the dependent variable and socio-demographic factors as independent variables. Simple proportions were calculated for each IYCF indicator. Differences in feeding practices between sexes and age groups, if any, were assessed using the chi-square test. Ethics: The Ethics Committee under the auspices of the Ministry of Health approved the study. Results: A total of 350 children aged 0-23 months were investigated. The rate of ever breastfeeding among children aged 0-23 months reached 98.2%, while the percentage of early initiation of breastfeeding was only 85.5%. The rates of exclusive breastfeeding under 6 months, continued breastfeeding for 1 year, and continued breastfeeding for 2 years were 71.3%, 74%, and 54.6%, respectively.
The median age of introduction of complementary food was the 6th month, and the median weaning age was the 9th month. The rate of timely introduction of complementary food at 6-8 months was 80.3%. The rates of minimum dietary diversity, minimum meal frequency, and consumption of iron-rich or iron-fortified foods among children aged 6-23 months were 52.1%, 80.8% (663/813), and 30.1%, respectively. Conclusions: The main problems revealed by the study were the inadequate variety and frequency of complementary foods and the low rate of consumption of iron-rich or iron-fortified foods; these are the main infant-feeding issues to be addressed in Mongolia. Our findings highlight the need to encourage mothers to enrich their traditional wheat-based complementary foods by adding more animal-source foods and vegetables.
Keywords: complementary feeding, early initiation of breastfeeding, exclusive breastfeeding, minimum meal frequency
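The methods above mention a chi-square test of feeding practices between sexes. A minimal sketch of the underlying Pearson statistic for a 2x2 table, using hypothetical counts (not the survey's actual data):

```python
def chi2_2x2(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]]: rows = groups, columns = practiced / not."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# Hypothetical counts: rows = boys / girls, columns = indicator met / not met.
stat = chi2_2x2(90, 85, 95, 80)
print(round(stat, 2))  # ~0.29; compare against chi-square critical value, df = 1
```

With 1 degree of freedom, a statistic this small falls well below the 5% critical value (3.84), i.e. no significant difference for these illustrative counts; in practice one would use a library routine such as scipy.stats.chi2_contingency, which also returns the p-value.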