Search results for: magnitude
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 833

143 Magnitude of Transactional Sex and Its Determinant Factors Among Women in Sub-Saharan Africa: Systematic Review and Meta-Analysis

Authors: Gedefaye Nibret Mihretie

Abstract:

Background: Transactional sex is casual sex between two people in which material incentives are received in exchange for sexual favors. Transactional sex is associated with negative consequences, which increase the risk of sexually transmitted diseases, including HIV/AIDS, unintended pregnancy, unsafe abortion, and psychological trauma. Many primary studies in Sub-Saharan Africa have assessed the prevalence and associated factors of transactional sex among women, but their results show great discrepancies and inconsistencies. Hence, this systematic review and meta-analysis aimed to synthesize the pooled prevalence of transactional sex among women and its associated factors in Sub-Saharan Africa. Method: Cross-sectional studies were systematically searched from March 6, 2022, to April 24, 2022, using PubMed, Google Scholar, HINARI, the Cochrane Library, and grey literature. The pooled prevalence of transactional sex and its associated factors were estimated using the DerSimonian-Laird random-effects model. Stata (version 16.0) was used to analyze the data. The I-squared statistic was used to assess heterogeneity across studies. A funnel plot and Egger's test were used to check for publication bias. A subgroup analysis was performed to minimize the underlying heterogeneity, based on study year, source of data, sample size, and geographical location. Results: Four thousand one hundred thirty articles were retrieved from the databases. Thirty-two studies, including 108,075 participants, were included in the final systematic review. The pooled prevalence of transactional sex among women in Sub-Saharan Africa was 12.55% (95% CI: 9.59%, 15.52%). Educational status (OR = 0.48, 95% CI: 0.27, 0.69) was a protective factor against transactional sex, whereas alcohol use (OR = 1.85, 95% CI: 1.19, 2.52), early sexual debut (OR = 2.57, 95% CI: 1.17, 3.98), substance abuse (OR = 4.21, 95% CI: 2.05, 6.37), a history of sexual experience (OR = 4.08, 95% CI: 1.38, 6.78), physical violence (OR = 6.59, 95% CI: 1.17, 12.02), and sexual violence (OR = 3.56, 95% CI: 1.15, 8.27) were risk factors. Conclusion: The prevalence of transactional sex among women in Sub-Saharan Africa was high. Educational status, alcohol use, substance abuse, early sexual debut, a history of sexual experience, physical violence, and sexual violence were predictors of transactional sex. Governmental and other stakeholders should design interventions that reduce alcohol use, provide health information about the negative consequences of early sexual debut and substance abuse, reduce sexual violence, and promote gender equality through mass media, and these measures should be included in state policy.
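As an aside on the pooling step described above, the sketch below shows a minimal DerSimonian-Laird random-effects calculation in Python (the review itself used Stata 16.0); the study proportions and sample sizes in the example are hypothetical placeholders, not values from the review.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool study effect sizes with the DerSimonian-Laird random-effects model."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                       # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * effects) / np.sum(w)   # fixed-effect pooled estimate
    q = np.sum(w * (effects - fixed) ** 2)    # Cochran's Q heterogeneity statistic
    k = len(effects)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)        # between-study variance (DL estimator)
    w_star = 1.0 / (variances + tau2)         # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0   # I-squared, in percent
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2, i2

# Hypothetical study-level prevalence proportions and their variances p(1-p)/n
p = np.array([0.10, 0.08, 0.15, 0.12])
n = np.array([900, 1200, 750, 2000])
print(dersimonian_laird(p, p * (1 - p) / n))
```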

Keywords: women’s health, child health, reproductive health, midwifery

Procedia PDF Downloads 94
142 Artificial Intelligence-Aided Extended Kalman Filter for Magnetometer-Based Orbit Determination

Authors: Gilberto Goracci, Fabio Curti

Abstract:

This work presents a robust, light, and inexpensive algorithm to perform autonomous orbit determination using onboard magnetometer data in real-time. Magnetometers are low-cost and reliable sensors typically available on a spacecraft for attitude determination purposes, thus representing an interesting choice to perform real-time orbit determination without the need to add additional sensors to the spacecraft itself. Magnetic field measurements can be exploited by Extended/Unscented Kalman Filters (EKF/UKF) for orbit determination purposes to make up for GPS outages, yielding errors of a few kilometers and tens of meters per second in the position and velocity of a spacecraft, respectively. While this level of accuracy shows that Kalman filtering represents a solid baseline for autonomous orbit determination, it is not enough to provide a reliable state estimation in the absence of GPS signals. This work combines the solidity and reliability of the EKF with the versatility of a Recurrent Neural Network (RNN) architecture to further increase the precision of the state estimation. Deep learning models, in fact, can grasp nonlinear relations between the inputs, in this case the magnetometer data and the EKF state estimations, and the targets, namely the true position and velocity of the spacecraft. The model has been pre-trained on Sun-Synchronous Orbits (SSO) up to 2126 kilometers of altitude with different initial conditions and levels of noise to cover a wide range of possible real-case scenarios. The orbits have been propagated considering J2-level dynamics, and the geomagnetic field has been modeled using the International Geomagnetic Reference Field (IGRF) coefficients up to the 13th order. The training of the module can be completed offline using the expected orbit of the spacecraft to heavily reduce the onboard computational burden. Once the spacecraft is launched, the model can use the GPS signal, if available, to fine-tune the parameters on the actual orbit onboard in real-time and work autonomously during GPS outages. In this way, the provided module shows versatility, as it can be applied to any mission operating in SSO, while at the same time the training is completed, and eventually fine-tuned, on the specific orbit, increasing performance and reliability. The results provided by this study show an increase of one order of magnitude in the precision of the state estimate with respect to the use of the EKF alone. Tests on simulated and real data will be shown.
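The skeleton below sketches the generic EKF predict/update cycle that such a magnetometer-based estimator builds on; the dynamics and measurement functions shown are deliberately simplified placeholders (a random-walk state and a linear measurement), not the J2 propagation and IGRF field model used in the paper, and the RNN correction would be applied on top of the updated state.

```python
import numpy as np

def numerical_jacobian(func, x, eps=1e-6):
    """Finite-difference Jacobian of func at x."""
    fx = func(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x); dx[i] = eps
        J[:, i] = (func(x + dx) - fx) / eps
    return J

class EKF:
    def __init__(self, x0, P0, Q, R, f, h):
        self.x, self.P, self.Q, self.R, self.f, self.h = x0, P0, Q, R, f, h

    def predict(self):
        F = numerical_jacobian(self.f, self.x)           # linearized dynamics
        self.x = self.f(self.x)
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z):
        H = numerical_jacobian(self.h, self.x)           # linearized measurement model
        y = z - self.h(self.x)                           # innovation (magnetometer residual)
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)              # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P
        return self.x

# Toy usage with hypothetical models: identity dynamics, first three states measured
f = lambda x: x
h = lambda x: x[:3]
ekf = EKF(x0=np.zeros(6), P0=np.eye(6), Q=1e-3 * np.eye(6), R=1e-2 * np.eye(3), f=f, h=h)
ekf.predict()
print(ekf.update(np.array([0.1, -0.2, 0.05])))
```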

Keywords: artificial intelligence, extended Kalman filter, orbit determination, magnetic field

Procedia PDF Downloads 105
141 Work Related Musculoskeletal Disorder: A Case Study of Office Computer Users in Nigerian Content Development and Monitoring Board, Yenagoa, Bayelsa State, Nigeria

Authors: Tamadu Perry Egedegu

Abstract:

Rapid growth in the use of electronic data systems has affected both employees and the workplace. Experience shows that jobs with multiple risk factors have a greater likelihood of causing Work-Related Musculoskeletal Disorders (WRMSDs), depending on the duration, frequency and/or magnitude of exposure to each; it is therefore important that ergonomic risk factors be considered in light of their combined effect in causing or contributing to WRMSDs. Awkward posture and long hours in front of visual display terminals can result in WRMSDs. This study investigated musculoskeletal disorders among office workers and contributes to awareness creation on the causes and consequences of WRMSDs arising from a lack of ergonomics training. The study was conducted using an observational cross-sectional design. A sample of 109 respondents was drawn from the target population through a purposive sampling method. The sources of data were both primary and secondary: primary data were collected through questionnaires, and secondary data were sourced from journals, textbooks, and internet materials. Questionnaires were the main instrument for data collection and were designed in a YES or NO format according to the study objectives. Content validity approval was used to ensure that the variables were adequately covered. The reliability of the instrument was established through the test-retest method, yielding a reliability index of 0.84. The data collected from the field were analyzed with descriptive statistics (charts, percentages, and means). The study found that the most affected body regions were the upper back, followed by the lower back, neck, wrist, shoulder and eyes, while the least affected body parts were the knee, calf and ankle. Furthermore, the prevalence of work-related musculoskeletal disorders was linked with long working hours (6-8 hrs per day), lack of back support on seats, glare on the monitor, inadequate regular breaks, and repetitive motion of the upper limbs and wrist when using the computer. Finally, based on these findings, recommendations were made to reduce the prevalence of WRMSDs among office workers.

Keywords: work related musculoskeletal disorder, Nigeria, office computer users, ergonomic risk factor

Procedia PDF Downloads 241
140 Media Coverage on Child Sexual Abuse in Developing Countries

Authors: Hayam Qayyum

Abstract:

Print and broadcast media are considered to be among the most powerful agents of social change and an effective medium for transforming society into a civilized, responsible, and composed one. Besides its other major roles, an imperative role of the media is to highlight human rights violations in order to provide awareness and to protect society from social evils and injustice. By pointing out such wrongs, the media can lessen the magnitude of these happenings within society. For centuries, the 'silent crime' of Child Sexual Abuse (CSA) has been engulfing developing countries. This study explores how appropriate print and broadcast media coverage can help eliminate child sexual abuse from society. An immense challenge faced by journalists today is accurate and ethical reporting and appropriate coverage that discloses the facts and delivers the right message at the right time to lessen social evils in developing countries, without harming the dignity of the victim. In the case of CSA, most victims and their families do not want to expose their children to the media because of family norms and respect in society. The media should focus on in-depth information about CSA and use this coverage to draw the attention of the concerned authorities to the matter, so that reforms and reviews of the system can follow. Moreover, the media, as a change agent, can bring such issues to the knowledge of the international community so that collective efforts can be made with the affected country to eliminate the 'silent crime' from society. The model country selected for this research paper is South Africa. The purpose of this research is not only to examine the existing reporting patterns and content of South Africa's print and broadcast media coverage but also to create awareness aimed at eliminating child sexual abuse and, indirectly, improving the condition of stakeholders to overcome this social evil. The literature review method is used to formulate this paper. Trends in media content on CSA will be identified to establish the amount and nature of information made available to the public through the media. A general view of media coverage of child sexual abuse in developing countries such as India and Pakistan will also be considered. This research is limited to the role of print and broadcast media coverage in eliminating child sexual abuse in South Africa; in developing countries, the CSA issue needs to be addressed on an immediate basis. The study will explore the CSA content of the most influential broadcast and print media outlets of South Africa: broadcast media comprising TV channels and print media comprising influential newspapers.

Keywords: child sexual abuse, developing countries, print and broadcast media, South Africa

Procedia PDF Downloads 579
139 Residual Analysis and Ground Motion Prediction Equation Ranking Metrics for Western Balkan Strong Motion Database

Authors: Manuela Villani, Anila Xhahysa, Christopher Brooks, Marco Pagani

Abstract:

The geological structure of the Western Balkans is strongly affected by the collision between the Adria microplate and the southwestern Eurasia margin, resulting in a considerably active seismic region. The Harmonization of Seismic Hazard Maps in the Western Balkan Countries Project (BSHAP) (2007-2011, 2012-2015), supported by NATO, enabled the preparation of new seismic hazard maps of the Western Balkans, but when inspecting the seismic hazard models later produced by these countries at a national scale, significant differences in design PGA values are observed at the borders, for instance between northern Albania and Montenegro, or southern Albania and Greece. Considering that the catalogues were unified and the seismic sources were defined within the BSHAP framework, the differences evidently arise from the selection of Ground Motion Prediction Equations (GMPEs), which is generally the component with the highest impact on seismic hazard assessment. At the time of the project, only a modest database was available, namely 672 three-component records, whereas nowadays this strong motion database has grown considerably, up to 20,939 records with Mw ranging from 3.7 to 7 and epicentral distances from 0.47 km to 490 km. Statistical analysis of the strong motion database showed a lack of recordings in the moderate-to-large magnitude and short-distance ranges; therefore, there is a need to re-evaluate the GMPEs in light of the recently updated database and the new generations of ground motion models (GMMs). In some cases, it was observed that some events were more extensively documented in one database than in another, such as the 1979 Montenegro earthquake, which has a considerably larger number of records in the BSHAP analogue strong motion database than in ESM23. Therefore, the strong motion flat-file provided by the Harmonization of Seismic Hazard Maps in the Western Balkan Countries Project was merged with the ESM23 database for the polygon studied in this project. After performing the preliminary residual analysis, the candidate GMPEs were identified. This process was done using the GMPE performance metrics available within the SMT in the OpenQuake platform. The likelihood model and the Euclidean Distance-Based Ranking (EDR) were used. Finally, a GMPE logic tree was selected for this study and, following the selection of candidate GMPEs, model weights were assigned using the average sample log-likelihood approach of Scherbaum.
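As a rough illustration of the ranking step, the sketch below computes the average sample log-likelihood (LLH) of a GMPE from normalized total residuals and converts LLH scores into logic-tree weights, in the spirit of the Scherbaum approach; the residual sets are synthetic placeholders, and the actual study relied on the metrics implemented in the OpenQuake SMT.

```python
import numpy as np
from scipy.stats import norm

def llh_score(obs_ln, pred_ln, sigma_ln):
    """Average sample log-likelihood of a GMPE (lower is a better fit).

    obs_ln, pred_ln: natural-log observed and predicted intensity measures.
    sigma_ln: total standard deviation of the GMPE in log units.
    """
    z = (obs_ln - pred_ln) / sigma_ln           # normalized total residuals
    return -np.mean(np.log2(norm.pdf(z)))

def llh_weights(llh_values):
    """Turn LLH scores of candidate GMPEs into normalized logic-tree weights."""
    llh_values = np.asarray(llh_values, dtype=float)
    w = 2.0 ** (-llh_values)
    return w / w.sum()

# Hypothetical residual sets for three candidate GMPEs (wider spread = worse fit)
rng = np.random.default_rng(0)
scores = [llh_score(rng.normal(0, s, 500), 0.0, 1.0) for s in (1.0, 1.2, 1.5)]
print(scores, llh_weights(scores))
```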

Keywords: residual analysis, GMPE, Western Balkans, strong motion, OpenQuake

Procedia PDF Downloads 88
138 Understanding the Reasons for Flooding in Chennai and Strategies for Making It Flood Resilient

Authors: Nivedhitha Venkatakrishnan

Abstract:

Flooding in urban areas in India has become an almost ritual phenomenon and a nightmare for most cities, a consequence of man-made disruption resulting in disaster. City planning in India falls short of withstanding hydro-generated disasters. This has become a barrier and a challenge to the development driven by urbanization, high population density, expanding informal settlements, and environmental degradation from uncollected and untreated waste that flows into natural drains and water bodies; together these have disrupted natural mechanisms of hazard protection such as drainage channels, wetlands and floodplains. The magnitude and impact of the mishap were high because of the failure of the development policies, strategies and plans that the city had adopted. In the current scenario, cities are becoming the homes of the future, with economic diversification bringing more investment into cities, especially in the domains of urban infrastructure, planning and design. The uncertain urban futures of these low-elevation coastal zones face unprecedented risk and threat. The study focuses on three major pillars of resilience: Recover, Resist and Restore. This process of getting ready to handle the situation bridges the gap between disaster response management and risk reduction and requires a paradigm shift. The study involved qualitative research and a system design approach (framework). The initial stage involved mapping the urban water morphology with respect to spatial growth, which gave insight into the water bodies that have gone missing over the years during urbanization. The major finding of the study was that the missing links in the traditional water harvesting network were a major reason for the resulting man-made disaster. The research conceptualized a sponge-city framework that would guide growth through institutional frameworks at different levels. The next stage was understanding the implementation process at various stages to ensure the paradigm shift, and demonstrating the concepts at a neighborhood level: where and how each component works, and what its functions and benefits are. Design decisions were quantified in terms of rainwater harvesting and surface runoff: how much water is collected, and how it can be collected, stored and reused. The study closes with further recommendations for water mitigation spaces that would revive the traditional harvesting network.
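Where the abstract mentions quantifying harvested rainwater and surface runoff, a minimal sketch of that kind of back-of-the-envelope calculation is given below, using the rational method for peak runoff; all catchment and roof values are hypothetical and not taken from the Chennai study.

```python
def rational_runoff(c_runoff, intensity_mm_per_hr, area_km2):
    """Peak runoff (m^3/s) from the rational method: Q = 0.278 * C * i * A."""
    return 0.278 * c_runoff * intensity_mm_per_hr * area_km2

def harvest_volume(roof_area_m2, rainfall_mm, collection_eff=0.85):
    """Rainwater (m^3) that can be captured from a roof for a given rainfall depth."""
    return roof_area_m2 * (rainfall_mm / 1000.0) * collection_eff

# Hypothetical neighborhood: 2 km^2 catchment, 60 mm/h design storm, mixed land cover
print(rational_runoff(c_runoff=0.7, intensity_mm_per_hr=60, area_km2=2.0))  # ~23 m^3/s
print(harvest_volume(roof_area_m2=50_000, rainfall_mm=60))                  # ~2550 m^3
```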

Keywords: flooding, man-made disaster, resilient city, traditional harvesting network, water bodies

Procedia PDF Downloads 140
137 Data-Driven Surrogate Models for Damage Prediction of Steel Liquid Storage Tanks under Seismic Hazard

Authors: Laura Micheli, Majd Hijazi, Mahmoud Faytarouni

Abstract:

The damage reported by oil and gas industrial facilities revealed the acute vulnerability of steel liquid storage tanks to seismic events. The failure of steel storage tanks may yield devastating and long-lasting consequences on built and natural environments, including the release of hazardous substances, uncontrolled fires, and soil contamination with hazardous materials. It is, therefore, fundamental to reliably predict the damage that steel liquid storage tanks will likely experience under future seismic hazard events. The seismic performance of steel liquid storage tanks is usually assessed using vulnerability curves obtained from the numerical simulation of a tank under different hazard scenarios. However, the computational demand of high-fidelity numerical simulation models, such as finite element models, makes the vulnerability assessment of liquid storage tanks time-consuming and often impractical. As a solution, this paper presents a surrogate model-based strategy for predicting seismic-induced damage in steel liquid storage tanks. In the proposed strategy, the surrogate model is leveraged to reduce the computational demand of time-consuming numerical simulations. To create the data set for training the surrogate model, field damage data from past earthquake reconnaissance surveys and reports are collected. Features representative of steel liquid storage tank characteristics (e.g., diameter, height, liquid level, yielding stress) and seismic excitation parameters (e.g., peak ground acceleration, magnitude) are extracted from the field damage data. The collected data are then utilized to train a data-driven surrogate model that maps the relationship between tank characteristics, seismic hazard parameters, and seismic-induced damage. Different types of surrogate algorithms, including naïve Bayes, k-nearest neighbors, decision tree, and random forest, are investigated, and results in terms of accuracy are reported. The model that yields the most accurate predictions is employed to predict future damage as a function of tank characteristics and seismic hazard intensity level. Results show that the proposed approach can be used to estimate the extent of damage in steel liquid storage tanks, where the use of data-driven surrogates represents a viable alternative to computationally expensive numerical simulation models.
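A minimal sketch of the classifier comparison described above is shown below, assuming a scikit-learn workflow; the feature ranges and damage labels are synthetic placeholders, not the field reconnaissance data used by the authors.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Hypothetical feature matrix: [diameter (m), height (m), liquid fill ratio,
# yield stress (MPa), PGA (g), magnitude]; target: damage state class (0-3).
rng = np.random.default_rng(42)
X = rng.uniform([5, 5, 0.1, 200, 0.05, 4.5],
                [60, 25, 1.0, 400, 1.2, 8.0], size=(300, 6))
y = rng.integers(0, 4, size=300)          # placeholder labels, not field data

models = {
    "naive_bayes": GaussianNB(),
    "knn": KNeighborsClassifier(n_neighbors=5),
    "decision_tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```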

Keywords: damage prediction, data-driven model, seismic performance, steel liquid storage tanks, surrogate model

Procedia PDF Downloads 143
136 Experimental Studies of the Reverse Load-Unloading Effect on the Mechanical, Linear and Nonlinear Elastic Properties of n-AMg6/C60 Nanocomposite

Authors: Aleksandr I. Korobov, Natalia V. Shirgina, Aleksey I. Kokshaiskiy, Vyacheslav M. Prokhorov

Abstract:

The paper presents the results of an experimental study of the effect of reverse mechanical load-unloading on the mechanical, linear, and nonlinear elastic properties of the n-AMg6/C60 nanocomposite. Samples of the n-AMg6/C60 nanocomposite were obtained by grinding AMg6 polycrystalline alloy with 0.3 wt % of C60 fullerite in a planetary mill in an argon atmosphere. The resulting product consisted of 200-500-micron agglomerates of nanoparticles. The X-ray coherent scattering (CSL) method showed that the average nanoparticle size is 40-60 nm. The resulting preform was extruded at high temperature. The C60 fullerite modification interferes with the process of recrystallization at grain boundaries. In the samples of the n-AMg6/C60 nanocomposite, the loading curve was measured: the dependence of the mechanical stress σ on the strain ε of the sample during a multi-cycle load-unloading process up to its destruction. A hysteresis dependence σ = σ(ε) was observed, and an insignificant residual strain ε < 0.005 was recorded. At σ ≈ 500 MPa and ε ≈ 0.025, the sample was destroyed; the fracture of the sample was brittle. Microhardness was measured before and after the destruction of the sample, and it was found that the load-unloading process led to an increase in microhardness. The effect of reversible mechanical stress on the linear and nonlinear elastic properties of the n-AMg6/C60 nanocomposite was studied experimentally by an ultrasonic method on the automated Ritec RAM-5000 SNAP SYSTEM. In the n-AMg6/C60 nanocomposite, the velocities of the longitudinal and shear bulk waves were measured with the pulse method, and all the second-order elastic coefficients and their dependence on the magnitude of the reversible mechanical stress applied to the sample were calculated. The nonlinear elastic properties of the n-AMg6/C60 nanocomposite under reversible load-unloading of the sample were studied with the spectral method. At arbitrary values of the strain of the sample (up to its breakage), the dependence of the amplitude of the second longitudinal acoustic harmonic at a frequency of 2f = 10 MHz on the amplitude of the first harmonic at a frequency of f = 5 MHz was measured. Based on the results of these measurements, the values of the nonlinear acoustic parameter in the n-AMg6/C60 nanocomposite sample at different mechanical stresses were determined. The obtained results can be used in solid-state physics and materials science, and for the development of new techniques for nondestructive testing of structural materials using methods of nonlinear acoustic diagnostics. This study was supported by the Russian Science Foundation (project No. 14-22-00042).
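For reference, the spectral method described above is commonly reduced to the relation beta = 8*A2 / (k^2 * x * A1^2) for second-harmonic generation of a longitudinal wave; the sketch below evaluates this relation with hypothetical amplitudes, and when only relative changes under load are of interest, the ratio A2/A1^2 alone is often tracked.

```python
import numpy as np

def nonlinear_parameter(a1, a2, freq_hz, velocity_m_s, distance_m):
    """Quadratic acoustic nonlinearity parameter beta = 8*A2 / (k^2 * x * A1^2).

    a1, a2: amplitudes of the fundamental (f) and second harmonic (2f);
    freq_hz: fundamental frequency; velocity_m_s: longitudinal wave speed;
    distance_m: propagation distance in the sample.
    """
    k = 2.0 * np.pi * freq_hz / velocity_m_s      # wavenumber of the fundamental
    return 8.0 * a2 / (k ** 2 * distance_m * a1 ** 2)

# Hypothetical numbers: 5 MHz fundamental, 6000 m/s wave speed, 20 mm sample
print(nonlinear_parameter(a1=1e-9, a2=2e-12, freq_hz=5e6,
                          velocity_m_s=6000.0, distance_m=0.02))
```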

Keywords: nanocomposite, generation of acoustic harmonics, nonlinear acoustic parameter, hysteresis

Procedia PDF Downloads 151
135 Identifying Areas on the Pavement Where Rain Water Runoff Affects Motorcycle Behavior

Authors: Panagiotis Lemonakis, Theodoros Alimonakis, George Kaliabetsos, Nikos Eliou

Abstract:

It is well known that certain cross and longitudinal slopes have to be assured in order to achieve adequate rainwater runoff from the pavement. Selecting longitudinal slopes between the turning points of the vertical curves that meet the aforementioned requirement does not by itself ensure adequate drainage, because the same condition must also be satisfied along the transition curves. In this way, none of the slopes at the pavement edges (or at any other spot on the pavement) will be opposite to the longitudinal slope of the rotation axis. Horizontal and vertical alignment must be properly combined in order to form a road whose resultant slope does not take small values, and hence checks must be performed at every cross section and every chainage of the road. The present research investigates rainwater runoff from the road surface in order to identify the conditions under which areas of inadequate drainage are created, to analyze rainwater behavior in such areas, to provide design examples of good and bad drainage zones, and to identify motorcycle types that might encounter hazardous situations due to the presence of a water film between the pavement and both of their tires, resulting in loss of traction. Moreover, it investigates the combination of longitudinal and cross slope values in critical pavement areas. It should be pointed out that the drainage gradient is analytically calculated for the whole road width and not just as an oblique slope per chainage (the combination of longitudinal grade and cross slope). Lastly, various combinations of horizontal and vertical design are presented, indicating the crucial zones of poor pavement drainage. The key conclusion of the study is that any type of motorcycle will travel inside the area of improper runoff for a certain time frame, which depends on the speed and the trajectory that the rider chooses along the transition curve. Taking into account that on this section the rider will have to lean the motorcycle, and hence reduce the contact area of the tire with the pavement, it is apparent that any variation in the friction value due to the presence of a water film may lead to serious safety problems. Water runoff from the road pavement is improved when a crest curve, rather than a sag curve, is chosen between reverse longitudinal slopes, and particularly when its edges coincide with the edges of the horizontal curve. Lastly, the results of the investigation show that varying the longitudinal slope shifts the center of the poor water runoff area, and the extent of this area increases as the length of the transition curve increases.
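A minimal sketch of the resultant (oblique) drainage-gradient check implied above is shown below; the 0.5% minimum gradient used to flag poor drainage zones is an assumed threshold for illustration, not a value from the paper.

```python
import math

def resultant_drainage_gradient(longitudinal_slope, cross_slope):
    """Resultant (oblique) drainage gradient of the pavement surface,
    in the same units as the inputs (e.g. m/m)."""
    return math.hypot(longitudinal_slope, cross_slope)

# Hypothetical chainages inside a transition curve: 0.5% grade, cross slope
# rotating through 0% at the superelevation reversal point
for cross in (0.02, 0.01, 0.0):
    g = resultant_drainage_gradient(0.005, cross)
    flag = "OK" if g >= 0.005 else "poor drainage zone"   # 0.5% assumed minimum
    print(f"cross slope {cross:.3f} -> resultant {g:.4f} ({flag})")
```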

Keywords: drainage, motorcycle safety, superelevation, transition curves, vertical grade

Procedia PDF Downloads 100
134 Gas-Phase Nondestructive and Environmentally Friendly Covalent Functionalization of Graphene Oxide Paper with Amines

Authors: Natalia Alzate-Carvajal, Diego A. Acevedo-Guzman, Victor Meza-Laguna, Mario H. Farias, Luis A. Perez-Rey, Edgar Abarca-Morales, Victor A. Garcia-Ramirez, Vladimir A. Basiuk, Elena V. Basiuk

Abstract:

Direct covalent functionalization of prefabricated free-standing graphene oxide paper (GOP) is considered the only approach suitable for systematic tuning of the thermal, mechanical and electronic characteristics of this important class of carbon nanomaterials. At the same time, the traditional liquid-phase functionalization protocols can compromise the physical integrity of the paper-like material, up to its total disintegration. To avoid such undesirable effects, we explored the possibility of employing an alternative, solvent-free strategy for facile and nondestructive functionalization of GOP with two representative aliphatic amines, 1-octadecylamine (ODA) and 1,12-diaminododecane (DAD), as well as with two aromatic amines, 1-aminopyrene (AP) and 1,5-diaminonaphthalene (DAN). The functionalization was performed under moderate heating at 150-180 °C in vacuum. Under such conditions, it proceeds through both amidation and epoxy ring-opening reactions. Comparative characterization of pristine and amine-functionalized GOP mats was carried out by using Fourier-transform infrared, Raman, and X-ray photoelectron spectroscopy (XPS), thermogravimetric (TGA) and differential thermal analysis, and scanning electron and atomic force microscopy (SEM and AFM, respectively). Besides that, we compared the stability in water, wettability, electrical conductivity and elastic (Young's) modulus of GOP mats before and after amine functionalization. The highest content of organic species was obtained in the case of GOP-ODA, followed by the GOP-DAD, GOP-AP and GOP-DAN samples. The covalent functionalization increased the mechanical and thermal stability of GOP, as well as its electrical conductivity. The magnitude of each effect depends on the particular chemical structure of the amine employed, which allows for tuning a given GOP property. Morphological characterization using SEM showed that, compared to pristine graphene oxide paper, amine-modified GOP mats become relatively ordered layered assemblies, in which individual GO sheets are organized in a near-parallel pattern. Financial support from the National Autonomous University of Mexico (grants DGAPA-IN101118 and IN200516) and from the National Council of Science and Technology of Mexico (CONACYT, grant 250655) is greatly appreciated. The authors also thank David A. Domínguez (CNyN of UNAM) for XPS measurements and Dr. Edgar Alvarez-Zauco (Faculty of Science of UNAM) for the opportunity to use TGA equipment.

Keywords: amines, covalent functionalization, gas-phase, graphene oxide paper

Procedia PDF Downloads 182
133 The Moderating Role of Test Anxiety in the Relationships Between Self-Efficacy, Engagement, and Academic Achievement in College Math Courses

Authors: Yuqing Zou, Chunrui Zou, Yichong Cao

Abstract:

Previous research has revealed relationships between self-efficacy (SE), engagement, and academic achievement among students in Western countries, but these relationships remain unexplored in college math courses among college students in China. In addition, previous research has shown that test anxiety has a direct effect on engagement and academic achievement; however, how test anxiety affects the relationships between SE, engagement, and academic achievement is still unknown. In this study, the authors aimed to explore the mediating roles of behavioral engagement (BE), emotional engagement (EE), and cognitive engagement (CE) in the association between SE and academic achievement, and the moderating role of test anxiety, in college math courses. Our hypotheses were that the association between SE and academic achievement is mediated by engagement and that test anxiety plays a moderating role in this association. To explore the research questions, the authors collected data through self-reported surveys among 147 students at a northwestern university in China. The motivated strategies for learning questionnaire (MSLQ) (Pintrich, 1991), the metacognitive strategies questionnaire (Wolters, 2004), and the engagement versus disaffection with learning scale (Skinner et al., 2008) were used to assess SE, CE, and BE and EE, respectively. R software was used to analyze the data. The main analyses were reliability and validity analysis of the scales, descriptive statistics of the measured variables, correlation analysis, regression analysis, structural equation modeling (SEM), and moderated mediation analysis to examine the structural relationships between the variables simultaneously. The SEM analysis indicated that student SE was positively related to BE, EE, CE, and academic achievement, and that BE, EE, and CE were all positively associated with academic achievement. That is, as the authors expected, higher levels of SE led to higher levels of BE, EE, and CE and to greater academic achievement, and higher levels of BE, EE, and CE led to greater academic achievement. In addition, the moderated mediation analysis found that the path from SE to academic achievement in the model was significant, as expected, as was the moderating effect of test anxiety on the SE-achievement association. Specifically, test anxiety was found to moderate the association between SE and BE, the association between SE and CE, and the association between EE and achievement. The authors investigated the possible mediating effects of BE, EE, and CE in the association between SE and academic achievement, and all indirect effects were found to be significant. As for the magnitude of the mediations, behavioral engagement was the most important mediator in the SE-achievement association. This study has implications for college teachers, educators, and students in China regarding ways to promote academic achievement in college math courses, including increasing self-efficacy and engagement and lessening test anxiety toward math.
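To make the moderation logic concrete, the sketch below fits a moderated regression (an SE-by-anxiety interaction predicting behavioral engagement) and a first-stage mediation regression on simulated standardized scores; the study itself used R and full SEM, so this Python/statsmodels example is only an illustration with synthetic data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical standardized scores; not the authors' survey data (n = 147 in the study)
rng = np.random.default_rng(1)
n = 147
se = rng.normal(size=n)                     # self-efficacy
anxiety = rng.normal(size=n)                # test anxiety (moderator)
be = 0.5 * se - 0.2 * se * anxiety + rng.normal(scale=0.8, size=n)   # behavioral engagement
achievement = 0.4 * be + 0.3 * se + rng.normal(scale=0.8, size=n)

df = pd.DataFrame({"se": se, "anxiety": anxiety, "be": be, "ach": achievement})

# Moderation of the SE -> BE path: a significant se:anxiety term indicates that
# test anxiety changes the strength of the association.
mod = smf.ols("be ~ se * anxiety", data=df).fit()
# Second-stage of the mediation: BE carrying part of the SE -> achievement effect.
med = smf.ols("ach ~ be + se", data=df).fit()
print(mod.params, med.params, sep="\n")
```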

Keywords: academic engagement, self-efficacy, test anxiety, academic achievement, college math courses, behavioral engagement, cognitive engagement, emotional engagement

Procedia PDF Downloads 93
132 Thermoregulatory Responses of Holstein Cows Exposed to Intense Heat Stress

Authors: Rodrigo De A. Ferrazza, Henry D. M. Garcia, Viviana H. V. Aristizabal, Camilla De S. Nogueira, Cecilia J. Verissimo, Jose Roberto Sartori, Roberto Sartori, Joao Carlos P. Ferreira

Abstract:

Environmental factors adversely influence sustainability in livestock production systems. Among livestock industries, dairy herds are the most affected by heat stress. This clearly calls for the development of new heat mitigation strategies, which should be based on the physiological and metabolic adaptations of the animal. In this study, we incorporated the effect of climate variables and heat exposure time on the thermoregulatory responses in order to clarify the adaptive mechanisms of bovine heat dissipation under intense thermal stress induced experimentally in a climate chamber. Non-lactating Holstein cows were contemporaneously and randomly assigned to thermoneutral (TN; n=12) or heat stress (HS; n=12) treatments during 16 days. Vaginal temperature (VT) was measured every 15 min with a microprocessor-controlled data logger (HOBO®, Onset Computer Corporation, Bourne, MA, USA) attached to a modified vaginal controlled internal drug release insert (Sincrogest®, Ourofino, Brazil). Rectal temperature (RT), respiratory rate (RR) and heart rate (HR) were measured twice a day (0700 and 1500 h), and dry matter intake (DMI) was estimated daily. The ambient temperature and relative air humidity were 25.9±0.2°C and 73.0±0.8%, respectively, for TN, and 36.3±0.3°C and 60.9±0.9%, respectively, for HS. The respiratory rate of HS cows increased immediately after exposure to heat and was higher (76.02±1.70 bpm; P<0.001) than that of TN cows (39.70±0.71 bpm), followed by a rise in RT (39.87±0.07°C for HS versus 38.56±0.03°C for TN; P<0.001) and VT (39.82±0.10°C for HS versus 38.26±0.03°C for TN; P<0.001). A diurnal pattern was detected, with higher (P<0.01) afternoon temperatures than morning temperatures, and this effect was aggravated in HS cows. There was a decrease (P<0.05) in HR for HS cows (62.13±0.99 bpm) compared to TN (66.23±0.79 bpm), but the magnitude of the difference was not the same over time. From the third day, DMI decreased for HS cows in an attempt to maintain homeothermy, while TN cows increased DMI (8.27±0.33 kg d-1 for HS versus 14.03±0.29 kg d-1 for TN; P<0.001). Regression analysis showed that RT and RR best reflected the response of cows to changes in the Temperature Humidity Index, and that climate variables of the previous day had a greater influence on the physiological parameters and DMI than those of the current day, with ambient temperature being the most important factor. Comparison between acute (0 to 3 days) and chronic (13 to 16 days) exposure to heat stress showed a decrease in the slope of the regression equations for RR and DMI, suggesting an adaptive adjustment, although with no change for RT. In conclusion, intense heat stress exerted a strong influence on the thermoregulatory mechanisms, but the acclimation process was only partial.
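For readers unfamiliar with the Temperature Humidity Index referenced above, one commonly used formulation is sketched below and evaluated at the chamber conditions reported in the abstract; the choice of formula is an assumption, since the abstract does not state which THI equation was used.

```python
def temperature_humidity_index(temp_c, rel_humidity_pct):
    """THI = (1.8*T + 32) - (0.55 - 0.0055*RH) * (1.8*T - 26), T in deg C, RH in %."""
    return (1.8 * temp_c + 32) - (0.55 - 0.0055 * rel_humidity_pct) * (1.8 * temp_c - 26)

# Chamber conditions reported in the abstract
print(temperature_humidity_index(25.9, 73.0))   # thermoneutral treatment, ~76
print(temperature_humidity_index(36.3, 60.9))   # heat stress treatment, ~89
```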

Keywords: acclimation, bovine, climate chamber, hyperthermia, thermoregulation

Procedia PDF Downloads 218
131 Bringing the World to Net Zero Carbon Dioxide by Sequestering Biomass Carbon

Authors: Jeffrey A. Amelse

Abstract:

Many corporations aspire to become Net Zero Carbon Dioxide by 2035-2050. This paper examines what it will take to achieve those goals. Achieving Net Zero CO₂ requires an understanding of where energy is produced and consumed, the magnitude of CO₂ generation, and a proper understanding of the Carbon Cycle. The latter leads to the distinction between CO₂ and biomass carbon sequestration. Short reviews are provided for the technologies previously proposed for reducing CO₂ emissions from fossil fuels or substituting them with renewable energy, in order to focus on their limitations and to show that none offers a complete solution. Of these, CO₂ sequestration is poised to have the largest impact. It will just cost money, scale-up is a huge challenge, and it will not be a complete solution. CO₂ sequestration is still at the demonstration and semi-commercial scale. Transportation accounts for only about 30% of total U.S. energy demand, and renewables account for only a small fraction of that sector. Yet, bioethanol production consumes 40% of the U.S. corn crop, and biodiesel consumes 30% of U.S. soybeans. It is unrealistic to believe that biofuels can completely displace fossil fuels in the transportation market. Bioethanol is traced through its Carbon Cycle and shown to be both energy inefficient and an inefficient use of biomass carbon. Both biofuels and CO₂ sequestration reduce future CO₂ emissions from continued use of fossil fuels. They will not remove CO₂ already in the atmosphere. Planting more trees has been proposed as a way to reduce atmospheric CO₂. Trees are a temporary solution: when they complete their Carbon Cycle, they die and release their carbon as CO₂ to the atmosphere. Thus, planting more trees is just 'kicking the can down the road.' The only way to permanently remove CO₂ already in the atmosphere is to break the Carbon Cycle by growing biomass from atmospheric CO₂ and sequestering the biomass carbon. Sequestering tree leaves is proposed as a solution. Unlike wood, leaves have a short Carbon Cycle time constant: they renew and decompose every year. Allometric equations from the USDA indicate that, theoretically, sequestering only a fraction of the world's tree leaves can get the world to Net Zero CO₂ without disturbing the underlying forests. How can tree leaves be permanently sequestered? It may be as simple as rethinking how landfills are designed, to discourage instead of encourage decomposition. In traditional landfills, municipal waste undergoes rapid initial aerobic decomposition to CO₂, followed by slow anaerobic decomposition to methane and CO₂. The latter can take hundreds to thousands of years. The first step in anaerobic decomposition is hydrolysis of cellulose to release sugars, which, as those who have worked on cellulosic ethanol know, is challenging for a number of reasons. The key to permanent leaf sequestration may be keeping the landfills dry and exploiting known inhibitors for anaerobic bacteria.
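As a rough aid to the carbon-accounting argument above, the sketch below converts a sequestered dry-leaf mass into a CO2-equivalent figure using the 44/12 molar-mass ratio; the 50% carbon fraction is an assumed placeholder, not a value from the paper or from the USDA allometric equations.

```python
# All numbers below are illustrative placeholders, not values from the paper.
LEAF_CARBON_FRACTION = 0.5      # assumed carbon mass fraction of dry leaf biomass
CO2_PER_C = 44.0 / 12.0         # molar mass ratio converting carbon to CO2

def co2_offset_tonnes(dry_leaf_mass_tonnes):
    """CO2-equivalent permanently removed if this dry leaf mass is sequestered
    instead of being allowed to decompose back to the atmosphere."""
    return dry_leaf_mass_tonnes * LEAF_CARBON_FRACTION * CO2_PER_C

# Example: one tonne of dry leaves locks up roughly 1.8 tonnes of CO2
print(co2_offset_tonnes(1.0))
```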

Keywords: carbon dioxide, net zero, sequestration, biomass, leaves

Procedia PDF Downloads 129
130 Analysis of Epileptic Electroencephalogram Using Detrended Fluctuation and Recurrence Plots

Authors: Mrinalini Ranjan, Sudheesh Chethil

Abstract:

Epilepsy is a common neurological disorder characterised by the recurrence of seizures. Electroencephalogram (EEG) signals are complex biomedical signals which exhibit nonlinear and nonstationary behavior. We use two methods, 1) Detrended Fluctuation Analysis (DFA) and 2) Recurrence Plots (RP), to capture this complex behavior of EEG signals. DFA considers fluctuations from local linear trends, and the scale invariance of these signals is well captured in the multifractal characterisation it provides. Analysis of long-range correlations is vital for understanding the dynamics of EEG signals. Correlation properties in the EEG signal are quantified by the calculation of a scaling exponent. We report the existence of two scaling behaviours in the epileptic EEG signals, which quantify short- and long-range correlations. To illustrate this, we perform DFA on extant ictal (seizure) and interictal (seizure-free) datasets of different patients in different channels. We compute the short-term and long-term scaling exponents and report a decrease in the short-range scaling exponent during seizure compared to the pre-seizure period and a subsequent increase during the post-seizure period, while the long-term scaling exponent shows an increase during seizure activity. Our calculation of the long-term scaling exponent yields a value between 0.5 and 1, thus pointing to power-law behaviour of long-range temporal correlations (LRTC). We perform this analysis for multiple channels and report similar behaviour. We find an increase in the long-term scaling exponent during seizure in all channels, which we attribute to an increase in persistent LRTC during seizure. The magnitude of the scaling exponent and its distribution across channels can help in better identification of the brain areas most affected during seizure activity. The nature of epileptic seizures varies from patient to patient. To illustrate this, we report an increase in the long-term scaling exponent for some patients, which is also complemented by the recurrence plots (RP). An RP is a graph that shows the time indices at which a dynamical state recurs. We perform Recurrence Quantification Analysis (RQA) and calculate RQA parameters such as diagonal line length, entropy, recurrence rate, determinism, etc. for the ictal and interictal datasets. We find that the RQA parameters increase during seizure activity, indicating a transition. We observe that the RQA parameters are higher during the seizure period than the post-seizure values, whereas for some patients the post-seizure values exceed those during seizure. We attribute this to the varying nature of seizures in different patients, indicating a different route or mechanism during the transition. Our results can help in a better characterisation of epileptic EEG signals through nonlinear analysis.
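A minimal sketch of the DFA procedure described above is given below: integrate the mean-removed signal, detrend it in windows of size n, and read the scaling exponent from the slope of log F(n) versus log n. The white-noise example is only a sanity check (expected exponent near 0.5), not EEG data.

```python
import numpy as np

def dfa(signal, scales, order=1):
    """Detrended Fluctuation Analysis: returns F(n) for each window size n.

    The scaling exponent alpha is the slope of log F(n) versus log n;
    alpha between 0.5 and 1 indicates power-law long-range temporal correlations.
    """
    x = np.asarray(signal, dtype=float)
    profile = np.cumsum(x - x.mean())               # integrated, mean-removed series
    fluctuations = []
    for n in scales:
        n_windows = len(profile) // n
        sq = []
        for w in range(n_windows):
            seg = profile[w * n:(w + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, order), t)   # local polynomial trend
            sq.append(np.mean((seg - trend) ** 2))             # detrended variance
        fluctuations.append(np.sqrt(np.mean(sq)))
    return np.array(fluctuations)

# Example on white noise (expected alpha close to 0.5)
sig = np.random.default_rng(0).normal(size=4096)
scales = np.unique(np.logspace(np.log10(16), np.log10(512), 12).astype(int))
F = dfa(sig, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(f"estimated scaling exponent alpha = {alpha:.2f}")
```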

Keywords: detrended fluctuation, epilepsy, long range correlations, recurrence plots

Procedia PDF Downloads 176
129 Density Interaction in Determinate and Indeterminate Faba Bean Types

Authors: M. Abd El Hamid Ezzat

Abstract:

Two field trials were conducted to study the effect of plant densities, i.e., 190, 222, 266, 330 and 440 ×10³ plants ha⁻¹, on the morphological characters and the physiological and yield attributes of two faba bean types, viz. determinate (FLIP-87-117 strain) and indeterminate (cv. Giza-461). The results showed that the indeterminate plants significantly surpassed the determinate plants in plant height at 75 and 90 days from sowing, in number of leaves at all growth stages, and in dry matter accumulation at 45 and 90 days from sowing. Determinate plants possessed a greater number of side branches than the indeterminate plants, but the difference was significant only at 90 days from sowing. A greater number of flowers was produced by the indeterminate plants than by the determinate plants at 75 and 90 days from sowing, and although shedding was obvious in both types, it was greater in the determinate plants than in the indeterminate ones at 90 days from sowing. Increasing plant density resulted in reductions in the number of leaves, branches and flowers and in dry matter accumulation per plant in both faba bean types; plant height, however, showed the reverse trend. Moreover, at all plant densities the indeterminate plants surpassed the determinate plants in all growth characters studied except for the number of branches per plant at 90 days from sowing. The indeterminate plant leaves contained significantly greater concentrations of photosynthetic pigments, i.e., chlorophyll a, chlorophyll b and carotenoids, than the determinate plant leaves. The data also showed a significant reduction in photosynthetic pigment concentration as planting density increased. Light extinction coefficient (K) values reached their maximum at 60 days from sowing and then declined sharply at 75 days from sowing. The data showed that the illumination inside the determinate faba bean canopies was better than inside the indeterminate canopies. K values tended to increase as planting density increased; meanwhile, significant interactions between faba bean type and planting density on K were reported at all growth stages. The leaves of both determinate and indeterminate faba bean plants reached their maximum expansion at 75 days from sowing, reflecting the highest LAI values, and then declined in the subsequent growth stage. The indeterminate faba bean plants significantly surpassed the determinate plants in LAI up to 75 days from sowing. Growth analysis showed that NAR, RGR and CGR reached their maximum rates at the 60-75 day growth stage. The faba bean types did not differ significantly in NAR at the early growth stage. The indeterminate plants were able to grow faster, with significantly higher CGR values, than the determinate plants. The indeterminate faba bean plants surpassed the determinate ones in number of seeds per pod and per plant, 100-seed weight, and seed yield per plant and per hectare at all plant densities. Seed yield increased with increasing plant density in both types, and the highest seed yield for both types was attained at 440 ×10³ plants ha⁻¹.
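For reference, the light extinction coefficient K mentioned above is conventionally obtained from the Beer-Lambert relation for canopies; the sketch below shows the calculation with hypothetical light readings, not measurements from these trials.

```python
import math

def extinction_coefficient(transmitted, incident, lai):
    """Canopy light extinction coefficient K from the Beer-Lambert relation
    I = I0 * exp(-K * LAI), so K = -ln(I / I0) / LAI."""
    return -math.log(transmitted / incident) / lai

# Hypothetical readings: 25% of incident light reaches the canopy floor at LAI = 3.5
print(extinction_coefficient(transmitted=250, incident=1000, lai=3.5))   # ~0.40
```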

Keywords: determinate, indeterminate faba bean, physiological attributes, yield attributes

Procedia PDF Downloads 236
128 Downward Vertical Evacuation of People with Disabilities from Tsunamis Using Escape Bunker Technology

Authors: Febrian Tegar Wicaksana, Niqmatul Kurniati, Surya Nandika

Abstract:

Indonesia is one of the countries with the greatest number of disaster occurrences and threats, such as earthquakes, tsunamis, volcanic eruptions and many more, because it lies not only between three tectonic plates (the Eurasian, Indo-Australian and Pacific plates) but also on the Ring of Fire. Recent research shows that there are areas on the southern coast of Java that could be devastated by a tsunami. A tsunami is a series of waves in a body of water caused by the displacement of a large volume of water, generally in an ocean. When the waves enter shallow water, they may rise to several feet or, in rare cases, tens of feet, striking the coast with devastating force. Reference parameters include the magnitude, the depth of the epicentre, the distance between the epicentre and land, the water depth at each point, the time at which the wave reaches the shore, and the growth of the waves. The interaction between these parameters produces a large variance in the tsunami wave. Based on these, we can formulate the preparation needed for disaster mitigation strategies. Mitigation strategies play an important role in the effort to reduce the number of victims and casualties and the damage in the area. This effort is directed at those who are most difficult to mobilize in a tsunami disaster area, such as the elderly, the sick, and people with disabilities. Until now, the method used for rescuing people from a tsunami has been basic horizontal evacuation. This evacuation system is not optimal because it takes a long time and cannot easily be used by people with disabilities. The writers propose a vertical evacuation model with an escape bunker system. This bunker system is chosen because downward vertical evacuation is considered more efficient and faster, especially in coastal areas without any surrounding highlands. A downward evacuation system is better than upward evacuation because it avoids the risk of erosion of the ground around the structure, which can affect the building. The structure of the bunker and the evacuation process during, and even after, the disaster are the main priorities to be considered. The bunker must provide earthquake resistance, durability against water flow, suitable interaction with the ground, and a waterproof design. When the situation returns to normal, the victims can move to a safer place. The bunker will be located near hospitals and public places and will have a wide entrance supported by a large slide to ease access for people with disabilities. The escape bunker technology is expected to reduce the number of tsunami victims who have low mobility.

Keywords: escape bunker, tsunami, vertical evacuation, mitigation, disaster management

Procedia PDF Downloads 492
127 A Strategic Sustainability Analysis of Electric Vehicles in EU Today and Towards 2050

Authors: Sven Borén, Henrik Ny

Abstract:

Ambitions within the EU for moving towards sustainable transport include major emission reductions for fossil fuel road vehicles, especially for buses, trucks, and cars. The electric driveline seems to be an attractive solution for such development. This study first applied the Framework for Strategic Sustainable Development to compare the sustainability effects of today's fossil fuel vehicles with those of electric vehicles that have batteries or hydrogen fuel cells. The study then addressed a scenario where electric vehicles might be in the majority in Europe by 2050. The methodology called Strategic Lifecycle Assessment was first used, where each life cycle phase was assessed for violations of sustainability principles. This indicates where further analysis could be done in order to quantify the magnitude of each violation, and later to create alternative strategies and actions that lead towards sustainability. A Life Cycle Assessment of combustion engine cars, plug-in hybrid cars, battery electric cars and hydrogen fuel cell cars was then conducted to compare and quantify environmental impacts. The authors found major violations of sustainability principles, such as the use of fossil fuels, which contributes to the increase of emission-related impacts such as climate change, acidification, eutrophication, ozone depletion, and particulate matter. Other violations were found, such as the use of scarce materials for batteries and fuel cells, and the use of fossil fuel vehicles for mining, production and transport in most life cycle phases of all vehicles. Still, the studied current battery and hydrogen fuel cell cars have less severe violations than fossil fuel cars. The life cycle assessment revealed that fossil fuel cars have overall considerably higher environmental impacts compared to electric cars as long as the latter are powered by renewable electricity. By 2050, there will likely be even more sustainable alternatives than the studied electric vehicles, when the EU electricity mix should mainly stem from renewable sources, batteries should be recycled, fuel cells should be a mature technology for use in vehicles (containing no scarce materials), and electric drivelines should have replaced combustion engines in other sectors. An uncertainty for fuel cells in 2050 is whether the production of hydrogen will have had time to switch to renewable resources. If so, that would contribute even more to sustainable development. Besides being adopted in the GreenCharge roadmap, the results, the authors suggest, can contribute to planning in the upcoming decades for a sustainable increase of EVs in Europe, and potentially serve as an inspiration for other smaller or larger regions. Further studies could map the environmental effects in LCA further, and include other road vehicles to get a more precise perception of how much they could affect sustainable development.

Keywords: strategic, electric vehicles, sustainability, LCA

Procedia PDF Downloads 386
126 A Study on the Relationship between Firm Managers' Environmental Attitudes and Environment-Friendly Practices in Textile Firms in India

Authors: Anupriya Sharma, Sapna Narula

Abstract:

Over the past decade, sustainability has gone mainstream, as more people are worried about environment-related issues than ever before. These issues are of even more concern for industries which leave a significant impact on the environment. Following these ecological issues, corporations are beginning to comprehend the impact on their business. Many initiatives have been undertaken to address these emerging issues in the consumer-driven textile industry. Demand from customers, local communities, government regulations, etc. are considered some of the major factors affecting environmental decision-making. Research also shows that motivations to go green are inevitably determined by the way top managers perceive environmental issues, as managers' personal values and ethical commitment act as motivating factors towards corporate social responsibility. Little empirical research has been conducted to examine the relationship between top managers' personal environmental attitudes and corporate environmental behaviors in the textile industry in the Indian context. The primary purpose of this study is to determine the current state of environmental management in the textile industry and whether the attitude of textile firms' top managers is significantly related to the firm's response to environmental issues and their perceived benefits of environmental management. To achieve the aforesaid objectives, the authors used a structured questionnaire based on a literature review. The questionnaire consisted of six sections with a total length of eight pages. The first section collected background information on the position of the respondents in the organization, annual turnover, year of the firm's establishment and so on. The other five sections of the questionnaire covered drivers, attitude and awareness, sustainable business practices, barriers to implementation, and benefits achieved. To test the questionnaire, a pretest was conducted with professionals working in corporate sustainability who had knowledge of the textile industry; the questionnaire was then mailed to various stakeholders involved in textile production, covering firms' top manufacturing officers, EHS managers, textile engineers, HR personnel and R&D managers. The results of the study showed that most of the textile firms were implementing some type of environmental management practice, even though the magnitude of the firms' involvement in environmental management practices varied. The results also show that textile firms with a higher level of involvement in environmental management were more involved in process-driven technical environmental practices. The study also identified that firms' top managers' environmental attitudes were correlated with the perceived advantages of environmental management, as top managers are the ones who possess managerial discretion in formulating and deciding business policies such as environmental initiatives.

Keywords: attitude and awareness, environmental management, sustainability, textile industry

Procedia PDF Downloads 233
125 Payload Bay Berthing of an Underwater Vehicle With Vertically Actuated Thrusters

Authors: Zachary Cooper-Baldock, Paulo E. Santos, Russell S. A. Brinkworth, Karl Sammut

Abstract:

In recent years, large unmanned underwater vehicles such as the Boeing Voyager and Anduril Ghost Shark have been developed. These vessels can be structured to contain onboard internal payload bays. These payload bays can serve a variety of purposes, including the launch and recovery (LAR) of smaller underwater vehicles. The LAR of smaller vessels is extremely important, as it enables transportation over greater distances, increased time on station, data transmission and operational safety. The larger vessel and its payload bay structure complicate the LAR of UUVs in contrast to static docks that are affixed to the seafloor, as they actively impact the local flow field. These flow field impacts require analysis to determine if UUV vessels can be safely launched and recovered inside the motherships. This research seeks to determine the hydrodynamic forces exerted on a vertically over-actuated, small, unmanned underwater vehicle (OUUV) during an internal LAR manoeuvre and compare this to an under-actuated vessel (UUUV). In this manoeuvre, the OUUV is navigated through the stern wake region of the larger vessel to a set point within the internal payload bay. The manoeuvre is simulated using ANSYS Fluent computational fluid dynamics models, covering the entire recovery of the OUUV and UUUV. The analysis of the OUUV is compared against the UUUV to determine the differences in the exerted forces. Of particular interest are the drag, pressure, turbulence and flow field effects exerted as the OUUV is driven inside the payload bay of the larger vessel. The hydrodynamic forces and flow field disturbances are used to determine the feasibility of making such an approach. From the simulations, it was determined that there were no significant detrimental physical forces, particularly with regard to turbulence. The flow field effects exerted by the OUUV are significant. The vertical thrusters exert significant wake structures, but their orientation ensures the wake effects are exerted below the vehicle, minimising the impact. It was also seen that the OUUV experiences higher drag forces compared to the UUUV, which will correlate with an increased energy expenditure. This investigation found no key indicators that recovery via a mothership payload bay is not feasible. The turbulence, drag and pressure phenomena were of a similar magnitude to those of existing static and towed dock structures.
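Where the abstract compares drag forces, the non-dimensionalization typically used to report such CFD results is sketched below; the force, speed and reference area values are hypothetical placeholders, not outputs of the ANSYS Fluent simulations.

```python
def drag_coefficient(drag_force_n, density_kg_m3, speed_m_s, ref_area_m2):
    """Non-dimensional drag coefficient Cd = 2*F / (rho * U^2 * A) from a CFD force."""
    return 2.0 * drag_force_n / (density_kg_m3 * speed_m_s ** 2 * ref_area_m2)

# Hypothetical recovery condition: seawater, 1.5 m/s approach speed,
# 0.07 m^2 frontal area, 25 N of drag reported by the solver
print(drag_coefficient(drag_force_n=25.0, density_kg_m3=1025.0,
                       speed_m_s=1.5, ref_area_m2=0.07))   # ~0.31
```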

Keywords: underwater vehicles, submarine, autonomous underwater vehicles, AUV, computational fluid dynamics, flow fields, pressure, turbulence, drag

Procedia PDF Downloads 91
124 Temperature Dependence of Photoluminescence Intensity of Europium Dinuclear Complex

Authors: Kwedi L. M. Nsah, Hisao Uchiki

Abstract:

Quantum computation is a new and exciting field making use of quantum mechanical phenomena. In classical computers, information is represented as bits with values of either 0 or 1, but a quantum computer uses quantum bits in an arbitrary superposition of 0 and 1, enabling it to reach beyond the limits predicted by classical information theory. A lanthanide-ion quantum computer is based on an organic crystal containing a lanthanide ion. Europium is a favoured lanthanide since it exhibits long nuclear spin coherence times, and Eu(III) is photo-stable and has two stable isotopes. In a europium organic crystal, the key factor is the mutual dipole-dipole interaction between two europium atoms. Crystals of the complex were formed by a 2:1 reaction of Eu(fod)3 and bpm. The transparent white crystals showed brilliant red luminescence under 405 nm laser excitation. Photoluminescence spectra were recorded at both room and cryogenic temperatures (300-14 K). The luminescence spectrum of [Eu(fod)3(μ-bpm)Eu(fod)3] showed the characteristic Eu(III) emission transitions in the range 570-630 nm, due to deactivation of the 5D0 emissive state to the 7Fj levels. For the application of the dinuclear Eu3+ complex to a qubit device, attention was focused on the 5D0-7F0 transition, around 580 nm. The presence of the 5D0-7F0 transition at room temperature revealed that at least one europium site had no inversion centre. Since this line is not split by the crystal field effect, any multiplicity observed is due to a multiplicity of Eu3+ sites. For a qubit element, a narrower line width of the 5D0 → 7F0 PL band of the Eu3+ ion is preferable, and cryogenic temperatures (300-14 K) were used to reduce inhomogeneous broadening and distinguish between ions. A CCD image sensor was used for the low-temperature photoluminescence measurements, and a far better resolved luminescence spectrum was obtained by cooling the complex to 14 K. A red shift of 15 cm-1 in the 5D0-7F0 peak position was observed upon cooling, with the line shifting towards lower wavenumber. An emission spectrum in the 5D0-7F0 transition region was obtained to verify the line width; at this temperature, a peak with a magnitude three times that at room temperature was observed. The temperature dependence of the 5D0 state of Eu(fod)3(μ-bpm)Eu(fod)3 was strongest in the vicinity of 60 K to 100 K. Thermal quenching was observed at temperatures above 100 K, beyond which the intensity decreased slowly with increasing temperature; this quenching of the Eu3+ emission with increasing temperature is caused by energy migration. Thus, 100 K is an appropriate temperature for observing the 5D0-7F0 emission peak. The europium dinuclear complex bridged by bpm was successfully prepared and monitored at cryogenic temperatures. At 100 K the Eu3+ complex has good thermal stability, and this temperature is appropriate for observation of the 5D0-7F0 emission peak. Sintering the sample above 600 °C could also be considered, but Eu3+ can be reduced to Eu2+, which is why cryogenic measurement is preferred over other methods.
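
As a quick cross-check of the reported shift, the short calculation below converts the 15 cm-1 red shift of the 5D0-7F0 line near 580 nm into the corresponding wavelength shift using the small-shift relation Δλ ≈ λ²Δν̃; only the figures quoted in the abstract are used.

# Convert the reported 15 cm^-1 red shift of the ~580 nm line into a
# wavelength shift: delta_lambda ~ lambda^2 * delta_nu (small-shift limit).
wavelength_nm = 580.0
delta_nu_cm = 15.0                               # shift in cm^-1

wavelength_cm = wavelength_nm * 1e-7             # 1 nm = 1e-7 cm
delta_lambda_nm = wavelength_cm**2 * delta_nu_cm * 1e7
print(f"shift ~ {delta_lambda_nm:.2f} nm towards longer wavelength")  # ~0.50 nm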

Keywords: Eu(fod)₃, europium dinuclear complex, europium ion, quantum bit, quantum computer, 2,2′-bipyrimidine

Procedia PDF Downloads 181
123 A Comparison of Tsunami Impact to Sydney Harbour, Australia at Different Tidal Stages

Authors: Olivia A. Wilson, Hannah E. Power, Murray Kendall

Abstract:

Sydney Harbour is an iconic location with a dense population and low-lying development. On the east coast of Australia, facing the Pacific Ocean, it is exposed to several tsunamigenic trenches. This paper presents a component of the most detailed assessment to date of the potential for earthquake-generated tsunami impact on Sydney Harbour. Models in this study use dynamic tides to account for tide-tsunami interaction. Sydney Harbour’s tidal range is 1.5 m, and the spring tides from January 2015 that are used in the modelling for this study are close to the full tidal range. The tsunami wave trains modelled include hypothetical tsunami generated by earthquakes of moment magnitude (Mw) 7.5, 8.0, 8.5, and 9.0 from the Puysegur and New Hebrides trenches, as well as representations of the historical 1960 Chilean and 2011 Tohoku events. All wave trains are modelled for the peak wave to coincide with both a low tide and a high tide. A single wave train, representing an Mw 9.0 earthquake at the Puysegur trench, is modelled for peak waves to coincide with every hour across a 12-hour tidal phase. Using the hydrodynamic model ANUGA, results are compared according to the impact parameters of inundation area, depth variation and current speeds. Results show that both maximum inundation area and depth variation are tide dependent. Maximum inundation area increases when coincident with a higher tide; however, hazardous inundation is only observed for the larger waves modelled: NH90high and P90high. The maximum and minimum depths are deeper on higher tides and shallower on lower tides. The difference between maximum and minimum depths varies across different tidal phases, although the differences are slight. Maximum current speeds are shown to be a significant hazard for Sydney Harbour; however, they do not show consistent patterns according to tide-tsunami phasing. The maximum current speed hazard is shown to be greater in specific locations such as Spit Bridge, a narrow channel with extensive marine infrastructure. The results presented for Sydney Harbour are novel, and the conclusions are consistent with previous modelling efforts in the greater area. It is shown that tide must be a consideration for both tsunami modelling and emergency management planning. Modelling with peak tsunami waves coinciding with a high tide would be a conservative approach; however, it must be considered that maximum current speeds may be higher on other tides.
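
A minimal sketch of the kind of post-processing used to compare impact parameters across scenarios and tidal stages, assuming the ANUGA runs have already been summarised into a table; the file name and columns (scenario, tide, max_inundation_km2, max_speed_ms) are assumptions, not the study's actual output format.

# Hypothetical comparison of maximum inundation area and current speed by
# scenario and tidal stage from pre-computed model summaries.
import pandas as pd

runs = pd.read_csv("sydney_tsunami_runs.csv")
summary = runs.pivot_table(index="scenario", columns="tide",
                           values=["max_inundation_km2", "max_speed_ms"])
print(summary)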

Keywords: emergency management, Sydney, tide-tsunami interaction, tsunami impact

Procedia PDF Downloads 242
122 Innovative Fabric Integrated Thermal Storage Systems and Applications

Authors: Ahmed Elsayed, Andrew Shea, Nicolas Kelly, John Allison

Abstract:

In northern European climates, domestic space heating and hot water represent a significant proportion of total primary energy use, and meeting these demands from a national electricity grid network supplied by renewable energy sources provides an opportunity for a significant reduction in EU CO2 emissions. However, in order to adapt to the intermittent nature of renewable energy generation and to avoid co-incident peak electricity usage from consumers that may exceed current capacity, the demand for heat must be decoupled from its generation. Storage of heat within the fabric of dwellings for use some hours, or days, later provides a route to complete decoupling of demand from supply and facilitates the greatly increased integration of renewable energy generation into a local or national electricity network. The integration of thermal energy storage into the building fabric for retrieval at a later time requires careful evaluation of the many competing thermal, physical, and practical considerations such as the profile and magnitude of heat demand, the duration of storage, charging and discharging rate, storage media, space allocation, etc. In this paper, the authors report investigations of thermal storage in building fabric using concrete and present an evaluation of several factors that impact upon performance, including heating pipe layout, heating fluid flow velocity, storage geometry, and thermo-physical material properties, and also present an investigation of alternative storage materials and alternative heat transfer fluids. Reducing the heating pipe spacing from 200 mm to 100 mm enhances the stored energy by 25%, and high-performance vacuum insulation results in a heat loss flux of less than 3 W/m2, compared to 22 W/m2 for the more conventional EPS insulation. Dense concrete achieved the greatest storage capacity, relative to medium and light-weight alternatives, although a material thickness of 100 mm required more than 5 hours to charge fully. Layers of 25 mm and 50 mm thickness can be charged in 2 hours, or less, facilitating a fast response that could, aggregated across multiple dwellings, provide a significant and valuable reduction in demand from grid-generated electricity in expected periods of high demand and potentially eliminate the need for additional new generating capacity from conventional sources such as gas, coal, or nuclear.
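
To put the storage figures into context, the sketch below estimates the sensible heat stored per square metre of a concrete layer from Q = ρ·cp·d·ΔT; the density, specific heat and temperature swing are generic values for dense concrete assumed for illustration, not the measured properties used in the paper.

# Back-of-envelope sensible heat stored in a concrete layer, per m^2 of
# floor area: Q = rho * cp * thickness * delta_T (generic property values).
RHO_CONCRETE = 2400.0  # kg/m^3
CP_CONCRETE = 880.0    # J/(kg K)

def stored_energy_kwh_per_m2(thickness_m: float, delta_t_k: float) -> float:
    joules = RHO_CONCRETE * CP_CONCRETE * thickness_m * delta_t_k
    return joules / 3.6e6  # J -> kWh

for thickness in (0.025, 0.050, 0.100):
    print(f"{thickness*1000:.0f} mm layer, 20 K swing: "
          f"{stored_energy_kwh_per_m2(thickness, 20.0):.2f} kWh/m^2")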

Keywords: fabric integrated thermal storage, FITS, demand side management, energy storage, load shifting, renewable energy integration

Procedia PDF Downloads 166
121 Gravitational Water Vortex Power Plant: Experimental-Parametric Design of a Hydraulic Structure Capable of Inducing the Artificial Formation of a Gravitational Water Vortex Appropriate for Hydroelectric Generation

Authors: Henrry Vicente Rojas Asuero, Holger Manuel Benavides Muñoz

Abstract:

Approximately 80% of the energy consumed worldwide is generated from fossil sources, which are responsible for the emission of a large volume of greenhouse gases. For this reason, the global trend at present is the widespread use of energy produced from renewable sources. This seeks safety and diversification of energy supply, based on social cohesion, economic feasibility and environmental protection. In this scenario, small hydropower systems (P ≤ 10 MW) stand out due to their high efficiency, economic competitiveness and low environmental impact. Small hydropower systems, along with wind and solar energy, are expected to represent a significant percentage of the world's energy matrix in the near term. Among the various technologies in the state of the art relating to small hydropower systems is the Gravitational Water Vortex Power Plant, a recent technology that excels because of its versatility of operation, since it can operate with heads in the range of 0.70 m to 2.00 m and flow rates from 1 m3/s to 20 m3/s. Its operating principle is based on the utilization of the rotational energy contained within an artificially induced large water vortex. This paper presents the study and experimental design of an optimal hydraulic structure capable of inducing the artificial formation of a gravitational water vortex through a system of easy application and high efficiency, able to operate in conditions of very low head and minimal flow. The proposed structure consists of a channel with a variable base, a vortex inductor, and a tangential flow generator, coupled to a circular tank with a conical transition bottom hole. In the laboratory tests, the angular velocity of the water vortex was related to the geometric characteristics of the inductor channel, and the influence of the conical transition bottom hole on the physical characteristics of the water vortex was assessed. The results show angular velocity values of greater magnitude as a function of depth; in addition, the presence of the conical transition in the bottom hole of the circular tank improves the water vortex formation conditions while increasing the angular velocity values. Thus, the proposed system is a sustainable solution for the energy supply of rural areas near watercourses.
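
For a sense of the generation scale at such low heads, the sketch below applies the standard hydropower relation P = η·ρ·g·Q·H; the head, flow rate and overall efficiency are illustrative values within the operating range quoted above, not measurements from the laboratory model.

# Illustrative power estimate for a low-head vortex plant:
# P = eta * rho * g * Q * H, with assumed values.
RHO_WATER = 1000.0  # kg/m^3
G = 9.81            # m/s^2

def hydraulic_power_kw(head_m: float, flow_m3s: float, efficiency: float) -> float:
    return efficiency * RHO_WATER * G * flow_m3s * head_m / 1000.0

print(f"{hydraulic_power_kw(head_m=1.0, flow_m3s=1.5, efficiency=0.5):.1f} kW")  # ~7.4 kW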

Keywords: experimental model, gravitational water vortex power plant, renewable energy, small hydropower

Procedia PDF Downloads 290
120 Bi-objective Network Optimization in Disaster Relief Logistics

Authors: Katharina Eberhardt, Florian Klaus Kaiser, Frank Schultmann

Abstract:

Last-mile distribution is one of the most critical parts of a disaster relief operation. Various uncertainties, such as infrastructure conditions, resource availability, and fluctuating beneficiary demand, render last-mile distribution challenging in disaster relief operations. The need to balance critical performance criteria like response time, meeting demand and cost-effectiveness further complicates the task. The occurrence of disasters cannot be controlled, and the magnitude is often challenging to assess. In summary, these uncertainties create a need for additional flexibility, agility, and preparedness in logistics operations. As a result, strategic planning and efficient network design are critical for an effective and efficient response. Furthermore, the increasing frequency of disasters and the rising cost of logistical operations amplify the need to provide robust and resilient solutions in this area. Therefore, we formulate a scenario-based bi-objective optimization model that integrates pre-positioning, allocation, and distribution of relief supplies extending the general form of a covering location problem. The proposed model aims to minimize underlying logistics costs while maximizing demand coverage. Using a set of disruption scenarios, the model allows decision-makers to identify optimal network solutions to address the risk of disruptions. We provide an empirical case study of the public authorities’ emergency food storage strategy in Germany to illustrate the potential applicability of the model and provide implications for decision-makers in a real-world setting. Also, we conduct a sensitivity analysis focusing on the impact of varying stockpile capacities, single-site outages, and limited transportation capacities on the objective value. The results show that the stockpiling strategy needs to be consistent with the optimal number of depots and inventory based on minimizing costs and maximizing demand satisfaction. The strategy has the potential for optimization, as network coverage is insufficient and relies on very high transportation and personnel capacity levels. As such, the model provides decision support for public authorities to determine an efficient stockpiling strategy and distribution network and provides recommendations for increased resilience. However, certain factors have yet to be considered in this study and should be addressed in future works, such as additional network constraints and heuristic algorithms.
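
To make the model structure more concrete, the sketch below sets up a toy weighted-sum version of a covering-location-style problem in PuLP: open depots to balance opening cost against covered demand. It is a simplified single-scenario illustration with made-up data, not the authors' scenario-based bi-objective formulation.

# Toy weighted-sum covering-location sketch (hypothetical data).
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum

depots = ["D1", "D2", "D3"]
demand_points = ["A", "B", "C", "D"]
open_cost = {"D1": 100, "D2": 80, "D3": 120}
demand = {"A": 40, "B": 25, "C": 35, "D": 20}
covers = {("D1", "A"), ("D1", "B"), ("D2", "B"), ("D2", "C"), ("D3", "C"), ("D3", "D")}
weight_cost, weight_coverage = 1.0, 5.0

model = LpProblem("relief_network", LpMinimize)
y = {d: LpVariable(f"open_{d}", cat=LpBinary) for d in depots}            # depot opened?
z = {p: LpVariable(f"covered_{p}", cat=LpBinary) for p in demand_points}  # demand covered?

# A demand point only counts as covered if at least one covering depot is open.
for p in demand_points:
    model += z[p] <= lpSum(y[d] for d in depots if (d, p) in covers)

# Weighted-sum objective: opening cost minus weighted covered demand.
model += (weight_cost * lpSum(open_cost[d] * y[d] for d in depots)
          - weight_coverage * lpSum(demand[p] * z[p] for p in demand_points))

model.solve()
print({d: int(y[d].value()) for d in depots})
print({p: int(z[p].value()) for p in demand_points})

Sweeping the two weights (or using an epsilon-constraint instead) would trace out the cost-coverage trade-off that the bi-objective model explores.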

Keywords: humanitarian logistics, bi-objective optimization, pre-positioning, last mile distribution, decision support, disaster relief networks

Procedia PDF Downloads 79
119 Hydrodynamic Analysis of Payload Bay Berthing of an Underwater Vehicle With Vertically Actuated Thrusters

Authors: Zachary Cooper-Baldock, Paulo E. Santos, Russell S. A. Brinkworth, Karl Sammut

Abstract:

In recent years, large unmanned underwater vehicles such as the Boeing Voyager and Anduril Ghost Shark have been developed. These vessels can be structured to contain onboard internal payload bays. These payload bays can serve a variety of purposes – including the launch and recovery (LAR) of smaller underwater vehicles. The LAR of smaller vessels is extremely important, as it enables transportation over greater distances, increased time on station, data transmission and operational safety. The larger vessel and its payload bay structure complicate the LAR of UUVs in contrast to static docks that are affixed to the seafloor, as they actively impact the local flow field. These flow field impacts require analysis to determine if UUV vessels can be safely launched and recovered inside the motherships. This research seeks to determine the hydrodynamic forces exerted on a vertically over-actuated, small, unmanned underwater vehicle (OUUV) during an internal LAR manoeuvre and compare this to an under-actuated vessel (UUUV). In this manoeuvre, the OUUV is navigated through the stern wake region of the larger vessel to a set point within the internal payload bay. The manoeuvre is simulated using ANSYS Fluent computational fluid dynamics models, covering the entire recovery of the OUUV and UUUV. The analysis of the OUUV is compared against the UUUV to determine the differences in the exerted forces. Of particular interest are the drag, pressure, turbulence and flow field effects exerted as the OUUV is driven inside the payload bay of the larger vessel. The hydrodynamic forces and flow field disturbances are used to determine the feasibility of making such an approach. From the simulations, it was determined that there were no significant detrimental physical forces, particularly with regard to turbulence. The flow field effects exerted by the OUUV are significant. The vertical thrusters exert pronounced wake structures, but their orientation ensures the wake effects are exerted below the vehicle, minimising the impact. It was also seen that the OUUV experiences higher drag forces compared to the UUUV, which correlates to an increased energy expenditure. This investigation found no key indicators that recovery via a mothership payload bay was not feasible. The turbulence, drag and pressure phenomena were of a similar magnitude to existing static and towed dock structures.

Keywords: underwater vehicles, submarine, autonomous underwater vehicles, AUV, computational fluid dynamics, flow fields, pressure, turbulence, drag

Procedia PDF Downloads 78
118 Structural Analysis of Archaeoseismic Records Linked to the 5 July 408 - 410 AD Utica Strong Earthquake (NE Tunisia)

Authors: Noureddine Ben Ayed, Abdelkader Soumaya, Saïd Maouche, Ali Kadri, Mongi Gueddiche, Hayet Khayati-Ammar, Ahmed Braham

Abstract:

The archaeological monument of Utica, located in north-eastern Tunisia, was founded (8th century BC) by the Phoenicians as a port installed on the trade route connecting Phoenicia and the Straits of Gibraltar in the Mediterranean Sea. The flourishing of this city as an important settlement during the Roman period was followed by a sudden abandonment, disuse and progressive oblivion in the first half of the fifth century AD. This decline can be attributed to the destructive earthquake of 5 July 408 - 410 AD affecting this historic city, as documented in 1906 by the seismologist Fernand de Montessus de Ballore. The magnitude of the Utica earthquake was estimated at 6.8 by the Tunisian National Institute of Meteorology (INM). In order to highlight the damage caused by this earthquake, a field survey was carried out at the Utica ruins to detect and analyse the earthquake archaeological effects (EAEs) using structural geology methods. This approach allowed us to highlight several types of structural damage, including: (1) folded mortar pavements, (2) cracks affecting the mosaic and walls of a water basin in the "House of the Grand Oecus", (3) displaced columns, (4) block extrusion in masonry walls, (5) undulations in mosaic pavements, and (6) tilted walls. The structural analysis of these EAEs and the associated measurements reveal a seismic cause for all evidence of deformation in the Utica monument. The maximum horizontal strain of the ground (SHmax) inferred from the oriented building damage at Utica shows a NNW-SSE direction under a compressive tectonic regime. For the seismogenic source of this earthquake, we propose the active E-W to NE-SW trending Utique - Ghar El Melh reverse fault, passing through the Utica monument and extending towards the Ghar El Melh Lake, as the causative tectonic structure. The active fault trace is well supported by instrumental seismicity, geophysical data (e.g., gravity, seismic profiles) and geomorphological analyses. In summary, we find that the archaeoseismic records detected at Utica are similar to those observed at many other archaeological sites affected by destructive ancient earthquakes around the world. Furthermore, the calculated orientation of the average maximum horizontal stress (SHmax) closely matches the present-day stress field, as highlighted by earthquake focal mechanisms in this region.
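
As an illustration of one quantitative step in such an analysis, the sketch below averages a set of damage-derived SHmax azimuths using circular statistics; orientations are axial (0-180 degrees), so the angles are doubled before the mean direction is taken. The sample azimuths are hypothetical, not the Utica measurements.

# Circular mean of axial orientation data (double-angle method).
import numpy as np

azimuths_deg = np.array([150.0, 160.0, 172.0, 155.0, 148.0])  # hypothetical NNW-SSE trends

doubled = np.deg2rad(2.0 * azimuths_deg)
mean_doubled = np.arctan2(np.sin(doubled).mean(), np.cos(doubled).mean())
mean_azimuth = (np.rad2deg(mean_doubled) / 2.0) % 180.0
print(f"mean SHmax trend ~ {mean_azimuth:.1f} deg")  # ~157 deg (NNW-SSE)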

Keywords: Tunisia, Utica, seismogenic fault, archaeological earthquake effects

Procedia PDF Downloads 45
117 Microplastics Accumulation and Abundance Standardization for Fluvial Sediments: Case Study for the Tena River

Authors: Mishell E. Cabrera, Bryan G. Valencia, Anderson I. Guamán

Abstract:

Human dependence on plastic products has led to global pollution, with plastic particles ranging in size from 0.001 to 5 millimeters, which are called microplastics (hereafter, MPs). The abundance of microplastics is used as an indicator of pollution. However, reports of pollution (abundance of MPs) in river sediments do not consider that the accumulation of sediments and MPs depends on the energy of the river. That is, the abundance of microplastics will be underestimated if the sediments analyzed come from places where the river flows with high energy, and overestimated if the sediment analyzed comes from places where the river flows with lower energy. This bias can generate an error greater than 300% of the MPs value reported for the same river and should increase when comparisons are made between two rivers with different characteristics. Sections where the river flows with higher energy allow sands to be deposited and limit the accumulation of MPs, while sections where the same river has lower energy allow fine sediments such as clays and silts to be deposited and should facilitate the accumulation of MP particles. That is, the abundance of MPs in the same river is under-represented when the sediment analyzed is sand, and over-represented if the sediment analyzed is silt or clay. The present investigation establishes a protocol aimed at incorporating sample granulometry to calibrate MPs quantification and eliminate over- or under-representation bias (hereafter granulometric bias). A total of 30 samples were collected by taking five samples within each of six work zones. The slope of the sampling points was less than 8 degrees, referred to as low-slope areas according to the Van Zuidam slope classification. During sampling, blanks were used to estimate possible contamination by MPs. Samples were dried at 60 degrees Celsius for three days. A flotation technique was employed to isolate the MPs using sodium metatungstate with a density of 2 g/mL. For organic matter digestion, 30% hydrogen peroxide and Fenton's reagent were used at a ratio of 6:1 for 24 hours. The samples were stained with rose bengal at a concentration of 200 mg/L and were subsequently dried in an oven at 60 degrees Celsius for 1 hour to be identified and photographed in a stereomicroscope under the following conditions: eyepiece magnification 10x, zoom magnification (zoom knob) 4x, and objective lens magnification 0.35x, for analysis in ImageJ. A total of 630 MP fibers were identified, mainly red, black, blue, and transparent, with an overall average length of 474.310 µm and an overall median length of 368.474 µm. The particle size of the 30 samples was calculated using 100 g per sample and sieves with the following apertures: 2 mm, 1 mm, 500 µm, 250 µm, 125 µm and 63 µm. This sieving allowed a visual evaluation and a more precise quantification of the microplastics present. At the same time, the weight of sediment in each fraction was calculated, revealing a clear trend: as the proportion of sediment in the < 63 µm fraction increases, a significant increase in the number of MP particles is observed.
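
A minimal sketch of the standardisation idea described above: express the microplastic count in each sieve fraction per kilogram of sediment retained in that fraction, so that sand-dominated and mud-dominated samples can be compared on the same basis. The counts, weights and layout are illustrative, not the Tena River data.

# Granulometric standardisation of MP abundance (illustrative numbers).
import pandas as pd

fractions = pd.DataFrame({
    "aperture": ["2 mm", "1 mm", "500 um", "250 um", "125 um", "63 um", "<63 um"],
    "sediment_g": [18.0, 22.0, 25.0, 15.0, 10.0, 6.0, 4.0],  # out of 100 g sieved
    "mp_count": [2, 3, 5, 6, 8, 10, 14],
})

fractions["mp_per_kg"] = fractions["mp_count"] / (fractions["sediment_g"] / 1000.0)
print(fractions[["aperture", "mp_per_kg"]])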

Keywords: microplastics, pollution, sediments, Tena River

Procedia PDF Downloads 73
116 Assessment the Implications of Regional Transport and Local Emission Sources for Mitigating Particulate Matter in Thailand

Authors: Ruchirek Ratchaburi, W. Kevin. Hicks, Christopher S. Malley, Lisa D. Emberson

Abstract:

Air pollution problems in Thailand have improved over the last few decades, but in some areas, concentrations of coarse particulate matter (PM₁₀) are above health and regulatory guidelines. It is, therefore, useful to investigate how PM₁₀ varies across Thailand, what conditions cause this variation, and how PM₁₀ concentrations could be reduced. This research uses data collected by the Thailand Pollution Control Department (PCD) from 17 monitoring sites, located across 12 provinces, and obtained between 2011 and 2015 to assess PM₁₀ concentrations and the conditions that lead to different levels of pollution. This is achieved through exploration of air mass pathways using trajectory analysis, used in conjunction with the monitoring data, to understand the contribution of different months, hours of the day, and source regions to annual PM₁₀ concentrations in Thailand. A focus is placed on locations that exceed the national standard for the protection of human health. The analysis shows how this approach can be used to explore the influence of biomass burning on annual average PM₁₀ concentrations and the difference in air pollution conditions between Northern and Southern Thailand. The analysis of PM₁₀ measurements at monitoring sites in Northern Thailand shows that, in general, high concentrations tend to occur in March and that these particularly high monthly concentrations make a substantial contribution to the overall annual average concentration. In 2011, a > 75% reduction in the extent of biomass burning in Northern Thailand and in neighboring countries resulted in a substantial reduction not only in the magnitude and frequency of peak PM₁₀ concentrations but also in annual average PM₁₀ concentrations at sites across Northern Thailand. In Southern Thailand, the annual average PM₁₀ concentrations for individual years between 2011 and 2015 did not exceed the human health standard at any site. The highest peak concentrations in Southern Thailand were much lower than those in Northern Thailand for all sites. The peak concentrations at sites in Southern Thailand generally occurred between June and October and were associated with air mass back trajectories that spent a substantial proportion of time over the sea, Indonesia, Malaysia, and Thailand prior to arrival at the monitoring sites. The results show that emission reductions from biomass burning and forest fires require action on national and international scales, in both Thailand and neighboring countries; such action could contribute to ensuring compliance with Thailand's air quality standards.
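
A minimal sketch of how each month's contribution to the annual average PM₁₀ can be quantified, assuming daily concentrations are available; the file name and columns (date, pm10) are hypothetical.

# Contribution of each month to the annual mean PM10 (hypothetical input).
import pandas as pd

daily = pd.read_csv("pm10_daily.csv", parse_dates=["date"])
daily["month"] = daily["date"].dt.month

annual_mean = daily["pm10"].mean()
# Month m contributes sum(month m) / N to the annual mean of N days,
# so the monthly contributions add up to the annual mean.
contribution = daily.groupby("month")["pm10"].sum() / len(daily)
print(contribution.round(1))
print(f"annual mean = {annual_mean:.1f} (sum of the monthly contributions)")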

Keywords: annual average concentration, long-range transport, open biomass burning, particulate matter

Procedia PDF Downloads 183
115 Reservoir-Triggered Seismicity of Water Level Variation in the Lake Aswan

Authors: Abdel-Monem Sayed Mohamed

Abstract:

Lake Aswan is one of the largest man-made reservoirs in the world. The reservoir began to fill in 1964 and the level rose gradually, with annual irrigation cycles, until it reached a maximum water level of 181.5 m in November 1999, with a capacity of 160 km3. The filling of such a large reservoir changes the stress system, either by increasing the vertical compressional stress through loading and/or by increasing the pore pressure, which decreases the effective normal stress. The resulting effect on fault zone stability depends strongly on the orientation of the pre-existing stress and the geometry of the reservoir/fault system. The main earthquake occurred on November 14, 1981, with magnitude 5.5. This event occurred 17 years after the reservoir began to fill, along the active part of the Kalabsha fault, and was located not far from the High Dam. Numerous small earthquakes followed this earthquake and continue to the present. For this reason, 13 seismograph stations (a radio-telemetry network of short-period seismometers) were installed around the northern part of Lake Aswan. The main purpose of the network is to monitor the earthquake activity continuously within the Aswan region. The data described here are obtained from the continuous record of earthquake activity and lake-water level variation through the period from 1982 to 2015. The seismicity is concentrated in the Kalabsha area, where the easterly trending Kalabsha fault intersects the northerly trending faults. The earthquake foci are distributed in two seismic zones, shallow and deep in the crust. Shallow events have focal depths of less than 12 km, while deep events extend from 12 to 28 km. Correlation between the seismicity and the water level variation in the lake strongly suggests that the micro-earthquakes, particularly those in the shallow seismic zone, belong to the reservoir-triggered seismicity category. Water loading is one of several factors acting as an activating medium in triggering earthquakes. The common factors for all cases of induced seismicity seem to be the presence of specific geological conditions, the tectonic setting and water loading. The role of the water loading is that of a supplementary source of earthquake events: the earthquake activity in the area originated tectonically (ML ≥ 4), and the water factor works as an activating medium in triggering small earthquakes (ML ≤ 3). The study of the seismicity induced by water level variation in Lake Aswan is of great importance and plays a major role in ensuring the safety of the High Dam and its economic resources.
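
A minimal sketch of the water level / seismicity comparison, assuming both records have already been aggregated to monthly values; the file name and columns (month, water_level_m, quakes_shallow) are hypothetical.

# Correlate lake level with shallow-zone earthquake counts at several lags.
import pandas as pd

monthly = pd.read_csv("aswan_monthly.csv", parse_dates=["month"])

for lag in range(0, 7):  # lag in months: water level leading seismicity
    r = monthly["water_level_m"].corr(monthly["quakes_shallow"].shift(-lag))
    print(f"lag {lag} months: r = {r:.2f}")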

Keywords: Lake Aswan, Aswan seismic network, seismicity, water level variation

Procedia PDF Downloads 370
114 Specific Earthquake Ground Motion Levels That Would Affect Medium-To-High Rise Buildings

Authors: Rhommel Grutas, Ishmael Narag, Harley Lacbawan

Abstract:

Construction of high-rise buildings is a means to address the increasing population in Metro Manila, Philippines. The existence of the Valley Fault System within the metropolis and other nearby active faults poses threats to a densely populated city. Distant, shallow and large-magnitude earthquakes have the potential to generate slow, long-period vibrations that would affect medium-to-high rise buildings. Heavy damage and building collapse are consequences of prolonged shaking of the structure. If the ground and the building have almost the same period, there would be a resonance effect which would cause prolonged shaking of the building. Microzoning the long-period ground response would aid in the seismic design of medium to high-rise structures. The shear-wave velocity structure of the subsurface is an important parameter for evaluating ground response. Borehole drilling is one of the conventional methods of determining the shear-wave velocity structure; however, it is an expensive approach. As an alternative geophysical exploration method, microtremor array measurements can be used to infer the structure of the subsurface. A microtremor array measurement system was used to survey fifty sites around Metro Manila, including some municipalities of Rizal and Cavite. Measurements were carried out during the day under good weather conditions. The team was composed of six persons for the deployment and simultaneous recording of the microtremor array sensors. The instruments were laid on the ground away from sewage systems and leveled using the adjustment legs and bubble level. A total of four sensors were deployed for each site, three at the vertices of an equilateral triangle with one sensor at the centre. The circular arrays were set up with a maximum side length of approximately four kilometers, and the shortest side length for the smallest array was approximately 700 meters. Each recording lasted twenty to sixty minutes. From the recorded data, f-k analysis was applied to obtain phase velocity curves, and an inversion technique was applied to construct the shear-wave velocity structure. This project provided a microzonation map of the metropolis and a profile showing the long-period response of the deep sedimentary basin underlying Metro Manila, which would be useful to local administrators for land use planning and the earthquake-resistant design of medium to high-rise buildings.
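
As a back-of-envelope illustration of the resonance argument above, the sketch below uses two standard approximations: the fundamental site period T0 = 4H/Vs for a soft layer over bedrock, and an empirical building period of roughly 0.1 s per storey. The layer thickness, shear-wave velocity and storey counts are assumed example values, not results of the Metro Manila survey.

# Rough resonance check: compare an estimated site period with building periods.
def site_period_s(layer_thickness_m: float, vs_ms: float) -> float:
    return 4.0 * layer_thickness_m / vs_ms  # T0 = 4H / Vs

def building_period_s(storeys: int) -> float:
    return 0.1 * storeys  # common rule of thumb, ~0.1 s per storey

t_site = site_period_s(layer_thickness_m=400.0, vs_ms=800.0)  # deep-basin example
print(f"site period ~ {t_site:.1f} s")
for n in (10, 20, 30):
    print(f"{n}-storey building: T ~ {building_period_s(n):.1f} s")

A building whose period falls close to the site period (here the 20-storey case) would be the one most prone to the resonance effect described above.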

Keywords: earthquake, ground motion, microtremor, seismic microzonation

Procedia PDF Downloads 468